Sunday, March 15, 2015

We're Well On Our Way to Serverless Computing

As I discussed in my first post, I came up with an idea I titled "Serverless Computing" in 2002. At the time I was frustrated by the limitations of web, application, and data servers. I was implementing a rather amazing B2B marketplace for a drug company, on a leading-edge architecture I had developed with XML at its core, along with Java messaging services and styling objects to render the final views. The same architecture and implementation had to support multiple lines of business without any crossover. My frustrations led me to start questioning everything. If things weren't working, why was I continuing to do everything the same way I had before?

In the middle of a snow storm in central Connecticut, sitting in a frozen rental car waiting for warmth (I'm from the South), I had an epiphany. I realized most of the constructs of computing are driven by human needs, not by the computer. It dawned on me that all my architecture work was about putting stakes in the ground as anchors for our thinking and development of the portal. I was reasonably good at refactoring applications to improve security, efficiency, and efficacy; why not apply the same thinking to architecture? I scurried down a path of thinking that led me to the conclusion that our modern architectures are built on so many layers of abstraction that we've lost sight of the why. I was perpetuating the problem by blindly following the norm.

Ask yourself this: where did the concept of a "server" come from? Many people refer back to the origination of client/server computing: decoupling the processing unique to each user (the client) from the processing common to everyone (the server). Once software was decomposed into two complementary applications, it could run on two different computers, where the server does the heavy lifting and is therefore optimized for its workload. In reality, client/server is an extension of the mainframe architecture, where desktop PCs replace the dumb green-screen terminal and, by virtue of having a processor on board, share some of the processing load. That's all good, but what drove the creation of the mainframe, and therefore client/server, was economics. By centralizing processing power and enabling remote access, mainframes delivered a reasonable economic model for the automation of basic business tasks. Dumb terminals made sense when people and the mainframe were local, few applications existed, applications were simple, and the cost of infrastructure was high.

Today none of the original drivers of mainframes and client/server exist, yet we still use the architecture unchanged. If you took someone from 1965 and walked them through an ultra-modern data center, there is no way they wouldn't mistake the mass of pods for mainframes. Those massive data centers are nowhere near where the people who use them work. In fact, we can no longer assume employees even work in buildings, or from 8am to 6pm. The software landscape consists of billions of applications, with thousands more created every day. And the cost of infrastructure is so low, thanks to density and scale, that a modern smartphone has more processing power, memory, storage, and network bandwidth than a "server" did just a decade ago. We are surrounded by highly capable, network-accessible computing devices which spend the majority of their lives I/O bound, just waiting around for something to do. Why are we letting all that computing power go to waste? We're ignoring the real promise of cloud computing, a concept closer to P2P than to the Internet and what we think of as public cloud today. I'm talking about massive distribution of applications, data, and infrastructure; the kind of infrastructure people cower at when talking about cloud, but fully embrace when talking about the Internet of Things.

We need to rethink our approach to computing.  Period.

When you tear out the non-value-added elements of client/server, the one tenet that survives is decoupling: separating the user interface from the business logic. Decoupling's primary value proposition is at the software layer, not hardware, as we are reminded every time a web server goes down. And hardware, when viewed through cloud computing optics, is nothing more than a pool of resources (compute, memory, storage, etc.). If we take the widely available computing resources we have on our client devices and run our user-oriented "server" software there, we gain several benefits (a rough sketch of the idea follows the list):
  • decreased impact of "server" outages
  • reduced complexity of "server" environments
  • federation of power consumption over the entire power grid
  • elimination of the need for large, centralized data centers
  • reduced long-haul bandwidth requirements
  • a higher barrier to DDoS attacks and a reduced risk of penetration, since key data never leaves the premises
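
To make the proposal concrete, here is a minimal sketch; the names, port, and endpoint are purely illustrative, not taken from any real product. The same business logic that would normally live on a remote application server runs in-process on the client device as a small service bound to localhost, and the user interface talks to it exactly as it would talk to a remote server.

    // Illustrative sketch only: the "server" half of a decoupled application
    // running on the client device itself, bound to localhost.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Quote is a stand-in for whatever business object the application serves.
    type Quote struct {
        Product string  `json:"product"`
        Price   float64 `json:"price"`
    }

    func main() {
        // The business logic lives here, on the device, not in a distant data center.
        http.HandleFunc("/api/quote", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(Quote{Product: "widget", Price: 19.95})
        })

        // The UI (browser, native app, etc.) calls http://localhost:8080/api/quote.
        // Only data that truly must be shared ever leaves the premises.
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }

The point isn't the particular language or protocol; it's that the "server" role becomes just another piece of software the device already runs.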
Preposterous! You're crazy! Insane! Never! Yet that's not only precisely the direction we're heading in, we're fast approaching the arrival platform.

When I submitted my whitepaper to multiple outlets in 2002 (including my employer at the time, IBM), I was told I was crazy. Nobody asked me to explain my thinking or even gave the idea a second thought. Yet today I'm more convinced than ever that it's the endgame of where we're heading. Consider the rise in popularity of Docker, a container-oriented tool which approaches virtualization correctly (as opposed to the crazy idea of virtual machines, which replicate a bloated operating system multiple times over). Consider the rise of microservices, self-contained services which are distributed with the core application. We are at the threshold already.

Moving over the threshold requires two things. First, a tweak to Docker so it can be deployed seamlessly as part of an existing operating system install, much as Java is, along with the management tools required in a massively distributed system. Second, we need similarly scaled data federation tools, which I don't believe exist today (for more on data federation, see my entry on The Data War and the Mobilization of IT, or my upcoming entry on Data Analytics in the Network).
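
As a thought experiment for what "similarly scaled data federation" could look like, here is a hedged sketch; the peer addresses and the /api/records endpoint are invented for illustration. Instead of one central database answering every query, a request fans out to whichever peer devices hold a shard of the data, and the results are merged locally.

    // Illustrative sketch only: fan a query out to peer devices and merge the
    // results locally, instead of asking a single central database.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    func main() {
        // Hypothetical peers that each hold a slice of the data.
        peers := []string{"http://10.0.0.11:8080", "http://10.0.0.12:8080"}

        client := &http.Client{Timeout: 2 * time.Second}
        results := make(chan []string, len(peers))

        var wg sync.WaitGroup
        for _, peer := range peers {
            wg.Add(1)
            go func(url string) {
                defer wg.Done()
                resp, err := client.Get(url + "/api/records?owner=acme")
                if err != nil {
                    return // an unreachable peer simply contributes nothing
                }
                defer resp.Body.Close()
                var records []string
                if json.NewDecoder(resp.Body).Decode(&records) == nil {
                    results <- records
                }
            }(peer)
        }
        wg.Wait()
        close(results)

        // Merge whatever came back; ordering and de-duplication are left out.
        var merged []string
        for r := range results {
            merged = append(merged, r...)
        }
        fmt.Println(merged)
    }

A real tool would need discovery, replication, and conflict resolution on top of this, which is exactly the gap I'm pointing to.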

Just imagine how the world of business computing would change if we eliminated just 20% of the web and application servers. How about reducing web and application server instances for consumer cloud offerings such as Office 365, or for your bank? Go way out on a limb and consider the adoption of P2P tools such as BitTorrent Sync.

And by the way, I'm still waiting for someone to provide mainframe-based public cloud services. Where is the new EDS?

1 comment:

  1. Another sign of the eventuality of serverless computing: http://radar.oreilly.com/2014/12/why-the-data-center-needs-an-operating-system.html
