Sunday, March 29, 2015

The Last Mile and the Future of Cloud


"Look back to where you have been, for a clue to where you are going"

It's an unattributed quote, but I find it applies repeatedly throughout technology.  So often what appears new is really a twist on a tried and true approach.  Anyone who's spent any time around networks knows the phrase "last mile".  It's a reference to the final leg of the network connecting a home or office.  When AT&T was broken up in 1984 by the US Government, AT&T emerged as the long distance company while local service was divided into seven "baby bells" including Pacific Telesis, Ameritech, and BellSouth.  Experts believed long distance held the promise of higher profits while the Regional Bell Operating Companies (RBOCs) were doomed to a capital-intensive, low-margin struggle.

The experts were wrong.

Owning that "last mile" turned out to be very profitable; so profitable that one of the RBOCs, Southwestern Bell, was able to buy those other three RBOCs, change its name to SBC, and then buy its former parent, AT&T.  Although mobile networks are great for connecting smartphones and tablets, and satellites can deliver radio and television, it turns out nothing yet can replace fiber and copper for bandwidth and low latency.  After the Telecommunications Act of 1996, last mile services exploded, and today, instead of just the local phone company, there are a variety of competitors, including Google.  And providing that last mile of service continues to be a significant revenue driver.

Let's put the last mile conversation to the side and switch gears.  Today large corporations are investing billions of dollars in Big Data, growing their analytic capabilities to generate the oxygen required by their growth engines.  These fierce competitors are slowly realizing there simply isn't enough time available to:

  1. capture data at the point of origination
  2. move the data across the country
  3. filter the data to focus on the most valuable elements
  4. combine the data with other data to broaden the perspective
  5. execute analytics on the data
  6. generate a result
  7. communicate the result back across the country
  8. leverage the result to drive some benefit

If the network operates at the speed of light, how can there not be enough time?  Beyond the reality that light slows down in fiber (by about 31%), there is no single direct link between the user and the corporate data center.  Users have to be authenticated, security policies applied, packets routed, applications load balanced.  Each of these events happens very quickly, but together they add up to a delay we call latency.  In a world where everything is measured in minutes, latency goes unnoticed; but in our Internet world we are moving from seconds to sub-second time frames.  Think about how patient you are when navigating through a website.  After that first second ticks off the clock, people begin to wonder if something is wrong.  It's the byproduct of taking our high speed access to the Internet for granted.  Marketers want to collect metadata about what you're trying to do, figure out how they can influence you, and insert themselves into your decision chain; and they only have the time between when you click the mouse and when the browser refreshes.  Hopefully now you can understand why that eight-step process, moving data back and forth across the country, is so unappealing.
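
To put some rough numbers on it, here's a back-of-the-envelope sketch in Python.  Every figure in it is an assumption I've picked for illustration (the distance, the hop count, the processing times), not a measurement, but it shows how quickly the cross-country loop eats into a one second budget:

  # Back-of-the-envelope latency budget for the eight-step, cross-country loop.
  # All figures below are illustrative assumptions, not measurements.
  C_VACUUM_KM_S = 299_792                  # speed of light in a vacuum (km/s)
  C_FIBER_KM_S = C_VACUUM_KM_S * 0.69      # light in fiber is roughly 31% slower

  distance_km = 4_000                      # assumed one-way fiber path across the country
  propagation_ms = distance_km / C_FIBER_KM_S * 1000

  hops = 20                                # assumed routers, firewalls, load balancers each way
  per_hop_ms = 0.5                         # assumed processing delay per hop

  round_trip_ms = 2 * (propagation_ms + hops * per_hop_ms)
  analytics_ms = 400                       # assumed filtering, joining, and analytics time

  print(f"network round trip: {round_trip_ms:.0f} ms")
  print(f"total loop:         {round_trip_ms + analytics_ms:.0f} ms of a 1,000 ms budget")

And that's a single round trip; in practice one transaction triggers several (DNS lookup, TCP and TLS handshakes, authentication), so the network's share of the budget multiplies quickly.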

For the past three years I have advocated an alternate approach: putting servers as close to the end user as possible (commonly called "the edge").  Where "the edge" is located depends on the conversation; however, the farthest out it can be is the start of that last mile, the last point on the network before it connects to the end user.  Today the edge extends as far as the same city for large populations; more often it's a region or even a state.  Although my serverless computing concept could be part of the answer and move the analytics into the end user's computer, in truth at least some analysis needs to occur off-site, if for no other reason than to stage the right data.  Moving analytics closer to the edge requires us to move compute and storage resources closer.

Let's return to the "last mile".

If you looked at a map of the network which serves your home or business, you would notice the wire goes from your house, out through a bunch of routers, switches and signal boosters, until it finally reaches a distribution point owned by your provider (or one where they lease space).  These locations are often large, having previously housed massive switching systems for the telephone network, and they are secure, built like Cold War bomb shelters.  What if these locations were loaded with high density compute and storage, available much like a public cloud, to augment the resources within the corporate data center?  If a business can operate while leveraging public cloud resources, what we lovingly refer to as a Hybrid Cloud model, then wouldn't it make sense to push the resources as far out toward the edge as possible?
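
To make that concrete, here's a toy sketch in Python of how workload placement might look once those central offices join the resource pool.  The site names, latency figures and capacity numbers are all hypothetical; the point is that "where should this run?" becomes a routine scheduling decision once the edge sites exist:

  # Toy scheduler for a hybrid cloud that includes last-mile central offices.
  # Site names, latencies and capacity numbers are hypothetical.
  SITES = {
      "central-office-edge": {"latency_ms": 5,  "free_capacity": 0.20},
      "regional-cloud":      {"latency_ms": 35, "free_capacity": 0.60},
      "corporate-dc":        {"latency_ms": 70, "free_capacity": 0.45},
  }

  def place(latency_budget_ms, min_free=0.10):
      """Pick the lowest-latency site that meets the budget and has headroom."""
      candidates = [
          (info["latency_ms"], name)
          for name, info in SITES.items()
          if info["latency_ms"] <= latency_budget_ms and info["free_capacity"] >= min_free
      ]
      return min(candidates)[1] if candidates else None

  print(place(50))    # -> central-office-edge
  print(place(2))     # -> None: nothing close enough, rethink the workload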

I'm hoping you are increasingly buying into this idea, or at least skeptical enough to wait for it to implode, and want to know how this is broadly applicable.  I do not see this as a panacea, any more than I do serverless computing, cloud, big data, mobility or any other technology.  However I do see it, at a minimum, as moving the conversation forward on how to deal with a world where our endpoints are no longer fixed, and at a maximum as another arrow in the quiver.  Consider how much data is being collected today: from the GPS location in your phone to the RFID tag on your razor blades, we are living in the Data Age.  Every single device powered by electricity is a likely candidate to be internet enabled, what we call the Internet of Things.  Each of these devices will communicate something, creating new data every day and adding to the pile of data that already exists.  To deal with the onslaught, companies need to filter what's coming in, remove the noise, and then execute their normalization routines (standardizing date formats, for example) to get the data ready for use.  Since everything else is nearly free compared to the cost of moving data, there is an economic incentive to move data as short a distance as possible.  Handling the grunt work of analytics locally could have a dramatic impact on overall system speed.  And over time, having local compute resources will enable software architects to push analytics closer and closer to "the edge".
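
Here's a minimal sketch of that local grunt work in Python, using a made-up stream of device readings (the field names, formats and threshold are all assumptions): the noise gets dropped and the dates get standardized at the edge, so only usable records ever cross the long-haul network.

  from datetime import datetime

  # Minimal sketch of edge-side filtering and normalization.
  # Field names, formats and the noise threshold are hypothetical.
  DATE_FORMATS = ("%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y")

  def normalize_date(raw):
      """Coerce assorted date strings into ISO 8601, or None if unparseable."""
      for fmt in DATE_FORMATS:
          try:
              return datetime.strptime(raw, fmt).date().isoformat()
          except ValueError:
              continue
      return None

  def filter_and_normalize(readings, min_value=0.0):
      """Drop the noise and standardize the keepers before shipping them upstream."""
      for r in readings:
          if r.get("value") is None or r["value"] <= min_value:
              continue                              # noise stays local
          date = normalize_date(r.get("date", ""))
          if date is None:
              continue                              # unusable record, don't ship it
          yield {"device": r["device"], "date": date, "value": r["value"]}

  raw = [
      {"device": "razor-42", "date": "03/29/2015", "value": 1.7},
      {"device": "razor-42", "date": "not-a-date", "value": 2.0},
      {"device": "razor-42", "date": "2015-03-29", "value": 0.0},
  ]
  print(list(filter_and_normalize(raw)))            # only the first record survives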

Today I am unaware of anyone working on this issue, despite pushing for it and finding a few leading edge Fortune 500 executives already facing this challenge.  The truth is we live locally and we're served locally, so why not compute locally?  I see this as a gift of future revenue, sitting on the doorsteps of the telcos and cable providers, waiting for them to create the product.  However I don't believe they realize what they have.  No last mile provider has made a splash in cloud or big data, yet they own the last mile; they own the gateway that links the world to their customers.  Moving the public cloud from super-regional data centers to the local central office where the last mile connects could make the telcos instantly relevant in cloud and give them a nearly insurmountable competitive advantage over today's public cloud leaders like Amazon and Microsoft (perhaps this is part of the reason Google created Google Fiber).  Imagine a legacy infrastructure being resurrected to meet an emerging need.  But then, as I said at the beginning, "Look back to where you have been, for a clue to where you are going".

P.S.  I was so excited to see the headline "IBM, Juniper Networks work to build networks capable of real-time, predictive analysis", until I realized it was the opposite of integrating data analytics into the network.  Oh well, my quest lives on.

Sunday, March 22, 2015

Value * Easy = Consumption

Over the past decade I've witnessed a constant stream of IT executives and technology professionals viewing cloud as a threat to their careers.  When viewed through the eyes of an internal IT shop where the business has been a captive customer, I can understand their worry.  Now they're being asked to enable innovation instead of taking orders; to bring solutions to the business instead of begrudgingly accepting new challenges.  However I can't understand why they don't see the other side of the cloud coin, the very equation which drives cloud adoption: Value * Easy = Consumption.

Public Cloud has been built on two propositions: Value and Easy.  The first, Value, comes from providing access to resources, in a short time period, without capital investment.  Those three elements align with the strategic goals of every CxO no matter how you write them:
  • "Do more with less"
  • "Improve agility, elasticity, efficiency"
  • "Reduce costs"
  • "Shift from maintain to innovate"
  • "Remake the cost curve" (*my personal favorite)
Those are just a sample of quotes from CxOs I've worked with over the past decade.  What's more, each of those CxOs had a common opinion of IT: too slow and expensive for the value delivered.  This is the environment into which AWS started selling its cloud capabilities, back before we had the phrase "cloud computing".  It's important to remember AWS grew out of Amazon's own internal needs; it was not the result of market surveys and product development.  Although Fortune 1000 adoption of public cloud has been slow, the concepts of cloud computing rapidly penetrated corporate America in an attempt to bring the AWS value proposition to the enterprise.  Shifting from a hardware-centric view to a capability-centric view of infrastructure is a major upheaval in approach.

Given that very few companies have been successful in adopting cloud, what's the holdup?

Whereas a CIO can buy "Value" in the form of tools (BMC, VMware, etc.) or rent it (AWS, Google, Azure, etc.), the truth is they can't buy, rent, lease, borrow or even steal "Easy".  Making something easy isn't easy, and cloud is anything but easy.  Put yourself in the shoes of a business executive such as the Chief Marketing Officer or the Chief Financial Officer.  In your world you have very few hard assets, having shifted most everything to a lease model: from office space and PCs to digital advertising and audit.  You can shift your spend as your business changes throughout the year.  What you need are technology solutions able to meet your need for agility, elasticity and efficiency.  How does your IT team respond?  By building out large scale data centers, buying servers, writing software.  Do any of these approaches appear to be in sync with the CMO's and CFO's needs?  No.  In fact strategic planning with IT is so difficult that these leaders are increasingly willing to go outside the company, which is not easy, to get what they need.  They're willing to invest their reputation and the success of their team in taking a risk to convince the CEO, corporate security, the office of the general counsel, and fellow business leaders that going around IT is the right strategy.  Then they spend money on consultants and hire talent to move in the new direction.  And yet all of that is considered easier than getting solutions from IT, the place they would prefer be the first, last and only stop.

Without a concerted effort to make cloud use easy, the entire equation is upset.  Easy is the governor on the economic engine of cloud.  Having cloud capabilities, being able to deliver the "Value", isn't enough.  When done right, "Easy" is a multiplier of "Value" and drives consumption significantly beyond expectations.  At that point IT executives and technology professionals no longer view cloud as a threat to their careers; it morphs into a driver of career opportunity.  Their own value increases dramatically, particularly their strategic value to the long-term success of the business.  In my experience it's much more rewarding to have a seat at the table to discuss how to accomplish some new goal than to be berated as the barrier to accomplishing an old goal.

Cloud in the enterprise will never be a success without "Easy".

Sunday, March 15, 2015

We're Well On Our Way to Serverless Computing

As I discussed in my first post, I came up with an idea I titled "Serverless Computing" in 2002.  At the time I was frustrated by the limitations of web, application and data server capabilities.  I was implementing a rather amazing B2B marketplace for a drug company: a leading edge architecture I had developed using XML at its core, along with Java messaging services and styling objects to render the final views.  The same architecture and implementation had to support multiple lines of business without any crossover.  My frustrations led me to start questioning everything.  If things weren't working, why was I continuing to do everything the same way I had before?

In the middle of a snow storm in central Connecticut, sitting in a frozen rental car waiting for warmth (I'm from the South), I had an epiphany.  I realized most of the constructs of computing are driven by human needs, not the computer's.  It dawned on me that all my architecture work was about putting stakes in the ground as anchors for our thinking and development of the portal.  I was reasonably good at refactoring applications to improve security, efficiency, and efficacy; why not apply the same thinking to architecture?  I scurried down a path of thinking that led me to the conclusion that our modern architectures are built on so many layers of abstraction that we've lost sight of the why.  I was perpetuating the problem by blindly following the norm.

Ask yourself this: where did the concept of a "server" come from?  Many people refer back to the origination of client/server computing: decoupling the processing unique to each user (the client) from the processing common to everyone (the server).  Once software was decomposed into two complementary applications, it could run on two different computers, where the server does the heavy lifting and is therefore optimized for its workload.  In reality client/server is really an extension of the mainframe architecture, where desktop PCs replace the dumb green screen terminal and, by their nature of having a processor on board, share some of the processing load.  That's all good, but what drove the creation of the mainframe, and therefore client/server, was economics.  By centralizing processing power and enabling remote access, mainframes delivered a reasonable economic model for the automation of basic business tasks.  Dumb terminals made sense when people and the mainframe were local, few applications existed, applications were simple, and the cost of infrastructure was high.

Today none of the original drivers of mainframes and client/server exist, yet we still use the architecture unchanged.  If you took an ultra-modern data center and walked someone from 1965 through it, there is no way they wouldn't mistake the mass of pods for mainframes.  Those massive data centers are nowhere near where the people who use them work.  In fact we can no longer assume employees even work in buildings, or from 8am to 6pm.  The software landscape consists of billions of applications with thousands more created every day.  And the cost of infrastructure is so low, thanks to density and scale, that a modern smartphone has more processing power, memory, storage and network bandwidth than a "server" did just a decade ago.  We are surrounded by highly capable, network accessible computing devices which spend the majority of their lives I/O bound, just waiting around for something to do.  Why are we letting all that computing power go to waste?  We're ignoring the real promise of cloud computing, a concept closer to P2P than to the Internet and what we think of as public cloud today.  I'm talking about massive distribution of applications, data and infrastructure; the kind of infrastructure people cower at when talking about cloud, but fully embrace when talking about the Internet of Things.

We need to rethink our approach to computing.  Period.

When you tear out the non-value-added elements of client/server, the one tenet which survives is decoupling: separating the user interface from the business logic.  Decoupling's primary value proposition is at the software layer, not hardware, as we are reminded every time a web server goes down.  And hardware, when viewed through cloud computing optics, is nothing more than a pool of resources (compute, memory, storage, etc.).  If we take the widely available computing resources we have on our client devices and run our user-oriented "server" software there, we gain several benefits (a minimal sketch of the idea follows the list):
  • decreased impact of "server" outages
  • reduced complexity of "server" environments
  • federation of power consumption over the entire power grid
  • elimination of the need for large, centralized data centers
  • reduced long haul bandwidth requirements
  • a higher barrier to DDoS attacks and a reduced risk of penetration, since key data never leaves the premises
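
To make the idea tangible, here's a minimal sketch using nothing beyond the Python standard library: a scrap of user-oriented "server" logic living on the client device itself, answering the local user interface over loopback.  The endpoint and payload are made up; the point is nothing here requires a trip to a data center.

  from http.server import BaseHTTPRequestHandler, HTTPServer
  import json

  # Minimal sketch: user-oriented "server" logic running on the client device.
  # The /recommendations endpoint and its payload are hypothetical.
  class LocalBusinessLogic(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/recommendations":
              # Business logic executes on the user's own hardware; only staged
              # reference data ever came from a central data center.
              body = json.dumps({"items": ["reorder blades", "review usage"]}).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)
          else:
              self.send_error(404)

  if __name__ == "__main__":
      # Bind to loopback only: the service is private to this device.
      HTTPServer(("127.0.0.1", 8080), LocalBusinessLogic).serve_forever()
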
Preposterous!  You're crazy! Insane! Never! Yet that's precisely the direction we're heading in; in fact, we're fast approaching the arrival platform.

When I submitted my whitepaper to multiple outlets in 2002 (including my employer at the time, IBM), they told me I was crazy.  Nobody asked me to explain my thinking or even gave the idea a second thought.  Yet today I'm more convinced than ever it's the endgame of where we're heading.  Consider the rise in popularity of Docker, a container-oriented tool which approaches virtualization correctly (as opposed to the crazy idea of virtual machines, which replicate bloated operating systems multiple times over).  Consider the rise of microservices, self-contained services which are distributed with the core application.  We are at the threshold already.

Moving over the threshold requires two things.  First, a tweak to Docker so it can be deployed seamlessly as part of an existing operating system install, much like Java, along with the management tools required in a massively distributed system.  Second, similarly scaled data federation tools, which I don't believe exist today (for more on data federation see my entry on The Data War and Mobilization of IT, or my upcoming entry on Data Analytics in the Network).
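
Since those tools don't exist yet, here's a rough sketch of what I mean by data federation at this scale, again in plain Python with hypothetical node addresses and a hypothetical /query endpoint: the question travels to where the data lives, each node answers from its local store, and only the results move.

  import concurrent.futures
  import json
  import urllib.parse
  import urllib.request

  # Rough sketch of a federated query: fan the question out to the nodes that
  # hold the data and merge whatever comes back.  Node addresses and the
  # /query endpoint are hypothetical.
  NODES = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]

  def ask_node(base_url, question):
      url = f"{base_url}/query?q={urllib.parse.quote(question)}"
      with urllib.request.urlopen(url, timeout=2) as resp:
          return json.load(resp)

  def federated_query(question):
      """Query every node in parallel; unreachable nodes simply contribute nothing."""
      rows = []
      with concurrent.futures.ThreadPoolExecutor() as pool:
          futures = [pool.submit(ask_node, n, question) for n in NODES]
          for f in concurrent.futures.as_completed(futures):
              try:
                  rows.extend(f.result().get("rows", []))
              except OSError:
                  continue
      return rows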

Just imagine how the world of business computing would change if we eliminated just 20% of the web and application servers.  How about reducing web and application server instances for consumer cloud offerings such as Office 365, or for your bank?  Go way out on a limb and consider the adoption of P2P tools such as BitTorrent Sync.

And by the way, I'm still waiting for someone to provide mainframe-based public cloud services.  Where is the new EDS?

Monday, March 9, 2015

Going Against the Grain

I've struggled for the past two months to write this entry.  It started with the topic of innovation and why companies are struggling at it, but that quickly devolved into a "how to be more innovative" treatise.  However if you're like me, you've already read several great articles and heard numerous speakers lay out a foundation for innovation.  And at some point you realize you're reading the same thing over and over again because, for whatever reason, nobody's listening to the message.  So another person jumped in to say it a second time.  Then a third time.  Fourth time.  Fifth.  Well that's obviously a broken path, so rather than be the sixth I realized I needed to take a new direction.

In that moment of despair after endless edits, thinking perhaps my argument was flawed (which would explain why it was so hard to capture), it dawned on me to go back to the basics.  Yes, innovation is a struggle. But why?  Is it really just because innovation requires a willingness to invest in failure, which is anathema to a company focused on quarterly results?  I don't think so.  I think the problem is more basic and has to do with the cultural proximity of the smart people who invent, the entrepreneurs who innovate, and the public who wants everything better, cheaper and faster.

Through the 1960s, we in the US had a healthy habit of churning out earth-shattering inventions which drove economic growth for decades.  However invention requires patience, a tolerance for failure, and funding.  As companies tightened up their bottom lines through the 1970s and 1980s, we subdued this habit in the name of global sourcing and cost cutting, moving our research offshore to locales with lower-cost labor.  Of course nobody considered the opportunity cost of this shift.  One of the most often discussed results of this approach is how billions of dollars in economic growth have been shifted from the US to foreign countries, raising their standards of living and education while ours have remained stagnant or dropped.  However there is another opportunity cost rarely considered: what happens when you move research halfway around the world to a new culture which doesn't share the same appetite for change and risk as the United States?

Our culture in the United States has an acute case of individualitis.  In fact the "American Dream" is based on the concept of the individual controlling their destiny through dedication and hard work.  Our government was established to protect the rights of the individual from tyranny.  The reason a free market capitalist system works in the US is because it's the only system in which the talent and effort invested by the individual delivers a powerful dividend.  Capitalism is the great equality engine because it rewards innovation.  The rise of American economic power started with the Industrial Revolution, but it wasn't fueled by invention as is so often argued.  Innovation, the use of inventions to solve real problems, was the real rocket fuel.  Alexander Graham Bell invented the telephone, but the switchboard was the innovation which connected people over great distances.  On its own the phone did very little.  Morris Tanenbaum invented the silicon transistor, but the silicon wafer was the innovation which brought microprocessors to the masses.

Our American culture embraces innovation.  We like cool, new technologies which purport to make life better, even when they don't.  Although there are certainly pockets the world over, there is no better market for launching new products that challenge the status quo or establish entirely new segments.  Our software businesses prove this on a daily basis.  Despite repeated efforts by large companies to move software development offshore, the most innovative software is still largely developed in the United States for the US market.

So back to the question then: why are companies struggling with innovation?  I believe it's because we've added noise between each step of the invention-to-insight-to-innovation process by separating the functions on a cultural plane.  I've learned over the past thirty years not to underestimate the importance of culture.  If you want to sell on anything other than price, you have to innovate.  But to innovate, you need access to invention, and that access is much more than reading whitepapers and listening to lectures.  People need to share more than a language; they need to share cultural experiences.  How are we as leaders enabling cultural exchange to occur as part of our daily routine?  How are we growing ourselves by creating interactions with other cultures at work and at home?

We need to consciously choose to go against the grain; to recognize that even when we speak the same words, we can mean two different things when we cross the cultural divide.  Now that my eyes are open, it's incumbent upon me to make the time to move forward.  I'm sure to many this is all very obvious, in which case, although I'm admittedly late to the party, at least I'm on my way.