Sunday, July 5, 2015

Cloud and the Omni-channel Customer Experience

We are truly living in a connected world.  Today I checked the weather on my computer and paid some bills before checking my email on my phone en route to my SUV.  Once I sat down, my SUV automatically connected to my phone and minutes later showed an incoming text from my daughter, whom I called back via voice command.  She texted me her order for a meal, which I bought via an app while waiting in a parking lot, and once home she watched a video she had created earlier in the day by mirroring her phone to the TV.

Impressive to say the least.  Remember, NONE of the solutions I used were designed by or bought from the same company.  Yet amazingly they all meet my need: to help me make the most of my time.  In our connected world we have the opportunity to always be busy, to cram as much as possible into our daily life.  Mobility, eCommerce, Social Media, and Collaboration all let us take advantage of those previously lost moments in time, whether waiting in the car or sitting at the airport.  And at the foundation of all of this technology, which is able to work so seamlessly together, is cloud computing.

While my consumer oriented world is full of neat new toys, the world of retail is trying to figure out how to play a bigger role; not just retail as in stores, but retail as in producer/consumer interaction, including banks and healthcare, automotive and high technology.  I haven't seen a business strategy or talked to a C-suite executive in retail the past three years without the topic of Omni-channel finding its way into the conversation.  An Omni-channel Customer Experience is a simple concept: the creation of a common look and feel across all channels through which a customer interacts with a company.  Simple in concept, yet frustratingly difficult in reality, but companies know their customers expect a consistent experience.  CIO's know whoever delivers on the promise best has the opportunity to create some daylight as they pull ahead.

Why is omni-channel difficult to implement in a single company yet already a thread holding my personal life together?  The difference is cloud computing.  Each of the solutions I used today was built in the last five years on a cloud foundation.  However cloud continues to prove elusive for corporate America, for numerous reasons.  There is simply no way to build an omni-channel customer experience and avoid cloud computing, yet focusing on cloud computing won't deliver the experience nirvana either.  It's easy to understand why everyone from CEO's to CIO's is frustrated.  It's the enigma of modern IT: all the data, compute and storage one could possibly ever need, and nothing put together in a way that makes it truly useful.  It's the cost of holding on to outdated models, content to reap the benefits of one technology generation without considering the next.  Companies today are islands; islands of applications and data, if not servers and storage as well.  Data in isolation is almost worthless in today's world of Real Time Analytics and Big Data.  And making it all the more frustrating, you can't hurry cloud; you just have to wait.

Cloud Computing is the single most important technological shift to happen in Information Technology.  For the first time it's not about a domain, such as the network or data or applications.  Cloud is about everything, from technology to taxes.  Like the artillery shell that's a degree off when fired, those who get cloud wrong will simply miss the mark, measured in cost in the short term, but ultimately measured in customer satisfaction and solvency.

Sunday, March 29, 2015

The Last Mile and the Future of Cloud


"Look back to where you have been, for a clue to where you are going"

It's an unattributed quote, but I find it applies repeatedly throughout technology.  So often what appears new is really a twist on a tried and true approach.  Anyone who's spent any time around networks knows the phrase "last mile".  It's a reference to the final leg of the network connecting a home or office.  When AT&T was broken up by the US Government in 1984, it emerged as the long distance company while local service was divided among seven "baby bells" including Pacific Telesis, Ameritech, and BellSouth.  Experts believed long distance held the promise of higher profits while the Regional Bell Operating Companies (RBOC's) were doomed to a capital intensive, low margin struggle.

The experts were wrong.

Owning that "last mile" turned out to be very profitable; so profitable that one of the RBOC's, Southwestern Bell, was able to buy those other three RBOC's, change its name to SBC, and then buy its former parent, AT&T.  Although mobile networks are great for connecting smartphones and tablets and satellites can deliver radio and television, it turns out nothing yet can replace fiber and copper for bandwidth and low latency.  After the Telecommunications Act of 1996, last mile services exploded and today, instead of just the local phone company, there are a variety of competitors including Google.  And providing that last mile of service continues to be a significant revenue driver.

Let's put the last mile conversation to the side and switch gears.  Today large corporations are investing billions of dollars in Big Data; growing their analytic capabilities to generate the oxygen required by their growth engines.  These fierce competitors are slowly realizing there simply isn't enough time available to:

  1. capture data at the point of origination
  2. move the data across the country
  3. filter the data to focus on the most valuable elements
  4. combine the data with other data to broaden the perspective
  5. execute analytics on the data
  6. generate a result
  7. communicate the result back across the country
  8. leverage the result to drive some benefit
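
To make the time pressure concrete, here's a toy latency budget for those eight steps; every per-step number below is an illustrative assumption on my part, not a measurement:

```python
# A toy latency budget for the eight-step pipeline above.
# Every per-step figure is an illustrative assumption, not a measurement.
STEP_LATENCY_MS = {
    "capture at origin":        5,
    "move across country":     40,   # cross-country leg, network hops included
    "filter to valuable data": 10,
    "combine with other data": 15,
    "execute analytics":       50,
    "generate result":          5,
    "return across country":   40,   # the same cross-country leg, in reverse
    "apply result":             5,
}

total_ms = sum(STEP_LATENCY_MS.values())
print(f"End-to-end: {total_ms} ms")
```

Even with generous guesses, the two cross-country legs alone account for nearly half the total, and the budget as a whole already consumes most of a sub-second window.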

If the network operates at the speed of light, how can there not be enough time?  Beyond the reality that light slows down in fiber (by about 31%), there is no single direct link between the user and the corporate data center.  Users have to be authenticated, security policies applied, packets routed, applications load balanced.  The multitude of events that occur, each one very quick, add up to a delay we call latency.  In a world where everything is measured in minutes, latency goes unnoticed, but in our Internet world we are moving from seconds to sub-second time frames.  Think about how patient you are when navigating through a website.  After that first second ticks off the clock, people begin to wonder if something is wrong.  It's the byproduct of taking our high speed access to the Internet for granted.  Marketers want to collect metadata about what you're trying to do, figure out how they can influence you, and insert themselves into your decision chain; and they only have the time between when you click the mouse and when the browser refreshes.  Hopefully now you can understand why that eight step process, moving data across the country and back, is so unappealing.
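
To put numbers on the propagation delay alone, here's a quick back-of-the-envelope calculation; the route length and refractive index are rough assumptions, and this is before a single router, firewall, or load balancer touches the packet:

```python
# Propagation delay alone, before any authentication, routing,
# or load balancing.  Route length and refractive index are rough assumptions.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_INDEX = 1.47           # typical optical fiber; light runs ~31% slower
ROUTE_KM = 4_800             # assumed NY-to-LA fiber route, longer than straight-line

speed_in_fiber = C_VACUUM_KM_S / FIBER_INDEX      # roughly 204,000 km/s
one_way_ms = ROUTE_KM / speed_in_fiber * 1000
print(f"One-way: {one_way_ms:.1f} ms, round trip: {2 * one_way_ms:.1f} ms")
```

Tens of milliseconds of raw physics each way, multiplied by every hop and handshake, is how a sub-second budget evaporates.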

For the past three years I have advocated an alternate approach: putting servers as close to the end user as possible (commonly called "the edge").  Where "the edge" is located depends on the conversation; however, the furthest out it can be is the start of that last mile, the last point on the network before it connects to the end user.  Today the edge extends as far as the same city for large populations; more often it's a region or even a state.  Although my serverless computing concept could be part of the answer and move the analytics into the end user's computer, in truth at least some analysis needs to occur off-site, if for no other reason than to stage the right data.  Moving analytics closer to the edge requires us to move compute and storage resources closer as well.

Let's return to the "last mile".

If you looked at a map of the network which serves your home or business, you would notice the wire goes from your house, out through a bunch of routers, switches and signal boosters until it finally reaches a distribution point owned by your provider (or where they lease space).  These locations are often large, having previously housed massive switching systems for the telephone network, and they are secure, built like Cold War bomb shelters.  What if these locations were loaded with high density compute and storage, available much like a public cloud, to augment the resources within the corporate data center?  If a business can operate while leveraging public cloud resources, what we lovingly refer to as a Hybrid Cloud model, then wouldn't it make sense to push the resources as far out toward the edge as possible?

I'm hoping you are increasingly buying into this idea, or at least skeptical enough to wait for it to implode, and want to know how this is broadly applicable.  I do not see this as a panacea, any more than I do serverless computing, cloud, big data, mobility or any other technology.  However I do see it, at a minimum, as moving the conversation forward on how to deal with a world where our endpoints are no longer fixed, and at a maximum, another arrow in the quiver.  Consider how much data is being collected today; from the GPS location in your phone to the RFID tag on your razor blades, we are living in the Data Age.  Every single device powered by electricity is a likely candidate to be internet enabled, what we call the Internet of Things.  Each of these devices will communicate something, creating new data every day and adding to the pile of data that already exists.  To deal with the onslaught, companies need to filter what's coming in, remove the noise, and then execute their normalization routines (standardizing date formats, for example) to get the data ready for use.  Since everything else is essentially free compared to the cost of moving data, there is an economic incentive to move data as short a distance as possible.  Handling the grunt work of analytics locally could have a dramatic impact on overall system speed.  And over time, having local compute resources will enable software architects to push analytics closer and closer to "the edge".
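
As a sketch of what that edge-side grunt work might look like, here's a minimal filter-and-normalize pass; the field names and date formats are assumptions for illustration:

```python
from datetime import datetime

# A minimal sketch of edge-side normalization: drop the noise locally,
# standardize date formats, and only then let data move upstream.
# Field names and format list are assumptions for illustration.
KNOWN_FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y"]

def normalize_date(raw):
    """Return an ISO date string, or None if the value is unparseable noise."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None

def filter_and_normalize(records):
    for rec in records:
        ts = normalize_date(rec.get("date", ""))
        if ts is not None:              # noise never leaves the edge
            yield {**rec, "date": ts}

clean = list(filter_and_normalize([
    {"date": "07/05/2015", "value": 1},
    {"date": "garbage",    "value": 2},   # removed locally
    {"date": "2015-07-05", "value": 3},
]))
print(clean)
```

The point is economic as much as technical: the garbage record is discarded before it ever consumes long-haul bandwidth.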

Today I am unaware of anyone working on this issue, despite my pushing for it and finding a few leading edge Fortune 500 executives already facing this challenge.  The truth is we live locally and we're served locally, so why not compute locally?  I see this as a gift of future revenue, sitting on the doorsteps of the telco's and cable providers, waiting for them to create the product.  However I don't believe they realize what they have.  There is no last mile provider who has made a splash in cloud or big data.  They own the last mile; they own the gateway that links the world to their customers.  Moving the public cloud from super-regional data centers to the local central office where the last mile connects could make the telcos instantly relevant in cloud and give them a nearly insurmountable competitive advantage over today's public cloud leaders like Amazon and Microsoft (perhaps this is part of the reason Google created Google Fiber).  Imagine a legacy infrastructure being resurrected to meet an emerging need.  But then, as I said at the beginning, "Look back to where you have been, for a clue to where you are going."

P.S.  I was so excited to see the headline "IBM, Juniper Networks work to build networks capable of real-time, predictive analysis", until I realized it was the opposite of integrating data analytics into the network.  Oh well, my quest lives on.

Sunday, March 22, 2015

Value * Easy = Consumption

Over the past decade I've witnessed a constant stream of IT executives and technology professionals view cloud as a threat to their careers.  When viewed through the eyes of an internal IT shop where the business has been a captive customer, I can understand their worry.  Now they're being asked to enable innovation instead of taking orders; to bring solutions to the business instead of begrudgingly accepting new challenges.  However I can't understand why they don't see the other side of the cloud coin, the very equation which drives cloud adoption: Value * Easy = Consumption.

Public Cloud has been built on two value propositions.  The first is providing access to resources, in a short time period, without capital investment.  Those three values align with the strategic goals of every CxO no matter how you write them:
  • "Do more with less"
  • "Improve agility, elasticity, efficiency"
  • "Reduce costs"
  • "Shift from maintain to innovate"
  • "Remake the cost curve" (*my personal favorite)
Those are just a sample of quotes from CxO's I've worked with over the past decade.  What's more, each of the CxO's had a common opinion of IT: too slow and expensive for the value delivered.  This is the environment into which AWS started selling its cloud capabilities, back before we had the phrase "cloud computing".  It's important to remember AWS grew out of Amazon's own internal needs; it was not the result of market surveys and product development.  Although Fortune 1000 adoption of public cloud has been slow, the concepts of cloud computing rapidly penetrated corporate America in an attempt to bring the AWS value proposition to the enterprise.  Shifting from a hardware centric view to a capability centric view of infrastructure is a major upheaval in approach.

Given very few companies have been successful in adopting cloud, what's the holdup?

Whereas a CIO can buy "Value" in the form of tools (BMC, VMware, etc.) or rent it (AWS, Google, Azure, etc.), the truth is they can't buy, rent, lease, borrow or even steal "Easy".  Making something easy isn't easy, and cloud is anything but easy.  Put yourself in the shoes of a business executive such as the Chief Marketing Officer or the Chief Financial Officer.  In your world you have very few hard assets, having shifted most everything to a lease model; from office space and PC's to digital advertising and audit.  You can shift your spend as your business changes throughout the year.  What you need are technology solutions able to meet your need for agility, elasticity and efficiency.  How does your IT team respond?  By building out large scale data centers, buying servers, writing software.  Do any of these approaches appear to be in sync with the CMO's and CFO's needs?  No.  In fact strategic planning with IT is so difficult, these leaders are increasingly willing to go outside the company, which is not easy, to get what they need.  They're willing to invest their reputation and the success of their team in taking a risk to convince the CEO, corporate security, the office of the general counsel, and fellow business leaders that going around IT is the right strategy.  Then they spend money on consultants and hire talent to move in the new direction.  And yet all of that is considered easier than getting solutions from IT, the place they would prefer be the first, last and only stop.

Without a concerted effort to make cloud use easy, the entire equation is upset.  Easy is the governor on the economic engine of cloud.  Having cloud capabilities, being able to deliver the "Value", isn't enough.  When done right, "Easy" is a multiplier of "Value" and drives consumption significantly beyond expectations.  At that point IT executives and technology professionals no longer view cloud as a threat to their careers; it morphs into a driver of career opportunity.  Their own value increases dramatically, especially their strategic value to the long term success of the business.  In my experience it's much more rewarding to have a seat at the table to discuss how to accomplish some new goal than to be berated as the barrier to accomplishing an old goal.
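
Taken literally, the equation makes the point on its own; the 0-to-10 scales and the numbers here are purely illustrative:

```python
# The title equation taken literally: "Easy" multiplies "Value".
# Scores on a 0-10 scale; all numbers are purely illustrative.
def consumption(value, easy):
    return value * easy

print(consumption(9, 1))   # great capability, painful to use
print(consumption(9, 8))   # the same capability, made easy
print(consumption(9, 0))   # if it isn't easy at all, nobody consumes it
```

A multiplier, not an addend: zero out "Easy" and all the "Value" in the world produces zero consumption, which is the whole argument of this post.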

Cloud in the enterprise will never be a success without "Easy".

Sunday, March 15, 2015

We're Well On Our Way to Serverless Computing

As I discussed in my first post, I came up with an idea I titled "Serverless Computing" in 2002.  At the time I was frustrated by the limitations of web, application and data server capabilities.  I was implementing a rather amazing B2B marketplace for a drug company: a leading edge architecture I had developed using XML at its core, along with Java messaging services and styling objects to render the final views.  The same architecture and implementation had to support multiple lines of business without any crossover.  My frustrations led me to start questioning everything.  If things weren't working, why was I continuing to do everything the same way I had before?

In the middle of a snow storm in central Connecticut, sitting in a frozen rental car waiting for warmth (I'm from the South), I had an epiphany.  I realized most of the constructs of computing are driven by human needs, not the computer's.  It dawned on me that all my architecture work was about putting stakes in the ground as anchors for our thinking and development of the portal.  I was reasonably good at refactoring applications to improve security, efficiency, and efficacy; why not apply the same thinking to architecture?  I scurried down a path of thinking that led me to the conclusion that our modern architectures are built on so many layers of abstraction that we've lost sight of the why.  I was perpetuating the problem by blindly following the norm.

Ask yourself this: where did the concept of a "server" come from?  Many people refer back to the origination of client/server computing; decoupling the processing unique to each user (the client) from the processing common to everyone (the server).  Once software was decomposed into two complementary applications, it could run on two different computers, where the server does the heavy lifting and is therefore optimized for its workload.  In reality client/server is an extension of the mainframe architecture, where desktop PC's replace the dumb green screen terminal and, by their nature of having a processor on board, share some of the processing load.  That's all good, but what drove the creation of the mainframe, and therefore client/server, was economics.  By centralizing processing power and enabling remote access, mainframes delivered a reasonable economic model for the automation of basic business tasks.  Dumb terminals made sense when people and the mainframe were local, few applications existed, applications were simple, and the cost of infrastructure was high.

Today none of the original drivers of mainframes and client/server exist, yet we still use the architecture unchanged.  If you walked someone from 1965 through an ultra-modern data center, there is no way they wouldn't mistake the mass of pods for mainframes.  Those massive data centers are nowhere near where the people who use them work.  In fact we can no longer assume employees even work in buildings, or from 8am to 6pm.  The software landscape consists of billions of applications with thousands more created every day.  And the cost of infrastructure is so low, thanks to density and scale, that a modern smartphone has more processing power, memory, storage and network bandwidth than a "server" did just a decade ago.  We are surrounded by highly capable, network accessible computing devices which spend the majority of their life I/O bound, just waiting around for something to do.  Why are we letting all that computing power go to waste?  We're ignoring the real promise of cloud computing, a concept closer to P2P than to the Internet and what we think of as public cloud today.  I'm talking massive distribution of applications, data and infrastructure; the kind of infrastructure people cower at when talking about cloud, but fully embrace when talking about the Internet of Things.

We need to rethink our approach to computing.  Period.

When you tear out the non-value added elements of client/server, the one tenet which survives is decoupling: separating the user interface from the business logic.  Decoupling's primary value proposition is at the software layer, not hardware, as we are reminded every time a web server goes down.  And hardware, when viewed through cloud computing optics, is nothing more than a pool of resources (compute, memory, storage, etc.).  If we take the widely available computing resources we have on our client devices and run our user oriented "server" software there, we gain several benefits:
  • decreased impact of "server" outages
  • reduced complexity of "server" environments
  • federation of power consumption over the entire power grid
  • elimination of the need for large, centralized data centers
  • reduced long haul bandwidth requirements
  • a higher barrier to DDoS attacks and a reduced risk of penetration, as key data never leaves the premises
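
A minimal sketch of the idea: the user-facing "server" software running on the client's own machine (localhost here), so the request round trip never leaves the device.  The route, port handling, and payload are all assumptions for illustration:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal sketch: the "server" half of a decoupled application running
# on the client device itself, so the UI talks to local compute instead
# of a distant data center.  Payload and route are illustrative assumptions.
class LocalLogic(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "computed locally"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), LocalLogic)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" request: a full HTTP round trip that never leaves the machine.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    data = resp.read().decode()
server.shutdown()
print(data)
```

Decoupling survives intact; only the wire between the two halves has shrunk to nothing.
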
Preposterous!  You're crazy!  Insane!  Never!  Yet that's precisely the direction we're heading in, and we are fast approaching the arrival platform.

When I submitted a whitepaper on the idea to multiple outlets in 2002 (including my employer at the time, IBM), I was told I was crazy.  Nobody asked me to explain my thinking or even gave the idea a second thought.  Yet today I'm more convinced than ever it's the endgame of where we're heading.  Consider the rise in popularity of Docker, a container oriented tool which approaches virtualization correctly (as opposed to the crazy idea of virtual machines, which replicate bloated operating systems multiple times over).  Consider the rise of microservices: self contained services which are distributed with the core application.  We are at the threshold already.

Moving over the threshold requires two things.  First, a tweak to Docker so it can be deployed seamlessly as part of an existing operating system install, similar to Java, along with the management tools required in a massively distributed system.  Second, we need similarly scaled data federation tools, which I don't believe exist today (for more on data federation see my entry on The Data War and Mobilization of IT, or my upcoming entry on Data Analytics in the Network).

Just imagine how the world of business computing would change if we eliminated just 20% of the web and application servers.  How about reducing web and application server instances for consumer cloud offerings such as Office 365, or for your bank?  Go way out on a limb and consider the adoption of P2P tools such as BitTorrent Sync.

And by the way, I'm still waiting for someone to provide mainframe based public cloud services.  Where is the new EDS?

Monday, March 9, 2015

Going Against the Grain

I've struggled for the past two months to write this entry.  It started with the topic of innovation and why companies are struggling at it, but that quickly devolved into a "how to be more innovative" treatise.  However if you're like me, you've already read several great articles and heard numerous speakers lay out a foundation for innovation.  And at some point you realize you're reading the same thing over and over again because, for whatever reason, nobody's listening to the message.  So another person jumped in to say it a second time.  Then a third time.  Fourth time.  Fifth.  Well that's obviously a broken path, so rather than be the sixth, I realized I needed to take a new direction.

In that moment of despair after endless edits, thinking perhaps my argument was flawed (which would explain why it was so hard to capture), it dawned on me to go back to the basics.  Yes, innovation is a struggle. But why?  Is it really just because innovation requires a willingness to invest in failure which is anathema to a company focused on quarterly results?  I don't think so.  I think the problem is more basic and has to do with the cultural proximity of the smart people who invent, the entrepreneurs who innovate, and the public who wants everything better, cheaper and faster.

Through the 1960's the US had a healthy habit of churning out earth shattering inventions which drove economic growth for decades.  However invention requires patience, a tolerance for failure, and funding.  As companies tightened up their bottom lines through the 1970's and 1980's, we subdued this habit in the name of global sourcing and cost cutting, moving our research off-shore to locales with lower cost labor.  Of course nobody considered the opportunity cost of this shift.  One of the most often discussed results of this approach is how billions of dollars in economic growth have been shifted from the US to foreign countries, raising their standards of living and education while ours have remained stagnant or dropped.  However there is another opportunity cost rarely considered: what happens when you move research half-way around the world to a new culture which doesn't share the same appetite for change and risk as the United States?

Our culture in the United States has an acute case of individualitis.  In fact the "American Dream" is based on the concept of the individual controlling their destiny through dedication and hard work.  Our government was established to protect the rights of the individual from tyranny.  The reason a free market capitalist system works in the US is because it's the only system in which the talent and effort invested by the individual delivers a powerful dividend.  Capitalism is the great equality engine because it rewards innovation.  The rise of American economic power started with the Industrial Revolution, but it wasn't fueled by invention as is so often argued.  Innovation, the use of inventions to solve real problems, was the real rocket fuel.  Alexander Graham Bell invented the telephone, but the switchboard was the innovation which connected people over great distances.  On its own the phone did very little.  Morris Tanenbaum invented the silicon transistor, but the silicon wafer was the innovation which brought microprocessors to the masses.

Our American culture embraces innovation.  We like cool, new technologies which purport to make life better, even when they don't.  Although there are certainly pockets the world over, there is no better market for launching new products that challenge the status quo or establish entire new segments.  Our software businesses prove this on a daily basis.  Despite repeated efforts by large companies to move software development off-shore, the most innovative software is still largely developed in the United States for the US market.

So back to the question then: why are companies struggling with innovation?  I believe it's because we've added noise between each step of the invention-to-insight-to-innovation process by separating the functions on a cultural plane.  I've learned over the past thirty years not to underestimate the importance of culture.  If you want to sell on anything other than price, you have to innovate.  But to innovate, you need access to invention, and that access is much more than reading whitepapers and listening to lectures.  People need to share more than a language; they need to share cultural experiences.  How are we as leaders enabling cultural exchange to occur as part of our daily routine?  How are we growing ourselves by creating interactions with other cultures at work and at home?

We need to consciously choose to go against the grain; to recognize that even when we speak the same words, we can mean two different things when we cross the cultural divide.  Now that my eyes are open, it's incumbent upon me to make the time to move forward.  I'm sure to many this is all very obvious, in which case, although I'm admittedly late to the party, at least I'm on my way.

Sunday, February 22, 2015

What is Net Neutrality Really About?

Let me start by stating my position.  I am very passionate about a low cost, open Internet.  I've shared my opinion with the FCC like thousands of others and advocate for Net Neutrality openly.  However that's about as far as I can go without starting to get into the micro-issues which comprise the real challenge behind Net Neutrality, the part the average person doesn't understand.  Unfortunately it's not a cut and dried, this-or-that issue.  Let me shed some light by providing a perspective I have yet to read or hear about in the mainstream or even tech media.

Initial Issue: Fast Lanes
This is the primary concern consumer advocates have latched on to.  Using examples from Comcast and others of intentionally slowing down Netflix traffic, they argue that once one type of data is treated differently than another, the Internet is no longer a level playing field.  Advocates believe the Internet is a tremendous resource driving the global economy because of its equal access.  It's this equal access, proponents argue, which enabled companies such as Facebook, Twitter, eBay and Amazon to blossom.  If Facebook had to pay extra for its bandwidth to get better service, then it would never have survived.  Countering this argument: if Facebook provides a value to their users, their business model should assure their ability to pay for the bandwidth; otherwise there's no market for a Fast Lane.  I too am concerned when any company can arbitrarily advantage themselves by interfering with the service of another company.  Would you react positively to Microsoft Internet Explorer redirecting you to Bing when you tried to access Google's search engine?

Consider who is opposing Net Neutrality: large communication companies who provide consumer Internet access: Verizon, AT&T, Comcast, and Time Warner Cable.  Each of those companies provides, or soon will provide, Internet based video streaming services to compete with Netflix.  Proponents argue this is why Comcast actively throttled Netflix bandwidth, to advantage their own service.  Is that fair?

Netflix doesn't get Internet access for free; they pay for it just like subscribers to Comcast and Verizon do.  And the consumer certainly isn't getting Internet for free, last time I checked my TWC bill.  You need to understand what constitutes the Internet.  Large ISP's (Verizon, Comcast, Time Warner, AT&T, CenturyLink, Level 3, Cox Cable, etc.) interconnect their networks to form what we think of as the Internet.  Way back in the early days they decided that when connecting to each other there would be an even balance of trade between traffic moving in and out.  As a result the cost implications would zero out, and therefore there was no need to bill each other for the traffic switching between their networks.

Today Internet traffic is not an even balance of input and output.  In fact it's heavily skewed as outflow from producer to consumer.  When the producer and consumer have the same ISP there's no issue.  However once traffic needs to move from one ISP's backbone to another, the receiving ISP now sees a revenue opportunity to re-balance their business model.  They want to use their leverage, their consumer Internet subscriber base which cannot be accessed without their backbone, to extract a payment.  Legally they can't refuse access, so instead they simply limit the interconnection bandwidth unless the producer is willing to pay a premium, the Fast Lane.  However isn't their subscriber already paying for the traffic as part of their monthly fee?  In fact, isn't access to services like Netflix precisely what consumers are paying for?
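
The economics of that imbalance can be sketched with toy numbers; the transit rate and traffic volumes below are purely illustrative assumptions:

```python
# Toy settlement-free peering math.  When traffic in and out balance,
# nothing is owed; when the flow skews, the receiving ISP sees money
# on the table.  The rate and volumes are illustrative assumptions.
RATE_PER_TB = 2.00   # hypothetical transit price per terabyte

def implied_settlement(tb_in, tb_out):
    """What the receiving ISP could bill for the traffic imbalance."""
    return (tb_in - tb_out) * RATE_PER_TB

print(implied_settlement(100, 100))   # the early-Internet assumption: balanced, $0 owed
print(implied_settlement(900, 100))   # today's producer-to-consumer skew
```

The Fast Lane is, in effect, a way to collect that implied settlement without formally billing for interconnection.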

The answers are clearly YES.  

Again, this is about generating more revenue.  So what are the options for an ISP to increase revenue?  If they don't charge the consumer, the only other option is charging the producer.  Let's assume Fast Lanes are good and play that scenario forward.  Netflix shells out money to the ISP's for the Fast Lanes and as a result is forced to raise its subscription fee.  In the end the consumer still pays.  However what's interesting is that ONLY the consumers of Netflix pay; everyone who doesn't use Netflix isn't impacted.

Let's walk through the other option, charging consumers directly.  There are essentially two models available: 

Option 1: 'Cable TV' Model
In this model, consumers' monthly rates are increased to generate the additional revenue desired by the ISP's.  In doing so, EVERY consumer ends up paying for the traffic of those who consume the most data.  We end up with a subsidy situation analogous to ESPN in Cable TV.  Today the average cable subscriber pays around $6 for ESPN.  However if only those people who watch ESPN paid, the cost per subscriber would be over $40 and thus cost prohibitive.  Cable TV built its business on a shared burden model just like insurance, sharing costs across the broadest base possible.
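
The shared-burden arithmetic works out like this; the subscriber counts are round-number assumptions chosen to be consistent with the $6 and "over $40" figures above:

```python
# The shared-burden arithmetic behind the ESPN example.
# Subscriber counts are round-number assumptions for illustration.
total_subscribers = 100_000_000    # everyone who pays a cable bill
espn_viewers      = 14_000_000     # assumed share who actually watch ESPN
monthly_fee_all   = 6.00           # what every subscriber pays today

total_revenue = total_subscribers * monthly_fee_all
fee_if_viewers_only = total_revenue / espn_viewers
print(f"Viewer-only fee: ${fee_if_viewers_only:.2f}/month")
```

Spread across everyone, the fee is tolerable; concentrated on actual viewers, the same revenue requirement becomes cost prohibitive.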

Option 2: 'Data Plan' Model
Instead of offering subsidies, the other option is to bill consumers based on actual consumption.  Most likely the ISPs would adopt a mobile-phone data model, whereby the consumer pays for a specified amount of data traffic per month with overage fees for exceeding the limit.  But people like predictability: Comcast, Time Warner Cable and others tried this model and it failed miserably.  Opposing the idea, someone invariably argues a 'Data Plan' model disadvantages the poor by disproportionately impacting their scarce income.  In truth, tech geeks including me, who constitute the majority of bandwidth use, enjoy being subsidized by those in the bottom half of the spending curve.  Did you happen to notice this is the same financial model as charging the producers directly?  Only those who use the service pay, but charging the producer is significantly more efficient, and therefore cheaper to deliver, than bringing a 'Data Plan' model to consumer Internet subscriptions.  I'm not an insider, but I have to believe the failure of 'Data Plan' attempts is the primary driver behind Fast Lanes: achieving the same objective by another means.
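A 'Data Plan' bill is easy to sketch.  The cap, base fee, and overage rate below are illustrative assumptions patterned on mobile plans, not any ISP's actual pricing.

```python
# Hypothetical "Data Plan" billing, modeled on mobile phone data plans.
# All figures (cap, base fee, overage rate) are illustrative assumptions.

def monthly_bill(used_gb: float, cap_gb: float = 300.0,
                 base_fee: float = 50.0, overage_per_gb: float = 1.50) -> float:
    """Base fee covers traffic up to the cap; each GB beyond is billed extra."""
    overage = max(0.0, used_gb - cap_gb)
    return base_fee + overage * overage_per_gb

print(monthly_bill(120))   # light user stays at the base fee: 50.0
print(monthly_bill(450))   # heavy user pays for 150 GB of overage: 275.0
```

The unpredictability is visible in the two calls: identical plans, wildly different bills, which is exactly what consumers rejected when ISPs trialed metered billing.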

Whoa!  What did I just admit to?  Yep.  So it would appear that by advocating for Net Neutrality I'm just a selfish oaf who wants to benefit from the subsidies of the common man by forcing ISPs into a corner where Option 1 is the only option.  If that were true, understand I wouldn't be the only one.  Analysis by Net Neutrality advocates, using the numbers TWC provided in its financial reporting to the SEC, puts Time Warner Cable's profit margin on Internet services at 97%.  They're not exactly a charity.

When considering the original argument, supporting Net Neutrality to ensure equal access, the better model is actually Option 2 or the existing, controversial Fast Lane model.  In truth the large ISPs are, in essence, subsidizing the Internet by not charging interchange fees to carry traffic.  So is it unfair of them to now ask companies like Facebook and Netflix, who generate billions in revenue via advertising or subscription fees, to pay their share?  It quickly devolves into a circular argument, because without Facebook, Netflix et al. there would be very little consumer demand for the Internet, and thus a small subscriber base.

So what's the REAL problem?
Since revenue generation is at the heart of the issue, the real question driving the Net Neutrality debate is whether or not the profitability of ISPs should be limited.  However, that question obscures a more fundamental one which is rarely raised in the debate but is the reason I advocate for Net Neutrality: should a monopoly be restricted in the rates it charges?  

When it comes to broadband Internet access, an area where the US lags much of the world and something considered a fundamental requirement of a strong economy today, most consumers have little to no choice in ISP.  For example, at my house, to get more than 5 Mbps I have exactly one choice: Time Warner Cable.  By definition that's a monopoly.  TWC has worked hard over the past two decades to prevent other companies from entering its markets, just as all the other communications companies have done.  I was involved in a project in 2000 when a new company, Carolina Broadband, secured almost $300M in funding to develop a broadband offering in North Carolina.  They were fought at every turn by BellSouth and Time Warner and eventually evaporated.  
Monopolies are a perfectly legitimate business model, but only when properly regulated.  The free market requires competition to work.

As a result, Net Neutrality is a legitimate argument ONLY in those markets where there is no competition for broadband access, such as mine.  To foster competition, the FCC should learn from California's tentative approval of the Comcast/Time Warner Cable merger.  In it, regulators effectively force Comcast to open access to its network to enable competition, the same approach the cable companies advocated to ensure access to the telephone network 20 years ago.  Of course, since the enemy of my enemy is my friend, the big ISPs have banded together to fight any such approach on data networks.

What's it all mean?
In the end, network bandwidth is a limited resource just like everything else in the world: money, water, healthcare and NFL-caliber talent who obey the law.  In Econ 101 you learned about supply and demand: the greater the demand, the higher the cost, until supply can re-balance the equation.  Unregulated monopolies have a long history of restricting supply to drive revenue.  We don't need Net Neutrality; what we need is Broadband Competition.

And that is what Net Neutrality is REALLY all about!

Tuesday, February 3, 2015

Disposable Software

For the past 20+ years we have been on a journey in application development, moving from large, monolithic stovepipes to decomposed, distributed services.  Along the way we've identified, tried and discarded more approaches than any one person can possibly remember.  Over time we evolved to the point where we differentiated between back-end enterprise services, such as billing or scheduling, and the user interface.  That's the point at which the whirlwind began and software development exploded, along with the democratization of technology.  

We are now entering a new age where our success will drive us to shed the last vestiges of "traditional" software development, and in the process make applications truly indispensable to business success.  We are entering the era of disposable software.

The days of building applications are over; we're done, or we'd better be very soon.  Business executives have lost their appetite for large software projects spanning multiple time zones, with timelines measured in months and budgets measured in millions of dollars.  Emboldened by stories of success, businesses have advanced software from a nice-to-have to a core requirement, from a solution you go somewhere to use to a solution you use wherever you are.  Software is now expected to work when needed, where needed, and to be no more difficult or expensive to implement than the often-repeated story of Flappy Bird.  

If you're not among those on the inside of this transformation, look to the mobile app world for inspiration.  Apps are built by leveraging pre-engineered services and frameworks.  Agile is the approach because it accommodates imperfect requirements and short windows of opportunity.  The leaders in this space have made software so invaluable it's become disposable.  First, enabled by cloud SaaS services, open source, and the ability to rapidly develop new solutions, business users can no longer be held captive by their software solution.  The pace of business and rate of change require solution owners to seek solutions which maximize agility, efficiency and elasticity; any software unable to meet these requirements will increasingly be disposed of, not updated.  Second, the line separating the apps we use in our personal lives and those at work is blurring rapidly.  Just as the public is always on the lookout for newer, better, more innovative apps, so are business users.  

The reality of disposable software requires us to look differently at how we manage our development teams and budgets.  We need the back-end services, plus several supporting services, to be in place and accessible across a distributed infrastructure.  We need frameworks which minimize the time spent in foundation work so our end customer sees the majority of the value of their spend.  Finally, we need to rethink how we hire, train and incent our developers so they focus on collaboration, communication and reuse rather than viewing software as their magnum opus.  Of course, wrapping all of this together highlights the importance of DevOps: the glue which makes this approach work.

I know many experts bristle at my advice.  They liken modern software development to "hacking" and deride it as nothing more than shoddy software engineering.  These are the very people who will be waving from the side of the track as this train roars through their careers.