Tuesday, October 23, 2012

The Consolidation of Personal Technology

How many pockets do you have? I find that on a daily basis this limitation, more than any other, determines which devices I bring with me when I travel. I have an iPhone for work and an Android for home, an iPad for easy stuff like blogging and movies, and two laptops (one work, one home) for the heavy lifting. The end result is often pockets stuffed with stuff along with a backpack that is slowly turning me into a hunchback. Cables and cords, power bricks and cases. I want consolidation, but I don't want to lose capabilities. There is an answer, but so far no solutions.

A smartphone is the no-brainer minimalist choice when you have one pocket or none. With technologies such as dual persona, the need to carry two devices will fade away quickly. It's a juggernaut that every Fortune 500 company is reacting to and enabling in some form, either today or planned for tomorrow. Add to this the electronic wallet, car keys, and even house keys, all of which are being app-enabled and will become increasingly common over the next three years. Finally, with all of this moving off the device and into the cloud, along with media and documents, there is little doubt the smartphone becomes the one-stop access shop in our lives. Oh, but for that small screen...

Enter the tablet, clearly the next step up from the smartphone. In many cases the only difference between the smartphone and the tablet is the screen size (especially with the ability to Skype and make VoIP calls from the tablet). With the touch screen the need for a mouse disappears, leaving the clunky on-screen "hunt and peck" keyboard, device connectivity (USB, card readers, eSATA, HDMI, etc.), application familiarity, and speed as the only deterrents to replacing the PC. However, improved near field communication and the expanding array of connected devices will make the connectivity issue disappear; devices will go wireless. To resolve the keyboard conundrum there are tablet cases with integrated Bluetooth keyboards that are as well designed as the best desktop keyboards (excluding ergonomics). Like the smartphone, the power of the tablet comes from its connectivity to the cloud. That leaves only two unaddressed issues: traditional applications and speed.

Change is challenging. No matter how open our minds are, we are the composite of our experiences, and what we have done is often more comfortable than what we are being asked to do. People largely want the applications to which they have become accustomed, which in turn perpetuates the buy-upgrade-replace model of PCs. What has kept many of these applications from transitioning directly into the tablet world is bloat. The applications were built and grew in a world where the resources of the PC were greater than the needs of the application. With that headroom available, application developers implemented more features and functions, often of limited or dubious value, to differentiate their solutions in the market. The resources required by many desktop applications today far exceed the capabilities of the tablet. As a result, tablet developers have reinvented the application. Called "apps," these lightweight applications are designed to the constraints of the tablet device. New innovations have entered the market faster as a result, such as cloud storage and sharing, and the number of people for whom apps are good enough is large and growing. However, there will always be a segment of the population who, by need or by choice, will require a PC.

Today when people think of a PC they think of a monitor, keyboard, mouse, and box/processor/CPU/computer (that thing to which everything connects). In our bold new world the PC is no longer a physical set of devices. The tablet can be the monitor, the mouse, and the keyboard, with either the on-screen version or an external device. But what about that box? Enter the cloud desktop.

There are very few options available in the market today for consumers, which is interesting given the growth of virtual desktops in the cloud for the enterprise. The concept is simple: using cloud infrastructure, provide the user a full PC desktop, from the operating system upward, accessible via the tablet. To the end user it's a PC; install and execute applications just like on the old physical PC. For the majority of PC owners it's all you'll ever need until the tablet becomes the full-time replacement. The benefits are tremendous:
1. monthly expense well below the monthly cost of purchasing a desktop of equal capability
2. no need to upgrade
3. the user owns the data and can take it and discontinue the service at any time
4. improved security
5. no exposure to hardware or operating system issues
6. no operating system updates

In addition, some solutions include applications such as Microsoft Office. One option, limited but worth checking out, is OnLive Desktop.

I was asked my opinion on an employer adopting a BYOD approach for laptops, where the employee provides the device. As someone who travels a lot, the last thing I want is to provide the laptop and take on all the cost and risk: purchase, maintenance, theft, breakage. And this after the recent 60 Minutes report on TSA theft? No thanks! What I would prefer is a virtual desktop accessible via my tablet with a portable keyboard. Now that's a powerful combination, even for someone who does quite a bit of content creation. I write this blog now on my tablet using just the soft keyboard.

Consolidating to the smartphone and tablet, powered by a cloud which provides all the features and functions needed, establishes a platform for future consolidation. In the future we'll be adding the set-top box, security system, game console, medical device and vehicle to the cloud. One cloud, many uses, accessed by the method of your choice.

I'm looking forward to it!

Friday, September 28, 2012

The Cloud Marketplace - All Resources Welcome

There is a singularity in the future of cloud which I believe has guided its evolution since server virtualization became a topic in the early 2000s. However, I only realized today that not everyone sees this inevitability. Cloud is about agility, efficiency, and elasticity, yet the low-hanging fruit, cost savings, has pushed a by-product of the transformation to the forefront as its value proposition. This in turn has focused our attention on the infrastructure side of cloud, largely ignoring the original value proposition: global collaborative applications, composable from services and available anywhere, anytime, on any device. So today we see cloud as an enabler of new models for infrastructure consumption, matching hardware needs to cost: on-premise, off-premise, private, public, hybrid, IaaS, and even PaaS and SaaS. The product mix is typically viewed as mutually exclusive, however in the optimal cloud it's not about one or the other; it's about having all of them. Only by maximizing our consumption options can we tailor available resources to our needs. I believe this road leads to one end state: an automated, autonomous cloud resource marketplace.

What on earth is that?

I admit I jumped to the punch line, so let me make my case in a more linear fashion starting with an easier topic: the automobile. We use autos every day to achieve many of our objectives: getting to work, picking up children, traveling to a client. At home we may own a car, but when away from home we often have the same needs. And there are many who don't even own a car but need one from time to time. It's no surprise businesses have grown to meet these needs, whether it's using a car for a few minutes (taxi), a few hours (asset shares like Zipcar), a few days (car rental), a few years (lease), or forever (purchase). The need for the resource drives the consumption model at any given time. That need is composed of a variety of factors that are contextually dependent, such as where, when, for what purpose, for how long, and with whom. We balance these needs through prioritization and look to the market to provide the solution: a bullet-proof limo for the President, a rented minivan for the family on vacation, a leased truck to deliver furniture, and a taxi cab to run downtown for a meeting.

We need the same market options in technology, but so far there have been serious limitations. What companies need is the ability to benefit from technology without having to buy it and become experts in it. However, due to the limitations of the market, most companies have chosen to own and operate their technology. As a result, the value of each dollar invested in IT is diluted by the cost of moving up the knowledge curve, which does not lead directly to improved revenue or reduced costs. The first big step away from own-and-operate was mainframe time sharing, which quickly led to outsourcing, a model of own and let someone else operate. It's such a small step there really isn't an equivalent in the more general market, which probably explains all the pain behind outsourcing. Furthering the adoption of alternative models, we've had some success with co-location (lease model), Application Service Providers and now Software as a Service (rental model), and we're nibbling at the shared asset model with public IaaS. However, large scale adoption of a shared asset model is years away, and there is no existing path to the taxi cab model due to issues of security, regulatory compliance, trust, and a healthy dose of fear, uncertainty and doubt (FUD).

Today we are nearing the peak of the hill, and it's time to start planning for what happens when we crest.  Our journey into cloud isn't about adopting new models and leaving old ones behind.  Cloud is about doing it all; the new and the old.  Top to bottom.  It's about the right resources at the right time in the right place.  I believe a real cloud can deliver any kind of resource required, from a mainframe processor to a heat monitor's data. However, getting to this ubiquitous cloud requires a fundamental shift.

As an infrastructure technology today, the vision for cloud is, at best, cloudy.  The entire purpose of what we call Information Technology is to execute applications.  To an application everything is a resource.  Therefore it follows that from an application point of view, a cloud is all about the marshaling of resources (compute, memory, data, algorithm, bandwidth, security, etc.).  There is no one-size-fits-all answer, and as our applications become more user-centric they will exhibit unique tendencies at the user level, requiring micro-levels of optimization.  Hence the need for all resource types and consumption models, which enables cloud solutions to leverage the best combination of resources available at any given moment.  Continuous optimization becomes a scanning routine, constantly looking for better resources or a change in the parameters.  Inherent in this approach are all the benefits of cloud we know and love: scale-up/scale-down, pay as you go, no long term contract, etc.  For lack of a better term I call this the Cloud Marketplace. (NOTE: I am advocating the marketplace as an architectural pattern which would, by nature, enable the cascading of marketplaces to maximize the resource and consumption model pools.)

Using the car analogy this approach would allow us to use the best vehicle possible for each task (go to work, visit a client, pick up the new widescreen TV, go out to dinner) rather than compromise by trying to fit all the uses to a single model.  There is no convenient, economic model today that can get us a hybrid to get to work, a town car for the client visit, a delivery truck for picking up the TV, and a sports car for a night out. Luckily in the virtual world the only barriers are those we create.

A cloud marketplace would be governed by a set of business attributes (time to market, capex sensitivity, flexibility, self service, service quality, criticality, compliance, IP protection, and continuity) and technical attributes (cost, location, availability, execution speed, latency and bandwidth, integration, reliability, security, scalability).  Each application would include in its profile a predefined, prioritized list of attributes to be used by the marketplace.  A well designed cloud app has no dependency on the infrastructure, so the marketplace would use the attributes to marshal the necessary resources via the orchestration and provisioning systems to get the compute, memory, bandwidth, and other resources required. When the parameters change (time to market drops from high to medium) the system would look to modify the resource pool (move from $0.08/CPU/hr to $0.05/CPU/hr compute) to comply.  The power of a marketplace model is in its inherent organization and optimization.  Resources can be ranked by value based on real usage patterns, providing clarity in capacity decisions.
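
To make the marketplace concrete, here is a minimal sketch (in PHP, since that's what I write) of how a scanning routine might rank candidate offers against an application's prioritized attribute profile. The attribute weights, provider names, and prices are illustrative, not real quotes.

```php
<?php
// Sketch: attribute-driven resource selection for a cloud marketplace.
// Attribute weights, providers, and prices are illustrative only.

// An application's profile: prioritized attributes, higher weight = more important.
$appProfile = array(
    'cost'        => 5,  // $/CPU/hr, lower is better
    'latency_ms'  => 3,  // lower is better
    'reliability' => 4,  // 0..1, higher is better
);

// Offers currently advertised by providers in the marketplace.
$offers = array(
    array('provider' => 'ProviderA', 'cost' => 0.08, 'latency_ms' => 20, 'reliability' => 0.999),
    array('provider' => 'ProviderB', 'cost' => 0.05, 'latency_ms' => 45, 'reliability' => 0.990),
    array('provider' => 'ProviderC', 'cost' => 0.11, 'latency_ms' => 10, 'reliability' => 0.9999),
);

// Crude weighted score: invert the "lower is better" attributes so bigger is always better.
function scoreOffer(array $offer, array $profile)
{
    return $profile['cost']        * (1.0 / $offer['cost'])
         + $profile['latency_ms']  * (1.0 / $offer['latency_ms'])
         + $profile['reliability'] * $offer['reliability'];
}

// The scanning routine: re-rank the offers whenever they, or the profile, change.
usort($offers, function ($a, $b) use ($appProfile) {
    $diff = scoreOffer($b, $appProfile) - scoreOffer($a, $appProfile);
    return ($diff > 0) ? 1 : (($diff < 0) ? -1 : 0);   // descending by score
});

echo "Best match right now: " . $offers[0]['provider'] . PHP_EOL;
```

When the profile changes (say, the cost weight jumps because time to market dropped to medium), rerunning the same routine simply surfaces a different provider; that is the whole point of the pattern.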

No Fortune 500 company is going to rush out and close down data centers, throwing away billions of dollars in investment, to adopt this pattern.  And they shouldn't.  The concept naturally lends itself to stepwise adoption.  Once we have applications unbound from resources (virtualized in some way) and able to operate across more than one set of resources, we have a need to manage and optimize.  The marketplace can be built over time with self-identifying resources enabling plug-and-play consumption models.  The more consumption models it supports, the more resources are made available for use.  Once we have a marketplace we can expand our thinking on what constitutes a resource.  Imagine a future where data is collected and stored at the endpoints, where compute power and connectivity are ubiquitous. Taking advantage of these resources in a marketplace is simple - just define their parameters and make them available.  Without the marketplace concept the risk is likely too high to consider.

To get to this point we have several challenges.  First we need to eliminate dependencies.  Dependencies between resources.  Dependencies between applications and resources.  The fewer dependencies the better the marketplace operates.  Second, we need better security tools able to define, execute, and audit policies throughout the execution cycle of an application.  Third we need application management tools that are non-intrusive to the application which in turn requires a new standard for communicating application state.  Fourth we need to address data management and security in a federated model.

Making such a model work requires a level of insight and automation well beyond where we are today. No such marketplace exists, and I'm sure it's not on any product roadmap. However, I'm convinced it's the vision we need to work toward. Cloud is a global collaboration platform, nothing more but also nothing less. The power of cloud is delivered by the applications built to take advantage of the platform: Social Business, Real Time Analytics (aka Big Data), Mobility, and Collaborative Communications. The reason to adopt cloud is to bring our connected world to the next level: the collaborative world. Doing so requires a platform that delivers continuous availability, unlimited scalability, and optimal efficiency at all times. To do this, cloud needs options and the ability to match need to resource. Anything less isn't a cloud; it's just another silo.

Tuesday, September 18, 2012

Where is Disney Going with Mobile Technology

In 2001, while working for a Big 5 consulting firm, I was involved in a project to update Disney's content management.  At the time my firm did tens of millions of dollars of consulting work each year for the Walt Disney Company.  As one of the lead architects for our eBusiness team I was asked to sketch out a view of how the Disney Company could take advantage of mobile and web technologies to improve the guest experience at their theme parks worldwide.  As a result I developed ten ideas which we presented to Disney leadership.  Unfortunately SarBox came along, and as Disney's auditor we were suddenly no longer able to provide consulting services, to avoid any perception of a conflict of interest.  The good news is the ideas did not die and in fact have lived on in the form of Disney's NexGen project.

Although I have had no involvement in NexGen, I have to believe either my ideas were shared with the team by Disney leadership as their view of the future, or it's another example of convergence of thought, such as when Alexander Graham Bell and Elisha Gray both invented the telephone and filed for patents just hours apart.  However, I am confident that my strategy document predates anything from NexGen.

Here are the ideas that as of now have been implemented through the web and mobile devices:
1. On-line reservations
2. On-line itinerary for trip planning
3. Robust electronic guide map with estimated queue times and parade times (now available as the My Disney Experience app)
4. Games to be played in line during longer waits
5. Location based themed scavenger hunt

The ideas not implemented
1. Disney.com email addresses for the public (who wouldn't want one?)
2. Electronic diary including drop-in professional photos and GPS location. Data would be uploaded to an online vacation portal (an extension of the on-line itinerary) with photo albums by visit, including user-uploaded photos, a user diary, and a complete GPS-based walk-through of the park for every visit (now this could be done via a Disney version of Facebook).  Each album would be available in hard copy (sponsored by Kodak) showing a map based on the actual GPS route followed, mixing in user-uploaded and professional images.  There is a very simplified version of this available, but only for Disney PhotoPass pictures.
3. Location-based dining reservations, whereby table availability and reservations are managed by GPS location.  If a person is too far away to make the reservation on time, they can confirm or cancel, and others nearby can be pinged with an open-table offer.  The intent was to decrease the impact of voided reservations.
4. Bluetooth wallet for FastPasses.  One person could carry the FastPasses for an entire group so there would be one check-in by one person.  It could also hold room keys and park tickets.  This idea could use RFID as a cheaper option, however the interactivity would be reduced.
5. Reservation-based attraction access (FastPass 2.0).  The idea was to let people check into a ride via Bluetooth (now it could be done with a FourSquare app), at which point they would be given an estimated ride time.  They would then be able to explore a given area while the system optimized the lines.  Once a firm time was set, the person would be notified when to enter the line.  Similar to FastPass, this model would completely do away with traditional queues.  When you entered the line your wait would be 10-15 minutes every time.  You could check in to multiple rides at a time, and if conflicts occurred the system would figure out a coordinated plan of attack.

Realizing half of the ideas have been implemented, I'm starting to wonder if perhaps the second set will be.  Perhaps not idea #1.  I doubt Disney will give away Disney.com email addresses at this point.  I think it would have been a smash hit had they done it a decade ago, but that ship has sailed.  In fact I'm still amazed Disney stuck with go.com for all their addresses, as if go.com means anything to anyone anymore (in fact it never really did).  I think idea #2 will happen whether Disney is involved or not; I've thought of developing the idea myself.  I think Disney would benefit greatly from idea #3, which would enable them to get rid of their new solution of charging a no-show fee on voided reservations.  Idea #4 should happen in some form to make managing FastPasses easier for people, and groups in particular, and it sounds like at least some part of it is happening with FastPass+.  My favorite idea, after #1, is #5.  Nobody likes waiting in line.  The guests don't like to stand around and move like cattle.  Disney management doesn't like having their guests waiting in line when they could be buying more trinkets or consuming more food and beverage.

It's good to see a great company like Disney moving in the right direction with technology.  Properly applied mobility solutions can greatly enhance the guest experience.  Disney theme parks have never been fast movers.  Look at how long it takes between the emergence of a new character and any attraction based on that character.  I hope they are thinking along the lines of my next generation ideas:

1. attraction enhancement - letting users select the back-story, storyline soundtrack, images, etc. of an attraction or interact with the attraction via a mobile device or RFID.
2. unencumbered views - stream parades, fireworks, concerts and special events so people with limited viewing access can enjoy them as well, even if they are on the other side of the park.
3. virtual wallet - no brainer! extends the bluetooth wallet to include credit cards too.
4. premium content channels - streaming of movies, games, etc. for a per day fee to keep children engaged.
5. virtual tours - providing content rich, location and context aware tours on a self-paced basis for a per day fee

The future will tell.

UPDATE 04-16-2017:  I realized I need to give Disney credit for additional developments.  The MagicBands use RFID technology to act as a virtual wallet for park tickets, FastPasses, payment, and room keys.

Tuesday, September 11, 2012

Ten Cloud Native App Rules and Lessons Learned

To my way of thinking, cloud native applications are how we should have been building applications all along. My belief is rooted in how I think apps should be architected, which is not an opinion shared by all; several people I respect disagree with me. After several years of architecting clouds and enterprise applications, and now having built bepublicloud.com, my own cloud native application, I'm convinced I'm right.

My rules for cloud native application development are:

1. DO NOT manage the cloud within the application. Application management is a separate domain and should be kept separate. If you cannot use one of the many existing tools to manage your application then build a management app, but do not consolidate it with the app.

2. Believe in the power of encapsulation: trapping the complexity of each layer within that layer and exposing only the simplest interface required to the layer above. Think low coupling and high cohesion. It's a great design principle for a reason! Plan for every service you use (and you will use some) to be replaced. Encapsulate them so they can be swapped out easily and painlessly; a sketch of this pattern follows the list.

3. Start with a multi-vendor deployment, don't just plan for it. There are lots of little things you need to learn about each cloud vendor. Some provide robust APIs, some use third parties. Some have great support, some have next to none. Some have great management portals, tools, and/or APIs, while others offer the bare minimum. And price is NOT an accurate indicator.

4. Test the speed and error conditions of cloud services before you use them; you might be surprised. I make calls to several third-party cloud services via REST within bepublicloud. I was surprised at the speed of most, while one proved so unreliable due to the vendor's growth issues that I couldn't risk putting it into production.

5. Expect the hardware and services to fail. The focus needs to be on continuous availability, albeit with degraded service when necessary. Netflix has the right model, with multiple zones within each Amazon region, and multiple regions, so a user's access and experience will NEVER be offline.

6. Understand the long-term ramifications if you decide to use vendor-specific, proprietary services. As soon as a cloud application begins making calls to the hardware layer, whether to provision resources or provide a status, by definition you break cohesion and amplify coupling. In the real world the result is an application bound to a particular cloud. I believe the application should be portable across clouds. The upside is vendor independence. The downside is a whole lot more work building software services you could otherwise buy (email distribution, queue management, site search, payment processing, etc.).

7. Not all servers are equal in the cloud. Cloud server pricing is predicated on overprovisioning, selling more slices of a box than there really are. Most applications are input/output bound and waiting for instructions as users are figuring out what they want. Amazon differentiates with High-CPU and High-Memory instances, but not everyone does. Performance is the line of demarcation, so research by deploying apps on multiple instance types to get an idea of the ideal target environment. This is true of both public and private clouds.

8. Adopt a strong source code management solution that supports multiple users with remote capabilities.  I have settled on Git after trying Mercurial and a few others.  Git is by far the easiest to learn, the most commonly used, and the most robust.  I can manage my codebase with ease, get friends to pitch in from time to time, and keep everything sane.

9. An agile approach works best leveraging working prototypes, continuous builds, and a focus on the user experience first.

10. Automated testing is paramount.  Spend the time and money where it counts, on the user interface and feature/functionality.  The heavy lifting (80-90%) of testing previous features with each new version (regression testing) can be done through automation.
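
As promised in rule 2, here is a minimal sketch of the encapsulation pattern. The class and method names are hypothetical, and the vendor-specific calls are stubbed out rather than real SDK code.

```php
<?php
// Sketch of rule 2: trap each provider's complexity behind the simplest interface
// the app needs. Class and method names are hypothetical; the vendor-specific
// calls are stubs, not real SDK code.

interface ObjectStore
{
    public function put($container, $name, $data);
    public function get($container, $name);
}

// One adapter per provider; every vendor-specific detail stays trapped in here.
class AmazonS3Store implements ObjectStore
{
    public function put($container, $name, $data) { /* vendor REST/SDK call */ }
    public function get($container, $name)        { /* vendor REST/SDK call */ }
}

class RackspaceCloudFilesStore implements ObjectStore
{
    public function put($container, $name, $data) { /* vendor REST/SDK call */ }
    public function get($container, $name)        { /* vendor REST/SDK call */ }
}

// The application only ever talks to the interface, so swapping providers is a
// configuration change, not a rewrite.
function saveUserFile(ObjectStore $store, $name, $data)
{
    $store->put('user-files', $name, $data);
}

saveUserFile(new RackspaceCloudFilesStore(), 'photo.jpg', 'raw file bytes here');
```

The same adapter shape works for email, queues, and payment services; the vendor churn gets absorbed inside the adapter instead of rippling through the application.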

In building bepublicloud.com, the code base is PHP running on Ubuntu on Amazon EC2. The storage services come from Amazon S3, Rackspace Cloud Files, Microsoft Azure Blob, and Nirvanix (although the last is not priced competitively and is therefore disabled for now). In the few, isolated cases where for development speed I adopted Amazon services, such as their email service and queue service, the functions are encapsulated and optional to program execution. Sure, the application would be degraded, however users would still have the access they need. As a result bepublicloud's code can run on any IaaS cloud, and I even have my choice of PaaS clouds which support PHP should I go that route. My design criterion was simple - no single point of failure in a world of failure. I trap every exception I can with recovery routines and robust log messages so I can manage the app without coupling to a provider's platform.
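
Roughly, the degraded-mode pattern looks like the simplified sketch below; the notification endpoint is made up and stands in for the real optional vendor calls.

```php
<?php
// Sketch: an optional third-party call is timed, bounded, and logged, and the
// application carries on in a degraded mode when the service fails or is too slow.
// The endpoint URL and the notification service itself are made up for illustration.

function sendNotification($message)
{
    $start = microtime(true);

    $ch = curl_init('https://notifications.example.com/send');    // hypothetical endpoint
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 3);                          // never block the user on a vendor
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, array('msg' => $message));
    $response = curl_exec($ch);
    $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    $elapsed = microtime(true) - $start;
    error_log(sprintf('notification service: HTTP %d in %.3fs', $status, $elapsed));

    // Degrade instead of dying: the caller treats a false return as "feature skipped".
    return ($response !== false && $status < 400);
}

if (!sendNotification('Your upload is complete')) {
    // No notification went out, but the upload itself still succeeded.
}
```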

As a new, unmarketed service, bepublicloud has very little traffic today, but I'm preparing for success just in case. My next step is to use memcached for session data so sessions are preserved as servers are added and removed. I am ready, when needed, to move to a cloud MySQL instance to give me rapid scalability. I won't use one service, but at least two.
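
For reference, pointing PHP's session handling at memcached is mostly configuration. A sketch, assuming the pecl memcached extension is installed; the host names are placeholders.

```php
<?php
// Sketch: keep PHP sessions in memcached so any web server in the pool can serve
// any user. Assumes the pecl "memcached" extension is installed; hosts are placeholders.

ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'cache1.internal:11211,cache2.internal:11211');

session_start();

// Sessions now survive web servers being added to or removed from the load balancer,
// because no session data lives on the local disk of any single server.
$_SESSION['user_id'] = 42;
```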

I am now looking at virtualization on top of virtualization. To get the full power benefits, is it possible to deploy Linux containers on top of a large Amazon instance and then divide up the resources as I need them? A large instance is four times the size of a small instance at four times the cost; however, it also has more native storage, is EBS-optimized, and has high I/O.

There are plenty of use cases where an application is appropriately built on a Platform, especially applications with limited relevancy (specialized applications) or limited time (marketing).  In those cases time to market trumps vendor independence.

Like all software development, the rules are really guidelines.  There are always exceptions.

Sunday, September 2, 2012

The Cloud and the Pendulum

The single greatest challenge with cloud computing is the sheer breadth of knowledge required to understand it. Servers, storage, networks, operating systems, application architecture, data architecture, integration, security, identity management, and on and on. And those are just the tip of the technical domains. Although today we see plenty of talk about cloud, the reality is almost all "private clouds" are really just mass virtualization.  Very little has been done in automation and almost nothing with the applications, so calling these environments clouds is misleading at best.  So why are we stuck at the crossroads of virtualization and cloud computing?

I attribute the stall-out to the fact that so few people are able to get their arms around the concepts of cloud and get comfortable. We have seen the entrenchment of fear, uncertainty, and doubt across corporate America. I agree that with new technologies a prudent approach is to move slowly and deliberately, making sure risks are mitigated and lessons are learned. What's confusing about cloud is that the technology isn't new!  Not a single piece of new technology has been invented for cloud. Cloud is the natural intersection of solving three pressing IT issues of the 1990's: a global corporate footprint, low server and storage utilization, and efficient code reuse. In fact the average IT department got pretty good at dealing with these challenges in the early 2000's, so why the cloud conundrum? A failure to see the forest for the trees.

To me the focus on the trees is the predictable result of taking a specialist approach. Solving problems in engineering typically starts with functional decomposition: breaking the problem down into smaller problems which are easier to solve, and whose solutions can be combined to ultimately solve the original problem. What we forget is that the solutions must be guided by a vision and strategy, lest we choose a solution incompatible with our desired state. Developing this vision and strategy requires a complementary set of skills, the breadth of expertise to see the forest.

Virtualization has exploded because it's narrowly focused.  The only teams that are truly involved in virtualization, who need to understand it at the implementation level, are the data center systems engineers.  They have added a new skill set in designing, implementing, and managing these virtualized environments composed of networks, servers, and storage.  However, very little imaginative thinking occurs within this domain because, to limit risk, most companies are aligned with proprietary vendor solutions.  And even less imaginative thinking occurs outside the data center infrastructure side because people don't see the applicability.  Cloud today is relegated in the Fortune 500 to data center consolidation, yet the value proposition of cloud is, and always has been, that of a global collaboration platform.

What we lack in quantity today are the broad-minded experts who can paint the picture of the future, focusing on the art of the possible. As everyone has rushed to specialize, a rare breed of professional has emerged in IT, often but not always coming from the ranks of the enterprise architects. This knowledge worker has been exposed to every element of the technology and business stacks. Jacks of all trades and masters of some, at some point in their career, by accident more often than intent, they had to learn the job next to them rather than the one above, then the next job over, then the next. What has emerged is a well-rounded individual with the capacity to see the world from multiple points of view. Those whose areas of knowledge include hardware and software at their core, rounded by additional knowledge in security, networks, and data, tend to be the cloud experts. Tilt the balance toward software and data and you get Big Data experts, another broad technology. Tilt again toward software, mobile devices, and networks and you get Mobility experts. Rebalance on software and user experience and the expertise is Collaboration and Social Media.

Companies need to identify their experts and incentivize others to choose a path they might not otherwise. There is a dearth of talent in each of the most discussed technology domains because all are multi-disciplinary in nature. Specialists are still needed where the rubber hits the road, but as companies continue to outsource non-core capabilities, which increasingly includes data center operations, more of those specialists will work for solution providers.

Cloud impacts everything from finance, accounting and tax to HR, legal, and marketing. But nowhere is the impact greater than IT where the very composition of the talent pool has to change and change rapidly. Generalists will pave the preferred path to glory for the next generation of IT talent. And as everything is cyclical, certainly the pendulum will swing the other way just as we think we have it all figured out.



Friday, August 10, 2012

PaaS: Battleground of the Future and its Dirty Little Secret

I believe most will agree that today in cloud, IaaS is the battleground.  Stalwarts Amazon, Rackspace, Microsoft, and Terremark have been joined by AT&T and now even Google.  The key challenge in the IaaS world is getting public companies to trust the concept of public cloud, including its multitenant model.  Without multitenancy the public cloud economic model falls apart.  Many don't realize this is why storage in the cloud is more expensive than compute power; I can overprovision and sell the same compute power to multiple users because most applications sit waiting for user input, but in storage a bit stored is a bit stored, and I can't store anything else in that location until that bit is no longer needed.  So although public cloud has a compelling economic model, security concerns have relegated it to the background.  Some of the walls are slowly coming down: dev/test environments, marketing web sites, web content storage.  However, few companies are running production enterprise applications in a public cloud.  In fact Google's move into IaaS lends credence to the corporate argument by trying to give companies a lower-level sense of Google's abilities.  The argument goes: if I can run a server for you for your software, shouldn't I be able to take that same server and have you build your own application on top?  It's a sound argument.

What's interesting is that IaaS is the focus today because cloud companies need to generate revenue, but the real battleground of the future is PaaS.  The competition pits the status quo, including private clouds, against Private PaaS and Public PaaS.  Today companies seem to be happy with standard applications running in virtualized environments.  However, at some point the needle will move due to the compelling arguments of cloud, and companies will want to build native cloud applications.  When they make this shift, companies will need a cloud platform on which to build this software.  Microsoft focused on this early but hasn't gained many new developers from its cloud offerings, instead migrating more of its existing .NET-experienced crowd to the cloud.

Cloud application development requires a new approach to applications.  It is expected that backend systems will be service-enabled and preferably accessible via RESTful web services.   A cloud application is the combination of a UI built in HTML5, CSS3, and the ubiquitous but annoying JavaScript, and a business logic layer written in PHP, Python, Ruby, Scala, Rebol, or several other languages.  Yes, Java and .NET are also used, but the vast majority of solutions in my experience, from Facebook and Twitter to eBay and Netflix, use server-side scripting languages simply because they work.  Cloud applications are built on the concept of reliable software on unreliable hardware, where the application takes responsibility for ensuring its own survival.  Properly written applications test for error conditions and identify ways to continue operation rather than simply ending with those confusing exception dumps on poorly formatted web pages.
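
As a simplified illustration of that last point, a PHP endpoint behind an HTML5/JavaScript UI can catch its own failures and answer with a clean JSON payload instead of a stack trace. The order-lookup function below is a stand-in for a real backend call.

```php
<?php
// Sketch: a business-logic endpoint that reports failure as data the UI can render,
// instead of dumping an exception onto the page. The order lookup is a stand-in.

header('Content-Type: application/json');

function lookupOrder($orderId)
{
    // A real implementation would call a RESTful backend service here.
    if (!is_numeric($orderId)) {
        throw new InvalidArgumentException('Order id must be numeric');
    }
    return array('id' => (int) $orderId, 'status' => 'shipped');
}

try {
    $orderId = isset($_GET['order']) ? $_GET['order'] : null;
    echo json_encode(array('ok' => true, 'order' => lookupOrder($orderId)));
} catch (Exception $e) {
    // Log the detail for the operator; give the JavaScript something graceful to show.
    error_log('order lookup failed: ' . $e->getMessage());
    http_response_code(500);
    echo json_encode(array('ok' => false, 'error' => 'Order lookup is temporarily unavailable'));
}
```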

Platforms for cloud provide several key foundation elements for building cloud-enabled applications.  First and foremost they provide the underlying infrastructure so the application developer doesn't need to know or worry about the compute and storage resources.  Second, the platforms provide developer tools to expedite development.  In the case of Amazon they provide several services that are common across applications: sending email, queues, payment processing, SQL databases, and others.  Third, the platforms deal with scaling the resources up and down to meet the needs of the applications based on their traffic.  Implicit in this is that the platform must, by necessity, limit what developers can access.

There are vendors going after Private PaaS (deployed to an on-premise private cloud) such as Apprenda, Stackato, LongJump, GigaSpaces, and CloudSoft.  There are likewise many vendors with Public PaaS offerings.  However, everyone agrees the future of cloud is hybrid, so the only possible winners will be those who can take their existing solution and extend it to run on the other side somehow.  Private PaaS vendors will have to engage cloud solution providers to run their platforms, which is difficult considering the majority of cloud solution providers have their own PaaS entries.  Vice versa, the Public PaaS vendors haven't built their solutions as commercial software ready to be installed in a private cloud.  Obviously the two sides will need to work together, and the ones with the best "seamless" story will be in the best position to win.

What this means to the enterprise is: do your research and pick a platform based on the future, not today, or risk lock-in.  The dirty little secret of PaaS is that all the platforms are proprietary.  I hope CloudStack or OpenStack steps up with a universal option, but today none exists.  So whichever direction is chosen, remember the first priority is a platform on which you can build and run applications seamlessly across on-premise and off-premise clouds, that meets your development needs, and on which you are comfortable building a generation of software, because once you start down the road, changing roads means starting over.

Monday, July 16, 2012

Automation Is On the Path to Cloud


It's interesting to me how much the manufacturing world depends on automation today, yet how much the majority of IT leaders fear it. Take a tour of a brewery, auto assembly line, or computer assembler and you'll see significantly more machines than people.  In college, working for Allen-Bradley and then Eaton Cutler-Hammer, which specialized in plant floor control systems, I was introduced early to two key elements of cloud: distributed systems and automation. Making a trailing arm for a GM "G" car required an amazing quantity of knowledge, data collection, and computing power. The liquid aluminum had to be the right temperature and pumped magnetically into the die with the proper force, the clamping force on the die had to be within a narrow band to prevent molten aluminum from squeezing out the sides, the cooling system had to drop the temperature rapidly and uniformly to ensure the compressed crystalline structure of the metal was correct and no longer dynamic, and the die had to be cooled rapidly while being lubricated to prevent the next injection of aluminum from sticking to the hardened steel. There were natural variables that impacted quality and also had to be measured but could not be controlled so easily: humidity, ambient temperature, seismic events.  The challenge was to sense it all and manage it effectively to create parts within tight tolerances by the thousands per day.  Everything was automated to the push of one button.

Automation is an extremely important step in moving from mass virtualization to cloud infrastructure as a service.  At the scale and speed of cloud, manual tasks just are not an option. It is amazing how many tools offer automation capabilities and how few companies take advantage of them. I know of story after story after story where companies were unable to meet demand spikes, traffic spikes, and even threats because they had always relied on human intervention, then hit that out-of-band situation where there weren't enough humans who could work quickly enough to prevent the problem.

The value of automation is rarely argued. Everyone wants it, everyone talks about it, and it's often a requirement of a solution. So why isn't it configured or turned on?

I think the answer lies in three areas.  First, few people document what they do.  Automating a process is, by necessity, first a process engineering task. If you can't define what happens, when, why, the inputs, the outputs, the error conditions, and the expected results, then you can't automate it.  And of course you need a period of "test and adjustment" where the automation is phased in under the control of a human to ensure out-of-bounds conditions are tested, safeties trigger appropriately, and the system always returns to a safe state.  Second, many at the technologist level are very good at protecting their jobs and have tremendous fear of letting go of anything they perceive as "value added," regardless of how much, or how little, value is actually delivered.  Third, there is an inherent distrust of the unknown. Comfort with automation comes with exposure and experience.  So few IT data center operations are automated that it feels out of character to automate something. Putting together the list of automated tasks, it basically looks like something out of the 1990's: patches (sometimes), backups, spam filtering, health monitoring, and...and...well, that's about the list. Sure, some go further, but when I say automated I mean ZERO human interaction.
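
To make that "test and adjustment" phase concrete, here is a sketch of phased-in automation: a documented trigger, a safety limit, and a dry-run switch a human controls until the automation has earned trust. The thresholds and the remediation action are invented for illustration.

```php
<?php
// Sketch: phased-in automation with a documented trigger, a safety limit, and a
// dry-run switch a human controls. Thresholds and the action are invented.

$dryRun        = true;   // phase one: log what would happen and let a human verify
$queueDepth    = 12000;  // input: read from monitoring
$restartLimit  = 3;      // safety: never act more than N times per hour
$restartsSoFar = 1;

if ($queueDepth > 10000) {                 // the documented trigger condition
    if ($restartsSoFar >= $restartLimit) {
        error_log('SAFETY: restart limit reached, paging a human instead of acting');
    } elseif ($dryRun) {
        error_log('DRY RUN: would restart the worker pool now');
    } else {
        error_log('Restarting the worker pool');
        // exec('service worker-pool restart');  // the real action, enabled once trusted
    }
}
```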

The reality is that automation not only can work, it does work, and it saves money while preventing errors. The hard fact is that an automated solution does quickly only what it was told to do.  If you don't establish the right rules the results can be disastrous.  However, after 20 years of consulting I can tell you how many process mapping documents I've seen at client companies: zero (discounting ones consultants like me created).

If you re-read the fourth paragraph I believe it provides the answer. Companies need to adopt the same disciplined engineering approach that Kraft, Intel, Ford, Procter & Gamble, and a slew of other companies already use. Define what needs to be automated and then map out the processes. The impact is tremendous, measured in reduced cycle time, defects, and labor input.

Sunday, July 8, 2012

The Future and the Big 5

The Big 5 consulting companies have been well entrenched, starting in the 1980's, with the Fortune 1000 as their "trusted advisors". What many outside the Big 5 do not understand is the business model. In essence all the firms operate similarly, as evidenced by how easily people move between them at will. The Big 5 are:
  1. Accenture
  2. Deloitte Consulting
  3. PwC (formerly PricewaterhouseCoopers LLP)
  4. KPMG
  5. Ernst & Young
The above rankings are based on revenue as reported by Gartner since, with the exception of Accenture, the firms are all private.

First a little Consulting Firm 101. As a Director at a Big 5 consultancy and a small strategy firm I learned the model works like this:
  • Partners are focused primarily on sales, which is their key success metric; in practice they are 80% sales and 20% delivery.  This makes sense given they are the equity holders, and the company must grow to survive.
  • Directors who are on the Partner track are in flux, moving from their days as a manager at 40% sales and 60% delivery to the Partner model of 80% sales and 20% delivery.
  • Managers are making the transition into sales moving from 100% delivery to 60% delivery and 40% sales.
  • Senior Associates are focused on deepening their knowledge within their vertical.
  • Associates are focused on learning about consulting in general.
Big 5 consulting firms have made their money by providing unique insights leading to unique solutions built and delivered by high-quality personnel. Combining their knowledge, experience, intelligence, and often too much arrogance, their solutions perpetuate a belief that they and only they can deliver the project on time and on budget, and do so at low risk. These solutions are sold to clients through a relationship model where the emphasis is on the sale; the delivery is an afterthought, but in the capable hands of the upwardly mobile professional elite. These pre-packaged solutions, where 30-80% of the work is already completed, are the lifeblood of consulting. Each time a consulting firm sells the same work it deepens its expertise, grows its margin, and refines its solution. And for this "proven" innovative solution the consulting firms charge top dollar, justifying the rates on the basis of low risk, high reward.

More often this is smoke and mirrors rather than blood, sweat, and tears.  I foresee rough times ahead for the Big 5, not today but at the start of our next recession.

Clients are wise to the model and attacking it on all sides.  Rates are the first element under attack and have been for the past decade. As technology has increasingly become a part of the success of enterprises, the need for technology talent has leveled the global playing field. Whereas a decade ago a technologist was a second-rate citizen, today they often command higher rates than their non-tech-savvy business equivalents. At the low end of the pay scale, foreign firms such as Infosys and Capgemini, US firms such as Cognizant and HP, and others are working hard to break into traditional Big 5 consulting as a way to grow their revenue and margins. In doing so they bring along significantly lower overhead with fewer executives, lower salaries, and honestly less experienced staff. What these companies have, and in spades, is technical expertise. What they have traditionally lacked is vertical expertise and consulting acumen. They have learned to "diamond mine," a term I used to apply to EDS when they would train people to be software developers ("coal") while sprinkling in a few developer experts ("diamonds") so clients would be dazzled by the sparkle, not realizing they were paying for diamonds and getting coal. Today the downmarket consulting firms are hiring consulting talent to handle the client-facing roles. As a result a Cognizant can look identical to an E&Y without the high costs. One of my friends oversees a project where his Big 5 consultants charge him $250 while his downmarket consultants charge $100 for the same skill set. He's complained to leadership; however, they perceive a benefit that is not being realized.

In the past a key ace up the sleeve of the Big 5 was the vertical expert: experienced senior staff who understand the ins and outs of an industry. Today clients are increasingly telling firms they already have all the vertical expertise they need. What they want are new ideas. It's tough to deliver a unique, innovative idea when the Big 5 have been built on replicating their ideas via "solutions" available for sale to anyone who wants them. As a result clients are managing projects differently through the RFP process, farming out elements of the work rather than entire bodies of work.  In many areas consulting is now nothing more than BODY SHOPPING: selling individuals rather than the firm. In my role, even as a Director, I was body shopped more often than not, most often because the client wanted the idea without having to pay the high fees for the implementation.

Adding to the Big 5 pain, clients realize they need to keep more of the higher-end capabilities (strategy, architecture, process development, etc.) in house, leaving "chore" duties such as PMO to the consulting firms. Sure there's money to be made, but those rates are under the same pressure, and it's tough to keep talent when their role is glorified project manager rather than delivery expert.  To strengthen their high-skill capabilities, Fortune 1000 companies are hiring Big 5 consultants to join their ranks. It's a trend that started years ago, however I see its pace accelerating over the past three years since the downturn in 2008 and the subsequent layoffs by the Big 5. And what's driving the Big 5 talent into industry is the closing of the compensation gap. In days past a Big 5 consultant commanded a 20-30% premium in salary and bonus over similar roles in industry. For that the consultant worked more and longer hours and traveled away from home five days a week. However, today many consultants find Fortune 1000 jobs with better compensation packages, better benefits, and significantly reduced travel. And those Fortune 1000 firms are offering non-Partners something you can't get at a Big 5: equity. Consultants are jumping ship and stealing away their friends to lucrative opportunities outside the Big 5. At the same time there is tremendous downward pressure on compensation at the Big 5 because of the downward pressure on margin. The gap in compensation between a Director and a Partner is tremendous. I know of many situations where Directors in a given sector averaged less than $5k in bonuses while Partners, one level higher, received $500k and more. As the demand for transparency increases there's going to be a lot of explaining to do.

Compounding the challenge for the Big 5 is the pace of innovation in technology, which continues to accelerate. Simply put, the Big 5 don't have the technical talent to keep abreast and advise clients. And since innovation projects are small, many in the Big 5 are happy to look the other way while searching for $1M and larger deals.  The new world is being developed in mobile apps, collaboration, social media, and big data on a cloud engine, in less than $100k increments.  Cloud service providers such as Amazon, Rackspace, AT&T, Terremark, and Google each have more cloud experts and experience than the entire Big 5 combined, multiplied by an order of magnitude. Add to this that companies have learned to manage their technology vendors better, requiring them to often talk in generic, agnostic terms, and much of this expertise is given away for free as part of the pre-sales cycle. How can the Big 5 compete when the talent and expertise being given away is done so by former Big 5 talent? That's a tough sell, one that is most often made by arguing "3rd party independence".

Consultants at the Big 5 will argue until they're blue in the face that they are vendor neutral, independent "3rd party" experts. It's not only not true, it can't be true. Look at how many companies use the logos of the Big 5 on their web sites and list them as "partners". All the Big 5 have large vendor relationships who drive a significant part of their go to market strategy and investment. Talk with Gartner, Forrester, or IDC who are all very knowledgeable of who's working with whom across the industry. There is no such thing as 3rd party independence if you're talking to anyone with experience.  Everyone has an agenda.

All of these issues, from margin pressure to body shopping and the incompatibility of the Big 5 business model, are having a tremendous impact.  However, the single greatest threat to the Big 5 is the dwindling opportunity to become a Partner. Consulting firms are, whether stated or not, up-or-out organizations. You advance or you leave. In addition to the fact that an increasing number of staff (Director and lower) no longer want to be Partners, there are fewer and fewer Partner opportunities every year. I was told for years by people at Accenture that the path to partner was on hold for everyone because they had enough.  Due to margin pressures firms are expanding their staff-to-partner ratio from 30:1 to as many as 150:1. As a result many firms already have too many partners. Consulting firms thrive when they grow, but when growth stagnates it's layoff time. And no longer are people waiting around to see what happens. I survived several rounds of layoffs at a Big 5 consultancy and its subsequent buyer. However, I saw an increasing number of talented people with no risk of losing their jobs leaving for greener pastures. One firm actually commissioned a company to write a report on how much better it was to stay than go, but apparently never circulated the report, realizing its fundamental arguments were flawed (although a copy could be found on their Intranet site). As a result the modern partnership in Big 5 consulting looks more like Amway than McKinsey.

Now, Big 5 consultancies can save themselves, however they don't operate in a vacuum; most have Audit and Tax partners to consider. Changing the model for one business will have repercussions throughout the firm and the industry.  It will take a significant change in thinking and an amazing amount of risk to take charge of and upset 100+ years of status quo in the Big 5. Changing course will require the firms to become more democratic: transparency in decisions, rules that apply equally to everyone, compensation based on contribution instead of position, and the distribution of equity to everyone.  I believe IBM Business Consulting Services is thriving in part because they distribute equity to top performers, the rules apply equally to everyone, and compensation is at least in part driven by contribution.

Talent today is drawn out of college to the Big 5 in large part by the money. However, as the Fortune 1000 hire more of the senior consulting talent, those leaders will build out their own staffs, giving graduating college students more opportunities and leaving the Big 5 competing for the same pool of talent. Once that happens the value proposition of the Big 5 will have faded away...

...and so might they.

Monday, June 25, 2012

Enterprise Cloud Storage Adoption - What's the Holdup?

Adoption of Enterprise Cloud Storage is picking up steam, however many organizations today maintain a fundamental belief that data will not be allowed outside the four walls of their company.  Easy to do in an on-premise private cloud only world, more difficult when a company wants to take advantage of off-premise options.  I question the value of the majority of the data being so blindly and passionately protected, but there is definitely a core that simply cannot be put at risk.  So the fundamental question is what constitutes risk?

I believe people see risk as a combination of cost, access, governance, security, and location.  It's easy to see that cloud storage gets a big check in the positive column for off-premise, and a standard-sized check for on-premise, when it comes to cost.  Cloud storage is simply the cheapest storage available, period.  Access, I feel, is neutral for on-premise because the same tools we use today to manage authentication and authorization are applicable in the cloud world, and all storage is connected to the network in some way, making it accessible to applications. Off-premise providers haven't focused enough on this area, and they really need to get to work on it.  A company shouldn't have to replicate their directory to unlock the value of off-premise cloud.  However, as companies mobilize their workforce the tip of the hat definitely goes to off-premise, where enterprise mobile storage clouds are readily available.  Governance again is a wash for on-premise, and can be for off-premise as long as the data is controlled and owned by the company.  Some off-premise providers include more refined storage offerings, obviating the need for backups and lifecycle management and bringing new value over on-premise solutions.

Security is a key issue.  Today's de facto standards of encrypt-at-rest and encrypt-in-transit must be applied universally, and once they are there is only one differentiation between on-premise and off-premise.  When a subpoena is delivered, what does an off-premise provider do?  The answer has to be 'hand over the data', and this is where companies balk, push back from the table, and walk out of the room.  However, there is a simple solution: build the solution so the consumer owns and has sole access to the encryption keys.  The less a provider knows about the details of the data, the better off they are, because the risk is lower.  No accidental leaks.  No mischievous downloads.  No secrets divulged by a successful hack.  The owner of the data still has the ability to exhaust all of their legal obligations before turning over the data in the form of the decryption keys.  If the government can decrypt AES-256, currently estimated to take 4.7 trillion years per key, then they already have enough power to hack into the system and get the data directly, in which case the whole argument is moot.
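
To show how simple "the consumer owns the keys" can be, here is a sketch using PHP's OpenSSL functions; the document contents are a placeholder, and the point is that only ciphertext and an initialization vector ever reach the provider.

```php
<?php
// Sketch: encrypt before upload so the storage provider only ever sees ciphertext.
// The key is generated and held by the data owner and is never sent to the provider.

$key  = openssl_random_pseudo_bytes(32);          // 256-bit key, kept by the owner
$iv   = openssl_random_pseudo_bytes(16);          // per-object initialization vector
$data = 'contents of the quarterly report';       // stand-in for the real file contents

// AES-256 encryption happens on the owner's side of the wire.
$ciphertext = openssl_encrypt($data, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);

// Only $iv and $ciphertext are handed to the cloud storage provider.
// Getting the plaintext back requires the key the owner never gave away.
$plaintext = openssl_decrypt($ciphertext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
```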

Location is another key issue because it implies control.  Part of this control is the ability to pass through several physical or logical security checkpoints before being able to hug the storage cabinet.  However, this is only the appearance of control, because putting your arms around something and doing something with it are two different tasks.  A more pressing issue of control is the gravity of assets.  Companies have storage today, lots of it, and storage lasts longer than servers.  Who wouldn't want to take advantage of that storage for as long as possible?  However, again with a little imagination, in the form of buybacks, early retirements, and asset transfers, moving off-premise or building an on-premise storage cloud can make the location issue immaterial.

Of course I had to save the best for last...to entice you to read this far...

There is one tremendous benefit of off-premise cloud that will slowly tip the entirety of storage in its favor.  As interactions grow and more data is gathered, our centralized model of bringing data back to one location will strain and ultimately prove untenable.  As I have quoted before, next to the cost of moving data everything else in any data center is free.  Although the network equipment providers, telcos, and others are salivating, the reality is we didn't lay enough extra fiber in the late 1990's to carry all of the traffic.  There just isn't enough to go around.  The only other option is to adopt a distributed data model with federated data management.  I was able to get traction with this model in smart grid and believe it's as inevitable as cloud, death and taxes.
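
A rough sketch of what federated data management looks like in practice, with plain dictionaries and made-up numbers standing in for per-site stores: each site answers the question locally, and only the small answers cross the network.

    # Federated aggregation: every site computes its own partial answer and
    # only the summaries travel; the raw data never leaves its home site.
    from concurrent.futures import ThreadPoolExecutor

    SITES = {
        "us-east":  {"orders": 1204, "bytes_stored": 9.1e12},
        "eu-west":  {"orders":  877, "bytes_stored": 6.4e12},
        "ap-south": {"orders":  311, "bytes_stored": 2.2e12},
    }

    def local_answer(site_name, metric):
        # In a real deployment this would run inside the site, next to the data.
        return SITES[site_name][metric]

    def federated_total(metric):
        with ThreadPoolExecutor() as pool:
            partials = pool.map(lambda s: local_answer(s, metric), SITES)
        return sum(partials)

    print(federated_total("orders"))   # 2392 -- only three small numbers moved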

Tuesday, June 12, 2012

A Step Between

Anyone who knows me or has read this blog understands my views on private cloud.  As I've often stated, the vast majority of private cloud implementations and solutions (nearly all I know of) are really mass virtualization.  In essence they provide nothing in terms of services, tools, or process to support the application layer.  As a result enterprise cloud adoption in the Fortune 500 is slow, because native cloud applications simply are not being built today by the builders and owners of these private clouds.  Instead vast numbers of applications are being virtualized, which is, of course, a good thing.  Lower costs, higher efficiency, and improved return on assets are lofty goals, and delivering on them certainly looks good to the CEO and CFO.

What I realize today is that perhaps we need a step between mass virtualization and the cloud, a new on-ramp encouraging companies to continue down the cloud road.  Private clouds are viewed by some as that step between virtualization and cloud: a path justifying a measured approach while focusing on how to leverage existing assets and move up the learning curve.  Not unreasonable, and I'm on record with my concern about unintended consequences.  One real, indisputable consequence is that private clouds have made virtualization a requirement and made cloud more palatable.  The proverbial ball has been moved forward.  However private cloud is just a limited version of cloud.  I feel we need something fundamental, a capability which fills the gap between operating-system-dependent machine virtualization and operating-system-independent cloud resources.  So the question is: what should the next step be to keep the ball rolling and the momentum building?

For the foreseeable future applications will run on operating systems, although I'm on the record on that topic as well.  Server virtualization enables us to load applications inside a protected container, the operating system, separated from other applications by the hypervisor.  The hypervisor is an artificial construct created because applications simply do not share resources well, and the most common hypervisor, VMware, is not cheap.  But how much value does a hypervisor really provide?  If the operating system were improved to provide a sandbox for each application, to which resources could be assigned, and tools were added for application management capabilities including images, backups, and portability, we would no longer need the hypervisor.  The grid computing pioneers, and specifically United Devices, had it right from the start.  They provided a sandbox in which the grid application would operate and to which resources could be assigned or shared.  Who says grid isn't the father of cloud?!
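
As a toy illustration of the sandbox idea, the following Unix-only Python sketch runs an application in a child process with hard resource caps via rlimits; a real container layer (zones, cgroups) would add the imaging, portability, and management tooling on top of the same principle.

    # Run an application with hard CPU and memory caps, no hypervisor involved.
    import os
    import resource

    def run_sandboxed(app, max_mem_bytes=256 * 1024 * 1024, max_cpu_seconds=5):
        pid = os.fork()
        if pid == 0:
            # Child: apply the limits, then become the application.
            resource.setrlimit(resource.RLIMIT_AS, (max_mem_bytes, max_mem_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (max_cpu_seconds, max_cpu_seconds))
            try:
                app()
            finally:
                os._exit(0)
        _, status = os.waitpid(pid, 0)   # Parent: wait for the sandboxed app to finish.
        return status

    run_sandboxed(lambda: print("hello from inside the sandbox"))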

Of course none of that matters except to the innovator.  What does matter is that software licensing, which fell behind the cloud curve as publishers struggled to figure out how to protect revenue streams and intellectual property, ends up nicely realigning in a container world.  Licensing of operating systems would be easier to manage and, more importantly, more efficient, because fewer instances should mean fewer administrators.  And an entire layer of software costs would be eliminated, a layer that can cost nearly as much as the server it runs on.  Based on my experience the cost of a cloud environment would drop by as much as 30% to 45%.  That's enough to interest anyone.

In this transformation from hypervisor to container, the operating system battleground could reverse direction, from low-end operating systems such as Linux and Windows back toward data-center-centric UNIX stalwarts such as Solaris, AIX and HP-UX.  Oracle would be in the catbird seat because of Solaris Containers, assuming, that is, they know what they have.  However many companies have migrated off Solaris to Linux, as is happening with AIX and HP-UX.  So what is the core Linux team working on?  What are the various distribution providers preparing to bring to market?  A move by the Linux world would more likely than not drive Microsoft forward, but only after the concept is proven.  Naturally VMware would have to reinvent itself, and Citrix would have the opportunity to double down on CloudStack.  The systems management vendors would have to update their tools too.

The best news is that current cloud providers wouldn't need to change anything immediately.  Over time they would be able to seamlessly move to the new model and support a bifurcated environment in perpetuity if necessary.  Integration points would change but the right recipe for success is for the innovators to make the change invisible to the application and systems management tools.

I'll have to start digging to see if anyone is working in this direction.  It may be the new bandwagon I need to jump on as my current bandwagon, Enterprise Private PaaS, is a five year marathon.

Saturday, May 26, 2012

Evil Assets, Expensive Clouds and Open Source

One of the primary focuses of Cloud Computing is the cost benefit, a focus driven primarily by those who adopt only the virtualization aspect of cloud.  In reality, however, cloud can be very expensive.  Sometimes paying more as an expense is still better than owning an asset, but sometimes expensive is just plain expensive.

The real economic value of Cloud Computing is in its delivery model: the ability for companies to consume the resources they need while leveraging the assets of another.  Assets are evil.  Assets cost capital, a scarce resource that companies protect and work to ration each year.  Assets also require maintenance, good old tender loving care.  CIO's today complain about the large percentage of their budget that goes just to the care and feeding of their IT systems.  Just like a pet, a server requires its equivalents of food, air, a litter box, and play time.

Even pretending those two issues didn't exist, there is a darker, more hideous side to assets, one that makes them orders of magnitude more evil than we mortals understood years ago.  Assets create gravity.  Once there is an asset, it tends to attract other assets, like the formation of a planet.  Although the weakest force, gravity works over the longest distance, pulling in other assets until something near the size of a real planet begins to emerge in the data center.  It becomes impossible to separate the assets, and the collection takes on a life of its own.  Soon companies are paying the legacy tax, often upwards of $1MM, just to make even the smallest change, because layers and layers of assets have to be fumbled through and moved out of the way over the course of any implementation.

And of course with gravity comes friction, that negative force which slows momentum, eventually bringing everything to a stop.  Why have companies become so bad at innovation over the past 25 years?  I argue it's because the weight of the things they own, and thus feel compelled to take advantage of, keeps them from doing what they should: leveraging things they don't own.  Remember that one of the key financial indicators for a company is Return on Assets, but it only applies if you own assets.

Public cloud, even private cloud when done right in an off-premise model, is asset-free.  However internal private clouds, where the Fortune 500 is clearly focused, are asset-intensive.  Organizations see the assets as free because the initial set is already on the books.  CIO's are often blinded to the eventual reality of the albatross after a generation or two of changing out hardware.  Of equal concern is the cost.  Software vendors are known collectively for one thing: getting their money.  The greatest concern I have for private clouds is the cost basis on which they are being built.  First, one should assume any proprietary elements will increase in cost as they increase in value to the company.  The larger the implementation, the greater the bill.  This is in contrast to existing models; however, once a company builds its own private cloud it will be extremely reluctant to make any changes to the core, and the vendors know this well.  Second, the cost basis often consumes the benefits of scale, making costs scale linearly when they should in fact drop as more nodes are moved into production.  And third, since there are no real cloud solutions in the market, the best being nothing more than hyped virtualization, what cost elements are not represented today but will be required to execute native cloud applications?

There is no better application of open source than in cloud computing in all of its forms.  Open Source solutions deliver:

  • code-level scrutiny required to meet stringent security requirements
  • a common base from which everyone benefits
  • innovation by encouraging contributions and participation
  • customization as required by a company
  • the ability to remediate issues directly rather than waiting for a vendor

Add to the above the low or no cost and the availability of support for all of the most popular tools today, and the value proposition is compelling.  In addition, the vast majority of innovations in technology over the past decade have been created in or ported to open source.

Although the future tends to be uncertain, there is one prediction that appears sound: assets will continue to be evil, on-premise private clouds will continue to misrepresent their true costs while looking for one of those suckers born every minute, and the lowest-risk, lowest-cost route to success will be paved by Open Source technologies.  CIO's need solutions enabling them to leverage the assets of others while maintaining all the control they need.

Saturday, May 12, 2012

The Coming Fortune 500 Ready Cloud

Cloud computing represents a seismic shift in the delivery of technology services.  By combining internal and external resources a cloud maximizes flexibility and efficiency while minimizing the quantity of assets under management.  As a result IT budgets gain room for business critical innovation initiatives and IT gains the agility required to address rapidly shifting markets.  When properly implemented cloud represents a global collaboration platform upon which to develop new services, enter new markets and transform how a company does business.  It's no surprise it's on the agenda of every CEO, CFO and CIO in the Fortune 500.

Today the power of cloud computing is trapped between promise and responsibility.  Executives used to worry about security, but today they take it personally.  Sarbanes-Oxley taught Fortune 500 boardrooms how far Congress was willing to go on personal corporate responsibility.  As a result CxO's scrutinize security to a degree previously unseen, a reasonable response in light of growing threats and vectors of attack.  Adoption of cloud is hindered by one simple fact: the best solutions today require giving up control to a 3rd party.  What is needed is a solution that gives the Fortune 500 assurance that they control the who, what, where, when, and why of their applications and only outsource the how.

There is a fast-growing trend across IT that 10 years ago would have been heresy to repeat.  CIO's are looking to exit the data center operations and ownership business.  Taking a page right out of Nicholas Carr's paper, they realize the value of IT is at the application and data layers, not the underlying sausage factory.  I expect in 2013 and 2014 we'll see a downward trend in hardware and software purchases at corporations, in particular servers, storage, and operating systems.  Providers will purchase some of that capacity instead, but their purchases should be lower as additional contraction is effected through virtualization, so overall volumes should decrease.  Not the future the big tech vendors want, but not one they will escape either.  For this to happen Fortune 500 CIO's need to build synergy between their infrastructure and application development teams.

Today private clouds tend to be nothing more than large-scale virtualization focused entirely on delivering a stable, low-cost unit called a "server" to their customer, the rest of IT.  Those customers then treat that "server" as they always have: as an application silo.  Very few companies are building native cloud applications, sometimes for lack of talent but most often for lack of a cloud platform.  Building a cloud application requires a purpose-built Service Oriented Architecture (SOA).  Most SOA implementations predate cloud and thus lack the necessary refinements, such as moving away from traditional platforms like J2EE and .NET toward lightweight replacements in the form of Ruby on Rails, PHP, Python, Scala and others.  Other changes include adopting HTML5 and CSS3, replacing SOAP with REST, coding for security and reliability, leveraging resources as services, and of course building exposure through prototyping and education.
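
For a sense of how lightweight this style is, here is a minimal REST endpoint sketch using Flask, one of many such Python frameworks; the in-memory ORDERS dict is a placeholder for a real data service.

    # Minimal REST service: JSON in, JSON out -- no SOAP envelopes, no application server.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    ORDERS = {}   # stand-in for a real data service

    @app.route("/orders/<order_id>", methods=["GET"])
    def get_order(order_id):
        order = ORDERS.get(order_id)
        return (jsonify(order), 200) if order else (jsonify(error="not found"), 404)

    @app.route("/orders/<order_id>", methods=["PUT"])
    def put_order(order_id):
        ORDERS[order_id] = request.get_json()
        return jsonify(ORDERS[order_id]), 201

    if __name__ == "__main__":
        app.run(port=8080)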

At some point soon these two major issues are going to be addressed.  A provider will bring to market a solution that enables large corporations to outsource real payloads without having to extend their trust to a 3rd party.  And a technology company or cloud provider will deliver an Enterprise Private PaaS solution compelling enough to get CIO's to adopt native cloud application development as the next step in the value chain above virtualized infrastructure.  When solutions to both problems are in the market, ready and available for use, I believe we will have reached the tipping point at which the Fortune 500 will begin to eagerly adopt cloud.  Of course it will start with a trickle, but soon an all-out flood will ensue as the "space race" of Big Data and Mobility begins.

What cannot continue is trying to get blood from a stone.  Virtualization, in the form of the vast majority of solutions available today from IBM, VMware, Citrix, and others, will only get you so far.  The cost savings are tangible, but they are also limited.  To unlock the value of cloud one has to shift focus to revenue generation, and that means applications.  Applications can be built in a cloud, but not in a vacuum, which today is the only option most enterprise developers have.  Fortune 500 companies need cloud to compete, to innovate, and ultimately to survive.

Tuesday, March 6, 2012

Always Another Barrier

Cloud computing represents change, which gets 20% of the world excited about the possibilities of the future while the other 80% dig in their heels and prepare for a fight.  As Fortune 500 CIO's have told me in private on multiple occasions, cloud challenges their very value propositions, which have largely been built on constructing and managing complex IT operations.  One in particular told me under no circumstances was he going to commit career suicide by telling the CEO that the $10B he spent building out modern data centers over the past ten years was folderol (I expect to see news of his termination in the WSJ in late 2012 or 2013).

The reality is that many IT organizations are still busy building barriers rather than figuring things out and making progress.  Like daylight, wishing cloud away won't change the fact that it's here and making a huge impact.  Cloud is the engine behind incredible innovations such as mobile apps and the analysis of large-scale data.  As the mainstream media reports on these successes, CIO's are being forced into a corner, throwing any objection they can at cloud like an actor in a B-rated movie tipping over boxes while trying to escape.

Following is a list of common barriers and strategies for success:
  1. Security:  Security has never been a strong argument against cloud because the reality is any proprietary data should always be encrypted in transit and at rest.  Together that covers about 90% of the penetration space used by hackers to gain illicit access.  New solutions are being developed to protect data during execution.  In reality no new security technologies have been created for cloud; the existing models have simply been applied, and so far the result has been enough to enable financial, health, and government-secured data to be used in the cloud.  It's time to turn the tables on the CSO and ask them "how" rather than "may I".
  2. IT Cost Savings Focus: How can focusing on cost savings be a barrier?  Easy: when it keeps people from considering that the biggest value cloud can provide is revenue enhancement!  Cloud is a global-scale collaboration platform, and despite all the smarts in IT, it takes the additional smarts of the Business to unlock that value.  Companies have leveraged cloud to create entirely new markets, let alone new revenue streams.  Zynga's entire operational model does not exist without cloud; neither does Google's.  Increasingly it's not just the high technology companies that are reliant on cloud.  Kroger, a $4B grocer, leverages private cloud to enhance its supply chain.  Charles Schwab leverages cloud to provide high-wealth clients with portfolio management tools.  Success stories in the Fortune 500 are hard to come by because everyone is protecting their competitive advantage.  However all have one thing in common: they educated the business and enabled it to reimagine how it does business, sometimes down to its very core.  Cost savings is a good thing, but there are also examples like GE's entire procurement system, in the cloud for nearly five years, where business costs have been reduced, not just IT costs.  Additional thoughts are available in a previous post.
  3. Enterprise Strategy: It's amazing to me how often this critical element is overlooked.  Most cloud efforts started as ground-up movements led by developers, just as virtualization efforts were led by system administrators.  However the value of cloud will always be trapped in silos unless an enterprise strategy is developed to spread the wealth.  Today there is no reason to consider any hardware dedicated even if it is, in fact, dedicated.  Moving to an abstracted resource model separates operational and development activities from hardware maintenance and execution.  Only through this model can we eke out every ounce of value from our existing investments, and do so with a single architecture which reduces complexity while maximizing flexibility.  I have yet to see or hear of a successful general purpose private cloud that did not have an enterprise cloud strategy behind it.  An enterprise strategy need not be all-encompassing or even complex, but it should form a foundation through simple steps like adopting a definition of cloud, defining the goals of leveraging cloud technology, identifying the required skills and technologies, and identifying the low-hanging fruit to maximize the opportunity for success out of the gate.  A little bit goes a long way.
  4. Service Based IT: Cloud spells an end to silos and therefore an end to the political fiefdoms that have ruled data centers for decades.  The business people need to be shown back to their seats in the audience, and IT needs to put the right people in place to deliver IT services.  Having a service catalog is a great start, but the bigger change is to adopt a service mindset throughout IT.  Cloud, in the end, is a service model and therefore can only be governed as a service.  However nobody will allow this to happen unless they are getting the service they need.  An increasing number of IT leaders are taking traditional service courses from companies such as Ritz-Carlton and Disney.  At one large retailer IT leaders are required to work in a retail outlet to learn what it means to deliver world-class customer service.  It's a real eye opener.  At a minimum, hold focus groups with the business where IT observes from behind the mirrored glass as a moderator leads a frank discussion of how well IT meets their needs.
  5. Governance: Cloud is, in essence, a new model for technology which provides multiple abstraction layers, within which the complexity of a given technology realm is entirely captured.  Developers need to know nothing about the hardware to deploy a service.  Data modellers need to know nothing about the database engine to create the right schemas.  Most importantly, the business needs to know nothing about how the applications work beyond the business processes they instantiate.  To this end the language of managing the business/technology interface needs to change.  Capabilities and challenges need to be expressed in business language and translated behind the scenes into the correct technical elements by IT.  The business needs to control what is done, why, and when.  IT needs to control how it's done and where.  It sounds simple on paper, but we have executives with 30 years of experience of broken models in which IT explains what can be done and business leaders water down what they want to its barest elements.  The result is something IT can accomplish, but in a timeframe that's too long for the business, minimizing the value delivered.  Penetrating this barrier starts with the CIO sharing the new vision for a cloud-enabled IT with their peers and including them in the journey.
  6. Available Skills: Interestingly this barrier exists from the start but is stealthy enough to be ignored by most companies in their initial cloud endeavors.  Only during the debriefs after the initial enterprise cloud effort has failed do people realize they were limited by their knowledge; they didn't know what they didn't know.  Like any other innovation, cloud requires a new way of thinking.  However unlike most, cloud covers just about every skillset that exists in IT, making it very difficult for specialists to understand beyond the limits of their own knowledge.  Administration, networking, security, data modelling, programming, architecture.  Like Prego, it's all in there.  To drastically reduce risk, start with a strong, knowledgeable enterprise architect who is skilled in cloud computing.  Getting the architecture right, and having that knowledgeable person available to provide course corrections throughout implementation, is critical.  Those skills are available from technology companies, providing at least one way to get over the barrier.  Additional thoughts are available in a previous post.
  7. Data: It's true that data as delivered in current centralized architectures is a valid barrier to cloud adoption.  Applications require data, and data is expensive to move.  Getting over this barrier requires a deeper understanding of what cloud represents (decentralization) and the realization that a new data architecture is required (federated).  Accept that latency needs to be minimized, bigger pipes will be required, and data will be distributed, and then figure out how to make things work through existing concepts like data staging, caching, deduplication, replication, and, best of all, delivering answers rather than raw data (i.e. push the computation out to the data).  More information is available in a previous post.  At a minimum, make sure the data and compute power are co-located when maximum throughput is required.
  8. Lack of Robust Development Tools: This is often a cover for enterprise application vendors to sling arrows at the open source programming foundation of cloud.  The reality with cloud is that simple scales, and therefore many of our old assumptions about how to develop for the web need to be dropped forever.  AJAX, XML/JSON, OAuth, REST: these are the new tools of the web, leveraged by companies from Facebook to Google to deliver their applications on a scale never before seen.  What needs to change is the application development model, and in the new model the tools are robust enough to meet the needs of the largest technology companies in the world.  Part of this revolution, against the wishes of Microsoft, IBM, Oracle, and SAP, is the migration away from .NET and Java/J2EE.  Languages such as Python, Ruby, PHP, and Scala are establishing a foothold in the development groups within financial services, healthcare, consumer packaged goods, and retail.
  9. N+1: So the cloud is live, and now it's time to patch the operating system, or update the router tables, or deploy a software update.  Cloud requires a nearly maniacal approach to change management and automated testing.  Nothing should go into production without understanding its potential impacts.  Once something goes into production it needs to be scrutinized and reversed at the first sign of an issue.  New challenges will always arise, but through diligence the size of the challenge can be limited.  A method growing in popularity is continuous environmental testing, whereby tools deliberately create failure conditions, exposing weak points early and enabling remediation.  Netflix has its Chaos Monkey; Amazon's efforts resulted in the startup Opscode.  Clouds must be reliable and predictable, and the best way to ensure they are is to introduce disruptions and figure out how to minimize their impact (a minimal sketch of this kind of fault injection follows this list).
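
Here is the fault-injection sketch referenced in item 9; terminate_instance and health_check are hypothetical hooks into your own environment, not any particular vendor's API.

    # Toy chaos loop: disrupt a random instance, then verify the service still answers.
    import random
    import time

    INSTANCES = ["web-01", "web-02", "web-03", "worker-01"]

    def terminate_instance(name):
        # Placeholder for a real call into your provisioning layer.
        print(f"injecting failure: terminating {name}")

    def health_check():
        # Placeholder for a real end-to-end probe of the service.
        return True

    def chaos_round():
        victim = random.choice(INSTANCES)
        terminate_instance(victim)
        time.sleep(1)   # give the platform time to recover
        assert health_check(), f"service did not survive the loss of {victim}"

    for _ in range(3):
        chaos_round()
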
That's enough to chew on for now.  Having worked with clients on grid, utility computing, and cloud projects for the past ten years, I can tell you firsthand that companies don't hit just one of these barriers; they hit them all.  Cloud done properly is transformative.  Most companies today are focused on the celebrity of cloud ("Hey, we're in the cloud!") rather than driving value, and as a result they're not seeing the returns of early adopters like Bechtel or NYSE.  However the limitation is in their vision and implementation, not the concept of cloud.

Friday, February 24, 2012

Cloud Drives New Thinking About Data Architecture

It doesn't take much experience in the cloud space to realize it is subject to the same limitations as grid computing, and for the same reasons. One of the biggest limitations is data: the size of data sets, the bandwidth and cost required to move them, and the latency introduced to process them. I have repeatedly heard Alistair Croll quote another cloud visionary that "next to the cost of moving data, everything else is free". It's simple, and it's true, yet when adopting cloud so few people seem aware of the importance of architecting for this reality.

As Geva Perry points out, the reality of cloud is that, like most technologies, it enters a company through the everyday knowledge worker and then wends its way upward as it delivers value until finally the CxO's become aware of its success, typically right before they engage in top-down initiatives to bring the technology into the company. Most of these early adopters are software developers and systems administrators, both of whom are eternally on the lookout for better solutions that make their lives easier. Neither takes a data-centric focus, which results in sub-optimal solutions. And in cloud, where the return can be so high, a sub-optimal solution still looks great, masking its inherent shortcomings.

As I've explained to the many who confuse cloud with mainframes via the centralization argument, it's important to realize there is a huge difference. Mainframes were about physical centralization. And if we proved nothing else, we proved physical centralization is a bad thing from a disaster recovery point of view, a cost of operations point of view, a response time point of view, and several others that I won't detail. Cloud gives us the best of both worlds: logical centralization within a physically dispersed reality.

New models require new architectures. Since the fundamental value element of any computing system is the data being processed, it's natural to optimize the architecture for data. In the cloud the optimal data architecture takes advantage of geographic diversity, leveraging virtualization concepts to manage the data logically in a traditional centralized model. The reality is no matter how much storage you can put in one location, you'll never have enough. And even if you did, you'd never have enough buildings to house it. And if you did, you'd run out of bandwidth to move it around and make use of it. We are in a data-driven age in which we're better at collecting data than using it, and with mobile technologies turning every electronic device into a sensor of some type, we're starting to sink under the weight. The telcos had it right with their local exchanges, the distribution points that linked the home to the global network.

In the Smart Grid arena I helped one of the largest utilities re-think their smart grid strategy. The original design called for bringing all the consumption data back from the smart meters to the data center. The primary challenge on the table was how to move the data: wireless technology? Broadband over the wire? I turned the tables by refocusing the discussion on the fundamental assumptions. Why did the data need to be transferred to the data center at all? The answer? The business requirement was to be able to tell every user the accumulated cost of their power consumed to date at any given point in time. Digging deeper we found the current mainframe could not meet this requirement, being able to process only 200,000 bills each day; hardly the real-time answer the business wanted. So I asked if we could flip the architecture, taking a distributed control model from my powertrain engineering days with GM. I argued it would make more sense to accumulate the data at the smart meter or neighborhood, calculate the billing at that level, and then only access the detailed data from the data center when needed. Through research the "when needed" use cases were limited to billing disputes, service issues, and the occasional audit. Since only 1% of customers called each month, even if 100% of those calls required the customer rep to reach down into the smart meter to retrieve the detailed data, it was certainly easier and less of a load than bringing 100% of the data back to the data center. Frankly it was hard to argue against the distributed model, which became the standard and has slowly replaced the original centralized model of smart grid touted for years as the answer.
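
A toy version of that flipped architecture, with invented rates and readings, looks something like this: the meter keeps the detail and computes the running cost, and only the small answer travels upstream.

    # Each meter holds its own readings; the data center pulls detail only for
    # the rare dispute or audit.
    RATE_PER_KWH = 0.12   # made-up tariff

    class SmartMeter:
        def __init__(self, meter_id):
            self.meter_id = meter_id
            self.readings = []              # detail stays at the edge

        def record(self, kwh):
            self.readings.append(kwh)

        def cost_to_date(self):
            # The answer the business wanted, computed at the meter.
            return sum(self.readings) * RATE_PER_KWH

        def detail(self):
            # Fetched only for billing disputes, service issues, or audits.
            return list(self.readings)

    meter = SmartMeter("meter-0042")
    for kwh in (1.2, 0.8, 2.4):
        meter.record(kwh)
    print(meter.cost_to_date())             # only this small number goes upstream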

I have advocated the same distributed architecture for use by mobile providers (accumulate usage on the mobile phone, execute the billing algorithm there, and if the provider needs the detail, download what is needed when it is needed). I have advocated a more generic version for healthcare payers, retailers, and the supply chain, touting the advantage of storing data where it's collected.

The data management tools are in their infancy, but there is significant work going on around the world on the subject. Consider that five years ago your options in the database world were limited to a cluster. Now we have sharding, high-performance filesystems, and highly scalable SQL databases, plus a whole new class of data management from the map-reduce/Hadoop world.
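
As one small example of the new toolbox, here is a minimal hash-sharding sketch; the shard names are placeholders, and a production system would add replication and rebalancing on top.

    # The key alone decides which store holds a record, so data can spread
    # across many nodes without a central index.
    import hashlib

    SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

    def shard_for(key: str) -> str:
        digest = hashlib.sha1(key.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    for customer in ("acme", "globex", "initech"):
        print(customer, "->", shard_for(customer))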

Embrace it or fight it, the reality doesn't change. Data growth is exploding. Storage densities are plateauing. Is it better to learn how to hold your breath, tread water, or swim?

Tuesday, February 14, 2012

Yet Another Barrier to Public Clouds - Hacktivism

Public cloud providers like Amazon, Google, Rackspace and Microsoft are struggling to be relevant to the enterprise, and to the Fortune 500 in particular. At a recent conference, when a keynote speaker asked if people felt confident enough in public cloud storage to put their data in the public cloud, I was the only person who raised my hand (and that only because of bepublicloud). However sitting through the keynote by a founding member of the Cloud Security Alliance brought me to the realization that there is another side to security that will block the adoption of public cloud even once all the security issues are addressed and confidence in public cloud storage surges.

One of the fundamentals of public cloud is that it uses the Internet for connectivity. Even the VPN solutions use the Internet. Connectivity is a limited resource, and with the thin margins in public cloud, bandwidth is a heavily scrutinized, monitored, and protected resource. Similarly, enterprises labor continuously to optimize network architecture and minimize the size of the pipes to the Internet. Enter hacktivism and its favorite tool of disruption, the distributed denial of service (DDOS) attack.

A DDOS attack is basically a flood of requests that hit a targeted range of Internet addresses, seeking to overwhelm the system's ability to respond. Small attacks take down a server, medium attacks take down a site, and large attacks saturate the network and take down an entire company. Essentially so much garbage is being thrown down the drain that eventually the system blocks up and nothing can get through. When this happens nothing goes in or out.
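
To illustrate the saturation mechanics, here is a toy token-bucket sketch: the "pipe" refills at a fixed rate, and once the bucket is empty every additional request, legitimate or not, is dropped. The numbers are invented.

    # A fixed-capacity pipe modeled as a token bucket being hit by a flood.
    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate, self.capacity = rate_per_sec, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False          # capacity exhausted: the request is dropped

    bucket = TokenBucket(rate_per_sec=100, burst=100)
    flood = 10_000                # requests arriving essentially at once
    served = sum(bucket.allow() for _ in range(flood))
    print(f"served {served} of {flood} requests; the rest never got through")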

Imagine a bank, hospital, or any other company that begins to use public cloud for enterprise solutions. To the hacktivists it would be the same as inviting their disruptive methods into the data center. A DDOS attack could essentially take the company offline, unable to complete any transaction involving the public cloud. No more access to systems, data, records, images. I expect this is an issue already faced by salesforce.com and other SaaS providers, who become the target because of who their customers are rather than as a result of their own actions. It would certainly make a prospect want to know in advance who else uses the service, a concern well beyond shared hardware and co-mingled databases.

I'm sure there are ways to architect around this; however, those will likely increase costs and complexity, the opposite of the direction enterprises are trying to go. Of course adding this issue to the litany of security concerns in the end only serves to decrease confidence in the public cloud.

Ouch!

Sunday, January 22, 2012

So Where Are the Mainframe Cloud Providers?

The very first thing I'll admit is that I'm not a mainframe expert. I have steered clear of the behemoths my entire career. My primary motivation has been a desire to focus on distributed computing, which is the antithesis of the mainframe. However I do have quite a bit of respect for what the mainframe offers in terms of throughput, virtualization, and parallel processing. Which is why I don't understand why there aren't any cloud providers offering mainframe capabilities. EDS was founded on the concept of sharing mainframe time between companies who did not need and/or could not afford a mainframe of their own. Outsourcers today such as Perot Systems, HP and IBM charge by the MIP; isn't that a pay-as-you-go model, priced in the same way as reservation-based virtual private clouds?

Here are my theories; perhaps someone smarter than me can tell me which one or ones are right, or what I'm missing.

The Need
Do all the companies that need mainframes already have them? Are the existing outsourcing models satisfying the needs of the market? Perhaps. However, having done consulting work for mainframe-based companies, I know there is a desire for a variable allocation (and thus cost) of mainframe resources (most often measured in MIPS) rather than paying for a fixed allocation. IT needs prices to reflect workload, and that cannot happen in a fixed-price environment, especially one where, like a cell phone plan, you pay a penalty rate for exceeding your allotment.
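
A back-of-the-envelope comparison of the two pricing models, with every number invented purely for illustration, shows why variable allocation is attractive:

    # Fixed-allocation pricing (with an overage penalty) vs. pure pay-as-you-go.
    FIXED_ALLOCATION_MIPS = 1000
    FIXED_PRICE_PER_MIP   = 40.0    # per month, committed
    OVERAGE_PRICE_PER_MIP = 90.0    # penalty rate above the allotment
    METERED_PRICE_PER_MIP = 55.0    # pay-as-you-go, no commitment

    def fixed_plan_cost(used_mips):
        overage = max(0, used_mips - FIXED_ALLOCATION_MIPS)
        return FIXED_ALLOCATION_MIPS * FIXED_PRICE_PER_MIP + overage * OVERAGE_PRICE_PER_MIP

    def metered_cost(used_mips):
        return used_mips * METERED_PRICE_PER_MIP

    for used in (400, 1000, 1300):   # a light month, a full month, a peak month
        print(used, fixed_plan_cost(used), metered_cost(used))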

The Architecture
Mainframes assume local access to data and processing power, a truly centralized model. Perhaps this architecture is the limiting factor: companies would need to send their data to the cloud provider for processing. However the same problem exists in the virtual machine world of cloud, so unless all mainframe apps are data hogs in the top 10% of the scale, it has to be a wash.

The Cost
Owning and operating a mainframe is expensive. Specialists, air conditioning, power, and footprint are all on a larger scale with a mainframe than with typical blade or "pizza box" servers. Perhaps there's not a viable economic model to deliver mainframe resources. However the costs of an internal or outsourced mainframe have to be comparable; at worst, the cost of operating a mainframe at a cloud provider would be equivalent to an outsourced model. Although a premium would be charged to provide the resource on a "taxicab" (or perhaps "rental car") model, it seems the opportunity to avoid purchasing a new mainframe or supporting an existing one would be of real value.

The Benefit
Would the benefit of elastic and efficient mainframe resources be high enough to justify the cost? Is there enough load in complementary usage patterns to enable a mainframe to be fully utilized, enabling a lower cost per unit of resource? Traditionally mainframes have operated in the 60-70% utilization range, implying the potential to consolidate roughly one in three machines and net a 33% reduction. It seems like the benefit would be significant.

In the end I believe the issue is leadership, or the lack thereof. IBM has never been a leader in cloud and has struggled to be relevant despite having created the concept of virtualization. I honestly believe the zSeries team at IBM has kept the mainframe out of the cloud discussion out of fear, uncertainty and doubt. Nobody has emerged on the z team to champion cloud with the mainframe. The arguments always seem to fall short, just after loading some version of Linux onto the z platform. If the home team can't support the concept and get excited, it's hard to expect anyone else to want to take the risk.

What's funny to me is how many older IT staffers equate cloud to the mainframe. To me it's a gross oversimplification that glosses over significant differences, such as the one between centralization and distribution of resources. However I can definitely see some similarities, such as not caring how the CPU, RAM, and storage resources are marshalled to satisfy a request. Some of that is automation, some is the magic man behind the curtain (the support team). When a developer doesn't have to care and things are just available, it looks pretty cloud-like.