Monday, June 25, 2012

Enterprise Cloud Storage Adoption - What's the Holdup?

Adoption of Enterprise Cloud Storage is picking up steam, but many organizations still hold a fundamental belief that data must never be allowed outside the four walls of the company.  That belief is easy to honor in an on-premise, private-cloud-only world, and much harder once a company wants to take advantage of off-premise options.  I question the value of much of the data being so blindly and passionately protected, but there is definitely a core that simply cannot be put at risk.  So the fundamental question is: what constitutes risk?

I believe people see risk as a combination of cost, access, governance, security and location.  On cost, cloud storage earns a big check in the positive column for off-premise and a standard-sized check for on-premise: cloud storage is simply the cheapest storage available, period.  Access I feel is a wash for on-premise, because the same tools we use today to manage authentication and authorization apply in the cloud world, and all storage is connected to the network in some way, making it accessible to applications.  Off-premise providers haven't focused enough on this area, and they need to get to work on it; a company shouldn't have to replicate its directory to unlock the value of off-premise cloud.  However, as companies mobilize their workforces, the tip of the hat definitely goes to off-premise, where enterprise mobile storage clouds are readily available.  Governance is again a wash for on-premise, and can be for off-premise as long as the data is controlled and owned by the company.  Some off-premise providers include more refined storage offerings that obviate the need for backups and lifecycle management, bringing new value over on-premise solutions.

Security is a key issue.  Today's de facto standards of encrypting at rest and encrypting in transit must be applied universally, and once they are, only one differentiation remains between on-premise and off-premise.  When a subpoena is delivered, what does an off-premise provider do?  The answer has to be 'hand over the data', and this is where companies balk, push back from the table, and walk out of the room.  However, there is a simple solution: build the service so the consumer owns, and has sole access to, the encryption keys.  The less a provider knows about the details of the data, the better off they are, because the risk is lower.  No accidental leaks.  No mischievous downloads.  No secrets divulged by a successful hack.  The owner of the data still has the ability to exhaust all of its legal remedies before turning over the data in the form of the decryption keys.  And if the government can decrypt AES-256, currently estimated to take 4.7 trillion years per key, then it already has enough power to hack into the system and get the data directly, in which case the whole argument is moot.
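The owner-holds-the-keys pattern is easy to sketch.  The toy stream cipher below is a hash-based stand-in, not real AES-256 (a production system would use a vetted AES-256-GCM implementation), but it shows the shape of the idea: the client encrypts with a key the provider never sees, and the provider stores only opaque ciphertext.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256(key || nonce || counter) blocks.
    # Stand-in only; real systems use AES-256 from a vetted library.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct  # this blob is all the provider ever stores

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)                      # stays with the data owner, never the provider
blob = encrypt(key, b"quarterly numbers") # only ciphertext leaves the building
assert decrypt(key, blob) == b"quarterly numbers"
```

The point is architectural, not cryptographic: because the provider holds only `blob` and never `key`, a subpoena to the provider yields ciphertext, and the legal conversation moves back to the data owner where it belongs.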

Location is another key issue because it implies control.  Part of this control is the ability to pass through several physical or logical security checkpoints before being able to hug the storage cabinet.  But this is only the appearance of control, because putting your arms around something and doing something with it are two different tasks.  A more pressing issue of control is the gravity of assets.  Companies have storage today, lots of it, and storage lasts longer than servers.  Who wouldn't want to take advantage of that storage for as long as possible?  Yet with a little imagination, in the form of buybacks, early retirements, and asset transfers, moving off-premise or building an on-premise storage cloud can make the location issue immaterial.

Of course I had to save the best for last...to entice you to read this far...

There is one tremendous benefit of off-premise cloud that will slowly tip the entirety of storage in its favor.  As interactions grow and more data is gathered, our centralized model of bringing data back to one location will strain and ultimately prove untenable.  As I have quoted before, next to the cost of moving data, everything else in any data center is free.  Although the network equipment providers, telcos, and others are salivating, the reality is that we didn't lay enough extra fiber in the late 1990s to carry all the traffic.  There just isn't enough to go around.  The only other option is to adopt a distributed data model with federated data management.  I was able to get traction with this model in smart grid, and I believe it's as inevitable as cloud, death and taxes.
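As a sketch of what that distributed model could look like, here's a toy federated aggregation in Python.  The site names and readings are invented for illustration; the essential move is that each site computes over its own data locally and only small aggregates, never raw records, cross the network.

```python
# Each site holds its own data; only tiny aggregates travel over the wire.
site_data = {
    "plant_a": [3.2, 4.1, 5.0],
    "plant_b": [2.7, 3.9],
    "plant_c": [6.1, 5.5, 4.8, 5.2],
}

def local_aggregate(readings):
    # Runs at the site that owns the data: the compute goes to the data.
    return {"count": len(readings), "total": sum(readings)}

def federated_mean(sites):
    # The coordinator combines per-site aggregates, never raw readings.
    partials = [local_aggregate(readings) for readings in sites.values()]
    total = sum(p["total"] for p in partials)
    count = sum(p["count"] for p in partials)
    return total / count

mean = federated_mean(site_data)
```

A few dozen bytes of aggregate per site replace shipping every reading to a central store, which is exactly the economics argument: the bandwidth bill shrinks to almost nothing while the answer stays the same.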

Tuesday, June 12, 2012

A Step Between

Anyone who knows me or has read this blog understands my views on private cloud.  As I've often stated, the vast majority of private cloud implementations and solutions (nearly all that I know of) are really mass virtualization.  In essence they provide nothing in terms of services, tools, or process to support the application layer.  As a result, enterprise cloud adoption in the Fortune 500 is slow, because enterprise-native cloud applications simply are not being built today by the builders and owners of these private clouds.  Instead vast numbers of applications are being virtualized, which is, of course, a good thing.  Lower costs, higher efficiency, and improved return on assets are lofty goals, and delivering on them certainly looks good to the CEO and CFO.

What I realize today is that perhaps we need a step between mass virtualization and the cloud, a new on-ramp encouraging companies to continue down the cloud road.  Private clouds are viewed by some as that step between virtualization and cloud: a path justifying a measured approach while leveraging existing assets and moving up the learning curve.  Not unreasonable, and I'm on the record with my concern about unintended consequences.  One real, indisputable consequence is that private clouds have made virtualization a requirement and made cloud more palatable.  The proverbial ball has been moved forward.  However, private cloud is just a limited version of cloud.  I feel we need something more fundamental, a capability that fills the gap between operating-system-dependent machine virtualization and operating-system-independent cloud resources.  So the question is: what should the next step be, so we keep the ball rolling and continue gaining momentum?

For the foreseeable future applications will run on operating systems, although I'm on the record on this topic as well.  Server virtualization lets us load applications inside a protected container, the operating system, separated from other applications by the hypervisor.  The hypervisor is an artificial construct created because applications simply do not share resources well, and the most common hypervisor, VMware, is not cheap.  But how much value does a hypervisor really provide?  If the operating system were improved to provide a sandbox for each application, to which resources could be assigned, and tools were added for application management, including images, backups, and portability, we would no longer need the hypervisor.  The grid computing pioneers, and specifically United Devices, had it right from the start: they provided a sandbox in which the grid application would operate and to which resources could be assigned or shared.  Who says grid isn't the father of cloud?!
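A crude sketch of that sandbox idea, using nothing but stock Unix process controls from Python: fork the application into its own process and cap its CPU and memory with OS resource limits.  This is a stand-in for real OS-level containers such as Solaris Containers, not a production design; the function name and the particular limits are illustrative.

```python
import os
import resource
import sys

def run_sandboxed(app, cpu_seconds, mem_bytes):
    """Run `app` in a forked child capped by OS resource limits (Unix only)."""
    pid = os.fork()
    if pid == 0:  # child: a crude application "container"
        try:
            # The kernel enforces these caps; no hypervisor in sight.
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
            app()
            sys.stdout.flush()
            os._exit(0)
        except Exception:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1

# A well-behaved app runs to completion inside its limits.
status = run_sandboxed(lambda: print("app ran inside its sandbox"),
                       cpu_seconds=5, mem_bytes=1 << 30)
assert status == 0
```

What's missing from this sketch, filesystem and namespace isolation, images, portability, is exactly the tooling gap I'm arguing the operating system vendors should fill.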

Of course none of that matters except to the innovator.  What does matter is that software licensing, which fell behind the cloud curve as publishers struggled to figure out how to protect revenue streams and intellectual property, realigns nicely in a container world.  Licensing of operating systems would be easier to manage and, more importantly, more efficient, because fewer instances should mean fewer administrators.  And an entire layer of software costs would be eliminated, a layer that can cost nearly as much as the server it runs on.  Based on my experience, the cost of a cloud environment would drop by as much as 30% to 45%.  That's enough to interest anyone.

In this transformation from hypervisor to container, the operating system battleground could reverse direction, from low-end operating systems such as Linux and Windows back toward data-center-centric UNIX stalwarts such as Solaris, AIX and HP-UX.  Oracle would be in the catbird seat because of Solaris Containers, assuming they know what they have.  However, many companies have migrated off Solaris to Linux, as is happening with AIX and HP-UX.  So what is the core Linux team working on?  What are the various distribution providers preparing to bring to market?  A move by the Linux world would more likely than not push Microsoft forward, but only after the concept is proven.  Naturally VMware would have to reinvent itself, and Citrix would have the opportunity to double down on CloudStack.  The systems management vendors would have to update their tools too.

The best news is that current cloud providers wouldn't need to change anything immediately.  Over time they would be able to move seamlessly to the new model, supporting a bifurcated environment in perpetuity if necessary.  Integration points would change, but the right recipe for success is for the innovators to make the change invisible to the applications and systems management tools.

I'll have to start digging to see if anyone is working in this direction.  It may be the new bandwagon I need to jump on, as my current bandwagon, Enterprise Private PaaS, is a five-year marathon.