Wednesday, December 4, 2013

A Rational Approach to Cloud Adoption

Today is the day.  You've built the mass virtualized infrastructure.  You have the buy-in from leadership across the business and IT.  You've included all the constituent teams from development and QA to audit and help desk.  It's time to flip the switch, move to the cloud, and do things better, cheaper and faster.  Ready, set.

STOP!!!  What about the applications?

What about the applications?  They're locked and loaded and ready to GO!

Yes, but are they the right applications? 

Huh?

Ugh...

It's a great day when the internal private cloud is ready to go with real applications doing real work, demonstrating real value.  There is good reason to be proud and get excited.  However, too often the one question that goes unasked is: which applications will be migrated to the cloud?  That simple question needs to be answered BEFORE cloudification begins.

As I've said for several years now, every good cloud strategy begins with a rationalization project.  Application rationalization is the eye opener which sets the strategic foundation for cloud adoption.  Just because an application can be moved to the cloud, or people want it moved to the cloud, is not enough of a justification.  There is only one chance to make a first impression.  Therefore we need to make sure we hit the cloud running, which means we're already sure of the direction, tempo, and weather conditions.  Nothing sells a new technology like relevance and nothing kills a new technology like irrelevance.  Good, bad or indifferent, it's the business that gets to decide which of the two cloud will be.

Often the discussion on migrating applications to the private cloud rightfully starts with a focus on the low-hanging fruit.  However, how can one know what the low-hanging fruit is when many organizations do not have a consolidated list of their applications, their dependencies, their costs, their users, or their technology stacks?  The decisions are often made on cursory, incomplete knowledge based on the prior experience of team members and their friends.  I personally believe this single factor is a significant reason why so many initial private clouds failed to meet the expectations of executives; they were built to support an undefined set of applications that did not match the reality of the company.  There's nothing like setting out on a journey of a thousand miles with no more than a vague idea of where you want to end up.

Application Rationalization is a discipline within Application Portfolio Management (APM) which provides a company with a 360 degree view of their application inventory.  It's surprising how many organizations, large and small, do not maintain accurate inventories as part of their APM strategy.  Done properly, an application rationalization effort provides an invaluable set of application metadata and drives strategic decisions to answer questions such as:
  • what applications are used, why, and by whom?
  • which applications have similar or overlapping functions?
  • which applications are strategic, tactical, or end of life?
  • what are an application's dependencies including its underlying infrastructure?
  • where does the application sit and what does it consume?
Of course I'm assuming the proper time and effort are expended so the metadata collected is robust and not superficial.  In such a case, mining the application inventory can provide tremendous information for planning cloud adoption, especially when the inventory can be queried to provide killer reports such as strategic applications with cloud-compatible dependencies.  The mining can help identify architectural requirements pre-implementation, shape the adoption roadmap, and establish baseline budget numbers for each adoption phase.
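
To make that concrete, here's a minimal sketch of the kind of inventory query I have in mind.  The schema, the classification values, and the list of "cloud compatible" dependencies are purely illustrative, not a prescribed APM data model:

```python
# Hypothetical sketch: mining an application inventory for cloud candidates.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Application:
    name: str
    classification: str          # "strategic", "tactical", or "end-of-life"
    monthly_cost: float
    active_users: int
    dependencies: List[str] = field(default_factory=list)

# Dependencies assumed cloud compatible in this illustrative environment.
CLOUD_COMPATIBLE = {"linux", "java", "postgresql", "apache"}

def cloud_candidates(inventory: List[Application]) -> List[Application]:
    """Return strategic applications whose dependencies are all cloud compatible."""
    return [
        app for app in inventory
        if app.classification == "strategic"
        and all(dep in CLOUD_COMPATIBLE for dep in app.dependencies)
    ]

inventory = [
    Application("order-entry", "strategic", 42_000, 1_200, ["java", "postgresql"]),
    Application("legacy-billing", "strategic", 90_000, 300, ["cobol", "db2-mainframe"]),
    Application("intranet-wiki", "tactical", 2_000, 4_000, ["linux", "apache"]),
]

for app in cloud_candidates(inventory):
    print(f"{app.name}: {app.active_users} users, ${app.monthly_cost:,.0f}/month")
```

Swap in whatever repository and attributes your rationalization effort actually produces; the point is that the question becomes answerable with a query instead of a hallway poll.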

"Moving to the cloud" is the new war cry of CIO's big and small, however like most things it's a combination of efforts which make cloud adoption successful.  Having a strong handle on the application landscape is the key to success because in the end cloud is about applications.  Adopting cloud as an infrastructure first strategy is akin to driving away for vacation and hoping you end up somewhere relaxing.

Tuesday, November 19, 2013

The Holy Trinity of Enterprise Public Cloud

It's only fair to use a religious reference when writing about the religious war public cloud computing is often subjected to in discourse.  Over the past five years we have established several immutable facts.  First, public cloud is cheaper than internal IT, period.  Second, the vast majority of security concerns surrounding public cloud are in fact nothing more than fear, uncertainty and doubt.  Third, companies cannot invest and innovate at the rate of cloud entrepreneurs and thus the gap between public and private cloud capabilities is widening at an increasing rate.  As a result the number of CIO's interested in adopting public cloud as part of their enterprise compute strategy is beginning to grow, particularly in the Fortune 1000.  It appears at this point we are down to three issues: noisy neighbor, guaranteed availability, and Internet security.

Noisy neighbors are just what you think: those neighbors who fill the air with sound, disturbing the tranquility of shared spaces.  When people think of public cloud they think multi-tenant, often to the exclusion of any dedicated model.  I've heard speakers at conferences say it isn't cloud if it's not public, and it's not public cloud if it's not multi-tenant.  Such thinking created an impasse for many years as enterprise CIO's could not take the risk of having a noisy neighbor consume all the computing resources available on a given server, starving out their virtual machines.  However the barriers are dropping as cloud providers address the noisy neighbor concern.  Amazon AWS and Verizon Terremark, for example, allow users to provision environments with dedicated servers.  Of course no multi-tenancy means a higher cost, but only marginally higher, and the value outweighs the increase.  As subscriptions to a dedicated model increase so will competition as others join the fray.
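
For the curious, here is roughly what asking for single-tenant capacity looks like on AWS EC2 using the boto3 SDK (a newer tool than this post).  The AMI ID and instance type are placeholders, and other providers expose the same dedicated option through different APIs:

```python
# Illustrative only: requesting dedicated (single-tenant) capacity on EC2.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-00000000000000000",      # placeholder AMI
    InstanceType="m5.large",              # placeholder instance type
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},   # no other customer shares the host
)
print(response["Instances"][0]["InstanceId"])
```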

Guaranteed availability is a difficult requirement to address.  Enterprise CIO's are concerned that at some point when a new VM is needed the provider won't have any more space available.  It's an interesting theoretical possibility, yet like many theories it has not been proven in practice (which is why it remains a theory rather than a law).  Public cloud providers have stayed well ahead of the demand curve, ensuring a steady supply of VM's even through the incredible growth of cloud over the past several years.  Combine this track record with the best practice that software should be cloud agnostic (capable of running on any cloud), the unlikeliness of a scenario where all providers are starved of capacity, and the availability of dedicated servers, and a starkly different reality emerges.  Done correctly, cloud guarantees greater availability than ANY company can achieve inside its four walls.

Finally we have the security concerns stemming from the use of the Internet to access public clouds.  What was true in 2010 is not true in 2013.  First we have providers such as Amazon AWS who support dedicated connections.  Although simple to understand on the surface, once the engineering is done to remove any single point of failure the implementation is quite complex.  Enter a much more sophisticated solution with AT&T NetBond, currently connecting clients via their private networks to IBM, Microsoft and CSC clouds.  NetBond eliminates the uncertainty of the Internet, connecting cloud provisioned resources seamlessly into the compute fabric of the enterprise.

Where do we stand today?  Although the market has addressed each of the individual issues of the trinity, we currently lack maturity and widespread adoption.  Few cloud providers deliver solutions to all three challenges.  Some of these solutions are new to the market with less than a year of experience under their belt.  However the change is real and we are now at the point where it's time for customers to start voting with their checkbook.  In order to reach the ubiquitous cloud nirvana desired by all, the Enterprise CIO's need to start paying for all the things they've said they need in order to use public cloud.  It's time to put up, and the next 24mo will determine the fate of cloud in the enterprise.  Will public cloud bring an economic argument unhinging businesses from the pace and skillset of their internal IT department?  Or will the market find new concerns to thwart progress and push the promise of public cloud further into the future?  From where I sit the die is cast.  Enterprises will find ways to adopt public cloud, and those who do so first will gain the greatest first mover advantage in the history of technology adoption.

Wednesday, July 24, 2013

Cloud vs Telecom

My current role has brought what I call the Achilles heel of cloud into full view.  Experts agree telecoms should own the cloud outright.  After all, a cloud is predicated on bandwidth, latency and security, so holding the keys to the kingdom the telecoms are in the best position to take the throne.  However to date no carrier has established a significant footprint let alone a dominant one.  When people talk about cloud invariably they do not talk about AT&T, Verizon or CenturyLink despite each having multiple cloud solutions, many recognized by Gartner as Magic Quadrant leaders.  I believe the primary issue is a lack of understanding in the market of the differences between mass virtualization and cloud.  Mass virtualization in the form of servers and storage is no threat to telecoms.  Use fewer physical boxes, shift to virtual machines, buy capacity on demand.  All of those are complementary to the underlying business of a telecom: selling bandwidth.  Cloud however is a threat because it's all about the applications and data, and the carriers want to provide the bandwidth to move and protect the data.  Today at the telecoms cloud is little more than a functionary serving the sale of bandwidth to customers focused on mass virtualization, and nobody has stepped out of line to focus on cloud.  However we are on the brink of a tremendous shift in the bandwidth world with the advent of Software Defined Networks (SDN), and I'm beginning to believe it's the game changer telecoms wish they could avoid.

SDN virtualizes the network in a manner similar to server and storage virtualization.  The goal is to ensure only as much bandwidth as needed is purchased and it can be controlled via automation.  CIO's expect SDN will not only transform their data centers but transform their Wide Area Networks (WAN) as well.  Through the tried and true mechanism of competition, telecoms will be incented to provide SDN-like features enabling companies to dynamically scale their network up and down and thus their ultimate bill.  Will this generate the 70% savings common in the world of server virtualization?  I don't think so, because most companies have already done the leg work to consolidate their telecom spend, so there aren't circuits that are 5-7% utilized like we had with servers.  In some situations companies are kidding themselves thinking they'll be able to save reams of money yet they are saturating their existing bandwidth and demanding more.

However there is a dark side here that either the telecoms don't see OR they see all too well.  Once SDN penetrates the market the pieces will be in place to engender a completely new architecture.  Combine the fault tolerance of cloud based applications designed to provide reliability on unreliable infrastructure with the cheap bandwidth of Cable, DSL, and other last mile technologies and you have a compelling option versus expensive telecom bandwidth.  Instead of purchasing a 1Gbps Ethernet connection, why not purchase 20 x 50Mbps connections at a significantly lower cost?  Yes, you give up the robust management tools, reliability, and SLA's of enterprise-quality connectivity.  You give up some ground in security.  However with applications that assume the infrastructure isn't reliable, running on a distributed infrastructure, and leveraging encryption both at rest and in transit, what is the real downside?  In fact there should be a benefit to having a more distributed network environment, and as cloud adoption increases I, like many, expect there to be fewer and fewer large scale data centers and an increasing number of highly distributed micro-data centers.  Owing to the adage "compared to the cost of moving data everything else is free," companies are incented to find new solutions to reduce the cost of bandwidth.
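
A quick back-of-the-envelope comparison makes the point.  The prices below are invented; substitute real quotes:

```python
# Hypothetical comparison: one carrier Ethernet circuit vs. many aggregated
# broadband links.  All dollar figures are made up for illustration.
carrier = {"links": 1, "mbps_each": 1000, "monthly_cost_each": 8000}   # 1 Gbps Ethernet
broadband = {"links": 20, "mbps_each": 50, "monthly_cost_each": 150}   # 20 x 50 Mbps

def summarize(option, failed_links=0):
    live = option["links"] - failed_links
    return {
        "capacity_mbps": live * option["mbps_each"],
        "monthly_cost": option["links"] * option["monthly_cost_each"],
    }

print("carrier                      :", summarize(carrier))
print("broadband                    :", summarize(broadband))
print("broadband with 2 links down  :", summarize(broadband, failed_links=2))
```

Even with a couple of links down, the aggregated option keeps most of its capacity at a fraction of the cost, which is exactly the trade a fault-tolerant application can afford to make.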

There are situations, such as high-speed trading and national intelligence, where such an approach doesn't make sense (at least not yet).  However there are also numerous situations where such an approach makes more sense than shelling out the money for bandwidth of a higher quality and resiliency than is required.  If SDN makes using cheaper alternatives easier, why would a telecom be interested in adopting SDN?  Of course we know by the hundreds of examples the worst strategy is to fight change.  Therefore at least one proactive option for mitigating the loss of revenue is to move up from mass virtualization to real cloud; focusing on the applications and data.  Large carriers can build capabilities into their network to facilitate the collection, analysis and distribution of data at the point of collection, near the point of collection, or in a central repository.  If carriers built solutions which automatically determined where an application should run, where the data should be collected and distributed, and did so in a secure manner, I believe it would make a very compelling argument for their more expensive but more capable bandwidth.

Today the telecoms are going down a road of providing increasingly undifferentiated network services which puts them squarely in the dreaded commodity bucket.  I don't see how the market doesn't find a way to negate the need for quality and reliability.  It's simple economics at play and economics always wins!

Wednesday, June 26, 2013

The New Path to Success

For decades vendors selling technology products and services have focused on IT leadership as their target.  When the CIO title came into being, the leader of IT became the quick sales target.  Relationships were established, decisions made, and increasingly the sales discussions slid down the executive ladder until Directors were driving technology decisions.  Then the wheels came off the cart.  CEO's began to notice how much money was being spent in IT and how little demonstrable value was being attained.  New CIO's came in to end the practice of implementing technology for its own sake, instead focusing on business needs.  Conversations ascended to the top of the ladder but still focused on the needs of IT.  After some time passed, the conversations slid back down the ladder, until CEO's noticed again how much money was being spent in IT without demonstrable value.  New CIO's came in to end the days of order taking and start taking a strategic approach to meeting the needs of the business.  Again the conversations started at the top of the ladder, and again after new directions were established conversations descended down the ladder.  Over the past five years CEO's are again questioning the cost of IT versus the value delivered, realizing IT was too focused on what the company was doing and not enough on what the company should be doing.  CIO's had succeeded in aligning with business operations, but missed out on business innovation because it's rare for a company to have a systemic approach to evolving its business model.  As a result many CIO's struggle with the redefinition of their role where they are in the odd position of both leading and following their customer.  Those making progress appear to have had the same epiphany: it's all about the money!  The new focus is revenue realization: how technology can drive new business models creating new revenue opportunities.

A course correction by the CIO toward innovation is manifesting itself in multiple ways.  Now the conversations are strategic, they are focused on business results, and they sound a lot more like talking with the COO, CFO or CEO than a technology leader.  CIO's are refocusing on strategic partners, identifying those companies of greatest value to them as seen through the lens of the future.  No longer are CIO's identifying strategic partners on spend alone.  In fact some are turning their backs on those with big wallet share if they can't earn a seat at the new table and talk business strategy.  No vendor or service provider willingly sits back and watches their revenue erode.  The winners are those who can, or already are, talking business strategy at the CIO level, technology strategy at the CTO level, architecture at the VP level, and tactics at the Director levels and below.  Most paint the acquisition of PwC Consulting by IBM in 2002 as marking a shift in IBM to a "services led" company.  However IBM already had IBM Global Services, a consulting arm with tens of thousands of employees.  It wasn't "services led" that made the difference leading to a $200+ stock price, it was strategic relevance.

PwC Consulting had entrée to the C-suite at most Fortune 1000 organizations.  How?  PwC was, and continues to be, an audit firm.  As an auditor they get direct access to the most senior executives and typically the audit committee of the board of directors.  Often there is a burden of knowledge: knowing the most intimate secrets of a company.  Trust is earned over years of dedication to a client, and this trust fostered opportunities when clients needed operational and later technological assistance.  Each of the audit firms established large consulting practices to serve the needs of their captive clients.  Of course regulators became suspicious of such a close relationship, and through Sarbanes-Oxley and the dogged but erroneous pursuit of Arthur Andersen pushed auditors to divest their consulting arms.  Ernst & Young sold theirs to Cap Gemini, KPMG spun theirs off as BearingPoint, Deloitte created some separation but maintained the relationship, and PwC prepared to take PwC Consulting public as Monday (ugh).  IBM jumped the curve through the acquisition of PwC Consulting and gained immediate relevancy in the C-suite.  The entire company pivoted.  Gone were the commercials about technology and in came the famous IBM'er commercials talking about how they were focused on solving business problems.

Today the new path to success for vendors and service providers is business relevance.  In many cases this means creating relationships with the business leaders, sometimes without the endorsement of IT.  CIO's are learning that business leaders are technology savvy enough to carry on high level discussions with their beloved partners, when those partners have demonstrated an understanding of the business.  A new perspective leads to new conversations, new insights, and expands the opportunities for innovation.  And no partner worth partnering with will willingly throw IT under the bus.

As IT shifts from a service provider to a business platform provider, operating as a broker responsible for finding solutions whether bought, rented, leased, or built, the onus is on IT to be strategically relevant to the business.  The days of business-IT alignment are over; it's not enough.  The two need to become one, operating in unison instead of at arm's length.  The new rising term for this relationship: immersion.  Product and service providers who can immerse themselves in the business will reap the rewards.  Those who don't will backslide into oblivion, regardless of how much money is spent.

Wednesday, June 19, 2013

Economics Always Wins

Much of what we learned early in school proves to be only partially correct.  Thomas Edison invented a light bulb, not the light bulb.  Nikola Tesla didn't discover alternating current, he invented the AC multi-phase induction motor.  And Henry Ford didn't invent the assembly line, he was the first to apply it to mass production.  So the question is why were the considerable accomplishments of these men, and many others, embellished?  Of course it's because they changed the world, but the way they changed the world is simple economics.

Successful inventors tend to have one thing in common; they made their inventions a must have.  In other words they created a compelling economic argument in which the benefits of the invention far outweighed the cost.  Edison made the first long lasting light bulb, one with a life measured in months, not days.  Tesla's motor was the key to unlocking the power of AC which delivers power over a greater distance at a much lower cost than the competing DC system (of Edison).  And Ford continuously worked to make his vehicles less and less expensive through optimization and automation continually expanding his potential market.  Economics drove the success of the McCormick reaper, Otis elevator, Carnegie steel, and even the Singer sewing machine (which Singer built by infringing on the patent of another inventor, but Singer made the sewing machine affordable so many give him the undeserved credit).

The reality is that by definition, economics always wins.  More properly said, a compelling economic opportunity always wins in the market.  When one doesn't win we find out that, in fact, there was another more economically compelling option.  Public cloud will win over Private cloud for one simple reason: economics.

The foundation of the compelling economic argument of Public cloud is simple: it enables companies to leverage the assets of others for their benefit on a pay-as-you-go basis.  There are numerous examples of cloud in the real world, but my favorite is shorting stocks.  When an investor wants to short a stock, make a bet against the future value of a stock, they have to find someone who has a 'long' position, someone who owns the stock and intends to keep it.  The 'shorter' borrows the 'long's' stock for a fee, sells that stock to lock in today's price, waits for the stock to drop (and that's the risk), then buys back the stock at the lower price.  The 'long' is given back stock equivalent to what was borrowed plus the fee, and the shorter books the profit.  The only way this transaction can work is because someone owns the underlying asset, the stock, and it doesn't have to be the beneficiary of the transaction.  Conventional thinking says own the asset so there is a redeemable value; money spent as an expense has no residual value.  This argument is true when assets have the opportunity to grow in value or the cost of capital is low enough to justify the investment.  Nobody has ever argued technology is a wise asset investment, but it is a necessary one.  So it's no surprise that company after company is getting over their fear, uncertainty, and doubt and moving in the direction of Public cloud where they can leverage the assets of others.
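
For those who want the arithmetic, here is the transaction with made-up numbers:

```python
# Worked example with invented numbers: shorting 100 shares borrowed from a 'long'.
shares = 100
sell_price = 50.00      # price when the borrowed shares are sold
buy_back_price = 42.00  # price when the shares are repurchased and returned
borrow_fee = 75.00      # fee paid to the owner of the shares

profit = shares * (sell_price - buy_back_price) - borrow_fee
print(f"Shorter's profit: ${profit:,.2f}")   # $725.00, earned on an asset someone else owns
```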

Companies, in particular banks, make quite a bit of money by leveraging the assets of others.  With no assets to buy, capital is preserved which can be used for other investments (the Capital Preservation argument).  Instead, the company pays for the computing and storage power as an expense enabling them to book the cost against the earnings generated from the expense.  The net result is a more accurate reflection of the financial health of the company (the Cap-Ex to Op-Ex argument).  By not acquiring assets there is no need to service them in the form of people, power, facilities, management, and overhead (the Cost Savings argument).  Best of all, because the underlying asset is owned by someone else, making a decision to change assets is easier and less costly (the Time to Market argument and my Asset Gravity and Friction argument).

What about Private cloud?  It's capital intensive and asset heavy, weighing down both a company's cash flow and balance sheet unnecessarily while restraining innovation by piling up assets that must be re-used.  Private cloud is an on-ramp to cloud, and like the on-ramp to a highway it's best left in the rear view mirror if the driver wants to progress.  What Private cloud gives a company is a security blanket as they learn the ins and outs of cloud.  How?  Private cloud, as it largely exists today, embraces traditional views of security, control and reliability.  However as the business responds to the consumerization of technology, the business needs new capabilities.  And those capabilities are largely incompatible with traditional security, control, and reliability approaches.  This creates a quandary: either continue doing what we've always done and surf the downward wave of our demise, or reinvent ourselves and ride the crest of change.  The companies that succeed in their markets will make the transition.  They'll question convention and build new approaches compatible with new needs.

To those who don't agree, all I can say is feel free to fight economics.  You'll be in great company with names like Studebaker, Sun Microsystems, Blockbuster, and Hostess.

Monday, April 8, 2013

2nd Generation Cloud Lessons

So where's the post on 1st generation lessons? I didn't bother to write one because I feel the popular press has done a good job of detailing the promise and shortcomings of the public cloud.  The fact we had to divide cloud into private and public (and the mythical hybrid cloud) underscores we started on a rough foundation.  Marketers have done a great job of making cloud as confusing as possible to the point that again, the term is meaningless because it's so misapplied and misunderstood.  Ok, off the soap box.

What I want to share are the 10 lessons I've documented from those in the throes of the 2nd generation of cloud, which is primarily the combination of Public PaaS and Private IaaS.  These are listed in the order in which companies seem to run into each issue.

Lesson #1: The Fear of Automation
I have seen a tremendous fear of turning over the operation of an infrastructure to rules-based management systems. First, the rules by which administrators operate are not as well defined and understood as managers would like to believe. More intuition and on-the-fly problem solving are involved than anyone wants to admit. It's hard to automate something that is not well understood and for which rules cannot be defined. However the bigger limitation seems to be fear; if we automate it we lose control of it. I would agree if the concept of fail-safes and emergency stops didn't exist.  If we can automate assembly lines to the point modern factories are often lights-out environments, we can automate the data center.  The same rules apply.
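
Here is a sketch of what I mean, with the fail-safe built in.  The health probe and restart hook are hypothetical stand-ins for real monitoring and orchestration tools:

```python
# Minimal sketch of rules-based remediation guarded by an "emergency stop."
import time

MAX_RESTARTS_PER_HOUR = 3            # the fail-safe: beyond this, halt and page a human

def is_healthy(service: str) -> bool:
    return True                      # placeholder health probe

def restart_service(service: str) -> None:
    print(f"restarting {service}")   # placeholder remediation action

def remediation_loop(service: str, iterations: int = 10) -> None:
    restart_times: list = []
    for _ in range(iterations):
        if not is_healthy(service):
            recent = [t for t in restart_times if time.time() - t < 3600]
            if len(recent) >= MAX_RESTARTS_PER_HOUR:
                raise RuntimeError(f"{service}: automation halted, human attention needed")
            restart_service(service)
            restart_times.append(time.time())
        time.sleep(1)

remediation_loop("billing-api")
```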

Lesson #2: Dynamics of Image Management
Imaging is an important part of rapid provisioning.  Each environment from operating system through the application is wrapped into a nice package for quick delivery to compute resources when required. However the constant need for patches, updates, emergency fixes and the like makes images obsolete within days of creation. It is important to understand the days of a static, golden server image are over. There are several approaches, of which at least three seem to prevail: active manual image management, automated image management, and image cascading.  Active manual management requires a subset of the team to rebuild images as necessary based on changes to ensure images stay up to date.  Automated management lets the team push the burden of updates onto a tool which merges changes into the image.  Image cascading de-couples the variety of installed components and does an automated install of each rather than a true image copy.  Each approach has its merits and drawbacks, but the biggest issue to date is the quality of the available tools.
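
A rough illustration of the cascading idea follows; the base image, package names, and the deploy command are illustrative only:

```python
# Sketch of "image cascading": start from a minimal base and layer on components
# via automated installs at build time, so each build picks up current patch levels
# instead of relying on a static golden image.
import subprocess

BASE_IMAGE = "ubuntu-minimal"
LAYERS = [
    ["apt-get", "update"],
    ["apt-get", "install", "-y", "openjdk-8-jre"],   # runtime layer
    ["apt-get", "install", "-y", "tomcat7"],         # app-server layer
    ["deploy-app", "--version", "latest"],           # application layer (hypothetical tool)
]

def build_image(base, layers, dry_run=True):
    print(f"FROM {base}")
    for cmd in layers:
        if dry_run:
            print("RUN", " ".join(cmd))              # show what would be installed
        else:
            subprocess.run(cmd, check=True)          # execute inside the build environment

build_image(BASE_IMAGE, LAYERS)
```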

Lesson #3: Where's the "pay as you go"?
The economics of cloud are what make cloud such an enticing tool for the enterprise. Even in the single-minded private cloud model, the business users who are paying the bills only want to pay for what they consume. Today the IT department solves this problem, if at all, by jacking up the per hour rate so the end result looks like the consumer is only paying for what they use. Per hour consumption of resources makes IT services a commodity and glosses over the value of what they provide. Hence there is tremendous fear in providing a "pay as you go" model to the business. However once some smart analyst compares internal costs to the public cloud all hell breaks loose and the explanations begin. In the end it's this "trapped customer" opinion that will go a long way to unraveling a significant part of the on-premise private cloud. The end model needs to provide the business with options: host internally with all the control and security at X per CPU hour, host off-premise in a private cloud with a bit less control and security at Y, or go for the bargain basement at Z with little control and minimum security. Of course when options Y and Z are on the table it will require the vendors to provide "pay as you go" just like the regular public cloud, and they fear this as much as internal IT.
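
A toy chargeback model shows how simple the mechanics are once consumption is actually metered.  The rates for X, Y, and Z below are invented:

```python
# Hypothetical per-CPU-hour rates for the three hosting options described above.
RATES_PER_CPU_HOUR = {
    "internal-private": 0.38,   # X: full control and security
    "hosted-private":   0.22,   # Y: off-premise private cloud
    "public":           0.09,   # Z: bargain basement
}

def monthly_bill(option: str, cpu_hours_used: float) -> float:
    """Charge only for metered consumption, not a flat allocation."""
    return RATES_PER_CPU_HOUR[option] * cpu_hours_used

usage = 4_000  # metered CPU-hours this month
for option in RATES_PER_CPU_HOUR:
    print(f"{option:16s} ${monthly_bill(option, usage):9,.2f}")
```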

Lesson #4: De-provisioning
Once the "pay as you go" mountain has been climbed, the next challenge is to actively manage the costs. Public cloud users have been on this bandwagon for the past several years, but it's new to the Enterprise. As budgets for cloud move from IT to the lines of business for Public SaaS, it stands to reason they will for other technology services as well, and with good reason. Want IT to be competitive? Require them to compete! The challenge is identifying servers which are at low utilization and migrating jobs off those servers to servers with capacity while not interrupting users. Tools, integration, and automation are all required to make de-provisioning happen in a meaningful way.
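
A sketch of the first step, assuming a monitoring feed of utilization data (faked here):

```python
# Flag de-provisioning candidates: servers whose average CPU utilization stays
# under a threshold for a sustained window.  The utilization numbers below are
# hypothetical; in practice they come from the monitoring system.
UTILIZATION = {            # server -> average CPU % over the past 30 days
    "app-01": 4.2,
    "app-02": 61.0,
    "batch-07": 2.8,
    "db-03": 48.5,
}

THRESHOLD_PCT = 10.0

def deprovision_candidates(utilization: dict) -> list:
    """Servers to migrate workloads off of and reclaim."""
    return sorted(name for name, pct in utilization.items() if pct < THRESHOLD_PCT)

print(deprovision_candidates(UTILIZATION))   # ['app-01', 'batch-07']
```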

Lesson #5: Tool Maturity (and the lack thereof)
By this time organizations get pretty frustrated by the amount of integration and business rules development required, and the barriers caused by having disparate systems from vendors whose visions may overlap but rarely lead to the same future. Open standards and open source will be the keys, but both will necessitate patience which threatens the competitive advantage of cloud. Organizations that run complex IT shops well will benefit while others should understand what they are getting into. Moving to a software defined world will help, but it will not be the silver bullet.

Lesson #6: The Application Development Gap
Most Private IaaS implementations fail to garner the widespread adoption used in the business case to justify the investment, and a pattern has emerged on the reason why. Application developers are rarely invited to the party. As its name implies, implementers of Private IaaS tend to be infrastructure people who focus on hardware, virtual machines, and operating systems. To them the job is done once the self-serve web site is up and running, enabling consumers to build the server which suits their needs. Web developers tend to be comfortable in this model, however they are not the norm. Most developers cannot double as system administrators and wouldn't know what to do if a Linux or Windows box fell into their lap. My favorite quote of all time from a Computer Science major was "My knowledge of the hardware is limited to the on/off button". And that's why I got my degree in Computer Engineering!  Application developers have needs too, and if those needs are ignored they will return the favor. Every large enterprise I know of, with one exception, has made the same gaffe.

Lesson #7: The Role of Automated Testing
Developing applications in the cloud is half the battle; the other half is testing. Clouds tend to be built on web technologies (Apache, server side scripting languages, web services, HTML5 and CSS3, etc.) using web approaches (frameworks, Agile methods, etc.) which result in a much faster pace of development and release. As a result it's important to be able to test the application in each iteration; add in the constant patches and updates and it becomes paramount. Without automated testing, cloud development is stillborn.
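
Even a trivial smoke test, run automatically on every iteration by a runner such as pytest, beats manual regression.  The endpoint below is a placeholder:

```python
# Minimal automated smoke tests intended to run from a CI job on each release.
import urllib.request

BASE_URL = "https://example.internal/app"    # hypothetical application endpoint

def test_health_endpoint():
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200

def test_version_is_reported():
    with urllib.request.urlopen(f"{BASE_URL}/version", timeout=5) as resp:
        assert resp.read().strip() != b""
```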

Lesson #8: N+1
Ugh. From the stories I've heard and situations I've experienced, this is the one challenge that catches EVERYONE off guard. Moving from one cloud version to another is not for the faint of heart, but it's a reality. Major upgrades in capability or underlying components are often the cause of significant outages at even the big players like Amazon, Netflix, eBay and Facebook. The only way to avoid this is to grow new clouds next to existing clouds and migrate over time using automated testing, keen observation, strong alerting and exception management, and a bit of prayer. Falling back to the previous environment is a requirement. It's the change management guru's ultimate test!
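
Conceptually the migration looks something like this sketch; the error-rate hook is a stand-in for real monitoring, and the traffic weights would drive a load balancer or DNS in practice:

```python
# Sketch of migrating from cloud N to cloud N+1 by shifting traffic in steps,
# with an automated check and a fallback at every step.
STEPS = [1, 5, 25, 50, 100]          # percent of traffic sent to the new cloud
ERROR_BUDGET = 0.01                  # roll back if the new cloud exceeds 1% errors

def error_rate(environment: str) -> float:
    return 0.002                     # placeholder: read from monitoring

def migrate() -> str:
    for pct in STEPS:
        print(f"routing {pct}% of traffic to cloud N+1")
        if error_rate("cloud-n+1") > ERROR_BUDGET:
            print("rolling back to cloud N")
            return "rolled-back"
    return "migrated"

print(migrate())
```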

Lesson #9: Security Scares
At some point the security team needs to get comfortable with the idea data will sit off site. I know of no Fortune 500 with a well defined policy on what data can be stored in a cloud off-site and what corresponding standards, processes and tools are to be used. Yet every Fortune 500 company has data stored off-site. If for no other reason, the proliferation of data and the emerging business value of mobility will compel the business to need capabilities outside the traditional data center.

Lesson #10: Leader Egos and Agendas
Managers build kingdoms of control, leaders unify kingdoms into nations. However many a great leader still has an ego and an agenda, and the wheels can come off the cart quickly when leaders clash. As the budgets move to the business and IT moves toward a commodity, the claws will come out. CIO's are increasingly a competitive threat to COO's and I believe will be the next generation of CEO's, a view shared by several friends in management consulting. I have to believe CFO's will have a say, a traditional source of Fortune 500 CEO's for decades. In my humble opinion the CIO role will dissolve and CIO's will become the next generation of COO's for all the right reasons: technology and the business will be viewed as one coin instead of two sides.

Wednesday, March 20, 2013

Enterprise PaaS is a Great Strategy

When the CxO's get together to talk about strategy in our technical world, where is the conversation focused? Increasingly the discussion includes catch phrases like mobility, cloud, social media, and big data. In my experience the discussion starts with the business process, recently they've added the customer experience to the conversation, and about as deep as they go is down to the data. Why is that important to understand? In the world of the CxO they don't care about any technology below the apps and data (with the obvious exceptions of the CSO and the CTO who, often enough, only cares down to the integration layer). Everything else is "staffed out" to their senior execs, then down to senior managers, then managers, and then to the teams responsible. I'm painting with a broad brush I admit, and I'm not saying there aren't CxO's who can talk stored procedures or network design, but it's not their focus. When it comes to cloud, the platform for mobile applications, social media, big data and business collaboration, the C-suite gets it! CEO's love the time to market benefits. CFO's love the capex vs opex and asset light approach of cloud. CIO's want the agility just as much as the COO's. CSO's like the private cloud model keeping everything inside the four walls.

Today the cloud strategy at large enterprises starts with an infrastructure-only private cloud build-out, often built upon a VMWare foundation. Company by company the consistent learning is that mass virtualization does not lead to native cloud application development, the ultimate goal of cloud. Whether from a vendor or developed internally, native cloud applications deliver tremendous benefits including continuous availability, software reuse, efficient use of resources, and easier integration. What is missing at first, and often added quickly after the infrastructure private cloud fails to meet adoption goals, is the development of an Enterprise Platform as a Service (PaaS) capability to engage the in-house developers and bring them into the cloud.  Enterprise Private PaaS is a significant part of the Fortune 500 Ready Cloud.

Simply put, a PaaS solution provides both a framework and supporting services to simplify the development of native cloud applications. An Enterprise PaaS scales the concept to meet the needs of the enterprise instead of an individual developer, team or small company. It may appear inconsistent that someone who advocates the use of Public Cloud (off-premise, multi-tenant) is advocating a Private Cloud focus. True, however here are my arguments in favor of Enterprise PaaS:
  • Developing Native Cloud Apps - drives developers up the learning curve on developing native cloud applications by reducing barriers and providing a platform for learning. Nobody can learn to develop for a cloud without a cloud.
  • Leveraging a Hybrid Cloud - CFO's and CIO's are keenly interested in tapping into the economic benefits of using someone else's IT assets (private off-prem or public). This is where the agility benefits are realized; the ability to scale up and down and deploy on demand as needed.
  • Taking Out Costs - infrastructure costs run 4-15% of an IT budget whereas applications comprise 30-50%. In addition, application costs are directly tied to personnel, an expensive resource that is difficult to attract and retain.  Developing enterprise cloud services provides a significant opportunity to reduce development, testing, and people costs.  It helps that cloud solutions are largely predicated on open source rather than proprietary solutions.
  • Moving to an Asset Light Foundation - cloud provides the opportunity to unchain ourselves from the evil reality of asset ownership, so driving adoption with Enterprise PaaS helps drive the benefits of asset ownership reduction.
  • Third Party Web Services - Nobody wants to reinvent the wheel, especially when it means starting back at the beginning when the wheel was made of rock shaped by another rock and lots of elbow grease.  Third parties are building and offering API's as a means of deeper and broader integration.  In the future many solutions bought today as services, and even applications, will be replaced by automated integration via web services.
  • Talent Management - Its already tough enough to compete for talent against companies like Google, Facebook, and Amazon.  Providing access to the same technologies will go a long way to attracting and retaining talent who don't want to fall behind their peers.
  • Path to the Public Cloud - you knew it was coming, my argument for how private cloud benefits public clouds.  As one who has warned about the pitfalls of private clouds, the benefit that outweighs the cost is IT'S STILL CLOUD! Any step that increases the understanding of cloud is a positive step. Let the security and economic considerations play out over time. I have cast my lot with the group who expect the public cloud will win in the end. I don't care how companies get there.
Enterprise Private PaaS is a critically important step in the direction of cloud adoption. Without the ability to write native cloud applications, the true value of cloud, revenue generation, cannot be delivered. Writing these applications requires a shift in the mindset of developers, new tools, new programming skills, and steady movement up the learning curve.

Enterprise PaaS is the gateway to cloud innovation - step on through!

Wednesday, March 6, 2013

Creating the Cross Channel Customer Experience

We all know it can be pretty frustrating today. You start doing something online, but the phone rings. You have to run so you grab your keys and head out the door. Luckily there's a break in the day so out comes the mobile phone, but the app has no record of what you were doing. So you call the company for support, where you spend the first 10min in a queue followed by 10min of trying to describe to the person what you were doing. The empathetic agent is willing to help, but you need to start over. Of course the piece of paper with the information you need is on the desk, at home, next to the phone.  When you get home you rush to your desk, jiggle the mouse so your screen turns back on, only to find out your session has timed out.

Ugh.

Today companies are struggling to provide a consistent customer experience across the variety of interaction channels. Is it unreasonable to expect what you start via one channel you continue via others, bouncing back and forth as needed, until the transaction is complete? No. However at the technical level it's no easy task to accomplish. The online application comes from the eCommerce team, the mobile application from the Mobile team, CRM from the contact center group, and the retail outlets only have their Point of Sale system which is not the most friendly tool. We've spent a lot of time and money getting to "one view of the customer", bringing CRM into the shared services IT organization as the golden record of the customer. We've spent more time and money integrating our other tools to the golden record; some are new enough they had access from day one. So we know who the customer is, what they've bought before, every time they've called, and even how valuable they are as a customer. However what we don't know is what they are doing right now with us. Some of the systems don't track it, often in the name of security despite the customer having passed multiple tests to initiate the interaction. Why didn't we solve for this problem at the same time as we did the golden record? Simple. We had fewer channels, customers had a strong preference for a single channel, and we were focused on the customer identity, not their goal.

How do we solve this problem? We need to start tracking what the customer is doing and sharing that data across channels. Many web sites are good at solving at least part of the problem, enabling a customer to pick up where they left off, or signaling for help when they get confused. However this capability is integrated into the channel. Do we want to build a solution for each channel? If so, how do we keep transaction components in sync, especially now that there is no channel affinity? I believe the answer is building a transaction index.

Building a transaction index service enables each channel, including ones not yet mainstream such as the connected car and M2M (not familiar with M2M? Worth doing some research), to work in concert. As expected the transaction index would uniquely identify every transaction by person and identify every transaction type. Combined with date/time stamps, channel indicators and device identifiers, a developer would have enough information to make intelligent decisions to maintain the customer experience throughout the transaction across all channels. As a service it's designed for scale, reliability, and availability with geographic caching to aid in speed (keeping the data local while transactions are in a pending state). Every time a transaction is initiated the index is consulted to determine if the transaction is a continuation of a previous interaction or new. Should any doubt arise the customer could be queried. The index identifies the current step in the transaction, provides access to the data already collected, and returns this information to the requesting application. Each transaction step updates the index so it contains the most current information. Once a transaction is complete it is removed from the index, whether deleted or archived. Existing policy management would continue to govern what processes and data are available in each channel.
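
A bare-bones, in-memory sketch of such an index follows. The field names and storage model are illustrative; the real service would add persistence, geographic caching, and the policy checks mentioned above:

```python
# Hypothetical transaction index: track in-flight transactions across channels.
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class TransactionRecord:
    customer_id: str
    transaction_type: str
    channel: str                     # "web", "mobile", "contact-center", "retail", ...
    device_id: str
    current_step: str = "started"
    data: dict = field(default_factory=dict)
    updated_at: float = field(default_factory=time.time)

class TransactionIndex:
    def __init__(self):
        self._index = {}

    def begin(self, customer_id, transaction_type, channel, device_id):
        txn_id = str(uuid.uuid4())
        self._index[txn_id] = TransactionRecord(customer_id, transaction_type, channel, device_id)
        return txn_id

    def find_open(self, customer_id, transaction_type):
        """Called when a channel starts an interaction: is this a continuation?"""
        return [(tid, rec) for tid, rec in self._index.items()
                if rec.customer_id == customer_id and rec.transaction_type == transaction_type]

    def update(self, txn_id, step, channel, device_id, **data):
        rec = self._index[txn_id]
        rec.current_step, rec.channel, rec.device_id = step, channel, device_id
        rec.data.update(data)
        rec.updated_at = time.time()

    def complete(self, txn_id):
        self._index.pop(txn_id)      # or archive, per retention policy

# Start on the web, continue on the mobile app.
index = TransactionIndex()
txn = index.begin("cust-123", "open-account", "web", "laptop-9")
index.update(txn, step="documents-uploaded", channel="mobile", device_id="ipad-2", docs_verified=True)
print(index.find_open("cust-123", "open-account"))
index.complete(txn)
```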

Such a solution enables a more robust customer experience. Transactions can be aged to trigger escalations or support. Abandoned transactions can trigger proactive responses from outbound emails and calls to a query from a retail employee to determine whether or not the customer needs assistance.

As a challenge I've been considering for the past few years, my wife's experience with a well known web site over the past 12hrs prompted me to put forth my idea. With an account she accesses via the web as well as an iPad app, she did some preparatory work on the web site last night. She decided this morning to complete the work on her iPad. When she went to save her work, she had to sign in to the service. After doing so, her three hours of work on the iPad app disappeared and only the preparatory work done via the web site remained. Honestly there was no excuse yet I totally understand why it happened. At a minimum you'd hope it would ask you a question. However the online and app versions were written by different teams, at different times, and the transaction data is siloed even though they obviously share a common golden record.

Perhaps my idea is naive, perhaps it's genius. I see it as an extension of a tool, the index, into an area where a problem has emerged. Like most problems I expect people to look at the most common solution: building new capabilities into each channel application. I prefer the cloud approach. Build the service as a native cloud app and you gain all the scale, reliability and availability attributes while operating at the lowest possible cost. What's not to like?

Thursday, February 21, 2013

Who is the Competition

Everyone in business maintains some level of vigilance over their competitors. IBM watches HP closely, Deloitte watches PwC, Apple peers over their shoulder at Google. Although I agree in many cases, I often see companies blindsided by a "non-traditional" competitor, which begs the question: who really is the competition? I'm not waxing poetic about inner demons, the crazed lone employee who steals the company's secret recipe, the equally crazy genius who takes his unorthodox and ridiculed ideas elsewhere, nor those competitors the analysts and investors have identified for the CEO. There is one very significant competitor few seem to ever recognize, even after that competitor has disrupted their business. Perhaps there is an MBA term, but not having an MBA I'll coin my own for this entity: the Hidden Competitor.

Hidden Competitors are not obvious, primarily because they are not trying to hide. Innocently they execute on their business model day in and day out doing whatever it is they do. These companies are off the radar because they don't compete head to head with those companies they are disrupting. What they are doing is changing expectations: of customers, business partners, and possibly even the market. Although there are many examples, one of my favorites goes back to a conversation at the Disney Company several years ago.

While providing consulting services at Disney World I asked a Disney executive what he thought about the recently opened Islands of Adventure at Universal Studios. He said he wasn't concerned. I asked him if he felt that way because of Disney's superior product, the head start Disney had with Disney-MGM Studios, or perhaps the size of the tourism business in Orlando. No. He said he wasn't worried because Universal doesn't elevate a guest's expectations on service. What? He went on to explain he worries about those companies who provide their customers a great customer experience. Why? Those same customers come to Disney World as guests and expect Disney to provide an even better experience, the best in the world. Wow! Who was on his radar? Starbucks, WalMart, and FedEx. I was shocked. When I saw Disney's NexGen concept roll out there was no question in my mind, whether proactive or reactive, there was some company in their rearview mirror they saw coming on quickly preparing to disrupt their business.

Today disruption is a tremendous challenge. Our big industries and big companies do not react well to change that is unplanned. As a result retail, banking, communications, auto, insurance, and just about every other corner of the business world is under threat by Hidden Competitors. Although it doesn't take a wild imagination to see a future with the Mall of Amazon, Bank of WalMart, Google Networking (Kansas City ring a bell?), Apple Car, or PayPal Insurance; the real threat is in reacting quickly enough to the Hidden Competitor who disrupts your business from afar and redefines the expectations of your customers, partners, and even employees. Whoever reacts quickest to meet the changed expectations has the opportunity to gain market share, rapidly. Disruption does not happen overnight, however it's rarely recognized when it hits the market. At first disruption is dismissed, then recognized for what it is, and finally panic sets in and everything but the kitchen sink gets thrown at the wall. Perhaps someday I'll understand why so many companies don't start planning for the disruption at the start. Sure, some ideas will fail. However what is the cost of preparation vs the cost of reaction?

Naturally this challenge aligns with the bigger issue of large companies innovating effectively; not a strong suit. Large companies are struggling to keep pace with the consumerization of technology and to gain a deeper understanding of social media. As they invest their time and effort, I hope they consider the actions of companies in other verticals beyond the cursory glance of what technologies they are using and how. Focus on understanding how they are changing the expectations of those they interact with on a daily basis. Consider how these new expectations will impact your business. And define a course of action to disrupt yourself in anticipation rather than waiting until after you've dismissed it, recognized it, and hit panic mode.

Monday, February 11, 2013

The Next Evolution of IT

The times they are a-changing...how trite! It's amazing how bad humans are at change considering nothing in our life is constant, including ourselves. However it's also true that some things never change. Frankly it is this dichotomy that generates the employment of millions in the field of consulting. One group of consultants tells companies how to change, losing sight of the constants; another group advises on how to reduce the variables to regain consistency. Then there's this small group, of which I'd like to think I'm a member, who advise a state change where a new set of constants and variables applies. So you can imagine, with everyone in cloud computing running around helter-skelter, just perhaps what's happening is a state change.

Operating models must evolve in reaction to evolving market forces.  As a result processes, procedures, and tools change too, and as there are a variety of approaches there are a variety of options.  As the primary tool provider, IT must deliver new tools supporting the new business operating model.  However to achieve its goals IT must evolve its own operating model.

In comparison to double-entry accounting, codified in the 15th century, IT is new, only decades old.  Since the mainframe was invented, the pace of business innovation has accelerated year after year.  Every year we move further with digitizing business processes to drive productivity gains and establish new products and services.  As a result the pace of business innovation is inextricably tied to that of technology; a company cannot win in one without winning in the other.

Focusing on just the past twenty years, advancements in technology have given rise to the Internet, email, chat, smartphones, blogs, micro-blogs, eCommerce, and tablets.  IT has valiantly tried to stay ahead while falling further and further behind the curve.  Against this backdrop IT budgets have been squeezed, talent hard to attract and keep, and risk tolerance all but eliminated.  As a result IT has been forced into an ever escalating challenge to fund the future with reinvestments from the successes of previous investments.  Finally, to this we now add the unknowns driven by the Consumerization of IT.  Wow!  It's a wonder IT hasn't imploded.  By comparison, in the world of business on an IT timeline, it's like moving from the sole proprietor merchant of the 1700's to WalMart in 50yrs.  But of course IT gets no credit for the size of the challenge.  As viewed by the business, results have been largely a disappointment, with isolated wins punctuated by innovations which are driving real revenue.  I measure disappointment by how rarely I hear anyone in business extol the virtues of their IT organization.  Instead the perpetual complaint continues to be "too slow, too expensive" and the wins too few and far between.

After 50yrs of trying the same approach it's time to evolve.  IT does not operate in a vacuum; the world is too big and too complex.  Like everyone else, IT needs a list of friends to call when it's time to move the couch or build the fence.  It's time to recognize this need and formally adopt it as the central element of our IT strategy: the indoctrination of operating as an Ecosystem.  In doing so we recognize we don't operate alone, we need to grow relationships and complementary capabilities.  Our focus shifts from building solutions to finding and adopting solutions.  It's subtle, but this is what the business has been clamoring for over the past decade (or two).  We move IT up the value chain from service or technology provider to solution provider, focusing on the solution to a business problem rather than its constituent technologies.  Changing our approach will require a culture shift including a massive shift in skills.  Today we manage information technologies with a design, build, run mindset.  To meet the business goals today and in the future we need to focus on business solutions with an architect, oversee, audit approach.  With this change, where do the design, build, run skills come from?  The ecosystem.  Who makes sure the right things are built the right way at the right time for the right value?  IT in partnership with the business.

IT will never deliver on the ideal model: free, instantaneous, and clairvoyant.  However that doesn't mean the business will ever release IT from these expectations.  The Ecosystem approach shifts the operating model from developing technologies to providing a solution.  Time to market opportunities in a highly competitive market necessitate a dynamic mix of capabilities.  It's time for us to admit IT doesn't know everything, and it can't.  Instead of trying to hire and train every conceivable technology expertise, and always falling short, IT needs to raise the bar and its game, focusing on leading, not constructing.

Ecosystem based IT uses time to market and cost as primary drivers, along with requirements, to make quick decisions of rent, lease, buy, or build.  Combine this with a strategic decision to jettison assets (see my earlier post on Assets are Evil) and you get the kind of agile, efficient and elastic IT that leaders in the C-suite are clamoring for and strategically minded CIO's are striving to deliver.

Thursday, February 7, 2013

The Data War and Mobilization of IT

In the end everything is about data.  We use data to make decisions which, combined with thumbs, helps to separate us humans from other primates.  Today the supply of data far outstrips the demand for its use.  We create data all day, every day, and everything we do creates data.  From shipments to tweets, from logins to purchases, there is a "paper" trail behind every step we take.  It's not too surprising that Cisco estimates a 13-fold increase in global internet data traffic on mobile devices from 2012 to 2017.  That's an astounding number!  Naturally such incredible growth will require wireless providers to expand their bandwidth, companies to expand their storage and compute to utilize the data, and new tools and techniques to make use of all that data.  The last thing we want to do is just leave all that data lying around, collecting dust, right?  True, but unfortunately that's the most likely scenario.

Although we are good at using data, we are only good at using familiar data. We are not so good at using data folded between complex relationships, the data equivalent of the fog of war.  Why?  In part I believe because we are creatures of habit who often use intuition rather than deduction to drive our decisions.  However companies rightly view data as treasure, the new gold, capable of providing new insights which can in turn generate new business opportunities. Data is seen as objective, the "truth".  This is the challenge of Big Data; to provide humans with the tools to overcome our innate inability to see trends and gain new insights through the fog of war.

Of course data can only provide value in decision making if the analysis is ready and relevant before the decision needs to be made. I call this Real Time Analytics, which is the business solution enabled by the concept of Big Data. We have plenty of examples in life and business where companies waited too long for the right answer when a good-enough answer would have carried the day. Leaders at all levels need guidance from data analysis to make decisions. Whether in the form of an answer (this is what you should do) or simply insight (here is what is happening), we now realize a good decision today is better than a great decision tomorrow.

So our future is one of making better-informed decisions faster, using the ever expanding scope of data we generate combined with our increasing ability to analyze it. Sounds great, but is IT structured to handle the work? Our IT departments have been designed around a centralized model: bring everything back to a handful of locations which house massive amounts of network bandwidth, compute, and storage. A 13-fold increase in mobile internet data traffic means more data, arriving faster, and a growing fog of war. The implications for companies moving Big Data programs mainstream are enormous. There is a point at which a centralized architecture simply will not work; it will not be possible to bring all the data back to one place to perform the calculations and then distribute the results again in the time frame required by Real Time Analytics.

In short, data does not scale vertically. We've already maximized its compression and we transmit it at the speed of light; we are up against the physical laws of nature. There is no solution on the horizon that lets us move ever larger volumes of data from one location to another ever faster. Instead, today we have a roughly linear relationship between the quantity of data being analyzed and the time it takes to analyze it. There is a model, popular in discussion but often misunderstood and misapplied in practice, which addresses the scalability issue of data: parallelization. If we perform the calculations on a set of data in parallel rather than one at a time in serial, we can reduce the time required for analysis (note this approach is not always viable). Parallelization is built into ETL (Extract, Transform, Load) tools and is at the heart of Big Data tools such as MapReduce (the foundation of Hadoop). Once data can be processed in parallel, processing can occur in a distributed/federated environment with reliable, repeatable results. Taken to its logical extreme, if data were analyzed at its point of origin, the analysis workload would be as distributed as it can possibly be. The net result is greater throughput of the overall system and thus reduced cycle times for analysis. It's like adding fog lights to the jeep traversing the fog of war, focusing on the delivery of answers, not raw data.
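
As a rough illustration of the parallelization idea (plain Python, not Hadoop or any specific ETL tool), the sketch below splits an invented set of events into chunks, analyzes the chunks concurrently, and then merges the partial results, the same map-then-reduce shape described above:

```python
from collections import Counter
from multiprocessing import Pool

def analyze_chunk(events):
    """Map step: compute a partial result for one chunk of events."""
    return Counter(event["type"] for event in events)

def merge(partials):
    """Reduce step: combine the partial results into a single answer."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    # Pretend these chunks arrived from four different sources.
    chunks = [
        [{"type": "login"}, {"type": "purchase"}],
        [{"type": "login"}, {"type": "tweet"}],
        [{"type": "shipment"}, {"type": "purchase"}],
        [{"type": "login"}],
    ]
    with Pool(processes=4) as pool:
        partials = pool.map(analyze_chunk, chunks)   # chunks analyzed in parallel
    print(merge(partials))
```

Because each chunk is analyzed independently, the same analyze_chunk step could just as easily run in four different locations, which is exactly what makes the federated approach possible.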

We need to prepare IT for a new world where the "work" is done as close to the user as possible, what I call mobilizing IT. Now that we understand parallelization and have been applying it in various forms for over a decade, it's time to unleash its power by moving out of the traditional data center (several will note much of this trail has already been blazed by the Content Delivery Network). One of the first challenges we must gird up to solve is how to push the analysis of data out toward the endpoints where the data is collected. It turns out the point of origin for much of the data being generated is very near storage and compute resources, whether in the device (mobile phone) or one hop away on a network (cloud). Combine this with in-network data management and routing capabilities and the solution is very compelling. Of course there are ancillary benefits as well, such as the opportunity to architect a solution for continuous availability rather than the more expensive but less stable approach of disaster recovery. Real-time compliance routines could be applied. The opportunities are endless; however, the solutions are few.
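
Here is a small sketch of what "doing the work close to the user" could look like: each endpoint reduces its raw readings to a compact summary locally, and only the summary crosses the network. The sample readings and the send_to_core stub are assumptions for illustration, not a real device API.

```python
import json
import statistics

def summarize_locally(readings):
    """Runs on the device or edge node; the raw readings never leave it."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }

def send_to_core(summary):
    """Stand-in for whatever transport the real system would use."""
    payload = json.dumps(summary)
    print(f"sending {len(payload)} bytes of summary instead of the raw data set")

if __name__ == "__main__":
    raw = [41.2, 40.9, 42.7, 39.8, 40.1] * 1000   # thousands of raw samples on the device
    send_to_core(summarize_locally(raw))
```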

Decomposition is a great approach to making big problems solvable. We have grown IT into a monstrous centralized monolith which boggles our minds. As our data sets explode we need to think about new approaches, and one approach worth considering is federating our data. We'll need an index mechanism, with an associated service, to locate data so it can be used properly. We can leverage the index metadata to make real-time decisions on how to execute the analytics: how much will be done at the endpoint, where the data will be collected and sub-analyzed, and at what point analysis can stop because we have the "good enough" answer. We need to stay ahead of the fog of the data war.
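
A toy version of that index-and-route idea might look like the following; the catalog entries and the size threshold are invented purely to show the decision flow, not a proposed product:

```python
# Hypothetical metadata catalog: where each data set lives and how big it is.
CATALOG = {
    "web_clickstream": {"location": "edge-us-east", "size_gb": 900, "local_compute": True},
    "erp_orders":      {"location": "core-dc-1",    "size_gb": 40,  "local_compute": False},
}

def plan_analysis(dataset, pull_threshold_gb=100):
    """Decide, from index metadata alone, whether to analyze in place or pull the data back."""
    meta = CATALOG[dataset]
    if meta["local_compute"] and meta["size_gb"] > pull_threshold_gb:
        return f"run the analytics at {meta['location']} and return only the results"
    return f"copy {dataset} to the central cluster and analyze it there"

if __name__ == "__main__":
    for name in CATALOG:
        print(name, "->", plan_analysis(name))
```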

Thursday, January 17, 2013

Is it Big Data or Real Time Analytics?

Big Data is all the rage, a term which quickly became synonymous with diving into the data from social media to discover new insights. As with most things, the reality could never live up to the hype because, by design, hype is generated primarily by those who are still getting educated. Once educated, the hype dies as reality sets in. So what I don't understand is why everyone is still talking about and focusing on Big Data when it's not the goal. Isn't the goal really Real Time Analytics of Big Data?

Sure, Big Data is sexy. Gaining insight into what people are thinking and doing en masse by reading their Twitter updates, Facebook posts, FourSquare check-ins and the like lets companies know what people are doing and saying (not what they are thinking, as is often reported - believe it or not, many people say things that conflict with their thinking). However, as interesting as that information is, the depth of knowledge it takes to glean valuable insight from the data exceeds, by definition, the depth of knowledge required merely to collect it. It sounds obvious, but gaining the ability to do something is not the same as having the capability to benefit from doing it. Two quarterbacks throw the ball, but only one wins the game.

It's the benefits of Big Data that should be the focus, not the technology, and so far its benefits have been pretty limited. Am I throwing Big Data under the bus as another disillusioned soul who rode the hype train? No. What I'm throwing under the bus is our overall lack of understanding, across businesses, of what data is, the conditions under which it can deliver value, how to find the value, and finally how to realize the value. We are putting the tools into the hands of people who are not qualified to mine the data. It's not their fault. Nobody has taken the time and effort to train people on data: taxonomy, structure, access, normalization, and on and on. I fancy myself reasonably good at data architecture and yet I know people who can make me look like a fool because it's their focus in life. Who is going to do a better job at surgery, the surgeon or the groundskeeper? We are talking about a seismic shift in the skills mix required in business today. It started back, oh, in the 1970s. As computing spread as a productivity tool we increasingly protected people from having to understand data. We treated data as something too difficult, arcane, or unimportant to understand. Now we are in the Matrix, where in order to see the truth you have to be able to look past the "visualizer" (aka the application) and at the raw data.

Big Data is forcing a new dialog to occur. In itself, Big Data represents a new set of data storage, management, and analysis tools for large data sets. As a result we can now do things we couldn't do before that deliver tremendous value: real time analytics (RTA). RTA is the marriage of our exploits in data warehousing and data mining with the ability to traverse nearly unlimited data sets. Real time analysis helps companies keep web sites up and running, car engines running, and planes in the sky. We are in a new evolution of reducing the time it takes to find, access, normalize and filter data. The faster we slice and dice, the more rocks we can look under, the more we can learn, and the better we can understand the dynamics of our world.
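
To ground the RTA idea, here is a toy example of answering a question continuously over a rolling window of events rather than in a nightly batch. The error-rate metric and the 60-second window are assumptions chosen for illustration:

```python
from collections import deque
import time

class RollingErrorRate:
    """Tracks the error rate over the last `window_seconds` of observed events."""

    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds
        self.events = deque()   # (timestamp, is_error) pairs

    def observe(self, is_error, now=None):
        now = now if now is not None else time.time()
        self.events.append((now, is_error))
        # Drop anything that has fallen out of the window.
        while self.events and self.events[0][0] < now - self.window_seconds:
            self.events.popleft()

    def error_rate(self):
        if not self.events:
            return 0.0
        return sum(1 for _, is_error in self.events if is_error) / len(self.events)

if __name__ == "__main__":
    monitor = RollingErrorRate(window_seconds=60)
    for is_error in [False, False, True, False, True]:
        monitor.observe(is_error)
    print(f"error rate over the last minute: {monitor.error_rate():.0%}")
```

The answer is never perfect, only current, which is exactly the trade the "good decision today beats a great decision tomorrow" argument is making.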

So in the end it's not about the name. Big Data. Real Time Analytics. Call it whatever you want. However, remember that the keys to the kingdom are in understanding the domain, knowing what data is important, and using the right tools to derive the most value in the shortest time.

Dollars follow hype and innovations follow dollars. And that's why I think the Big Data hype is great.