Saturday, December 31, 2011

An End of Year Prediction

Before anyone gets too interested, this is not a prediction for 2012, 2013, or even 2020. This is my prediction for a global economic shift driven by some fundamentals of economics. First, information is the new currency. We have done a tremendous job over the past three decades of understanding trends and drivers of behavior by collecting data, creating forecasting models, and tweaking our models over time to account for a variety of unknowns. Some areas can be modeled more accurately than others, but the reality is we now have a good idea what weather we'll face more than a week in advance, trillions of dollars have been made in the equity markets, and the progression of debilitating diseases can be stopped before it starts.

Second, money chases problems. If you're able to solve a problem, it's likely you'll be able to find money from investors to grow and make your solution as widely available as possible. The catch is that the problem has to be worth solving, and many are not, such as purchasing animal food online.

Third, as solutions are optimized they become commoditized, lowering the opportunity threshold and thus profit margins, and at some point they are subsumed by the "system" to which the solution was originally applied. Think of enterprise resource planning, supply chain management, and customer relationship management as examples of technologies that are becoming commodities as their growth in the cloud SaaS market expands.

Fourth, computing power continues to grow at the pace of Moore's Law. Even if it slows down it won't matter, because everyone will agree that what we can do in computing today will be dwarfed by what we can do in 10 years, and that by what can be done 10 years after that, and so on.
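To make that compounding concrete, here's a rough sketch. The two-year doubling period is my assumption for illustration (the commonly cited figure), not something the post specifies:

```python
# Rough illustration of Moore's Law compounding. The two-year doubling
# period is an assumed parameter, not a claim from this post.
def relative_compute(years, doubling_period=2):
    """Compute power relative to today, `years` from now."""
    return 2 ** (years / doubling_period)

for horizon in (10, 20, 30):
    print(f"In {horizon} years: ~{relative_compute(horizon):,.0f}x today's compute")
```

Even at a slower doubling period the curve only stretches; the dwarfing effect remains.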

Fifth, gold is valuable because it is real. US dollars are less valuable because their value is based on the esoteric idea of the US economy. You cannot trade in a US dollar for its equivalent in economic activity, and every investment in US dollars rides the roller coaster of confidence. Gold, however, has been and continues to be valued universally. If paper currency were truly valuable, its value would fluctuate in sync with other valuables rather than being counter-cyclical.

My prediction is that over the next 50 years the US economy, followed by the global economy, will shift from providing services back to providing goods. I do not foresee a return to the 1950s manufacturing dominance of the United States; we are a global economy, good, bad, or indifferent, and other countries will participate. However, as our focus on the value of information is optimized over the next few decades, a tremendous amount of money will be poured into the market to drive innovative solutions. Those solutions will move the technologies into the mainstream, driving them toward commodities until, finally, the solutions become indistinguishable parts of their parent "systems". All of this new knowledge will enable companies to design, build, and deliver better products. The computing power will be available to do things we have not even dreamed of today, beyond real-time monitoring of the social web to identify a purchasing opportunity and deliver a customized offer with details, pricing, and near-immediate availability, all within a second or two. Information companies will have a difficult time competing because everyone will have access to the same information and analysis tools. Investment bankers, brokers, and consultants will all become professions of bygone eras as the world turns to differentiation through the real-time application of knowledge. The value of information will drop like US currency, and the value of goods, things you can put your hands on, will rise.

I believe the shift will be so seismic that companies will repackage services as products, "services" having taken on such a negative connotation. And how we consume those products will follow such a bold new model that we will have a hard time wrapping our heads around ideas like nanobots and technologies augmenting the human nervous system.

What I don't predict is mass employment in this new model. Employment will continue to drop as productivity grows; unfortunately that too is a reality of economics. However, I believe children born after 2020 are going to have to be equally skilled in managing information and managing the production of goods.

Of course I could always be wrong....

Tuesday, December 13, 2011

Take an Inclusive Approach

Most people have a difficult time wrapping their heads around cloud computing because it's a multi-disciplinary, disruptive technology. It crosses just about every technical and business boundary that exists, from accounting to system administration, legal to storage, and HR to application development. Changes within each group are required to make cloud successful. Who should be responsible for the assets? How will systems be managed? What limitations need to be placed on data access? The questions go on and on and on...

Organizations adopting cloud technologies in the second wave, the fast followers, are learning from the early adopters that having a strategy from day one is critical to success. Everyone needs to be singing from the same sheet of music to ensure the investments make sense and will generate the expected returns to the business. However, many organizations take a narrow approach in defining who is a member of the chorus. I have worked with many firms that from the start exclude critical groups such as internal audit, legal, software development, and HR. The net result is often a private or public cloud with few or no applications, one that violates policies, adds risk to the business, and cannot be supported because of a mismatch of skills.

Socializing cloud is not easy, but it's required for success. Everyone in the organization, from the CEO down, must understand how their life changes with the incorporation of cloud technologies. Most importantly, the line-of-business leaders need to be educated in what is now possible via virtualization that was not possible before. It's time for business executives to dream; to stop limiting their vision of the future to what they believe their IT department can deliver or technology can support. Thinking without limitation is at the core of creating innovative solutions.

My minimum recommendation to prepare an organization for the transformation engendered by adopting cloud technologies is to execute the following:
  1. Develop an enterprise strategy and roadmap
  2. Educate everyone on what is happening and why
  3. Buy 3rd party education (computer based training) on the basics of cloud and require everyone to take the classes, even the business people.
  4. Create a cloud council with representatives from all the constituent groups to act as a clearinghouse for questions, collection point for input, and distribution point for sub-strategies.
  5. Bring in skilled 3rd parties to augment your team because no matter who you are you don't have enough in-house cloud knowledge.
It sounds like simple logic, and it is, but it's overlooked more often than implemented.

Friday, October 28, 2011

SAS 70 Type II Is Not Enough

It's interesting how many cloud providers point to their SAS 70 Type II attestation. Having been through a SAS 70 Type II audit at a SaaS financial services firm, I'll point out the major gaps that providers do not make clear about a SAS 70 attestation.
  • A SAS 70 Type II attestation is about financial controls, not operations and not security. Therefore, if something does not impact the finances of a company it is considered out of bounds for the audit. Remember that the AICPA (American Institute of Certified Public Accountants) governs the SAS 70 auditing standard. I respect accountants and auditors, but I have yet to find one whose head I can't make spin with technobabble.
  • There is no standard for the contents of the attestation. The SAS 70 report contains self-reported financial controls. Therefore each company, by shopping for an auditor, can intentionally omit germane information and still have an attested report.
  • The attestation report is not available to non-customers by rule. If it is shared with a non-customer, that is a violation, and that alone should make someone wonder. Even as a customer there is often a significant cost associated with getting a copy of a SAS 70 audit report (in the tens of thousands of dollars, based on my experience).
  • The attested report simply validates that the organization does what it says it does. During the audit it is possible to obfuscate issues because the audits do not include a significant amount of direct observation of process execution.
A more appropriate standard, given the significant concern over cloud security, is ISO 27001, which is focused on information security. And it is my understanding from friends in the audit industry that better standards and guidance are coming.

Demand more; don't be satisfied with a SAS 70 Type II audit. And if you can't get your hands on one, you're missing nothing.

Friday, October 21, 2011

A Legacy Laced Cloud

The times they are a-changin'! And with all of the external forces driving change into organizations, plenty of internal forces are chipping away at the status quo. I wonder: is cloud the impetus for change or the reaction to the need for change? I think the answer is both, but the former is a tactical answer while the latter is a strategic answer.

Cloud helps companies in dramatic ways, as proven by early adopters. The primary use case for cloud deployment to date has been reducing costs within IT. A second, emerging use case is the time-to-market benefit, as cloud resources are often easier to procure, configure, and build solutions atop. However, in adopting cloud we need a pronounced, intentional plan, especially when it comes to legacy applications.

I have witnessed situations where dropping an application into a cloud environment increases risk more than value. Not all applications benefit from virtualization: stable applications with little fluctuation in resource usage often look identical afterward, with the single benefit being the sharing of unused resources (which may be enough of a benefit to argue for cloud). The open questions when moving legacy applications into the cloud (or any alternate platform) include:
  • software library support
  • level of coupling (operating system, libraries, other applications, utilities, etc.)
  • component versioning
  • development, test, QA, and staging environment setup and certification
  • test cases and automated testing
  • testing of the application and its associated components and certification
  • migration to production
  • training of developers and testers
  • update of disaster recovery plans
  • update of systems management services (monitoring, troubleshooting, backups, etc.)
This is the easy short list and assumes no change in functionality!

Finally, let's not forget the law of unintended consequences: something will go wrong as a direct result of the new architecture that was not forecast.

Minimizing these risks is possible through a methodical, structured migration program governed by the following concepts:
  • Establish a framework - define an application framework to align redevelopment and future development with the growing capabilities of a cloud, even if only a portion of the cloud functionality is available.
  • Challenge conventional thinking - there is no reason to assume clouds cannot benefit siloed applications, mainframes cannot participate in a cloud, CICS applications cannot be executed in a cloud, or that the end game is even cloud (it should be delivered value).
  • Crawl, walk, run - learn from the first efforts and cascade that learning into following efforts until an optimization loop becomes part of the DNA of migration projects.
  • Components or Containers - for each application, determine whether it's better to componentize (optimize the application for the new environment by adopting a SOA model) or containerize (keep the application isolated as a whole solution, perhaps using web services to map inputs and outputs and extend its reach). In an earlier blog post I provide four options for handling legacy applications.
  • The learning curve - architects, designers, developers, and testers will all need to relearn significant parts of their job. If internal development is a core need make sure the necessary skills are identified, quantified and present within the teams.
Legacy and cloud can coexist in many forms. The reality is many applications do not deliver enough value to justify their migration to a cloud environment. The goal should be to maximize the value delivered which means applying cloud capabilities where there is a business case to do so.

Saturday, October 1, 2011

The Dissolution of IT

This post is based on one of the half dozen white papers I've written in my career that were too "out there" to be published. I wrote this one in 2007, around the time I left Diamond Management & Technology Consultants for what became a failed financial services SaaS startup. You have been forewarned!

In 2004 I went through the IBM certification process for enterprise architecture, emerging as a certified Senior Enterprise Architect. Starting in 2001 I had been pushing my architecture envelope to understand as much as I could about the business. I questioned my father, my first mentor, about his experiences, including his 30+ year history in supply chain management. I learned everything I could about cool-sounding ERP processes: order to cash, quote to cash, and procure to pay. As I pushed my boundaries I found companies were doing the same, assigning senior business analysts as liaisons to IT. I worked with many of them in partnership, asking them to teach me about the business while I deepened their understanding of IT. These business experts shared a common trait: all had an IT background.

As I applied my business knowledge I started getting accolades from clients about the robustness of my knowledge of their business. In project after project I became the lead business process architect, as my approach and my requirement to diagram everything helped senior leaders identify inefficiencies and opportunities. It was this experience, combined with the revelation that my business partners all had IT backgrounds, that made me realize the end of IT was coming!

Not following? My hypothesis is that as we digitize businesses, the people within IT by necessity become acutely aware of the ins and outs of a business process. As their job responsibilities grow they naturally become aware of other business processes and how those processes work together. They live the reality of the interconnectedness of business processes and their automation, gaining an education on both sides simultaneously. By comparison, their business colleagues gain additional depth on the business side but are largely shielded from the technology. As the pace of business continues to quicken, as it has throughout time, depth becomes less important, giving way to breadth (i.e., a company needs fewer specialists, though they are still needed, and more generalists). As if guided by Darwinian evolution, IT analysts are becoming the experts on the business as well as on the technology used for its automation.

Companies are slowly realizing this largely untapped natural resource and have begun moving IT people out into the business, versus the trend ten years ago of moving business people into IT (which was necessary and a good thing). Therefore my prediction, if my hypothesis is correct, is that IT will slowly dissolve back into the business as the level of knowledge about technology takes a revolutionary step forward in a short period of time. The end result is an IT that is largely fungible between internal and external expertise. And cloud computing, through its concentration on automation, will help expedite this changeover.

IT will continue to exist, but its form will change dramatically! All the strategic planning and architecture will be done in what we now call "the business", and design, implementation, operations, and program management services will be used, whether internal or outsourced, to execute on the plan. The CIO of today will become the COO of tomorrow, and the CTO of today will be focused on one or more of the services, very likely not as an employee. If I had to lay all my cards on the table, I'd argue the new role of consulting will be that CTO role plus management of the services, which will be executed by the traditional systems integrators.

If I'm correct, things to keep in mind include:
  1. Vertical domain expertise will be required for those in IT who want to be on the forefront of the revolution
  2. As a revolution I believe the critical mass will be reached rapidly, and therefore I believe the change will occur over the next 8-10 years
  3. Cloud computing will help drive this change
  4. A natural synergy is the decoupling of internal technical expertise from its application which has hindered companies for decades from making great leaps
  5. Legacy systems will not hinder the revolution because they're already in the mix due to the issues of cost, complexity and compliance
  6. Demographic challenges in finding qualified people will drive the consolidation of positions that used to be split between the business and IT
  7. Companies who get it right will benefit so tremendously they'll never turn back
  8. The shift is already underway!

Thursday, September 15, 2011

Rethinking IT for the Modern Era

CIOs today need to ask themselves two questions. Would I have built my IT organization, across the pillars of people, process, technology, structure, and strategy, to look like it does today? If not, what would I do differently? Why does a CIO need to ponder and pontificate on the state of IT? Because the pace of innovation continues to accelerate and we are rapidly approaching the end of steady-state IT. It's time to Rethink IT and question all of our sacred rules.

For years IT has been focused on eliminating change in the name of stability, trying to reach the perfect steady state. Since the only constant is change, IT has failed miserably in that mission, to the extent that many now deny stability was ever the goal in the first place. If it wasn't, I argue, why is change within IT so difficult? Change management is not part of the core DNA of IT, but it should be. Instead IT does a bad job with change and a worse job with continual change. Yet change is the starkest reality of technology; nobody is using Windows for Workgroups 3.11 or MS Word 5.0 anymore. Technology progresses and the pace is always quickening.

About 15 years ago the primary method for a customer to get support for a product was to call a 1-800 number. Those requests started arriving as emails in the mid-1990s, followed by text chats and call-me-back requests in the early 2000s. Then in the mid-2000s we added social media such as MySpace, Twitter, and LinkedIn, along with discussion forums where customers could use a self-help model relying on other customers. In that same 15-year span many companies have, via bolt-ons and platform extensions, taken their original CRM solution and added functionality to support these new models. Still built on the original foundation of an inbound 1-800 call, we have added new staircases, wings, doors, chimneys, and floors to create the solution needed to serve the customer. The result is something more analogous to the Winchester House than a family home. As a microcosm of IT, this approach of keeping legacy cores and extending solutions has created unwieldy, expensive, inefficient systems. Rob Carter, CIO of FedEx, is on record stating that the greatest threat to FedEx is the complexity of IT. Rethinking IT, breaking down the old preconceived notions and foregone conclusions, is the only way to address the critical mass of legacy IT.

Companies change their headquarters buildings more often than the applications that drive and manage their revenue stream. Architecture, for both buildings and software, is a managed planning step where decisions are made to support current needs and, to the extent possible, incorporate flexibility to meet undefined future needs. At some point in the lifetime of a building, or of software, the limits of the architecture are reached; it becomes unsuitable for sustaining growth and must be overhauled or demolished to meet new demands.

Software, because it's viewed as inherently simple to change, is much more likely to be modified. Each modification makes the software a little slower, a bit harder to support, a bit more difficult to extend, and more expensive to operate. Add hundreds of modifications over decades and it's not surprising that many companies find themselves paying the legacy tax, where even minor changes can cost millions of dollars. One aspect few consider is that little code is ever eliminated. Multiple times in my career, when asking why a process is executed, we have found the answer in the code: nobody told IT to eliminate that process or that algorithm. I contend there are millions of lines of superfluous code running every day which add to the ball of complexity and cost yet deliver NO VALUE! Worse, those processes often limit what the business can do, which in turn limits the revenue-generating capability of the company. There is a cost!

It's reasonable to expect that just as we moved through four technology iterations in 15 years (1-800 to email to text to social media), the next four iterations will take 10 years, and the four after that only 5. IT can no longer afford the time, and the business can no longer afford the cost, of extending applications through bolt-ons. The only solution available today that provides the flexibility required is migrating to a services framework, where new functionality is developed as a service available to other services as needed. Just as services replace monoliths (think ERP, CRM, SFA, MRP, etc.), application composition replaces application integration. A side benefit of moving to services is their natural alignment with cloud computing; in fact cloud computing as a holistic vision for IT is the execution of software services on a virtualized hardware infrastructure. Amazing how things work together!
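As a toy sketch of composition over integration: small, independent services (plain functions here, with names I've invented purely for illustration) are wired together into an application, and adding a new support channel reuses them unchanged rather than bolting onto a monolith.

```python
# Toy illustration of "application composition" from independent services.
# All names are hypothetical; this is not any real CRM's API.
def lookup_customer(customer_id):
    return {"id": customer_id, "name": "Acme Corp"}

def open_ticket(customer, channel, issue):
    return {"customer": customer["name"], "channel": channel, "issue": issue}

def notify(ticket):
    return f"Ticket for {ticket['customer']} via {ticket['channel']}: {ticket['issue']}"

# A support "application" composed from the three services above. A new
# channel (social media, chat) is just a new value of `channel`; the
# underlying services are untouched.
def handle_request(customer_id, channel, issue):
    return notify(open_ticket(lookup_customer(customer_id), channel, issue))

print(handle_request(42, "twitter", "late delivery"))
```

The design point is that each service stands alone, so the composition, not the services, absorbs each new iteration.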

When the CIO resolves that no, the IT organization would look dramatically different if designed to meet today's needs, the first step of the Rethinking IT journey is underway. Realizing the cost of not changing is greater than the cost of change is the second step. The remaining steps are to build the blueprint and roadmap and then effect the change, with a particular emphasis on making change stick.

Of course the other option is to continue the status quo. Then it's time to ask question three: what happens to the CIO when IT is forced to forgo generations of innovation because of the time and cost required to update its ghastly complex legacy solutions?

Monday, September 5, 2011

Cloud in Healthcare

A recent discussion on LinkedIn reminded me of my past exploits in deploying cloud solutions in healthcare. I was involved in several projects with great organizations including Kaiser Permanente, the Diabetes Foundation, St. Jude Children's Research Hospital, the LSU Eye Center, and the Mayo Clinic. During these efforts we applied cloud in some really interesting ways, including the following:

Diabetic Retinopathy: We presented on this at HIMSS in 2005. Our project addressed the rampant increase in diabetes among the poor of Louisiana. The first challenge was getting people to visit the doctor, because the doctors were only available on certain days of the year, travelling to remote areas as part of their humanitarian efforts. The second challenge was finding qualified doctors who could leave their practices to travel. Our solution used low-cost imaging devices to capture images of the back of a person's eye; it turns out the eye offers one of the earliest ways to detect diabetes. The images were then uploaded to a cloud from which remote doctors could identify early-onset diabetes. Our goal was to improve service by using an automated filter to flag potential issues, reducing the number of images reviewed by physicians. The approach resolved both challenges: imaging locations became available for significantly longer periods, some permanently, and the doctors no longer had to travel.

Long Term Digital Medical Image Archive: Digital medical images are captured in PACS systems, which typically use expensive SAN technology for performance and redundancy. The challenge is that 98% of medical images are never used again after their first two weeks. As a result, healthcare providers want to move images to a low-cost, long-term archive as quickly as possible. HIPAA requires providers to keep the images for the life of the patient plus seven years, so retention is required. We developed a private cloud solution which provided an economical model for storing the images while also improving the ability to share them between imaging centers, the hospital, ER/trauma centers, out-patient facilities, and the primary physician's home office.
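The economics are easy to sketch. The 98% figure comes from the project above; the per-GB prices below are placeholders I've made up for illustration, not actual SAN or archive rates:

```python
# Back-of-the-envelope cost of tiering medical images.
SAN_COST = 0.50       # high-performance SAN, $/GB/month (assumed)
ARCHIVE_COST = 0.03   # low-cost archive tier, $/GB/month (assumed)
COLD_FRACTION = 0.98  # share of images never accessed after two weeks

def monthly_cost(total_gb, tiered):
    """Monthly storage bill, all-SAN vs a hot/cold tiered design."""
    if not tiered:
        return total_gb * SAN_COST
    hot = total_gb * (1 - COLD_FRACTION)
    cold = total_gb * COLD_FRACTION
    return hot * SAN_COST + cold * ARCHIVE_COST

tb = 500 * 1024  # a hypothetical 500 TB image archive, in GB
print(f"All-SAN: ${monthly_cost(tb, tiered=False):,.0f}/month")
print(f"Tiered:  ${monthly_cost(tb, tiered=True):,.0f}/month")
```

Whatever the real prices, with 98% of the data cold the bill is dominated by the archive tier's rate, which is the whole argument for the private cloud archive.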

Electronic Medical Records: To be of use, an electronic medical record (EMR), a folder containing a person's entire medical history from allergies and blood pressure readings to digital images and prescriptions, requires ubiquitous availability but strong security. Our solution used a cloud for the storage and management of EMRs, with each EMR accessible through three methods. First, a person can grant others access to the account by creating logins. Second, the person can grant temporary access using their finger scan or RFID necklace. Third, a doctor can request a limited version of the EMR in an emergency. Using WiFi-enabled tablet devices, the EMRs were visible via a browser from any enabled site (each site had its own unique key, which provided an additional layer of security so someone could not access the records from an unapproved location such as home or the parking lot).
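The three access paths can be sketched as a simple policy check. This is my own simplification with hypothetical names, not the actual implementation:

```python
# Simplified model of the three EMR access paths described above.
# Names and data structures are hypothetical illustrations only.
def emr_access(requester, emr, emergency=False):
    if requester in emr["granted_logins"]:          # 1. owner-created login
        return "full"
    if requester in emr["temp_grants"]:             # 2. finger scan / RFID grant
        return "full"
    if emergency and requester.startswith("dr_"):   # 3. emergency, limited view
        return "limited"
    return "denied"

emr = {"granted_logins": {"spouse"}, "temp_grants": {"dr_jones"}}
print(emr_access("spouse", emr))                    # full
print(emr_access("dr_smith", emr, emergency=True))  # limited
print(emr_access("stranger", emr))                  # denied
```

The per-site key described in the text would sit in front of this check entirely, rejecting requests before any identity is evaluated.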

Disease Modeling: One area I found fascinating was the ability of medical researchers to create mathematical models to predict the course of a disease in a patient. They could identify with amazing accuracy how weight, diet, and even some genetic factors impacted the disease, which could then be used to identify treatment regimens. Pharmaceutical companies could use the models to develop advanced chemical compounds. The compute power needed for even a simple analysis mandated the use of a cloud.

There is one other germane application I worked on however due to its proprietary nature and the potential value of the program I don't want to disclose it since I'm not the owner.

As I have repeatedly found, cloud really unlocks us from classical linear thinking in how we create solutions. We are in the early days of clouds and people way smarter than me will come up with solutions that will make us all stand in awe.

Wednesday, August 31, 2011

Time Value of Data

Value is an inherent way humans quantify entities. Whether thoughts or products, ideas or tools, their value to us heavily influences how our brains classify them. Forks and spoons are of higher value than sporks, cars over horse and buggy, computers over typewriters, and so on. One early lesson we study in economics is that the value of money is not constant: a dollar today is, under normal economic conditions, worth more than a dollar tomorrow. We call this the Time Value of Money, to show that the timing of earnings and payments should be optimized to derive the greatest overall value.
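For reference, the standard discounting formula behind that lesson: a future amount FV, discounted at rate r over n years, is worth FV / (1 + r)^n today.

```python
# Time Value of Money: present value of a future amount.
def present_value(future_value, annual_rate, years):
    return future_value / (1 + annual_rate) ** years

# At 5% interest, a dollar received 10 years from now is worth ~61 cents today.
print(round(present_value(1.00, 0.05, 10), 2))  # 0.61
```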

To me it's a very gentle leap to combine the Time Value of Money with the idea that data is the new currency: I call it the Time Value of Data. Today we assume data has the same value on day one as it does on day infinity. For some data that's true. However, I argue that for the vast majority of data, value starts at a peak and rapidly falls off, nearing zero for the majority of its life until deletion (whether by intent or accident). For example, how much I earn per hour today is something I safeguard; how much I earned per hour as a reservationist at Disney World in 1990 is pretty unimportant to me (the answer: $6.35/hr, which was $1/hr better than the theme park jobs because of the typing skills).
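If data value really does decay this way, one could model it like discounting in reverse, here as exponential decay with a half-life. This is purely my sketch; the half-life parameter is made up, and finding real values is exactly the open question:

```python
# Hypothetical Time Value of Data: value decays exponentially from an
# initial peak. The half-life is an assumed parameter, not a measured one.
def data_value(initial_value, half_life_days, age_days):
    return initial_value * 0.5 ** (age_days / half_life_days)

print(data_value(100.0, 90, 0))     # full value at creation
print(data_value(100.0, 90, 90))    # half value after one half-life
print(data_value(100.0, 90, 7300))  # effectively zero after 20 years
```

A security or retention policy could then key off the computed value rather than treating all data as equal.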

When it comes to data we are pack rats, loading the data onto our computerized pack mules. However, one has to consider the security, efficiency, and monetary costs of this approach. From a security point of view, should we protect data of low value with the same vigor as data of high value? Instinctively I say no, but I can find no examples in the wild of this rational thinking in practice. Valuing all data equally also robs us of a valuable metric for finding pertinent data: we have to look through all the muck to find the golden data we need, which impacts our ability to answer questions efficiently. And lastly, there is a cost to storing data whose value is at or near zero. Is my life going to be improved by knowing the details of my mortgage from 2000 when I've refinanced twice since then? Not likely, but keeping those papers has forced me to purchase yet another filing cabinet.

I don't have the answers; at this point I can only ask the question. I'm not sure how we define the value of data or how we differentiate the levels of value. I do, however, believe this is part of a larger issue of organizations not rationalizing the data they keep and use. It's easy to calculate the cost side; it's difficult to define the value equation. For money we use interest rates and years in a well-defined mathematical equation to determine the current or future value of a dollar. A dollar is a single entity while data is all over the board, so they are not equivalent. However, I do believe in one rule, which I use at home: if I haven't touched something in six months it goes into the attic, and if it's not used within a year I have to question its value. There are some pieces of data we value forever, such as baby photos. But like digital medical images, I bet 98% of data is never touched again after its first few weeks.

But today we pay for it for its entire lifetime.

Monday, August 22, 2011

The Role of Open Standards in Cloud Computing

Over the past 12 months I have increasingly found that what I feel is an obvious argument is mired in marketing hyperspeak. Cloud computing is predicated on open standards. There, it's been said as clearly as I can state it. Now to answer the question: why?

Probably the easiest way to answer this is by analogy. Remember the Betamax/VHS wars? How about CD versus digital tape? Blu-ray versus HD DVD? The winner in each category had at least two of three benefits: first, lower cost; second, media availability, which is why anyone bought a device in the first place; third, ubiquity, meaning availability in multiple formats. Believe it or not, technical superiority was not a driving factor; Betamax was arguably better than VHS, and HD DVD better than Blu-ray. These three elements taken together drove market share, which in turn determined who won the war and thus became the de facto standard.

So we've established that people like standards. Being the de facto standard meant being cheaper, having more media available, and being available in more form factors, which in turn led to even cheaper devices... see the effect? What's amazing is that rather than fighting it out in the market to create a de facto standard, a standard can be set up front and thus drive the development of better solutions. (Note: open standards are simply standards for which there is no licensing fee. To use RAMBUS or JPEG, for example, a licensing fee is required, and hence there has been a serious withdrawal from using either as a standard.)

The goal of open standards is to drive interoperability from day one, maximizing flexibility while keeping costs down. In turn, interoperability combines the efforts of all market entrants toward the three goals of driving down costs, increasing options, and enabling the development of new form factors. In fact, open standards are everywhere. Your car has OBD-II, On-Board Diagnostics 2nd Generation, with which every automaker must comply so a mechanic can connect a device and get a little help from your car in diagnosing a problem. HTML is another great example, and there are plenty of others, from email (SMTP, POP, IMAP) to managing IT (ITIL v3). In fact, many of the technologies underlying cloud computing are themselves based on open standards. If we want cloud computing to be as ubiquitous and consumable as we all say, then we need open standards upon which to base interoperability. An isolated cloud is, in essence, nothing more than a large silo. Clouds need virtual boundaries which can be traversed easily and quickly as needs dictate.

So the real question is why aren't there open cloud interoperability standards?

The short answer is there are, and more are coming. Standards evolve over time via learning. The original answer was that the early advocates of cloud wanted to recoup their investment and make fat profits, which is more easily done with proprietary technologies. However, they almost all come around eventually. Several technology companies are very open to open standards, including Rackspace, CA, AT&T, IBM, and HP. Others are coming late to the game, like Amazon, VMware, and Microsoft, and still others may never join the team, such as Google.

Open cloud standards are here today in one form or another, as evidenced by the Rackspace-backed OpenStack and by Eucalyptus. In fact OpenStack is quickly gaining momentum and warrants further investigation if you were previously unaware of it. In addition, several governance groups have appeared beyond the original Open Cloud Manifesto, such as the OMG Cloud Standards Group, each working to establish open standards.

So when looking at cloud technologies, keep in mind that proprietary solutions move you in the wrong direction. The days of proprietary vendor lock-in are not over. Open standards are out there, so make sure they're part of your architecture and part of your technology selection criteria.

Sunday, August 21, 2011

Learn From The Mistakes of Others

In my career I've had some difficult projects, but one in particular stands out as my Bataan Death March. If successful, the project would cement the fortunes of two rising executives, one client and one consultant. If failure prevailed, it would leave both executives' careers significantly diminished. The project was to establish a beachhead for integrating the marketing efforts of several autonomous business units, cleverly disguised as an Enterprise Content Management implementation and web presence refresh. Sales were expected to grow as customers realized they could buy an end-to-end solution from the same company, even though it might be composed of two or more brands. With such a vast scope, and all the political challenges I cannot detail, it's incredible the project was even undertaken (which it was, against my advice and the advice of several other experts). Here I will list the major lessons I learned and hopefully provide insight into what should have been done differently.

First Lesson: enterprise transformations are not a re-branding exercise
Day one I knew the project's scope did not reflect its challenges. The client was using the project as a way to effect an organization-wide transformation into a more customer-focused company. Transformations require the support of senior leadership, investment, and the acceptance of risk. Visions of a better future need to be seeded with the executives and given time to germinate, with the hope that passion will build within each leader to make the future happen. Instead we had the faint, background support of some in the C-suite, tepid support from IT at best, and a range from lackluster acceptance to abject opposition across the lines of business.

Second Lesson: any transformation requires a well-thought-through, defined strategy
Without the support of the executives, the vision was empty. As a result nobody wanted to participate in developing a strategy for the transformation. And without buy-in and a strategy, there was nothing in place to guide the project as it hit critical junctures. Often during discussions drilling down on a particular topic, nobody could articulate what needed to change, how, when, or who would own it moving forward. Without a target future state, there was nowhere for the new responsibilities emerging from the transformation to land. This caused significant friction: we were on the clock and expected to make progress, but everything we did depended on client resources providing guidance and making decisions.

Third Lesson: transformation success requires talent
All three client resources portrayed themselves as experts in the ECM space when, in reality, their knowledge was limited to content creation. In fact it was this arrogance that created the foundation for the technical challenges. And driving inexperienced, incapable people to work long hours cannot create success out of a vacuum. The client project leader was ineffective at providing input to the timeline and managing client tasks. Due dates were treated as meaningless and repeatedly missed without acknowledging the subsequent impacts to the timeline and cost of the project. The client subject matter expert was an off-site third-party consultant spread extremely thin, whose goals were not always aligned with our customer's. The client technical leader was a junior resource with no architecture, web, or application development experience who was quickly relegated to the role of go-fer.

Fourth Lesson: success does not end with the sale, it requires client satisfaction throughout the project
Based on the above items there was more than enough reason not to accept the risk of the project, or to run away when SOW negotiations continued for months after the project was already underway. But revenue generation was necessary as part of the business case for promotion, so the engagement lead ignored the concerns of his team.

Fifth Lesson: risk is inherent in every project, but it should be limited and managed
The engagement leader spent 1-2 days per week on the project but refused to listen to the highly experienced, dedicated directors on the ground every day. Blind to risks, within days he made four core decisions which pushed the project to the edge of the cliff for the remainder of its life:

  • dismissed concerns about the missing customer strategy and the client's inability to articulate requirements
  • used inexperienced, off-shore application developers for an on-site web overhaul and content management implementation
  • delayed engaging the ECM platform expert until the statement of work was signed, which didn't occur until four months into the project (two months too late)
  • accepted new responsibilities that required client level decisions

Sixth Lesson: communications must be transparent
The client had no reason to believe the decisions of the consultant threatened project success because they were never informed. In fact they were insulated by the engagement lead, who closely managed and controlled all information shared. Regardless of the issues raised by the consulting team, the message to the client was that everything was normal and manageable. As the growing gap between expectations and delivery became increasingly difficult to mask, the engagement lead pushed the team to be "creative" and find ways to make progress. Doing so quickly degenerated from difficult to impossible. The team looked busy, but very little of the work actually progressed the project.

Seventh Lesson: take responsibility
Among the directors we knew we were out of options. However, because the engagement lead was also the best friend of his own leader, and had driven a wedge between us and the client, we focused on trying to make it work. We should have documented our concerns and requested a Quality Assurance review. We had the right to, but doing so would also have put our careers in jeopardy, especially considering the reputation of the engagement lead. We shirked our responsibility and it came back to haunt all three of us repeatedly.

Eighth Lesson: know what you're getting into
To make it a perfect storm, the underlying technology had its own issues. The ECM platform used by the client had a unique architecture and implementation. Since many members of the consulting team had prior ECM experience, we used that experience to drive estimates. And because the platform was new and relatively unknown, it was difficult to find expertise. It took several months to identify an expert and then several more to get approval to bring the person on-site. And because the project had been budgeted to use off-shore resources, the cost of the expert pushed the project even further over budget. On-site, we worked with the ECM platform expert to address significant issues with the existing implementation. He was able to validate my architecture as the correct approach, a point contentious with the client experts from the start which had stymied progress. His documented expertise, supported by the platform vendor's own testimonial to his brilliance, wasn't enough. We had to make the changes we knew were right and then show the client why they were better than the client's way. Of course this helped neither our timeline nor our budget, but it was effective. We essentially discarded the existing slow, unsupportable solution for a streamlined, very fast, very extensible implementation. Luckily our new direction changed some of the requirements, enabling us to finally use those off-shore resources.

Unfortunately the client was in too deep to walk away or choose a different partner.  Follow-on projects made few changes for the better.  The engagement leader continued to focus on revenue at the expense of people and the truth.

From a deliverable point of view, the project is viewed as a huge success. It delivered a better mousetrap and, for the first time, made the company appear to be integrated. However, as predicted on day one, appearing integrated wasn't enough, so the net impact on sales was negligible. The project grossly overran its budget. Two of the consulting directors lost their jobs. Several on the implementation team were given unflattering ratings as a way to spread the blame. The executive on the client side was terminated and their team took a reputational hit.

And what happened to the consulting engagement lead?  Well of course he got that promotion, and a huge bonus.

Friday, July 29, 2011

Yes, I understand you have legacy applications

More than one CIO has explained to me that until someone comes to them with the solution for moving their legacy apps into the cloud, they're not interested. Now that approach I find interesting. First, why isn't the CIO looking for the solution instead of waiting for someone to bring it to them? Second, does this mean they will continue to create new apps, the legacy apps of tomorrow, in a non-cloud manner, perpetuating the problem indefinitely? And of course third, why would someone dig in their heels when it comes to changing the highest-cost, least flexible elements of everything under their control?

The good news is that legacy apps are not precluded from cloud computing. The easy part is dealing with the legacy apps of tomorrow: it's becoming a de facto standard that all new development leverage a cloud architecture. Cloud architecture is a refinement of internet architecture, which is a refinement of n-tier, client/server, and mainframe architectures. It's all about evolving as the technologies mature.

The hard part is what to do about the legacy apps of today. There are at least four options available: leave alone, enable with services, migrate key algorithms, and finally migrate outright. The leave-alone option does not necessarily mean nothing can be done. For some mainframe apps there are other options. For example, in my past we moved a large number of CICS applications from a mainframe to an HP 9000 server with the benefit of higher throughput. In fact Oracle's Exalogic literature specifically identifies the platform's ability to host CICS applications. And some applications can be moved with few changes, such as a Smalltalk application ported from a mainframe to blades at a large brokerage to speed up execution. Never say never.

Option two, enable with services, is a pretty popular approach with those who will not wait. A large insurance company is working hard to enable its mainframe applications, representing hundreds of millions of dollars in value, to talk on the enterprise service bus (ESB) via services (I know because I designed the architecture). It is possible to keep that large investment largely intact and just change how it interacts with other applications. Through the ESB, a SOA concept, the mainframe apps have to respond in seconds in order to deliver web page results to prospective customers in a reasonable time, defined as no greater than eight seconds. Not only does this approach work, it can work well, extending the life of applications to interact with the cloud at the software level while not participating at the hardware level (although I'll argue that with virtualization on the mainframe, they're just as much cloud as Zynga's games).

Migrating the algorithms within applications, rather than entire applications, can deliver a huge benefit. Think of a pricing, scoring, or inventory management application. The algorithm that helps management figure out what to do (the price, the score, or the shortages) can be moved to, or replicated in, a cloud model, enabling cloud applications to interact with the algorithm. To a cloud application that may be all that is needed, especially if the algorithm is exposed as a service. In good SOA design, services should give answers, not expose raw data, and often all that's needed is the answer.
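The "answers, not raw data" principle can be sketched in a few lines. Everything here is a hypothetical illustration (the function names, discount tiers, and request shape are invented for the example, not taken from any real system): the migrated algorithm is the asset, and the service facade returns only the answer, never the pricing tables behind it.

```python
# Sketch: a pricing algorithm lifted out of a legacy application and
# exposed as an answer-giving service. All names are hypothetical.

def legacy_pricing_algorithm(list_price: float, quantity: int) -> float:
    """The core logic worth migrating: simple volume-discount pricing."""
    discount = 0.10 if quantity >= 100 else 0.05 if quantity >= 10 else 0.0
    return round(list_price * quantity * (1 - discount), 2)

def price_quote_service(request: dict) -> dict:
    """Service facade: accepts a request and returns the answer (a price),
    not the raw discount rules or pricing tables behind it."""
    price = legacy_pricing_algorithm(request["list_price"], request["quantity"])
    return {"sku": request["sku"], "quantity": request["quantity"], "price": price}

# A cloud application only ever sees the answer:
quote = price_quote_service({"sku": "A-100", "list_price": 9.99, "quantity": 100})
```

The consuming application never learns the discount tiers; if the algorithm later moves again, only the facade's internals change.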

Finally, migrating applications wholesale to cloud makes sense for some mainframe apps. However, there is no requirement it happen today. The big issue has always been the business case: justifying the investment. CIOs have pushed application managers to put so much lipstick on these apps that they now resemble pigs. Bolt-ons for integration, data management, identity management, internet access, storage tiering, and systems management, combined with years of change requests, have created systems so complex they carry a legacy tax. I have worked with several companies where just touching a mainframe application carries a minimum $1,000,000 fee. That's a pretty high buy-in, yet CIOs wonder why they're spending 70% of their budgets on maintenance. That first $1M is non-value-added! CIOs need to expedite the sunsetting of these applications by building the case for change on both the cost of changing and the cost of not changing while continuing to accrue these legacy taxes. Sometimes you have to do what's right instead of what's easy, and this is it.

The challenge to building the business case is the expectation that because computer code isn't physical it's easy to change. I balk at that argument with facts: if that were so, why do software changes cost so much? I had a CIO challenge me to get the CFO to understand, so I asked the CFO how much money had been spent to update or modify his highest-producing factory, while the CIO was to determine how much had been spent to update or modify the most used application. The difference was staggering; the application costs were well over an order of magnitude higher although the plant was twice as old. So I asked the CFO, if he were to invest that amount of money into operations, how much of the plant would have to change? He said he'd never do it; it wouldn't make sense. Instead he would build and outfit a new plant. Aha! So why do we use a different approach with software?

I'll admit that was a somewhat special case, but in the end it's all about the foundation. Mainframe apps were built with a different architecture, intent, vision, everything. It's like investing in swords and armor and then being surprised when your enemy shows up in a Kevlar body suit with his M4 and SEAL Team Six tattoo. Ask the French how well their Maginot Line worked.

Wednesday, July 13, 2011

Money Makes the World Go Round

Like it or not, money is a central part of all of our lives. As a consultant, typically one of the first questions I'm asked is "How much?". I worked with a great client several years ago building their IT strategy to accommodate aggressive inorganic growth in the laundry room business. They operated laundry rooms at apartment and condo complexes and knew how to do it right. Their focus was on the management of the money, and they used it to diagnose any problems. First, they could tell you how many loads would be washed and dried based on region and the number of units. They knew how much soap, water, and electricity would be used. They could predict the required maintenance with amazing accuracy. Most importantly, they knew how much money to expect from every washer, every dryer, and every soap dispenser, every day, across their $1B organization. The EBITDA of each washer. The operating profit of each laundry room. Simply amazing!

Contrast this with the norm at Fortune 500 companies. It is a rare day when a client can tell me how much their technology costs per seat. I've been laughed at and asked why anyone would care. I've been told there are too many variables. Yet when I talk with CEOs about building a new business or offering a new capability, the first thing they ask is "How much?". As a guide I typically ask, "How much does technology cost per seat today, and how many people do you estimate being involved?". Applying an average to generate a back-of-the-napkin estimate works pretty well. So the CEO asks the CFO or CIO, who invariably don't know (and are not happy to be put on the spot), and invariably we end up taking the technology budget and dividing by the number of people. It's a start, but it's not transformative.
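The back-of-the-napkin arithmetic above is trivial to write down, which is rather the point; the dollar figures below are hypothetical illustrations, not client data:

```python
# Napkin estimate: average technology cost per seat times estimated headcount.
# All figures are hypothetical illustrations.

def cost_per_seat(it_budget: float, headcount: int) -> float:
    """The crude average: total technology budget divided by people."""
    return it_budget / headcount

def napkin_estimate(it_budget: float, headcount: int, new_seats: int) -> float:
    """First-order answer to the CEO's 'How much?' for a new capability."""
    return cost_per_seat(it_budget, headcount) * new_seats

# e.g. a $120M technology budget across 10,000 employees is $12,000/seat,
# so a 250-person new business unit is roughly a $3M technology question.
estimate = napkin_estimate(120_000_000, 10_000, 250)
```

That a spreadsheet one-liner is so often unavailable on demand is exactly the gap the rest of this post is about.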

******************************** HINT ******************************
For anyone considering a technology rationalization effort, knowing the detailed costs is a fundamental requirement yet derails more projects than any other single cause based on my experience.

While the average number gives you a starting point, it doesn't give you anything on which to base an insightful analysis. If that number were good enough, the next thing in line would be the contract; just sign on the bottom line. Yet what ends up happening is an exhaustive research project with consultants and IT finance gurus to get to the bottom of what everything really costs. Sometimes it becomes a project unto itself. Worst of all, return in a couple of years when the details have changed and you'll typically find you have to start all over again; nobody picked up the work. And I won't go into all the rocks you overturn exposing contracts being paid that are no longer required, missing maintenance on production servers, software licenses that are oversubscribed, and more that make you ask one question: "Who owns the money responsibility in IT?".

I cannot rationalize this lackadaisical approach by IT to activity-based costing. On the one hand, CIOs bemoan the increasing responsibility of IT under a shrinking or static budget. To me that means knowing the who, what, where, when, how, and why of every dime. Yet there's no transparency in the two-thirds of IT budgets which are operations focused. That tells me IT wants to deliver the message "we don't care what things cost", but somehow I've missed the legions of CIOs standing before their leadership to make that statement.

IT needs to know what things cost and track those costs for everything it has. There are some tremendous tools out there to help with this but, and perhaps it's just me, I've seen an amazing lack of use. This underlies the larger issue of collaboration between the business and IT. The business talks in dollars, capital versus operating expense. IT has to use the same terminology to be understood; however, that's very difficult when getting answers to simple questions like "What does that cost you per year?" or "What's the cost per seat?" takes days, weeks, or even months.

I've been advocating this approach since I ran the IT department for a small call center outsourcer in the mid 1990s. We calculated the cost of everything at various levels (seat, program, location, company), which fed into our estimation process. As a result, within a day or two of knowing a client's requirements, we could tell them the one-time startup fee and the ongoing fees to within +/- 10% accuracy. IT was a big part of our business, so we ran IT as a business.

This approach is now considered a best practice by MIT CISR researchers Peter Weill and Jeanne W. Ross, as outlined in their book IT Savvy. So if you don't believe me, believe them: it will pay dividends and is a fundamental component of the success of IT.

Monday, June 27, 2011

Where's the Business?

Despite all my attempts, it appears cloud computing continues to be a technology topic, at least in the Fortune 1000. Over the past year, secrets of cloud including Facebook's private cloud, Zynga's use of 12,500 Amazon EC2 instances, and Netflix's creation of their Chaos Monkey have made big-company executives ask, what are we doing? Hey Mr. CIO, are we doing these things? What are we doing that we can tout in Fortune magazine or at our next board meeting? Are we in the cloud? I wouldn't be surprised to find a few CIOs have broken down in heaving sobs or curled up on the floor with their thumbs in their mouths.

The preconceived notion is that cloud is all about TECHNOLOGY. Guess what, it's not! In reality each of those examples, and the ones CEOs and CFOs trot out to "motivate" their CIOs, has nothing to do with technology. Rather, each has to do with the business. It was the needs of the business which drove the application of the technology, not vice versa. Therefore the right question is not CEO to CIO but CIO to CEO: "Here is what cloud enables businesses to do differently. How can we take advantage of it?"

As clouds are being built, the enemy of efficiency, fiefdom building, is following right along. With the number of business-savvy CIOs in the Fortune 1000, I'm surprised at how few have engaged the business in a discussion outside of cost savings. Great, run applications and all at a lower cost. But that wasn't the goal of Facebook, valued at over $50B. Or of Zynga, the fastest growing game company on the globe. Or Netflix, who singlehandedly put Blockbuster out of business and forced the cable operators to take notice. Either the CIOs don't understand it, which I doubt, nobody will listen, or too many people are busy building fiefdoms to protect themselves and are therefore unwilling to share their knowledge.

I met the CTO of Grooveshark, a great web-based music delivery service, and we talked about their technology a bit. What interested me was his lack of interest. He had business problems to solve and that was his focus. He didn't want to talk technology.

I have said in front of clients, conferences, and on this blog that technology is the easy part of cloud. And I understand it's also the most visible thanks to the marketing of companies like Microsoft, Oracle and Google. However the value of cloud is in solving real business problems. The impact of cloud is in its transformative value; moving a company further into the digitization of its services.

Technology is interesting. Cost savings is great. But last time I checked companies are ranked by and investors take notice of revenue, profits, and growth.

Monday, June 13, 2011

So What Is the Public Cloud

Before going too far past one of my recent posts on the evil of private clouds, I suppose I have to define what I mean by private and public. I take a very simple point of view on whether a cloud is public or private: who owns the assets. In private clouds the assets are owned by the consumer; in public clouds the assets are owned by an entity other than the consumer (called a provider). In my view of the cloud world, a public cloud can exist within the four walls of an organization in the same way a private cloud can be hosted externally. I see no other way to differentiate, although many try:

Access: Many argue what makes a cloud public is who has access to it. However, the cloud doesn't really exist until something is provisioned. Once that thing (service, server, whatever) is provisioned you have a cloud, but you also necessarily have restrictions on who can access it. Amazon AWS, Microsoft Azure, all the vendors require logins, keys, and such to protect their systems. And it's always possible to open up the gates and allow external entities to access a cloud within a company's data center. So really the differentiation cannot be access.

Security: Some argue it's the security model and domain that make a cloud public. Public clouds are more accessible and therefore more at risk. I point to all the penetrations of large companies by hackers to demonstrate that, at best, there is no difference.

Multi-tenant: Many argue public cloud is multi-tenant. How is this different, whether it's my competition or my sales department, when the virtual machines and storage are all partitioned and sit in separate security domains? Again I see this as more similar between private and public clouds than different.

The benefit of differentiating cloud models by who owns the assets is that it plays into the focus of CFOs, covered in an earlier post, to drive IT to leverage other people's assets. IT needs to move from a capital- and asset-intensive environment to one where costs are expensed and vary with consumption.

Therefore in my humblest of opinions using my point of differentiation, everything will move to public cloud. Why? The benefit of leveraging other people's assets!

Sunday, June 5, 2011

Who's Holding Cloud Back? IT!

Fear, uncertainty, and doubt, also known as FUD. If it were up to me we'd change the name of IT to FUD, because modern technology organizations are more about telling you what you can't do and why than figuring out how to do what's needed and doing it. FUD-IT. Hmmm... have to remember that one.

Today we face a talent crisis in cloud computing that it appears almost nobody has been preparing for over the past five years. Go try to hire a cloud-experienced anything. Better yet, go hire a cloud-experienced Enterprise Architect, Strategist, or Program Manager. You'll have an easier time replacing your C-suite. The talent is already employed, and most companies are anchoring their talent to the ground with impressive golden handcuffs. Most popular right now are option grants, which seem to run anywhere from $50k/yr to $100k/yr for top talent. And there are cash bonuses paid out over time to entice people to stay, salary increases, the works. Hopefully my employer catches on...

For the past several months I've been blaming poor management by CIO's and their HR counterparts for not forecasting the need and establishing internal training and development programs to "grow your own". Although I'm not changing that position I realized after a few conversations just how formidable a problem such a plan would have to overcome. Developing cloud talent requires three key ingredients:
  1. Willingness to learn
  2. Desire to be broadly experienced rather than a specialist
  3. Investment
  4. Appetite for risk

Oops, guess that's four ingredients. CIOs can be held accountable for the bottom two, but number one is squarely on the shoulders of the IT staff. Too many IT people today, I'll forecast the majority, either don't have, have lost, or never had a willingness and desire to learn. They learn out of necessity, not desire, and as a result the majority have large blind spots. For some reason they resent the pace of technical innovation and the need to keep abreast of it. Perhaps some believe you have to "relearn everything" every four years and figure, what's the point? (Note: you don't have to relearn everything unless you stay a technician your whole life; everyone else realizes the more things change, the more they stay the same.)

People are quick to give lip service when asked about being trained in cloud. We get several people who sign up for lunch-and-learn sessions and attend one-hour presentations. However, the number, more importantly the percentage, who have stepped up and said "Hey, I want to learn about cloud" is so dismally small that when I ask HR they reply, "Eh? No, nobody's asking about training". Talking with friends and colleagues, we estimate about 2-3% of IT staff have demonstrated, not talked about but executed on, a desire to learn about cloud computing. My favorite reason people give when asked why they aren't doing it on their own? Not enough time. Hmmm. Okay. Perhaps they don't realize that Detroit didn't have enough time to learn about economical cars, or Pittsburgh about producing low-cost steel, or Las Vegas about the peril of building more homes than there are purchasers. I'll put a dollop of blame on IT and HR for not encouraging the training, in fact making it mandatory in some way, and for not financing part of it by paying for courses, giving people time, or both.

When it comes to item two, I see it as a 70/30 split: 70% on the shoulders of IT staff who want to specialize and 30% on IT/HR leadership for encouraging specialization for the past decade. I love when I talk to companies about their biggest problems in IT and they go on about changing technologies, business people who "don't get it", etc., yet when I ask them about their process for finding the root cause of an issue, it invariably involves a team of 25 people. Why? Because everyone is a specialist, so there's nobody who gets it from end to end. And of course it takes forever to find the actual problem because they all speak different languages. It's a problem the size of the Tower of Babel, but one easily swept under the rug because it's only visible to IT.

So we have an under-educated group of technology consumers (we can't call them practitioners until they've done it successfully several times) all scurrying around either telling everyone how lousy cloud is or trying to implement it without any help. What drives their thoughts? FUD. It's not just the naysayers; they're only the most obvious. Fear is the genesis of private clouds. Uncertainty keeps public companies from researching public cloud storage. And doubt keeps companies from applying cloud to production systems.

People filled with FUD need to remember this axiom: lead, follow, or get out of the way!

Monday, May 2, 2011

And to those just tuning in - Private Cloud is a bad thing

Private Cloud, in itself, is not bad. Rather, what I find is that companies are willing to blindly follow the Private Cloud mantra while heaping their own expectations of an Amazon AWS or Microsoft Azure solution on top. As a result, the deliveries fall well short of the perceived goals, leaving companies in a no-win situation. The cause of this problem is the vendor push behind Private Cloud solutions, which naturally want to sound like a panacea for taming the infrastructure beast. In reality Private Cloud is in its infancy, about 5-7 years behind Public Cloud in maturity.

Private Cloud implementations often mask or ignore the following:
  • Enterprise Cloud Strategy development
  • Governance
  • Migration costs
  • Application development costs
  • Learning curve
  • Disaster recovery effectiveness
  • Success metrics
  • Accounting, Tax, and Finance change
  • Legal impacts
  • Data integration
I can understand the lure of a "private" cloud: COST SAVINGS! Who can shake a fist at that? IT is expensive, not as expensive as the manual labor it replaces, but more expensive than free. Executives would prefer an IT that costs nothing to maintain where more can be done with less until an infinite workload is accomplished with nothing.

A cost savings focus is a myopic view of the future which necessarily puts more important agendas like growth on the back burner. My father, a retired Fortune 10 executive, always says "You can't cut your way to growth". Yet today the vast majority of Private Cloud implementations are driven by expected cost savings. So what do I believe is the right view? Cloud as a revenue generator, a driver of new business opportunities by engendering models not realizable with current IT structures. But of course this view threatens the very safe, reliable foundation of today's IT.

So let me point out how Private Cloud falls short on cost savings:

First, private cloud increases risk by moving companies to adopt a single vendor's proprietary technology. No vendor today has a complete private cloud in a box or offering. Rather, vendors have gathered together technologies from across their portfolios to provide a glimpse of cloud, but what they deliver is really just virtualization. Virtualization is good, but it's not cloud. I have yet to see a private cloud vendor offering that addresses the 70% of cloud that virtualization does not: software services, SLA-based management of applications, automated scaling of resources based on application use and need, automated recovery of services, etc. Some of these things you can do with those vendor solutions, but it's an afterthought, and what is offered is such a weak solution few if any have implemented it. Ask the vendor for a list of installations and it looks impressive. Ask the vendor for the list of clients who run 100 or more instances and the sheet staring back at you will be blank. Vendors are following the analysts too; if they could see, they'd be leading rather than following.
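At its core, the SLA-based automated scaling that virtualization alone doesn't provide is just a control loop over application metrics rather than hypervisor metrics. Here's a minimal sketch of the idea; the function name, thresholds, and latency SLA are all hypothetical, not any vendor's API:

```python
# Minimal sketch of an SLA-driven autoscaling decision.
# All names and thresholds are illustrative, not any vendor's API.

def desired_instances(current: int, avg_latency_ms: float,
                      sla_latency_ms: float = 200.0,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Scale out when latency breaches the SLA, scale in when well under it."""
    if avg_latency_ms > sla_latency_ms:            # SLA breach: add capacity
        target = current + max(1, current // 2)
    elif avg_latency_ms < 0.5 * sla_latency_ms:    # plenty of headroom: shrink
        target = current - 1
    else:
        target = current                           # within band: hold steady
    return max(min_instances, min(max_instances, target))
```

The point is that the scaling decision is driven by an application-level SLA (latency, here), which is exactly the layer the vendor bundles leave as an afterthought.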

Second, Private Cloud pretends to be a holistic solution but it's not. Again, nothing is provided for migrating applications into the cloud, and without applications all private cloud does is create larger silos. Most often there is no consideration for future integration with public cloud, locking out the opportunity to access the lowest-cost resources available. In addition, Private Cloud requires a company to grow an expertise base that could as easily be outsourced to providers whose implementations, from an availability, security, and performance point of view, are generally accepted as better. Once the cloud is up and running it should take very little effort to operate it; otherwise it's not a cloud. The money should instead be spent building up application-layer expertise in cloud, which is much more expensive.

Third and most important, Private Cloud failures, and there are plenty, give cloud a bad name, creating conflict for future adoptions and potentially alienating companies from one of the most valuable technological movements since the development of the relational database or the Internet. Companies are already having to rationalize cloud solutions that are only five and sometimes only three years old. Nobody wants to present to the CIO that a new solution is already in jeopardy. When I ask people why they think this happens, almost 100% say it's because of undetected security issues, which is incorrect. The real reason is the unrecognized need, complexity, and cost of integration.

So to those who are learning I say welcome! To those who are surprised to learn that private clouds could be a bad thing, please do your own analysis. Cloud computing is one area where you need to do your research not only on the topic but also on those on whom you rely for knowledge.

Monday, February 21, 2011

One of the Hidden Challenges of Cloud - the Blank

I spent many years as a developer in embedded systems, working on control systems to run everything from washing machines and ice makers to engines and transmissions. I learned in that time that making a product great required people who were able to foresee and avoid issues, and people who were able to debug and fix problems. As I transitioned into enterprise IT I found the same rules applied. That combination of talent is behind all great solutions, from bridges and governments to iPods and operating systems.

Considering the technological breadth and depth of cloud, which crosses every technology boundary and impacts elements in every layer, no one person can be a one-stop solution shop. It will take time and experience, successes and failures, before even the smartest gain the insight to know instinctively what works and what doesn't. And therefore, like all grand new experiments, we need to start off with the soundest foundation we can, one which gives us the greatest flexibility to adjust as we learn.

Inherent in this approach is having two talents at our disposal: the cloud architect who views cloud from the business point of view as a technology implementation of a business process; and the blank. It's not a sexy term, the blank, but it's very appropriate. Most organizations have architects, or at least an architect, who is responsible for ensuring technology is implemented logically according to a master design which aligns with the business strategy. These architects are always focusing on the future with limited feedback on how things are working today. However I have yet to find a company or person whose sole job it is to identify and resolve issues and inefficiencies; someone focused on today and how to make things better in the future.

Today when an issue arises it goes to a committee of people: application and data gurus, infrastructure engineers, architects, security shamans. There is no role responsible for capturing a problem, identifying the cost of the problem, then resolving the problem if it's worth resolving. As a result the tools we employ to identify and debug problems are deplorable. Our infrastructure teams have gone the furthest down this route, in both methodology with ITIL and tools including instrumentation and health monitoring. What about the other three fourths of the world: applications, data, and integration?

Cloud computing only works when the user uses the application. Whatever process the user wants to execute must come to a complete and acceptable resolution for the user to continue using the application. Cloud solutions are inherently complex, leveraging a dynamic set of services across a dynamic infrastructure with a combination of static and potentially dynamic data sources. Yet for the most part the only tools available for tracing the cause of a problem end at the domain boundary. If the process doesn't end at the boundary, neither can the debugging. And if we can't resolve the problem quickly the application provides no value to the user, and user tolerance continues to dwindle.
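One concrete way to keep debugging from stopping at the boundary is a correlation ID minted once at the edge and passed along with every hop, so a single search over the combined logs reconstructs the full request path. A toy sketch of the pattern; the service names and log format are invented for illustration:

```python
# Sketch of cross-boundary tracing via a correlation ID.
# Service names and the log format are hypothetical.
import uuid

LOG: list = []

def log(service, correlation_id, message):
    LOG.append(f"{service} [{correlation_id}] {message}")

def storage_service(correlation_id, key):
    log("storage", correlation_id, f"read {key}")
    return f"value-of-{key}"

def app_service(correlation_id, request):
    log("app", correlation_id, f"handling {request}")
    # The ID crosses the service boundary with the call, so one grep
    # over the combined logs reconstructs the whole request path.
    return storage_service(correlation_id, request)

def handle_request(request):
    cid = str(uuid.uuid4())          # minted once, at the edge
    log("gateway", cid, f"received {request}")
    return app_service(cid, request)

result = handle_request("user-profile")
```

Trivial as it looks, this is the difference between a one-line log query and a 25-person conference call: every domain's tooling speaks the same identifier.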

Today we rely on the hero model: that really smart person or group of people who combine intelligence and experience to swoop in and save the day. In the cloud we need to reset that experience to near zero while at least doubling the breadth of the issues. Our only option for survival is to start emphasizing the need for robust tools. However, without a focal point, a role whose success depends on such tools, the tools will continue to stop at the domain boundaries: infrastructure, applications, data, and integration.

We need the blank, or perhaps a better name is Quality Architect: a role across domains responsible for ensuring our systems can be debugged and our knowledge is increased and captured, and for being the person in charge when a serious issue arises.

Today it's a blind spot we survive by luck and hard work. In the cloud, with its ability to rapidly scale and morph, that blind spot will become a killing field with the careers of well-intentioned but unsupported IT experts littering the ground.

Wednesday, February 9, 2011

The CIO, the train, and the tracks

In today's IT world the center of the universe is the CIO. They are tasked with ensuring IT is aligned with and delivers on the business mission. Today's CIOs are very business savvy, with experience across multiple operations and support units. Often they know how the business operates, the underlying processes and compliance requirements, better than the line-of-business executives and sometimes better than the COO. However, what has been lost over the past 15 years is a fundamental understanding of technology. I recently had a CIO tell me at a conference, "This job is great because I don't have to understand any of the technology stuff; I have a staff to do that. If I had to understand it all I'd be out of a job." What she doesn't realize is that she has been laying the tracks to her own destruction for years by not learning about the technology, and soon the Cloud Computing train is going to run her down, run her over, and keep on running.

A CIO needs to be a balance between business acumen and technology expertise. They don't need to be an expert in either, but they need to blend in on both sides, somewhat like a chameleon. How many COOs know how to operate a stamping press, extruder, or fork truck? Probably more than you realize, but certainly not all. However, they know what the equipment does, why it does it, its value in the production chain, and often how it operates. I grew up the son of an operations executive in the food industry, and I can tell you he knew how every part fit together. Why? Because when bad things happened he invariably got the phone call. Many of our new CIOs have never spent a day in IT prior to becoming an executive. Nobody who hasn't served in an IT operations role appreciates the balance between the portal, application, enterprise service bus, and database and the underlying utilities, operating systems, processors, storage arrays, networks, and security appliances. And think of everything I'm leaving out: architecture, quality assurance, requirements, SDLC, etc., etc., etc.

People today see technology as so complex and difficult to understand, and yet they use it in their daily lives without a moment of pause. The reality is technology is not complex in its implementation. The complexity is in developing new technologies, a business the vast majority of CIOs are not and never will be involved in. So this false wall they hide behind is inexcusable. They should be technology experts as much as business experts. They should be held accountable for the architecture as much as the budget. They should be able to explain how a database operates as easily as how depreciation is applied to hardware.

We need to raise the bar!

The train is already on the tracks and it's accelerating. I predict cloud computing will end the careers of 50% or more of the current Fortune 1000 CIOs. Why? For several reasons:
  1. Some will be held accountable for not researching and implementing these technologies when they started appearing in the 2001-2005 timeframe. Think of all the lost value that accumulated over the five year period from 2005 to 2010 as companies ignored cloud.
  2. Many will simply be outpaced by their younger, nimbler competition who understand that CIOs need a balance of skills and can operate as their equals on the business side while still having deep technology knowledge and experience.
  3. Most will be held accountable for a series of failed implementations because they waited too long to develop a cloud strategy, or never did. So many of the implementations today have been built to a vision with no supporting strategy. Already companies are scrambling to remedy siloed cloud implementations and rationalize multiple instances of the same cloud service.
Whether or not my prediction comes true is largely based on how the media covers cloud, how well CEOs hold their CIOs accountable, and how vocal shareholders become. CIOs already have short tenures, so my belief is it won't take much to tip the balance.

If I were a CIO I'd be sure to get educated, identify a successor, and get my finances in order.

Sunday, February 6, 2011

Cloud is a Good Thing - Vendor Lock-in Is Not

I equate vendor lock-in to anti-cloud. The entire purpose and value proposition of cloud rests on it having dynamically defined borders. Such an open architecture requires, by nature, strong standards to facilitate the interchange of data, processes, and policies. However, underscoring the lack of available expertise in the market and the willingness of leaders to attack without understanding, companies are charging headlong with short-sighted tactics (I dare not call them strategies; the amount of thinking supporting each tactic is barely enough to call it more than a whim!). Companies are simply mortgaging their future for benefits today.

The landscape today is Private Cloud. I have yet to see interoperability standards proposed or supported by a Private Cloud vendor, whether a software provider such as VMWare or a service provider such as Amazon. It was only a few years ago when the cost of virtualization software was so high it was cheaper to add physical servers. Adopting a proprietary platform today is likely to lead not only to higher overall costs, but to solution isolation without the adoption of interoperability standards.

As is most often the case, I believe it will fall to the open source world to define, build, and adopt open standards. Xen came into existence to duplicate VMWare without the high cost, bloat, and closed architecture. Look at what Amazon AWS has done with Xen! A full slate of tools is in development in the open source community, along with improvements in open source operating systems, which will subsume many of the capabilities paid for in VMWare, Hyper-V, and other solutions today. What today is a service or software will become a component and foundation.

I'm not ready to short the stocks of EMC (VMWare's parent), Microsoft or Citrix (owner of Xen), but I'm watching closely!

Monday, January 31, 2011

Private Cloud is Not Just Infrastructure

I understand the push for Private Clouds. Major technologies incubate inside IT shops before they're moved to hosting providers and outsourcers, so it's no surprise that to gain traction cloud needed its internal equivalent, and hence Private Cloud. The cynical side of me also recognizes that Private Cloud kowtows to the security wonks who see their jobs as preventing rather than enabling interactions. It also recognizes that no CIO wants to go to the CEO and explain that the past several years' investment in data centers now needs to be divested as everything moves to the cloud. As leaders continue to learn about the economics of public cloud and how they cannot, simply cannot, be replicated in a Private Cloud, they've now started to reply, "Hey, when I said Private Cloud I didn't mean it had to be in my data center. It could be outsourced." I understand it's hard to hit a moving target.

What everyone seems to be forgetting is that private cloud, like any cloud, is half infrastructure virtualization and process automation, and half application virtualization and process automation. A cloud does not exist with only one part. Yet reviewing the private cloud offerings of several software vendors such as VMWare and VCE shows their definitions are woefully short of a full cloud.

What about application service provisioning? What about service discovery? What about service level management? What about session availability and continuity? None of these topics are covered by private cloud software vendors.
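To make one of these concrete, service discovery is conceptually simple: a registry that maps service names to endpoints and only returns entries that have recently proven themselves alive. A toy sketch of the idea; the class, method names, and TTL are hypothetical, not any vendor's product:

```python
# Toy service registry illustrating discovery with liveness checks,
# the kind of capability missing from infrastructure-only "clouds".
# All names and the TTL are hypothetical.
import time
from typing import Dict, Optional, Tuple

class ServiceRegistry:
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        # name -> (endpoint, time of last heartbeat)
        self._services: Dict[str, Tuple[str, float]] = {}

    def register(self, name: str, endpoint: str) -> None:
        self._services[name] = (endpoint, time.monotonic())

    def heartbeat(self, name: str) -> None:
        endpoint, _ = self._services[name]
        self._services[name] = (endpoint, time.monotonic())

    def discover(self, name: str) -> Optional[str]:
        """Return the endpoint only if the service has checked in recently."""
        entry = self._services.get(name)
        if entry is None:
            return None
        endpoint, last_seen = entry
        if time.monotonic() - last_seen > self.ttl:
            return None                  # stale: treat as unavailable
        return endpoint

registry = ServiceRegistry(ttl_seconds=30.0)
registry.register("billing", "10.0.0.5:8443")
```

Even this toy shows why a pile of virtual machines isn't a cloud: without a discovery and service-level layer, applications have no way to find each other or to know what is actually healthy.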

Where there is value, though it's more of a "Private Cloud nested underneath Public Cloud" flavor, is with the Private Cloud solution providers (CSC, Amazon, etc.). They offer an entire stack, PaaS sitting on IaaS, on which one can deploy a private cloud.

A private cloud that only includes infrastructure is not private cloud; it's highly virtualized infrastructure. And sometimes it's not even highly virtualized...