Wednesday, August 31, 2011

Time Value of Data

Value is an inherent way humans quantify entities. Whether thoughts or products, ideas or tools, their value to us heavily influences how our brains classify them. Forks and spoons are of higher value than sporks, cars over the horse and buggy, computers over typewriters, and so on. One early lesson we study in economics is that the value of money is not constant. A dollar today is, under normal economic conditions, worth more than a dollar tomorrow. We call this the Time Value of Money to show that the timing of earnings and payments should be optimized to derive the greatest overall value.

To me it's a very gentle leap of faith to combine the Time Value of Money with the idea that data is the new currency: I call it the Time Value of Data. Today we assume data has the same value on day one as it does on day infinity. For some data that's true. However, I argue that for the vast majority of data, its value starts at a peak and rapidly falls toward zero, where it stays for most of its life until deletion (whether by intent or accident). For example, how much I earn per hour today is something I safeguard; how much I earned per hour as a reservationist at Disney World in 1990 is pretty unimportant to me (the answer: $6.35/hr, which was $1/hr better than the theme park jobs because of my typing skills).
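To make the decay claim concrete, here's a toy sketch. The exponential curve and the 30-day half-life are purely illustrative assumptions on my part, not measured facts about any real dataset:

```python
def data_value(initial_value, age_days, half_life_days=30):
    """Toy model: data value decays exponentially from its day-one peak.

    The 30-day half-life is an illustrative assumption, not a measurement.
    """
    return initial_value * 0.5 ** (age_days / half_life_days)

# A record "worth" $100 on day one:
print(round(data_value(100, 0), 2))    # day 0:   100.0
print(round(data_value(100, 30), 2))   # day 30:  50.0
print(round(data_value(100, 365), 2))  # a year later, effectively zero: 0.02
```

Any curve with a fast early drop tells the same story; the point is that the decay happens in weeks, not decades.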

When it comes to data we are pack rats, loading the data onto our computerized pack mules. However, one has to consider the security, efficiency, and monetary costs of this approach. From a security point of view, should we protect data of low value with the same vigor as data of high value? Instinctively I say no, but I can find no examples in the wild that show this rational thinking in practice. Valuing all data equally also robs us of a valuable metric for finding pertinent data: we have to look through all the muck to find the golden data we need, which impacts our ability to answer questions efficiently. And lastly, there is a cost to storing data when its value is at or near zero. Is my life going to be improved by knowing the details of my mortgage from 2000 when I've refinanced twice since then? Not likely, but keeping those papers forced me to purchase yet another filing cabinet.

I don't have the answers; at this point I can only ask the question. I'm not sure how we define the value of data or how we differentiate the levels of value. I do, however, believe this is part of a larger issue of organizations not rationalizing the data they keep and use. It's easy to calculate the cost side; it's difficult to define the value equation. For money we use interest rates and years in a well-defined mathematical equation to determine the current or future value of a dollar. A dollar is a single entity, while data is all over the board, so they are not equivalent. However, I do believe in one rule, which I use at home: if I haven't touched something in six months it goes into the attic, and if it's not used within a year I have to question its value. There are some pieces of data we value forever, such as baby photos. But like digital medical images, I'd bet 98% of data is never touched again after its first few weeks.
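For contrast, the money-side equation alluded to above really is that simple. A quick sketch of the standard compound-interest formulas (the 5% rate and ten-year horizon below are arbitrary example numbers):

```python
def future_value(pv, annual_rate, years):
    """Compound-interest future value: FV = PV * (1 + r)^n."""
    return pv * (1 + annual_rate) ** years

def present_value(fv, annual_rate, years):
    """Discount a future dollar back to today: PV = FV / (1 + r)^n."""
    return fv / (1 + annual_rate) ** years

# At 5% annual interest over ten years:
print(round(future_value(1.00, 0.05, 10), 2))   # a dollar today grows to 1.63
print(round(present_value(1.00, 0.05, 10), 2))  # a dollar in ten years is worth 0.61 now
```

Two inputs, one closed-form answer. Nothing remotely that clean exists yet for valuing data, which is exactly the gap.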

But today we pay to store it for its entire lifetime.

Monday, August 22, 2011

The Role of Open Standards in Cloud Computing

Over the past 12 months I have increasingly found that what I feel is an obvious argument is mired in marketing hyper-speak. Cloud computing is predicated on open standards. There, it's been said as clearly as I can state it. Now to answer the question: why?

Probably the easiest way to answer this is by analogy. Remember the Betamax/VHS wars? How about CD versus digital tape? Blu-ray versus HD DVD? The winner in each category had at least two of three benefits: first, lower cost; second, media availability, which is why anyone bought a device in the first place; third, ubiquity, meaning availability in multiple formats. Believe it or not, technical superiority was not a driving factor. Beta was better than VHS, and HD DVD better than Blu-ray. These three elements taken together drove market share, which in turn determined who won the war and thus became the de facto standard.

So we've established that people like standards. Being the de facto standard meant being cheaper, having more media available, and being available in more form factors, which in turn led to even cheaper devices... see the effect? What's amazing is that rather than fighting it out in the market to create a de facto standard, a standard can be set up front and thus drive the development of better solutions. (NOTE: open standards are simply standards for which there is no licensing fee. For example, using RAMBUS or JPEG requires a licensing fee, and hence there's been a serious withdrawal from using either as a standard.)

The goal of open standards is to drive interoperability from day one, maximizing flexibility while keeping costs down. In turn, interoperability combines the efforts of all market entrants toward the three goals of driving down costs, increasing options, and enabling the development of new form factors. In fact, open standards are everywhere. Your car has OBD-II, On-Board Diagnostics 2nd Generation, which every automaker must comply with so that a mechanic can connect a device and get a little help from your car in diagnosing a problem. HTML is another great example, and there are plenty of others, from email (SMTP, POP, IMAP) to managing IT (ITIL v3). In fact many of the technologies underlying cloud computing are themselves based on open standards. If we want cloud computing to be as ubiquitous and consumable as we all say, then we need open standards upon which to base interoperability. An isolated cloud is, in essence, nothing more than a large silo. Clouds need virtual boundaries which can be traversed easily and quickly as needs dictate.

So the real question is why aren't there open cloud interoperability standards?

The short answer is that there are, and more are coming. Standards evolve over time through learning. The original answer was that the early advocates of cloud wanted to recoup their investment and make fat profits, which is more easily done with proprietary technologies. However, almost all of them come around eventually. Several technology companies are very open to open standards, including Rackspace, CA, AT&T, IBM, and HP. Others are coming late to the game, like Amazon, VMware, and Microsoft, and still others may never join the team, such as Google and Salesforce.com/Force.com.

Open cloud standards are here today in one form or another, as evidenced by Rackspace's OpenStack and Eucalyptus. In fact OpenStack is quickly gaining momentum and warrants further investigation if you were previously unaware of it. In addition, several governance groups have appeared beyond the original Open Cloud Manifesto, such as the OMG Cloud Standards Group, each working to establish open standards.

So when looking at cloud technologies, keep in mind that proprietary solutions move you in the wrong direction. The days of proprietary vendor lock-in are not yet over. Open standards are out there, so make sure they're part of your architecture and part of your technology selection criteria.

Sunday, August 21, 2011

Learn From The Mistakes of Others

In my career I've had some difficult projects, but one in particular stands out as my Bataan Death March. If successful, the project would cement the fortunes of two rising executives, one client and one consultant. If failure prevailed, it would leave both executives' careers significantly diminished. The project was to establish a beachhead for integrating the marketing efforts of several autonomous business units, cleverly disguised as an Enterprise Content Management implementation and web presence refresh. Sales were expected to grow as customers realized they could buy an end-to-end solution from the same company, even though it might be composed of two or more brands. With such a vast scope, and all the political challenges I cannot detail, it's incredible the project was even undertaken (it was, against my advice and the advice of several other experts). Here I will list the major lessons I learned, and hopefully provide insight into what should have been done differently.

First Lesson: enterprise transformations are not a re-branding exercise
Day one I knew the project's scope did not reflect its challenges. The client was using the project as a way to effect an organization-wide transformation into a better, customer-focused company. Transformations require the support of senior leadership, investment, and the acceptance of risk. Visions of a better future need to be seeded with the executives and given time to germinate, in the hope that passion will build within each leader to make the future happen. Instead we had the faint, background support of some in the C-suite, tepid support from IT at best, and a range from lackluster acceptance to abject opposition among the lines of business.

Second Lesson: any transformation requires a well-thought-through and defined strategy
Without the support of the executives, the vision was empty. As a result nobody wanted to participate in developing a strategy for the transformation. And without buy-in and a strategy, there was nothing in place to guide the project as it hit critical junctures. Often, during discussions drilling down on a particular topic, nobody could articulate what needed to change, how, when, or who would own it moving forward. Without a target future state, there was nowhere for the new responsibilities emerging from the transformation to land. This caused significant friction: we were on the clock and expected to make progress, but everything we did depended on client resources providing guidance and making decisions.

Third Lesson: transformation success requires talent
All three client resources portrayed themselves as experts in the ECM space when, in reality, their knowledge was limited to content creation. In fact it was this arrogance that laid the foundation for the technical challenges, and driving inexperienced, incapable people to work long hours cannot create success out of a vacuum. The client project leader was ineffective at providing input to the timeline and managing client tasks; due dates were treated as meaningless and repeatedly missed without acknowledging the subsequent impacts to the timeline and cost of the project. The client subject matter expert was an off-site third-party consultant, spread extremely thin, whose goals were not always aligned with our customer's. The client technical leader was a junior resource with no architecture, web, or application development experience who was quickly relegated to the role of a go-fer.

Fourth Lesson: success does not end with the sale, it requires client satisfaction throughout the project
Based on the above, there was more than enough reason not to accept the risk of the project, or to walk away when SOW negotiations continued for months after the project was already underway. But revenue generation was necessary as part of the business case for promotion, so the engagement lead ignored the concerns of his team.

Fifth Lesson: risk is inherent in every project, but it should be limited and managed
The engagement leader spent 1-2 days per week on the project but refused to listen to the highly experienced, dedicated directors on the ground every day. Blind to risks, within days he made four core decisions which pushed the project to the edge of the cliff for the remainder of its life:

  • dismissed concerns about the customer's missing strategy and inability to articulate requirements
  • used inexperienced, off-shore application developers for an on-site web overhaul and content management implementation
  • delayed engaging the ECM platform expert until the statement of work was signed, which didn't occur until four months into the project (two months too late)
  • accepted new responsibilities that required client level decisions

Sixth Lesson: communications must be transparent
The client had no reason to believe the decisions of the consultant threatened project success because they were never informed. In fact they were insulated by the engagement lead, who closely managed and controlled all information shared. Regardless of the issues raised by the consulting team, the message to the client was that everything was normal and manageable. As the growing gap between expectations and delivery became increasingly difficult to mask, the engagement lead pushed the team to be "creative" and find ways to make progress. Doing so quickly degenerated from difficult to impossible. The team looked busy, but very little of the work actually progressed the project.

Seventh Lesson: take responsibility
Among the directors, we knew we were out of options. However, because the engagement lead was also the best friend of his leader, and had driven a wedge between us and the client, we focused on trying to make it work. We should have documented our concerns and requested a Quality Assurance review. We had the right to, but doing so would have put our careers in jeopardy, especially considering the reputation of the engagement lead. We shirked our responsibility, and it came back to haunt all three of us repeatedly.

Eighth Lesson: know what you're getting into
To make it a perfect storm, the underlying technology had its own issues. The ECM platform used by the client had a unique architecture and implementation. Many members of the consulting team had prior ECM experience, and we used that experience to drive estimates, but it didn't translate well to this platform's quirks. And because the platform was new and relatively unknown, it was difficult to find expertise. It took several months to identify an expert and then several more to get approval to bring him on-site. And because the project had been budgeted to use off-shore resources, the cost of the expert pushed the project even further over budget. On-site, we worked with the ECM platform expert to address significant issues with the existing implementation. He was able to validate my architecture as the correct approach, a contentious point with the client experts that had stymied progress from the start. His documented expertise, supported by the platform vendor's own testimonial to his brilliance, wasn't enough: we had to make the changes we knew were right and then show the client why our way was better than theirs. Of course this didn't help our timeline or budget, but it was effective. We essentially discarded the existing slow, unsupportable solution for a streamlined, very fast, very extensible implementation. Luckily our new direction changed some of the requirements, enabling us to finally use those off-shore resources.

Unfortunately the client was in too deep to walk away or choose a different partner.  Follow-on projects made few changes for the better.  The engagement leader continued to focus on revenue at the expense of people and the truth.

From a deliverable point of view, the project is viewed as a huge success. It delivered a better mousetrap and, for the first time, made the company appear to be integrated. However, as predicted on day one, appearing integrated wasn't enough, so the net impact on sales was negligible. The project grossly overran its budget. Two of the consulting directors lost their jobs. Several on the implementation team were given unflattering ratings as a way to spread the blame. The executive on the client side was terminated, and their team took a reputational hit.

And what happened to the consulting engagement lead?  Well of course he got that promotion, and a huge bonus.