Friday, October 30, 2009

Who Will Be The Expert?

I've noticed a disturbing trend as we work to squeeze more and more computing technology graduates out of our universities: it appears we're dumbing the students down. Three quick examples. I have yet to meet a Generation Y or Z graduate who knows:

1. how to read and resolve a stack dump

2. how databases store data on the hard drive

3. how a network sends messages from one computer to another

Now I know there are some of you out there who do know; unfortunately, I haven't met you. And I run in a circle of technology consultants; if I spent my time at Intel I'm sure more would know. Yet I work with graduates from our top technical schools: Michigan, Cornell, MIT, Virginia Tech, Stanford, Illinois, and Carnegie Mellon, to name a few. Why do my expectations matter? Here are the answers:

1. The development consultant could not resolve a stack dump, and didn't try, because it was only happening on one developer's machine, so the team reformatted and reinstalled the software. The actual cause was an automated update that had changed the Java Virtual Machine. When the client moved to production the application wouldn't work, because the operating system came with the later JVM and could not be downgraded, and downgrading the operating system meant losing crucial updates that improved database performance. Had they traced the stack dump they could have learned the cause months in advance and worked out an alternative, instead of calling in all hands for several days and delaying their launch AFTER their public announcement.

2. The data architecture consultant did not understand the relationship of the files within a database, so she incorrectly directed the support group to back up only some of the data files, believing the rest were "configuration files and stuff". It turns out those files are pretty important for interpreting the data, and as a result, when the primary host went down hard, the data was unrecoverable because it was incomplete.

3. The security consultant didn't understand that Ethernet delivers frames to every interface on the segment, so he believed the connection between two machines was a point-to-point connection and therefore impenetrable. About 30 seconds of research would have taught him about promiscuous mode (a sketch of just how little effort it takes follows this list). It took the client many weeks and tens of thousands of dollars to define and implement a new architecture (nobody would take my advice to drop in SSL).
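To put some weight behind that third example: promiscuous mode is not exotic. Here is a minimal sketch, in C against libpcap, of opening an interface in promiscuous mode and counting every frame the NIC sees; the interface name "eth0" is a placeholder for whatever the machine actually has.

```c
/* Minimal libpcap sketch: open an interface in promiscuous mode and report
 * every frame the NIC sees, not just frames addressed to this host.
 * "eth0" is a placeholder interface name. Build with: gcc sniff.c -lpcap */
#include <pcap/pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
    (void)bytes;                              /* payload parsing omitted */
    unsigned long *count = (unsigned long *)user;
    printf("frame %lu: %u bytes captured\n", ++*count, hdr->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    unsigned long count = 0;

    /* The third argument (1) is what turns on promiscuous mode. */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }

    pcap_loop(handle, 100, on_packet, (u_char *)&count);  /* grab 100 frames */
    pcap_close(handle);
    return 0;
}
```

Run it with sufficient privileges on a shared segment (or a mirrored switch port) and that "impenetrable" point-to-point traffic shows up immediately.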

I have a cousin who is a developer for Microsoft and has had a lifelong passion for computing. During his degree program in Computer Science he NEVER learned Assembly Language, C, or C++, instead being forced to focus on Java and its ilk. Java is fine, but it has issues, among them its heft. I spent several years in embedded systems and do not see Assembler being replaced anytime soon. At GM we used Modula, Assembler, and C to program the Engine Control Modules. Lower-level languages require you to understand how things like schedulers, pipelines, and memory work in order to maximize performance, and sometimes just to make something happen (pointer arithmetic to manipulate memory, for example; see the sketch below). I encounter few Java developers who understand how memory is allocated and freed, how time-slicing is performed, or how the cache operates and invalidates its contents.
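For anyone who has never had to do it, this is the sort of thing I mean. A small C sketch of pointer arithmetic: walking a raw buffer by moving the pointer itself, and (commented out) poking a memory-mapped register whose address is purely hypothetical, standing in for a value you would normally pull from the chip's datasheet.

```c
/* Pointer arithmetic in C: the language hands you an address and it is up to
 * you to know what the memory behind it means. The register address below is
 * hypothetical, a stand-in for a value taken from a microcontroller datasheet. */
#include <stdint.h>
#include <stdio.h>

#define HYPOTHETICAL_STATUS_REG 0x40001000u  /* made-up memory-mapped register */

int main(void)
{
    uint8_t frame[8] = { 0xA5, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x5A };

    /* Walk the payload bytes by moving the pointer rather than indexing. */
    uint8_t checksum = 0;
    for (uint8_t *p = frame + 1; p < frame + 7; p++)
        checksum ^= *p;
    printf("payload checksum: 0x%02X\n", checksum);

    /* On an embedded target you would write a register roughly like this
     * (commented out because the address does not exist on a desktop machine): */
    /* volatile uint32_t *status = (volatile uint32_t *)HYPOTHETICAL_STATUS_REG; */
    /* *status |= 0x1u;  // set the "ready" bit                                  */

    return 0;
}
```

Nothing in the language tells you what those bytes mean; you have to know.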

I don't know everything. I know how to ask and how to learn, and I'm never afraid to admit what I don't know. But I take great pride in, and work hard at, knowing as much as I can. In each of the cases above, significant problems were stumbled into because the person in charge lacked breadth and depth. Why? Because none of them ever learned the core elements of computing; they were all one-trick ponies.

So what is the solution? Luckily we'll have graduates from my program, Computer Engineering, and from other engineering and science disciplines to act as the true experts. And those graduates will be swallowed up by the hardware, networking, telecom, plant-floor automation, aviation, and embedded systems companies around the world. For the remainder of the world, including consulting firms and corporations in non-R&D roles, I believe it's time to develop computing fundamentals courses to expand people's view and understanding of computer technology. Companies today need people with multi-disciplined technology backgrounds both to lead large-scale technology efforts and to provide guidance in troubleshooting. Find the good ones, expose them to a wider perspective, and most often they'll start to look at things with a different set of eyes, which benefits everyone. In my experience, I've had hundreds of conversations with people that end with "I never knew that. That's so interesting. Thank you for explaining it to me." I guess that's why I've worked on projects spanning program/project management, IT strategy, Enterprise Architecture, application rationalization, software development, infrastructure architecture, business intelligence, data warehousing, systems integration, ERP selection, call centers, CRM, SFA, architecture modelling, requirements determination, and many more, and in each one been considered the expert.

CRM, ERP, SFA, DW, and BI are not as challenging as operating system development, but they still need experts!

Sunday, October 25, 2009

The Value of Models

I've had a theory for years which I always intended to research and prove during post-graduate work. My belief is that understanding the core elements of computing (logic gates, transistors, magnetic storage, assembler, Ethernet, etc.) makes all of their applications (databases, business intelligence, web architecture, cloud computing, etc.) easy to understand. Each core computing technology is built from a set of models, and I've found many of those models repeat themselves. We handle multiple simultaneous requests on processors, networks, and storage using the same time-slicing model, which is the same way client service representatives handle multiple chat requests (sketched below). My theory is that a person who learns and understands the models has the fastest route to gaining advanced knowledge in any one area and will have the broadest view.
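Here is a minimal sketch of that shared time-slicing model, in C: one worker serves several outstanding requests by giving each a fixed quantum in round-robin order. The "chat" labels and amounts of work are illustrative, but the same loop describes a CPU scheduling threads or a link multiplexing packets.

```c
/* Round-robin time-slicing: one worker, many requests, a fixed quantum each.
 * The request names and amounts of work are illustrative only. */
#include <stdio.h>

struct request {
    const char *name;
    int work_remaining;  /* arbitrary units of work left */
};

int main(void)
{
    struct request reqs[] = {
        { "chat with customer A", 5 },
        { "chat with customer B", 3 },
        { "chat with customer C", 7 },
    };
    const int n = 3;
    const int quantum = 2;  /* units of work per turn */

    int open_requests = n;
    while (open_requests > 0) {
        for (int i = 0; i < n; i++) {
            if (reqs[i].work_remaining <= 0)
                continue;  /* this request is already finished */
            int slice = reqs[i].work_remaining < quantum ? reqs[i].work_remaining : quantum;
            reqs[i].work_remaining -= slice;
            printf("served %-22s for %d units (%d left)\n",
                   reqs[i].name, slice, reqs[i].work_remaining);
            if (reqs[i].work_remaining == 0)
                open_requests--;
        }
    }
    return 0;
}
```

Swap "chat" for "thread" or "packet flow" and nothing else about the model changes; that is the point.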

Over the past 30 years I've become a model-driven person. I have taken the models I learned in college and continuously added new ones or made existing ones more robust to provide my core understanding. I've applied the same rules to business, using my consumer-side interactions (retail purchases, my bank account, etc.) as the foundation for models in each industry. What this model-driven approach gives me is a head start whenever I encounter something new. I have found I can hit the ground in a new business vertical and be considered a technology and process expert within 90 days. My first goal is to understand, the second to align one or more of my existing models, the third to perform a gap analysis, and the fourth to fill in the gaps. The result tends to be a very robust understanding.

I am often asked by my leadership where we can find more of me. It's not me; it's my way of learning and applying knowledge they want to replicate. But it all starts with an open mind. What I find in my competition, regardless of level or job type, is a very myopic view. I worked with a software engineer early in my career at Eaton/Cutler-Hammer who told me he didn't care about the hardware; he only wanted to know where the on/off switch was located. He felt understanding the hardware would take too much time and too much capacity. If only he had realized the models between hardware and software are largely similar.

I work day-in, day-out with ERP and supply chain gurus, CRM experts, and people focused on Enterprise Transformation. What I find interesting is how many are one- or two-trick ponies. They are considered experts, yet they cannot explain how things really work within their own domain, and certainly not to someone new to the domain. Perhaps they know the processes, which is paramount, but they don't understand the model. Every problem is different, and while I agree one can repeatedly apply the same approach to solving a problem, too often consultants are trying to apply the same solution. When I dive in it becomes readily apparent that the reason for pushing the same solution is that nobody really understands it, but it has been proven to work. It's a best practice. The truth leaked out in mid-2002, while I was at PricewaterhouseCoopers Consulting, when we were told to use the term "leading practice" in place of "best practice". Now that's some logic I can agree with, because there is never just one best practice.

As an Enterprise Architect I'm a modeller in a modeller's world. I find it interesting how businesses are now starting to unlock the power of modelling. In a recent internal discussion, one of the 2010 technology trends raised was the evolution of modelling into a primary focus for business. Perhaps it won't happen in 2010, but the fact that it was even a topic of discussion surprised me. I guess I'm lucky in that the way I naturally think is emerging as a better mousetrap. Hopefully it has long legs, or my thinking continues to evolve.

Perhaps I should develop a course on modelling...

Sunday, October 18, 2009

Google Makes an Important Step Forward in Cloud

I'll set aside, for now, my criticisms of the gap between Google's "Don't be evil" motto and its actions. I have to applaud Google for resolving one of the three hurdles to the use of its Cloud products. Google is now releasing methods to move the data users have put into Google products such as Google Docs, Blogger, and Gmail out of Google's data centers. Google calls it the Data Liberation Front, and it has its own website at dataliberation.org. Google will provide an easy-to-use method for exporting data from every product, in bulk, to the user's selected destination. Bravo, Google!

Why is this important? The Cloud is predicated on a virtualized infrastructure (the utility computing model) with a service-based software layer (SOA). The combination of these two models creates a powerful foundation that provides the most flexibility in the most efficient manner. Creating arbitrary obstacles to moving data in and out of data stores, to using application components instead of only full applications, and to changing where the data and applications reside destroys some of the value of the Cloud. Cloud has to be bigger than Google, Amazon, Rackspace, IBM, or any other vendor. The emerging Cloud lacks a definition of data ownership. Ownership means the ability to add, change, delete, and move data at will and have it still be useful. Companies always seem to forget, whether it's the old ASP model or SaaS, that they need to make sure their data can be exported AND imported into another tool; otherwise they don't own the data but rather have granted that ownership to the application/platform provider by proxy.

For the Fortune 1000, Cloud will start inside the data center, as it already has for large banks and a select few others with vision. To be relevant, public Cloud offerings need to enable, not disable, integration across the public/private cloud boundary. Users need to own their data, which is not the case when a cloud provider partitions it for them but ties it inextricably to its platform. Without ownership standards the Cloud represents nothing more than a new, larger, external silo, but still a silo.

What Google is doing will raise the bar for all providers, which should go a long way toward making Cloud more palatable from a risk point of view. The next step should be enabling a federated data model, so I can store highly sensitive data at home while using Google for the remainder, all within a single data model. In addition, Google can go further by enabling its applications as services for integration into other tools. Of course I expect Google will want remuneration and will need to think through how enterprise licensing will work, because everyone knows you get what you pay for. As long as a service is free, you are also at the mercy of the provider.

Oh, the other two obstacles Google needs to figure out? First, encryption of data at rest and in transit, which I know is already on the drawing board and partly implemented in tools such as Google Docs. Second, interoperability standards to enable the shifting of data and applications throughout the cloud. Where Google goes, the public Clouds will follow.

Sunday, October 11, 2009

A Road Made Longer by Doubt

In 2003 I joined the IBM Grid & Virtualization team as the lead architect for Healthcare in the Americas. As part of the Systems Technology Group, our job was to evangelize grid computing and virtualization technologies. We helped early adopters move further, faster, investing time and money to learn and have an impact. As the harbinger of new technologies we met LOTS of skepticism and doubt. Funnily enough, just about everyone is moving in that direction now, many having passed up the chance to innovate by adopting these technologies first in their industries. Oh well. If I had a dime for every bad business decision about technology I've witnessed in 15+ years of consulting, I'd be retired to my own island.

In 2003 grid computing was synonymous with high performance computing, a boundary we worked tirelessly to break down because it was arbitrary at best. During this effort I upgraded one of my computers to the latest Nvidia card. Nvidia is a company I have followed for years, first because of its technology and second because two of its executives went to the same small engineering college I attended. When I researched the specs, looked at the calculations, and compared those to what a medical research institution was attempting, I realized the video GPU had much more to offer than the general-purpose CPU. I talked to a few fellow IBMers who agreed and had been looking at such uses for a few months. Getting my facts together, I approached my client to propose we do some joint research into the use of the GPU for the calculations.

Cost? Zero. I had funding in "blue money" ready to go. Delays? Zero. We had a functioning system, but it was resource constrained. Receptiveness of my client? Also zero. No appetite.

The world has moved on, and Nvidia has stayed its course, now designing a new GPU architecture specifically to advance high-throughput computing. It's a great idea, especially considering Nvidia's past architectures have enabled the sharing of resources across four video cards. Impressive to say the least!

I wonder who, beyond Oak Ridge, will bite. More to the point, I wonder how much faster we could have advanced important causes such as research into pharmacogenomics and protein folding had we adopted this technology earlier. I have to believe it would have expedited the work, ultimately leading us to the same place but sooner. I don't know about you, but I'm interested in an AIDS vaccine and a cure for cancer BEFORE I die. I'm sure others are too. Isn't there a moral imperative that says research centers should be, perhaps, researching? I found they do, as long as the topic isn't too politically sensitive. And for whatever reason the idea we presented was a political landmine. Too many people would have to agree. Too many people would have to approve. It would take too long. There was no guarantee.

Yes. But there are no guarantees in research. Or did I miss something?

Too bad for all the people for whom the vaccine arrives one minute later than needed, and for those it could have saved had it come to market on the earliest possible path instead of the one easiest to navigate. I guess that's why I left R&D after my internship in college and never wanted to go back (although I've been dragged, reluctantly, back through the halls of R&D a few times). R&D to me is all promise with very little delivery. I guess that's why we do so little of it in the United States these days. I didn't realize the reason for poor delivery was that researchers were afraid to do... uh... research.

Friday, October 9, 2009

Welcome to the Cloud Everyone!

I find it interesting how some of us have always envisioned a computing world based on "cloud" technologies while others are just figuring out what cloud means. I started getting into virtualization technologies back in 1993 during an internship in the Artificial Intelligence Group at Eaton/Cutler-Hammer. A Computer Engineering student at MSOE at the time, I used the FIDE package (Fuzzy Inference Development Environment) to do research on fuzzy logic. I could mimic the Motorola MC68HC11 micro-controller, which opened my eyes to the reality of our compute stack. Each layer is an abstraction, not for the benefit of the machine but for the benefit of the human. A CPU cannot differentiate between code at the micro, operating system, or application levels. Packages, libraries, databases, user interfaces: they all look the same.
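To make that concrete, here is a toy "processor" in C, in the spirit of mimicking a micro-controller in software. It executes whatever opcodes it is handed with no notion of whether the bytes represent firmware, an operating system, or an application; the three-instruction ISA is entirely made up for the sketch.

```c
/* A toy "CPU": it executes whatever opcodes it is given and has no idea which
 * layer of the stack they came from. The instruction set is invented for this sketch. */
#include <stdio.h>

enum { OP_LOAD, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *program)
{
    int acc = 0;
    for (int pc = 0; program[pc] != OP_HALT; ) {
        switch (program[pc]) {
        case OP_LOAD:  acc  = program[pc + 1]; pc += 2; break;  /* acc = immediate  */
        case OP_ADD:   acc += program[pc + 1]; pc += 2; break;  /* acc += immediate */
        case OP_PRINT: printf("%d\n", acc);    pc += 1; break;  /* output acc       */
        }
    }
}

int main(void)
{
    /* To this "CPU" these are just numbers; the meaning lives in our abstractions. */
    int program[] = { OP_LOAD, 40, OP_ADD, 2, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```

The machine sees numbers; the layers exist in our heads.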

About this time I was asked by our neighbor back at home, President of Ameritech's business services division, what I felt the next big thing in computing would be in ten years. I was off on the timing but I replied "Hey, you guys in the phone company have lots of big computers. I think the future is running the applications I want on those machines and charging me only for what I use. You have way more power than I could ever afford because I only need it for a few nanoseconds at a time." My father, a first generation Computer Engineer, reminded me of the conversation earlier this year.

Post-graduation I worked for General Motors via EDS on the Powertrain Embedded Systems and Controls team, where I developed, and managed the teams developing, components of the Engine Control Module code as well as development utilities such as Cal-Tools and WinVLT. Again we used software to emulate hardware, further cementing my belief that the hardware was, in some form, a commodity.

Fast forward to the Linux movement around 1996, when I saw the value of open source as a form of virtualization: consolidating the logic of various systems into a common, open application for everyone to learn from and expand. Open source had few proponents in business, but that didn't stop friends and me from trying to move an outdated call center outsourcer out of the mainframe age and into the internet age. We didn't succeed, not for a lack of technical capability but for a lack of salesmanship, but that's another story for another post. We did succeed in demonstrating the value of open source, and once we did there was no going back (at least for us; the company, sadly, regressed to a number of failed implementations including Lotus Notes - whose idea was that!?!? - and eventually folded into the pages of history, acquired by an Indian firm).

So around 2002 I ran smack into an idea I called serverless computing. The idea is based upon the fundamental realization that most constructs of computing exist for our benefit. So why, then, do we need servers? We invented the idea of a server to help us branch out from one system to two. Client. Server. Easy! But what if the client also acts as a server? Preposterous, everyone told me. Crazy. Insane! Never! I submitted a paper to multiple outlets, including my employer IBM, but nobody would give it a second look (honestly, I bet most didn't give it a first look either).
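The kernel, for what it's worth, already agrees: "client" and "server" are roles a process takes on, not kinds of machines. Below is a minimal POSIX-sockets sketch of a peer that listens like a server and dials out like a client in the same breath; the port number and remote address are placeholders, and most error handling is trimmed for brevity.

```c
/* A minimal "peer": the same process listens for inbound connections (the
 * server role) and dials out to another peer (the client role). The port and
 * remote address are placeholders; most error handling is omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define PEER_PORT   9000           /* hypothetical port   */
#define REMOTE_PEER "192.0.2.10"   /* placeholder address */

int main(void)
{
    /* Server half: accept connections from other peers. */
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in me;
    memset(&me, 0, sizeof me);
    me.sin_family = AF_INET;
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    me.sin_port = htons(PEER_PORT);
    if (bind(listener, (struct sockaddr *)&me, sizeof me) != 0 || listen(listener, 8) != 0) {
        perror("listen");
        return 1;
    }
    printf("peer listening on port %d\n", PEER_PORT);

    /* Client half: the very same process dials another peer. */
    int outbound = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in other;
    memset(&other, 0, sizeof other);
    other.sin_family = AF_INET;
    other.sin_port = htons(PEER_PORT);
    inet_pton(AF_INET, REMOTE_PEER, &other.sin_addr);
    if (connect(outbound, (struct sockaddr *)&other, sizeof other) == 0)
        printf("connected out to %s\n", REMOTE_PEER);

    /* A real peer would now accept() and service both directions in a loop;
     * the point here is only that client and server are roles, not machines. */
    close(outbound);
    close(listener);
    return 0;
}
```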

OK. I took a step back, did some research on grid computing and peer-to-peer networks, and realized I wasn't wrong; it was simply that nobody I talked to could see the light.

Well, I welcome those who do: important people like those at Cisco and EMC who have finally understood that the operating system is a limitation. Now that they're thinking, I expect that thinking will evolve, and they'll realize it's not just the OS; development environments, platforms, and the very concept of a centralized data center are also significant limitations with inherent cost disadvantages. Let's move to a new model which frees us from the limitations of the physical, which has always been the interface boundary for innovation!

Welcome to the Cloud! Welcome to the Party!

For more on Serverless Computing read the unpublished paper Serverless Computing.