Future Cloudy, Ask Again Later

Recently my pal Bill Schell and I were gassing on about the current and future state of IT employment, and he brought up the topic of IT jobs being “lost to the Cloud”.  In other words, if we’re to believe the marketing hype of the Cloud Computing revolution, a great deal of processing is going to move out of the direct control of the individual organizations where it is currently being done.  One would expect the IT jobs within those organizations that previously supported that processing to disappear, or at least migrate over to the providers of the Cloud Computing resources.

I commented that the whole Cloud Computing story felt just like another turn in the epic cycle between centralized and decentralized computing.  He and I had both lived through the end of the mainframe era, the move to “Open Systems” on user desktops, the swing back to centralized computing with X terminals and other “thin clients”, the swing back out to the desktops with the rise of extremely powerful, extremely low-cost commodity hardware, and now the harnessing of that same commodity hardware into giant centralized clusters that we’re calling “Clouds”.  It’s amazingly painful for the people whose jobs and lives are dislocated by these geologic shifts in computing practice, but the wheel keeps turning.

Bill brought up an economic argument for centralized computing that seems to crop up every time the shift back towards centralization comes around.  Essentially, the argument runs as follows (a rough back-of-the-envelope sketch follows the list):

  • As the capital cost of computing power declines, support costs tend to predominate.
  • Centralized support costs less than decentralized support.
  • Therefore centralized computing models will ultimately win out.
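
To make the arithmetic concrete, here is that back-of-the-envelope sketch.  Every number in it is made up purely for illustration; the point is only that the conclusion falls straight out of whatever you assume about per-seat support costs versus capital costs.

    # Toy cost model: total cost of ownership = capital + ongoing support.
    # All figures are hypothetical, chosen only to illustrate the argument.

    def tco(capital_per_seat, support_per_seat_per_year, seats, years=3):
        """Total cost of a deployment model over a planning horizon."""
        return seats * (capital_per_seat + support_per_seat_per_year * years)

    seats = 500

    # Decentralized: cheap commodity boxes, but each needs hands-on support.
    decentralized = tco(capital_per_seat=1_000, support_per_seat_per_year=800,
                        seats=seats)

    # Centralized: pricier shared infrastructure per seat, but pooled support.
    centralized = tco(capital_per_seat=1_500, support_per_seat_per_year=300,
                      seats=seats)

    print(f"Decentralized: ${decentralized:,}")  # $1,700,000
    print(f"Centralized:   ${centralized:,}")    # $1,200,000

As the capital term shrinks toward zero, the support term dominates and centralization wins on paper.  The rest of this piece is about the costs that spreadsheet tends to leave out.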

If you believe this argument, by now we should all have embraced a centralized computing model.  Yet instead we’ve seen this cycle between centralized and decentralized computing.  What’s driving the cycle?  It seems to me that there are other factors that work in opposition and keep the wheel turning.

First, it’s generally been a truism that centralized computing power costs more than decentralized computing power.  In other words, it’s more expensive to hook 64 processors and 128GB of RAM onto the same backplane than it is to purchase 64 uniprocessor machines each with 2GB of RAM.  The Cloud Computing enthusiasts are promising to crack that problem by “loosely coupling” racks of inexpensive machines into a massive computing array.  Though when “loose” is defined as InfiniBand switch fabrics and the like, you’ll forgive me if I suspect they may be playing a little Three-Card Monte with the numbers on the cost spreadsheets.  The other issue to point out here is that if your “centralized” computing model is really just a rack of “decentralized” servers, you’re giving up some of the savings in support costs that the centralized computing model was supposed to provide.

Another issue that rises to the fore when you move to a centralized computing model is the cost to the organization of maintaining its access to the centralized computing resource.  One obvious cost area is basic “plumbing” like network access: how much is it going to cost you to get all the bandwidth you need (in both directions) at appropriately low latency?  Similarly, when your compute power is decentralized it’s easier to hide environmental costs like power and cooling than it is when all of those machines are racked up together in the same room.  A less obvious cost is keeping the centralized computing resource up and available all the time, because now, with all of your “eggs in one basket” as it were, your entire business can be taken down by a single outage.  “Five-nines” uptime is really, really expensive.  Back when your eggs were spread out across multiple baskets, you didn’t necessarily care as much about the uptime of any single basket, and the aggregate cost of keeping all the baskets available when needed was lower.
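
To put a rough number on the eggs-and-baskets point, consider the chance that everything is down at once.  The availability figures below are made up, and the independence assumption is generous, but the shape of the result is what matters.

    # Toy availability comparison; all figures are hypothetical.

    central_uptime = 0.99999    # "five-nines" on the one shared resource
    commodity_uptime = 0.99     # an unremarkable standalone server
    servers = 20

    # Centralized: any outage of the shared resource stops everything.
    p_all_down_central = 1 - central_uptime

    # Decentralized: everything stops only if every machine happens to be
    # down at the same time (assuming independent failures).
    p_all_down_decentral = (1 - commodity_uptime) ** servers

    print(f"Centralized, everything down:   {p_all_down_central:.5f}")
    print(f"Decentralized, everything down: {p_all_down_decentral:.1e}")

A decentralized outage still hurts the people on the affected machines, but you aren’t forced to pay for five-nines on every box just to keep the business as a whole running.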

The centralized vs. decentralized cycle keeps turning because in any given computing epoch the costs of all of the above factors rise and fall.  This leads IT folks to optimize one factor over another, which promotes shifts in computing strategy, and the wheel turns again.

Despite what the marketeers would have you believe, I don’t think the Cloud Computing model has proven itself to the point where it is having a massive impact on the way mainstream business does IT.  That may happen, but then again it may not.  The IT job loss we’re seeing now has a lot more to do with the general problems in the world-wide economy than with jobs being “lost to the Cloud”.  But it’s worth remembering that massive changes in computing practice do happen on a regular basis, and IT workers need to be able to read the cycles and position themselves appropriately in the job market.
