Debating a tech pundit: Building a case for 'cloud computing'

Hold On, Tim.

I just finished reading an article from respected technology industry analyst Tim Bajarin. He argues that before we rush into adopting cloud computing as the be-all and end-all for firms, we ought to step back and remember that events can happen that disrupt service.

He cites the recent vandalism in Silicon Valley, in which several of AT&T's high-speed fiber optic lines were clipped, cutting phone, Internet and wireless service to thousands of customers in the affected areas. He also wrote:

"Add to that the news [in the Wall Street Journal] that cyber criminals have infiltrated our power grid and the constant attempts by nefarious forces to penetrate our network infrastructure, and all of the sudden the concept of 'everything in the cloud' becomes a bit frightening, especially to consumers."

True, but we as professionals ought to know a little better; if we're not careful, we come off sounding like Chicken Little. For openers, many information security experts have downplayed the power grid story as overhyped and nothing new. Bruce Schneier wrote at his security blog:

"Honestly, I am much more worried about random errors and undirected worms in the computers running our infrastructure than I am about the Chinese military."

And former Gartner analyst Richard Stiennon wrote, "My reaction to the WSJ article was mostly anger over seeing a trumped up story with no sources, no evidence, and frankly, no news."

Bajarin also wrote that the AT&T vandalism revived "fears about overdependence on cloud computing" and raised worries about how network uptime could be adversely affected. Tim, with all due respect, I think you're wrong on this one; fears are one thing...facts are another.

Companies that provide outsourced Infrastructure as a Service (IaaS) offerings, like Hosted Solutions, routinely promise their customers uptime in excess of 99 percent, and they deliver on that promise. In fact, those numbers are better than what most companies that keep the capability in-house manage to achieve. Another former Gartner analyst, Vinnie Mirchandani, writes:

"In a given year...we have 525,600 minutes. To meet a 99.99% uptime, the system could only be down 52 minutes - less than an hour - in the year. I can tell you most corporate data centers have scheduled downtimes which exceed that every month, if not every week."
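Mirchandani's arithmetic is easy to check yourself. Here's a minimal sketch that converts an uptime percentage into the downtime it actually permits per year (the three tiers shown are common SLA levels, not figures from the article):

```python
# Convert an uptime percentage into the maximum downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year allowed by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/yr of downtime")
```

At 99.99 percent, that works out to roughly 52.6 minutes for the entire year, exactly the budget Mirchandani says many corporate data centers blow through in scheduled downtime alone.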

So, what to make of all this? Let's be honest: machines will fail, networks will fail. That's why we use the acronym MTBF...Mean Time Between Failures. But if your IT guys are doing their job, they've factored this into their calculations on how to keep downtime to a minimum. (We have.)
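The standard way MTBF feeds into those calculations is the steady-state availability formula, A = MTBF / (MTBF + MTTR). A quick sketch (the server figures here are illustrative assumptions, not numbers from the article):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from Mean Time Between Failures and
    Mean Time To Repair: the long-run fraction of time the system is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: a server that fails on average every 1,000 hours and takes
# 4 hours to restore is up about 99.6% of the time.
print(f"{availability(1000, 4):.4%}")
```

Shortening repair time counts just as much as stretching time between failures, which is why redundancy and fast failover, not the fantasy of hardware that never breaks, are what keep downtime to a minimum.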

And raising the spectre of huge outages, casting a dark shadow over what cloud computing and cloud services have already delivered to both companies and consumers, on the strength of one random event and one speculative newspaper article, doesn't really do justice to the issue. It's like saying that because there were 37,248 fatal car crashes in 2007, we may want to rethink our use of cars in general.

Tim closes off his article by saying, "Given this attack, and the constant reports that our networks are being attacked, consumers will be inclined to keep a lot of their digital stuff locally for years to come." That's wise advice; both consumers and companies should keep duplicate copies of their information.

But the fear of what might happen should not stop us from prudently moving forward with the advantages of what can be achieved. Things like IaaS simply make good business sense.

And it's going to take far more than a vandal to stop those things from taking hold.

Copyright 2009 by Capitol Broadcasting Company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.