Amazon CEO Jeff Bezos in front of a vintage 1890s electric generator in a Belgian beer factory.
In explaining the slide, Selipsky said that in the 1890s, before the electric grid had matured, most companies generated electricity on-site; before long, a more reliable and scalable grid infrastructure became available, and companies gradually stopped buying electric generators except for backup or failover purposes. Selipsky, who is VP of product management and developer relations for Amazon Web Services (AWS), argued by analogy that today's in-house enterprise IT infrastructure is as archaic as a factory generating its own electricity.

There are a number of problems with Selipsky's analogy, not the least of which is the sheer difference in the scale of electrons being passed in the two cases. Without arguing the comparative reliability of AWS versus the current U.S. electric grid, it is the scalability comparison between Web services and electric generation that the analogy skews. The system of three-phase alternating current (AC) generation and distribution still in use today was pioneered by the 19th-century engineering genius Nikola Tesla, who calculated that 60 Hz (Hertz, cycles per second) was the optimal frequency for AC power generation. He preferred 240 volts, which put him at odds with Thomas Edison, whose direct current (DC) systems ran at 110 volts. Edison initially argued that the lower voltage was safer, but ultimately came to accept the superiority of AC, and his 110-volt DC systems were converted to 110-volt AC, which is how we arrived at the U.S. standard of 110 volts at 60 Hz.
Several times Selipsky made the case for AWS's scalable infrastructure, including the well-known "hockey stick" slide charting the growth in bandwidth demand for AWS cloud computing over time against that of Amazon's "regular retail" business. He also cited the example of Animoto's Facebook app, which ramped from 25,000 users to 250,000 users in three days -- scaling from 50 EC2 instances up to 3,500 instances -- something that would be practically impossible in an internal data center.
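A quick back-of-the-envelope check on the Animoto numbers cited above is instructive (only the user and instance counts come from the article; the interpretation is my own sketch):

```python
# Sanity-check the Animoto scaling figures quoted in the article:
# 25,000 users on 50 EC2 instances, then 250,000 users on 3,500 instances.

def users_per_instance(users: int, instances: int) -> float:
    """Average load each instance carries at a given point in time."""
    return users / instances

before = users_per_instance(25_000, 50)      # 500 users per instance
after = users_per_instance(250_000, 3_500)   # roughly 71 users per instance

instance_growth = 3_500 / 50   # 70x more instances
user_growth = 250_000 / 25_000  # for only 10x more users
```

Note that the fleet grew 70-fold while the user base grew only 10-fold -- per-user demand evidently ballooned during the spike, which is exactly the kind of unpredictable capacity requirement that makes pre-provisioning an internal data center for peak load so impractical.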
But there are other reasons besides scalability and reliability for taking a cautious approach to migrating to a large cloud computing vendor like Amazon. First and foremost, to my mind, is environmental sustainability. While many data centers and hosting providers tout their environmental credentials through the dubious practice of trading carbon credits and purchasing green energy certificates, only a small number of facilities are powered by renewable energy generated on-site. One of the best known of these is AISO.net, a solar-powered data center in Romoland, Calif. Although the AISO.net facility is small (just 2,000 square feet), it has demonstrated green technology convincingly enough to parlay its visibility into several high-profile, eco-friendly deals, such as hosting the Web site for last summer's Live Earth concert.
A little over a year ago, the EPA released a report estimating that, at current growth rates, data centers nationwide would consume almost twice as much electricity in 2011 as they did in 2006. That would amount to 12 gigawatts of demand on the nation's power grid during peak loads, and $7.4 billion in annual electricity costs, by 2011. As a point of reference, data centers currently draw about 7 gigawatts during peak hours, equivalent to the output of 15 power plants. Energy efficiency, as well as reliability and scalability, ought to factor into your cloud computing migration strategy. Add to that the fact that standards for cloud computing are nowhere near mature enough -- especially when compared with those that underpin the U.S. electric power grid -- to warrant a wholesale migration of enterprise IT infrastructure to the cloud anytime soon.
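As a closing sanity check on the EPA figures quoted above, the arithmetic works out roughly as follows (the gigawatt, plant-count, and dollar figures come from the report as cited; the implied per-plant output and unit electricity rate are my own back-of-the-envelope estimates, assuming for simplicity that the 12 GW peak were drawn around the clock):

```python
# Today's figure: 7 GW of peak draw, equated to 15 power plants.
peak_2006_gw = 7
plants = 15
per_plant_mw = peak_2006_gw / plants * 1000   # ~467 MW per plant, a plausible
                                              # size for a mid-size power plant

# Projected 2011 figures: 12 GW peak and $7.4 billion in annual costs.
peak_2011_gw = 12
annual_cost_usd = 7.4e9
hours_per_year = 8760
annual_kwh = peak_2011_gw * 1e6 * hours_per_year  # GW -> kW, times hours

# Implied average rate if the peak were sustained year-round:
implied_rate = annual_cost_usd / annual_kwh       # ~$0.07 per kWh
```

The implied rate of about seven cents per kilowatt-hour is in line with U.S. industrial electricity prices, so the report's peak-load and cost projections are at least internally consistent.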