The Google/Intel effort (which is also supported by HP, Dell, IBM, Lenovo, you, me, and your grandparents) is called the Climate Savers Computing Initiative. As the Intel press release explains, it's intended to "save energy and reduce greenhouse gas emissions by setting aggressive new targets for energy-efficient computers and components, and promoting the adoption of energy-efficient computers and power management tools worldwide."
I'm cool with that. Seriously, I don't mean to make light of the effort. It's serious and it's worthy. But quite frankly, as a day-to-day computing user, I just can't get excited about the immediate effect my keyboard tappings might be having on the ozone layer. (Call me computing selfish.)
Which is why I was more interested than usual to receive an e-mail from AMD's public relations people touting their decision to join the Climate Savers initiative. "AMD-based PC and server platforms already exceed the standards set by Climate Savers today because AMD has focused for years on innovating more energy efficient yet high-performing PC and server technologies," is some of the corporate language in the AMD e-mail.
OK, well and good. However, the reason the AMD missive interested me is because it jogged my memory back to May 2006. That's when AMD unveiled a billboard in New York's Times Square, touting what it claimed was the reduced power consumption of its processors as compared to Intel's chips. (You can see a picture of the billboard here.)
Without getting into a torturous technical discussion on whose chips consume less (it varies), I want to note that this is an extremely important issue on two fronts. First, and perhaps most importantly, it's significant because of the electric bills companies have to pay to run their data centers. There's been a big push toward server consolidation, driven in part by a desire to rein in energy costs.
Second, the whole power issue is connected to the technical challenges in chip design. These challenges led to the move from single-core processors to today's multicore designs. That's a whole 'nother blog post (it could be a whole book), but essentially the move to multicore was driven by the fact that it was no longer possible to increase processor clock speed without driving power dissipation to intolerable levels.
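To put a bit of math behind that last point (this is the standard first-order model of CMOS power, not something from AMD's or Intel's materials), the dynamic power a chip dissipates scales roughly as:

```latex
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} f
```

Here $\alpha$ is the switching activity factor, $C$ the switched capacitance, $V$ the supply voltage, and $f$ the clock frequency. The catch is that pushing $f$ higher generally requires raising $V$ as well, so power climbs considerably faster than linearly with clock speed. Running two cores at a moderate clock can deliver more aggregate throughput per watt than one core cranked to the thermal limit, which is the basic economics behind the multicore shift.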
If you're interested in exploring this further, I ask you to take a look at a couple of pieces I wrote a few years ago for The Semiconductor Reporter: