Buying energy-efficient technology isn't the only--or even the best--way to cut down on energy consumption in the data center. Rethinking the way you use the technology you already have can make a bigger impact.
While determining that you've got too much capacity is easy, doing something about it isn't. Most CRAC units are simple: Either they're on or they're off; there's no throttling them down. Less than 10% of CRAC units installed today contain variable-speed motors, but even with the right motors it's not trivial to determine the effect of changing the output of one CRAC unit. Various vendors have had the instrumentation and software to map airflows and temperature gradients throughout a data center, both in 2-D and 3-D, but until recently no one could determine the "zone of influence" of each CRAC unit. In late July, HP announced Thermal Zone Mapping, which uses software and instrumentation to measure existing conditions and predict the effects of moving or throttling back CRAC units.
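The emphasis on variable-speed motors comes down to basic fan physics: by the fan affinity laws, a fan's power draw scales roughly with the cube of its speed, so even a modest slowdown yields an outsized saving. A minimal sketch of the arithmetic (the figures are illustrative, not vendor measurements):

```python
# Fan affinity laws: airflow scales roughly linearly with fan speed,
# but power draw scales roughly with the cube of speed. Slowing a fan
# to 80% of full speed cuts its power draw to about half.

def relative_fan_power(speed_fraction: float) -> float:
    """Approximate fan power as a fraction of full-speed power."""
    return speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.5):
    print(f"{speed:.0%} speed -> {relative_fan_power(speed):.2f} of full power")
```

This cube-law relationship is why an on/off CRAC unit leaves so much on the table: it can never capture the disproportionate savings available at partial speed.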
Along with Thermal Zone Mapping, HP also announced what it calls Dynamic Smart Cooling. DSC was developed with Liebert and STULZ, the two companies that produce the vast majority of room- and building-based cooling units in North America. The partnership lets HP software control the performance of newer CRAC units from the two manufacturers. For data centers built in the last five years or so, CRAC units may require only the addition of a controller board to interface with the HP system, provided those units are equipped with variable-speed motors. Older CRAC units must be replaced to participate. HP claims DSC will save up to 45% on cooling costs.
Achieving those sorts of savings requires more than deploying a control system. The placement of CRAC units and computer racks will likely have to be rethought as well.
Once you start imagining moving the furniture around, it's time to call in the pros. No IT staff has the time or expertise to lay out a data center for maximum efficiency. Even if you understand the concepts of laminar airflow (good) vs. turbulent airflow (bad), you won't have the tools and software to measure what's going on in your data center. And, of course, when it's time to actually rearrange the facility, you'll need enough plumbers, electricians, and IT pros to get the job done in whatever timeframe you have.
What's impressive about DSC is that it yields a forward-looking data center design. We can all imagine a virtualized data center where servers are turned on and off automatically based on business needs. DSC provides the ability to sense the changing cooling needs of such a dynamic data center and make adjustments on the fly. For now, this is just a vision--no data center is this dynamic--but given that you get the chance to redesign a data center perhaps once a decade, it's good to have one.
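To make the idea concrete, here's a minimal sketch of the kind of feedback loop such a system implies: read a temperature, nudge a variable-speed unit's output toward a setpoint. The CracUnit class, the setpoint, and the gain are hypothetical illustrations, not HP's actual DSC interface:

```python
# Illustrative only: a simplified proportional control loop for dynamic
# cooling. All names and numbers here are invented for the example.

class CracUnit:
    """A variable-speed cooling unit; output is a fraction of capacity."""
    def __init__(self, name: str):
        self.name = name
        self.output = 1.0  # start at full cooling output

    def set_output(self, fraction: float) -> None:
        # Clamp to the unit's physical range.
        self.output = max(0.0, min(1.0, fraction))

def adjust_cooling(unit: CracUnit, rack_temp_c: float,
                   setpoint_c: float = 25.0, gain: float = 0.05) -> float:
    """Nudge the unit's output in proportion to the temperature error."""
    error = rack_temp_c - setpoint_c  # positive = too hot, cool harder
    unit.set_output(unit.output + gain * error)
    return unit.output

crac = CracUnit("CRAC-1")
adjust_cooling(crac, rack_temp_c=22.0)  # rack runs cool: throttle back
print(crac.output)
```

A real system would coordinate many units using the zone-of-influence data described above, but the principle is the same: cooling output tracks demand instead of running flat out.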
Regardless of who creates the new design, two main requirements are instrumentation and modularity. Instrumentation provides the data necessary to understand data center power consumption, while modularity gives you the means to do something about it. Modularity also permits systems to run at their peak efficiency, something that almost never happens in current data center designs (see Where Does The Power Go?).
As businesses become more conscious of energy use, the solution isn't to throw the latest technology at the problem. What's required is a disciplined, well-thought-out approach that uses less power, less staff time, and less capital--and the will to make efficiency a priority.