There are data centers, and then there are data centers. The first kind ranges from the overheated, wire-tangled, cramped closets that sometimes also host cleaning supplies to the more standard glass-house variety of years past. The second kind--and the topic of this article--cool with winter air, run on solar power, automatically provision servers without human involvement, and can't be infiltrated even if the attacker is driving a Mack truck full-throttle through the front gate.
These "badass" data centers--energy efficient, automated, hypersecure--are held up as models of innovation today, but their technologies and methodologies could become standard fare tomorrow.
Before a massive overhaul completed in April, Bryant University had four "data centers" scattered across campus, including server racks stuffed into closets with little concern for backup and no thought to efficiency. Now Bryant's consolidated, virtualized, reconfigured, blade-based, and heavily automated data center is one of the first examples of IBM's young green data center initiative.
IBM practices what it preaches, spending $79 million on its own green data center in Boulder, Colo. It spends $10 million a month on energy across all its data centers and aims to hold that environmental footprint steady even through massive data center expansions.
Microsoft and Google also are putting heavy emphasis on environmental and energy concerns in building out their massive data centers, some of which cost upward of $500 million. Energy consumption at Microsoft's new data center in Ireland is half that of comparably sized and configured facilities, says Rob Bernard, Microsoft's new chief environmental officer. "We looked at every aspect of where to site the building, how to drive more efficiency in the data centers," Bernard says. Google, while tight-lipped on details, is careful to locate its data centers near clean power sources.
Now the data center has a closed-loop cooling system using ethylene glycol, chilled by outside air when it's cold enough. On a cold December day, the giant APC chiller sits encased in snow, cooling the ethylene glycol. Rich Bertone, a Bryant technical analyst, estimates a 30% to 40% savings on cooling costs compared with more common refrigerant-based air conditioning.
Another unusual step Bryant has taken is to build on grade, with no raised flooring. The university did it because of space constraints, but Forrester Research analyst James Staten says some of the largest tech companies are turning that building design into a trend for other reasons. "The new cooling systems really work better when they drop cold air down from the aisle rather than blow it up from below," Staten says. "This is true if you're using air cooling or the new liquid cooling." That's a controversial view: IBM's Sams says getting rid of raised floors isn't efficient at scale.
HP's cell architecture isolates cooling needs
The final location was the right size, sat near an electrical substation at the back of campus, and was in a lightly traveled area, which was good for the data center's physical security. Proximity to an electrical substation was key. "The farther away the power supply, the less efficient the data center," Bertone says. Microsoft and Equinix both have data centers with their own substations.
AISO.net, a small Internet hosting company that hosted the Live Earth concert series online, went for a cleaner power supply when it converted to solar power in 2001. An array of 120 solar panels sits on the company's 1-1/3-acre property. "We saw that our costs were going to continually go up and said this was probably the right thing to do, too," says CTO Phil Nail.
It cost about $100,000 to outfit the company with solar panels, but Nail says AISO has made that money back and has continued to look for ways to cut its energy use even further, virtualizing almost every app running in its data center and pulling in cold air whenever the outside temperature drops below 50 degrees.
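Nail's pull-in-cold-air rule is simple enough to sketch as control logic. Here's a minimal Python illustration using the 50-degree changeover point from the article; the hysteresis band and function names are assumptions, not AISO's actual controls:

```python
# Airside-economizer decision sketch: use outside air when it's cold
# enough, fall back to mechanical cooling otherwise. The 50 F
# changeover point comes from AISO's setup; the dead band is a
# hypothetical guard against rapid mode flapping.

CHANGEOVER_F = 50.0
HYSTERESIS_F = 2.0  # assumed dead band, not from the article

def cooling_mode(outside_temp_f: float, current_mode: str) -> str:
    """Return 'free' (outside air) or 'mechanical' (compressor)."""
    if current_mode == "free":
        # Stay on free cooling until the temperature clearly rises
        # past the changeover point plus the dead band.
        return "free" if outside_temp_f < CHANGEOVER_F + HYSTERESIS_F else "mechanical"
    return "free" if outside_temp_f < CHANGEOVER_F else "mechanical"

print(cooling_mode(42.0, "mechanical"))  # free
print(cooling_mode(51.0, "free"))        # free (inside dead band)
print(cooling_mode(55.0, "free"))        # mechanical
```

The asymmetry between the two branches is the point: without a dead band, a temperature hovering near 50 degrees would toggle the chiller on and off repeatedly.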
Bryant is in the midst of deploying software that automatically manages server clock speed to lower power consumption, something that IBM co-developed with APC. Right now, APC technologies monitor and control fan speed, power level used at each outlet, cooling capacity, temperature, and humidity. Power is distributed to server blades as they need it.
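The idea of distributing power to blades as they need it can be modeled as a budget allocator. The sketch below is purely illustrative, not APC's or IBM's actual algorithm; the blade names, wattages, and proportional-scaling policy are all assumptions:

```python
# Toy power-budget allocator: grant each blade its requested watts,
# scaling all requests down proportionally if the chassis budget
# would be exceeded.

def allocate_power(requests: dict, budget_w: float) -> dict:
    """Map each blade name to the watts it is granted."""
    total = sum(requests.values())
    if total <= budget_w:
        return dict(requests)  # everyone gets what they asked for
    scale = budget_w / total
    return {blade: watts * scale for blade, watts in requests.items()}

# Hypothetical chassis with a 1,000 W budget and three busy blades.
grants = allocate_power({"blade1": 400, "blade2": 400, "blade3": 400}, 1000)
print(grants)  # each blade scaled down to about 333 W
```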
When power goes out, Bryant no longer has to take the data center offline or bring out the portable air conditioning. A room near the data center hosts an APC Intelligent Transfer Switch that knows when to switch power resources to batteries, which can run the whole system for 20 minutes. If power quality falls out of spec, the data center automatically switches to generator power and pages Bertone. The generator can run for two days on a full tank of diesel.
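The failover sequence Bryant describes (utility, then batteries for up to 20 minutes, then generator with a page to the admin) can be sketched as a simple source-selection rule. The function and pager below are illustrative, not the APC switch's actual logic:

```python
# Failover sketch modeled on Bryant's setup as described: batteries
# carry the load for up to 20 minutes, the generator takes over
# after that, and the admin gets paged on the switch.

BATTERY_RUNTIME_MIN = 20

def select_source(utility_ok: bool, minutes_on_battery: float,
                  pager=print) -> str:
    """Return which power source should carry the data center."""
    if utility_ok:
        return "utility"
    if minutes_on_battery < BATTERY_RUNTIME_MIN:
        return "battery"
    pager("power fault: switching to generator")
    return "generator"

print(select_source(True, 0))    # utility
print(select_source(False, 5))   # battery
print(select_source(False, 25))  # pages the admin, then generator
```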
Other companies, including co-location provider Terremark Worldwide, are doing away with battery backup in some places, opting for flywheels. These heavy spinning wheels from companies such as Active Power can power equipment just long enough for generators to start.
Since Bryant doesn't have to constantly worry about data center reliability, it can focus on new strategic initiatives. It's working with Cisco Systems, Nokia, and T-Mobile to set up dual-band Wi-Fi and cellular service that will let students make free phone calls on campus. The university also is home to Cisco's IPICS communication center, linking emergency responders in Rhode Island and Connecticut; is moving toward providing students with unified communications and IPTV; and is in talks with an accounting software company to host apps in the Bryant data center to bring in extra cash.
Not that VP of IT Gloster is satisfied. He says Bryant can go much further to save energy; it recently had a call with IBM to discuss how the university could cut its power costs by another 50%.
Everything at Equinix seems to have been thought through for security. The floor is a concrete slab, partially to cut down on wires running where eyes can track them and hands can splice them. The walls are painted black, partially to increase customer anonymity by making the environment darker. Security systems in Equinix's data center have their own keyed-entry power supplies and backups.
For Terremark, too, security is part of its value proposition. It recently built several 50,000-square-foot buildings on a new 30-acre campus in Culpeper, Va., using a tiered physical security approach that takes into consideration every layer from outside the fences to the machines inside.
For its most sensitive systems, there are seven tiers of physical security a person must pass before physically touching the machines. Those include berms of dirt along the perimeter of the property, gates, fences, identity cards, guards, and biometrics.
Among Terremark's high-tech physical security measures are machines that measure hand geometry against a database of credentialed employees and an IP camera system that acts as an electronic tripwire. If the cordon is breached, the camera that caught the breach immediately pops up on a bank of security monitors. That system is designed to recognize faces, but Terremark hasn't yet unlocked that capability.
Some of what Terremark says are its best security measures are the lowest tech. "Just by putting a gutter or a gully in front of a berm, that doesn't cost anything, but it's extremely effective," says Ben Stewart, Terremark's senior VP for facility engineering. After the ditches and hills, there are gates and fencing rated at K-4 strength, strong enough to stop a truck moving at 35 mph.
The choice of the Culpeper site was no accident. The company's other main data center is in Miami, but federal government customers didn't like that less-secure urban environment. In making the move, Terremark took not only Culpeper's rural location into consideration, but also the fact that it's outside the nuclear blast zone of Washington, D.C., so any major nuclear attack wouldn't take out precious data.
Banks are notoriously skittish and secretive about their security strategies. But they, too, run digital Fort Knoxes, with physical security taken to extremes. Deutsche Bank has located two of its data centers underground in the Black Forest in Germany. That's typical, says IBM's Sams. "In the United States, you'll see them in unlabeled backwoods locations with double-string barbed-wire enclosures," he says.
Glen Sharlun, VP of customer insight at security vendor ArcSight, isn't your typical salesman. He has seen his fair share of security operations as a former commanding officer for network security in the U.S. Marine Corps. Sharlun can't talk about the classified systems in place there, but his experience and insight have given him access to some advanced deployments and enlightened policies.
The user's location is another important factor. "An outsider somewhere in Uzbekistan may look like an insider because he has authenticated access," Sharlun says. Recently, a state government computer security executive told Sharlun how the state pulls the IP addresses of systems accessing its network from firewall logs, resolves each address to an approximate location, and plots the results on Google Maps to see where users are connecting from. "These types of things aren't that hard to do but can be really insightful," he says.
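That firewall-log technique is straightforward to sketch. In the Python below, a regex pulls IPs from log lines and a tiny lookup table stands in for a real IP-geolocation database, which the article doesn't name; the log format, prefixes, and coordinates are invented for illustration:

```python
import re

# Extract IP addresses from firewall log lines, resolve each to a
# rough location, and emit coordinates that a mapping tool such as
# Google Maps could plot. GEO_DB is a stand-in for a real
# IP-geolocation database.

GEO_DB = {  # hypothetical prefix -> (lat, lon, label)
    "203.0.113.": (41.37, 69.29, "Tashkent, Uzbekistan"),
    "198.51.100.": (41.82, -71.41, "Providence, RI"),
}

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def plot_points(log_lines):
    """Yield (ip, lat, lon, label) for every IP we can locate."""
    for line in log_lines:
        for ip in IP_RE.findall(line):
            for prefix, (lat, lon, label) in GEO_DB.items():
                if ip.startswith(prefix):
                    yield ip, lat, lon, label

logs = ["ACCEPT src=203.0.113.7 dst=10.0.0.5 dpt=443"]
for point in plot_points(logs):
    print(point)  # ('203.0.113.7', 41.37, 69.29, 'Tashkent, Uzbekistan')
```

In practice the lookup would hit a commercial geolocation database, and the coordinates would feed a Google Maps overlay rather than stdout.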
Even reference customers of innovative vendors are just starting with automation. SunTrust Banks started using BladeLogic six months ago for automated server provisioning and is beginning to use it for automated compliance and management. The infrastructure is fully rolled out, but the automation is in relatively early stages. SunTrust was driven to BladeLogic mainly by cost savings and the need to use staff time more efficiently.
Before installing BladeLogic, SunTrust had to manually install all of its server operating systems and manually lay down third-party applications on top of those systems. Now, says Dexter Oliver, VP of distributed server engineering, "you rack a server, you push a button, and then the next button lays on the third-party applications."
Even with BladeLogic in place, there's no way to make sure the installed configurations stay intact once the systems go live, other than through periodic human monitoring. SunTrust's next step is to take products such as WebSphere and WebLogic that have specific configuration needs, build templates for those applications, and provision and maintain those configurations automatically, whether the apps are installed on virtual or physical machines, Windows or Linux.
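The template-versus-reality check SunTrust is aiming for can be sketched as a simple drift report. The settings below are invented examples, not actual WebSphere or WebLogic parameters:

```python
# Configuration-drift check: compare a desired-state template
# against what's actually on a server and report anything that
# changed after go-live.

def find_drift(template: dict, actual: dict) -> dict:
    """Map each drifted setting to an (expected, found) pair."""
    drift = {}
    for key, expected in template.items():
        found = actual.get(key)
        if found != expected:
            drift[key] = (expected, found)
    return drift

# Hypothetical app-server settings, one of which has drifted.
template = {"max_heap_mb": 2048, "listen_port": 9080, "ssl": True}
actual   = {"max_heap_mb": 1024, "listen_port": 9080, "ssl": True}
print(find_drift(template, actual))  # {'max_heap_mb': (2048, 1024)}
```

Run on a schedule against live systems, a report like this replaces the periodic human monitoring the article mentions; remediation could then reapply the template automatically.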
All that hands-off work frees up people to work on strategic projects, Oliver says. He would like to automate more but says that in order to get there, technologies and methodologies that can integrate the various tools in SunTrust's management toolbox still need to be created. "You've got configuration management databases, you've got this tool that does patching, this tool does monitoring," he says. "That orchestration piece is something that will need to be further developed."
Plenty of others would like to go as far as SunTrust plans to go. Communications technology company Mitel Networks is one. It runs Hewlett-Packard technology that can do scenario testing on workloads for disaster recovery. Mitel would like to proactively manage those workloads. "We originally invested in workload management just for failover scenarios, but we're now looking at it as a way to set up policies and then manage workload automatically," says David Grant, a Mitel data center manager. "We're prepared to invest some time and effort into this because we see our future in the IT space being tied in with this."
Mitel bought into a vision HP calls the Adaptive Enterprise. The company is pushing toward a heavily virtualized environment, aiming to evolve the data center so that "workloads will run where they need to run." It's already easy for Mitel to replicate servers and move virtual workloads around, but the next step is to automate that process.
Server and application provisioning isn't the only automation happening these days; runtime automation is going strong, too. One national insurance company is automating as many processes as it can using Opalis Integration Server. It started with month-end processes, then moved into early detection and repair of problems, and collection and central storage of audit logs. Month-end processing went from taking 30 people two weeks (roughly 300 person-days) to taking five people a total of three days.
"Our data centers are pretty dark," says Larry Dusanic, the company's director of IT. The insurer doesn't even have a full-time engineer working in its main data center in southern Nevada. Run-book automation is "the tool to glue everything together," from SQL Server, MySQL, and Oracle to Internet Information Server and Apache, he says.
Though Dusanic's organization uses run-book automation to integrate its systems and automate processes, the company still relies on experienced engineers to write scripts to make it all happen. "You need to take the time up front to really look at something," he says. Common processes might involve 30 interdependent tasks, and it can take weeks to create a proper automated script.
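The kind of interdependent-task process Dusanic describes can be modeled as a small dependency-ordered runner. This is a generic sketch of the orchestration idea, not Opalis's actual model, and the task names are invented:

```python
# Run-book sketch: a "process" is a set of interdependent tasks,
# and the runner executes each task only after all of its declared
# prerequisites have finished.

def run_process(tasks: dict, deps: dict):
    """tasks: name -> callable; deps: name -> prerequisite names.
    Returns the task names in the order they were executed."""
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and all(d in done for d in deps.get(t, []))]
        if not ready:
            raise RuntimeError("dependency cycle or missing task")
        for t in sorted(ready):  # deterministic order within a wave
            tasks[t]()
            done.add(t)
            order.append(t)
    return order

noop = lambda: None
order = run_process(
    {"close_ledger": noop, "export_report": noop, "archive_logs": noop},
    {"export_report": ["close_ledger"], "archive_logs": ["export_report"]},
)
print(order)  # ['close_ledger', 'export_report', 'archive_logs']
```

A real 30-task month-end process would add retries, logging, and conditional branches, which is where the weeks of up-front scripting work go.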
One of the more interesting scenarios Dusanic has been able to accomplish fixes a problem Citrix Systems has with printing large files. The insurance company prints thousands of pages periodically as part of its loss accounting, and the application that deals with them is distributed via Citrix. However, large print jobs run from Citrix can kill print servers, printers, and the application itself.
No data center is perfect, and even the innovative few have their own foibles. For those companies that have already driven down the path of hyperefficient, secure, or automated data centers, there's a lot more to be done. That's the mark of true innovators: never giving up the good fight to stay ahead and keep the competitive edge.