News
6/2/2014
08:06 AM
Charles Babcock and Chris Murphy

6 Models Of The Modern Data Center

Our exclusive look inside the new data centers of Fidelity, GM, Capital One, Equinix, ServiceNow, and Bank of America shows the future of computing.

GM puts computing near power users

The people who soak up the most computing power at General Motors are engineers doing CAD drawings of new vehicles and those simulating crashes. It's no coincidence that the automaker built its two new data centers alongside two existing GM locations in Michigan where most of those teams work.

Limiting the distance all that data must travel saves on networking costs and improves responsiveness, notes Jeff Liedel, who leads the data centers as executive director of global IT operations. GM modeled its two new data centers on the cutting-edge practices of Web giants such as Facebook and Google -- for example, it uses in-row cooling and x86 commodity servers. But it also needed to accommodate legacy apps not built for a Web architecture.

Name-brand, highly virtualized x86 servers provide most of the automaker's computing power, running a standard software stack that includes Linux OS, VMware, Oracle database, and WebLogic Java application server. But this isn't a private cloud for purists. It isn't one single, general-purpose pool of compute, storage, and networking, because many of GM's applications run best on dedicated hardware. "Like any other 50- or 100-year-old company, we have a lot of other stuff," Liedel says. That other stuff includes Solaris servers, as well as mainframes that run 300 different applications, including systems that process tens of billions of dollars in material and parts purchases. GM's Outlook and Exchange apps run on Windows servers.

Liedel distinguishes between "cloud-ready apps" that can run on the shared private cloud part of GM's environment and older apps that don't fit that model. CAD/CAM probably will never be a cloud app, he says, because it requires so much graphics-intensive local computing. But other apps, such as expense reporting, run on a private cloud environment.

GM's $130 million Warren, Mich., data center opened last summer, and its Milford, Mich., center is due to open this summer. GM is closing 23 data centers worldwide (some of them operated by outsourcers) and moving most of that capacity to these two, which are built to be identical and provide failover for each other.

Here are some other features of the data centers:

Energy efficiency: GM's Warren data center runs at a PUE of about 1.5. PUE (power usage effectiveness) is the standard ratio of the total energy the facility draws to the energy used by the computing equipment itself, so it captures the overhead of cooling, lighting, and power distribution. The closer to 1, the better, and the most efficient enterprise data centers today run at about 1.2 or 1.3. "We'll get there," Liedel says.
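
For readers who want to check the math, here is a minimal sketch of that PUE calculation in Python; the energy figures are hypothetical, chosen only to illustrate the ratio, not GM's actual numbers.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures for illustration only (not GM's actual numbers):
# the facility draws 1,500 kWh while servers, storage, and network gear
# consume 1,000 kWh; the rest goes to cooling, lighting, and distribution losses.
print(pue(1500, 1000))  # 1.5 -- roughly where the Warren center runs today
print(pue(1200, 1000))  # 1.2 -- the level of today's most efficient enterprise sites
```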

In-row coolers: Like most modern data centers, GM's has cooling systems that react to heat sensors in a particular server or rack and can cool only that area rather than trying to cool an entire room. It has Plexiglas rooms of servers in which the temperature regularly runs at about 90 degrees, compared with about 70 degrees for the rest of the room. Those "hot aisle containments" max out at 130 degrees -- at which point the roof pops open, releasing the steaming air into the rest of the building. For much of the year, GM also uses evaporative cooling, with water chilled by the cool Michigan air, instead of standard air conditioning.
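
The control idea is simple: compare each rack's sensor reading with a setpoint and adjust only the cooling units near that rack. The sketch below is a hypothetical illustration of that logic, with thresholds loosely taken from the temperatures cited above; it is not GM's actual control system.

```python
# Hypothetical per-rack cooling logic: react to individual heat sensors rather
# than cooling the whole room. Thresholds loosely follow the article's figures
# (hot-aisle air around 90 degrees F, containment roof vents at 130 degrees F).
HOT_AISLE_TARGET_F = 90.0
CONTAINMENT_MAX_F = 130.0

def control_step(rack_temps_f: dict) -> None:
    for rack, temp in sorted(rack_temps_f.items()):
        if temp >= CONTAINMENT_MAX_F:
            print(f"{rack}: {temp}F -- vent the containment roof into the building")
        elif temp > HOT_AISLE_TARGET_F:
            print(f"{rack}: {temp}F -- increase in-row cooling for this rack only")
        else:
            print(f"{rack}: {temp}F -- within target, no action needed")

control_step({"rack-a01": 88.0, "rack-a02": 96.5, "rack-b07": 131.0})
```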

Flywheels over batteries: In the event of a power loss, GM's data centers use a flywheel system, which Liedel describes as a "mechanical UPS." If the electricity goes out, the flywheels are released to run the facility until conventional diesel generators can kick in. The flywheel system consumes about as much energy as a more conventional uninterruptible power supply system, Liedel says, but it takes far less maintenance and "replaces a roomful of lead acid batteries." The Warren center is LEED Gold-certified, and GM will pursue the same standard for Milford.
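
One way to reason about that failover chain is to compare the flywheel's ride-through time with the time the generators need to come up to full power. The numbers below are illustrative assumptions (flywheel UPS systems typically carry a full load for tens of seconds, and standby diesels generally start within roughly ten to fifteen seconds); the article does not give GM's actual figures.

```python
# Hypothetical ride-through check for a flywheel "mechanical UPS": the flywheel
# only has to carry the facility until the diesel generators pick up the load.
flywheel_ride_through_s = 20.0   # assumed seconds of full-load support from the flywheels
generator_start_s = 12.0         # assumed seconds for the diesels to reach full power

margin_s = flywheel_ride_through_s - generator_start_s
if margin_s > 0:
    print(f"Flywheels cover the gap with about {margin_s:.0f} seconds to spare")
else:
    print("Generators must start faster, or more flywheel capacity is needed")
```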

Power use per square foot: Liedel keeps an eye on a data center metric you don't hear much about -- power use per square foot -- to determine if GM is at risk of running out of capacity. Companies usually build a data center with physical space to expand, and GM is no different. The Warren center is only about 60% to 70% filled. But data centers also can run out of power, from either the utility or its own backup generators. Today, the Warren data center is using less than 50% of its power capacity. As the electronics get smaller, they'll draw more power per square foot, which means Liedel is watching whether he has to add juice well before he needs to pour more concrete.
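
As a minimal sketch of the metric Liedel watches, divide the IT load by the occupied floor area and track both space and power against their ceilings; every number below is hypothetical, since the article gives only rough percentages.

```python
# Hypothetical capacity check: a data center can run out of power (from the
# utility feed or its backup generators) before it runs out of floor space.
floor_space_sqft = 100_000        # assumed total raised-floor area
floor_space_used_sqft = 65_000    # ~65% filled, within the article's 60-70% range
power_capacity_kw = 10_000        # assumed utility/generator ceiling
power_used_kw = 4_500             # just under 50% of capacity, per the article

watts_per_sqft = power_used_kw * 1000 / floor_space_used_sqft
print(f"Current draw: {watts_per_sqft:.0f} W per occupied square foot")
print(f"Space headroom: {1 - floor_space_used_sqft / floor_space_sqft:.0%}")
print(f"Power headroom: {1 - power_used_kw / power_capacity_kw:.0%}")
```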

Comments
Charlie Babcock,
User Rank: Author
6/17/2014 | 4:23:21 PM
Nebraska data center built to withstand an F3 force gale. What about two?
Fidelity built its new data center near Omaha, Nebraska, which is about 90 miles from where the twin tornadoes struck Pilger, Neb., June 16. Its steel-frame rooms can withstand an F3 force wind, which includes all but the largest tornadoes. Not sure, though, whether it can withstand two of them at the same time.
Charlie Babcock,
User Rank: Author
6/3/2014 | 3:32:06 PM
Open Compute key to future data center hardware?
Facebook uses servers based on the Open Compute Project's motherboard design. It's also testing data center switches based on Broadcom's design submitted to the Open Compute Project. Mellanox, Big Switch, and Broadcom are all planning to build Open Compute-design switches. Facebook is using some of the switches for an SDN production network, Yevgeniy Sverdlik reported on Data Center Knowledge today. http://www.datacenterknowledge.com/archives/2014/06/03/facebook-testing-broadcoms-open-compute-switches-production/
Charlie Babcock,
User Rank: Author
6/3/2014 | 3:18:55 PM
Minimizing power consumption in its distribution
Another aspect of modern data center building is how the facility manages its power supply. There is actually a wide variety of schemes to make power uninterruptible -- and they require some small amount of energy themselves to stay ready at an instant's notice for a switchover. A closet full of 12-volt batteries, with some portion of the incoming current flowing through them, is one solution. A gateway between the batteries and the alternating current can be built from an insulated gate bipolar transistor, which instantly conveys direct current if the alternating current goes away. That bypasses the need to run a little of the incoming current through the batteries, saving energy, an innovation introduced by the Vantage data center builders.
Laurianne,
User Rank: Author
6/3/2014 | 10:06:28 AM
Re: Speed will drive architecture
Midsize companies often struggle just to do an apples-to-apples cost comparison between in-house and cloud. Great look inside these data centers, Chris and Charlie. Did anything surprise you here, readers?
ChrisMurphy,
User Rank: Author
6/3/2014 | 9:31:46 AM
Re: Speed will drive architecture
Well put, James -- I have heard a number of midsized companies say they're benchmarking their data centers against cloud options, and believe they're competitive on costs. And as you say, cloud doesn't fit well for every app. It seems to me like we're seeing hybrid, but it's hybrid silos -- this goes cloud all the time, that stays on prem all the time, and there's very little dynamic switching (cloud bursting) between cloud and on prem. If others are seeing a lot of that dynamic switching between cloud and on prem, I'd love to hear about it. 
JamesV012,
User Rank: Apprentice
6/3/2014 | 9:25:42 AM
Re: Speed will drive architecture
Agreed that the larger companies aren't building a secret competitive advantage and are pretty open about how they do data centers. I am playing from the mid-sized company tees. If you are more efficient on cost or speed, I still consider that a competitive advantage. At the mid size, having data center and networking architecture designed for your needs can be a win.

My point was a bit cryptic. So many people are looking at cloud plays for infrastructure. While that can make sense for many applications, it isn't the new one-size-fits-all. I think you'll see hybrid cloud/on prem architecture patterns being an advantage.
ChrisMurphy,
User Rank: Author
6/3/2014 | 9:14:21 AM
Re: Speed will drive architecture
You note the competitive advantage that comes from the data center. But it's interesting how companies like Facebook are very open about their data center innovations -- seeing data centers as a cost to be lowered, and the more ideas they can share and spur, the better. The tactics of running a world-class data center seem well understood; the challenge lies in executing on those tactics and then wringing the most value out, with steps like the ones Capital One is taking to speed development and make sure infrastructure can keep up.
JamesV012,
User Rank: Apprentice
6/2/2014 | 1:38:35 PM
Speed will drive architecture
As you saw with the drive at FB and Google, other companies will realize you can build a competitive advantage in the data center. That could be speed, cost, or security. As big data gets crunched more and more, having a dedicated infrastructure designed to handle it may provide a competitive advantage.
Charlie Babcock,
User Rank: Author
6/2/2014 | 1:23:10 PM
The just-in-time data center
Fidelity's idea of a just-in-time data center, based on Open Compute hardware and built in modifiable increments, is a drastic departure from the fixed-in-concrete notions that preceded it. Are there other ways to make data centers more adaptable?
ChrisMurphy,
User Rank: Author
6/2/2014 | 9:40:23 AM
Beyond Google and Facebook
What drew Charlie and me to this article idea is that, even in this age of the cloud, we keep seeing companies make major investments in their own data centers. We've written about data center innovation at Internet companies like Google and Facebook, but the companies profiled here have different needs, from strict regulations to legacy apps.