News | 6/2/2014 08:06 AM
Charles Babcock and Chris Murphy

6 Models Of The Modern Data Center

Our exclusive look inside the new data centers of Fidelity, GM, Capital One, Equinix, ServiceNow, and Bank Of America shows the future of computing.

Fidelity keeps its options open

Fidelity Investments is set to open a state-of-the-art data center in Nebraska based on a design it has been working on for five years. Fidelity aims to use as much open source code and standardized Open Compute hardware as possible in its data center, along with its own proprietary "Click to Compute" server orchestration and management system.

As an early member of Facebook's Open Compute Project, Fidelity is hoping to see competing suppliers produce servers for a networked, rack-based hardware platform that encourages rapid cycles of innovation. As part of a highly regulated industry, it also wants a data center that it owns and manages and in which it retains company data.

The data center, slated to open in September, implements the Centercore design Fidelity has been working on since 2009, as it settled on the right blend of elements for a leading financial services company. Its intention is to capture the elasticity of "hyperscale" data centers built by the likes of Google and Amazon, says Eric Wells, Fidelity's VP of data center services. "It's a very open design that can evolve as we decide to add capacity in the future."


What's different

In the previous generation of Fidelity data centers, Wells says, "We found a lot of stranded power and IT capacity, where the infrastructure couldn't take full advantage of the resources available to it because of a crowding together of the wrong mix of elements." Fidelity adds units to its Centercore design in 500-kilowatt or 1-megawatt units, with all the power capable of being consumed by the equipment in the unit. A 500-kilowatt CoreUnit might typically represent 2,200 square feet of data center space.
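The capacity figures above imply a rough power density, which is how a planner would size such a build-out. A back-of-envelope sketch in Python, using only the numbers quoted in this article (the target load below is a hypothetical example, not a Fidelity figure):

```python
# Back-of-envelope capacity math using the Centercore figures quoted
# above: a 500-kilowatt CoreUnit occupying roughly 2,200 sq ft. The
# target load is a hypothetical illustration, not from the article.
import math

CORE_UNIT_KW = 500        # power capacity of one CoreUnit
CORE_UNIT_SQFT = 2200     # approximate floor space of one CoreUnit

density_w_per_sqft = CORE_UNIT_KW * 1000 / CORE_UNIT_SQFT
print(f"Power density: {density_w_per_sqft:.0f} W/sq ft")   # ~227 W/sq ft

target_load_kw = 3200     # hypothetical planned IT load
units_needed = math.ceil(target_load_kw / CORE_UNIT_KW)
print(f"CoreUnits for a {target_load_kw} kW load: {units_needed}")  # 7
```

Because capacity arrives in fixed 500-kilowatt or 1-megawatt increments, growth is a matter of rounding the planned load up to the next whole unit, which is what makes the just-in-time expansion described below practical.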

CoreUnits are steel-frame, one-story rooms that can be assembled on site like Lego pieces, Wells says. CoreUnits have sliding panels in the walls that let a newly added unit open up and provide contiguous space to a neighboring CoreUnit. The units are built off-site to Fidelity's specs by an independent fabricator, Environmental Air Systems, then trucked to Fidelity's data center construction site. Unlike earlier modular designs based on shipping containers, the units may be stacked into a multistory building, which can be particularly useful in an urban location.

Within days of arrival, they've been equipped with the power connections and cabling they need to take up their station. An entire data center can be constructed in this way in six months, and expanded as needed. Fidelity calls it "just-in-time data center construction" and builds no more than it needs at any one time.

Each CoreUnit has its own cooling system and power distribution system. They're designed to run at a warm 90 degrees Fahrenheit, collecting hot air off the equipment and either cooling it or venting it to the outside. CoreUnits can withstand F3-force winds, which can occur in the large tornadoes that strike the Midwest.

Fidelity's Nebraska data center is expected to use 40% less energy than the company's previous data centers.

It will contain thousands of x86 servers, but unlike Google's and Amazon's facilities, it will also contain some RISC/Unix servers, Wells says. The servers and switches are based on Open Compute standards.

Comments
Charlie Babcock, Author | 6/17/2014 4:23:21 PM
Nebraska data center built to withstand an F3 force gale. What about two?
Fidelity built its new data center near Omaha, Nebraska, which is about 90 miles from where the twin tornadoes struck Pilger, Neb., June 16. Its steel-frame rooms can withstand an F3 force wind, which includes all but the largest tornadoes. Not sure, though, whether it can withstand two of them at the same time.
Charlie Babcock, Author | 6/3/2014 3:32:06 PM
Open Compute key to future data center hardware?
Facebook uses servers based on the Open Compute Project's motherboard design. It's also testing data center switches based on Broadcom's design submitted to the Open Compute Project. Mellanox, Big Switch, and Broadcom are all planning to build Open Compute-design switches. Facebook is using some of the switches for an SDN production network, Yevgeniy Sverdlik reported on Data Center Knowledge today: http://www.datacenterknowledge.com/archives/2014/06/03/facebook-testing-broadcoms-open-compute-switches-production/
Charlie Babcock, Author | 6/3/2014 3:18:55 PM
Minimizing power consumption in its distribution
Another aspect of modern data center design is how the facility manages its power supply. There are actually a wide variety of schemes to make power uninterruptible -- and they all require some small amount of energy themselves to stay ready for an instant's-notice switchover. A closet full of 12-volt batteries, with some portion of incoming current flowing through them, is one solution. A gateway between the batteries and the alternating current can be built from an insulated-gate bipolar transistor, which instantly conveys direct current if the alternating current goes away. That bypasses the need to run a little of the incoming current through the batteries, saving energy -- an innovation by the Vantage data center builders.
Laurianne, Author | 6/3/2014 10:06:28 AM
Re: Speed will drive architecture
Midsize companies often struggle just to do an apples-to-apples cost comparison between in-house and cloud. Great look inside these data centers, Chris and Charlie. Did anything surprise you here, readers?
ChrisMurphy, Author | 6/3/2014 9:31:46 AM
Re: Speed will drive architecture
Well put, James -- I have heard a number of midsized companies say they're benchmarking their data centers against cloud options, and believe they're competitive on costs. And as you say, cloud doesn't fit well for every app. It seems to me like we're seeing hybrid, but it's hybrid silos -- this goes cloud all the time, that stays on prem all the time, and there's very little dynamic switching (cloud bursting) between cloud and on prem. If others are seeing a lot of that dynamic switching between cloud and on prem, I'd love to hear about it. 
JamesV012, Apprentice | 6/3/2014 9:25:42 AM
Re: Speed will drive architecture
Agreed that the larger companies aren't building a secret competitive advantage and are pretty open about how they do data centers. I am playing from the mid-sized company tees. If you are more efficient on cost or speed, I still consider that a competitive advantage. At the mid size, having data center and networking architecture designed for your needs can be a win.

My point was a bit cryptic. So many people are looking at cloud plays for infrastructure. While that can make sense for many applications, it isn't the new one size fits all. I think you'll see hybrid cloud/on prem architecture patterns being an advantage. 
ChrisMurphy, Author | 6/3/2014 9:14:21 AM
Re: Speed will drive architecture
You note the competitive advantage that comes from the data center. But it's interesting how companies like Facebook are very open about their data center innovations -- seeing data centers as a cost to be lowered, and the more ideas they can share and spur the better. The tactics of running a world-class data center seem well understood; the challenge lies in executing on those tactics and then wringing the most value out, with steps like the ones Capital One is taking to speed development and make sure infrastructure can keep up.
JamesV012, Apprentice | 6/2/2014 1:38:35 PM
Speed will drive architecture
As you saw with the drive at FB and Google, other companies will realize you can build a competitive advantage in the data center. That could be speed, cost, or security. As big data gets crunched more and more, having dedicated infrastructure designed to handle it may provide a competitive advantage.
Charlie Babcock, Author | 6/2/2014 1:23:10 PM
The just-in-time data center
Fidelity's idea of a just-in-time data center, based on Open Compute hardware and built in modifiable increments, is a drastic departure from the fixed-in-concrete notions that preceded it. Are there other ways to make data centers more adaptable?
ChrisMurphy, Author | 6/2/2014 9:40:23 AM
Beyond Google and Facebook
What drew Charlie and me to this article idea is that, even in this age of the cloud, we keep seeing companies make major investments in their own data centers. We've written about data center innovation at Internet companies like Google and Facebook, but the companies profiled here have different needs, from strict regulations to legacy apps.