6 Models Of The Modern Data Center

Our exclusive look inside the new data centers of Fidelity, GM, Capital One, Equinix, ServiceNow, and Bank Of America shows the future of computing.
By Charles Babcock and Chris Murphy | News | 6/2/2014


Capital One goes for fast and simple

Capital One has trained 5,000 of its employees on Agile development in the past three years, with the goal of constantly spinning out new software features to customers. The future of financial services, Capital One believes, is digital. But what's the point of knocking out apps in six weeks if it takes another two months to get them deployed and actually in customers' hands because of infrastructure shortfalls?

"The objective we set is that we never want the infrastructure to slow down the pace at which our Agile teams can operate," CIO Rob Alexander says. That need for speed in getting new digital features out to customers is a primary reason Capital One opened a $150 million data center outside Richmond, Va., in March.

The problem with data center infrastructure, Alexander says, is that "so often, it's the impediment to getting things moving." Capital One used to get much of its data center capacity through facilities run by third-party operators. The new data center, paired with its existing facility on the other side of Richmond, will allow it to deliver most of that capacity in-house.

Architecturally, the Richmond center is a bit over 70% virtualized. It includes sections of high-density "blocks" of servers, storage, and networking located in the same racks, with name-brand hardware configured to a design that Capital One customized to its needs. To deliver deployment speed, architects are working to standardize software and hardware stacks for certain platforms -- so all Java software, for example, will run on the same infrastructure block.

Standardized platforms and DevOps tactics for deploying software more efficiently are helping to meet rising demand: In March, developers were doing 1,200 builds a day on that Java platform, compared with just 300 six months earlier, before the new data center went online.
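To make the standardization idea concrete, here is a minimal sketch of routing every application to the one pre-approved infrastructure block for its runtime, so deployments never wait on per-app hardware decisions. The names and fields are invented for illustration; Capital One hasn't published its actual tooling or schema.

```python
# Hypothetical mapping of runtimes to standardized infrastructure blocks.
# Everything here (block names, stack fields) is illustrative, not
# Capital One's actual configuration.
PLATFORM_BLOCKS = {
    "java":   {"block": "block-java-01", "stack": "linux/jvm/tomcat"},
    "dotnet": {"block": "block-net-01",  "stack": "windows/clr/iis"},
}

def target_block(runtime: str) -> dict:
    """Return the single standardized block that hosts all apps of a runtime."""
    try:
        return PLATFORM_BLOCKS[runtime]
    except KeyError:
        raise ValueError(f"No standardized block defined for runtime {runtime!r}")

# Every Java build lands on the same block -- no per-app infrastructure decisions.
print(target_block("java"))
```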

Here are some other features of Capital One's new data center:

Hot-aisle containment for high density: Much of the data center uses a typical alternating hot-aisle/cold-aisle approach. But for its new high-density stacks, architects put a Plexiglas room around the equipment. That lets them run those racks much hotter than in the rest of the data center and cool a smaller space. There are only a few of those contained double aisles today, but the data center is built so that as other rows are converted to a high-density private cloud infrastructure, they can be enclosed in the same way.

Cool Virginia air? If you've been to Virginia in July, "cool outside air" isn't what comes to mind -- it's more like muggy and hot. But the Richmond data center can use outside air for cooling about 150 days of the year. The center is LEED Gold-certified.
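The decision of when outside air is usable comes down to a threshold check on temperature and humidity. The sketch below shows the general shape of air-side economizer logic; the setpoints are generic illustrative numbers, not the Richmond facility's actual thresholds.

```python
# Illustrative air-side economizer check: outside air is usable when it is
# both cool and dry enough. Setpoints are generic examples only.
def can_use_outside_air(dry_bulb_c: float, dew_point_c: float) -> bool:
    """Return True if outside air alone can cool the data hall."""
    return dry_bulb_c <= 24.0 and dew_point_c <= 15.0

print(can_use_outside_air(33.0, 24.0))  # muggy July afternoon: False
print(can_use_outside_air(12.0, 6.0))   # crisp October morning: True
```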

Active-active data centers: Capital One has its headquarters in the Richmond area, and it has another data center on the other side of the city from its new one. The goal is to run both data centers for active workloads, but also use them to back each other up during periods of maintenance or brief outages, much like public cloud providers do with their facilities. Data can travel a double round trip between the data centers in 1 millisecond over fiber. The centers aren't yet running a true active-active setup, but that's the goal.
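That 1-millisecond figure also bounds how far apart the two sites can be. A quick back-of-the-envelope calculation, assuming light propagates through fiber at roughly 200 km per millisecond (about two-thirds the speed of light in vacuum):

```python
# Light in fiber travels at about two-thirds of c, roughly 200 km per ms.
FIBER_KM_PER_MS = 200.0

double_round_trip_ms = 1.0              # the figure quoted above
one_way_ms = double_round_trip_ms / 4   # a double round trip is 4 one-way legs

print(f"Implied fiber path: up to ~{one_way_ms * FIBER_KM_PER_MS:.0f} km one way")
# ~50 km -- comfortably close enough for synchronous replication across Richmond
```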

Hardware inventory: When new hardware lands in the data center, it gets an RFID tag so that with one scan, an employee can look up the machine's specs, when it was purchased and delivered, and the like. The team built a tall, slim cart covered with RFID readers that an employee can slowly push past a stack of equipment and know exactly what's there. It's used primarily in the receiving area today, but data center managers plan to use the system to do a monthly inventory of all the equipment deployed on the data center floors, with no manual effort beyond walking a cart past the racks. The collected data will help IT assess when it needs to do tech refreshes of aging gear.
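In data terms, the RFID system amounts to a keyed lookup: one scan resolves a tag to the machine's full record. A minimal sketch follows, with field names assumed for illustration since the article doesn't describe the actual schema.

```python
# Minimal model of scan-to-record lookup. Field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    tag_id: str
    model: str
    purchased: str              # ISO date
    delivered: str              # ISO date
    rack: Optional[str] = None  # set once the gear leaves receiving

inventory = {
    "RFID-0001": Asset("RFID-0001", "2U x86 server", "2014-01-10", "2014-02-03"),
}

def scan(tag_id: str) -> Optional[Asset]:
    """One pass of the reader cart resolves each tag to its full record."""
    return inventory.get(tag_id)

print(scan("RFID-0001"))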

Redundancy: Capital One displays, in 3-foot-tall letters along a main hallway of its data center, the word "Simplicity." Here's an example of that ethos at work: Since you need power from two independent sources coming into every server rack for redundancy, Capital One color-coded those A and B sources -- one red, one blue -- so that anyone can tell at a glance if a piece of equipment has a backup power source. And if it's a type of equipment that doesn't support two power inputs, it gets a yellow cord as a warning that pulling it will turn it off.
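The color-coding rule translates directly into a validation check: every device should draw from both the A (red) and B (blue) feeds, and anything single-corded gets flagged the way a yellow cord flags it on the floor. A sketch of that rule, with the feed labels taken from the article:

```python
# Encode the A/B power rule as a check. "A" and "B" are the two independent
# feeds described above; the messages mirror the red/blue/yellow color code.
def power_status(feeds: set[str]) -> str:
    if {"A", "B"} <= feeds:
        return "redundant (red + blue feeds present)"
    if len(feeds) == 1:
        return "single-corded -- yellow cord: pulling it powers it off"
    return "unpowered"

print(power_status({"A", "B"}))  # redundant
print(power_status({"B"}))       # yellow-cord warning
```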

Comments
Charlie Babcock (Author) | 6/17/2014 4:23:21 PM
Nebraska data center built to withstand an F3 force gale. What about two?
Fidelity built its new data center near Omaha, Nebraska, which is about 90 miles from where the twin tornadoes struck Pilger, Neb., June 16. Its steel-frame rooms can withstand an F3 force wind, which includes all but the largest tornadoes. Not sure, though, whether it can withstand two of them at the same time.
Charlie Babcock (Author) | 6/3/2014 3:32:06 PM
Open Compute key to future data center hardware?
Facebook uses servers based on the Open Compute Project's motherboard design. It's also testing data center switches based on Broadcom's design submitted to the Open Compute Project. Mellanox, Big Switch, and Broadcom are all planning to build Open Compute-design switches. Facebook is using some of the switches for an SDN production network, Yevgeniy Sverdlik reported at Data Center Knowledge today. http://www.datacenterknowledge.com/archives/2014/06/03/facebook-testing-broadcoms-open-compute-switches-production/
Charlie Babcock (Author) | 6/3/2014 3:18:55 PM
Minimizing power consumption in its distribution
Another phase of modern data center building addresses how the facility manages its power supply. There are actually a wide variety of schemes to make power uninterruptible -- and they require some small amount of energy themselves to stay ready at an instant's notice for a switchover. A closet full of 12-volt batteries, with some portion of incoming current flowing through them, is one solution. A gateway between the batteries and alternating current can be built from an insulated-gate bipolar transistor, which instantly conveys direct current if the alternating current goes away. That bypasses the need to run a little of the incoming current through the batteries, saving energy -- an innovation by the Vantage data center builders.
Laurianne (Author) | 6/3/2014 10:06:28 AM
Re: Speed will drive architecture
Midsize companies often struggle just to do an apples-to-apples cost comparison between in-house and cloud. Great look inside these data centers, Chris and Charlie. Did anything surprise you here, readers?
ChrisMurphy (Author) | 6/3/2014 9:31:46 AM
Re: Speed will drive architecture
Well put, James -- I have heard a number of midsized companies say they're benchmarking their data centers against cloud options, and believe they're competitive on costs. And as you say, cloud doesn't fit well for every app. It seems to me like we're seeing hybrid, but it's hybrid silos -- this goes cloud all the time, that stays on prem all the time, and there's very little dynamic switching (cloud bursting) between cloud and on prem. If others are seeing a lot of that dynamic switching between cloud and on prem, I'd love to hear about it. 
JamesV012 (Apprentice) | 6/3/2014 9:25:42 AM
Re: Speed will drive architecture
Agreed that the larger companies aren't building a secret competitive advantage and are pretty open about how they do data centers. I am playing from the mid-sized company tees. If you are more efficient on cost or speed, I still consider that a competitive advantage. At the mid size, having data center and networking architecture designed for your needs can be a win.

My point was a bit cryptic. So many people are looking at cloud plays for infrastructure. While that can make sense for many applications, it isn't the new one-size-fits-all. I think you'll see hybrid cloud/on-prem architecture patterns being an advantage.
ChrisMurphy (Author) | 6/3/2014 9:14:21 AM
Re: Speed will drive architecture
You note the competitive advantage that comes from the data center. But it's interesting how companies like Facebook are very open about their data center innovations -- seeing data centers as a cost to be lowered, and the more ideas they can share and spur, the better. The tactics of running a world-class data center seem well understood; the challenge lies in executing on those tactics and then wringing the most value out, with steps like Capital One is taking to speed development and make sure infrastructure can keep up.
JamesV012 (Apprentice) | 6/2/2014 1:38:35 PM
Speed will drive architecture
As the drive at Facebook and Google showed, other companies will realize you can build a competitive advantage in the data center. That could be speed, cost, or security. As big data gets crunched more and more, having a dedicated infrastructure designed to handle it may provide a competitive advantage.
Charlie Babcock (Author) | 6/2/2014 1:23:10 PM
The just-in-time data center
Fidelity's idea of a just-in-time data center, based on Open Compute hardware and built in modifiable increments, is a drastic departure from the fixed-in-concrete notions that preceded it. Are there other ways to make data centers more adaptable?
ChrisMurphy (Author) | 6/2/2014 9:40:23 AM
Beyond Google and Facebook
What drew Charlie and me to this article idea is that, even in this age of the cloud, we keep seeing companies make major investments in their own data centers. We've written about data center innovation at Internet companies like Google and Facebook, but the companies profiled here have different needs, from strict regulations to legacy apps.