Our exclusive look inside the new data centers of Fidelity, GM, Capital One, Equinix, ServiceNow, and Bank of America shows the future of computing.
Fidelity keeps its options open
Fidelity Investments is set to open a state-of-the-art data center in Nebraska based on a design it has been working on for five years. Fidelity aims to use as much open source code and standardized Open Compute hardware as possible in its data center, along with its own proprietary "Click to Compute" server orchestration and management system.
As an early member of Facebook's Open Compute Project, Fidelity is hoping to see competing suppliers produce servers for a networked, rack-based hardware platform that encourages rapid cycles of innovation. As part of a highly regulated industry, it also wants a data center that it owns and manages and in which it retains company data.
The data center, slated to open in September, implements the Centercore design Fidelity has been refining since 2009 as it settled on the right blend of elements for a leading financial services company. The intention is to capture the elasticity of the "hyperscale" data centers built by the likes of Google and Amazon, says Eric Wells, Fidelity's VP of data center services. "It's a very open design that can evolve as we decide to add capacity in the future."
In the previous generation of Fidelity data centers, Wells says, "We found a lot of stranded power and IT capacity, where the infrastructure couldn't take full advantage of the resources available to it because of a crowding together of the wrong mix of elements." Fidelity adds capacity to its Centercore design in 500-kilowatt or 1-megawatt units, with all of that power available to be consumed by the equipment in the unit. A 500-kilowatt CoreUnit might typically occupy 2,200 square feet of data center space.
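Those two figures imply a power density for a CoreUnit, and they let you size a build-out in whole units. A quick back-of-the-envelope sketch using only the numbers quoted above (the 3-megawatt target is a hypothetical example, not a Fidelity figure):

```python
import math

# Figures quoted in the article for one 500-kilowatt CoreUnit.
UNIT_POWER_KW = 500      # power capacity of one CoreUnit
UNIT_AREA_SQFT = 2_200   # typical floor space of a 500 kW CoreUnit

# Implied power density of a fully loaded unit.
density_w_per_sqft = UNIT_POWER_KW * 1_000 / UNIT_AREA_SQFT
print(f"{density_w_per_sqft:.0f} W per square foot")   # ~227 W/sq ft

# Hypothetical sizing example: how many 500 kW CoreUnits
# would a 3-megawatt build-out require?
target_mw = 3
units_needed = math.ceil(target_mw * 1_000 / UNIT_POWER_KW)
print(f"{units_needed} CoreUnits for a {target_mw} MW build-out")  # 6 units
```

The "just-in-time" approach described below amounts to deferring the `units_needed` purchases until demand actually materializes, rather than building the full target capacity up front.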
CoreUnits are steel-frame, one-story rooms that can be assembled on-site like Lego pieces, Wells says. Sliding panels in the walls allow a newly added unit to open up and provide contiguous space with an adjacent CoreUnit. The units are built off-site to Fidelity's specs by an independent fabricator, Environmental Air Systems, and then trucked to Fidelity's data center construction site. Unlike earlier modular designs based on shipping containers, the units may be stacked into a multistory building, which can be particularly useful in an urban location.
Within days of arrival, the units are equipped with the power connections and cabling they need to take up their station. An entire data center can be constructed this way in six months and expanded as needed. Fidelity calls it "just-in-time data center construction" and builds no more than it needs at any one time.
Each CoreUnit has its own cooling system and power distribution system. The units are designed to run at a warm 90 degrees Fahrenheit, collecting hot air off the equipment and either cooling it or venting it outside. CoreUnits can withstand F3-strength winds, such as those produced by the large tornadoes that strike the Midwest.
Fidelity's Nebraska data center is expected to use 40% less energy than the company's previous data centers.
It will contain thousands of x86 servers, but unlike Google's and Amazon's facilities, it will also contain some RISC/Unix servers, Wells says. The servers and switches are based on Open Compute standards.
InformationWeek Tech Digest, August 03, 2015