Commentary | 9/25/2014 10:40 AM

One Big Idea For Data Center Storage

A single, global file management system turns data center-centric thinking inside out. It grants ease of access around the world but keeps control centralized.


Sometimes one is better than many. Nowhere is that more apparent than in data storage. The traditional way to protect data is by copying it multiple times. These copies, combined with the need to provide access to shared data in multiple locations, create an unwanted explosion of duplicated data in an infrastructure already strained by years of relentless data growth.

Companies struggling under the combined pressure of Tier-2 file-data growth and an expanding global storage footprint increasingly rely on a new generation of global file systems. They enable IT to consolidate all copies of data required for protection and access into a single master copy.

File-data consolidation elevates the IT conversation from the nuts and bolts of storage provisioning, backup, disaster recovery, etc., to a discussion about data management that is well aligned to the business: Who needs access to this data? Where do they need it? What level of performance is required?
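
As a concrete illustration, the short Python sketch below captures those questions as a policy record rather than as provisioning details. The field names and values are invented for illustration only and are not drawn from any particular product.

```python
# Hypothetical policy record: who needs the data, where they need it,
# and at what performance level. The schema is invented for illustration.
chip_design_policy = {
    "volume": "chip-design-projects",
    "access_groups": ["eda-engineers", "layout-team"],   # who needs access
    "sites": ["austin-dc", "bangalore-dc", "haifa-dc"],  # where they need it
    "performance_tier": "high-iops",                     # required performance
    "retention": "unlimited-versions",                   # protection policy
}
```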

A global file system brings much-needed order and efficiency to IT organizations struggling with data growth across multiple geographies. That said, the requirements for stability, scale, performance, security, and central management of such a global infrastructure are formidable; meeting them takes a new hybrid architecture that combines the nascent resources of cloud infrastructure providers with more traditional data center technology.

[Want to learn more about how intelligence in storage management is shifting? See Storage Intelligence Moves Up The Stack.]

To achieve data stability at scale, the modern global file system uses a combination of versioning and replication, similar to traditional snapshot and replication techniques. In this case, however, the snapshots cannot be capped at a fixed maximum number; otherwise the system denies IT one of the most powerful features of the cloud: effectively unlimited capacity for versions, which eliminates the need for a separate backup process.
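
A minimal sketch of that idea, assuming a simple append-only version chain per file; the names and structure are invented for illustration, not a description of any vendor's implementation.

```python
import hashlib
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class FileVersion:
    """One immutable snapshot of a file's contents (illustrative only)."""
    version_id: str
    timestamp: float
    size: int

@dataclass
class VersionedFile:
    """Append-only version chain: no fixed maximum, so old versions never
    have to be pruned to make room for new snapshots."""
    path: str
    versions: List[FileVersion] = field(default_factory=list)

    def snapshot(self, data: bytes) -> FileVersion:
        version = FileVersion(
            version_id=hashlib.sha256(data).hexdigest(),
            timestamp=time.time(),
            size=len(data),
        )
        self.versions.append(version)   # unbounded, cloud-backed history
        return version

    def restore(self, version_id: str) -> FileVersion:
        # Any prior version can be retrieved, which is what removes the
        # need for a separate backup process.
        return next(v for v in self.versions if v.version_id == version_id)
```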

Each file in a modern file system contains every version of itself, which is then replicated across multiple availability zones by the cloud infrastructure providers in order to protect against a single zone's failure. Providers like Amazon and Microsoft are adept at this sort of massive, geographically dispersed replication, which carries the added benefit of increasing the fluidity of data and the speed at which data can be accessed from anywhere in the world.
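
The sketch below illustrates multi-zone replication of one immutable version object, with in-memory dictionaries standing in for the providers' object stores; the zone names and replica count are assumptions for illustration.

```python
from typing import Dict, List

ZONES = ["us-east-1a", "us-east-1b", "us-west-2a"]  # illustrative zone names

def replicate_version(version_id: str, data: bytes,
                      zone_stores: Dict[str, dict],
                      min_copies: int = 3) -> List[str]:
    """Write one immutable version object to several availability zones so
    the loss of any single zone cannot destroy the master copy."""
    written = []
    for zone in ZONES:
        if len(written) >= min_copies:
            break
        # Stand-in for an object-store PUT to that zone.
        zone_stores.setdefault(zone, {})[version_id] = data
        written.append(zone)
    if len(written) < min_copies:
        raise RuntimeError("could not reach the required replica count")
    return written
```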

Performance requirements for Tier-2 file data can vary wildly. A few thousand engineers working on a chip design require much higher performance than does a branch office working on spreadsheets. I know an ambitious executive VP of infrastructure who is using a global file system to allow dozens of production sites worldwide to share a 15-terabyte work set of video data, while some of his smaller sites require only relatively low-performance access to Microsoft Office documents.

(Image: mekuria getinet)

The key to performance, then, is flexibility. But to enable file-data consolidation, the global file system must take IT out of the business of copying data to each site where performance is needed. Instead, it must rely on the fluidity of the cloud infrastructure back end and on caching and pre-staging algorithms that move needed data into the data centers where it will be used; the job of delivering the necessary I/O per second falls to the hardware appliances racked at each location.
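
Here is a small sketch of what the edge-appliance side of that design might look like, with a plain LRU cache standing in for the caching and pre-staging logic. A real appliance would use far more sophisticated policies; the interface is invented for illustration.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy edge-appliance cache: recently used and pre-staged files are
    served locally at appliance speed; everything else is fetched from the
    cloud back end on demand. Eviction here is plain LRU."""

    def __init__(self, capacity_bytes: int, cloud_fetch):
        self.capacity = capacity_bytes
        self.used = 0
        self.cloud_fetch = cloud_fetch          # callable: path -> bytes
        self.entries = OrderedDict()            # path -> bytes

    def read(self, path: str) -> bytes:
        if path in self.entries:                # cache hit: local IOPS
            self.entries.move_to_end(path)
            return self.entries[path]
        data = self.cloud_fetch(path)           # cache miss: pull from cloud
        self._admit(path, data)
        return data

    def prestage(self, paths):
        """Warm the cache ahead of demand, e.g. before a site's workday."""
        for path in paths:
            self.read(path)

    def _admit(self, path: str, data: bytes):
        while self.used + len(data) > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)  # evict oldest
            self.used -= len(evicted)
        self.entries[path] = data
        self.used += len(data)
```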

And because data is not actually stored in the data center but in the cloud, modern global file systems must secure data at the endpoints by allowing IT to generate and guard its own encryption keys.
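
A minimal sketch of that approach, using the widely available Python cryptography package; the function names are invented, and the point is only that the key is generated and held by the organization while nothing but ciphertext leaves the endpoint.

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

def generate_site_key() -> bytes:
    """Generated and guarded by IT, e.g. in an on-premises key store."""
    return Fernet.generate_key()

def encrypt_for_upload(key: bytes, plaintext: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)       # only ciphertext reaches the cloud

def decrypt_after_download(key: bytes, ciphertext: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)      # only key holders can read it

# Usage:
# key = generate_site_key()
# blob = encrypt_for_upload(key, b"design-review.dwg contents")
# assert decrypt_after_download(key, blob) == b"design-review.dwg contents"
```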

Global file systems that connect disparate geographies can eliminate the stress introduced by distance and give every worker equal access to data in the infrastructure. They do so by turning traditional data center thinking inside out: central management moves out of the data center and into a global core service that can monitor and manage every system regardless of its location. This shift has already gone mainstream in networking with systems like Aerohive and Cisco's Meraki. It is only a matter of time before the same model simplifies storage.
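
A toy sketch of that core-service model, with appliances reporting status to a single control plane; the field names, thresholds, and reporting mechanism are assumptions for illustration.

```python
import time
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ApplianceStatus:
    site: str
    last_heartbeat: float
    cache_hit_rate: float
    pending_uploads: int

class GlobalManagementService:
    """One control plane that sees every edge appliance, regardless of
    where it sits geographically."""

    def __init__(self, heartbeat_timeout: float = 300.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.fleet: Dict[str, ApplianceStatus] = {}

    def report(self, status: ApplianceStatus) -> None:
        self.fleet[status.site] = status        # appliance checks in

    def unhealthy_sites(self) -> List[str]:
        now = time.time()
        return [s.site for s in self.fleet.values()
                if now - s.last_heartbeat > self.heartbeat_timeout]
```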

Rather than being the center of everything, data centers become endpoints responsible for security and the required level of performance, while every other infrastructure function shifts to a core cloud infrastructure.

To imagine a truly global storage infrastructure, one must think beyond the confines of any one physical appliance or data center. Only then can organizations harness the power of one copy of data, protected in many ways, accessible anywhere.

If you just look at vendor financials, the enterprise storage business seems stuck in neutral. However, flat revenue numbers mask a scorching pace of technical innovation, ongoing double-digit capacity growth in enterprises, and dramatic changes in how and where businesses store data. Get the 2014 State of Storage report today. (Free registration required.)

Andres Rodriguez is CEO of Nasuni, a supplier of enterprise storage using on-premises hardware and cloud services, accessed via an appliance. He previously co-founded Archivas, a company that developed an enterprise-class cloud storage system and was acquired by Hitachi Data ...
Comments
Charlie Babcock,
User Rank: Author
9/25/2014 | 11:15:37 PM
The speed of the data center, the control of the cloud
I think Rodriguez is arguing that a storage file system that spans data centers and the cloud, under the right architecture, yields the speed of an on-premises system with the central control of a cloud service. There are specific requirements, such as on-premises appliances like the ones Nasuni offers. But it could be built in different ways.
asksqn,
User Rank: Ninja
9/25/2014 | 6:03:37 PM
Innovation stifled by greed
Cloud storage has certainly made leaps and bounds, but it is still largely in its infancy. All bets will be off, though, if the FCC gives internet providers carte blanche to erect internet toll booths for the fast lane. Only deep pockets will be able to access their data, while startups and innovation will stall because Johnny's Garage Biz won't have the extra funds to pay for the two-tiered internet that providers are hot to put into place.