
One Data, Undivided

Maintaining multiple copies of data to satisfy users and security concerns costs companies time and money. The solution is the cloud.

Nothing is certain but death, taxes -- and data growth. Files get larger and there are more of them. There are also many more copies of identical files. In their quest to protect and improve access, IT organizations copy data to many different places.

IDC estimates that over half of the capacity in primary storage systems is devoted to copy data. This estimate does not include additional copies held in backup and archive systems. It also does not include copies generated to satisfy recent demands from mobile users to access data from anywhere.

The problem goes well beyond the cost of raw storage capacity. An unwieldy number of copies creates a data management nightmare as organizations struggle to maintain access control privileges and ensure that users see the right version. This problem is significant, but a new trend has emerged to tackle it: consolidating all of the copies needed for protection and access into a single logical instance.


The goal of data consolidation is to reduce cost by eliminating needless copies, while at the same time simplifying data management. The technology does for data what virtualization did for servers. It abstracts the data away from the hardware so that it can fold all of the copies needed for protection and access into a single instance that can be more easily managed by IT.
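To make the single-logical-instance idea concrete, here is a minimal content-addressed store sketched in Python. This is an illustration only, not any vendor's implementation: identical bytes hash to the same key, so however many logical "copies" an organization names, only one physical instance is kept. All class and method names here are hypothetical.

```python
import hashlib

class SingleInstanceStore:
    """Toy content-addressed store: identical data is kept exactly once."""

    def __init__(self):
        self._blobs = {}   # content hash -> bytes (the one physical instance)
        self._names = {}   # logical path -> content hash (many names, one blob)

    def put(self, path, data):
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data          # no-op if this content already exists
        self._names[path] = key
        return key

    def get(self, path):
        return self._blobs[self._names[path]]

    def physical_copies(self):
        return len(self._blobs)

store = SingleInstanceStore()
report = b"Q3 financials"
# Three logical "copies" for production, DR, and dev/test...
store.put("/prod/report.xlsx", report)
store.put("/dr/report.xlsx", report)
store.put("/devtest/report.xlsx", report)
# ...resolve to a single physical instance.
print(store.physical_copies())  # -> 1
```

Because every logical name resolves to the same stored object, access control and versioning can converge on that one instance rather than being re-applied copy by copy.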

(Image: Wikipedia)

The road to data protection is riddled with copies. Data born in primary storage is quickly copied to a secondary array in the same data center in order to provide quick recovery. Most organizations will also make an additional copy of the data in a geographically distinct data center to support business continuity.

The same data might also end up in tape or disk-to-disk backup sub-systems in order to provide long-term versioning and archiving. Often, production systems will spawn development-and-test duplicates. Larger organizations can end up with copies in three data centers: two to ensure business continuity, and a third simply to test failover and fallback. All of this additional infrastructure weighs heavily in the minds of CIOs as they project the budget needed to support the growth of their organizations.

A generation of mobile users accustomed to having data at their fingertips is now demanding "access anywhere!" The problem has been pervasive for some time. Organizations with remote or branch offices have had to either stretch access with WAN acceleration or replicate data across their locations when high-performance access is required. The traditional infrastructure is straining well beyond its technical capabilities to deliver access to smartphones, tablets, and an increasingly mobile workforce. Meanwhile, the file sync-and-share players, such as Dropbox and Box, have moved in to capitalize on this need for data mobility.

Recently, an IT director at one of the largest engineering firms in the world told me the firm was considering adopting Box for its mobile users. Concerns about security and about keeping data synchronized with Box were holding it back. Its engineering users couldn't sacrifice high-performance access to the data, which today sits on the firm's powerful file servers. The challenge was how to extend mobile access to that data without introducing yet another storage platform and adding another unmanaged copy.

Every organization needs to be able to point to an authoritative copy of data. Access control, versioning, audit trails -- everything -- converges on that single logical instance of data. A forward-looking CIO said it best in a recent email exchange: "We want one central, secure, authoritative copy of each data asset… and many ways to access and present the data (again in a secure manner) to our internal and external teams." He manages the global footprint of a firm with dozens of locations, all needing high-performance and mobile access to the same data, plus selective access for outside partner organizations.

Just as virtualization first separated the operating system from the hardware -- enabling virtualization to deliver its killer use-case, server consolidation -- the road to data consolidation also begins by separating the data from the hardware. Several firms are trying to deliver on this notion of one central version of the data. EMC has introduced ViPR, which abstracts the data services away from the actual storage gear in use. Up-and-coming companies such as Actifio tout a cure to copy-data creep in order to help large organizations streamline backup and disaster recovery.

My own company, Nasuni, has evolved the storage controller design in order to use public cloud storage. Because the cloud already makes copies, it can eliminate the need for IT to make additional backup or disaster recovery copies, while at the same time it enables global access.

Our common vision for the industry is to give IT organizations control over a single, fully protected, fully versioned instance of every data asset. When high-performance access is needed, the data is instantiated in a high-performance system. When access is needed across multiple locations, the copies of the data appear in all those locations and a global lock prevents conflicts in order to maintain one clean version of the data. Enabling mobile access means just that: stretching the perimeter of access to include mobile devices rather than copying the data to yet another storage platform.

Data consolidation establishes complete control over each data asset and, once and for all, liberates IT from having to make, track, and manage all the copies.


Andres Rodriguez is CEO of Nasuni, a supplier of enterprise storage using on-premises hardware and cloud services, accessed via an appliance. He previously co-founded Archivas, a company that developed an enterprise-class cloud storage system and was acquired by Hitachi Data ...
Copyright © 2021 UBM Electronics, A UBM company, All rights reserved.