Flash First: Your Next Storage Strategy?
As flash storage costs decline, its performance advantages over hard drives become even more appealing.
Many IT departments have a virtualize-first strategy: anytime a new server is requested, the default is to virtualize it, and a standalone physical server requires special justification. We may be heading the same way with storage, where new storage additions are flash first and hard drives are used only for storing less active data.
Cost has been the key hindrance to solid-state drive (SSD) adoption, and reducing cost per effective GB is a key reason data centers will move to a flash-first strategy. As we discuss in our recent article "SSD Can Achieve HDD Price Parity," continued advances in flash controller technology, combined with advanced flash storage system design, have made it possible for flash SSD systems to reach price parity with enterprise disk storage systems. A key enabler is multi-level cell (MLC) flash, which pairs consumer-grade NAND with advanced controllers to deliver enterprise-class reliability and redundancy.
On top of safely using MLC-based SSDs to drive down price, deduplication and/or compression are almost universally adopted in the flash appliance market. The combination can deliver a fivefold or greater increase in effective capacity, and flash has the performance headroom to absorb the additional lookup workload that deduplication imposes. Not all deduplication is created equal, though. As we pointed out in our recent webinar "What is Breaking Deduplication," users and suppliers need to pay careful attention to ensure deduplication does not become a performance problem as systems scale in capacity.
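As a back-of-the-envelope illustration of how data reduction changes the economics, the arithmetic can be sketched as follows. The dollar figures here are hypothetical assumptions for illustration, not prices from the article; only the fivefold reduction ratio comes from the text above.

```python
# Effect of deduplication/compression on cost per effective GB.
# The raw $/GB figures below are hypothetical, assumed for illustration.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per effective (post-deduplication/compression) gigabyte."""
    return raw_cost_per_gb / reduction_ratio

raw_flash = 2.00  # assumed raw $/GB for an MLC flash system
raw_hdd = 0.40    # assumed raw $/GB for an enterprise disk system

# A 5x combined data-reduction ratio, as cited above, pulls flash's
# effective cost down toward disk's raw cost.
print(effective_cost_per_gb(raw_flash, 5.0))  # 0.4
print(effective_cost_per_gb(raw_hdd, 1.0))    # disk systems rarely reduce inline
```

The point of the sketch is that the reduction ratio divides directly into the raw price, which is why dedup-friendly workloads reach parity fastest.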
With cost issues being addressed so rapidly, the other driver of a flash-first strategy is that initiatives like server and desktop virtualization have made storage performance bottlenecks a near-universal problem in the data center. The random I/O that a host loaded with even a few virtual machines generates is significant and can easily tax hard drive-based systems. The problem will only grow as VM density per host increases with each processor upgrade. Random I/O is, of course, flash storage's trump card: other than DRAM-based systems, nothing responds to random I/O faster than flash.
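A rough IOPS budget shows why even modest VM counts tax spinning disk. The per-device and per-VM figures below are common rules of thumb assumed for illustration, not measurements from the article:

```python
# Rough random-I/O budgeting for a virtualized host.
# Per-device and per-VM numbers are rules of thumb, assumed for illustration.

HDD_RANDOM_IOPS = 180      # roughly a 15K RPM enterprise drive
SSD_RANDOM_IOPS = 50_000   # roughly a modest enterprise flash device

def drives_needed(vm_count: int, iops_per_vm: int, device_iops: int) -> int:
    """Devices required to satisfy an aggregate random-I/O load."""
    total = vm_count * iops_per_vm
    return -(-total // device_iops)  # ceiling division

# 20 VMs each pushing ~50 random IOPS is a mild load, yet it already
# demands several spindles, while a single SSD absorbs it easily.
print(drives_needed(20, 50, HDD_RANDOM_IOPS))  # 6
print(drives_needed(20, 50, SSD_RANDOM_IOPS))  # 1
```

Because random I/O defeats a disk head's ability to stay sequential, adding VMs multiplies spindle counts quickly, which is the bottleneck the paragraph above describes.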
Capacity and capacity management are also less of a concern now. Certainly data continues to grow, but designing a system large enough to store all an organization's data is not that difficult. What was difficult was storing all that data and keeping storage response time acceptable. Flash resolves the performance problem, and there is a suite of tools and systems that will manage the movement of active data to a flash storage device.
Finally, thanks to its performance advantage and easier-to-justify price point, flash makes the storage administrator's life easier. Once everything, or almost everything, is on flash, performance tuning and scaling virtual machine density become significantly simpler. And because there are so many ways to implement and leverage flash, you don't have to wait for your storage refresh budget to come through: flash can be added as a standalone appliance, in the server host, or as a network cache to solve specific performance problems right away.
New innovative products may be a better fit for today's enterprise storage than monolithic systems. Also in the new, all-digital Storage Innovation issue of InformationWeek: Compliance in the cloud era. (Free with registration.)