Will Solid-State Storage Kill Tiering?
By George Crump
InformationWeek
First, let's be clear: while in my last entry I predicted that solid-state storage may become the dominant form of storage in the data center sooner than expected, that conversion is not going to be complete tomorrow. It will take the better part of a decade for solid-state storage to dominate. Even at that point, when it is the predominant form of storage, mechanical storage and, of course, tape will still have a major role to play in your storage infrastructure.
This conversion period is going to require bridging technologies with the intelligence to move data to the faster tier of storage as needed. While direct placement of data on the solid-state tier is ideal, in today's dynamic data center, staffed by IT administrators who are stretched too thin, the reality is that automation is the best bet for making sure solid-state storage is used to maximum efficiency.
After the conversion point, when over 50% of primary data resides on solid-state storage, there will still be mechanical storage. In fact, it's reasonable to expect that, from a sheer capacity standpoint, mechanical storage may be larger than the solid-state tier. Again, technologies like auto tiering and caching will be leveraged to move data to the mechanical tier.
Even when we reach the point where greater than 50% of primary data is on solid-state storage, or if we ever get to the point where there is no mechanical storage at all, there will certainly be tiers of solid-state. Today we have DRAM, SLC flash, eMLC flash, and MLC flash. Each of these variants of memory-based storage has its advantages. DRAM, for example, while the most expensive, has the best write performance and is the most durable. SLC flash is the most reliable flash memory, and the MLC variants make solid-state storage more affordable.
Auto tiering and caching technologies are already being designed to take advantage of each memory type's strengths while avoiding its weaknesses. There will be tiers of memory-based storage in the future just as there are tiers of hard-drive-based storage today.
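To make the idea concrete, here is a minimal sketch of the kind of heat-based placement policy an auto-tiering engine might apply across those memory tiers. The tier names, thresholds, and access-count heuristic are illustrative assumptions, not any vendor's actual algorithm.

```python
# Illustrative sketch only: a toy heat-based placement policy across
# hypothetical memory tiers. Thresholds and the access-count heuristic
# are assumptions for illustration, not a vendor implementation.

from dataclasses import dataclass

# Tiers ordered from fastest/most expensive to slowest/cheapest.
TIERS = ["DRAM", "SLC", "eMLC", "MLC", "HDD"]

@dataclass
class Extent:
    extent_id: int
    accesses_per_hour: int   # recent access heat
    write_heavy: bool        # heavy writes favor more durable media

def place(extent: Extent) -> str:
    """Pick a tier for an extent based on its recent activity."""
    if extent.write_heavy and extent.accesses_per_hour > 10_000:
        return "DRAM"   # hottest, write-intensive data
    if extent.accesses_per_hour > 1_000:
        return "SLC"    # hot data on the most durable flash
    if extent.accesses_per_hour > 100:
        return "eMLC"
    if extent.accesses_per_hour > 10:
        return "MLC"    # warm, mostly read data on cheaper flash
    return "HDD"        # cold data stays on mechanical storage

if __name__ == "__main__":
    for e in [Extent(1, 25_000, True), Extent(2, 500, False), Extent(3, 2, False)]:
        print(e.extent_id, "->", place(e))
```

In practice, real tiering engines track access patterns over time and weigh durability, cost per gigabyte, and write endurance before moving data; the point of the sketch is simply that policy, not hardware alone, decides where data lands.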
The result of all this: not only will solid-state storage not kill tiering and caching technologies, it will actually make them more commonplace, first as the easy entry point for solid-state into the data center, and eventually as the means to leverage multiple types of memory-based storage within it. Vendors cannot simply stop at the caching and tiering technologies we have today; the pressure will be on them to keep enhancing those technologies so they can support multiple types of memory.
Follow Storage Switzerland on Twitter
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.