Kaminario Makes Data Available Despite Storage Node Failure
DataProtect backup feature enables high availability, self-healing, automatic failover, and recovery for continuous storage area network operation.
Kaminario announced Monday its new DataProtect feature, which provides self-healing and automatic recovery even after a storage node failure. New to its K2 storage area network (SAN) is a RAID10HD process, which automatically mirrors and stripes data across multiple storage nodes in an N+1 scheme so that data remains available at all times. High availability is important for business-critical applications that need to run nonstop in a 24/7 environment.
With the DataProtect feature, the K2 SAN automatically detects a node failure and redirects data to a spare node for non-disruptive operation. During this operation, the data is recovered from the mirrored backup, promoted to primary data on the spare node, and redistributed across the remaining nodes. Kaminario is automatically notified so that its support organization can begin replacing the failed node. Other features include high-speed snapshot capability and asynchronous replication for disaster recovery and remote backup.
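The recovery flow described above can be sketched in a few lines. This is an illustrative model only, not Kaminario's implementation: the dict-based "cluster," the node names, and the `fail_over` function are all assumptions made for the example.

```python
# Hedged sketch of the failover flow: when a storage node fails, each block
# it held is recovered from its surviving mirror copy on another node and
# rebuilt on the spare, restoring two copies of every block.

def fail_over(cluster, failed, spare):
    """cluster: {node: {block_id: data}}, with each block stored on two nodes.
    Removes the failed node and rebuilds its blocks onto the spare node."""
    lost = cluster.pop(failed)  # the failed node's copies are gone
    for block in lost:
        # locate the surviving mirror copy on one of the remaining nodes
        source = next(n for n, blocks in cluster.items() if block in blocks)
        cluster.setdefault(spare, {})[block] = cluster[source][block]
    return cluster
```

After `fail_over` returns, every block again exists on two different nodes, so the array can tolerate another single-node failure once the spare is in service.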
The K2 SAN has options for all-DRAM, all-multi-level cell (MLC) flash, and hybrid DRAM and MLC configurations. When Kaminario introduced its products, all storage nodes had embedded backup hard disk drives (HDD) to provide low-cost backup media. Now customers can use MLC SSDs for this function, which provides high-speed access to data after a storage node failure, albeit at a higher cost.
The K2 SAN uses the Scale-out Performance Storage Architecture (SPEAR) operating system, which manages two types of nodes: ioDirector performance nodes and storage nodes. The ioDirector nodes provide Fibre Channel connectivity to servers and control the distribution of data to the storage nodes so that every piece of data has its primary and backup copies on different storage nodes. In this N+1 configuration, even after the loss of a storage node, the data can be accessed automatically from either primary or backup storage.
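The placement rule above can be illustrated with a minimal sketch. The round-robin striping, node count, and block IDs are assumptions for the example, not details of Kaminario's RAID10HD implementation.

```python
# Hedged sketch of N+1 mirrored/striped placement: each block's primary and
# backup copies must land on different storage nodes, so the loss of any one
# node leaves every block readable from its surviving copy.

def place_blocks(block_ids, num_nodes):
    """Return {block_id: (primary_node, backup_node)} with the two copies
    always on different nodes (round-robin striping, for illustration)."""
    placement = {}
    for i, block in enumerate(block_ids):
        primary = i % num_nodes
        backup = (primary + 1) % num_nodes  # always != primary when num_nodes > 1
        placement[block] = (primary, backup)
    return placement

placement = place_blocks(range(8), num_nodes=4)
assert all(p != b for p, b in placement.values())
```

Because no block's two copies ever share a node, any single node failure leaves a complete copy of the data online, which is the property the N+1 scheme depends on.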
A typical configuration would start with three ioDirector nodes and four storage nodes, one of which is a spare. Storage nodes come in several configuration options. The K2-D has all-DRAM storage in capacities of 0.5 TB to 12 TB. The K2-F has all-MLC flash storage in capacities of 3 TB to 100 TB, and the K2-H can be configured with 3 TB to 100 TB of DRAM combined with MLC flash. Storage nodes are built on x86 hardware with PCIe MLC flash and/or DRAM boards. All models have optional SATA HDDs or SATA MLC SSDs for backup. The K2-D is used for write-intensive or latency-sensitive workloads, the K2-F for read-intensive workloads, and the K2-H for mixed workloads.
The K2 SANs with DataProtect high availability and data protection features are available now with the exception of array-based replication, which is expected to be available in the second half of 2012. Host-based replication is available now. Pricing starts at $20 per GB for the K2-F MLC array bundled with the DataProtect features.
Deni Connor is founding analyst for Storage Strategies NOW, an industry analyst firm that focuses on storage, virtualization, and servers.