Serengeti Project additions now enable Hadoop clusters to expand and contract while running in virtual machines, a VMware engineer explains.
Hadoop running in virtual machine nodes can now expand or shrink its cluster on the fly. This newfound elasticity, a result of the Serengeti Project, lets Hadoop match the physical resources it commands to its task load; in the past it had to occupy a static, standalone physical cluster, whether it needed one or not.
Hadoop is the distributed data processing framework built on MapReduce and the HDFS file system, both designed to exploit large clusters of commodity x86 hardware. Hadoop is noted for the terabytes of data it can chomp at one time, and it can serve as a host system for further big data analysis operations. But in the past, it's also been a big consumer of physical resources.
The additions in Serengeti's third code release give Hadoop clusters operating characteristics they have lacked before. Among other things, a cluster hosting Hadoop may become a multi-tenant host, taking on other data management applications as well, such as an in-memory relational database or NoSQL systems like Cassandra and MongoDB. These uses put different demands on the cluster at different times, giving a heavily virtualized data management system higher utilization of physical resources than standalone data systems achieve.
"The main point is that rarely do I just see Hadoop. It's always a mix of different application types," wrote Richard McDougall, VMware principal engineer, in a lengthy post about recent advances in the Serengeti Project published on the VMware CTO office's blog Tuesday morning. McDougall aired Serengeti's third code release as big data users gathered at O'Reilly's Strata Conference, held jointly with Hadoop World, Oct. 23-25 at the New York Hilton.
Open source versions of Hadoop already exist from the Apache Software Foundation, with commercial versions added by Cloudera, Hortonworks, and Greenplum. But Hadoop in these forms runs on physical server clusters, and VMware knows it can broaden the market for virtualization if Hadoop can be deployed as a virtual system as well. Serengeti is not in competition with those versions of Hadoop. Instead, it seeks to enable all of them to run in virtual machines.
If Hadoop ran in virtual rather than physical clusters, it would reduce the number of servers dedicated solely to it. It would broaden access to its data management capabilities for data scientists in business and universities, as well as for IT managers, giving them "a common infrastructure on which to run many big data workloads," McDougall said in his blog post.
While the advantages are obvious, there are reasons why Hadoop seldom runs as a virtualized system today. To some extent, it reverses the usual problem of data center virtualization--getting one server to appear and perform as many. Instead, virtualized Hadoop needs many servers to perform as one combined system.
To do so, Hadoop must recognize when it's running on virtual servers and adjust its operations accordingly. The JobTracker server in a Hadoop cluster uses MapReduce to distribute tasks around the cluster, after consulting the central NameNode server to see where the data needed by a task is located. It sends the job to the node holding the data, or as close to that node as it can get, speeding the task's execution.
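This data-locality preference can be sketched in a few lines of Python. This is an illustrative sketch, not Hadoop's actual scheduler code; the function and parameter names are invented for the example. It shows the three placement tiers a JobTracker-style scheduler works through: node-local, rack-local, then remote.

```python
# Illustrative sketch (not Hadoop source): how a JobTracker-style scheduler
# prefers the node that already holds a task's data, falls back to a node
# in the same rack, and only then accepts a remote node.

def assign_task(task_blocks, block_locations, racks, free_nodes):
    """Pick the best node for a task, preferring data-local placement.

    task_blocks     -- HDFS block IDs the task will read
    block_locations -- block ID -> set of nodes storing a replica
    racks           -- node -> rack ID
    free_nodes      -- nodes with an open task slot, in scan order
    """
    # Collect every node that stores a replica of the task's data.
    local = set()
    for block in task_blocks:
        local |= block_locations.get(block, set())

    # 1. Node-local: a free node that already stores the data.
    for node in free_nodes:
        if node in local:
            return node, "node-local"

    # 2. Rack-local: a free node in the same rack as a replica.
    data_racks = {racks[n] for n in local if n in racks}
    for node in free_nodes:
        if racks.get(node) in data_racks:
            return node, "rack-local"

    # 3. Remote: any free node; the data must cross the network.
    for node in free_nodes:
        return node, "remote"
    return None, "no-slot"
```

The further down the list the scheduler falls, the more network traffic the task generates, which is why locality matters so much on physical clusters, and why virtualized Hadoop works to keep compute VMs near their data.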
Hadoop's tasks, including the final sorting of intermediate files, are executed by worker nodes that traditionally combine runtime execution logic (the TaskTracker) with data storage (the DataNode). By virtualizing Hadoop, the data-storage and job-execution functions can be launched in separate virtual machines. They can be sized independently, yet in many cases placed on the same physical server, keeping communication between them fast.
This separation also allows the creation of code that recognizes when more virtual servers need to be spun up, or when virtual servers are underutilized and some may be taken down. It also isolates one function from the other, so that a fault in one doesn't freeze both functions on a cluster node; a stalled function might be overcome by starting a replacement virtual machine.
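The scale-up/scale-down decision described above can be illustrated with a simple threshold rule. This is a minimal sketch of the general technique, not Serengeti's implementation; the thresholds, VM limits, and function name are all invented for the example.

```python
# Illustrative sketch (not Serengeti source): threshold logic an elastic
# controller could use to decide when to add or remove compute VMs.

def scale_decision(utilizations, min_vms=2, max_vms=16,
                   high=0.80, low=0.25):
    """Return +1 to add a compute VM, -1 to remove one, 0 to hold steady.

    utilizations -- per-VM CPU utilization samples, each in [0.0, 1.0]
    """
    n = len(utilizations)
    avg = sum(utilizations) / n
    if avg > high and n < max_vms:
        return +1   # cluster is busy: spin up another compute VM
    if avg < low and n > min_vms:
        return -1   # cluster is idle: power one compute VM down
    return 0        # within the band: leave the cluster alone
```

A real controller would also smooth the samples over time to avoid thrashing, but the core idea is the same: because compute VMs hold no HDFS data, they can be added or removed without rebalancing storage.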
"We are fully embracing virtual resources as the backing for Hadoop, by allowing Hadoop to hot add/remove Virtual Hadoop nodes on the fly," wrote McDougall in his blog post.
"This gives us the capability to rapidly grow or shrink a Virtual Hadoop cluster, based on the changing needs of the application and user mix," he added.
Hadoop may be run through a community-contributed user interface that Serengeti developers have incorporated into milestone three of the Serengeti code. The release also includes a data upload and download interface, easing the import and export of data from outside sources, McDougall said.