HBase: Hadoop's Next Big Data Chapter - InformationWeek


Doug Henschen

HBase still has a few 'rough edges,' but that hasn't kept this NoSQL database from becoming one of the hottest pockets within the white-hot Hadoop market.

The Apache Hadoop framework has many components, including the MapReduce distributed data-processing model, the Hadoop Distributed File System (HDFS), the Pig data-flow language, and the Hive distributed data warehouse. But the part stealing much of the attention these days--and arguably maturing most rapidly--is HBase.

HBase is Hadoop's NoSQL database. Patterned after Google BigTable, HBase is designed to provide fast, tabular access to the high-scale data stored on HDFS. High-profile companies including Facebook and StumbleUpon use HBase, and they're part of a fast-growing community of contributors who are adding enterprise-grade reliability and performance upgrades at a steady clip.

Why is HBase so important?

First and foremost, it's part of the Hadoop framework, which is at the epicenter of a movement in which companies large and small are making use of unprecedented volumes and varieties of information to make data-driven decisions. Hadoop not only handles data measured by the tens or hundreds of terabytes or more, it can process textual information like social media streams, complex data like clickstreams and log files, and sparse data with inconsistent formatting. Most importantly, it does all this at low cost, powered by open source software running on highly scalable clusters of inexpensive commodity x86 servers.

[ Want more on NoSQL databases? Read Amazon DynamoDB: Big Data's Big Cloud Moment. ]

The core data-storage layer within Hadoop is HDFS, a distributed file system that supports MapReduce processing, the approach that Hadoop practitioners invariably use to boil down big data sets into the specific information they're after. For operations other than MapReduce, HDFS isn't exactly easy to work with. That's where HBase comes in. "Anybody who wants to keep data within an HDFS environment and wants to do anything other than brute-force reading of the entire file system [with MapReduce] needs to look at HBase," explains Gartner analyst Merv Adrian. "If you need random access, you have to have HBase."

HBase offers two broad use cases. First, it gives developers database-style access to Hadoop-scale storage, which means they can quickly read from or write to specific subsets of data without having to wade through the entire data store. Most users and data-driven applications are used to working with the tables, columns, and rows of a database, and that's what HBase provides.
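The difference between keyed access and a brute-force pass can be sketched in a few lines. This is a toy in-memory illustration in plain Python, not the HBase client API; the table name, row keys, and column names are invented for the example:

```python
# Illustrative sketch only: a toy keyed table mimicking the access pattern
# HBase layers on top of HDFS. Row keys map to column values, so a read
# can touch one row instead of scanning the whole data set.
table = {
    "user#1001": {"info:name": "alice", "info:city": "Austin"},
    "user#1002": {"info:name": "bob", "info:city": "Boston"},
    "user#1003": {"info:name": "carol", "info:city": "Chicago"},
}

def get(row_key):
    """Random access: fetch a single row directly by its key."""
    return table.get(row_key, {})

def full_scan():
    """The MapReduce-style alternative: touch every row in key order."""
    return [(key, row) for key, row in sorted(table.items())]

print(get("user#1002")["info:name"])  # one-row lookup
print(len(full_scan()))               # brute-force pass over all rows
```

In real HBase the same distinction shows up as a Get on a row key versus a full-table scan or MapReduce job over the underlying files.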

Second, HBase provides a transactional platform for running high-scale, real-time applications. In this role, HBase is an ACID-compliant database (meeting standards for Atomicity, Consistency, Isolation, and Durability) that can run transactional applications. That's what conventional relational databases like Oracle, IBM DB2, Microsoft SQL Server, and MySQL are mostly used for, but HBase can handle the incredible volume, variety, and complexity of data encountered on the Hadoop platform. Like other NoSQL databases, it doesn't require a fixed schema, so you can quickly add new data even if it doesn't conform to a predefined model.
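The "no fixed schema" point is concrete: HBase columns are just family:qualifier keys stored per row, so different rows in one table can carry different columns, and a new column can be written without any schema change. A minimal sketch of that model, again in plain Python rather than the HBase API, with invented row and column names:

```python
# Illustrative sketch only (not the HBase client API): each row holds its
# own map of family:qualifier -> value, so rows need not share columns
# and new columns appear simply by writing them.
from collections import defaultdict

events = defaultdict(dict)  # row key -> {family:qualifier -> value}

def put(row_key, column, value):
    """Write a cell; there is no predefined schema to update first."""
    events[row_key][column] = value

put("evt#1", "core:type", "click")
put("evt#2", "core:type", "pageview")
put("evt#2", "ext:referrer", "example.com")  # new column, this row only

# Sparse by design: evt#1 simply has no 'ext:referrer' cell.
print(sorted(events["evt#2"]))            # ['core:type', 'ext:referrer']
print("ext:referrer" in events["evt#1"])  # False
```

Because absent cells take no storage, this layout suits exactly the sparse, inconsistently formatted data the article describes.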

Life sciences research firm NextBio uses Hadoop and HBase to help big pharmaceutical companies conduct genomic research. The company embraced Hadoop in 2009 to make the sheer scale of genomic data analysis more affordable. The company's core 100-node Hadoop cluster, which has processed as much as 100 terabytes of data, is used to compare data from drug studies to publicly available genomics data. Given that there are tens of thousands of such studies and 3.2 billion base pairs behind each of the hundreds of genomes that NextBio studies, it's clearly a big-data challenge.

NextBio uses MapReduce processing to handle the correlation work, but until recently it stored the results--now exceeding 30 billion rows of information--exclusively on a MySQL database. This conventional relational database offers fast storage and retrieval, but NextBio knew it was reaching the limits of what a conventional database like MySQL could handle in terms of scale--at least without lots of database administrative work and high infrastructure cost.
