HBase still has a few 'rough edges,' but that hasn't kept this NoSQL database from becoming one of the hottest pockets within the white-hot Hadoop market.
The Apache Hadoop Framework has many components, including the MapReduce distributed data-processing model, Hadoop Distributed File System (HDFS), Pig data-flow language, and Hive distributed data warehouse. But the part stealing much of the attention these days--and arguably maturing most rapidly--is HBase.
HBase is Hadoop's NoSQL database. Patterned after Google BigTable, HBase is designed to provide fast, tabular access to the high-scale data stored on HDFS. High-profile companies including Facebook and StumbleUpon use HBase, and they're part of a fast-growing community of contributors who are adding enterprise-grade reliability and performance upgrades at a steady clip.
Why is HBase so important?
First and foremost, it's part of the Hadoop Framework, which is at the epicenter of a movement in which companies large and small are making use of unprecedented volumes and varieties of information to make data-driven decisions. Hadoop not only handles data measured in the tens or hundreds of terabytes or more, it can process textual information like social media streams, complex data like clickstreams and log files, and sparse data with inconsistent formatting. Most importantly, it does all this at low cost, powered by open source software running on highly scalable clusters of inexpensive, commodity x86 servers.
The core data-storage layer within Hadoop is HDFS, a distributed file system that supports MapReduce processing, the approach that Hadoop practitioners invariably use to boil down big data sets into the specific information they're after. For operations other than MapReduce, HDFS isn't exactly easy to work with. That's where HBase comes in.
"Anybody who wants to keep data within an HDFS environment and wants to do anything other than brute-force reading of the entire file system [with MapReduce] needs to look at HBase," explains Gartner analyst Merv Adrian. "If you need random access, you have to have HBase."
HBase offers two broad use cases. First, it gives developers database-style access to Hadoop-scale storage, which means they can quickly read from or write to specific subsets of data without having to wade through the entire data store. Most users and data-driven applications are used to working with the tables, columns, and rows of a database, and that's what HBase provides.
Second, HBase provides a transactional platform for running high-scale, real-time applications. In this role, HBase is an ACID-compliant database (meeting standards for Atomicity, Consistency, Isolation, and Durability) that can run transactional applications. That's what conventional relational databases like Oracle, IBM DB2, Microsoft SQL Server, and MySQL are mostly used for, but HBase can handle the incredible volume, variety, and complexity of data encountered on the Hadoop platform. Like other NoSQL databases, it doesn't require a fixed schema, so you can quickly add new data even if it doesn't conform to a predefined model.
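The two points above--keyed random access and schema flexibility--follow from HBase's data model, which is often described as a sparse, sorted map: row key, to column family and qualifier, to timestamped value. As a rough illustration only, here is a toy Python sketch of that map. This is not the real HBase client API (which runs against a live cluster); the class and method names are hypothetical, chosen just to show the shape of the model:

```python
import time

# Toy model of an HBase-style table: a map from row key to
# {"family:qualifier": [(timestamp, value), ...]}. Real HBase stores
# this on HDFS across region servers; this sketch only shows the shape.
class ToyHBaseTable:
    def __init__(self):
        self.rows = {}  # row key -> cell map

    def put(self, row_key, column, value):
        # No fixed schema: any "family:qualifier" can appear on any row,
        # so new data that doesn't fit a predefined model is just written.
        cells = self.rows.setdefault(row_key, {})
        cells.setdefault(column, []).append((time.time(), value))

    def get(self, row_key):
        # Random access by row key -- no scan of the whole table needed.
        return {col: versions[-1][1]  # latest version of each cell
                for col, versions in self.rows.get(row_key, {}).items()}

table = ToyHBaseTable()
table.put("user42", "info:name", "Ada")
table.put("user42", "info:email", "ada@example.com")
# A new column appears on one row only -- no ALTER TABLE required.
table.put("user99", "metrics:clicks", "17")

print(table.get("user42"))  # latest cell values for that one row
```

In real HBase the same two operations are `Put` and `Get` on a `Table`, rows are kept sorted by key and split into regions across servers, and each cell keeps multiple timestamped versions, which is what the inner list stands in for here.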
Life sciences research firm NextBio uses Hadoop and HBase to help big pharmaceutical companies conduct genomic research. The company embraced Hadoop in 2009 to make the sheer scale of genomic data analysis more affordable. The company's core 100-node Hadoop cluster, which has processed as much as 100 terabytes of data, is used to compare data from drug studies to publicly available genomics data. Given that there are tens of thousands of such studies and 3.2 billion base pairs behind each of the hundreds of genomes that NextBio studies, it's clearly a big-data challenge.
NextBio uses MapReduce processing to handle the correlation work, but until recently it stored the results--now exceeding 30 billion rows of information--exclusively on a MySQL database. This conventional relational database offers fast storage and retrieval, but NextBio knew it was reaching the limits of what a conventional database like MySQL could handle in terms of scale--at least without lots of database administrative work and high infrastructure cost.