May 21, 2010
A new generation of low-cost, high-performance database software is rapidly emerging to challenge SQL's dominance in distributed processing and Big Data applications. Some companies have already traded SQL's rich functionality for these new options that let them create, work with, and manage large data sets.
A big reason for this movement, dubbed NoSQL, is that different implementations of Web, enterprise, and cloud computing applications have different requirements of their databases. Not every app requires rigid data consistency, for example.
Also, when an application uses data distributed across hundreds or even thousands of servers, simple economics points to using no-cost server software as opposed to paying per-processor license fees. Once freed from license fees, you can scale horizontally with commodity hardware or opt for a cloud computing service and avoid the capital expenses altogether. Conventional commercial databases didn't always make that approach practical.
Challenges to SQL's hegemony are coming from specialized products built from the ground up for large-scale analytics and document storage, as well as for building operational systems that require high availability more than consistency when partitioning data.
Applications such as online transaction processing, business intelligence, customer relationship management, document processing, and social networking don't have identical needs for data, query, or index types, nor do they have equivalent requirements for consistency, scalability, and security.
For example, BI applications run analytical and decision-support queries that can exploit bitmap indexes for operations with gigabyte- or terabyte-sized databases. Web analytics, drug discovery, financial modeling, and similar applications look to distributed systems for efficiently processing gigabyte- to terabyte-sized data sets. OLTP puts a premium on reliability. And massive-scale Web applications such as Facebook and Amazon.com have adopted BASE (basically available, soft state, eventually consistent) properties over the more familiar ACID (atomicity, consistency, isolation, durability) ones to serve their user communities of millions.
These differences are one reason nonrelational NoSQL data stores, document-centric databases, and column stores have gained traction. They're specialized tools, in contrast with the Swiss Army knife functionality of SQL platforms.
System architects should consider the specialized features and functions an app needs in choosing a database. NoSQL databases can be built specifically for functions such as BI, OLTP, CRM, social networks, and data warehousing, and include features such as scalability, partitioning, security, and elasticity.
Scalability And High Availability
For cloud computing and high-volume Web sites such as eBay, Amazon, Twitter, and Facebook, scalability and high availability are essential. In fact, they're the reason distributed databases have relaxed consistency requirements.
Operational systems in high-availability environments must survive software, server, and network segment failures, and provide scalability despite unpredictable surges in demand for computing resources. One approach to building such systems is to use distributed databases with a shared-nothing architecture and horizontal partitioning. Elasticity and sharding (partitioning)--both NoSQL features--are solutions for scaling out horizontally to provide availability and for processing Big Data.
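Hash-based sharding, the routing technique behind such horizontal partitioning, can be sketched in a few lines of Python. The node names and keys below are illustrative, not taken from any particular product:

```python
import hashlib

# Illustrative shared-nothing cluster: each shard would live on its own server.
NODES = ["node-a", "node-b", "node-c", "node-d"]

def shard_for(key: str, nodes=NODES) -> str:
    """Route a key to a shard by hashing it, so no central index is needed."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Every client computes the same placement independently.
placement = {user: shard_for(user) for user in ["alice", "bob", "carol"]}
```

Production systems typically refine this with consistent hashing, so that adding or removing a node remaps only a small fraction of keys rather than nearly all of them.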
A variety of data stores are gaining popularity for creating applications for scalable Web sites and elastic environments such as the private or public cloud. Distributed key-value stores are great when you don't need SQL rule enforcement, strong consistency, complex queries, integrated queuing, or the ability to operate with operational databases that exceed available RAM.
New low-latency data stores provide scalability for applications that don't require rich query and analytics capabilities. Amazon developed SimpleDB and Dynamo, the database used for Amazon S3, which as of March was hosting 102 billion objects, while Google developed Bigtable. Open source options include Cassandra, Hypertable, MongoDB, Project Voldemort, Redis, and Tokyo Tyrant.
Google developed Bigtable to distribute data across thousands of servers and scale to petabyte-sized data sets. A variety of Google applications use it, including Web indexing, Google Earth, Google Maps, Blogger, YouTube, and Gmail. The YouTube collection of 100 million videos requires 600 TB of storage. Bigtable is proprietary, but the data model exists in open source implementations, including Hypertable, Cassandra, and HBase. Bigtable can be used as input or output to MapReduce, which enables distributed processing of files or databases using mapping and reduction functions.
Dynamo was created to provide a key-value data store designed for high availability, permitting updates to survive server failures and network outages. Amazon subsequently built SimpleDB as a key-value store available for Amazon Web Services customers. SimpleDB, which is in beta, is restricted to items having no more than 256 attribute name-value pairs, domains having no more than 10 GB, and databases that are no more than 1 TB. Amazon reports consistency is usually attained across all copies of the data within a second. SimpleDB uses a SQL-like query language.
Project Voldemort, an open source clone of Amazon Dynamo, is a key-value data store that supports versioning, eventual consistency (where reads may briefly return stale data in exchange for scalability), and automatic partitioning and replication. Keys and values can be complex objects such as maps or lists. Project Voldemort supports offline building of distributed data stores. LinkedIn developers created it, and sites such as Lookery have used it.
Cassandra integrates the Bigtable data model with the Dynamo distributed design. It offers eventual consistency, not the rigid consistency that e-commerce transactions and stock trading require. Instead of data stored in row-major or column-major sequence, Cassandra uses the ColumnFamily order inspired by Bigtable.
Cassandra is geographically distributed across multiple data centers, such as Amazon EC2 availability zones. Bulk loading can be done with Hadoop.
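The ColumnFamily model can be pictured as a nested map: column family, then row key, then column name, then value. A rough Python sketch, with an invented "Users" family and invented columns for illustration:

```python
# Rough sketch of the Bigtable/Cassandra data model as nested dicts:
# column family -> row key -> column name -> value.
keyspace = {
    "Users": {  # column family, loosely analogous to a table
        "alice": {"email": "alice@example.com", "city": "Austin"},
        "bob": {"email": "bob@example.com"},  # rows need not share columns
    }
}

def get_column(cf, row_key, column):
    """Look up a single column value, as a column-family client would."""
    return keyspace.get(cf, {}).get(row_key, {}).get(column)
```

Unlike a relational table, each row carries only the columns it actually uses, which suits sparse, write-heavy data.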
Cassandra provides availability and scalability for a number of well-known sites, including the huge Twitter and Facebook user communities. When Twitter's user numbers took off, it migrated from MySQL to a combination of MySQL/memcached plus 45 nodes running Cassandra. That mixed environment now handles 50 million tweets per day. Facebook adds about 60 million photos per week using Cassandra. For Digg, Cassandra manages about 3 TB of data.
Digg, for one, made a highly publicized move from MySQL to Cassandra. Digg's primary motivation for using Cassandra was "the increasing difficulty of building a high-performance, write-intensive application on a data set that is growing quickly, with no end in sight," says John Quinn, Digg's VP of engineering. Growth forced Digg into horizontal and vertical partitioning strategies that eliminated most of the value of a relational database, while still incurring all the overhead, Quinn says.
"Our system grows rapidly and requires us to provide performance and redundancy with multiple data centers and to add capacity or replace failed nodes with no downtime," he adds. As for data consistency, Digg's engineers can implement application-level controls far more efficiently in Cassandra than MySQL provides generically, Quinn says.
Tokyo Tyrant is an open source database server, with a companion full-text search engine, that has a following in the NoSQL community. It's a key-value database with hash and B-tree index structures, capable of inserting 1 million records in 0.4 seconds and executing 58,000 queries per second. It supports asynchronous replication and transaction processing with ACID properties and write-ahead logging. There are bindings for multiple programming languages, including Perl, Java, Ruby, and PHP. Production deployments include Scribd and Mixi, the Japanese equivalent of Facebook. LightCloud turns Tokyo Tyrant into a horizontally scalable distributed database with the addition of a universal hashing layer. Social journal Plurk uses the LightCloud Tokyo Tyrant offering.
Document Database Stores
CouchDB and MongoDB are representative of the JSON class of document databases; a larger number of products store documents encoded as XML.
When it comes to distributed processing of massive data sets, Hadoop MapReduce has become the red-hot technology du jour. Researchers at Yahoo, for instance, used 3,800 nodes with it to sort a petabyte of data in 16.25 hours.
Google developed and recently patented MapReduce. The map function transforms input records into intermediate key-value pairs, which the reduce function then aggregates into a list of values per key.
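The two phases can be illustrated with a toy word count in plain Python, with no distribution; in Hadoop, the same map and reduce functions would run in parallel across many nodes:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for each word in each document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce: sum the values for each key, yielding word counts."""
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

counts = reduce_phase(map_phase(["to be or not to be"]))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

The framework's real work lies between the two phases: shuffling intermediate pairs across the network so all values for a key land on the same reducer.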
The Apache Hadoop Project includes the Hadoop Distributed File System (HDFS), MapReduce, the HBase database, the Pig analysis language, the Hive query and analysis tool, and other software. HBase is a distributed column store, modeled after Google Bigtable, that can serve as input or output for MapReduce.
HBase is one of several column stores competing in the analytics and business intelligence market. Storing tables in column-major order provides substantial performance improvements over row-major stores. Benefits such as improved locality and cache behavior speed up retrieval-oriented queries, but insert performance is poor. Other column stores include Sybase IQ, Vertica, and C-Store, an open source collaboration among several universities.
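The performance difference comes down to layout: a row store keeps each record's fields together, while a column store keeps each attribute's values together, so scanning one attribute touches only that attribute's data. A schematic comparison with made-up sales records:

```python
# The same three records in two layouts.
rows = [
    {"id": 1, "region": "east", "sales": 100},
    {"id": 2, "region": "west", "sales": 250},
    {"id": 3, "region": "east", "sales": 175},
]

# Column-major: one contiguous array per attribute.
columns = {
    "id": [1, 2, 3],
    "region": ["east", "west", "east"],
    "sales": [100, 250, 175],
}

# An aggregate over one attribute reads a single array in the column store...
total_sales = sum(columns["sales"])

# ...but must touch every full record in the row store.
total_sales_rows = sum(r["sales"] for r in rows)
```

The flip side is that inserting one record in the column layout means appending to every attribute array, which is why write-heavy workloads favor row stores.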
Increased interest in semantic searching and Linked Data has brought RDF triple stores into the spotlight. These offerings include AllegroGraph, Bigdata, Garlik, Jena, Ontotext BigOWLIM, OpenLink Virtuoso, Oracle 11g, and Sesame. Several have been deployed on Amazon EC2 to exploit the distributed processing power of the cloud. Raytheon BBN researchers have also used Hadoop MapReduce to create a distributed RDF store that supports SPARQL query processing.
Restrictions And Best Practices
To ensure durability and data integrity, SQL databases provide logging and data replication. NoSQL offerings need a similar safety net. Cassandra, for example, supports transaction logging and automatic replication. Tokyo Cabinet and HBase support write-ahead logging. Tokyo Cabinet and CouchDB support master-master replication, whereas MongoDB supports master-slave replication and replica pairs.
Architects using document-oriented databases must deal with how to store different document types and whether to have a separate database for each one. Alternatives to separate databases include using an attribute to specify type or using separate collections.
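The type-attribute approach is simple to sketch: every document carries a field identifying its kind, and queries filter on it. The field name and sample documents below are illustrative:

```python
# One collection holding heterogeneous documents, distinguished by "type".
documents = [
    {"type": "invoice", "number": 1001, "total": 250.0},
    {"type": "contact", "name": "Ada", "email": "ada@example.com"},
    {"type": "invoice", "number": 1002, "total": 75.5},
]

def find_by_type(docs, doc_type):
    """Emulate a per-type query against a mixed collection."""
    return [d for d in docs if d.get("type") == doc_type]

invoices = find_by_type(documents, "invoice")
```

The trade-off versus separate databases or collections is that a mixed collection simplifies cross-type queries but makes per-type indexing and access control coarser.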
Because the new generation of data stores is intended to address scalability and availability needs, certain restrictions apply for maximum efficiency. With Amazon SimpleDB, for example, the time limit for queries is five seconds. If a query takes longer, SimpleDB returns partial results, and the application must make additional calls to obtain complete results. SimpleDB restricts a query result to a maximum of 250 items, whereas Google recently lifted the AppEngine data store query result limit of a thousand items.
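Handling partial results means looping on a continuation token until the store signals the result set is exhausted. The paged backend below is hypothetical, standing in for a mechanism like SimpleDB's next-token response:

```python
def fetch_all(query_fn, query):
    """Collect complete results from a store that returns pages plus a
    continuation token (None when the result set is exhausted)."""
    results, token = [], None
    while True:
        page, token = query_fn(query, token)
        results.extend(page)
        if token is None:
            return results

# Hypothetical paged backend for demonstration: three items per call.
DATA = list(range(10))

def fake_query(query, token):
    start = token or 0
    page = DATA[start:start + 3]
    next_token = start + 3 if start + 3 < len(DATA) else None
    return page, next_token

items = fetch_all(fake_query, "select * from demo")
# items == [0, 1, 2, ..., 9], assembled from four partial responses
```

Applications that forget this loop silently work on truncated data, so it belongs in a shared client wrapper rather than scattered through query code.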
In horizontally partitioned systems, queries that require cross-shard joins are expensive, so the design and algorithms for partitioning require skill and knowledge of data-usage patterns. When complex queries such as aggregation are required, NoSQL operational databases aren't typically a good fit, but they can provide source data for separate solutions for analytics. Organizations using a key-value datastore sometimes need the indexing and query capabilities of SQL. They can turn to other software that supports indexing and queries, such as Apache Lucene. Regardless of whether your organization is using SQL or NoSQL databases, it's still a good idea to use version control and separate databases for testing and production.
For all the areas that NoSQL options address, we're still left with the question of which database software to adopt. The answer depends on fundamental issues: How much and what types of data will you store? Will it be used for complex queries? How many concurrent users are you supporting? And will your database scale as it takes on more users and data? SQL or NoSQL, this is the place to start.
Ken North is an author, consultant, industry analyst, and database specialist. Write to us at [email protected].