Data variety is such a key part of big data that it has spawned an entire computing movement: NoSQL. While the name suggests an all-or-nothing showdown, think of it as "not only SQL." The movement is about alternatives for when you don't have conventional structured data that fits neatly into the columns and rows of relational databases such as Greenplum, IBM DB2, Netezza, Microsoft SQL Server, MySQL, Oracle, or Teradata. NoSQL databases can handle semistructured or inconsistent, sparse data. That accounts for much of the data growth coming from sources such as Web log files used by Internet marketers, remote sensor data like that used in emerging smart-meter utility applications, and security log files used to detect and thwart hacking and identity theft.
Some companies are also processing unstructured information such as text comments from Facebook and Twitter, mining the data for customer-sentiment analysis.
Hadoop is a collection of open-source, distributed data-processing components for storing and managing large volumes of structured, unstructured, or semistructured data. Clickstream and social media applications are driving much of the demand, and of particular interest is MapReduce, a data-processing approach supported on Hadoop (as well as in a few other environments) that's ideal for processing big volumes of these relatively new data types. MapReduce breaks a big data problem into subproblems, distributes those onto hundreds or thousands of processing nodes on commodity hardware, then combines the results for an answer or a smaller data set that's easier to analyze.
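The divide-and-combine flow described above can be sketched in a few lines of Python. This is a single-process illustration of the MapReduce pattern only, with made-up sample records; a real Hadoop job spreads the map and reduce phases across many nodes and handles the shuffle step in the framework.

```python
from collections import defaultdict

def map_phase(records):
    # Map: break each input record into intermediate (key, value) pairs.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: combine each key's list of values into one final result.
    return {key: sum(values) for key, values in grouped.items()}

# Hypothetical log lines standing in for a big semistructured data set.
log_lines = ["error timeout", "error disk", "warn timeout"]
counts = reduce_phase(shuffle(map_phase(log_lines)))
# counts == {"error": 2, "timeout": 2, "disk": 1, "warn": 1}
```

The same three phases apply whether the job is counting words in tweets or aggregating clicks in Web logs; only the map and reduce functions change, which is what makes the pattern easy to distribute.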
Internet marketers and e-commerce players were the first to recognize the importance of clickstream data and social media sentiment, but it's now rare to find a company with a prominent brand that isn't paying close attention to online sales and marketing trends and trying to gauge the social media buzz around its products, categories, and company. Whether they're consumer products companies like Procter & Gamble, automakers like Ford and Toyota, or clothing manufacturers like Levi Strauss, they're analyzing where Internet users spend time, spotting which marketing messages draw their attention, and gauging their mood. They're mining this data to predict product demand, sense new-product acceptance, gauge competitive threats, and detect threats to brand reputations.
Hadoop runs on low-cost commodity hardware, and it scales up into the petabyte range at a fraction of the cost of commercial storage and data-processing alternatives. That has made it a staple at Internet leaders including AOL, eHarmony, eBay, Facebook, Twitter, and Netflix. But more conventional companies coping with big data, like JPMorgan Chase, are embracing the platform.
Data provider Infochimps relies on Hadoop to parse data from Facebook, Twitter, and other social sources and create new data sources. Infochimps' popular "Twitter Census: Trst Rank," for example, provides metrics on the influence of Twitter users. This helps companies with a Twitter presence gauge their followers' clout based on how many other Twitter users they interact with and how many people pay attention to them. This, in turn, helps these organizations know what their most influential customers are saying about their brands and products.
Why is Hadoop processing necessary? First, the data being studied is often semistructured or text-centric information, and second, it's really big. Infochimps has been collecting Twitter data since 2008, and the entire set includes nearly 7 billion tweets and more than 1 billion connections among users.
Hadoop is attracting attention from commercial vendors. In May, EMC Greenplum announced its own distributions of Hadoop software: one open source and one a commercially supported enterprise edition. And in September, EMC added a modular version of its Greenplum Data Computing Appliance that will let IT organizations run the Greenplum relational database and Hadoop on the same appliance, albeit with separate processing and storage capacity dedicated to each environment. This is a stab at making it easier to address the variety of information that's part of big data by supporting it all on a single computing platform.
Another vendor bridging the SQL and NoSQL worlds is Aster Data. Acquired last year by Teradata, Aster Data is best known for supporting MapReduce processing within its relational database. This makes MapReduce accessible to SQL-literate analysts so they can do pattern detection, graph analysis, and time-series analysis on clickstreams and social media data.
IBM has embraced Hadoop by way of InfoSphere BigInsights, an analytics platform based on Hadoop. The company has also exploited Hadoop in internal projects and development work, most notably in the development of its Jeopardy-playing Watson supercomputer.
Oracle and Microsoft are the latest vendors to jump on the Hadoop bandwagon. Oracle confirmed at Oracle OpenWorld in October that it will introduce a Hadoop software release and a related big data appliance, though it didn't say when. As recently as last year, company executives had told financial analysts that mainstream commercial customers weren't asking for unstructured-data-analysis capabilities.
Microsoft announced in October that it will introduce a beta Hadoop processing service through its Azure cloud by year end. And in 2012 it's promising an open-source-compatible software distribution that will run Hadoop on Windows servers.