News | 10/17/2011 10:51 AM

3 Big Data Challenges: Expert Advice

The "big" part of big data doesn't tell the whole story. Let's talk volume, variety, and velocity of data--and how you can help your business make sense of all three.

[Slideshow: 12 Top Big Data Analytics Players]
You don't need petabytes of information to play in the big data league. The low end of the threshold is more like 10 TB, and, in fact, "big" doesn't really tell the whole story. The many types of data and the speed at which data changes are, along with sheer volume, daunting challenges for businesses struggling to make sense of it all. Volume, variety, velocity--they're the hallmarks of the big data era we're now in.

Variety comes in the form of Web logs, wirelessly connected RFID sensors, unstructured textual information from social networks, and myriad other data types. Velocity compounds the challenge: fast-changing data drives demand for deep analytic insights delivered in hours, minutes, or, in extreme cases, seconds, instead of the weekly or monthly reports that once sufficed.

How are IT organizations coming to grips with data volume, variety, and velocity? Specialized databases and data warehouse appliances are part of the answer. Less heralded but also essential are information management tools and techniques for extracting, transforming, integrating, sorting, and manipulating data.

IT shops often break new ground with big data projects as new data sources emerge and they try unique ways of combining and putting them to use. Database and data management tools are evolving quickly to meet these needs, and some are blurring the line between row and column databases.

Even so, available products don't fill all the gaps companies encounter in managing big data. IT can't always turn to commercial products or established best practices to solve big data problems. But pioneers are proving resourceful. They're figuring out how and when to apply different tools--from database appliances to NoSQL frameworks and other emerging information management techniques. The goal is to cope with data volume, velocity, and variety not only to keep storage costs from getting out of control but, more importantly, to get better insights faster.

Big data used to be the exclusive domain of corporate giants--Wall Street banks searching for trading trends or retailers like Wal-Mart tracking shipments and sales through their supply chains. Now the challenge of quickly analyzing massive amounts of information is going mainstream, and many of the technologies used by early adopters remain relevant. In the early 1980s, for instance, Teradata pioneered massively parallel processing (MPP), an approach now offered by IBM Netezza, EMC Greenplum, and others. MPP architectures spread the job of querying lots of data across tens, hundreds, or thousands of compute nodes. Thanks to Moore's Law, processing capacity has increased exponentially over the years, as cost per node has plummeted.
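The underlying divide-and-combine idea is simple. Here's a minimal Python sketch (hypothetical sales data, with the standard multiprocessing module standing in for a cluster of database nodes--not any vendor's implementation) of how an aggregate query can be split across partitions and the partial results merged:

```python
# Minimal sketch of the shared-nothing, divide-and-combine pattern behind
# MPP query engines: data is partitioned across "nodes", each node computes
# a partial result over its own slice, and a coordinator merges the partials.
from multiprocessing import Pool

def partial_sum(partition):
    """Each 'node' scans only its own partition of the sales data."""
    return sum(row["amount"] for row in partition)

def parallel_total(partitions, workers=4):
    """The coordinator fans the work out and merges the partial sums."""
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, partitions))

if __name__ == "__main__":
    # Hypothetical sales rows, hash-partitioned across four "nodes".
    rows = [{"sku": i % 100, "amount": i * 0.5} for i in range(100_000)]
    partitions = [rows[i::4] for i in range(4)]
    print(parallel_total(partitions))
```

Adding nodes shortens each partition's scan, which is why the approach scales out with data volume.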

A second longstanding technique for analyzing big data is to query only selected attributes using a column-store database. Sybase IQ became the first commercially successful column-oriented database following its launch in 1996. Newcomers like the HP Vertica, Infobright, and ParAccel databases now exploit the same capability of letting you query only the columnar data attributes that are relevant--like all the ZIP codes, product SKUs, and transaction dates in the database. That could tell you what sold where during the last month without wading through all the other data that's stored row by row, such as customer name, address, and account number. Less data means faster results.
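A toy sketch of the difference (hypothetical fields and values, in plain Python rather than any particular database): the same records laid out row by row and column by column, with a "what sold where" query reading only the columns it touches.

```python
# Toy contrast between row and column layouts. The row layout stores every
# attribute of a record together; the column layout pivots the same data
# into one array per attribute, so a query reads just the columns it needs.

row_store = [
    {"name": "A. Smith", "address": "...", "account": 1001,
     "zip": "60601", "sku": "X-42", "date": "2011-09-30", "amount": 19.99},
    {"name": "B. Jones", "address": "...", "account": 1002,
     "zip": "60601", "sku": "Y-07", "date": "2011-09-30", "amount": 5.49},
    {"name": "C. Wong", "address": "...", "account": 1003,
     "zip": "94105", "sku": "X-42", "date": "2011-09-29", "amount": 19.99},
]

column_store = {
    col: [row[col] for row in row_store]
    for col in ("zip", "sku", "date", "amount")
}

# "What sold where?" -- the columnar version reads only two arrays
# (zip and amount) instead of every field of every row.
sales_by_zip = {}
for z, amt in zip(column_store["zip"], column_store["amount"]):
    sales_by_zip[z] = sales_by_zip.get(z, 0.0) + amt
print(sales_by_zip)   # {'60601': 25.48, '94105': 19.99}
```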

As an added bonus, because the data in columns is consistent, the compression engines built into column-store databases do a great job--one ZIP code, date, or SKU number looks like any other. That helps column stores achieve 30-to-1 or 40-to-1 compression, depending on the data, while row-store databases (EMC Greenplum, IBM Netezza, Teradata) average 4-to-1 compression. Higher compression means lower storage costs.
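As a rough illustration of why that happens (toy data, with a general-purpose compressor standing in for a database's built-in compression engine, so the exact ratios won't match the vendor figures above), compare compressing a single sorted column of repeated ZIP codes with compressing the same values interleaved with other fields in rows:

```python
# Values within one column look alike, so a compressor finds far more
# repetition in a sorted column of ZIP codes than in row data where the
# ZIPs are interleaved with mostly unique customer identifiers.
import zlib, json, random

zips = [random.choice(["60601", "94105", "10001"]) for _ in range(10_000)]
names = [f"customer-{i}" for i in range(10_000)]

column_bytes = ",".join(sorted(zips)).encode()            # one column, sorted
row_bytes = json.dumps(list(zip(names, zips))).encode()   # rows: name + zip together

print("column ratio:", len(column_bytes) / len(zlib.compress(column_bytes)))
print("row ratio:   ", len(row_bytes) / len(zlib.compress(row_bytes)))
```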

One big change, under way for several years, is that the boundaries between MPP, row-store, and column-store databases are blurring. In 2005, Vertica (acquired this year by HP) and ParAccel introduced products that blend column-store databases with support for MPP, bringing two scalability and query-speeding technologies to bear. And in 2008, Oracle launched its Exadata appliance, which introduced Hybrid Columnar Compression to its row-store database. The feature doesn't support selective columnar querying, but, as the name suggests, it does offer some of the compression benefits of a column-store database, squeezing data at a 10-to-1 ratio, on average.
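To make those ratios concrete, here's a back-of-the-envelope calculation for a hypothetical 10 TB of raw data, using the average compression figures cited above (real results vary widely with the data):

```python
# Storage footprint of a hypothetical 10 TB of raw data at the average
# compression ratios cited in the article. Illustrative arithmetic only.
raw_tb = 10.0
ratios = {
    "row store (~4:1)": 4,
    "hybrid columnar (~10:1)": 10,
    "column store (~30:1)": 30,
}
for label, ratio in ratios.items():
    print(f"{label}: {raw_tb / ratio:.2f} TB on disk")
```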

In the most recent category-blurring development, Teradata said in late September that its upcoming Teradata 14 database will support both row-store and column-oriented approaches. IT teams will have to decide which data they'll organize in which way, with row-store likely prevailing for all-purpose data warehouse use, and column-store for targeted, data-mart-like analyses. EMC Greenplum and Aster Data, now owned by Teradata, have also recently blended row-store and column-store capabilities. The combination promises both the fast, selective-querying capabilities and compression advantages of columnar databases and the versatility of row-store databases, which can query any number of attributes and are usually the choice for enterprise data warehouses.

Comments
Doug Laney, User Rank: Apprentice
11/29/2011 | 3:56:41 AM
re: 3 Big Data Challenges: Expert Advice
Great to see the 3Vs framework for big data catching on. Anyone interested in the original 2001 Gartner (then Meta Group) research paper I published positing the 3Vs, entitled "Three Dimensional Data Challenges," feel free to reach me. -Doug Laney, VP Research, Gartner
HM, User Rank: Strategist
10/19/2011 | 12:38:57 PM
re: 3 Big Data Challenges: Expert Advice
I noticed that you haven't mentioned the HPCC offering from LexisNexis Risk Solutions. Unlike Hadoop distributions which have only been available since 2009, HPCC is a mature platform, and provides for a data delivery engine together with a data transformation and linking system equivalent to Hadoop. The main advantages over other alternatives are the real-time delivery of data queries and the extremely powerful ECL language programming model.
Check them out at: www.hpccsystems.com
PulpTechie, User Rank: Apprentice
10/18/2011 | 3:13:05 PM
re: 3 Big Data Challenges: Expert Advice
Interesting read. Thanks for sharing.