Commentary by Doug Henschen, 4/26/2011 09:18 AM

Databases Alone Can't Conquer Big Data Problems

Data-integration and data-transformation steps taken before loading the database are sometimes the best antidote to high-volume storage and scaling challenges.

When it comes to making sense of big data, the glory is hogged by database platforms such as EMC Greenplum, IBM Netezza, Oracle Exadata, and Teradata.

But sometimes simple data-processing work done outside the database can help you scale and eliminate hours or even days of processing on expensive database platforms.

Marketing data provider comScore, for example, uses data-sorting techniques to improve compression and aggregate data before it even gets to the warehouse. As a result it's saving on storage, reducing processing times, and, most importantly, speeding information to its data-hungry customers.
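To make the second of those two techniques concrete, here's a minimal Python sketch of pre-aggregation. It has nothing to do with comScore's actual tooling; the row layout and field names are invented for illustration. The idea is simply to collapse raw per-event rows into one summary row per key before anything touches the warehouse:

```python
from collections import Counter

# Hypothetical raw clickstream events: (user_id, site, date).
# In a real pipeline these would stream in from log files.
raw_events = [
    ("u1", "example.com",  "2011-04-26"),
    ("u1", "example.com",  "2011-04-26"),
    ("u2", "example.com",  "2011-04-26"),
    ("u1", "news.example", "2011-04-26"),
]

# Collapse to one row per (user, site, day) with a visit count, so the
# warehouse loads one summary row instead of every raw event.
aggregated = Counter(raw_events)
for (user, site, day), visits in sorted(aggregated.items()):
    print(user, site, day, visits)
```

Four raw rows become three loaded rows in this toy case; at billions of events a day, the same collapse is what keeps a warehouse measured in hundreds of terabytes rather than petabytes.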

As I detailed late last year, comScore has been a leading source of online marketing data for more than a decade. As such it was a pioneer of big-data computing. At 150 terabytes, the company's latest Sybase IQ warehousing platform sounds big, but it would have to be many times larger if not for the company's skill at compressing and aggregating data.

The big-data leagues start in the tens of terabytes and run into the petabytes. At that scale it becomes essential to add the power of massively parallel processing (MPP), used by most of the leading platforms, or the compression advantages of column-oriented databases (such as Sybase IQ and HP Vertica). But organizations playing at this scale also have to manage big data before it ever gets into the database.

To give you some idea, comScore tracks the daily Internet surfing (and mobile-access) habits of about 2 million consumer panelists who have registered and supplied their demographic and psychographic profiles. The company also takes a daily census of activity across the Internet so it can report on and compare Internet-wide behavior to that of targeted segments tracked through the panel data. As a result, comScore collects about 2 billion new rows of panel data and more than 18 billion new rows of census data each day.

That means more than 20 billion rows of new data are loaded into the data warehouse each day. Of course, almost every organization applies compression to reduce storage demands. But comScore also uses Syncsort DMExpress data-integration software to sort the data into alphanumeric order before it's loaded into the warehouse, which improves compression ratios.

Where 10 bytes of unsorted data can be compressed to three or four bytes, says Michael Brown, comScore's chief technology officer, 10 bytes of sorted data can typically be crunched down to one byte. "That makes a huge difference in the volume of data we have to store, and it streamlines our processes and reduces our capital costs," Brown says.
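Brown's point is easy to reproduce in miniature. The following Python sketch (again, not comScore's pipeline; DMExpress is a commercial sorting tool, and these synthetic rows are invented) compresses the same data unsorted and then sorted. Sorting clusters repeated values together, so the compressor finds longer matches:

```python
import random
import zlib

# Generate synthetic "panel data" rows: user, site, and date fields with
# heavily repeated values, in random order (the unsorted case).
random.seed(42)
rows = [
    "user{:04d},site{:03d},2011-04-{:02d}".format(
        random.randint(1, 500), random.randint(1, 50), random.randint(1, 30)
    )
    for _ in range(100_000)
]

unsorted_blob = "\n".join(rows).encode()
sorted_blob = "\n".join(sorted(rows)).encode()

# Compress both with identical settings; only the row order differs.
print("unsorted:", len(zlib.compress(unsorted_blob, 6)), "bytes")
print("sorted:  ", len(zlib.compress(sorted_blob, 6)), "bytes")
```

The exact ratio depends on the data and the compressor, but the sorted version reliably comes out markedly smaller: the same effect, at toy scale, that lets comScore crunch 10 bytes down to one instead of three or four.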
