ComScore has used DMExpress since 2000, when processing power and storage were still quite costly. The product also supports high-volume extract, transform and load (ETL) work, but these days it's marketed as "data integration acceleration software," designed to be used in conjunction with more popular integration suites such as Informatica PowerCenter and IBM InfoSphere. ComScore uses DMExpress only for sorting, filtering and aggregation, and it relies on database-native capabilities, rather than yet another ETL package, for data loading.
In another example of doing the big-data heavy lifting before data enters the database, comScore uses DMExpress to aggregate the thousands of new records collected from each of its two million panelists each week. A first step is to sort the sites visited by URL, so a processing-intensive comScore taxonomy used to categorize Web sites only has to be called when the URL changes.
Instead of classifying the 40 sites a panelist might have visited one by one in the order they were visited, the visits are grouped by site: say, three distinct sites, with 20 visits to Facebook, 12 to Gmail, and eight to The New York Times listed all in a row. "That saves a lot of CPU time and a lot of effort," Brown says.
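The idea behind that sort-first step can be sketched in a few lines. This is an illustrative Python sketch, not comScore's actual pipeline: the record fields, the `classify` function, and the tiny in-memory taxonomy are all stand-ins for the processing-intensive taxonomy the article describes.

```python
from itertools import groupby

# Hypothetical panel records: (url, visit_date). In the real system these
# would be thousands of records per panelist per week.
visits = [
    ("gmail.com", "2024-01-05"),
    ("facebook.com", "2024-01-05"),
    ("nytimes.com", "2024-01-06"),
    ("facebook.com", "2024-01-06"),
    ("gmail.com", "2024-01-07"),
]

call_count = 0  # track how often the "expensive" classifier runs

def classify(url):
    """Stand-in for the processing-intensive taxonomy lookup."""
    global call_count
    call_count += 1
    taxonomy = {"facebook.com": "Social", "gmail.com": "Email",
                "nytimes.com": "News"}
    return taxonomy.get(url, "Other")

# Sort by URL first, then classify once per distinct URL rather than
# once per visit -- the classifier is only called when the URL changes.
visits.sort(key=lambda v: v[0])
categorized = []
for url, group in groupby(visits, key=lambda v: v[0]):
    category = classify(url)  # one call per distinct URL
    for _, date in group:
        categorized.append((url, date, category))
```

With five visits to three distinct sites, the classifier runs three times instead of five; at comScore's volumes, that gap between visit count and distinct-site count is where the CPU savings come from.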
Put into production in 2009, this sorting step let comScore process daily panel updates in seven hours where it used to take 24. And monthly updates are now delivered on the fifth of the month instead of the 15th. "That's a big win for the business because our customers can get a much quicker understanding of how their campaigns are performing," Brown says.
As I recently reported in "IT And Marketing in the Digital Age," this is exactly the kind of low-latency information marketers now demand from suppliers and the kind of efficiency they're seeking internally.
Not every company operates at comScore's scale, but the lesson is that not every big-data challenge is best left to the high-powered database platform to solve. Sorting, filtering, aggregation, and transformation steps can streamline data before it gets to the data warehouse, saving CPU cycles and storage space before and after the crucial data-loading stage.
Nine times out of 10 when I hear about big data, it's all about the database and the analytics. But I'm increasingly hearing about all the work that takes place even before big data gets moved into warehouses. If you have lessons learned or smart shortcuts you can share, send me an email note. I've placed a big-data-integration feature story in the queue for this fall, and I'm looking for good customer examples.