Success in the Big Data era is about more than size. It's about extracting insight from huge data sets more quickly. As explored in our recent cover story, experienced practitioners are taking advantage of in-database analytics processing, breakthrough techniques such as MapReduce, and innovative new environments such as Hadoop to handle big data volumes and new data types with speed and ease.
What does it take to move into the era of large-scale data analytics? It's almost a given that deployments headed north of 10 terabytes will feature massively parallel processing, column-store architectures, or both. But the story doesn't end there. Innovations including in-database analytics, MapReduce, Hadoop, and in-memory analysis are redefining what's possible.

Adknowledge is using both Hadoop and Greenplum to analyze e-mail and digital advertising campaigns. Barnes & Noble has consolidated multiple warehouses onto the Aster Data platform, and it's using MapReduce techniques to better understand cross-channel buying patterns. BNP Paribas has deployed Oracle Exadata and its flash-memory edge to stay on top of trading floor application performance and compliance. Catalina Marketing has a massive Netezza deployment including what it bills as the largest loyalty database in the world. Cabela's has mastered in-database analytics on the Teradata platform so it can make the best use of expensive statistical expertise. Hutchison 3G is delving deeper into mobile-phone-contract historical analysis while also optimizing network performance on IBM's Smart Analytic System. McAfee is pioneering sparse-data analysis on Hadoop, using Datameer tools to spot correlations among spam, malware, firewall hacks, and botnet computer security threats. Provisio is using ParAccel to query millions of medical records and quickly spot potential drug-trial participants in close proximity to pharmaceutical research facilities.

Read on for details on the science of what's possible in the new big-data era.
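To make the MapReduce idea mentioned above concrete, here is a minimal single-machine sketch of the pattern in plain Python. The sample records, field names, and the cross-channel sales scenario are illustrative assumptions, not data or code from any of the deployments named in this story; a real Hadoop job distributes the same three phases across a cluster.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, value) pairs -- here, (channel, sale amount)."""
    for record in records:
        yield record["channel"], record["amount"]

def shuffle(pairs):
    """Shuffle: group all values by key before reduction."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values -- here, total sales per channel."""
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical cross-channel purchase records for illustration only.
sales = [
    {"channel": "web",   "amount": 40.0},
    {"channel": "store", "amount": 25.0},
    {"channel": "web",   "amount": 10.0},
]

totals = reduce_phase(shuffle(map_phase(sales)))
print(totals)  # totals per channel
```

The appeal of the pattern is that the map and reduce functions are independent per key, so a framework like Hadoop can run them in parallel across many machines without the analyst writing any coordination code.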