When data grows into the tens or even hundreds of terabytes, you need a special technology to quickly make sense of it all. From Hadoop to Teradata, check out the top platform options.
Hadoop And MapReduce Boil Down Really Big Data

Hadoop is a collection of open-source distributed data-processing components for storing and processing structured, semi-structured, and unstructured data at truly high scale (tens or hundreds of terabytes, or even petabytes). Clickstream and social-media analysis applications are driving much of the demand. Of particular interest is MapReduce, a technique supported by Hadoop (and a few other environments) that is ideal for processing big data sets. MapReduce breaks a big data problem into sub-problems, distributes those onto dozens, hundreds, or even thousands of processing nodes, and then combines the results into a smaller data set that's easier to analyze.
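The map-then-reduce pattern described above can be sketched in a few lines of plain Python, with no Hadoop cluster involved. This is a minimal illustration, not Hadoop's actual API: the sample clickstream records and the function names are hypothetical, and the "shuffle" between nodes is simulated by a local fold over the partial results.

```python
from collections import Counter
from functools import reduce

# Hypothetical clickstream records; any text corpus works the same way.
records = [
    "user1 viewed product_a",
    "user2 viewed product_b",
    "user1 viewed product_a",
]

def map_phase(record):
    # Map step: one record in, a set of (key, count) partials out.
    # On a real cluster each node runs this over its own shard of the data.
    return Counter(record.split())

def reduce_phase(left, right):
    # Reduce step: merge two partial results into one.
    return left + right

# Hadoop would shuffle the partials across nodes by key;
# here we simply fold them together locally.
partials = [map_phase(r) for r in records]
totals = reduce(reduce_phase, partials)

print(totals["viewed"])      # -> 3
print(totals["product_a"])   # -> 2
```

The key property is that both phases are independent and associative, which is what lets the work spread across thousands of nodes and still combine into one coherent answer.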
Hadoop runs on low-cost commodity hardware, and it scales up at a fraction of the cost of commercial storage and data-processing alternatives. That has made it a staple at Internet giants including AOL, eHarmony, eBay, Facebook, Twitter, and Netflix. But even more traditional firms coping with big data, such as JPMorgan Chase, are embracing the platform.