Mortar Data CEO K Young says many companies struggle to keep up with the infrastructure and expertise big data requires. Does your enterprise have the resources it needs to take full advantage of big data?
The term "big data" is bandied around a lot these days, but what does it really mean? Are today's data processing tools and technicians up to the task of processing large--and growing--sets of unstructured data? And are many organizations missing the big data revolution because they lack the resources to take advantage of it?
These are just a few of the concerns of Mortar Data co-founder and CEO K Young, who provided an interesting and opinionated take on the state of big data in a recent blog post titled "The Big Big Data Freak-Out of 2012."
Young is qualified to opine on the topic. His company, a New York City-based startup, is a Hadoop-in-the-cloud service that allows you to use the Python and Pig programming languages to write Hadoop jobs directly in a browser. Mortar Data's goal is to make big data processing accessible to organizations that may not be able to afford the hardware, software, and staffing costs of an in-house Hadoop system.
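Mortar Data's browser tooling isn't shown in the post, but the general shape of a Hadoop job is easy to sketch. Below is a hypothetical word-count example in the Hadoop Streaming style, with the map and reduce steps written as plain Python functions; the cluster's shuffle/sort phase is simulated locally, and the input text is invented for illustration.

```python
# Hypothetical sketch of a Hadoop-style word count in Python.
# In a real Hadoop Streaming job the mapper and reducer run as
# separate scripts on many machines; here the shuffle/sort step
# between them is simulated in-process.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in a line."""
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    """Reduce phase: sum the counts collected for one word."""
    return word, sum(counts)

def run_job(lines):
    # Map every input line to key/value pairs
    pairs = [kv for line in lines for kv in mapper(line)]
    # Simulated shuffle/sort: bring identical keys together
    pairs.sort(key=itemgetter(0))
    # Reduce each group of pairs sharing a key
    return dict(reducer(word, (count for _, count in group))
                for word, group in groupby(pairs, key=itemgetter(0)))

print(run_job(["big data big riches", "big bottlenecks"]))
# {'big': 3, 'bottlenecks': 1, 'data': 1, 'riches': 1}
```

The appeal of services like Mortar Data's is that the user writes only the two small functions; the framework handles distribution, sorting, and fault tolerance.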
So who's freaking out and why?
"In short, we’re all freaking out because old bottlenecks recently got shattered, the new bottlenecks are us and our existing tools, and mad riches are visible just over the horizon," Young writes. And not just monetary riches, but also the kind associated with using big data to help cure a variety of social ills.
The "old bottlenecks" included the inability to affordably process massive volumes of data. Supercomputers could handle this, of course, but they were beyond the means of all but a few organizations. Hadoop, despite being difficult to use, fixed this bottleneck by enabling data-intensive distributed applications on conventional hardware.
Another former bottleneck was what Young calls "variety"--the need to combine a hodgepodge of data sources. Hadoop and NoSQL stores solved this dilemma by supporting unstructured data (e.g., images and video) and read-time schemas. And real-time processing systems such as Twitter's Storm, in addition to Hadoop and NoSQL stores, provided the tools necessary for high-speed data processing.
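The "read-time schema" idea can be sketched in a few lines: rather than forcing every record into one fixed table layout at write time, heterogeneous records are stored as-is and a schema is applied only when the data is read. A minimal illustration (the field names and records are invented for the example):

```python
import json

# Heterogeneous raw records stored untouched; a schema-on-write
# system would reject or mangle the records with extra fields.
raw_records = [
    '{"user": "ann", "clicks": 3}',
    '{"user": "bob", "video_id": "v42", "seconds_watched": 17}',
    '{"user": "cat", "clicks": 1, "referrer": "search"}',
]

def read_with_schema(raw, fields, default=0):
    """Apply a schema at read time: extract only the requested
    fields, filling in a default where a record lacks one."""
    for line in raw:
        record = json.loads(line)
        yield {f: record.get(f, default) for f in fields}

# Different consumers can apply different schemas to the same bytes.
clicks = list(read_with_schema(raw_records, ["user", "clicks"]))
print(clicks)
# [{'user': 'ann', 'clicks': 3}, {'user': 'bob', 'clicks': 0},
#  {'user': 'cat', 'clicks': 1}]
```

This is what lets Hadoop and NoSQL stores absorb a hodgepodge of sources first and decide how to interpret them later.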
But with the old problems out of the way, two new ones popped up: 1) Hadoop is too hard to use, and 2) there's a shortage of data scientists capable of extracting meaning from all this information being collected.
The first problem may take years to solve, Young estimates, as Hadoop is undergoing a lot of innovation and won't become a mature, easy-to-use technology for some time.
"I think we've done a really good job of making it a lot easier, but there's still more work to do there," Young told InformationWeek this week in a phone interview. "It's still limited to people who have a technical background."
The second bottleneck may prove trickier to solve. Young points to a 2011 study by the McKinsey Global Institute that says the U.S. could face a shortage of up to 190,000 data scientists by 2018. "We provide a platform where our users create their data pipelines, and they need data scientists in order to construct meaningful data pipelines," Young said.
The Big Big Data Freak-Out happens when organizations are segregated into two distinct classes: the "Hadoops and the Hadoop-Nots," according to Young. Major companies such as Walmart, LinkedIn, and Sears have implemented Hadoop successfully, but other groups, including nonprofits, government agencies, and mid-sized companies saddled with legacy infrastructure, lack the resources to do so.
"On top of that, these companies also lack the data scientists necessary to extract meaning from the data. So they feel like they’re drowning in big data and watching the rescue boat slowly drive away," Young writes.
On the plus side, smaller startups that aren't saddled with legacy hardware or software can get started with Hadoop right now, he says.