Social network giant Facebook and venture capital blue chip Accel Partners think emerging platforms like Hadoop deserve new business intelligence, data visualization, and analytics tools.

Doug Henschen, Executive Editor, Enterprise Apps

September 28, 2012

6 Min Read


Forget about the business intelligence suites from IBM, Oracle, and SAP BusinessObjects, the analytics from SAS, and even hot data visualization tools like Tableau Software. New platforms like Hadoop and NoSQL databases demand new tools that are purpose-built for these environments.

This is a core theme that Jay Parikh, VP of infrastructure engineering at Facebook, and Ping Li, a partner at venture capital firm Accel Partners, discussed on stage on Thursday at the DataWeek 2012 Conference in San Francisco. Their talk was about the challenges and opportunities facing startups and young companies in the big data arena, and Parikh and Li shared their message with InformationWeek by phone just hours before they took to the stage.

There's little doubt that Hadoop, NoSQL databases, and other emerging big data platforms are quickly evolving, says Li, "but we're hoping to see more new applications on top of these platforms." Parikh and Li are encouraging more innovation because there's not enough speed and breadth of development to truly feed a rich big data community, they say.

New analytics, business intelligence, and data visualization tools are needed, Li says, because "stats platforms like SAS and R for predictive analytics were not built for the big data world. Tableau Software has been wildly successful, but it was built before big data tools were even around."


Citing a "huge gap" in connecting big data business users to the new underlying platforms, Li says there's also ample room for new business applications, like CRM, and new vertical industry applications for data-intensive fields, such as oil and gas.

Ping Li, Accel Partners

Li manages Accel Partners' Big Data Fund, which clearly stands to benefit if there's a crop of new startups to invest in that ultimately succeed. But why is Facebook taking a stand?

"We've had a long history of innovating on infrastructure very openly and contributing back into various open source projects," Parikh explained. "There's a lot more work to be done on these platforms, but we're not going to hire every smart engineer on the planet. We want to be able to collaborate with the people that we can't hire in the open through various communities."

In its earliest days, Facebook helped push the envelope with open source projects like Memcached and MySQL. The social network giant has since made significant contributions to Hadoop, including foundational work on Hive and many contributions to HBase, HDFS, and MapReduce. The company has been forced to innovate because it runs the largest Hadoop deployment in the world, with more than 100 petabytes of information.

"We built Hive as a way for business users to get what they needed out of our [Hadoop] big data infrastructure," said Parikh. "Writing MapReduce jobs is fine for engineers, but if you're an analyst or a product manager and you want to extract reports or do analysis, you need an easier interface for that data. Hive gave our users a SQL-like interface to Hadoop."

The "we need new tools" thesis seems to write off products that have made huge strides in connecting to new platforms. Parikh grants that there is a bias in the big data community toward "shiny new things," and doesn't believe there's "one magical piece of technology that's going to wipe out everything done in the past."

Li, too, grants that the relational database and the applications built for it will survive, "but we're seeing enough new greenfield applications that will require a new set of tooling." Most relational databases and BI platforms have sprouted connections to Hadoop, one of the latest wrinkles being HCatalog-based access to Hadoop data without data movement. But over time, Li foresees new tools built natively for the new platforms.

Jay Parikh, Facebook

"It's kind of like the mobile world where people started by putting Web applications on the mobile phone, but now they're developing natively just for the mobile phone in a way that takes advantage of the fact that it's a mobile device that has location information and all sorts of other good stuff," Li says.

On Li's short list of platforms of the future are Hadoop, HBase, and "a couple of different flavors of NoSQL databases." It's early for real-time platforms, but these will also emerge, he says, mentioning Twitter's Storm open source project and Google's Dremel technology. On the application side, Li sees BI, data visualization, and analytics as ripe for innovation.

"The concept of machine learning is going to change the way that people think about analytics," Li says. "We'll no longer do sampling because we can now run analytics across an entire data set repeatedly."

The in-database processing already being done by the likes of Alpine, Fuzzy Logix, IBM SPSS, and SAS on massively parallel processing grids and platforms like EMC Greenplum, IBM Netezza, and Teradata is "just a first step," according to Li. "The next step is running it natively on some of the newer platforms," he says.

Parikh says that real-time processing and graph analysis are the hottest areas of exploration at Facebook, but the needs aren't yet well served by existing technologies. "The way Hadoop is evolving is great because it's open, but over the next couple of years you're going to see significant changes in the Hadoop stack as we know it today," he says.

Work is needed on HDFS to make it more robust, more scalable, and more efficient, he says. What's more, real-time demands and the need for incremental processing will drive development, because people don't have time to rerun MapReduce jobs the way they have to today.
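The incremental-processing idea can be shown with a minimal, hypothetical sketch: instead of rerunning a job over all historical data whenever new records arrive, only the new batch is folded into the previous result. This is illustrative Python, not a description of how the Hadoop stack will implement it.

    # Full recompute: rerun the whole job over all history each time.
    def recompute_total(all_records):
        return sum(r["views"] for r in all_records)

    # Incremental update: fold only the newly arrived batch into the prior result.
    def update_total(previous_total, new_batch):
        return previous_total + sum(r["views"] for r in new_batch)

    history = [{"views": 3}, {"views": 5}]
    total = recompute_total(history)         # 8, full scan of history
    new_batch = [{"views": 2}]
    total = update_total(total, new_batch)   # 10, without rescanning history
    print(total)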

Graph processing is another area where Facebook is innovating. "Everything in Facebook is modeled as a social graph, and it's a completely different way to model data than is used in the relational world," Parikh explains. "We're trying to develop more powerful ways to query the graph. MapReduce requires a lot of iteration and it's non-intuitive, so we're building a whole set of tools that will allow us to query the graph in real time."
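A minimal adjacency-list sketch (hypothetical data, nothing like Facebook's actual graph infrastructure) shows the kind of query Parikh means: a friends-of-friends lookup is a few lines against a graph structure, whereas the relational or MapReduce equivalent takes self-joins or multiple passes.

    # Hypothetical social graph as an adjacency list of mutual friendships.
    social_graph = {
        "alice": {"bob", "carol"},
        "bob":   {"alice", "dave"},
        "carol": {"alice", "dave"},
        "dave":  {"bob", "carol", "erin"},
        "erin":  {"dave"},
    }

    def friends_of_friends(graph, user):
        # People exactly two hops away who are not already direct friends.
        direct = graph.get(user, set())
        two_hops = set()
        for friend in direct:
            two_hops |= graph.get(friend, set())
        return two_hops - direct - {user}

    print(friends_of_friends(social_graph, "alice"))  # {'dave'}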

But even Facebook, with fewer than 5,000 employees, seems overwhelmed by the amount of work that needs to be done. Thus the appeal to entrepreneurial software developers to build better mousetraps for big data.

About the Author

Doug Henschen

Executive Editor, Enterprise Apps

Doug Henschen is Executive Editor of InformationWeek, where he covers the intersection of enterprise applications with information management, business intelligence, big data and analytics. He previously served as editor in chief of Intelligent Enterprise, editor in chief of Transform Magazine, and Executive Editor at DM News. He has covered IT and data-driven marketing for more than 15 years.
