Vague Goals Seed Big Data Failures
IT staff survey finds big data projects are plagued by unclear business objectives and unrealistic expectations that Hadoop will solve all problems.
What business problem are you trying to solve? If you could tell your IT employees what it is, they'd have a much better crack at big data success. At least that's the perspective of IT staffers as reported in a recent survey by big data cloud-services provider Infochimps.
The survey, "CIOs & Big Data: What Your IT Team Wants You to Know," confirms that there's big interest in this topic, with 81% of respondents listing "Big Data/Advanced Analytics Projects" as a top-five 2013 IT priority. However, respondents also report that 55% of big data projects don't get completed and that many others fall short of their objectives. "Inaccurate scope" is cited by 58% as the top reason that big data IT projects fail.
"Too many big data projects are structured like boil-the-ocean experiments," Infochimps' CEO, Jim Kaskade, told InformationWeek. Lots of companies are blindly building out Hadoop clusters and collecting new data based on only a vague plan to open up that data store to multiple lines of business in 12 to 24 months, Kaskade said. A better approach, he advised, is to prioritize business use cases first and start solving one problem at a time.
[ Want more on big data dissatisfaction? Read Big Data Perceptions: Good, Bad And Ugly. ]
"Some people would say that approach is too messy and incremental, but you're going to learn much more tackling five uses cases than you would learn after 24 months of building out a platform that has no real usage," he said.
Infochimps conducted its survey of 300 IT staffers with assistance from enterprise software community site SSWUG.ORG. The findings are based on the responses of 174 participants who said they are involved in big data initiatives. Infochimps specifically chose IT staffers so it could to gain insight from those primarily responsible for implementation. Thus, 86% of respondents are directors, managers or systems administrators/developers, while the remaining 14% are VPs, senior VPs or CIOs.
Rather than trying to dream up new business use cases, Kaskade, who spent 10 years at Teradata before Infochimps, advises companies to take a fresh look at the problems they're trying to solve with their existing data infrastructure.
"Whether it's churn, anti-money-laundering, risk analysis, lead-generation, marketing spend optimization, cross-sell, up-sell, or supply chain analysis, ask yourself, 'how many more data elements can you add with big data that can make your analysis more statistically accurate?'" he suggested.
That's practical and refreshing advice for practitioners who might think big data has to be about entirely new analyses. Kaskade offers the example of an online brokerage firm adding clickstream analysis to known data on account profiles and products purchased. The clickstream data could show the brokerage what was browsed but not purchased and where online customers seemed to get hung up on site functionality.
"You're not going to put clickstream data in your Teradata or Oracle database, but you can process that in a Hadoop cluster," Kaskade said. "If you can show value in 30 days and pay as you go, you're solving a problem and it's not in a brokerage firm's overall IT budget.
Another takeaway from the report is that big data planners have to look beyond Hadoop. Hadoop-based techniques alone aren't enough to meet business needs for analysis, according to the study. As evidence, respondents rate batch processing, the core approach of Hadoop MapReduce, and real-time processing as almost equally important, at 53% and 49%, respectively.
The Hadoop community is working hard to support faster analysis, with examples including Cloudera's Impala project and Hortonworks' HCatalog initiative, but Kaskade says these tools are geared to ad hoc, near-real-time queries, answering the same sorts of historical questions you'd ask of a data warehouse. What's needed, he says, is real-time analysis that can monitor what people are looking at on a website or mobile app to, say, personalize the experience while customers are still connected.
Kaskade lists SQLstream, HStreaming, StreamBase, VMware's GemFire and the open-source projects Storm and Apache Kafka as emerging in-stream and in-memory processing options capable of delivering real-time analysis.
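For a flavor of what "analyze while the customer is still connected" can look like, here is a minimal sketch using the kafka-python client against Apache Kafka, one of the open-source options Kaskade names. The topic name, the JSON event shape and the three-views-of-one-category trigger are invented for illustration; none of this comes from the survey or from the vendors listed above.

```python
"""Illustrative in-stream consumer: watch live page-view events and flag a
session for on-the-spot personalization. Topic name, event fields and the
trigger threshold are assumptions for this sketch."""
import json
from collections import defaultdict

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "page_views",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

# Running per-session counts of views by product category.
views_by_session = defaultdict(lambda: defaultdict(int))

for message in consumer:
    event = message.value                  # e.g. {"session": "s1", "category": "etfs"}
    session = event["session"]
    category = event["category"]
    views_by_session[session][category] += 1

    # If a visitor keeps circling one category without converting, react now,
    # while they are still on the site -- not in tomorrow's batch report.
    if views_by_session[session][category] == 3:
        print("personalize %s: surface %s content or a live-help prompt"
              % (session, category))
```

The point is the shape of the loop: state is updated per event and the reaction happens inside the stream, rather than in a nightly batch job.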
"A few years from now, we're going to see all three of these use cases -- real-time, near-real-time and batch -- coming together, and we'll finally have everything we need to build truly smart, data-driven applications," Kaskade said.
We'll see. These sorts of streaming technologies have been in use for more than a decade in financial trading, but they have yet to go mainstream -- despite the fact that they've been offered by the likes of IBM (InfoSphere Streams), Microsoft (SQL Server StreamInsight), Oracle (CEP) and SAP (Sybase Event Stream Processor) for more than a few years.
Dreamers who haven't thought through their business priorities might think that Hadoop alone will be enough to deliver big data insight, but Infochimps' study suggests that's not the case. Perhaps mobile and e-commerce opportunities will finally lead to broad adoption of stream processing as a way to solve the big data velocity challenge.