Software // Information Management
Commentary
1/28/2013 01:21 PM
Doug Henschen

Vague Goals Seed Big Data Failures

IT staff survey finds big data projects are plagued by unclear business objectives and unrealistic expectations that Hadoop will solve all problems.

What business problem are you trying to solve? If you could tell your IT employees what it is, they'd have a much better crack at big data success. At least that's the perspective of IT staffers as reported in a recent survey by big data cloud-services provider Infochimps.

The survey, "CIOs & Big Data: What Your IT Team Wants You to Know," confirms that there's big interest in this topic, with 81% of respondents listing "Big Data/Advanced Analytics Projects" as a top-five 2013 IT priority. However, respondents also report that 55% of big data projects don't get completed and that many others fall short of their objectives. "Inaccurate scope" is cited by 58% as the top reason that big data IT projects fail.

"Too many big data projects are structured like boil-the-ocean experiments," Infochimps' CEO, Jim Kaskade, told InformationWeek. Lots of companies are blindly building out Hadoop clusters and collecting new data based on only a vague plan to open up that data store to multiple lines of business in 12 to 24 months, Kaskade said. A better approach, he advised, is to prioritize business use cases first and start solving one problem at a time.

[ Want more on big data dissatisfaction? Read Big Data Perceptions: Good, Bad And Ugly. ]

"Some people would say that approach is too messy and incremental, but you're going to learn much more tackling five use cases than you would learn after 24 months of building out a platform that has no real usage," he said.

Infochimps conducted its survey of 300 IT staffers with assistance from enterprise software community site SSWUG.ORG. The findings are based on the responses of 174 participants who said they are involved in big data initiatives. Infochimps specifically chose IT staffers so it could gain insight from those primarily responsible for implementation. Thus, 86% of respondents are directors, managers or systems administrators/developers, while the remaining 14% are VPs, senior VPs or CIOs.

Rather than trying to dream up new business use cases, Kaskade, who spent 10 years at Teradata before Infochimps, advises companies to take a fresh look at the problems they're trying to solve with their existing data infrastructure.

"Whether it's churn, anti-money-laundering, risk analysis, lead-generation, marketing spend optimization, cross-sell, up-sell, or supply chain analysis, ask yourself, 'how many more data elements can you add with big data that can make your analysis more statistically accurate?'" he suggested.

That's practical and refreshing advice for practitioners who might think big data has to be about entirely new analyses. Kaskade offers the example of an online brokerage firm adding clickstream analysis to known data on account profiles and products purchased. The clickstream data could show the brokerage what was browsed but not purchased and where online customers seemed to get hung up on site functionality.

"You're not going to put clickstream data in your Teradata or Oracle database, but you can process that in a Hadoop cluster," Kaskade said. "If you can show value in 30 days and pay as you go, you're solving a problem and it's not in a brokerage firm's overall IT budget."
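At batch scale, this kind of analysis is usually expressed as a map/reduce-style aggregation over raw clickstream logs. As a rough sketch only (the event schema, field names and product codes here are invented for illustration, not taken from any brokerage's actual data), the core logic amounts to counting product views and subtracting the products that were actually bought:

```python
from collections import Counter

def browsed_not_purchased(events):
    """Toy version of the brokerage example: given clickstream events
    (hypothetical schema with 'product' and 'action' fields), return view
    counts for products that were browsed but never purchased. In a real
    Hadoop job, the loop below would be split into map and reduce phases
    running across the cluster."""
    views = Counter()
    purchased = set()
    for e in events:
        if e["action"] == "view":
            views[e["product"]] += 1
        elif e["action"] == "purchase":
            purchased.add(e["product"])
    # keep only products that were viewed but never bought
    return {p: n for p, n in views.items() if p not in purchased}

sample = [
    {"product": "ETF-A", "action": "view"},
    {"product": "ETF-A", "action": "purchase"},
    {"product": "FUND-B", "action": "view"},
]
print(browsed_not_purchased(sample))  # FUND-B was browsed but not bought
```

The same grouping logic also surfaces the "where customers get hung up" question: swap the product field for a page or UI-step identifier and the counts show where sessions stall.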

Another takeaway from the report is that big data planners have to look beyond Hadoop. Hadoop-based techniques aren't enough to meet business needs for analysis, according to the study. As evidence, respondents rate batch processing -- the core approach in Hadoop MapReduce processing -- and real-time processing as almost equally important, with scores of 53% and 49%, respectively.

The Hadoop community is working hard to support faster analysis, with examples including Cloudera's Impala project and Hortonworks' HCatalog initiative, but Kaskade says these tools are geared to ad hoc, near-real-time queries, answering the same sorts of historical questions you'd ask of a data warehouse. What's needed, he says, is real-time analysis that can monitor what people are looking at on a website or mobile app to, say, personalize the experience while customers are still connected.

Kaskade lists SQLstream, HStreaming, StreamBase, VMware's GemFire and open-source projects Storm and Apache Kafka as emerging in-stream and in-memory processing options capable of delivering real-time analysis.
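The defining trick in these in-stream systems is maintaining live aggregates as each event arrives, rather than re-querying a store after the fact. A minimal, self-contained sketch of that idea (pure Python, not the API of Storm, Kafka or any product named above) is a per-key count over a sliding time window:

```python
from collections import deque, Counter

class SlidingWindowCounter:
    """Illustrative sliding-window aggregate: per-key event counts over the
    last `window_seconds`, updated on every arrival. Real stream processors
    distribute and parallelize this work, but the windowing logic itself
    looks much like this."""
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()   # (timestamp, key) in arrival order
        self.counts = Counter()

    def add(self, timestamp, key):
        self.events.append((timestamp, key))
        self.counts[key] += 1
        self._expire(timestamp)

    def _expire(self, now):
        # drop events that have fallen out of the window
        while self.events and self.events[0][0] < now - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1
            if self.counts[old_key] == 0:
                del self.counts[old_key]

w = SlidingWindowCounter(window_seconds=60)
w.add(0, "pricing-page")
w.add(30, "pricing-page")
w.add(90, "signup-page")   # the event at t=0 has now expired
print(dict(w.counts))
```

With counts like these available while the customer is still connected, an application can react in the moment, which is exactly the personalization scenario Kaskade describes and the part batch or near-real-time query tools can't cover.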

"A few years from now, we're going to see all three of these use cases -- real-time, near-real-time and batch -- coming together, and we'll finally have everything we need to build truly smart, data-driven applications," Kaskade said.

We'll see. These sorts of streaming technologies have been in use for more than a decade in financial trading, but they have yet to go mainstream -- despite the fact that they've been offered by the likes of IBM (InfoSphere Streams), Microsoft (SQL Server StreamInsight), Oracle (CEP) and SAP (Sybase Event Stream Processor) for more than a few years.

Dreamers who haven't thought through their business priorities might think that Hadoop alone will be enough to deliver big data insight, but Infochimps' study suggests that's not the case. Perhaps mobile and e-commerce opportunities will finally lead to broad adoption of stream processing as a way to solve the big data velocity challenge.

Comments
Patrick Taylor, User Rank: Apprentice
2/11/2013 | 7:46:17 PM
re: Vague Goals Seed Big Data Failures
In addition to not boiling the ocean from a data capture and analysis perspective, we've found an additional key goal is focusing on the action that can come as a result of the analysis: what smarter decisions you'll be able to make in the organization. When we focus on making smarter decisions on the front line of business, big data's capability to store and access the details makes a real difference. One smarter decision a day has a dramatic impact on the bottom line. - Patrick Taylor, CEO, www.oversightsystems.com
J. Nicholas Hoover, User Rank: Apprentice
1/31/2013 | 7:46:15 PM
re: Vague Goals Seed Big Data Failures
Doug, I've found this exact same problem in my study of government big data projects. There are hazy goals at a number of government agencies, and others have piles of siloed projects.
JHADDAD3380, User Rank: Apprentice
1/29/2013 | 6:17:02 PM
re: Vague Goals Seed Big Data Failures
Good post, Doug. I gave my perspective as to why some projects fail to deliver on the promise of big data in yesterday's Informatica Perspectives blog, "How to Avoid the Big Data Trough of Disillusionment" (http://bit.ly/117GfZO). I then describe what organizations can do to avoid big data project failures by leveraging existing developer skills and more unified data architectures that include a single data integration platform. The other aspect of avoiding a big data hangover is to build a clear business case to maximize your return on data (see "Building the Business Case for Big Data: Learn to Walk Before You Run" at http://bit.ly/TWxnRJ).
Yangtze, User Rank: Apprentice
1/29/2013 | 5:10:16 PM
re: Vague Goals Seed Big Data Failures
In my readings, it appears that Big Data is presented as yet another "IT Savior" to the business. If you create this, you'll get this great Oz effect. IMHO, it is yet another IT project where IT is trying to prove it's a business driver. However, IT is just a tool for business. I have yet to see a true IT project that results in any added value to the organization without a STRONG business driver. There has to be a defined need/result for the IT project to succeed.
Ellis Booker, User Rank: Strategist
1/29/2013 | 3:53:01 PM
re: Vague Goals Seed Big Data Failures
A reasonable post, Doug. But isn't there also the promise of using Big Data, combined with machine learning, to *discover* valuable patterns? I agree there's plenty of value in seeking answers to specific questions. But letting the data speak for itself, so to speak, should be part of the plan, yes? --Ellis Booker, InformationWeek Community Editor
D. Henschen, User Rank: Author
1/29/2013 | 2:25:49 PM
re: Vague Goals Seed Big Data Failures
Key takeaway: go after a short list of tactical projects -- likely the types of problems you're already trying to solve with existing information management systems. More data = better chance of success.