Software // Information Management
Commentary
11/14/2012 10:23 AM
Doug Henschen

Sears Hadoop Plans: Check Out Data Warehousing's Future

Will Hadoop become the new enterprise data warehouse? Sears' CTO is not alone in seeing a shift in how we'll use relational databases.

In one example of the kind of tools that Shelley would like to see, Sears is using Datameer, a spreadsheet-style tool that supports data exploration and visualization directly on Hadoop, without copying or moving data.

Another vendor operating on top of Hadoop is Hadapt, which implements a SQL-queryable data store running on Hadoop. It uses the power of Hadoop's distributed infrastructure to speed processing, and it can handle unstructured data applications including full-text search.

Platfora, which launched its first product at Strata New York, is yet another company offering a new-breed analytics platform built to run on Hadoop. The software gives analysts and power users a data catalog that enumerates the data sets available in the Hadoop Distributed File System. When you want to do an analysis, you use a shopping-cart-style interface to pick the dimensions of data you want. Behind the scenes, Platfora's software creates and executes the MapReduce processing that brings all the requested data into an interactive data-visualization environment.
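To make that "behind the scenes" step concrete, here is a toy sketch in plain Python (not actual Hadoop, and with invented field names and sample data) of the map/reduce roll-up pattern that a tool like Platfora generates: raw records are mapped to (dimension, measure) pairs, then reduced into per-dimension totals small enough to hold in memory for interactive querying.

```python
from collections import defaultdict

def map_phase(records, dimension):
    """Emit one (dimension value, amount) pair per raw record."""
    for record in records:
        yield record[dimension], record["amount"]

def reduce_phase(pairs):
    """Sum the amounts for each distinct dimension value."""
    totals = defaultdict(float)
    for key, amount in pairs:
        totals[key] += amount
    return dict(totals)

# Invented sample data standing in for records stored on HDFS.
sales = [
    {"region": "east", "amount": 120.0},
    {"region": "west", "amount": 75.0},
    {"region": "east", "amount": 30.0},
]

# The aggregated result plays the role of Platfora's in-memory "data lens".
data_lens = reduce_phase(map_phase(sales, "region"))
print(data_lens)  # {'east': 150.0, 'west': 75.0}
```

The point of the pattern is that the expensive scan over raw data happens once, in batch, while the compact aggregate answers interactive queries afterward.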

[ Want the inside story on big data plans at Sears? Read Why Sears Is Going All-In On Hadoop. ]

Once the data lens is ready, a process that takes a few hours, according to Platfora, analysts and business users alike can query and explore the data with sub-second response time because the data lens runs in memory. Adding new data types or otherwise changing the dimensions used in a data lens takes minutes or hours, says Platfora, not the days, weeks or months it might require to rebuild a conventional data warehouse.

Platfora has a who's who list of prominent venture capital backers -- Andreessen Horowitz, Sutter Hill Ventures, In-Q-Tel -- and on Tuesday it closed a $20 million round of funding led by Battery Ventures. The company also reports that more than ten companies are beta testing the software, though none have shared their stories publicly as yet.

Established vendors, too, are working on ways to query data in Hadoop without actually moving that data. Teradata, for one, has placed bets on the HCatalog service for Hadoop being developed by Hortonworks, and Microsoft announced last week that it would add a PolyBase feature to the SQL Server 2012 Parallel Data Warehouse that will provide federated querying of Hadoop using Hive.

To get back to the question of how data warehousing will change, it's clear that many vendors are working on ways to run data-warehouse-style analyses and analytic applications on Hadoop. But the most promising candidates have few customer references, and there's no clear leader as yet.

Amid all the enthusiasm building around Hadoop, you have to catch yourself sometimes and remember that the last public customer count from Cloudera (the leading Hadoop software distributor and support provider) was in the hundreds, not thousands or tens of thousands. Relational database users number in the millions, and the leading commercial vendors have hundreds of thousands of customers each.

Keeping that dose of reality in mind, it's clear that the broad market is not going to substantially change anytime soon, but the early adopters have adopted and the fast followers are now on their heels. It's only a matter of time -- perhaps three to five years -- before the cost, scale and flexibility advantages of Hadoop will give it a sizeable presence in the enterprise market. Showing up in even 20% of large and midsize firms within that timeframe would be breakneck speed compared to how long it took for relational data warehousing to proliferate.

It's at that point that things will really start to change for the relational database.

Just as the Internet and social networks are changing what we watch and how much time we spend watching TV and reading newspapers, the rise of Hadoop will cast relational databases in more tactical roles best suited to the strengths of the platform. Fast querying of structured data is what relational databases do best. Where new data types, complex data and varied data are constantly showing up, relational databases don't adapt quickly or well. And where data volumes are extreme, cost is a killer. That's the bottom line.

What's your view on whether, when and how Hadoop will impact relational databases? Share your comments below.

Comments
JHADDAD3380, 11/16/2012
re: Sears Hadoop Plans: Check Out Data Warehousing's Future
I see Hadoop as a key component of a big data analytics strategy, one that complements and needs to integrate with the rest of an enterprise information management infrastructure: legacy systems (like the mainframe), relational databases, ERP, CRM and cloud applications, data warehouse appliances, etc. Not only are data volumes growing exponentially, but the variety of data is increasing with social media, sensor devices, call detail records, industry-standard data formats (e.g. HL7 in healthcare; FIX, SWIFT, and market data in financial services), log files, and the list goes on.

It certainly makes sense to store a lot of the raw multi-structured and unstructured data in Hadoop rather than in a traditional relational database. However, even if you assume that more and more data will be stored in Hadoop over time, you still need to access an ever-increasing variety of data from multiple organizations, residing in different systems and formats, and then parse and transform it on Hadoop before you can do any useful analysis.
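That parse-and-transform step can be sketched in a few lines of Python. This is a hypothetical illustration, not anyone's actual pipeline: it normalizes multi-structured input (JSON events mixed in with pipe-delimited call detail records) into uniform rows, the kind of grunt work that has to happen before analysis. The field names and formats are invented for the example.

```python
import json

def normalize(line):
    """Parse one raw line into a {caller, duration} dict, or None for blanks."""
    line = line.strip()
    if not line:
        return None
    if line.startswith("{"):            # JSON-formatted event
        event = json.loads(line)
        return {"caller": event["caller"], "duration": int(event["duration"])}
    caller, duration = line.split("|")  # pipe-delimited call detail record
    return {"caller": caller, "duration": int(duration)}

# Invented sample input mixing two formats, plus a blank line.
raw = [
    '{"caller": "555-0100", "duration": "42"}',
    "555-0101|7",
    "",
]

rows = [r for r in (normalize(line) for line in raw) if r]
```

Every new source format means another branch in `normalize` (or its real-world equivalent), which is exactly why integration ends up consuming so much of a data scientist's time.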

I'm hearing from data scientists that about 80% of the work in a big data project is data integration. In fact, in one study of 35 data scientists, one of them stated, "I spend more than half my time integrating, cleansing, and transforming data without doing any actual analysis. Most of the time I'm lucky if I get to do any 'analysis' at all." (Kandel, et al. Enterprise Data Analysis and Visualization: An Interview Study. IEEE Visual Analytics Science and Technology (VAST), 2012). The need for data integration is greater today than it ever has been. The challenge is to make data integration easier and more productive on emerging technologies such as Hadoop. Informatica's PowerCenter Big Data Edition (http://bit.ly/U25Cn8) provides a no-code development environment to visually design data integration flows and then execute them on Hadoop, so that data scientists can spend more of their time doing analysis rather than integrating data.