Yahoo And Hadoop: In It For The Long Term

Yahoo made an early bet that it could use Hadoop to monetize the big data the content site was collecting. Now it's betting the company on it.

Charles Babcock, Editor at Large, Cloud

June 15, 2012

5 Min Read

12 Hadoop Vendors To Watch In 2012 (click image for larger view and for slideshow)

As Hadoop's first large-scale user, Yahoo still holds a special place in Hadoop's rapidly expanding universe.

Yahoo started out using Hadoop to speed up indexing of Web crawl results for its search engine. Now it relies on Microsoft's Bing for search results, and a Bing team does the heavy lifting with Web data.

Where does that leave Yahoo's relationship with Hadoop today? The company is still betting the business on it, according to Scott Burke, senior VP of advertising and data platforms. "Yahoo remains committed to Hadoop, more than ever before. It's the only platform we use globally. We have the largest Hadoop footprint in the world," said Burke in his keynote address at the Hadoop Summit 2012 Thursday in San Jose.

Yahoo runs Hadoop on 42,000 servers--that's 1,200 racks--in four data centers. Its largest Hadoop cluster is 4,000 nodes, but that will increase to 10,000 with the release of Apache Hadoop 2.0, expected shortly.

[ Why did VMware launch another Hadoop open source project? It needs to be virtualized. Read VMware Launches Open Source Project To Virtualize Hadoop. ]

After his keynote, Burke explained some of the key ways in which Hadoop has become integral to how Yahoo does business. For starters, Yahoo uses Hadoop to block spam trying to get into its email servers; Burke said it screens out 20.5 billion spam messages a day. "It's really transformed our spam detection capabilities," he said.

Hadoop also sits at the base of a powerful software stack inside Yahoo and powers the "personalization" Yahoo strives to give its visitors. Hadoop already knows something about you based on your previous visits, so if you request sports or financial news, for example, you'll get a "high-value package" reflecting your interests. (Yahoo uses a combination of automated analysis and human editors to define packages. If left only to automation, everyone would get packages about celebrities, which draw the highest traffic on the site.)

"Our goal is to give every consumer a personalized experience," said Burke. That means Yahoo can give visitors who've just read a particular story a list of related story choices that may also appeal to them.

Yahoo isn't using Hadoop as a stand-alone system. Rather, it serves as an information foundation for an Oracle database system, which pulls presorted, indexed data out and feeds it into a Microsoft SQL Server cube for highly detailed analysis. The resulting data is displayed to Yahoo business analysts in either Tableau or MicroStrategy visualization systems. They in turn use it to advise advertisers how their campaigns are faring soon after launch.

Yahoo can advise advertisers on the demographic that responds best to their campaigns so that advertisers can adjust their ads as necessary to appeal to the widest audience. According to Burke, this can lead to improved results, sometimes even doubling expected response rates.

Yahoo's data indicates not only who is viewing advertising, but also how much time they're spending on it, what their click-through rate is, and what they do on the Yahoo site after viewing an ad. This is all valuable feedback to advertisers, as it lets advertisers make decisions early in an ad campaign and helps them plan their future advertising strategies.

In addition, Yahoo has developed a set of open source code called Cocktails, which gives advertisers the tools to extract information from Yahoo's Hadoop systems. Cocktails are written in JavaScript so the same code can run on a server or in a browser window; Mojito is the application framework, and Manhattan is the server-side environment that hosts Mojito apps. Essentially, Burke explained, a cocktail could run the same reporting program on a server or an end-user laptop to get information that an advertiser is seeking. "We believe over the years, Cocktails will replace PHP in the enterprise. JavaScript is a better, more interactive language," Burke said.

Cocktails work with a Yahoo service, Advertiser Insights, to find information. The reporting system runs on a Hadoop-Oracle-Pentaho Mondrian stack (Mondrian is an open source OLAP engine for building multidimensional cubes). "Yahoo has a lot of valuable data. Advertisers can see all the information that we've learned about their customers," noted Burke.

Finally, Yahoo uses Hadoop for its own internal analysis of information captured from user interactions. It stores 140 petabytes in Hadoop. Since Hadoop keeps all data sets in triplicate, over 400 petabytes of storage are needed to sustain its systems.
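The storage arithmetic is worth spelling out: HDFS keeps every block in triplicate by default, so each logical petabyte consumes three physical ones. A minimal sketch (the 140 PB figure is from the article; the replication factor of 3 is the stock HDFS default, and Yahoo's exact setting isn't stated):

```javascript
// Raw disk implied by HDFS's default 3x block replication.
var logicalPB = 140;          // data Yahoo stores in Hadoop, per the article
var replicationFactor = 3;    // HDFS default: each block kept in triplicate

var physicalPB = logicalPB * replicationFactor;
console.log(physicalPB + ' PB of raw disk');  // 420 PB, matching "over 400 petabytes"
```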

Burke, a seven-year veteran at Yahoo, said the company started using Hadoop when it was a rudimentary system. Yahoo developed it, released it as open source, and at the same time bet the business on it by building a business model for monetizing content. In the process, Burke conceded, Yahoo has "gotten a lot of attention, sometimes for the wrong reasons." Over the years, the company has been rumored to be for sale, or undergoing significant changes in management or its business model.

Burke is confident that Yahoo's success with Hadoop will lead the company to the top in its various business initiatives. Yahoo is becoming an expert at capitalizing on "the big shift from the off-line analysis model to something much closer to a predictive model: Get a response in front of the customer with the right offer at the right time," said Burke.

To achieve this goal, Yahoo has had to implement "science at scale," or invest in an open source system that seemed to have broad potential but wasn't yet proven in prime time. "We've bet the business on this platform on a global scale," Burke said, "and there's no turning back."

At the Big Data Analytics interactive InformationWeek Virtual Event, experts and solution providers will offer detailed insight into how to put big data to use in ad hoc analyses, what-if scenario planning, customer sentiment analysis, and the building of highly accurate data models to drive better predictions about fraud, risk, and customer behavior. It happens June 28.

About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
