News | 2/13/2013 02:19 AM

Microsoft's Big Data Strategy: An Insider's View

Microsoft executive Dave Campbell outlines plans for Hadoop, machine learning, high-performance computing and data and analytic offerings on Azure.

IW: Where do Microsoft SQL Server and Microsoft High-Performance Computing (HPC) come in (the latter being Microsoft's distributed supercomputing platform)?

Campbell: We're building out an information production line, and in most scenarios the large volumes of data -- the hundreds of terabytes or petabyte-scale data -- will be in Hadoop. That then gets reduced, usually through MapReduce jobs, down to several terabytes that can fit on a small cluster of fairly modest machines. You would then do the final phase of refinement on HPC.
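
That information production line can be sketched in miniature. The snippet below is a hypothetical Hadoop Streaming job (not a Microsoft example) showing the kind of MapReduce reduction Campbell describes: raw, tab-separated event logs are collapsed into per-day counts small enough to hand off to a modest downstream cluster. The log format and field names are assumptions.

```python
#!/usr/bin/env python3
# Hypothetical Hadoop Streaming job sketching the "reduction" step: collapse
# raw tab-separated event logs into per-(day, event_type) counts.
# Illustrative invocation:
#   hadoop jar hadoop-streaming.jar \
#     -mapper "reduce_events.py map" -reducer "reduce_events.py reduce" \
#     -input /raw/events -output /reduced/daily_counts
import sys

def map_phase():
    # Assumed input line: ISO timestamp \t event_type \t payload
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if len(parts) >= 2:
            day, event_type = parts[0][:10], parts[1]
            print(f"{day}|{event_type}\t1")   # key = day|event_type, value = 1

def reduce_phase():
    # Streaming delivers mapper output sorted by key, so sum consecutive runs.
    current, total = None, 0
    for line in sys.stdin:
        key, n = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(n)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    map_phase() if len(sys.argv) > 1 and sys.argv[1] == "map" else reduce_phase()
```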

IW: Where does Microsoft handle in-database analytics (a technique now commonly used to speed predictive modeling work)?

Campbell: We have a set of foundation algorithms that we can run across several processing runtimes. Time and location are pretty much fundamental in this new world, so we're looking at running time-series analyses in-memory in the data warehouse or in HPC. We haven't said a lot about in-database analytics, but we have a lot of people using SQL Server's CLR [common language runtime] to define analytic functions and user-defined functions. Jim Gray [the late computing pioneer] introduced scientists and astronomers to database capabilities, so a lot of scientific work is being done on top of SQL Server using that .Net CLR capability.

[ Want more on Redmond's version of Hadoop? Read Microsoft Releases Hadoop On Windows. ]

IW: What about the non-scientific community, and where does HPC fit in?

Campbell: HPC is not about doing 10,000-node clusters for national laboratories; it's about efficient information production for businesses. We have many data scientists in our online services division that have built a machine learning workbench that allows them to run experiments and then operationalize and deploy [model-based] applications. We're currently productizing that machine-learning workbench and incorporating elements of HPC.

We demonstrated one example at a supercomputing conference where we took historical meteorological data over decades and historical airline flight-delay data over decades and we built a predictive model that combined them. We could then ask, "On a clear day, what are the probabilities of delay for various airlines, airports and times?" Based on the historical meteorological data, we could also ask, "What do those probabilities look like when there's six inches of snow in Detroit?" If you then add meteorological forecast data, you can ask, "What are the chances I'll miss a 30-minute-window connection in Detroit next Tuesday?"
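
The mechanics behind that kind of question can be sketched in a few lines of Python. The file names, columns and choice of a logistic-regression model below are illustrative assumptions, not a description of Microsoft's actual system; the point is simply how historical weather and delay records combine into a model that answers "what is the probability of a long delay under these conditions?"

```python
# Illustrative sketch: join historical weather with flight-delay records and
# fit a model for P(delay > 30 minutes) given airline, airport, hour and weather.
# All file and column names are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

flights = pd.read_csv("flight_delays.csv")   # airline, origin, date, dep_hour, delay_minutes
weather = pd.read_csv("daily_weather.csv")   # airport, date, snowfall_in, visibility_mi

df = flights.merge(weather, left_on=["origin", "date"], right_on=["airport", "date"])
df["delayed"] = (df["delay_minutes"] > 30).astype(int)

features = ["airline", "origin", "dep_hour", "snowfall_in", "visibility_mi"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["airline", "origin"])],
        remainder="passthrough")),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(df[features], df["delayed"])

# "Six inches of snow in Detroit, late-afternoon departure -- what are my odds?"
scenario = pd.DataFrame([{"airline": "XY", "origin": "DTW", "dep_hour": 17,
                          "snowfall_in": 6.0, "visibility_mi": 1.0}])
print(model.predict_proba(scenario[features])[0, 1])
```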

IW: Many see machine learning as a way to close the data science talent gap because the computers themselves can use data to develop and adjust models on the fly. Will this machine learning workbench enable companies to build predictive applications without needing lots of data scientists?

Campbell: Absolutely. The idea is to be able to scale the efforts of the relatively scarce data scientists. The machine-learning work being done now is being handled by PhDs. They have their versions of duct tape and baling twine to keep things running, but they don't know they have a problem because they're running small numbers of models. If something breaks they go fix it. But over the last few years, the big ad networks using predictive models have run into problems because they need a model for every purchaser. They're running thousands of machine-learning models at once. They want models that take care of themselves and that spawn new models when they're no longer effective. … We want to be able to do that on behalf of customers.
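
The "models that take care of themselves" idea reduces, at its simplest, to a monitoring loop over a registry of deployed models. The sketch below is an assumption-laden illustration (the threshold, scoring window and retrain hook are invented for the example), not Microsoft's workbench.

```python
# Illustrative sketch of self-maintaining models: sweep a registry of deployed
# models and retrain (spawn a replacement for) any whose recent quality has
# drifted below a floor. Threshold, window size and registry shape are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DeployedModel:
    name: str
    predict: Callable
    recent_scores: List[float]   # e.g. rolling AUC or precision samples

ACCURACY_FLOOR = 0.75
WINDOW = 50

def needs_replacement(model: DeployedModel) -> bool:
    window = model.recent_scores[-WINDOW:]
    return len(window) == WINDOW and sum(window) / WINDOW < ACCURACY_FLOOR

def maintenance_pass(registry: Dict[str, DeployedModel],
                     retrain: Callable[[str], DeployedModel]) -> None:
    # One sweep over potentially thousands of models: replace the stale ones.
    for name, model in list(registry.items()):
        if needs_replacement(model):
            registry[name] = retrain(name)   # spawn a fresh model on current data
```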

There are fairly well-known models for things like fraud detection, spam detection and such. People are going to be building these models, and we expect to package them with a deployment environment on Azure. For every one person who can build a model, there might be 500 who could deploy it and run it in our cloud environment. Where do they get the data they'll need? That's going to be available on Azure as well.
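
For the "build once, deploy many times" scenario, the deployment side can be as simple as wrapping a pre-built model in a scoring endpoint. The sketch below uses Flask and a pickled model purely for illustration; it is not the Azure packaging Campbell alludes to, and the model file and input fields are hypothetical.

```python
# Minimal sketch of running someone else's pre-built model behind an HTTP
# scoring endpoint. Framework, model file and feature names are illustrative.
import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)
with open("fraud_model.pkl", "rb") as f:   # model built elsewhere by a data scientist
    model = pickle.load(f)

FEATURES = ["amount", "country_risk", "account_age_days"]  # assumed input schema

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json()
    row = [[payload[name] for name in FEATURES]]
    prob = model.predict_proba(row)[0, 1]
    return jsonify({"fraud_probability": float(prob)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```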

IW: How do you think Microsoft's big data prospects stack up to competitors?

Campbell: If you look at the Oracles, SAPs and even IBM, frankly, none of them are processing hundreds of petabytes daily for their own businesses. None of them have a copy of the Web every day [like we do for Bing], which gives us social signals. This confluence of our commercial data platform, SQL Server, HPC and our online services is turning out to be an interesting cauldron.

I really do believe that this new era is going to be horizontal, so it's not going to be locked up in any single app or application suite. I'm eager to tell our story because there are few other entities on the planet that match us in terms of having Internet services at scale as well as the commercial platform of our operating system, database, BI and high-performance computing offerings.
