News | 11/7/2012 11:23 AM

Microsoft In-Memory Move Challenges SAP, Oracle

Microsoft SQL Server "Project Hekaton" promises in-memory transactional processing that will stack up against SAP Hana and Oracle Exadata.

Microsoft made three announcements related to its SQL Server database business Wednesday, but the headliner was clearly "Hekaton," a preview project that promises to bring in-memory OLTP (transactional) processing to the next major release of the database. With that move, which analysts expect by 2014 or 2015, Microsoft will have its in-memory answer to both SAP Hana and Oracle Database with its Exadata and Exalytics appliance options.

In Microsoft's other two announcements, the company released Service Pack 1 (SP1) for Microsoft SQL Server 2012 and it announced that the next release of the Microsoft SQL Server Parallel Data Warehouse (PDW) will debut in the first half of 2013. With that, PDW will move from SQL Server 2008 R2 to SQL Server 2012, and it will also gain a new feature called PolyBase, which will support federated querying against the SQL Server 2012 relational database as well as data in Microsoft's just-released Hadoop platform, Microsoft HDInsight Server for Windows.

PolyBase contrasts with the SAP and Oracle approaches to big data in that it provides a single point of federated querying, using Hive to reach the data in Hadoop. That means customers can leave data in Hadoop in place, whereas Microsoft's competitors provide connectors and suggest that customers move MapReduce result sets into their relational database environments.
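The federated-query idea can be sketched in T-SQL. This is an illustrative, hypothetical example only: PolyBase had not shipped at press time, and the table names, the `HadoopCluster` data source and the exact DDL are assumptions, not the product's confirmed syntax.

```sql
-- Hypothetical PolyBase-style external table over data left in Hadoop.
-- Names (WebClicks, Customers, HadoopCluster) and syntax are illustrative.
CREATE EXTERNAL TABLE dbo.WebClicks (
    ClickTime DATETIME2,
    UserId    BIGINT,
    Url       NVARCHAR(400)
)
WITH (
    LOCATION    = '/logs/clicks/',   -- HDFS directory; data stays in Hadoop
    DATA_SOURCE = HadoopCluster      -- assumed pre-defined Hadoop source
);

-- A single federated query joins Hadoop-resident and relational data:
SELECT c.Url, COUNT(*) AS Clicks
FROM dbo.WebClicks AS c
JOIN dbo.Customers AS r ON r.UserId = c.UserId
GROUP BY c.Url;
```

The point of the sketch is the contrast the article draws: the external table is queried where it lives, rather than its contents being exported from Hadoop and loaded into the warehouse first.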

[ Want more on Microsoft's answer for big data? Read Microsoft Releases Hadoop On Windows. ]

Hekaton is the Greek word for one hundred, reflecting the 100-times peak performance improvement Microsoft has set as the design goal. In early lab testing of Hekaton, Microsoft executives said they're already seeing a 50-times improvement in transactional application performance. When the technology finally reaches the market with "the next major release of SQL Server" (the company offered no target dates for that step), Microsoft said it will fill in the last piece of an in-memory computing strategy that will have competitors beat.

"Our design approach is built into SQL Server," explained Doug Leland, Microsoft's general manager of SQL Server marketing, in an interview with InformationWeek. "SAP and Oracle are basically forcing customers to buy, learn and manage a separate solution, but we're taking a differentiated approach which is building the [in-memory] technology directly into the data platform that customers are already using."

Customers will be able to install Hekaton on the commodity servers they're using today, according to Microsoft, and the software will take advantage of the resources that are available in the box. "The system will look at how much storage and memory you have available and will make choices about what tables it loads up into main memory," Leland said.
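Leland's description implies a declarative opt-in model: individual tables are placed in memory while the rest of the database stays disk-based. A minimal sketch of what that might look like follows; the `MEMORY_OPTIMIZED` option and everything else here is hypothetical syntax, since Microsoft had announced no DDL at press time.

```sql
-- Hypothetical opt-in for an in-memory table under a Hekaton-style design.
-- Only this hot table is pinned in RAM; other tables remain disk-based.
CREATE TABLE dbo.OrderHot (
    OrderId BIGINT        NOT NULL PRIMARY KEY NONCLUSTERED,
    Placed  DATETIME2     NOT NULL,
    Amount  DECIMAL(12,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);  -- illustrative; actual syntax may differ
```

The appeal of such a design, per the article, is that it lives inside the existing SQL Server engine on commodity hardware rather than requiring a separate in-memory product.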

SQL Server customers expecting to move large or particularly demanding portions of a database workload into memory would clearly have to increase available RAM to get true in-memory performance, but the database could always fall back on conventional disk-based retrieval if RAM isn't available. In SAP's approach, performance "falls off a cliff" when memory capacity requirements are surpassed, according to Leland, necessitating software and hardware upgrades. And in contrast to Oracle's cache-centric Exadata approach, Hekaton will deliver true in-memory performance.

So how will things stack up in a three-way transactional processing competition between SAP Hana, Microsoft SQL Server with Hekaton and Oracle Exadata? "The Flash used in Exadata is not memory, but it is fast," Gartner analyst Don Feinberg said in an interview with InformationWeek. "You give up the potential for real-time analytics and it won't be as fast as Hana or Hekaton, but for most clients of Exadata, it's going to be fast enough."

Feinberg expects that Oracle, too, will bring in-memory capabilities to Oracle Database -- something currently limited to the TimesTen database in the Exalytics appliance -- but there's no telling when that might happen. SAP is promising core application transaction processing on Hana sometime next year. If Hekaton has to wait for the next "201X" release of SQL Server, it would be unlikely to ship before late 2014 at the earliest, as SQL Server 2015, though an R2-designated release of SQL Server 2012 might bring it to market sooner.

Microsoft has already delivered in-memory business intelligence and analytics with Microsoft PowerPivot and Power View. PowerPivot was introduced as an in-memory-analysis plug-in to Microsoft Excel with the release of SQL Server 2008 R2 in 2010. Power View is an in-memory data visualization option introduced in April with the release of SQL Server 2012.

With the release of SQL Server 2012 SP1, the database will support the upcoming release of Office 2013, which builds PowerPivot and Power View directly into Microsoft Excel as data-exploration and visualization options available from the ribbon menu. That means customers will no longer have to download plug-ins to Excel in order for users to tap into data on enterprise SQL Server databases.

Microsoft says the PowerPivot, Power View and xVelocity in-memory column-store and multidimensional-analysis capabilities built into SQL Server 2012 are already accessible to 1.5 million customers. With Office 2013, now available in beta and expected early next year, in-memory analysis will be accessible to all users of the next generation of Excel.

Comments
Dmitrit | 11/27/2012 12:08 AM
re: Microsoft In-Memory Move Challenges SAP, Oracle
I think the proof of the technology is in customer implementations. HANA performance papers show simple star schema with a single large fact table. Once you go beyond that into real world implementations (multiple large fact tables and many to many relationships between dimensions) HANA performance remains unknown. Especially when resolving the query requires data movement between the nodes - and may result in eventual data spills on the disk.
Rob Klopp | 11/15/2012 6:38 PM
re: Microsoft In-Memory Move Challenges SAP, Oracle
Leland is incorrect in suggesting that HANA "falls off a cliff" when the data size exceeds the memory available. HANA pages data to disk when memory becomes constrained.

When Hekaton arrives in 2015 MSFT will be five years and 3-4 release cycles behind HANA. It is only software so they might catch up... in 2018 or 2019.
sybpro | 11/8/2012 3:00 PM
re: Microsoft In-Memory Move Challenges SAP, Oracle
SAP Sybase brought in-memory OLTP (transactional) processing to ASE 15.5 three years ago.