Commentary | Doug Henschen | 3/4/2014 09:06 AM

Two Approaches To In-Memory Database Battle

In-memory options from IBM, Microsoft, and Oracle aren't the same as SAP's all-in-memory Hana platform.

2014 is shaping up as the year in-memory technology will really burst onto the scene, with Microsoft and Oracle joining the bandwagon SAP got rolling more than three years ago. Yes, in-memory technology has been around a lot longer than three years, with in-memory databases like Oracle TimesTen and IBM solidDB dating back to 1996 and 1992, respectively. But those products have served in niche roles.

SAP has championed its Hana platform as something that can run an entire company, handling both mission-critical transactional applications (like ERP and CRM) and the analytic needs heretofore served by the separate database management systems (DBMSes) underpinning data warehouses and data marts.

We've already seen a number of vendors join the in-memory fray. Last year IBM announced BLU Acceleration for DB2, which combines columnar compression with in-memory processing to accelerate analytics. Also last year, Teradata introduced an Intelligent Memory feature that automatically moves the most-often-queried data into RAM to ensure the fastest possible query response. Like BLU, Intelligent Memory addresses analytics alone, not transactional applications.
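
To give a rough sense of the hot-data idea behind a feature like Intelligent Memory, here's a minimal Python sketch of frequency-based promotion into RAM. The HotDataCache class, its capacity, and its eviction rule are invented for illustration; this is a conceptual toy, not Teradata's (or IBM's) actual mechanism.

```python
# Conceptual toy: keep the most frequently queried rows in an in-memory
# tier and serve everything else from "disk" (simulated here by a dict).
from collections import Counter

class HotDataCache:
    def __init__(self, disk_store, capacity=1000):
        self.disk_store = disk_store      # fallback storage (simulated disk)
        self.capacity = capacity          # max number of hot keys kept in RAM
        self.ram = {}                     # the in-memory tier
        self.hits = Counter()             # per-key query frequency

    def get(self, key):
        self.hits[key] += 1
        if key in self.ram:               # already hot: answered from memory
            return self.ram[key]
        value = self.disk_store[key]      # cold path: fetch from disk
        self._maybe_promote(key, value)
        return value

    def _maybe_promote(self, key, value):
        if len(self.ram) < self.capacity:
            self.ram[key] = value
            return
        # Evict the coldest resident key if the new one is queried more often.
        coldest = min(self.ram, key=lambda k: self.hits[k])
        if self.hits[key] > self.hits[coldest]:
            del self.ram[coldest]
            self.ram[key] = value

# After repeated queries, popular keys end up answered from RAM.
disk = {f"row{i}": f"value{i}" for i in range(10_000)}
cache = HotDataCache(disk, capacity=100)
for _ in range(50):
    cache.get("row42")                    # "row42" becomes hot
print("row42" in cache.ram)               # True
```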

[What about in-memory in the big data realm? Read In-Memory Options Abound.]

We're also seeing in-memory DBMSes and options introduced by small NoSQL and NewSQL vendors including Aerospike, MemSQL, VoltDB, and, most recently, DataStax. But the competition among these vendors will amount to mere skirmishes compared with the battle that will break out later this year, when DBMS customer-count leader Microsoft and DBMS revenue-market-share leader Oracle introduce in-memory options for their mainstream databases. In-Memory OLTP (formerly Hekaton) will debut with the Microsoft SQL Server 2014 release expected by mid-year.

As we explore in this week's cover story, In-Memory Databases: Do You Need The Speed?, Hekaton has been in the works for more than three years, and the two alpha/beta customers I interviewed in depth are already running the software in production with great results. The feature is going to be a huge boon to Microsoft's vast SQL Server customer base.
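
To give a concrete picture of what adopting the feature involves, here's a minimal sketch, assuming a Python client via pyodbc and hypothetical server, database, and table names (none of which come from the cover story): a table is declared memory-optimized at creation, and ordinary T-SQL keeps working against it.

```python
# Minimal sketch of opting a table into SQL Server 2014's In-Memory OLTP.
# Connection string, database, and table names are hypothetical; the database
# must already have a memory-optimized filegroup before this DDL will succeed.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=myserver;DATABASE=SalesDb;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

# Declare the table as memory-optimized; SCHEMA_AND_DATA keeps it fully
# durable, so rows survive a restart just as in a conventional table.
cursor.execute("""
CREATE TABLE dbo.ShoppingCart (
    CartId      INT        NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT        NOT NULL,
    CreatedDate DATETIME2  NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
""")

# Existing application code keeps issuing ordinary T-SQL against the table --
# this is the "non-disruptive" part of the vendors' pitch.
cursor.execute(
    "INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate) "
    "VALUES (?, ?, SYSUTCDATETIME())", 1, 42)
```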

I'm guessing Oracle's In-Memory Option for Oracle Database 12c will also be a big win for that company's hundreds of thousands of customers, but we really don't know much about the feature at this point. The option was pre-announced by CEO Larry Ellison at Oracle Open World last year, but it won't see the light of day until late this year at the earliest. I'm expecting beta preview announcements at this year's Oracle Open World with general release to follow early next year at the soonest.

SAP is in front on in-memory tech, but Microsoft, Oracle, and IBM will offer in-memory options to hundreds of thousands of current customers.

There are important technical differences between SAP's all-in-memory Hana platform and the in-memory add-on features the others are grafting onto their conventional databases. What's more, all of these vendors say their in-memory options can be deployed non-disruptively -- meaning without ripping and replacing existing applications.

Only SAP is promising "radical simplification," whereby copies of data and, in SAP's view, "redundant" layers of infrastructure created to get around old disk input/output bottlenecks can be eliminated. As you can read in our coverage, even SAP customers have their doubts as to whether "non-disruptive deployment" and "radical simplification" can both be delivered at once.
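
For a rough illustration of what eliminating those layers means in practice, here's a toy Python sketch using invented data (the sales rows and region grouping are hypothetical): the pre-built summary is a second copy of the data that has to be refreshed, while an in-memory system can compute the same answer from the detail rows on demand.

```python
# Toy illustration of the "radical simplification" argument: with a slow,
# disk-bound warehouse you maintain pre-built summaries; with the detail
# rows in RAM you can aggregate on demand and drop the extra layer.
from collections import defaultdict

sales = [  # detail rows that would normally feed a data warehouse
    {"region": "EMEA", "amount": 1200.0},
    {"region": "EMEA", "amount": 830.0},
    {"region": "APAC", "amount": 640.0},
    {"region": "AMER", "amount": 2100.0},
]

# Old pattern: a separate, periodically refreshed aggregate -- an extra copy
# of the data that can drift out of date between refreshes.
summary_by_region = {"EMEA": 2030.0, "APAC": 640.0, "AMER": 2100.0}

# In-memory pattern: compute the same answer straight from the detail rows
# whenever it's asked for, so there is no copy to build, store, or refresh.
def totals_by_region(rows):
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

assert totals_by_region(sales) == summary_by_region
```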

The real point of this coverage is to explore the need for speed in the context of business. As we document, through interviews with gaming company Bwin.party, retail services firm Edgenet, bicycle manufacturer Avon Cycles, and packaged goods companies ConAgra and Maple Leaf Foods, the need for in-memory speed is real, and the business benefits can be enormous. Read the feature to discover how in-memory options might benefit your business.

Download the entire March 3 issue of InformationWeek, In-Memory Databases.

You can use distributed databases without putting your company's crown jewels at risk. Here's how. Also in the Data Scatter issue of InformationWeek: A wild-card team member with a different skill set can help provide an outside perspective that might turn big data into business innovation (free registration required).

Doug Henschen is Executive Editor of InformationWeek, where he covers the intersection of enterprise applications with information management, business intelligence, big data and analytics. He previously served as editor in chief of Intelligent Enterprise, editor in chief of ... View Full Bio

Comments
McObject
User Rank: Apprentice
4/11/2014 | 12:15:36 PM
Inaccurate dates
Doug, SolidDB didn't have an in-memory option (then called the Boost Engine) until the mid-2000s. Prior to that, it was a conventional disk-based DBMS. It's not really relevant to the main thrust of your article, but I thought the historical record should be clear.
MarkF018
User Rank: Apprentice
3/10/2014 | 7:07:38 PM
What Attributes Makes It an In-Memory DBMS?
Doug -

Although I've been doing quite a bit of reading, I'm still trying to get my head around just what makes something an In-Memory DBMS. One of my touchstones for comparing what appears to be a relatively new notion is the relatively older IBM i operating system (a.k.a. AS/400, S/38) with its integrated DBMS. Within this OS, every byte of the DB resides persistently within the system's address space from the moment the byte is created. Although IBM i does support a notion of process-local addressing, there need not be any mapping of DB files into an address space, since the tables and indexes (and what have you) already have an address that any process/thread can use during its access. Now couple that with a large amount of DRAM -- I understand now approaching or above the terabyte range -- and the DRAM acts as a cache for the DB objects residing persistently on HDD/SSD. Given all of that, picture a DRAM large enough to hold the active DB objects and all other control structures. So does this constitute an "In-Memory DBMS"?

Mark
D. Henschen
User Rank: Author
3/6/2014 | 5:31:49 PM
Re: The two camps explained
No, you didn't get that quite right. SAP and Oracle are competitors, and both are proposing in-memory technology to provide speed advantages where needed. SAP Hana is an all-in-memory database, and SAP says its platform won't just speed up crucial processes; it will also let you eliminate data aggregates, materialized views, and other copies of data that were created to get around disk I/O constraints. Oracle has a very popular incumbent database, and it's adding an option to put selected database tables into memory while remaining compatible with legacy deployments of its database and the legacy apps that run on it. Microsoft is taking the same approach (and will get there first, by mid-2014) with SQL Server 2014.

Readers should dig into this column and the in-depth feature to get deeper insight into what each strategy promises and what customers using these products have to say about advantages and benefits.
BRIAN_CIAMPA
User Rank: Strategist
3/5/2014 | 12:34:44 PM
Re: The two camps explained
Thanks so much for the article. Forgive the question if it seems too basic (I'm kind of new to the in-memory stuff), but what is the purpose of having two in-memory options? Is it that Oracle will be offering a relational database that can serve as a transactional system and also allow for in-memory analytics, while HANA only offers in-memory analytics?
D. Henschen
User Rank: Author
3/4/2014 | 12:26:21 PM
The two camps explained
One camp is adding in-memory features to conventional database management systems -- DataStax, IBM, Microsoft, Oracle, Teradata -- while the other camp is starting with all-in-memory DBMSs (Aerospike, MemSQL, SAP Hana, VoltDB, and a few others). Interestingly, MemSQL recently added flash and disk storage options, acknowledging the occasional need for longer-term, historical data that doesn't belong in RAM.