Commentary
6/11/2014
11:01 AM
Doug Henschen

Oracle In-Memory Option: 6 Key Points

Oracle Database In-Memory comes with big performance promises. But there are a few performance tradeoffs, change requirements, and acceleration constraints to examine.

Oracle CEO Larry Ellison on Tuesday promised performance gains without compromise with Oracle Database In-Memory, an option set for general release in July. But in a familiar pattern for in-memory technology, the claims must be accompanied by a few important caveats.

Oracle's key claims about the in-memory option are that it will deliver dramatic analytical performance improvements and unexpected transactional speed gains, all without requiring changes to existing applications. It may be possible to achieve each of these goals independently, but as an Oracle executive acknowledged in a follow-up interview after Tuesday's presentation, performance improvements will vary and in some cases may require changes to applications.

[Want more on the in-memory options and choices? Read In-Memory Databases: Do You Need The Speed?]

Here are six crucial facts about the option, along with explanations from Oracle and comments from two in-memory competitors.

1. BI and analytical apps may not work properly if you get rid of analytical indexes. In Tuesday's announcements about Oracle Database In-Memory, Ellison promised "at least 100 times" faster analytical performance thanks to the in-memory columnar data store introduced by this option. Transactional performance is only two to four times faster, according to Oracle, because the row store -- the heart of the database that runs transactional applications -- is unchanged. That data remains on spinning disks.

In advance of the announcement, in-memory competitors SAP and VoltDB questioned whether transactional performance can improve at all, given that the database will now have to sync data between the row store and the new columnar store. But Ellison said Oracle's trick in achieving both goals is eliminating the analytical indexes that are no longer required once customer-selected tables and partitions are placed in memory. The question is, how can existing BI and analytical applications designed to work with those indexes run without changes?
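The mechanism Oracle describes is declarative: an administrator marks tables or partitions for the in-memory column store rather than rewriting the application. A minimal sketch of that DDL, using a hypothetical SALES table (names and compression/priority choices here are illustrative, not from the article):

```sql
-- Populate a whole table into the in-memory column store,
-- compressed for query performance and loaded eagerly:
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- Or target only a hot partition:
ALTER TABLE sales MODIFY PARTITION sales_q2_2014 INMEMORY;

-- And opt a table back out again:
ALTER TABLE sales NO INMEMORY;
```

Once a table is marked INMEMORY, the 12c optimizer can route analytic scans to the columnar copy instead of an analytical index, which is what makes dropping those indexes thinkable at all.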

"The optimizer on 12c has been enhanced to be aware of the in-memory store, and if it works correctly, those BI queries will be redirected," explained Tim Shetler, Oracle's group VP, product management. The important caveat? "It's pretty easy to forget all the applications that depend on indexes, so we're giving cautious guidance to customers to test before eliminating indexes."

Customers can test with a database setting that can temporarily hide indexes without deleting them. "Pick an index, run for a week, and if important applications run slowly, maybe that index needs to stay or you need to do something differently."
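The hide-without-deleting setting Shetler describes corresponds to Oracle's invisible-index feature, available since 11g. A sketch of the test cycle, assuming a hypothetical index name:

```sql
-- Hide the index from the optimizer without dropping it:
ALTER INDEX sales_amt_ix INVISIBLE;

-- Run the workload for a week; if critical queries regress,
-- restore visibility rather than rebuilding from scratch:
ALTER INDEX sales_amt_ix VISIBLE;

-- A single session can still see hidden indexes for comparison testing:
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;
```

Because the index segment is still maintained while invisible, this only tests query plans; the transactional-speed benefit of eliminating index maintenance arrives only when the index is actually dropped.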

Transactional performance can only improve if you eliminate analytical indexes. But the more of these indexes you remove, the more likely it is that you'll need to rewrite BI and analytical apps to run on the in-memory option.

2. Transactional performance might pale in comparison with in-memory rivals.  In contrast to Oracle, Microsoft is focusing strictly on transactional performance with its Microsoft SQL Server In-Memory OLTP option. Microsoft cites low-latency transactional demand from telcos, gaming companies, financial services, and online retailers. Gaming company Bwin.party, for example, reported 10 times to 15 times faster throughput using Microsoft's option for applications that were formerly disk-I/O-bound.

According to SAP, another proponent of in-memory transactional acceleration, Oracle "has not addressed the I/O issues associated with data on disk," said Irfan Khan, senior VP and general manager of SAP Database & Technology, in an email interview with InformationWeek. "While it may remove some maintenance of certain analytic indexes, it adds new overhead keeping two versions of data in sync across the row store and columnar cache."

If transactional acceleration is needed, Oracle does offer the Oracle TimesTen in-memory database, Shetler pointed out. "But transactional applications represent a relatively small market opportunity," he asserted. "It's a little bit confusing why Microsoft has focused on transactions as their key point because there's so much [going on in] analytics now."

That's certainly true, but columnar databases geared to analytics "have been in the market for several years now," said Ryan Betts, CTO of VoltDB in an email interview. "In-memory is not only for analytics as Oracle would have you believe." VoltDB's transaction-oriented in-memory database is used in networking, gaming, financial services, and online marketing and retailing applications.

3. Dramatic acceleration will require a little work. The good news is that all applications that currently run on Oracle will run on, and see some performance improvement with, Oracle Database In-Memory. As for all those dramatic performance gains cited during Tuesday's presentation -- queries going from nearly 4 hours down to 4 seconds, or from 58 hours down to 13 minutes? That might take a little work, but it's work that will be done primarily by software vendors.

"Those speedups on Oracle applications weren't achieved just by deploying the in-memory option," clarified Shetler of Oracle, noting that Oracle's apps teams were part of the beta program and learned how to exploit the option. "When they modified the apps to take advantage of the in-memory technology, that's when they got the levels of performance improvement shared in the presentation."

In more good news, many applications, including some Oracle applications, have yet to be certified on 12c, so adaptations to the in-memory option can and will be carried out by software vendors. Customers will see what Shetler described as "modest" performance improvements on current applications without any changes. But with upgrades to 12c-certified apps over the next couple of years, customers will be able to take full advantage.


Doug Henschen is Executive Editor of InformationWeek, where he covers the intersection of enterprise applications with information management, business intelligence, big data and analytics. He previously served as editor in chief of Intelligent Enterprise, editor in chief of ...
Comments
bhall2
User Rank: Apprentice
6/19/2014 | 12:37:46 PM
Re: In-memory and indexes, a question.....
Charlie,

Yes, apps create indexes on tables/partitions to speed up access to rows based on queries against column(s). Without that index the database may have to do a full table scan instead (slow for large tables). And yes, an application can submit SQL with a hint (for very specific cases) directing the optimizer to use a certain index. However, that has been and always will be a hint, not an absolute.

So when you drop the index on that column and instead tell Oracle to load it into memory, the optimizer will now ignore your hint if you had one (which it may have done before anyhow) and just use the faster columnar memory copy instead (when it makes sense).

So no changes beyond dropping the index and telling the database to cache the column instead. This is what newer versions of apps (the "changes") will do: when it makes sense, a new version will drop all the indexes and tell Oracle to cache. Not exactly tricky -- other than doing it in a smart order based on how much memory you have available for the column cache.

Bryan
philweir
User Rank: Apprentice
6/13/2014 | 3:57:10 PM
Re: All-in-memory versus not: mixed storage is winning
SAP is not positioning an 'all-in-memory' approach. Its position is to use lower-cost storage for warm data and DRAM for hot data. Unfortunately, the definition of hot and warm differs by user. The term 'center' must mean the data modelling application in your context, which Hana is more than capable of handling but does not require all the data to be in-memory to do so. Are you saying the center is the database?
D. Henschen
User Rank: Author
6/13/2014 | 3:22:03 PM
Re: All-in-memory versus not: mixed storage is winning
Of course it's possible for Hana or another in-memory database to tap into information from other systems, but are you going to retain all that information in-memory? And if not, will you get in-memory performance? SAP's assertion that you can get rid of the data warehouse assumes that SAP is the center of your DW universe. It also assumes you don't have a lot of history, or that you're prepared to spend a lot of money on DRAM to retain it all.
philweir
User Rank: Apprentice
6/13/2014 | 2:32:22 PM
Re: All-in-memory versus not: mixed storage is winning
Dan - you make two points which are relevant. First, the 'work' to be done is done by the software vendors, which means the database/in-memory speeds are possible only with the application vendors using it. That is what SAP is doing and others are not. SAP ignited this technology discussion because its applications were being slowed by the db providers. It took responsibility for the solution used by the users and removed the constraint causing the issue. Further, it is reinvesting in the applications to allow them to take advantage of the speed. Second, your inclusion of Larry's suggestion that it won't work with data not coming from SAP puts you in the position of saying that data warehousing would never have worked, because by definition data warehouses take in data from other sources. The real point is that the design of the data warehouse was flawed because of its rigidity, which is reduced with in-memory, and with the speed this creates the justification over this band-aiding of 1980s technology.
D. Henschen
User Rank: Author
6/13/2014 | 9:58:36 AM
Re: In-memory and indexes, a question.....
Not quite. In this case they're talking about analytical (columnar) indexes, but because those columns are now available in memory, Oracle is saying you can get rid of them because the in-memory store is so fast. SAP makes similar claims (about all indexes). Whether that can be done "without disruption" is another matter. If an app is looking for, or dependent upon, an index that's removed, it may not work properly. Oracle is saying its DBMS optimizer has been trained to reinterpret the index call against the in-memory store, but there's no assurance that will work in every case.
Charlie Babcock
User Rank: Author
6/12/2014 | 8:50:10 PM
In-memory and indexes, a question.....
I thought specific applications worked with specific indexes that the application, in an early pass, prompted the database system to build through its query patterns. By getting rid of indexes, Oracle must mean they get rid of them temporarily for purposes of in-memory operations? Restore them afterward for routine operations? 
D. Henschen
User Rank: Author
6/11/2014 | 1:02:19 PM
All-in-memory versus not: mixed storage is winning
Oracle, like Microsoft, may have an edge over all-in-memory purists, like SAP, when it comes to popular sentiment. Yes, the all-in-memory route means there will be no compromises when it comes to performance and data exploration. But even in-memory advocates in the analytics space like MemSQL and VoltDB recognize that there's a need for warm and cool data stored on less-expensive options like flash and disk. I've talked to SAP customers who aren't so sure they can get rid of data warehouses because they have lots of historical data or because not everything they do runs inside of SAP.

Oracle CEO Larry Ellison argued "why not put the hot data in memory and leave everything else in flash or disk?" I think that makes sense to many, and with dynamic, intelligent hot/warm/cold loading and data movement (still to come in this in-memory offering) it's possible to get most of the performance gain you need without breaking the bank. SAP would argue that it can size down the total data footprint by storing data only once in a compressed, columnar, in-memory form and drive all uses. But what about historical data or data not flowing from SAP? People know what they have today and just see big dollar signs when they think of putting it all in RAM.