Commentary by Doug Henschen, 6/11/2014 11:01 AM
Oracle In-Memory Option: 6 Key Points

Oracle Database In-Memory comes with big performance promises. But take a close look at the performance tradeoffs, change requirements, and acceleration constraints.

of the technology. Importantly, it will be up to the vendors, not customers, to do that optimization work as part of 12c certification.

4. You can't explore data in memory unless you put it there. Ellison spoke of exploratory analysis scenarios and the possibility of getting rid of predefined multi-dimensional cubes, but how is this possible without everything in memory?

"In this first release, there will be no dynamic loading of data into memory [based on queries], so you have to predefine that," said Shetler. "If you put five columns in memory and you query demands access to six, you're hobbling response times because you're going to flash or disk for that last column. In our tests, customers tended to put whole tables or whole partitions into memory, and if they have enough DRAM, they don't even think about economizing."

[Want more on the in-memory options and choices? Read In-Memory Databases: Do You Need The Speed?]

In a future release, Oracle is planning to add automated advisor tools that look at data-access patterns over time and suggest which tables to place into memory, Shetler said. Microsoft introduced similar tools with its SQL Server In-Memory OLTP option, and customers say they greatly simplify decisions about which data to store in memory.

5. Oracle is not proposing to get rid of data warehouses. At one point during his presentation, Ellison said the In-Memory option would "make everything a lot simpler ... and you can use a transactional system as a data warehouse." This sounds like an echo of SAP's "radical simplification" claims, but it seemed unexpected coming from Oracle, which makes tons of money on data warehouse deployments. What did Ellison really mean?

"We're not talking about getting rid of data warehouses, because they're not just for analytics, they're a system of record," said Shetler, noting that Ellison "probably meant to say or should have said" data marts. "Marts tend to be separate copies of the production database used for analytics and reporting. With the In-Memory Option, you run mart applications against the production database."

Oracle CEO Larry Ellison announces July availability of the Oracle Database In-Memory option.

6. The in-memory option does not require Oracle Engineered Systems. Despite all the touting of Exadata and the M6 Sparc server during Tuesday's presentation, the in-memory option will run on any hardware certified to run Oracle 12c, Shetler said. "Scale-out [deployment] and fault-tolerance are dependent upon the RAC clustering technology, but that's available on any [certified] hardware."
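Because enabling the feature is a database-configuration step rather than a hardware purchase, setup looks roughly the same on any certified 12c system. A minimal sketch, assuming Oracle's published INMEMORY_SIZE initialization parameter and V$INMEMORY_AREA view; the size shown is hypothetical, and the parameter is static, so it takes effect only after an instance restart.

    -- Sketch only: the 64G figure is hypothetical; size the store to your own hot data set.
    ALTER SYSTEM SET INMEMORY_SIZE = 64G SCOPE=SPFILE;

    -- After restarting the instance, confirm the column store allocation:
    SELECT pool, alloc_bytes, used_bytes FROM v$inmemory_area;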

Last Impressions

Oracle took its in-memory option through a three-month, 60-company beta program, according to Shetler. In contrast, Microsoft went through its customary private and public community technology previews with In-Memory OLTP. Each preview lasted months, and in the end, Microsoft saw tens of thousands of downloads and bug reports from customers. By comparison, the Oracle release seems a little rushed. Rest assured that features like deployment advisors and dynamic loading based on query patterns -- techniques already used by competitors, and by Oracle itself elsewhere -- will appear in a follow-up release of the in-memory option.

Make no mistake: Oracle Database In-Memory will bring important benefits to customers. And Oracle is certainly not alone in highlighting the high points while soft-pedaling certain details of its in-memory promise. SAP's promised "radical transformation" of IT infrastructure, for example, can't happen "without disruption" as it claims -- particularly in analytical deployments.

With Oracle Database In-Memory, analytical improvements are a sure thing and modest transactional improvements are possible, but you or your software vendors will have to make some changes to make the most of the technology.

IBM, Microsoft, Oracle, and SAP are fighting to become your in-memory technology provider. Do you really need the speed? Get the digital In-Memory Databases issue of InformationWeek today.

Doug Henschen is Executive Editor of InformationWeek, where he covers the intersection of enterprise applications with information management, business intelligence, big data, and analytics. He previously served as editor in chief of Intelligent Enterprise.
Comments
bhall2, User Rank: Apprentice
6/19/2014 | 12:37:46 PM
Re: In-memory and indexes, a question.....
Charlie,

Yes, apps create indexes on tables/partitions to speed up access to rows based on queries against particular columns. Without that index, the database may have to do a full table scan instead (slow for large tables). And yes, an application can submit SQL with a hint (for very specific cases) telling the optimizer to use a certain index. However, that has always been, and always will be, a hint, not an absolute.

So when you drop the index on that column and instead tell Oracle to load it into memory, the optimizer will ignore your hint if you had one (which it may have done before anyhow) and just use the faster columnar in-memory copy instead (when it makes sense).

So there are no changes beyond dropping the index and telling the database to cache the column instead. That is what newer versions of apps (the "changes") will do: where it makes sense, a new version will drop the relevant indexes and tell Oracle to cache the data. Not exactly tricky, other than doing it in a smart order based on how much memory you have available for the column cache (see the sketch below).

Bryan
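To illustrate the migration Bryan describes, here is a minimal sketch, assuming Oracle's published 12c syntax; the index, table, and hint below are hypothetical. An invalid hint is silently ignored by the optimizer, which is why existing SQL can keep running after the index is dropped.

    -- Sketch only: hypothetical object names; illustrates swapping an analytic index
    -- for the in-memory column store.
    DROP INDEX sales_region_ix;
    ALTER TABLE sales INMEMORY;

    -- A query that previously hinted the index can stay as written: the hint now
    -- names a nonexistent index, so the optimizer ignores it and can scan the
    -- in-memory columnar copy instead.
    SELECT /*+ INDEX(s sales_region_ix) */ region, SUM(amount)
      FROM sales s
     GROUP BY region;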
philweir, User Rank: Apprentice
6/13/2014 | 3:57:10 PM
Re: All-in-memory versus not: mixed storage is winning
SAP is not positioning an "all-in-memory" approach. Its position is to use lower-cost storage for warm data and DRAM for hot data. Unfortunately, the definition of hot and warm differs by user. The term "center" must mean the data modelling application in your context, which Hana is more than capable of doing but does not require all the data to be in memory to do so. Are you saying the center is the database?
D. Henschen, User Rank: Author
6/13/2014 | 3:22:03 PM
Re: All-in-memory versus not: mixed storage is winning
Of course it's possible for Hana or another in-memory database to tap into information from other systems, but are you going to retain all that information in memory? And if not, will you get in-memory performance? SAP's assertion that you can get rid of the data warehouse assumes that SAP is the center of your DW universe. It also assumes you don't have a lot of history, or that you're prepared to spend a lot of money on DRAM to retain it all.
philweir, User Rank: Apprentice
6/13/2014 | 2:32:22 PM
Re: All-in-memory versus not: mixed storage is winning
Doug - you make two relevant points. First, the "work" to be done falls to the software vendors, which means the database/in-memory speeds are possible, but only if the application vendors use them. That is what SAP is doing and others are not. SAP ignited this technology discussion because its applications were being slowed by the db providers. It took responsibility for the solution used by the users and removed the constraint causing the issue. Further, it is reinvesting in the applications to allow them to take advantage of the speed. Second, your inclusion of Larry's suggestion that it won't work with data not coming from SAP puts you in the position of saying that data warehousing would never have worked, because by definition warehouses take in data from other sources. The real point is that the design of the data warehouse was flawed because of its rigidity, which is reduced with in-memory, and together with the speed that creates the justification over this band-aiding of 1980s technology.
D. Henschen, User Rank: Author
6/13/2014 | 9:58:36 AM
Re: In-memory and indexes, a question.....
Not quite. In this case they're talking about analytical (columnar) indexes, but because columns are now available in memory, Oracle is saying you can get rid of them because the in-memory store is so fast. SAP makes similar claims (about all indexes). Whether that can be done "without disruption" is another matter. If an app is looking for, or dependent upon, an index that's removed, it may not work properly. Oracle is saying its DBMS optimizer has been trained to reinterpret the index call against the in-memory store, but there's no assurance that will work in every case.
Charlie Babcock, User Rank: Author
6/12/2014 | 8:50:10 PM
In-memory and indexes, a question.....
I thought specific applications worked with specific indexes that the application, in an early pass, prompted the database system to build through its query patterns. By getting rid of indexes, Oracle must mean they get rid of them temporarily for purposes of in-memory operations? Restore them afterward for routine operations? 
D. Henschen, User Rank: Author
6/11/2014 | 1:02:19 PM
All-in-memory versus not: mixed storage is winning
Oracle, like Microsoft, may have an edge over all-in-memory purists, like SAP, when it comes to popular sentiment. Yes, the all-in-memory route means there will be no compromises when it comes to performance and data exploration. But even in-memory advocates in the analytics space like MemSQL and VoltDB recognize that there's a need for warm and cool data stored on less-expensive options like flash and disk. I've talked to SAP customers who aren't so sure they can get rid of data warehouses, because they have lots of historical data or because not everything they do runs inside of SAP.

Oracle CEO Larry Ellison argued, "Why not put the hot data in memory and leave everything else in flash or disk?" I think that makes sense to many, and with dynamic, intelligent hot/warm/cold loading and data movement (still to come in this in-memory offering), it's possible to get most of the performance gain you need without breaking the bank. SAP would argue that it can shrink the total data footprint by storing data only once in a compressed, columnar, in-memory form and drive all uses from that copy. But what about historical data, or data not flowing from SAP? People know what they have today and just see big dollar signs when they think of putting it all in RAM.