News | Data Management // Big Data Analytics | 3/3/2014 09:06 AM

In-Memory Databases: Do You Need The Speed?

IBM, Microsoft, Oracle, and SAP are ramping up the fight to become your in-memory technology provider.

When you compare data access from RAM, measured in nanoseconds, with data access from disk, measured in milliseconds, it's akin to comparing a 1,200-mph F/A-18 fighter jet to a garden slug.

You can't capture this entire speed advantage, because CPU processing time and other constraints remain in the mix, but disk I/O has long throttled performance. In-memory performance improvements vary by application, data volume, data complexity, and concurrent-user load, but Hana customers report that the differences can be dramatic.
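As a rough back-of-the-envelope illustration, the raw gap between memory and disk access can be worked out directly; the latency figures below are common rules of thumb, not measurements from any vendor quoted in this story.

```python
# Rough rule-of-thumb latencies (assumptions for illustration, not benchmarks).
ram_access_s = 100e-9   # ~100 nanoseconds per main-memory access
disk_read_s = 5e-3      # ~5 milliseconds per random read from spinning disk

speedup = disk_read_s / ram_access_s
print(f"RAM is roughly {speedup:,.0f}x faster per access")  # ~50,000x

# Real-world gains are far smaller because CPU time, query processing, and
# concurrency overhead remain in the mix, which is why the customers below
# report improvements in the tens to hundreds of times, not tens of thousands.
```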

Maple Leaf Foods, a $5 billion-a-year Canadian supplier of meats, baked goods, and packaged foods, finds that profit-and-loss reports that took 15-18 minutes on an SAP Business Warehouse deployed on conventional databases now take 15-18 seconds on the Hana Platform. This is an analytical example demonstrating 60 times faster performance. Kuljeet Singh Sethi, CIO of Avon Cycles, an Indian bicycle manufacturer now running SAP Business Suite on Hana, said a complex product delivery planning process that used to take 15-20 minutes now takes "just a few seconds" on Hana. This is a transactional example demonstrating 300-400 times faster performance (if "a few" seconds is three).

What's important, though, is what that faster speed lets Maple Leaf and Avon do that they couldn't do before. For example, both companies are moving to near-real-time data loading instead of overnight batch processes, so they can support same-day planning and profitability analysis. This step could improve manufacturing efficiency and customer service, as well as simplify the data management processes themselves by eliminating the need for data aggregations.

Similar claims of performance gains come from outside the SAP camp. Temenos, a banking software provider that uses IBM's in-memory-based BLU Acceleration for DB2 (introduced in April), reports that queries that used to take 30 seconds now take one-third of a second, thanks to BLU's columnar compression and in-memory analysis. That speed will make the difference between showing only a few recent transactions online or on mobile devices and providing fast retrieval of any transaction, says John Schlesinger, Temenos's chief enterprise architect. Given that an online or mobile interaction costs the bank 10-20 cents to support, versus $5 or more for a branch visit, the pressure to deliver fast, unfettered online and mobile performance will only increase, Schlesinger says.

In contrast to SAP Hana, which addresses both analytical and transactional applications, BLU Acceleration focuses strictly on analytics. Microsoft has gone the other route, focusing on transactions with its In-Memory Online Transaction Processing (OLTP) option for SQL Server 2014, which is set for release by midyear. Tens of thousands of Microsoft SQL Server customers have downloaded previews of the product, which includes the in-memory feature formerly known as Project Hekaton.

(Source: InformationWeek 2013 Enterprise Application Survey of 263 business technology professionals with direct or indirect responsibility for enterprise applications)

Edgenet, a company that provides embedded software-as-a-service for retailers, started testing Microsoft's In-Memory OLTP two years ago, and the technology has let it move from overnight batch loading to near-real-time insight into store-by-store product availability. AutoZone, an Edgenet customer, carries 300,000 items across more than 5,000 stores. By putting product availability data in database tables running on In-Memory OLTP, Edgenet now provides inventory updates every 15 minutes. "If the retailer can send us data in a message queuing type process, we can report it in real time," says Mike Steineke, Edgenet's vice president of IT. "The only constraint is how quickly the retailer's backend systems can get us the data."
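The article doesn't say which queuing product or schema Edgenet uses, so the names below are hypothetical; this is only a minimal sketch of the pattern Steineke describes, in which whatever the retailer's systems push onto a queue gets drained straight into the availability table.

```python
import json
import queue

import pyodbc  # connection string and table names are illustrative assumptions

# Stand-in for the retailer's "message queuing type process"; in production this
# would be a real message broker rather than an in-process queue.
availability_queue = queue.Queue()

conn = pyodbc.connect("DSN=InventoryDB")  # hypothetical data source name
cur = conn.cursor()

def drain_once():
    """Apply every pending availability message to the (hypothetical) table."""
    while not availability_queue.empty():
        msg = json.loads(availability_queue.get())
        cur.execute(
            "UPDATE dbo.StoreAvailability SET QtyOnHand = ? "
            "WHERE StoreId = ? AND ProductId = ?",
            msg["qty"], msg["store_id"], msg["product_id"],
        )
    conn.commit()
```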

Without disruption?
In-memory seems promising, but most companies won't go there if it requires replacing applications. Like SAP, Microsoft has promised that customers will be able to move to In-Memory OLTP without such disruption (not counting the upgrade to SQL Server 2014). If true in practice, this promise would apply to the gigantic universe of applications that run on Microsoft SQL Server. SAP's promise applies mainly to SAP customers and the smaller universe of SAP applications.

Bwin.party, which is known for online gambling games such as PartyPoker, wouldn't have moved to in-memory if it required major application revisions. It wanted to scale up its online sports betting business three years ago, but it was bumping up against I/O constraints on SQL Server 2008 R2. "We were able to swap out the database without touching the underlying application at all," says Rick Kutschera, manager of database engineering. Not having to change queries or optimize apps for in-memory tables means Bwin.party can quickly scale up its betting operations. That flexibility came in handy in 2013, when New Jersey legalized online gambling and Bwin.party was able to launch gaming sites in partnership with casinos in Atlantic City.

Both Kutschera of Bwin.party and Steineke of Edgenet say they also implemented In-Memory OLTP without server changes. They could do so because they moved only selected tables requiring RAM speed into memory. (SAP Hana, by contrast, puts the entire database in memory, so it requires dedicated, RAM-intensive servers.) The coming release of Microsoft SQL Server 2014 promises tools that automatically determine which tables within a database would most benefit from in-memory performance, so that administrators can take best advantage of the new feature.
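For readers wondering what moving a single table into memory looks like, here is a minimal sketch against SQL Server 2014's In-Memory OLTP syntax; the table, columns, and bucket count are hypothetical, and the database must already have a memory-optimized filegroup configured.

```python
import pyodbc  # the DSN below is an assumption, not any customer's actual setup

conn = pyodbc.connect("DSN=InventoryDB", autocommit=True)

# Only this one table is declared memory-optimized; the rest of the database
# stays on disk, which is why no dedicated RAM-heavy server is required.
conn.execute("""
    CREATE TABLE dbo.StoreAvailability (
        StoreId    INT NOT NULL,
        ProductId  INT NOT NULL,
        QtyOnHand  INT NOT NULL,
        CONSTRAINT PK_StoreAvailability PRIMARY KEY NONCLUSTERED HASH
            (StoreId, ProductId) WITH (BUCKET_COUNT = 1048576)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
""")
```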

Oracle announced in October that it will introduce an in-memory option for its flagship Oracle Database 12c. (Oracle acquired the TimesTen in-memory database vendor way back in 2005, but that 18-year-old database is used mainly in telecom and financial applications.) CEO Larry Ellison has


Doug Henschen is Executive Editor of InformationWeek, where he covers the intersection of enterprise applications with information management, business intelligence, big data, and analytics. He previously served as editor in chief of Intelligent Enterprise.

Comments
rjonesx, User Rank: Apprentice, 12/27/2014 | 7:25:04 PM
Look at that mySQL 37% - That is important...
When you look at an in-memory option like memSQL that is wire-compatible with mySQL, there is a huge opportunity for growth. Think about the pain points for a growing business as data grows too quickly to query efficiently. They have a couple of options...

1. Rewrite their code to work with an enterprise DB, hire developers with experience with that DB, and try not to lose a step in the process.

2. or throw in memSQL

The entire migration of our platform to memSQL took a few hours, without any new developers or new code (except for switching out a few primary key forms to handle sharding, which was literally a handful of lines of code). 

If memSQL does its marketing correctly, they will be the DB of choice for these growth companies because they offer such a painless transition. 
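To illustrate what wire compatibility buys in practice: because memSQL speaks the MySQL protocol, an existing MySQL client stack can usually be pointed at it unchanged. The host, credentials, and query below are made up for illustration, not the actual migration described above.

```python
import pymysql  # the same MySQL client library the application already used

# Only the connection target changes; host, credentials, and the query are
# made-up examples, not a real deployment.
conn = pymysql.connect(
    host="memsql.internal.example",  # previously pointed at the MySQL host
    port=3306,
    user="app",
    password="secret",
    database="app_db",
)

with conn.cursor() as cur:
    # An unmodified application query keeps working after the swap.
    cur.execute("SELECT COUNT(*) FROM events WHERE created_at >= %s", ("2014-01-01",))
    print(cur.fetchone())
```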

 
D. Henschen, User Rank: Author, 3/4/2014 | 12:37:47 PM
Re: In-memory DBMS vs. In-memory option
Paul,

Great post with deep insight. I believe SAP is addressing some of these application write needs on its own end: its updates of SAP Business Suite, and certainly its promised Suite on Hana deployments (particularly on SAP's cloud), will address these transactional requirements. SAP can rely on ASE, SQL Anywhere, and IQ as part of the Hana "Platform," but you are right, SAP doesn't like to talk about anything but Hana.

I, for one, am hoping Hana does not take up another day of keynotes. Let that be a wonky track. Do we need another session with Hasso Plattner and Vishal Sikka congratulating each other on their technical brilliance and the "never-before-possible applications... radical simplification... incredible performance improvement," yadda, yadda, yadda? Let the customers using Hana take over. It's year four for Hana, so it's time for the real-world case examples to come to the forefront.

If there are 100+ customers live with Suite on Hana, it should be no problem finding a few talkers.
PV21 (IW Pick), User Rank: Apprentice, 3/4/2014 | 1:06:07 AM
Re: In-memory DBMS vs. In-memory option
Hi Doug, good article.


However, I take issue with one of SAP's "4 promises" for HANA, specifically the claim that it runs applications faster. I should be clear that, except where stated otherwise, I am talking about HANA as a database here, not "HANA the platform."

HANA is designed for very fast analytics plus acceptable (for some customers) OLTP performance. Its columnar store puts it at a huge disadvantage to row stores when running high-volume, high-concurrency OLTP workloads. However, in SAP-land there are a number of mitigations:

(1) Rewrite code to eliminate SELECT * and other unnecessarily long column lists that will trash any column-store database. This is fine, and good practice; just try getting everyone to do it with the non-SAP code that you don't control!

(2) Use stored procedures and SQL functions to massively reduce the hideous chattiness of the SAP application. In other words, do what you should have done in the first place and ship functions, not data (a rough sketch of the difference follows after this list). Again, this is good practice and an idea as old as the hills. It will make your month-end processing run like lightning. This is also where the boundaries between HANA as a database and HANA as a platform start to blur. SAP have decided that HANA will not use standard procedural SQL but their own dialect, which they call SQLScript. They give some justifications for this decision, but they are spurious. The real reason is competitive advantage: by refusing to implement the stored procedure and functions approach for other databases, they ensure that HANA the platform is faster for such tasks than SAP running on any other database.

(3) HANA cannot cope with what SAP term "extreme transaction processing" but other vendors like IBM and Oracle would view as no big deal. For these scenarios SAP refer to HANA in the platform sense and bundle it with, ahem, Sybase ASE. (See http://www.sap.com/pc/tech/real-time-data-platform/software/extreme-transaction-oltp/index.html for details of this bundle.) The idea is to use Sybase to cope with the volumes that HANA can't, leaving HANA to run analytics, its true strength.
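To make the "ship functions, not data" point concrete, here is a rough, made-up sketch (generic DB-API Python, not SAP or SQLScript code) of the difference between a chatty client-side loop and a single stored-procedure call:

```python
# Illustrative only: a made-up month-end job, not SAP or HANA code.
# "cur" is any DB-API cursor whose driver supports callproc (e.g., pymysql).

def month_end_chatty(cur, doc_ids):
    """Chatty version: one network round trip per document."""
    total = 0
    for doc_id in doc_ids:
        cur.execute("SELECT amount FROM docs WHERE id = %s", (doc_id,))
        total += cur.fetchone()[0]
    return total

def month_end_batched(cur, period):
    """Ship-the-function version: the logic lives in a (hypothetical) stored
    procedure on the server, so the whole batch is one round trip."""
    cur.callproc("month_end_rollup", (period,))
    return cur.fetchone()[0]
```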

SAP are also pulling a trick or two on the analytics side, but this post is long enough already.

Paul
Li Tan, User Rank: Ninja, 3/3/2014 | 10:35:45 PM
Re: For some, a switch to SSDs might be enough
I think moving to SSDs and deploying a traditional DB on them may be a more feasible solution for most enterprises. Migrating data to another DB such as Mongo may not be cost-effective. Furthermore, it will create some disturbance to the ongoing business. An in-memory DB looks good and it's really fast, but reliability is always a concern: how frequently should we checkpoint and take snapshots of the DB?
Charlie Babcock, User Rank: Author, 3/3/2014 | 10:08:38 PM
For some, a switch to SSDs might be enough
SomeDude8: I too wonder about the SSD option alongside the in-memory database. For many people, switching from disk to SSD would be a good investment and would give applications better response times. But it doesn't deliver the big latency savings that in-memory database operation does: calls for data still have to exit the database server to an external device and return, which yields a 3X or 4X improvement, but not a 10X one. Still, I suspect a switch to SSDs makes a lot of sense for a lot of users, decreasing wait times and increasing output in a highly non-disruptive manner.
D. Henschen, User Rank: Author, 3/3/2014 | 9:01:28 PM
Teradata Covered Here... Re: Vendors
Teradata is an analytical database, so it's not in the transactional (OLTP) fray. That said, its "Intelligent Memory" feature, introduced last year, is covered in "In-Memory Technology: Options Abound."
Brian.Dean, User Rank: Ninja, 3/3/2014 | 6:43:23 PM
Re: Disruption
Yes, Internet of Things connections are predicted to reach a number close to 50 billion, and many of those connections will only deliver value if real-time analysis capability is present. And the slim profit margins that banks work on indicate that banks will have to provide service increasingly online (electronically) in developing countries (where margins are slimmer), bringing down the unbanked population of the world. This creates a need for fast, efficient hardware that their current staff is comfortable with, so that once expansion occurs, demand does not overwhelm their systems.

 
ChrisMurphy, User Rank: Author, 3/3/2014 | 6:34:20 PM
Re: Vendors
andrewise, Teradata is noted in this accompanying article:

http://www.informationweek.com/big-data/big-data-analytics/in-memory-technology-the-options-abound/d/d-id/1114082
Brian.Dean, User Rank: Ninja, 3/3/2014 | 6:24:12 PM
Re: Whoa...
Good point. If the economics of in-memory don't make sense for a firm at the moment, then SSDs should definitely be investigated further, not just because of the faster processing but also because SSDs have a much lower failure rate. And SSDs also provide a path into a hybrid in-memory and flash solution.
ryanbetts, User Rank: Apprentice, 3/3/2014 | 1:56:54 PM
Disruption
"Avoiding disruption is crucial for the three incumbents, because they want to keep and extend their business with database customers."

This is the key. Certainly all the macro trends are deeply disruptive. Fast mobile networks, broadly deployed sensors, real-time electric grids: all of the components of "Internet of Things," "Machine to Machine," and "Smart Planet" initiatives are producing data that requires fast, real-time processing at scales not possible in legacy database systems.

This disruption is driving multiple avenues of change in the in-memory database space. Most interesting to us, at VoltDB, is the use of in-memory decisioning and analytics to ingest, analyze, organize, respond to and archive incoming high velocity inputs. Combining analytics with transactional decision making on a per-event basis is necessary in a large class of use cases: real time alerting, alarming, authorization, complex policy enforcement, fraud, security and DDoS attack detection, micro-personalization and real time user segmentation.

Viable solutions must be virtualizable for cloud deployment, must offer integrated HA, and must run on commodity cloud servers. Adding in-memory capability without eliminating the expense and complexity of shared storage is insufficient.

 

Ryan.