Comments
In-Memory Databases: Do You Need The Speed?
D. Henschen,
User Rank: Author
3/4/2014 | 12:37:47 PM
Re: In-memory DBMS vs. In-memory option
Paul,

Great post with deep insight. I believe SAP is addressing some of these application write needs itself: its updates to SAP Business Suite, and certainly its promised Suite on Hana deployments (particularly on SAP's cloud), will address these transactional requirements on SAP's end. SAP can rely on ASE, SQL Anywhere, and IQ as part of the Hana "Platform," but you are right, SAP doesn't like to talk about anything but Hana.

I, for one, am hoping Hana does not take up another day of keynotes. Let that be a wonky track. Do we need another session with Hasso Plattner and Vishal Sikka congratulating each other on their technical brilliance and the "never-before-possible applications... radical simplification... incredible performance improvement... yadda, yadda, yadda"? Let the customers using Hana take over. It's year four for Hana, so it's time for the real-world case examples to come to the forefront.

If there are 100+ live with Suite on Hana, it should be no problem finding a few talkers.
PV21,
User Rank: Apprentice
3/4/2014 | 1:06:07 AM
Re: In-memory DBMS vs. In-memory option
Hi Doug, good article.


However, I take issue with one of SAP's "4 promises" for HANA, specifically the claim that it runs applications faster. I should be clear that, except where stated otherwise, I am talking about HANA as a database here, not "HANA the platform".

HANA is designed for very fast analytics plus acceptable (for some customers) OLTP performance. Its columnar store puts it at a huge disadvantage to row stores when running high-volume, high-concurrency OLTP workloads. However, in SAP-land there are a number of mitigations:

(1) Rewrite code to eliminate SELECT * and other unnecessarily long column lists, which will trash any column-store database. This is fine, and good practice; just try getting everyone to do it with the non-SAP code that you don't control!
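A toy sketch of why long column lists hurt a column store (this is an illustration of the storage layout, not HANA's implementation; the table and sizes are made up):

```python
# In a columnar layout, each column lives in its own array, so a scan pays
# only for the columns the query actually names.
N = 100_000
orders = {
    "id":     list(range(N)),
    "amount": [9.99] * N,
    "notes":  ["x" * 200] * N,   # a wide column most queries never need
}

def scan(columns):
    """Materialize rows restricted to the requested columns."""
    return list(zip(*(orders[c] for c in columns)))

narrow = scan(["id", "amount"])   # like SELECT id, amount FROM orders
wide = scan(list(orders))         # like SELECT * -- drags 'notes' along
```

The SELECT * scan touches every column array, including the 200-byte notes field, even when the caller only wanted two narrow columns; a row store would read the whole row either way, which is why the habit costs so much more on a column store.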

(2) Use stored procedures and SQL functions to massively reduce the hideous chattiness of the SAP application. In other words, do what you should have done in the first place and ship functions, not data. Again, this is good practice and an idea as old as the hills. It will make your month-end processing run like lightning. This is also where the boundaries between HANA as a database and HANA as a platform start to blur. SAP have decided that HANA will not use standard procedural SQL but its own dialect, which they call SQLScript. They give some justifications for this decision, but they are spurious. The real reason is competitive advantage. By refusing to implement the stored-procedure-and-functions approach for other databases, they ensure that HANA the platform is faster for such tasks than SAP on any other database.
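The round-trip arithmetic behind "ship functions, not data" is easy to sketch; the latency figure and row count below are assumptions for illustration, not measurements of any SAP system:

```python
# Assumed figures: each client<->database round trip costs 1 ms of network
# latency, and a month-end job touches 10,000 rows.
ROUND_TRIP_MS = 1.0
ROWS = 10_000

# Chatty pattern: fetch a row, compute in the app, write it back (2 trips/row).
chatty_ms = 2 * ROWS * ROUND_TRIP_MS

# "Ship the function": one procedure call runs the whole loop inside the DB.
shipped_ms = 1 * ROUND_TRIP_MS

print(chatty_ms, shipped_ms)   # network cost alone: 20000.0 ms vs 1.0 ms
```

Under these assumptions the network cost alone drops by four orders of magnitude, which is why the stored-procedure route makes batch jobs "run like lightning" regardless of which database executes them.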

(3) HANA cannot cope with what SAP term "extreme transaction processing" but other vendors like IBM and Oracle would view as no big deal. For these scenarios SAP refer to HANA in the platform sense and bundle it with, ahem, Sybase ASE. (See http://www.sap.com/pc/tech/real-time-data-platform/software/extreme-transaction-oltp/index.html for details of this bundle.) The idea is to use Sybase to cope with the volumes that HANA can't, leaving HANA to run analytics - its true strength.

SAP are also pulling a trick or two on the analytics side, but this post is long enough already.

Paul
Li Tan,
User Rank: Ninja
3/3/2014 | 10:35:45 PM
Re: For some, a switch to SSDs might be enough
I think moving to SSDs and deploying a traditional DB on them may be a more feasible solution for most enterprises. Migrating data to another DB such as Mongo may not be cost-effective. Furthermore, it will create some disturbance to the ongoing business. The in-memory DB looks good and it's really fast, but reliability is always a concern: how frequently should we checkpoint and take snapshots of the DB?
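The checkpoint-frequency question can be framed as a simple trade-off: more frequent snapshots cost more I/O during normal operation, while less frequent snapshots mean more log to replay after a crash. The rates below are assumed for illustration, not figures from any product:

```python
# Assumed rates: how fast transactions arrive, and how fast the recovery
# process can replay the log on top of the last snapshot after a crash.
WRITE_RATE_OPS_S = 50_000
REPLAY_RATE_OPS_S = 200_000

def worst_case_recovery_s(checkpoint_interval_s):
    # Everything committed since the last snapshot must be replayed from
    # the log, so longer checkpoint intervals mean longer restarts.
    return checkpoint_interval_s * WRITE_RATE_OPS_S / REPLAY_RATE_OPS_S

# Checkpointing every minute vs. every ten minutes:
print(worst_case_recovery_s(60), worst_case_recovery_s(600))
```

Under these assumed rates, one-minute checkpoints bound a restart to about 15 seconds of replay, while ten-minute checkpoints stretch it to two and a half minutes; the right interval depends on how much downtime the business can tolerate.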
Charlie Babcock,
User Rank: Author
3/3/2014 | 10:08:38 PM
For some, a switch to SSDs might be enough
SomeDude8: I too wonder about the SSD option alongside the in-memory database. For many people, switching from disk to SSD would be a good investment and would give applications better response times. But it doesn't give the big latency savings that in-memory database operation does. Calls for data still have to exit the database server to an external device and return the data, which yields a 3X or 4X improvement, but not a 10X. Still, I suspect a switch to SSDs makes a lot of sense for a lot of users in how it would decrease wait times and increase throughput in a highly non-disruptive manner.
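The 3-4X-versus-10X point can be made concrete with a hypothetical baseline (the 400 ms figure is assumed for illustration, not a measurement):

```python
# Hypothetical baseline: a report whose response time is dominated by
# database waits takes 400 ms with the database on spinning disk.
disk_ms = 400.0

# SSD still crosses to an external device and back, so the application
# sees roughly the 3-4X improvement described in the comment.
ssd_ms = disk_ms / 3.5

# In-memory operation removes the external round trip entirely: ~10X.
in_memory_ms = disk_ms / 10

print(round(ssd_ms, 1), in_memory_ms)
```

On these assumed numbers the SSD brings the response down to roughly 114 ms with no application changes, while in-memory operation reaches 40 ms, which matches the intuition that the SSD swap is the low-disruption option and in-memory is the bigger prize.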
D. Henschen,
User Rank: Author
3/3/2014 | 9:01:28 PM
Teradata Covered Here... Re: Vendors
Teradata is an analytical database, so it's not in the transactional (OLTP) fray. That said, its "Intelligent Memory" feature, introduced last year, is covered in "In-Memory Technology: Options Abound."
Brian.Dean,
User Rank: Ninja
3/3/2014 | 6:43:23 PM
Re: Disruption
Yes, Internet of Things connections are predicted to reach close to 50 billion, and many of these connections will only deliver value if real-time analysis capability is present. And the slim profit margins that banks work on indicate that banks will have to be increasingly online (electronic) in developing countries (where margins are slimmer) to provide service, bringing down the unbanked population of the world. This creates a need for faster and more efficient hardware that their current staff is comfortable with, so that once expansion occurs, demand does not overwhelm their systems.

ChrisMurphy,
User Rank: Author
3/3/2014 | 6:34:20 PM
Re: Vendors
andrewise, Teradata is noted in this accompanying article:

http://www.informationweek.com/big-data/big-data-analytics/in-memory-technology-the-options-abound/d/d-id/1114082
Brian.Dean,
User Rank: Ninja
3/3/2014 | 6:24:12 PM
Re: Whoa...
Good point: if the economics of in-memory don't make sense for a firm at the moment, then SSDs should definitely be investigated further, not just because of the faster processing but also because SSDs have a much lower failure rate. And SSDs also provide a path to a hybrid in-memory and flash solution.
ryanbetts,
User Rank: Apprentice
3/3/2014 | 1:56:54 PM
Disruption
"Avoiding disruption is crucial for the three incumbents, because they want to keep and extend their business with database customers."

This is the key. Certainly all the macro trends are deeply disruptive. Fast mobile networks, broadly deployed sensors, real-time electric grids... all of the components of "Internet of Things," "Machine to Machine," and "Smart Planet" initiatives are producing data that requires fast, real-time processing at scales not possible in legacy database systems.

This disruption is driving multiple avenues of change in the in-memory database space. Most interesting to us, at VoltDB, is the use of in-memory decisioning and analytics to ingest, analyze, organize, respond to and archive incoming high velocity inputs. Combining analytics with transactional decision making on a per-event basis is necessary in a large class of use cases: real time alerting, alarming, authorization, complex policy enforcement, fraud, security and DDoS attack detection, micro-personalization and real time user segmentation.

Viable solutions must be virtualizable for cloud deployment, must offer integrated HA, and must run on commodity cloud servers. Adding in-memory without eliminating the expense and complexity of shared storage is insufficient.

Ryan.
andrewise,
User Rank: Apprentice
3/3/2014 | 12:03:53 PM
Vendors
I just don't understand why you didn't mention Teradata in-database systems...