Commentary | 5/28/2015 10:01 AM

SAP HANA: Not The Only In-Memory Game In Town

SAP HANA is not the only option for those looking for an in-memory database platform. Big rivals such as Microsoft and Oracle offer similar tech.


In the world of in-memory computing, SAP's HANA has the big name, but it's not the only game in town. Other databases can do all or part of their work in memory, though the definitions can get a little fuzzy around the edges of the market.

Let's be clear: Whenever the phrase "in-memory computing" comes up, the more accurate phrase might be "in-memory database."

Compact applications running against limited data sets aren't a big problem. It's when the application sits on top of an enterprise database that the data's location starts to matter in a significant way.

Microsoft's SQL Server 2014 provides in-memory computing … sort of.

Redmond is careful not to call what it does in-memory computing, referring to it instead as "In-Memory OLTP." (OLTP is on-line transaction processing.) According to a page on the MSDN website, "In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine, optimized for OLTP."

What this means is that you can define part of the application data as being specifically for transactions -- typically, the high-speed, intensive reads and writes that come with market segments like retail and banking.

The defined parts of the database are kept in memory, where they benefit from low latency and high overall performance. Periodically, though, the transaction records are rolled into a portion of the database reserved for analysis -- analysis typically performed through pre-defined reports run on a schedule.
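To make that concrete, here is a minimal sketch in Python (using pyodbc) of how a table might be declared as memory-optimized in SQL Server 2014 and then written to transactionally. The server, credentials, and table definition are illustrative assumptions, not details from the article, and the database is assumed to already have a memory-optimized filegroup configured.

# Minimal sketch: a memory-optimized table in SQL Server 2014 In-Memory OLTP.
# Connection string, table, and column names are hypothetical examples.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;DATABASE=RetailDemo;"
    "UID=demo;PWD=demo"
)
cursor = conn.cursor()

# The WITH (MEMORY_OPTIMIZED = ON, ...) clause is what hands this table to the
# In-Memory OLTP engine; DURABILITY = SCHEMA_AND_DATA keeps the rows recoverable.
cursor.execute("""
CREATE TABLE dbo.OrderEvents (
    OrderId     INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT       NOT NULL,
    OrderTotal  MONEY     NOT NULL,
    CreatedAt   DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
""")
conn.commit()

# High-speed transactional writes hit the in-memory table directly.
cursor.execute(
    "INSERT INTO dbo.OrderEvents (OrderId, CustomerId, OrderTotal, CreatedAt) "
    "VALUES (?, ?, ?, SYSUTCDATETIME());",
    1, 42, 129.95
)
conn.commit()
conn.close()

Everything else about the application's SQL stays the same; only the tables flagged as memory-optimized move into the in-memory engine.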

There are many possibilities for organizations looking for in-memory options for database applications.

(Image: OpenClips via Pixabay)

Oracle Database In-Memory, available with Oracle Database 12c, takes an approach much more like that of SAP's HANA. It is, according to Oracle, designed to run both OLTP and OLAP (on-line analytical processing) from the same database. From the database application perspective this is important because it boosts performance and capabilities in two critical ways.

First, the single-database approach eliminates the need to move data from one database (or part of a database) to another before analysis can be performed. Because that data movement is generally a performance-sapping process run at times when OLTP demands are lighter, eliminating it means queries can be made and reports run at any time, rather than on the next business day after the data has been moved.

Next, because the OLTP and OLAP databases are the same, queries can be run against the entire data set at any time. The ability to perform these "ad-hoc queries" has long been a holy grail of application designers -- and the top of database administrators' nightmare lists.
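As a rough illustration of that single-database pattern, the sketch below (Python with cx_Oracle) flags a table for Oracle's in-memory column store and then runs an ad-hoc aggregation against the same table the transactional workload updates. The connection details, schema, and column names are assumptions made for the example, and the instance is assumed to already have a nonzero INMEMORY_SIZE configured.

# Minimal sketch: Oracle Database In-Memory (12c) on an existing table,
# followed by an ad-hoc analytic query against that same table.
# Connection string, table, and column names are hypothetical examples.
import cx_Oracle

conn = cx_Oracle.connect("demo", "demo", "dbhost.example.com/ORCLPDB1")
cursor = conn.cursor()

# Flag the table for population into the in-memory column store.
cursor.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# An ad-hoc aggregation runs against the same table the OLTP workload
# updates -- no nightly extract into a separate analytic database.
cursor.execute("""
    SELECT region, SUM(amount_sold)
      FROM sales
     WHERE sale_date >= DATE '2015-01-01'
     GROUP BY region
""")
for region, total in cursor:
    print(region, total)

conn.close()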

There are other in-memory databases, as well. According to Wikipedia, there are 47 different in-memory databases currently available.

[Read how big data is scoring big during the NHL playoffs.]

Why all the interest and the options?

For Gary Orenstein, CMO of MemSQL -- one of the 47 listed options -- the answer is straightforward.

"I think that the ability to do transactions and analytics in the same database is critical. The market is betting on the need to do real-time information and answers," Orenstein said during a phone interview. "Companies now have to satisfy that demand for real-time information and more specifically real-time answers, and you simply don't have the option to move data around to reach an answer point," he explained.

The search for high-speed answers does run into the cost of RAM in massive quantities. There's no question, though, that a growing number of companies are willing to pay that price for answers at the point of executive need -- whenever and wherever that need might occur.


Curtis Franklin Jr. is Senior Analyst at Omdia, focusing on enterprise security management. Curtis has been writing about technologies and products in computing and networking since the early 1980s. He has been on staff and contributed to technology-industry publications ...
Comments
Curt Franklin, User Rank: Strategist, 6/4/2015 | 2:07:27 PM
Re: In-memory database started with TPF
@Charlie, you're right -- "in-memory computing" is, like so much of today's enterprise computing, built on a mainframe foundation. There are a couple of big differences, though: One is the sheer size of the databases involved. The other is that, as you point out, in the 70s the in-memory architecture was all about supporting transaction speed. Today, it's as much about analytics as transactions, and that's a pretty big deal.
Charlie Babcock, User Rank: Author, 5/28/2015 | 6:08:11 PM
In-memory database started with TPF
The high-speed, in-memory system is not a recent phenomenon. The original was IBM's Transaction Processing Facility, or TPF, used in the first airline reservation systems to speed response times to customers. It was first fired up in 1979, before the birth of some of today's NoSQL experts.