In the world of in-memory computing, SAP's HANA has the big name, but it's not the only game in town. Other databases can do all or part of their work in memory, though the definitions can get a little fuzzy around the edges of the market.
Let's be clear: Whenever the phrase "in-memory computing" comes up, the more accurate phrase might be "in-memory database."
Compact applications running against limited data sets aren't a big problem. But when an application sits on top of an enterprise database, the data's location starts to matter in a significant way.
Microsoft's SQL Server 2014 provides in-memory computing … sort of.
Redmond is careful not to call what it offers in-memory computing, referring to it instead as "In-Memory OLTP." (OLTP is online transaction processing.) According to a page on the MSDN website, "In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine, optimized for OLTP."
What this means is that you can define part of the application data as being specifically for transactions -- typically, the high-speed, intensive reads and writes that come with market segments like retail and banking.
The defined parts of the database are kept in memory, where they benefit from low latency and high overall performance. On a regular basis, though, the transaction records will be rolled into a portion of the database that's reserved for analysis -- analysis that is typically performed through pre-defined reports run on a scheduled basis.
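The two-tier arrangement described above can be sketched in a few lines of Python. This is a conceptual illustration only, not SQL Server code; the class and method names (`HybridStore`, `record_sale`, `rollup`) are hypothetical, invented to show the pattern of a hot in-memory transaction store that is periodically folded into a separate analysis store.

```python
from collections import defaultdict

class HybridStore:
    """Conceptual sketch: hot transactions live in memory, and a
    scheduled rollup moves them into a portion reserved for analysis."""

    def __init__(self):
        self.hot = []                        # in-memory, transaction-optimized
        self.analysis = defaultdict(float)   # reporting store, updated on a schedule

    def record_sale(self, sku, amount):
        # High-speed write path: append-only, no analytical overhead.
        self.hot.append((sku, amount))

    def rollup(self):
        # Scheduled job: fold transactions into the analysis store.
        # Until this runs, reports do not see the newest transactions.
        for sku, amount in self.hot:
            self.analysis[sku] += amount
        self.hot.clear()

store = HybridStore()
store.record_sale("widget", 19.99)
store.record_sale("widget", 5.00)
store.rollup()
print(round(store.analysis["widget"], 2))  # 24.99
```

The key point of the sketch is the lag it builds in: a report run before `rollup` executes simply misses whatever sits in the hot store.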
Oracle's in-memory database, available with Oracle Database 12c, takes an approach much more like that of SAP's HANA. It is, according to Oracle, designed to run both OLTP and OLAP (online analytical processing) from the same database. From the database application perspective this is important because it boosts performance and capabilities in two critical ways.
First, the single-database approach eliminates the need to move data from one database (or part of a database) to another before analysis can be performed. Because that data movement is a performance-sapping process generally run when OLTP demand is low, eliminating it means queries can be made and reports run at any time, rather than on the next business day after the data has been moved.
Next, because the OLTP and OLAP databases are the same, queries can be run against the entire data set at any time. The ability to perform these "ad-hoc queries" has long been a holy grail of application designers -- and the top of database administrators' nightmare lists.
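For contrast with the two-store pattern, the single-database approach can be sketched the same way. Again this is a conceptual illustration, not Oracle code; `sales`, `record_sale`, and `ad_hoc_total` are hypothetical names used to show transactions and ad-hoc analytics sharing one in-memory data set with no rollup step in between.

```python
# Single-store sketch: transactions and ad-hoc analytics operate on
# the same in-memory data set, so nothing has to be moved before a query.
sales = []  # one in-memory table serving both workloads

def record_sale(sku, amount):
    sales.append({"sku": sku, "amount": amount})

def ad_hoc_total(predicate):
    # Ad-hoc query over the full, current data set -- including
    # transactions written a moment ago.
    return sum(row["amount"] for row in sales if predicate(row))

record_sale("widget", 19.99)
record_sale("gadget", 5.00)
print(ad_hoc_total(lambda r: r["sku"] == "widget"))  # 19.99
```

Because every write is immediately visible to every query, there is no "next business day" lag; the trade-off, as the article notes, is keeping the whole working set in comparatively expensive RAM.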
There are other in-memory databases, as well. According to Wikipedia, there are 47 different in-memory databases currently available.
Why all the interest and the options?
For Gary Orenstein, CMO of MemSQL -- one of the 47 listed options -- the answer is straightforward.
"I think that the ability to do transactions and analytics in the same database is critical. The market is betting on the need to do real-time information and answers," Orenstein said during a phone interview. "Companies now have to satisfy that demand for real-time information and more specifically real-time answers, and you simply don't have the option to move data around to reach an answer point," he explained.
The search for high-speed answers is running up against the price of RAM in massive quantities. There's no question, though, that a growing number of companies are willing to pay the price for answers at the point of executive need -- whenever and wherever that need might occur.
Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and ...