Terracotta Intros Caching System For Hungry Java Apps
BigMemory can load data into the Java Virtual Machine heap at a pace that's close to the speed of light.
Java applications have been given a bounty of CPU cycles on modern multi-core servers, but their ability to use random access memory has not kept up. In fact, the aging language has developed a memory problem.
A Java Virtual Machine in which an application runs is most proficient at using just 1-2 GB of RAM, but applications' needs have stretched far beyond that limit. By tinkering with Java's garbage collection function, developers can build applications that work with 2-4 GB or even 6 GB, but that's a shadow of what's available now on the most powerful servers. Cisco launched its Unified Computing System servers last year with up to 384 GB of random access memory on them. Dell recently came out with a PowerEdge M910 blade with 512 GB.
Java applications can't really take advantage of even a sizeable fraction of the total. Once the primary RAM an application is using (referred to in Java lingo as "heap") passes 4 GB, the latencies caused by garbage collection start to rise steeply, points out Amit Pandey, CEO of Terracotta, a supplier of a Java cache management system. The heap is the primary RAM assigned to a Java Virtual Machine, he said in a recent interview. If you make the heap really big, say 100 GB, the garbage collection takes minutes, not seconds, and end users will fiddle while Java busily sweeps up the used bits around the very large chunk of RAM.
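The pressure Pandey describes can be seen in miniature with the JVM's own instrumentation. The sketch below (the class name `GcPressure` and the allocation sizes are illustrative, not from the article) churns through short-lived objects and then reads the cumulative pause statistics that every JVM exposes through the standard `java.lang.management` API:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class GcPressure {

    // Sum the reported pause time (ms) across all collectors in this JVM.
    static long totalGcPauseMillis() {
        long millis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            millis += Math.max(0, gc.getCollectionTime());
        }
        return millis;
    }

    public static void main(String[] args) {
        List<byte[]> survivors = new ArrayList<>();
        // Allocate a flood of short-lived objects; a handful survive longer,
        // mimicking the transaction-object churn described above.
        for (int i = 0; i < 200_000; i++) {
            byte[] chunk = new byte[1024];            // dies almost immediately
            if (i % 1_000 == 0) survivors.add(chunk); // a few live on
        }
        System.out.println("Survivors: " + survivors.size()
                + ", GC pause so far: " + totalGcPauseMillis() + " ms");
    }
}
```

On a heap sized in the gigabytes the pauses stay short; the article's point is that the same accounting, run against a 100 GB heap, can show the collector stopping the world for minutes at a time.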
The bigger an enterprise or e-commerce application gets, the more short-term software objects (modules of code built for a specific purpose, such as a customer transaction, then thrown away) it uses -- and the longer it spends on garbage collection. Garbage collection is a necessary part of memory management: without periodic sweeps, the amount of discarded code grows until it chokes the available RAM and threatens to crash the app.
At the same time, this garbage collection introduces unexpected latencies in a running application -- delays that seem interminable when they strike the most customer-facing parts of the application.
Terracotta has come up with a way to build a large secondary cache that serves the Java Virtual Machine from nearby random access memory without requiring it to be part of the JVM's defined heap. Garbage collection only applies to the heap, not to Terracotta's BigMemory cache, but BigMemory can load data into the heap at a pace that's close to the speed of light. BigMemory can be 100 GB or more; think of a terabyte of RAM available to your application, suggests Pandey.
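Terracotta has not detailed BigMemory's internals here, but the underlying idea -- keeping bulk data in memory the garbage collector never scans -- can be sketched with Java's standard direct `ByteBuffer`, which allocates storage outside the heap. The `OffHeapStore` class below and its flat, append-only layout are illustrative assumptions, not Terracotta's actual design:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class OffHeapStore {
    // Off-heap region: allocated outside the Java heap, so GC sweeps never walk it.
    private final ByteBuffer region;
    // Only these small bookkeeping entries live on-heap: key -> {offset, length}.
    private final Map<String, int[]> index = new HashMap<>();
    private int next = 0;

    public OffHeapStore(int capacityBytes) {
        region = ByteBuffer.allocateDirect(capacityBytes);
    }

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        region.position(next);
        region.put(bytes);                           // copy the value off-heap
        index.put(key, new int[] { next, bytes.length });
        next += bytes.length;
    }

    public String get(String key) {
        int[] slot = index.get(key);
        if (slot == null) return null;
        byte[] bytes = new byte[slot[1]];
        ByteBuffer view = region.duplicate();        // independent cursor, same memory
        view.position(slot[0]);
        view.get(bytes);                             // copy back onto the heap on demand
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

Only the small index entries are subject to garbage collection; the values themselves sit in the direct buffer, so even a region many gigabytes in size adds nothing to the collector's workload.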