Commentary
Charles Babcock
11/10/2010 09:53 PM

Terracotta Intros Caching System For Hungry Java Apps

BigMemory can load data into the Java Virtual Machine heap at a pace that's close to the speed of light.

Java applications have been given a bounty of CPU cycles on modern multi-core servers, but their ability to use random access memory has not kept up. In fact, the aging language has developed a memory problem.

A Java Virtual Machine in which an application runs is most proficient at using just 1-2 GB of RAM, but applications' needs have stretched far beyond that limit. By tinkering with Java's garbage collection function, developers can build applications that work with 2-4 GB or even 6 GB, but that's a shadow of what's available now on the most powerful servers. Cisco launched its Unified Computing System servers last year with up to 384 GB of random access memory on them. Dell recently came out with a PowerEdge M910 blade with 512 GB.
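
For readers who want to see where that ceiling comes from: the heap is capped when the JVM is launched, typically with the -Xms and -Xmx flags, and a program can check how much it actually received. The class name and the 2 GB / 4 GB figures below are illustrative, not recommendations.

// Example launch (sizes are illustrative): java -Xms2g -Xmx4g HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();      // upper bound set by -Xmx
        long totalHeap = Runtime.getRuntime().totalMemory();  // heap currently reserved
        System.out.printf("Max heap: %d MB, currently reserved: %d MB%n",
                maxHeap / (1024 * 1024), totalHeap / (1024 * 1024));
    }
}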

Java applications can't really take advantage of even a sizeable fraction of the total. Once the primary RAM an application is using (referred to in Java lingo as "heap") passes 4 GB, the latencies caused by garbage collection start to rise steeply, points out Amit Pandey, CEO of Terracotta, a supplier of a Java cache management system. The heap is the primary RAM assigned to a Java Virtual Machine, he said in a recent interview. If you make the heap really big, say 100 GB, the garbage collection takes minutes, not seconds, and end users will fiddle while Java busily sweeps up the used bits around the very large chunk of RAM.
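
Those pauses can be observed from inside the JVM itself. The standard java.lang.management API reports how much time each collector has spent; the short sketch below (the class name is ours) is one way to watch the numbers Pandey describes climb as the heap grows.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeReport {
    public static void main(String[] args) {
        // Each collector (young generation, old generation, ...) reports its own totals.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectionMXBeans()) {
            System.out.printf("%s: %d collections, %d ms spent collecting%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}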

The bigger an enterprise or e-commerce application gets, the more short-lived software objects (units of code and data built for a specific purpose, such as a customer transaction, then thrown away) it creates, and the longer it will spend on garbage collection. Garbage collection is a necessary part of memory management: without periodic sweeps, the discarded objects pile up until they choke the available RAM and threaten to crash the app.

At the same time, this garbage collection introduces unexpected latencies into a running application, delays that seem interminable when they strike the most customer-facing parts of the system.
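
To make that churn concrete, here is a deliberately simplified sketch, not production code: each pass through the loop builds a transaction object, uses it once, and abandons it, leaving the garbage collector to reclaim the memory.

public class TransactionChurn {
    static final class Transaction {
        final byte[] payload = new byte[64 * 1024]; // stand-in for order data, roughly 64 KB
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000000; i++) {
            Transaction t = new Transaction(); // one short-lived object per request
            process(t);                        // once process() returns, t is garbage
        }
    }

    static void process(Transaction t) {
        // placeholder for real work, e.g. validating and recording a customer transaction
    }
}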

Terracotta has come up with a way to build a large secondary cache that serves the Java Virtual Machine from nearby random access memory without requiring it to be part of the JVM's defined heap. Garbage collection only applies to the heap, not to Terracotta's BigMemory cache, but BigMemory can load data into the heap at a pace that's close to the speed of light. BigMemory can be 100 GB or more; think of a terabyte of RAM available to your application, suggests Pandey.
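
Terracotta hasn't detailed BigMemory's internals here, but the underlying idea of keeping data outside the garbage-collected heap is available to any Java program through direct buffers. The sketch below illustrates that general technique only; it is not BigMemory's API or implementation, and the 64 MB size is arbitrary.

import java.nio.ByteBuffer;

public class OffHeapSketch {
    public static void main(String[] args) {
        // 64 MB of native memory outside the Java heap: the garbage collector
        // never scans this region, which is the property an off-heap cache exploits.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        byte[] value = "cached customer record".getBytes();
        offHeap.put(value);                    // copy the data out of the heap...

        offHeap.flip();
        byte[] back = new byte[value.length];
        offHeap.get(back);                     // ...and back in when the application asks for it
        System.out.println(new String(back));
    }
}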
