Interop In Advance: HyperCloud Claims To Overcome Server's Natural RAM Limits
Although it remains largely uncertified, a new type of RDIMM from Netlist targets memory bottlenecks, particularly in virtual machine-based consolidation projects where a server's processors can support more VMs than its memory can.
Interop NYC 2010 (Oct 18-22) begins one week from today, and in this final stretch before the show, InformationWeek will be posting highlights for attendees to watch for (it's not too late to register). One of those highlights will be server RAM from a company called Netlist. But it's not just any RAM. Netlist claims that this memory -- called "HyperCloud" -- is uniquely qualified to address the needs of cloud computing and datacenter consolidation (particularly where virtualization is in play).
The memory silicon itself may be a commodity (and Netlist claims HyperCloud's prices fluctuate accordingly), but what Netlist has done in HyperCloud certainly is not.
To understand Netlist's innovation, you first need to understand the math that determines the maximum amount of memory that can be inserted into a server. It starts with the number of sockets into which a server's microprocessors are inserted. Many of today's Intel Xeon-based servers have two sockets. Each socket has three memory channels, for a total of six channels. Each channel has three slots into which an RDIMM (registered dual in-line memory module) memory card can be inserted, for what at first blush appears to be a capacity of 18 RDIMMs per two-socket server.
Anyone who has taken Memory 101 also knows that the memory cards that go into a memory channel's slots cannot be mixed and matched. They must be identical. This is true even of most notebook computers. The net effect of this "rule" is that the maximum amount of memory that can be loaded into a single memory channel is 32 G-Bytes.
The largest-capacity RDIMM from companies like Dell has 16 G-Bytes of RAM on board. If one of those is used in the first of three slots and another (matching one) is used in the second, the memory channel is maxed out at that point. The third slot is unusable because of another limitation per memory channel: the maximum number of 64-bit-wide data areas, or "ranks." Today's 16 G-Byte memory cards (like the aforementioned Dell product) are quad-rank cards; they have four ranks each. Unfortunately, memory channels can only handle eight ranks at a time. In other words, once the first two (of three) slots are occupied by four-rank cards, the rank limitation of the channel has been reached and the third slot must remain unoccupied.
Sadly, conventional efforts to make use of the third slot don't add up. For example, if you fall back to three two-rank cards (each of which would be 8 G-Bytes because the number of ranks is being halved), the maximum amount of memory that could be packed onto a channel would be 24 G-Bytes (3 slots x 8 G-Bytes = 24 G-Bytes) -- 8 G-Bytes less than the 32 G-Bytes that could be achieved with just two 16 G-Byte RDIMMs.
With each of the six channels (remember, there are three channels for each of the two sockets) limited to two 16 G-Byte RDIMMs, the net effect is that the server can only take 12 RDIMMs (instead of 18, even though there are 18 physical slots). Six RDIMM slots (one per channel) must go unused, and the maximum amount of memory that can be packed into the server works out to be 192 G-Bytes (12 RDIMMs x 16 G-Bytes per RDIMM).
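That arithmetic can be checked with a short script. This is a minimal sketch of the per-channel math described above; the constants come from the article, and the function name is illustrative rather than any real API:

```python
# Per-channel limits as described in the article (illustrative sketch).
SOCKETS = 2
CHANNELS_PER_SOCKET = 3
SLOTS_PER_CHANNEL = 3
MAX_RANKS_PER_CHANNEL = 8

def channel_capacity_gb(module_gb, module_ranks):
    """Max memory per channel for identical modules, capped by slots and ranks."""
    usable_slots = min(SLOTS_PER_CHANNEL, MAX_RANKS_PER_CHANNEL // module_ranks)
    return usable_slots * module_gb

# Two quad-rank 16 G-Byte RDIMMs max out a channel at 32 G-Bytes...
print(channel_capacity_gb(16, 4))  # -> 32
# ...while three dual-rank 8 G-Byte modules only reach 24 G-Bytes.
print(channel_capacity_gb(8, 2))   # -> 24

# Across all six channels, the server tops out at 192 G-Bytes.
channels = SOCKETS * CHANNELS_PER_SOCKET
print(channels * channel_capacity_gb(16, 4))  # -> 192
```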
Enter Netlist with HyperCloud.
According to Netlist director of business development Paul Duran, the company's HyperCloud 16 G-Byte RDIMMs appear to the system as two-rank RDIMMs instead of four-rank RDIMMs. "We make four physical ranks look like two virtual ranks to the CPU, and that's how you get double your memory," said Duran.
With HyperCloud RDIMMs occupying two of a channel's three slots, the memory controller only sees a total of four ranks (50% of the eight-rank maximum per channel). By creating this illusion for the server, the third slot on each of the six channels can accept another 16 G-Byte HyperCloud RDIMM. The net result is that the server's maximum memory is increased by 96 G-Bytes (6 channels x 16 G-Bytes), from 192 G-Bytes to 288 G-Bytes.
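The effect of halving the apparent rank count can be sketched the same way. Assuming the per-channel limits described earlier (three slots, eight ranks), with illustrative variable names:

```python
# How the apparent rank count determines usable slots (illustrative sketch).
SLOTS_PER_CHANNEL = 3
MAX_RANKS_PER_CHANNEL = 8
CHANNELS = 6  # 2 sockets x 3 channels per socket

def usable_slots(apparent_ranks_per_module):
    """Slots the controller will populate, given what each module reports."""
    return min(SLOTS_PER_CHANNEL, MAX_RANKS_PER_CHANNEL // apparent_ranks_per_module)

conventional = CHANNELS * usable_slots(4) * 16  # quad-rank 16 G-Byte RDIMMs
hypercloud = CHANNELS * usable_slots(2) * 16    # same modules, seen as dual-rank
print(conventional, hypercloud)  # -> 192 288
```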
But according to Duran, the benefits of HyperCloud don't stop there. Another well-known physical limitation of Intel's current memory architecture has to do with the electrical load that each conventional RDIMM places on its channel. With only one RDIMM occupying the first of a memory channel's three slots, that channel can run at its maximum rated speed of 1333 MHz. But as soon as a second RDIMM is loaded onto the channel, the speed drops to 1066 MHz, and when a third RDIMM is deployed, the speed drops even further to 800 MHz. "192 total G-Bytes per server at 1066 MHz is what everyone runs," said Duran (upon further inspection, the aforementioned Dell RDIMM is indeed rated at 1066 MHz).
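The step-down described above amounts to a simple lookup. The clock rates come from the article; the mapping itself is just an illustration:

```python
# Channel speed as a function of RDIMMs loaded per channel (per the article).
SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1066, 3: 800}  # MHz

for dimms, mhz in sorted(SPEED_BY_DIMMS_PER_CHANNEL.items()):
    print(f"{dimms} RDIMM(s) per channel -> {mhz} MHz")
```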
According to Duran, for those relying on highly virtualized systems -- the operators of public or private clouds, or just those IT managers who are using virtualization to consolidate their datacenters -- the ability to jump to 288 G-Bytes of memory running at 1333 MHz makes such a maxed-out server a far more attractive consolidation target.
"For example, an Intel Xeon 5600 Westmere-based system will have six processor cores in each of its two sockets," said Duran. "Going off the commonly accepted maximum of five virtual machines per core, a single system could have [as many as 60 virtual machines running concurrently]" (6 cores x 2 sockets x 5 virtual machines per core). "Even with just 4 G-Bytes allocated to each machine, you'd need 240 G-Bytes of memory. When it comes to virtualization, the system memory is what turns out to be the bottleneck."
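A back-of-the-envelope check of Duran's consolidation math, with all figures taken from the quote above:

```python
# Duran's worked example: VM count and memory demand on a two-socket Westmere box.
cores_per_socket = 6
sockets = 2
vms_per_core = 5   # the commonly accepted maximum cited by Duran
gb_per_vm = 4

max_vms = cores_per_socket * sockets * vms_per_core
ram_needed_gb = max_vms * gb_per_vm
print(max_vms, ram_needed_gb)  # -> 60 240 (more than the conventional 192 G-Byte limit)
```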
Enderle Group principal analyst Rob Enderle said HyperCloud looks promising but cautioned that short of certification from the software providers whose wares would be expected to run on such a system, IT managers should be prepared to conduct extensive tests before assuming the extra memory will make a difference.
"The machine should be thermally capable of filling all its slots," said Enderle. "The risk is more on the software side than the hardware side. Applications are often written and tuned with the idea that some limitation is in place. So, if you lift the limitation, it's possible that the application might act in some unanticipated manner." Enderle said that getting some certification from application providers would go a long way towards making IT managers feel comfortable running with such an unorthodox configuration.
In terms of hardware certification, Duran said Netlist was working with all the "usual suspects" but could only publicly mention Supermicro and Viglen as manufacturers that have worked with Netlist to certify its memory.
On the software side, HyperCloud has not been validated by any application or software providers. However, raising the memory bar from 192 G-Bytes to 288 G-Bytes didn't seem to faze VMware product marketing group manager Mark Chuang. Via email, Chuang told InformationWeek that "VMware validates vSphere to a maximum physical limit, but that is a validation limit, not an 'optimization' point, per se. In ESX 4.1, we already support up to 1TB of physical RAM per server, regardless of the memory configuration (size of DIMMs, etc)."
The retail price for a 16 G-Byte HyperCloud RDIMM is currently around $1200. But Duran warned that as with all other memory products on the market, HyperCloud's prices are subject to fluctuation.
Netlist will be exhibiting at booth 611 in Interop's Cloud Computing Zone.
David Berlind is the chief content officer of TechWeb and editor-in-chief of TechWeb.com. He can be reached at firstname.lastname@example.org, and you can also find him on Twitter and other social networks.