Killing 'Zombies' Achieves Greatest Gain In Data Center Efficiency
The Uptime Institute has estimated the gains from eliminating 'zombie' servers and implementing a data center management system versus doing nothing.
Killing zombies has been a popular theme for teenagers and young adults for years, but the notion appears to be gaining ground among data center IT operations professionals as well. If zombies can be identified, eliminating them can yield big savings for the IT budget and a gain in stature for the managers who pull off the move.
Tracking down servers that are no longer in active use but still running has historically been time-consuming. According to 2015 research by Jonathan Koomey, a research fellow at Stanford University, cited by the Uptime Institute, as many as 30% of running servers may be comatose, the data center equivalent of the walking dead.
The difficulty of tracking little-used, aging applications and the tendency of shadow IT users to create new virtual servers and forget about them have contributed to an uptick in zombies. They exist in nearly every enterprise data center, whether on premises or in the cloud, according to the research. Any server that hasn't produced useful data or application results in the last six months should be described as a "zombie," the institute said in a summary of its findings.
While the research said the virtual machines could be located either on premises or in the cloud, it offered no results that stated what percentage of zombies might be found in the cloud or whether that portion of the zombie population was increasing.
Still, such servers are a major contributor to what Gartner refers to as the 80:20 split in the IT budget. Historically, 80% of the budget has gone to maintaining existing systems and 20% to creating new business applications. By addressing the zombie server issue, IT managers have a shot at reducing the maintenance side of the ledger and applying the savings to increased investment in new systems.
The research establishing the existence of zombie servers has been cited previously by contributor Ben Kepes in Forbes magazine in June 2015 and by the Anthesis Consulting Group. The Uptime Institute, a publisher of research on improved data center operations and continuous uptime, was acquired by the 451 Group, a technology market research organization, in 2013.
The Uptime Institute recently put together a chart showing its interpretation of what it means to do nothing about zombie servers and other constant data center inefficiencies over a seven-year period. In short, it costs a lot of money to keep the servers and their software licenses in place. It contrasted that option with the savings possible through active intervention and imposition of server efficiencies.
In it, institute analysts compare two data centers, each with 30,000 2U servers and each rack drawing 6 kilowatts at $0.10 per kilowatt-hour. The chart's title is "The Exorbitant Cost of Doing Nothing," and the institute says it is "based on a true story."
Demand for server compute by the data center's users is increasing at 5% a year, and the cost of electricity is increasing at the same rate. So the option of doing nothing doesn't really exist, as the first, or "X," data center illustrates. In the first two years of the example, data center X tries to contain energy costs by installing variable frequency drives and by fine-tuning servers to get more production from them. It does so successfully enough to stave off further server purchases for two years.
In the third year, it must start purchasing more physical servers and expand the operating budget at an added cost of $5.66 million. In the fourth year, it's the same story, with rising electricity costs contributing steadily to the increase: the added cost amounts to $5.11 million. In year five, the extra cost goes up to $5.37 million; in year six, $5.65 million; and in year seven, $5.94 million. The picture is bleaker if the expanding server footprint means data center X has run out of space and must build a new facility, at a cost of at least $12 million for one megawatt's worth of new space.
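The shape of that do-nothing trajectory can be sketched with a little arithmetic from the figures above. A minimal sketch follows: the per-rack power draw, the electricity price, and the 5% growth rates come from the article, but the single-rack baseline and the choice to compound demand and price together are illustrative assumptions, not the Uptime Institute's actual spreadsheet.

```python
# Illustrative model of the "do nothing" electricity projection.
# From the article: 6 kW per rack, $0.10/kWh, 5% annual growth in
# both compute demand and electricity price.
# Assumption (not from the article): demand growth and price growth
# compound independently, so year-over-year cost grows by both factors.

RACK_POWER_KW = 6.0      # power draw per rack (from the article)
PRICE_PER_KWH = 0.10     # dollars per kilowatt-hour (from the article)
GROWTH = 0.05            # annual growth rate for demand and for price
HOURS_PER_YEAR = 8760

def annual_electricity_cost(racks: int, year: int) -> float:
    """Electricity cost in dollars for `year` (year 1 = baseline),
    assuming demand and price each compound at 5% per year."""
    base = racks * RACK_POWER_KW * HOURS_PER_YEAR * PRICE_PER_KWH
    return base * (1 + GROWTH) ** (2 * (year - 1))

# One rack costs 6 kW * 8,760 h * $0.10 = $5,256 in year one.
print(round(annual_electricity_cost(1, 1), 2))  # 5256.0
```

Even this toy version shows why standing still gets more expensive every year: with demand and price each compounding at 5%, the electricity bill alone rises roughly 10% annually before any new hardware is purchased.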
Data center Y, the second example, by contrast takes steps to curtail inefficient use of servers. It does the same things data center X did in years one and two but, in addition, engages the Uptime Institute for an "Efficient IT Assessment."
Having identified zombie suspects in the assessment, it succeeds in shutting down 10% of the "comatose" or zombie servers in the third year, yielding $2.57 million in electricity savings as 900 pieces of hardware are taken offline. The bigger gain is the $13.36 million saved through reduced hardware maintenance, reduced software licensing costs, and the elimination of hardware asset depreciation.
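The electricity side of that saving is straightforward to estimate. In the sketch below, the 900-server count and the $0.10 per kilowatt-hour rate come from the article; the per-server power draw is a hypothetical figure for illustration only, since the article doesn't state one.

```python
# Back-of-the-envelope estimate of electricity saved by powering off
# zombie servers. The server count (900) and price ($0.10/kWh) are from
# the article; the per-server draw is an assumed, illustrative number.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10  # dollars per kilowatt-hour (from the article)

def annual_electricity_savings(servers_removed: int, kw_per_server: float) -> float:
    """Dollars saved per year by decommissioning `servers_removed`
    machines that each draw `kw_per_server` kilowatts around the clock."""
    return servers_removed * kw_per_server * HOURS_PER_YEAR * PRICE_PER_KWH

# Assuming (hypothetically) 0.5 kW per server including cooling overhead:
print(round(annual_electricity_savings(900, 0.5)))  # 394200
```

Matching the chart's $2.57 million figure would require the institute's actual per-server power and overhead assumptions, which the article doesn't supply; the function simply shows how the estimate scales with server count and draw.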
"The biggest driver of cost savings is reducing IT hardware count," the institute said on its Exorbitant Cost chart.
In the fourth year, data center Y proceeds to identify more comatose and under-utilized servers and consolidates them via virtualization (a step most enterprise data centers have already taken). It runs five virtual servers where there were formerly five physical ones, a five-to-one compression of the physical server footprint. The savings: $15.98 million.
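The five-to-one compression described above amounts to simple division: every five physical hosts collapse into one host running five virtual machines. A minimal sketch, using the example's 30,000-server count as the input (the article doesn't say how many of Y's servers were actually virtualized):

```python
# Sketch of the 5:1 consolidation arithmetic: `ratio` VMs are packed
# onto each remaining physical host. Rounds up so no workload is stranded.

def hosts_after_consolidation(physical_hosts: int, ratio: int = 5) -> int:
    """Physical machines left after packing `ratio` VMs per host."""
    return -(-physical_hosts // ratio)  # ceiling division

print(hosts_after_consolidation(30000))  # 6000
```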
In the fifth year, data center Y implements a data center infrastructure management system, a combination of system management and facilities management. DCIM systems are available from HPE, Schneider Electric and Geist Global. Such a system further optimizes efficient server and electricity use. Data center Y also accelerates its server refresh program, achieving a reduced total of servers and greater density of servers. The savings: $18.65 million.
In the sixth year, according to the Uptime Institute, it saves an additional $1.5 million in IT personnel costs through more effective use of the data center infrastructure management system. The savings are apparently achieved through staff reductions, although the Institute chart refers only to "staff efficiencies." It continues its other optimization trends, yielding a total savings of $6.55 million.
In the seventh year, efforts to better manage data center Y continue to yield gains on multiple fronts: reduced energy costs, server license fees, server maintenance costs, depreciation expenses, IT personnel costs, and other operating expenses, the institute chart indicates. For the first time it acknowledges the capital expense of replacing aging servers with more efficient new ones over the seven-year period: $20 million, a cost data center X doesn't incur. But total savings in the seventh year amount to an additional $8.13 million.
Whatever its viewers may think of the assumptions behind it, the chart is an attempt to illustrate the growing cost of staying the course with a traditional IT approach versus taking aggressive steps to optimize data center operations. Some of Y's savings in operating expenses are offset by the need to make a $20 million capital outlay on new servers.
But the institute's main point is still driven home: finding and eliminating under-utilized and zombie servers reduces costs on several fronts. Optimizing and upgrading the servers that remain in use realizes further savings that, over a seven-year period, more than offset the required capital expense.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ...