Data Center TCO Factors You Can’t Afford to Overlook

(SPONSORED) As we adopt hybrid work environments for good, here's how storage leaders can improve data center success and longevity while lowering total cost of ownership.

November 15, 2021

6 Min Read

(SPONSORED) It’s no surprise the pandemic has accelerated digital transformation for data centers around the world. With the stress of managing continuous data growth, the surge to the cloud, and today’s diverse workloads and applications, total cost of ownership (TCO) remains critical … and it’s complex.

How efficiently and effectively enterprises manage data and deploy data infrastructure often determines their success and longevity; increasingly, it also determines who thrives and who merely survives.

At Western Digital, we work with the largest search, e-commerce, social media, and other cloud giants around the globe. Here’s what they tell us:

  • First and foremost, scaling efficiently and effectively is paramount to their success.

  • TCO influences almost every decision they make.

  • And achieving the lowest possible TCO fuels their revenue, services, and innovation.

Don’t shop on price alone. You must peel back the layers, because acquisition cost is just one part of TCO. Many other aspects drive TCO, including location, density or space, performance, quality and reliability, cooling, weight, utilization, QoS, infrastructure management and more. And these can vary widely depending on the type of data center.

There’s no one-size-fits-all approach to TCO and which factors are most important. However, here are some insights from the largest cloud and data center customers on the TCO factors you can’t afford to overlook.

Don’t Underestimate the Impact of Energy

Every watt matters. According to the U.S. Department of Energy, some of the world’s largest data centers consume more than 100 megawatts of power capacity, or enough to power 80,000 U.S. households. Cooling accounts for approximately 30% to 50% of total energy consumption in a data center.

One way to save on power costs with storage is to use helium-filled hard drives instead of air-filled hard drives. Because helium is one-seventh the density of air, there is less turbulence inside the drive. This delivers advantages such as fitting more disks in the same 3.5-inch form factor, less drag on the spinning disks, and ultimately lower power. When comparing 18TB helium-filled hard drives with 10TB air-filled drives, the helium drives deliver greater power efficiency: 30% lower operating power and a 61% reduction in watts-per-TB.
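The watts-per-TB arithmetic behind that comparison is easy to reproduce. In this sketch, the absolute wattage for the air-filled drive is a hypothetical figure chosen for illustration; only the 30% operating-power delta and the drive capacities come from the comparison above:

```python
def watts_per_tb(operating_watts, capacity_tb):
    """Operating power normalized by capacity."""
    return operating_watts / capacity_tb

# Hypothetical operating power for the 10TB air-filled drive;
# the 18TB helium drive draws 30% less, per the comparison above.
air_w, air_tb = 7.0, 10
helium_w, helium_tb = air_w * (1 - 0.30), 18

reduction = 1 - watts_per_tb(helium_w, helium_tb) / watts_per_tb(air_w, air_tb)
print(f"watts/TB reduction: {reduction:.0%}")  # → 61%
```

Note that the 61% figure falls out of the two inputs together: the helium drive both draws less power and spreads it across 1.8x the capacity.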

Make the Most of Your Space

As we’ve seen in the news, many hyperscale data centers are moving to cooler climates or wide-open spaces to reap TCO benefits. Even if you do not have the luxury of physically moving your data center, slot tax and floor space should be top-of-mind, as your current real estate comes at a premium. Every floor tile and square foot must be optimized for maximum utilization.

Storage slot tax is a critical TCO factor for cloud, on-premises, and colocation data centers, and can encompass a number of things. For some, it’s the cost of the chassis, the rack, power supplies, and networking. For others, it’s literally all of the infrastructure required to take a storage device and make it accessible in the data center.

Slot tax can be extremely important with colocation. You might get a certain physical amount of floor space with a set amount of power. If you put in equipment that consumes too much power, you may have to rent additional floor space (that you don’t really need) simply because you need another power drop.
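That power-versus-space tension can be made concrete with a back-of-the-envelope rack count. All numbers below are hypothetical colocation figures, not vendor data; the point is that whichever constraint binds first (slots or the power drop) dictates how much floor space you rent:

```python
import math

def racks_needed(total_drives, drives_per_rack, watts_per_drive, watts_per_rack):
    """Racks required: each rack is limited by whichever runs out
    first, physical slots or its power drop."""
    by_space = math.ceil(total_drives / drives_per_rack)
    drives_per_rack_by_power = watts_per_rack // watts_per_drive
    by_power = math.ceil(total_drives / drives_per_rack_by_power)
    return max(by_space, by_power)

# Hypothetical deployment: 1,000 drives, 96 slots per rack,
# 8 W per drive, 600 W power drop per rack.
print(racks_needed(1000, 96, 8, 600))  # power-bound: 14 racks, not 11
```

With these assumed numbers, space alone would allow 11 racks, but the 600 W drop caps each rack at 75 drives, forcing 14 racks and three tiles of floor space you don’t otherwise need.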

I also recommend you get the highest capacity storage you can. If you don’t, you’ll risk having to buy more later, whether that’s more storage subsystems, more racks, or more servers. Adding more later increases operational costs of each device, along with introducing more points of failure, which could reduce reliability and impact SLAs.
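A quick device count shows why capacity per slot matters. The 10 PB target here is a hypothetical deployment size; the drive capacities match the helium/air comparison earlier in the article:

```python
import math

def devices_needed(target_pb, drive_tb):
    """Drives required to reach a capacity target (1 PB = 1,000 TB)."""
    return math.ceil(target_pb * 1000 / drive_tb)

# Hypothetical 10 PB deployment: fewer, larger drives mean fewer
# slots to pay tax on, fewer failure points, and less to manage.
for tb in (10, 18):
    print(f"{tb}TB drives: {devices_needed(10, tb)}")  # 1000 vs. 556
```

Nearly halving the device count also halves the downstream slot tax: chassis, rack units, power supplies, and network ports all scale with the number of devices, not the number of terabytes.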

Get the Most Out of Your Data Infrastructure

Today’s increased and diverse workloads impact how you access and manage your data. When it comes to optimizing TCO, you have a choice: you can either add more servers and infrastructure using traditional approaches, or you can modernize your infrastructure by embracing new architectures that more efficiently and effectively maximize the capacity and performance of your storage.

One of those new approaches is an open-source, standards-based initiative called Zoned Storage. The Zoned Storage Initiative gives developers and architects the tools and resources to intelligently place data on both HDD and SSD media, and to optimize for better performance, lower latency, predictable QoS and, most importantly, higher densities and lower TCO.
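A toy model can make the zoned-storage contract concrete. This is an illustrative sketch of zone semantics only, not the actual ZNS command set or any real library API: each zone accepts writes only at its write pointer, and space is reclaimed by resetting a whole zone at once, which is what lets the device trade random-write flexibility for density:

```python
# Toy model of zoned-storage semantics (not a real ZNS/SMR driver).
class Zone:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.write_pointer = 0   # next block that may be written
        self.blocks = []

    def append(self, data):
        """Writes must land sequentially at the write pointer."""
        if self.write_pointer >= self.capacity:
            raise IOError("zone full: reset before reusing")
        self.blocks.append(data)
        self.write_pointer += 1
        return self.write_pointer - 1  # block address just written

    def reset(self):
        """Reclaim the whole zone at once, returning it to empty."""
        self.blocks.clear()
        self.write_pointer = 0

zone = Zone(capacity_blocks=4)
addr = zone.append(b"record-0")  # host software places data zone by zone
```

Because the host, not the drive, decides which data lands in which zone, software can group data with similar lifetimes together, which is where the density and QoS gains come from.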

Another way to optimize resources is to shift to a shared model for composing compute, network and storage resources as needed. Composable disaggregated infrastructure (CDI) gives data center architects flexibility to compose what they need when they need it from a shared pool of resources to support a specific workload. Being able to get resources “on the fly” and as needed means you can be nimble and avoid unnecessary spending to keep pace with change and optimize IT resources. NVMe over Fabrics (NVMe-oF) serves as an on-ramp to CDI as it allows IT to compose, orchestrate and share flash storage over fabrics.
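The compose-and-release cycle at the heart of CDI can be sketched as allocation from a shared pool. Everything here is hypothetical for illustration (pool sizes, resource names, function names); real CDI orchestration happens through fabric-attached hardware and management software, not a dictionary:

```python
# Minimal sketch of the CDI idea: draw resources from a shared pool
# on demand, then return them when the workload finishes.
pool = {"cpu_cores": 256, "gpus": 16, "nvme_tb": 500}  # hypothetical pool

def compose(pool, request):
    """Reserve a slice of the shared pool for one workload."""
    if any(pool[k] < v for k, v in request.items()):
        raise RuntimeError("insufficient resources in pool")
    for k, v in request.items():
        pool[k] -= v
    return dict(request)

def release(pool, allocation):
    """Return a workload's resources to the pool for reuse."""
    for k, v in allocation.items():
        pool[k] += v

job = compose(pool, {"cpu_cores": 32, "gpus": 2, "nvme_tb": 40})
```

The TCO argument is visible even in this toy: because idle resources flow back into the pool instead of sitting stranded in a fixed server, you provision for aggregate demand rather than for each workload’s peak.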

It’s important to align the performance and capabilities of your storage system with the workload and its specific bandwidth, latency, and data availability requirements. For example, if your business wants to gain greater insight and value through AI, your storage system should be designed to support the accelerated performance and scale requirements of analytics workloads.

Understanding your workloads today and in the future and investing in “the right” mix of HDD- and SSD-based storage systems will give you the highest level of optimization and the best overall TCO.

Keep in mind that what worked in the past may not be enough. Now is the time to consider new innovations and architectures.

No One Size Fits All, But Consider All to Manage TCO

TCO is complex. There are many factors, as well as management and maintenance costs, that need to be considered in a TCO equation. As your organization optimizes its data center both today and into the future, take a multi-faceted approach that weighs floor space, density, performance, quality and reliability, cooling, weight, utilization, QoS, infrastructure management and more to create more efficient and effective data infrastructure with the lowest possible TCO.

Lowering TCO strategically will help you scale for the future. The quicker you can expand and scale, the more you can keep up with revenue opportunities. No matter how you look at it, TCO helps improve efficiencies and can add value to your bottom line. Optimizing TCO frees up resources that can fuel business growth. Our large cloud customers are actively finding new business opportunities by taking the gains made in one place and investing into others. And as they grow, storage is an essential part of their ability to scale to new heights.


Ihab Hamadi is a Fellow and Head of Systems Architecture at Western Digital. He is a renowned expert in the areas of compute, data storage, networking, and virtualization technology and solutions. Before joining Western Digital, Ihab was a Distinguished Technologist at HPE Aruba, leading the development teams that established the OpenSwitch project and created the ArubaOS-CX network operating system. Ihab has also held a wide range of technology leadership positions with Broadcom, Emulex, and other companies. He earned an MS degree in computer engineering from the University of Denver.
