Simple by Design: An Overlooked Strategy for Reducing Data Center Complexity
Looking for ways to reduce complexity in your data center? Prioritizing hardware manageability can go a long way toward streamlining operations.
Sponsored by Dell Technologies
Managing a hyperscale data center is almost unimaginably complex.
First, you have the problem of scale. In the US, the typical hyperscale data center occupies 100,000 square feet and houses 100,000 servers, and the largest facilities sprawl across millions of square feet. Simply keeping track of all the equipment in a space that large can be a daunting task.
Complicating matters, the equipment housed in the data center is highly diverse. In addition to servers, it includes storage systems, networking equipment, power and cooling systems, monitoring devices, security infrastructure, and much more. This hardware typically comes from multiple vendors and is often purchased over time, resulting in multiple generations of models even when the equipment comes from a single vendor.
While running all this equipment, data center managers face a lot of competing demands. They need to balance the need for performance against power and cooling requirements. On top of that, they face pressures to keep costs low while complying with ever-changing regulatory requirements and keeping up with changing technology. Data center managers also must allocate space and resources effectively while recruiting, training, and managing highly qualified staff. Plus, they have to perform that difficult juggling act while meeting service-level agreements, maintaining high availability, and securing data against relentless cyberattacks.
Unfortunately, things are only getting worse. Complexity in a hyperscale data center is only increasing as demand skyrockets for cloud services, artificial intelligence, and other advanced technologies. Sig Nag, research vice president at Gartner, says, “Data center operations will only increase in complexity as organizations move more diverse workloads to the cloud, and as the cloud becomes the platform for a combinatorial use of additional technologies such as edge and 5G, to name a few.”
In these challenging environments, managers are always looking for new strategies to reduce complexity. One very effective, yet sometimes overlooked, tactic is deploying servers designed for manageability. That means selecting hardware with five key capabilities:
1. Embedded Management Tools
When your server hardware and server management software have been designed to complement each other, they just work better. You are far less likely to run into integration issues, and far more likely to have a smooth deployment.
Ideally, that management software should be available out of the box with no extra installation necessary. It needs to be scalable and flexible, while also including advanced performance monitoring and reporting capabilities. And it must be highly secure with integrated features that control access and protect the system from attacks.
2. Open Platform
Some servers rely on management tools that are based on proprietary platforms that lock you into a particular vendor. That adds to complexity and makes it more difficult to manage your systems.
Instead, look for servers that leverage open platforms, like OpenBMC. In general, open solutions are more cost-effective than proprietary solutions, as well as being more flexible and customizable. Because they are based on industry standards, they also integrate more easily with other hardware and software, which is critical in highly complex environments like hyperscale data centers. An open platform also allows you to make hardware choices based on capabilities and performance rather than limiting you to a smaller group of options that your systems management software can handle.
3. Comprehensive Remote Management Capabilities
When your servers are spread across hundreds of thousands or even millions of square feet, remote management is an absolute necessity. You want to be able to perform as many tasks as possible without being physically present at the server.
That means you need the ability to remotely monitor, configure, and troubleshoot server hardware independent of the operating system or server status. You need to be able to install BIOS and firmware updates, manage power, monitor health, and access diagnostics. And the software should automatically log events and send alerts.
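In practice, this kind of OS-independent monitoring is typically exposed by the baseboard management controller over a standard HTTPS interface such as DMTF Redfish, which OpenBMC and most vendor controllers support. As a rough sketch, the snippet below summarizes a Redfish `ComputerSystem` document into the fields an operator would alert on; the payload values are illustrative, not taken from any real server.

```python
import json

# A trimmed, illustrative example of the JSON a BMC might return from
# GET /redfish/v1/Systems/<id>. Field names follow the Redfish schema;
# the values are hypothetical.
SAMPLE_RESPONSE = json.dumps({
    "Id": "system",
    "PowerState": "On",
    "Status": {"State": "Enabled", "Health": "OK"},
    "BiosVersion": "2.19.1",
})

def summarize_health(payload: str) -> dict:
    """Reduce a Redfish ComputerSystem document to the fields an
    operator typically monitors: power state, rolled-up health,
    and firmware (BIOS) version."""
    system = json.loads(payload)
    return {
        "id": system.get("Id", "unknown"),
        "power": system.get("PowerState", "Unknown"),
        "health": system.get("Status", {}).get("Health", "Unknown"),
        "bios": system.get("BiosVersion", "Unknown"),
    }

if __name__ == "__main__":
    print(summarize_health(SAMPLE_RESPONSE))
```

Because the request goes to the management controller rather than the host OS, the same check works whether the server is booted, hung, or powered off, which is exactly the independence described above.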
4. Automated Lifecycle Management
Most hyperscale data centers are cloud environments, where resources need to scale up or down to meet demand. That requires the ability to provision and deploy servers, operating systems, and applications quickly. Automation is also key for cybersecurity policy deployment, monitoring, and alerting.
You should look for hardware with an integrated lifecycle controller that allows you to perform these tasks automatically as needed.
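One routine lifecycle task this enables is automated firmware rollouts: query the fleet's inventory, compare each server's installed version against a baseline, and update only the stragglers. The sketch below shows that decision logic under simplified assumptions; the `Server` records and version scheme are hypothetical, and in a real deployment the inventory would come from the lifecycle controller's API rather than a hard-coded list.

```python
from dataclasses import dataclass

# Hypothetical inventory records for illustration only; real data
# would be pulled from the lifecycle controller's inventory API.
@dataclass
class Server:
    name: str
    firmware: str  # installed firmware version in dotted "major.minor" form

def needs_update(server: Server, baseline: str) -> bool:
    """Compare dotted version strings numerically, so that "2.10"
    correctly sorts after "2.9" (a plain string compare would not)."""
    installed = tuple(int(part) for part in server.firmware.split("."))
    target = tuple(int(part) for part in baseline.split("."))
    return installed < target

def rollout_plan(fleet: list[Server], baseline: str) -> list[str]:
    """Return the names of servers whose firmware trails the baseline."""
    return [s.name for s in fleet if needs_update(s, baseline)]

fleet = [Server("rack1-node01", "2.9"), Server("rack1-node02", "2.10")]
print(rollout_plan(fleet, "2.10"))  # only rack1-node01 needs the update
```

At hyperscale, the value of an integrated lifecycle controller is that this comparison and the update itself run automatically across thousands of nodes, instead of being a manual, per-server chore.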
5. Integration with Other Management Tools
Of course, the embedded management software that comes with your server isn't the only software required to manage a data center. Make sure that the embedded software that comes with your hardware integrates with the other tools you use.
Even better is if that embedded software is part of a larger management portfolio that gives you a comprehensive range of capabilities. Centralized control and real-time monitoring can go a long way toward streamlining operations, minimizing downtime, and enhancing quality of service in complex environments.
Servers also become more manageable when they have other characteristics necessary for hyperscale data centers, like dynamic scalability, robust security, sustainability, high performance, and smart cooling. To learn more, read the white paper, "Don’t let the challenges of scale-up keep you down."