When you have a big problem, it’s often said the best thing to do is to break it down into smaller parts and address each one individually. We’ve seen this philosophy in action over the past several years as the fundamental concept of the data center has undergone radical transformation.
Our Shifting Computing Paradigm
Much of the change we’ve seen began with a shift in the way we look at things. From the earliest days we approached computing from a monolithic paradigm. Big processors. Big storage. Big everything. We packed all the intelligence into the hardware and firmware of our big machines so they could run bigger and faster.
Then our paradigm shifted.
Suddenly, it seemed obvious that smaller -- and more granular -- was better. Virtualization allowed us to create entire servers in software that could then run on hypervisors. Big hardware, yes, but small, purpose-driven virtual machines, each containing almost everything it needed to function. Just as early programming languages enabled subroutines to be developed and assembled into larger solutions, and later platforms provided libraries to eliminate the need to code common functions over and over, Web parts made it possible to assemble entire Web-based solutions simply by issuing calls to existing chunks of code.
The next iteration of that continuing march toward smaller, more compact code is the development of microservices and the ability to package much smaller segments of code, along with all the libraries and binaries they need, into containers. Properly executed in standards-based open platforms, these containers can be highly portable, perhaps more so than virtual machines, enabling data centers to rapidly deploy and redeploy them wherever and whenever they may be needed.
This also introduces a level of modularity that makes applications far more durable and sustainable. When a monolithic application fails, the entire application fails. Work stops. With microservices, should any given microservice develop issues, other similar microservices can supplement or even replace the damaged code. Self-healing and self-sustaining software becomes truly viable.
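The failover idea above can be sketched in a few lines. This is a hypothetical illustration, not any particular framework's API: the replica functions and request shape are invented for the example. The point is that a failed instance is skipped rather than stopping work.

```python
# Hypothetical sketch: route a request across redundant replicas of the
# same microservice so that one failing instance does not halt the work.

def call_with_failover(replicas, request):
    """Try each replica in turn; return the first successful response."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except Exception as exc:  # a failed instance is skipped, not fatal
            errors.append(exc)
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Two instances of the same (invented) pricing service:
def broken_instance(request):
    raise ConnectionError("instance-a is down")

def healthy_instance(request):
    return {"price": 42, "served_by": "instance-b"}

result = call_with_failover([broken_instance, healthy_instance], {"sku": "X1"})
print(result["served_by"])  # the healthy replica answers
```

In a real deployment this logic typically lives in a load balancer or service mesh rather than in application code, but the principle is the same: identical, interchangeable instances absorb the failure of any one of them.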
How Do You Break Down Big Things Like Storage?
One of the primary resources required for almost all computing is storage. Developers anticipate pain when they move their software from development to operations: the storage they configured in Dev will need to be completely reinvented in Ops.
What is old has become new again. By moving the intelligence that was long ago integrated into storage hardware back out of it, we enable software-defined storage. This decoupling has important ramifications, giving IT administrators new freedom to build and grow storage that evolves with their business.
Software-defined storage removes the boundaries of drive size and other hardware specifics and abstracts them into variables that can easily be defined and redefined to provide massive scalability and extensibility. Ultimately, software-defined storage will bring to the virtualized data center the same flexibility and range that server virtualization has brought to servers.
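The abstraction described above can be sketched conceptually. This is an invented illustration, not the interface of any real software-defined storage product: the class and method names are assumptions. It shows the core idea that heterogeneous drives become variables behind a single logical pool, and scaling out is just registering more of them.

```python
# Hypothetical sketch: a software-defined pool that hides individual
# drive sizes and hardware specifics behind one logical capacity.

class StoragePool:
    def __init__(self):
        self.backends = {}  # backend name -> capacity in GiB

    def add_backend(self, name, capacity_gib):
        """Grow the pool by registering another drive or node."""
        self.backends[name] = capacity_gib

    def total_capacity_gib(self):
        """Consumers see one aggregate number, not individual drives."""
        return sum(self.backends.values())

pool = StoragePool()
pool.add_backend("node1-hdd", 4000)
pool.add_backend("node2-ssd", 1000)
pool.add_backend("node3-hdd", 4000)  # scaling out is just another call
print(pool.total_capacity_gib())  # 9000
```

Real systems layer replication, placement, and failure handling on top of this idea, but the administrative model is the same: capacity is a definable, redefinable variable rather than a fixed property of a box.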
Containers add a new degree of freedom to the way applications -- and in the future, infrastructure -- are deployed. Much work is going into containerizing not just the application layer, but also the underlying storage layer. Software-defined storage deployed inside containers, running alongside containers that house applications, is the next step toward the vision of storage itself delivered as a service.
Visit Red Hat for more information on the data center benefits of packaging software-defined storage and microservices into containers.

Irshad Raihan is a product marketing manager at Red Hat Storage. Previously, he held senior product marketing and product management positions at HP and IBM. He is based in Northern California and can be reached on Twitter @irshadraihan.