This is the question I’m helping numerous customers answer right now. Development operations (DevOps) and the creation of a continuous development cycle are relatively new concepts. Ever since development leaders saw that broken release cycles, inconsistent coding practices, and heterogeneous tools actually slowed the entire process down, we’ve seen massive movement toward today’s modern DevOps culture.
Before we go on, I need to touch on the concept of legacy. At a high level, this is any process or operation that requires quite a bit of manual intervention and a lot of complex reactive support. This can be code, business or IT processes, and even IT infrastructure. Gartner recently pointed out that legacy infrastructure and operations practices are not sufficient to meet the demands of the digital business.
As we evolve into a data-driven society, digital transformation will require agility and velocity that outstrip classical architectures and practices.
This absolutely applies to DevOps and the creation of new, innovative services. If you get the chance, take a look at my DevOps 101 article, as it covers the core elements of adopting continuous innovation through DevOps. It’s that continuous part that sometimes throws people off. As I just mentioned, working with evolving DevOps practices will absolutely require a new level of agility and speed to deployment, which means organizations must leverage technologies that bring that agility to the business. Remember, DevOps isn’t just a part of your technology stack. It’s also an engine that directly impacts your ability to compete in the market and affects your entire organization.
So, why not leverage platforms and tools that not only save you money but improve your go-to-market capabilities?
This is where cloud comes in.
By offloading DevOps, testing, and your CI/CD (continuous integration/continuous delivery) pipeline into the cloud, you unlock the most agility possible for your development ecosystem. Best of all, you simplify your overall technology stack and get the chance to save money! For example, event-driven architectures let you consume resources only while you’re using them, so DevOps professionals don’t have to keep idle VMs running just to execute a piece of code. Make that microservice or event-driven infrastructure available only when you need it.
I want to give you some specific examples. Let’s use Google Cloud Platform (GCP) as our general use case.
Leveraging an on-demand compute engine. AWS has EC2 and Google has its Compute Engine. In both cases, you can launch VMs on demand, when you need them. You can predefine sizes and even leverage Custom Machine Types for very specific needs. You also have many options around the resources you use, from solid-state drives to persistent disks delivering consistent and predictable performance. From a DevOps perspective, you can deliver applications, code, and services with automatic scaling and even isolate services for specific use cases.
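As a sketch of how a Custom Machine Type fits into an on-demand VM request, the snippet below builds the kind of request body the Compute Engine API expects for an instance insert. The instance name, zone, and image are illustrative placeholders, and this only constructs the body; actually creating the VM would require an authenticated API client.

```python
# Sketch: building a Compute Engine instance request body with a
# Custom Machine Type (vCPUs and memory sized to the workload) and a
# persistent SSD boot disk. All names here are placeholders.

def build_instance_body(name, zone, vcpus=2, memory_mb=4096):
    """Return a request body for a Compute Engine instance insert.

    Custom Machine Types are expressed in the machineType URL as
    zones/<zone>/machineTypes/custom-<vCPUs>-<memoryMb>.
    """
    return {
        "name": name,
        "machineType": f"zones/{zone}/machineTypes/custom-{vcpus}-{memory_mb}",
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                # Persistent SSD for consistent, predictable performance.
                "diskType": f"zones/{zone}/diskTypes/pd-ssd",
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }

body = build_instance_body("ci-runner-1", "us-central1-a")
print(body["machineType"])  # zones/us-central1-a/machineTypes/custom-2-4096
```

Because the body is just data, your DevOps tooling can generate, review, and version it like any other artifact before anything is actually provisioned.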
Managing your DevOps deployments. GCP has something called Cloud Deployment Manager. There, you can specify all the resources your application might require in a declarative format using YAML; you can also use Python or Jinja2 templates. This removes the complex, painstaking steps usually required for a good DevOps deployment and replaces them with a process that lets you predefine what the final deployment will look like. From there, GCP leverages the necessary tools and processes for you. When done right, this type of architecture will scale, be repeatable, and become much easier to manage.
Your DevOps Cloud Console. It’s certainly a challenge to manage a large DevOps practice when components are scattered all over the place. In GCP, you have something called the Cloud Console, which becomes your single window into your entire cloud DevOps practice. That means visibility into datastores, networking policies, web applications, data analytics, VMs, developer services, and much more. The console also lets you control everything from rollbacks to monitoring across your entire DevOps platform. It’s tools like this that help you create that continuous delivery cycle, but in the cloud!
Event-driven, serverless compute. This is one of my favorite tools: you pay only while your code runs. There are no "servers" to manage, patch, or even update. In the world of GCP, you can leverage Cloud Functions to make this happen. You can scale, build in fault tolerance, and even extend into other cloud services. The biggest point is that there are no idle services simply sitting in your data center. This is an amazing fit for application backends, APIs, IoT backends, and even mobile backend solutions.
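An HTTP-triggered Cloud Function in Python can be as small as the sketch below: a single handler that runs (and bills) only while a request is in flight. The Python runtime passes a Flask-style request object; the `StubRequest` class is just a local stand-in so we can exercise the handler without deploying anything.

```python
# Sketch of an HTTP-triggered Cloud Function. There is no VM to patch
# or keep warm; the platform invokes the handler per request and the
# function scales to zero when idle.

def handle_event(request):
    """Greet the caller using an optional JSON payload."""
    payload = request.get_json(silent=True) or {}
    name = payload.get("name", "world")
    return f"Hello, {name}!"

# Local stand-in for the Flask-style request object the runtime passes.
class StubRequest:
    def __init__(self, data):
        self._data = data

    def get_json(self, silent=False):
        return self._data

print(handle_event(StubRequest({"name": "DevOps"})))  # Hello, DevOps!
```

Deployed behind an HTTP trigger, the same handler would serve an application backend or API endpoint with no idle infrastructure sitting around between calls.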
Now, once your DevOps practice is in the cloud, you open the door for so many other opportunities. For example, you can start new initiatives around data ingestion into a data lake to process various data types. Or, you can quickly integrate with other cloud services to spin up a new business segment. The point is that you are now agile and can deploy at the speed of a digital market. GCP aside, it’s important to explore the cloud landscape to ensure you’re working with the right type of model.
Don’t be afraid to go on this journey. You can dip a toe in to see which of the services you have now will work well in the cloud. However, you do need to get started. Leading organizations are taking advantage of cloud services to revolutionize their DevOps practices. All of this helps lower cost, simplify deployment, and create new competitive advantages.