Technology has a way of being disrupted the moment you grow accustomed to a new set of processes and procedures. Dealing with change in enterprise IT is simply the nature of the business, and DevOps may be the latest victim of disruption. The application delivery process that you have implemented or are working toward implementing will likely have to be retooled. Or it may need to be scrapped altogether, thanks to a shift towards serverless computing.
The philosophy of DevOps revolves around tight collaboration between software developers and the infrastructure administrators who control the deployment and management of storage, servers, and networks. The idea is to keep communication between the two groups close in order to streamline and speed up the continuous delivery of applications. Because software relies so heavily on the proper configuration of the infrastructure it runs on, alignment with the IT operations side of the house is critical: rapid changes in application code demand equally rapid adjustments to server and network components.
DevOps has been around for over a decade, but it's really only in the past five years that it has gained significant momentum in the enterprise. One reason for the slow adoption is that DevOps is quite a challenge to implement properly. Many IT departments found that replacing legacy processes and ingrained IT culture required a far longer transition than originally expected. For those of you who are still struggling to adapt to a DevOps model, a serverless architecture could prove an easier way to gain the benefits of DevOps without the steep learning curve.
One of the main reasons IT operations is so important to the DevOps process is that the servers applications run on require manual processes to provision and to scale, both horizontally and vertically. So when developers make a change to an application that requires changes to the underlying infrastructure, they must call on the operations team to make the necessary adjustments in a timely manner.
With a serverless architecture, there is no need to coordinate the provisioning and scaling of servers; it's performed automatically for you. This means developers no longer need to spend precious time coordinating infrastructure changes with IT operations, whether for new deployments, scaling out existing applications, or capacity planning. Instead, that time can be spent doing what developers do best: writing code.
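To make that concrete: on a serverless platform, a developer ships only a function, and the platform provisions and scales the compute behind it on demand. The sketch below assumes AWS Lambda's Python handler convention; the function name and event fields are illustrative, not tied to any particular application.

```python
import json


def handler(event, context):
    """Entry point invoked by the serverless platform.

    There is no server to provision or scale here: the platform
    spins up (and tears down) instances of this function
    automatically in response to incoming events.
    """
    # Pull an illustrative field from the incoming event payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Because the handler is just a plain function, developers can exercise it locally (for example, `handler({"name": "dev"}, None)`) without touching any infrastructure at all, which is exactly the coordination overhead that disappears in this model.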
Of course, all this sounds great. But what's the downside? Well, unfortunately for many, moving to a fully serverless architecture will take a tremendous amount of time, money, and effort. Most organizations whose applications run in on-premises data centers, as well as in public IaaS and PaaS clouds, will have to rework those applications so they can operate in a serverless environment such as AWS Lambda or IBM OpenWhisk. So if you thought you could scrap DevOps completely in favor of a serverless architecture, you're going to be disappointed.
Instead, the disruption that serverless architectures bring to DevOps is likely to be a slower evolution for well-established enterprises. While smaller startups can pivot to serverless much more quickly, businesses reliant on legacy applications will take far longer to transition. But don't fret: much like the move from private data centers to cloud computing was a slow yet steady migration, your IT strategy for serverless frameworks should follow a similar path.
Whenever possible, new applications should be designed and built using cloud-native design strategies, with thought put into whether the app should be slated for a serverless architecture. If not, the app should at least be built so it can easily be ported to a serverless environment in the future. With careful planning, you'll find that before long the clear majority of your applications will be running serverless, which ultimately means less time wasted by developers in a traditional DevOps model.