In the rush to revamp, rethink, and adopt new development strategies, it may be necessary to pump the brakes. As many organizations weigh moving towards a serverless world, there are concerns about how such plans take shape. Individuals and teams might take steps that make sense solely from their perspectives but could also lead to unanticipated issues.
“There is a lot of movement towards serverless and in particular containers,” said Lori MacVittie, principal technical evangelist for F5 Networks. She ran a session on this topic at Interop, diving into the issues that arise when developers act before the infrastructure is ready.
Chaos and containers
Change is already well underway at organizations with the rise of containerization. “Most organizations have a few, if not more, containers running in production at this point and many more in development,” she said. Such organizations might use containers on premises, which can serve as a model for delivering SaaS-like capabilities to their developers, she added.
Containers might be deployed because it is easier, MacVittie said, compared with configuring a virtual network to ensure that everything is bridged correctly. “It’s very confusing, even for someone who understands networking.” This movement has led to changes in expectations, she said, where the mindset is to deploy applications in containers and hope it all works.
“The trouble is that these activities do not happen in a vacuum,” MacVittie said. There might be an assumption among some developers that mainframes and traditional networks have disappeared, but she said that does not reflect reality. Most IT teams support a multitude of custom applications alongside traditional ones, she said. “There is a collision happening because there’s two different networking models that we’re working with.”
When needs clash
A traditional network is typically designed with a clear data path that can be controlled, is reliable, and makes it easy to see where everything is. The newer model lacks the engineer’s touch for design, MacVittie said. “It’s fluid with multiple data paths that get you to the same kind of service.” Rather than building to prevent failure, teams tend to expect it and design around it.
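As a rough illustration of the design-around-failure mindset MacVittie describes, the sketch below tries several equivalent data paths rather than relying on one engineered route. The endpoint names and request function are hypothetical, not anything F5 ships.

```python
# Illustrative sketch: the newer model assumes any single path can fail
# and falls through to an equivalent one. All names here are hypothetical.

def call_service(endpoints, request_fn):
    """Try each equivalent endpoint in turn; fail only if every path fails."""
    last_error = None
    for endpoint in endpoints:
        try:
            return request_fn(endpoint)
        except ConnectionError as err:
            last_error = err  # this path failed; try the next one
    raise RuntimeError("all data paths failed") from last_error

# Hypothetical usage: two of three replicas are down; the call still succeeds.
def fake_request(endpoint):
    if endpoint != "replica-3":
        raise ConnectionError(endpoint)
    return f"ok from {endpoint}"

print(call_service(["replica-1", "replica-2", "replica-3"], fake_request))
```

The point is the shape, not the code: no one path is trusted, so failure of any one of them is routine rather than an outage.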
These differences can be found in many different layers. Typically, an application has dedicated resources, MacVittie said. Traditional networks feature large, scalable, high-capacity devices that can support multiple applications at the same time. “Basically, we have something very controlled meeting something very chaotic,” she said. “Both have their purpose and need to be served.”
Security vs. speed of deployment
It is necessary to sort out such anarchy to ensure the security of other applications, she said. Lateral movement among systems, from one app to another, becomes a risk if there is no control. This can lead to pushback when security is raised in relation to deployment, MacVittie said. Some organizations deploy even when they know they have vulnerable containers, she said, and then wonder why they had a security incident. “You pushed out vulnerable code and pushed back against anything that’s secure. Speed is trumping security.”
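One way to keep speed from trumping security is a simple deployment gate that refuses to ship an image flagged with high-severity findings. The sketch below is hypothetical; the severity levels, finding format, and threshold are placeholder assumptions, not any particular scanner's output.

```python
# Illustrative sketch of a deployment gate: block the deploy if a scan
# reports any finding at or above a chosen severity. All data is hypothetical.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def may_deploy(scan_findings, block_at="high"):
    """Return True only if no finding reaches the blocking severity."""
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in scan_findings)

# Hypothetical scan results: one critical finding blocks the deploy.
findings = [
    {"id": "CVE-XXXX-0001", "severity": "medium"},
    {"id": "CVE-XXXX-0002", "severity": "critical"},
]
print(may_deploy(findings))
```

A gate like this makes the trade-off explicit: shipping the vulnerable container requires deliberately lowering the threshold rather than quietly pushing back on security.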
Models that encourage speed, constant change, and chaos may leave little room for security and reliability. Finding a balance can be particularly tricky, MacVittie said, given there is no perfect answer that fits all. “There is no single container-based instance of application that is going to defend against a DDoS attack, especially if it’s volumetric.” Defending against one, she said, is a shared responsibility across the network.
Identifying potential issues
MacVittie said teams should watch for conflicting deployment schedules, where the rate of change is too high to ensure the stability and security of the core network. If developers keep changing a service because they keep changing their app, she said, it can be disruptive to the core network. That app may be a candidate to move closer to the container environment, or into it, she said.
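A team could surface such candidates mechanically by tracking how often each service changes. The sketch below is a hypothetical illustration; the service names, deploy counts, and threshold are invented for the example.

```python
# Illustrative sketch: flag services whose rate of change is too high for the
# shared core network, making them candidates for the container environment.
# Threshold and deploy counts are hypothetical.

def flag_high_churn(deploys_per_month, threshold=8):
    """Return services changing faster than the core network can safely absorb."""
    return sorted(s for s, n in deploys_per_month.items() if n > threshold)

deploys = {"billing-api": 2, "mobile-backend": 14, "legacy-erp": 0}
print(flag_high_churn(deploys))
```

Here the fast-moving app stands out immediately, while the stable ones stay where the controlled, traditional model serves them well.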
There can also be alignment conflicts, where applications perform best on different versions of the infrastructure. Making changes to favor one app can break another in ways that a patch will not fix. If such apps are not handled carefully, the infrastructure can seem to unravel as different threads are pulled in different directions. “You may want to consider moving that to a separate, per-app type of architecture,” MacVittie said.