Intel Shares Its Transformation Strategy at DevOps World
Changing human behavior and leadership support were integral to driving a more agile approach to software and firmware development.
At last week’s DevOps World online conference, Peter Tiegs, principal engineer with Intel’s client group, and Lynn Coffin, senior technical program manager, discussed the strategy they adopted for a DevOps transformation of the development cycle. Opening up their playbook for the event hosted by CloudBees, Tiegs and Coffin described how Intel made technical changes and embraced new mindsets to meet goals for efficiency at scale.
Intel obviously has plenty of skilled personnel and resources, but even so, the company did not simply dive into its DevOps transformation. “You don’t have to have all of the answers and know everything to get started,” Coffin said. It is not necessary to start with a big push, she said, suggesting that DevOps can be adopted in smaller, manageable bites. Getting teams willing to buy into change was essential to getting things rolling, Coffin said.
Such support is necessary, she said, because of the challenges that come with changing human behavior. It is important to explain to teams what the road ahead will look like in order to achieve DevOps transformation, Coffin said. This includes stops along the way for communication, training, and reflection. Strong leadership is also part of the equation, she said, because teams rely on deliberate guidance that communicates the expected goal repeatedly. “Make sure everyone understands what the end state looks like and why it’s important.”
Coffin said Intel uses scrum and agile delivery as part of DevOps to build software, with iterative delivery allowing for reversions, revisions, and updates as needed. “We can go back, look at what we did in the past, and understand why we made those decisions,” she said. The adoption of DevOps at Intel, Coffin said, included executive support from a senior director and multiple vice presidents who bought into the intended end state. Developers also supported the move after they saw how easy it would be once teams standardized on common tools, she said.
The DevOps transformation became a two-year journey, Coffin said, with teams now sharing sources across different disciplines. Intel also started a DevSec pipeline, she said, which helps build security in from the beginning of the development cycle instead of addressing it as an afterthought.
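Coffin did not detail the pipeline itself, but the shift-left pattern she describes typically means security checks run as an early stage alongside build and unit tests, rather than as a final gate before release. Here is a minimal sketch in Python; the stage names and checks are hypothetical, not Intel’s actual pipeline:

```python
# Minimal sketch of a "shift-left" DevSecOps pipeline: security checks run
# early, next to build and unit tests, instead of as a last-minute gate.
# All stage names and checks are hypothetical.

def build() -> bool:
    print("compiling sources...")
    return True

def unit_tests() -> bool:
    print("running unit tests...")
    return True

def security_scan() -> bool:
    # In a real pipeline this would invoke static analysis and
    # dependency-vulnerability scanners on every commit.
    print("running static analysis and dependency audit...")
    return True

def integration_tests() -> bool:
    print("running integration tests...")
    return True

# Security sits in the middle of the pipeline, not bolted on at the end.
STAGES = [build, unit_tests, security_scan, integration_tests]

def run_pipeline() -> bool:
    for stage in STAGES:
        if not stage():
            print(f"pipeline failed at stage: {stage.__name__}")
            return False  # fail fast so a bad change never reaches release
    return True

if __name__ == "__main__":
    run_pipeline()
```

The design choice that matters here is ordering: because the security stage runs on every change, a problem surfaces the day it is introduced instead of during a late-cycle audit.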
Tiegs said the Intel client group works in a variety of sectors, including gaming, education, business, and mobile, and deals with operating system combinations such as mobile on Chrome OS or gaming on Windows 10. There are even times when the client hardware the group deals with is not yet in its final state, compounding the challenges of integration, he said.
Some products use software from multiple teams, Tiegs said, making it not uncommon for thousands of software engineers around the world to be part of this complex mix. “Our historical process for integrating software and firmware was manual and inconsistent,” he said. The introduction of continuous integration and daily changes to software and firmware added to the complexity. “We needed to modernize our software engine by introducing DevOps practices to the company that we are able to scale,” Tiegs said, “or we had a good chance to fail.”
Intel created a program made up of three sub-teams to tackle these challenges, he said: systems engineering, test and release, and source and build.
In the systems engineering group, Tiegs said the focus was on processes and data. “One of the first things we did was standardize the process of defining software and firmware that was part of each SKU,” he said. This enabled a standardized data schema for requirements, test cases, and software components. It also offered a way to trace features down to specific software components and versions, Tiegs said. “This allowed us to do requirements-based validation and make sure we were building the right products and building them with the right model.”
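Intel did not publish its schema, but the traceability Tiegs describes can be pictured with a small data model: requirements link to components and versions, and test cases link back to requirements so coverage can be checked automatically. The sketch below is a hypothetical illustration; the class and field names are assumptions, not Intel’s actual data model.

```python
# Hedged sketch of a standardized requirements/test/component schema.
# Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    version: str          # ties a requirement to a specific build artifact

@dataclass
class TestCase:
    case_id: str
    covers_requirement: str   # ID of the requirement this test validates

@dataclass
class Requirement:
    req_id: str
    description: str
    components: list[Component] = field(default_factory=list)

# Requirements-based validation: every requirement in a SKU should map
# to at least one test case before the build is considered complete.
def untested_requirements(reqs: list[Requirement],
                          tests: list[TestCase]) -> list[str]:
    covered = {t.covers_requirement for t in tests}
    return [r.req_id for r in reqs if r.req_id not in covered]
```

With a model like this, the requirements-based validation Tiegs mentions becomes a simple query: list every requirement with no covering test case before sign-off.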
The test and release sub-team focused on standardizing test reporting between the software and firmware teams, he said, using data modeled by the systems engineering team. They also established minimum requirements for software under consideration for release. This regulated the flow of new versions so the entire system would not be tanked by one bad ingredient, Tiegs said. The sub-team also created a shared pool of content, such as test scripts, and collateral, such as video and workload files, that could be used as part of a test. They directed releases to a single channel, he said, and listened to what customers wanted regarding how they would receive releases.
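The article does not spell out Intel’s release criteria, but a minimum-requirements gate of the sort Tiegs describes often reduces to a simple check run before a component version is admitted to the release channel. A minimal sketch, with hypothetical thresholds and metric names:

```python
# Sketch of a release gate enforcing "minimum requirements": a new
# component version is only admitted to the shared release channel if
# its test results clear agreed thresholds. Values are hypothetical.
PASS_RATE_FLOOR = 0.95      # minimum fraction of tests passing
MAX_CRITICAL_DEFECTS = 0    # no known critical defects allowed

def ready_for_release(passed: int, total: int, critical_defects: int) -> bool:
    if total == 0:
        return False  # untested software never ships
    pass_rate = passed / total
    return (pass_rate >= PASS_RATE_FLOOR
            and critical_defects <= MAX_CRITICAL_DEFECTS)

# A failing component is held back at the gate, so one bad ingredient
# cannot tank the whole integrated system.
print(ready_for_release(passed=98, total=100, critical_defects=0))  # True
print(ready_for_release(passed=90, total=100, critical_defects=0))  # False
```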
Finally, the source and build sub-team put its energies toward enabling the DevOps transformation, Tiegs said, ensuring other teams were equipped with the tools and training they needed. This included developing training content such as wikis, videos, and instructor-led classes to ensure software and firmware engineers were prepared to deliver at scale in the DevOps environment.
For more content on DevOps, follow up with these stories:
The Growing Security Priority for DevOps and Cloud Migration
How AI and Machine Learning are Evolving DevOps
Next Phase of DevOps: Upskilling for Processes and Humanity
How Continuous Intelligence Enhances Observability in DevOps