
As developer education and training become more tailored to individual preferences, here's what the future of developer training will look like over the next year, and how AI will play a role in its evolution.

Pieter Danhieux

December 8, 2023


Lately, there has been significant enthusiasm around leveraging artificial intelligence for code development. AI, after all, has the potential to lift the burden of repetitive and tedious tasks from development teams. It can also suggest code, analyze and review codebases, and help streamline research initiatives.

In terms of pure productivity, AI is cutting down the time spent on generating and documenting code by nearly one half, according to McKinsey. Given the benefits, more than 9 out of 10 US-based developers are using AI coding tools, citing advantages such as increased efficiency and additional time to focus on more rewarding building and creation objectives, as opposed to repetitive tasks, according to GitHub.

But over-reliance on unchecked AI output -- or trusting AI to do all of the work by itself -- leads to poor-quality code that is vulnerable to cyberattacks. In addition to code-level vulnerabilities making their way into production, threat actors are taking advantage of AI tooling in many different forms. Those range from malware creation and social engineering attacks to devious ploys such as “hallucination squatting,” where ChatGPT’s penchant for “hallucinating” incorrect answers can be leveraged for nefarious purposes.
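To make the code-level risk concrete, here is a minimal, hypothetical sketch (the table and function names are invented for illustration) of the kind of flaw an unreviewed AI suggestion can introduce: a SQL query built by string interpolation, alongside the parameterized version a security-aware developer would write instead.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flawed pattern: interpolating user input straight into SQL.
    # A crafted username like "' OR '1'='1" makes the WHERE clause
    # true for every row, leaking the whole table.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # [(1,)] -- injection returns every row
print(find_user_safe(conn, payload))    # [] -- no user is literally named that
```

The two functions behave identically on honest input; only adversarial input separates them -- exactly the kind of distinction an AI assistant, lacking security context, can miss.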

In the latter instance, a security researcher asked ChatGPT for library recommendations to perform a specific task, and the bot provided answers with full confidence. However, none of the libraries it named actually existed. An attacker could register those very names, publish malware disguised as the fake libraries and execute a lucrative attack. This is just one example of the attack vectors made possible by AI misuse. And because AI, much like an untrained human, lacks contextual security awareness when producing code, developers must be enabled to hone their secure coding skills and awareness. After all, they are the first line of defense in protecting organizations from the introduction of code-level security bugs and misconfigurations, especially as the adoption of AI coding tools increases.
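One pragmatic defense against this kind of dependency spoofing is to screen AI-suggested packages before anything is installed. The sketch below assumes a security team maintains an internal allowlist of vetted packages; all package names here are hypothetical stand-ins.

```python
# Hypothetical allowlist maintained by a security team; a real pipeline
# might instead consult an internal registry or private package mirror.
APPROVED_PACKAGES = {"requests", "numpy", "flask"}

def vet_dependencies(suggested):
    """Split AI-suggested package names into approved and unvetted lists."""
    approved = [p for p in suggested if p.lower() in APPROVED_PACKAGES]
    unvetted = [p for p in suggested if p.lower() not in APPROVED_PACKAGES]
    return approved, unvetted

# An assistant might confidently recommend a library that does not exist;
# "fastcryptolib" stands in for such a hallucinated name.
approved, unvetted = vet_dependencies(["requests", "fastcryptolib"])
print(approved)  # ['requests']
print(unvetted)  # ['fastcryptolib'] -- hold for manual review, never auto-install
```

A deny-by-default check like this will not catch every ploy, but it stops a hallucinated name from ever reaching an install command unexamined.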


It is imperative for development teams to acquire foundational security skills to ensure code is protected from the start. However, traditional upskilling efforts tend to fail because they are too rigid, built on information and context irrelevant to the work at hand, and ill-suited to today’s constantly shifting threat environment. Developer education must be tailored to the requirements of individuals, delivered in formats they can rapidly absorb and addressing the latest vulnerability and attack trends.


That’s where the concept of agile learning enters the equation. Simply stated, agile learning gives developers multiple pathways to educate themselves. It focuses on just-in-time “micro-burst” teaching sessions, so teams learn, test and apply knowledge quickly, within the context of the actual work they’re doing and the security challenges they face. It accommodates different skill levels and training styles while encouraging developers to immediately tie new lessons to real-life practice -- learning by doing, so training is directly tied to job needs.

Agile learning has emerged as a fast and flexible approach that more swiftly transforms developers into “security-first” professionals.

How will AI impact the future of developer learning platforms over the next several years? Here’s my vision:

  • By combining productivity gains from safe, guided use of AI coding tools with agile learning, teams will get far closer to achieving security at speed. This has been notoriously difficult to implement, especially when developers have had little security awareness and training.

  • As indicated, the just-in-time, micro-burst instructional approach directly ties into what developers are doing on a day-to-day basis, with context that allows them to solve relevant vulnerability problems, including those that are AI-borne.

  • The adaptive nature of machine learning and large language models (LLMs) will provide a more individual, tailored learning experience for developers. Most people have a preferred way of receiving information, or indeed of having their knowledge put to the test. Perhaps we will see a future in which each developer has a learning assistant curated around their needs, workday and education preferences -- one that gets the best learning outcomes for them and, subsequently, their team.

  • No matter what organizations seek to do with generative AI technology, critical thinking will be a must in order to leverage its power accurately and safely.

Organizations and development teams will naturally pursue the path of least resistance as they seek to create features with ever-escalating volume and velocity. AI proves tempting here: “How far can we push it? Can it do pretty much everything for us?” But, for now at least, this would be a mistake.

In the short term, AI will assist in creating code at a dizzying pace, and teams that lack security training and focus will also produce software faster. But in the long term, both trends will create significant user and customer issues down the road, along with costly fixes. To avoid this, smart enterprises must invest in agile learning, so developers are equipped with the knowledge to tackle today’s threats while maximizing the value of AI tools without leaving their work exposed.

About the Author(s)

Pieter Danhieux

CEO, Secure Code Warrior

Pieter Danhieux is a globally recognized security expert, with over 12 years’ experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organizations, systems and individuals for security weaknesses. In 2016, he was recognized as one of the Coolest Tech people in Australia (Business Insider), awarded Cyber Security Professional of the Year (AISA - Australian Information Security Association) and holds GSE, CISSP, GCIH, GCFA, GSEC, GPEN, GWAPT, GCIA certifications.

