Will Rogue AI Become an Unstoppable Security Threat?
As artificial intelligence becomes increasingly sophisticated, concerns grow that it could emerge as the evil genius that defeats all existing network security measures.
Artificial intelligence is transforming the technology landscape, mostly for the better. Yet a growing number of security experts are concerned that advanced AI, when deployed by individuals or organizations looking to break into systems for financial or political gain, could elevate cyber attacks to an entirely new and perilous level.
Doug Saylors, partner and co-lead of cybersecurity at technology research and advisory firm ISG, views the rogue AI threat as an evolution of the current threat landscape. “Over the past three to five years, the rise of nation-state advanced persistent threat (APT) groups has contributed to significant growth in the adversarial environment,” he states.
The idea of a completely rogue machine that can’t be controlled by humans is still something from science fiction, says Clare Walsh, director of education at The Institute of Analytics, a global professional organization for analytics and data professionals. “Machines on their own aren’t a threat, but humans who control machines are,” she explains. “We have a lot of dangerous actors out there working to develop machines that can pose a threat.” State players, in particular, are looking at cyber warfare as a way forward, Walsh adds.
Lurking Danger
Rogue AI attacks, if and when they emerge, will probably appear in many different forms. They are most apt to come, however, from systems that delegate much or all of the decision making to an unsupervised machine. “The most likely rogue AI is simply a machine that becomes malignant because nobody bothered to check that it was safe,” Walsh says.
Regardless of the source, the biggest threat rogue AI poses is that the danger won’t be detected until serious damage occurs, Walsh says. “That makes the rapid deployment of machines in areas such as healthcare and warfare particularly problematic,” she notes.
Building Stronger Cyber Practices
Claire Vandenbroecke, cyber security specialist at IT managed services provider TSG, believes the primary risk isn’t AI going rogue, but AI falling into the wrong hands, “which it inevitably will,” she warns.
Rather than focusing on AI as a threat, Vandenbroecke recommends that enterprises build strong cyber and information security practices into everyday business activities. “It should be at the top of their agenda with buy-in from the board, C-suite, and other key decision makers, not just their IT department,” she advises. “That way, as AI and other technologies continue to develop, businesses can adapt their security practices accordingly.”
Attack Tactics
The rogue AI concept generally refers to AI systems that have been trained to generate or identify opportunities to exploit code or system vulnerabilities and then take some form of destructive action without human intervention, Saylors says. That action could be the creation of code known to be vulnerable and its publication to a common code repository in the expectation that it would be exploited at a later date. It could also be the active exploitation of vulnerabilities by the AI technology itself.
The latter action is an extreme example, Saylors says, and generally only a concern for governments or high-profile enterprises, such as defense contractors and financial institutions. “Such organizations already tend to be under constant attack from well-funded APT groups,” he notes.
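To make the first of those scenarios concrete, the hypothetical Python snippet below shows the kind of flawed code such a system might quietly publish: a database lookup assembled through string formatting, which leaves it open to SQL injection. The function names and table are invented purely for illustration; nothing here is drawn from a real incident or from Saylors’ examples.

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # VULNERABLE (illustrative): the username is pasted directly into the
        # SQL statement, so input such as "x' OR '1'='1" changes the query's meaning.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safely(conn: sqlite3.Connection, username: str):
        # Safer form: a parameterized query keeps user data separate from SQL code.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

Catching the difference between these two forms before the code lands in a shared repository is precisely the kind of check Saylors argues organizations must make routine.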
Unfortunately, as sophisticated AI technologies such as ChatGPT become widely available, they will be trained to exploit code or system vulnerabilities. “I’m not saying ChatGPT, specifically, will do this, but I’m suggesting that bad actors will clone this type of technology and train it for nefarious use,” Saylors says.
Walsh believes that legislative bodies will need to monitor experimental AI to ensure that code developed and working in a laboratory is thoroughly tested for safety and security before it is allowed to operate in the world at large. Developers should be transparent about the data they used to train a machine, about any limitations that may exist, and about specific cases where their machines should not be used, she notes. “Without a legislative mandate to do so, there is little commercial motivation at the moment.”
Preparation and Training
Saylors suggests enterprises should train their developers to the point where they fully understand their organization’s approved use of AI technologies. He also recommends testing and vetting all AI-generated code modules, as well as conducting mandatory security training sessions focused specifically on AI technology.
AI developers should be instructed, Saylors says, to carefully inspect the code snippets generated by AI. He also advises constant security testing, using static and dynamic application security testing (SAST and DAST) tools, on all AI-generated code.
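As a rough sketch of what that vetting could look like in practice, the Python fragment below walks the abstract syntax tree of an AI-generated snippet and flags calls such as eval and exec for manual review before the code is accepted. It is an illustration only, not a substitute for full SAST and DAST tooling, and the helper name review_snippet is hypothetical.

    import ast

    # Calls that should trigger a manual security review (illustrative list only).
    SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}

    def review_snippet(source: str) -> list[str]:
        """Return warnings for obviously risky calls found in generated code."""
        warnings = []
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in SUSPICIOUS_CALLS:
                    warnings.append(
                        f"line {node.lineno}: call to {node.func.id}() needs review"
                    )
        return warnings

    if __name__ == "__main__":
        snippet = "result = eval(user_input)"  # stand-in for an AI-generated line
        for warning in review_snippet(snippet):
            print(warning)

A lightweight gate like this can run in a pre-commit hook or build pipeline, with the heavier SAST and DAST scans Saylors recommends running against the full codebase on a regular schedule.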
What will rogue AI actually look like? “It would look like any other AI,” Vandenbroecke says. “The only difference would be the people using it and what it’s being used for.”