October 25, 2023
At a Glance
- Malicious threat actors are using AI to power a new breed of cyberattacks.
- IT leaders must take a multifaceted, individualized approach to AI defense, experts say.
- Adversarial AI could become the largest focus of cybersecurity, CEO says.
The threat of artificial intelligence-led cyberattacks is growing as generative AI (GenAI) tools explode in popularity. As companies and governments weigh the benefits of AI, they must also keep close watch on threats that exploit these quickly emerging technologies.
Booz Allen Hamilton-backed HiddenLayer is at the forefront of adversarial AI defense efforts with a contract to guard the US Department of Defense (DoD) against AI cyberattacks. Booz Allen’s investment in HiddenLayer is part of the company’s $100 million corporate venture capital fund and adds to its adversarial AI portfolio, which includes a focus on adversarial machine learning (AML). Both attackers and cybersecurity professionals are tapping into the power of AML.
Adversarial AI uses algorithmic and mathematical approaches to deceive, manipulate and attack AI systems. While published reports of AI attacks are scant, companies, organizations and governments are getting ready for increased attacks as open-source language models proliferate. Gauging the severity of the immediate AI threat is tough, experts say, as companies may not be reporting incidents.
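To make the "algorithmic and mathematical" nature of these attacks concrete, the sketch below shows an evasion attack in the style of the fast gradient sign method (FGSM) against a toy linear classifier. The weights, input, and perturbation size are all invented for illustration; real attacks target far larger models, but the mechanics are the same: nudge each input feature against the model's gradient until the prediction flips.

```python
# Minimal FGSM-style evasion sketch on a toy linear classifier.
# All numbers here are illustrative, not from any real system.

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(w, x, eps):
    """Shift each feature against the gradient's sign to push the score down."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x = [0.5, 0.2]
x_adv = fgsm_perturb(w, x, eps=0.5)

print(predict(w, b, x))      # 1 -- original input is classified positive
print(predict(w, b, x_adv))  # 0 -- a small perturbation flips the decision
```

The same idea, scaled up with gradients computed through a deep network, is what lets attackers deceive image classifiers, malware detectors, and other AI systems with changes a human would barely notice.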
The lack of reports is probably not surprising, considering the absence of reporting requirements for AI incidents, according to Apostol Vassilev, a computer scientist with the National Institute of Standards and Technology (NIST). “Much of the legal framework and rules are still in the discussion phase,” he tells InformationWeek.
Matt Keating, leader of Booz Allen’s adversarial AI portfolio, tells InformationWeek that companies are trying to bolster their AI defenses but staying tight-lipped on attacks. “A lot of the companies that are deploying AI models right now are keeping their incidents very close to their chests,” he says. “But if you look at the number of papers that have been pushed in the last three years, there has been an absolute explosion in academic literature on the adversarial side. That’s an indication of what’s happening out there.”
How CIOs/CISOs Should Handle AI Attack Threat
HiddenLayer CEO and Co-Founder Chris Sestito says the company has developed strong defense tools against AI-enabled attackers. He says enterprises should approach the AI threat landscape with the same seriousness as the government. “My No. 1 message to CISOs would be that you have to apply the same level of scrutiny to artificial intelligence that you would to traditional code,” he says. “We’re not asking you to make any new decisions. We’re asking you to make the same decisions you’ve made on a new technology.”
Travis Bales, director at Booz Allen and leader of the HiddenLayer adversarial AI investment, says enterprises need to look at multiple levels of security as they prepare for AI threats. “We’re starting to look at this in two parts: Data security and the model security pieces, and how do we bring those together,” he says.
Adversarial AI is still in its infancy. NIST’s Vassilev says government and the private sector are on equal footing as organizations adopt safeguards. “This is still emerging technology and practice guides about its deployment and use are being developed and matured inside the government and the private sector,” he says. “It is still too early to say who should model after whom. Abundant caution and constant monitoring are advised in all cases.”
Keating says the company wants CIOs and other IT leaders to take the lead in assessing their own needs. Adversarial AI consultants can then offer threat modeling based on individual needs. “All of our customers have limited resources, and they need to intelligently apply those resources based on what their risk is,” he says. “Every customer might be looking at something different.”
He adds, “They might be looking at whether somebody is poisoning the data they are using in training to skew model results, or maybe they’re interested in data leakage … That’s what we’re doing with threat modeling.”
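The data-poisoning threat Keating mentions can be illustrated with a toy example: an attacker who can inject mislabeled records into a training set can shift a model's decision boundary without ever touching the model itself. The sketch below uses an invented one-dimensional dataset and a nearest-centroid classifier; the specific numbers are hypothetical, chosen only to make the skew visible.

```python
# Toy training-data poisoning against a nearest-centroid classifier.
# Dataset and poison points are invented for illustration.

def centroid(values):
    return sum(values) / len(values)

def train(data):
    """data: list of (value, label) pairs -> per-class centroids."""
    c0 = centroid([v for v, y in data if y == 0])
    c1 = centroid([v for v, y in data if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

clean = [(0, 0), (1, 0), (2, 0), (8, 1), (9, 1), (10, 1)]
poison = [(3, 1), (4, 1)]  # mislabeled records injected by the attacker

print(predict(train(clean), 4))           # 0 -- clean model's verdict
print(predict(train(clean + poison), 4))  # 1 -- poisoned model is skewed
```

Threat modeling of the kind Keating describes asks whether an organization's training pipeline could absorb inputs like `poison` above, and what controls (data provenance, anomaly detection on labels) would catch them.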
Firsthand Experience with AI Threat
Sestito and two other co-founders worked previously at cybersecurity firm Cylance and learned the hard way about the need for adversarial AI tools to guard against threats. In 2019, Cylance was using machine learning to advance antivirus technology and released its product through a free trial. Threat actors used that release to build a surrogate model, which they then used to craft attacks.
“And they created attacks offline, where we couldn’t see anything … They essentially rendered that product useless overnight,” Sestito says. “We learned just how incredibly vulnerable artificial intelligence is as a technology.”
With the popularity of generative AI soaring in the last year, Sestito thinks the threats will only grow more intense. “We’re really at the beginning of what I think is going to be a very large, if not the largest category within cybersecurity,” he says.
Bales agrees. “We think this is definitely the cutting edge of where our clients are going to have to go as we deploy more AI around the world -- as we birth more AI models into the world, the security of those is paramount.”