What Does the Arms Race for Generative AI Mean for Security?

The growing prominence and availability of generative AI, such as ChatGPT, present new challenges to cybersecurity teams, foreshadowing an era of machine versus machine threats and defenses.

Max Heinemeyer, Chief Product Officer, Darktrace

March 21, 2023

[Image: cyberattack in red digital letters on screen — Skorzewiak via Alamy Stock]

Imagine this scenario: You receive an email from your CEO asking you to send some information. It’s written in her exact tone of voice, using the language she typically uses. She even references her dog in a joke. It’s precise, accurate, and utterly convincing. The catch? It was crafted by generative artificial intelligence, using nothing but some basic information that a cyber-criminal fed to it from social media profiles.

The emergence of ChatGPT has catapulted AI into the mainstream consciousness, and with it, real concerns about its implications for cyber defense. Within weeks of its launch, researchers were able to demonstrate ChatGPT’s ability to write phishing emails, craft malware code, and explain how to embed malware into documents.

Adding further fuel to the fire, ChatGPT isn’t the first chatbot to hit the market, nor will it be the last. Recently, we’ve seen Google and Baidu throw their hats into the ring. So, as the tech giants clamor to create the best generative AI, what will it mean for the future of cyber defense?

The Barrier to Entry Likely Hasn’t Been Lowered Yet

One of the first debates ChatGPT raised concerned cyber security. Could cyber-criminals use ChatGPT or other generative AI to improve their attack campaigns? Could it lower the barrier to entry for would-be threat actors?

ChatGPT is a powerful tool, and its broad-ranging potential use cases can help existing users become more efficient, aggregate knowledge, and automate lower-level tasks in a world marked by rapid digital transformation.

That said, generative AI isn’t yet a silver bullet. It has its limitations. For starters, it only knows what it has been trained on and requires ongoing training. And, as we’ve seen, the very data it has been trained on has also been called into question. Already, universities and news outlets are reporting concerns about the potential for AI-assisted plagiarism and the spread of misinformation. As a result, humans often need to verify its output. Sometimes it’s hard to tell whether ChatGPT made up the content or based it on reliable information.

The same applies to any application of generative AI to cyber-threats. If a criminal wants to write malware, they still need to guide ChatGPT through creating it, and then verify that the malware actually works. A would-be threat actor would need quite a bit of knowledge about attack campaigns to use it effectively. That means the barrier to entry hasn’t been significantly lowered just yet when it comes to crafting attacks, although there are nuances -- in creating credible phishing emails, for example.

Generative AI-Powered Attacks Mean Quality Over Quantity

At our company, we wondered if there was merit to concerns that ChatGPT might cause an increase in the number of cyber-attacks targeting businesses. So, we did our own research across our customer base. What we found tells a slightly different story.

While the number of email-based attacks has largely remained the same since ChatGPT’s release, the number of phishing emails that try to trick the victim into clicking a malicious link has actually declined from 22% to 14%. However, the average linguistic complexity of phishing emails has jumped by 17%.

Of course, correlation doesn’t mean causation. One theory of ours is that ChatGPT is allowing cyber-criminals to redirect their focus. Instead of email attacks with malicious links or malware attached, criminals see a higher return on investment in crafting sophisticated social engineering scams that exploit trust and solicit the user to take direct action -- for example, urging HR to change the CEO’s salary payment details to a bank account controlled by an attacker’s money mule.

Recall the hypothetical we posited at the start: It would take mere minutes for a criminal to scrape some information on a potential victim from their social media accounts and ask ChatGPT to create an email based on it. Within seconds, that criminal would be armed with a credible, well-written, and contextualized spear-phishing email ready to send.

A Future of Machines Fighting Machines

For nearly 10 years now, we have been predicting a future of AI-augmented attacks, and it seems we may be on the cusp of that future. The generative AI arms race will push tech giants to release the fastest, most accurate, and most credible AI on the market. It’s an inevitability that cyber-criminals will exploit this innovation for their own gain. The introduction of AI, which can include deepfake audio and video, into the threat landscape will make it easier for criminals to launch personalized attacks that scale faster and work better.

For defenders charged with protecting their employees, infrastructure, and intellectual property, the answer will be to turn to AI-powered cyber defense. Self-learning AI on the market today bases its ability to identify and contain subtle attacks on a deep knowledge of the users and devices within the organization it protects. By learning these patterns of life, it develops a comprehensive understanding of what is normal for each user in the context of everyday business. Put simply, the way to stop hyper-personalized, AI-powered attacks is to have an AI that knows more about your business than any external, generative AI ever could.
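To make the “pattern of life” idea concrete -- purely as an illustration, not a description of any vendor’s actual product -- a per-user behavioral baseline can be sketched as a simple statistical model: learn each user’s normal activity levels, then flag observations that sit far outside that baseline. The class name, the single numeric feature, and the z-score threshold below are all assumptions made for this sketch.

```python
from statistics import mean, stdev

class BehaviorBaseline:
    """Toy per-user 'pattern of life' model: learns a numeric activity
    feature (e.g., emails sent per hour) and flags large deviations."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold   # z-score cut-off for anomalies (assumed)
        self.history = {}            # user -> list of past observations

    def observe(self, user, value):
        """Record a normal observation for this user."""
        self.history.setdefault(user, []).append(value)

    def is_anomalous(self, user, value):
        """Return True if value lies far outside the user's baseline."""
        obs = self.history.get(user, [])
        if len(obs) < 2:
            return False             # too little history to judge
        mu, sigma = mean(obs), stdev(obs)
        if sigma == 0:
            return value != mu       # constant baseline: any change is odd
        return abs(value - mu) / sigma > self.threshold

# Example: a user who normally sends ~5 emails an hour suddenly sends 500.
baseline = BehaviorBaseline()
for v in [4, 5, 6, 5, 4, 6, 5]:
    baseline.observe("alice", v)
print(baseline.is_anomalous("alice", 500))  # flagged as anomalous
```

Real products model many features jointly across users, devices, and network traffic; this single-feature z-score is only the simplest possible stand-in for that idea.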

It’s clear that the introduction of generative AI to the mainstream is tipping the scales towards a war of algorithms against algorithms, machines fighting machines. For cyber security, the time to introduce AI into the toolkits of defenders is now.

About the Author(s)

Max Heinemeyer

Chief Product Officer, Darktrace

Max Heinemeyer is a cyber security expert with over a decade of experience in the field, specializing in a wide range of areas such as penetration testing, red-teaming, SIEM and SOC consulting and hunting advanced persistent threat (APT) groups. At Darktrace, Max oversees global threat hunting efforts, working with strategic customers to investigate and respond to cyber-threats. He works closely with the R&D team at Darktrace’s Cambridge UK headquarters, leading research into new AI innovations and their various defensive and offensive applications. Max’s insights are regularly featured in international media outlets such as the BBC, Forbes, and WIRED. When living in Germany, he was an active member of the Chaos Computer Club. Max holds an MSc from the University of Duisburg-Essen and a BSc from the Cooperative State University Stuttgart in International Business Information Systems.
