5 Things We Must Do to Combat AI-Powered Cyberattacks

Defending against today's cyberattacks requires united and global efforts and education.

Chris Were, Co-Founder, CEO, Verida

July 18, 2024


Cybersecurity measures carefully cultivated for years are no longer enough to protect consumers and businesses. 

Why? Because many of the sophisticated and audacious attacks currently threatening our identities and finances are powered by artificial intelligence. 

Not only are large language models (LLMs) being used to deceive unsuspecting victims, but they’re increasingly being deployed to probe systems for potential weaknesses. 

Research from Avast shows 90% of the threats it uncovered in the first quarter of 2024 involved social engineering to some extent, with a rise of deepfake audio and video generated using AI making it increasingly easy to mislead the public. 

On top of all of this, OpenAI has admitted that its systems have been used by state-affiliated malicious actors in China, North Korea, Russia, and Iran in attempts to generate content for phishing campaigns and create malware that could bypass detection. 

Tackling the rising threat of AI-powered cyberattacks is a matter of urgency. LLMs are becoming more advanced by the day, while the number of ruined lives continues to grow.  

1. AI-generated phishing and malware. 

Until now, a relatively easy way of detecting scams and bogus messages was to look for broken English or unusual phrasing that a company wouldn't normally use. But AI’s ability to write clean prose and emulate the tone of voice used by major brands makes this much harder. Research from Check Point powerfully shows how OpenAI has already helped generate code for new strains of malware that can steal and encrypt files. 


A plethora of lawsuits are currently in the works amid claims that major artificial intelligence platforms have been trained on copyrighted content without consent. While it may be too late to reverse that damage, countries around the world must come together to devise stricter regulation that oversees ethical development in the future. 

As for end users, it’s vital to step up vigilance and approach all emails with a healthy dose of cynicism. Is a message purporting to be from a company asking you for something unusual? Do the email address and domain match up?  
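Those last two checks can be partially automated. The sketch below flags emails whose "From" display name claims a well-known brand while the actual sending domain doesn't match; the brand list and helper name are hypothetical, and a real mail filter would combine this with SPF/DKIM checks.

```python
import re

# Hypothetical allowlist mapping brand names to their legitimate domains.
KNOWN_DOMAINS = {"paypal": "paypal.com", "microsoft": "microsoft.com"}

def looks_suspicious(from_header: str) -> bool:
    """Flag a From: header whose display name mentions a known brand
    but whose sending address uses a different domain."""
    match = re.search(r"<([^>]+@([^>]+))>", from_header)
    if not match:
        return True  # no parseable address: treat as suspicious
    domain = match.group(2).lower().strip()
    display = from_header.split("<")[0].lower()
    for brand, real_domain in KNOWN_DOMAINS.items():
        if (brand in display
                and domain != real_domain
                and not domain.endswith("." + real_domain)):
            return True  # display name claims a brand the domain doesn't match
    return False
```

A lookalike domain such as `paypa1-secure.xyz` paired with a "PayPal Support" display name would be flagged, while mail genuinely sent from `paypal.com` would pass.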

2. AI-driven impersonation. 

Deepfakes are now being widely used to impersonate real people -- including CEOs -- to trick victims into revealing sensitive information or authorizing fraudulent transactions. One notable case in Hong Kong earlier this year saw a finance worker at an international firm release $25 million to scammers who included one masquerading as the company's chief financial officer on a video call. 


There are ways of mitigating some of the risks associated with deepfakes. According to the FBI, they include verifying the authenticity of digital communications, educating yourself on the latest tactics being used by cybercriminals, and implementing multifactor authentication. 
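Multifactor authentication is worth spelling out, since it is the most concrete of those mitigations. The time-based one-time passwords shown by authenticator apps follow RFC 6238, and can be generated with nothing but the Python standard library; this is a minimal sketch, not a production implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password, as used by authenticator apps.

    secret_b32: the base32 secret shared during MFA enrollment.
    at: Unix timestamp to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code depends on a shared secret and the current time window, a deepfaked voice or video alone cannot reproduce it, which is why the FBI's guidance pairs MFA with verification of the communication channel itself.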

3. Identity theft. 

A number of concerning developments have arisen when it comes to how AI is being used to steal identities. For one, some image generation tools have been used to create synthetic documents that blend genuine and fictional data into fake aliases. These can then be used to sidestep the weak verification measures that some businesses have in place. 

There has also been an increase in the number of facial recognition systems that have been manipulated through AI-generated images and videos, giving fraudsters unauthorized access to accounts and funds. One Vice Media Group reporter managed to break into his own account with an AI replica of his voice.  

Research by Sumsub that tracked over two million fraud attempts across 224 countries and 28 industries recently found a tenfold rise in the number of deepfakes detected. This is where the use of zero-knowledge credentials can make an immense difference in the battle against AI-powered fraud. Such cryptography gives a legitimate consumer the ability to prove they are who they say they are without disclosing specifics -- for example, verifying they're over 18 without stating an exact date of birth.  
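To make the idea concrete, here is a deliberately simplified sketch of predicate-based disclosure: a hypothetical issuer attests only to the derived claim ("over 18"), so the holder can present it without ever revealing a birth date. Real zero-knowledge credential systems use cryptographic proofs (such as BBS+ signatures) rather than a shared HMAC key; this toy only illustrates what is, and is not, disclosed.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # hypothetical issuer signing key

def issue_predicate(dob_year, current_year):
    """Issuer derives and signs only the age predicate -- the birth
    year itself never appears in the credential it hands back."""
    claim = {"over_18": current_year - dob_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_predicate(cred):
    """Verifier checks the issuer's signature and the predicate,
    learning nothing about the underlying date of birth."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and cred["claim"]["over_18"]
```

The key property is that the credential carries the predicate, not the data it was derived from, so a breach of the verifier leaks nothing a fraudster could reuse to build a synthetic identity.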


4. The data verification challenge.  

It’s proving exceptionally difficult for everyday consumers to differentiate between what’s real and what’s not online. The proliferation of AI threatens to make misinformation rampant. To illustrate, consider this study from the University of Waterloo, which found that just 61% of respondents could successfully distinguish AI-generated images of people from genuine photographs. 

One potential answer could be systems that allow anyone to track the provenance of a piece of digital content -- including its source, when and how it was created, and whether there have been any modifications along the way. Metadata embedded in digital media files would help achieve this, along with digital signatures, watermarks, and blockchain technology. 
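A minimal sketch of that provenance idea is to bind a file's hash to its creation metadata with a signature, so any later change to the content or its history is detectable. Real systems, such as the C2PA standard, use public-key certificate chains rather than the shared key assumed here, and the publisher key and field names are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-key"  # hypothetical publisher signing key

def make_manifest(content, source):
    """Bind the content's hash and its provenance metadata together
    under one signature at creation time."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
        "created": "2024-07-18T00:00:00Z",  # example timestamp
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content, record):
    """Detect tampering with either the media bytes or the metadata."""
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["sig"])
            and unsigned["sha256"] == hashlib.sha256(content).hexdigest())
```

Editing the pixels, the stated source, or the timestamp all invalidate the same signature, which is what lets a viewer trust the whole chain or none of it.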

5. A multi-pronged approach is key.  

Tech companies, governments, and security researchers need to collaborate on the best approach to take, and constantly work with each other as new threats emerge. Artificial intelligence can also be a force for good: it can be trained to help detect deepfakes, malware, and phishing emails. 

But more than anything else, it's essential that the public becomes aware of how AI can be abused, and the potential ramifications of falling victim to an attack.  

About the Author

Chris Were

Co-Founder, CEO, Verida

Chris Were is a co-founder and CEO of Verida, a decentralized, self-sovereign data network. An Australia-based technology entrepreneur, he has spent more than 20 years developing innovative software solutions, applying the latest technologies across the finance, media, and healthcare industries.  
