What’s Real About AI in Cybersecurity?

Business leaders must separate fact from fiction to understand how cybercriminals actually use AI and which AI-powered defenses offer legitimate protection against these threats.

Devin Jones, Chief Strategy Officer, Active Cypher

July 25, 2024

The cybersecurity industry has always faced fear, uncertainty, and doubt, but the rise of artificial intelligence has amplified hyperbole and exaggerated threat claims more than ever before. As this powerful technology advances, people are grappling with the unknown, unsure of what’s real and what’s mere hype.

Sensationalized headlines and exaggerated claims have fueled anxiety about malicious actors’ misuse of AI while raising questions about the effectiveness of AI-powered defense solutions. Business leaders need to separate fact from fiction to understand how cybercriminals are really using AI and to evaluate which AI-powered cybersecurity defenses deliver genuine protection for their cost.

AI has introduced a new level of sophistication and complexity to the threat landscape. Cybercriminals leverage AI to enhance their attack capabilities, making it easier and more efficient to breach networks and steal valuable data. Understanding how AI is used in attacks shows how defenders must apply it in response.

How Cybercriminals Use AI 

Cybercriminals use AI to find network vulnerabilities more efficiently, to make social engineering more realistic, and to classify target data so they can identify the most valuable and vulnerable information to steal. It significantly lowers the technical barriers to cybercrime: identifying and exploiting paths into well-defended networks, and stealing the most liquid information for conversion into cash, becomes far more efficient and profitable.

One of the most significant ways threat actors use AI is in social engineering, which preys on human psychology: trust, fear, and deference to authority. Generative AI models like ChatGPT can produce highly convincing phishing emails and websites, mimicking the communication styles and tones of legitimate individuals or organizations.

Deepfake technology powered by AI can create fake videos or audio recordings that impersonate real people, tricking victims into revealing sensitive information. Somewhere between 70% and 90% of successful cyberattacks leverage social engineering. A recent study by Proofpoint revealed that AI-generated phishing emails had a success rate of over 60%, compared to just 3% for traditional phishing attempts. 

AI is also being used to develop more evasive and sophisticated malware strains. By analyzing defensive responses and iteratively evolving their approach, cybercriminals can create malware that constantly mutates its code and behavior, making it harder for traditional signature-based antivirus software to detect. In one high-profile case, the Emotet banking trojan used AI to evade detection and spread to over 1.6 million systems across 194 countries. 

AI is also used to automate and expand attacks with minimal human effort. Password cracking, vulnerability scanning, and exploitation can all be accelerated and amplified, allowing attackers to launch campaigns more frequently and at a larger scale. AI can also analyze massive amounts of data to identify potential targets or entry points, making attack prioritization more efficient.

One of the most insidious criminal applications of AI is data and intellectual property theft. AI algorithms can sift through vast amounts of data and identify valuable intellectual property, trade secrets, or sensitive information. This enables cybercriminals to prioritize and exfiltrate only the assets whose loss causes organizations the most significant financial and competitive damage.

Defense Needs AI Just to Keep Up 

To counter these threats, organizations must adopt AI-powered cybersecurity solutions that can match or exceed the capabilities of their adversaries. AI-driven email and web security solutions can analyze content, sender behavior, and software characteristics to identify and block phishing attempts and malware more effectively than traditional signature-based methods. By continuously learning and adapting to new threats, these solutions can stay ahead of the curve, providing additional protection against ever-evolving attack vectors. 
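
To make this concrete, here is a minimal sketch of the kind of content-based scoring such a tool might perform. The sample messages, features, and model choice are illustrative assumptions, not any vendor’s actual pipeline; a production system would also weigh sender reputation, URLs, and attachments.

```python
# Minimal sketch of a content-based phishing classifier (illustrative only;
# real products also model sender behavior, URLs, and attachments).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked. Verify your password immediately at this link.",
    "Reminder: the quarterly planning meeting moves to 3 p.m. Thursday.",
    "Wire the invoice payment today or service will be suspended.",
    "Attached are the meeting notes from yesterday's design review.",
]
labels = [1, 0, 1, 0]

# Word pairs (bigrams) help catch phrasing patterns, not just single keywords.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(emails, labels)

# Score a new message; anything above a tuned threshold gets quarantined.
incoming = ["Urgent: confirm your credentials to avoid account suspension."]
print(model.predict_proba(incoming)[0][1])  # probability the message is phishing
```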

Some AI-powered endpoint protection solutions use deep learning to detect and respond to threats in real time. Instead of relying on signatures, these tools adapt over time, learning what is and isn’t normal endpoint behavior, which enables them to detect new threats and unknown attack methods.
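
As an illustration of that behavioral approach (not a description of any specific product’s engine), the sketch below trains an unsupervised anomaly detector on assumed “normal” endpoint telemetry and flags activity that deviates from the baseline.

```python
# Minimal sketch of behavior-based endpoint anomaly detection (illustrative;
# the feature set and thresholds are assumptions, not a vendor's design).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process telemetry: [files touched/min, network conns/min,
# child processes spawned, registry writes/min].
normal_activity = np.array([
    [12, 2, 1, 0],
    [15, 3, 0, 1],
    [10, 1, 2, 0],
    [14, 2, 1, 1],
] * 25)  # repeated rows stand in for a larger baseline window

# Learn what "normal" looks like rather than matching known-bad signatures.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_activity)

# A process suddenly touching hundreds of files with many outbound connections
# (ransomware-like behavior) is flagged even though it matches no known signature.
suspicious = np.array([[400, 60, 8, 30]])
print(detector.predict(suspicious))        # -1 means flagged as an outlier
print(detector.score_samples(suspicious))  # lower scores = more anomalous
```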

Security teams are often overwhelmed by the sheer volume of alerts, logs, and routine activities. AI offers defenders the same efficiency gains it offers attackers: AI-powered solutions automate repetitive, time-consuming tasks like log monitoring, alert triage, patch management, and reporting, allowing cybersecurity professionals to focus on more critical and strategic work while reducing the risk of human error in detection.
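
A small piece of that automation might look like the sketch below, which ranks incoming alerts so analysts see the riskiest ones first. The fields, weights, and sources are assumptions for illustration rather than any SIEM’s actual schema or scoring model.

```python
# Minimal sketch of automated alert triage (illustrative; field names and
# weights are assumptions, not any product's actual schema or model).
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int           # 1 (low) .. 5 (critical), as reported by the tool
    asset_criticality: int  # 1 .. 5, from an assumed asset inventory
    confidence: float       # 0..1, e.g. a detection model's probability

def triage_score(alert: Alert) -> float:
    # Weight detector confidence by how severe the finding is and how much
    # the affected asset matters, so the riskiest alerts rise to the top.
    return alert.confidence * (0.6 * alert.severity + 0.4 * alert.asset_criticality)

alerts = [
    Alert("edr", severity=5, asset_criticality=5, confidence=0.92),
    Alert("email-gateway", severity=3, asset_criticality=2, confidence=0.40),
    Alert("ids", severity=4, asset_criticality=4, confidence=0.15),
]

# Routine, low-score alerts can be auto-closed or batched; the rest go to a human.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(f"{alert.source}: {triage_score(alert):.2f}")
```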

As cybercriminals continue to advance their tactics with AI, businesses must respond in kind. These AI-driven defenses are not just another layer of protection; they are a strategic advantage, because systems that continuously learn and adapt can keep pace with attack techniques that change faster than any static signature database or rule set.

The rise of AI in cybercrime is a reality that cannot be ignored. By embracing AI-powered defense solutions and adopting a proactive approach to cybersecurity, organizations can stay ahead of the curve and safeguard their operations, reputation, and bottom line. 

About the Author

Devin Jones

Chief Strategy Officer, Active Cypher

Devin Jones is currently the Chief Strategy Officer for Active Cypher, Inc. He has held leadership roles in product management, engineering, development operations, and services at Cisco Systems and Juniper Networks, and has been a serial entrepreneur in multiple startups in cybersecurity and machine learning. His areas of expertise are cybersecurity, applying machine learning to real-world problems, SaaS infrastructure, and cloud computing. Devin is a veteran of the United States Navy and served as a regional expert in the Middle East and the Soviet Union for the intelligence community.
