GenAI’s Impact on Cybersecurity

GenAI helps and hurts cybersecurity. Companies need to proactively assess and manage the risks or suffer the consequences.

Lisa Morgan, Freelance Writer

November 7, 2024

9 Min Read
Artificial Intelligence Applied to Cybersecurity - The Convergence of AI and Cybersecurity and the Transformative Role of Artificial Intelligence
ArtemisDiana via Alamy Stock

Generative AI adoption is becoming ubiquitous as more software developers include the capability in their applications and users flock to sites like OpenAI to boost productivity. Meanwhile, threat actors are using the technology to accelerate the number and frequency of attacks.  

“GenAI is revolutionizing both offense and defense in cybersecurity. On the positive side, it enhances threat detection, anomaly analysis and automation of security tasks. However, it also poses risks, as attackers are now using GenAI to craft more sophisticated and targeted attacks [such as] AI-generated phishing,” says Timothy Bates, AI, cybersecurity, blockchain & XR professor of practice at University of Michigan and former Lenovo CTO. “If your company hasn’t updated its security policies to include GenAI, it’s time to act.” 

According to James Arlen, CISO at data and AI platform company Aiven, GenAI’s impact is proportional to its usage.  

“If a bad actor uses GenAI, you'll get bad results for you. If a good actor uses GenAI wisely you'll get good results. And then there is the giant middle ground of bad actors just doing dumb things [like] poisoning the well and nominally good actors with the best of intentions doing unwise things,” says Arlen. “I think the net result is just acceleration. The direction hasn't changed, it's still an arms race, but now it's an arms race with a turbo button.” 

Related:8 Things That Need To Scale Better in 2025

The Threat Is Real and Growing

GenAI is both a blessing and a curse when it comes to cybersecurity. 

“On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats,” says Erik Avakian, technical counselor at Info-Tech Research Group. “These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real-time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks.” 

 Meanwhile, hackers and hacking groups have already incorporated AI and large language modeling (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deep fakes.  

“The incorporation of voice impersonation and personalized content through ‘deepfake’ attacks via AI-generated videos, voices or images make these attacks particularly harder to detect and defend against,” says Avakian. “GenAI can and is also being used by adversaries to create advanced malware that adapts to defenses and evades current detection systems.” 

Related:Tech Company Layoffs: The COVID Tech Bubble Bursts

Pillar Security’s recent State of Attacks on GenAI report contains some sobering statistics about GenAI’s impact on cybersecurity:  

  • 90% of successful attacks resulted in sensitive data leakage. 

  • 20% of jail break attack attempts successfully bypassed GenAI application guardrails. 

  • Adversaries require an average of just 42 seconds to execute an attack.  

  • Attackers needed only five interactions, on average, to complete a successful attack using GenAI applications.  

The attacks exploit vulnerabilities at every stage of interaction with GenAI systems, underscoring the need for comprehensive security measures. In addition, the attacks analyzed as part of Pillar Security’s research reveal an increase in both the frequency and complexity of prompt injection attacks, with users employing more sophisticated techniques and making persistent attempts to bypass safeguards.  

“My biggest concern is the weaponization of GenAI -- cybercriminals using AI to automate attacks, create fake identities or exploit zero-day vulnerabilities faster than ever before. The rise of AI-driven attacks means that attack surfaces are constantly evolving, making traditional defenses less effective,” says University of Michigan’s Bates. “To mitigate these risks, we’re focusing on AI-driven security solutions that can respond just as rapidly to emerging threats. This includes leveraging behavioral analytics, AI-powered firewalls, and machine learning algorithms that can predict potential breaches.” 

Related:What Enterprise IT Predictions Actually Mattered in 2024?

In the case of deepfakes, Josh Bartolomie, VP of global threat services at email threat and defense solution provider Cofense recommends an out-of-band communication method to confirm the potentially fraudulent request, utilizing internal messaging services such as Slack, WhatsApp, or Microsoft Teams, or even establishing specific code words for specific types of requests or per executive leader. 

And data usage should be governed.  

“With the increasing use of GenAI, employees may look to leverage this technology to make their job easier and faster. However, in doing so, they can be disclosing corporate information to third party sources, including such things as source code, financial information, customer details [and] product insight,” says Bartolomie.  “The risk of this type of data being disclosed to third party AI services is high, as the totality of how the data is used can lead to a much broader data disclosure that could negatively impact that organization and their products [and] services.” 

Casey Corcoran, field chief information security officer at cybersecurity services company Stratascale -- an SHI company, says in addition to phishing campaigns and deep fakes, bad actors are using models that are trained to take advantage of weaknesses in biometric systems and clone persona biometrics that will bypass technical biometric controls.  

“[M]y two biggest fears are: 1) that rapidly evolving attacks will overwhelm traditional controls and overpower the ability of humans to distinguish between true and false; and 2) breaking the ‘need to know’ and overall confidentiality and integrity of data through unmanaged data governance in GenAI use within organizations, including data and model poisoning,” says Corcoran.  

Tal Zamir, CTO at advanced email and workspace security solutions provider Perception Point warns that attackers exploit vulnerabilities in GenAI-powered applications like chatbots, introducing new risks, including prompt injections. They also use the popularity of GenAI apps to spread malicious software, such as creating fake GenAI-themed Chrome extensions that steal data. 

“Attackers leverage GenAI to automate tasks like building phishing pages and crafting hyper-targeted social engineering messages, increasing the scale and sophistication of attacks,” says Zamir. “Organizations should educate employees about the risks of sharing sensitive information with GenAI tools, as many services are in early stages and may not follow stringent security practices. Some services utilize user inputs to train models, risking data exposure. Employees should be mindful of legal and accuracy issues with AI-generated content, and always review it before sharing, as it could embed sensitive information.” 

Bad actors can also use GenAI to identify zero days and create exploits. Similarly, defenders can also find zero days and create patches, but time is the enemy: hackers are not encumbered by rules that businesses must follow. 

“[T]here will likely still be a big delay in applying patches in a lot of places. Some might even require physically replacing devices,” says Johan Edholm, co-founder, information security officer and security engineer at external attack surface management platform provider Detectify. “In those cases, it might be quicker to temporarily add things between the vulnerable system and the attacker, like a WAF, firewall, air gapping, or similar, but this won't mitigate or solve the risk, only reduce it temporarily. " 

Make Sure Company Policies Address GenAI 

According to Info-Tech Research Group’s Avakian, sound risk management starts with general and AI specific governance practices that implement AI policies.  

“Even if our organizations have not yet incorporated GenAI technologies or solutions yet into the environment, it is likely that our own employees have experimented with it or are using AI applications or components of it outside the workplace,” says Avakian. “As CISOs, we need to be proactive and take a multi-faceted approach to implementing policies that account for our end-user acceptable use policies as well as incorporating AI reviews into our risk assessment processes that we already have in place. Our security policies should also evolve to reflect the capabilities and risks associated with GenAI if we don't have such inclusions in place already.” 

Those policies should span the breadth of things in GenAI usage, ranging from AI training that covers data protection to monitoring to securing new and existing AI architectural deployments. It’s also important that security, the workforce, privacy teams and legal teams understand AI concepts, including the architecture, privacy and compliance aspects so they can fully vet a solution containing AI components or features that the business would like to implement.  

“Implementing these checks into a review process ensures that any solutions introduced into the environment will have been vetted properly and approved for use and any risks addressed prior to implementation and use, vastly reducing risk exposure or unintended consequences,” says Avakian. “Such reviews should incorporate policy compliance, access control reviews, application security, monitoring and associated policies for our AI models and systems to ensure that only authorized personnel can access, modify or deploy them into the environment. Working with our legal teams and privacy officers can help ensure any privacy and legal compliance issues have been fully vetted to ensure data privacy and ethical use.” 

What if your company’s policies have not been updated yet? Thomas Scanlon, principal researcher at Carnegie Mellon University's Software Engineering Institute recommends reviewing exemplar policies created by professional societies to which they belong or consulting firms with multiple clients. 
“The biggest fear for GenAI’s impact on cybersecurity is that well-meaning people will be using GenAI to improve their work quality and unknowingly open an attack vector for adversaries,” says Scanlon. “Defending against known attack types for GenAI is much more straightforward than defending against accidental insider threats.” 

Technology spend and risk management platform Flexera established a GenAI policy early on, but it became obvious that the policy was quickly becoming obsolete. 

“GenAI creates a lot of nuanced complexity that requires fresh approaches for cybersecurity,” says Conal Gallagher, CISO & CIO of Flexera. “A policy needs to address whether the organization allows or blocks it. If allowed, under what conditions? A GenAI policy must consider data leakage, model inversion attacks, API security, unintended sensitive data exposure, data poisoning, etc. It also needs to be mindful of privacy, ethical, and copyright concerns.” 

To address GenAI as part of comprehensive risk management, Flexera formed an internal AI Council to help navigate the rapidly evolving threat landscape. 

“Focusing efforts there will be far more meaningful than any written policy. The primary goal of the AI Council is to ensure that AI technologies are used in a way that aligns with the company’s values, regulatory requirements, ethical standards and strategic objectives,” says Gallagher. “The AI Council is comprised of key stakeholders and subject matter experts within the company. This group is responsible for overseeing the development, deployment and internal use of GenAI systems.” 

Bottom Line

GenAI must be contemplated from end user, corporate risk and attacker perspectives. It also requires organizations to update policies to include GenAI if they haven’t done so already. 

The risks are generally two-fold: intentional attacks and inadvertent employee mistakes, both of which can have dire consequences for unprepared organizations. If internal policies have not been reviewed with GenAI specifically in mind and updated as necessary, organizations open the door to attacks that could have been avoided or mitigated. 

About the Author

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.

Never Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.

You May Also Like


More Insights