Generative AI an Emerging Risk as CISOs Shift Cyber Resilience Strategies

Taking a long-term view of GenAI will ensure companies continue to get the benefits while staying on top of its risks.

Nathan Eddy, Freelance Writer

October 16, 2023


At a Glance

  • Research shows that over half of companies are rushing in without a cohesive strategy for secure, aligned deployment of GenAI.
  • "People are trying to do their jobs better and faster, but without any of the controls to keep the genie in the bottle."
  • When employees use approved GenAI tools, the company needs rules governing what data can be used with the tool.

Enterprise risk executives and IT security leaders are growing more concerned over the threat generative AI tools may pose to their organizations.

Building a clear generative AI strategy is essential to gain a leg up while avoiding long-term risks, but most companies aren’t doing this today.

Surprisingly, research shows that over half of companies are rushing in without a cohesive strategy for secure, aligned deployment, and an August survey from research firm Gartner found that GenAI has become a top emerging risk for enterprises.

Stan Black, chief information security officer at Delinea, explains that the rush to leverage the technology often puts organizations at significant risk. “Often we’re exposing credentials so that anyone who compromises and exploits those can have access to incredibly sensitive content,” he says. “For example, one vendor found their software developers were debugging code in LLMs, and then it was compromised.”

One big concern, he notes, is that people are trying to do their jobs better and faster, but without any of the controls to keep the genie in the bottle. “Now we have these pop-up AI engines, which are basically getting ranked high on SEO and people think they’re clicking on the real thing,” he adds. “It’s just a wolf in sheep’s clothing, established by malicious actors to get credentials and other valuable information.”

New Technology, Old Threats

These risks can often be mitigated through responsible use, rigorous security measures, awareness training, regular audits, ongoing monitoring, and adherence to ethical guidelines in the development and deployment of AI models.

However, Black notes one of the main challenges is that organizations are still exposing or poorly protecting credentials, giving anyone who obtains and exploits them access to incredibly sensitive content, including intellectual property.

“One of the best ways to reduce this risk is to leverage multi-factor authentication to ensure credentials don’t get exploited by forcing whoever the purported user is -- human, machine or hacker -- to prove its identity beyond just a username and password,” he says.
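
To illustrate what proving identity beyond a username and password can look like in practice, here is a minimal sketch that verifies a time-based one-time password (TOTP) as a second factor. It assumes the open-source pyotp library; the function names and flow are illustrative, not drawn from Delinea or any vendor mentioned here.

```python
# Minimal sketch: require a TOTP second factor on top of the password check.
# Assumes the open-source pyotp library (pip install pyotp); all names here
# are illustrative.
import pyotp

def provision_user() -> str:
    """Generate a per-user TOTP secret at enrollment time."""
    secret = pyotp.random_base32()
    # In practice, persist the secret server-side and share it with the
    # user's authenticator app (e.g., via a QR code).
    return secret

def verify_login(stored_secret: str, password_ok: bool, totp_code: str) -> bool:
    """Grant access only if both the password and the TOTP code check out."""
    if not password_ok:
        return False
    totp = pyotp.TOTP(stored_secret)
    # valid_window=1 tolerates slight clock drift between client and server.
    return totp.verify(totp_code, valid_window=1)
```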

Suha Can, CISO of Grammarly, tells InformationWeek via email some of generative AI’s risks, like quality and accuracy issues, aren’t necessarily new. “However, generative AI is unlike other technologies we’ve seen because it can so convincingly create competent content,” he says. “And these are just the known challenges today. I can’t stress enough that we’re still learning the extent to which these technologies are capable, and we may not have identified all associated risks.”

He explains that this is why taking a responsible approach to product development from the start is critical. “Setting your strategy starts with some foundational principles, like assessing your organization’s readiness and unique risk factors and defining a purposeful approach with set objectives,” he says. “It’s important to understand both the capabilities and limitations of generative AI to carefully plan your strategy while being realistic about expectations.”

Developing Procedures as Regulations Evolve

Heather Stevens, product executive at Olive Technologies, tells InformationWeek via email enterprise risk executives should proceed with caution when introducing any tool that accesses, collects, or stores data. “Companies must have procedures to vet these tools prior to their introduction,” she says.

Enterprise risk executives should start by implementing clear rules prohibiting employees from using any unapproved web applications and tools. “It’s really another instance of shadow IT, which includes any IT-related purchases, activities or uses that the IT department is unaware of and which has historically been a big problem in most organizations,” Stevens says.
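
As a rough sketch of how such a rule can be enforced technically, the snippet below checks outbound requests against an allowlist of approved GenAI endpoints -- the kind of control a forward proxy or secure web gateway applies. The hostnames and function names are hypothetical.

```python
# Minimal sketch of an egress allowlist for GenAI endpoints; the hosts and
# names are hypothetical examples, not a recommendation of specific tools.
from urllib.parse import urlparse

APPROVED_GENAI_HOSTS = {
    "api.openai.com",              # hypothetical approved vendor endpoint
    "genai.internal.example.com",  # hypothetical internally hosted model
}

def is_request_allowed(url: str) -> bool:
    """Allow traffic only to explicitly approved GenAI hosts (and subdomains)."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_GENAI_HOSTS or any(
        host.endswith("." + approved) for approved in APPROVED_GENAI_HOSTS
    )

# Anything not on the list -- including look-alike "pop-up AI engines" --
# is blocked by default.
assert not is_request_allowed("https://free-ai-chat.example.net/login")
```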

When employees use approved GenAI tools, the company needs rules governing what data can -- and, more importantly, cannot -- be used with the tool. “But these rules shouldn’t be limited to only GenAI tools,” she adds. “They should be in place for all tools and applications used in the organization.”
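
One way to operationalize such data rules is to screen prompts before they ever reach a GenAI API. The sketch below is a minimal, illustrative filter: the patterns are assumptions for demonstration, and a real data-loss-prevention rule set would be far broader.

```python
# Minimal sketch of a pre-submission filter screening prompts for data that
# policy says must never leave the network. Patterns are illustrative only.
import re

BLOCKED_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any policy violations found in the prompt."""
    return [name for name, rx in BLOCKED_PATTERNS.items() if rx.search(prompt)]

violations = check_prompt("debugging: key=AKIAABCDEFGHIJKLMNOP")
if violations:
    # Block the request and log it for security review instead of sending it.
    print("Prompt blocked:", ", ".join(violations))
```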

Risk executives should also partner with any key stakeholders who might use GenAI tools.

Ideally, Stevens says, the organization has a CISO, with the infosec team serving as a key stakeholder for every application that accesses or stores data or lives within the company’s network and ecosystem.

“The hardest part about privacy and regulation is that generative AI solutions are outpacing regulations faster than we’ve ever seen before,” Black says. “Until we can be faster than AI, the best advice is don’t use these tools unless you can have a level of control, do a lot of awareness and training, and then try to stay on top of regulations as they evolve.”

All of this can understandably create concern for risk leaders, and it can be tempting to limit or restrict an organization’s use of generative AI. From Can’s perspective, however, that would be a mistake.

“Generative AI is already in your workplace, whether you know it or not. You can’t ignore or outrun it,” he says. “As security and risk leaders, this is our opportunity to be enablers. We must equip our businesses and teams with the platforms and policies to safely use and get the most out of generative AI tools.”

About the Author(s)

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
