Inside The Duality of AI's Superpowers

This session weighs the benefits of AI versus the risks and explores pathways to begin your AI risk assessment.

Brandon Taylor, Digital Editorial Program Manager

November 14, 2024

5 Min View

The trend of introducing the magic of AI to your business is upon us, though you might be weary of the cybersecurity vulnerabilities and subsequent threats that come along with it. Network compromise, data leakage, and denial of service are only the tips of several potential icebergs you'll encounter.

In this archived keynote session, Tia Hopkins, chief cyber resilience officer and field CTO of eSentire, explores how to quantify and evaluate the risks associated with introducing AI into your organization. This segment was part of our live virtual event titled, “State of AI in Cybersecurity: Beyond the Hype.” The event was presented by InformationWeek and Dark Reading on October 30, 2024.

A transcript of the video follows below. Minor edits have been made for clarity.

Tia Hopkins: Now, let's dig into the cybersecurity risks specifically associated with AI. The OWASP group has developed a top 10 list for LLMs, and I'm going to go through them. I'm not going to dig into every one of them in the interest of time here, but we'll talk through some of them and reveal challenges that we need to be thinking about.

As I go through them, you'll see they fit into one of three categories. Some of them are related to network compromise, while some are related to direct data leakage. Some are related to denial of service. See if you can figure out which one they apply to, or which bucket they fall into as I'm going through them.

Related:Unicorn AI Firm Writer Raises $200M, Plans to Challenge OpenAI, Anthropic

We'll start with prompt injection. I think this is an easy one to understand, but this is basically when an LLM is tricked into doing something that is unintended, like redirecting a user to a site that is going to collect credentials.

The next one is insecure output handling. That's basically when output is not scrutinized. If it's accepted without scrutiny, then it can expose backend systems. This can lead to severe consequences like cross site scripting, privilege escalation, remote code execution, you name it. The output of our models needs to be heavily scrutinized.

It's the same thing with databases regarding training data poisoning. The mindset is garbage in, garbage out, right? And that's just from the perspective of bad data. If you put bad data into a model unintentionally, you're going to get bad data out.

If an attacker puts bad data into a model intentionally, then what comes out of that model is going to be something that's probably negatively impacting the business. But this could be something positive for an attacker.

Next is model denial of service. This is what I touched on in the last slide when I said what if you can't access the model at all. Throwing tons of resources at the model to overwhelm it, so that other resources that need to leverage it are unable can cripple a business.

Related:Getting a Handle on AI Hallucinations

Next is supply chain vulnerabilities. Now, this is not in the traditional sense of supply chain that we would think about. It's supply chain as it relates to the model and the components that make up models. If you have vulnerabilities within the components that make up your model, then an attacker can exploit that as well.

Sensitive info disclosure, of course. Insecure plug in design. Plugins need to be secure, which is not new. We've been having these conversations when developing applications. We talk about the cloud, but secure by design needs to be the goal here.

Next is excessive agency. This was an interesting one to me when I was reading up on it. This basically means giving the model too much privilege so that it's able to do too much within the business. My recommendation here would be to treat AI agents or genAI agents as users, and make sure you're considering them.

If your approach is that you're leveraging a zero-trust methodology, make sure that you're leveraging zero-trust for your AI agents as well. Next is over reliance. I mentioned this as it relates to users and technology dependence.

Related:ThreatLocker CEO Talks Supply Chain Risk, AI’s Cybersecurity Role, and Fear

This means you're depending too much on the model, and not having governance in place to scrutinize what's going into and coming out of the model. Taking things at face value is very dangerous. And then lastly, is model theft.

If there's a model that's built that's literally the IP of an organization that's stolen, that can have a detrimental impact to an organization. Moving on to AI risk management. In the interest of time, I won't spend a ton of time on this. I did want to share that NIST does have an AI risk management framework.

So, obviously the goal is to govern right what you're doing with AI and creating a risk management framework around that. After you've established your policies, your roles and who's accountable, the goal is to develop AI responsibly. You need to look at all your AI-related risks across the process that you already have in place.

If you're looking to introduce AI, you want to look at the risks associated with introducing it into the organization. Then you want to quantify and evaluate your data. This is standard risk management like we've been doing for years, and then you want to manage it.

You don't want to just say we've done the risk assessment, and we've identified the risks and put AI in place. We're good to go. You want to continue to manage that. Make sure you have metrics or controls to monitor what's going on to mitigate anything that may come up. You must continuously monitor and manage the environment.

Watch the archived “State of AI in Cybersecurity: Beyond the Hype” live virtual event on-demand today.

About the Author

Brandon Taylor

Digital Editorial Program Manager

Brandon Taylor enables successful delivery of sponsored content programs across Enterprise IT media brands: Data Center Knowledge, InformationWeek, ITPro Today and Network Computing.

Never Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.

You May Also Like


More Insights