Weighing the AI Threat By Incident Reports

Yes, artificial intelligence can be a threat. But probably not in the way you expect.

Pam Baker, Contributing Writer

September 20, 2023

[Image: People and a robot on opposite sides of a seesaw, depicting the balance between artificial intelligence and society. Credit: Fanatic Studio/Gary Waters via Alamy Stock]

At a Glance

  • Not All AI Incidents Are Created Equal
  • Mundane Causes Behind Incidents
  • Incidents Come From 3 Areas

The AI Incident Database chronicles over 2,000 incidents of AI causing harm. It’s a gulp-worthy number that ominously continues to grow. But the devil is in the details, and a raw count says little about the degree of harm or whether any malevolent intent was involved. Pretending AI is safe is sheer folly, but imagining it the bringer of doom is equally foolish. To get a more realistic read on the damage AI has caused and is likely to cause, here’s a hard look at reported incidents in the real world.

The point of most incident reports is to share knowledge about what went wrong, on the simple theory that more heads are better than one when it comes to finding permanent solutions and preventing repeat problems. These reports are not intended to scare or soothe the public, nor are they meant to be fodder for conspiracy theorists or bad science.

That said, not all incident reports are created equal. That is true across all products and industries; the problem is not even remotely limited to AI reports.

Examples of reputable incident resources include the AI Incident Database, a project of the Responsible AI Collaborative. That’s a not-for-profit organization chartered to build upon the Artificial Intelligence Incident Database, an open-source project with global support and participation.


Another example of a reputable AI incident report is the AIAAIC Repository, which stands for AI, algorithmic, and automation incidents and controversies. It is a public resource that details incidents and controversies “driven by and relating to artificial intelligence, algorithms, and automation, in an objective, balanced, and open manner.” Anyone can access the data either on the AIAAIC Repository website or as a Google sheet.

Anyone rushing off to these reports in search of a rising AI worthy of a sci-fi horror story should prepare to encounter far more mundane causes behind most incidents.

AI incidents is "a misnomer," says Ashu Dubey, co-founder and CEO of Gleen.ai. What the AI Incident Database shows are "AI user incidents," Dubey says, which amounts to more of an accounting of human error than of AI failures.

“The fact is most of these reports are of users utilizing AI like ChatGPT to solve for the wrong thing, and then ChatGPT hallucinating,” Dubey says. An AI hallucination is a response the AI presents as highly accurate when it is in fact demonstrably and undeniably wrong.

Practically speaking, real-world consequences can be equally damaging whether they stem from human error, a hallucinating AI, or an adversarial AI. For that reason, less effort is put toward assigning blame and more toward solving the underlying issues.


“AI incidents at a high level refer to unintended, negative outcomes or side effects produced solely or in part by AI,” explains Hoda Heidari, Responsible AI leader at Carnegie Mellon University’s Block Center.

Measuring the AI Threat from Incident Analysis

AI incidents tend to spring from one of three directions.

“The quality of output from AI is determined by three factors: First, the underlying model. Second, the quality of the data fed to it. Third, the prompt or question asked of the AI,” says Arti Raman, CEO and founder of Titaniam.

Historically, the onus for errors and incidents has fallen on the AI makers.

Analysis of incident data in the AI Incident Database by antivirus and VPN provider Surfshark found that Facebook leads the overall dataset with 48 cases, with OpenAI and Tesla tied at 41. For incidents occurring in 2023, however, OpenAI overtook Facebook.
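For readers who want to run this kind of tally themselves, the count Surfshark describes can be reproduced in a few lines. The sketch below is a minimal, hypothetical example: it assumes you have exported incident data to a CSV file with columns naming the implicated entity and the incident date; the actual file name and column names will differ from export to export.

    import pandas as pd

    # Hypothetical CSV export of an AI incident dataset; the real schema will differ.
    df = pd.read_csv("ai_incidents_export.csv", parse_dates=["date"])

    # Total incidents attributed to each entity across the whole dataset.
    overall = df["entity"].value_counts()

    # Incidents occurring in 2023 only, to see who leads in the most recent year.
    in_2023 = df[df["date"].dt.year == 2023]["entity"].value_counts()

    print(overall.head())
    print(in_2023.head())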

“Despite the opportunities AI presents, the increasing number of AI-related incidents also highlights concerns, including AI-generated deepfakes, algorithmic bias, and autonomous car accidents,” says Agneska Sablovskaja, lead researcher at Surfshark.


“Our data analysis reveals that Facebook encountered a higher number of AI incidents between 2020 and 2021,” she says. “However, their frequency has since decreased, possibly due to the allocation of resources for addressing them. This example shows that issues with new technology are expected yet addressing them promptly is vital to ensure that innovation proves more beneficial than detrimental.”

Other industry watchers and experts say there’s more behind OpenAI’s uptick in incidents than may be readily evident.

“If you look at AI incidents over the last five years, you will notice that as long as the producer of the AI controls all three factors i.e., the model, the data, and the questions asked of the model, the number of incidents and their severity were well under control. This is the case with Tesla and Facebook earlier,” says Raman.

“As soon as AI became widely accessible and models could be queried by anybody, we started seeing more incidents -- this is illustrated by incidents related to OpenAI,” Raman adds.

Whether Raman’s observation holds true over time remains to be seen. But it is true that more people using AI for more tasks often magnifies the number of incidents.

“As AI systems become more powerful and widely accessible, such as ChatGPT, the range of application domains and tasks in which AI technologies are employed to play significant parts are ever widening. With that, we should expect the magnitude and gravity of AI incidents to expand,” says Heidari.

Organizations behind the responsible AI movement aim to curb incidents and significantly reduce damage to society at large and to individuals in particular. But many argue that volunteer efforts aren’t enough.

“While AI incident databases from the Partnership for AI, Project Atlas from MITRE, and the recently organized DEFCON red teaming event and voluntary commitments are important steps forward, an institutional solution is required,” says Ramayya Krishnan, faculty director of Carnegie Mellon University’s Block Center.

Industry consensus is that, ultimately, AI safety measures are everyone’s responsibility. But it is important to avoid both overreacting and underreacting, as both are detrimental. Finding the Goldilocks zone, however, is a tough challenge.

“What we don't fully understand often scares us, and we react as Samsung did in banning AI models outright. But there's a middle ground to be found that will establish the needed guardrails without stymying continued innovation. Technologies that can define and enforce gen AI policies allow for that middle ground, where innovation can flourish while generative AI is responsibly adopted,” says Alon Yamin, CEO and co-founder at Copyleaks, an AI-based plagiarism and AI content detection tool.

About the Author

Pam Baker

Contributing Writer

A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
