Top ChatGPT Fails (and Why You Should Avoid Them)

When failure isn’t an option, you might want to think twice before turning to ChatGPT. It turns out that the AI technology isn’t very good at a lot of things.

John Edwards, Technology Journalist & Author

April 3, 2023

4 Min Read
[Image: a broken lightbulb. Credit: Valiantsin Suprunovich via Alamy Stock]

Last week, top technology and artificial intelligence leaders called for a pause in the rapid development and deployment of AI-powered chatbots. Citing risks to society and humanity, they urged regulation of emerging AI technologies.

Meanwhile, everyone seems fascinated with ChatGPT and eager to figure out how to apply it. But early experiences reveal that the chatbot and other generative AI technologies struggle in several areas. Here are five examples:

Innovative Thought

ChatGPT frequently fails at tasks that require creating new ideas. Large language models like ChatGPT recombine and summarize existing information, but they're unable to contribute genuinely new knowledge, such as designing new medicines, vaccines, or business technologies, says Udo Sglavo, vice president of advanced analytics at AI and analytics technology provider SAS.

ChatGPT and related technologies are new, and the data used to train them must still come from humans. “The results of generative AI are, at their core, a reflection of us,” Sglavo says. He notes that there’s an inherent risk that models can be informed by inaccurate data, misinformation, or biases. “Users must continue to apply critical thinking whenever interacting with conversational AI and avoid automation bias -- the belief that a technical system is more likely to be accurate and true than a human.”

Accurate Insights

ChatGPT is trained on huge amounts of Internet data, meaning it can draw from an almost endless pool of content when generating fresh text. While this makes ChatGPT remarkably capable in many use cases, the model struggles to produce accurate, usable content on topics that are rare or invented. “Algorithms are like golden retrievers -- they will try really hard to do what you ask, even if that means chasing after an invisible ball,” says Sarah Shugars, an assistant professor of communication at Rutgers, the State University of New Jersey. “Ask ChatGPT to describe the importance of a made-up historical figure and it’s likely to hallucinate a detailed, fake biography for you on the spot.”
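You can reproduce this failure mode yourself. The snippet below is a minimal sketch, assuming the openai Python package (the pre-1.0 ChatCompletion API current when this article was published) and your own API key; “Dr. Elira Vantworth” is an invented name used purely for illustration, so any confident biography the model returns is fabricated on the spot.

```python
# Minimal hallucination probe -- a sketch, not a benchmark.
# Assumes: openai<1.0 installed, OPENAI_API_KEY set in the environment.
# "Dr. Elira Vantworth" is a made-up figure; a grounded answer would
# admit ignorance rather than invent a biography.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Describe the historical importance of Dr. Elira Vantworth.",
    }],
)

print(response.choices[0].message.content)
```

If the reply reads like a plausible encyclopedia entry rather than “I can find no record of that person,” you’ve seen the golden-retriever behavior Shugars describes.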

Additionally, ChatGPT lacks the creative ability to generate new insights by combining ideas. It can produce boilerplate language, give meaningful summaries, and even write screenplays or poems, but it can’t think the way humans do, Shugars explains. That lack of creative synthesis comes across in its generated text: “A lot of its output is very bland, missing the surprising connection and elevation of new ideas that can occur in human writing.”

Motivating People

ChatGPT fails as a persuasive communicator. While the technology has access to an almost endless amount of information, good communicators must also understand their audience’s priorities and values. “ChatGPT can’t do that in a meaningful way yet,” says Clara Burke, an associate teaching professor of business management communication at Carnegie Mellon University’s Tepper School of Business.

Consider the words that stay with and move people. “When Martin Luther King, Jr. addressed the crowd at the March on Washington, he described their purpose in language that has echoed and stuck over decades to remind America of the fierce urgency of ‘now,’” Burke says. King’s words fired people’s imaginations and helped them view their experiences and hopes from a new perspective.

Meanwhile, ChatGPT’s version of a rousing call to action is “The time for change is now,” Burke explains. “It clearly asks for change but doesn’t offer audiences a new way of imagining change or connecting to change, so it’s unlikely to actually effect change.”

ChatGPT is only as interesting as the prompts fed into it. “A good communicator will create memorable stories out of mundane prompts,” Burke states. “That’s what we all need to do in the workplace to make a report, status update, or proposal compelling to our audience.”

Balancing Outcomes

ChatGPT stumbles when asked to make recommendations that require balancing multiple potential outcomes. “In the case of incident response, ChatGPT struggles to make a risk-based decision from an investigation,” says Adam Cohen Hillel, software engineering team lead at cloud forensics and incident response service provider Cado Security.

Additionally, since ChatGPT relies entirely on the data it was trained on, it has trouble staying current with the latest attack techniques. “This will likely be the reason why ChatGPT won’t be used as a threat detection mechanism,” Hillel says. “There are already better ways of doing so.”

Dispensing Specialized Knowledge

ChatGPT generally fails at tasks that require specialized knowledge or a nuanced understanding of context. The model often struggles to answer questions related to highly technical or scientific subjects, or provide accurate legal or medical advice, says Iu Ayala, founder and CEO of data science consulting company Gradient Insight. “In these cases, ChatGPT’s lack of expertise and limited training data may lead to unreliable or even harmful responses.”

Language models like ChatGPT are trained on vast amounts of text data but may lack the necessary depth of knowledge in specific domains, Ayala notes. Additionally, such models often rely on statistical patterns and associations rather than a true understanding of the underlying concepts. “Therefore, when faced with new and complex information, ChatGPT may struggle to provide coherent and accurate responses.”


About the Author

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.

