A 20-page inquiry by the FTC marks the most intensive regulatory action around generative AI to date as companies and governments struggle to keep up with the rapid pace of adoption.

Shane Snider, Senior Writer, InformationWeek

July 12, 2023

2 Min Read

A US Federal Trade Commission (FTC) investigation launched this week focuses on OpenAI and its ChatGPT platform over accusations that the generative AI software has flouted consumer protection laws and endangered personal reputations and data, according to a report by The Washington Post.

The move is the strongest regulatory action yet against the firebrand AI chatbot, which launched just several months ago. According to its 20-page letter, the FTC is concerned about the steps OpenAI has taken to address ChatGPT’s potential to “generate statements about real individuals that are false, misleading, or disparaging” -- commonly referred to as AI’s tendency to “hallucinate” false information.

The Washington Post first reported the investigation Thursday afternoon. Microsoft, OpenAI, and the FTC had not replied to requests for comment as of publication time. Since its November launch, ChatGPT has spurred an arms race among organizations to adopt generative AI capabilities. The speed of adoption has raised concerns about the ethical and existential risks posed by such a powerful tool.

The investigation comes on the heels of class-action lawsuits filed in late June over the software’s use of private information. The FTC is looking for extensive records from OpenAI concerning its handling of personal data, the potential to give users inaccurate information, and the “risks of harm to consumers, including reputational harm.”

The FTC’s information request also seeks testimony from OpenAI about public complaints, a list of the lawsuits it is involved in, and details of the data leak the company disclosed in March 2023 that exposed users’ chat histories and payment data.

In May, OpenAI CEO Sam Altman appeared before the US Congress, pleading with lawmakers to “lead” in developing rules and regulations around emerging AI technologies. At the time, he said lawmakers should step in to create parameters that would prevent AI creators from causing “significant harm to the world.”

Altman said regulatory intervention by governments would be “critical” and called for the creation of a new government agency to provide oversight.

InformationWeek will update this story with further comments and developments.

What to Read Next:

OpenAI CEO Sam Altman Pleads for AI Regulation

Tech Leaders Endorse AI 'Extinction' Bombshell Statement

What Just Broke: Is Self-Regulation the Answer to AI Worries?

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
