January 30, 2024
Italy’s data privacy regulator on Monday alleged that OpenAI’s ChatGPT artificial intelligence platform breaches the European Union’s data protection laws.
The regulator, known as the Italian Garante, imposed a temporary ban last year that was lifted after OpenAI addressed its concerns. The watchdog’s latest action gives OpenAI 30 days to respond to the alleged breaches, according to the Garante’s website.
“Following the temporary ban on processing imposed on OpenAI by the Garante … last year, and based on the outcome of its fact-finding activity, the Italian DPA concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR,” according to the Garante website.
The Garante has been one of the EU’s busiest privacy watchdogs in assessing AI risks. Last year’s ban on OpenAI was one of the harshest moves to date and forced the company to address users’ right to decline consent for their personal data to be used in training algorithms. The watchdog’s bite comes from the EU’s General Data Protection Regulation (GDPR), under which regulators can levy fines of up to 4% of a company’s global turnover (revenue).
The EU in December reached an agreement to move forward with landmark AI rules. Reuters on Tuesday cited sources saying Germany will approve the AI Act, moving the regulatory framework closer to passage.
In an email to InformationWeek, Var Shankar, executive director of Responsible AI Institute, says Italy’s move has far-reaching implications.
“Amid the excitement around the EU finalizing its proposed AI regulation, Monday’s notification from Italy’s data protection authority regarding OpenAI is a timely reminder that AI systems are already subject to existing privacy laws,” he says.
Shankar says the EU’s action is focused on how OpenAI is using private information. “Though the details of the alleged violations have not been publicly disclosed, they could potentially relate to the collection of private information to train LLMs and the use of LLMs to output information about individuals -- whether correct or hallucinated,” he says.
In a statement, OpenAI defended its practices, saying they are aligned with the EU’s existing privacy laws. “We actively work to reduce personal data in training our systems like ChatGPT,” the company said, adding that it plans “to work constructively with the Garante.”
About the Author(s)
Senior Writer, InformationWeek
Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology, and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.