Data protection authorities take swift action following a data breach at OpenAI. Observers say CIOs should keep a close watch on regulatory trends as artificial intelligence applications multiply in enterprise use.

Shane Snider, Senior Writer, InformationWeek

April 3, 2023

4 Min Read
[Image: The words ChatGPT against a dark blue background of cryptic characters. Credit: Greg Guy / Alamy Stock Photo]

Just days after an OpenAI data breach exposed user information and a nonprofit published an open letter calling for a pause in artificial intelligence chatbot development, Italy's data protection authority has banned ChatGPT. German officials on Monday signaled they could do the same -- a sign that Western governments are taking calls to regulate the technology very seriously.

ChatGPT, which launched in November 2022, is a generative language model created by U.S.-based OpenAI that can produce convincingly natural-sounding text. Models like it are being used to generate automatic responses in chatbots and can mimic human writing. The possibilities for enterprise use seem endless, but concerns about ethical application and workforce impact are driving calls for regulation of AI.

OpenAI temporarily took ChatGPT offline March 20 after discovering that a bug in an open-source library had exposed some users' data and chat titles, and may have exposed some of their messages. The company issued a statement March 24 detailing the outage and data breach.

Italian Data Protection Authority Uses GDPR's Greatest Powers

The Italian data protection authority said it would ban ChatGPT and investigate OpenAI "with immediate effect." Italian regulators cited the breach, saying there was no legal basis to justify "the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform," according to the BBC.

The regulator said that OpenAI -- due to potential violations of the European Union's General Data Protection Regulation (GDPR) -- had 20 days to address its concerns or could face fines of €20 million (about $21.7 million) or up to 4% of annual revenues. Regulators in the United Kingdom, Ireland, and Germany have signaled they are investigating the technology as well, according to the BBC.

OpenAI told the BBC it is cooperating with Italy's regulators and has disabled ChatGPT for users in the country. "We are committed to protecting people's privacy and we believe we comply with (the EU's General Data Protection Regulation) and other privacy laws," a company spokesperson wrote to the BBC. "We also believe that AI regulation is necessary -- so we look forward to working closely with (Italian authorities) and educating them on how our systems are built and used."

The move comes just days after the nonprofit Future of Life Institute released a letter calling for a halt in the development of AI-powered chatbots like GPT-4, ChatGPT, and Google's Bard. The letter was signed by more than 1,000 tech leaders, including Apple co-founder Steve Wozniak, Tesla's Elon Musk, and Turing Award winner Yoshua Bengio. It urges a six-month pause by companies developing the technology, and says governments should enforce a moratorium if an agreement cannot be reached quickly.

CIOs Should Remain Vigilant with ChatGPT and AI

Isaac Sacolick, founder and president of StarCIO and author of two books on digital transformation, says CIOs need to experiment with applications and keep a pulse on the new technology. "CIOs need to get ahead of this -- just like with any new technology," he told InformationWeek in an interview. "They should come up with communications and policies, so people understand where their organization stands on this tech."

He said clear advantages for marketing, along with possibilities for further commercializing information through better search access, will continue to make ChatGPT and its competitors attractive to enterprises.

CIOs also need to be prepared to answer tough questions about how the technology will impact workers. “There’s a fear factor – people are going to ask what it means for their jobs and careers. (CIOs) need to make sure people understand how they continue to provide value regardless of technology advancements. We’ve faced this before in different industries with automation. This is a continued path CIOs need to be responsible for.”

Sacolick, who called the open letter's demand to halt the technology's development "a little ridiculous," says there are better ways to approach concerns about ethics and privacy. "They should be asking ChatGPT to do a better job handling areas that are unsafe," he says, adding that the lack of source information is particularly concerning.

What to Read Next:

Top ChatGPT Fails (and Why You Should Avoid Them)

AI and Hiring Quick Study

Citing Risks to Humanity, AI & Tech Leaders Demand Pause on AI Research

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
