OpenAI's 2023 Breach Raises Questions About AI Industry Transparency

A hacker stole information from OpenAI’s internal messaging systems. How could this breach fuel the conversation around transparency in the AI space?

Carrie Pallardy, Contributing Reporter

July 11, 2024

5 Min Read

OpenAI was the victim of a breach last year that is just now coming to light. The company informed employees in April 2023, The New York Times reports. As no customer or partner information was stolen, executives decided not to disclose the breach to the public, anonymous sources told The Times.  

As a private company, OpenAI does not have the same breach reporting obligations as public companies. However, it is a titan in today’s burgeoning AI industry, an industry that promises to reshape the way we do business and live our lives. What do we know about this breach now, and how could it fuel the conversation around transparency in the AI space?  

The Shadowy World of Espionage  

Details about the breach are sparse. The company did not disclose it to the public or to law enforcement. The hacker stole details about OpenAI’s technology from an internal employee discussion forum, but the company judged that the individual was likely not linked to any foreign government, according to The Times report. OpenAI did not respond to InformationWeek’s request for comment.

There are still plenty of unanswered questions. “Was it [a] cybercriminal? Was it a competitor? What did they steal, and how did they get in?” Eric O'Neill, founder of risk consultancy The Georgetown Group and cybersecurity company NeXasure, asks.  


While OpenAI deemed the security incident not to be a threat to national security, alarm bells went off for some of its employees. Leopold Aschenbrenner, a former researcher with the company, voiced concerns that OpenAI’s security is not sufficient to prevent the theft of secrets by state actors, according to The Times.

“Well, if one attacker can get in, what about China or Russia or Iran or North Korea?” asks O’Neill, a former FBI counterterrorism and counterintelligence operative.

AI companies, of which OpenAI is just one among many, are valuable targets for espionage campaigns backed by nation-state actors or corporate competitors. While this particular breach may not have resulted in the theft of code, it could still be of value to the actor behind it.

“Every little bit you can learn about a technology that you want to steal is something that you can use to perfect your own,” O’Neill points out.  

With so much money at stake, IP theft is going to be a tool that some use to claw their way toward a competitive advantage. “Individuals are going to be making billions of dollars off of this, millions of dollars off of this,” says Jack Berkowitz, chief data officer at Securiti, an AI company that focuses on the safe use of data and GenAI. “So, the ethics and the behavior of people can be changed.” 


Transparency in the Age of AI  

OpenAI already faces skepticism over its approach to transparency. Critics have had a field day with the company’s use of “open” in its name now that it takes a closed-source approach. The decision not to disclose the breach is another strike against transparency.

“I think you're forgiven more for being breached and then disclosing it than what OpenAI did, which I can only see as trying to hide it,” says O’Neill.  

OpenAI is a lightning rod for scrutiny because of its prominent position in the AI market, but it is far from the only company in the space, and far from the only target for breaches.

“Are there other breaches? Most likely, yes. I think it would be naïve to assume that nothing's happening there,” says Steve Benton, vice president of threat research at cybersecurity company Anomali. “It's so attractive as a target; it's bound to be happening.”  

Many of the companies in the thick of the AI race have been in business for only a short time. And in a highly competitive industry with market share to snap up, innovation often has a way of outstripping security.

“They [companies] need to … not just [spend money] on the GPUs, but on [the] management systems and the security and all the other things that more mature businesses put in place to protect their customers, protect their investors, and their shareholders,” says Berkowitz.


With the very real threat of AI IP theft and the potential that breaches are happening unbeknownst to users and the public, what does that mean for the industry going forward? 

The demand for transparency is likely to grow as enterprises consider how to apply corporate governance to the use of AI.  

“It will start to put pressure on those companies to either be more transparent or companies will choose to work with the [AI] companies that are more transparent simply because they have a responsibility … to guard the best interests of their employees, their customers, and their shareholders,” says Berkowitz.  

While national security concerns regarding AI are nebulous now, they could become much more concrete in the future. As AI is integrated into more businesses and systems, the potential for real-world impact grows if the technology is breached or its IP stolen. “The position … artificial intelligence is moving towards is becoming like a critical national infrastructure,” says Benton.

Already, AI is the focus of much government scrutiny, and regulation has a potential role to play in how transparent the companies building these technologies will have to be.  

O’Neill thinks it is possible this breach could be the subject of government attention. “I suspect that we're very, very soon going to see some sort of inquiry, a congressional inquiry, to try to understand in-depth how the breach occurred, what was lost, and whether there are national security issues,” he says.  

Calls for transparency, either via market demand or government mandate, will likely take time to result in change, but the adoption of AI is unlikely to slow. Enterprise leaders will need to determine what they expect from AI companies in terms of security and transparency, and they will need to consider their own roles in ensuring AI operates safely in their environments.  

Benton argues AI should be treated like a new employee coming onboard. “You wouldn't let somebody walk in off the street and just start doing stuff to your organization,” he says. “So, similarly with AI you need to think pretty carefully about what is the job description for our utilization of AI? How are we going to train this new employee? How will they be supervised? How will we judge their performance?” 

As AI is increasingly adopted, the question of transparency will remain paramount. “The utilization of something as powerful as artificial intelligence is absolutely founded on trust, and the way to build trust is to have transparency,” says Benton.  

About the Author

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
