OpenAI Employees Pen Letter Calling for Whistleblower Protections

Employees of Google DeepMind and Anthropic also signed the letter, which warns of serious risks stemming from a lack of oversight and transparency in artificial intelligence.

Shane Snider, Senior Writer, InformationWeek

June 4, 2024


A group of current and former OpenAI employees, along with current and former Google DeepMind and Anthropic workers, on Tuesday released a letter calling for whistleblower protections, saying open and public discourse is necessary for artificial intelligence (AI) safety.

The letter, signed by more than a dozen people (some by name, others anonymously), says the existential risks posed by AI outweigh companies’ desire to keep secrets in an intensely competitive market.

The letter calls on AI companies to commit to four principles: not enforcing non-disparagement agreements or retaliating against criticism by withholding vested economic benefits; facilitating an anonymous process for current employees to raise risk-related concerns; supporting a culture of open criticism; and not retaliating against current and former employees who publicly raise risk-related concerns.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the letter states. “AI companies possess substantial non-public information about the capabilities and limitations of their systems… However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”


OpenAI is currently embroiled in multiple lawsuits over its juggernaut ChatGPT chatbot, released in November 2022. ChatGPT set off a generative AI (GenAI) arms race, with Big Tech players like Google, Microsoft, Nvidia, and Amazon jockeying for position as businesses and consumers race to adopt new AI technologies. The GenAI market is expected to top $1.3 trillion within a decade, according to Bloomberg.

In a thread on X, former OpenAI employee and letter signatory Jacob Hilton detailed concerns, saying the group is “calling for all frontier AI companies to provide assurances that employees will not be retaliated against for responsibly disclosing risk-related concerns… Historically at OpenAI, sign-on agreements threatened employees with the loss of their vested equity if they were fired for ‘cause,’ which includes breach of confidentiality. If an employee realizes that the company has broken one of its commitments, they have no one to turn to but the company itself.”

Last month, an internal memo viewed by CNBC showed OpenAI had ended its practice of forcing employees to choose between signing a non-disparagement agreement and losing vested equity in the company.


Hilton applauded OpenAI’s backtracking on the non-disparagement agreement but said more should be done to protect whistleblowers. “Employees may still fear other forms of retaliation for disclosure, such as being fired and sued for damages,” Hilton wrote.

The letter was also endorsed by technology luminaries, including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell.

The letter says the technology poses serious risks. “These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” the letter states.

In a statement to InformationWeek, an OpenAI spokesperson said, “We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”

OpenAI also said it has removed non-disparagement clauses from its paperwork for departing employees. The company says it has a good track record of not releasing technology without safeguards, pointing to its Voice Engine voice model and Sora video model, both of which it has held back from wide release.

In an email interview with InformationWeek, Erik Noyes, associate professor of entrepreneurship and founder of the Babson College AI lab, said the letter is a step in the right direction for responsible AI transparency. "Given the unique importance of AI to the future of human innovation -- and the real risks AI can pose -- this call makes great sense."


He adds, "Once again we see that it's important that the world's most powerful tech companies need practical, tactical oversight as their incentives can diverge from those of society at large."

Manoj Saxena, InformationWeek Insight Circle member and founder of the Responsible AI Institute, says it's crucial for employees to have a voice in the safety process when it comes to responsible AI. "Given the emergent and unknown power of these rapidly advancing generative AI systems, it is critically important for management of an AI innovator such as OpenAI to hear and address concerns from former employees -- both to prevent reputational damage and expensive litigation and to prevent misuse and ensure ethical development, thereby fulfilling our duty to protect society from potential harm from rogue AI," he tells InformationWeek.

About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.

