Citing Risks to Humanity, AI & Tech Leaders Demand Pause on AI Research

While applauding the ethical intentions of the open letter -- signed by Steve Wozniak, Turing Award winner Yoshua Bengio, Elon Musk, and over 1,000 other leaders -- some experts wonder if the approach is too little, too late.

Shane Snider

March 30, 2023

[Image: a red circle and crossbar surrounding the word "AI" in gold lettering, representing a ban on artificial intelligence. Credit: Dragon Claws / Alamy Stock Photo]

An open letter urging a pause on artificial intelligence -- signed by more than 1,000 top tech leaders and researchers in data science, artificial intelligence, and information technology -- calls for regulation on the emerging technology’s “profound risks to society and humanity.”

The nonprofit Future of Life Institute on Wednesday released the letter calling for a halt to the breakneck pace at which AI-powered chatbots like GPT-4, ChatGPT, and Google’s Bard are being developed and deployed. The letter is signed by the likes of Apple co-founder Steve Wozniak, Tesla firebrand Elon Musk, 2020 presidential candidate Andrew Yang, Turing Award winner and founder-scientific director of the Montreal Institute for Learning Algorithms Yoshua Bengio, Berkeley professor of computer science and co-author of the textbook "Artificial Intelligence: A Modern Approach" Stuart Russell, and a host of CEOs and researchers within the AI field.

According to the letter, titled “Pause Giant AI Experiments: An Open Letter,” the huge leaps in AI development experienced in the past several months “have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control.”

The letter urges a 6-month, publicly verifiable pause in the development of AI systems “more powerful than GPT-4” and says governments should step in and enforce a moratorium if an agreement cannot be reached quickly. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says.

Putting the AI Genie Back in the Bottle

Natalia Modjeska, research director and leader of Omdia’s AI research team, says the letter may be well-intentioned, but wonders how effective the initiative will be. “While I applaud and fully support this initiative, I wonder whether it has any teeth,” she told InformationWeek. “You can’t put the genie back in the bottle. And really, of what we need to do, such as develop safety protocols, independent oversight, auditing, certification, watermarking … Realistically, how much of this can be done in six months?”

For Roger Kay, founder of market intelligence firm Endpoint Technologies Associates, regulating the quickly emerging technology is a near-impossible task. “You’re talking about multiple stakeholders and multiple jurisdictions and countries,” he said. “Our own government is so dysfunctional, it’s hard to see them really getting something done quickly. There’s no stopping this technology. But I also think there’s a lot of panic involved. People have been working on AI for decades. It’s just now entered the popular imagination in a big way.”

What will really guide the future of AI technology, Kay says, is monetization and companies jockeying for position to make the best use of the technology. “Right now, a whole bunch of leaders are saying ‘Let’s not let AI get out of hand,’ because they don’t have control of it. And we do have to take it all very seriously because there is so much at stake.”

The Future of Life Institute was created in 2015 with a mission to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” Much of the concern surrounding AI is centered on the sudden emergence of predictive text generation that critics warn can be used maliciously and lead to the spread of misinformation and other problems.

In an interview with The New York Times, AI critic and entrepreneur Gary Marcus said, “These things are shaping our world. We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”

About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology, and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
