OpenAI CEO Sam Altman Pleads for AI Regulation
As ChatGPT use blazes forward and stirs up controversy, tech leaders sat before the US Congress to address growing concerns about AI’s potential harms.
Sam Altman, CEO of generative artificial intelligence firm OpenAI, on Tuesday told lawmakers the US needs to “lead” in developing rules and regulations around emerging AI tools like the company’s ChatGPT generative language software.
In testimony before Congress, Altman called for a regulatory roadmap for AI, a technology he said holds enormous potential for both good and bad outcomes. He said lawmakers should step in to create parameters that would prevent AI creators from causing “significant harm to the world.” He added, “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.”
Growing fears about AI’s potential for misuse, along with concerns about what an unchecked AI revolution could mean for the global workforce, guided many of the questions lobbed at Altman and Christina Montgomery, IBM’s chief privacy and trust officer. Despite laying out his own concerns, Altman said he believes AI could also be a tremendous benefit to humanity. “We think it can be a printing press moment,” Altman said. “We have to work together to make it so.”
Bomb in a China Shop
Sen. Richard Blumenthal (D-Conn.), chairman of the Senate subcommittee on privacy, started the hearing by playing an AI-generated clip of himself speaking on the topic, pointing to the potential for abuse.
Gary Marcus, professor of psychology and neural science at New York University, told lawmakers, “We have built machines that are like bulls in a china shop: powerful, reckless, and difficult to control.” Marcus voiced support for the creation of an oversight agency akin to the Environmental Protection Agency or the Food and Drug Administration.
Piggybacking on the china shop analogy, Blumenthal mused, “Some of us might characterize it more like a bomb in a china shop, not a bull.”
Sen. Josh Hawley (R-Mo.) voiced skepticism about the creation of an agency and about Congress’s ability to work efficiently on regulation. Instead, he said, the threat of massive lawsuits could keep AI in check. Hawley also asked the experts where they stood on an AI pause.
Fears about the technology prompted thousands of tech luminaries, including Elon Musk and Andrew Yang, to sign an open letter last month calling for companies to pause AI training for six months. The three witnesses stopped short of explicitly endorsing such a move and voiced skepticism about a moratorium, and Blumenthal, too, said a pause would be problematic. “The world won’t wait,” he said.
IBM’s Montgomery called for a balanced approach. “The era of AI cannot be another era of ‘move fast and break things,’” she said. “We don’t have to slam the brakes on innovation, either.”
A New Agency?
In his opening remarks, Altman said OpenAI conducts extensive safety testing and audits of its products, but that a government agency may be necessary to provide oversight. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said, adding that the US government should consider licensing and testing requirements for AI. “US leadership is critical.”
Altman’s suggestion of a new regulatory agency to guide AI could prove problematic, Blumenthal said toward the end of the three-hour hearing. He said “you can create 10 new agencies,” but if they don’t have the resources, private companies and their lawyers can “run circles around” the government.
Sen. Dick Durbin (D-Ill.) said major tech companies coming to Congress to ask to be regulated was a “historic” moment.