July 23, 2023
Caving to pressure from the Biden Administration, several leading AI companies this week pledged to enact safeguards around the quickly emerging tech’s development and rollout.
Seven companies -- Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI -- committed to the new safety standards during a White House meeting on Friday. The move comes on the heels of the FTC’s recently announced investigation into OpenAI’s ChatGPT.
“We must be clear-eyed and vigilant about threats emerging from emerging technologies that can pose -- don’t have to but can pose -- to our democracy and our values,” President Joe Biden said in brief remarks to the press at the time. “This is a serious responsibility; we have to get it right. And there’s enormous, enormous potential upside as well.”
Balancing the promise of AI against its potential threats has dominated news coverage since ChatGPT’s November 2022 public release. The overwhelming success of and hype around that product have spurred an AI arms race, with companies vying to launch new products and services built on generative AI -- including powerful new tools that create photos, text, music, and video without the aid of a human.
The seven safeguarding commitments include: internal and external security testing of AI systems before release; sharing information within the industry and with governments and academics on managing AI risks; investing in cybersecurity and insider-threat safeguards to encourage third-party access and reporting of vulnerabilities in AI systems; developing robust technical mechanisms to ensure users know what content is AI-generated; public reporting of AI system capabilities, limitations, and areas of appropriate use; and finally, prioritizing research on AI’s potential societal risks.
“We are pleased to make these voluntary commitments alongside others in the sector,” Nick Clegg, president of global affairs at Meta, said in a statement. “They are an important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”
Frith Tweedie, a data privacy consultant and proponent of responsible AI, said in a LinkedIn post that while the safeguards are a “positive development,” the companies’ commitments may have more to do with heading off eventual regulation. “Voluntary commitments rarely yield the desired results. This looks like a win for Big Tech lobbying efforts, particularly if it staves off legislation,” she wrote.
Brad Smith, president of Microsoft, said the company’s effort would help it stay ahead of potential risks. “By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risk,” Smith said in a statement.
Anna Makanju, vice president of global affairs at OpenAI, said in a statement the commitments are “part of our ongoing collaborations with governments, civil society organizations and others around the world to advance AI governance.”
Still, others are calling for more action. Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University, said in a statement that Big Tech and governments need to go further to protect society from AI’s potential existential threat. “The voluntary commitments … are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative AI.”