Biden Administration Clamps Down on Agencies’ AI Use
The Administration announced new guidance for agencies to bolster their safe use of artificial intelligence (AI) tools.
The White House on Thursday announced a new Office of Management and Budget (OMB) policy that offers guidance on AI safety and will mandate public reporting on AI risks and how they are managed. The guidance gives agency leaders the means to independently assess AI tools, uncover flaws, and prevent biased or discriminatory results.
During a press call, Vice President Kamala Harris told journalists the new OMB policy would create “binding requirements to promote the safe, secure, and responsible use of AI by our federal government.”
She added, “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people.”
Agencies will have until December 1 to implement “concrete safeguards” around their use of AI tools, according to the OMB. “These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” OMB said in a fact sheet.
If an agency fails to adopt suggested safeguards, “the agency must cease using the AI system,” the fact sheet says.
Manoj Saxena, founder and executive chairman of Responsible AI Institute, says the guidelines were needed, considering the government's potential as a major purchaser and user of AI tools. "By mandating federal agencies to integrate responsible AI, the administration is leveraging its considerable purchasing power to encourage all data, technology, and service providers to prioritize the development of AI solutions that are safe and ethically responsible," he tells InformationWeek in an email interview. "The implications of this policy are profound and it will accelerate American innovation and competitiveness forward and strengthen public trust in AI technologies."
Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cybersecurity at Capitol Technology University, said in a statement the guidelines are a welcome safeguard, but the government has much more to do. The order, he wrote, “will certainly enhance transparency, safety, and reliability of numerous AI systems utilized by the federal government. Having said this, the (OMB guidance) will unlikely have a substantial impact on the private sector with narrow exception to those companies that develop AI solutions for federal agencies.”
Encouraging AI Innovation
OMB’s guidance will allow more AI innovation in areas such as addressing the climate crisis, responding to natural disasters, advancing public health (for example, using AI to predict the spread of disease), and protecting public safety. “OMB’s policy will remove unnecessary barriers to federal agencies’ responsible AI innovation,” according to the fact sheet. “AI technology presents tremendous opportunities to help agencies address society’s most pressing challenges.”
The Biden Administration said that, by the summer, it would hire 100 AI professionals to promote the safe use of AI as part of the National AI Talent Surge created by Biden’s earlier executive order on AI. The administration is also budgeting an additional $5 million for fiscal year 2025 to expand government-wide AI training.
“The American people have a right to know when and how their government is using AI, that it is being used in a responsible way … and we want to do it in a way that holds leaders accountable for the responsible use of AI,” Harris said.