What Does the New AI Executive Order Mean for Development, Innovation?

The Biden administration releases a broad executive order addressing the risks and opportunities of artificial intelligence.

Carrie Pallardy, Contributing Reporter

November 2, 2023


At a Glance

  • The order calls for the development of new standards and tools to ensure AI systems are safe, secure, and trustworthy.
  • Developers creating powerful AI systems will have to perform red-team safety tests and share results with the US government.
  • While some AI thought leaders are vocal proponents of regulation, others argue that it will stifle innovation.

The EU’s AI Act, set to be the first comprehensive law regulating the use of artificial intelligence, is expected to be adopted within the next couple of years. The United States recently joined the EU in answering mounting calls for regulation of this nascent, rapidly growing technology.

On Oct. 30, President Joe Biden issued an executive order addressing the safety, security, and trustworthiness of AI. The sweeping order has implications for the development and use of AI.

Many prominent voices in the AI field have urged lawmakers to act. On Oct. 26, a group of 24 leaders in the AI space released a paper -- Managing AI Risks in an Era of Rapid Progress -- warning of the dire risks of unchecked development of this technology. The authors, Turing Award winners among them, make an urgent call to confront these risks and regulate AI before it is too late to halt widespread harm to humanity.

How does this new executive order address safety concerns, and what does it mean for the continued innovation in the AI space?

Defining Safety

What does safe AI look like? The executive order calls for the development of new standards and tools to ensure AI systems are safe, secure, and trustworthy before they are released to the public. The National Institute of Standards and Technology (NIST), along with the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and other relevant stakeholders, will develop these safety and security standards within 270 days, according to the full text of the order.


The government is concerned specifically with “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety,” according to the White House fact sheet. Developers creating these powerful AI systems will have to perform red-team safety tests and share those results with the US government.

“If the government feels like the AI or an AI model has the potential to compromise national security, they want to be involved,” says Jake Williams, faculty member at security insights company IANS Research and a former US National Security Agency (NSA) hacker.

The focus on powerful AI systems aligns with the paper calling for AI risk management. “The most pressing scrutiny should be on AI systems at the frontier: a small number of most powerful AI systems trained on billion-dollar supercomputers -- which will have the most hazardous and unpredictable capabilities,” the co-authors write.

Responsible Use


While AI will have far-reaching consequences, particularly once fully autonomous systems are eventually developed, Williams anticipates that the immediate risks lie with users. “It’s less about the AI itself and it’s more about how people inappropriately use AI,” he explains.

The use of AI systems can perpetuate and deepen the harm caused by discrimination and bias. The executive order calls for guidelines to protect equity and civil rights in justice, health care, and housing.


“I think that this is going to be a major step towards ensuring … we’re not accidentally or deliberately using models for discriminatory purposes,” says Liz Fong-Jones, field CTO at Honeycomb, maker of a software debugging tool.

The executive order also addresses how the government will responsibly deploy AI to modernize its infrastructure.

“It’s quite interesting how the executive order says as the federal government not only are we trying to impose these guidelines and these safeguards … but we’re basically going to eat our own dog food,” says Alla Valente, a senior analyst at market research company Forrester.

Privacy

Data privacy is a significant concern that comes with the rise of AI systems. In the executive order, Biden calls for Congress to pass bipartisan privacy legislation that protects privacy as it relates to the risks of AI and more broadly. “We might see regulation around individual privacy, something similar to GDPR,” says Valente.


Any privacy regulation that does come to fruition has the potential to change the way AI system developers collect and use data.

Consumers and Workers

While the executive order targets the development and use of AI by companies and government agencies, it also acknowledges how its widespread adoption will impact individuals. The federal government will continue to enforce existing consumer protection laws and “enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI,” according to the full text of the order.

AI is poised to radically change the way work is done. The Biden Administration has plans to explore how AI will impact the labor market, and the order directs the development of “principles and best practices to mitigate the harms and maximize the benefits of AI for workers,” according to the fact sheet.

Regulation and Innovation

While some AI thought leaders are vocal proponents of regulation, others argue that it will stifle innovation in such a young, promising field. Yann LeCun, himself a Turing Award winner and Meta’s chief AI scientist, is among those critics.

LeCun took to X to call out AI leaders who favor more regulation. “If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI,” he writes.

Biden’s executive order is just the first step forward for AI regulation in the US. What does it mean for innovation?

The big players in AI are already visibly cooperating with the government. In July, seven companies -- Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI -- agreed to voluntary AI safeguards.

Williams points out that companies building AI systems likely had a role to play in drafting this executive order. “I don’t think the federal government got to some of the … requirements and definitions that they got to without some help from those in the industry,” he says.

Fong-Jones acknowledges that there are concerns related to open-source models. “It’s hard to answer these questions about who is responsible for the compliance aspects of that open-source software,” she says.

The demand for a more rigorous development process and greater understanding of safety and security could mean the breakneck pace of AI progress slows.

“From a development perspective, we really need to get in best practices on how do we build this with security in mind and that hasn’t been happening,” says Ian Swanson, CEO of AI and machine learning security company Protect AI. “So, while it might on one hand slow down innovation, I think it’s going to make AI more robust, more trustworthy and, per this executive order, more safe.”


The executive order appears to seek a middle ground between innovation and guidelines for responsible AI use and development. It includes calls for AI research, a competitive AI ecosystem, and the rapid hiring of AI professionals within the government.

“There’s fear of AI, and … we’ve had examples of situations that have led us to that belief,” says Valente. “In my opinion, if the executive order wants to do one thing, it’s to change the perception around AI, have us thinking about it in a much more beneficial way, and start thinking about how we can leverage it in our organizations but without creating undue risk.”

About the Author

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
