Amid surging investment in artificial intelligence over the past few years and continuing concern about the implications of the technology, the White House announced on Tuesday that it intends to hold a series of workshops and form an interagency working group to examine the benefits and risks of AI.
In a blog post, Ed Felten, Deputy US Chief Technology Officer, framed the issue in a way that excludes speculative scenarios presenting AI as a threat to humanity, a concern raised by the likes of Stephen Hawking and Elon Musk.
While worries about runaway malevolent AI are often raised in public discussions of the technology, real AI research is more mundane, as in Google's effort to improve the conversational capabilities of its software by feeding it romance novels.
"Today's AI is confined to narrow, specific tasks, and isn't anything like the general, adaptable intelligence that humans exhibit," said Felten. "Despite this, AI's influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition."
Felten pointed to the President's Precision Medicine Initiative and the Cancer Moonshot as endeavors that will depend on AI to identify patterns in medical data. Such projects promise to provide physicians with information that leads to better medical care and clinical outcomes. Felten also highlighted AI's potential to enhance education and transportation through the introduction of autonomous or semi-autonomous vehicles.
At the same time, Felten acknowledged that AI brings risks and policy challenges. He pointed to AI's potential to destroy jobs even as it opens new employment opportunities, which he said underscores the need for job training programs, and he highlighted the problem of inscrutable AI.
"AI systems can also behave in surprising ways, and we're increasingly relying on AI to advise decisions and operate physical and virtual machinery -- adding to the challenge of predicting and controlling how complex technologies will behave."
This suggests many AI systems will have to be open source or open to auditing, in order to ensure legal compliance. AI could, for example, be programmed to discriminate unlawfully -- or could teach itself to discriminate in order to satisfy pre-established criteria -- and no one would be the wiser without some way to trace the system's decisions. Likewise, when an autonomous vehicle kills a pedestrian -- there's no reason to assume self-driving cars will operate flawlessly -- authorities and others will want to know why and whether the incident was avoidable.
To begin grappling with these issues, the White House Office of Science and Technology Policy will be cohosting four workshops with academic institutions and the National Economic Council in the months ahead. The workshops include: Legal and Governance Implications of Artificial Intelligence on May 24 in Seattle, Artificial Intelligence for Social Good on June 7 in Washington, Safety and Control for Artificial Intelligence on June 28 in Pittsburgh, and The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term on July 7 in New York.
Felten also said that a new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence plans to meet next week for the first time. The group intends to follow advances in AI in order to help develop relevant federal policy and use the technology to improve the operation of federal agencies.
One aspect of AI not addressed by Felten is its role in military targeting and weaponry. In a 2014 report, the Center for a New American Security (CNAS), a Washington-based defense policy group, asserted that at least 75 nations are investing in autonomous military systems and that the US will be compelled to do so out of economic and operational necessity.