Government // Leadership
News
7/27/2015
06:06 PM

Ban AI Weapons, Scientists Demand

Roboticists and experts in artificial intelligence want to prohibit offensive autonomous weapons.

Theoretical physicist Stephen Hawking, Tesla CEO Elon Musk, and Apple co-founder Steve Wozniak are among the hundreds of prominent academic and industry experts who have signed an open letter opposing offensive autonomous weapons.

The letter, published by the Future of Life Institute in conjunction with the opening of the 2015 International Joint Conference on Artificial Intelligence (IJCAI) on July 28, warns that an arms race to develop military AI systems will harm humanity.

"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter states.

Such systems, by virtue of their affordability, would inevitably become ubiquitous and would be used for assassinations, destabilizing nations, ethnic killings, and terrorism, the letter asserts.

Hawking and Musk serve as advisors for the Future of Life Institute, an organization founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn to educate people about the ostensible risk that would follow from the development of human-level AI. Both have previously spoken out about the potential danger of super-intelligent AI. Musk has suggested advanced AI is probably "our biggest existential threat."

(Image: jlmaral via Flickr under CC By 2.0)

The potential danger posed by AI has become a common topic of discussion among technologists and policymakers. A month ago, the Information Technology and Innovation Foundation in Washington, D.C., held a debate with several prominent computer scientists about whether super-intelligent computers really represent a threat to humanity.

Stuart Russell, an AI professor at UC Berkeley who participated in the debate and is also a signatory of the letter, observed, "[W]hether or not AI is a threat to the human race depends on whether or not we make it a threat to the human race." And he argued that we need to do more to ensure that we don't make it a threat.

The U.S. military presently insists that autonomous systems be subordinate to people. A 2012 Department of Defense policy directive states, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

Yet human control over these systems remains imperfect. In 2014, human rights group Reprieve claimed that U.S. drone strikes had killed 28 unknown individuals for every intended target.

Under its own terms, the DoD policy on autonomous weapons must be recertified within five years of its 2012 publication date; if it isn't, it will expire in 2022. And it's not obvious that political or military leaders will want to maintain the policy if other nations continue to pursue the development of autonomous systems.

In a 2014 report, the Center for a New American Security (CNAS), a Washington, D.C.-based defense policy group, claimed that at least 75 other nations are investing in autonomous military systems and that the United States will be "driven to these systems out of operational necessity and also because the costs of personnel and the development of traditional crewed combat platforms are increasing at an unsustainable pace."

If CNAS is right and the economics of autonomous systems are compelling, a ban on offensive autonomous weapons may not work.

Economic Appeal

Economics play an obvious role in the appeal of weapon systems. The Kalashnikov rifle owes much of its popularity to affordability, availability, and simplicity. Or consider the landmine, an ostensibly defensive autonomous weapon that's not covered by the letter's proposed ban on "offensive autonomous weapons."

Landmines cost somewhere between $3 and $75 apiece to produce, according to the United Nations, which estimates that as many as 110 million landmines have been deployed across 70 countries since the 1960s. In addition, undiscovered landmines from earlier wars may still be operational.

Banning landmines is having an effect: Since the Mine Ban Treaty took effect in 1999, landmine casualties have declined from an average of 25 per day to nine. But the ban on mines is not respected everywhere or by everyone.

Better AI might actually help here. The basic landmine algorithm -- if triggered, explode -- could be made far more discriminating about when to detonate, whether the mine's mechanism is mechanical or electronic. The inclusion of an expiration timer, for example, could prevent many accidental deaths, particularly after conflicts have concluded. And more sophisticated systems could be even more discriminating about valid targets, as the sketch below illustrates.
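
To make the contrast concrete, here is a minimal sketch in Python of the difference between the naive trigger logic described above and a more discriminating electronic fuze with an expiration timer. Everything here is hypothetical and illustrative -- the class names, sensor fields, and thresholds are assumptions made for the sake of the example, not details of any real weapon system.

from dataclasses import dataclass
from datetime import datetime, timedelta

def naive_fuze(triggered: bool) -> bool:
    # The basic landmine algorithm: if triggered, explode.
    return triggered

@dataclass
class SensorReading:
    pressure_kg: float      # load on the trigger plate
    metal_signature: bool   # e.g., a magnetometer detecting a vehicle-like mass

class DiscriminatingFuze:
    """Hypothetical electronic fuze: requires a heavy, metallic trigger
    and renders itself inert after a fixed lifetime."""

    def __init__(self, armed_at: datetime, lifetime_days: int = 90,
                 min_pressure_kg: float = 120.0):
        self.expires_at = armed_at + timedelta(days=lifetime_days)
        self.min_pressure_kg = min_pressure_kg

    def should_detonate(self, reading: SensorReading, now: datetime) -> bool:
        if now >= self.expires_at:
            return False  # expiration timer: inert once the conflict window closes
        if reading.pressure_kg < self.min_pressure_kg:
            return False  # too light to be a vehicle; ignore people and animals
        return reading.metal_signature  # require a vehicle-like metal signature

# Example: the same stimulus produces different outcomes over time.
fuze = DiscriminatingFuze(armed_at=datetime(2015, 1, 1))
tank = SensorReading(pressure_kg=900.0, metal_signature=True)
print(fuze.should_detonate(tank, now=datetime(2015, 2, 1)))  # True: within lifetime
print(fuze.should_detonate(tank, now=datetime(2016, 1, 1)))  # False: expired

Even this toy version encodes the two improvements described above: it stops being dangerous after the conflict window closes, and it ignores stimuli that don't match its intended target.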

Offensive autonomous weapons already exist. Beyond landmines, there are autonomous cyber weapons. Stuxnet, for example, has been characterized as AI. Rather than banning autonomous weapon systems, it may be more realistic and more effective to pursue a regime to govern them.

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful ... View Full Bio

Comments
Broadway0474,
User Rank: Ninja
7/31/2015 | 4:58:11 PM
Re: Treaties need to have teeth
Sure, a couple of Third World despots have been placed in jail by the International Criminal Court for war crimes. But it takes a long time, and far more criminals fall through the cracks. And as for WWII, that was a unique situation brought about by TOTAL VICTORY in an all-out war. You rarely have that sort of conclusion, so rarely will you be able to stop war criminals so completely (and that was, after all, after the war criminals had killed tens of millions of people, mind you).
kstaron,
User Rank: Ninja
7/31/2015 | 9:10:31 AM
abuse of technology
I agree that if you are going to ban/regulate them, it has to be by international agreement that some AI weapons are considered weapons of mass destruction and their use a war crime. Not only would you have an arms race in which fewer casualties happen on the side using AI, but the potential for abuse of the technology would be staggering. (Not to mention the several bad sci-fi plots involving rogue AI running through my head.)
jries921,
User Rank: Ninja
7/31/2015 | 12:27:30 AM
Re: Treaties need to have teeth
I seem to recall that there are some people doing prison time for war crimes.  There would be more if they were prosecuted more aggressively, which is why I think it foolish to rely on international courts (let them be tried by whatever state catches them, just like pirates used to be).  Lots of WWII-era war criminals were literally hunted for the rest of their lives; that is exactly what needs to happen to their modern successors until they are brought to justice.
Broadway0474,
User Rank: Ninja
7/30/2015 | 11:11:07 PM
Re: Treaties need to have teeth
The problem is that even such efforts to label weapons as war crimes -- gas and biological weapons, for instance -- don't stop them from being built, tested, and eventually used by "rogue nations."
jries921,
User Rank: Ninja
7/30/2015 | 12:19:00 PM
Treaties need to have teeth
If there is to be a serious effort to ban autonomous offensive weapons, then they not only need to be defined, but their use has to be classified as a war crime, punishable by any state that apprehends the perpetrators (no matter how long it takes).

Otherwise, as others have pointed out, international weapons bans become so difficult to enforce as to be meaningless.
driverlesssam,
User Rank: Strategist
7/30/2015 | 11:05:18 AM
Too late to ban AI weapons
It's too late to ban AI weapons!  "Fire and forget" weapons have long been used in war. Maybe you think that an air-to-air missile seeking and destroying the hottest target it "sees" is not AI. Well, that's another argument to have someday, isn't it?

I have always liked Prof. Alan Perlis' definition of AI from the 1960s: "A system is intelligent if you think it is."
Susan Fourtané,
User Rank: Author
7/29/2015 | 3:31:36 AM
Re: Blaming AI for human destructive behavior
Sunita, what solution do you propose? -Susan
Gary_EL,
User Rank: Ninja
7/29/2015 | 1:03:35 AM
Re: The genie can't be put back into the bottle
I don't think building robots of war precludes building robots for peace. History suggests the opposite: war technology often gets used for peaceful purposes -- look at semiconductors. A robot that can outthink a human is certainly possible, but despite the popular TV show, a human author cannot create a conscious being that is self-aware. The notion is silly. And, yes, since 1945, the major instigator of technological change worldwide has been the USA. But there's a new actor on stage -- China. From now on, people need to disabuse themselves of the idea that if we don't do it, no one else will.
SunitaT0,
User Rank: Ninja
7/29/2015 | 12:41:46 AM
Re: The genie can't be put back into the bottle
@Gary: True. The Chinese government is strong, and it has tested various weapons through the years. I think it is a cold threat to other countries.
SunitaT0,
User Rank: Ninja
7/29/2015 | 12:39:42 AM
Re: The genie can't be put back into the bottle
Maybe people should make robots to solve problems like disability and overpopulation, among other things. They should make robots to build colonies on habitable planets and moons of the solar system, and robots to facilitate space exploration.