The Threat Of Artificial Intelligence - InformationWeek


Commentary
7/3/2015
09:16 AM
Thomas Claburn

Super-intelligent robots deserve some concern, but really we should be paying more attention to the people and processes involved in building our machines.


At the end of June, a group of computer scientists gathered at the Information Technology and Innovation Foundation in Washington, D.C., to debate whether super-intelligent computers are really a threat to humanity.

The discussion followed reports a few days earlier of two self-driving cars that, according to Reuters, almost collided. Near-misses on a road aren't normally news, but when a Google self-driving car comes close to a Delphi self-driving car and prompts it to change course, that gets coverage.

To hear Google tell it, the two automated cars performed as they should have. "The headline here is that two self-driving cars did what they were supposed to do in an ordinary everyday driving scenario," a Google spokesperson told Ars Technica.

Ostensibly benevolent artificial intelligence, in rudimentary form, is already here, but we don't trust it. Two cars driven by AI navigated around each other without incident -- that gets characterized as a near-miss. No wonder technical luminaries who muse about the future worry that ongoing advances in AI have the potential to threaten humanity. Bill Gates, Stephen Hawking, and Elon Musk have suggested as much.

(Image: CSA-Printstock/iStockphoto)


The panelists at the ITIF event more or less agreed that it could take anywhere from 5 to 150 years before the emergence of super-human intelligence. But really, no one knows. Humans have a bad track record for predicting such things.

But before our machines achieve brilliance, we will need half a dozen technological breakthroughs, each comparable to the development of nuclear weapons, according to Stuart Russell, an AI professor at UC Berkeley.

Russell took issue with the construction of the question, "Are super-intelligent computers really a threat to humanity?"

AI, said Russell, is "not like [the weather]. We choose what it's going to be. So whether or not AI is a threat to the human race depends on whether or not we make it a threat to the human race."

Problem solved. Computer researchers can simply follow Google's example: Don't be evil.

However, Russell didn't sound convinced that we could simply do the right thing. "At the moment, there is not nearly enough work on making sure that [AI] isn't a threat to the human race," he said.

Ronald Arkin, a computing professor at Georgia Tech, suggested humanity has more immediate concerns. "I'm glad people are worrying about super-intelligence, don't get me wrong," he said. "But there are many, many threats on the path to super-intelligence."

Arkin pointed to lethal autonomous weapon systems, an ongoing challenge confronted by military planners, policymakers, and people around the world.

What's more, robots without much intelligence can be deadly, as an unfortunate Volkswagen contractor in Germany discovered the day before the ITIF talk. The 21-year-old technician was installing an industrial robot with a co-worker when he was struck by the robot and crushed, according to The Financial Times. The technician was inside a safety cage intended to keep people at a distance.

An investigation into the accident has begun. But the cause isn't likely to be malevolent machine intelligence. Human error would be a safer bet. And that's really something to worry about.

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television.
Comments
j2bryson, User Rank: Apprentice
8/14/2015 | 4:58:35 PM
AI is already our super intelligence
You say that no one knows when superintelligence will come. I've already said a couple of times that superintelligence is already here. Lest you think I'm a hack, one of these times was at the main international AI conference (IJCAI), at a panel chaired by Stuart Russell. All of our slides and a picture are here: oh, I think this site is blocking links, just google "ijcai 2013 panel future of ai"; my blog post was second from the top when I did.

Nevertheless, I disagree with Ron Arkin. We do have a lot of big problems, like climate change, sustainability, ISIS, and other radical changes to the political and economic order. But all of these are symptomatic of the hypercomputation that our culture affords, which Bostrom predicts but can't recognise. Computation doesn't only take place in computers, and intelligence doesn't have to be on just one processor. Our entire society, machines and people both, is figuring out ways for humans and human organisations (including corporations) to succeed and compete, and in so doing accidentally causing a considerable level of mayhem, though also doing many wonderful positive things.

Overall I like your article for its balance.  But I'd like more people to realise that AI is already here, and it's helping us change the world very, very quickly.
ErasmoC579, User Rank: Apprentice
7/29/2015 | 1:12:34 PM
Re: Paradox
It's all right to be positive. Let's suppose everybody (and every nation) has good intentions in the development of AI. Do you really believe that would be enough?
ErasmoC579, User Rank: Apprentice
7/29/2015 | 1:09:08 PM
Some problems with AI: potential and risk
It's understandable that the press likes to make a buzz. And it's understandable that web content tends to be shallow.

There are some books that explain in simple terms the threats AI could generate. Of course, those are potential threats. But they're worth taking into consideration. AI is expected to be able to solve problems and make decisions. Superintelligence is expected to be a capacity to learn by itself, improve, and eventually come to be more intelligent than humans. Join those two things and we get a huge capacity. Now, nobody is saying that the system would hate humans and would try to destroy us. What is being said is that, as that system would be thousands of times more intelligent than humans and would be free of some human qualities, we really don't know which problem it may choose to solve and which solution it may pick.

So, there's a risk that it could arrive at the conclusion: humans are a mess, not useful for Earth, they use a lot of resources, produce a lot of trash and pollution, so the better course of action is to neutralize them and use the savings for other purposes. Now, this is only my stupidity here, but an intelligence thousands of times more capable could arrive at more interesting conclusions. But those conclusions would not necessarily be "positive" for humankind. Or maybe they would be, but in a way that humankind wouldn't appreciate.
Brian.Dean, User Rank: Ninja
7/5/2015 | 12:08:25 AM
Re: Paradox
A group of computer scientists also need to gather and speculate whether AI might be the only hope to save this planet in the event of a War of the Worlds scenario, extreme global warming, asteroid impact, gamma-ray bursts, undetected black holes, a flood basalt volcano and all of the above and more.
Blog Voyage, User Rank: Strategist
7/4/2015 | 6:16:38 AM
Re: Paradox
I join you in your way of thinking. But it's as always: we want the pros, not the cons!
danielcawrey, User Rank: Ninja
7/3/2015 | 10:00:22 PM
Re: Paradox
Here's hoping we are going to see a lot of checks and balances for AI in the future. My biggest concern is that, despite warning calls, there will be people who disregard them for ulterior motives.

It's not going to be the machine, but human desire or greed, that could unleash something that we all ultimately regret.
Li Tan, User Rank: Ninja
7/3/2015 | 10:36:34 AM
Paradox
Thanks to the author for this post - a good point is raised. In my humble opinion it's really a paradox. We want the advancement of AI to make our lives easier. But we seldom consider whether we can still keep it under control. It's a dilemma and worth debating.