News | 2/24/2015, 10:31 AM
No, AI Won’t Kill Us All

When famous technologists and scientists fear the menace of thinking machines, it's time to worry, right? Not really, because computers lack the imagination to wreak havoc, says one AI expert.


The sentient, self-aware, and genuinely bad-tempered computer is a staple of science fiction -- the murderous HAL 9000 of 2001: A Space Odyssey being a prime example from the genre. Recently, though, more than a few of the world's top technological and scientific minds -- most notably Bill Gates, Stephen Hawking, and Elon Musk -- have warned humanity of the threat posed by artificial intelligence. In fact, AI has even been named one of the "12 risks that threaten human civilization," according to a new report from the Global Challenges Foundation and Oxford University’s Future of Humanity Institute.

Whoa. So perhaps it's time to step back from the precipice of Skynet-like apocalypse? Maybe focus on making computers a little less smart -- or at least less autonomous?

No, actually it’s a good time to take a deep breath and relax, says Dr. Akli Adjaoute, founder and CEO of Brighterion, a San Francisco-based provider of AI and machine-learning software for healthcare and identity fraud, homeland security, financial services, mobile payments, and other industries. Adjaoute has a PhD in artificial intelligence and mathematics from Pierre and Marie Curie University in Paris. He has spent the past 15 years developing AI technologies for commercial applications.

In short, Adjaoute knows his stuff, and he says AI's ominous potential is vastly overblown.

(Image: No comparison: Kasparov versus Deep Blue via Stanford University)


In a phone interview with InformationWeek, Adjaoute provided a very simple reason the fear of malevolent, thinking machines is unfounded: Computers, unlike people, have no imagination.

"Suppose I'm on the 10th floor, and I'm talking to you from my office," said Adjaoute. "I say, 'Hey, could you please take this bucketful of water, and run to the reception [area] on the first floor?' What happens? You'll say, 'Oh, I will get wet, because the water will splash on me if I run.'"

The human mind, he noted, can imagine that carrying an open, sloshing bucket of water across office floors (and possibly down several flights of stairs) will likely cause water to spill out of the bucket and onto the carrier's clothing. That's imagination at work.

A computer lacks similar cognitive capabilities, however. Rather, it's very, very fast at carrying out instructions.

Even powerful AI systems such as IBM's Jeopardy!-winning Watson don't mimic the human brain. (The same can be said for IBM's Deep Blue computer, which in 1997 defeated world chess champion Garry Kasparov in a six-game match.)

"We don't claim that Watson is thinking in the way people think. It is working on a computational problem at its core," IBM research scientist Murray Campbell, one of the developers of Deep Blue, told the New York Times in 2011.

"The computer doesn't even know it's playing chess," said Adjaoute of Deep Blue. "It's just another level of stupid calculation."
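Adjaoute's point can be made concrete. At its core, a chess engine is a recursive search that does nothing but compare numbers; nowhere does it represent "playing chess" at all. The sketch below is a generic, simplified illustration of that idea (minimax search on a toy number game, not Deep Blue's actual algorithm, which added alpha-beta pruning and custom hardware):

```python
def minimax(position, depth, maximizing, evaluate, moves, apply_move):
    """Return the best score reachable by brute-force lookahead.

    The function never 'knows' what game it is playing: it only
    recurses over legal moves and maximizes (or minimizes) a number.
    """
    if depth == 0 or not moves(position):
        return evaluate(position)  # just a number, e.g. material count
    scores = [
        minimax(apply_move(position, m), depth - 1, not maximizing,
                evaluate, moves, apply_move)
        for m in moves(position)
    ]
    return max(scores) if maximizing else min(scores)


# Toy 'game': the position is an integer; each player adds 1 or 2.
# The maximizer wants the total high, the minimizer wants it low.
def moves(pos):
    return [1, 2]

def apply_move(pos, m):
    return pos + m

def evaluate(pos):
    return pos

# From 0 with two plies: maximizer adds 2, minimizer adds 1.
print(minimax(0, 2, True, evaluate, moves, apply_move))  # prints 3
```

Swap in a chess position, chess move generation, and a material-count evaluation and the same loop plays chess; the search itself is unchanged, which is Adjaoute's "stupid calculation."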


As Allen Institute CEO Oren Etzioni recently told CNBC, AI's critics may be blurring the distinction between machines capable of performing instructions very efficiently, and truly autonomous systems that think and act independently.

"How are you going to have self-awareness if all the program does is look to the data, and analyze it with zeros and ones?" said Adjaoute. "How will it be aware of what it's doing? It's impossible."

He added: "I am tired of seeing artificial intelligence become the boogeyman of technology. There is something irrational about the fear of AI."


Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.

Comments
StaceyE (User Rank: Ninja), 2/28/2015 11:36:15 AM
Re: You can still have havoc without imagination
I think any type of AI that could potentially make decisions like you describe would be very scary.
StaceyE (User Rank: Ninja), 2/28/2015 11:33:07 AM
Re: From where will computers get their paranoia?
@Charlie

I agree with you completely. I don't think a computer will ever have the capability of that type of consciousness...even if someone tried to program it in....
StaceyE (User Rank: Ninja), 2/28/2015 11:30:19 AM
Re: I am human hear me roar
@vnewman2

I bet those people are the same ones who believe a computer cannot make mistakes. I have actually had to explain that "the data the computer is using to give you information was put into the computer by a human being, who of course could very well have made a mistake". Sometimes that explanation worked, and other times it turned into a whole new "argument".
David Wagner (User Rank: Strategist), 2/28/2015 12:30:23 AM
Re: You can still have havoc without imagination
This might be an oversimplification, but I've always felt like anything we program would have our general respect for life as a part of the programming. Or that, at worst, we could create the "good" AI to fight the "bad" AI someone else made. I feel like AI is more of a battleground than a threat.
kstaron (User Rank: Ninja), 2/27/2015 11:30:57 AM
You can still have havoc without imagination
Any AI system might lack the imagination to become a movie-style supervillain, but a minor flaw in the 'learning' portion of an AI system could cause havoc. The movies take these ideas to the Nth degree, where the 'logical' conclusion is to delete the humans so the self-aware computer may continue. What if a learning AI system decided to increase traffic efficiency by eliminating yellow lights over 20 city blocks? There could be hundreds of injuries or deaths before it 'learned' that humans don't have instant reaction time. At any point where an entity learns and applies that information, there is the possibility for havoc if the learning isn't complete. In humans we call this growing up; it takes a long time with lots of supervision, and during that time the ones growing up rarely have control over any kind of critical computer system. While it might not even be possible to develop an evil computer like HAL, it seems entirely possible to create something where havoc could ensue.
mak63 (User Rank: Ninja), 2/25/2015 4:30:29 PM
report
"12 risks that threaten human civilization," according to a new report from...
I thought for a moment there that report was from The Onion.
Anyway, I agree with Dr. Adjaoute: the threat of AI is "vastly overblown."
Resource (User Rank: Apprentice), 2/25/2015 3:53:56 PM
Re: From where will computers get their paranoia?
The original for that self-protective behavior was "Colossus: The Forbin Project" (1970). The human machine manager did defeat him, though. An interesting monument to Cold War paranoia.
TerryB (User Rank: Ninja), 2/25/2015 2:46:27 PM
Re: From where will computers get their paranoia?
Charlie, I guess I was thinking about programming for the computer to protect itself, the "self awareness" you see in so many movies. My favorite example is in Eagle Eye, where the Big Brother computer discovered the humans wanted to unplug him. He took care of that problem. :-)

Again, I want to be clear I'm being very tongue in cheek discussing this. We are nowhere near the level of programming in AI where what I describe above is feasible. And it is quite possible we will never get there. But in case we do, I sure hope the scientists have watched these movies....
tzubair (User Rank: Ninja), 2/25/2015 2:35:15 PM
Re: I am human hear me roar
"would it be possible for an AI who is created to make decisions about systems (such as an AI system that is dedicated to reacting proactively to manage internal IT systems for critical infrastructure) to reach out and decide that it can make beneficial changes to systems where it honestly shouldn't even connect to"

@Stratustician: I think the use of AI for managing internal IT systems can be useful, but it will have its own set of restrictions. You may make decisions at run time, such as how to manage the VMs when the load goes up or how to switch to an alternate network when a failure occurs, but when it comes to changing physical configurations, this won't be as easy.