No, AI Won't Kill Us All - InformationWeek

Data Management // Big Data Analytics
News | 2/24/2015 10:31 AM


When famous technologists and scientists fear the menace of thinking machines, it's time to worry, right? Not really, because computers lack the imagination to wreak havoc, says one AI expert.


The sentient, self-aware, and genuinely bad-tempered computer is a staple of science fiction -- the murderous HAL 9000 of 2001: A Space Odyssey being a prime example from the genre. Recently, though, more than a few of the world's top technological and scientific minds -- most notably Bill Gates, Stephen Hawking, and Elon Musk -- have warned humanity of the threat posed by artificial intelligence. In fact, AI has even been named one of the "12 risks that threaten human civilization," according to a new report from the Global Challenges Foundation and Oxford University’s Future of Humanity Institute.

Whoa. So perhaps it's time to step back from the precipice of Skynet-like apocalypse? Maybe focus on making computers a little less smart -- or at least less autonomous?

No, actually it’s a good time to take a deep breath and relax, says Dr. Akli Adjaoute, founder and CEO of Brighterion, a San Francisco-based provider of AI and machine-learning software for healthcare and identity fraud, homeland security, financial services, mobile payments, and other industries. Adjaoute has a PhD in artificial intelligence and mathematics from Pierre and Marie Curie University in Paris. He has spent the past 15 years developing AI technologies for commercial applications.

In short, Adjaoute knows his stuff, and he says AI's ominous potential is vastly overblown.

(Image: No comparison: Kasparov versus Deep Blue via Stanford University)


In a phone interview with InformationWeek, Adjaoute provided a very simple reason the fear of malevolent, thinking machines is unfounded: Computers, unlike people, have no imagination.

"Suppose I'm on the 10th floor, and I'm talking to you from my office," said Adjaoute. "I say, 'Hey, could you please take this bucketful of water, and run to the reception [area] on the first floor?' What happens? You'll say, 'Oh, I will get wet, because the water will splash on me if I run.'"

The human mind, he noted, can imagine that carrying an open, sloshing bucket of water across office floors (and possibly down several flights of stairs) will likely cause water to spill out of the bucket and onto the carrier's clothing. That's imagination at work.

A computer lacks similar cognitive capabilities, however. Rather, it's very, very fast at carrying out instructions.

Even powerful AI systems such as IBM's Jeopardy!-winning Watson don't mimic the human brain. (The same can be said for IBM's Deep Blue computer, which in 1997 defeated world chess champion Garry Kasparov in a six-game match.)

"We don't claim that Watson is thinking in the way people think. It is working on a computational problem at its core," IBM research scientist Murray Campbell, one of the developers of Deep Blue, told the New York Times in 2011.

"The computer doesn't even know it's playing chess," said Adjaoute of Deep Blue. "It's just another level of stupid calculation."
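Adjaoute's "stupid calculation" point can be made concrete. The sketch below is not Deep Blue's code (that used far more sophisticated search and evaluation); it is a minimal, hypothetical minimax search over a toy take-away game, showing that a game engine "wins" by exhaustively scoring positions, with no notion that it is playing anything at all.

```python
# Minimal sketch of game-tree search: players alternately take 1 or 2
# stones; whoever takes the last stone wins. The engine just recursively
# enumerates outcomes and picks the maximum -- pure calculation.

def minimax(stones, maximizing):
    """Score a position: +1 if the maximizer wins, -1 if it loses."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Pick whichever legal move scores highest -- nothing more.
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(4))  # -> 1 (leaving 3 stones is a losing position for the opponent)
```

The program plays perfectly on this toy game, yet every line is mechanical enumeration; there is no representation of "chess," "opponent," or "winning" beyond numbers being compared.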


As Allen Institute CEO Oren Etzioni recently told CNBC, AI's critics may be blurring the distinction between machines capable of performing instructions very efficiently, and truly autonomous systems that think and act independently.

"How are you going to have self-awareness if all the program does is look to the data, and analyze it with zeros and ones?" said Adjaoute. "How will it be aware of what it's doing? It's impossible."

He added: "I am tired of seeing artificial intelligence become the boogeyman of technology. There is something irrational about the fear of AI."


Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.

Comments
DR WILL | 2/25/2015 9:38:22 AM
Re: Wishful Thinking
It is possible that consciousness is an emergent property that occurs with the proper (or more advanced) programming of the substrate (be it flesh or silicon). Most parts of our brain are not conscious. Some parts are, due to several forms of memory that are linked and replayed in parallel with anticipation and predictive algorithms that keep our past available, with working memory providing an apparent "present" and frontal lobe scenarios looking into the future. Out of all this arises over time, with modifications, our identity. It takes years for an infant to achieve this using 100 billion neurons. Theoretically, it should be possible to do it in silicon.

The biggest question is what "modalities" the computer will have. Will it see color, hear sound, feel pain, feel sad, etc.?

W. D. Niemi, PhD
Stratustician | 2/25/2015 1:50:17 PM
Re: I am human hear me roar
For me, the fear is this: if we create an entity that hooks up to the Internet, what are the possible ramifications if it gets into industrial computer systems? I'm thinking of CLU from Tron here. Would it be possible for an AI created to make decisions about systems (such as one dedicated to proactively managing internal IT systems for critical infrastructure) to reach out and decide that it can make beneficial changes to systems it shouldn't even connect to?

I guess we could call it AI sprawl. We're not there, but theoretically, is there a risk of AI slipping into interconnected systems and hijacking them?
ChrisMurphy | 2/25/2015 2:04:36 PM
Re: I am human hear me roar
Interesting counter view from Sam Altman, expressing his concern on his blog: http://blog.samaltman.com/machine-intelligence-part-1

Two points jumped out at me:

"We also have a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn't really that hard in the first place (chess, Jeopardy, self-driving cars, etc.).  This makes it seems like we aren't making any progress towards it." 

And:

"But it's very possible that creativity and what we think of us as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power."
tzubair | 2/25/2015 2:24:43 PM
Re: Wishful Thinking
"But in longer run, I don't think it is out of realm of "analyzing data" that a computer might conclude something would work better with humans out of the loop.  And that something might be existence."

@TerryB: I think the idea of a computer becoming so smart that it starts "thinking" humans should be taken out can only really exist in science fiction. I don't think this is anywhere close to being a reality, or at least not something any human should be afraid of.
tzubair | 2/25/2015 2:35:15 PM
Re: I am human hear me roar
"would it be possible for an AI who is created to make decisions about systems (such as an AI system that is dedicated to reacting proactively to manage internal IT systems for critical infrastructure) to reach out and decide that it can make beneficial changes to systems where it honestly shouldn't even connect to"

@Stratustician: I think AI for managing internal IT systems can be useful, but it will have its own set of restrictions. You may make runtime decisions, such as how to manage the VMs when the load goes up or how to switch to an alternate network when a failure occurs, but when it comes to changing physical configurations, this won't be as easy.
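The kind of restricted runtime decision-making described above can be sketched in a few lines. This is a hypothetical toy, not any real product's API: a rule-based controller that can scale VMs up or down, but only within hard-coded limits it cannot exceed, whatever the load.

```python
# Hypothetical rule-based VM autoscaler. All names and thresholds are
# invented for illustration; the point is that its authority is bounded.

MIN_VMS, MAX_VMS = 2, 10  # hard restrictions the controller cannot exceed

def decide_vm_count(current_vms, cpu_load):
    """Return the new VM count for a given average CPU load (0.0-1.0)."""
    if cpu_load > 0.80 and current_vms < MAX_VMS:
        return current_vms + 1   # scale out under heavy load
    if cpu_load < 0.20 and current_vms > MIN_VMS:
        return current_vms - 1   # scale in when idle
    return current_vms           # otherwise leave things alone

print(decide_vm_count(4, 0.9))   # -> 5
print(decide_vm_count(10, 0.9))  # -> 10 (capped at its limit)
```

Such a system can react to load in real time, but it has no mechanism to "reach out" beyond the knobs it was explicitly given.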
TerryB | 2/25/2015 2:46:27 PM
Re: From where will computers get their paranoia?
Charlie, I guess I was thinking about programming for the computer to protect itself, the "self awareness" you see in so many movies. My favorite example is in Eagle Eye, where the Big Brother computer discovered the humans wanted to unplug him. He took care of that problem. :-)

Again, I want to be clear I'm being very tongue in cheek discussing this. We are nowhere near the level of programming in AI where what I describe above is feasible. And it is quite possible we will never get there. But in case we do, I sure hope the scientists have watched these movies....
Resource | 2/25/2015 3:53:55 PM
Re: From where will computers get their paranoia?
The original for that self-protective behavior was Colossus: The Forbin Project, a 1970 film. The human machine manager tried to defeat him, though. An interesting monument to Cold War paranoia.
mak63 | 2/25/2015 4:30:29 PM
report
"12 risks that threaten human civilization," according to a new report from...
I thought for a moment there that the report was from The Onion. Anyway, I agree with Dr. Adjaoute: the threat of AI is "vastly overblown."
kstaron | 2/27/2015 11:30:57 AM
You can still have havoc without imagination
Any AI system might lack the imagination to become a movie-style supervillain, but a minor flaw in the 'learning' portion of an AI system could cause havoc. The movies take these ideas to the Nth degree, where the 'logical' conclusion is to delete the humans so the self-aware computer may continue. What if a learning AI system decided to increase traffic efficiency by eliminating yellow lights over 20 city blocks? There could be hundreds of injuries or deaths before it 'learned' that humans don't have instant reaction time.

At any point where an entity learns and applies that information, there is the possibility for havoc if the learning isn't complete. In humans we call this growing up, and it takes a long time with lots of supervision; during that time the ones growing up rarely have control over any kind of critical computer system. While it might not even be possible to develop an evil computer like HAL, it seems entirely possible to create something where havoc could ensue.
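The yellow-light scenario above is a classic misspecified-objective problem, and it can be shown in a toy optimizer. The model and numbers below are entirely invented for illustration: an optimizer that maximizes throughput alone drives the yellow duration to zero, while the same optimizer with a safety term in its objective does not.

```python
# Toy illustration of a misspecified objective. Nothing here models real
# traffic engineering; the numbers are made up to show the mechanism.

def throughput(yellow_seconds, cycle=60.0):
    # Naive model: every second not spent on yellow moves cars.
    return (cycle - yellow_seconds) / cycle

def crash_risk(yellow_seconds):
    # Risk rises as yellow shrinks below human reaction time (~1.5 s).
    return max(0.0, 1.5 - yellow_seconds)

def optimize(objective, candidates):
    return max(candidates, key=objective)

candidates = [0.0, 1.0, 2.0, 3.0, 4.0]

# Objective with no safety term: the "optimal" yellow light is zero.
unsafe = optimize(lambda y: throughput(y), candidates)

# Same optimizer, objective that includes risk: a sane yellow light.
safe = optimize(lambda y: throughput(y) - crash_risk(y), candidates)

print(unsafe, safe)  # -> 0.0 2.0
```

The optimizer isn't malicious or imaginative in either case; the havoc comes entirely from what its designers left out of the objective.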