
News | 2/24/2015 10:31 AM

No, AI Won’t Kill Us All

When famous technologists and scientists fear the menace of thinking machines, it's time to worry, right? Not really, because computers lack the imagination to wreak havoc, says one AI expert.


The sentient, self-aware, and genuinely bad-tempered computer is a staple of science fiction -- the murderous HAL 9000 of 2001: A Space Odyssey being a prime example from the genre. Recently, though, more than a few of the world's top technological and scientific minds -- most notably Bill Gates, Stephen Hawking, and Elon Musk -- have warned humanity of the threat posed by artificial intelligence. In fact, AI has even been named one of the "12 risks that threaten human civilization," according to a new report from the Global Challenges Foundation and Oxford University’s Future of Humanity Institute.

Whoa. So perhaps it's time to step back from the precipice of Skynet-like apocalypse? Maybe focus on making computers a little less smart -- or at least less autonomous?

No, actually it's a good time to take a deep breath and relax, says Dr. Akli Adjaoute, founder and CEO of Brighterion, a San Francisco-based provider of AI and machine-learning software for healthcare and identity fraud prevention, homeland security, financial services, mobile payments, and other industries. Adjaoute has a PhD in artificial intelligence and mathematics from Pierre and Marie Curie University in Paris. He has spent the past 15 years developing AI technologies for commercial applications.

In short, Adjaoute knows his stuff, and he says AI's ominous potential is vastly overblown.

(Image: No comparison: Kasparov versus Deep Blue via Stanford University)


In a phone interview with InformationWeek, Adjaoute provided a very simple reason the fear of malevolent, thinking machines is unfounded: Computers, unlike people, have no imagination.

"Suppose I'm on the 10th floor, and I'm talking to you from my office," said Adjaoute. "I say, 'Hey, could you please take this bucketful of water, and run to the reception [area] on the first floor?' What happens? You'll say, 'Oh, I will get wet, because the water will splash on me if I run.'"

The human mind, he noted, can imagine that carrying an open, sloshing bucket of water across office floors (and possibly down several flights of stairs) will likely cause water to spill out of the bucket and onto the carrier's clothing. That's imagination at work.

A computer lacks similar cognitive capabilities, however. Rather, it's very, very fast at carrying out instructions.

Even powerful AI systems, such as IBM's Jeopardy!-winning Watson, don't mimic the human brain. (The same can be said for IBM's Deep Blue computer, which in 1997 defeated world chess champion Garry Kasparov in a six-game match.)

"We don't claim that Watson is thinking in the way people think. It is working on a computational problem at its core," IBM research scientist Murray Campbell, one of the developers of Deep Blue, told the New York Times in 2011.

"The computer doesn't even know it's playing chess," said Adjaoute of Deep Blue. "It's just another level of stupid calculation."


As Allen Institute CEO Oren Etzioni recently told CNBC, AI's critics may be blurring the distinction between machines capable of performing instructions very efficiently and truly autonomous systems that think and act independently.

"How are you going to have self-awareness if all the program does is look to the data, and analyze it with zeros and ones?" said Adjaoute. "How will it be aware of what it's doing? It's impossible."

He added: "I am tired of seeing artificial intelligence become the boogeyman of technology. There is something irrational about the fear of AI."


Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.

Comments
tzubair, User Rank: Ninja, 2/25/2015 | 2:24:43 PM
Re: Wishful Thinking
"But in longer run, I don't think it is out of realm of "analyzing data" that a computer might conclude something would work better with humans out of the loop.  And that something might be existence."

@TerryB: I think the idea that a computer could become so smart that it starts "thinking" humans should be taken out really exists only in science fiction. I don't think it's anywhere close to being a reality, and certainly not something any human should be afraid of.
ChrisMurphy, User Rank: Author, 2/25/2015 | 2:04:36 PM
Re: I am human hear me roar
Interesting counter view from Sam Altman, who expresses his concern in this blog post: http://blog.samaltman.com/machine-intelligence-part-1

Two points jumped out at me:

"We also have a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn't really that hard in the first place (chess, Jeopardy, self-driving cars, etc.).  This makes it seems like we aren't making any progress towards it." 

And:

"But it's very possible that creativity and what we think of us as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power."
Stratustician, User Rank: Ninja, 2/25/2015 | 1:50:17 PM
Re: I am human hear me roar
For me, the fear is this: if we create an entity that hooks up to the Internet, what are the possible ramifications if it gets into industrial computer systems? I'm thinking more of CLU from Tron here. Would it be possible for an AI created to make decisions about systems (such as one dedicated to proactively managing internal IT systems for critical infrastructure) to reach out and decide it can make beneficial changes to systems it honestly shouldn't even connect to?

I guess we could call it AI sprawl. We're not there yet, but theoretically, is there a risk of AI slipping into interconnected systems and hijacking them?
DR WILL, User Rank: Apprentice, 2/25/2015 | 9:38:22 AM
Re: Wishful Thinking
It is possible that consciousness is an emergent property that occurs with the proper (or more advanced) programming of the substrate (be it flesh or silicon). Most parts of our brain are not conscious. Some parts are, due to several forms of memory that are linked and replayed in parallel, with anticipation and predictive algorithms keeping our past available, working memory providing an apparent "present", and frontal-lobe scenarios looking into the future. Out of all this arises over time, with modifications, our identity. It takes years for an infant to achieve this using 100 billion neurons. Theoretically, it should be possible to do it in silicon.

The biggest question is what "modalities" the computer will have. Will it see color, hear sound, feel pain, feel sad, etc.?

W. D. Niemi, PhD
Resource, User Rank: Apprentice, 2/25/2015 | 9:27:30 AM
Re: I am human hear me roar
The problem is not that computers with AI will wreak their evil will; it is that the humans directing them can (and have) used AI to amplify their feeble human powers. Computers are amplifiers of human intention -- for good or for ill.
SachinEE, User Rank: Ninja, 2/25/2015 | 12:24:30 AM
Re: I am human hear me roar
@vnewman2: Nice take on the complaining masses. There aren't any happy endings when dealing with computers that fail at times we don't expect them to (yes, Blue Screen of Death, I'm talking to you too). But I'm just glad they aren't intelligent enough to replace us.
SachinEE, User Rank: Ninja, 2/24/2015 | 11:52:00 PM
Re: From where will computers get their paranoia?
@charlie: Technologists often dream about a day when independent AI will be able to work side by side with humans without attempting to replace them, because let's face it, AIs will become far more powerful over the next 50 years, and it won't be long before a self-learning AI is developed that can learn through experience. An AI as advanced as the human brain may not be built for another hundred years, but once that is accomplished, I think there will be a lot of flexibility between humans and AI.
SachinEE, User Rank: Ninja, 2/24/2015 | 11:46:13 PM
Re: I am human hear me roar
@broadway: I agree. I read a blog where someone argued that employees who lose their jobs can learn how to control and manage the automation systems. What he or she didn't realize is that this would create unnecessary competition and waste resources. Also, it would take ten times fewer people to manage the automation systems than the actual laborer count. So the equation remains unbalanced.
Broadway0474, User Rank: Ninja, 2/24/2015 | 11:24:15 PM
Re: I am human hear me roar
@Pedro, I can't tell you the number of research reports I've seen in the past year about job market trends, with one major trend being that automation is taking jobs. Whether that automation is based on robots or some other form of computing, the threat of people losing jobs is real. We won't ALL be out of jobs. The side effect appears to be that this trend creates more higher-end jobs for the educated, but those additions in no way make up for the losses at the other end of the job spectrum.
PedroGonzales, User Rank: Ninja, 2/24/2015 | 9:32:17 PM
Re: I am human hear me roar
@vnewman2: I agree. If computers were as smart as technologists have indicated, we would all be out of jobs. I agree that this fear is overblown. I really believe technologists should focus on problems that are nearby, not hypothetical ones; these include youth unemployment, increasing healthcare costs, and stagnant salaries. If our country has youth with no hope for the future, I'd take the AI robots any time.

 