News | 5/8/2014 09:06 AM

No God In The Machine

Artificial intelligence cannot replicate human consciousness, say Irish researchers in new study.


Computers might be able to do remarkable things, but new research offers mathematical proof that they cannot replicate human consciousness.

In a recently published paper, "Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory," Phil Maguire, co-director of the BSc degree in computational thinking at National University of Ireland, Maynooth, and his co-authors demonstrate that, within the model of consciousness proposed by Giulio Tononi, the integrated information in our brains cannot be modeled by computers.

Consciousness is not well understood. But Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin, Madison, has proposed an integrated information theory (IIT) of consciousness. IIT is not universally accepted, nor does it offer a definitive map of the mind. Nonetheless, it is well regarded as a model for consciousness and has proven valuable in understanding how to treat patients in comas or other states of diminished consciousness.
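Tononi's full measure of integration, Φ, is mathematically involved, but the core intuition can be sketched with the simplest information-theoretic notion of integration: mutual information, which is zero exactly when a system decomposes into independent parts. This toy example is an illustration of that intuition only, not Tononi's actual Φ calculation:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def integration(pairs):
    """Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y).

    Zero iff the two parts are statistically independent, i.e. the
    system carries no information beyond what its parts carry alone.
    """
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    return entropy(xs) + entropy(ys) - entropy(pairs)

# Two toy two-node "systems", each observed over four equally likely states.
independent = [(a, b) for a in (0, 1) for b in (0, 1)]  # parts vary freely
coupled = [(a, a) for a in (0, 1) for _ in range(2)]    # second node mirrors the first

print(integration(independent))  # 0.0 -- fully decomposable into parts
print(integration(coupled))      # 1.0 -- one bit exists only in the whole
```

The coupled system is "integrated" in this minimal sense: describing its parts separately loses information that only the whole carries.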

[Are self-driving cars around the corner? Read Google Car: What's Next?]

One of the axioms of IIT is "Each experience is unified; it cannot be reduced to independent components." This means that a person's experience of a flower, for example, is the product of input from multiple physiological systems -- various senses and other memories -- but that product cannot be reverse engineered. Under this definition, consciousness behaves like a hash function.
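The hash-function analogy can be made concrete. A cryptographic hash binds its inputs into a single digest that is easy to compute in the forward direction but infeasible to run backward; the sensory "inputs" below are, of course, purely illustrative stand-ins:

```python
import hashlib

# Independent input streams (standing in for senses and memories).
sight = b"red petals"
smell = b"sweet scent"
memory = b"a summer garden"

# Binding them into one digest is cheap and deterministic...
experience = hashlib.sha256(sight + smell + memory).hexdigest()
print(experience)

# ...but there is no practical way to invert SHA-256 and recover the
# three components from the digest alone: the combination is one-way.
```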


"In this paper, we prove that a process which binds information together irreversibly is non-computable," Maguire explained in an email. "If the human brain is genuinely binding information then it cannot be emulated by artificial intelligence. We've proved that mathematically."

We're sorry, HAL. We're afraid we can do that.

Maguire concedes that the human mind might not integrate information in an irreversible process, but he says that does not match human intuition. "We argue that what people mean by the use of the concept 'conscious' is that a system cannot be broken down. If you can break it down, it isn't conscious (e.g. a light switch)."

This is not to say that artificial intelligence cannot behave intelligently or pass the Turing Test. Rather, what Maguire and his co-authors have shown is that there's something fundamentally different between consciousness, at least under Tononi's definition, and artificial intelligence.

"If you build an artificial system, you always know how you've constructed it," explained Maguire in a phone interview. "You know that it is decomposable. You know it's made up of elements that are non-integratable. We can never build a computing system and algorithm that integrates something so completely it can't be decomposed."

Asked whether there's a parallel between the unknowability of consciousness and the unknowability of quantum states, Maguire was cautious.

"Quantum mechanical effects occur when we reach the limits of measurement," he said via email. "Our definitions break down. There are properties that cannot be defined simultaneously. Similarly, if we try to model the integration of the brain, our models will break down. There will be computational properties that cannot meaningfully be defined. This possibility would rule out strong AI. And perhaps the irreversible integration of the brain is what causes quantum superpositions to collapse. But that's speculation for now."

Maguire's paper, co-authored by Philippe Moser (NUI Maynooth, Ireland), Rebecca Maguire (National College of Ireland), and Virgil Griffith (Caltech), is scheduled to be presented at the Annual Meeting of the Cognitive Science Society in Quebec, Canada, in July.

Our InformationWeek Elite 100 issue -- our 26th ranking of technology innovators -- shines a spotlight on businesses that are succeeding because of their digital strategies. We take a close look at the top five companies in this year's ranking and the eight winners of our Business Innovation awards, and we offer 20 great ideas that you can use in your company. We also provide a ranked list of our Elite 100 innovators. Read our InformationWeek Elite 100 issue today.

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful ... View Full Bio

Comments
rjones2818, User Rank: Strategist
5/8/2014 | 2:08:39 PM
And why exactly should this surprise us?
If AI ends up thinking in a fundamentally different manner than humans, should we be surprised? It will almost certainly have many more sense organs than a human. It will almost certainly have many more 'brains' involved than a human. It will almost certainly be 'smarter' (perhaps not at first, but quickly) than a human.

Why would AI want to mimic a human?
TerryB, User Rank: Ninja
5/8/2014 | 1:49:17 PM
Re: Is this good news or bad news?
Nothing good? The credit fraud detection software protecting your Visa card? IBM Watson helping to diagnose cancer and other illnesses? There is a long list of these types of apps; none of these are "good" for us?

It's like you think SkyNet is inevitable if we continue down this road. But I will admit that when I read the line in the article that said "although we don't fully understand consciousness yet," it put a damper on any conclusions these guys gave.

I think that if AI ever creates self awareness and self preservation in the machine, that's when the science fiction movies begin to look a little more real. Scariest one I have seen is Eagle Eye. Not feasible today, but it didn't look that far off from possible reality. The computer was not trying to self-actualize, like Data in Star Trek: The Next Generation, but simply to survive when it learned it was going to be shut down.
Davidoff, User Rank: Apprentice
5/8/2014 | 1:07:12 PM
Is this good news or bad news?
I cannot find anything positive in the creation of Artificial Intelligence. Once we lose control of these machines, man will not be able to fix anything to stop this from continuing on to a critical end.
Thomas Claburn, User Rank: Author
5/8/2014 | 12:54:22 PM
Re: Really?
Comparing the development of digital music fidelity to the advancement of AI doesn't work as an analogy because the difference between analog and digital is well-understood. Not so human consciousness. If Tononi's model is correct -- and there's still debate about that -- then we simply can't model human consciousness on a computer. We may get something functionally similar, but we won't be able to compare AI to the conscious mind because the latter will remain a black box.
Laurianne, User Rank: Author
5/8/2014 | 12:42:02 PM
AI Vs. human consciousness
"Under this definition, consciousness behaves like a hash function." Interesting analogy, Tom.
anon2606719491, User Rank: Apprentice
5/8/2014 | 12:07:25 PM
Really?
And digital music will never sound as good as analog, nor will digital photos ever come close to rivaling film! With all due respect, statements like these seem ludicrous to me. We are not even at beta in our thinking about AI, and much farther away still from being able to imagine AI post-singularity. What will AI itself say about its own ability to synthesize human consciousness? You don't know the answer; neither do I. Never say never, or history will only remember you with amusement.