
No God In The Machine

Artificial intelligence cannot replicate human consciousness, say Irish researchers in new study.


Computers might be able to do remarkable things, but new research offers mathematical proof that they cannot replicate human consciousness.

In a recently published paper, "Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory," Phil Maguire, co-director of the BSc degree in computational thinking at National University of Ireland, Maynooth, and his co-authors demonstrate that, within the model of consciousness proposed by Giulio Tononi, the integrated information in our brains cannot be modeled by computers.

Consciousness is not well understood. But Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin, Madison, has proposed an integrated information theory (IIT) of consciousness. IIT is not universally accepted, nor does it offer a definitive map of the mind. Nonetheless, it is well regarded as a model for consciousness and has proven valuable in understanding how to treat patients in comas or other states of diminished consciousness.


One of the axioms of IIT is "Each experience is unified; it cannot be reduced to independent components." This means that a person's experience of a flower, for example, is the product of input from multiple physiological systems -- various senses and other memories -- but that product cannot be reverse engineered. Under this definition, consciousness behaves like a hash function.
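The hash-function analogy can be sketched in a few lines of Python. The "sensory inputs" below are, of course, hypothetical stand-ins; the point is only that a hash binds its inputs so thoroughly that no algorithm can recover, or separately modify, the individual components afterward:

```python
import hashlib

# Hypothetical stand-ins for independent channels contributing
# to a single experience of a flower.
sight = "yellow petals"
smell = "sweet scent"
memory = "grandmother's garden"

# Bind the inputs into one digest. Every bit of the output depends
# on every bit of the input; the components cannot be read back out.
experience = hashlib.sha256((sight + smell + memory).encode()).hexdigest()

# Changing any single input changes the digest completely, so the
# contribution of each component cannot be isolated after the fact.
altered = hashlib.sha256((sight + "faint scent" + memory).encode()).hexdigest()
print(experience != altered)  # True: the binding is not decomposable
```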


"In this paper, we prove that a process which binds information together irreversibly is non-computable," Maguire explained in an email. "If the human brain is genuinely binding information then it cannot be emulated by artificial intelligence. We've proved that mathematically."

We're sorry, Hal. We're afraid we can do that.
Maguire concedes that the human mind might not integrate information in an irreversible process, but he says that does not match human intuition. "We argue that what people mean by the use of the concept 'conscious' is that a system cannot be broken down. If you can break it down, it isn't conscious (e.g. a light switch)."

This is not to say that artificial intelligence cannot behave intelligently or pass the Turing Test. Rather, what Maguire and his co-authors have shown is that there's something fundamentally different between consciousness, at least under Tononi's definition, and artificial intelligence.

"If you build an artificial system, you always know how you've constructed it," explained Maguire in a phone interview. "You know that it is decomposable. You know it's made up of elements that are non-integratable. We can never build a computing system and algorithm that integrates something so completely it can't be decomposed."

Asked whether there's a parallel between the unknowability of consciousness and the unknowability of quantum states, Maguire was cautious.

"Quantum mechanical effects occur when we reach the limits of measurement," he said via email. "Our definitions break down. There are properties that cannot be defined simultaneously. Similarly, if we try to model the integration of the brain, our models will break down. There will be computational properties that cannot meaningfully be defined. This possibility would rule out strong AI. And perhaps the irreversible integration of the brain is what causes quantum superpositions to collapse. But that's speculation for now."

Maguire's paper, co-authored by Philippe Moser (NUI Maynooth, Ireland), Rebecca Maguire (National College of Ireland), and Virgil Griffith (Caltech), is scheduled to be presented at the Annual Meeting of the Cognitive Science Society in Quebec, Canada, in July.


Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television.

User Rank: Apprentice
5/8/2014 | 12:07:25 PM
And digital music will never sound as good as analog, nor will digital photos ever come close to rivaling film! With all due respect, statements like these seem ludicrous to me. We are not even at beta in our thinking about AI, and much farther away still from being able to imagine AI post-singularity. What will AI itself say about its own ability to synthesize human consciousness? You don't know the answer, and neither do I. Never say never, or history will only remember you with amusement.
User Rank: Author
5/8/2014 | 12:42:02 PM
AI Vs. human consciousness
"Under this definition, consciousness behaves like a hash function." Interesting analogy, Tom.
Thomas Claburn
User Rank: Author
5/8/2014 | 12:54:22 PM
Re: Really?
Comparing the development of digital music fidelity to the advancement of AI doesn't work as an analogy because the difference between analog and digital is well-understood. Not so human consciousness. If Tononi's model is correct -- and there's still debate about that -- then we simply can't model human consciousness on a computer. We may get something functionally similar, but we won't be able to compare AI to the conscious mind because the latter will remain a black box.
User Rank: Apprentice
5/8/2014 | 1:07:12 PM
Is this good news or bad news?
I cannot find anything positive in the creation of Artificial Intelligence. Once we lose control of these machines, man will not be able to fix anything to stop this from continuing on to a critical end.
User Rank: Ninja
5/8/2014 | 1:49:17 PM
Re: Is this good news or bad news?
Nothing good? The credit fraud detection software protecting your Visa card? IBM Watson helping diagnose cancer and other illnesses? There is a long list of these types of apps; are none of them "good" for us?

It's like you think SkyNet is inevitable if we continue down this road. But I will admit that when I read the line in the article that said "although we don't fully understand consciousness yet," it put a damper on any conclusions these guys gave.

I think that if AI ever creates self-awareness and self-preservation in the machine, that's when the science fiction movies begin to look a little more real. The scariest one I have seen is Eagle Eye. Not feasible today, but it didn't look that far off from possible reality. The computer was not trying to self-actualize, like Data in Star Trek: The Next Generation, but simply to survive when it learned it was going to be shut down.
User Rank: Strategist
5/8/2014 | 2:08:39 PM
And why exactly should this surprise us?
If AI ends up thinking in a fundamentally different manner than humans, should we be surprised? It will almost certainly have many more sense organs than a human. It will almost certainly have many more 'brains' involved than a human. It will almost certainly be 'smarter' (perhaps not at first, but quickly) than a human.

Why would AI want to mimic a human?

User Rank: Ninja
5/8/2014 | 3:21:46 PM
Re: Really?
If consciousness is not well understood, then I fail to understand how technology will be able to replicate it. Yet.

But I am convinced we will get to that point. There's no denying it. Will it change the way we think about technology? Probably. I hope it's for good, and not the dismal-type scenarios we have seen in movies and on television.
Thomas Claburn
User Rank: Author
5/8/2014 | 3:48:46 PM
Re: And why exactly should this surprise us?
>Why would AI want to mimic a human?

Also, why would we want AI to mimic a human? We don't want our software to have doubts, reservations, alternate opinions, or ideas of its own. We want software to be obedient. Code lays down rules with statements like:

if <condition>:
    do this
    do that

Imagine what a pain it would be to have software raise its own objections. I don't fear artificial intelligence but I do worry about natural stupidity.
Tony A
User Rank: Moderator
5/8/2014 | 7:48:05 PM
Interesting Proof of a Limited Theorem
Nothing big enough going on here to merit an IW article, as far as I can see. Using some very specific definitions of synergy, complexity, information, etc., the authors show that, on a certain model of mental processing, the information is too tightly integrated to be easily decoupled, and that a strictly computational model of consciousness would require that it could be decoupled in the way they say it can't be. A reasonable result that frankly depends much more on the definitions of the concepts than on the mathematical "proof" they offer.

To put their point intuitively, conscious experience is not just more than the sum of the processing of sensory stimuli; it is the tight compression of that processing into a unified experience that cannot be de-unified by applying an algorithm. Thus the authors compare it, both metaphorically and mathematically, to data compression: you cannot, for example, change the word "too" to "also" in a compressed document simply by adding together information about the individual compressed bits and information about the compression algorithm. The reason is that the compression algorithm makes the meaning of each bit dependent on other bits, so that changing the compressed structure will not yield the result you want. I am not convinced beyond a doubt that this is true, but it does make sense when applied to consciousness: you cannot computationally back out the sensory stimuli from conscious experience itself. Part of this might have to do with the redundancy of brain structures, part with the ability of the brain to form new pathways on the fly, etc. In any case, when I look out of my window and experience the belief that I am seeing Brooklyn, I'm quite sure that this cannot be decomposed into the image of the white building, the sycamore tree, and the slightly hazy air that I observe.
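The compression analogy is easy to demonstrate with Python's standard zlib module (the sentences below are invented for illustration). Changing one word early in the source scrambles the compressed stream from that point onward, because each compressed byte depends on the bytes that precede it; there is no local edit to the compressed form that corresponds to the local edit "too" to "also":

```python
import zlib

original = b"I think this is too complicated to explain."
edited   = b"I think this is also complicated to explain."

c1 = zlib.compress(original)
c2 = zlib.compress(edited)

# Find where the two compressed streams first diverge (fall back to
# the shorter length if one happens to be a prefix of the other).
diverge = next((i for i, (x, y) in enumerate(zip(c1, c2)) if x != y),
               min(len(c1), len(c2)))

# Everything from the divergence point on differs, even though the
# sources differ in only one word: the dependency between compressed
# bytes means a one-word source edit is not a one-spot compressed edit.
print(diverge, len(c1), len(c2))
```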

Like I said, a nice result, but the inability to reduce consciousness to information processing has been demonstrated by numerous philosophical thought experiments before (Searle's Chinese Room, Frank Jackson's Mary, Ned Block's idea of connecting the entire Chinese population by telephone simultaneously, etc.). So I'm not sure that this "mathematical" result is big news. But it is always nice to have more evidence that Dan Dennett and his followers are wrong.




I give
User Rank: Moderator
5/9/2014 | 9:13:35 AM
Artificial Artifact
Could happen by accident. The Singularity (to borrow the term from Asimov?), resulting from human devices and designed processes, can be apparently non-linear, as has been observed after the fact in many "natural" events, and especially when humans fiddle with the natural world.

Complex adaptive emergence, a "natural process," is the mechanism some credit with having brought about life, and perhaps consciousness in living forms. There are some folks who debate whether humans are the only natural life forms to possess consciousness. Since we didn't design ourselves, how is it possible we exist?

The discussion is broad.