No God In The Machine - InformationWeek


Artificial intelligence cannot replicate human consciousness, say Irish researchers in new study.


Computers might be able to do remarkable things, but new research offers mathematical proof that they cannot replicate human consciousness.

In a recently published paper, "Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory," Phil Maguire, co-director of the BSc degree in computational thinking at National University of Ireland, Maynooth, and his co-authors demonstrate that, within the model of consciousness proposed by Giulio Tononi, the integrated information in our brains cannot be modeled by computers.

Consciousness is not well understood. But Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin, Madison, has proposed an integrated information theory (IIT) of consciousness. IIT is not universally accepted, nor does it offer a definitive map of the mind. Nonetheless, it is well regarded as a model for consciousness and has proven valuable in understanding how to treat patients in comas or other states of diminished consciousness.


One of the axioms of IIT is "Each experience is unified; it cannot be reduced to independent components." This means that a person's experience of a flower, for example, is the product of input from multiple physiological systems -- various senses and other memories -- but that product cannot be reverse engineered. Under this definition, consciousness behaves like a hash function.
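The hash-function comparison can be sketched in a few lines of Python. This is only an illustration of irreversible binding, not the authors' formal construction; the "sensory inputs" below are hypothetical stand-ins:

```python
import hashlib

# Hypothetical inputs contributing to one unified "experience"
inputs = ["sight: red petals", "smell: rose", "memory: a garden"]

# A cryptographic hash binds the inputs into a single digest; the digest
# cannot be decomposed back into the contributions of individual inputs.
experience = hashlib.sha256("|".join(inputs).encode()).hexdigest()

# Changing any single input yields a completely different digest, and
# nothing about the digest reveals which input changed.
altered = hashlib.sha256(
    "|".join(["sight: white petals"] + inputs[1:]).encode()
).hexdigest()

print(experience)
print(altered)
```

As with the experience of the flower, the combined result cannot be reverse engineered into its independent components.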


"In this paper, we prove that a process which binds information together irreversibly is non-computable," Maguire explained in an email. "If the human brain is genuinely binding information then it cannot be emulated by artificial intelligence. We've proved that mathematically."

We're sorry, Hal. We're afraid we can do that.

Maguire concedes that the human mind might not integrate information in an irreversible process, but he says that does not match human intuition. "We argue that what people mean by the use of the concept 'conscious' is that a system cannot be broken down. If you can break it down, it isn't conscious (e.g. a light switch)."

This is not to say that artificial intelligence cannot behave intelligently or pass the Turing Test. Rather, what Maguire and his co-authors have shown is that there's something fundamentally different between consciousness, at least under Tononi's definition, and artificial intelligence.

"If you build an artificial system, you always know how you've constructed it," explained Maguire in a phone interview. "You know that it is decomposable. You know it's made up of elements that are non-integratable. We can never build a computing system and algorithm that integrates something so completely it can't be decomposed."

Asked whether there's a parallel between the unknowability of consciousness and the unknowability of quantum states, Maguire was cautious.

"Quantum mechanical effects occur when we reach the limits of measurement," he said via email. "Our definitions break down. There are properties that cannot be defined simultaneously. Similarly, if we try to model the integration of the brain, our models will break down. There will be computational properties that cannot meaningfully be defined. This possibility would rule out strong AI. And perhaps the irreversible integration of the brain is what causes quantum superpositions to collapse. But that's speculation for now."

Maguire's paper, co-authored by Philippe Moser (NUI Maynooth, Ireland), Rebecca Maguire (National College of Ireland), and Virgil Griffith (Caltech), is scheduled to be presented at the Annual Meeting of the Cognitive Science Society in Quebec, Canada, in July.


Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television.

User Rank: Apprentice
5/5/2015 | 4:53:21 AM
Re: Really?
I think you're far too naive. Consciousness is not "just a thing", if by that you mean "it's physically explainable, just like everything else". And your example actually goes some way to making my own point - the subjective experience of, say, the colour green or the feeling I get when I stick my hand into warm water *cannot* be properly quantified. The Early Moderns understood this, and their tack was just to deny that colour, warmth, etc., were actually real things that really existed in the objects we experience them in, but to claim that the world was really only full of colourless, odourless, tasteless particles in motion, or light of a certain wavelength vibrating, and that warmth, colour, etc., as we actually experience them were just projected onto the world by the mind.


The rest of your comment is just a statement of your own faith in endless scientific progress, a faith I do not share. True, people used to laugh at the idea of more than 5 computers, and they don't now, but they also laughed at perpetual motion machines, and they're still doing that nowadays.
User Rank: Apprentice
5/15/2014 | 1:38:01 AM
One major flaw
I think they made one major error: assuming they have a good clue as to what exactly is going on with the production of this experience we all have. It is a natural phenomenon; its mechanics will be discovered. Ever seen a straight line wiggle? If so, you have already seen evidence of the ability to "hack" the mechanics, even if we don't understand why that line that was supposed to be straight is now as warped as a sine wave. By the way, if you've ever dropped LSD or eaten magic mushrooms, there is a high likelihood you have seen that straight line in a not-so-straight state. I think psychedelics could serve as a valuable tool in figuring out what it is our brains are doing in regards to sensory experience, and maybe even play an important role in virtual reality.
User Rank: Moderator
5/12/2014 | 5:05:06 PM
I think, therefore I am.
Though I count myself as a programmer and fancy myself as someone who could duplicate any computer technology given a reasonable amount of time and resources, I never thought the singularity was ever a possibility within the existence of humanity. All computer programs, including the various AI technologies available, are essentially logical step-by-step breakdowns of processes.

When the best that philosophy, which tries to provide simple and basic logical principles for human consciousness, can offer from one of its best, René Descartes, is only "I think, therefore I am," what chance does a computer program have of answering this most unanswerable philosophical question: what defines a man/consciousness? What is a thought, anyway? How can one break down an original thought if it takes more original thinking to do so? Do we need more and larger original thinking to define the first set of original thinking? What next? A third set of original thinking for the second set?
User Rank: Ninja
5/10/2014 | 1:25:29 AM
Re: And why exactly should this surprise us?
interesting observation/point... I trust you are right...
Gary N DeborahK403
User Rank: Apprentice
5/9/2014 | 9:32:29 PM
Re: And why exactly should this surprise us?
I totally agree with your statement. There will never, NEVER, be any real AI. I have been programming computers to do many things for 30 years, even random things, but it is not, and never will be, a thought process.
I give
User Rank: Moderator
5/9/2014 | 11:28:01 AM
anon: Re: Really?
"Even if we can't figure it out, we're ultimately an insignificant intelligence when compared to the behemoth of knowledge and processing power that we are about to spawn."

Assuming one or another sort of "evolution" of life, to what extent is that, or has that been, under the conscious control of any life form?  Culture does affect natural selection to some extent, but causing a change is much different than controlling the form of what we "spawn".  As long as the control is within "our" hands, it is limited in what it can produce consciously, i.e., according to plan.

My hunch is that if so-called machine or artificial consciousness occurs, it will be an accident, not occurring according to design.
User Rank: Apprentice
5/9/2014 | 10:31:12 AM
Re: Really?
Your "well-understood" differences between analog and digital are not viewed with any hindsight. No one in the music or photo industry took digital seriously for the first 10 years (still?). Human consciousness may be more complicated, but it is ultimately no less quantifiable than many of the aesthetic qualities that people used to attribute to the warmth of sound coming from a worn needle gliding across a vinyl platter. The only difference is the complexity of the algorithms used to synthesize various processes that will be increasingly well understood with empirical data gathered from exponentially higher-resolution scans of our brains, combined with data mining and tons of independent researchers producing countless complementary models of the workings of our brains. Our knowledge in this area is increasing rapidly, and we're just at the very beginning of major initiatives in Europe and North America to understand the brain. As I said before, we're still in beta - NEVER say never! Even if we can't figure it out, we're ultimately an insignificant intelligence when compared to the behemoth of knowledge and processing power that we are about to spawn. Consciousness is just a thing, and like any thing, we can figure it out.

"I think there is a world market for maybe five computers."—Thomas Watson, chairman of IBM, 1943
I give
User Rank: Moderator
5/9/2014 | 9:13:35 AM
Artificial Artifact
Could happen by accident.  The Singularity (to borrow the term from Asimov?) resulting from human devices and designed processes, as has been observed after the fact in many "natural" events, and especially when humans fiddle with the natural world, can be apparently Non-Linear.

Complex Adaptive Emergence, a "natural process" is the mechanism some credit to have brought about life, and perhaps consciousness in living forms.  There are some folks who debate whether humans are the only natural life forms to possess consciousness.  Since we didn't design ourselves, how is it possible we exist?

The discussion is broad.
Tony A (IW Pick)
User Rank: Moderator
5/8/2014 | 7:48:05 PM
Interesting Proof of a Limited Theorem
Nothing big enough going on here to merit an IW article as far as I can see. Using some very specific definitions of synergy, complexity, information, etc., the authors show that, on a certain model of mental processing, the information is too tightly integrated to be easily decoupled, and that a strictly computational model of consciousness would require that it could be decoupled in the way they say it can't be. A reasonable result that frankly depends much more on the definitions of concepts than on the mathematical "proof" they offer.

To put their point intuitively, conscious experience is not just more than the sum of the processing of sensory stimuli, it is the tight compression of that processing into a unified experience that cannot be de-unified by applying an algorithm. Thus the authors compare it, both metaphorically and mathematically, to data compression: you cannot, for example, change the word "too" to "also" in a compressed document simply by adding together information about the individual compressed bits and information about the compression algorithm. The reason is that the compression algorithm makes the meaning of each bit dependent on other bits, so that changing the compressed structure will not yield the result you want. I am not convinced beyond a doubt that this is true, but it does make sense when applied to consciousness: you cannot computationally back out the sensory stimuli from conscious experience itself. Part of this might have to do with the redundancy of brain structures, part with the ability of the brain to form new pathways on the fly, etc. In any case, when I look out of my window and experience the belief that I am seeing Brooklyn, I'm quite sure that this cannot be decomposed into the image of the white building, the sycamore tree and the slightly hazy air that I observe.
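The compression analogy above can be made concrete with Python's standard zlib module. This is only a sketch of the general point, not the paper's formalism: in a DEFLATE stream, later bytes depend on earlier content through back-references, so a one-word edit to the plaintext cannot be applied as a small local patch to the compressed bytes.

```python
import zlib

original = b"the quick brown fox jumps too, and the quick brown fox jumps too"
edited = b"the quick brown fox jumps also, and the quick brown fox jumps too"

c_original = zlib.compress(original)
c_edited = zlib.compress(edited)

# The compressed streams differ, and the difference is not confined to the
# position of the edit: each compressed byte's meaning depends on context.
print(len(c_original), len(c_edited))

# Decompression recovers each plaintext exactly. The information is all
# there, but it is bound up with the rest of the stream.
assert zlib.decompress(c_original) == original
assert zlib.decompress(c_edited) == edited
```

The parallel the commenter draws is that conscious experience may bind information in the same context-dependent way, so that no algorithm can edit or extract one component in isolation.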

Like I said, nice result, but the inability to reduce consciousness to information processing has been demonstrated by numerous philosophical thought experiments before (Searle's Chinese Room, Frank Jackson's Mary, Ned Block's idea of connecting the entire Chinese population by telephone simultaneously, etc.). So I'm not sure that this "mathematical" result is big news. But it is always nice to have more evidence that Dan Dennett and his followers are wrong.




Thomas Claburn
User Rank: Author
5/8/2014 | 3:48:46 PM
Re: And why exactly should this surprise us?
>Why would AI want to mimic a human?

Also, why would we want AI to mimic a human? We don't want our software to have doubts, reservations, alternate opinions, or ideas of its own. We want software to be obedient. Code lays down rules with statements like:

if <condition>:
    do this
    do that
Imagine what a pain it would be to have software raise its own objections. I don't fear artificial intelligence but I do worry about natural stupidity.