[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 28 13:31:17 UTC 2009

2009/12/28 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sun, 12/27/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> Let's assume the seat of consciousness is in the
>> mitochondria. You need to simulate the activity in mitochondria
>> because otherwise the artificial neurons won't behave normally:
> Your second sentence creates a logical contradiction. If real biological processes in the mitochondria act as the seat of consciousness, then, because conscious experience plays a role in behavior (including the behavior of neurons), we cannot on Searle's view simulate those real processes with abstract formal programs (compromising the subject's consciousness) and then also expect those neurons (and therefore the organism) to behave "normally".
>> If the replacement neurons behave normally in their
>> interactions with the remaining brain, then the subject *must*
>> behave normally.
> But your replacement neurons *won't* behave normally, and so your possible conclusions don't follow. You've short-circuited the feedback loop between experience and behavior.
> Your thought experiment might make more sense if we were testing the theories of an epiphenomenalist, who believes conscious experience plays no role in behavior, but Searle adamantly rejects epiphenomenalism for the same reasons most people do.
> Getting back to my original point, science has at present almost no idea how to define the so-called "seat of consciousness" (what I prefer to call the neurological correlates of consciousness, or NCC). In real terms, we simply don't know what happened in George Foreman's brain that caused him to lose consciousness when Ali delivered the KO punch. For that reason, artificial neurons such as those you have in mind remain extremely speculative, in thought experiments or otherwise. It seems to me that we cannot prove anything whatsoever with them.

Well, I think you've finally understood the problem. If indeed there
is something in the physics of neurons that is not computable, then we
won't be able to make artificial neurons based on computation that
behave like biological neurons. That would mean neither weak AI nor
strong AI is possible. But Searle claims that weak AI *is* possible.
He even alludes to Church's thesis to support this:

The answer to 3. seems to me equally obviously "Yes", at least on a
natural interpretation. That is, naturally interpreted, the question
means: Is there some description of the brain such that under that
description you could do a computational simulation of the operations
of the brain. But since according to Church's thesis, anything that
can be given a precise enough characterization as a set of steps can
be simulated on a digital computer, it follows trivially that the
question has an affirmative answer. The operations of the brain can be
simulated on a digital computer in the same sense in which weather
systems, the behavior of the New York stock market or the pattern of
airline flights over Latin America can. So our question is not, "Is
the mind a program?" The answer to that is, "No". Nor is it, "Can the
brain be simulated?" The answer to that is, "Yes". The question is,
"Is the brain a digital computer?" And for purposes of this discussion
I am taking that question as equivalent to: "Are brain processes
computational?"

(from http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html)
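To make Searle's point concrete: "simulating the operations of the brain" in the Church's-thesis sense just means stepping a precise mathematical description forward in time, exactly as a weather model does. A minimal sketch using a standard leaky integrate-and-fire neuron model (the model choice, parameters, and function name here are my own illustrative assumptions, not anything from Searle's text or this thread):

```python
# A leaky integrate-and-fire neuron is a precise "set of steps",
# so by Church's thesis it can be simulated on a digital computer.
# All parameters below are illustrative, not empirically fitted.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Step the membrane equation dV/dt = (-(V - v_rest) + R*I) / tau,
    recording a spike (and resetting V) whenever V crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:
            spikes.append(t)   # spike at time step t
            v = v_reset        # membrane potential resets after a spike
    return spikes

# A constant suprathreshold input drives the model neuron to fire
# repeatedly -- simulation in exactly the weather-model sense.
spike_times = simulate_lif([2.0] * 100)
```

Whether running such a description *instantiates* consciousness, rather than merely modelling the neuron's behaviour, is of course the very point in dispute.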

However, Searle thinks that although the behaviour of the brain can be
replicated by a computer, consciousness cannot. But that position
leads to absurd conclusions, as you perhaps are now realising.

It still remains a possibility that the brain does in fact utilise
uncomputable physics. This is the position of Roger Penrose, who
believes neither strong AI nor weak AI is possible, and speculates
that an as yet undiscovered theory of quantum gravity plays an
important role in subcellular processes and will turn out to be
uncomputable. The problem with this idea is that there is no evidence
for it, and most scientists dismiss it out of hand; but at least it
has the merit of consistency.

A final point is that even if it turns out the brain is uncomputable,
that would be a fatal blow for computationalism but not for
functionalism. If we were able to incorporate into the artificial
neurons a hypercomputer (perhaps based on that exotic physics) able to
do the relevant calculations, so that the neurons behave like
biological neurons, then consciousness would follow.

Stathis Papaioannou
