[ExI] Meaningless Symbols
Stathis Papaioannou
stathisp at gmail.com
Tue Jan 12 17:45:49 UTC 2010
2010/1/13 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 1/12/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> At the end of the next day you need to show how the mind solves the
>> symbol grounding problem.
>
> We'll know the complete answer someday. For now we need only know and agree that the mind has this cognitive capacity to understand, say, Chinese symbols. Now we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. But as it turns out, we do not obtain that capacity from mentally running such a program. So whatever the mind does to get that cognitive capacity, it doesn't obtain it from running a formal program.
>
> Now we know more about the mind than we did before, even if we don't yet know the complete answer.
It's not much of an answer. I was hoping you might say something like:
understanding is due to a special chemical reaction in the brain, and
since computers usually aren't chemical, they don't have it even if
they can simulate its behaviour.
In all that you and Searle have said, the strongest statement you can
make is that a computer that is programmed to behave like a brain will
not *necessarily* have the consciousness of the brain. You have not
excluded the *possibility* that it might be conscious. You have no
proof that, for example, understanding requires carbon atoms and is
impossible without them. Nor have you any proof that arranging silicon
and copper atoms in particular configurations that can be interpreted
as implementing a formal program will *prevent* understanding that
might have occurred had the arrangement been otherwise.
In contrast, I have presented an argument which shows that it is
*impossible* to separate understanding from behaviour. We have been
talking about computerised neurons, but the case can be made more
generally. If God makes miraculous neurons that behave just like
normal neurons but lack understanding, then these neurons could be
used to selectively remove any aspect of consciousness such as
perception, emotion and understanding. However, because the miraculous
neurons behave normally in their interactions with the other neurons,
the subject will behave normally and will not notice that anything has
changed. He will lose visual perception, but he will not only be able
to describe everything he sees, he will also honestly believe that he
sees normally. He won't even comment that things are looking a little
blurry around the edges, since the part of his brain responsible for
noticing, reflecting on and verbalising such changes will behave
exactly the same as if the miraculous neurons had not been installed.
Now surely if there is *anything* that can be said about visual
perception, it is that a conscious, rational person will at least
notice that something a bit unusual has happened if he suddenly goes
completely blind, loses the power to understand speech, or loses the
ability to feel pain. But with these miraculous neurons, any
aspect of your consciousness could be arbitrarily removed and you
would never know it.
The conclusion is that in fact you would have normal consciousness
with the miraculous neurons. In other words, they're not miraculous at
all: not even God can make neurons that behave normally but lack
consciousness. It's a logical impossibility, and God can at best only
do the physically impossible, not the logically impossible.
--
Stathis Papaioannou