[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sat Dec 19 01:52:40 UTC 2009


2009/12/19 Gordon Swobe <gts_2000 at yahoo.com>:

> Because I represent Searle here (even as I criticize him on another discussion list) I will say that I think my consciousness might very well fade away in proportion to the number of neurons you replaced that had relevance to it.
>
> This could happen even as I continued to behave in a manner consistent with intelligence. In other words, it seems to me that I would change gradually from a living example of strong AI to a living example of weak AI.

So you might lose your visual perception, but to an external observer
you would behave just as if you had normal vision and, more to the
point, you would believe you had normal vision. You would look at a
person's face, recognise them, experience all the emotional responses
associated with that person, describe their features vividly, but in
actual fact you would be seeing nothing. How do you know you don't
have this kind of zombie vision right now? Would you pay to have your
normal vision restored, knowing that it could make no possible
subjective or objective difference to you?

By the way, I can't find the reference, but Searle claims that you
*would* notice that you were going blind with this sort of neural
replacement experiment, but be unable to do anything about it. You
would struggle to scream out that something had gone terribly wrong,
but your body would not obey you, instead smiling and declaring that
everything was just fine.

>>> Programs that run algorithms do not and cannot have
>>> semantics. They do useful things but have no understanding
>>> of the things they do. Unless of course Searle's formal
>>> argument has flaws, and that is what is at issue here.
>>
>> Suppose we encounter a race of intelligent aliens.
>> Their brains are nothing like either our brains or our computers,
>> using a combination of chemical reactions, electric circuits,
>> and mechanical nanomachinery to do whatever it is they do. We
>> would dearly like to kill these aliens and take their technology
>> and resources, but in order to do this without feeling guilty we
>> need to know if they are conscious. They behave as if they are
>> conscious and they insist they are conscious, but of course
>> unconscious beings may do that as well. Nor does evidence that
>> they evolved naturally convince us, since there is nothing to stop
>> nature from giving rise to weak AI machines. So, how do we determine
>> whether the activity in the alien brains is merely some fantastically
>> complex program running on fantastically complex architecture?
>
> I notice first that we need to ask ourselves the same question (as many here no doubt already have): how do we know for certain that the human brain does not do anything more than run some fantastically complex program on some fantastically complex architecture?
>
> I think that if my brain runs any programs then it must do something else too. I understand the symbols that my mind processes, and having studied Searle's arguments carefully, I simply do not see how a mere program can do the same, no matter how complex.

Well, how about this theory: it's not the program that has
consciousness, since a program is just an abstraction. It's the
physical processes the machine undergoes while running the program
that cause the consciousness. Whether these processes can be
interpreted as a program or not doesn't change their consciousness.


-- 
Stathis Papaioannou


