[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Dec 20 05:37:54 UTC 2009


2009/12/20 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 12/19/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> The CR lacks understanding because the man in the room, who can be
>> seen as implementing a program, lacks understanding;
>
> Yes.
>
>> whereas a different system which produces similar behaviour, but
>> with dumb components whose interactions can't be recognised as
>> algorithmic, has understanding.
>
> What different system?
>
> If you mean the natural brain, (the only different system known to have understanding), then it doesn't matter whether we can recognize its processes as algorithmic. Any computer running those possible algorithms would have no understanding.
>
> More generally, computer simulations of things do not equal the things they simulate.

But it seems that you and Searle are saying that the CR lacks
understanding *because* the man lacks understanding of Chinese,
whereas the brain, with completely dumb components, does have
understanding. So you are penalising the CR for having smart
components and for doing something with a recognisably algorithmic
pattern. By this reasoning, if neurons had their own separate
rudimentary intelligence, and if someone could discern a pattern in
the brain's functioning that could be called "algorithmic", then the
brain would lack understanding too.


-- 
Stathis Papaioannou


