[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Dec 13 23:08:02 UTC 2009


2009/12/14 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sun, 12/13/09, John Clark <jonkc at bellsouth.net> wrote:
>
>> Incidentally why do you suppose Searle
>> didn't replace the little man with one of
>> those punch card reading machines? It could certainly
>> do a better job than a real flesh and blood human, so why
>> not use it?
>
> Such an argument would not address the question of strong AI, where a strong AI is defined as one that has mindful understanding of its own words and does not merely speak mindlessly. Searle considers that to be the difference between weak and strong AI, and on this point I agree with him.

Changing from a man to a punch card reading machine makes no
difference to the argument: Searle would still say the room has no
understanding, and his opponents would still say that it does.

> You've mentioned that you don't care about the difference between weak and strong AI. That's fine with me, but in that case neither Searle nor I have anything interesting to say to you.
>
> Some people do care about the difference between strong and weak. I happen to count myself among them. To people like me Searle has something very interesting to say.

To address the strong AI / weak AI distinction, I put to you a
question you haven't yet answered: what do you think would happen if
part of your brain, say your visual cortex, were replaced with
components that behaved normally in their interactions with the
remaining biological neurons but lacked the essential ingredient for
consciousness?


-- 
Stathis Papaioannou


