[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Dec 14 13:45:53 UTC 2009


--- On Sun, 12/13/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Changing from a man to a punch card reading machine does
> not make a difference to the argument insofar as Searle would
> still say the room has no understanding and his opponents
> would still say that it does.

The question comes back to semantics. Short of espousing the far-fetched theory of panpsychism, no serious philosopher would argue that a punch card reading machine can have semantics/intentionality, i.e., mindful understanding of the meanings of words.

People obviously can have it, however, and so Searle put a person into his thought experiment to investigate whether that person would have it there. He concluded that he would not.

I should point out here, however, that his formal argument does not depend on the thought experiment for its validity. Searle just threw the thought experiment out there to help illustrate his point, then later formalized it into a proper philosophical argument sans silly pictures of men in Chinese rooms.

> To address the strong AI / weak AI distinction I put to you
> a question you haven't yet answered: what do you think would happen 
> if part of your brain, say your visual cortex, were replaced with
> components that behaved normally in their interaction with the remaining
> biological neurons, but lacked the essential ingredient for
> consciousness?

You need to show that the squirting of neurotransmitters between giant artificial neurons made of beer cans and toilet paper will result in a mind that understands anything. :-) How do those squirts cause consciousness? If you have no scientific theory to explain it, then, well, we're back to Searle's default position: as far as we know, only real biological brains have it.

-gts

