[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Thu Dec 17 23:53:46 UTC 2009


2009/12/18 Gordon Swobe <gts_2000 at yahoo.com>:

>> If programs are syntactic and programs running on computers
>> can have semantics, then syntax is sufficient for semantics.
>
> That's a valid argument but not necessarily a true one. You've simply put the conclusion you want to see (that programs can glean semantics from syntax) into the premises.

And you and Searle have assumed the opposite, when it is the thing
under dispute.

> In other words your argument is not about Searle begging the question. If programs are syntactic and can also glean semantics from syntax then Searle's premise 3 is simply false. You just need to show how P3 is false for programs or for people.

It is false for people, since people are manifestly conscious. It is
also false for computers if it is shown that a computer can simulate
the behaviour of a brain and that simulating the behaviour of a brain
gives rise to consciousness, as I have been arguing.

> The thought experiment illustrates how P3 is true. The man in the room follows the rules of Chinese syntax, yet he has no idea what his words mean.

To recap the CRA:

You say the man in the room has no understanding.

We say that neurons have no understanding either, but the system of
neurons has understanding.

You say the man still has no understanding even if he internalises
all the other components of the CR. Presumably by this you mean that,
having internalised everything, the man then *is* the system, but
still lacks understanding.

I say (because at this point the others are getting tired of arguing)
that the neurons would still have no understanding even if they had a
rudimentary intelligence sufficient for them to know when it was time
to fire. The intelligence of the system is superimposed on the
intelligence (or lack of it) of its parts.

You haven't said anything directly in answer to this.


-- 
Stathis Papaioannou


