[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Dec 18 07:05:24 UTC 2009


2009/12/18 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Thu, 12/17/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> To recap the CRA:
>>
>> You say the man in the room has no understanding.
>
> No understanding of Chinese from following Chinese syntax. Right. And yet he still passes the Turing test in Chinese.
>
>> We say that neurons have no understanding either but the
>> system of neurons has understanding.
>
> I don't have any reason to disagree with that, but frankly I don't know how understanding works. I only know (or find myself persuaded by Searle's argument) that understanding doesn't happen as a consequence of the brain running formal programs. The brain does it by some other means.

Can you say what these other means might possibly be? For example,
could the understanding derive from some physical structure such as
the carbon-nitrogen bonds of amino acids, or from some process such as
the passage of water through cell membranes?

>> You say that the man has no understanding even if he
>> internalises all the other components of the CR. Presumably
>> by this you mean that by internalising everything the man then *is*
>> the system, but still lacks understanding.
>
> Yes.
>
>> I say (because at this point the others are getting tired
>> of arguing)...
>
> I'm glad you find this subject interesting. But for you, I would be arguing with the philosophers over on that other list. :)
>
>> ... that the neurons would still have no understanding even if they
>> had a rudimentary intelligence sufficient for them to know when
>> it was time to fire.
>
> I can agree with that, but perhaps not in the way you mean.
>
> As I've written to John, I consider even my watch to have intelligence. But does it have intentionality/semantics/understanding? No sir. My watch tells me the time intelligently but it doesn't know the time. If it had intentionality, as in strong AI, it would not only tell the time; it would also know the time.

So where does the brain's understanding come from if the individual
neurons are stupid?

>> The intelligence of the system is superimposed on
>> the intelligence (or lack of it) of its parts.
>
> See above. Let's first distinguish intelligence from semantics/intentionality, because until we do we're not talking the same language. It's the difference between weak AI and strong AI.
>
>> You haven't said anything directly in answer to this.
>
> I hope we're getting closer now to the crux of the matter.

You seem to accept that dumb matter which itself does not have
understanding can give rise to understanding, but not that an
appropriately programmed computer can pull off the same miracle. Why
not?


-- 
Stathis Papaioannou


