[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 18 01:04:02 UTC 2009


--- On Thu, 12/17/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> To recap the CRA:
> 
> You say the man in the room has no understanding.

No understanding of Chinese from following Chinese syntax. Right. And yet he still passes the Turing test in Chinese.

> We say that neurons have no understanding either but the
> system of neurons has understanding.

I don't have any reason to disagree with that, but frankly I don't know how understanding works. I only know (or find myself persuaded by Searle's argument) that understanding doesn't happen as a consequence of the brain running formal programs. The brain does it by some other means.

> You say but the man has no understanding even if he
> internalises all the other components of the CR. Presumably 
> by this you mean that by internalising everything the man then *is* 
> the system, but still lacks understanding.

Yes.
 
> I say (because at this point the others are getting tired
> of arguing)... 

I'm glad you find this subject interesting. If not for you, I would be arguing with the philosophers over on that other list. :)

> ... that the neurons would still have no understanding if they
> had a rudimentary intelligence sufficient for them to know when
> it was time to fire. 

I can agree with that, but perhaps not in the way you mean. 

As I've written to John, I consider even my watch to have intelligence. But does it have intentionality/semantics/understanding? No sir. My watch tells me the time intelligently but it doesn't know the time. If it had intentionality, as in strong AI, it would not only tell the time; it would also know the time.

> The intelligence of the system is superimposed on
> the intelligence (or lack of it) of its parts.

See above. Let's first distinguish intelligence from semantics/intentionality, because until we do we're not speaking the same language. It's the difference between weak AI and strong AI.

> You haven't said anything directly in answer to this.

I hope we're getting closer now to the crux of the matter.

-gts