[ExI] Wernicke's aphasia and the CRA.

Gordon Swobe gts_2000 at yahoo.com
Tue Dec 8 13:12:30 UTC 2009


--- On Tue, 12/8/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> Premise A1: Programs are formal (syntactic).
>> Premise A2: Minds have mental contents (semantics).
>> Premise A3: Syntax is neither constitutive of nor
>> sufficient for semantics.


> The A1/A2 dichotomy is deceptive. A human child learns that
> if it utters a particular word it gets a particular response, and
> a computer program can learn the same thing. Why would you say the
> child "understands" the word but the program doesn't?

Presumably the child has mental contents, i.e., semantics, that accompany, if not also cause, its behaviors, whereas the machine intelligence has only syntactic rules (even if self-written) that define its behaviors.
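
To make that concrete, here is a toy sketch of the kind of learner you describe (my own illustration, with made-up names, not anything anyone in this thread has proposed): it converges on the "right" word through reward alone, yet every step is symbol manipulation over numeric weights.

```python
# A toy word-utterance learner: it discovers which symbol earns a
# reward, purely by updating symbol-to-number associations. Nothing
# here involves meaning; it is self-written syntax all the way down.
import random

class UtteranceLearner:
    def __init__(self, vocabulary):
        # One weight per word; the weights encode nothing but past reward.
        self.weights = {word: 1.0 for word in vocabulary}

    def utter(self):
        # Pick a word with probability proportional to its weight.
        words = list(self.weights)
        return random.choices(words, weights=[self.weights[w] for w in words])[0]

    def observe(self, word, rewarded):
        # Strengthen or weaken the association according to the response.
        self.weights[word] *= 1.5 if rewarded else 0.7

# The "environment" rewards exactly one word, as a caregiver might.
learner = UtteranceLearner(["milk", "ball", "dog"])
for _ in range(200):
    word = learner.utter()
    learner.observe(word, rewarded=(word == "milk"))

print(learner.utter())  # almost always "milk" after training
```

On Searle's view, nothing in those weight updates constitutes or produces semantics, however closely the resulting behavior tracks the child's.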

Keep in mind (this is in reply also to one of your other messages) that Searle has no problem whatsoever with weak AI. He believes software/hardware (S/H) systems will eventually mimic the behavior of humans, pass the Turing test, and, in the eyes of behaviorists, exceed human intelligence.

But will such S/H systems have semantic understanding? More generally, will any S/H system running a formal program have what philosophers call intentionality? Never, says Searle. The Turing test will give false positives.
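
To see what a false positive looks like in miniature, consider a rule-based responder in the spirit of Weizenbaum's ELIZA (a deliberately scaled-down sketch; a system that actually passed the Turing test would be vastly more sophisticated, but on Searle's view formal in exactly the same way):

```python
# An ELIZA-style responder: to a purely behavioral test it can look
# as if it "understands", but internally it is nothing except
# pattern matching and string substitution, i.e., formal rules.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default reply keeps the exchange moving

print(respond("I feel confused about the Chinese room"))
# -> Why do you feel confused about the Chinese room?
```

Scale the rule book up far enough and, Searle argues, you get behavior indistinguishable from understanding with no understanding anywhere in the system.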

-gts
