[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Dec 20 05:45:44 UTC 2009


2009/12/20 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 12/19/09, John Clark <jonkc at bellsouth.net> wrote:
>
>> A mechanical punch card reader from half a century ago knows that a
>> particular hole is a symbol that means "put this card in the third
>> column from the left". How do we know this? Because the machine put
>> the card in the third column from the left.
>
> In other words, you want to say that if something causes X to do Y, then we can assume X actually knows how to do Y.
>
> That idea entails panpsychism, the theory that everything has a mind. As I mentioned to someone here a week or so ago, panpsychism would refute my position that only brains have minds, and it would do so coherently. But most people find panpsychism implausible, if not outrageous.

Not everything has a mind, just information-processing things. Mind is
not a binary quality: even in biology there is a gradation between
bacteria and humans. The richer and more complex the information
processing, the richer and more complex the mind.
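As a toy illustration of the minimal end of that spectrum, here is a sketch in Python of the punch card reader's "knowledge" (the names and mapping are purely hypothetical, not anything from the thread): a fixed lookup from symbol to action, with no interpretation in between.

# A minimal, hypothetical sketch of the punch card reader described
# above: a fixed mapping from a symbol (the hole position) to an action.
# The machine "knows" only in the sense that the symbol reliably causes
# the behaviour.

CARD_ACTIONS = {
    "hole_at_row_3": "place card in column 3",
    "hole_at_row_5": "place card in column 5",
}

def sort_card(symbol: str) -> str:
    """Return the action the machine performs for a punched symbol."""
    # No interpretation happens here: the symbol's "meaning" is
    # exhausted by the lookup. This is the minimal end of the
    # information-processing spectrum.
    return CARD_ACTIONS.get(symbol, "reject card")

print(sort_card("hole_at_row_3"))  # -> place card in column 3

The point of the sketch is that the same causal story holds at every scale; what changes between this and a brain is the richness of the processing, not the presence of some extra ingredient.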


-- 
Stathis Papaioannou


