[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Dec 27 00:27:35 UTC 2009


--- On Sat, 12/26/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> If computers had understanding then those patterns we
>> might find and write down would not detract from their
>> understanding any more than do patterns of brain behavior
>> detract from the brain's understanding. But how can
>> computers that run formal programs have understanding?
> 
> Because the program does not *prevent* the computer from
> having understanding even if it is conceded (for the sake of
> argument) that the program cannot *by itself* give rise to understanding.

You skipped over my points about the formal nature of machine-level programming and the machine states represented by the 0's and 1's, which have no semantic content, real or imagined. That is what we are talking about at the most basic hardware level, the level to which you now want to appeal: "On" vs. "Off"; "Open" vs. "Closed". These states mean nothing even to you and me, except that they differ in form one from the other. If we must say they mean something to the hardware, then they each mean exactly the same thing: "this form, not that form". And from these meaningless differences in form, computers and their programmers create the *appearance* of understanding.
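(A small illustration of the point above, not from the original post: a short Python sketch showing that one and the same bit pattern means "65", "A", or a raw string of forms only relative to an interpretation we impose from outside. The pattern itself carries none of those meanings.)

```python
# One fixed arrangement of "this form, not that form":
pattern = 0b01000001

# The same pattern under three external conventions:
as_integer = pattern                # 65, under the unsigned-integer convention
as_character = chr(pattern)         # 'A', under the ASCII convention
as_bits = format(pattern, '08b')    # '01000001', the bare difference in form

print(as_integer, as_character, as_bits)
```

Nothing in the hardware selects among these readings; the semantics lives entirely in the conventions of the programmers and users.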

If you want to believe that computers have intrinsic understanding of the symbols their programs take as input and output, and if you argue provisionally, as you do above, that they can have it because their mindless programs don't prevent them from having it, yet you cannot show me how the hardware allows them to have it even if the programs don't, then I can only shrug my shoulders. After all, people can believe anything they wish. :)

-gts
