[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Dec 27 03:43:03 UTC 2009


2009/12/27 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 12/26/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> If computers had understanding then those patterns we
>>> might find and write down would not detract from their
>>> understanding any more than do patterns of brain behavior
>>> detract from the brain's understanding. But how can
>>> computers that run formal programs have understanding?
>>
>> Because the program does not *prevent* the computer from
>> having understanding even if it is conceded (for the sake of
>> argument) that the program cannot *by itself* give rise to understanding.
>
> You skipped over my points about the formal nature of machine-level programming and the machine states represented by the 0's and 1's that have no semantic content, real OR imagined. That's what we're talking about here at the most basic hardware level to which you now want to appeal: "On" vs "Off"; "Open" vs "Closed". They mean nothing even to you and me, except that they differ in form from one another. If we must say they mean something to the hardware then they each mean exactly the same thing: "this form, not that form". And from these meaningless differences in form, computers and their programmers create the *appearance* of understanding.

I agree with you that the symbols a computer program uses have no
absolute meaning, which is why I have been asking you to pretend that
you are an alien scientist examining a computer and a brain side by
side. What you see is switches going on and off in the computer and
neurons going on and off in the brain. You can work out some simple
rules determining what the switches or neurons will do depending on
what their connected neighbours are doing, and you can work out
patterns of behaviour, predicting that a certain input will
consistently give rise to a particular output. If you're very clever,
you may be able to come up with a mathematical model of the neurons or
the computer circuitry, allowing you to predict more complex
behaviours. You understand that the symbolic representations of the
brain and computer you have used in your model are completely
arbitrary, and you don't know if the designers of the brain or
computer used a similar symbolic representation, a different symbolic
representation, or if there were no designers at all and the brain or
computer just evolved naturally. So what reason do you have at this
point to conclude that the computer, the brain, both or neither has
understanding?
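
To make that concrete, here is a toy sketch of the kind of model the
alien scientist might build (my own illustration, not anything read off
the machines themselves): a handful of binary units, each updated by a
rule that depends only on what its connected neighbours are doing. The
majority rule and the wiring below are hypothetical, chosen just for
illustration; the point is that swapping one pair of state labels for
another changes nothing about the behaviour the model predicts.

# Toy network: each unit flips to whichever of the two (arbitrary)
# labels the majority of its neighbours currently carry.
def step(states, wiring, labels):
    a, b = labels
    new_states = {}
    for unit, links in wiring.items():
        votes = sum(1 for n in links if states[n] == a)
        new_states[unit] = a if votes * 2 >= len(links) else b
    return new_states

# The same wiring, described twice with different, equally arbitrary symbols.
wiring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
as_bits = {0: "1", 1: "0", 2: "1"}
as_gates = {0: "open", 1: "closed", 2: "open"}

print(step(as_bits, wiring, ("1", "0")))           # {0: '1', 1: '1', 2: '1'}
print(step(as_gates, wiring, ("open", "closed")))  # {0: 'open', 1: 'open', 2: 'open'}
# Both descriptions predict the same pattern of behaviour; only the
# labels differ, which is all "completely arbitrary" amounts to here.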

> If you want to believe that computers have intrinsic understanding of the symbols their programs input and output, and argue provisionally, as you do above, that they can have it because their mindless programs don't prevent them from having it, yet you can't show me how the hardware allows them to have it even if the programs don't, then I can only shrug my shoulders. After all, people can believe anything they wish. :)

You can't show me how the hardware in your head has understanding
either. However, given that it does, and given that its behaviour can
be simulated by a computer, that computer *must* also have
understanding. I've explained this several times, and you have not
challenged either the premises or the reasoning leading to the
conclusion.


-- 
Stathis Papaioannou


