[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sat Dec 26 17:18:47 UTC 2009


2009/12/27 Gordon Swobe <gts_2000 at yahoo.com>:

>> There was no plan behind the brain, but post hoc analysis can reveal
>> patterns which have an algorithmic description (provided that the
>> physics in the brain is computable). Now, if such patterns in the brain
>> do not detract from its understanding, why should similar patterns
>> detract from the understanding of a computer?
>
> If computers had understanding then those patterns we might find and write down would not detract from their understanding any more than do patterns of brain behavior detract from the brain's understanding. But how can computers that run formal programs have understanding?

Because the program does not *prevent* the computer from having
understanding, even if it is conceded (for the sake of argument) that
the program cannot *by itself* give rise to understanding. The matter
in the computer and the matter in the brain both follow absolutely
rigid and mindless rules - at the lowest level of description, exactly
the same rigid and mindless rules - which at the highest level of
description lead to intelligent behaviour. It so happens that at
intermediate levels the patterns in the computer are recognisable as
programs, because that was an easy way for the engineer to figure out
how to put the matter together to do his bidding. Similarly, at
intermediate levels in the brain, patterns appear which can be mapped
onto a computer program, such as a neural network. But if you can say
of the brain that it is something other than the symbol manipulation
that gives rise to understanding, what impediment is there to saying
the same of the computer?
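
To make the point concrete, here is a minimal, purely illustrative
sketch of the kind of intermediate-level description I mean: a toy
network of leaky integrate-and-fire units, each blindly applying the
same mindless update rule. Every name and parameter below is an
invention for the example, not a claim about real neurons.

    # Hypothetical toy example: blind low-level rules whose repeated
    # application produces network-level behaviour.
    import random

    N = 50                       # number of model units
    v = [0.0] * N                # membrane potentials
    w = [[random.gauss(0, 0.3) for _ in range(N)] for _ in range(N)]
    THRESHOLD, LEAK = 1.0, 0.9

    for step in range(100):
        spikes = [v[i] >= THRESHOLD for i in range(N)]
        for i in range(N):
            # each unit blindly leaks, sums weighted input spikes,
            # and resets if it fired on the previous step
            total_input = sum(w[j][i] for j in range(N) if spikes[j])
            v[i] = (0.0 if spikes[i] else LEAK * v[i]) + total_input

Nothing in that loop knows anything about the network as a whole; the
higher-level behaviour emerges from blind application of the low-level
rule, which is all the mapping from brain patterns to programs amounts
to.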

>> In both cases you can claim that the understanding comes from the actual
>> physical structure and behaviour, not from the description of that
>> physical structure and behaviour.
>
> I don't claim computers have understanding. They act as if they have it (as in weak AI) but they do not actually have it (as in strong AI).
>
> Let us say machine X has strong AI, and that we abstract from it a formal program that exactly describes and determines its intelligent behavior. We then run that abstracted formal program on a software/hardware system called computer Y. Computer Y will act exactly like machine X but it will have only weak AI. (If you get that then you've gotten what there is to get! :-)
>
> Formal programs exist as abstract simulations. They do not equal the things they simulate. They contain the forms of things but not the substance of things.

As I have explained, even if it is accepted that formal programs lack
understanding, it does not follow that a machine running such a
program lacks understanding, since the machine may get its
understanding from something else, such as the overall intelligent
behaviour of the system or a specific physical process. The latter
would allow for the possibility (but not the necessity) that computers
lack understanding because they lack a special quality that neurons
have. However, the partial brain replacement thought experiment shows
that this would lead to absurdity, as previously discussed and not
rebutted. The conclusion, therefore, is that provided the behaviour of
the brain can be reproduced in a different substrate, whether
semiconductors or beer cans and toilet paper, the
consciousness/experience/qualia/intentionality/understanding/feelings
will also be reproduced.


-- 
Stathis Papaioannou


