[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Dec 25 02:31:30 UTC 2009


2009/12/25 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Thu, 12/24/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> It's as if you believe that some physical activity is not
>> "purely syntactic", and therefore can potentially give rise to
>> mind; but as soon as it is organised in a complex enough way that it can
>> be interpreted as implementing a program, this potential is
>> destroyed!
>
> Real or hypothetical examples help to illustrate concepts, so let's try to use them when possible. I offer one:
>
> Consider an actual program that takes a simple input asking for information about the day of the week and reports "Thursday". You and I of course understand the meaning of "Thursday". We agree (for the moment) that the program did not understand the meaning because it did only syntactic operations and syntax does not give semantics. Now you ask what about the hardware? You want to know if the hardware (RAM, CPU and so on) that implemented those syntactic operations at the very lowest level (in 1's and 0's or ons and offs) knew the meaning of "Thursday" even while the higher program level did not. Odd question to ask, I think. Unlike the higher program level (which at least appears to have understanding!) at the machine level computers cannot even recognize or spell "Thursday". How then could the machine level understand the meaning of it?

There is no real distinction between the program level, the machine
level and the atomic level. These are levels of description,
introduced for the benefit of the observer, and a description of
something has no causal efficacy of its own.

> Now you might object and point out that you never actually agreed that the higher program level lacked understanding of "Thursday". Understandable - after all, if any understanding exists then we should expect to find it at the higher levels - but now we find ourselves asking the same question of whether and how programs can get semantics from syntax.

Understanding is something that is associated with intelligent,
understanding-like behaviour. A program is just a plan, in your mind
or on a piece of paper, that helps you arrange matter so as to give
rise to this intelligent behaviour. There was no plan behind the
brain, but post hoc analysis can reveal patterns in it that have an
algorithmic description (provided that the physics of the brain is
computable). Now, if such patterns in the brain do not detract from
its understanding, why should similar patterns detract from the
understanding of a computer? In both cases you can claim that the
understanding comes from the actual physical structure and behaviour,
not from the description of that physical structure and behaviour.


-- 
Stathis Papaioannou
