[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Thu Dec 24 14:04:18 UTC 2009


2009/12/24 Gordon Swobe <gts_2000 at yahoo.com>:

>> You believe that programs can't give rise to minds, but the
>> right kind of physical activity can. Would you then object to the
>> theory that it isn't the program that gives rise to the computer's mind,
>> but the physical activity that takes place during the program's
>> implementation?
>
> I object not because of the physical activity (I like that part of your argument) but rather because that physical activity represents only the implementation of purely syntactic operations in a formal program. As I mentioned in my last, syntax cannot give semantics no matter what entity does those operations.

It's as if you believe that some physical activity is not "purely
syntactic" and can therefore potentially give rise to mind, but that as
soon as it is organised in a complex enough way to be interpreted as
implementing a program, this potential is destroyed!
You would also need a test to distinguish the "purely syntactic"
(and therefore mentally impotent) activity from the activity that,
although syntactic, could also be viewed as non-syntactic, and so can
give rise to mind by means of its non-syntactic components. The brain,
for example, could be seen as following one algorithm when viewed at
its most basic physical level, and a higher-level algorithm when viewed
at the level of neurons, insofar as you could in principle specify the
rules that determine neuronal firing. But, presumably, the brain is
saved by its non-algorithmic component. How do you recognise this
component, and how do you know a computer doesn't also have it?


-- 
Stathis Papaioannou


