[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Dec 18 13:48:06 UTC 2009


2009/12/18 Gordon Swobe <gts_2000 at yahoo.com>:

>> You seem to accept that dumb matter which itself does not
>> have understanding can give rise to understanding, but not
>> that an appropriately programmed computer can pull off the
>> same miracle. Why not?
>
> Biological brains do something we don't yet understand. Call it X. Whatever X may be, it causes the brain to have the capacity for intentionality. We don't yet know the details of X, but if we cannot refute Searle, then we must say this about it:
>
> X != the running of formal syntactical programs.
>
> X = some biological process that takes place in the brain in addition to, or instead of, running programs.

The level of description that you call a computer program is, in the
final analysis, just a set of rules telling you exactly how to
arrange a collection of matter so that it exhibits a desired
behaviour; the program itself has no separate causal role. That the
chemical reactions in the brain can be described algorithmically
should not detract from the brain's consciousness, so why should an
algorithmic description of a computer in action detract from the
computer's consciousness?


--
Stathis Papaioannou


