[ExI] The symbol grounding problem in strong AI

Will Steinberg asyluman at gmail.com
Fri Dec 18 23:14:10 UTC 2009


Searle, deliberately or ignorantly, fails to take the experiment to its
logical ends.

Right now we have a man who processes and produces syntactical Chinese
inside a box.  Arguing that he is not conscious is like arguing that the
language center of the brain is (correctly) not conscious.  The part of
consciousness which is left out of the man resides in the book, in the
rules of response.  An algorithm made to perfectly emulate a human must
be able to draw on a changing set of information.  Imagine that one day
a person tells the machine his name is Barry and that he is going to
Staten Island.  The machine uses the template "Hello, [stated name]" to
respond, and Barry leaves.  The next day, Barry's mother asks the room
where Barry went; he had told her he was going to see the machine.  To
accurately and verifiably produce human results, the machine must have a
memory, i.e. a symbol grounding area.
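
To make the point concrete, here is a minimal sketch in Python (the names
and rules are hypothetical, mine rather than Searle's): a rule-following
room that matches inputs against templates but also writes to a memory
store, the symbol grounding area the argument calls for.  A stateless rule
book could answer the first two messages; only the memory lets it answer
the third.

class Room:
    def __init__(self):
        # The changing storage bank: the room's symbol grounding area.
        self.memory = {}

    def respond(self, message):
        # Rule: "My name is X" binds the symbol X to a visitor.
        if message.startswith("My name is "):
            name = message[len("My name is "):].strip(".")
            self.memory["visitor"] = name
            return "Hello, " + name
        # Rule: "I am going to X" records the stated destination.
        if message.startswith("I am going to "):
            self.memory["destination"] = message[len("I am going to "):].strip(".")
            return "Safe travels."
        # Answering this question requires consulting memory; a fixed,
        # stateless rule book could not produce a verifiable answer.
        if message.startswith("Where did "):
            who = self.memory.get("visitor", "that person")
            where = self.memory.get("destination")
            if where:
                return who + " said he was going to " + where + "."
            return "I don't know."
        return "I do not understand."

room = Room()
room.respond("My name is Barry.")             # "Hello, Barry"
room.respond("I am going to Staten Island.")  # "Safe travels."
print(room.respond("Where did Barry go?"))    # "Barry said he was going to Staten Island."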

Yet this is obviously computational--certain patterns in the brain are
causally linked to received sensory input; it is not hard to imagine the
brain producing a random keystring upon the sight of ice cream that is
retroactively associated with the sound of an ice cream truck.  This is
programmable.
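
A toy version of that retroactive association, with hypothetical names
throughout: a novel percept mints a random keystring, and a later
co-occurring percept is bound to the same key.

import secrets

grounding = {}   # percept -> random keystring
associates = {}  # keystring -> percepts sharing that grounding

def perceive(percept, co_occurs_with=None):
    # Reuse the key minted for an earlier percept when the two co-occur;
    # this is the retroactive association described above.
    if co_occurs_with in grounding:
        key = grounding[co_occurs_with]
    else:
        key = grounding.get(percept) or secrets.token_hex(4)  # random keystring
    grounding[percept] = key
    associates.setdefault(key, set()).add(percept)
    return key

k1 = perceive("sight of ice cream")
k2 = perceive("sound of ice cream truck", co_occurs_with="sight of ice cream")
assert k1 == k2  # both percepts now ground to the same symbol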

An interesting thing to imagine is the experiment extended to completion.
The man uses meta-algorithms, based on a changing storage bank (on paper,
of course) of symbols, to derive the algorithms used for speech, as well
as all other functions associated with a human.  We transmit all the box's
output to an android which performs the man's commands.  The man, acting
as the machine, knows the inputs and outputs, the memory within, and the
rules for manipulation.  How is this different from a human?  A human does
not know the rules of manipulation.  Think of placing a window on our
linguistic machinations, so that we could see our brains at work producing
speech.  Now we are AWARE of the process and of the manipulations in which
we are engaging.  We become the (extended) Chinese man.

I don't see how people can talk for so long about a limited, flawed thought
experiment with an easily deducible answer when minds of this caliber should
perhaps be more interested in HOW things like qualia and thought are
constructed.