[ExI] How to ground a symbol

Eric Messick eric at m056832107.syzygy.com
Sun Jan 31 23:05:39 UTC 2010


Gordon sends us this link:
>http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php

which contains this text (written by David L Anderson):

>  In one of the books, there will be a sentence written in English
>  that says:
>
>    "If you receive this string of shapes: 01010111011010000110000101110100,
>    0110100101110011, 01100001, 011100000110100101100111,
>
>    then send out this string of shapes: 010000001, 011100000110100101100111,
>    0110100101110011, 01100001, 0110001
>    00110000101110010011011100111100101100001011100100100100,
>    01100001011011100110100101101101 0110000101101100"

The animations and other text at the site all indicate that this is
the type of processing going on in Chinese rooms.
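To spell out how little that amounts to, here is the rulebook written as a
program (a minimal sketch in Python; the table and the function name are my
own illustration, not Anderson's):

  # Hypothetical rulebook: each rule maps one exact sequence of "shapes"
  # (ASCII characters spelled out in binary) to one exact reply sequence.
  RULEBOOK = {
      # "What is a pig" -> "A pig is a barnyard animal"
      ("01010111011010000110000101110100", "0110100101110011",
       "01100001", "011100000110100101100111"):
      ("01000001", "011100000110100101100111", "0110100101110011",
       "01100001",
       "0110001001100001011100100110111001111001011000010111001001100100",
       "011000010110111001101001011011010110000101101100"),
  }

  def chinese_room(shapes):
      # Pure table lookup: if the exact input isn't in the book,
      # the room has nothing to say.
      return RULEBOOK.get(tuple(shapes), ())

Every exchange has to be written out in advance, symbol for symbol; there is
no state, no generalization, and no structure below the level of whole
canned sentences.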

Now, I don't know if Searle was involved in this project, and Gordon
hasn't even indicated that he agrees with it, so perhaps this is just
what David Anderson thinks.

If this is the extent of what Chinese room supporters think computers
are capable of, then it's not surprising that they don't consider them
capable of understanding.

I think the proper reply to this is:

  Come back after you've written a neural network simulator and
  trained it to do something useful.  Then we'll see if your intuition
  still says that computers can't understand anything.

Neural networks operate *nothing* like the above set of if-then
statements.  Sure, you've got something Turing complete under the
neural network layer of abstraction, but you've got dumb chemical
reactions under the functioning of a neuron.  What matters is the
action at the higher layer of abstraction.
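For contrast, here is about the smallest trainable network simulator I can
write down (a sketch in Python with numpy; the layer sizes, learning rate,
and XOR task are just illustrative): a couple of weight matrices, a
squashing function, and a loop that nudges the weights toward lower error.
Whatever the network ends up "knowing" lives in the numeric weights, not in
any if-then rule.

  import numpy as np

  # Training data: XOR inputs and targets.
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  rng = np.random.default_rng(0)
  W1 = rng.normal(size=(2, 4))   # input -> hidden weights
  W2 = rng.normal(size=(4, 1))   # hidden -> output weights

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  for step in range(10000):
      # Forward pass: two layers of weighted sums and squashing.
      h = sigmoid(X @ W1)
      out = sigmoid(h @ W2)

      # Backward pass: gradient of the squared error, chain rule by hand.
      d_out = (out - y) * out * (1 - out)
      d_h = (d_out @ W2.T) * h * (1 - h)

      # Nudge the weights downhill.
      W2 -= 1.0 * h.T @ d_out
      W1 -= 1.0 * X.T @ d_h

  print(out.round(3))   # close to [[0], [1], [1], [0]] for most seeds

The point being: at the layer where the interesting behavior happens, there
is no table of sentences to look up, only learned structure distributed
across the weights.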

Once again, I wonder if the problem here is an inability to deal with
abstractions.  Can we test for that ability, teach it, or enhance it?
Is it just a selective inability to deal with particular abstractions?
Perhaps with a particular class of abstraction?

-eric


