[ExI] How to ground a symbol

Spencer Campbell lacertilian at gmail.com
Mon Feb 1 00:54:57 UTC 2010


Gordon Swobe <gts_2000 at yahoo.com>:
>Eric Messick <eric at m056832107.syzygy.com>:
>> The animations and other text at the site all indicate that
>> this is the type of processing going on in Chinese rooms.
>
> This kind of processing goes on in every software/hardware system.

No, it doesn't. That's only the result of the processing. I went over
this before. The processing itself is so spectacularly more
fine-grained that thinking about it as an "if this input, then this
output" rule is outright fallacious. Yes, you put that input in; yes,
you get that output out; but between these two points, a universe is
created and destroyed.
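
To make the distinction concrete, here is a toy sketch in Python. It is
my own illustration, not anything from the site Eric linked; parity is
just the most stripped-down stand-in for "some input-output mapping" I
could come up with. Both functions give identical answers, but only the
second does any processing between input and output:

# The "rule" view: the output is simply looked up; nothing happens
# in between.
PARITY_TABLE = {0: "even", 1: "odd", 2: "even", 3: "odd", 4: "even"}

def parity_by_rule(n):
    return PARITY_TABLE[n]

# The "processing" view: the same answers, reached through a chain of
# intermediate states that the rule table never mentions.
def parity_by_processing(n):
    trace = []                  # what actually happens in between
    while n >= 2:
        n -= 2
        trace.append(n)
    return ("even" if n == 0 else "odd"), trace

print(parity_by_rule(4))         # even
print(parity_by_processing(4))   # ('even', [2, 0])

Saying "if 4 goes in, 'even' comes out" is true of both functions and
tells you nothing about the second one's interior.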

Gordon Swobe <gts_2000 at yahoo.com>:
>Eric Messick <eric at m056832107.syzygy.com>:
>> Come back after you've written a neural network
>> simulator and trained it to do something useful.
>
> Philosophers of mind don't care much about how "useful" it may seem. They do care if it has a mind capable of having conscious intentional states: thoughts, beliefs, desires and so on as I've already explained.

The point isn't to have a useful product; it's to demonstrate a
minimal comprehension of how neural network simulations work. You left
out the crux of what Eric said:

"Then we'll see if your intuition still says that computers can't
understand anything."

Getting a neural network simulation to do anything useful is
sufficiently difficult that you will necessarily learn something about
how such networks operate in the process, and this may change your
intuitive impression of what a computer is capable of.
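
If anyone wants to actually try Eric's exercise, the minimal version
looks roughly like this: one hidden layer, trained by plain gradient
descent to compute XOR. This is my own sketch, so the number of hidden
units, the learning rate, and the epoch count are arbitrary choices of
mine, not something Eric prescribed:

import math, random

random.seed(1)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR
H = 3        # hidden units (arbitrary)
RATE = 0.5   # learning rate (arbitrary)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One hidden layer plus one output unit; each weight vector ends in a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(a, b):
    h = [sigmoid(w[0] * a + w[1] * b + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, o

for epoch in range(20000):
    for (a, b), target in DATA:
        h, o = forward(a, b)
        # Backpropagation for squared-error loss with sigmoid units.
        d_o = (o - target) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(H)]
        for i in range(H):
            w_o[i] -= RATE * d_o * h[i]
            w_h[i][0] -= RATE * d_h[i] * a
            w_h[i][1] -= RATE * d_h[i] * b
            w_h[i][2] -= RATE * d_h[i]
        w_o[H] -= RATE * d_o

for (a, b), target in DATA:
    print(a, b, "->", round(forward(a, b)[1], 3), "want", target)

Nothing in there is an "if this input, then this output" rule; the
behaviour falls out of tens of thousands of small weight adjustments,
and you only find that out by writing the thing yourself.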

Besides, we don't care what philosophers of mind think. We care what
computers think. Regrettably, we are forced to talk to the former in
order to learn about the latter.


