[ExI] How to ground a symbol

Eric Messick eric at m056832107.syzygy.com
Mon Feb 1 01:03:29 UTC 2010


Gordon:
>This kind of processing goes on in every software/hardware system.

Yes, and apparently you didn't understand me.  I already addressed this
issue later in the same message.  It's at a different layer of
abstraction.

It's fine to ignore parts of messages that you agree with.  It's
disingenuous to act as though a point hadn't been raised when you're
actually ignoring it.

>> Come back after you've written a neural network
>> simulator and trained it to do something useful.
>
>Philosophers of mind don't care much about how "useful" it may seem.

While I haven't actually written a neural network simulator, I have
written quite a few programs of comparable complexity.  I know from
experience that things which seem simple, clear, and well defined
when thought about abstractly turn out to be complex, muddy, and
ill-defined when one actually tries to implement them.
Until such a system has been shown to do something useful, it's
probably incomplete, and any intuition learned from writing it may
well be useless.  That's why I stipulated usefulness.
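To make "trained it to do something useful" concrete, here is a minimal
sketch of the kind of exercise I mean: a single simulated neuron (a
perceptron) trained to compute logical AND.  Everything here — the
function name, the learning rate, the task — is an illustrative choice
of mine, not anything from the discussion above.

```python
# Minimal single-neuron "simulator": a perceptron trained on logical AND.
# The names, learning rate, and epoch count are illustrative choices.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # step activation: fire iff the weighted sum exceeds zero
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: ((inputs), expected output)
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    assert out == target   # the trained neuron reproduces the table
```

Even this toy is instructive: the "obvious" details (activation
threshold, update order, stopping condition) are exactly the muddy
parts that only surface when you actually implement it.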

>I think artificial neural networks show great promise as decision
> making tools. 

Natural ones do too.

>But 100 billion * 0 = 0.

But 100,000,000,000 * 0.000,000,000,01 = 1.
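That arithmetic is easy to check exactly.  The per-neuron figure of
10^-11 below is purely a hypothetical placeholder — the point is only
that any nonzero contribution, multiplied across 100 billion neurons,
does not vanish.

```python
from fractions import Fraction

# Exact rational arithmetic, so no floating-point rounding muddies the point.
neurons = 100_000_000_000                    # 100 billion simulated neurons
per_neuron = Fraction(1, 100_000_000_000)    # hypothetical tiny nonzero share

total = neurons * per_neuron
print(total)   # 1 -- nonzero per-neuron contributions sum to something
```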

Your argument depends on the axiomatic assumption that the level of
understanding in a single simulated neuron is *exactly* zero.  Even
the tiniest amount of understanding in a programmed device (like a
thermostat) devastates your argument.  So you cling to the belief that
understanding must be a binary thing, while the universe around you
continues to work by degrees instead of absolutes.

Yes, philosophy deals in absolutes, but where it ignores the shades of
gray in the real world, it gets things horribly wrong.

-eric



More information about the extropy-chat mailing list