[ExI] Meaningless Symbols.

Eric Messick eric at m056832107.syzygy.com
Sat Jan 16 18:03:32 UTC 2010


John Clark writes:
> Gordon thinks that genuine understanding is a completely
> useless property for intelligent people or machines to have because they
> would continue to act in exactly the same way whether they have
> understanding or not.

I'm not at all sure that's what Gordon thinks, although it's
difficult to tell.

A claim he's made several times, though it seems to have mostly
slipped by unnoticed, is that a program-controlled neuron cannot be
made to behave the same way as a biological one.

In discussing the partial replacement thought experiment, he says that
the surgeon will replace the initial set of neurons and find that they
don't produce the desired behavior in the patient, so he has to go
back and tweak things again.

Everyone else seems to think Gordon means that the tweaking is in the
programming, and that eventually the surgeon manages to get the
program right.  He has actually said that the surgeon will need to go
in and replace more and more of the patient's brain to get the
patient to pass the Turing test, and that it is this extensive removal
of biological neurons that turns the patient into a zombie.

Since Gordon also claims that neurons are computable, this seems to me
to be a contradiction in his position.

My guess at his response to this would be: Sure, neurons may be
computable, but we don't CURRENTLY know enough about how they work to
duplicate their behavior well enough to support consciousness.

My reply to that would be: by the time we can make replacement
neurons, we will very likely know how they work in sufficient detail.
In fact, we already know a good deal about how they work.  What we're
missing is the wiring pattern.
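
As an aside, it may help to pin down what "computable" buys you here.
A leaky integrate-and-fire neuron is about the simplest textbook
spiking model; the sketch below is mine, not Gordon's or anyone's
proposal for an actual replacement part, and every parameter in it is
an illustrative placeholder:

  # A minimal sketch of a "programmatic neuron": a leaky
  # integrate-and-fire model.  All parameters are illustrative only.
  def lif_neuron(currents, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
      """Return a spike train (list of bools) for input currents in amps."""
      v = v_rest
      spikes = []
      for i_in in currents:
          # Voltage leaks toward rest while being driven by the input.
          v += ((v_rest - v) + r_m * i_in) * (dt / tau)
          if v >= v_thresh:          # threshold crossed: spike, then reset
              spikes.append(True)
              v = v_reset
          else:
              spikes.append(False)
      return spikes

  # A steady 2 nA input drives the model over threshold repeatedly:
  print(sum(lif_neuron([2e-9] * 1000)))  # spike count over one second

The point is not that this toy is a neuron; it's that "duplicating a
neuron's behavior" means getting a model like this (vastly elaborated)
and its inputs right, which looks like an empirical problem rather
than a conceptual one.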

I'm also going to guess that Gordon thinks the thing we don't
currently know how to do in making a programmatic neuron is to derive
semantics from syntax.  I think I remember him saying he believes this
will eventually be possible, but that we currently have no clue how.

So, Gordon seems to think that consciousness is apparent in behavior
(if program-controlled neurons can't reproduce a biological neuron's
behavior, then consciousness must be doing behavioral work), and thus
selectable by evolution.  I think that's why he's not interested in
your line of argument.

Gordon:  did I represent your position accurately here?

-eric


