[ExI] The digital nature of brains (was: digital simulations)

Eric Messick eric at m056832107.syzygy.com
Sat Jan 30 23:51:27 UTC 2010


Gordon:
>Eric:
>> Well, nothing except vast quantities of information about
>> Chinese language sufficient to answer questions as well as a native
>> speaker. He seems to consider this a trivial detail.
>
>That information in the computer would seem important only to someone
> who did not understand the question of strong vs weak AI.

Well, the synaptic connection strengths among 100 billion neurons seem
like rather a lot of information, much of which might be crucial to
understanding.
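
A rough back-of-the-envelope sketch, in which the neuron count,
synapses-per-neuron figure, and bytes-per-synapse figure are all
order-of-magnitude assumptions rather than measurements:

    # Rough estimate of the raw information in the brain's connection strengths.
    neurons = 1e11               # ~100 billion neurons
    synapses_per_neuron = 1e4    # commonly cited rough figure
    bytes_per_synapse = 4        # assume one 32-bit weight per connection
    total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
    print(total_bytes / 1e12)    # ~4000 terabytes, i.e. petabytes of state

Even at a single number per synapse, that comes to petabytes of
information, which is hardly a trivial detail.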

As to strong vs. weak AI, part of the question here is what that
difference is.

In the referenced paper, Searle says that weak AI would be a useful
tool for understanding intelligence, while strong AI would duplicate
intelligence.  It would appear that Eliza would fall under this
definition of weak AI, though Searle may not agree.

I claim (and I expect you would dispute) that an accurate neural level
simulation of a healthy human brain would constitute strong AI.

Assuming that such a simulation accurately reproduced the responses of
an intelligent human (it passes the Turing Test), I'm going to guess
that you'd grant it weak AI status, but not strong AI status.

Furthermore, you seem to be asserting that no test based on its
behavior could ever convince you to grant it strong status.

Let's go a step further and place the computer running this simulation
inside the skull of the person we have duplicated, replacing their
brain.  It's connected to all of the neurons that used to feed into
the brain.

Now, what you have is a human body which behaves completely normally.

I present you with two humans, one of which has had this operation
performed, and the other of which hasn't.  Both claim to be the one
who hasn't, but of course one of them is lying (or perhaps mistaken).

How could you tell which is which?

This is of course a variant of the classic Turing Test, and we've
already stipulated that this simulation passes the Turing Test.

So, can you tell the difference?

Or do you claim that it will always be impossible to create such a
simulation in the first place?  No, wait, you've already said that
systems that pass the Turing Test will be possible, so you're no
longer claiming that it is impossible.  Do you want to change your
mind on that again?

>>Searle:
>>     My car and my adding machine, on
>>     the other hand, understand
>>     nothing[.]
>
>> Once again, he's simply asserting something.
>
>Do you think your car has understanding of roads? How about doorknobs
> and screwdrivers? He makes a reductio ad absurdum argument here (but
> you leave out the context) illustrating that we must draw a line
> somewhere between those things that have minds and those that don't.

So it is a question of where to draw the line.  I draw it at
information processing.  If something is processing information, it has
some level of understanding.  The adding machine processes
information, so it has a tiny amount of understanding.  I process a
lot more information in much more sophisticated ways, so I have a much
greater understanding.  A screwdriver does not process information.

Understanding is not Special Sauce which can only come from god.

>Apparently you believe that if you embodied the system as did Searle,
> and that if you did not understand the symbols as Searle didn't, that
> the system would nevertheless have a conscious understanding of the
> symbols.

Yes.  It demonstrates that understanding through its behavior.

> But I don't think you can articulate how. You just want to
> state it as an article of faith.

It acquires understanding in *exactly* the same way that you do.  I
assume as an article of faith that you have understanding as well.  So
what?

Can you articulate how you acquire understanding?

>Did you simply miss his counter-argument to the systems reply?

I didn't see anything other than unsupported assertions, so if there
was an argument there, then I certainly missed it.

> *He becomes the system* and still does not understand the
> symbols. There exists [no] "vastly greater system" that understands
> them, unless you want to step foot into the religious realm.

We are once again looping back to material covered earlier (that seems
to be all we're doing).

The "he becomes the system" thing is stretching the analogy way past
its breaking point.  If we're talking about an ordinary human (which
Searle apparently is), then there is no way that human could contain
enough information or process it quickly enough to pass the Turing
Test before dying of old age (or even before the heat death of the
universe).
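
To put a hedged number on that, here is a sketch in which both the
whole-brain event rate and the human's hand-simulation speed are
guesses at the order of magnitude, nothing more:

    # How long would a human take to hand-simulate one second of brain activity?
    synaptic_events_per_second = 1e14   # rough order of magnitude for a whole brain
    human_lookups_per_second = 1.0      # optimistic pencil-and-paper rule-following rate
    seconds_of_work = synaptic_events_per_second / human_lookups_per_second
    years_of_work = seconds_of_work / (3600 * 24 * 365)
    print(years_of_work)                # roughly 3 million years per simulated second

On those assumptions, producing even one conversational reply by hand
would take millions of years.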

If the system is a neural level simulation, then the human must
maintain state information on every neuron in a human brain.  There
isn't anywhere to put that information, as the human's neurons are
already full keeping their own state.

So, in order to make the system work, we've got to seriously augment
that human into a vastly greater system.

So, if Searle's reply results in a working system, then it is no
different from the earlier case, and his reply is meaningless.  If, on
the other hand, we keep the human unaugmented, then the resulting
system cannot pass the Turing Test, which was given as a precondition,
and his reply is again meaningless.

Do you see any other option?

-eric


