[ExI] The digital nature of brains (was: digital simulations)

Spencer Campbell lacertilian at gmail.com
Sun Jan 31 17:56:51 UTC 2010


Gordon Swobe <gts_2000 at yahoo.com>:
> We might reasonably attribute intelligence to both strong and weak AI systems. However for a system to have strong AI it must also have intentional states defined as conscious thoughts, beliefs, hopes, desires and so on. It must have a subjective conscious mind in the sense that you, Eric, have a mind.
> Eric Messick <eric at m056832107.syzygy.com>:
>> I claim (and I expect you would dispute) that an accurate
>> neural level simulation of a healthy human brain would constitute
>> strong AI.
>
> I dispute that, yes, if the simulation consists of software running on hardware.

A healthy human brain has intentional states defined as conscious
thoughts, beliefs, hopes, desires and so on. It has a subjective
conscious mind in the sense that he, Eric, has a mind.

An accurate neural level simulation of a healthy human brain would,
therefore, replicate those states. Otherwise it would not, by
definition, be accurate.

Gordon Swobe <gts_2000 at yahoo.com>:
> Eric Messick <eric at m056832107.syzygy.com>:
>> Or do you claim that it will always be impossible to create
>> such a simulation in the first place?  No, wait, you've
>> already said that systems that pass the Turing Test will be possible,
>> so you're no longer claiming that it is impossible.  Do you want to
>> change your mind on that again?
>
> Excuse me? I never argued for the impossibility of such systems and I have not "changed my mind" about this. I wonder now if I can count on you for an honest discussion.

I was with Eric until he said this, then switched allegiance again.
From my perspective, Gordon has been very consistent when it comes to
what will and will not pass the Turing test. His arguments, implicitly
or explicitly, state that the Turing test does not measure
consciousness. This is one point on which he and I agree.

Gordon Swobe <gts_2000 at yahoo.com>:
> Stathis Papaioannou <stathisp at gmail.com>:
>> He is the whole system, but his intelligence is only a
>> small and inessential part of the system, as it could easily
>> be replaced by dumber components.
>
> Show me who or what has conscious understanding of the symbols.

In this thought experiment, Searle has "internalized" the algorithm
that he was using in the Chinese room. In effect, Searle is now a
system containing a virtual Chinese room.

The virtual Stathis in my head says that the virtual Chinese room is
what has conscious understanding of the symbols.

I'm inclined to agree, assuming that the Chinese room does indeed pass
the Turing test, except that I would not specify "conscious"
understanding. I'm not convinced that consciousness and understanding
are inseparable. My unconscious mind seems to understand easily enough
that it's important to keep my heart beating at a regular rate, and
I'm not inclined to criticize it solely on the basis that it has no
awareness (therefore, consciousness) of that understanding.



More information about the extropy-chat mailing list