[ExI] The digital nature of brains (was: digital simulations)

Gordon Swobe gts_2000 at yahoo.com
Sun Jan 31 20:01:12 UTC 2010


--- On Sun, 1/31/10, Spencer Campbell <lacertilian at gmail.com> wrote:

> A healthy human brain has intentional states defined as
> conscious thoughts, beliefs, hopes, desires and so on. 

Absolutely! 
 
> An accurate neural level simulation of a healthy human
> brain would, therefore, replicate those states. Otherwise it would not,
> by definition, be accurate.

Digital simulations of non-digital objects only *model* the things they simulate. They do not equal the things they model. To get this wrong is to confuse the model with reality, the description with the thing described, the book with the subject of the book, the simulation with the reality it simulates, the computation with the thing computed.

However, a digital simulation of X will have all the real properties of X *if and only if* X already exists as a digital object. But in that case we should call it a copy or a duplicate of X, not a simulation of X. Simulations of things never equal the things they simulate, but copies do.

Whether people here in extropyland realize it or not, digital models of human brains will have the real properties of natural brains if and only if natural brains already exist as digital objects, i.e., only if the human brain already exists in reality as a digital computer running software.

> I was with Eric until he said this, then switched
> allegiance again. From my perspective, Gordon has been very 
> consistent

Thanks for saying that. It warms my heart. :)

> In this thought experiment, Searle has "internalized" the
> algorithm that he was using in the Chinese room. In effect, Searle is
> now a system containing a virtual Chinese room.

You could say that.

> The virtual Stathis in my head says that the virtual
> Chinese room is what has conscious understanding of the symbols.

That virtual room exists somewhere in the man's brain/mind, and he has access to it such that he can follow the syntactic rules for manipulating the symbols well enough to pass the Turing test in Chinese. Why, then, doesn't he also have access to its supposed intentional states, so that he can understand the symbols? If he cannot access that understanding in his own head, then it seems to me that we've just imagined something in a futile attempt to escape the conclusion that the man just plain cannot understand the symbols.


-gts
