[ExI] More thoughts on sentient computers

Ben Zaiboc ben at zaiboc.net
Thu Feb 23 16:03:10 UTC 2023


Giovanni, you've just made it onto my (regrettably small) list of people
who seem to be capable of thinking in a joined-up fashion and who
seemingly haven't fallen prey to dualistic and crypto-dualistic notions.
I'll take more notice of your posts here in future.

It seems we on this list suffer from just as many misunderstandings as
any other group of humans, and your statement "the brain is a
simulation" is a good example of how this happens. I'm not criticising
you (far from it), but it does illustrate how easy it is for people to
get the wrong end of the stick and run with it, ignoring later
clarifications of what the poster actually meant. I understand what you
(almost certainly) meant by that comment, even if I wouldn't have put it
that way myself. Some others will not.

Literally speaking, what you said doesn't make any sense. The brain is a 
physical object, in what we regard as the 'real world', so it can't be a 
simulation. But of course (my assumption is) you didn't really mean 
that, and it should be pretty easy to figure that out. Our internal 
representation of the world and other people, our entire experience, is 
what I'm assuming you mean, and of course that is a simulation. It 
couldn't be anything else (silly notions of specific molecules actually 
being certain experiences of certain colours notwithstanding).

My understanding is that what our brain does is simulate the world and
the agents that appear in it, and even the agent that is experiencing
the simulations.

The way I'd put it is that everything I experience (including myself)
is a simulation created by a ('my') brain.

Just to be clear, is that what you meant? I'm open to the possibility 
that I've got this totally wrong! (in which case, I may need to withdraw 
what I said in the first paragraph, above :D )

I also suspect you're right in saying that consciousness is going to be 
much easier to produce than we currently think, once we figure it out. 
We will probably be astonished at how simple it is, and how easy it will 
be to create fully-conscious artificial minds.

I think it's a bit like our understanding of tying a knot. At some point 
in our prehistory, humans wouldn't have known what knots were*, and 
probably struggled to do things like keeping animal skins on their 
bodies when they needed them to stay warm. Once some genius invented the 
knot (which probably didn't take long), it would have been a real 'Aha!' 
moment, and, once shown, suddenly everyone could securely tie a skin on 
themselves to keep warm, and we've hardly given it a second thought ever 
since (apart from a certain group of geeky mathematicians!).

I reckon the trick of creating fully-conscious minds will be similar.
There's probably a small set of necessary features that a system needs
in order to be conscious and self-aware; we just don't know what they
are yet. But I think we're getting close (just for the record, I very
much doubt that any chatbot has these features, quite possibly by a long
chalk. Spike's remarks about having a persistent memory are a good
start, but probably far from all that's needed).

Ben

* If this strikes you as ridiculously unlikely, substitute some other 
obvious-in-hindsight thing that would totally elude someone not aware of 
it, like maybe using stones to make sharp sticks or digging a hole then 
making a noise to kill an animal, etc.


