[ExI] The simplest possible conscious system

Spencer Campbell lacertilian at gmail.com
Sun Feb 7 05:31:02 UTC 2010


Gordon Swobe <gts_2000 at yahoo.com>:
> I have since the 70s practiced transcendental meditation. Occasionally while meditating my mind seems to, as you say, eradicate the subject-object duality. However I can only infer this indirectly, and only after the fact. During those actual moments I have awareness of nothing at all.

Gordon! I never knew. My opinion of you is again on the upward part of
its fluctuation.

This seems to me a good argument for the idea that all consciousness
consists of a subject being aware of an object. I'd have said that
before, but my source regarding that particular state (from Ken
Wilber, I think) wasn't very specific on the matter.

The vast-consciousness-without-content state does seem to contradict
this theory, though, from what little I know about it. I've heard it
described (probably by Ken Wilber again) as a void which is only aware
of itself. You could interpret that as saying that there technically
is a subject-object duality in the moment, but both positions are
occupied by the same thing and the thing in question actually isn't
anything. Precisely as nolipsism predicts!

And Buddhism, and so on.

Does any of this help us to construct the simplest possible conscious
system? Self-awareness seems to be a good description of
consciousness, but awareness isn't exactly understanding. I'm not sure
what sort of mechanism is responsible for awareness.

You could claim that a Turing machine is aware of the symbol it's
currently reading, and only that symbol. Logically, then, a Turing
machine that does nothing but read a tape of symbols denoting itself
should be conscious. This is the symbol grounding problem again: how
do you cause a symbol to denote anything at all?
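
To make that concrete, here is a toy sketch in Python (every name and
encoding below is my own invention, purely for illustration): a
machine whose tape happens to spell out its own transition rule, and
which does nothing but scan it.

# Toy Turing machine that only ever reads a tape describing itself.
# Purely illustrative; there is no grounding anywhere in here.

def step(state, symbol):
    # In state 'scan' the machine is "aware" of exactly one symbol:
    # whatever sits under the head. It writes it back and moves right.
    if state == "scan" and symbol != "_":
        return ("scan", symbol, "R")
    return ("halt", symbol, "R")

# The tape is just a textual spelling-out of step(): symbols that
# "denote" the machine, in the loosest possible sense.
TAPE = list("scan,x -> scan,x,R ; scan,blank -> halt") + ["_"]

def run(tape):
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head]
        state, tape[head], move = step(state, symbol)
        head += 1 if move == "R" else -1
    return head

run(TAPE)  # reads its own description, then halts, none the wiser

Nothing in that loop distinguishes a tape about itself from a tape
about the weather, which is the whole problem.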

The general consensus is that some sort of interaction with the
environment is necessary. It's obvious to me that this works when
taken to the extremely sophisticated level of human awareness, but I
would be hard-pressed to define the exact point at which the
unconsciousness of an ungrounded Turing machine is replaced by the
consciousness of an egotistic Spencer Campbell.

Attaching a webcam, associating its images with symbols (using
complex object-recognition software, of course), and feeding those
symbols to the machine on tape does not seem sufficient to produce
consciousness, even if you point the camera at a mirror. Yet I have
no good reason to believe it isn't. Sheer anthropocentric prejudice
makes me say that such a system is incapable of awareness: the Swobe
Fallacy.
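
If I try to cash that intuition out as code, the result is
underwhelming (again, everything here is hypothetical: recognize()
stands in for the object-recognition software, and the mirror is
simulated):

# Sketch of the webcam-at-a-mirror setup. All made up for
# illustration; no real vision library is involved.

def recognize(frame):
    # Stand-in for the "complex object recognition software":
    # it maps an image to a symbol.
    return "SELF" if frame == "reflection_of_machine" else "OTHER"

def camera_frames():
    # A camera pointed at a mirror sees much the same thing forever.
    while True:
        yield "reflection_of_machine"

def feed_tape(frames, limit=5):
    # Turn each recognized symbol into a cell on the machine's tape.
    return [recognize(frame) for _, frame in zip(range(limit), frames)]

print(feed_tape(camera_frames()))
# ['SELF', 'SELF', 'SELF', 'SELF', 'SELF']

The token 'SELF' is now causally hooked up to the machine's own
appearance, yet it is still just another symbol on the tape, which is
exactly the intuition I can't justify.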

So, I haven't managed to convince myself that a system simpler than a
disembodied verbal AI (discussed previously) is capable of
consciousness. Some such simpler system must be, though, if I can
remain conscious even with duct tape over my mouth. Calling the
potential to communicate a necessary feature of consciousness would
be extraordinarily counterintuitive, at best.

Basically I am talking to myself at this point. Do all possible
consciousnesses do that?

Hmm.


