[ExI] extropy-chat Digest, Vol 77, Issue 1

Spencer Campbell lacertilian at gmail.com
Mon Feb 1 19:49:23 UTC 2010


Ben Zaiboc <bbenzai at yahoo.com>:
>Gordon Swobe <gts_2000 at yahoo.com>:
>> The system you describe won't really "know" it is red. It
>> will merely act as if it knows it is red, no different from,
>> say, an automated camera that acts as if it knows the light
>> level in the room and automatically adjusts for it.
>
> Please explain what "really knowing" is.
>
> I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that.  In fact, I'm at a loss to see how that sentence can even make sense.

Like so many other things, it depends on the method of measurement.
Gordon did not describe any such method, but we can assume he had at
least a vague notion of one in mind.

It actually is possible to get that paradoxical result; in fact, it's
easy enough that real-world examples are widespread. See: public
school systems the world over, with their obsessive tendency to test
knowledge.

It's alarmingly easy to get the right answer on a test without
understanding why it's the right answer, but a certain mental trick is
required to notice when this happens. Basically, you have to
understand your own understanding without falling into infinite
recursion. Human beings are born with that ability, but most people
lose it in school because they learn (incorrectly) that understanding
doesn't make a difference.

Ben Zaiboc <bbenzai at yahoo.com>:
> You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck.  That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck.

This is the standard method of measurement in philosophy: omniscience.

The only problem is, omniscience tends to break down rather rapidly
when confronted with questions about subjective experience. If you do
manage to pry a correct answer from your god's-eye view, it will
typically be paradoxical, ambiguous, or both.

Works great for ducks, though, and brains by extension. If you assume
the existence of consciousness in a given brain, and then you
perfectly reconstruct that brain elsewhere at the atomic level, the
copy must necessarily also have consciousness.

But then you have to ask whether it's the same consciousness, and,
for my part, I'm forced to conclude that the copy is identical but
distinct. In the next moment, the two versions will diverge, ceasing
to be identical. So far so good.

However, Gordon usually does not begin with a working consciousness:
he tries to construct one from scratch, and he finds that when he uses
a digital computer to do so, he fails. I'm not sure yet whether this
is a fundamental limitation built into how digital computers work, or
whether Gordon is just a really bad programmer. I tend to believe the
latter. Gordon believes the former, so he has extended the notion to
situations in which we DO begin with a working consciousness and then
try to move it to another medium.

Hope that elucidates matters for you.

Also, I hope that it's accurate.


