[extropy-chat] "Dead Time" of the Brain.

Christopher Healey CHealey at unicom-inc.com
Fri May 5 18:24:22 UTC 2006


This post referenced another I had sent about 5 minutes previously.  The
original apparently did not make it through, so I'm resending it below.

--------------------------------------

>>> Heartland wrote:
>>> Of course not. The point is that if you have two identical, but 
>>> separate brains, this must add up to two separate *instances* of one
>>> *type* of mind.
>>> If you have any experience in OOP, and I can't imagine you don't, 
>>> then you should know exactly what I mean.
>
>Christopher Healey: 
>> Is forking an instance equivalent to type?  I think not.
>
>Are you disagreeing with what seems to be your point?
>
>S.

My point is that this seems like saying identical twins are really just
two separate instances of type HumanBeing.  Well, yeah!  

But it fails to capture the important distinction, and perhaps even
subtly diverts attention from it:  A particular instance carries more
information than its type, because further constraining the realm of
possibility always requires additional specification.  When you fork a
particular instance, all of its *specific* state, as well as its type
structure, is preserved.  Reducing the situation to a type comparison
misses this deeper equivalence between the source and target instances.
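(A rough Python sketch of what I mean, using a made-up Mind class as a
stand-in, purely for illustration:)

import copy

class Mind:
    """A toy *type*: it only constrains which states are possible."""
    def __init__(self, memories):
        self.memories = list(memories)  # instance-specific state

# Two instances of the same type, holding different specific state.
alice = Mind(["first day of school", "red bicycle"])
bob = Mind(["thunderstorm", "blue kite"])

# "Forking" an instance preserves its specific state along with its type.
alice_fork = copy.deepcopy(alice)

print(type(alice) is type(bob))               # True  -- same type
print(alice.memories == bob.memories)         # False -- distinct instances
print(alice.memories == alice_fork.memories)  # True  -- the fork keeps the state

Comparing alice and bob only at the level of type throws away exactly
the state that the fork went to the trouble of preserving.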

Jumping off this specific point, I don't think that this whole problem
can be solved while simultaneously maintaining our current notions of
identity.  

If we want to make useful progress on it, we need to put aside many of
our deeply embedded notions regarding our everyday experience of life.
We can't start off saying, "That cannot be the answer, for that would
lead to the death of the mind!"  We should instead simply say, "How does
this thing we perceive as mind actually operate?"  

In troubleshooting complex systems, what appears to be the problem is
often really just the symptom of a deeper cause.  In a similar way, we
should be careful that what appears to be an important structure in our
model of the mind is not just a surface indication of a deeper process
at work, a process that may work very differently than its surface
indications suggest.

-Chris Healey

--------------------------------------

[and this was the intended follow-up message, which was the only one to
get through...]

Heartland,

Another point of my last paragraph concerns the definitions we use in
uncovering truth.

Whatever we *call* the things we describe, they are only labels, and
ultimately labels shouldn't alter the measurable predictions we achieve.
In doing what human minds do, they may occasionally, or even very often,
do things that you label as dying.  Some of those things could just as
easily be labeled: operating as designed, system hibernation, or plain
old "being alive".  

If it seems like we're dying an awful lot, but nobody seems to mind
much, that's stronger evidence for revising our models than for
revising our behavior.

-Chris



