[extropy-chat] FW: The Undying

Jef Allbright jef at jefallbright.net
Mon Dec 4 18:45:37 UTC 2006


Forwarded to the list per Rafal's request:

-----Original Message-----
From: Rafal Smigrodzki 
Sent: Friday, December 01, 2006 1:10 PM
To: Jef Allbright
Subject: Re: [extropy-chat] The Undying

On 11/30/06, Jef Allbright <jef at jefallbright.net> wrote:

> Rafal, I agree with you that it gets old rehashing what we mean by
> rationality.  Same goes for personal identity, free-will, morality and
> X*.
>
> But it's interesting to me that this problem of understanding runs so 
> deeply on a common thread tied to the meaning of self.

### Indeed, there is a very strong connection between rationality and
the understanding of identity. After all, rationality (to use the
dictionary meaning) is optimizing behavior to achieve goals, and goals
are at the very core of our self, define morality, and are closely
related to free will.

What I find interesting is the recursive interaction between the
inference structures in our mind and the goal-defining structures.
Once you turn your intellect (sharpened in the analysis of the outside
world) to your own goals, strange things may happen. It appears that
upon self-consideration our self tends to become quite unstable,
possibly chaotic, i.e., small changes in the initial emotional settings
may lead to widely divergent outcomes. I presume that self-consideration
is a relatively new phenomenon, evolutionarily speaking - our
cave-dwelling and small-village ancestors didn't have the intellectual
armamentarium accessible today to anybody who can read. Even in
historical times, and now, only a fraction of the population, perhaps as
little as 10%, engages in this sort of destabilizing activity. Not
surprisingly, the neural safety features that would preclude
gene-destructive outcomes have not yet evolved to a high degree of
efficacy.
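
A toy sketch of what I mean by "chaotic" here - the particular update
rule (the logistic map) and all the numbers are chosen purely for
illustration, not as a model of any real neural process. A single "goal
weight" is revised over and over by a fixed self-reflection rule, and
two starting weights that differ by one part in a million are compared:

def reflect(w, r=3.9):
    # One round of "self-consideration": revise the goal weight.
    return r * w * (1.0 - w)

def trajectory(w0, steps=40):
    ws = [w0]
    for _ in range(steps):
        ws.append(reflect(ws[-1]))
    return ws

a = trajectory(0.500000)
b = trajectory(0.500001)   # initial setting differs by one part in a million

for t in (0, 10, 20, 30, 40):
    print(t, round(a[t], 4), round(b[t], 4))

After a few dozen rounds of "reflection" the two trajectories bear no
resemblance to each other, which is all that sensitive dependence on the
initial settings means.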

Is the process of self-consideration, then, a rational one? After all,
turning your intellect on your goals may result in the erasure of many
of the goals, possibly even all of them. If there was a goal "seek
happiness" in my then-sophomore mind a long time ago, it was erased upon
noticing that happiness appears to be the subjective aspect of certain
computations within, most notably, the cingulate and insular cortices
and the nucleus accumbens. Why bother doing such computations? Somehow
that goal didn't have an alarm system that would respond to such an
iconoclastic question, and it was suppressed. On the other hand, other
goals, like "avoid unhappiness", have a strong direct line in my mind to
the cognitive faculties, so these goals are suppressed only mildly. It's
dangerous for a goal to mess with itself.

If rationality is using cognition to find ways of achieving goals, then
using cognition to erase goals would be irrational. On the other hand,
given the haphazard nature of our goal systems, consisting of a bunch of
drives hastily slapped together by evolution over the last ca. 500
million years, pruning some goals is almost always necessary to allow
other goals to be achieved (I am referring to consciously shaping your
goals over long periods of time, not to the simpler process of temporary
suppression of goals, such as "relieve bladder pressure", under certain
circumstances). Therefore, I would hold that self-consideration is an
indispensable, if dangerous, part of long-term rationality.
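
To make the pruning idea concrete, here is a deliberately crude sketch -
the goals, their values, and the conflicts are invented solely for
illustration. When two goals cannot both be pursued, the less valuable
one is dropped, and only then can the goals that remain actually be
achieved:

# Invented goals with rough "values" and pairs that cannot coexist.
goals = {"seek_happiness": 3, "avoid_unhappiness": 5,
         "self_preservation": 8, "constant_novelty": 2}
conflicts = [("constant_novelty", "self_preservation"),
             ("seek_happiness", "avoid_unhappiness")]

kept = dict(goals)
for a, b in conflicts:
    if a in kept and b in kept:
        # Prune the less valuable of the two conflicting goals.
        del kept[a if kept[a] < kept[b] else b]

print("kept:", sorted(kept))
print("achievable value:", sum(kept.values()))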

Furthermore, it is fascinating how the simple emotional images that
constitute our initial goals are transformed by cogitation about some of
the most advanced concepts in physics or neuroscience. On our list we
can observe what happens to the urge for self-preservation after
considering the many-worlds interpretation of QM, or the concept of
uploading. We have the intellectual means to delve much deeper into what
we really want than in the times when self-preservation meant simply
running faster than the tiger.

As you have certainly noticed, there are close analogies between the
above process and the recursive development of an FAI. Will Eliezer's
attempt at developing a reasonably safe but still powerful recursively
self-modifying mind give us clues about what to do with our own minds?

I'm staying tuned on SL4.

Rafal



