[extropy-chat] Criticizing One's Own Goals---Rational?

Lee Corbin lcorbin at rawbw.com
Tue Dec 12 03:09:16 UTC 2006


Rafal writes

> On 12/6/06, Lee Corbin <lcorbin at rawbw.com> wrote:
>> Rafal wrote
>> > If there was a goal "seek happiness" in my then sophomore mind
>> > a long time ago, it was erased upon noticing that happiness appears
>> > to be the subjective aspect of certain computations within, most
>> > notably, the cingulate and insular cortices and the nucleus accumbens.
>> > Why bother doing such computations?
>>
>> What!?  How can awareness of the mechanics of a process interfere
>> with your appreciation of it?
>>
>> [Moreover] Why bother doing *any* computation?  That is, suppose
>> that you uncovered the precise mechanism responsible for your affections
>> towards your family;  would this immediately imperil the desirability
>> to you of those computations?   So what if we know how happiness
>> works: I cannot fathom why this would make it any less desirable.
> 
> ...let me give you a brief summary of how my goal system evolved,
> leading to the invalidation of some initial high-level goals:
> 
> In the beginning there were some simple goals, such as "seek sweet
> food", "gain predictive understanding of the motions of physical
> objects in the environment", "avoid pain". Then more complex goals
> emerged, accompanied by the process of myelinization of my frontal
> cortex, and still under the direction of inborn mechanisms - such as
> "seek approval of mother", "become a member of ingroup", "achieve
> dominance over others", "avoid rejection", "understand the thinking of
> others", "avoid death". Then self-consideration emerged, cataloguing
> the goals and their interactions. A loose hierarchy emerged, ordering
> goals by strength, discount rates and their interdependencies. Certain
> more abstract goals were formulated, e.g. transforming "avoid death"
> (i.e "avoid irreversible termination of mental and bodily funtions")
> into a more complex concept of self-preservation. My goal of
> self-preservation is interpreted by my higher cognitive faculties as
> continued existence of a conscious agent sharing a large fraction of
> my memories and a certain very small number of select goals (this is
> my idea of the Rafal-identity).

All that seems very logical and very well described.

> I score very low on self-transcendence, that is, I belong to the category
> of humans who developed almost no significant goals that are
> independent of self-preservation. Given this fact, it is not
> surprising that the systemizing faculty placed self-preservation in the
> position of the ultimate supergoal, valid in all but the most
> esoteric hypothetical situations.

Again, well put.  Different people are indeed prone to sign on to causes
for which they might forfeit their lives, sacred honor, etc.  Very likely
you would not sign a revolutionary document when that activity is
punishable by death, and I myself would sign only with great trepidation.

It sounds as though I would be a lot more likely to wager my life on the
chance of sufficient benefit---either benefit directly to me or to those I
love, or perhaps even to some cause or other. I infer that I am
to describe this as "self-transcendence"  :-)

> The systemizing tendency in my mind is so strong that most goals that are
> in principle dispensable for the purpose of Rafal-preservation are placed
> very low in the goal hierarchy. I think I should be able to survive
> and maintain goal-driven activity without the need to be happy,

Fine.  That cogently explains why happiness itself is a rather low-order
goal, and hence your question above, "Why bother doing such computations?",
i.e., the computations that are the brain-mechanical equivalents of being
happy.  Whereas, for the reasons you also give, it *is* worthwhile for your
brain to engage in pain-avoiding behavior. (Judging by behavior alone,
avoiding pain evidently sits higher in your goal hierarchy, even without
your claim that it does.)

> although further research may change this opinion. Legacy goals like
> this are therefore not a part of my abstract definition of self, and
> may be subject  to erasure if there is any conflict with the
> supergoal, including a conflict over allocation of computational
> resources - if running the happiness-cortex costs money needed for
> survival, happiness goes out the window. This of course only once I
> gain access to my source code and finish some courses in
> autopsychoengineering.
> 
> I hope this explains my current thinking. No doubt, YMMV.

Yes, I think it does.  Thanks.  And yes, being happy continues to be
one of *my* higher goals, right up there with self-preservation.  In fact,
self-preservation through extended times when there would be no
happiness is justified in my own mental calculus only by the fact that
so long as I live, there is still the hope for more (and maybe vastly
greater) benefit to me eventually.

If I knew that I was never again to be happy, and that my passing would
not affect anyone I care for, then I'd check out post haste.

Lee



