[extropy-chat] Criticizing One's Own Goals---Rational?
lcorbin at rawbw.com
Wed Dec 6 14:20:12 UTC 2006
Rafal continued in a way that I couldn't quite connect up with
what had gone before, but which was nevertheless most interesting:
> If there was a goal "seek happiness" in my then sophomore mind
> a long time ago, it was erased upon noticing that happiness appears
> to be the subjective aspect of certain computations within, most
> notably, the cingulate and insular cortices and the nucleus accumbens.
> Why bother doing such computations?
What!? How can awareness of the mechanics of a process interfere
with your appreciation of it? Recall how Dawkins or Sagan would take
exactly the opposite tack with regard to artistic or aesthetic appreciation
of our world: just because we know scientifically what is going on beneath
the surface ought not to have any effect on our appreciation, unless it be an
enhancement of it.
Why bother doing *any* computation? That is, suppose that you
uncovered the precise mechanism responsible for your affections
towards your family; would this immediately imperil the desirability
to you of those computations? So what if we know how happiness
works: I cannot fathom why this would make it any less desirable.
> that goal didn't have an alarm system that would respond to such an
> iconoclastic question, and it was suppressed. On the other hand, other
> goals, like "avoid unhappiness", have a strong direct line in my mind to
> the cognitive faculties, so these goals are suppressed only mildly. It's
> dangerous for a goal to mess with itself.
Could you explain why "avoiding unhappiness" has a stronger link to
your cognitive faculties than seeking happiness does? Or, if this
is simply a fact, do you try to justify it at all?
> If rationality is using cognition to find ways of achieving goals, then
> using cognition to erase goals would be irrational.
I'm totally baffled here too. Suppose X is a goal that you have
(e.g. you want to kill the sonofabitch who just cut you off in traffic,
and the .45 magnum you keep under the seat is still loaded):
surely it is not irrational to hold this goal, or any other goal,
up to the light of the rest of your memes and instincts and subject
it to criticism. Why, in many cases, that's the *whole* idea: I
wish to criticize my goals as much as my conjectures, with
the explicit meta-goal of eliminating certain unsatisfactory ones.
The remainder here seems unproblematical, except for the
remark about "many-worlds". I would demur from the claim
that the *urge* for self-preservation is in any way itself
affected. What is changed, for one, is the realization that
self-preservation may be achieved in non-obvious or non-customary ways.
> On the other hand,
> given the haphazard nature of our goal systems, consisting of a bunch of
> drives hastily (ca. 500 million years) slapped together by evolution,
> pruning some goals is almost always necessary to allow other goals to be
> achieved (I am referring to consciously shaping your goals over long
> periods of time, not to the simpler process of temporary suppression of
> goals, such as "relieve bladder pressure", under certain circumstances).
> Therefore, I would hold that self-consideration is an indispensable, if
> dangerous, part of long-term rationality.
> Furthermore, it is fascinating how the simple emotional images that
> constitute our initial goals are transformed by cogitation about some of
> the most advanced concepts in physics or neuroscience. On our list we
> can observe what happens to the urge for self-preservation after
> considering the many-worlds interpretation of QM, or the concept of
> uploading. We have the intellectual means to delve much deeper into what
> we really want than in the times when self-preservation meant simply
> running faster than the tiger.