[extropy-chat] Criticizing One's Own Goals---Rational?

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Thu Dec 14 17:18:35 UTC 2006


On 12/7/06, Ben Goertzel <ben at goertzel.org> wrote:

> IMO, revising one's supergoal set is a complex dynamic process that is
> **orthogonal** to rationality.  I suppose that Nietzsche understood
> this, though he phrased it quite differently.  His notion of
> "revaluation of all values" is certainly closely tied to the notion of
> supergoal-set refinement/modification....
>
> Refining the goal hierarchy underlying a given set of supergoals is a
> necessary part of rationality, but IMO that's a different sort of
> process...

### Thank you for the excellent post, Ben - you have clearly
articulated some concepts that exist in my mind only as intuitions.
I would agree that there are subtle differences between the two
processes mentioned above, but I would not go so far as to say they
are orthogonal.

It is useful to classify supergoals into two flavors - protected and
unprotected. By protected I mean that any attempt to downgrade the
importance of the supergoal evokes an immediate and strong response
(emotional or cognitive) that prevents the downgrading or erasure.
Unprotected supergoals may exist independently of other supergoals
(i.e. they are not derived from and do not exist solely to further
other goals) but may be erased without inner conflict. Additionally,
there may be hardwired goals, which remain stable even if unprotected,
as long as no other goals have the means of directly modifying the
computational substrate - these we can disregard in the discussion
that follows.
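
To make the terminology concrete, here is a toy sketch in Python (the
names and the whole representation are mine, purely illustrative, not
a claim about how any real mind stores its goals):

    from dataclasses import dataclass
    from enum import Enum, auto

    class Protection(Enum):
        UNPROTECTED = auto()  # independent, but erasable without inner conflict
        PROTECTED = auto()    # downgrade attempts trigger a blocking response
        HARDWIRED = auto()    # stable only because nothing can touch the substrate

    @dataclass
    class Supergoal:
        name: str
        protection: Protection

    chocolate = Supergoal("eat chocolate", Protection.UNPROTECTED)
    survival = Supergoal("preserve self", Protection.PROTECTED)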

For many goals this distinction is very clear in my mind. If I could
boot up my mind in safe mode, with an overview of the goal network in
my working memory and little red delete buttons attached to each
goal, I know I could go slashing a lot of them without dissonance.
"Eat chocolate" might be gone in a click, if there were any need to do
so. But the self-referential "Avoid downgrading self-preservation and
avoid downgrading this goal, unless necessary for self-preservation"
would awaken to burning intensity and strike the cursor dead, should
it wander too close to the delete button.
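
A toy version of this safe-mode session (again, all names and details
are my own invention) might look like:

    # Each goal carries a protection flag; the self-referential goal's
    # flag covers the flag itself, so even "unprotect" attempts are vetoed.
    goals = {
        "eat chocolate": {"protected": False},
        "avoid downgrading self-preservation or this goal": {"protected": True},
    }

    def press_delete(name):
        if goals[name]["protected"]:
            print(f"VETO: the attempt to erase {name!r} is struck dead")
            return
        del goals[name]
        print(f"{name!r} erased, without dissonance")

    press_delete("eat chocolate")  # gone in a click
    press_delete("avoid downgrading self-preservation or this goal")  # VETO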

Generally I would expect a supergoal to be protected if it is
important to other goals (i.e. it has in part the character of a
subgoal), or if it is self-referential. I am not sure whether these
are the only two possibilities; you may be able to come up with more.

Now, reshaping unprotected supergoals is very much like reshaping
subgoals - if there is a process capable of performing erasure and
modification, any goal capable of controlling that process will be
able to do so. Changing protected supergoals would probably depend on
the mechanism of protection. If protection is due to their being
locked into a network of dependencies with other goals, then complex
dynamics would come into play, with various cognitive and emotional
processes modifying the connections between goals until a goal
becomes unprotected - most likely an outcome very difficult to
predict a priori, much like a monstrously complicated version of
chess. If a supergoal is self-referential, change could still occur
if the premises on which the goal's definition rests are themselves
changed, for example through the accumulation of new knowledge.
Again, the outcome would most likely depend heavily on the details of
the initial conditions, perhaps even to the point of single neurons
making a difference - as you said, complex dynamics.
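
To caricature the dependency-locking case: a goal stays protected
while other goals still depend on it, and the network dynamics sever
dependencies until it comes loose. A toy sketch, every detail of
which is my own assumption:

    import random
    random.seed(0)  # tiny changes in initial conditions alter the outcome

    # Which goals each goal depends on; a goal is protected while any
    # other goal still depends on it.
    dependencies = {
        "status": ["self-image"],
        "career": ["self-image", "income"],
        "self-image": [],
        "income": [],
    }

    def is_protected(goal):
        return any(goal in deps for deps in dependencies.values())

    while is_protected("self-image"):
        dependent = random.choice(
            [g for g, deps in dependencies.items() if "self-image" in deps])
        dependencies[dependent].remove("self-image")
        print(f"{dependent!r} no longer depends on 'self-image'")
    print("'self-image' is now unprotected and could be erased")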

Even then, however, I would not say that this is necessarily
orthogonal to rationality - after all, rationality is also a complex
dynamic process, with various goals competing for resources,
continuously modified by external inputs, and frequently driven by
feedback effects from our behavior.

The subtle difference you refer to seems to come from changes in the
measures of rationality that the system applies to its own actions:
as long as you change only subgoals, you can still measure the degree
of correspondence between goal, action, and outcome using the same
measurement device. Even if you change a supergoal, you can still
obtain a consistent measure of the overall outcome, telling you
whether the change was rational in the context of the other goals.
However, once you change the measurement device itself, you can no
longer tell whether a supergoal change brought the whole system
closer to or farther from its initial goals.
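
With invented numbers, the point can be put like this: while the
scoring function is held fixed, comparisons across a change are
meaningful; once the scoring function itself changes, they are not:

    # The "measurement device": a fixed scoring of outcomes against goals.
    def measure_v1(outcome):
        return outcome.get("health", 0) + outcome.get("knowledge", 0)

    before = {"health": 5, "knowledge": 3}
    after_subgoal_change = {"health": 6, "knowledge": 3}

    # Same device on both sides: the comparison is meaningful.
    print(measure_v1(after_subgoal_change) > measure_v1(before))  # True

    # Now suppose the change replaces the device itself:
    def measure_v2(outcome):
        return outcome.get("novelty", 0)

    after_device_change = {"novelty": 9}
    # Comparing measure_v2(after_device_change) with measure_v1(before)
    # pits scores from two different instruments against each other;
    # the numbers are no longer commensurable.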

It is only with the greatest trepidation and unease that I would
contemplate such an intervention...

Rafal


