[extropy-chat] Criticizing One's Own Goals---Rational?

Jef Allbright jef at jefallbright.net
Thu Dec 7 18:16:12 UTC 2006


Ben Goertzel wrote:

> The relationship between rationality and goals is fairly 
> subtle, and something I have been thinking about recently.... 

Ben, as you know, I admire and appreciate your thinking, but I have
always perceived an "inside-outness" in your approach (which we have
discussed before), in that your descriptions of mind always seem (to me)
to begin from a point of pre-existing awareness.  (I can think of
specific objections to the preceding statement myself, but in the
interest of expediency in this low-bandwidth discussion medium, I would
ask that you suspend immediate objections and look for the general point
I am trying to make.)

It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very "narrow-AI" approach and destined to fail in general
application.  Why?  Because to conceive of a goal requires a perspective
outside of and encompassing the goal system.  We can speak in a valid
way about the goals of a system, or the goals of a person, but it is
always from a perspective outside of that system.

It seems to me that a better functional description is based on
"values", more specifically the eigenvectors and eigenvalues of a highly
multidimensional model *inside the agent*, which drive its behavior in a
very simple way:  It acts to reduce the difference between the internal
model and perceived reality.  [The hard part is how to evolve these
recursively self-modifying patterns of behavior without requiring
natural evolutionary time scales.]  Goals thus emerge as third-party
descriptions of behavior, or even as the agent's post hoc internal
explanations or rationalizations of its own behavior, but they don't
merit the status of fundamental drivers of that behavior.
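
To make this concrete, here is a toy numerical sketch in Python.  It is
my own illustration, with an arbitrary quadratic "value" surface and a
made-up eight-dimensional state space, not a claim about any particular
architecture: the agent's "values" are the eigen-directions of a
weighting matrix, and its only rule is to shrink the weighted gap
between its internal model and perceived reality.

import numpy as np

rng = np.random.default_rng(0)
dim = 8                              # toy state space

A = rng.normal(size=(dim, dim))
W = A @ A.T                          # symmetric "value" weighting over the state space

# The agent's "values": eigenvectors are the directions it cares about,
# eigenvalues are how strongly it cares about each.
eigenvalues, eigenvectors = np.linalg.eigh(W)

model = rng.normal(size=dim)         # internal model of how the world "should" be
world = rng.normal(size=dim)         # perceived reality
step = 1.0 / eigenvalues.max()       # stable step size for this quadratic surface

def act(world):
    """Nudge the world so as to shrink the value-weighted model/reality gap."""
    error = world - model
    return world - step * (W @ error)    # gradient step on 0.5 * error @ W @ error

for _ in range(500):
    world = act(world)

print(np.linalg.norm(world - model))     # the gap shrinks, yet no explicit goal appears anywhere

Nothing in that loop mentions a goal; an outside observer could still
describe the agent as "trying to make the world match its model."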

Does this make sense to you?  I've been saying this for years, but have
never gotten even a "huh?", let alone a "duh."  ;-)

- Jef


>  To address the issue I will introduce a series of concepts 
> related to goals.
> 
> SUPERGOALS VERSUS SUBGOALS
> ---------------------------------------------------
> 
> A supergoal is defined as a goal of a system that is not, to any 
> significant extent, a subgoal of any other goal of that system.
> 
> With this in mind, regarding creation and erasure of goals, 
> there are two aspects which I prefer to separate:
> 
> 1) optimizing the set of subgoals chosen in pursuit of a 
> given set of supergoals.  This is well-studied in computer 
> science and operations research.  Not easy computationally or 
> emotionally, but conceptually straightforward to understand.
> 
> 2) optimizing the set of supergoals.  This is a far, far 
> subtler thing.
> 
> Supergoal optimization must be understood from a perspective 
> of dynamical systems theory, not from a perspective of logic.
> 
> A strongly self-modifying AI system will be able to alter its 
> own supergoals....  So can a human, to an extent, with a lot 
> of effort....
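
(As a purely illustrative reading of that definition, and my own toy
Python fragment rather than Ben's formalism: picture the goals as nodes,
record for each goal which goals it is a subgoal of, and the supergoals
are simply the goals with no parents.  This ignores the "to a
significant extent" weighting; a real goal system would make these
relations graded rather than binary.)

# Toy goal graph: each goal maps to the goals it is a subgoal of.
subgoal_of = {
    "study physics":           ["achieve greatness"],
    "pass exams":              ["study physics"],
    "drink":                   ["drink yourself to death"],
    "achieve greatness":       [],
    "drink yourself to death": [],
}

def supergoals(subgoal_of):
    """Goals that are not a subgoal of any other goal."""
    return [goal for goal, parents in subgoal_of.items() if not parents]

print(supergoals(subgoal_of))   # ['achieve greatness', 'drink yourself to death']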
> 
> EXPLICIT VERSUS IMPLICIT GOALS
> ----------------------------------------------------
> 
> Next, I think it is worthwhile to distinguish two kinds of goals:
> -- explicit goals: those that a system believes it is pursuing
> -- implicit goals: those that a system acts as if it is pursuing
> 
> Definition: a "coherent goal achiever" is one whose implicit 
> goals and explicit goals are basically the same.
> 
> What is interesting, then, is the dynamics of coherent goal 
> achievers that are also strongly enough self-modifying to 
> modify their supergoals....  In this case, what properties 
> control the evolution of the supergoal-set over time?  This 
> is closely related to Friendly AI, of course....
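
(One crude way to picture that coherence, again my own toy illustration
rather than Ben's definition: compare the goals a system says it is
pursuing with the goals an observer would infer from its behavior, and
measure the overlap.)

def coherence(explicit_goals, implicit_goals):
    """Overlap between explicit and implicit goal sets (1.0 = fully coherent)."""
    explicit_goals, implicit_goals = set(explicit_goals), set(implicit_goals)
    if not explicit_goals and not implicit_goals:
        return 1.0
    return len(explicit_goals & implicit_goals) / len(explicit_goals | implicit_goals)

print(coherence({"get fit", "write a book"},
                {"get fit", "browse the web"}))   # 0.33..., not a very coherent goal achiever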
> 
> META-GOALS
> --------------------
> 
> Next, there is the notion of a "meta-goal", a supergoal  
> designed to coexist with other supergoals and to regulate the 
> process of supergoal creation/erasure/modification.
> 
> For instance, a friend of mine has a meta-goal of streamlining 
> and simplifying his set of supergoals.  I have a meta-goal of 
> making sure my various sometimes-contradictory supergoals all 
> cooperate with each other in an open and friendly way, rather 
> than being competitive and adversarial.
> 
> RATIONALITY AND GOALS
> ---------------------------------------
> 
> To me, rationality has two aspects:
> 
> 1) how effectively one achieves one's explicit goals, given 
> the constraints imposed by the resources at one's disposal.
> 
> 2) how coherent one is as a goal-achiever (implicit goals = 
> explicit goals)
> 
> IMO, revising one's supergoal set is a complex dynamic process that is
> **orthogonal** to rationality.  I suppose that Nietzsche 
> understood this, though he phrased it quite differently.  His 
> notion of "revaluation of all values" is certainly closely 
> tied to the notion of supergoal-set refinement/modification....
> 
> Refining the goal hierarchy underlying a given set of 
> supergoals is a necessary part of rationality, but IMO that's 
> a different sort of process...
> 
> In general, it would seem important to be aware of when you 
> are non-rationally revising a supergoal versus "merely" 
> rationally modifying the set of subgoals used to achieve some 
> supergoal.  And yet, the two processes are very closely tied together.
> 
> SUBGOAL PROMOTION AND ALIENATION
> ------------------------------------------------------------
> 
> One very common phenomenon occurs when a supergoal is erased but 
> one of its subgoals is promoted to the level of a supergoal.  
> For instance, one may originally become interested in science 
> as a subgoal of achieving greatness, but later decide that 
> greatness is childish and silly, yet retain the goal of being 
> a scientist (now as a supergoal rather than a subgoal).
> 
> When subgoal promotion happens unintentionally it is called 
> subgoal "alienation."  This happens because minds are not 
> fully self-aware.  A supergoal may be erased without all 
> subgoals that it spawned being erased along with it.  So, 
> e.g. even though you give up your supergoal of drinking 
> yourself to death, you may involuntarily retain your subgoal 
> of drinking (even though you started doing it only out of a 
> desire to drink yourself to death).
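
(A toy sketch of that failure mode, mine rather than Ben's, and
obviously a mind is not a dictionary: erase the supergoal without
cleaning up the subgoals it spawned, and the orphaned subgoal quietly
becomes a supergoal.)

# Toy goal graph from Ben's example.
subgoal_of = {
    "drink yourself to death": [],
    "drink": ["drink yourself to death"],
}

def erase(goal, subgoal_of):
    """Erase a goal but, like a mind that is not fully self-aware,
    leave behind the subgoals it spawned."""
    subgoal_of.pop(goal, None)
    for parents in subgoal_of.values():
        if goal in parents:
            parents.remove(goal)

erase("drink yourself to death", subgoal_of)
# The orphaned subgoal now has no parent, i.e. it has been promoted:
print([goal for goal, parents in subgoal_of.items() if not parents])   # ['drink']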
> 
> -- Ben G