[extropy-chat] Criticizing One's Own Goals---Rational?

Bo Morgan neptune at MIT.EDU
Wed Dec 6 16:22:07 UTC 2006


Hello Extropy-Chat,

I've been enjoying the recent chats.  My thanks to those who have 
organized this.

About canceling goals: Marvin Minsky's new book, The_Emotion_Machine, 
lays out in detail how a computational system for achieving human goals 
uses many layers of Critics, whose primary role is to watch for a subgoal 
that is failing and to quickly either debug or suppress it.  On the other 
end of the playing field, Selectors play the role of activating subgoals, 
and it is the interplay between these two types of computational agents 
that yields... a computational model of mind.  I'm not sure how a 
computational model of mind relates to the more philosophical concern of 
rationality, though an implemented computational model would certainly 
have practical utility.
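
For concreteness, here is a toy Python sketch of what that Critic/Selector 
interplay might look like.  The class names, the numeric "progress" measure, 
and the suppression rule are all my own guesses for illustration, not 
anything taken from Minsky's book:

    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        progress: float = 0.0    # crude, hypothetical measure of how the goal is going
        active: bool = False
        suppressed: bool = False

    class Selector:
        """Activates subgoals that are not currently suppressed."""
        def select(self, goals):
            for g in goals:
                if not g.suppressed and not g.active:
                    g.active = True
                    print(f"Selector: activating '{g.name}'")

    class Critic:
        """Watches for failing subgoals and suppresses them."""
        def __init__(self, failure_threshold=0.2):
            self.failure_threshold = failure_threshold

        def review(self, goals):
            for g in goals:
                if g.active and g.progress < self.failure_threshold:
                    # A fuller model would first try to debug the goal;
                    # this sketch only implements suppression.
                    g.active = False
                    g.suppressed = True
                    print(f"Critic: suppressing failing goal '{g.name}'")

    # One round of the interplay between the two kinds of agents.
    goals = [Goal("seek happiness", progress=0.1),
             Goal("avoid unhappiness", progress=0.8)]
    selector, critic = Selector(), Critic()
    selector.select(goals)
    critic.review(goals)

Run once, the Selector activates both goals and the Critic then suppresses 
the one making too little progress, leaving the other active.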

I'm very interested in computational models of goals.  Any good 
A.I.-relevant pointers for reading?  Thanks!

Bo

On Wed, 6 Dec 2006, Lee Corbin wrote:

) Rafal continued in a way that I couldn't quite connect up with
) what had gone before, but which was nevertheless most interesting:
) 
) > If there was a goal "seek happiness" in my then sophomore mind
) > a long time ago, it was erased upon noticing that happiness appears
) > to be the subjective aspect of certain computations within, most
) > notably, the cingulate and insular cortices and the nucleus accumbens.
) > Why bother doing such computations?
) 
) What!?  How can awareness of the mechanics of a process interfere
) with your appreciation of it? Recall how Dawkins or Sagan would take 
) exactly the opposite tack with regard to artistic or aesthetic appreciation
) of our world:  the fact that we know scientifically what is going on beneath
) the surface ought not to have any effect on our appreciation, unless it be
) an enhancing one.
) 
) Why bother doing *any* computation?  That is, suppose that you 
) uncovered the precise mechanism responsible for your affections 
) towards your family;  would this immediately imperil the desirability
) to you of those computations?   So what if we know how happiness
) works: I cannot fathom why this would make it any less desirable.
) 
) > Somehow
) > that goal didn't have an alarm system that would respond to such an
) > iconoclastic question, and it was suppressed. On the other hand, other
) > goals, like "avoid unhappiness", have a strong direct line in my mind to
) > the cognitive faculties, so these goals are suppressed only mildly. It's
) > dangerous for a goal to mess with itself.
) 
) Please explain why "avoiding unhappiness" has a stronger link to
) your cognitive faculties than does seeking happiness?  Or, if this
) is simply a fact, do you try to justify it at all?
) 
) > If rationality is using cognition to find ways of achieving goals, then
) > using cognition to erase goals would be irrational.
) 
) I'm totally baffled here too:  suppose X is a goal that you have
) (e.g. you want to kill the sonofabitch that just cut you off in traffic,
) and the .45 magnum you keep under the seat is still loaded),
) surely it is not irrational to hold this goal, or any other goal,
) up to the light of the rest of your memes and instincts and subject
) it to criticism.  Why, in many cases, that's the *whole* idea:  I
) wish to criticize my goals as much as my conjectures, with the
) explicit meta-goal of eliminating certain unsatisfactory goals.
) 
) The remainder here seems unproblematical, except for the
) remark about "many-worlds".  I would demur from the claim
) that the *urge* for self-preservation is in any way itself 
) affected.  What changes, rather, is the realization that self-
) preservation may be achieved in non-obvious or non-customary
) ways.
) 
) Lee
) 
) > On the other hand,
) > given the haphazard nature of our goal systems, consisting of a bunch of
) > drives hastily (ca. 500 million years) slapped together by evolution,
) > pruning some goals is almost always necessary to allow other goals to be
) > achieved (I am referring to consciously shaping your goals over long
) > periods of time, not to the simpler process of temporary suppression of
) > goals, such as "relieve bladder pressure", under certain circumstances).
) > Therefore, I would hold that self-consideration is an indispensable, if
) > dangerous, part of long-term rationality.
) > 
) > Furthermore, it is fascinating how the simple emotional images that
) > constitute our initial goals are transformed by cogitation about some of
) > the most advanced concepts in physics or neuroscience. On our list we
) > can observe what happens to the urge for self-preservation after
) > considering the many-worlds interpretation of QM, or the concept of
) > uploading. We have the intellectual means to delve much deeper into what
) > we really want than in the times when self-preservation meant simply
) > running faster than the tiger.
) 
) 
) _______________________________________________
) extropy-chat mailing list
) extropy-chat at lists.extropy.org
) http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
) 


