[extropy-chat] What Human Minds Will Eventually Do

Lee Corbin lcorbin at tsoft.com
Fri Jun 30 05:37:59 UTC 2006


On Tuesday, June 27, at 10:57 Russell wrote:

> On 6/28/06, Lee Corbin <lcorbin at tsoft.com> wrote:
> > I would aim for "The Hedonistic Imperative" goal of getting completely
> > rid of suffering.

> So you'd adjust yourself to never feel pain, boredom or any 
> other form of suffering in any situation?

That's right. Consider boredom, for example. First, recall that it
is not a "passive" phenomenon; it was specifically built in to
generate a certain kind of uneasiness in an organism. I think its
point has been to serve as a warning: through trial and error, an
organism's ancestors "found" that a lack of certain kinds of
stimulation did not lead to sufficient procreation of viable
offspring.

(That is, there was a tendency for organisms to survive only if they
happened---by chance, originally---to have such misgivings about
what they were doing.)

Now this decision---"Hey, you, this is not good enough! Move on!"---
will sometimes be "right" and sometimes be "wrong". (Neither we nor
human nature is wise enough to always guess correctly.) Here, the
quotes indicate that we may judge by our own consistent valuations
(i.e. values), or we may ultimately defer to foreign value systems;
either way, these are values that don't happen to be the ones that
afforded our ancestors a great many descendants.

Now just *who* should be deciding things like that about me?
Me or my goddamn genes?  Vaguely reminds me of the government...

To give vast credit to Nature, and to be convinced that "Nature
knows best" and that therefore, if I'm bored, what I'm doing
must be grossly non-optimal, is just plain a dereliction of
intelligence!

The case of pain is very much the same: sure, I do appreciate
it (given my own stupidity) when I'm punished for no little
time for having banged my most valuable member on an open
cabinet door; that helps me remember "don't do that".
But SURELY there will come a time when I need only assign a
constraint to the location of my VR limbs so that that doesn't
happen.
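
A minimal sketch of what I mean, in Python, with every name (the
keep-out box, the clamp routine, the coordinates) invented purely
for illustration: the idea is just that a hard positional
constraint can stand in for the pain signal.

    # Hypothetical: instead of a pain signal on collision, clamp any
    # requested limb position so it never ends up inside the keep-out
    # volume around, say, an open cabinet door.

    def clamp_outside_box(p, box_lo, box_hi):
        """Return p unchanged if it lies outside the axis-aligned box;
        otherwise push it out through the nearest face."""
        if not all(box_lo[i] < p[i] < box_hi[i] for i in range(3)):
            return p
        # penetration depth toward the nearer face along each axis
        depth = [min(p[i] - box_lo[i], box_hi[i] - p[i]) for i in range(3)]
        axis = depth.index(min(depth))
        q = list(p)
        q[axis] = (box_lo[axis]
                   if p[axis] - box_lo[axis] <= box_hi[axis] - p[axis]
                   else box_hi[axis])
        return tuple(q)

    # A requested hand position inside the door's keep-out volume
    # gets silently corrected; nothing has to hurt.
    door_lo, door_hi = (0.0, 0.0, 0.0), (0.6, 0.02, 1.8)
    print(clamp_outside_box((0.3, 0.01, 1.0), door_lo, door_hi))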

> Okay. Would you adjust yourself to be equally happy in all
> situations?

Well... yes, if there were a /greatest possible happiness/. But
there isn't. So it must be an ongoing research project: how I
may pass through humanly possible states of greater and greater
joy, ecstasy, contentment, satisfaction, and pleasure (and every
other pleasant state we can fabricate, such as Eugen's "<wr54334543>"
(Extropian post on 6/26)).

> If so, how would you solve the problem that you would then
> have no motive to do anything more complicated than sitting
> there staring at the wall?

My car actually has "no motive" to move down the road, except
in the sense that it was designed to. Having decided upon some
favorable course of action, e.g. gratification research, I can
program myself to stick to it (some drugs already do this, but
aren't very flexible). Naturally, there are risks. It may be
that my evaluation function, after eons (i.e. seconds) of such
activity, turns out to have a flaw, and I spend who knows how
long doing something stupid.
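
Another purely illustrative sketch, with all names invented:
"programming myself to stick to it" could be read as freezing an
evaluation function and always taking whatever it rates highest.
If the frozen function has a flaw, the loop below does something
stupid indefinitely.

    # Hypothetical: a committed agent that never re-examines its
    # frozen evaluation function.  The bug (overrating a degenerate
    # state) is exactly the risk described above.

    def frozen_evaluation(option):
        scores = {
            "gratification_research": 0.9,
            "learn_more_mathematics": 0.8,
            "stare_at_wall": 1.1,   # the flaw
        }
        return scores.get(option, 0.0)

    def committed_agent(options, steps):
        history = []
        for _ in range(steps):
            choice = max(options, key=frozen_evaluation)
            history.append(choice)  # the evaluator itself is never revised
        return history

    print(committed_agent(["gratification_research",
                           "learn_more_mathematics",
                           "stare_at_wall"], 5))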

> If not, then presumably you would adjust yourself to be happier
> doing some things than others. In which case my original
> question stands: What things would you adjust yourself
> to be happiest doing, and why?

1. "To delight in understanding", which implies
2. greater understanding of the universe (e.g. science)
3. ultimately, mathematics alone (if GUTs are found, and
   other questions eventually answered), but this is
   really a part of (2), of course.
4. constantly re-engineering myself (searching for better
   and better algorithms) to further (1), (2), and (3),
   i.e. gratification research.

Now a key part of this is the ambiguity (danger, even)
lurking in your question "What things would *you*
adjust yourself...", because I can only address what
close duplicates of me will probably do. I cannot
fathom, of course, what the more advanced versions will do.
As I said before, it will be all that we can do to
make sure our more advanced versions give us plenty
of runtime.

Lee



