[ExI] on inflation in long term thinking

Lee Corbin lcorbin at rawbw.com
Sun Aug 5 19:59:16 UTC 2007


Russell writes

> Samantha wrote
> 
> > What bothers me is the implicit notion rational decision making requires
> > maximal extension of hypotheticals.

This is a fine point to always bring up---even if just as a reminder.
And I really commend your phrasing.

> > None of us have any real idea whether humanity or its descendants
> > have a future beyond this planet, solar system or local galactic
> > neighborhood.  That we might perhaps become or create near-gods...
> > But is it really rational to judge risk to humanity as equating to a major
> > risk to the entire universe?

First, just to be clear: we're talking about a finite portion of the
visible universe, because, with the accelerating expansion, most
of what we see can't and won't ever be within reach of our civilization.

In principle, I do consider it rational to so judge. Unless it benefits
someone, there isn't much use to the universe (a truism). The
*real* question is pursued further by you and Russell later.

> > Do we judge a human being not just on his own character and
> > likely potential but on the potential of all those myriad of beings
> > he might possibly be an ancestor to plus all those artificial beings...

To be precise, again, "judge" probably isn't the word you need, because
blame and punishment still must be accorded to individuals as their
capacities and actions call for. But in my opinion, yes indeed,
their *value* must include the contribution they'll make towards
converting dead matter into living matter in the long run, multiplied
by the probability, of course, that they'll actually do so.

> > So what is the proper means of cleaning this up?  How is it
> > properly delimited to something actually useful?  Am I missing
> > something?

Missing something?  No, I think that caution is commendable.
But I do suspect that there may lurk here real differences in
*values* between you and Russell, on the one hand, and me,
Bostrom, Yudkowsky, and the usual suspects on the other.
Or it may simply reduce to our unavoidable tendency to assign
probabilities to the various possibilities differently from one
another.

Russell then answers

> I can see the philosophical justification for it, but I agree with you
> that it's not useful. In practice, following that train of thought just
> leads us into a state of mind where we're not thinking straight; [!]
> we end up letting fear and despair make our decisions for us,

Oh, come now  :-)

> and in that condition we flinch away from (not rationally guard against,
> but flinch away from) that which _appears_ dangerous - and likely
> as not, right into the jaws of that which truly _is_ dangerous. 

If you think, for example, that global warming is a dire threat,
and I don't, it doesn't follow that you aren't thinking straight
or are "letting fear and despair make our decisions for us".
(Well, yes, probably that's true of *some* people, but it is
downright un-Christian to assume that it's true for all.)

> I can't speak for everyone, but for myself I've decided the best approach is:

> 1) I acknowledge I cannot know what will happen in the distant future.

Good.  Applies, as ready examples, both to a deadly and fast
AI takeoff (which I fear) and to catastrophic global warming
(which I don't).

> 2) That doesn't mean I can't hope. The hope of future wonders
> can't provide detailed guidance for the here and now, but it can
> provide inspiration. 

Quite so.

> 3) My scope for action extends over the next few years, maybe
> couple of decades, and that is the timescale on which I make plans.

I guess I have to agree with you here, and pull back a little on
my own enthusiasm for worrying too much about, say, "the
Singularity".  While, yes, I'm very glad that there are some people
worrying about it as their profession, it has also occurred to me
recently that everything is just too unpredictable.

(I read a few pages of "The Black Swan", I think it's called, which
has some very telling anecdotes about making overly detailed plans
for an all-too-uncertain future.  And reading prognostications
that are even four years old, e.g. "Robotic Nation", shows how
very quickly our guesses become outdated.)

Can't some agreement be reached here simply by recognizing that
each of us assigns different probabilities to the various risks?  In
other words, is anything really new here?

Lee



