[extropy-chat] Fools building AIs (was: Tyranny in place)

Ben Goertzel ben at goertzel.org
Sat Oct 7 03:32:16 UTC 2006


Hi,

> > When asked what he meant, he said simply, "We're not so great, and
> > we've been dominating the planet long enough.  Let something better
> > have its turn."
>
> I hope you're not talking about Hugo de Garis here...


No, the person I was referring to is not Hugo; he is not famous and has
not published his views on the Singularity...

> > I do not see why this attitude is inconsistent with a deep
> > understanding of the nature of intelligence, and a profound
> > rationality.
>
> It isn't inconsistent with those things, but neither are a lot of
> attitudes.  I can have a deep understanding of the nature of
> intelligence, and a profound rationality, and still spend my days as a
> pedophile stalking grade school children... or work on a mathematical
> problem with zero expected value when there are other opportunities
> with great value...or whatever.
>
> The problem with rationality and understanding is that they can be
> coupled to something like 2^10^17 goal systems/attitudes, or more,
> sometimes making them meaningless in the context of examining goals.
> The problem is that the phrases "understanding" and "rationality" are
> frequently value-loaded, when to make things simpler we should use
> them just to describe the ability to better predict the next blip of
> perceptual input.

Thanks.  That is exactly the point I was trying to make.

> A better question might be, "as rationality increases asymptotically,
> does a generic human goal system have the urge to eliminate humans by
> replacing them with something better?"

I don't really believe in the idea of a "generic human goal system."
It seems that some human goal systems, if pursued consistently, would
lead to this conclusion, whereas others would not...

> I personally happen to think that the position of your friend is
> inconsistent with profound rationality and understanding of
> intelligence.

Can you explain why you think this?  This statement seems inconsistent
with your own discussion of rationality, above.

I stress that I am opposed to the annihilation of humanity!  I am just
making the very basic point that a value judgement like this has
nothing to do with rationality... rationality is about logical
consistency and optimal goal pursuit, not about what one's values and
goals are.  So long as one's goals are not logically inconsistent,
they are consistent with rationality...

-- Ben
