[extropy-chat] Fools building AIs

Ben Goertzel ben at goertzel.org
Fri Oct 6 07:15:39 UTC 2006


Eli wrote:
> > I do not see why this attitude is inconsistent with a deep
> > understanding of the nature of intelligence, and a profound
> > rationality.
>
> Hell, Ben, you aren't willing to say publicly that Arthur T. Murray is a
> fool.  In general you seem very reluctant to admit that certain things
> are inconsistent with rationality - a charity level that *I* think is
> inconsistent with rationality.

Heh.... Eli, this is a very humorous statement about me, and one that
definitely would not be written by anyone who knew me well in person!!
I have to say that making this kind of generalization about me, based
on your very limited knowledge of me as a human being, is rather
irrational on your part ;-) ... My ex-wife would **really** get a kick
out of the idea that I am "reluctant to admit that certain things are
inconsistent with rationality" ;-)

I think there are a LOT of things that are inconsistent with
rationality ... though it's also true that some humans can be highly
rational in some domains and highly irrational in others, and
effectively maintain a strict separation between the domains.  (For
example, I know some excellent scientists who are also deeply
religious, but separate the two domains verrry strictly so their
rationality in science is not pragmatically affected by their
irrationality in personal and spiritual life.)

However, I don't think that advocating the creation of superhuman AI,
even in the face of considerable risk that it will annihilate
humanity, is **irrational**.  It is simply a choice of goals different
from your own currently preferred choice, Eliezer.

>   -- "Cognitive biases potentially affecting judgment of global risks"
>
> I seriously doubt that your friend is processing that question with the
> same part of his brain that he uses to decide e.g. whether to
> deliberately drive into oncoming traffic or throw his three-year-old
> daughter off a hotel balcony.

No, but so what?

The part of his mind that decides whether to throw someone off a
balcony or to drive into traffic is his EMOTIONS ... the part of his
mind that decides whether a potentially dangerous superhuman AI should
be allowed to be created is his REASON, which makes judgments more
dispassionately, based on less personal and emotional aspects of his
value system...

It is not logically inconsistent to

a) value being alive more than being dead, and
b) value a superhuman AI's life more than the human race's life.

> Your friend, I suspect, is carrying out a form of non-extensional
> reasoning which consists of reacting to verbal descriptions of events
> quite differently than how he would react to witnessing even a small
> sample of the human deaths involved.

Well, but why do you consider it irrational for someone to make a
considered judgment that contradicts their primal emotional reactions?

In this case, the person may just be making a decision to adopt a
supergoal that contradicts their emotional reactions, even though they
are not able to actually extinguish those reactions...

> But a skilled rationalist who knows about extensional neglect is not
> likely to endorse the destruction of Earth unless they also endorse the
> death of Human_1, Human_2, Human_3, ... themselves, their mother, ...
> Human_6e9.

But it is quite consistent to endorse the destruction of all these
humans (individually or en masse) IN EXCHANGE FOR AN ALTERNATIVE
PERCEIVED AS BETTER, while not endorsing the destruction of all these
humans FOR NO REASON AT ALL ...

> Also I expect that your friend is making a mistake of simple fact, with
> respect to what kind of superhumans these are likely to be - he thinks
> they'll be better just because they've got more processing power, an old
> old mistake I once made myself.

No, he is not making this mistake.  He thinks they'll be better
because he thinks our evolutionary "design" sucks, and that
appropriately engineered AI systems can be ethically as well as
intellectually superior by design...

Ben


