[extropy-chat] Fools building AIs

Ben Goertzel ben at goertzel.org
Fri Oct 6 15:48:17 UTC 2006


> > For instance, you have repeatedly claimed quite confidently that "any
> > AI not provably Friendly is very very likely to wind up extremely
> > unFriendly", without ever presenting any remotely convincing reasoning
> > in favor of this contention ;-)
> The point is that a population of uncaring high-fitness beings will cause
> extinctions on a very large scale. We do. I wouldn't call it unfriendly,
> because unfriendly means to actually seek out and terminate with extreme
> prejudice. If you're missing reasoning for that, then I
> suggest you look into evolutionary biology and human history,
> and take a look out of your window.

You are missing my point ... there is a difference between

a) not provably caring


b) uncaring

I agree that a superhuman AI that doesn't give a shit about us is
reasonably likely to be dangerous.  What I don't see is why Eliezer
thinks an AI that is apparently not likely to be dangerous, but about
whose benevolence it's apparently formidably difficult to construct a
formal proof, is highly likely to be dangerous.

I also think that looking to evolutionary biology for guidance about
superhuman AIs is a mistake, BTW.

> Most people tend to value life. I would call a person who's indifferent
> or hostile to his own well-being at least slightly pathological.
> I don't see why you're so fixated on that rational thing, whatever that is.

This thread began as a discussion of whether or not rationality rules
out a certain attitude toward the preservation of human life.

I don't find it accurate to say that I'm fixated on rationality,
though I do consider it important.

-- Ben
