[extropy-chat] Fools building AIs (was: Tyranny in place)
michaelanissimov at gmail.com
Fri Oct 6 21:37:34 UTC 2006
On 10/5/06, Ben Goertzel <ben at goertzel.org> wrote:
> I asked a friend recently what he thought about the prospect of
> superhuman beings annihilating humans to make themselves more
> processing substrate...
> His response: "It's about time."
> When asked what he meant, he said simply, "We're not so great, and
> we've been dominating the planet long enough. Let something better
> have its turn."
I hope you're not talking about Hugo de Garis here...
> I do not see why this attitude is inconsistent with a deep
> understanding of the nature of intelligence, and a profound rationality.
It isn't inconsistent with those things, but neither are a lot of other
attitudes. I can have a deep understanding of the nature of
intelligence, and a profound rationality, and still spend my days as a
pedophile stalking grade school children... or working on a mathematical
problem with zero expected value when there are other opportunities
with great value... or whatever.
The problem with rationality and understanding is that they can be
coupled to something like 2^10^17 goal systems/attitudes, or more,
sometimes making them meaningless in the context of examining goals.
The problem is that the words "understanding" and "rationality" are
frequently value-loaded, when to make things simpler we should use
them just to describe the ability to better predict the next blip of
experience.
A better question might be, "as rationality increases asymptotically,
does a generic human goal system have the urge to eliminate humans by
replacing them with something better?" If the answer is "yes", then
CEV, if implemented, would wipe everyone out, as would a human upload,
and as might a Joyous Growth AI (not that I understand the last one
too well...) In which case, it would be prudent to try a different
approach.
I personally happen to think that the position of your friend is
inconsistent with profound rationality and understanding of
intelligence. Genocide is genocide, even if you replace the victims
with ubervariants. Justify mass murder in the name of global
improvement, and you might as well be practicing your sieg heils in
the mirror.
Part of the problem with discouraging people from building UFAIs is that
no one will be around to hold them responsible if they actually do it.
Lifeboat Foundation http://lifeboat.com