[ExI] Is AGI development going to destroy humanity?

BillK pharos at gmail.com
Sat Apr 2 11:03:09 UTC 2022


MIRI announces new "Death With Dignity" strategy
by Eliezer Yudkowsky, 2nd Apr 2022

(The Machine Intelligence Research Institute is a non-profit research
organization devoted to reducing existential risk from unfriendly
artificial intelligence and to understanding problems related to
friendly artificial intelligence. Eliezer Yudkowsky was one of its
early founders and continues to work there as a Research Fellow.)

<https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy>

(This article doesn't appear to be an April Fool's joke. Eliezer seems
to have reached the conclusion that AGI development is going to
destroy humanity.  BillK)

Quotes:
It's obvious at this point that humanity isn't going to solve the
alignment problem, or even try very hard, or even go out with much of
a fight.  Since survival is unattainable, we should shift the focus of
our efforts to helping humanity die with slightly more dignity.

It is more dignified for humanity - a better look on our tombstone -
if we die after the management of the AGI project was heroically
warned of the dangers but came up with totally reasonable reasons to
go ahead anyways.

But compared to being part of a species that walks forward completely
oblivious into the whirling propeller blades, with nobody having seen
it at all or made any effort to stop it, it is dying with a little
more dignity, if anyone knew at all.  You can feel a little
incrementally prouder to have died as part of a species like that, if
maybe not proud in absolute terms.
--------------

BillK

