<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000">I cannot seem to find a way to search for AGI without getting 'adjusted gross income'. Is there a way? Just what is AGI? bill w</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Apr 2, 2022 at 8:56 AM spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
-----Original Message-----<br>
From: extropy-chat <<a href="mailto:extropy-chat-bounces@lists.extropy.org" target="_blank">extropy-chat-bounces@lists.extropy.org</a>> On Behalf Of<br>
BillK via extropy-chat<br>
Sent: Saturday, 2 April, 2022 4:03 AM<br>
To: Extropy Chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>><br>
Cc: BillK <<a href="mailto:pharos@gmail.com" target="_blank">pharos@gmail.com</a>><br>
Subject: [ExI] Is AGI development going to destroy humanity?<br>
<br>
MIRI announces new "Death With Dignity" strategy<br>
by Eliezer Yudkowsky 2nd Apr 2022<br>
<br>
>...<br>
<br>
<<a href="https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy" rel="noreferrer" target="_blank">https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy</a>><br>
<br>
>...(This article doesn't appear to be an April Fool's joke. Eliezer seems<br>
to have reached the conclusion that AGI development is going to<br>
destroy humanity. BillK)<br>
<br>
Quotes:<br>
>...It's obvious at this point that humanity isn't going to solve the<br>
alignment problem, or even try very hard, or even go out with much of a<br>
fight. Since survival is unattainable, we should shift the focus of our<br>
efforts to helping humanity die with slightly more dignity...<br>
--------------<br>
<br>
BillK<br>
<br>
<br>
<br>
BillK, this article is classic Eliezer. We have known him personally since<br>
we first met him at a local Foresight Institute conference when he was 18,<br>
24 years ago now. He is far more convinced than most of us that unfriendly<br>
AI will destroy humanity. He has made a number of converts to that view,<br>
including some bright stars on ExI; I am not among them, however. I am very<br>
impressed with his talent and writing skills, his analysis and so forth, but<br>
I have reasons to doubt the notion of unfriendly AI destroying humanity.<br>
<br>
I am eager to discuss the notion in this forum, as we did extensively in the<br>
90s, for we have a lot more data now, including a critical data point: BI<br>
(biological intelligence, as opposed to artificial) has tried software<br>
weapons of mass destruction and keeps trying them constantly; however... we<br>
are still here, stubborn survivors insisting we are getting better:<br>
<br>
<a href="https://www.youtube.com/watch?v=Jdf5EXo6I68" rel="noreferrer" target="_blank">https://www.youtube.com/watch?v=Jdf5EXo6I68</a><br>
<br>
even as some clearly talented AI experts continue promoting their own<br>
plausible theories, like Anne Elk.<br>
<br>
I can see bigger threats to humanity than runaway unfriendly AI, but none of<br>
these threats will destroy all of humankind. In all of the grim scenarios I<br>
can easily foresee, most of the African continent survives.<br>
<br>
spike<br>
<br>
<br>
<br>
</blockquote></div>