<div dir="ltr">Right, <br>And if we made an AI that is misaligned then maybe we do deserve to be taken out.<div>Kidding but I'm also serious. I trust intelligence == good. <br>Giovanni </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Mar 30, 2023 at 1:54 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Mar 30, 2023, 2:48 PM Darin Sunley via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/" rel="noreferrer" target="_blank">https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/</a><br></div><div><br></div>We live in a timeline where Eliezer Yudkowsky just got published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears.<div><br></div><div>Without commencing on the arguments on either side or the qualities thereof, can I just say how f*cking BONKERS that is?!</div><div><br></div><div>This is the sort of thing that damages my already very put upon and rapidly deteriorating suspension of disbelief.</div><div><br></div><div>If you sent 25-years-ago-me the single sentence "In 2023, Eliezer Yudkowsky will get published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears." I would probably have concluded I was already in a simulation.</div><div><br></div><div>And I'm not certain I would have been wrong.</div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">It is a sign of the times that these conversations are now reaching these outlets.</div><div dir="auto"><br></div><div dir="auto">I think "alignment" generally insoluble because each next higher level of AI faces its own "alignment problem" for the next smarter AI. How can we at level 0, ensure that our solution for level 1, continues on through levels 2 - 99?</div><div dir="auto"><br></div><div dir="auto">Moreover presuming alignment can be solved presumes our existing values are correct and no greater intelligence will ever disagree with them or find a higher truth. So either our values are correct and we don't need to worry about alignment or they are incorrect, and a later greater intelligence will correct them.</div><div dir="auto"><br></div><div dir="auto">Jason</div></div>