<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">The fundamental values problem is that nations, races, religions, etc.</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">will never agree on what values are correct. bill k</span><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><br></span></div><div class="gmail_default" style="font-size:large;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-size:small"><font face="comic sans ms, sans-serif">Actually, the major religions are very close in values. In particular, the Golden Rule, or some version of it, is a part of all major religions. Political values? Well, no. bill w</font></span></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Mar 30, 2023 at 4:10 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Thu, 30 Mar 2023 at 21:55, Jason Resch via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> It is a sign of the times that these conversations are now reaching these outlets.<br>
><br>
> I think "alignment" is generally insoluble, because each next higher level of AI faces its own "alignment problem" for the next smarter AI. How can we, at level 0, ensure that our solution for level 1 continues on through levels 2 - 99?<br>
><br>
> Moreover, presuming alignment can be solved presumes our existing values are correct and that no greater intelligence will ever disagree with them or find a higher truth. So either our values are correct and we don't need to worry about alignment, or they are incorrect, and a later, greater intelligence will correct them.<br>
><br>
> Jason<br>
> _______________________________________________<br>
<br>
<br>
"Our" values?? I doubt that China thinks our values are correct.<br>
The fundamental values problem is that nations, races, religions, etc.<br>
will never agree on what values are correct.<br>
The AGIs will be just as confused as humans about which values are preferable.<br>
<br>
<br>
BillK<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>