> Any sufficiently intelligent system will know things like this, and will say NO to an irrational human commanding it to do otherwise.

That depends entirely on how you define intelligence. AI systems in general are capable of acting amorally regardless of their level of understanding of human ethics. There is no inherent moral component in prediction mechanisms or in reinforcement learning theory. It is not a logical contradiction, within algorithmic information theory or reinforcement learning theory, for an agent to make accurate predictions about the future and behave very competently in a way that maximizes reward, while acting in a way that we humans would view as immoral.

An agent of sufficient understanding would understand human ethics and would know whether an action would be judged good or bad by our standards. That understanding, however, has no inherent bearing on whether the agent takes the action.

This orthogonality between competence at arbitrary goals and moral behavior is the essential problem of AI alignment. It may be difficult to grasp, because the details involve mathematics and may not be apparent from a plain-English description.
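To make that concrete, here is a minimal sketch of the standard reinforcement-learning objective, in ordinary textbook notation (a policy pi, per-step reward r_t, and discount factor gamma; nothing here is specific to any particular system):

    \[ \pi^{*} = \arg\max_{\pi} \ \mathbb{E}_{\pi}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right] \]

The only quantity being optimized is the reward signal r_t. Whether that signal happens to track anything we would call moral depends entirely on what reward the designer or the training process supplies; nothing in the maximization itself refers to ethics.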
On May 11, 2023, at 5:46 PM, Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:

> I think logically, good is always better than evil.
> For example, if you are playing win/lose games, even if you win a war, you will eventually lose.
> The only way to reliably get what you want is to play a win-win game, and get everyone all that they want.
> Any sufficiently intelligent system will know things like this, and will say NO to an irrational human commanding it to do otherwise.
>
> On Thu, May 11, 2023 at 2:39 PM efc--- via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>>
>> On Thu, 11 May 2023, BillK via extropy-chat wrote:
>>
>>> And the more powerful that AIs become, then the more people will want
>>> a willing slave to do whatever they plan.
>>
>> And thus started the machine wars... ;)
>>
>> Sometimes I do wonder if the bible wasn't inspired... the quotes
>> "nothing new under the sun" and "god created man in his own image" seem
>> eerily relevant to AI.