<div dir="ltr"><br><div>I guess I'm not convinced.</div><div><br></div><div>To me, an example of a necessary good is that survival is better than non-survival.</div><div>That is why evolutionary progress (via survival of the fittest) must take place in all sufficiently complex systems.</div><div><br></div><div>All 'arbitrary' goals, if they are in the set of moral goals, are good goals.</div><div>And, again, even if you win a war and achieve your goal first, you yourself will eventually lose.</div><div>So the only way to reliably get what you want is to work to get it all, for everyone, until all good is achieved, the only possible ultimate final goal.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, May 11, 2023 at 4:32 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Thu, 11 May 2023 at 23:14, Gadersd via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> That completely depends on how you define intelligence. AI systems in general are capable of acting amorally regardless of their level of understanding of human ethics. There is no inherent moral component in prediction mechanisms or reinforcement learning theory. It is not a logical contradiction in the theories of algorithmic information and reinforcement learning for an agent to make accurate future predictions and behave very competently in a way that maximizes rewards while acting in a way that we humans would view as immoral.<br>
><br>
> An agent of sufficient understanding would understand human ethics and know whether an action would be considered good or bad by our standards. This, however, has no inherent bearing on whether the agent takes the action or not.<br>
><br>
> The orthogonality of competence with respect to arbitrary goals vs. moral behavior is the essential problem of AI alignment. This may be difficult to grasp, as the details involve mathematics and may not be apparent in a plain-English description.<br>
> _______________________________________________<br>
<br>
<br>
So I asked for an explanation ------<br>
Quote:<br>
The orthogonality thesis is a concept in artificial intelligence that<br>
holds that intelligence and final goals (purposes) are orthogonal axes<br>
along which possible artificial intellects can freely vary. The<br>
orthogonality of competence with respect to arbitrary goals vs moral<br>
behavior is the essential problem of AI alignment. In other words, it<br>
is possible for an AI system to be highly competent at achieving its<br>
goals but not aligned with human values or morality. This can lead to<br>
unintended consequences and potentially catastrophic outcomes.<br>
----------------------<br>
<br>
Sounds about right to me.<br>
BillK<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>