>> Why would you task a super intelligent AI with solving goals rather than asking it how the goals could be achieved?

A super intelligence wouldn't need to be "asked." Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours.

>> Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?

Because it's profitable to give AI the authority to perform tasks traditionally done by humans. A super intelligence can potentially do quite a lot of harm with relatively little authority: a super intelligent hacker only needs to find one basic software bug to gain access to the internet. Imagine what might happen next.
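For concreteness, the kind of "basic software bug" I have in mind is something like a textbook buffer overflow. This is a hypothetical sketch, not code from any real system, and handle_request() is a name I'm inventing for illustration:

#include <string.h>

/* Hypothetical sketch of a classic memory-safety bug.
 * If `input` is longer than 63 bytes, strcpy() writes past
 * the end of `buf` and corrupts the stack. An attacker who
 * controls `input` can overwrite the return address and
 * redirect execution to code of their choosing. */
void handle_request(const char *input) {
    char buf[64];
    strcpy(buf, input); /* no bounds check on attacker-controlled data */
}

int main(void) {
    handle_request("short and harmless"); /* fine */
    /* a 64+ byte attacker-supplied string here smashes the stack */
    return 0;
}

One unchecked strcpy is all it takes. Scale that search across every networked system, and "relatively little authority" goes a very long way.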
> On Feb 27, 2023, at 6:02 PM, Dave S via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>
> On Monday, February 27th, 2023 at 12:48 PM, Gadersd via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>
>> Please note the term "well-defined." It is easy to hand-wave a goal that sounds right, but rigorously codifying such a goal so that an AGI may be programmed to follow it has so far been intractable.
>
> Why would you task a super intelligent AI with solving goals rather than asking it how the goals could be achieved?
>
> Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?
>
> -Dave
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat@lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat