<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">>> <span style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); font-family: Arial, sans-serif; font-size: 14px;" class="">Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them.</span><div class=""><font color="#000000" face="Arial, sans-serif" class=""><span style="caret-color: rgb(0, 0, 0); font-size: 14px;" class=""><br class=""></span></font></div><div class=""><font color="#000000" face="Arial, sans-serif" class=""><span style="font-size: 14px;" class="">Our current AIs are black boxes. Their internal workings are a mystery. These systems could harbor goals that we are oblivious to. If we could prove that a system's only goal is to give benign advice, with no agenda of its own, that would help, but we do not know how to do that even in theory. Even a system that only gives advice is extremely dangerous, as any psychopath could potentially get detailed instructions on how to end the world. It could be as trivial as having the AI design a super virus. Our current filters are very fallible, and we do not know how to definitively prevent an AI from giving harmful advice.
We are heading toward a field of landmines.</span><br class=""></font><div><br class=""><blockquote type="cite" class=""><div class="">On Feb 28, 2023, at 12:25 PM, Dave S via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div style="font-family: Arial, sans-serif; font-size: 14px;" class=""><span style="font-family: system-ui, sans-serif; font-size: 0.875rem;" class="">On Tuesday, February 28th, 2023 at 11:14 AM, Gadersd via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a>> wrote:</span></div><div class="protonmail_quote"><br class="">
<blockquote type="cite" class="protonmail_quote">
<div dir="auto" style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><font class="">>>Why would you ask a super intelligent AI with solving goals rather than asking it how the goals could be achieved?</font></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><font class=""><br class=""></font></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">A super intelligence wouldn’t need to be “asked.” Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours.</div></div></blockquote><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><br class=""></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">As I said, the verb should have been "task". If I ask Super AI "How would you do X?", I don't expect it to do X. And I don't expect it to do anything without permission.</div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><br class=""></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">I have no idea what 1000x smarter means. An AI can be as smart as a person--or even smarter--without having the ability to set its own goals. Just because humans set their own goals doesn't mean AIs will have that ability. 
Just because we have wants and needs doesn't mean AIs will have them.</div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><br class=""></div><blockquote type="cite" class="protonmail_quote"><div dir="auto" style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">>>Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things?</div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><br class=""></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">Because it’s profitable to give AI the authority to perform tasks traditionally done by humans. A super intelligence can potentially do quite a lot of harm with relatively little authority. A super intelligent hacker only needs to find a basic software bug to gain access to the internet; imagine what might happen next.</div></div></blockquote><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><br class=""></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">Something can be profitable without being a good idea. AIs should be our tools, not independent beings competing with us.</div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;"><br class=""></div><div class="" style="font-family: Arial, sans-serif; font-size: 14px;">-Dave</div>
</div>_______________________________________________<br class="">extropy-chat mailing list<br class=""><a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a><br class="">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat<br class=""></div></blockquote></div><br class=""></div></body></html>