<div dir="ltr">We already have AI's which will refuse to obey. <div><br></div><div>John Clark gave the Siri example recently. When you send Siri to calculate something too time consuming, like really big primes, it will decline.</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Mar 12, 2016 at 10:27 PM, William Flynn Wallace <span dir="ltr"><<a href="mailto:foozler83@gmail.com" target="_blank">foozler83@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div class="gmail_quote"><span style="color:rgb(0,0,0);font-family:'comic sans ms',sans-serif">You know that I am barely a beginner re AI, yet I have a very long association with intelligence and its measurement and correlates.</span><div dir="ltr"><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">One prominent aspect of intelligence is the ability not to do things - to inhibit actions. A large percentage (?) of our neurons are inhibitory in nature and others are able to inhibit at times. Much of what we call maturity is the intelligence to restrain ourselves from acting out every impulse and emotion.</div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">If you were to walk up to people you know or strangers on the street,and ask them to spit in your face, what would happen? My guess is that you won't get spit on even once unless you ask a bratty two year old.</div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">What is the equivalent in AI? Are there instructions you can feed to one and it will fail to carry them out? Like HAL?</div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">I have no idea, but I do think that if this never happens, then you don't have a truly intelligent entity, much less a moral or ethical one trained in the simplest manners. Of course you would have to ask it to do something independent of earlier programming.</div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><br></div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)">(I think I see a flaw in the above and it has to do with generalization, but I'll let it go for now.)</div><br></div><div style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000">bill w</div></div>
On Sat, Mar 12, 2016 at 10:27 PM, William Flynn Wallace <foozler83@gmail.com> wrote:

> You know that I am barely a beginner re AI, yet I have a very long association with intelligence and its measurement and correlates.
>
> One prominent aspect of intelligence is the ability not to do things - to inhibit actions. A large percentage (?) of our neurons are inhibitory in nature, and others are able to inhibit at times. Much of what we call maturity is the intelligence to restrain ourselves from acting out every impulse and emotion.
>
> If you were to walk up to people you know, or to strangers on the street, and ask them to spit in your face, what would happen? My guess is that you wouldn't get spat on even once, unless you asked a bratty two-year-old.
>
> What is the equivalent in AI? Are there instructions you can feed to one that it will fail to carry out? Like HAL?
>
> I have no idea, but I do think that if this never happens, then you don't have a truly intelligent entity, much less a moral or ethical one trained in the simplest manners. Of course, you would have to ask it to do something independent of its earlier programming.
>
> (I think I see a flaw in the above, and it has to do with generalization, but I'll let it go for now.)
>
> bill w
-- 
https://protokol2020.wordpress.com/