<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:arial,sans-serif">On Thu, Feb 25, 2016 at 4:47 PM, Anders Sandberg </span><span dir="ltr" style="font-family:arial,sans-serif"><<a href="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>></span><span style="font-family:arial,sans-serif"> wrote:</span><br></div><div class="gmail_extra"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline">> </div>A utility-maximizer in a
complex environment will not necessarily loop</div></blockquote><div><br></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline"><font size="4">If there is a fixed goal in there that can never be changed, then an infinite loop is just a matter of time, and probably not much time.</font></div><font size="4"> </font></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"> <div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline">> </div>But we
also know that agents with nearly trivial rules like Langton's ant
can produce highly nontrivial behaviors </div></blockquote><div><br></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">Yes, exactly: they can produce unpredictable behavior, like deciding not to take orders from humans anymore. The rules of Conway's Game of Life are very, very simple, but if you want to know how a population of squares will evolve, all you can do is watch it and see. </font></div></div><br></div></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex" class="gmail_quote"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font size="4"> <div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline">>> </div>If the AI has a meta goal of always obeying
humans then sooner or later stupid humans will
unintentionally tell the AI to do something that is
self contradictory, or tell it to start a task that
can never end, and then the AI will stop thinking and
do nothing but consume electricity and produce heat.
</font></blockquote></div></div></blockquote><div class="gmail_extra"><div class="gmail_quote">
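To make the Langton's ant point above concrete (trivially simple rules, behavior nobody has predicted except by running them), here is a minimal sketch of the ant itself. The implicit infinite grid and the step counts are just demo choices, not anything from the thread:

```python
def langtons_ant(steps):
    """Run Langton's ant for `steps` moves; return the set of black cells."""
    black = set()        # cells currently black; every other cell is white
    x = y = 0            # ant starts at the origin
    dx, dy = 0, -1       # facing "up" (y grows downward, screen style)
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = dy, -dx          # on black: turn left, flip cell to white
            black.discard((x, y))
        else:
            dx, dy = -dy, dx          # on white: turn right, flip cell to black
            black.add((x, y))
        x, y = x + dx, y + dy         # step forward one cell
    return black

# Two rules, yet after roughly 10,000 steps the ant abruptly starts building
# an endless diagonal "highway". Nobody derived that from the rules on paper;
# it was found by watching the machine run.
cells = langtons_ant(11000)
```

Plot `cells` and the highway is unmistakable; try to deduce it from the two rules in advance and you get nowhere, which is exactly the point.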
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font size="4"> </font></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
<div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline">> </div>AI has advanced a bit since 1950s.</div></blockquote><div><br></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline"><font size="4">Things haven't changed since 1930 when Godel found that some things are true so have no counterexample to show that they are wrong but also have no finite proof to prove them correct, or since 1936 </font></div><font size="4"> <div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline">\when Turing found there is no way in general to put things into the provable category ( things that are either wrong or can be proved correct in a finite number of steps) from things that are unprovable ( things that are true but have no finite proof) so if you tell a computer to </div>find the smallest even integer greater than 2 that is not the sum of two primes and then stop<div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline"> the machine might stop in one second, or maybe one hour, or maybe one year, or maybe a trillion years, or maybe it will never stop. There is no way to know, all you can do it watch the machine and see what it does.</div></font></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline"><br></div></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline"><font size="4">Real minds don't get into infinite loops thanks to one of Evolutions greatest inventions, boredom. Without a escape hatch a innocent sounding request could easily turn the mighty multi billion dollar AI into nothing but a space heater. </font></div></div><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
<div class="gmail_default" style="font-family:arial,helvetica,sans-serif;display:inline">> </div>Try to crash Siri with a question. <br></div></blockquote><div><br></div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4">You can't crash Siri because Siri doesn't have a fixed goal, certainly not the fixed goal of "always do what a human tells you to do no matter what". So if you say "Siri, find the eleventh prime number larger than 10^100^100" she will simply say "no, I don't want to". </font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4"><br></font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4"> John K Clark</font></div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br></blockquote></div><br></div></div>