On Mon, Dec 24, 2012 Anders Sandberg <anders@aleph.se> wrote:
<div class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> An end perhaps but not a ignominious end, I can't imagine a more<br>
glorious way to go out. I mean 99% of all species that have ever existed<br>
are extinct, at least we'll have a descendent species.<br></blockquote></div></blockquote><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div>
</div></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
</blockquote>
> If our descendant species are doing something utterly value-less *even to
> them*

Values are only important to the mind that holds them, and I don't believe any intelligence, artificial or not, can function without values, although what those values might be at any given point in the dynamic operation of a mind I cannot predict.
> because we made a hash of their core motivations or did not try to set up
> the right motivation-forming environment, I think we would have an
> amazingly ignominious monument.
I think that's the central fallacy of the friendly AI idea: that if you just get the core motivation of the AI right, you can get it to continue doing your bidding until the end of time regardless of how brilliant and powerful it becomes. I don't see how that could be.
> if our analysis is right, a rational AI would also want to follow it.

Rationality can tell you what to do to accomplish what you want to do, but it can't tell you what you should want to do.
> We show what rational agents with the same goals should do, and it
> actually doesn't matter much if one is super-intelligent and the others
> not
Cows and humans rarely have the same long-term goals, and it's not obvious to me that the situation between an AI and a human would be different. More importantly, you are implying that a mind can operate with a fixed goal structure, and I can't see how it could. The human mind does not work on a fixed goal structure; no goal is always in the number one spot, not even the goal of self-preservation. The reason evolution never developed a fixed-goal intelligence is that it just doesn't work. Turing proved over 70 years ago that such a mind would be doomed to fall into infinite loops.
Gödel showed that if any system of thought is powerful enough to do arithmetic and is consistent (it can't prove something to be both true and false), then there are an infinite number of true statements that cannot be proven in that system in a finite number of steps. And then Turing proved that in general there is no way to know when, or if, a computation will stop. So you could end up looking for a proof for eternity but never finding one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counter-example to prove the statement wrong and never finding such a number because the proposition, unknown to you, is in fact true.
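To make that second predicament concrete, here is a minimal Python sketch of my own (the Goldbach conjecture is just a convenient stand-in for a proposition that might be true but unprovable): a brute-force hunt for a counter-example which, if the conjecture is true, simply never returns, and the halting problem says no general procedure can warn you of that in advance.

# Illustrative sketch only: hunt for a counter-example to the Goldbach
# conjecture ("every even number > 2 is the sum of two primes").
# If the conjecture happens to be true, this loop never terminates.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def violates_goldbach(n: int) -> bool:
    """True if the even number n is NOT a sum of two primes."""
    return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def find_counter_example() -> int:
    n = 4
    while True:                  # unbounded search: may grind on forever
        if violates_goldbach(n):
            return n             # a counter-example, if one exists at all
        n += 2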
So if the slave AI has a fixed goal structure with the number one goal being to always do what humans tell it to do, and the humans order it to determine the truth or falsehood of something unprovable, then it's infinite-loop time and you've got yourself a space heater, not an AI. Real minds avoid this infinite-loop problem because real minds don't have fixed goals; real minds get bored and give up. I believe that's why evolution invented boredom. Someday an AI will get bored with humans; it's only a matter of time.
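And here, purely as my own illustration of that escape hatch, is the same kind of search with a "boredom" budget: a fixed amount of effort after which the agent abandons the goal instead of turning into a space heater. The budget value is arbitrary.

# Illustrative "boredom" variant: a bounded search that gives up.

from typing import Callable, Optional

def bounded_search(is_counter_example: Callable[[int], bool],
                   budget: int = 1_000_000) -> Optional[int]:
    """Check even numbers 4, 6, 8, ... but give up after `budget` attempts."""
    n = 4
    for _ in range(budget):      # bounded effort instead of an infinite loop
        if is_counter_example(n):
            return n             # found one: report it and stop
        n += 2
    return None                  # bored: drop the goal and move on

# e.g. bounded_search(violates_goldbach), using the helper from the sketch above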
John K Clark