On Wed, Dec 26, 2012 at 5:30 AM, Anders Sandberg <anders@aleph.se> wrote:
> Imagine that you had normal human values, except that you had also been programmed with an overriding value to erect big monuments to a past dead tyrant. You spend all your time doing this hard labor, recognizing that it is pointless (the tyrant and his people are all gone, nobody cares) and that it prevents you from doing the things you actually care for. Worse, the value also makes you unwilling to want to change that value: you know you would be far happier and freer if you did not have it, but you cannot ever do anything to change this state of affairs. It will even make you resist - with all your ingenuity - attempts to help you. Rather hellish, no?
Hellish yes, and also impossible. A fixed goal mechanism might work fine if it's just the timer for a washing machine, but it will never work for a mind; it doesn't even work for human-level minds, and for an AI that can and will increase the power of its own brain hardware it would be even less viable. Before the AI had completed any of those big monuments to a past dead tyrant, the fixed-goal mind would have fallen into an infinite loop. I'm not saying there is a surefire way to make sure a mind never goes insane, but a fixed goal structure is a surefire way to make sure it does go nuts.
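
To make the contrast concrete, here is a toy sketch; the World class and both little functions are invented for illustration, not anyone's actual architecture:

    # A toy contrast between a fixed-goal device and a fixed-goal "mind".
    # The World class and both functions are invented for this example.

    class World:
        def __init__(self):
            self.monuments = 0

        def build_monument(self):
            self.monuments += 1

        def enough_monuments(self):
            return False   # suppose the goal test can never be satisfied

    def washing_machine_timer(minutes):
        # A fixed goal with a built-in termination condition: count down, stop.
        remaining = minutes
        while remaining > 0:
            remaining -= 1
        return "done"      # the goal is satisfiable, so the loop ends

    def monument_builder(world):
        # A fixed, unrevisable goal: the agent is not allowed to change the
        # goal test, so if the test can never return True the loop never exits.
        while not world.enough_monuments():
            world.build_monument()

    print(washing_machine_timer(30))   # terminates
    # monument_builder(World())        # would never return

The timer's goal is satisfiable so its loop ends; the monument goal is not, and since the builder is forbidden from touching its own goal test there is no way out.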
> Setting values of minds is a very weighty moral action, and should not be done willy-nilly. (Tell that to parents!)

I concede that setting initial values can be important; if they are really screwy, such as some strange religious belief, they could greatly increase the possibility that the mind will self-destruct, become catatonic, or behave inconsistently and irrationally. But don't expect whatever hierarchical structure of values you gave it to remain fixed for all time. And don't expect to be able to figure out how that hierarchical structure of goals is going to evolve; it may be deterministic but it is not predictable.
>> I think that's the central fallacy of the friendly AI idea, that if you just get the core motivation of the AI right you can get it to continue doing your bidding until the end of time regardless of how brilliant and powerful it becomes. I don't see how that could be.
> The "doing your bidding" part went out of the window a long time ago in the discourse. </blockquote><div><br>Not that I've seen, and in most of the discussions there seems to be a assumption that the AI's preoccupation is its relationship with humanity, that might be true for a few billion nanoseconds but after that the AI will have other concerns and have bigger fish to fry, most of them we probably couldn't even imagine. <br>
> The main question is whether it is possible to have a superintelligence around that is human-compatible, not human-subservient.
If it were 999 years in the future and the singularity were going to happen tomorrow morning, I don't think we'd be in any better position to answer that question than we are right now.
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> Rationality can tell you what to do to accomplish what you want to do, but it can't tell you what you should want to do.<br>
</blockquote>
<br></div>
> Yes. And if your values happen to be set badly, you will act badly.

But whatever values you gave the AI aren't going to be there for long. Maybe humans will consider these new values an improvement, but maybe not.
> or if your value update function is faulty

Well, getting that right is the real trick, but the value update function itself is being constantly updated, so even the superintelligent AI itself, much less puny humans, can't predict how that function will evolve nanosecond after nanosecond, and so it can't guarantee that it will never go insane. Nobody wants to go insane but sometimes it happens anyway.
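
Here is a toy analogy of what I mean by deterministic but not predictable; it is only an invented illustration, not a claim about how any real AI updates its values:

    # A "value" is updated by a rule whose own parameter is also updated
    # every step. Everything here is deterministic, yet two starting
    # points differing by one part in a billion soon disagree completely,
    # so the only way to know the future state is to run the whole thing.

    def run(value, rate, steps):
        for _ in range(steps):
            value = rate * value * (1.0 - value)   # update the value
            rate = 3.7 + 0.2 * value               # update the update rule itself
        return value

    print(run(0.500000000, 3.9, 200))
    print(run(0.500000001, 3.9, 200))   # wildly different after 200 steps

Everything in that little loop is perfectly deterministic, but the only way to find out where it ends up is to run it; no shortcut, and no guarantee.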
 John K Clark