[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

John Clark jonkc at bellsouth.net
Fri Feb 4 21:46:28 UTC 2011

On Feb 4, 2011, at 12:50 PM, Richard Loosemore wrote:

> since the negative feedback loop works in (effectively) a few thousand dimensions simultaneously, it can have almost arbitrary stability.

Great. Since this technique of yours guarantees that a trillion-line recursively self-improving AI program is stable and always does exactly what you want it to do, it should be astronomically simpler to apply that same technique to software that exists right now; then we can rest easy knowing that computer crashes are a thing of the past and programs will always do exactly what we expected them to do.

> that keeps it on the original track.

And the first time you unknowingly ask it a question that is unsolvable, the "friendly" AI will still be on that original track long after the sun has swollen into a red giant and then shrunk down into a white dwarf.
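John's point here is essentially the halting problem: a literal-minded goal-pursuit loop handed a question with no answer has no internal reason ever to stop. A minimal sketch of the idea, using a hypothetical agent told to find an odd perfect number (none is known, and whether one exists is an open question), with an external step budget standing in for the only kind of brake such a system would have:

```python
# Hypothetical goal-directed search loop: "find an odd perfect number".
# No odd perfect number is known, so absent an externally imposed budget
# this loop would simply never terminate -- the "original track" problem.

def proper_divisor_sum(n):
    """Sum of the proper divisors of n (divisors strictly less than n)."""
    return sum(d for d in range(1, n) if n % d == 0)

def search_odd_perfect(step_budget):
    """Check odd numbers until the budget runs out.

    The step_budget parameter is the external watchdog; the searcher
    itself has no stopping rule of its own.
    """
    n = 1
    while step_budget > 0:
        if proper_divisor_sum(n) == n:
            return n          # would return a counterexample if one existed
        n += 2                # odd candidates only
        step_budget -= 1
    return None               # budget exhausted; left alone, it runs forever

print(search_odd_perfect(2000))
```

With any feasible budget the search comes back empty, which is the rhetorical point: the loop's behavior is indistinguishable from "still working on it" for as long as you care to wait.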

  John K Clark


