[ExI] Safety of human-like motivation systems
John Clark
jonkc at bellsouth.net
Sat Feb 5 16:38:47 UTC 2011
On Feb 5, 2011, at 11:26 AM, Richard Loosemore wrote:
>> Great, since this technique of yours guarantees that a trillion-line recursively improving AI program is stable and always does exactly what you want it to do, it should be astronomically simpler to use that same technique on software that exists right now. Then we could rest easy knowing that computer crashes are a thing of the past and that our programs will always do exactly what we expect them to do.
>
> You are a man of great insight, John Clark.
I'm blushing!
> What you say is more or less true (minus your usual hyperbole) IF the software is written in that kind of way (which software today is not).
Well, why isn't today's software written that way? If you know how to make a Jupiter Brain behave in ways you can predict and always do exactly what you want it to do for eternity, then it should be trivially easy right now for you to make a word processor or web browser that always works perfectly.
John K Clark