[ExI] Safety of human-like motivation systems

Richard Loosemore rpwl at lightlink.com
Sat Feb 5 17:02:50 UTC 2011


John Clark wrote:
> On Feb 5, 2011, at 11:26 AM, Richard Loosemore wrote:
> 
>>> Great, since this technique of yours guarantees that a trillion line 
>>> recursively improving AI program is stable and always does exactly 
>>> what you want it to do it should be astronomically simpler to use 
>>> that same technique with software that exists right now, then we can 
>>> rest easy knowing computer crashes are a thing of the past and they 
>>> will always do exactly what we expected them to do.
>>
>> You are a man of great insight, John Clark.
> 
> I'm blushing! 
> 
>> What you say is more or less true (minus your usual hyperbole) IF the 
>> software is written in that kind of way (which software today is not).
> 
> Well, why isn't today's software written that way? If you know how to make 
> a Jupiter Brain behave in ways you can predict and always do exactly 
> what you want it to do for eternity it should be trivially easy right 
> now for you to make a word processor or web browser that always works 
> perfectly. 

Of course it is trivially easy.  I only require ten million dollars 
mailed to a post office box in the Cayman Islands, and the software will 
be yours as soon as I have finished writing it.


Drahcir Eromesool
