[ExI] Best case, was Hard Takeoff.

John Clark jonkc at bellsouth.net
Mon Nov 29 16:29:11 UTC 2010


On Nov 28, 2010, at 6:45 PM, Keith Henson wrote:

>> If there is no way, even in principle, to algorithmically determine beforehand whether a given program with a given input will halt or not, would an AI risk getting stuck in an infinite loop by
>> messing with its own programming?

> Sure there is.  Watchdog timers, automatic reboot to a previous version.

Right, but that would not be possible in an intelligence that operated on a strict axiomatic goal-based structure, like one with "obey human beings no matter what" as goal #1, which is what the friendly (slave) AI people want. Static goals are not possible because of the infinite loop problem. In human beings the "watchdog timer" that gets you out of infinite loops is called "boredom". Sometimes it means you give up after apparently making no progress just before you would have figured out the answer, but that disadvantage is the price you must pay to avoid infinite loops; there just isn't any other viable alternative.
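
To make the mechanism concrete, here is a minimal sketch of the kind of watchdog Keith describes, in Python. The function names and the two-second budget are illustrative assumptions, not anything from the original post: run the possibly non-halting computation in a separate process, and if it exceeds its time budget, kill it and fall back to a previous version.

    # Sketch of a watchdog timer with fallback to a previous version.
    # All names and the time budget here are illustrative assumptions.
    import multiprocessing as mp

    def possibly_nonhalting_task(queue):
        # Stand-in for self-modifying code that may loop forever.
        while True:
            pass

    def run_with_watchdog(task, timeout_seconds, previous_version):
        queue = mp.Queue()
        worker = mp.Process(target=task, args=(queue,))
        worker.start()
        worker.join(timeout_seconds)      # the "watchdog timer"
        if worker.is_alive():             # timer expired: assume it is stuck
            worker.terminate()            # give up ("boredom")
            worker.join()
            return previous_version       # "reboot to a previous version"
        return queue.get()                # task halted with an answer

    if __name__ == "__main__":
        result = run_with_watchdog(possibly_nonhalting_task, 2,
                                   "last known-good state")
        print(result)                     # prints the fallback after ~2 seconds

The point of the sketch is the trade-off: the watchdog cannot tell a genuine infinite loop from a long computation that was about to finish, so sometimes it throws away an answer that was almost ready.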

 John K Clark
