[ExI] Best case, was Hard Takeoff.

spike spike66 at att.net
Mon Nov 29 17:22:49 UTC 2010


On Behalf Of John Clark

> Right, but that would not be possible in an intelligence that operated on
> a strict axiomatic goal-based structure, like the one with "obey human
> beings no matter what" as rule #1, which is what the friendly (slave) AI
> people want.  John K Clark

But what if the humans issue contradictory orders?  What if we assign the
slave AI to obey exactly one person and that person issues contradictory
orders?

Actually, that is what we have now.  When I write software, I accidentally
give the computer contradictory orders, and it follows them.  It does
exactly what I tell it to do, but not what I want it to do.  I get so pissed
off.  All I want is for the damn computer to disregard my faulty orders and
do what I want.
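
A minimal Python sketch of that failure mode (a made-up example, not any
real program of mine): the function is supposed to average only the
positive numbers, but the body literally averages everything, and the
computer obeys the letter of the orders rather than the intent.

def average_of_positives(values):
    """Intent: average only the positive numbers in values."""
    total = 0
    for v in values:
        total += v              # follows the literal order: adds negatives too
    return total / len(values)  # and divides by the full count, not the positive count

print(average_of_positives([3, -3, 6]))  # prints 2.0; the intended answer was 4.5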

 

spike