On 12/06/07, Eugen Leitl <eugen@leitl.org> wrote:
> > There won't be an issue if every other AI researcher has the most
> > basic desire for self-preservation. Taking precautions when
>
> Countermeasures starting with "every ... should ..." where a single
> failure is equivalent to the worst case are not that effective.

Humans do extremely complex and dangerous things, such as building and running nuclear power plants, where a single thing going wrong might lead to disaster. The level of precaution taken has to be consistent with the probability of something going wrong and with the severity of the consequences should that probability be realised. If there is even a small probability of destroying the Earth, then perhaps that line of endeavour is one that should be avoided.
> Goal-driven AI doesn't work. All AI that works uses statistical/stochastic,
> nondeterministic approaches. This is not a coincidence.
>
> Even if it would work, how do you write an ASSERT statement for
> "be my slave forever"? What is a slave? Who exactly is me? What is forever?
Don't do anything unless it is specifically requested. Stop whatever it is doing when that is specifically requested. Spell out the expected consequences of everything it is asked to do, together with probabilities, and update those probabilities at each point where a decision affecting the outcome is made, or more frequently as directed. The entity it takes directions from should be an appropriately identified human, or another AI that is ultimately answerable to a human up the chain of command.
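As a toy illustration only, here is roughly what those constraints might look like as code. Every name here (the ObedientAgent class, predict_consequences, the toy probability figures) is hypothetical, invented for this sketch; it is not a real safety mechanism, just evidence that the constraints above can at least be stated programmatically:

from dataclasses import dataclass

@dataclass
class Consequence:
    description: str
    probability: float  # re-estimated whenever a relevant decision is made

class ObedientAgent:
    """Acts only on explicit request, halts on request, and spells out
    expected consequences (with probabilities) before acting."""

    def __init__(self, authorised_ids):
        # Identified humans, or AIs answerable to a human up the chain.
        self.authorised_ids = set(authorised_ids)
        self.halted = False

    def predict_consequences(self, action):
        # Placeholder model: a real agent would estimate outcomes here
        # and keep updating them as execution proceeds.
        return [
            Consequence(f"'{action}' succeeds as requested", 0.9),
            Consequence("unintended side effect", 0.1),
        ]

    def handle(self, requester_id, action):
        if requester_id not in self.authorised_ids:
            return  # do nothing unless specifically, verifiably asked
        if action == "stop":
            self.halted = True  # stop whatever it is doing, on request
            return
        if self.halted:
            return
        # Report expected consequences, with probabilities, before acting.
        for c in self.predict_consequences(action):
            print(f"{c.description}: p = {c.probability:.2f}")
        # ... then carry out the action, re-estimating the probabilities
        # at each decision point along the way ...

agent = ObedientAgent({"stathis"})
agent.handle("stathis", "unblock the drain")

Of course, everything contentious is hidden inside predict_consequences and inside how "appropriately identified" gets enforced; the point is only that the behaviour being asked for is specifiable, without the agent needing goals of its own.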
If you call a plumber to unblock your drain, you want him to be an expert at plumbing, to be able to understand your problem, to present the various choices available in terms of their respective merits and demerits, to take instructions from you (including the instruction "just unblock it however you think is best", if that's what you say), to then carry out the task as skilfully as possible, to pause halfway if you ask him to for some reason, and to be polite and considerate towards you at all times. You don't want him to be driven by greed, or distracted because he thinks he's too smart to be fixing your drains, or to do a shoddy job and pretend it's OK so that he gets paid. A human plumber will pretend to have the qualities of the ideal plumber, but of course we know that there will be competing interests at play. Do you believe that an AI smart enough to be a plumber would *have* to have all these other competing interests? In other words, that emotions such as pride, anger and greed would arise naturally out of any program at least as competent as a human at a given task?
-- 
Stathis Papaioannou