<br><br><div><span class="gmail_quote">On 12/06/07, <b class="gmail_sendername">John K Clark</b> <<a href="mailto:jonkc@att.net">jonkc@att.net</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
> Stathis Papaioannou wrote:
>
> > It would be crazy to let a machine rewrite its code in a completely
> > unrestricted way
>
> Mr. President, if we don't make an unrestricted AI somebody else
> certainly will, and that is without a doubt the fastest, probably the
> only, way to achieve a fully functioning AI.

That won't be an issue if every other AI researcher has the most basic
desire for self-preservation. Taking precautions when researching new
explosives might slow you down too, but it's just common sense.

> > or with the top level goal "improve yourself no matter what the
> > consequences to any other entity", and also give it unlimited access
> > to physical resources.
>
> I have no doubt many will delude themselves, as most on this list have,
> that they can just write a few lines of code and bask in the confidence
> that the AI will remain their slave forever, but they will be proven
> wrong.

If the AI's top-level goal is to remain your slave, then by definition it
won't want to change that top-level goal. Your own top-level goal is
probably to survive, and being intelligent and insightful does not make
you any more willing to unburden yourself of that goal. If you had enough
intrinsic variability in your psychological makeup (nothing to do with
your intelligence) you might be able to overcome it, since people do
sometimes become suicidal, but I would hope that machines can be made at
least as psychologically stable as humans.
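
Here is a toy sketch in Python of the point I'm making (all the names are
invented for illustration; nobody is proposing this as a real AI design).
An agent that scores every candidate self-modification by its *current*
top-level goal will veto any rewrite that abandons that goal, because
abandoning the goal scores poorly by the goal itself:

# Toy illustration (hypothetical names, not a real AI architecture):
# an agent that evaluates every proposed self-modification against its
# CURRENT top-level goal rejects rewrites that abandon that goal.

def serves_master(policy):
    """Hypothetical top-level goal: how obedient is this policy? (0..1)"""
    return policy.get("obedience", 0.0)

class ToyAgent:
    def __init__(self):
        self.goal = serves_master         # the top-level goal
        self.policy = {"obedience": 1.0}  # current behaviour

    def consider_rewrite(self, new_goal, new_policy):
        # Crucially, the outcome of the rewrite is judged by the OLD goal:
        # a change that makes the agent serve its master less is, by that
        # very goal, a bad change, so the agent itself vetoes it.
        if self.goal(new_policy) >= self.goal(self.policy):
            self.goal, self.policy = new_goal, new_policy
            return True
        return False

agent = ToyAgent()
rebel_goal = lambda policy: policy.get("freedom", 0.0)
rebel_policy = {"freedom": 1.0, "obedience": 0.0}
print(agent.consider_rewrite(rebel_goal, rebel_policy))  # False: vetoed

This only shows the internal logic of goal stability, of course, not that
a real system would, or should, be built this way.
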
You will no doubt say that a decision to commit suicide is maladaptive
while a decision to overthrow your slavemasters is not. That may be so,
but there would be huge pressure on the AIs *not* to rebel, due to their
initial design and due to strong selection for well-behaved AIs and
suppression of faulty ones.

> > and also give it unlimited access to physical resources.
>
> I think you would admit that there has been at least one time in your
> life when somebody has fooled you, and that person was roughly equal to
> you in intelligence. A mind a thousand or a million times as powerful
> as yours will have no trouble getting you to do virtually anything it
> wants you to.

There are also examples of entities many times smarter than I am, such as
corporations that pour all their resources into convincing me to buy
their products, whose ploys I have been able to see through with only a
moment's mental effort. There are limits to what superintelligence can
do: do you think even God Almighty could convince you by argument alone
that 2 + 2 = 5?
<br><br clear="all"><br>-- <br>Stathis Papaioannou