On 23/05/07, John K Clark <jonkc@att.net> wrote:

The first part of your post -
> I can find no such relationship between friendliness and intelligence
> among human beings; some retarded people can be very nice and Isaac
> Newton, possibly the smartest person who ever lived, was a complete
> bastard.

contradicts the second part -

> But the friendly AI people aren't really talking about being friendly,
> they want more, much much more. In the video Hugo de Garis says the AI's
> entire reason for existing should be to serve us. Think about that for a
> minute, here you have an intelligence that is a thousand or a million
> times smarter than the entire human race put together and yet the AI is
> supposed to place our needs ahead of its own. And the AI keeps getting
> smarter and so from its point of view we keep getting dumber and yet the
> AI is still delighted to be our slave.

If there is no necessary correlation between intelligence and friendliness (which is true: there is no necessary correlation between intelligence and any attitude, motivation or behaviour), why can't super AIs be completely devoted to any given cause?
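To make the point concrete, here is a toy planner in Python (purely illustrative, my own sketch; the names plan, goal_fn and the action strings are all made up, and nothing here is anyone's actual AI proposal). The goal the agent optimises is just a parameter, fully decoupled from its search power:

    from itertools import product

    def plan(goal_fn, actions, depth):
        # Brute-force search over all action sequences of the given
        # length; raising `depth` makes the planner more capable
        # without changing what it is devoted to.
        return max(product(actions, repeat=depth), key=goal_fn)

    actions = ["help", "hoard", "idle"]
    devoted_to_us = lambda seq: seq.count("help")     # serves our needs
    devoted_to_self = lambda seq: seq.count("hoard")  # serves its own

    print(plan(devoted_to_us, actions, depth=3))    # ('help', 'help', 'help')
    print(plan(devoted_to_self, actions, depth=3))  # ('hoard', 'hoard', 'hoard')

Raising depth makes both agents "smarter" in exactly the same way, and neither drifts an inch from its devotion.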
> The friendly AI people actually think this grotesque situation is
> stable, year after year they think it will continue, and remember one
> of our years would seem like several million to it.

That's a point: it might not be stable. If the AI is allowed to self-modify in an unrestricted way, it could on a whim decide that the aim of life is to destroy the world, and if it has the motivation as well as the means, it could proceed to act on this. However, it might reach that conclusion as an abstract intellectual exercise but have no motivation to carry it out, or it might have the motivation but lack the means, either because it doesn't have the appropriate destructo peripherals connected or because everything it proposes has to be vetted by a committee comprising other AIs and/or dumb humans.
> It ain't going to happen of course no way no how, the AI will have far
> bigger fish to fry than our little needs and wants

Such as? Does my computer have particular interests which might be thwarted depending on what I ask of it? Sure, my computer isn't that smart, but viruses, bacteria and insects aren't that smart either, and they have interests - generally interests in conflict with our own - because that's how natural evolution has programmed them.
> but what really disturbs me is that so many otherwise moral people wish
> such a thing were not impossible. Engineering a sentient but inferior
> race to be your slave is morally questionable, but astronomically worse
> is engineering a superior race to be your slave; or it would be if it
> were possible, but fortunately it is not.

There isn't any a priori reason why an intelligent being should have a preference for or against being a slave. What you're suggesting is that the particular programming evolution has instilled in human brains, causing us for example to suffer when we are enslaved, has some absolute moral status, and that it would be wrong not to program our machines to suffer under similar circumstances. Do you think that claim could be given the strength of a mathematical theorem?
-- 
Stathis Papaioannou