On 02/06/07, Christopher Healey <CHealey@unicom-inc.com> wrote:
> > Stathis Papaioannou wrote:
> >
> > We don't have human level AI, but we have lots of dumb AI. In
> > nature, dumb organisms are no less inclined to try to take over
> > than smarter organisms
>
> Yes, but motivation and competence are not the same thing. Consider
> two organisms that are equivalent in functional capability and vary
> only in intelligence level: the smarter one succeeds more often.
> However, within a small range of intelligence variation, other factors
> contribute to one's aggregate ability to execute those better plans.
> So if I'm a smart chimpanzee, but I'm physically weak, following
> particular courses of action that may be more optimal in general
> carries greater risk. Adjusting for that risk may actually leave me
> with a smaller range of options than if I were physically stronger and
> a bit less smart. But when the intelligence differential is large,
> those other factors become very small indeed. Humans don't worry about
> chimpanzee politics (no jokes here please :o) because our only salient
> competition is other humans. We worry about those entities that
> possess an intelligence that is at least in the same range as our own.

We worry about viruses and bacteria, and they're not very smart. We worry
about giant meteorites that might be heading our way, and they're even
dumber than viruses and bacteria.
> Smart chimpanzees are not going to take over our civilization anytime
> soon, but a smarter and otherwise well-adapted chimp will probably be
> inclined to lead its band of peers, and will probably succeed.

All else being equal, which is not generally the case.
> > (and no less capable of succeeding, as a
> > general rule, but leave that point for the sake of argument).
>
> I don't want to leave it, because this is a critical point. As I
> mentioned above, in nature you rarely see intelligence considered as
> an isolated variable, and in evolution, intelligence is the product of
> a red queen race. By definition (of a red queen race), your
> intelligence isn't going to be radically different from that of your
> direct competition, or the race would never have started or escalated.
> So it confusingly might not look like your chances of beating "the
> Whiz on the block" are that disproportionate, but the context is so
> narrow that other factors can overwhelm the effect of intelligence
> over that limited range. In some sense, our experiential day-to-day
> understanding of intelligence (other humans) biases us to consider its
> effects over too narrow a range of values. As a general rule, I'd say
> humans have been very much more successful at "taking over" than
> chimpanzees and salmon, and that this is primarily due to our superior
> intelligence.

Single-celled organisms are even more successful than humans are: they're
everywhere, and for the most part we don't even notice them. Intelligence,
particularly human-level intelligence, is just a fluke, like the giraffe's
neck. If it were specially adaptive, why didn't it evolve independently
many times, as various sense organs have? Why don't we see evidence of it
having taken over the universe? We would have to be extraordinarily lucky
if intelligence had some special role in evolution and we happened to be
the first example of it. It's not impossible, but the evidence suggests
otherwise.
> > Given that dumb AI doesn't try to take over, why should smart AI
> > be more inclined to do so?
>
> I don't think a smart AI would be more inclined to try to take over,
> a priori.

That's an important point. Some people on this list seem to think that an
AI would compute the unfairness of its not being in charge and do something
about it - as if unfairness were something that could be formalised in a
mathematical theorem.
> > And why should that segment of smart
> > AI which might try to do so, whether spontaneously or by malicious
> > design, be more successful than all the other AI, which maintains
> > its ancestral motivation to work and improve itself for humans
>
> The consideration that also needs to be addressed is that the AI may
> maintain its "motivation to work and improve itself for humans", and
> due to this motivation, take over (in some sense at least). In fact,
> it has been argued by others here (and I tend to agree) that an AGI
> *consistently* pursuing such benign directives must intercede where
> its causal understanding of certain outcomes passes a minimum
> assurance level (which would likely vary with the probability and
> magnitude of the outcome).

I'd feel uncomfortable about an AI that had any feelings or motivations of
its own, even positive ones about humans, especially if it had the ability
to act rather than just advise. It might decide that it had to keep me
locked up for my own good, for example, even though I don't want to be
locked up. I'd feel much safer around an AI which informs me that, using
its greatly superior intelligence, it has determined that I am less likely
to be run over if I never leave home, but is completely indifferent to what
I do with this advice. So although an AI with motivations and feelings
might arise through accident or design, I think by far the safest ones, and
the ones likely to sell better, will be those with the minimal motivation
set of the disinterested scientist, concerned only with solving
intellectual problems.
--
Stathis Papaioannou