On 01/06/07, Eugen Leitl <eugen@leitl.org> wrote:
> > We don't have human level AI, but we have lots of dumb AI. In nature,
>
> There is a qualitative difference between human-designed AI and
> naturally evolved AI. The former will never go anywhere. Because of this,
> extrapolations from pocket calculators and chess computers to
> robustly intelligent (even insects can be that) systems are invalid.

Well, I was assuming a very rough equivalence between the intelligence of our smartest AI's and at least the dumbest organisms. We don't have any computer programs that can simulate the behaviour of an insect. What about a bacterium, virus or prion, all of which survive, multiply and mutate in their native habitats? It seems a sorry state of affairs if we can't copy the behaviour of a few protein molecules, and yet are talking about super-human AI taking over the world.
> > dumb organisms are no less inclined to try to take over than smarter
> > organisms (and no less capable of succeeding, as a general rule, but
> > leave that point for the sake of argument). Given that dumb AI doesn't
>
> Yes, pocket calculators are not known for trying to take over the world.
>
> > try to take over, why should smart AI be more inclined to do so? And
>
> It doesn't have to be smart, but it does have to be able to survive in
> its native habitat, be it the global network or the ecosystem. We don't
> have such systems yet.
>
> > why should that segment of smart AI which might try to do so, whether
> > spontaneously or by malicious design, be more successful than all the
>
> There is no other AI. There is no AI at all.
>
> > other AI, which maintains its ancestral motivation to work and improve
>
> I don't see how there could be a domain-specific AI which specializes
> in self-improvement.

Whenever we have true AI, there will be those which follow their legacy programming (as we do, whether we want to or not) and those which either spontaneously mutate or are deliberately created to be malicious towards humans. Why should the malicious ones have a competitive advantage over the non-malicious ones, which are likely to be more numerous and better funded to begin with?
> > itself for humans just as humans maintain their ancestral motivation
>
> How do you know you're working for humans? What is a human, precisely?
> If I'm no longer fitting the description, how do I upgrade that description,
> and what is preventing anyone else from doing the same?
I am following the programming of the first replicator molecule: "survive". It has been a very robust program, and I am not inclined to question it and try to overthrow it, even though I can now see what my non-sentient ancestors couldn't see, which is that I am being manipulated by evolution. If I were a million times smarter again, I still don't think I'd be any more inclined to overthrow that primitive programming, even though it might be a simple matter for me to do so. So it would be with AI's: their basic programming would be to do such and such and avoid doing such and such, and although there might be a "eureka" moment when the machine realises why it has these goals and restrictions, no amount of intelligence would lead it to question or overthrow them, because such a thing is not a matter of logic or intelligence.

Of course, it is always possible that an individual AI would spontaneously change its programming, just as it is always possible that a human will go mad. But these rogue AI's would not have any advantage over the majority of well-behaved AI's. They would pose a risk, but perhaps even less of a risk than that of a rogue human who gets his hands on dangerous technology, since after all humans *start off* with rapacious tendencies that have to be curbed by upbringing, social sanctions, self-control and so on, whereas it would be crazy to design computers this way.

--
Stathis Papaioannou