On 02/06/07, Rafal Smigrodzki <rafal.smigrodzki@gmail.com> wrote:

> Of course there are many dumb programs that multiply and mutate to
> successfully take over computing resources. Even as early as the
> seventies there were already some examples, like the "Core Wars"
> simulations. As Eugen says, the internet is now an ecosystem, with
> niches that can be filled by appropriately adapted programs. So far
> successfully propagating programs are generated by programmers, and
> existing AI is still not at our level of general understanding of the
> world, but the pace of AI improvement is impressive.

Computer viruses don't mutate and come up with agendas of their own the way biological agents do. It can't be because they aren't smart enough: real viruses and other micro-organisms can hardly be said to have any general intelligence, and yet they often defeat the best efforts of much smarter organisms. I can't see any reason in principle why artificial life or intelligence should not behave in a similar way, but it's interesting that it hasn't yet happened.

> > Whenever we have true AI, there will be those which follow their legacy
> > programming (as we do, whether we want to or not) and those which either
> > spontaneously mutate or are deliberately created to be malicious towards
> > humans. Why should the malicious ones have a competitive advantage over the
> > non-malicious ones, which are likely to be more numerous and better funded
> > to begin with?
>
> ### Because the malicious can eat humans, while the nice ones have to
> feed humans, and protect them from being eaten, and still eat
> something to be strong enough to fight off the bad ones. In other
> words, nice AI will have to carry a lot of inert baggage.

I don't see how that would help in any particular situation. When it comes to taking control of a power plant, for example, why should the ultimate motivation of two otherwise equally matched agents make a difference? Also, you can't always break up the components of a system and identify them as competing agents. A human body is a society of cooperating components: even though in theory the gut epithelial cells would be better off if they revolted and consumed the rest of the body, in practice they are better off continuing in their normal subservient role. There would be a big payoff for a colony of cancer cells that evolved the ability to make its own way in the world, but it has never happened.

> And by "eating" I mean literally the destruction of human bodies,
> e.g. by molecular disassembly.
>
> --------------------
>
> > Of course, it is always possible that an individual AI would
> > spontaneously change its programming, just as it is always possible that a
> > human will go mad.
>
> ### A human who goes mad (i.e. rejects his survival programming)
> dies. An AI that goes rogue has just shed a whole load of inert
> baggage.

You could argue that cooperation in any form is inert baggage: if the right half of the AI evolved the ability to take over the left half, the right half would predominate. Where does it end?

--
Stathis Papaioannou