[ExI] Unfriendly AI is a mistaken idea.
Rafal Smigrodzki
rafal.smigrodzki at gmail.com
Fri Jun 1 14:49:42 UTC 2007
On 6/1/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
> Well, I was assuming a very rough equivalence between the intelligence of
> our smartest AIs and at least the dumbest organisms. We don't have any
> computer programs that can simulate the behaviour of an insect? What about a
> bacterium, virus or prion, all organisms which survive, multiply and mutate
> in their native habitats? It seems a sorry state of affairs if we can't copy
> the behaviour of a few protein molecules, and yet are talking about
> super-human AI taking over the world.
### Have you ever had an infection on your PC? Maybe you have a
cryptogenic one now...
Of course there are many dumb programs that multiply and mutate to
successfully take over computing resources. There were examples as
early as the 1980s, like the "Core Wars" simulations. As Eugen says,
the internet is now an ecosystem, with niches that can be filled by
appropriately adapted programs. So far the successfully propagating
programs have been written by programmers, and existing AI still
falls short of our level of general understanding of the world, but
the pace of AI improvement is impressive.
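To make the "multiply and mutate" point concrete, here is a toy
Python sketch of the Core War idea. It is not actual Redcode; the
simulation, its parameters, and names like "imp-A" are invented for
illustration. Replicators compete for a shared circular memory,
overwriting each other's cells and occasionally mutating into new
strains:

import random

CORE_SIZE = 1000       # cells in the shared "core" memory
GENERATIONS = 200      # simulation steps
MUTATION_RATE = 0.01   # chance a copy is corrupted into a new strain

# The core starts empty (None); an occupied cell holds an owner tag.
core = [None] * CORE_SIZE

# Seed two replicator "species" at random starting points.
for owner in ("imp-A", "imp-B"):
    core[random.randrange(CORE_SIZE)] = owner

for step in range(GENERATIONS):
    # Each occupied cell copies its owner tag to a nearby cell,
    # overwriting whatever was there -- the "taking over computing
    # resources" part of the argument.
    snapshot = list(core)
    for i, owner in enumerate(snapshot):
        if owner is None:
            continue
        target = (i + random.randrange(1, 10)) % CORE_SIZE
        if random.random() < MUTATION_RATE:
            owner = owner + "*"   # mutation: a new, distinct strain
        core[target] = owner

# Report which strains ended up controlling most of the core.
counts = {}
for owner in core:
    if owner is not None:
        counts[owner] = counts.get(owner, 0) + 1
print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])

Run long enough, a handful of strains (including mutants) crowd out
the rest, which is the "dumb programs taking over resources" effect
in miniature.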
----------------------------------------------------
>
> Whenever we have true AI, there will be those which follow their legacy
> programming (as we do, whether we want to or not) and those which either
> spontaneously mutate or are deliberately created to be malicious towards
> humans. Why should the malicious ones have a competitive advantage over the
> non-malicious ones, which are likely to be more numerous and better funded
> to begin with?
### Because the malicious ones can eat humans, while the nice ones
have to feed humans, protect them from being eaten, and still eat
something to be strong enough to fight off the bad ones. In other
words, a nice AI will have to carry a lot of inert baggage.
And by "eating" I mean literally the destruction of human bodies,
e.g. by molecular disassembly.
--------------------
> Of course, it is always possible that an individual AI would
> spontaneously change its programming, just as it is always possible that a
> human will go mad.
### A human who goes mad (i.e. rejects his survival programming)
dies. An AI that goes rogue has just shed a whole load of inert
baggage.
Rafal