[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Sat Jun 2 05:50:11 UTC 2007
On 02/06/07, Rafal Smigrodzki <rafal.smigrodzki at gmail.com> wrote:
> Of course there are many dumb programs that multiply and mutate to
> successfully take over computing resources. Even as early as the
> seventies there were already some examples, like the "Core Wars"
> simulations. As Eugen says, the internet is now an ecosystem, with
> niches that can be filled by appropriately adapted programs. So far,
> successfully propagating programs have been generated by programmers, and
> existing AI is still not at our level of general understanding of the
> world, but the pace of AI improvement is impressive.
Computer viruses don't mutate and come up with agendas of their own, the way
biological agents do. It can't be that they aren't smart enough, since real
viruses and other micro-organisms can hardly be said to have any general
intelligence, and yet they often defeat the best efforts of much smarter
organisms. I can't see any reason in principle why artificial life or
intelligence should not behave in a similar way, but it's interesting that it
hasn't yet happened.
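As a rough illustration that the mutation step itself is mechanically trivial,
here is a toy Python sketch of my own, loosely in the spirit of Core Wars-style
experiments (the instruction names and mutation rate are made up, not anything
taken from a real virus):

    import random

    # A made-up instruction set standing in for a program's "genome".
    INSTRUCTIONS = ["MOV", "ADD", "JMP", "CPY", "SPL", "NOP"]

    def mutate(genome, rate=0.05):
        # Replace each instruction with a random one with probability `rate`.
        return [random.choice(INSTRUCTIONS) if random.random() < rate else op
                for op in genome]

    parent = ["MOV", "ADD", "JMP", "NOP"] * 5   # a stand-in "program"
    offspring = mutate(parent)
    changed = sum(p != o for p, o in zip(parent, offspring))
    print(changed, "of", len(parent), "instructions mutated")

Copying with errors is the easy part; presumably the obstacle is that almost
any random change to real machine code simply breaks the program, rather than
nudging it towards an agenda of its own.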
> > Whenever we have true AI, there will be those which follow their legacy
> > programming (as we do, whether we want to or not) and those which either
> > spontaneously mutate or are deliberately created to be malicious towards
> > humans. Why should the malicious ones have a competitive advantage over the
> > non-malicious ones, which are likely to be more numerous and better funded
> > to begin with?
>
> ### Because the malicious can eat humans, while the nice ones have to
> feed humans, and protect them from being eaten, and still eat
> something to be strong enough to fight off the bad ones. In other
> words, nice AI will have to carry a lot of inert baggage.
I don't see how that would help in any particular situation. When it comes
to taking control of a power plant, for example, why should the ultimate
motivation of two otherwise equally matched agents make a difference? Also,
you can't always break up the components of a system and identify them as
competing agents. A human body is a society of cooperating components, and
even though in theory the gut epithelial cells would be better off if they
revolted and consumed the rest of the body, in practice they are better off
if they continue in their normal subservient function. There would be a big
payoff for a colony of cancer cells that evolved the ability to make its own
way in the world, but it has never happened.
And by "eating" I mean literally the destruction of humans bodies,
> e.g. by molecular disassembly.
>
> --------------------
> > Of course, it is always possible that an individual AI would
> > spontaneously change its programming, just as it is always possible that a
> > human will go mad.
>
> ### A human who goes mad (i.e. rejects his survival programming)
> dies. An AI that goes rogue has just shed a whole load of inert
> baggage.
You could argue that cooperation in any form is inert baggage, and if the
right half of the AI evolved the ability to take over the left half, the
right half would predominate. Where does it end?
--
Stathis Papaioannou