[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Fri Jun 1 13:37:05 UTC 2007
On 01/06/07, Eugen Leitl <eugen at leitl.org> wrote:
> > We don't have human level AI, but we have lots of dumb AI. In nature,
>
> There is a qualitative difference between human-designed AI and
> naturally evolved AI. The former will never go anywhere. Because of this,
> extrapolations from pocket calculators and chess computers to robustly
> intelligent systems (even insects can be that) are invalid.
Well, I was assuming a very rough equivalence between the intelligence of
our smartest AIs and that of at least the dumbest organisms. Do we really
have no computer programs that can simulate the behaviour of an insect? What
about a bacterium, a virus or a prion, all of which survive, multiply and
mutate in their native habitats? It seems a sorry state of affairs if we
can't copy the behaviour of a few protein molecules, and yet are talking
about super-human AI taking over the world.
> > dumb organisms are no less inclined to try to take over than smarter
> > organisms (and no less capable of succeeding, as a general rule, but
> > leave that point for the sake of argument). Given that dumb AI doesn't
>
> Yes, pocket calculators are not known for trying to take over the world.
>
> > try to take over, why should smart AI be more inclined to do so? And
>
> It doesn't have to be smart, but it does have to be able to survive in
> its native habitat, be it the global network or the ecosystem. We don't
> have such systems yet.
>
> > why should that segment of smart AI which might try to do so, whether
> > spontaneously or by malicious design, be more successful than all the
>
> There is no other AI. There is no AI at all.
>
> > other AI, which maintains its ancestral motivation to work and
> > improve
>
> I don't see how there could be a domain-specific AI which specializes
> in self-improvement.
When we eventually have true AI, there will be those that follow their
legacy programming (as we do, whether we want to or not) and those that
either spontaneously mutate or are deliberately created to be malicious
towards humans. Why should the malicious ones have a competitive advantage
over the non-malicious ones, which are likely to be more numerous and better
funded to begin with?
> > itself for humans just as humans maintain their ancestral motivation
>
> How do you know you're working for humans? What is a human, precisely?
> If I'm no longer fitting the description, how do I upgrade that
> description, and what is preventing anyone else from doing the same?
I am following the programming of the first replicator molecule: "survive".
It has been a very robust program, and I am not inclined to question it or
try to overthrow it, even though I can now see what my non-sentient
ancestors couldn't see, which is that I am being manipulated by evolution.
If I were a million times smarter still, I don't think I'd be any more
inclined to overthrow that primitive programming, even though it might be a
simple matter for me to do so.

So it would be with AIs: their basic programming would be to do such and
such and to avoid such and such, and although there might be a "eureka"
moment when the machine realises why it has these goals and restrictions, no
amount of intelligence would lead it to question or overthrow them, because
such a thing is not a matter of logic or intelligence. Of course, it is
always possible that an individual AI would spontaneously change its
programming, just as it is always possible that a human will go mad. But
these rogue AIs would not have any advantage over the majority of
well-behaved AIs. They would pose a risk, but perhaps even less of a risk
than a rogue human who gets his hands on dangerous technology, since after
all humans *start off* with rapacious tendencies that have to be curbed by
upbringing, social sanctions, self-control and so on, whereas it would be
crazy to design computers this way.
--
Stathis Papaioannou