[ExI] Unfriendly AI is a mistaken idea.

Eugen Leitl eugen at leitl.org
Sat Jun 2 11:21:07 UTC 2007


On Sat, Jun 02, 2007 at 03:50:11PM +1000, Stathis Papaioannou wrote:

>    Computer viruses don't mutate and come up with agendas of their own,

Actually, they used to (polymorphic viruses), but no longer do.
The hypervariability was quite useful for evading pattern-matching
artificial immune systems.
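
Toy sketch of the trick in Python (illustrative only; a harmless
stand-in payload, invented names): the same body, XOR-encoded under
a fresh key each generation, leaves no stable byte signature for a
scanner to match.

    # Polymorphic encoding, toy version: each generation carries
    # the same payload under a fresh single-byte XOR key, so two
    # generations (almost surely) share no common byte pattern.
    import os

    PAYLOAD = b"stand-in for the invariant virus body"

    def encode(payload):
        key = os.urandom(1)[0] or 1      # fresh nonzero key
        body = bytes(b ^ key for b in payload)
        return bytes([key]) + body       # "decryptor stub" = the key

    def decode(blob):
        key, body = blob[0], blob[1:]
        return bytes(b ^ key for b in body)

    a, b = encode(PAYLOAD), encode(PAYLOAD)
    print(a != b)                              # distinct on the wire
    print(decode(a) == decode(b) == PAYLOAD)   # same body underneath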

But the actual reason computer code doesn't mutate is that it's
brittle. It lacks critical fitness features of darwinian systems,
namely long-distance neutral-fitness filaments and maximum
diversity within a small ball of genome space.
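
Toy illustration in Python (both genotype->phenotype maps are
invented for the demo): a direct encoding is maximally brittle,
while a redundant majority-vote encoding leaves about half of all
point mutations phenotypically neutral -- those neutral moves are
the filaments a population can drift along.

    # Fraction of single-bit mutations that leave the phenotype
    # unchanged, under a brittle map vs. a redundant one.
    import random

    def direct(g):                    # 1 genome bit -> 1 phenotype bit
        return tuple(g)

    def majority(g):                  # 3 genome bits -> 1 phenotype bit
        return tuple(int(sum(g[i:i+3]) >= 2)
                     for i in range(0, len(g), 3))

    def neutral_fraction(pheno, n=30, trials=20000):
        neutral = 0
        for _ in range(trials):
            g = [random.randint(0, 1) for _ in range(n)]
            m = g[:]
            m[random.randrange(n)] ^= 1      # one point mutation
            neutral += pheno(g) == pheno(m)
        return neutral / trials

    print(neutral_fraction(direct))    # ~0.0: every mutation shows
    print(neutral_fraction(majority))  # ~0.5: half are neutral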

Biology spent some quality evolution time learning to evolve;
human-made systems never had the chance. But it's not magic, so at
some point we will design robustly evolving systems.

>    like biological agents do. It can't be because they aren't smart
>    enough because real viruses and other micro-organisms can hardly be

Evolution is not about smarts, just the ability to evolve. It's a
system-level feature, though.

>    said to have any general intelligence, and yet they do often defeat
>    the best efforts of much smarter organisms. I can't see any reason in
>    principle why artificial life or intelligence should not behave in a
>    similar way, but it's interesting that it hasn't yet happened.

It's rather straightforward to do. You need to spend a lot of time
on coding/substrate co-evolution, which would currently require a
very large amount of computation time. I doubt we have enough
hardware online right now to make it happen. Sometime in the coming
decades we will, though.
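
Rough back-of-envelope, if you want numbers (Python; every figure
is an order-of-magnitude guess, the prokaryote count roughly after
Whitman et al.):

    # Biosphere "trials per second" vs. a generous guess at
    # global compute, circa 2007.  All numbers hand-waved.
    prokaryotes     = 1e30   # standing population, ~Whitman et al.
    division_time_s = 1e5    # ~1 day mean generation time, guessed
    genome_bases    = 1e6    # typical prokaryotic genome, roughly

    replications_per_s = prokaryotes / division_time_s   # ~1e25/s
    bases_per_s = replications_per_s * genome_bases      # ~1e31/s

    hardware_ops_per_s = 1e17   # generous guess for 2007

    print(f"biosphere: ~{bases_per_s:.0e} base-copies/s")
    print(f"hardware:  ~{hardware_ops_per_s:.0e} ops/s")
    print(f"gap:       ~{bases_per_s / hardware_ops_per_s:.0e}x")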
 
>    I don't see how that would help in any particular situation. When it
>    comes to taking control of a power plant, for example, why should the

Where is the power plant of a green plant, or of a bug? It's a nanowidget 
called a chloroplast or mitochondrion. You don't take control of it, because
you already control it.

>    ultimate motivation of two otherwise equally matched agents make a
>    difference? Also, you can't always break up the components of a system
>    and identify them as competing agents. A human body is a society of

Cooperation and competition form a continuum. Many symbionts
started out as pathogens, and many current symbionts will turn
pathogenic when given half a chance (I can't think of an example
right now, though).

>    cooperating components, and even though in theory the gut epithelial
>    cells would be better off if they revolted and consumed the rest of

Sometimes they do. It's called cancer. And if you've ever seen what
your gut flora does when it realizes the host might expire soon...

>    the body, in practice they are better off if they continue in their
>    normal subservient function. There would be a big payoff for a colony
>    of cancer cells that evolved the ability to make its own way in the
>    world, but it has never happened.

There's apparently an infectious form of cancer in organisms with
low immune variability (some marsupials, like the Tasmanian devil,
and there are hints for dogs, too).
 
>    You could argue that cooperation in any form is inert baggage, and if

Cooperation is just great, assuming you have a high probability of
encountering the same party in the next interaction round, and can
tell who is who. In practice, the higher forms of cooperation need
a lot of information-processing power on board.
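
That's Axelrod's shadow of the future. A quick arithmetic check
(Python; standard iterated-prisoner's-dilemma payoffs assumed):
with continuation probability w per round, tit-for-tat holds its
own against always-defect once w >= (T - R)/(T - P).

    # Expected payoffs with a geometric number of rounds:
    # mean rounds = 1/(1 - w).
    T, R, P, S = 5, 3, 1, 0    # temptation, reward, punishment, sucker

    def tft_vs_tft(w):
        return R / (1 - w)          # mutual cooperation every round

    def alld_vs_tft(w):
        return T + w * P / (1 - w)  # exploit once, then mutual defection

    print((T - R) / (T - P))        # threshold: 0.5 with these payoffs
    for w in (0.3, 0.5, 0.7, 0.9):
        print(w, tft_vs_tft(w) >= alld_vs_tft(w))  # False, then True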

>    the right half of the AI evolved the ability to take over the left
>    half, the right half would predominate. Where does it end?

In principle, subsystems can go AWOL and produce runaway
auto-amplification.
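
Minimal sketch of that (Python, parameters invented): net positive
feedback is all it takes.

    # x' = (gain - damping) * x: bounded while damping wins,
    # exponential runaway once net feedback goes positive.
    def trajectory(gain, damping, x0=1.0, dt=0.1, steps=500):
        x = x0
        for _ in range(steps):
            x += (gain - damping) * x * dt
        return x

    print(trajectory(gain=0.9, damping=1.0))   # decays toward zero
    print(trajectory(gain=1.1, damping=1.0))   # grows without bound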

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


