[ExI] Extraterrestrial liberty and colonising the universe

Eugen Leitl eugen at leitl.org
Mon Jun 24 10:07:10 UTC 2013

On Sat, Jun 22, 2013 at 03:53:19PM -0700, Dennis May wrote:

> The assumption that given time parasites will evolve is based on the 
> image that the system
>  is free to evolve. But non-evolving replicators 
> getting there first can also prevent the appearance of evolving 

The assumption that non-evolving replicators can 1) arise and
2) outcompete Darwinian ones has a big problem: the mechanism for
either is missing.

> replicators. If the first seeders didn't want to allow them, they could 
> do it. We might *like* the concept of evolving replicators a great deal 

Most people hate it.

> more than those boring non-evolving, but the latter can win against the 
> rest if they are programmed to be [thorough]."

I program you to be false. Oops, doesn't seem to do much in practice.
It seems you have to actually control all the other darwinian
agents by superior force. I have bad news for you: a global self-replicating
superior force is something that will most likely emerge via a
darwinian design. The result won't be brittle, or controllable.

I'm afraid the genie was firmly out of the bottle billions of years
ago, and there does not seem to be a way to rebottle it, assuming
rebottling would be at all desirable (it would not be).

> There seem to be a few assumptions implicit in your statements.  You seem

Well, so there are in yours.

> to assume some kind of central planner control of probe launches into
> the universe - which implies economic and technology control over individuals.
> Otherwise anyone wealthy enough can do their own probe launches.  If I
> were interested in such probe launches and replicating systems I would
> recognize that evolution can happen in both software and hardware such
> that a single probe going out can create entire ecosystems of predator-prey-
> parasites and do its own launches at any point in time later.  A single AI
> which can replicate and spread in free space is enough to populate every
> scenario including war-gaming against its own creations to evolve impossibly
> efficient predators.  Before the first centrally-planned probe reaches
> another galaxy, independently evolved probes could have sent out a trillion
> competing probes ahead of it.

Competition is a dynamic equilibrium. Fixed systems are provably at
a disadvantage.
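The disadvantage of fixed systems can be illustrated with a toy competition model. Everything below — the selection rule, the mutation rate, the carrying capacity — is an illustrative assumption for the sketch, not a proof:

```python
import random

random.seed(0)  # reproducible toy run

def compete(generations=200, mutation_rate=0.1):
    # Two lineages share a fixed resource pool each generation.
    # The 'fixed' lineage replicates at constant efficiency; the
    # 'evolving' lineage occasionally picks up a beneficial mutation
    # (toy assumption: selection discards harmful ones instantly).
    fixed_eff, evolving_eff = 1.0, 1.0
    fixed_pop, evolving_pop = 1000.0, 1000.0
    resource = 2000.0  # carrying capacity shared each generation
    for _ in range(generations):
        if random.random() < mutation_rate:
            evolving_eff *= 1 + random.uniform(0.0, 0.05)
        total = fixed_pop * fixed_eff + evolving_pop * evolving_eff
        # Resource is split in proportion to replication efficiency.
        fixed_pop = resource * (fixed_pop * fixed_eff) / total
        evolving_pop = resource * (evolving_pop * evolving_eff) / total
    return fixed_pop, evolving_pop

fixed, evolving = compete()
print(f"fixed: {fixed:.2f}, evolving: {evolving:.2f}")
```

Even a handful of small efficiency gains compounds every generation, so the fixed lineage's share of the resource pool collapses toward zero.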
> On 22/06/2013 11:45, Anders Sandberg wrote:
> "...the only way to be truly certain nobody else can invade is to turn 
> everything into your kind of replicator. Hence the "deadly probe 
> scenario" is not a likely answer to the Fermi question."

The deadly probe scenario is at most a partial answer to the Fermi
question, because a nonexpansive observer witnessing an expansive
front is arbitrarily improbable, by the anthropic principle.

However, the strongest explanation by far is that we're not in
anybody's smart lightcone.
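The anthropic point can be made quantitative with a back-of-envelope calculation. All the numbers below (front speed, distance, span of observer-bearing history) are illustrative assumptions, not measurements:

```python
C = 1.0                  # speed of light, in ly/yr
FRONT_SPEED = 0.5 * C    # assumed expansion speed of a replicator front
HISTORY = 1.0e10         # years of potentially observer-bearing history
DISTANCE = 5.0e4         # ly to the front's origin (roughly galactic scale)

# Light from the front's origin reaches an observer at t = d/c; the
# front itself arrives at t = d/v.  Only in between can a nonexpansive
# observer actually *see* an expansive front without being part of it.
window = DISTANCE / FRONT_SPEED - DISTANCE / C
fraction = window / HISTORY
print(f"observable window: {window:.0f} years")
print(f"fraction of history: {fraction:.1e}")
```

With these inputs the window is ~50,000 years out of ten billion — a few parts per million — which is why observers who see a front, rather than being absorbed by one or descended from one, should be vanishingly rare.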
> It always takes less energy and resources to destroy than to create.

That may sound profound, but it isn't.

> This is part of the reason why offensive WoMD are so much more effective 

Measured in what?

> than defenses against them.  The greater the energy involved the more 

A few meters of packed dirt are a great defense against them.

> effective offense becomes.
> The smallest technological footprint for AI to replicate is presently unknown.

It doesn't take AI to replicate. It takes about a cubic micron to encode
a self-replicator, and enough payload to generate a complex result, if
you know what you're doing.
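The cubic-micron figure can be sanity-checked against the one self-replicator we know well: E. coli, whose cell occupies roughly a cubic micron and whose genome takes up only a small fraction of that. The numbers below are standard textbook values (a ~4.6-megabase genome, B-DNA geometry); this is a back-of-envelope check, not a design:

```python
import math

GENOME_BP = 4.6e6       # E. coli genome length, base pairs
BITS_PER_BP = 2         # 4 possible bases -> 2 bits per base pair
# B-DNA: ~2 nm diameter (1 nm radius), 0.34 nm rise per base pair
BP_VOLUME_NM3 = math.pi * 1.0**2 * 0.34

genome_bits = GENOME_BP * BITS_PER_BP
genome_volume_nm3 = GENOME_BP * BP_VOLUME_NM3
cubic_micron_nm3 = 1000**3   # 1 um^3 = 1e9 nm^3

print(f"genome: {genome_bits / 8 / 1e6:.2f} MB")
print(f"genome volume: {genome_volume_nm3 / cubic_micron_nm3:.4f} um^3")
```

The genome alone — about a megabyte of information — fills well under one percent of a cubic micron; the rest of the cell's volume is the replication machinery and payload.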

> There is the time footprint, the resources footprint, the signals generated
> by replication footprint, the trails left in both traveling and replication,
> and the thermodynamic footprints both in matter and radiation.
> Ideally replication would take place in high noise environments to minimize
> detection.  High noise environments are only available in limited regions
> and what qualifies as high noise is technology dependent.

The whole solar system would qualify as undetectable. In fact, I could
be replicating right under your nose (feel that itch?) and you wouldn't
notice. If I took over the CNS of 7 gigamonkeys right now with
self-replicating stealthy nanoware, they'd be none the wiser.

> There are many assumptions inherent in what replicating AI systems are

The biggest ASSumption is the word AI. A self-replicator is a self-replicator.
It can be dumb, it can be smart; the words "artificial" and "intelligent" have
nothing to do with it.

> going to do.  I assume they will use wide band impulse communications which

You don't need communication. If you do need communication, you can use
highly collimated beams or relativistic matter pellet streams.

> appear as white noise [SETI detection won't work].  I assume they will leave
> as little footprint as possible to keep from being tracked [military 101],

They're not military. They're animals, for gawd's sake. Just like you and me.

> I assume they will stay on the move, disperse themselves, and act in
> a stealthy manner to avoid WoMD.  In space stealth also means small footprint

WoMD work even less well in space than on Earth. Space is really, really
big, and really, really empty. Visiting in person is not an option, detection
is very difficult, and targeted beams are hard to aim and easy to shield
against.

> in every way possible.
> So the Fermi Paradox is not hard to understand.  If a single civilization

The Fermi paradox is indeed very easy to understand. It starts with
the fact that there is no paradox: we are simply not in anyone's smart
light cone.

> allows AI - not controlled by central-planners - military strategy for AI that
> can replicate will quickly lead to quiet well dispersed mobile AI with small
> footprints we will never see.  The predators among the AI replicators will
> also hunt other predators.  The Earth may be nothing more than a loud baby
> animal drawing in predators while other predators watch and wait.  There

Yes, they come for our women, and chocolate. Because if you live in deep
space and dine on solar flux, you would of course bother to target rocks
with icky volatiles on them. Makes total sense now.

> are too many possible scenarios to calculate.
