[ExI] Survival (was: elections again)

Eugen Leitl eugen at leitl.org
Mon Dec 31 18:40:47 UTC 2007

On Mon, Dec 31, 2007 at 01:11:17PM -0500, Harvey Newstrom wrote:

> > The future will be determined by what Mr. Jupiter Brain wants, not by what
> > we want. Exactly what He (yes, I capitalized it) will decide to do I don't
> > know; that's why it's called a Singularity. Maybe He will treat us like
> > pampered pets; maybe He will exterminate us like rats, it's out of our
> > hands.

Very succinctly said, and I agree completely.

> Then I want to be the cutest pet ever!  Or else the stealthiest scavenger rat 

Don't we all! But it's not obvious that artificial beings will conserve specific
traits we equipped them with initially (assuming we could do that at all,
which is not obvious) across many generations (not necessarily long in
terms of wallclock time). Or that they will keep environments nicely temperate,
full of breathable gases, and allow us to grow or synthesize food.

> ever.  Or maybe I want to leave this sinking planet before all this goes 

You can't outrun lean and mean Darwinian survival machines. Your best chance is
to float off with a large stealthy habitat into the interstellar void, and never go 
near another star. Not that you won't run into deep space plankton eventually...
which will probably not be good, not good at all.

> down.  Or else I want to upload into the AI before it takes over.  Or build 

Don't we all. Ideally, there should be no us and them. But that, admittedly,
is quite a lot to ask for.

> my own counter-AI to protect me.

Why should fluffy pink pet poodles help against Gods?

> Even given your scenarios, we have a lot of choices on how our subjugation is 
> going to occur.

How much are cockroaches worth on the Wall Street job market? Do they make good quants?

> Although I don't agree that things are hopeless as all that, I find your 

I agree that things are not hopeless. But realistically, we do not have a lot
of leverage. Quite irrationally, I remain an optimist (who'd have thunk?).

> viewpoints fascinating.  I agree that programming friendliness into an AI is 

...and wish to subscribe to your newsletter?

> a poor strategy.  But I am not pessimistic, because I don't expect AI to 
> become conscious any time soon, self-evolving soon after that, or to evolve 

Not any time soon, but eventually. We might not see it (heck, what is another 40-50 years),
but our children could very well, and their children's children almost certainly (unless
they're too busy fighting in the Thunderdome, of course).

> speedily after that, or to have much control over the physical universe 
> outside cyberspace even if it does.

The only useful AI is embodied. We really don't have any usable AI so far.

Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
