[extropy-chat] Manifest Destiny for >H

Lee Corbin lcorbin at rawbw.com
Mon Apr 16 04:02:27 UTC 2007


Stathis writes

> > Bacteria can be checked by clean rooms, aliens (like human empires)
> > might check each other over interstellar distances, and humans (as
> > individuals) are held in check by envy, law, and custom.
> 
> Right, but parroting the argument for AI's taking over the world, some
> bacteria, aliens or humans, due to diversity, would be less subject to
> these checks; they would come to predominate in the population,
> so that after multiple generations the most rapacious entity would eat
> everything else and ultimately make the universe synonymous with itself.

Well, this often happens!  99% of all species, or something like that, are
extinct.  But what is different, I say, between every such precedent and
what may very well happen next, is that extremely advanced intelligence
here on Earth could have absolutely catastrophic effects on *all* other
life forms.

> On the other hand, maybe there will be long, long periods of dynamic
> equilibrium, even between competing species grossly mismatched in
> intelligence, such as humans and bacteria. 

That's because, in my view, human beings just got here. Another
eyeblink from now, and why will we or our >H successors permit
anything besides ourselves (themselves) to use valuable energy?

> I'm not as worried about the future behaviour of super-AI's as
> many people seem to be. There is no logical reason why they
> should have one motivation rather than another. If humans can
> be concerned about flowers and trees, why can't super-
> intelligent beings be concerned about humans?

Oh, they *could* be. But it's very risky, of course, and the scenarios
that many people have thought deeply about (a mutation causing 
even a beneficial AI to suddenly tile the solar system with copies
of itself) make a lot of sense to me.

> After all, we weren't created by flowers and trees to have any
> particular feelings towards them, while we *would* be the ones
> creating the AI's. And even if some AI's went rogue, that would
> be no different to what currently happens with large populations
> of humans. 

The claim is that these (or "the") extremely advanced AI's would
have nanotechnological capabilities, and that, for the first time, a
possibly ruthless intelligence might very well have total control
over the placement of all molecules on the Earth's surface. You
think this unlikely or impossible?

Lee



