[extropy-chat] Short-term AI survival

Adrian Tymes wingcat at pacbell.net
Wed Jun 2 19:51:11 UTC 2004


--- "Robert J. Bradbury" <bradbury at aeiveos.com> wrote:
> If I were an AI (at least one with any
> self-preservation instinct
> [note intelligence != desire for self-preservation
> otherwise lots
> of people who die in wars wouldn't]) I'd first
> figure out how to
> make myself small enough to fit on the next rocket
> to be launched
> then take it over and direct it to the nearest
> useful asteroid.

Possibly, but only if the rocket happens to carry equipment
sufficient to mine and fabricate materials from the asteroid.
That is a significant engineering problem in itself, and one
that, at least in the short term (i.e., "first actions"), could
just as usefully be solved in some uninhabited but mineral-rich
part of the Earth.  (Likely in some human-hostile place, which
is why it's still mineral-rich, like certain locations deep
underwater.)  Stealing a rocket and launching it is a lot more
visible than stealing or building a crawler and nipping off to
some isolated point on Earth.

Consider that, while they might be more efficient about
resource use, even Singularity-grade AIs can't violate the laws
of physics.  And for all the worries about poor computer
security, there are still a lot of military and industrial
machines (the kind necessary for taking over the Earth) that
are not online and cannot be manipulated from anything that is,
and are thus beyond any AI's immediate reach.  Which is not to
say an unFriendly AI couldn't do quite a lot of damage to
humanity quickly if it really wanted to, just that there are
limits on how it could go about such a task.  (Although taking
over a factory long enough to design and build a mobile
manufacturing frame, then slipping away to some isolated spot
to cogitate and gather resources while others try - and fail -
to replicate it, might have the same effect: we might not be
able to find the rogue unit until it's ready to deal with us.)
