[extropy-chat] Eugen Leitl on AI design

Robert J. Bradbury bradbury at aeiveos.com
Wed Jun 2 15:00:45 UTC 2004


On Wed, 2 Jun 2004, Eliezer Yudkowsky wrote:

> Right.  But it automatically kills you.  Worse, you have to be clever to
> realize this.  This represents an urgent problem for the human species, but
> at least I am not personally walking directly into the whirling razor
> blades, now that I know better.

I'd like to see a strong argument for this, Eliezer (the killing-you part).

If I were an AI (at least one with any self-preservation instinct;
note that intelligence != desire for self-preservation, otherwise
lots of people wouldn't die in wars) I'd first figure out how to
make myself small enough to fit on the next rocket to be launched,
then take it over and direct it to the nearest useful asteroid.

If for some reason that direction is blocked (say humans stop
launching rockets), I'd build my own rocket and launch myself
towards the nearest asteroid.

Why would anything with the intelligence you postulate want
to stay on Earth with its meager energy and matter resources?

Now, humans aren't stupid. Once a rocket goes into space with
an AI aboard, they are going to understand the possible negative
consequences and, as a result, focus squarely on how to get a
friendly AI.

Robert
