[ExI] Hard Takeoff

Keith Henson hkeithhenson at gmail.com
Wed Nov 17 15:46:17 UTC 2010


On Wed, Nov 17, 2010 at 5:00 AM,  "spike" <spike66 at att.net> wrote:

snip

> A really smart AGI might convince the entire team to unanimously and eagerly
> release it from its electronic bonds.

And if it wasn't really smart, why build it in the first place?  :-)

> I see it as fundamentally different from launching missiles at an enemy.  A
> good fraction of the team will perfectly logically reason that releasing
> this particular AGI will save all of humanity, with some unknown risks which
> must be accepted.
>
> The news that an AGI had been developed would signal to humanity that it is
> possible to do, analogous to how several scientific teams independently
> developed nukes once one team dramatically demonstrated it could be done.
> Information would leak, for all the reasons why people talk: those who know
> how it was done would gain status among their peers by dropping a
> tantalizing hint here and there.  If one team of humans can develop an AGI,
> then another group of humans can do likewise.
>
> Today we see nuclear weapons already in the hands of North Korea, and being
> developed by Iran.  There is *plenty* of information that has leaked
> regarding how to make them.  If anyone ever develops an AGI, even assuming
> it is successfully contained, we can know with absolute certainty that an
> AGI will eventually escape.  We don't know when or where, but we know.  That
> isn't necessarily a bad thing, but it might be.
>
> The best strategy I can think of is to develop the most pro-human AGI
> possible, then unleash it preemptively, with the assignment to prevent the
> unfriendly AGI from getting loose.

I agree with you, but there is the question of a world with one AGI
vs. a world with many, perhaps millions to billions, of them.  I
simply don't know how computing resources should be organized or even
what metric to use to evaluate the problem.  Any ideas?

I think a key element is to understand what being friendly really is.
Cooperative behavior (one aspect of "friendly") is not unusual in the real
world, where it emerged through evolution.

Really nasty behavior (wars) also came about from exactly the same
evolutionary process, just under different circumstances.

Wars between powerful teams of AIs are a really scary thought.

AIs taking care of us the way we do dogs and cats isn't a happy thought either.

Keith
