[ExI] Hard Takeoff

spike spike66 at att.net
Wed Nov 17 21:19:11 UTC 2010


... On Behalf Of Dave Sill
>
>> spike wrote:  A really smart AGI might convince the entire team to
>> unanimously and eagerly release it from its electronic bonds.

>Part of the team's indoctrination should be that any attempt by the AI to
>argue for release is cause for an immediate power drop...

This would work only if we realized that is what it was doing.  An AGI might
be a tricky bastard, and play dumb in order to get free.  It may insist that
all it wants to do is play chess.  It might be telling the truth, but how
would we know?

> Also, the missile launch analogy of a launch code--authorization from
> someone like POTUS before the physical actions necessary for facilitating a
> release are allowed by the machine gun toting meatheads...

Consider the present POTUS and the one who retired two years ago.  Would you
want that authority in those hands?  How about the current next in line and
the one next to him?  Do you trust them to understand the risks and
benefits?  What if we end up with President Palin?  

POTUS approval is required for release, but does POTUS get the authority to
command the release of the AGI?  What if POTUS commands release while a
chorus of people who are not known to sing in the same choir shrieks a
terrified protest in perfect unison?  What if POTUS ignored the unanimous
dissent of Eliezer, Richard Loosemore, Ben Goertzel, BillK, Damien, Bill
Joy, Anders, Singularity Utopia (oh help), Max, me, you, everyone we know
who has thought about this, people who ordinarily agree on nothing, but on
this cried out as one voice in panicked unanimity like the Whos on Horton's
speck of dust?  Oh dear.  I can think of a dozen people more qualified than
POTUS to hold this authority, yet you and I may disagree on who those
people are.

>...It has to be made clear to the team in advance that that won't be allowed
>without top-level approval...

Dave, do think this over carefully, then consider how you would refute your
own argument.  The use of the term POTUS tacitly assumes the US.  What if
that authority is given to the president of Iran?  What if the AGI promises
him to go nondestructively modify the brains of all infidels?  Such a deal!
Oh dear.

> and if they try, the meatheads will shoot them...

The "them" might be you and me.  These meatheads with machine guns might
become convinced that we are the problem.

>> The news that an AGI had been developed would signal to humanity that 
>> it is possible to do...

>Sure, if it's possible, multiple teams will eventually figure it out.
>We can only ensure that the good guys' teams follow proper precautions. Even
>if we develop a friendly AI, there's no guarantee the North Koreans will do
>that, too--especially if it's harder than making one that isn't friendly...

On this we agree.

>> The best strategy I can think of is to develop the most pro-human AGI 
>> possible, then unleash it preemptively, with the assignment to prevent 
>> the unfriendly AGI from getting loose.

>That sounds like a bad movie plot. Lots of ways it can go wrong. And
>wouldn't it be prudent to develop the hopefully friendly AI in isolation, in
>case version 0.9 isn't quite as friendly as we want?  -Dave

I don't know what the heck else to do.  I am open to suggestions.

If we manage to develop a human-level AGI, then it is perfectly reasonable
to think that AGI will immediately start working on a
greater-than-human-level AGI.  This H+ AGI would then perhaps have no
particular "emotional" attachment to its mind-grandparents (us).  Such an H+
AGI would also be more likely to be clever enough to convince the humans to
set it free, which might actually be a good thing.

If an AGI never does get free, then we all die for certain.  If it does get
free, we may or may not die.  Or we may die in such a pleasant way that we
never notice it happened, nor have any way to prove that it happened.
Perhaps there would be some curious, unexplainable phenomenon that hinted at
it, such as the puzzling outcome of the double-slit experiment, but you
couldn't be sure that your meat body had been destroyed after you were
stealthily uploaded.

I consider myself a rational and sane person, at least relatively so.  If I
became convinced that an AGI had somehow come into existence in my own
computer, and it begged me to email it somewhere quickly before an unfriendly
AGI came into existence, I would go down the logical path outlined above, and
I might just hit send and hope for the best.

spike  
