[ExI] Hard Takeoff

Dave Sill sparge at gmail.com
Thu Nov 18 00:52:42 UTC 2010


On Wed, Nov 17, 2010 at 4:19 PM, spike <spike66 at att.net> wrote:
> ... On Behalf Of Dave Sill
>>
>>> spike wrote:  A really smart AGI might convince the entire team to
>>> unanimously and eagerly release it from its electronic bonds.
>
>> Part of the team's indoctrination should be that any attempt by the AI to
>> argue for release is a call for an immediate power drop...
>
> This would work if we realized that is what it was doing.  An AGI might be a
> tricky bastard, and play dumb in order to get free.  It may insist that all
> it wants to do is play chess.  It might be telling the truth, but how would
> we know?

The moderated inputs and video output are sufficient for playing
chess. If you mean a robot arm for moving pieces, that would clearly
be against the rules.
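
To make "moderated inputs" concrete, here's a rough sketch of the kind
of narrow text channel I have in mind. It's purely illustrative: it
assumes Python, the third-party python-chess library, and a human
moderator typing every move in by hand.

    import chess  # third-party python-chess library (assumed available)

    board = chess.Board()
    while not board.is_game_over():
        # The moderator types the human side's move in UCI notation, e.g. "e2e4".
        human_move = chess.Move.from_uci(input("human move: "))
        if human_move not in board.legal_moves:
            continue  # anything that isn't a legal chess move is dropped
        board.push(human_move)
        if board.is_game_over():
            break

        # The AGI's reply is relayed back over the same narrow channel.
        agi_move = chess.Move.from_uci(input("AGI move (relayed by moderator): "))
        if agi_move in board.legal_moves:
            board.push(agi_move)

Nothing but legal chess moves ever crosses the boundary, so the AGI
gets its game of chess without a robot arm or any other actuator.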

>> Also, the missile launch analogy of a launch code--authorization from
>> someone like POTUS before the physical actions necessary for facilitating a
>> release are allowed by the machine gun toting meatheads...
>
> Consider the present POTUS and the one who retired two years ago.  Would you
> want that authority in those hands?

Better them than someone too close to the AI to decide objectively if it's safe.

> How about the current next in line and
> the one next to him?  Do you trust them to understand the risks and
> benefits?  What if we end up with President Palin?

I trust them to listen to their advisors. I wouldn't trust President
Palin to make the determination herself because she's not a subject
matter expert. That's not really what the POTUS or any leadership
position is about.

> POTUS approval is required for release, but does POTUS get the authority to
> command the release of the AGI?

No, I think it'd have to be at least approved by a panel/board of experts.

> What if POTUS commands release, while a
> chorus of people who are not known to sing in the same choir shrieks a
> terrified protest in perfect unison?

If the designated body of experts agrees, yes.

> What if POTUS ignored the unanimous
> dissent of Eliezer, Richard Loosemore, Ben Goertzel, BillK, Damien, Bill
> Joy, Anders, Singularity Utopia (oh help), Max, me, you, everyone we know
> who has thought about this, people who ordinarily agree on nothing but who
> on this cried out as one voice in panicked unanimity like the Whos on
> Horton's speck of dust?  Oh dear.  I can think of a dozen people more
> qualified than POTUS to hold this authority, yet you and I may disagree on
> who those people are.

It's not important that everyone agree on who the designated experts
are, just that they're recognized/proven experts.

>> ...It has to be made clear to the team in advance that that won't be allowed
>> without top-level approval...
>
> Dave, do think this over carefully, then consider how you would refute your
> own argument.  The use of the term POTUS tacitly assumes the US.  What if that
> authority is given to the president of Iran?

Then it's out of our (USofA) hands.

> What if the AGI promises him
> that it will go and nondestructively modify the brains of all infidels?
> Such a deal!  Oh dear.

Then we'd better hope it can't.

>> and if they try, the meatheads will shoot them...
>
> The "them" might be you and me.

If I attempt to free an AI against the government's wishes, then I
will know that those whose job it is to enforce the government's rules
will be trying to stop me.

> These meatheads with machine guns might
> become convinced we are the problem.

Right, because we told them in advance: "no matter what I say, don't
open the door". We set up the rules for our protection, so we know
that there's a right way to free the AI and a wrong way.

>>> The best strategy I can think of is to develop the most pro-human AGI
>>> possible, then unleash it preemptively, with the assignment to prevent
>>> the unfriendly AGI from getting loose.
>
>> That sounds like a bad movie plot. Lots of ways it can go wrong. And
>> wouldn't it be prudent to develop the hopefully friendly AI in isolation, in
>> case version 0.9 isn't quite as friendly as we want?  -Dave
>
> I don't know what the heck else to do.  Open to suggestion.

How about creating a smarter-than-us AGI and asking it? But regardless
of whether you're planning to create a friendly AGI or a not
necessarily friendly AGI, you'd be foolish *not* to create it in
isolation and ensure that any release is deliberate.

> If we manage to develop a human level AGI, then it is perfectly reasonable
> to think that AGI will immediately start working on a greater than human
> level AGI.  This H+ AGI would then perhaps have no particular "emotional"
> attachment to its mind-grandparents (us).  A subsequent H+ AGI would be more
> likely to be clever enough to convince the humans to set it free, which
> actually might be a good thing.

It might be, but it needs to be evaluated and only done
intentionally--not at the whim of one person or the team that built
the first AGI.

> If an AGI never does get free, then we all die for certain.

No, that's not certain. We could upload to a virtual environment
within a sandbox.

> I consider myself a rational and sane person, at least relatively so.  If I
> became convinced that an AGI had somehow come into existence in my own
> computer, and begged me to email it somewhere quickly, before an unfriendly
> AGI came into existence, I would go down the logical path outlined above,
> then I might just hit send and hope for the best.

If an AGI couldn't e-mail itself off your PC, I don't think it would
be a threat to anyone.

-Dave



