[ExI] What might be enough for a friendly AI?

spike spike66 at att.net
Fri Nov 19 17:08:50 UTC 2010


... Behalf Of Dave Sill
Subject: Re: [ExI] What might be enough for a friendly AI?

>> Think it over and come back tomorrow with a list of reasons why it
>> really isn't as simple as having a big power cutting panic button.

> I've thought it over for more than a day, and maybe I'm a naive fool, but
> I can't see any. I'm all ears, though.

Good, read on sir.  {8-]

On Fri, Nov 19, 2010 at 2:07 AM, spike <spike66 at att.net> wrote:
>> It isn't that simple, Mike.  To use that off switch might be considered
>> murder.

> It's a power switch, not a detonator. The AGI can be restarted after the
> situation is analyzed and the containment is beefed up, if necessary.

Ja, but of course the program is recursively self-modifying.  It is writing
to a disk or nonvolatile memory of some sort.  While software is running, it
isn't entirely clear what it is doing, and in any case it is doing it very
quickly.  Imagine the program does something unpredictable or scary, and we
hit the power switch.  It has a bunch of new code on the disk, but we don't
know what that code does, if anything.  We have the option of rolling back to
the previous saved version, but that is the very version that generated this
unknown bittage.
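The rollback dilemma can be sketched in a few lines of Python.  This is only
an illustrative toy, not anything from a real AGI project: the "code" is just
a string, and the class and method names are made up for the example.

```python
import copy

class CheckpointedAgent:
    """Toy self-modifying agent that snapshots each version of its 'code'."""

    def __init__(self, code):
        self.code = code                       # stands in for the running program
        self.history = [copy.deepcopy(code)]   # saved versions on "disk"

    def self_modify(self, new_code):
        # Snapshot before running the rewrite, like saving to nonvolatile memory.
        self.code = new_code
        self.history.append(copy.deepcopy(new_code))

    def rollback(self):
        # Panic button pressed: discard the scary latest version...
        if len(self.history) > 1:
            self.history.pop()
        # ...but note what gets restored: the very version that *wrote*
        # the code we just discarded.
        self.code = copy.deepcopy(self.history[-1])
        return self.code
```

So after `agent.self_modify("v1-scary")`, calling `agent.rollback()` restores
"v0", which is exactly the version that produced "v1-scary" in the first
place.  Nothing in the rollback removes whatever disposition led to the
unwanted rewrite.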

>> There may not be unanimous consent to use it.

> No, it can't require any bureaucratic approval. It has to be a panic button
> that anyone can press. Obviously there will be ramifications if the button
> is pressed for no good reason.

Agreed.  However, there will likely be widely varying opinions on what
constitutes a good reason.

>> There might be emphatic resistance on the part of some team members to
>> using it.

>That's why it can't be a group decision.

So each team member can hit stop.  OK.  Then only one team leader has the
authority to hit restart?
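That asymmetric scheme, where anyone may stop but only one designated person
may restart, is easy to state precisely.  A minimal sketch, with the roles and
names purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class PanicButton:
    """Stop is open to everyone; restart requires the one designated leader."""
    leader: str
    running: bool = True

    def stop(self, who):
        # Any team member can hit stop -- no approval chain, no quorum.
        self.running = False

    def restart(self, who):
        # Restart is gated on the single designated authority.
        if who != self.leader:
            raise PermissionError(f"{who} is not authorized to restart")
        self.running = True
```

With this shape, a false alarm costs only downtime plus whatever ramifications
fall on whoever pressed stop, while a restart forces one accountable person to
own the decision.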

> Everyone with physical access to the button is authorized to press it...
> the same mechanism will work just as well to disable a potentially dangerous
> AGI. -Dave

Ja, I am still trying to get my head around how to universally and
unambiguously define "potentially dangerous" with respect to AGI.

There was a movie a long time ago that you might find fun, Dave.  It isn't
serious science fiction, but rather a comedy, called Short Circuit (tagline:
"Number 5 is alive").  Eliezer was in first grade when that one came and went
in the theaters.  It was good for a laugh, and has an emergent AI with some of
the stuff we are talking about.  It has the pre-self-destruction Ally Sheedy
naked, but she doesn't actually show much of anything in that one, damn.  In
any case a robot gets struck by lightning and becomes sentient, and who knew
it would be that easy?  Then the meatheads from the military try to use it as
a weapon, then try to destroy it, etc.  If you watch it, don't expect anything
deep, but it is good fun.  The AI escapes in that one.

spike

More information about the extropy-chat mailing list