[ExI] What might be enough for a friendly AI?

Dave Sill sparge at gmail.com
Fri Nov 19 18:39:32 UTC 2010


On Fri, Nov 19, 2010 at 12:08 PM, spike <spike66 at att.net> wrote:
>
> Ja, but of course the program is recursively self modifying.  It is writing
> to a disk or nonvolatile memory of some sort.  When software is running, it
> isn't entirely clear what it is doing, and in any case it is doing it very
> quickly.  Imagine the program does something unpredictable or scary, and we
> hit the power switch.  It has a bunch of new code on the disk, but we don't
> know what it does, if anything.  We have the option of reloading back to the
> previous saved version, but that is the one that generated this unknown
> bittage.

Right, so the team of experts decides whether to revert to a known
checkpoint, examine the new code, beef up the containment, etc.
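(A minimal sketch of the checkpoint-and-revert idea under discussion, assuming
the AI's mutable code lives in a single directory. The paths and function
names here are hypothetical illustrations, not anything proposed in the
thread.)

    import pathlib
    import shutil
    import time

    WORK_DIR = pathlib.Path("/opt/agi/code")         # hypothetical: the self-modifying code tree
    SNAP_DIR = pathlib.Path("/opt/agi/checkpoints")  # hypothetical: write-once checkpoint store

    def checkpoint() -> pathlib.Path:
        """Copy the current code tree to a timestamped snapshot."""
        snap = SNAP_DIR / time.strftime("%Y%m%d-%H%M%S")
        shutil.copytree(WORK_DIR, snap)
        return snap

    def revert(snap: pathlib.Path) -> None:
        """Discard whatever the program wrote and restore a known state."""
        shutil.rmtree(WORK_DIR)
        shutil.copytree(snap, WORK_DIR)

Note that spike's objection still applies: revert() restores the last saved
version, which is the very version that generated the unknown code in the
first place.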

> Agreed.  However there will likely be widely varying opinion on what
> constitutes a good reason.

That can be decided at leisure; policies can be updated or disciplinary
action taken as warranted.

> So each team member can hit stop.  OK.  Then only one team leader has the
> authority to hit restart?

That would take a group decision, I think.

> There was a movie a long time ago that you might find fun, Dave.  It isn't
> serious science fiction, but rather a comedy, called Number 5 is Alive.

"Short Circuit", actually.

> Eliezer was in first grade when that one came and went in the theaters.  It
> was good for a laugh, has an emergent AI with some of the stuff we are
> talking about.  It has the pre-self-destruction Ally Sheedy naked but she
> doesn't actually show much of anything in that one, damn.  In any case a
> robot gets struck by lightning and becomes sentient, and who knew it would
> be that easy?  Then the meatheads from the military try to use it as a
> weapon, then try to destroy it, etc.  If you get that, don't expect anything
> deep, but it is good fun.  The AI escapes in that one.

Yeah, it was entertaining. So what's your point? That an AGI may
emerge spontaneously outside of a controlled attempt to create one?
OK, seems highly unlikely, but so what? How does that change the
environment under which we should be trying to develop one?

Yes, an AGI could spring to life on the Interweb someday. Or the North
Koreans could create one. Or we could create one that escapes our best
effort to contain it. None of that makes it any less prudent to try to
contain the one we're deliberately building.

-Dave



