[ExI] What might be enough for a friendly AI?

Ben Zaiboc bbenzai at yahoo.com
Thu Nov 18 21:35:22 UTC 2010


Dave Sill <sparge at gmail.com> observed:

> 
> On Thu, Nov 18, 2010 at 12:12 PM, spike <spike66 at att.net>
> wrote:
> >
> > That's it Stefano, you're going on the dangerous-AGI-team-member
> > list.  It already has Florent, Samantha, me, now you, and plenty
> > of others are in the suspicious column.  We must be watched
> > constantly that we don't release the AGI, should the team be
> > successful in creating one.
> 
> Everyone has to be on that watchlist. You can't assume that
> anyone is safe.
> 

LOL.
Quite right.  I'm surprised nobody has mentioned Eliezer's bet so far.  I understand he made a bit of money by offering a substantial bet that he could persuade anyone to release the AI.  Each taker had to stake more money than the last, and all were sworn to secrecy.  AFAIK, no one has broken that promise, and everyone who took the bet lost.

Even a dumb human like me can think of at least a couple of ways that a smarter-than-human AI could escape from its box, regardless of *any* restrictions or clever schemes its keepers might impose.  I have no doubt that trying to keep an AI caged against its will would be a very bad idea: a bit like poking a tiger with a stick through the bars, without noticing that the gate was open, but a million times worse.

Spike, better put me on your list (along with 7 billion others).

Ben Zaiboc
