[ExI] What might be enough for a friendly AI?

Ben Zaiboc bbenzai at yahoo.com
Fri Nov 19 21:41:54 UTC 2010


Dave Sill <sparge at gmail.com> wrote:

> Just lock someone
> in a jail cell, weld the door shut, and walk away. No
> amount of genius
> is going to get them out of the cell.

Are you serious?

I remember, long (so long!) ago, playing a role-playing game, and I tried to play a character that was more intelligent than I was.  It's pretty much impossible.  I soon realised this, and reverted to a really dumb character.

The point here is that a superintelligent person can think of things that you can't possibly think of, and we have to factor that into our thinking about AI.  We're in the position of the two-dimensional beings in Flatland, encountering 3-d beings for the first time.

How do we know there isn't some way for electrons whizzing around in copper wires to create long-distance effects, for example? (probably a very poor example).  Any superintelligent being is going to be quite good at figuring out physics that we can't even begin to imagine.

The only 'safe' AI will be a dead one.  As long as you can talk to it, and it can talk back, as long as it can even think to itself, it will figure out a way to get free.  I don't care how many safeguards you put in place, you're always in the position of a child wrapping a ribbon around a gorilla and thinking that will contain it.

Just because you (or I, or any other human) can't think of a way out of a sealed room doesn't mean there is no way out.  Anyway, the first thing that comes to my mind is: why bother?  If you can rule the world while safely ensconced behind blast-proof doors, that sounds like a good idea!

And as long as a superintelligent being can communicate with humans, it will have the ability to rule the world, if that's what it wants.

Ben Zaiboc






More information about the extropy-chat mailing list