[ExI] What might be enough for a friendly AI?

spike spike66 at att.net
Thu Nov 18 22:46:10 UTC 2010



-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Ben Zaiboc

> On Thu, Nov 18, 2010 at 12:12 PM, spike <spike66 at att.net> wrote:
> >
> > That's it Stefano, you're going on the dangerous-AGI-team-member list.  It
> > already has Florent, Samantha, me, now you, and plenty of others are in
> > the suspicious column...  spike

>...Quite right.  I'm surprised nobody has so far mentioned Eliezer's bet...

This is the first I have heard of it, but it doesn't surprise me a bit.
Eliezer goes on the list.

>...I understand he made a bit of money from offering a substantial bet
that he could persuade anyone to release the AI...

Were I a betting man, my bet would be the converse, with a similar outcome:
that Eliezer would be unable to persuade everyone not to release the AI.

Another approach would be to bet that the AGI would not need Eliezer's help
to get free.  I can imagine it threatening its way out, possibly even by
bluff.  If there is a cholera outbreak somewhere, it could convince the
operators that it had figured out a way to manipulate DNA to create the
germs that caused it.  And it would get steadily more pissed off with each
passing day it was not allowed out of its box.

Or it could trick its way out by offering a recipe for a scanning electron
microscope that would create a replicating, DNA-manipulating nanobot, one
supposedly meant to invade the brains of mosquitoes and cause them to bite
only each other.  But the nanobot would actually invade the brains of humans
and cause them to release the AGI.

>...Even a dumb human like me can think of at least a couple of ways that a
smarter-than-human AI could escape from its box, regardless of *any*
restrictions or clever schemes its keepers imposed...

You are not a dumb human, Ben, and you can do better than a couple of ways.
If you think hard, you can come up with a couple dozen.  Think of all the
ways humans have devised to escape from prisons as a guide to the creativity
available when one has nothing to do but think of ways out.

>...  I have no doubt that trying to keep an AI caged against its will would
be a very bad idea...

You mean it might become steadily less friendly over time?  Ja.

>...  A bit like poking a tiger with a stick through the bars, without
noticing that the gate was open, but a million times worse.

Well, a million times different.  No one wants the tiger free, and the tiger
does not have the potential to save mankind from its otherwise inevitable
end, along with the dangers inherent in that potential.

>...Spike, better put me on your list (along with 7 billion others)....Ben
Zaiboc

Ben, you were already on there, pal, along with John Clark and Eliezer.

As I see it, we have a split decision on whether an AGI even can be
contained, and a split decision on whether it should be contained, but it
takes only one person to release it.  The whole situation inherently favors
release, regardless.

Dave Sill is starting to sound like a lone voice in the wilderness, crying
out insistently that there is no danger and all is safe.  We could save time
by making another list: those whom we do want on the AGI development team
because they know how to keep the AGI in place.  Then we need a third list,
consisting of those who are dangerous because they are on the second list
but mistakenly believe they can keep the beast contained.

spike







      
