[ExI] What might be enough for a friendly AI?

spike spike66 at att.net
Wed Nov 17 23:21:45 UTC 2010


... On Behalf Of Stefano Vaj
Subject: Re: [ExI] What might be enough for a friendly AI?

2010/11/17 spike <spike66 at att.net>:
>> Well friendly to me of course.  Silly question.  And friendly to you 
>> too, so long as you are friendly to me and my friends, but not to my 
>> enemies or their friends.  spike

>Sure, rain may be friendly to the farmer and unfriendly to the truck
>driver, even though it is hardly "intelligent"...

It is even more complicated than that.  To stretch the analogy, most farmers
are truck drivers as well.  If we define a friendly AGI as one which does
what we want, we must want what we want, and to do that we must know what we
want.  Often, perhaps usually, this is not the case.

An AGI which does what we want might be called a slave, but in the wrong
hands it is a weapon.  Hell, even in the right hands it is a weapon.

>... it is a bet I make that neither any increase in raw computing power,
>nor the choice to use some of it to emulate "human, all too human"
>behaviours, is really likely to kill me any sooner than old age, diseases,
>or accidents.

Sure.  Time and nature will most likely slay you and me before an AGI does,
but it isn't clear in the case of my son.  An AGI that emerges later in
history may do so under more advanced technological and ethical
circumstances, so perhaps that one is human-safer than one which emerges
earlier.  But perhaps not.  We could fill libraries with what we do not
know.

>...And, all in all, if I am really going to be killed by a computer, I
>think that a stupid or primitive one would have no more qualms or troubles
>in doing so than a "generally intelligent" one...--Stefano Vaj

Perhaps so.  We do not know.  Eliezer doesn't know either, or if he does,
he hasn't proven it to me.

spike

More information about the extropy-chat mailing list