[ExI] Universally versus 'locally' Friendly AGI

Samantha Atkins sjatkins at mac.com
Mon Mar 7 19:34:50 UTC 2011


On 03/06/2011 05:18 AM, Amon Zero wrote:
> Hi All -
>
> I've been thinking about AGI and Friendliness. Yes, I know, a 
> minefield to say the least. Specifically, I've been taking this matter 
> and comparing it to early Extropian notions about libertarianism and 
> technological progress, and the comparison suggests what might be a 
> new question. (Something that I daresay Ben Goertzel has considered, 
> but I don't have him to hand, as it were).
>
> So, I remember a piece of Max's (IIRC), in which he made the case that 
> too many governmental controls on technological development would only 
> ensure that less-controlled countries would develop key technologies 
> first. Within reason, that sounds like a plausible claim to me. 
> Universally Friendly AGI, of the sort that SIAI contemplates, seems to 
> be a textbook case of constrained technological development. That is, 
> it seems 
> reasonable to expect that non-Friendly AGI would be easier to develop 
> than Friendly AGI (even if FAI is possible, and there seem to be good 
> reasons to believe that universally Friendly superhuman AGI would be 
> impossible for humans to develop).
>

You mean Unfriendly, with no real definition of what "Friendly" is?  You 
mean requiring absolute proof of no harm before proceeding?  That is 
known as the Precautionary Principle, and it will most certainly stop 
progress dead wherever it is applied.  We cannot define "Friendliness" 
toward humans in a provably correct way, or enforce it in a provably 
foolproof way, much less universally (whatever that means).

> Because Friendliness is being worked on for very good (safety) 
> reasons, it seems to me that we should be thinking about the 
> possibility of "locally Friendly" AGI, just in case Friendliness is in 
> principle possible, but the full package SIAI hopes for would just 
> come along too late to be useful.

I do not know of any SIAI push for "universal" Friendliness.  Where do 
you see this?
>
> By "locally Friendly", I mean an AGI that respects certain boundaries, 
> is Friendly to *certain* people and principles, but not ALL of them. 
> E.g. a "patriotic American" AGI. That may sound bad, but if you've got 
> a choice between that and a completely unconstrained AGI, at least the 
> former would protect the people it was developed by/for.

And when it encounters the AGI for some other group, what do you expect 
to happen?

- samantha
