[ExI] Yes, the Singularity is the greatest threat to humanity

Michael Anissimov michaelanissimov at gmail.com
Mon Jan 17 23:11:30 UTC 2011


On Mon, Jan 17, 2011 at 4:34 AM, Eugen Leitl <eugen at leitl.org> wrote:

> On Mon, Jan 17, 2011 at 12:43:35PM +0100, Stefano Vaj wrote:
>
> > I am still persuaded that the crux of the matter remains a less
> superficial
> > consideration of concepts such as "intelligence" or "friendliness".  I
>
> To be able to build friendly you must first be able to define friendly.
> Notice that it's a relative metric, both in regards to the entity
> and the state at time t.
>
> What is friendly today is not friendly tomorrow. What is friendly to me,
> a god, is not friendly to you, a mere human.
>

This is the basis of Eugen's opposition to Friendly AI -- he sees it as a
dictatorship, objecting that any one being should have so much responsibility.

Our position, on the other hand, is that one being will likely end up with a
lot of responsibility whether we want it to or not, and that to maximize the
probability of a favorable outcome, we should aim for a nice agent.

The nice thing about our solution is that it works in both cases -- whether
or not the first superintelligence becomes unrivaled.  Eugen's strategy,
however, fails if superintelligence does indeed become unrivaled, because he
found the possibility so reprehensible that he could never bring himself to
plan for that eventuality.

-- 
Michael Anissimov
Singularity Institute
singinst.org/blog