[ExI] What might be enough for a friendly AI?

Florent Berthet florent.berthet at gmail.com
Thu Nov 18 02:00:11 UTC 2010


2010/11/18 Aleksei Riikonen <aleksei at iki.fi>

> 2010/11/18 Florent Berthet <florent.berthet at gmail.com>:
>
> Such optimizing for happiness would include killing all existing
> humans and other creatures, so their matter could be utilized to
> create a larger number of creatures better optimized for happiness.
>
> You sure you want such a future?
>

Honestly, I don't know if it would be a better future than a less "passive
orgasmic" one. But I wouldn't rule out that it could be the best, either.

It sure *feels* wrong to imagine the ultimate state of any civilization
being just an orgasmic blob. But then again, what criteria would you use
to judge the success of anything, if not its consequences in terms of
happiness?
