[ExI] What might be enough for a friendly AI?

Dave Sill sparge at gmail.com
Thu Nov 18 00:57:50 UTC 2010


2010/11/17 Florent Berthet <florent.berthet at gmail.com>:
> In the end, isn't the goal about maximizing collective happiness?

The goal is maximizing *my* happiness, for all existing values of "me".

If we could create a factory to turn out happy, immortal idiots by the
billions, I would have zero interest in seeing it implemented.

-Dave
