[ExI] What might be enough for a friendly AI?

Florent Berthet florent.berthet at gmail.com
Wed Nov 17 23:43:17 UTC 2010


It may just be me, but this whole friendliness thing bothers me.

I don't really mind dying if my successors (supersmart beings or whatever)
can be hundreds of times happier than me. Of course I'd prefer to be alive
and see the future, but if we ever had to make a choice between the human
race and the posthuman race, I'd vote for the one that holds the most
potential happiness. Wouldn't it be selfish to choose otherwise?

More generally, wouldn't it be a shame to prevent an AGI from creating an
advanced civilization (e.g. computronium-based) just because that outcome
could turn out to be less "friendly" to us than the one produced by a
human-friendly AGI?

In the end, isn't the goal about maximizing collective happiness?

So why don't we just figure out how to make the AGI understand the concept
of happiness (which shouldn't be hard, since we already understand it), and
then make it maximize happiness?