stefano.vaj at gmail.com
Sun Jan 1 11:55:17 UTC 2012
On 31 December 2011 10:14, Anders Sandberg <anders at aleph.se> wrote:
> Could be. I am working a bit on a paper with a colleague (who isn't
> transhumanist) about ethical arguments against making superintelligent AI.
> One of the more intriguing possibilities might be that they embody so much
> value (by being super-conscious, having super-emotions or being
> super-moral) that it might be either 1) impermissible for humans to make
> them since once in existence more or less the only relevant moral actions
> we could take are the ones serving or protecting them (even if they don't
> need or care) or 2) too dangerous in the moral sense to try to develop them
> because we might accidentally produce super-disvalue (imagine an entity
> that suffers so much that all the positive things humanity has ever done
> insignificant in comparison). I don't think these cases are good arguments
> to refrain from AI, but they certainly suggest that there might be problems
> with succeeding too well even if the AI itself is friendly.
One more paradox for utilitarian ethical positions. :-)
I do think that utilitarian ethical systems can be consistent, in the sense
that they need not be intrinsically contradictory, but certainly most of them
are dramatically at odds with actual ethical traditions, not to mention the
everyday intuitions of most of us.