[ExI] uploads again
Anders Sandberg
anders at aleph.se
Mon Dec 24 19:57:24 UTC 2012
On 2012-12-24 19:51, John Clark wrote:
> On Mon, Dec 24, 2012 at 3:45 AM, Rafal Smigrodzki
> <rafal.smigrodzki at gmail.com> wrote:
>
> > I see this as one of the main reasons to rush headlong into
> developing Friendly AI:
>
>
> Friendly AI is just a euphemism for slave AI; it's supposed to always
> place our interests and well-being above its own, but it's never going
> to work.
Things have moved on quite a bit since the early days. You might want to
look up Coherent Extrapolated Volition (generally agreed to be obsolete
too - http://singularity.org/files/CEV.pdf ). There are some pretty
intriguing and hard problems in motivation design:
http://www.nickbostrom.com/superintelligentwill.pdf
http://singularity.org/files/SaME.pdf
http://singularity.org/files/LearningValue.pdf
> An end perhaps, but not an ignominious end; I can't imagine a more
> glorious way to go out. I mean, 99% of all species that have ever
> existed are extinct; at least we'll have a descendant species.
If our descendant species are doing something utterly value-less *even
to them* because we made a hash of their core motivations or did not try
to set up the right motivation-forming environment, I think we will have
left an amazingly ignominious monument.
Imagine the universe converted to paperclips by minds smart enough to
realize that 1) this is pointless and 2) this is immoral, yet unable to
change their minds.
http://singularity.org/files/ComplexValues.pdf
--
Anders Sandberg
Future of Humanity Institute
Oxford University