[ExI] Moral enhancement

Anders Sandberg anders at aleph.se
Thu Oct 8 11:53:26 UTC 2015


On 2015-10-07 22:37, Dan TheBookMan wrote:
> On Mon, Oct 5, 2015 at 10:29 PM, Anders Sandberg <anders at aleph.se 
> <mailto:anders at aleph.se>> wrote:
> We do enforce it on children and insane people, often for their own 
> good. Unfortunately
> we also do it for other, bad reasons.
>
> My fear would be the latter, of course, though I'm biased toward 
> persuasion as opposed to forcing others to change to fit into some 
> ideal of mine.

Persuasion works to some extent (just consider the socialization of 
children, and the fact that most of us do not commit crimes even when we 
could get away with them and would benefit), but the moral 
enhancement people have a point in that we have been trying to persuade 
people for 2,500 years with limited success. This is where Steven 
Pinker's thesis of declining violence suggests that organisation and 
coordination may matter too.

Of course, one can construct very creepy reinforcers of prosocial 
behavior. See this article
https://www.aclu.org/blog/free-future/chinas-nightmarish-citizen-scores-are-warning-americans
and the corrections/updates here:
https://www.techinasia.com/china-citizen-scores-credit-system-orwellian/
I have little doubt that something like this could be used to produce 
"moral enhancement".

> With regard to Bill's point, what I'm more afraid of is not altering, 
> say, genes, to make people smarter or to think more long range (i.e., 
> have more willpower, to use the traditional term) -- if such is 
> possible -- but programming people to do what's now considered 
> socially appropriate behavior, which involves removing more choices from 
> them. I was more surprised since, correct me if I'm wrong (Bill or 
> you), but I thought Bill called himself a libertarian. In which case, 
> I'd expect him to have some qualms about this -- whether he's a 
> transhumanist libertarian or not.

Moral enhancement theorists generally do not think programming people 
constitutes "real" moral enhancement, just behavior control. Sometimes 
that, or nudging, is OK, but most of the time it is also very limited, 
since it only applies to situations somebody has thought about beforehand.


>
> > And as we argued in my most controversial
> > paper ( 
> http://www.smatthewliao.com/wp-content/uploads/2012/02/HEandClimateChange.pdf
> > ) we may want to enforce these things on *ourselves*.
>
> To be sure, he's arguing for a voluntary change -- though this is, I 
> presume, voluntary for the parents not the offspring. My guess with 
> this particular paper is it's totally unnecessary. And this is the 
> usual argument for doing something drastic, no? Doom awaits us unless 
> we do X! :) So, we must do X or suffer the consequences and only a bad 
> person would be against doing X.

That is not what we are saying, although I totally understand why you 
mention that reading. An awful lot of policies are motivated by a 
major risk (real or not), and since the policy reduces the risk, 
arguing against it seems like arguing in favor of the bad outcome. 
Hence there is less criticism than there should be.

Climate change is not really bad enough to motivate radical 
interventions in humans (even the worst-case scenarios span many 
decades, over which other interventions are more effective), but the 
argument might apply to certain existential risks.

-- 
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University
