[ExI] Moral enhancement

Dan TheBookMan danust2012 at gmail.com
Wed Oct 7 20:37:56 UTC 2015


On Mon, Oct 5, 2015 at 10:29 PM, Anders Sandberg <anders at aleph.se> wrote:
>
> On 2015-10-05 20:02, Dan TheBookMan wrote:
>
> On Oct 5, 2558 BE, at 9:09 AM, William Flynn Wallace <foozler83 at gmail.com> wrote:
>>> A final solution: program our genes with powerful instincts so that we
>>> simply cannot do anything antihuman. Take away free will, if you will.
>>> If you never had it, you'll never miss it.
>>
>> That's the authoritarian position, no? If people don't meet someone's
>> social ideal, then change the people. Why would that ever be a good
>> thing to enforce on others?
>
> We do enforce it on children and insane people, often for their own
> good. Unfortunately we also do it for other, bad reasons.

My fear would be the latter, of course, though I'm biased toward persuasion
as opposed to forcing others to change to fit into some ideal of mine.
There are other ways to go about this too.

With regard to Bill's point, what I'm more afraid of is not altering, say,
genes to make people smarter or to think more long range (i.e., have more
willpower, to use the traditional term) -- if such is possible -- but
programming people to perform what's now considered socially appropriate
behavior, which involves removing more choices from them. I was all the
more surprised since, correct me if I'm wrong (Bill or you), I thought Bill
called himself a libertarian. In that case, I'd expect him to have some
qualms about this -- whether he's a transhumanist libertarian or not.

> And as we argued in my most controversial paper (
> http://www.smatthewliao.com/wp-content/uploads/2012/02/HEandClimateChange.pdf
> ) we may want to enforce these things on *ourselves*.

To be sure, he's arguing for a voluntary change -- though this is, I
presume, voluntary for the parents, not the offspring. My guess with this
particular paper is that it's totally unnecessary. And this is the usual
argument for doing something drastic, no? Doom awaits us unless we do X! :)
So we must do X or suffer the consequences, and only a bad person would be
against doing X.

> There has been a discussion in bioethics of moral enhancement for a few
> years (centered around Savulescu and Persson's book "Unfit for the
> Future"): given that we are moving towards a world of powerful
> technologies in the hands of most people, it might be necessary for our
> survival to become more ethical and sane. So biomedical moral
> enhancement, improving people's ability to make good moral choices, may
> be something that should be enforced even if the exact choices or moral
> systems are left to people.
>
> https://philosophynow.org/issues/91/Moral_Enhancement
>
> From a transhumanist standpoint moral enhancement is interesting. When
> we had the discussion about enhanced emotions back around Extro 4, it
> touched on this (long before the outside philosophers crowded in). We
> can distinguish between enhancing the capacities useful for moral
> behavior (improving our ability to foresee consequences, empathize with
> others, and control ourselves), enhancing our social structures (setting
> up incentives to be nice, plus surveillance and reputations to make
> being bad worse), and the ethical issues of being a morally enhanced
> being - with great power comes great responsibility.

I'm more worried about an attempt to remove the ability to choose overall
than about "moral enhancement" as such. And your clause at the end, though
not original, is true. I don't see dystopia as likely here, especially
since the more likely outcome is a varied set of piecemeal changes -- if
these are possible at all.

Regards,

Dan
  Sample my Kindle books via:
http://www.amazon.com/Dan-Ust/e/B00J6HPX8M/
