[ExI] Moral enhancement
danust2012 at gmail.com
Wed Oct 7 20:37:56 UTC 2015
On Mon, Oct 5, 2015 at 10:29 PM, Anders Sandberg <anders at aleph.se> wrote:
> On 2015-10-05 20:02, Dan TheBookMan wrote:
>> On Oct 5, 2558 BE, at 9:09 AM, William Flynn Wallace <foozler83 at gmail.com> wrote:
>>> A final solution: program our genes with powerful instincts so that we
>>> cannot do anything antihuman. Take away free will, if you will. If you
>>> never had it, you'll never miss it.
>> That's the authoritarian position, no? If people don't meet someone's
>> ideal, then change the people. Why would that ever be a good thing to enforce?
> We do enforce it on children and insane people, often for their own good.
> But we also do it for other, bad reasons.
My fear would be the latter, of course, though I'm biased toward persuasion
as opposed to forcing others to change to fit into some ideal of mine.
There are other ways to go about this too.
With regard to Bill's point, what I'm more afraid of is not altering, say,
genes to make people smarter or to think more long range (i.e., have more
willpower, to use the traditional term) -- if such is possible -- but
programming people to perform what's now considered socially appropriate
behavior in a way that removes choices from them. I was also surprised
since, correct me if I'm wrong (Bill or you), but I thought Bill called
himself a libertarian. In which case, I'd expect him to have some qualms
about this -- whether he's a transhumanist libertarian or not.
> And as we argued in my most controversial paper ( ) we may want to
> enforce these things on *ourselves*.
To be sure, he's arguing for a voluntary change -- though this is, I
presume, voluntary for the parents, not the offspring. My guess with this
particular paper is that it's totally unnecessary. And this is the usual
argument for doing something drastic, no? Doom awaits us unless we do X! :)
So we must do X or suffer the consequences, and only a bad person would be
against doing X.
> There has been a discussion in bioethics of moral enhancement for a few
> years (especially around Savulescu and Persson's book "Unfit for the
> Future"): given that we are heading towards a world of powerful
> technologies in the hands of most people, it may be necessary for our
> survival to become more ethical and sane. So biomedical moral
> enhancement, improving people's ability to make good moral choices, may be
> something that should be enforced even if the exact choices or moral
> systems are left to people.
> From a transhumanist standpoint moral enhancement is interesting. When we
> had a discussion about enhanced emotions back around Extro 4, it touched
> on this (before the outside philosophers crowded in). We can distinguish
> between enhancing the capacities useful for moral behavior (improving our
> ability to foresee consequences, empathize with others, and control
> ourselves), enhancements of social structures (setting up incentives to
> be nice, surveillance and penalties that make being bad worse), but also
> the ethical issues of being a morally enhanced being
> - with great power comes great responsibility.
I'm more worried about an attempt to remove the ability to choose overall
rather than "moral enhancement." And your clause at the end, though not
original, is true. I don't see dystopia as likely here, especially since
the more likely outcome is a varied set of piecemeal changes -- if these
changes happen at all.
Sample my Kindle books via: