[ExI] Wrestling with Embodiment

Kelly Anderson kellycoinguy at gmail.com
Wed Feb 1 11:07:21 UTC 2012

2012/2/1 Amon Zero <amon at doctrinezero.com>:
> On 1 February 2012 00:10, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> If we got rid of depression, anger, sadness, melancholy, fatigue,
>> bitchiness, sarcasm, fear, inattentiveness, frustration, boredom and
>> all the other wonderful negative emotions, could you really call what
>> you ended up with human in any sense of the word?
>> So what is lost if we reach a state of paradise on earth? Everything.
> As much as I agree that we'd have to be extremely careful when "engineering
> out" evolutionarily adaptive emotions (such as disgust), from my point of
> view the statement above is veering into bona fide Luddo-Theological
> territory.

Believe me, mine is not a theological point of view. It is entirely
practical. Think of leprosy. No pain and you lose your fingers. Pain
is an essential part of not just human physiology, but all biological
physiology. I suspect that if you get rid of physical and emotional
pain, you won't have full AGI either.

> My responses would be:
> A) So what if what is left isn't human? We're transhumanists, aren't we?

I for one want whatever comes after us (or that we evolve into) to
maintain an element of humanity (the ethical part, not necessarily the
people themselves) in the sense that there are genuine emotions
involved. I think it would be terrible to be replaced by emotionless
logic machines like Watson. Transhumanist, yes, but beware that we
don't throw the baby out with the bathwater. And what is the baby if
not our emotions? Our ability to appreciate beauty? To love? Pure
intelligence without this emotional landscape would be a kind of
genocide, in my opinion... we would have lost the best part of what
we're about.

For a concrete example, if machines never felt guilt, that could be
very bad for everyone. That would be a machine with Antisocial
Personality Disorder. I am personally very interested in making sure
that our children (of either sort) don't grow up with personality
disorders.

> B) Sorry to say it, but that sounds like religious hyperbole. One could just
> as easily say "nothing" (paradise may easily enough be defined as a state of
> good-without-exception) - these are essentially meaningless, deeply
> anti-practical statements whose only use is to argue for a "true path" as
> opposed to some form of perceived deviation.

Perhaps I was misunderstood... I don't think the technological future
will be anything like the Christian heaven or paradise... I think
there will always be competition for limited resources. And that means
there has to be some level of individual failure continuing into the
future.

> The fact of the matter is that when we feel happy, we don't imagine that
> we've somehow lost something because we aren't sad.

But if you had never experienced sadness, how would you know the true
"meaning" of happiness? And would beings that did not have emotions be
able to understand those of us who do?

> So if we were motivated
> by shades of positive reinforcement rather than negative (to the extent that
> may prove possible without scuppering critical instinctive survival
> reactions), I doubt very much that there'd be a consensus that we had lost
> anything at all. (and FWIW, I personally think there'd still be a place for
> the likes of NIN or Tori Amos in that world! The "pain" they evoke is an
> artistic recreation, not real, individually felt pain at all, in my
> opinion).

I think there is plenty of room for NIN in the technological (real)
future. Just not in the Christian heaven. I was pointing out the
differences between the two. I don't think you can get rid of emotions
and not screw up critical survival mechanisms. Without experiencing
real pain, I think art would suffer. Survival would be threatened. I
don't want the next step in human evolution to be less survival
capable than us!

> That's really beside the point anyway, I suppose, since reality is never
> some extreme fantasy scenario, but an unpredictable mess of consequences and
> practical considerations.

Hang on, it's going to be a bumpy ride.

> It seems a little strange to see conversations on
> this list making the same kind of black-or-white,
> transhumanist-futures-good-or-bad dichotomy arguments that were common 25
> years ago. Surely we can see that when technology changes human behaviour,
> the outcomes are never simplistic, and can therefore never easily be
> classified as all good or all bad?

I am not a black-and-white thinker by any means. I'm more of an
optimist than a pessimist, for what that's worth, but I think there
will be plenty of challenges in any future I can conceive of. I do
think that humanity has an endowment for whatever comes next, and I
think it would be a real tragedy if they didn't get most of the gifts
that we have to give them.
