<p>The idea of humans dealing with risks of all sorts, most often catastrophic
risks such as asteroids hitting the earth, a nuclear war, or a world-threatening
virus, has had theorists thinking about the future for eons. More recently, we
are concerned with global warming, biowarfare, runaway nanoassemblers, and super
AI/AGI entities. Most of these risks lie outside the personal issue of radical
life extension. </p><p>If we consider radical human life extension, what type
of risk might there be? (Extinction risk is obvious, but I'm wondering if
extinction risk is more relevant to a species than to a person.) So, I
started thinking about the elements of a person that keep him/her alive:
foresight, insight, intelligence, creativity, willingness to change, etc. I also
thought about what might keep a person from continuing to exist:
depression/sadness. Then I thought about what someone else might do to keep me
from existing: imposing his/her values/beliefs on my sphere of existence
that would endanger my right to live. I arrived back at morphological freedom,
as understood by More on one hand and Sandberg on the other, which pertains to a
negative right -- a right to exist and a right not to be coerced to exist. But
again, morphological freedom describes a freedom; it does not
answer the question of what a risk might be that reflects a person's
choice/right to live/exist.</p><p>It may simply be a matter of discrimination
about a right not to die.</p><p>Does anyone have thoughts on
this?</p><p>Thanks,</p><p>Natasha</p>