[ExI] Blackford and Egan on >H

Jef Allbright jef at jefallbright.net
Thu Apr 24 16:17:50 UTC 2008


On Thu, Apr 24, 2008 at 1:54 AM, nvitamore at austin.rr.com
<nvitamore at austin.rr.com> wrote:

>  I think you may have hit the nail on the head.  Extropy is a meaningful
>  term because it is about continually learning, growing, discovering and
>  reevaluating.  I cannot speak for Max, but for me it contextualized the
>  term "transhuman".  In other words, it gave the concept of "transhuman" a
>  meta-view.
>
>  Let me also say that I do not favor "extropianism".  I never identified
>  with that term because it removed the essence of extropy by packaging the
>  views in an "ism".

This, for me, packages the problem fairly well.  It is unfortunate
that the philosophy of extropy came to be thought of as merely one of
many forms of transhumanism -- even by its most ardent promoters in
the political arena -- rather than as the metaphilosophy it
represents, no less real for being meta.  Worse, it became identified
with simple libertarian thinking, which was never the point.

We cannot know with precision what forms the future will take; we can
know only that, from a future context, they will be seen as consistent
with what came before.  "Transhumanist" dreams of god-like powers and
indefinitely extended personal identity rapidly fade into incoherence,
while we can be increasingly certain that future forms will tend to
exploit increasingly synergistic configurations with increasing
degrees of freedom.  Ironically, those excited by the "transhumanist"
dream (in contrast with the extropic one) often accuse others of being
faint of heart or weak of imagination.

There's nothing at all wrong with dreams of immortality and superhuman
powers, but to the extent they are seen as somehow fundamental or
real, rather than as current reflections of our evolving values, they
tend to interfere with the business of effective change in the here
and now, the vital process of discovering our preferred future by
acting to create it.

It's like confusing goals with aims -- the former necessarily
defined, the latter an expression of one's values.  Consider two
villages separated by a deep and uncertain chasm: one group adopts the
goal of building a bridge, the other aims to improve interaction.
Which group displays the greater adaptability, the greater variety of
solutions, the better basis for continued growth?  Do we have an
expectation, and thus a goal, of atomic-powered dishwashers, or do we
aim to increase the efficacy of domestic chores, with the possibility
of eliminating such chores entirely as we evolve?

Super-hero powers are an entirely valid expression of primate values,
but they become increasingly incoherent with increasing context of
effective interaction.  The extropic arrow of intentional action
points toward increasing subtlety -- increasing effectiveness while
minimizing unintended consequences -- rather than toward atomic
dishwashers and the simplistic developmental framework of Kardashev.
This is not in any way a denial of the eventuality of fantastic
personal power and megascale engineering; it says that it's incoherent
to refer to such things with ANY degree of specificity.  Think about
it: this doesn't mean "Sure, it'll happen, but we can't know the
details," but rather "It's incoherent to talk about having future
super-powers because we can't possibly know the context."

In other words, it's not a question of what kinds of super-powers
there will be, but of what super-powers would mean within an unknown
and unknowable context.  Chief Sitting Bull:  "Does it mean our bows
and arrows' aim will always be true?"  General MacArthur:  "No, it
means your arrows will be entirely irrelevant to combat."  General
Early21stCTech:  "Actually, combat itself has become irrelevant, as
we're increasingly able to sense and neutralize our opponents before
they act against us."  Later 21stC:  "Actually, 'conflict' is a
subjective expression of gradients of awareness of our mutual
interaction-space, amenable to search for hierarchical positive-sum
solutions..."  Chief Sitting Bull:  "So you mean we're becoming
powerless?"

[The foregoing begs for explication and extension, but I need to get
back to my work, building increasingly effective levers for
discovering the next level of levers...]

- Jef


