[ExI] Re-framing Innovation re Consciousness

x at extropica.org
Sun Dec 30 18:53:25 UTC 2007

On 12/30/07, Natasha Vita-More <natasha at natasha.cc> wrote:
>  At 08:24 PM 12/29/2007, Harvey wrote:
> On Friday 28 December 2007 17:43, nvitamore at austin.rr.com wrote:
>  > How would you reframe the concept of innovation in its relationship to
>  > progress and change within the context of perception and its
>  > transformation?
>  Wow.  What an amazing question with such a detailed set of options.  Such
>  rigorous thinking and precision of expression.

Are you serious??  I see some interesting thinking here, but it is far
from rigorous or precise.


> Perhaps the entire dimension transhumanism proposes is risk in motion.
> But if risk is the probability that something will cause injury or harm, it
> is not the correct concept.  I would not dare to enter an environment that
> probably will cause me harm.  On the other hand, I would enter an
> environment that could cause me harm if I was not aware of dangers.  So, I
>  would opt for the possibility of injury or harm rather than the
>  probability of injury or harm.
>  Thus, there is a loophole in the pre-innovation development of observing an
> environment for its potential and possible injury or harm rather than
> assuming that the probability of harm will ensue.
>  What do you think?

The above appears to be a statement involving the relative merits and
applicability of a proactionary versus precautionary stance in regard
to intentional action within a context of uncertain risk.  [I'm going
to try to express my response here without invoking the customary
language of probability.]

I think a fundamental point is that in any case action will be taken,
even if it is a choice of "inaction."  This fundamental bias, a
defining attribute of agency, makes all the (subjective) difference in
this game.

Given that a choice will be made, always only within an ultimately
uncertain context, the optimum strategy involves applying best-known
(scientific) principles to the promotion of the best-known model of
the present (subjective) values-complex.

Then the assessed "rightness" of an action corresponds not with
expected outcome (since expectations of specific consequences are
unwarranted to the extent the future context is uncertain), but with
the extent to which the action is assessed as implementing
**principles** promoting an increasing context of increasingly
coherent values over increasing scope of consequences.
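The contrast between outcome-based and principle-based assessment can be made concrete with a toy sketch.  All of the function names, the scoring rules, and the numbers below are hypothetical illustrations (not from the post): outcome-based scoring discounts toward zero as future uncertainty grows, while principle-based scoring depends only on how well the action implements coherent-values principles over a widening scope of consequences.

```python
# Toy contrast between outcome-based and principle-based action scoring.
# All names, numbers, and scoring rules here are hypothetical sketches.

def outcome_score(expected_payoff: float, uncertainty: float) -> float:
    """Score by expected outcome, discounted as the future grows uncertain.

    As uncertainty approaches 1, specific expectations carry almost no
    warrant, so the score collapses toward zero regardless of payoff.
    """
    return expected_payoff * (1.0 - uncertainty)

def principle_score(coherence: float, scope: float) -> float:
    """Score by how well the action implements principles promoting
    increasingly coherent values (coherence) over an increasing scope
    of consequences (scope) -- independent of any predicted outcome.
    """
    return coherence * scope

# A high-payoff action whose specific consequences are nearly unknowable
# scores poorly on outcomes but can still score well on principles.
print(outcome_score(10.0, uncertainty=0.95))   # -> 0.5 (discounted away)
print(principle_score(coherence=0.8, scope=0.9))  # -> 0.72
```

Under deep uncertainty the outcome score says almost nothing, which is the point of the schema above: the principle score remains assessable even when specific consequences are not.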

This schema is intended to highlight the logical imperative of the
proactionary stance, applicable beyond the perceived tipping point
(dependent on the particular context) where the influence [need a
better term here] of the agent exceeds the influence of its
environment of interaction.

In (possibly) more intuitive terms, we should expect that the
"wisdom," or effective intelligence, of a highly adapted organism on
any particular behavioral axis corresponds roughly with the
"complexity" of its environment.  [This may suggest an explanation
for why evolved traits are so often observed to cluster around a
roughly 50/50 mix of genetic vs. environmental influence.]  To the
extent that the organism perceives its level of
[intelligence|influence|capacity for self-actualization?] to exceed
that of its environment of interaction, then its optimum strategy
should be proactionary.  Note however that this implies the importance
of increasing humility with increasing distance from home.
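The tipping-point rule above can also be sketched as a toy decision rule.  Everything here is a hypothetical formalization (the linear humility discount and all numbers are my own illustration): the agent discounts its perceived influence the farther it is from "home," and goes proactionary only when the discounted influence still exceeds that of its environment of interaction.

```python
# Toy decision rule for the tipping point described above.
# The linear humility discount and all numbers are hypothetical.

def stance(agent_influence: float, env_influence: float,
           distance_from_home: float) -> str:
    """Proactionary when the agent's humility-discounted influence
    exceeds that of its environment of interaction; else precautionary.

    distance_from_home in [0, 1]: the farther from familiar territory,
    the more the agent discounts its own perceived influence.
    """
    humility = 1.0 - distance_from_home  # more distance -> more humility
    effective = agent_influence * humility
    return "proactionary" if effective > env_influence else "precautionary"

print(stance(0.8, 0.5, distance_from_home=0.1))  # -> proactionary
print(stance(0.8, 0.5, distance_from_home=0.6))  # -> precautionary
```

The same perceived influence yields opposite stances depending on distance from home, which is the "increasing humility" point in one line of arithmetic.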

Submitted (in very rough form) for your consideration and comments.
