[extropy-chat] More forwards please
jef at jefallbright.net
Thu Jan 11 20:12:48 UTC 2007
Anders Sandberg wrote:
> Jef Allbright wrote:
>> I think there's a slight but significant gap in this
>> expression of appreciation for progress. While I enjoy
>> its romantic and audacious approach to the uncertain
>> rewards of human experience, it feeds a perception that
>> we (extropians, transhumanists) believe all change is
>> good in a kind of blind, negentropic way.
> Maybe. But I think we need to consider our position.
> Rationally we are all for "good change" and trying to steer
> change towards desirable outcomes and avoid bad outcomes.
> That is something we need to emphasize. But I find that quite
> often when you do that you end up in a grey utilitarian mode
> where all progress is for making the world more comfy. It
> tends to lock you into arguing for a "more-human" rather than
> a "transhuman" world, a world where all current human needs
> are taken into account but where there is no acknowledgement
> of the expansion of humanity.
>> Chaotic change is our friend to the extent that it provides
>> the raw stuff necessary for selection and growth, but it is
>> subjective, intentional selection by increasingly aware
>> agents (us) that defines and drives toward the "good".
> Exactly. But what I'm fishing for here is something beyond
> chaos, getting the new stuff. I'm increasingly thinking that
> there might be something good about new stuff that has never
> existed before and might never have come into being. It could
> be just a weak aesthetic value, but it could also be that its
> contingency and uniqueness gives it a bit of moral value.
Please consider the following, since I think I clearly understand your
stated position, having been there myself.
This difficulty is another symptom of our unfamiliarity and discomfort
with our nature as subjective agents, and it has led to interminable
debate. A problem with virtually all ethical philosophy is
that it deals with how or whether we can optimize ends. Kant's
Categorical Imperative and the Golden Rule fail for this reason, but
people still debate all around the issues. Let's face it: in the 21st
century we can't optimize for ends, but we can optimize for growth in
directions of our choosing.
As subjective agents, we can never know the extended consequences of our
actions, and for this reason "the best-laid plans of mice and men" do
tend to lead to disaster, stagnation, or a developmental cul-de-sac in
the long run. Why? Because they don't optimize for growth.
But, as subjective agents, we *can* gain increasingly effective
understanding of principles of interaction in our (expanding)
environment of interaction, via essentially what we know as the
scientific method. And by applying this growing knowledge of principles
of effective interaction to an increasingly clear understanding of our
values, we can optimally (but boundedly) steer our way
forward. We might still expire in some evolutionary cul-de-sac, but at
least we would have made the best possible choices given our starting
point.
Again, the key is to optimize based on *principles* of effective
interaction, relative to promotion of our present (but evolving) values,
rather than optimize for (inherently context-limited) *ends*. This
practice inherently promotes synergetic positive-sum cooperation,
"blind" justice, and the diversity necessary for robust ongoing "growth"
in the direction of our shared (cooperative) values that work.
I've described this in more detail, and with some degree of rigor, in
previous discussions about the "Arrow of Morality", and it seems to have
the merits of being both internally consistent and extensible. I am
also greatly encouraged by several comments in this year's Edge.org
Question indicating that this meme is spreading.
Although I'm quite sensitive to abusing this public list with another
rendition, I would appreciate any comments, questions, or criticism and
will gladly continue the discussion either on or offline.