[extropy-chat] What Human Minds Will Eventually Do

Jef Allbright jef at jefallbright.net
Fri Jun 30 16:20:15 UTC 2006


On 6/29/06, Lee Corbin <lcorbin at tsoft.com> wrote:
> But it's so *hard* to extrapolate from non-human viewpoints :-)
> At least for me (I'm less sure about you!)

That's an interesting statement.  On the surface it may seem obvious
and commonsensical, but it carries a hidden assumption.  Consider this
alternative and ask whether anything substantial is missing:  "But
it's so hard to extrapolate non-human behavior."

I find that we can quite effectively extrapolate (predict) the
behavior of dogs, apes, spiders, etc. Tell me, human, what is this
essential "viewpoint" of which you speak?


>
> More seriously, I totally agree that we shouldn't over assume
> that our own values will predominate; indeed, Darwin has to
> remain the best guide. As an example, recall the SF stories
> and movies in which it was just *assumed* that more advanced
> creatures would be benevolent, would have "risen above" our
> lowly morals. But in the end, one must ask, what sorts of
> algorithms will dominate the computronium of the far future?
> And the answer need not be too bleak: after all, Earth's
> currently most advanced life form is rather altruistic.

Lee, while I generally agree with your point here, for the sake of
clarity in this sort of discussion we might do well to abandon the
term "altruistic" as it is deeply tied to the irrational behavior of
an agent putting the good of others over its own (within a given
context.)

Altruism certainly does exist, in the form of evolved programming that
causes individuals to act to their local detriment for the good of
their larger group (or some proxy), but in our discussions on the
Extropy list we are more often interested in "enlightened
self-interest", dynamics of cooperation/synergy over increasing scope,
or superrationality.
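
To make "superrationality" concrete for anyone unfamiliar with the
term, here is a minimal sketch in Python (the payoff numbers are
illustrative only): in a symmetric Prisoner's Dilemma the conventional
best reply is always to defect, while agents who know they reason
identically compare only the symmetric outcomes and so both cooperate.

    PAYOFF = {  # row player's payoff for (my_choice, opponent_choice)
        ("C", "C"): 3,  # mutual cooperation
        ("C", "D"): 0,  # I cooperate, opponent defects
        ("D", "C"): 5,  # I defect, opponent cooperates
        ("D", "D"): 1,  # mutual defection
    }

    def best_reply(opponent_choice):
        # Conventional self-interest: best reply to a fixed opponent choice.
        return max("CD", key=lambda me: PAYOFF[(me, opponent_choice)])

    def superrational_choice():
        # Superrationality: identical reasoners know they will choose alike,
        # so only the symmetric outcomes (C, C) and (D, D) are on the table.
        return max("CD", key=lambda me: PAYOFF[(me, me)])

    assert best_reply("C") == "D" and best_reply("D") == "D"  # defection dominates
    assert superrational_choice() == "C"  # yet both cooperating earns 3, not 1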


> (It bears repeating that humans engage in violence far, far
> less per observed hour than does any other primate.)


And it may bear repeating that this trend is not based on increasing
niceness or goodness, but rather on increasing awareness of
positive-sum behaviors that work over increasing scope.  We're moving
away from focusing on ends (that person/tribe is our enemy) and toward
effective principles of growth (that person/tribe may eventually
become a McDonald's franchise).

- Jef


