avantguardian2020 at yahoo.com
Sun Apr 10 10:57:32 UTC 2005
--- Technotranscendence <neptune at superlink.net> wrote:
> Put schematically, the view was that if you appease
> X when X does Y,
> then X will do more of Y or Y-like things in the
> future. What I'm
> questioning is whether inaction really leads to the
> purported bad
> outcomes. In other words, when X does Y -- and Y is
> something you
> disapprove of -- and Z does nothing about it, does
> this make it more
> likely X will do Y again or Y-like things again?
> Further does it mean X
> will think Z will not stop X from doing much worse
> things in the future?
> The conventional view is that if Z doesn't react, X
> will keep testing
> the limits. I.e., there's a high cost for Z's
> inaction. Along with
> this view goes the policy prescription of acting
> sooner rather than
> later against a given X. (Naturally, in a world
> full of real and
> potential Xs, this would mean constant involvement
> everywhere for any
> Z.) Is there empirical evidence to back this claim?
Actually, game theorists and evolutionary
theorists have both run computer simulations such as
hawks vs. doves, the prisoner's dilemma, and
genetic-algorithm free-for-alls in simulated
ecosystems. The general consensus is that certain
strategies dominate under certain environmental
conditions.
In a society of doves (never fights), for
example, any hawk (always fights) who happens to
enter the scene will have an overwhelming fitness
advantage and will reproduce like mad, at least until
the population of hawks gets high enough that there is
a substantial chance of mutual combat, which ends up
lowering the fitness of hawk behavior.
This leads to an interesting phenomenon. In a
society that is predominantly hawks, it's the doves
that have the advantage, since when two doves meet
they do not fight. So they tend to be healthier than
the hawks, who are always engaging in fitness-lowering
fights when they meet.
What ends up happening is that a population
equilibrium is reached: a steady-state proportion of
hawks to doves. This equilibrium is best modelled as a
Nash equilibrium (in evolutionary terms, an
evolutionarily stable strategy): at the equilibrium
mix, both strategies earn the same return for expended
resources, and neither side can do better by
unilaterally switching. Both sides are following the
most productive strategy they can in light of their
opponent's strategy. See Richard Dawkins's "The Selfish
Gene" for an excellent and readable review of this.
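The equilibrium mix can be sketched numerically. Here is a minimal replicator-dynamics simulation, assuming the textbook hawk-dove payoffs (resource worth V, fight cost C, with C > V; the specific numbers are mine, not from the post). Theory predicts the hawk fraction settles at V/C:

```python
# Hawk-dove game with resource value V and fight cost C (illustrative numbers).
V, C = 2.0, 6.0  # fighting costs more than the prize, so a mixed equilibrium exists

def payoffs(p_hawk):
    """Expected payoff to a hawk and to a dove when a fraction p_hawk
    of the population plays hawk (standard hawk-dove payoff matrix)."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V  # fight a hawk, or take all from a dove
    dove = p_hawk * 0.0 + (1 - p_hawk) * V / 2      # retreat from a hawk, or share with a dove
    return hawk, dove

# Replicator-style update: the hawk fraction grows while hawks beat the population average.
p = 0.01  # a few hawks invade a society of doves
for _ in range(20000):
    h, d = payoffs(p)
    mean = p * h + (1 - p) * d
    p += 0.01 * p * (h - mean)

print(round(p, 3))  # settles at the predicted V/C = 1/3
```

Starting from almost all doves, hawks spread rapidly, then the feud cost bites and the mix stabilizes, exactly the dynamic described above.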
More recently game theorists have been using a model
called the "bourgeois dove" (Maynard Smith's
"bourgeois" strategy). This is a dove that acts like a
hawk with regard to its own resources but like a dove
with regard to the resources of the other virtual game
players.
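The ownership rule amounts to a single conditional; a sketch (the function name is mine):

```python
def bourgeois(is_owner: bool) -> str:
    """Bourgeois strategy: escalate over resources you hold, defer over others'."""
    return "hawk" if is_owner else "dove"

print(bourgeois(True), bourgeois(False))  # hawk dove
```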
A similar model is the prisoner's dilemma, where
you and a partner in crime are caught by the cops. If
neither of you rats on the other, you both do small
time, a slap on the wrist. If one rats and the other
doesn't, the rat walks free but the other guy goes
away for a long time. If you both rat, then you both
go to jail for longer than if neither of you had, but
not as long as the guy who stayed silent while his
partner ratted him out.
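The story pins down only an ordering of outcomes. One concrete assignment (the specific sentence lengths are my illustration; only their ranking matters):

```python
# Years in jail for (you, partner); numbers are illustrative, the ordering is what matters.
SENTENCE = {
    ("silent", "silent"): (1, 1),    # both do small time, a slap on the wrist
    ("rat",    "silent"): (0, 10),   # the rat walks, the other goes away for a long time
    ("silent", "rat"):    (10, 0),
    ("rat",    "rat"):    (5, 5),    # longer than mutual silence, shorter than being the sucker
}

# Whatever your partner does, ratting shortens your own sentence...
for partner in ("silent", "rat"):
    assert SENTENCE[("rat", partner)][0] < SENTENCE[("silent", partner)][0]
# ...yet mutual silence beats mutual ratting: that is the dilemma.
assert SENTENCE[("silent", "silent")][0] < SENTENCE[("rat", "rat")][0]
print("dilemma holds")
```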
In computer simulations of repeated rounds of the
prisoner's dilemma (i.e., an "algorithm" can play the
game with the same partner algorithm multiple times,
as well as with different ones), the strategy that
outcompeted the others, even much more complicated
ones, was "tit-for-tat": a simple rule of "repeat your
opponent's last move", i.e., if your opponent ratted
you out last time, rat him out this time, and if he
cooperated, you cooperate this time too. More
recently, game theorists have come up with a strategy
called "forgiving tit-for-tat" that outcompetes
everything, including the original tit-for-tat. FTFT
operates essentially as TFT except that, a small
percentage of the time, it forgives an opponent that
defected last time. This allows it to re-establish
cooperation with tit-for-tats that have been set on
retaliate by a previous opponent.
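A tiny simulation makes the point about a TFT that has been "set on retaliate". Two tit-for-tats that start out of sync lock into an endless feud, while two forgiving tit-for-tats recover (the payoff numbers and the 10% forgiveness rate are my assumptions, not from the post):

```python
import random

R, T, S, P = 3, 5, 0, 1  # reward, temptation, sucker, punishment (standard values)
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def tft(opp_last, rng):
    """Tit-for-tat: repeat your opponent's last move."""
    return opp_last

def ftft(opp_last, rng):
    """Forgiving tit-for-tat: like TFT, but forgive a defection 10% of the time."""
    if opp_last == "D" and rng.random() < 0.1:
        return "C"
    return opp_last

def average_score(strategy, rounds=200, seed=0):
    """Two copies of `strategy` play each other; player B starts 'poisoned'
    with a defection, as if set on retaliate by a previous opponent."""
    rng = random.Random(seed)
    a, b = "C", "D"
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(a, b)]
        a, b = strategy(b, rng), strategy(a, rng)  # both echo simultaneously
    return total / rounds

print(average_score(tft))   # stuck alternating forever: exactly (T + S) / 2 = 2.5
print(average_score(ftft))  # forgiveness breaks the feud, so the average is higher
```

The TFT pair never escapes the defect-cooperate cycle; the FTFT pair forgives within a few rounds and cooperates from then on.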
I guess the upshot of all this for your original
question is that yes, there is at least simulation
evidence that, given a certain environment, there is
an ideal mixed strategy in which you appease a certain
percentage of the time and retaliate at other times.
What all this means for politics I am not certain. The
models and computer simulations are horribly
simplistic compared to the many layers of intrigue
involved in international politics. My gut feeling is
that we should be acting more like bourgeois doves
that play forgiving tit-for-tat, instead of hawks that
play preemption, a very bad strategy in a world full
of hawks.
"The surest sign of intelligent life in the universe is that they haven't attempted to contact us."