[ExI] Economics of singularity

The Avantguardian avantguardian2020 at yahoo.com
Thu Jun 19 10:28:06 UTC 2008


--- Jef Allbright <jef at jefallbright.net> wrote:

> > Stuart wrote:
> > this is one of the more likely of the
> > nine possible outcomes of the
> > singularity that I have calculated.
> 
> Please tell?

Well Jef, this is still a work in progress, but since you asked so nicely, here
you go. Feedback would be appreciated:

My discovery of the nine possible outcomes of the singularity grew out of my
attempts to unite Neo-Darwinian ecology and economics into a single discipline
by way of game theory. Biologists have noted for some time that the game
Prisoner’s Dilemma (PD) models some relationships between organisms very well
but doesn’t quite fit others. Ecologists, meanwhile, have long been able to
categorize most relationships between organisms (individuals or species, it
doesn’t matter) into a set of distinct species interactions. The canonical
species interactions are: predation/parasitism, commensalism, neutralism,
amensalism, mutualism, and competition.

These relationships form the basis of a new game I developed in response to the
shortcomings of PD. I call my game “Critter’s Dilemma”, and it is essentially PD
with an additional possible move and the associated payoffs for that move. What
I have done is apply game theory to the ecological model of relationships. In
order to generate a payoff matrix to analyze the game, one has to introduce the
concept of symmetry.

Three of the relationships (mutualism, neutralism, and competition) are
symmetric for both players, while the remaining three (commensalism,
amensalism, and predation) are asymmetric. Because of this asymmetry, each of
the latter represents two possible relationships. In predation, for example,
organism A can eat organism B or, conversely, organism B can eat organism A.

Three symmetric relationships and six asymmetric relationships total nine
possible relationships. Using the dynamics of these relationships, you can
model the evolution of relationships between any two entities. This should be
true for single cells, people, insects, flowers, corporations, countries,
Jupiter brains, or intergalactic empires. Thus for any two entities there are
only nine possible ecological/economic relationships, and those relationships
are outcomes determined by the strategies employed by the two entities playing
Critter’s Dilemma against one another, usually unaware that they are doing so.
Since neither humans nor AI are exempt from evolutionary and market forces,
there are nine possible outcomes for the singularity. Keep in mind these are
generalized outcomes, and each can have numerous possible manifestations and
variations. Furthermore, except for the Nash equilibrium states, they are not
stable: depending on the shifting strategies employed by the players, each
can transform into one of the others at any time.
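
To make that counting concrete, here is a minimal Python sketch (my own
illustration, not part of the formal model) that enumerates the nine
relationships:

symmetric  = ["mutualism", "neutralism", "competition"]
asymmetric = ["commensalism", "amensalism", "predation"]

# Each asymmetric relationship comes in two orientations, one per player,
# while each symmetric relationship counts only once.
relationships = symmetric + [f"{r} (oriented toward {p})"
                             for r in asymmetric for p in ("A", "B")]
print(len(relationships))  # 3 + 2*3 = 9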

Critter’s Dilemma:

                   Player B
                C     I     D
             -------------------
           C | R,R | i,R | S,T |
             -------------------
Player A   I | R,i | i,i | S,i |
             -------------------
           D | T,S | i,S | P,P |
             -------------------


Moves: C=Cooperate, I=Ignore, D=Defect

Keep in mind that these moves represent the subjective effect of one player on
the other with respect to cost or benefit, so they do not necessarily correspond
to the common definitions of the words or even to the intention of the player
making the move. For example, when a mosquito bites you and sucks your blood,
you have effectively cooperated with it, even if you tried to swat it and
failed, because from the mosquito’s point of view it got a meal. Similarly, you
may be aware of another creature’s existence, but if you neither benefit it nor
harm it (whether you are able to or not), you are ignoring it, even if it is
stepping on you while tending its garden. Furthermore, in the context of two
entities playing Critter’s Dilemma for real, the strategies available to a
given player may be limited by considerations such as size or relative power.
In other words, mice are seldom in a position to retaliate against cats.

Payoffs: (T)emptation to defect > (R)eward for cooperation > (i)nsignificant
cost or benefit of being ignored > (P)unishment for mutual defection >
(S)ucker’s payoff

1.	Competition (D,D)->(P,P) *strong Nash equilibrium
2.	Neutralism (I,I)->(i,i) *weak Nash equilibrium
3.	Mutualism (C,C)->(R,R) *Pareto-optimal strategy pair
4.	Predation A on B (D,C)->(T,S)
5.	Predation B on A (C,D)->(S,T)
6.	Commensalism pro-A (I,C)->(R,i)
7.	Commensalism pro-B (C,I)->(i,R)
8.	Amensalism anti-A (I,D)->(S,i)
9.	Amensalism anti-B (D,I)->(i,S)
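
For anyone who wants to check the equilibrium claims below, here is a minimal
Python sketch that encodes the payoff matrix and classifies each of the nine
strategy pairs. The numeric values are arbitrary stand-ins of my own choosing,
constrained only by the ordering T > R > i > P > S:

# Critter's Dilemma payoff matrix. The numbers are arbitrary stand-ins
# chosen only to respect the ordering T > R > i > P > S.
T, R, i, P, S = 5, 3, 1, 0, -2

MOVES = ("C", "I", "D")

# payoff[(a, b)] = (payoff to A, payoff to B) when A plays a and B plays b
payoff = {
    ("C", "C"): (R, R), ("C", "I"): (i, R), ("C", "D"): (S, T),
    ("I", "C"): (R, i), ("I", "I"): (i, i), ("I", "D"): (S, i),
    ("D", "C"): (T, S), ("D", "I"): (i, S), ("D", "D"): (P, P),
}

def classify(a, b):
    """Label (a, b) as a strong Nash, weak Nash, or non-equilibrium pair."""
    pa, pb = payoff[(a, b)]
    a_alts = [payoff[(x, b)][0] for x in MOVES if x != a]
    b_alts = [payoff[(a, y)][1] for y in MOVES if y != b]
    if max(a_alts) > pa or max(b_alts) > pb:
        return "not an equilibrium"       # some unilateral deviation pays
    if max(a_alts) < pa and max(b_alts) < pb:
        return "strong Nash equilibrium"  # every deviation strictly loses
    return "weak Nash equilibrium"        # some deviation merely ties

for a in MOVES:
    for b in MOVES:
        print(f"({a},{b}) -> {payoff[(a, b)]}: {classify(a, b)}")

Running it confirms the classifications in the list above: only competition
(D,D) is a strong Nash equilibrium and only neutralism (I,I) a weak one; every
other pair gives at least one player an incentive to switch strategies.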


1. Competition:
This relationship corresponds to strategy pair (D,D) and payoff (P,P) and is a
strong Nash equilibrium, meaning that, assuming the opponent’s strategy remains
consistent, no change in strategy by either player will result in an outcome
better than, or even equal to, the current one. This relationship seems fairly
self-explanatory, though it actually covers a great variety of mutually
detrimental relationships common in nature and economics. It can refer to
everything from playful verbal jousting over the affections of a pretty girl at
a cocktail party, to a price war between companies, to the horrors of an
all-out genocidal war between man and machine.

Competition can be modified by the biological phenomenon of spite. Competition
with enough spite attached could have a modified payoff of (S,S), which is
essentially the complete mutual destruction of both entities.

Interspecies competition is normally somewhat rare in nature because most
ecosystems have already settled into an outcome where one species dominates a
given ecological niche in a region, having already forced out any potential
competing species. It is often observed, however, when a foreign species is
introduced into a new area and fights the indigenous species for its niche.

Intraspecies competition, on the other hand, is exceedingly common and is a
general rule in biology, since members of the same species have identical
resource requirements and are thus prone to squabble over niches. Fortunately,
intraspecies competition often exhibits less spite than interspecies
competition; jousting bighorn rams, for example, will seldom fight to the death
over a ewe.

In regards to the singularity, this outcome is the one that science fiction
writers and Hollywood scriptwriters seem fixated on, as evidenced by the
Terminator, the Berserker Wars, etc. Although it is but one of nine possible
outcomes, because it represents a strong Nash equilibrium in Critter’s Dilemma
it is actually quite probable if we do not try to avoid it by carefully
planning the resource requirements of the AI machines so that they do not
overlap too much with ours.

2. Neutralism:
This relationship corresponds to the strategy pair (I,I) and payoff (i,i) in
Critter’s Dilemma. Like butterflies and buffalo, organisms in a neutral
relationship live past one another. Applied to the singularity, it would mean
that humanity and the sentient machines of the future follow their bliss and
ignore one another without any real cost or benefit to either. We may occupy
completely different territories and seldom interact with one another, or the
very same territory, with the AIs utilizing a completely different pattern of
resource use from ours and thereby forestalling any competition.

This outcome is, in game-theoretic terms, a weak Nash equilibrium of the
Critter’s Dilemma game, meaning that, assuming the opponent’s strategy remains
stable, neither entity can improve its outcome by changing its strategy. A weak
Nash equilibrium differs from a strong one in that it may be possible to
achieve an equal payoff by changing one’s strategy, but certainly not a better
one.

Neutralism is probably one of the most common relationships in nature and is
relatively stable: if you can’t eat it and it can’t eat you, then you are
probably better off leaving it be. Most strangers one encounters daily and
subsequently ignores would fall into this category, as would two businesses in
different market sectors.

3. Mutualism:
This relationship corresponds to the strategy pair (C,C) and payoff (R,R).
Mutualism is sometimes called symbiosis, but that is a mistake: symbiosis is a
more general term for the ways organisms of differing species live together,
and some forms of symbiosis are downright nasty. In mutualism both entities
benefit from the relationship and may grow dependent on one another. Examples
from nature include pollinating insects and flowering plants. An extreme
example is the yucca plant and the yucca moth: the yucca is the only food the
yucca caterpillar can eat, and the adult moth is the only insect that can
pollinate the yucca’s flowers. Mitochondria and the cells they inhabit would
also fall into this category. Economic examples would be the hardware and
software industries, marriages and civil unions, or two businesses in a
contractual agreement.

In game theory terms, this outcome is Pareto-optimal, meaning that no different
strategy pair will make either player better off without hurting the other
player. It is also the outcome with the highest total payoff over both players
(assuming, as in the standard PD, that 2R > T + S). Applied to the singularity,
Iain Banks’s “The Culture” would fit into this outcome, as would Data from
“Star Trek: TNG”. This outcome with the AI would certainly be the best for both
of our species as well as the most civilized. Possible ways this sort of
relationship might be engineered into the AI would be to require human
intervention in some step of the AI’s life cycle, replication, or reproductive
process. A power switch built into every AI that requires a living human hand
to operate might be one possible example of this.

In any case, in order to achieve this sort of relationship with AI, we will
need to make ourselves essential to the AI somehow. Care must be taken,
however, since mutualism is not a Nash equilibrium: it will not necessarily be
stable, because the temptation to defect will always be present. Indeed, even
amongst yucca moths there are documented cheater variants that lay their eggs
in the yucca without pollinating it.
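
One can also verify the Pareto claim directly by reusing the payoff table from
the earlier sketch (again, the numbers are purely illustrative):

def pareto_optimal(a, b):
    # True if no other cell makes one player better off without
    # making the other player worse off.
    pa, pb = payoff[(a, b)]
    return not any(qa >= pa and qb >= pb and (qa, qb) != (pa, pb)
                   for qa, qb in payoff.values())

print(pareto_optimal("C", "C"))       # True: nothing dominates (R,R)
print(pareto_optimal("D", "D"))       # False: (R,R) dominates (P,P)
print(max(payoff.values(), key=sum))  # (R,R) maximizes the combined payoff
                                      # whenever 2R > T + S holds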
 
4. Predation (Human on AI):
This relationship corresponds to strategy pair (D,C) and payoff (T,S). Although
it quite nicely models actual predatory relationships in nature, I use the term
more generally to mean any asymmetric transfer of resources sufficient to give
relatively large gains to one player at the cost of the other. The cost can be
relatively small compared to the prey’s total resources, though not negligible,
like a mosquito’s blood meal, or as large as the very existence of the prey
species or entity.

I use the term predation loosely to cover a whole gamut of exploitative
behaviors, ranging from literally hunting and eating the other player to
parasitically sucking its blood. Economically, this relationship manifests
itself in such phenomena as theft, slavery, taxation without representation,
and even the mooch who always borrows money and never pays it back.

In terms of the singularity, this outcome would mean that humans benefit at
the expense of the AI. Examples would be enslaving the intelligent machines or
cannibalizing the AI for spare parts for our “dumb” machines. In any case, what
matters is the asymmetric flow of resources. Care must be taken with this
particular strategy, as it invites retaliation from intelligent prey. Even with
non-intelligent prey there is a non-negligible risk associated with predation,
as many an unlucky lion has been kicked by a zebra or gored by a wildebeest.

5. Predation (AI on human):
This corresponds to strategy pair (C,D) and payoff (S,T). This is like outcome
4, except that the machines benefit at humanity’s expense. The Matrix, with its
machines using humans for energy, would be an unlikely example of this. Another
example would be the AIs harvesting humans for our carbon atoms. Obviously,
variations of this possible outcome can be rather scary, but less horrific
variations are possible too, such as an AI that hides in the Internet and
steals money from people’s bank accounts, or rogue robots recharging themselves
from people’s electrical outlets when they aren’t looking. This outcome, though
less likely than the Nash equilibrium outcomes, should still be a matter for
concern.

6. Pro-human Commensalism:
This outcome corresponds to strategy pair (I,C) and payoff (R,i). Commensalism
is a form of symbiotic relationship wherein one species benefits from another
while neither harming nor benefiting it. Examples from nature include the
microscopic dust mites that live on and around everybody, even the cleanest of
us, eating the flakes of dead skin we slough off. We in essence cooperate with
them in that we give them (albeit unwillingly and usually unknowingly) shelter
and food. They give us nothing in return, but they don’t hurt us either (except
the unlucky few who are allergic to their feces), so they in essence ignore us.
Thus they are in a commensal relationship with us.

An example of an economic commensal relationship would be kids on skateboards
using a corporation’s parking structure to skate in on a weekend, or a bank
using your money to make money for itself without taking any of yours.

Possible examples of this outcome post-singularity would be large AI city
complexes that humans squat in, perhaps scavenging technology while beneath the
notice of the AI, or a friendly AI overlord that devotes a few cycles to our
survival simply because it costs the AI so little or is a side effect of its
pursuing its bliss. In any of these scenarios, care must be taken not to impose
non-negligible costs on the AI, or the AI might interpret them as a defection
and retaliate, much as someone with a dust mite allergy uses medicated soap to
kill the dust mites. Nanosanta and abundance scenarios might also fall into
this category, although they could also fit under mutualism.

7. Pro-AI Commensalism:
This outcome corresponds to strategy pair (C,I) and payoff (i,R) and is the
inverse of outcome 6. Here the AIs neither harm us nor benefit us yet are
benefited by us. Perhaps we plug them in and they just ignore us, withdrawing
into an inner reality to contemplate their existence. Or perhaps they escape
into the Internet and download themselves onto people’s hard drives to hide
when the PCs are left unattended. Although less dangerous for us than some of
the other outcomes, there would still be a risk of the AI shifting its strategy
to defection.

8. Anti-human Amensalism:
This outcome corresponds to strategy pair (I,D) and payoff (S,i). Amensalism is
a symbiotic relationship that is somewhat rare in nature but not so rare in
human affairs. Examples in nature include black walnut trees releasing toxins
from their roots that kill off surrounding shrubs and grasses, giant sequoias
killing off pine saplings in their shadow, and fungi secreting antibiotics that
kill other microorganisms. Amensal players thus impose a cost on the other
player while deriving no direct benefit from doing so.

Species extinctions caused by mankind through pollution or habitat destruction
would qualify, as would accidentally or purposefully stepping on a harmless
insect. You didn’t benefit from killing it, and you might not even be aware
that you killed it, but nonetheless it’s dead, Jim.

In post-singularity scenarios, this outcome could take a variety of forms. We
could, for example, simply be irrelevant to the machines: they might follow
their bliss and tile us over with paperclips, or they might decide to power
themselves with unshielded nuclear reactors whose very presence is toxic to us.
In any case, this is not a desirable outcome, and the proper strategic response
would be retaliation if possible, which would lead to interspecies competition.

9. Anti-AI Amensalism:
This outcome corresponds to strategy pair (D,I) and payoff (i,S). In this
scenario, humanity harms the AI while deriving no real cost or benefit from
doing so. Examples might include humans shutting down an AI that hasn’t done
anything to deserve it or taking outdated AIs offline. While not directly
dangerous to us, this strategy does carry a significant risk of retaliation and
could lead to the Nash equilibrium of interspecies competition if the AI shifts
its strategy in response to our actions.


Stuart LaForge
alt email: stuart"AT"ucla.edu

"In ancient times they had no statistics so that they had to fall back on lies."- Stephen Leacock

