[extropy-chat] Darwinian dynamics unlikely to apply to superintelligence

Eliezer S. Yudkowsky sentience at pobox.com
Mon Jan 5 02:30:27 UTC 2004


Robin Hanson wrote:
> 
> Let's see, if there are lots of "SIs" that pop up from ancestral 
> civilizations, we might expect variation and selection among them.  You
> seem to be arguing that there won't be enough of them varying enough 
> over time for this to happen much, at least within the posited class of
> SIs that are maximally capable and quickly grab all the resources they
> can, until they run into a (by assumption equally capable) neighbor,
> at which point they make peace with that neighbor.  If so, the
> distribution of what happens at various places in the future would be
> largely determined by the distribution of preferences that SIs begin
> with.

Yup.

> It seems to me that your key assumption is one of very cheap defense - 
> once one SI has grabbed some resources you seem to posit that there is 
> little point in some other SI, or even a large coalition of them,
> trying to take it from him.

I agree that this is a key assumption.  However, even if that assumption
fails, natural selection can still be barred if there is little variation
in preferences, little variation in resource-grabbing capacity, or little
correlation between the two.  Since I suspect that intelligent
optimization would use up almost all the potential variation before what
we ordinarily think of as heritable capacities had a chance to operate,
natural selection would not automatically follow even if there were
frequent combats.
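
To make the "little variation, little correlation" point concrete, here
is a rough Python sketch using the selection term of the Price equation,
Cov(w, z) / wbar, with invented numbers; z stands for a heritable
preference and w for relative resource-grabbing success.  When capacity
variation is near zero, or capacity is uncorrelated with preference, the
mean preference does not move, combats or no combats.

import random

def selection_shift(prefs, capacities):
    """Price-equation selection term for the change in mean preference."""
    n = len(prefs)
    wbar = sum(capacities) / n
    zbar = sum(prefs) / n
    cov = sum((w - wbar) * (z - zbar)
              for w, z in zip(capacities, prefs)) / n
    return cov / wbar

random.seed(0)
prefs = [random.gauss(0, 1) for _ in range(10000)]

# Capacities nearly identical (optimization has used up the variation).
caps_flat = [1.0 + random.gauss(0, 1e-6) for _ in range(10000)]
print(selection_shift(prefs, caps_flat))    # ~0: preferences untouched

# Capacities vary, but independently of preferences.
caps_uncorr = [abs(random.gauss(1, 0.5)) for _ in range(10000)]
print(selection_shift(prefs, caps_uncorr))  # ~0 in expectation

# Capacities correlated with preferences: only now does the mean shift.
caps_corr = [abs(1 + 0.5 * z + random.gauss(0, 0.1)) for z in prefs]
print(selection_shift(prefs, caps_corr))    # clearly nonzero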

> Given this, I suppose the rest of your
> scenario might plausibly follow, but I'm not sure why you believe this
> assumption.

I tend to suspect that between two similar intelligent agents, combat will 
be too uncertain to be worthwhile, will consume fixed resources, and will 
produce negative externalities relative to surrounding agents.  Let us 
assume that loss aversion (not just in the modern human psychological 
sense of aversion to losses as such, but in the sense of loss aversion 
emergent in diminishing marginal utility) does not apply, so that a 50/50 
chance of winning - which goes along with the argument of intelligent 
optimization using up variation - does not automatically rule out combat. 
However, there would still be a fixed cost of combat, probably extremely 
high; and if we assume variation in preferences, there would probably be 
negative externalities to any nearby SIs, who would have a motive to 
threaten punishment for combat.  Negotiations among SIs are, I think, out 
of my reach to comprehend - although I do have some specific reasons to be 
confused - but I still suspect that they would negotiate.  The point about 
large coalitions devouring single cells is interesting (although my 
current thoughts about SI negotiations suggest that *the choice to form a 
predatory coalition* might be viewed as tantamount to starting a war).  If 
we do have coalitions eating smaller cells, then we have a filterish 
selection pressure that rules out all unwillingness or hesitation to form 
coalitions - not necessarily natural selection unless there is heritable 
variation, which correlates, etc.  But beyond that point, it would 
essentially amount to gambling, more than combat - will you be part of the 
latest coalition, or not?  Something like a tontine, perhaps, until there 
are only two entities left standing?  But where does the non-random 
selection come in?  What does it correlate to?
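
For the fixed-cost point, a back-of-the-envelope sketch with invented
numbers, assuming symmetric stakes and linear utility in resources (so
no loss aversion from diminishing marginal utility): at 50/50 odds the
gamble over the prize itself is zero in expectation, and any fixed cost
of combat or neighbor-imposed punishment for the externality pushes the
expected value negative, so negotiation dominates.

def combat_ev(p_win, prize, fixed_cost, externality_penalty):
    """Expected resource change from starting a fight with a comparable
    SI, where the loser forfeits an equal-sized stake."""
    gamble = p_win * prize - (1 - p_win) * prize
    return gamble - fixed_cost - externality_penalty

# 50/50 odds, equal stakes: the gamble contributes nothing, and the
# fixed cost plus the punishment make the whole venture a loss.
print(combat_ev(p_win=0.5, prize=100.0,
                fixed_cost=20.0, externality_penalty=10.0))   # -30.0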

The stringency of the conditions for natural selection as we know it to 
apply is not widely appreciated (a toy sketch after this list tries to 
make this concrete); you need not just limited resources, but limited 
resources
AND frequent death to free up resources
AND multiple phenotypes with heritable characteristics
AND good fidelity in transmission of heritable characteristics
AND substantial variation in characteristics
AND substantial variation in reproductive fitness
AND persistent correlation between the variations
AND this is iterated for many generations
THEN you have a noticeable amount of selection pressure
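
A toy generational model in Python (mechanics and numbers invented for
illustration): limited resources and death are modeled as a fixed pool
of slots that is completely recycled each generation, traits are
transmitted with adjustable fidelity, and reproduction is weighted by a
fitness that may or may not correlate with the trait.  The mean trait
only climbs when every condition holds; turn off heritability or the
trait-fitness correlation and there is no systematic rise.

import random

def run(generations=200, pop=200, heritable=True, correlated=True,
        fidelity=0.95, seed=0):
    rng = random.Random(seed)
    traits = [rng.gauss(0, 1) for _ in range(pop)]   # initial variation
    for _ in range(generations):
        # Reproductive fitness, correlated with the trait or pure luck.
        fitness = [max(1e-6, 1 + t) if correlated else 1.0 for t in traits]
        # All slots freed by death, refilled by weighted reproduction.
        parents = rng.choices(traits, weights=fitness, k=pop)
        traits = []
        for p in parents:
            child = p if heritable else rng.gauss(0, 1)
            if rng.random() > fidelity:              # transmission noise
                child += rng.gauss(0, 1)
            traits.append(child)
    return sum(traits) / pop

print(run())                    # all conditions hold: mean trait climbs
print(run(correlated=False))    # fitness uncorrelated: no systematic rise
print(run(heritable=False))     # no heritability: no systematic rise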

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



