[extropy-chat] Darwinian dynamics unlikely to apply to superintelligence

Robin Hanson rhanson at gmu.edu
Mon Jan 5 18:17:12 UTC 2004


On 1/4/2004, Eliezer S. Yudkowsky wrote:
>>It seems to me that your key assumption is one of very cheap defense - 
>>once one SI has grabbed some resources you seem to posit that there is 
>>little point in some other SI, or even a large coalition of them, trying 
>>to take them from it.
>
>I agree that this is a key assumption.  However, the assumption can fail 
>and still bar natural selection, if there is little variation in 
>preferences or little variation in resource-grabbing capacity or little 
>correlation between the two.

Variation in preferences alone, even with no variation in the ability to 
grab resources, can produce evolutionary selection: preferences that favor 
acquiring and retaining resources will be selected for, even if they are 
not correlated with any other feature.
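This point can be illustrated with a toy replicator simulation (my own sketch, not from the thread; all numbers and names are illustrative assumptions): every agent has identical income, i.e. identical resource-grabbing ability, and agents differ only in one preference, the fraction of income they reinvest in making copies of themselves. Under a crowding limit, the reinvestment preference itself is selected for.

```python
import random

# Toy replicator sketch (illustrative assumptions throughout): agents
# differ only in "reinvest", the fraction of income spent on copying
# themselves; every agent has the same resource-grabbing ability (INCOME).
random.seed(0)

INCOME = 1.0     # identical income for all agents (assumed)
COPY_COST = 2.0  # resources needed to spawn one copy (assumed)
POP_CAP = 1000   # crowding limit that makes selection bite (assumed)

# Start with a uniform spread of reinvestment preferences.
agents = [{"reinvest": random.random(), "bank": 0.0} for _ in range(200)]

for generation in range(200):
    offspring = []
    for a in agents:
        a["bank"] += INCOME * a["reinvest"]  # the rest is "consumed"
        while a["bank"] >= COPY_COST:
            a["bank"] -= COPY_COST
            offspring.append({"reinvest": a["reinvest"], "bank": 0.0})
    agents.extend(offspring)
    if len(agents) > POP_CAP:                # random crowding deaths
        agents = random.sample(agents, POP_CAP)

mean_pref = sum(a["reinvest"] for a in agents) / len(agents)
print(f"mean reinvestment preference after selection: {mean_pref:.2f}")
```

Because lineages with higher reinvestment grow faster while deaths are random, the population's mean preference climbs toward the maximum, even though no agent is any better at grabbing resources than any other.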

>>Given this, I suppose the rest of your scenario might plausibly follow, 
>>but I'm not sure why you believe this assumption.
>
>I tend to suspect that between two similar intelligent agents, combat will 
>be too uncertain to be worthwhile, will consume fixed resources, and will 
>produce negative externalities relative to surrounding agents.  Let us 
>assume that loss aversion (not just in the modern human psychological 
>sense of aversion to losses as such, but in the sense of loss aversion 
>emergent in diminishing marginal utility) does not apply, so that a 50/50 
>chance of winning - which goes along with the argument of intelligent 
>optimization using up variation - does not automatically rule out 
>combat.  However, there would still be a fixed cost of combat, probably 
>extremely high; and if we assume variation in preferences, there would 
>probably be negative externalities to any nearby SIs, who would have a 
>motive to threaten punishment for combat.  Negotiations among SIs are, I 
>think, out of my reach to comprehend - although I do have some specific 
>reasons to be confused - but I still suspect that they would 
>negotiate.  The point about large coalitions devouring single cells is 
>interesting (although my current thoughts about SI negotiations suggest 
>that *the choice to form a predatory coalition* might be viewed as 
>tantamount to starting a war).  If we do have coalitions eating smaller 
>cells, then we have a filterish selection pressure that rules out all 
>unwillingness or hesitation to form coalitions - not necessarily natural 
>selection unless there is heritable variation, which correlates, etc.  But 
>beyond that point, it would essentially amount to gambling, more than 
>combat - will you be part of the latest coalition, or not?  Something like 
>a tontine, perhaps, until there are only two entities left standing?  But 
>where does the non-random selection come in?  What does it correlate to?

That is a lot of words, but it is still hard to follow.  We must 
distinguish assumptions about the immediate physical consequences of 
combat from assumptions about the resulting behavioral equilibrium; in 
the above you seem to mix the two.  I was trying to paraphrase you in 
terms of assumptions about physical consequences.
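The fixed-cost-of-combat point in the quoted passage can be made concrete with a toy expected-value comparison (my numbers are illustrative assumptions, not from the thread): two evenly matched agents contest their combined resource pool, combat is a 50/50 winner-take-all gamble that also burns a fixed cost, and a negotiated even split avoids that cost.

```python
# Toy payoff comparison (illustrative numbers, not from the thread).
# Two evenly matched agents each hold resources R and contest the pair's
# combined pool.  Combat is a 50/50 winner-take-all gamble that destroys
# a fixed cost C; negotiation splits the pool evenly at no cost.

R = 100.0   # each agent's current resources (assumed)
C = 30.0    # fixed cost destroyed by combat (assumed)

pool = 2 * R

# Expected resources per agent if they fight: half the time you take the
# whole (diminished) pool, half the time you get nothing.
ev_combat = 0.5 * (pool - C) + 0.5 * 0.0

# Resources per agent under a negotiated even split.
ev_negotiate = pool / 2

print(f"expected payoff from combat:  {ev_combat:.1f}")
print(f"negotiated split payoff:      {ev_negotiate:.1f}")
```

For risk-neutral agents each side's combat expectation falls short of the even split by exactly C/2, so any split that each side prefers to its combat expectation leaves both better off - which is the physical-consequences assumption under which negotiation dominates, separate from what equilibrium behavior actually emerges.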



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 



