[extropy-chat] Darwinian dynamics unlikely to apply to superintelligence

Robin Hanson rhanson at gmu.edu
Mon Jan 5 00:18:45 UTC 2004


On 1/2/2004, Eliezer S. Yudkowsky wrote:
>Perry E. Metzger wrote:

Was this on this list?  If so, when?  I didn't see this message.

>>>>the laws of physics and the rules of math don't cease to apply.
>>>>That leads me to believe that evolution doesn't stop. That further
>>>>leads me to believe that nature -- bloody in tooth and claw, ...
>>>>will simply be taken to the next level. ...
>>>You've taken one sample set, Earth, and implied from the course of
>>>evolution on Earth that it is a *law of physics* that violent
>>>conflict occur.
>>Evolution isn't something you can avoid. Deep down, all it says is
>>"you find more of that which survives and spreads itself", which is so
>>close to a tautology that it is damn hard to dispute. ...
>
>The replicator dynamics, like all math equations, generally are provable
>and hence what people would call "tautological" when applied to the real
>world.  The question is whether the variables take on any interesting
>values.  Price's Equation is a tautology, ... can apply it to pebbles on
>the seashore, for example, ... the question of whether one
>is dealing with infinitesimal quantities that obey a replicator equation,
>or large quantities; small handful of generations, or millions of
>generations; whether there is enough selection pressure, over a long
>enough period of time, to produce complex information of the sort we're
>used to seeing in biology. ... Even if blue pebbles survive some tiny
>amount better, it doesn't mean that in 20,000 years all the pebbles on the
>seashore will be intensely blue.
>Correspondingly, we can expect that any SI we deal with will exclude the
>set of SIs that immediately shut themselves down, and that whichever SI we
>see will be the result of an optimization process that was capable of
>self-optimization and preferred that choice.  But this does not imply that
>any SI we deal with will attach a huge intrinsic utility to its own survival.
>If you have an optimization system, ... like the expected utility equation,
>then, ... instrumental expected utility for the continued operation of an
>optimization system similar to the one doing the calculation, ...
>Similarly, ... we should expect that optimization process to
>optimize all available matter, ... they will *all* choose to absorb all
>nearby matter.  ... most any optimization process ... defend itself from
>a hostile optimization process - as an instrumental utility. ...
>And finally, there is no reason to suppose that the process whereby SIs
>absorb matter, optimize matter, or in other ways do things with matter,
>would create subregions with (a) large heritable changes in properties,
>that (b) correlate to large differences in the rate at which these regions
>spread or transform other matter, and that (c) this process will continue
>over the thousands or millions of generations that would be required for
>the natural selection dynamic to produce optimized functional complexity.
>This last point is particularly important in understanding why replicator
>dynamics are unlikely to apply to SIs.  At most, we are likely to see one
>initial filter in which SIs that halt or fence themselves off in tiny
>spheres are removed from the cosmic observables.  Almost any utility
>function I have ever heard proposed will choose to spread across the
>cosmos and transform matter into either (1) *maximally high-fidelity
>copies* of the optimization control structure or (2) configurations that
>fulfill intrinsic utilities.  If the optimization control structure is
>copied at extremely high fidelity, there are no important heritable
>differences for natural selection to act on.  If there were heritable
>differences, they are not likely to covary with large differences in
>reproductive fitness, insofar as all the optimization control structures
>will choose equally to transform nearby matter. ...
>Anyway, there's a heck of a difference between natural selection *building
>*a goal system from scratch*, like where humans come from, and applying an
>anti-suicide filter to the set of SIs that are likely to pop up from
>ancestral civilizations (mostly the result of runaway recursive
>self-improvement, I expect, perhaps a Friendlyoid SI here and there if
>someone in the ancestral civilization was implausibly competent). ...
>Replicator dynamics assume a (large, frequent) death rate.  If
>optimization processes compete to absorb *available* resources but hang on
>permanently to all resources already absorbed, the replicator dynamics are
>not iterated across thousands of generations.
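
[The quoted argument about selection pressure and generation counts can be
made concrete with a toy discrete replicator model.  This is my own
illustration, not anything from either post; the fitness values and
generation counts are made up for the sake of the example.]

```python
def select(p, s, gens):
    """Frequency of a favored variant after `gens` rounds of
    fitness-proportional replacement (discrete replicator dynamics).
    The favored type has relative fitness 1+s, the other type 1, so
    mean fitness each round is 1 + p*s."""
    for _ in range(gens):
        p = p * (1 + s) / (1 + p * s)
    return p

# A tiny advantage over few generations barely moves the frequency --
# the "blue pebbles" case: weak selection does not imply takeover.
weak = select(0.5, 0.001, 200)

# Strong, sustained selection over thousands of iterated generations
# is what drives a variant to fixation.
strong = select(0.5, 0.05, 5000)
```

The model also shows why the death rate matters: the update above only
applies while lineages are actually being replaced each round.  If absorbed
resources are held permanently, the loop effectively stops after the first
pass and frequencies freeze wherever they stood.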

The general question of how much we can expect variation and selection to
determine the nature of the future is extremely important, so I'm sorry
I didn't see more follow-up to this post.  But I have a lot of trouble
figuring out where you (Eliezer) are coming from here.

Let's see, if there are lots of "SIs" that pop up from ancestral
civilizations, we might expect variation and selection among them.  You
seem to be arguing that there won't be enough of them varying enough
over time for this to happen much, at least within the posited class of
SIs that are maximally capable and quickly grab all the resources they
can, until they run into a (by assumption equally capable) neighbor, at
which point they make peace with that neighbor.  If so, the distribution
of what happens at various places in the future would be largely
determined by the distribution of preferences that SIs begin with.
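
This scenario can be sketched as a toy model (my illustration, with
invented parameters, not anything from the original posts): equally
capable SIs expanding at the same speed partition space purely by where
they start, so no selection among their preferences ever operates.

```python
import random

def partition_ring(n_cells, seeds):
    """Seeds expand outward at equal speed on a ring of cells; each cell
    goes to whichever seed reaches it first, i.e. the nearest seed by
    ring distance (ties broken by lowest seed index)."""
    owner = []
    for cell in range(n_cells):
        dists = [min(abs(cell - s), n_cells - abs(cell - s)) for s in seeds]
        owner.append(dists.index(min(dists)))
    return owner

random.seed(0)
seeds = sorted(random.sample(range(1000), 5))  # 5 SIs at random positions
owner = partition_ring(1000, seeds)
territory = [owner.count(i) for i in range(len(seeds))]
```

Every seed ends up with a nonzero share fixed entirely by the initial
spacing; the "distribution of what happens" is the distribution of
starting points (here standing in for starting preferences), with no
variation-and-selection dynamic among the expanders.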

It seems to me that your key assumption is one of very cheap defense:
once one SI has grabbed some resources, you posit that there is little
point in another SI, or even a large coalition of them, trying to take
those resources away.  Given this assumption, I suppose the rest of your
scenario might plausibly follow, but I'm not sure why you believe it.



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 



