[extropy-chat] In the Long Run, How Much Does Intelligence Dominate Space?

Lee Corbin lcorbin at tsoft.com
Tue Jul 4 18:36:26 UTC 2006


This is a continuation of the July 1 / July 2 discussion under
the subject line "What Human Minds Will Eventually Do".

There are three reasons for changing the subject line. One, the
term "Human" created confusion: sometimes Eugen and Russell
and the others meant the (more interesting) metaphorical
sense of us and our mind children or inheritors, even though
when I initiated the thread I meant to be talking only about
*human* activities in the far future as t --> infinity (e.g.
earlier primitive versions of the ruling >H types).

The second reason: we began a wonderful discussion of the
ultimate colonization of the universe (or "engulfing" of
the universe, as Barrow and Tipler put it in 1986) by
life---or by intelligence, however you want to phrase it.

Those two days (July 1 and July 2) saw the best posts on
this topic I have ever seen on this list, or anywhere. Finally,
all my stupid posts provoked ideas and prognostications that
had never occurred to me, and perhaps never would have. Whew!

A third reason to change the subject line is that the fundamental
assumptions Eugen makes differ from those that Russell and
I make in a key area: just how much can, will, or should we
expect superhuman *intelligence* to dominate and completely
control some finite volume of space?

Eugen takes the ecosystem view, adducing the historical
successes of free markets and other "out of control" systems.
Russell and I take the "good housekeeping" view, if I might
phrase it that way: a powerful intelligence keeps her
area as clean as a Dutch housewife keeps hers. This too has
historical precedent (e.g. some ecosystems are not very
complicated, having fallen under the control of a single
species, or even of the Dutch housewife herself).

I do not believe that Eugen has yet made an adequate case
for the "ecosystem" view. We know that, for some radius R
(limited by light speed), an intelligence is really a single-
willed entity capable of laying down complete governing rules,
conventions, and laws regarding its own space. So what is your
(or anyone's, of course) rejoinder to that? (After all, unless
you're a lot crazier than I think :-) your intelligence
dominates your two hemispheres without much competition!?)

For example, the surface of the Earth could fragment into
ninety competing "individuals", each thinking a million times
faster than humans do now. Since light travels at about one
foot per nanosecond, a signal to a part of your brain on your
periphery (a couple of blocks, or roughly a thousand feet,
away) takes about a microsecond, which at a million-fold
speedup is a full subjective second(!) just to tell that part
what the rest of you is thinking. Already such an "individual"
is as much at hazard as the Roman Empire was without
telegraphy.
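
In case numbers help, here is a rough back-of-the-envelope
sketch in Python of where that subjective second comes from.
The figures are my own assumptions for illustration: about a
thousand feet for "a couple of blocks" and a million-fold
speedup.

    # Rough check of the latency claim above. Assumed figures:
    # ~1000 feet for "a couple of blocks", light at ~1 foot per
    # nanosecond, and a mind running a million times faster
    # than a human brain.

    SPEEDUP = 1_000_000      # subjective seconds per wall-clock second
    FEET_PER_NS = 1.0        # speed of light, to one significant figure
    distance_feet = 1000     # a couple of city blocks

    one_way_ns = distance_feet / FEET_PER_NS   # ~1000 ns = 1 microsecond
    one_way_s = one_way_ns * 1e-9              # wall-clock seconds
    subjective_s = one_way_s * SPEEDUP         # as the mind experiences it

    print(f"One-way physical delay: {one_way_s * 1e6:.0f} microsecond(s)")
    print(f"Felt delay at {SPEEDUP:,}x speedup: "
          f"{subjective_s:.0f} subjective second(s)")

Scaling distance_feet up to planetary or interplanetary
distances makes the same point even more dramatically.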

Maybe Eugen will suggest that between such boundaries there
will be symbionts?  I just don't know!  (This is so great!)

Or the parameters could be vastly different, and an
"individual" might occupy a whole solar system. If so, then
the only way such an "individual" could occupy two adjacent
solar systems would be if the flow of algorithms from one
continued, over time, to dominate the other. (The Tiwanaku
of South America dominated their "empire" not by force but
by just this mechanism.) That's what I have supposed ever
since about 1992, in my "The Wind from Earth" scenario.

Also, I have to keep putting "intelligence" in quotes because,
as BillK points out, we should be very careful about applying
our intuitions haphazardly here. Those entities will actually
be non-human, superhuman to the point that we would recognize
them no more than a fetus would recognize the old man it
becomes. It's even possible (though I doubt it) that such
"individuals" would form some kind of voluntary collective of
the sort the Soviets strove for.

Well, back to studying the Eugen/Russell exchange of 7-1/7-2.

Lee

P.S. Apologies to those whose posts I have not got to yet.


