[extropy-chat] In the Long Run, How Much Does Intelligence Dominate Space?

Lee Corbin lcorbin at tsoft.com
Tue Jul 4 19:24:51 UTC 2006


Robin writes (reducing me to exegeses in many cases)

> At 02:36 PM 7/4/2006, Lee Corbin wrote:
> > just exactly how much can---or will, or
> > should we expect---superhuman *intelligence* to completely
> > dominate and completely control some finite volume of space?
> > Eugen takes the ecosystem view, and adduces the historical
> > successes of free markets and other "out of control" systems.
> > Russell and I take the "good housekeeping" view, if I might
> > phrase it that way, that a powerful intelligence keeps her
> > area as clean as a Dutch housewife does hers. ... for some
> > radius R (limited by light speed) an intelligence is really a
> > single-willed entity capable of laying down complete governing rules,
> > conventions and laws regarding its own space.
> 
> I would ask the question as:  what kinds of choices are coordinated
> over what scales?  An "intelligence" over some region is not aware
> of everything going on in that region, but for some choices made in
> that region coordination is important enough and feasible enough
> that the intelligence is conscious of those choices and attempts to
> coordinate them with each other and with other closely relevant
> choices.

I'll try to understand exactly what you mean through some examples.

Example one: A Dutch housewife controls only the macro,
human-visible elements of her house, giving spiders and rodents
no chance whatsoever of sharing the residence. But she could not
(until the 20th century) even attempt to eliminate all microbes,
and even now she does not succeed.

Example two: In modern hi-tech clean rooms, near but not absolute
success is achieved. Did you have this example in mind as well
when you made your statement?  An AI may be able to keep its mind
as "clean" as this.

Example three: A human mind (i.e. a program) currently runs on
hardware of which it is still pretty ignorant and over which it
has very little control. One may die of a brain aneurysm at any
moment. (See Robin's discussion of this below.)

Example four: An AI has converted Earth to computronium, but is
still not running so fast that the delay to any part of itself,
at most about 1/7 of a light-second, breaks it apart (it is still
an "individual"). But it contends, at least in the realm of ideas,
with the similar Martian and Venusian intelligences, and all
subscribe as closely as they dare to Jupiter's algorithm blog.
Still, how can you be so sure that the Earth AI, for example,
doesn't command every atom on Earth?
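
To put rough numbers on the delays in this example, here is a
quick Python sketch. The planetary figures are round textbook
closest-approach distances that I am assuming for illustration,
nothing more:

    # Light-delay arithmetic for Example four. Distances are assumed
    # round figures, used only for illustration.
    C_KM_PER_S = 299_792.458  # speed of light, km/s

    distances_km = {
        "around Earth (full circumference)": 40_075,
        "Earth-Venus (closest approach)": 38_000_000,
        "Earth-Mars (closest approach)": 55_700_000,
        "Earth-Jupiter (closest approach)": 588_000_000,
    }

    for label, d in distances_km.items():
        print(f"{label}: {d / C_KM_PER_S:,.2f} s one-way")

    # The Earth circumference comes out near 0.13 s, i.e. roughly 1/7
    # of a light-second if signals route along the surface.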

> To address this question, we want to identify for various candidate
> choices the relevant values to coordination, and the costs of
> coordination.  We might then expect coordination only when the
> value of coordination exceeds its costs.
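
That rule is simple enough to state as a one-line predicate. In
this toy Python sketch the candidate choices and the numbers are
mine, purely illustrative:

    # Coordinate a choice only when the value of coordinating it
    # exceeds the cost of doing so. All names and figures are made up.
    def should_coordinate(value: float, cost: float) -> bool:
        return value > cost

    candidate_choices = {
        "planet-wide resource accounting": (9.0, 2.0),
        "synchronizing every sub-process": (1.0, 50.0),
    }

    for choice, (value, cost) in candidate_choices.items():
        verdict = "coordinate" if should_coordinate(value, cost) else "leave alone"
        print(f"{choice}: {verdict}")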

As in the following, for example, I suppose:

> Humans on Earth attempt to coordinate some choices at the scale
> of the Earth, such as through the United Nations.   At the other
> extreme, parts of my mind make many choices that they do not
> coordinate much with other parts of my mind, and which I am not
> conscious of.

Yes---so I suppose that another application of what you have
written here might be: two adjacent intelligences, even when
separated by light-speed delays that are non-trivial relative to
their computation times, may amount to one more-or-less whole
individual, just as we fancy our own hemispheres do.
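
A crude way to make that precise is to compare the one-way signal
delay between two minds to the duration of one of their "thoughts".
A Python sketch; the 10% threshold is an arbitrary assumption of
mine:

    # When do two adjacent minds blur into one individual? One rough
    # criterion (my assumption, not anything from the thread): the
    # one-way light delay is a small fraction of a single thought.
    C_KM_PER_S = 299_792.458

    def acts_as_one_mind(separation_km: float, thought_s: float,
                         threshold: float = 0.1) -> bool:
        """True if the one-way delay is under `threshold` thoughts."""
        delay_s = separation_km / C_KM_PER_S
        return delay_s / thought_s < threshold

    # Two hemispheres a few centimeters apart, ~0.1 s human thoughts:
    print(acts_as_one_mind(5e-5, 0.1))           # True, overwhelmingly
    # Earth and Mars at closest approach, same thought length:
    print(acts_as_one_mind(55_700_000, 0.1))     # False: ~1,900 thoughts of lag
    # The same pair, if each "thought" takes an hour:
    print(acts_as_one_mind(55_700_000, 3600.0))  # True: delay is ~5% of a thought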

Lee



