[ExI] Survival (was: elections again)

Eugen Leitl eugen at leitl.org
Tue Jan 1 21:31:48 UTC 2008


On Tue, Jan 01, 2008 at 12:30:52PM -0800, Jef Allbright wrote:

> This touches on a key point that seems to elude the most outspoken
> proponents of hard take-off singularity scenarios:  So-called
> "recursively self-improving" intelligence is relevant only to the

I never understood why people say "recursive" in that context.
It's simply a positive-feedback enhancement process. It uses neither
a stack nor tail recursion, and it's certainly not a simple algorithm
like (iterate over all elements; enhance each; stop when you're done).
Exponential runaway self-enhancement, or explosive enhancement,
or bloody transcension (in the sense of Dalek-ish Robot God goodness)
is far more descriptive by comparison.
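
A toy sketch in Python (the numbers are entirely made up) of what the
process actually is: a plain feedback loop, with no call stack in sight:

  # Toy model of "self-improvement": a positive-feedback loop, not recursion.
  # The gain per pass grows with current capability -- that is the whole
  # trick. No recursive call, no stack; just iteration with feedback.
  capability = 1.0
  for generation in range(20):
      gain = 0.1 * capability   # positive feedback: better -> improves faster
      capability += gain
      print(f"gen {generation:2d}: capability = {capability:.3f}")

Run it and you get exponential growth, which is why "exponential runaway
self-enhancement" is the honest name for it.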

> extent it improves via selective interaction with its environment.  If

The environment doesn't have to be embodied. Unlike simpler darwinian
systems, human designs don't need to be embodied in order to be evaluated,
which makes progress much faster and also allows leaps across
bad-fitness chasms. (The underlying process is still Darwin-driven, but
most people don't see it that way.)
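
A minimal sketch of the point, with an entirely made-up fitness landscape:
because evaluation happens in simulation, candidates can be proposed
anywhere in design space, so the search jumps the chasm instead of
crawling through it one embodied mutation at a time:

  import math, random

  def fitness(x: float) -> float:
      """Made-up landscape: peaks at x=0 and x=10, low-fitness chasm between."""
      return max(math.exp(-x**2), 2.0 * math.exp(-(x - 10.0)**2))

  best = 0.0  # start on the lower peak
  for _ in range(1000):
      # a simulated evaluator scores candidates anywhere, for free
      candidate = random.uniform(-5.0, 15.0)
      if fitness(candidate) > fitness(best):
          best = candidate

  print(f"best design ~ {best:.2f}, fitness {fitness(best):.3f}")

An embodied darwinian system confined to small steps would sit on the
lower peak forever; the disembodied evaluator finds the higher one almost
immediately.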

> the environment lacks requisite variety, then the "recursively

Most of the environment is other individuals. That's where the complexity is.

> self-improving" system certainly can go "vwhooom" as it explores
> possibility space, but the probability of such explorations having
> relevance to our world becomes minuscule, leaving such a system hardly

Most of what engineers do in simulation rigs today is highly relevant to our
world. Look at machine-phase chemistry: the science is all known, but it is
currently not computationally tractable, mostly because our infoprocessing
prowess is puny. I could easily see a bootstrap of machine-phase
self-replication which happens 99% in machina and 1% in vitro. In fact, this
is almost certainly how we meek monkeys are going to pull it off.

> more effective than a cooperative of technologically augmented
> humans at tiling the galaxy with paperclips.

Paperclips don't self-select or self-reproduce. Ain't going to happen.
 
> This suggests a ceiling on the growth of **relevant** intelligence of
> a singleton machine intelligence to only slightly above the level

Why a singleton? That's yet another sterile assumption. A single anything
ain't going to happen either. Humanity is not a huge pink worm torso with
billions of limbs, which started growing in Africa and then spread all
over the planet as a single huge individual. You'll notice ecosystems don't
do huge individuals, and that's not a coincidence.

> supported by all available knowledge and its latent connections,
> therefore remaining vulnerable to the threat of asymmetric competition
> with a broad-based system of cooperating technologically augmented
> specialists.

Do you see much technological augmentation right now? I don't.
Getting a lot of bits out, and especially in, in a relevant fashion:
that's medical-nanotechnology-level technology. Building
biologically inspired infoprocessing systems, by contrast, is much more
tractable, and in fact we're doing quite well in that area, even given our
above-mentioned puny computers.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


