[ExI] Survival (was: elections again)

Harvey Newstrom mail at harveynewstrom.com
Wed Jan 2 04:14:00 UTC 2008


On Tuesday 01 January 2008 15:30, Jef Allbright wrote:
> This touches on a key point that seems to elude the most outspoken
> proponents of hard take-off singularity scenarios:  So-called
> "recursively self-improving" intelligence is relevant only to the
> extent it improves via selective interaction with its environment.  If
> the environment lacks requisite variety, then the "recursively
> self-improving" system certainly can go "vwhooom" as it explores
> possibility space, but the probability of such explorations having
> relevance to our world becomes minuscule, leaving such a system hardly
> more effective than a cooperative of technologically augmented
> humans at tiling the galaxy with paperclips.
>
> This suggests a ceiling on the growth of **relevant** intelligence of
> a singleton machine intelligence to only slightly above the level
> supported by all available knowledge and its latent connections,
> therefore remaining vulnerable to the threat of asymmetric competition
> with a broad-based system of cooperating technologically augmented
> specialists.

Exactly.  Even if a machine can think super fast, it won't be able to change 
the physical reality around it as fast.  Even with nanotech, reality in the 
macro meat universe is slow, with limited resources.
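This bottleneck can be sketched as an Amdahl's-law-style bound: if only the
"thinking" portion of a task can be accelerated, total progress stays gated by
the physical-interaction portion.  (A toy model; the function and numbers below
are invented purely for illustration.)

```python
# Toy model (illustrative only): speeding up cognition does not speed up
# the parts of a task gated by physical-world interaction.
def task_time(think_time, act_time, cognitive_speedup):
    """Total time for a task whose thinking can be accelerated
    but whose physical steps cannot."""
    return think_time / cognitive_speedup + act_time

baseline = task_time(10.0, 10.0, 1)        # 20.0 time units
fast_mind = task_time(10.0, 10.0, 1000)    # 10.01: gated by the physical step
speedup = baseline / fast_mind             # under 2x despite 1000x faster thought
```

Even an unbounded cognitive speedup only drives the first term toward zero; the
second term, the slow macro world, sets the floor.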

-- 
Harvey Newstrom <www.harveynewstrom.com>
CISSP CISA CISM CIFI GSEC IAM ISSAP ISSMP ISSPCS IBMCP
