[ExI] Survival (was: elections again)

Samantha Atkins sjatkins at mac.com
Tue Jan 1 21:59:29 UTC 2008


On Jan 1, 2008, at 12:30 PM, Jef Allbright wrote:

>
> This touches on a key point that seems to elude the most outspoken
> proponents of hard take-off singularity scenarios:  So-called
> "recursively self-improving" intelligence is relevant only to the
> extent it improves via selective interaction with its environment.

"Environment" includes its own internal processing and algorithms.   
This is something that is not true of intelligences like ourselves and  
can easily be missed or its significance downplayed.

>
> This suggests a ceiling on the growth of **relevant** intelligence of
> a singleton machine intelligence to only slightly above the level
> supported by all available knowledge and its latent connections,
> therefore remaining vulnerable to the threat of asymmetric competition
> with a broad-based system of cooperating technologically augmented
> specialists.

Not really much of a limit. A much faster mind, capable of conscious attention to far more information at once and vastly more computationally capable, will discover implications and connections that an arbitrarily large collection of lesser minds would either reach much more slowly or miss entirely. If the technological augmentation is sufficient to put the augmented humans on a par with such a mind, then that system of humans and technology is effectively a new superhuman intelligence itself. Given the inherent limits of human minds and group dynamics, I am very doubtful this can occur.

- samantha



