[ExI] Re-framing Innovation re Consciousness

Bryan Bishop kanzure at gmail.com
Sun Jan 6 20:25:20 UTC 2008


Just connecting a few threads that have been left open ...

On Sunday 30 December 2007, x at extropica.org wrote:
> roughly 50/50 mix of genetic vs. environmental influence.]  To the
> extent that the organism perceives its level of
> [intelligence|influence|capacity for self-actualization?] to exceed
> that of its environment of interaction, then its optimum strategy
> should be proactionary.  Note however that this implies the

The other day this same topic came up with Harvey, Eugen and Jef.

On Tuesday 01 January 2008, Jef Allbright wrote:
> This touches on a key point that seems to elude the most outspoken
> proponents of hard take-off singularity scenarios:  So-called
> "recursively self-improving" intelligence is relevant only to the
> extent it improves via selective interaction with its environment.
>  If the environment lacks requisite variety, then the "recursively
> self-improving" system certainly can go "vwhooom" as it explores
> possibility space, but the probability of such explorations having
> relevance to our world becomes minuscule, leaving such a system
> hardly more effective than a cooperative of technologically
> augmented humans at tiling the galaxy with paperclips.
>
> This suggests a ceiling on the growth of **relevant** intelligence of
> a singleton machine intelligence to only slightly above the level
> supported by all available knowledge and its latent connections,
> therefore remaining vulnerable to the threat of asymmetric
> competition with a broad-based system of cooperating technologically
> augmented specialists.

However, I do not immediately see what a proactionary strategy would 
call for when that relevant intelligence exceeds its environment's 
capacity to respond to it [the intelligence].
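To make that ceiling a bit more concrete for myself, here is a toy 
sketch in Python (entirely my own illustrative assumptions, not 
anything Jef specified): raw search capacity is allowed to double every 
step, standing in for "recursive self-improvement", while relevant 
knowledge can only grow by extracting a fraction of whatever usable 
variety the environment still holds.

    # Toy model, purely illustrative: raw capacity self-improves without
    # bound, but *relevant* intelligence is capped by how much usable
    # variety the environment actually offers.

    ENV_VARIETY = 1000.0   # usable information latent in the environment (assumed)
    LEARN_RATE = 0.2       # fraction of remaining variety extracted per step (assumed)

    raw_capacity = 1.0     # size of the possibility space the system can explore
    relevant = 0.0         # knowledge actually grounded in its environment

    for step in range(1, 31):
        raw_capacity *= 2.0                       # "recursive self-improvement": unbounded
        extracted = LEARN_RATE * (ENV_VARIETY - relevant)
        relevant += min(extracted, raw_capacity)  # cannot learn more than the environment offers
        print("step %2d  raw=%12.0f  relevant=%7.1f"
              % (step, raw_capacity, relevant))

The raw column goes "vwhooom" while the relevant column levels off near 
ENV_VARIETY; that flattening is the ceiling in question.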

- Bryan
________________________________________
Bryan Bishop
http://heybryan.org/


