[ExI] Fwd: Hard Takeoff

x at extropica.org
Sun Nov 14 22:05:46 UTC 2010


2010/11/14 Michael Anissimov <michaelanissimov at gmail.com>:
> On Sun, Nov 14, 2010 at 11:26 AM, Aware <aware at awareresearch.com> wrote:
>> The need is not for a singleton nanny-AI but for development of a
>> fractally organized synergistic framework for increasing awareness of
>> our present but evolving values, and our increasingly effective means
>> for their promotion, beyond the capabilities of any individual
>> biological or machine intelligence.
>
> Go ahead and build one, I'm not stopping you.

It's already under way in the marketplace of ideas, but not as
intentionally, and therefore not as coherently, as one might wish.


>> It might be instructive to consider that a machine intelligence
>> certainly can and will outperform the biological kludge, but
>> MEANINGFUL intelligence improvement entails adaptation to a relatively
>> more complex environment. This implies that an AI (or, much more
>> likely, a human-AI symbiont) poses a considerable threat in present
>> terms as it acquires knowledge from, and integrates across, existing
>> silos of knowledge; but lacking relevant selection pressure it is
>> unlikely to produce meaningful growth, and will expend nearly all its
>> computation exploring irrelevant volumes of possibility space.
>
> I'm having trouble parsing this. Isn't it our job to provide that
> "selection pressure" (the term is usually used in Darwinian population
> genetics, so I find it slightly odd to see it used in this context)?

Any "intelligent" system improves by extracting and effectively
modeling regularities within its environment of interaction.  At some
point, corresponding to integration of knowledge apprehended via
direct interaction as well as communicated from existing domains as
well as information latent between domains, the system will become
starved for RELEVANT novelty necessary for further MEANINGFUL growth.
(Of course it could continue to apply its prodigious computing power
exploring vast reaches of a much vaster mathematical possible space.)
Given a static environment, that intelligence would eventually catch
up and plateau at some level somewhat higher than that of any
preexisting agent. The strategic question is this:  Given practical
considerations of incomplete specification, combinatorial explosion,
rate of information (and effect) diffusion, and effective interaction
area as well as first-mover advantage within a complex co-evolving
environment, how should we compare the highly asymmetric strengths of
the very vertical AI versus a very broad technologically amplified
established base?   Further, given such a plateau, on what basis could
we expect such an AI to act as an effective nanny to humanity?
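
To make the plateau concrete, here is a toy coupon-collector sketch of
my own (the names and numbers, e.g. N_REGULARITIES, are arbitrary
assumptions for illustration, not anything from the thread): an agent
spends compute observing a static environment holding a fixed stock of
regularities, and relevant novelty accrues only on first sight, so
meaningful growth saturates while compute expenditure keeps climbing.

    import random

    # Toy model: a static environment contains a fixed stock of learnable
    # regularities. Each step costs one unit of compute and observes one
    # regularity at random; RELEVANT novelty is gained only the first time
    # a regularity is seen, so growth saturates (coupon-collector style)
    # while compute expenditure keeps climbing.

    N_REGULARITIES = 1000   # all there is to learn in this environment
    STEPS = 16000           # total compute budget, in observation-steps

    random.seed(0)
    learned = set()
    for step in range(1, STEPS + 1):
        learned.add(random.randrange(N_REGULARITIES))
        if step in (250, 500, 1000, 2000, 4000, 8000, 16000):
            coverage = len(learned) / N_REGULARITIES
            print("compute spent: %6d   regularities modeled: %5.1f%%"
                  % (step, 100 * coverage))

Run it and coverage climbs steeply, then flattens near 100%; after
that, every additional cycle of compute buys only exploration of
possibility space the environment never rewards.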

There can be such threats, but no such guarantees; to the extent that
we look for protection where none can be found, the effort is wasted
and thus wrong.

- Jef



