[ExI] Hard Takeoff

Natasha Vita-More natasha at natasha.cc
Sun Nov 14 19:29:18 UTC 2010


Nice.  You bring n-order cybernetics into the hard takeoff, which I have
not seen written about ... yet.  Michael, what do you think about seeing a
hard takeoff through the lens of n-order cybernetics?



Natasha Vita-More

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Aware
Sent: Sunday, November 14, 2010 1:26 PM
To: ExI chat list
Subject: Re: [ExI] Hard Takeoff

2010/11/14 Michael Anissimov <michaelanissimov at gmail.com>:
> We have some reason to believe that a roughly human-level AI could 
> rapidly improve its own capabilities, fast enough to get far beyond 
> the human level in a relatively short amount of time.  The reason why 
> is that a "human-level" AI would not really be "human-level" at all -- 
> it would have all sorts of inherently exciting abilities, simply by 
> virtue of its substrate and necessities of construction:
> 1.  ability to copy itself
> 2.  stay awake 24/7
> 3.  spin off separate threads of attention in the same mind
> 4.  overclock helpful modules on-the-fly
> 5.  absorb computing power (humans can't do this)
> 6.  constructed from scratch with self-improvement in mind
> 7.  the possibility of direct integration with new sensory modalities,
> like a codic modality
> 8.  the ability to accelerate its own thinking speed depending on the
> speed of available computers
> When you have a human-equivalent mind that can copy itself, it would be
> in its best interest to rent computing power to perform tasks.
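
A toy sketch of the loop in that last claim, where capability rents
compute and compute feeds back into capability.  The growth rule and both
constants below are invented purely for illustration, not taken from the
post:

    # Toy model of the "rent compute to self-improve" loop.  The
    # "efficiency" constant and the multiplicative growth rule are
    # invented assumptions, not anything claimed in the thread.
    def simulate(steps=10, capability=1.0, efficiency=0.5):
        history = [capability]
        for _ in range(steps):
            rented_compute = efficiency * capability  # capability pays for compute
            capability *= 1.0 + rented_compute        # compute multiplies capability
            history.append(round(capability, 2))
        return history

    print(simulate())

Under these made-up constants, capability grows super-exponentially,
which is the hard-takeoff intuition in its barest form.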

Michael, what has always frustrated me about Singularitarians, apart from
their anthropomorphizing of "mind" and "intelligence", is the tendency,
natural for isolated elitist technophiles, to ignore the much greater social
context.  The vast commercial and military structure supports and drives the
development of increasingly intelligent systems, exponentially augmenting
and amplifying human capabilities, and it hugely outweighs, in both height
and breadth, the efforts of a small group of geeks (and I use the term
favorably, being one myself).

The much more significant and accelerating risk is not that of a
"recursively self-improving" seed AI going rogue and tiling the galaxy with
paper clips or copies of itself, but of relatively small groups of people
exploiting technology (AI and otherwise) disproportionate to their context
of values.

The need is not for a singleton nanny-AI but for development of a fractally
organized synergistic framework for increasing awareness of our present but
evolving values, and our increasingly effective means for their promotion,
beyond the capabilities of any individual biological or machine
intelligence.

It might be instructive to consider that a machine intelligence certainly
can and will outperform the biological kludge, but MEANINGFUL intelligence
improvement entails adaptation to a relatively more complex environment.
This implies that an AI (much more likely a human-AI symbiont) poses a
considerable threat in present terms, acquiring knowledge up to, and
integrating between, existing silos of knowledge; but lacking relevant
selection pressure it is unlikely to produce meaningful growth, and will
expend nearly all its computation exploring irrelevant volumes of
possibility space.
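
A minimal sketch of that selection-pressure point, with an invented
one-dimensional "possibility space" and a made-up narrow niche standing
in for environmental relevance (all numbers are illustrative):

    import random

    def relevant(x):
        # Invented stand-in for the relevant region: only a narrow
        # niche of possibility space matters to the environment.
        return abs(x - 0.9) < 0.01

    def blind_search(budget=100_000):
        # No selection pressure: samples scatter over the whole space.
        return sum(relevant(random.random()) for _ in range(budget))

    def selected_search(budget=100_000):
        # Weak selection pressure: keep only moves that fit the niche better.
        x, hits = random.random(), 0
        for _ in range(budget):
            candidate = x + random.gauss(0, 0.05)
            if abs(candidate - 0.9) < abs(x - 0.9):
                x = candidate
            hits += relevant(x)
        return hits

    print("blind:", blind_search(), "selected:", selected_search())

With the same budget, the blind searcher lands outside the relevant
region roughly 98% of the time, while even weak selection pressure
concentrates nearly all of its computation inside it.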

Singularitarians would do well to consider more ecological models in this
Red Queen's race.

- Jef
