[ExI] Hard Takeoff

Stefano Vaj stefano.vaj at gmail.com
Sun Nov 14 23:45:10 UTC 2010


2010/11/14 Michael Anissimov <michaelanissimov at gmail.com>:
> We have some reason to believe that a roughly human-level AI could rapidly
> improve its own capabilities, fast enough to get far beyond the human level
> in a relatively short amount of time.  The reason why is that a
> "human-level" AI would not really be "human-level" at all -- it would have
> all sorts of inherently exciting abilities, simply by virtue of its
> substrate and necessities of construction:
> 1.  ability to copy itself
> 2.  stay awake 24/7
> 3.  spin off separate threads of attention in the same mind
> 4.  overclock helpful modules on-the-fly
> 5.  absorb computing power (humans can't do this)
> 6.  constructed from scratch with self-improvement in mind
> 7.  the possibility of direct integration with new sensory modalities, like
> a codic modality
> 8.  the ability to accelerate its own thinking speed depending on the speed
> of available computers

What would "human-equivalent" mean? I contend that all of the above is
basically what every system exhibiting universal computation can do,
from cellular automata to organic brains to PCs. At most, it just needs
to be programmed to exhibit such behaviours. If we do not take things
too literally, such behaviours have already been emerging in
contemporary fyborgs for years. What's the big deal?

The difference might be increasing performance and accuracy in a
number of tasks. That would be welcome, and the more abrupt, the
better, as far as I am concerned. Rather, we should keep in mind that
such an increase is far from guaranteed, especially in an age when
technological development is stagnating and real breakthroughs are
becoming rarer and rarer. It seems odd, then, that many transhumanists
are primarily concerned with "steering" something they expect to take
place automagically ("gosh, how are we going to protect the ecosystems
of extrasolar planets from terrestrial contamination?"), rather than
with what needs to be made *happen* in the first place.

> We have real, evidence-based arguments for an abrupt takeoff.  One is that
> the human speed and quality of thinking is not necessarily any sort of
> optimal thing, thus we shouldn't be shocked if another intelligent species
> can easily surpass us as we surpassed others.  We deserve a real debate, not
> accusations of monotheism.

Biological-human "thinking" has just been relatively good for what it
was designed for, and "quality" does not have any real meaning outside
of a specific context.

Moreover, the concept of "another species" is quite vague when taken
in a diachronic sense - besides being quite "speciesist" per se.

We could not interbreed with our remote biological ancestors, and we
have no reason to believe that we could interbreed with our
descendants forever, even if they remained DNA-based. So, what do we
have to fear?

If we are discussing all this from a "self-protection" point of view,
my bet is that most of us will be killed by accidents, murder, disease
or old age rather than while being chased down the road by an
out-of-control Terminator - whose purpose in engaging in such a sport
remains pretty unclear, by the way.

-- 
Stefano Vaj
