[ExI] Limiting factors of intelligence explosion speeds

Stefano Vaj stefano.vaj at gmail.com
Fri Jan 21 13:24:22 UTC 2011


On 20 January 2011 20:27, Richard Loosemore <rpwl at lightlink.com> wrote:
> Anders Sandberg wrote:
> E)  Most importantly, the invention of a human-level, self-understanding
> AGI would not lead to a *subsequent* period (we can call it the
> "explosion period") in which the invention just sits on a shelf with
> nobody bothering to pick it up.

Mmhhh. Aren't we already there? A few basic questions:

1) Computers are vastly inferior to humans in some specific tasks, yet
vastly superior in others. Why would human-like features be so much
more crucial in defining a computer's "intelligence" than, say, faster
integer factorisation?
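
Just to make that asymmetry concrete, a toy Python sketch of the sort
of task I have in mind (naive trial division; purely illustrative,
nothing to do with any particular AGI proposal), which a computer
finishes in microseconds on numbers no unaided human would care to
attack:

    def factorise(n):
        """Return the prime factors of n by naive trial division."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:   # divide out each prime factor completely
                factors.append(d)
                n //= d
            d += 1
        if n > 1:               # whatever remains is itself prime
            factors.append(n)
        return factors

    print(factorise(1234567890))   # [2, 3, 3, 5, 3607, 3803]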

2) If the Principle of Computational Equivalence is true, what are we
all, really, if not "computers" optimised for, and of course executing,
different programs? Is AGI ultimately anything other than a very
complex (and, on contemporary silicon processors, much slower and very
inefficient) emulation of a typical carbon-based unit's data processing?

3) What is the actual level of self-understanding of the average
biological, or even human, brain? What would "self-understanding" mean
for a computer? Anything radically different from a workstation
used to design the next Intel processor? And if anything more is
required, what difference would it make to simply put a few neurons in
a PC? A whole human brain? A man (fyborg-style) at the keyboard? This
would not really slow things down one bit, because as soon as
something becomes executable in a faster fashion on the rest of the
"hardware", you simply move the relevant processes from one piece of
hardware to another, as you do today with CPUs and GPUs. In the
meantime, everybody does what he does best, and already exhibits, at
ever-increasing performance levels, whatever "AGI" feature one may
think of...
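
To illustrate the CPU/GPU point: a minimal Python sketch, assuming
PyTorch is installed (the matrix multiplication is just a stand-in
workload), in which the very same code runs on whichever piece of
hardware happens to execute it faster:

    import torch

    # Pick the GPU when one is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # The same operations run unchanged on either device; "moving the
    # process" is just a matter of where the data lives.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b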

-- 
Stefano Vaj



