[ExI] Limiting factors of intelligence explosion speeds

Stefano Vaj stefano.vaj at gmail.com
Sun Jan 23 13:56:19 UTC 2011


On 21 January 2011 15:40, Eugen Leitl <eugen at leitl.org> wrote:

> Humans are competitive in the real world. They're reasonably well-rounded
> for a reference point.
>

"Competitive" is a Darwinian reference. A competitive car may be a different
concept, and extremely complicate computations happen in nature which need
not be competitive in that sense to take place and reproduce themselves.

So, a computational device is not going to exhibit anything like that unless
we program it, or some routines running on it, to emulate Darwinian
features, or place it in an artificial environment where it may evolve them
through random mutation and selection.

In the latter case, programs or devices may end up being *very* competitive
without exhibiting anything vaguely similar to human intelligence or even
very sophisticated computations, for that matter (see under gray goo). In
fact, mammal- or human-like intelligence is just one possible Darwinian
strategy amongst a very large space of possible alternatives. Bacteria, as
replicators, are, e.g., at least as competitive as we are.
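For what it is worth, the "random mutation and selection in an artificial environment" scenario can be sketched in a few lines of Python. Everything here (bit-string genomes, the all-ones fitness target, the population and mutation parameters) is an illustrative assumption, not a claim about any real system:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

TARGET = [1] * 20  # the "environment" rewards all-ones genomes

def fitness(genome):
    # Count bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in range(20)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically close to the maximum of 20
```

Note that nothing in this loop resembles intelligence; selection pressure alone is doing the work, which is exactly the point above.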

> > 2) If the Principle of Computational Equivalence is true, what are we
> > really all if not "computers" optimised for, and of course executing,
> > different programs? Is AGI ultimately anything else than a very
>
> Can I check out your source? No, not the genome, the actual data
> in your head.
>

Be my guest, just remember to return my brain at the end when you have
finished disassembling its machine code... :-)

Seriously, I do not believe that we have to resort to very low-level
emulation of bio brains to emulate one or another of their features, but
once such emulation is satisfactory enough, what we have actually performed
is a black-box re-engineering of their relevant programs, so that the only
practical difference from the original coding is probably copyright...

> > complex (and, on contemporary silicon processor, much slower and very
> > inefficient) emulation of typical carbo-based units' data processing?
>
> Inefficient in terms of energy, initially, yes. But if you can spare
> a few 10 MW you can do very interesting things with today's primitive
> technology. Any prototype is inefficient, but these tend to ramp up
> extremely quickly. In terms of physics of computation we people are
> extremely inefficient. We only look good because the next-worst spot
> is so much worse. But we're fixed, while our systems are improving
> quite nicely.
>

I am not referring to energy efficiency, but simply to the kind of
efficiency where a human brain or a cellular automaton is very slow at
calculating the square roots of large integers, and contemporary computers
are quite slow at accurate, fine-grained pattern recognition.
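To make the first half of the contrast concrete: any stock Python interpreter computes exact integer square roots of arbitrarily large numbers instantly, via the standard library's math.isqrt (the number below is just an arbitrary example):

```python
import math

# An arbitrary 50-digit integer; any size works, since Python
# integers have arbitrary precision.
n = 12345678901234567890123456789012345678901234567890

r = math.isqrt(n)  # exact floor of the square root
print(r)

# Sanity check of the defining property of the integer square root.
assert r * r <= n < (r + 1) * (r + 1)
```

The reverse task, matching human-grade pattern recognition, admits no comparably short program, which is the asymmetry at issue.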

And I do not really see why we would be "fixed". Of course, there are a
number of computations which are likely to be performed more efficiently on
different hardware. You just have to add or move or replace the relevant
portions, as in any system. E.g., distributed computing projects have
dramatic bandwidth bottleneck problems in comparison with traditional HPC.
Does that prevent them from attacking just the same problems? No.

But give a Chinese Room sufficient time and memory, and you can have it
emulate anything to an arbitrary degree of accuracy. And this is of course
true also for silicon-based processors and any other device passing the very
low threshold of universal computation.
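That threshold is low indeed: Rule 110, a one-dimensional cellular automaton whose entire update rule fits in eight table entries, has been proven Turing-complete (Cook). A minimal sketch of it, with the grid size and run length chosen arbitrarily for illustration:

```python
RULE = 110  # the eight-entry lookup table, packed into the bits of 110

def step(cells):
    # Each new cell is a function of its left neighbour, itself, and its
    # right neighbour (with wrap-around at the edges): the three cells
    # form a 3-bit index into the bits of RULE.
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1]  # start from a single live cell at the right edge
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

A device this trivial already clears the bar for universal computation, given unbounded time and tape, which is the sense of "very low threshold" above.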

-- 
Stefano Vaj

