[ExI] Limiting factors of intelligence explosion speeds

Stefano Vaj stefano.vaj at gmail.com
Sun Jan 23 12:40:08 UTC 2011


On 21 January 2011 15:30, Richard Loosemore <rpwl at lightlink.com> wrote:

> Stefano Vaj wrote:
>
>> On 20 January 2011 20:27, Richard Loosemore <rpwl at lightlink.com> wrote:
>>
>>> Anders Sandberg wrote:
>>> E)  Most importantly, the invention of a human-level, self-understanding
>>> AGI would not lead to a *subsequent* period (we can call it the
>>> "explosion period") in which the invention just sits on a shelf with
>>> nobody bothering to pick it up.
>>>
>>
>> Mmhhh. Aren't we already there? A few basic questions:
>>
>> 1) Computers are vastly inferior to humans in some specific tasks, yet
>> vastly superior in others. Why would human-like features be so much
>> more crucial in defining computer "intelligence" than, say, faster
>> integer factorisation?
>>
>
> Well, remember that the hypothesis under consideration here is a system
> that is capable of redesigning itself.
>

In principle, a cellular automaton, a Turing machine or a personal computer
should be able to design itself if we can do it ourselves. You just have to
feed it the right program and be prepared to wait a long time...
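
The standard toy demonstration that a program can handle its own
description is a quine. A minimal sketch in Python (purely illustrative,
and obviously a long way short of self-redesign):

# A minimal quine: the two lines below print themselves verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)

Going from "prints its own source" to "improves its own source" would of
course need vastly more machinery, but the self-reference itself comes for
free on any universal machine.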


> "Human-level" does not mean identical to a human in every respect, it means
> smart enough to understand everything that we understand.


Mmhhh. Most humans do not "understand" (in any practical sense) anything
about the workings of any computational device, let alone their own brain.
Does that qualify them as non-intelligent? :-/


>> 2) If the Principle of Computational Equivalence is true, what are we
>> all really, if not "computers" optimised for, and of course executing,
>> different programs? Is AGI ultimately anything other than a very
>> complex (and, on contemporary silicon processors, much slower and very
>> inefficient) emulation of typical carbon-based units' data processing?
>>
>
> The main idea of building an AGI would be to do it in such a way that we
> understood how it worked, and therefore could (almost certainly) think of
> ways to improve it.
>

We are already able to design (or profit from) devices that exhibit
intelligence. The real engineering feat would be a Turing-passing system,
which by definition probably requires a better reverse-engineering of the
human abilities it has to imitate. But many systems that do not pass the
Turing test may be more powerful and "intelligent", not to mention more
useful and/or dangerous, in other senses.

> Also, if we had a working AGI we could do something that we cannot do with
> human brains:  we could inspect and learn about any aspect of its function
> in real time.
>

Perhaps. Or perhaps we will first be able to do that with biological brains.
Who knows? Ultimately, we might even discover that bio or bio-like brains
are a decently optimised platform for what they do best, and that silicon
really shines in a "co-processor" position, much as GPUs complement CPUs. But of
course this would not prevent us from implementing AGIs entirely on silicon,
if we accept the performance hit.

> There are other factors that would add to these.  One concerns the AGI's
> ability to duplicate itself, after acquiring some knowledge.  In the case of
> a human, a single, world-leading expert in some field would be nothing more
> than one expert.  But if an AGI became a world expert, she could then
> duplicate herself a thousand times over and work with her sisters as a team
> (assuming that the problem under attack would benefit from a big team).
>

In principle, I do not see any specific reason why duplicating a bio-based
brain should be any more impossible than duplicating the same data, features
and processes on another platform...

> Lastly, there is the fact that an AGI could communicate with its sisters on
> high-bandwidth channels, as I mentioned in my essay.  We cannot do that.  It
> would make a difference.


Really, can't a fyborg do that? Aren't we already doing that? :-/

> A workstation that is used to design the next Intel processor has zero
> self-understanding, because it cannot autonomously start and complete a
> project to redesign itself.
>

To form an opinion on the above, I would require a more precise definition
of "autonomously", "understanding", "self" etc.

In the meantime, I suspect that the difference essentially lies in the
execution of different programs, or in the hallucination of supposed
"bio-specific" gifts that do not really bear close inspection. The
behavioural repertoire of simpler animals, and the end results of
contemporary, ad-hoc, sophisticated computer emulations of it, illustrate
this point well, I believe.

-- 
Stefano Vaj