[ExI] Limiting factors of intelligence explosion speeds

Stefano Vaj stefano.vaj at gmail.com
Thu Jan 27 15:34:57 UTC 2011


On 24 January 2011 16:39, Ben Zaiboc <bbenzai at yahoo.com> wrote:
> Stefano Vaj <stefano.vaj at gmail.com> asked:
>
>>> Lastly, there is the fact that an AGI could communicate with its sisters on
>>> high-bandwidth channels, as I mentioned in my essay.  We cannot do that.  It
>>> would make a difference.
>
>> Really can't a fyborg do that? Aren't we already doing that? :-/
>
> Absolutely not!

All the examples provided concerned communication bandwidth and
computing performance on a given task.

This can of course be addressed by eliminating bottlenecks (say, by
dropping carbon-based computation altogether, or by developing better
interfaces to it).

AND/OR by moving computing tasks to where they really belong,
adopting ever higher-level languages to communicate with the parts
that suffer the most from such bottlenecks.

You can have a man who flies a helicopter to Cincinnati, a computer
that decides to visit Cincinnati, or a man telling a computer in
natural language to take him to Cincinnati.

The latter does not know how to pilot a helicopter? What else is new?
We were routinely doing things whose workings we do not really
understand long before any digital computer was born, and of course
we have been doing so more and more ever since.
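To make the division of labour concrete, here is a minimal sketch in
Python (mine, purely illustrative; the names fly_to and
handle_command are invented, and a real natural-language front end
would of course be far more involved):

    # The low-level control loop the human never needs to understand.
    def fly_to(destination):
        print("Autopilot engaged: plotting a route to " + destination)

    # The human-facing channel carries only the high-level intent.
    def handle_command(command):
        prefix = "take me to "
        if command.lower().startswith(prefix):
            fly_to(command[len(prefix):])
        else:
            print("Command not understood.")

    handle_command("Take me to Cincinnati")

The point is simply that the bandwidth-starved human channel carries
a few words of intent, while the heavy computation happens on the
other side of the interface.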

From a system-wide perspective, the features of the whole system
remain indistinguishable either way.

This is why, IMHO, AGIs or a singularity are not "bound to happen",
nor do they imply any special "rapture" or "doom" scenario; rather,
they are things worth fighting for, if one thinks that the will to
overcome themselves is the only thing that makes humans of any
interest.

-- 
Stefano Vaj



