[extropy-chat] Fundamental limits on the growth rate of superintelligences
Dirk Bruere
dirk.bruere at gmail.com
Tue Feb 14 18:19:46 UTC 2006
On 2/14/06, Robert Bradbury <robert.bradbury at gmail.com> wrote:
> up on us and suddenly manifest itself as the overlord or that each country
> is going to try to build its own superintelligence. What I was attempting
> to point out is that we don't need to allow that to happen. A Playstation 5
> or 6 is probably going to have the computational capacity to enable more
> than human level intelligence (though I doubt the computational architecture
> will facilitate that). One can however always unplug them if they get out
> of line.
>
> It's obviously relatively easy for other countries to detect situations
> where some crazy person (country) is engaging in unmonitored
> superintelligence development. Anytime they start constructing power
> generating capacity significantly in excess of what the people are
> apparently consuming and/or start constructing cooling towers for not only a
> reactor but a reactor + all of the electricity it produces then it will be
> obvious what is going on and steps can be taken to deal with the situation.
>
And I maintain that it will *not* be obvious.
A 'petty' superintelligence that requires only a puny 1 MW of power,
utilised as efficiently as the human brain, would yield an intelligence
some 10,000x greater than human. IMO just a factor of 10 could prove
exceedingly dangerous, let alone 10,000.
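As a back-of-the-envelope check (my assumption: the brain dissipates
roughly 100 W; the oft-quoted ~20 W figure would only make the ratio
larger):

    # Rough brain-equivalents available from a 1 MW power budget,
    # assuming ~100 W per human brain and equal intelligence per watt.
    budget_w = 1000000          # 1 MW
    brain_w = 100               # assumed power draw of one human brain
    print(budget_w / brain_w)   # -> 10000.0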
And such an intelligence is going to be able to provide very significant
incentives for its 'owners' not to pull any plugs.
Dirk