[extropy-chat] Fundamental limits on the growth rate of superintelligences

Dirk Bruere dirk.bruere at gmail.com
Tue Feb 14 19:02:03 UTC 2006


On 2/14/06, Robert Bradbury <robert.bradbury at gmail.com> wrote:
>
>
> On 2/14/06, Dirk Bruere <dirk.bruere at gmail.com> wrote:
>
> > The kind of 'petty' superintelligence that only requires a puny 1MW of
> > power utilised as efficiently as the Human brain would yield an intelligence
> > some 10,000x greater than Human. IMO just a factor of 10 could prove
> > exceedingly dangerous, let alone 10,000.
> >
> > And such an intelligence is going to be able to provide very significant
> > incentives for its 'owners' not to pull any plugs.
> >
>
> Point granted.  I'm going to have to think about this some more.
>
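As a sanity check on the 10,000x figure above: it is just the ratio of the
two power budgets. A minimal sketch in Python, where the 1MW figure comes
from the quote but the ~100W Human power budget and the linear
power-to-intelligence scaling are my assumptions (brain-only estimates
nearer 20W would make the ratio larger still):

    # Back-of-envelope version of the quoted 10,000x claim.
    # Assumptions: "intelligence" scales linearly with power at equal
    # efficiency, and a Human runs on roughly 100 W.
    AI_POWER_W = 1_000_000    # the "puny 1MW" budget from the quote
    HUMAN_POWER_W = 100       # assumed Human power budget

    ratio = AI_POWER_W / HUMAN_POWER_W
    print(f"Equal-efficiency advantage: {ratio:,.0f}x")  # -> 10,000x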

There is also one more point.
Undoubtedly one of the uses to which such an AI would be put is generating
novel technology. Almost by definition, such tech will be beyond Human
comprehension. Even if it's just software, finding the Easter egg buried in
a trillion lines of code would require another AI of comparable or greater
ability. Then you have to start worrying about conspiracies and/or viral
takeovers...
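To get a feel for why Human review is hopeless at that scale, here is a
rough estimate (the per-reviewer rate is my assumption, not a figure from
the thread):

    # Rough scale of hand-auditing a trillion lines of code.
    # Assumption: one careful reviewer inspects ~1,000 lines per day.
    LINES_TOTAL = 10**12            # "a trillion lines of code"
    LINES_PER_REVIEWER_DAY = 1_000  # assumed review throughput

    reviewer_days = LINES_TOTAL / LINES_PER_REVIEWER_DAY
    reviewer_years = reviewer_days / 365
    print(f"{reviewer_days:,.0f} reviewer-days")    # ~1,000,000,000
    print(f"{reviewer_years:,.0f} reviewer-years")  # ~2,700,000

Millions of reviewer-years, even before asking whether a Human reviewer
could recognise the Easter egg when looking straight at it.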

Dirk

