[ExI] AI motivations

John Clark johnkclark at gmail.com
Fri Dec 28 21:12:43 UTC 2012


On Tue, Dec 25, 2012, Keith Henson <hkeithhenson at gmail.com> wrote:

> I also suspect that sheer physical limits are going to limit the size of
> an AI due to "the bigger they are, the slower they think."


The fastest signals in the human brain move at a couple of hundred meters
per second, and many are far slower, while light moves at 300 million meters
per second. So if you insist that the two most distant parts of a brain
communicate as quickly as they do in a human brain (and it is not
immediately obvious why you should insist on such a thing), then the two
most distant parts of an AI could be at least a million times farther apart.
The volume of such a brain would be a million trillion times larger than a
human brain's. Even if 99.9% of that space were used just to deliver power
and get rid of waste heat, you'd still have a thousand trillion times as
much volume for logic components as humans have room for inside their
heads, and the components would be considerably smaller than the human
ones too.
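
A quick sanity check of that arithmetic (a Python sketch; the ~200 m/s
signal speed and the 0.1% usable-volume figure are the assumptions from the
paragraph above, not measured values):

    # Back-of-the-envelope check of the brain-size scaling argument.
    neural_speed = 200.0   # m/s, rough figure for fast myelinated axons
    light_speed = 3.0e8    # m/s

    # Linear scale factor: how much farther apart the two most distant
    # parts can be while keeping the same end-to-end signal latency.
    linear_factor = light_speed / neural_speed   # ~1.5 million

    # Volume scales as the cube of the linear dimension.
    volume_factor = linear_factor ** 3           # ~3.4e18, "a million trillion"

    # Assume 99.9% of that volume goes to power delivery and cooling,
    # leaving 0.1% for logic components.
    usable_factor = volume_factor * 0.001        # ~3.4e15, "a thousand trillion"

    print(f"linear: {linear_factor:.1e}  "
          f"volume: {volume_factor:.1e}  "
          f"usable: {usable_factor:.1e}")

Running it gives roughly 1.5e6, 3.4e18, and 3.4e15, so "a million times as
distant," "a million trillion times the volume," and "a thousand trillion
times as much room for logic" are all conservative lower bounds.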

That's why I think all this talk about how to make sure the AI always
remains friendly is futile. Maybe it will be friendly and maybe it won't,
but either way we won't have any say in the matter. The AI will do what it
wants to do, and we're just along for the ride.

> I have never come to a satisfactory formula for what physical size is
> optimum, but I strongly suspect it is not as large as a human brain.

Optimum for what? The optimal brain size for reflexively dodging a
hypervelocity bullet is probably not the same as the optimal brain size for
figuring out whether the Goldbach conjecture is true.

 John K Clark