[ExI] What would an IQ of 500 or 1000 look like?

Anders Sandberg anders at aleph.se
Mon May 25 09:06:48 UTC 2015


Jason Resch <jasonresch at gmail.com> wrote:

The calculator breaks down above a z-score of 6, which has a probability of about 1 in a billion, corresponding to an IQ of 190. An IQ of 500 would have a z-score of 26.667, and an IQ of 1,000 a z-score of 60. So we must ask: out of 10^20 or 10^30 naturally born humans, how smart would the smartest one be?

A while ago I dug up an approximate formula that applies here:

http://www.aleph.se/andart/archives/2009/09/ten_sigma_numerics_and_finance.html

So z=27 gives one chance in 10^160. z=60 is way outside my numerical precision when calculated straight, but taking the log of the equation gives a probability of around one in 10^784. That is far below one chance per particle that has ever existed or will ever exist in the observable universe.
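
A quick way to reproduce these numbers without a special calculator - a minimal Python sketch (my own, not from the linked post), using just the leading term of the standard asymptotic expansion P(Z > z) ~ exp(-z^2/2)/(z*sqrt(2*pi)) and keeping everything in log space so nothing underflows:

import math

def log10_tail(z):
    # Leading term of the asymptotic normal tail expansion:
    #   P(Z > z) ~ exp(-z**2 / 2) / (z * sqrt(2*pi))  for large z.
    # Working with logarithms avoids the underflow that kills
    # erfc-based calculators around z ~ 38 in double precision.
    ln_p = -0.5 * z * z - math.log(z * math.sqrt(2.0 * math.pi))
    return ln_p / math.log(10.0)

for iq in (190, 500, 1000):
    z = (iq - 100) / 15.0   # IQ scale: mean 100, SD 15
    print("IQ %4d: z = %5.2f, P(Z > z) ~ 10^%.0f" % (iq, z, log10_tail(z)))

The leading term is only good to a few percent at z = 6, but the relative error shrinks like 1/z^2, so at z = 27 or z = 60 it is more than accurate enough.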

In the end, talking about IQ 500 is almost as confused as talking about doubling IQs: this is not what the scale is about. It is a bit like discussing how loud the big bang was (although, see https://telescoper.wordpress.com/2009/04/26/how-loud-was-the-big-bang/ )


I think a better approach would be to consider how an animal would perceive human intelligence. We do incomprehensible or arbitrary things - generate some odd sounds, move stuff about, handle objects - and then big outcomes occur, often for no obvious reason: food, images or rooms appear, other humans just do things as if they knew what we were thinking. Sometimes the point becomes somewhat clear far in retrospect, but most of the time there is no discernible link. And of course, many of the things that humans worry or enthuse about are things the animals simply do not get - why would a human get sad over a piece of paper with scribbles on it?

So I would expect superintelligences to be like this. They do stuff, stuff happens, and sometimes we can see that some desire or goal seems to have been met. If they are human-derived we can sometimes see familiar drives, but also totally alien interests and motivations. In many ways they would be confusing and boring, except when they decide to play with us.


Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University