[ExI] What would an IQ of 500 or 1000 look like?

David Lubkin lubkin at unreasonable.com
Mon May 25 11:36:41 UTC 2015

Jason wrote:

>The calculator breaks down above a z-score of 6, 
>which has a probability of about 1 in a billion, 
>corresponding to an IQ of about 190. An IQ of 
>500 would have a z-score of 26.667, and an IQ 
>of 1000 would be a z-score of 60. So we must 
>ask: out of 10^20 or 10^30 naturally born 
>humans, how smart would the smartest of those 
>~10^30 be?
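Jason's conversions follow from the standard IQ scale (mean 100, SD 15); the tail probability of a z-score comes from the standard normal, P(Z > z) = erfc(z/sqrt(2))/2. A minimal sketch of both calculations, using only Python's math module (the scale constants are the usual convention, not something fixed by the thread):

```python
import math

MEAN, SD = 100, 15  # standard IQ scale assumed in the thread (SD 15)

def iq_to_z(iq):
    """Convert an IQ score to a z-score (standard deviations from the mean)."""
    return (iq - MEAN) / SD

def upper_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

print(iq_to_z(500))            # 26.666... standard deviations
print(iq_to_z(1000))           # 60.0 standard deviations
print(1 / upper_tail(6))       # odds of z > 6: roughly 1 in a billion
```

Running it confirms the figures above: IQ 500 is z = 26.667, IQ 1000 is z = 60, and a z-score of 6 is about a 1-in-a-billion event.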

If we're talking 10^30 actual members of our 
current species, because we managed to spread 
through the universe or multiverse, I see two 
choices that are not mutually exclusive. They 
might already be at the maximum our current 
biology is capable of by itself, which might 
make 1 in 10^30 no different from 1 in 10^12. 
Or there might be unknown abilities that appear 
in genetic permutations that have never yet 
occurred, whose nature we have to admit we have 
no clue about.

It might be helpful to think about the flip 
question, since IQ runs in both directions. If 
the curve is symmetrical—which it may not be—
then for every person at +n standard deviations 
there's someone at -n. Today, is a 
1-in-a-billion IQ of 10 (on SD 15) 
distinguishable from an IQ of 15 (1 in 137 
million)? When we look at the lowest levels 
today, it's obvious there's a point below which 
the biology can't go, and giving those two 
people different IQ numbers is pointless.

Anders replied:

>I think a better approach would be to consider 
>how an animal would perceive human intelligence. 
>We do incomprehensible or arbitrary things - 
>generate some odd sounds, move stuff about, 
>handle objects - and then big outcomes occur for 
>often no obvious reason - food, images or rooms 
>appear, other humans just do things as if they 
>knew what we were thinking. Sometimes the point 
>becomes somewhat clear far in retrospect, but 
>most of the time there is no discernible link. 
>And of course, many of the things that humans 
>worry or enthuse about are things the animals 
>simply do not get - why would a human get sad 
>over a piece of paper with scribbles on it?
>So I would expect superintelligences to be like 
>this. They do stuff, stuff happens, and 
>sometimes we can see that some desire and 
>goal seems to have been met. If they are 
>human-derived we can sometimes see the similar 
>drives, but also totally alien interests and 
>drives. In many ways they would be confusing and 
>boring, except when they decide to play with us.

An interesting question falls out: Why do we 
describe them as superintelligences? Any alien 
species might be incomprehensible or seem 
arbitrary in its actions to us. Super- is a 
judgment. As is denying super. If we're wowed by 
what a human does, we might label her super. Or 
in our mystification at what she did, we might label her retarded or crazy.

In an extropian context, I think we use 
superintelligence to denote those 
incomprehensible minds that we hope or fear can 
see and leverage what we cannot, in ways that 
might have rapid, extreme consequences for us. 
It's the impact on us that draws our attention, 
that distinguishes them from the merely weird.

-- David.
