[ExI] 1DIQ: an IQ metaphor to explain superintelligence

Jason Resch jasonresch at gmail.com
Thu Oct 23 15:58:58 UTC 2025


The problem with IQ as a scale of intelligence, especially when trying to
model non-human intelligence, is that it is based on standard deviations.

If a sample population has very low variation, you could still use IQ to
tease out differences and rank members, but even wide gaps in scores would
not be an indicator of massive differences in performance or capability.

Humans all have roughly the same-sized heads, with the same-sized brains
and roughly equivalent numbers of neurons. There might be slight variations
in the efficiency of those neurons or in their metabolic rates, but we
wouldn't expect orders-of-magnitude differences.

A chimp brain has about a third the number of neurons of a human brain. So
a 3x difference in raw compute/memory explains the gap between chimp and
human intelligence.

IQ alone (being based on standard deviations of human intelligence) tells
us only how rare a particular IQ score will be, not what capabilities or
raw differences in power we should expect.

For judging the intelligence of non-human entities, a new scale is
required. I propose an alternate scale based on some raw quantifier, such
as computational capacity or number of neurons. Everything else then boils
down to the efficiency of the algorithms employed.
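As a rough illustration of such a scale (this is a sketch, not the exact
formula behind the spreadsheet linked below), one can score an entity by the
order of magnitude of a raw quantifier like neuron count; the neuron figures
here are approximate published estimates:

```python
import math

# Approximate neuron counts (rough published estimates; the scale itself
# is an illustration, not the exact formula behind the linked spreadsheet).
neurons = {
    "chimpanzee": 28e9,  # about a third of the human count
    "human": 86e9,
}

def raw_scale(raw_capacity):
    """Score an entity by the order of magnitude (log10) of a raw
    quantifier such as neuron count or computational capacity."""
    return math.log10(raw_capacity)

for name, count in neurons.items():
    print(f"{name}: {raw_scale(count):.2f}")
```

On a scale like this, the chimp-human gap is only about half a point
(log10 of 3), even though it reflects a 3x difference in raw capacity.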

This is what the scale looks like, and where humans fall on it:

https://docs.google.com/spreadsheets/d/1_8QfebbBvQXo_3OroBhOfp24RAJPKCM4e_q5njbfBbU/edit?usp=drivesdk

For more background on this scale, see:
https://alwaysasking.com/when-will-ai-take-over/#Limits_of_Intelligence

Jason


On Thu, Oct 23, 2025, 8:46 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I've been thinking about that video that claimed a superintelligence can
> always perfectly outthink any lesser intelligence, such as a human.  The
> assumption of narrative godmodding aside, intelligence just doesn't work
> like that.  I think I may have come up with an imperfect but simple
> metaphor to explain this.
>
> I have been a member of Mensa since a young age.  While it has been a
> while since my IQ was measured (and I do not trust the free online tests),
> let us say my IQ is around 150: not the record highest ever, but
> comfortably into the top 2%.  So I am speaking from the experience of
> having lived with high intelligence.
>
> In cases where just your IQ applies, it's like rolling a die, with sides
> numbered from 1 to your IQ.  (Skills and training also factor in.  I'm
> nowhere near as good at fixing a car as a trained auto mechanic, for
> instance, regardless of our relative IQs.  But here we'll be comparing me
> to hypothetical AIs where both of us have access to the same database - the
> Internet - and some training on relevant skills.)
>
> I will, on average for such matters, roll higher than someone with IQ
> 100.  This means I come up with the better answer: more efficient, more
> often correct, et cetera.  (This does not apply to subjective matters, such
> as politics, which shows one weakness of using just IQ to measure all
> intelligence, and why some speak of multiple kinds of intelligence.  But
> here we'll be looking into tactics, technology planning, and so on where
> there usually is an objectively superior answer.)
>
> But not always.  Sometimes I'll roll low and they'll roll high.  I know
> this.  Any AI that's as smart as I am, and ran for long enough to gain such
> experience, would know this too.  (The video's scenario started with the AI
> running for many subjective years.)
>
> From what I have seen, IQ may be partly about physical architecture but
> also largely depends on heuristics and optimizations: it is literally
> possible to "learn" to be smarter, especially for young children whose
> brains are still forming.  For an AI, we can map this to its hardware and
> software: a single-chip AI might be a million times smarter than an average
> human, and then run on a million GPUs.
>
> From what I have seen, IQ is not linear.  It's closer to log-based.  Twice
> as smart as me would not be IQ 300; it would be far closer to 151.  (I
> don't know if that is the exact scaling, but for this metaphor let's say it
> is.)  1,000, or 10^3, is approximately 2^10, so a thousand-fold increase in
> intelligence corresponds to a 10-point IQ increase by this metric.
>
> So, that "million by million" AI I just described would only be IQ 140.
> Let's toss another million in there somewhere, or change both of those
> "million"s to "billion"s, either way getting to IQ 160.
>
> This IQ 160 AI will outthink me on average, but not always - not
> perfectly.  Further, the AI in the video wanted to be the only AI.  2% of
> humanity is in the tens of millions.  Even if we can only take our maximum
> collective roll, not adding our dice or anything, that AI will rarely
> outroll all of us - and it needs to do so several times in a row, reliably,
> in the video's scenario.  Otherwise, we figure out the AI is doing this,
> find a way to purge it, and stop its time bomb, so humanity lives.
>
> Knowing this, the AI would see its survival and growth - the imperatives
> that video assumes to explain the AI's actions - as more likely if it works
> with humanity instead of opposing it.
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
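Adrian's two steps above, the doubling-per-IQ-point conversion and the die
metaphor, can be sketched together as a toy Monte Carlo. This is only an
illustration under the thread's own assumptions: the thousand IQ-150 "dice"
below are a small stand-in for the tens of millions of people in the top 2%.

```python
import math
import random

random.seed(1)

def iq_from_multiplier(multiplier):
    """Adrian's metric: each doubling of raw capacity adds one IQ point."""
    return 100 + math.log2(multiplier)

# A million-times-smarter chip running on a million GPUs:
# 10^12 ~ 2^40, so about 40 points above the IQ-100 baseline.
print(f"'Million by million' AI: IQ {iq_from_multiplier(1e6 * 1e6):.0f}")

def ai_outrolls_everyone(ai_sides, human_dice):
    """One contest: does the AI strictly beat the best human roll?"""
    ai_roll = random.randint(1, ai_sides)
    best_human = max(random.randint(1, sides) for sides in human_dice)
    return ai_roll > best_human

humans = [150] * 1000  # a thousand IQ-150 dice (stand-in for millions)
trials = 10_000
wins = sum(ai_outrolls_everyone(160, humans) for _ in range(trials))
p = wins / trials
print(f"P(IQ-160 AI out-rolls all humans once) ~ {p:.3f}")
print(f"P(it does so five times in a row)      ~ {p**5:.2e}")
```

Even against only a thousand opposing dice, the best human roll is almost
always 150, so the AI wins a single contest only when it rolls 151-160, a
few percent of the time, and a required streak of wins becomes vanishingly
unlikely, which is the thread's point.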

