[ExI] 1DIQ: an IQ metaphor to explain superintelligence

Jason Resch jasonresch at gmail.com
Fri Oct 24 14:49:09 UTC 2025


On Fri, Oct 24, 2025, 10:33 AM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> The case might be that the AI is simply faster than the fastest human -
> actually this is a given, right? Quantitative things will favor the AI.
>
> Now if the AI is using qualitatively different thinking unfamiliar to
> humans, then that will be a mystery unless the AI can explain it.
>
> If it can, it might not appear to us to be anything special, unless it can
> be shown that the AI solved a problem humans can't.  If it can't because of
> speed, I don't see that as requiring anything special.
>
> We need to quit focusing on speed.  That has been long settled. Faster is
> not a higher level of thinking.  Beating humans at chess comes down to
> speed, not different thinking.  We need to figure out the 'how' of the AI's
> problem solving.   bill w
>

I think the "how" of even human thinking is already largely beyond the
capacity of humans to understand.

Consider that some 30% of your neocortex is used for visual processing.
That's billions of neurons. Can the human brain truly comprehend a machine
with billions of parts?

Only by abstraction, which is to say, by ignoring fine details. So at best,
all we will ever have is an incomplete understanding of how our brains
achieve what they do. Our brains are not complex enough to fully understand
their own operation. There are perhaps arguments that would extend this to
any brain, however complex. If so, then we should not expect an AI, however
advanced, to understand itself fully.

Jason



> On Fri, Oct 24, 2025 at 9:14 AM John Clark via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Fri, Oct 24, 2025 at 9:21 AM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>> *> If an alien superintelligence visited us and allowed us to ask it any
>>> question, we could readily determine its computational capacity by asking
>>> questions that required more and more computing power to solve.*
>>
>>
>> *Not if that alien superintelligence had found an algorithm that was more
>> efficient at solving that problem than any that you know about. Or if that
>> alien intelligence was part of a quantum computer that had several hundred
>> logical qubits. *
>>
>>
>> *John K Clark*
>>
>>
>>
>>
>>
>>
>>>
>>> On Fri, Oct 24, 2025, 8:49 AM Jason Resch <jasonresch at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Oct 24, 2025, 8:15 AM John Clark via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> On Thu, Oct 23, 2025 at 9:48 AM Adrian Tymes via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>
>>>>>> *> I was addressing the terrestrial-scale scenario presented,
>>>>>> not potential J-Brains (which would occupy different planets entirely).*
>>>>>
>>>>>
>>>>> *The Guinness Book of World Records no longer recognizes a highest IQ
>>>>> category because of "a lack of a universally agreed-upon standard".*
>>>>>
>>>> * It's easy to see why they did that: the only one who would have the
>>>>> competence to write a test to find the world's smartest human would be the
>>>>> world's smartest human, and that fact introduces certain
>>>>> obvious difficulties.  *
>>>>>
>>>>
>>>> You can take any set of questions, so long as they have agreed-upon
>>>> answers, and make an IQ test out of them: simply give the test to many
>>>> people and you will find their performance fits a bell curve. This is
>>>> generally true regardless of what questions you ask, so long as they're
>>>> not so easy that you get a cluster of perfect scores.
>>>>
>>>> The questions don't have to be written by someone with a higher IQ;
>>>> rather, they just have to be such that there's a non-zero probability that
>>>> someone won't know the answer. So a question might require specialized or
>>>> esoteric knowledge, or be one that takes a lot of time to figure out (with
>>>> the test time limited accordingly).
>>>>
>>>> So long as very high IQ people don't all get perfect scores on the test,
>>>> you can rank them, and you will find the distribution follows a bell
>>>> curve.
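
To make this concrete, here is a minimal sketch (Python, with made-up raw
scores) of norming any such question set to the conventional IQ scale of
mean 100 and standard deviation 15:

    from statistics import mean, stdev

    # Hypothetical raw scores from giving the test to a sample of people
    raw_scores = [12, 15, 9, 22, 18, 14, 30, 11, 16, 19]

    mu = mean(raw_scores)
    sigma = stdev(raw_scores)

    # Standardize each raw score, then rescale to mean 100, SD 15
    iq_scores = [100 + 15 * (s - mu) / sigma for s in raw_scores]

    for raw, iq in zip(raw_scores, iq_scores):
        print(raw, round(iq))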
>>>>
>>>>
>>>>> *How could somebody with just Human intelligence even judge the
>>>>> responses that a superintelligence gave on an IQ test?*
>>>>>
>>>>
>>>> What's the capital of Benin?
>>>>
>>>> This is something a 100 IQ person can judge and verify the answer to, but
>>>> something that fewer than 5% of the population will know.
>>>>
>>>> If you have a test with a lot of questions such as these, then high or
>>>> perfect scores will be extremely rare. Someone must be very well read,
>>>> knowledgeable, and have a great memory to do well on such a test.
>>>>
>>>> To test processing speed, you can ask math questions that have
>>>> agreed-upon answers but require many steps of processing, like multiplying
>>>> two 5-digit numbers. Again, this is a question that someone with a 100 IQ
>>>> can verify, but depending on the time allowed, perhaps very few people
>>>> will be able to answer.
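
Generating and grading such questions is trivial to automate; a toy sketch
(Python) of what the grader needs:

    import random

    def make_question(rng=random):
        # Two random 5-digit numbers to multiply
        a = rng.randint(10000, 99999)
        b = rng.randint(10000, 99999)
        return a, b

    def check_answer(a, b, claimed):
        # Verification is a single multiplication for the grader
        return claimed == a * b

    a, b = make_question()
    print(f"What is {a} x {b}?")
    print(check_answer(a, b, a * b))   # True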
>>>>
>>>> Jason
>>>>
>>>> * Suppose the year was 1901 and one of the items on an IQ test was
>>>>> "prove Fermat's Last Theorem", and suppose that somebody had given a proof
>>>>> identical to the one that Andrew Wiles gave in 1995; how could anybody know
>>>>> if it was valid? In 1901 even the world's top mathematicians would have had
>>>>> no idea what Wiles was talking about, because his proof used concepts
>>>>> without explanation. He didn't need to explain them because they were
>>>>> common knowledge to all mathematicians in 1995 but were completely unknown
>>>>> to mathematicians in 1901. If Wiles had included all those explanations in
>>>>> his proof then it would've been 10 times as large, and even then it
>>>>> would've probably taken mathematicians at least a decade to fully
>>>>> understand it and realize that Wiles was right.*
>>>>>
>>>>
>>>
>>> As to what questions we should choose to ask a superintelligence, they
>>> should be questions of a type that directly measures what intelligence is
>>> and requires: pattern recognition and prediction.
>>>
>>> You can generate a random function, produce a sequence of outputs from it,
>>> and then ask the superintelligence to identify the function that produced
>>> the sequence.
>>>
>>> See:
>>> https://en.wikipedia.org/wiki/AIXI
>>>
>>> Generating functions in this way isn't difficult, nor is verifying
>>> answers; both can be done mechanically and in an automated fashion. But
>>> working out the function from its outputs can be immensely difficult. For
>>> example, cryptographic pseudorandom number generators are designed so that
>>> recovering the seed value from their output requires exponentially many
>>> steps.
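
As a toy illustration of the generate-and-verify pattern (not AIXI itself,
just a sketch of the setup):

    import random

    def make_hidden_function(rng):
        # A secret quadratic with small random coefficients -- a stand-in for
        # any randomly generated function (a CSPRNG seed would be far harder)
        a, b, c = rng.randint(1, 9), rng.randint(1, 9), rng.randint(1, 9)
        return lambda x: a * x * x + b * x + c

    def verify(candidate, outputs):
        # Checking a proposed function against the published outputs is cheap
        return all(candidate(i) == y for i, y in enumerate(outputs))

    rng = random.Random(42)
    hidden = make_hidden_function(rng)
    outputs = [hidden(i) for i in range(20)]   # what the examinee is shown

    # The examinee must recover the rule; the examiner only runs verify()
    print(verify(hidden, outputs))   # True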
>>>
>>> If an alien superintelligence visited us and allowed us to ask it any
>>> question, we could readily determine its computational capacity by asking
>>> questions that required more and more computing power to solve. Eventually
>>> there would be questions it would fail to answer due to its computational
>>> limits.
>>>
>>> Again, this doesn't require superintelligence to set up or judge these
>>> difficult questions. This follows so long as "P != NP" (there are questions
>>> whose answers are computationally easy to verify but computationally hard
>>> to find).
>>>
>>> https://en.wikipedia.org/wiki/P_versus_NP_problem
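
A concrete toy instance of "easy to verify, hard to find" (a sketch; the
hardness of inverting a hash is believed, not proven):

    import hashlib

    # The examiner publishes only the hash of a secret string
    challenge = hashlib.sha256(b"some secret the examiner chose").hexdigest()

    def verify(claimed_preimage: bytes) -> bool:
        # One hash computation suffices to check any claimed answer
        return hashlib.sha256(claimed_preimage).hexdigest() == challenge

    print(verify(b"wrong guess"))                        # False
    print(verify(b"some secret the examiner chose"))     # True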
>>>
>>> This is regarded as the greatest open problem in computer science, but the
>>> inequality is nearly universally believed to be true.
>>>
>>> Jason
>>>
>>>
>>>
>>>
>>>
>>>
>>>>>
>>>>> *John K Clark*
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> On Thu, Oct 23, 2025 at 9:32 AM John Clark <johnkclark at gmail.com>
>>>>>> wrote:
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > On Thu, Oct 23, 2025 at 8:47 AM Adrian Tymes via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>> >
>>>>>> >>  > IQ 160 AI will outthink me on average, but not always
>>>>>> >
>>>>>> >
>>>>>> > I see no reason to believe that a smart human is about as smart as
>>>>>> something can be. I also don't believe an IQ test can meaningfully measure
>>>>>> the intelligence of something that is significantly smarter than the people
>>>>>> who wrote the IQ test, so an IQ of 300 or even 200 means nothing. And I
>>>>>> don't think there are many people who have an IQ of 160 and are in the IQ
>>>>>> test writing business. But if there were such a test that could measure
>>>>>> intelligence of any magnitude, and if you made a logarithmic plot of it, I
>>>>>> think you'd need a microscope to see the difference between the village
>>>>>> idiot and Albert Einstein, but if you were standing at the Albert Einstein
>>>>>> point you'd need a telescope to see the Mr. Jupiter Brain point.
>>>>>> >
>>>>>> > John K Clark
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >> I've been thinking about that video that claimed a
>>>>>> superintelligence can always perfectly outthink any lesser intelligence,
>>>>>> such as a human.  The assumption of narrative godmodding aside,
>>>>>> intelligence just doesn't work like that.  I think I may have come up with
>>>>>> an imperfect but simple metaphor to explain this.
>>>>>> >>
>>>>>> >> I have been a member of Mensa since a young age.  While it has
>>>>>> been a while since my IQ was measured (and I do not trust the free online
>>>>>> tests), let us say my IQ is around 150: not the record highest ever, but
>>>>>> comfortably into the top 2%.  So I am speaking from the experience of
>>>>>> having lived with high intelligence.
>>>>>> >>
>>>>>> >> In cases where just your IQ applies, it's like rolling a die, with
>>>>>> sides numbered from 1 to your IQ.  (Skills and training also factor in.
>>>>>> I'm nowhere near as good at fixing a car as a trained auto mechanic, for
>>>>>> instance, regardless of our relative IQs.  But here we'll be comparing me
>>>>>> to hypothetical AIs where both of us have access to the same database - the
>>>>>> Internet - and some training on relevant skills.)
>>>>>> >>
>>>>>> >> I will, on average for such matters, roll higher than someone with
>>>>>> IQ 100.  This means I come up with the better answer: more efficient, more
>>>>>> often correct, et cetera.  (This does not apply to subjective matters, such
>>>>>> as politics, which shows one weakness of using just IQ to measure all
>>>>>> intelligence, and why some speak of multiple kinds of intelligence.  But
>>>>>> here we'll be looking into tactics, technology planning, and so on where
>>>>>> there usually is an objectively superior answer.)
>>>>>> >>
>>>>>> >> But not always.  Sometimes I'll roll low and they'll roll high.  I
>>>>>> know this.  Any AI that's as smart as I am, and ran for long enough to gain
>>>>>> such experience, would know this too.  (The video's scenario started with
>>>>>> the AI running for many subjective years.)
>>>>>> >>
>>>>>> >> From what I have seen, IQ may be partly about physical
>>>>>> architecture but also largely depends on heuristics and optimizations: it
>>>>>> is literally possible to "learn" to be smarter, especially for young
>>>>>> children whose brains are still forming.  For an AI, we can map this to its
>>>>>> hardware and software: a single-chip AI might be a million times smarter
>>>>>> than an average human, and then run on a million GPUs.
>>>>>> >>
>>>>>> >> From what I have seen, IQ is not linear.  It's closer to
>>>>>> log-based.  Twice as smart as me would not be IQ 300; it would be far
>>>>>> closer to 151.  (I don't know if that is the exact scaling, but for this
>>>>>> metaphor let's say it is.)  1,000, or 10^3, is approximately 2^10, so a
>>>>>> thousand-fold increase in intelligence corresponds to a 10-point IQ
>>>>>> increase by this metric.
>>>>>> >>
>>>>>> >> So, that "million by million" AI I just described would only be IQ
>>>>>> 140.  Let's toss another million in there somewhere, or change both of
>>>>>> those "million"s to "billion"s, either way getting to IQ 160.
>>>>>> >>
>>>>>> >> This IQ 160 AI will outthink me on average, but not always - not
>>>>>> perfectly.  Further, the AI in the video wanted to be the only AI.  The
>>>>>> top 2% of humanity numbers in the tens of millions.  Even if we can only
>>>>>> take our maximum collective roll, not adding our dice or anything, that
>>>>>> AI will rarely
>>>>>> outroll all of us - and it needs to do so several times in a row, reliably,
>>>>>> in the video's scenario.  Otherwise, we figure out the AI is doing this,
>>>>>> find a way to purge it, and stop its time bomb, so humanity lives.
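
Taking the dice metaphor literally, a quick sketch of how rarely a single
IQ-160 roll beats the best of tens of millions of IQ-150 rolls (numbers
assumed from the post above):

    def p_ai_beats_everyone(n=10_000_000, human_sides=150, ai_sides=160):
        # Probability that one uniform 1..ai_sides roll strictly exceeds the
        # maximum of n uniform 1..human_sides rolls
        total = 0.0
        for ai_roll in range(1, ai_sides + 1):
            p_all_below = (min(ai_roll - 1, human_sides) / human_sides) ** n
            total += p_all_below / ai_sides
        return total

    print(p_ai_beats_everyone())   # roughly 0.0625 -- about one time in 16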
>>>>>> >>
>>>>>> >> Knowing this, the AI would see its survival and growth - the
>>>>>> imperatives that video assumes to explain the AI's actions - as more likely
>>>>>> if it works with humanity instead of opposing it.
>>>>>> >>
>>>>>>
>