[ExI] 1DIQ: an IQ metaphor to explain superintelligence

Jason Resch jasonresch at gmail.com
Fri Oct 31 00:53:24 UTC 2025


On Thu, Oct 30, 2025, 8:38 PM William Flynn Wallace <foozler83 at gmail.com>
wrote:

> Jason, are you saying that only a future AI with adequate memory will
> ever understand our minds?
>

I suppose I am saying that only a vastly greater mind has any hope of *fully
understanding* the workings of another, lesser mind.

Consider that even a fruit fly's brain has 140,000 neurons and 50 million
synapses. Is there any machine of equivalent complexity you can point to
whose workings humans *fully understand*?
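
For a rough sense of scale (a back-of-envelope sketch: the encoding choices
below are my own assumptions, not measured figures), here is what it takes
merely to *write down* a connectome of that size, never mind understand it:

# Bits needed just to list a wiring diagram of ~140,000 neurons
# and ~50,000,000 synapses (the figures from the message above).
import math

neurons = 140_000
synapses = 50_000_000

bits_per_endpoint = math.ceil(math.log2(neurons))  # ~18 bits to name one neuron
bits_per_synapse = 2 * bits_per_endpoint + 8       # two endpoints + assumed 8-bit weight

total_bytes = synapses * bits_per_synapse / 8
print(f"{total_bytes / 1e6:.0f} MB")               # ~275 MB for the static wiring alone

And that is only the static wiring diagram, before any account of dynamics,
chemistry, or meaning.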

> At that point, humans are superfluous, not needed, better off extinct.  Or
> the AIs will keep us around as interesting pets.
>

I'm not sure all that follows, but certainly by the time there is an
intelligence capable of understanding everything there is to know about the
human brain, humanity will no longer be the most intelligent entity around.

Jason


>
>
> On Thu, Oct 30, 2025 at 5:12 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> I have read several times in these chats the assumption that one cannot
>>> understand something as complicated as oneself.
>>>
>>> Why not?  It sounds reasonable but what's the basis for it?   bill w
>>>
>>
>> I believe it may follow from information theory.
>>
>> Consider: if understanding(X) requires holding some additional
>> higher-level set of relations and interrelations beyond the mere
>> specification of what X is, then the information contained within
>> understanding(X) will always be greater than the information contained in X.
>>
>> Now extend this to the brain. If a brain's information content is Y, then
>> understanding(Y) requires a brain with greater information storage
>> capacity than Y.
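>>
>> Put loosely in symbols (my sketch of the claim, not a rigorous theorem):
>> writing I(.) for information content, the argument is that
>>
>>     I(understanding(X)) = I(X) + I(relations over X) > I(X),
>>
>> so a memory of capacity C can fully understand only systems X with
>> I(X) < C, a bound no system can meet with X equal to itself.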
>>
>> Or another way to think about it: how many neurons does it take to
>> memorize all the important facts about a single neuron's connections within
>> the brain? If it takes N neurons to store that memory, then merely memorizing
>> a brain state will require a brain with N times as many neurons as the
>> brain being memorized.
>>
>> Jason
>>
>>
>>
>>> On Thu, Oct 30, 2025 at 2:22 PM John Clark via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Tue, Oct 28, 2025 at 4:16 PM Ben Zaiboc via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> There are also nuances. For example, different interpretations of
>>>>> "to understand".
>>>>
>>>>
>>>> Exactly. We can have a general sort of understanding of how our brain
>>>> works, but to have a perfect understanding, a part of our brain would
>>>> have to hold a kind of internal map of the entire brain, and for the map
>>>> to be perfect there would have to be a one-to-one correspondence between
>>>> the map and the territory. That is impossible for anything finite, such
>>>> as the set of neurons in a human brain. However, a proper subset of an
>>>> infinite set can be put into one-to-one correspondence with the entire
>>>> set; a being like that could hold such a perfect map, and would then
>>>> always know what it was going to do long before it did it. And it
>>>> wouldn't feel free. So by the only definition of free will that is not
>>>> gibberish (not knowing what you're going to do next until you actually
>>>> do it), we reach the interesting conclusion that a human being does have
>>>> free will, but God does not.
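>>>>
>>>> A concrete instance of that set-theoretic point (my illustration, not
>>>> John's): the map n -> 2n pairs the naturals one-to-one with the evens,
>>>> a proper subset of them, which no finite set can manage. A minimal
>>>> check in Python over a finite prefix:
>>>>
>>>> naturals = range(1000)
>>>> evens = {2 * n for n in naturals}     # image of n -> 2n
>>>> assert len(evens) == len(naturals)    # the pairing is one-to-one
>>>> assert evens < set(range(2000))       # and lands in a proper subset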
>>>>
>>>> John K Clark
>>>>
>>>>
>>>> non-flying animal.
>>>>>
>>>>> "If our brains were simple enough for us to understand, we would be
>>>>> simple enough that we could not."
>>>>>
>>>>>
>>>>> Well, that just sounds defeatist to me. It makes a nice little
>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence
>>>>> that it's true? Or any logical argument for it?
>>>>> There are also nuances. For example, different interpretations of "to
>>>>> understand".
>>>>>
>>>>> Maybe you are right, given "understand completely" (whatever that
>>>>> actually means). And maybe you are definitely wrong, given "understand
>>>>> enough to be useful/worth the attempt".
>>>>>
>>>>> We have, after all, discovered a lot about how brains work already.
>>>>> Maybe not a lot in comparison to all there is to be discovered, but more
>>>>> than enough to be useful, and I doubt if we have reached some sort of limit
>>>>> on what we are capable of discovering and understanding.
>>>>>
>>>>> And there's always AI assistance with this kind of research, which
>>>>> greatly extends our reach, and adds more variations of "to understand".
>>>>>
>>>>> On the whole, I think the statement is harmful, in that it tends to
>>>>> discourage even trying.
>>>>>
>>>>> --
>>>>> Ben
>>>>>
>>>>>
>