[ExI] 1DIQ: an IQ metaphor to explain superintelligence
    William Flynn Wallace 
    foozler83 at gmail.com
       
    Fri Oct 31 00:37:57 UTC 2025
    
    
  
Jason, are you saying that only a future AI with adequate memory will ever
understand our minds?  At that point, humans are superfluous, not needed,
better off extinct.  Or the AIs will keep us around as interesting pets.
bill w
On Thu, Oct 30, 2025 at 5:12 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I have read several times in these chats the assumption that one cannot
>> understand something as complicated as oneself.
>>
>> Why not?  It sounds reasonable, but what's the basis for it?   bill w
>>
>
> I believe it may follow from information theory.
>
> Consider: if understanding(X) requires holding some additional
> higher-level set of relations and interrelations beyond the mere
> specification of what X is, then the information contained within
> understanding(X) will always be greater than the information contained in X.
>
> Now extend this to the brain. If the brain's information content is Y,
> then understanding(Y) requires a brain with a greater information storage
> capacity than Y.
>
> Or another way to think about it: how many neurons does it take to
> memorize all the important facts of a single neuron's connections within
> the brain? If it takes N neurons to store the facts about one neuron, then
> merely memorizing a full brain state will require a brain with N times as
> many neurons as the brain being memorized.
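Jason's scaling argument can be put in rough numbers. The figures below (about 86 billion neurons, about 7,000 synapses per neuron) are commonly cited ballpark estimates, not exact values, so this is only a back-of-envelope sketch:

```python
import math

# Rough, commonly cited estimates (assumptions, not exact figures):
N_NEURONS = 86_000_000_000      # neurons in a human brain
SYNAPSES_PER_NEURON = 7_000     # average synapses per neuron

# To record which neuron a synapse targets, you need an "address"
# wide enough to distinguish every neuron in the brain.
bits_per_address = math.ceil(math.log2(N_NEURONS))   # 37 bits

# Information needed just to list one neuron's connections:
bits_per_neuron = SYNAPSES_PER_NEURON * bits_per_address

# Memorizing the whole connectome scales that by the neuron count:
total_bits = N_NEURONS * bits_per_neuron

print(f"{bits_per_address} bits per neuron address")
print(f"~{bits_per_neuron / 8 / 1024:.0f} KiB per neuron's connection list")
print(f"~{total_bits / 8 / 1e15:.1f} petabytes for the bare wiring list")
```

Even ignoring synaptic weights and dynamics, the bare wiring list comes to petabytes on these assumptions; a brain holding a description of itself at that fidelity would need storage devoted purely to that memory on the order of everything it contains, which is the asymmetry Jason describes.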
>
> Jason
>
>
>
>> On Thu, Oct 30, 2025 at 2:22 PM John Clark via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Tue, Oct 28, 2025 at 4:16 PM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>> *> There are also nuances. For example, different interpretations of "to
>>>> understand".*
>>>
>>>
>>> *Exactly.  We can have a general sort of understanding of how our brain
>>> works, but to have a perfect understanding, a part of our brain would
>>> have to hold a sort of internal map of the entire brain, and for it to be
>>> perfect there would have to be a one-to-one correspondence between the
>>> map and the territory. That is impossible for something finite, like the
>>> number of neurons in the human brain. However, a proper subset of
>>> something infinite can have a one-to-one correspondence with the entire
>>> set; then you could have such a perfect map, and then you'd always know
>>> what you were going to do long before you did it. And you wouldn't feel
>>> free. So by the only definition of free will that is not gibberish (not
>>> knowing what you're going to do next until you actually do it) we reach
>>> the interesting conclusion that a human being does have free will, but
>>> God does not.*
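John's set-theoretic point is the classic Dedekind property of infinite sets: an infinite set can be put in one-to-one correspondence with a proper subset of itself. A tiny illustration (checked on a finite initial segment, since that is all code can sample):

```python
# The map n -> 2n pairs every natural number with a distinct even
# number, even though the evens are a strict subset of the naturals.
def f(n: int) -> int:
    return 2 * n

sample = range(1000)             # any finite initial segment of the naturals
images = [f(n) for n in sample]

# One-to-one: no two inputs collide.
assert len(set(images)) == len(images)

# Into a proper subset: every image is even, so every odd number is missed.
assert all(m % 2 == 0 for m in images)
```

No finite set has this property: mapping a finite set into a proper subset of itself must produce a collision (the pigeonhole principle), which is why the "perfect internal map" is ruled out for a finite brain but not, on John's reading, for an infinite one.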
>>>
>>> *John K Clark*
>>>
>>>
>>>>
>>>> "If our brains were simple enough for us to understand, we would be
>>>> simple enough that we could not."
>>>>
>>>>
>>>> Well, that just sounds defeatist to me. It makes a nice little
>>>> pessimistic soundbite (if you like pessimism), but is there any evidence
>>>> that it's true? Or any logical argument for it?
>>>> There are also nuances. For example, different interpretations of "to
>>>> understand".
>>>>
>>>> Maybe you are right, given "understand completely" (whatever that
>>>> actually means). Maybe definitely not, given "understand enough to be
>>>> useful/worth the attempt".
>>>>
>>>> We have, after all, discovered a lot about how brains work already.
>>>> Maybe not a lot in comparison to all there is to be discovered, but more
>>>> than enough to be useful, and I doubt if we have reached some sort of limit
>>>> on what we are capable of discovering and understanding.
>>>>
>>>> And there's always AI assistance with this kind of research, which
>>>> greatly extends our reach, and adds more variations of "to understand".
>>>>
>>>> On the whole, I think the statement is harmful, in that it tends to
>>>> discourage even trying.
>>>>
>>>> --
>>>> Ben
>>>>
>>>>
>>>> _______________________________________________
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org
>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>
>>
>
    
    