[ExI] 1DIQ: an IQ metaphor to explain superintelligence
    Jason Resch 
    jasonresch at gmail.com
       
    Sun Nov  2 14:26:56 UTC 2025
    
    
  
On Sun, Nov 2, 2025, 9:16 AM Jason Resch <jasonresch at gmail.com> wrote:
>
>
> On Sun, Nov 2, 2025, 9:05 AM John Clark <johnkclark at gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 31, 2025 at 10:50 AM Jason Resch <jasonresch at gmail.com>
>> wrote:
>>
>>
>>
>>> >>> See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE
>>>>>
>>>>
>>>> *>> If "real desires" require perfect knowledge, then "real desires" do
>>>> not exist and it is not a useful concept. *
>>>>
>>>
>>> *> The better our knowledge and intelligence become, the more closely we
>>> approach that unattainable perfect grasp. **It is a useful concept
>>> insofar as it defines an ideal, just as Turing machines define
>>> computation, even though their perfect and unlimited memory is
>>> unrealizable in practice.*
>>>
>>
>> *You're right: Turing was able to define computation with his machine, his
>> instructions for constructing the device were simple and very clear, and he
>> was able to prove a number of fascinating things about computation from it.
>> But there is nothing equivalent when it comes to morality, certainly not a
>> proof that "all sufficiently intelligent and rational agents reach the same
>> morality". And all the empirical evidence points in the opposite direction.*
>>
>
>
> If this is a problem that genuinely interests you (and I think it should,
> because if the claim is true, it means superintelligence will tend towards
> beneficence), then read the attached paper and see whether you agree with
> it, or whether you can uncover some fatal flaw in its reasoning.
>
The attachment failed, so I have uploaded the paper here:
https://drive.google.com/file/d/1l8T1z5dCQQiwJPlQlqm8u-1oWpoeth3-/view?usp=drivesdk
Jason
>>>>> On Fri, Oct 31, 2025, 8:04 AM John Clark via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>> On Thu, Oct 30, 2025 at 8:40 PM William Flynn Wallace via
>>>>>> extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>> *> Jason, are you saying that only a future AI with adequate memory
>>>>>>> will ever understand our minds? *
>>>>>>>
>>>>>>
>>>>>> *I don't know about Jason, but I would say that. And I'm saying that a
>>>>>> superintelligent AI will never fully understand its own mind because, even
>>>>>> though it understands ours, however big it gets it will still be finite.
>>>>>> And only with an infinite set can a proper subset be put into a one-to-one
>>>>>> correspondence with the entire set.  *
>>>>>>
>>>>>>> *> At that point, humans are superfluous, not needed, better off
>>>>>>> extinct. *
>>>>>>>
>>>>>>
>>>>>> *Better off for whom? Certainly not better off for us; maybe better
>>>>>> off for the AI.  *
>>>>>>
>>>>>> > Or the AIs will keep us around as interesting pets.
>>>>>>>
>>>>>>
>>>>>> *My hope is that the superintelligence will think we're cute pets, or
>>>>>> will feel some sort of sense of duty, like the obligation we feel in
>>>>>> taking care of an aged parent who has Alzheimer's disease. But whether a
>>>>>> superintelligent AI will feel either of those emotions strongly enough to
>>>>>> keep us around, I don't know. I can't predict with much specificity what
>>>>>> even a fellow human being who is no smarter than I am will do, and it is
>>>>>> vastly more difficult to predict the actions of a superintelligence, even
>>>>>> generally.  *
>>>>> Jason
>>>>>>> On Thu, Oct 30, 2025 at 5:12 PM Jason Resch via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via
>>>>>>>> extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>>>
>>>>>>>>> I have read several times in these chats the assumption that one
>>>>>>>>> cannot understand something as complicated as oneself.
>>>>>>>>>
>>>>>>>>> Why not?  It sounds reasonable, but what's the basis for it?   bill
>>>>>>>>> w
>>>>>>>>>
>>>>>>>>
>>>>>>>> I believe it may follow from information theory.
>>>>>>>>
>>>>>>>> Consider: if understanding(X) requires holding some additional
>>>>>>>> higher-level set of relations and interrelations beyond the mere
>>>>>>>> specification of what X is, then the information contained within
>>>>>>>> understanding(X) will always be greater than the information contained in X.
>>>>>>>>
>>>>>>>> Now extend this to the brain. If a brain's information content is Y,
>>>>>>>> then understanding(Y) requires a brain with a greater information storage
>>>>>>>> capacity than Y.
>>>>>>>>
>>>>>>>> Or, another way to think about it: how many neurons does it take to
>>>>>>>> memorize all the important facts of a single neuron's connections within
>>>>>>>> the brain? If it takes N neurons to store that memory, then just memorizing
>>>>>>>> a brain state will require a brain with N times as many neurons as the
>>>>>>>> brain being memorized.
>>>>>>>>
>>>>>>>> Jason
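
For concreteness, here is a rough back-of-envelope sketch of that argument in
Python. The neuron count, synapse count, and per-neuron storage figure below
are illustrative assumptions, not established numbers:

# Back-of-envelope: how much must be stored to "memorize" one neuron's
# wiring, and how large a brain would be needed to memorize a whole brain?
# All figures below are rough illustrative assumptions.
NEURONS = 86e9               # assumed neurons in a human brain
SYNAPSES_PER_NEURON = 7e3    # assumed average synapses per neuron
BITS_PER_SYNAPSE = 37        # ~log2(86e9) bits just to identify the target neuron

# Bits needed to record one neuron's outgoing connections:
bits_per_neuron = SYNAPSES_PER_NEURON * BITS_PER_SYNAPSE

# Assume each memorizing neuron can durably store about 100 bits:
BITS_STORED_PER_NEURON = 100

# "N" in the argument above: neurons needed to memorize one neuron's wiring.
N = bits_per_neuron / BITS_STORED_PER_NEURON

# Neurons needed to memorize the wiring of the entire brain:
total = N * NEURONS

print(f"N ~ {N:,.0f} neurons to memorize one neuron's connections")
print(f"Whole-brain memorization ~ {total:.2e} neurons (vs {NEURONS:.2e} available)")

Under these illustrative assumptions, the memorizing brain would need
thousands of times more neurons than the brain being memorized, which is the
gist of the argument.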
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Thu, Oct 30, 2025 at 2:22 PM John Clark via extropy-chat <
>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>
>>>>>>>>>> On Tue, Oct 28, 2025 at 4:16 PM Ben Zaiboc via extropy-chat <
>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>
>>>>>>>>>> *> There are also nuances. For example, different interpretations
>>>>>>>>>>> of "to understand".*
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Exactly.  We can have a general sort of understanding of how our
>>>>>>>>>> brain works, but to have a perfect understanding, a part of our brain would
>>>>>>>>>> have to hold a sort of internal map of the entire brain, and for it to be
>>>>>>>>>> perfect there would have to be a one-to-one correspondence between the map
>>>>>>>>>> and the territory; that is impossible for something finite, like the set of
>>>>>>>>>> neurons in the human brain. However, it would be possible for a proper
>>>>>>>>>> subset of something infinite to have a one-to-one correspondence with the
>>>>>>>>>> entire set; then you could have such a perfect map, and you'd always know
>>>>>>>>>> what you were going to do long before you did it. And you wouldn't feel
>>>>>>>>>> free. So by the only definition of free will that is not gibberish (not
>>>>>>>>>> knowing what you're going to do next until you actually do it), we reach
>>>>>>>>>> the interesting conclusion that a human being does have free will, but God
>>>>>>>>>> does not.*
>>>>>>>>>>
>>>>>>>>>> *John K Clark*
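
The set-theoretic fact being leaned on there is the standard one that an
infinite set can be matched one-to-one with a proper subset of itself. A
minimal Python sketch of such a pairing, using f(n) = 2n from the naturals
onto the even naturals:

# f(n) = 2n pairs each natural number with a distinct even natural, and every
# even natural is hit, so the naturals correspond one-to-one with a proper
# subset of themselves.
def f(n: int) -> int:
    return 2 * n

def f_inverse(m: int) -> int:
    return m // 2

# Spot-check the correspondence on the first few naturals:
for n in range(10):
    assert f_inverse(f(n)) == n
print([(n, f(n)) for n in range(5)])  # [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]

No finite set admits such a pairing with a proper subset of itself, which is
why the argument applies to an infinite mind but not to a finite one.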
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> "If our brains were simple enough for us to understand, we would be
>>>>>>>>>>> simple enough that we could not."
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Well, that just sounds defeatist to me. It makes a nice little
>>>>>>>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence
>>>>>>>>>>> that it's true? Or any logical argument for it?
>>>>>>>>>>> There are also nuances. For example, different interpretations
>>>>>>>>>>> of "to understand".
>>>>>>>>>>>
>>>>>>>>>>> Maybe you are right, given "understand completely" (whatever
>>>>>>>>>>> that actually means). But definitely not, given "understand
>>>>>>>>>>> enough to be useful/worth the attempt".
>>>>>>>>>>>
>>>>>>>>>>> We have, after all, discovered a lot about how brains work
>>>>>>>>>>> already. Maybe not a lot in comparison to all there is to be discovered,
>>>>>>>>>>> but more than enough to be useful, and I doubt if we have reached some sort
>>>>>>>>>>> of limit on what we are capable of discovering and understanding.
>>>>>>>>>>>
>>>>>>>>>>> And there's always AI assistance with this kind of research,
>>>>>>>>>>> which greatly extends our reach, and adds more variations of "to
>>>>>>>>>>> understand".
>>>>>>>>>>>
>>>>>>>>>>> On the whole, I think the statement is harmful, in that it tends
>>>>>>>>>>> to discourage even trying.
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Ben
>>>>>>>>>>>