[ExI] 1DIQ: an IQ metaphor to explain superintelligence
John Clark
johnkclark at gmail.com
Fri Oct 31 13:37:41 UTC 2025

On Fri, Oct 31, 2025 at 8:30 AM Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

> There is predicting the means and there is predicting the ends. I think
> we can predict the ends, that is, the goals, of a superintelligence. It
> may even be possible to predict (at a high level) the morality of an AI.
> For example, if this argument is valid, then all sufficiently intelligent
> and rational agents reach the same morality.

The same morality? If we look at the evidence from history, we would have
to conclude that your argument must be invalid, unless a "rational agent"
has never appeared on this planet.

> See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE

If "real desires" require perfect knowledge, then "real desires" do not
exist and the concept is not a useful one.

John K Clark
>
> On Fri, Oct 31, 2025, 8:04 AM John Clark via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Oct 30, 2025 at 8:40 PM William Flynn Wallace via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Jason, are you saying that only a future AI with adequate memory will
>>> ever understand our minds?
>>>
>>
>> I don't know about Jason, but I would say that. And I'm saying that a
>> superintelligent AI will never fully understand its own mind, because
>> even though it may understand ours, however big it gets it will still be
>> finite. And only with an infinite set can a proper subset be put into a
>> one-to-one correspondence with the entire set.
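>>
>> A minimal sketch of that set-theoretic point (illustrative Python of my
>> own, not anything from the thread): the map n -> 2n puts the natural
>> numbers into one-to-one correspondence with the even numbers, a proper
>> subset of themselves, which is exactly what no finite set can do.
>>
>> # Dedekind's characterization of infinity, sketched: f(n) = 2n is a
>> # one-to-one map from the naturals onto the evens, a proper subset.
>> f = lambda n: 2 * n
>> sample = range(10)
>> evens = [f(n) for n in sample]          # [0, 2, 4, ..., 18]
>> assert len(set(evens)) == len(sample)   # one-to-one on the sample
>> assert all(e % 2 == 0 for e in evens)   # lands inside the proper subset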
>>
>>> At that point, humans are superfluous, not needed, better off extinct.
>>
>> Better off for whom? Not better off for us, certainly; maybe better off
>> for the AI.
>>
>>> Or the AIs will keep us around as interesting pets.
>>
>> My hope is that the superintelligence will think we're cute pets, or
>> will feel some sort of sense of duty, like the obligation we feel in
>> taking care of an aged parent who has Alzheimer's disease. But whether a
>> superintelligent AI will feel either of those emotions strongly enough
>> to keep us around, I don't know. I can't predict with much specificity
>> what even a fellow human being no smarter than I am will do, and it is
>> vastly more difficult to predict the actions of a superintelligence,
>> even generally.
>>
>
> Jason
>
>>> On Thu, Oct 30, 2025 at 5:12 PM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> I have read several times in these chats the assumption that one
>>>>> cannot understand something as complicated as oneself.
>>>>>
>>>>> Why not?  It sounds reasonable, but what's the basis for it?   bill w
>>>>>
>>>>
>>>> I believe it may follow from information theory.
>>>>
>>>> Consider: if understanding(X) requires holding some additional
>>>> higher-level set of relations and interrelations beyond the mere
>>>> specification of what X is, then the information contained within
>>>> understanding(X) will always be greater than the information contained in X.
>>>>
>>>> Now extend this to the brain. If the brain's information content is Y,
>>>> then understanding(Y) requires a brain with a greater information
>>>> storage capacity than Y.
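>>>>
>>>> As a toy illustration of that inequality (my own sketch, with made-up
>>>> content): any representation that holds X's full specification plus
>>>> even one additional relation about X is strictly larger than X alone.
>>>>
>>>> # Toy sketch: understanding(X) = specification of X + relations about
>>>> # X, so its size strictly exceeds len(X) once any relation is held.
>>>> x = b"\x9a\x5b\x21\x07"              # the bare specification of X
>>>> relations = [b"byte 0 > byte 1"]     # one higher-level fact about X
>>>> understanding_size = len(x) + sum(len(r) for r in relations)
>>>> assert understanding_size > len(x)   # strictly more information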
>>>>
>>>> Or another way to think about it: how many neurons does it take to
>>>> memorize all the important facts about a single neuron's connections
>>>> within the brain? If it takes N neurons to store that memory, then
>>>> just memorizing a brain state will require a brain with N times as
>>>> many neurons as the brain being memorized.
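>>>>
>>>> A back-of-the-envelope version of that arithmetic (the numbers below
>>>> are assumptions for illustration, not measured values):
>>>>
>>>> # Hypothetical sketch: if N neurons are needed to memorize one
>>>> # neuron's connection facts, a brain of B neurons needs about N * B
>>>> # neurons -- N times the size of the brain being memorized.
>>>> N = 1_000           # assumed neurons per memorized neuron's connections
>>>> B = 86_000_000_000  # rough neuron count of a human brain
>>>> print(N * B)        # neurons needed just to store the memorized state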
>>>>
>>>> Jason
>>>>
>>>>
>>>>
>>>>> On Thu, Oct 30, 2025 at 2:22 PM John Clark via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>> On Tue, Oct 28, 2025 at 4:16 PM Ben Zaiboc via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>> There are also nuances. For example, different interpretations of
>>>>>>> "to understand".
>>>>>>
>>>>>> Exactly. We can have a general sort of understanding of how our
>>>>>> brain works, but to have a perfect understanding, a part of our
>>>>>> brain would have to hold a sort of internal map of the entire brain,
>>>>>> and for it to be perfect there would have to be a one-to-one
>>>>>> correspondence between the map and the territory. That is impossible
>>>>>> for something finite, like the number of neurons in the human brain.
>>>>>> However, it would be possible for a proper subset of something
>>>>>> infinite to be put into a one-to-one correspondence with the entire
>>>>>> set; then you could have such a perfect map, and then you'd always
>>>>>> know what you were going to do long before you did it. And you
>>>>>> wouldn't feel free. So by the only definition of free will that is
>>>>>> not gibberish (not knowing what you're going to do next until you
>>>>>> actually do it), we reach the interesting conclusion that a human
>>>>>> being does have free will, but God does not.
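>>>>>>
>>>>>> To make the finite half of that claim concrete, here is a small
>>>>>> illustrative check of my own (not from the original post): a
>>>>>> brute-force search confirms that no one-to-one map can send a finite
>>>>>> set onto a proper subset of itself.
>>>>>>
>>>>>> from itertools import permutations
>>>>>>
>>>>>> # Pigeonhole, brute-forced: there is no injection from {0,1,2,3}
>>>>>> # into a 3-element proper subset, so a finite brain cannot hold a
>>>>>> # one-to-one map of itself inside a part of itself.
>>>>>> whole = [0, 1, 2, 3]
>>>>>> subset = [0, 1, 2]                 # one element removed
>>>>>> injections = list(permutations(subset, len(whole)))
>>>>>> assert injections == []            # 3 items taken 4 at a time: none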
>>>>>>
>>>>>> John K Clark
>>>>>>
>>>>>>> "If our brains were simple enough for us to understand, we would be
>>>>>>> simple enough that we could not."
>>>>>>>
>>>>>>>
>>>>>>> Well, that just sounds defeatist to me. It makes a nice little
>>>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence
>>>>>>> that it's true? Or any logical argument for it?
>>>>>>>
>>>>>>> There are also nuances. For example, different interpretations of
>>>>>>> "to understand".
>>>>>>>
>>>>>>> Maybe you are right, given "understand completely" (whatever that
>>>>>>> actually means). Maybe definitely not, given "understand enough to
>>>>>>> be useful/worth the attempt".
>>>>>>>
>>>>>>> We have, after all, discovered a lot about how brains work already.
>>>>>>> Maybe not a lot in comparison to all there is to be discovered, but
>>>>>>> more than enough to be useful, and I doubt that we have reached some
>>>>>>> sort of limit on what we are capable of discovering and
>>>>>>> understanding.
>>>>>>>
>>>>>>> And there's always AI assistance with this kind of research, which
>>>>>>> greatly extends our reach and adds more variations of "to
>>>>>>> understand".
>>>>>>>
>>>>>>> On the whole, I think the statement is harmful, in that it tends to
>>>>>>> discourage even trying.
>>>>>>>
>>>>>>> --
>>>>>>> Ben
>>>>>>>
>>>>>>>