[ExI] 1DIQ: an IQ metaphor to explain superintelligence
    Ben Zaiboc 
    ben at zaiboc.net
       
    Fri Oct 31 22:16:27 UTC 2025
    
    
  
On 31/10/2025 21:34, Jason Resch wrote:
> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat 
> <extropy-chat at lists.extropy.org> wrote:
>
>     On 31/10/2025 12:28, John K Clark wrote:
>>     We can have a general sort of understanding of how our brain works, but to have a perfect understanding, a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one-to-one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However, it would be possible for a proper subset of something infinite to have a one-to-one correspondence with the entire set; then you could have such a perfect map with a one-to-one correspondence ...
>
>     You've completely lost me there, but I have two observations:
>     There's no such thing as 'perfect understanding' except as a
>     nebulous theoretical concept, and I don't think a one-to-one
>     correspondence would be enough to understand something, or even be
>     a relevant concept. We use large parts of our brains to process
>     information from small parts of the world. You need a lot more
>     than a single neuron to figure out what's going on in a single neuron.
>
>     Oh, three observations. We don't process data instantaneously. The
>     same parts of the brain can be used to process information about
>     something repeatedly over time, using feedback loops etc.
>
>
> Computers and algorithms are constrained by two resources: space (i.e. 
> memory) and time (i.e. CPU cycles). While some algorithms allow 
> time/space trade-offs to be made in certain circumstances, in general 
> there is some shortest description of the brain (in terms of bits) for 
> which no shorter representation is possible, regardless of how much 
> additional computation is thrown at it.
>
> So while the same brain may reuse the same neurons for many 
> computations over time, that addresses only the time component of 
> simulating a brain. There is still the matter of space.
Ah, ok. I was talking about understanding the brain, not simulating it. 
Modelling something is not the same as understanding it. Yes, they help 
each other, but they aren't the same thing.
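(An aside on Jason's "shortest description" point: that is essentially 
Kolmogorov complexity, which is uncomputable, but an ordinary compressor 
gives the flavour of it. A toy Python sketch -- the data and the use of 
zlib are purely my own illustration, and compressed size only ever gives 
an upper bound on description length:

    # Compressed size as a crude, computable stand-in for "shortest
    # description length". Regular data compresses well; random data
    # essentially doesn't.
    import os
    import zlib

    structured = b"abc" * 100_000       # highly regular: 300,000 bytes
    random_ish = os.urandom(300_000)    # incompressible, almost surely

    print(len(zlib.compress(structured)))   # a few hundred bytes
    print(len(zlib.compress(random_ish)))   # slightly over 300,000

The claim about the brain is that, like the random string, there is some 
representation of it below which no amount of extra computation can take 
you.)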
>
> The analogy here is that a computer with 1 MB of RAM can't emulate a 
> computer with 1 GB of RAM, even if it's given 1000X the time to do so. 
> In fact, no amount of additional time will permit the memory-deficient 
> computer to emulate the computer with 1 GB of memory, for the simple 
> reason that it will run out of variables to represent all the possible 
> values in the memory addresses of the larger machine.
I'm not sure that's true. Are you assuming no swap disk, or other 
similar non-RAM storage? If so, then I'm sure you're right, but that's 
a pretty artificial restriction.
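Within that restriction, though, the counting argument is airtight. A 
toy check (my own sketch, not Jason's): a machine whose entire state is 
S bits can only ever be in 2^S distinct configurations, so it is enough 
to compare the exponents rather than the unwritably large counts 
themselves:

    # Total state of each machine, in bits. Comparing exponents avoids
    # materialising the astronomically large state counts.
    small_bits = 2**20 * 8    # 1 MB of RAM
    big_bits   = 2**30 * 8    # 1 GB of RAM

    # The small machine has 2**small_bits configurations, the big one
    # 2**big_bits. By pigeonhole, the small machine cannot give every
    # state of the big one its own internal state, however long it runs.
    print(small_bits, big_bits)    # 8388608 vs 8589934592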
The analogue of that swap disk, for a human, would be a notepad maybe, 
or a database, or a bunch of other humans, an AI, etc.
So we're back to: a single human brain /on its own/ can't understand a 
human brain in any great detail. Of course. But that's a pretty 
artificial restriction.
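(A minimal sketch of that escape hatch in code -- essentially what a 
swap file does. The class name and sizes are my own, purely 
illustrative:

    import tempfile

    class SwapBackedMemory:
        """Emulate a large byte-addressable memory with a small in-RAM
        cache plus a disk file, the way a page file extends RAM."""

        def __init__(self, size, cache_limit=1024):
            self.file = tempfile.TemporaryFile()
            self.file.truncate(size)     # the "big" memory, on disk
            self.cache = {}              # small in-RAM working set
            self.cache_limit = cache_limit

        def write(self, addr, value):
            self.cache[addr] = value
            if len(self.cache) > self.cache_limit:
                self.flush()

        def read(self, addr):
            if addr in self.cache:
                return self.cache[addr]
            self.file.seek(addr)
            return self.file.read(1)[0]

        def flush(self):
            # Spill the working set to disk, freeing RAM.
            for addr, value in self.cache.items():
                self.file.seek(addr)
                self.file.write(bytes([value]))
            self.cache.clear()

    # RAM use stays bounded by cache_limit entries, yet the emulator
    # faithfully models a 1 GB address space:
    mem = SwapBackedMemory(size=2**30)
    mem.write(2**30 - 1, 42)
    mem.flush()
    print(mem.read(2**30 - 1))    # 42

The trade is exactly time for space: every spilled access costs a 
seek.)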
-- 
Ben