[ExI] 1DIQ: an IQ metaphor to explain superintelligence
    William Flynn Wallace 
    foozler83 at gmail.com
       
    Sun Nov  2 16:51:05 UTC 2025
    
    
  
Jason, keep in mind that 'same stimulus, same response' doesn't work.
There are scores of reasons why not, including simple habituation and
sensitization (response waning or increasing).  How do you map that? Very
general tendencies, perhaps, can be mapped, but the closer you get to
predicting specific responses, the higher the error rate will be.  And how
do you count responses that are the reverse of what you predict?
So we will never map the brain, because its topography, if you will allow
the metaphor, is constantly changing.    bill w
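
A toy sketch of the habituation point (the decay constant and response
values are invented for illustration): the same stimulus evokes a weaker
response on each repetition, so no fixed stimulus-to-response table can
capture the mapping.

    # Toy habituation model: an identical stimulus evokes a waning
    # response each time it repeats. The 0.7 decay rate is an
    # arbitrary illustrative constant, not a measured value.
    def habituating_response(history, base=1.0, decay=0.7):
        repeats = history.count(history[-1]) - 1  # prior presentations
        return base * decay ** repeats

    history = []
    for trial in range(4):
        history.append("tone")
        print(trial, round(habituating_response(history), 3))
    # prints 1.0, 0.7, 0.49, 0.343: same stimulus, four responses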
On Sat, Nov 1, 2025 at 9:52 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Given that all of our actions originate in the unconscious mind, there is
>> no way a conscious mind can understand where its ideas and actions come
>> from, much less understand them.
>>
>
> Great point!
>
>
>> The conscious mind may think it is in charge, but it is just an observer
>> (which nevertheless can understand that a mistake has been made and the
>> idea or action needs redoing).
>>
>> You want to understand our minds?  Make the workings of the unconscious
>> conscious - and that's just a start.  Why did the impulse go to point B
>> when it left point A rather than to point C? And then trace all the points
>> in between entering the unconscious and resulting in some idea or action.
>> And explain each one.
>>
>
> I have doubts that such a thing is possible from the perspective of the
> mind in question. Can any brain ever feel and know what each of its neurons
> is doing? Can those neurons, in turn, feel and know what every one of
> their constituent atoms is doing?
>
> Given Turing universality, it's provable that computer software can't know
> about its underlying hardware. If our minds are a kind of software which
> can be simulated by a computer, then this same implication would apply to
> us. There would be a layer of abstraction over one's underlying
> implementation which higher levels cannot penetrate.
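
A minimal sketch of that abstraction barrier (illustrative only; the
function names are invented for the example, and this is not the proof):
a program that sees only its inputs and outputs behaves identically
whether it runs directly or one interpretive layer down, so nothing it
computes can reveal which substrate ran it.

    # "Software" as a pure function of its input: its observable
    # behavior is fixed by the input alone.
    def program(x):
        return x * x + 1

    def run_native(x):
        return program(x)

    def run_emulated(x):
        # An extra interpretive layer standing in for different
        # "hardware"; the program has no primitive for querying it.
        apply_step = lambda f, arg: f(arg)
        return apply_step(program, x)

    # Identical outputs on every input the program can inspect, so
    # no self-test distinguishes native from emulated execution.
    assert all(run_native(x) == run_emulated(x) for x in range(100))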
>
>
>> You can't even get started until you can truly access the unconscious.
>> Give Freud credit: he tried to do this.
>>
>
> He deserves credit for the attempt, but I think there are limits to a
> mind's ability to introspect.
>
> "Our thoughts seem to run about in their own space, creating new thoughts
> and modifying old ones, and we never notice any neurons helping us out! But
> that is to be expected. We can’t. […]
> We should remember that physical law is what
> makes it all happen–way, way down in neural nooks and crannies which are
> too remote for us to reach with our high-level introspective probes."
>
> — Douglas Hofstadter in “Gödel, Escher, Bach” (1979)
>
>
> Jason
>
>
>>
>>
>> On Fri, Oct 31, 2025 at 6:35 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On 31/10/2025 21:34, Jason Resch wrote:
>>>>
>>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> On 31/10/2025 12:28, John K Clark wrote:
>>>>>
>>>>> We can have a general sort of understanding of how our brain works, but
>>>>> to have a perfect understanding, a part of our brain would have to hold
>>>>> a sort of internal map of the entire brain, and for the map to be
>>>>> perfect there would have to be a one-to-one correspondence between the
>>>>> map and the territory. That is impossible for something finite, like
>>>>> the number of neurons in the human brain. However, a proper subset of
>>>>> something infinite could have a one-to-one correspondence with the
>>>>> entire set; then you could have such a perfect map with a one-to-one
>>>>> correspondence ...
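
The infinite-set point is the standard Dedekind observation: an infinite
set can be put in one-to-one correspondence with a proper subset of
itself. A quick sketch with the naturals and the even naturals:

    # f(n) = 2n pairs every natural number with exactly one even
    # number; the evens are a proper subset of the naturals, yet
    # the correspondence covers the whole set in both directions.
    f = lambda n: 2 * n       # naturals -> evens
    g = lambda m: m // 2      # evens -> naturals (inverse of f)

    for n in range(1000):
        assert g(f(n)) == n   # f is invertible, hence one-to-one
        assert f(n) % 2 == 0  # and its image lies in the subset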
>>>>>
>>>>>
>>>>> You've completely lost me there, but I have two observations: There's
>>>>> no such thing as 'perfect understanding' except as a nebulous theoretical
>>>>> concept, and I don't think a one-to-one correspondence would be enough to
>>>>> understand something, or even be a relevant concept. We use large parts of
>>>>> our brains to process information from small parts of the world. You need a
>>>>> lot more than a single neuron to figure out what's going on in a single
>>>>> neuron.
>>>>>
>>>>> Oh, three observations. We don't process data instantaneously. The
>>>>> same parts of the brain can be used to process information about something
>>>>> repeatedly over time, using feedback loops, etc.
>>>>>
>>>>
>>>> Computers and algorithms are constrained by two resources: space (i.e.,
>>>> memory) and time (i.e., CPU cycles). While some algorithms allow
>>>> time/space trade-offs in certain circumstances, in general there
>>>> is some shortest description of the brain (in terms of bits) for which no
>>>> shorter representation is possible (regardless of how much additional
>>>> computation is thrown at it).
>>>>
>>>> So while the same brain may compute many times with the same neurons,
>>>> this addresses only the time component of simulating a brain. There is
>>>> still the matter of space.
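
The counting argument behind "some shortest description exists" (the
length 16 below is an arbitrary illustrative size): there are more n-bit
strings than there are descriptions shorter than n bits, so some strings
admit no shorter representation no matter how much computation is spent.

    # Pigeonhole count: 2**n bit strings of length n, but only
    # 2**n - 1 binary descriptions of length < n. At least one
    # n-bit string is therefore incompressible.
    n = 16
    strings_of_length_n = 2 ** n
    shorter_descriptions = sum(2 ** k for k in range(n))  # 2**n - 1
    assert shorter_descriptions < strings_of_length_n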
>>>>
>>>>
>>>> Ah, ok. I was talking about understanding the brain, not simulating it.
>>>> Modelling something is not the same as understanding it. Yes, they help
>>>> each other, but they aren't the same thing.
>>>>
>>>
>>> I think understanding a thing is equivalent to being able to form an
>>> accurate mental model of it, with greater levels of understanding
>>> corresponding to more accurate models.
>>>
>>> What do you mean by the word understand?
>>>
>>>
>>>
>>>>
>>>> The analogy here is that a computer with 1 MB of RAM can't emulate a
>>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In
>>>> fact, no amount of additional time will permit the memory-deficient
>>>> computer to emulate the computer with 1 GB of memory, for the simple
>>>> reason that it will run out of variables to represent all the possible
>>>> values in the memory addresses of the computer with the greater memory.
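
The memory argument is a pigeonhole count over machine states. A sketch
comparing exponents, since the state counts themselves are far too large
to materialize: the emulator needs a distinct internal state for every
reachable state of the machine it emulates.

    # A machine with B bytes of RAM has 256**B distinct memory
    # states; compare log2 of the state counts instead of the
    # astronomically large counts themselves.
    MB, GB = 10**6, 10**9
    log2_states_1mb = 8 * MB   # log2(256**MB)
    log2_states_1gb = 8 * GB   # log2(256**GB)
    assert log2_states_1mb < log2_states_1gb
    # No injection from the 1 GB machine's states into the 1 MB
    # machine's states exists, so extra time alone can never close
    # the gap.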
>>>>
>>>>
>>>> I'm not sure that this is true. Are you assuming no swap disk, or other
>>>> similar non-RAM storage?
>>>>
>>>
>>> Swap disks are a means to extend available RAM.
>>>
>>>
>>>> Because then I'm sure you're right, but that's a pretty artificial
>>>> restriction.
>>>> The analogy there would be a human with a notepad maybe, or a database,
>>>> or a bunch of other humans, an AI, etc.
>>>>
>>>> So we're back to: A single human brain /on its own/ can't understand a
>>>> human brain in any great detail. Of course.
>>>>
>>>
>>> I think that was the original question: can any mind ever fully
>>> understand its own operation?
>>>
>>> Jason
>>>
>>>> But that's a pretty artificial restriction.
>>>>
>>>> --
>>>> Ben
>>>>
>>>>