[ExI] Ben Goertzel on Large Language Models

Giovanni Santostasi gsantostasi at gmail.com
Mon May 1 01:35:05 UTC 2023


Brent,
Please watch this video. It is about memories in the brain, but similar
ideas apply to redness or anything else that happens in the brain. It shows
how patterns in time are what the stuff of the brain is made of.
It is a very well-done video, and you can learn a lot of neuroscience
from watching it. This should resolve a lot of the misunderstandings we are
having.

https://www.youtube.com/watch?v=piF6D6CQxUw

On Sun, Apr 30, 2023 at 6:07 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> *You seem to be saying that because grey light can seem to be red, the
> seeming redness is not irreducible?*
> *Mechanically, what do you think is a seeming redness quality?*
> It is not irreducible, because two different inputs give you the same
> output. Clearly some process takes two different inputs and gives the
> same result, so there is no one-to-one correspondence between the existence
> of an external physical phenomenon and the perception. This indicates to me
> that some complex processing is happening: in normal circumstances the
> presence of light in a given frequency range produces an output, but there
> are other circumstances, having nothing to do with the presence of light in
> that range, or even involving a completely different frequency range (grey
> is in fact all the frequencies at once), that produce the same effect. This
> shows that whatever complex mechanism processes the incoming information
> arrived at a faulty conclusion: basically garbage in, garbage out.
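>
> A toy sketch of the point in Python (the stimulus names are invented for
> illustration; this is not a claim about actual neural coding):
>
>     def perceived_color(stimulus):
>         # Many-to-one: physically different inputs, same percept.
>         if stimulus == "light_around_700nm":
>             return "red"
>         if stimulus == "grey_patch_in_red_inducing_context":
>             return "red"  # same output, completely different cause
>         return "something else"
>
>     # The mapping cannot be inverted, so the percept is a processed
>     # verdict about the world, not a direct readout of the light.
>     assert (perceived_color("light_around_700nm")
>             == perceived_color("grey_patch_in_red_inducing_context"))
>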
> I know it sounds strange to you, but "mechanically" redness is a set of
> electrical pulses in our brain that follow a certain repeated pattern.
> It is the same for memories, and the same for love or any other inner
> experience we have. These are simply patterns of information that happen
> to know themselves via feedback loops. This experience of awareness is
> nothing other than these self-referential loops; it is not a substance,
> and it is not something you can point to except as a process and a
> sequence of events.
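>
> The loosest possible sketch of such a self-referential loop, in Python
> (an analogy only, not a model of awareness): the next state of the
> process depends on a record the process keeps of its own past states.
>
>     state, trace = 1, []
>     for step in range(5):
>         trace.append(state)                 # the process records itself
>         state = (state + sum(trace)) % 11   # ...and reads itself to update
>         print(step, state, trace)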
>
> On Sun, Apr 30, 2023 at 5:43 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Sun, Apr 30, 2023, 3:17 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> Hi Jason,
>>> I've gone over the difference between composite and elemental qualities
>>> before, but evidently you missed it.
>>> Or is that different?
>>>
>>
>> If one person finds the color beautiful and another finds it ugly, is it
>> the same color? Can someone who likes the taste of broccoli be tasting the
>> same thing as someone who finds the taste disgusting? Or is the liking or
>> disliking an inseparable part of the experience? An answer eludes me.
>>
>>
>>> We don't experience redness standalone; it is always computationally bound
>>> with lots of other information, information like how sweet a red strawberry
>>> will taste, and other memories.
>>> Of course, one person's memories that get bound with redness are going
>>> to be different from another person's memories bound to redness, but the
>>> elemental redness itself could be the same.
>>>
>>
>> Perhaps, yes.
>>
>>> If this were not the case, we could not reproduce a TV signal with a
>>> fixed level of pixels with a fixed set of colors for each pixel, right?
>>>
>>
>> Our capacity to make a TV that can display any color a normally sighted
>> person can recognize requires only that normally sighted humans share the
>> same set of photosensitive chemicals in their retinas. How the signal from
>> the retina is interpreted, however, depends on the nature of the mind in
>> question.
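>>
>> A sketch of why that is enough (toy numbers of my own): a display only has
>> to match the three cone responses, not the full physical spectrum, because
>> spectra that excite the cones identically (metamers) are indistinguishable.
>>
>>     # Toy cone sensitivities sampled at four wavelengths (made-up values).
>>     CONES = {"L": (1, 0, 0, 1), "M": (0, 1, 0, 1), "S": (0, 0, 1, 1)}
>>
>>     def cone_response(spectrum):
>>         return {name: sum(s * p for s, p in zip(sens, spectrum))
>>                 for name, sens in CONES.items()}
>>
>>     broadband = (1, 1, 1, 1)   # power at all four sample wavelengths
>>     tv_mix    = (2, 2, 2, 0)   # different spectrum, e.g. display primaries
>>
>>     # Different physical spectra, identical cone responses: metamers.
>>     assert cone_response(broadband) == cone_response(tv_mix)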
>>
>>
>>> I guess we are each making different predictions. It's up to the
>>> experimentalist to prove which one is right.  I guess I'm predicting there
>>> is an elemental quality level out of which all possible composite visual
>>> experiences can be composed.
>>>
>>
>> How do you think they get composed? You say "computational binding"; can
>> I take this to mean you think the structure of the computational relations
>> among the elemental parts is what determines how a set of elemental
>> experiences are composed into a larger unified experience?
>>
>>> You are predicting otherwise.
>>>
>>
>> Let's just say I remain unconvinced of your hypothesis that the
>> fundamental qualities are physical in nature. (Not that we have a great
>> definition of what we mean when we use the word physical.) I have asked you
>> what you mean by physical but I am not sure you have answered yet. I think
>> it's quite likely fundamental qualities are informational or relational,
>> rather than physical, but then, I think physics is itself perhaps also
>> entirely informational or relational -- demonstrating the importance of
>> getting definitions right and agreeing on them first. Otherwise we will
>> talk past each other without hope of ever converging on truth.
>>
>>> If science verifies my hypothesis to be true, effing of the ineffable
>>> will be possible.
>>>
>>
>> Can you address my concern from my previous email: that is, even if
>> qualities are physical, how can we ever confirm that in an intersubjective
>> way? I showed that even with self-manipulation of brain states and neural
>> ponytails, it's far from clear this could provide any knowledge one could
>> take with them.
>>
>>
>>
>>> Otherwise it's not approachable via science, and we will never know?
>>>
>>
>> Science and math are filled with provably unprovable situations: halting
>> problem, proving mathematical consistency, proving two programs compute the
>> same thing, etc.
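>>
>> For instance, the halting problem's diagonal argument fits in a few lines
>> of Python. Here halts() is the assumed, purely hypothetical decider; the
>> sketch shows why no such function can exist:
>>
>>     def paradox(program):
>>         # halts(p, x) is assumed to return True iff p(x) would halt.
>>         if halts(program, program):
>>             while True:   # told "it halts"? then loop forever
>>                 pass
>>         return            # told "it loops"? then halt immediately
>>
>>     # paradox(paradox) halts exactly when halts(paradox, paradox) says
>>     # it doesn't, so no total, correct halts() can exist.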
>>
>> Jason
>>
>>
>>
>>>
>>>
>>>
>>> On Sun, Apr 30, 2023, 7:49 AM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Apr 30, 2023, 9:23 AM Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Hi Jason,
>>>>> OK, thanks.  That helps!
>>>>>
>>>>
>>>> So happy to hear that! Thank you.
>>>>
>>>>
>>>>> (Can you hear my brain working to reorganize my understanding
>>>>> of functionalism? ;)
>>>>>
>>>>
>>>> :)
>>>>
>>>>
>>>>> You also said: "it is hard to say, and impossible to prove."
>>>>>
>>>>> But this is as simple as plugging whatever it is into a computational binding
>>>>> system
>>>>> <https://canonizer.com/topic/827-Name-for-Binding-Problem-Sltn/2-Computational-Binding>
>>>>> and finding out, isn't it?
>>>>>
>>>>
>>>> Let's say we had advanced micro surgery technology that could rewire,
>>>> permute, or tweak our brains however we wanted. Then we could perform
>>>> direct qualia experiments on ourselves, and individually we could notice
>>>> how different tweaks to one's brain change one's experience.
>>>>
>>>> But note that even with this, we're still stuck -- any knowledge one
>>>> gains about their qualia remains subjective and forever linked to a
>>>> particular brain state.
>>>>
>>>> If I perceive a very beautiful color that I want to share with you, how
>>>> similar does your brain have to become to mine for you to perceive it? Just
>>>> your visual cortex? Your visual cortex and emotion centers? Your visual
>>>> cortex, emotional centers and language center? Your visual cortex,
>>>> emotional centers,  language centers and memories?
>>>>
>>>> It's not clear to me that you could have an identical color experience
>>>> without radical changes throughout your brain. And how could we know when
>>>> our experiences are identical when our brains are not? Even when brains are
>>>> identical, many argue it still requires a leap of faith to presume they
>>>> have identical qualia (e.g. proponents of the inverted qualia experiments).
>>>>
>>>> You propose we can bridge this gap by linking qualia with certain
>>>> physical properties, but I don't see that overcoming this issue. Even with
>>>> a neural ponytail (from Avatar), or a thalamic bridge like the Hogan twins,
>>>> there's no guarantee that the two minds can take their knowledge of a
>>>> combined experience with them after the minds disentangle. That's no
>>>> different from you slowly modifying your mind to be like mine and then
>>>> slowly merging back; the return trip erases whatever context you had as me,
>>>> and you're back in the dark as far as knowing or remembering what my color
>>>> experience was like. The same applies to two brains merging into a combined
>>>> state and then differentiating again.
>>>>
>>>> I apologize if this implies any kind of futility in understanding and
>>>> sharing knowledge of qualia, but if you see a way around it I am all ears.
>>>>
>>>> Jason
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Apr 30, 2023 at 7:13 AM Jason Resch via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Apr 30, 2023, 8:29 AM Brent Allsop via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Apr 29, 2023 at 5:54 AM Jason Resch via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, Apr 29, 2023, 2:36 AM Gordon Swobe <gordon.swobe at gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Fri, Apr 28, 2023 at 3:46 PM Jason Resch via extropy-chat <
>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Apr 28, 2023, 12:33 AM Gordon Swobe via extropy-chat <
>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> Quite by accident, I happened upon this quote of Erwin
>>>>>>>>>>> Schrodinger this evening.
>>>>>>>>>>>
>>>>>>>>>>> "Consciousness cannot be explained in physical terms. Because
>>>>>>>>>>> consciousness is absolutely fundamental. It cannot be explained in any
>>>>>>>>>>> other terms."
>>>>>>>>>>>
>>>>>>>>>>> That is actually what I also hold to be true about consciousness,
>>>>>>>>>>> though not necessarily for reasons related to quantum mechanics or eastern
>>>>>>>>>>> philosophy. (Schrodinger is said to have been influenced by
>>>>>>>>>>> eastern philosophy).
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Me too. It's strange, then, that we disagree regarding AI.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Yes, that is interesting. To be clear, I agree with Schrodinger
>>>>>>>>> that consciousness cannot be explained in physical terms, but this is not
>>>>>>>>> quite the same as saying it is immaterial or non-physical. I mean, and I
>>>>>>>>> think he meant, that it cannot be explained in the third-person objective
>>>>>>>>> language of physics.
>>>>>>>>>
>>>>>>>>
>>>>>>>> There is a sense in which I could agree with this. I think physics
>>>>>>>> is the wrong language for describing states of consciousness, which is a
>>>>>>>> higher order phenomenon. I would also say, as I have explained elsewhere,
>>>>>>>> that in a certain sense consciousness is also more fundamental than the
>>>>>>>> apparent physical reality.
>>>>>>>>
>>>>>>>> I take "absolutely fundamental" to mean irreducible.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Right there are several possible interpretations of what he means
>>>>>>>> by fundamental.
>>>>>>>>
>>>>>>>> I agree that consciousness is irreducible in the sense that looking at
>>>>>>>> ever smaller pieces of the brain does not yield better understanding of the
>>>>>>>> mind. I would say that consciousness is constructive, not reductive. You
>>>>>>>> need to consider all the parts together, and how they build up to a whole,
>>>>>>>> rather than how each part operates in isolation.
>>>>>>>>
>>>>>>>> Much of science has been successful precisely because it has
>>>>>>>> followed the path of reductionism, but I don't think states of
>>>>>>>> consciousness can be entirely understood by reductive means. Likewise the
>>>>>>>> same is true for any complex enough system that manifests emergent
>>>>>>>> behavior, like a complex computer program, or an ecosystem. When there are
>>>>>>>> many unique parts interacting in complex ways with each other, the system
>>>>>>>> as a whole cannot be understood by a simple analysis of each part. Any true
>>>>>>>> understanding of that system must include all the parts working together:
>>>>>>>> the whole.
>>>>>>>>
>>>>>>>>
>>>>>>>>   I take "It cannot be explained in other terms" to mean that the
>>>>>>>>> experience itself is the only way to understand it.
>>>>>>>>>
>>>>>>>>
>>>>>>>> I agree with what you say above.
>>>>>>>>
>>>>>>>> This is also why I try to stay out of the endless discussions about
>>>>>>>>> what are qualia.
>>>>>>>>>
>>>>>>>>> I cannot explain in the language of physics, or in the language of
>>>>>>>>> computation or of functionalism generally, why I see the red quale when I
>>>>>>>>> look at an apple. I just do. It is fundamental and irreducible.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Note that functionalism doesn't aim to make qualia communicable. It
>>>>>>>> is just the hypothesis that if you could reproduce the functional
>>>>>>>> organization of a conscious system, you would reproduce the same
>>>>>>>> consciousness as that first conscious system.
>>>>>>>>
>>>>>>>
>>>>>>> I don't understand why functionalists only ever seem to talk about
>>>>>>> "functional organization".
>>>>>>> All 4 of the systems in this image:
>>>>>>> https://i.imgur.com/N3zvIeS.jpg
>>>>>>> have the same "functional organization" as they all know the
>>>>>>> strawberry is red.
>>>>>>>
>>>>>>
>>>>>> You have to consider the organization at the right degree of detail.
>>>>>> They are not functionally identical as they are each processing information
>>>>>> in different ways, one is inverting the symbol after the retina, another
>>>>>> before, another is only geared to map inputs to text strings. These are
>>>>>> functional differences.
>>>>>>
>>>>>> If you ignore the level of detail (the functional substitution level)
>>>>>> and look only at the highest level of output, then you would end up
>>>>>> equating a dreaming brain with a rock: both output nothing, but one has a
>>>>>> rich inner experience.
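>>>>>>
>>>>>> A crude sketch of that difference in Python (stages and thresholds
>>>>>> invented for illustration, not the actual systems in the image):
>>>>>>
>>>>>>     def encode(wavelength_nm):
>>>>>>         return 1 if wavelength_nm > 600 else 0    # toy "retina"
>>>>>>
>>>>>>     def system_a(wavelength_nm):
>>>>>>         return "red" if encode(wavelength_nm) == 1 else "not red"
>>>>>>
>>>>>>     def system_b(wavelength_nm):
>>>>>>         inverted = 1 - encode(wavelength_nm)      # inverts after the "retina"
>>>>>>         return "red" if inverted == 0 else "not red"   # compensating step
>>>>>>
>>>>>>     # Identical at the coarsest grain...
>>>>>>     assert system_a(700) == system_b(700) == "red"
>>>>>>     # ...yet stage by stage they carry different internal signals.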
>>>>>>
>>>>>>
>>>>>>
>>>>>>> But the fact that they all have this same functionality misses
>>>>>>> the point of what redness is.
>>>>>>>
>>>>>>
>>>>>> It seems to me that the real issue is that perhaps you have been
>>>>>> misunderstanding what functionalism is this whole time. Yes, a person asked
>>>>>> what 2+3 is and a calculator asked what 2+3 is will both give 5, but they
>>>>>> are very different functions when analyzed at a finer grain. This is what I
>>>>>> have referred to as the "substitution level": for humans it may be the
>>>>>> molecular, protein, or neural level, or perhaps slightly above the neuronal
>>>>>> level; it is hard to say, and impossible to prove.
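>>>>>>
>>>>>> The 2+3 point in code (a toy contrast of my own): both functions below
>>>>>> are identical at the level of input and output, yet plainly different
>>>>>> functions at a finer grain.
>>>>>>
>>>>>>     def add_directly(a, b):
>>>>>>         return a + b                  # a single arithmetic step
>>>>>>
>>>>>>     def add_by_counting(a, b):
>>>>>>         total = a
>>>>>>         for _ in range(b):            # b successor steps, one at a time
>>>>>>             total += 1
>>>>>>         return total
>>>>>>
>>>>>>     assert add_directly(2, 3) == add_by_counting(2, 3) == 5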
>>>>>>
>>>>>> Note this is not some pet theory of mine; look at how Chalmers
>>>>>> defines his notion of functional invariance:
>>>>>>
>>>>>> "Specifically, I defend a principle of organizational invariance,
>>>>>> holding that experience is invariant across systems with the same
>>>>>> fine-grained functional organization. More precisely, the principle states
>>>>>> that given any system that has conscious experiences, then any system that
>>>>>> has the same functional organization at a fine enough grain will have
>>>>>> qualitatively identical conscious experiences. A full specification of a
>>>>>> system's fine-grained functional organization will fully determine any
>>>>>> conscious experiences that arise."
>>>>>>
>>>>>> Note his repeated (I see three) appeals to it being a necessarily
>>>>>> "fine-grained" level of functional organization. You can't stop at the top
>>>>>> layer of them all saying "I see red" and call it a day, nor say they are
>>>>>> functionally equivalent if you ignore what's going on "under the hood".
>>>>>>
>>>>>>
>>>>>> Why do functionalists never talk about redness,
>>>>>>>
>>>>>>
>>>>>>
>>>>>> They do talk about redness and colors all the time. Chalmers' fading
>>>>>> qualia thought experiment is entirely based on color qualia.
>>>>>>
>>>>>>
>>>>>>> but just "functional organization"?
>>>>>>>
>>>>>>
>>>>>> Because functional organization is the only thing that determines
>>>>>> behavior, and it is as far as we can test or analyze a system objectively.
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> It's a fairly modest idea as far as theories go, because you would
>>>>>>>> obtain identical behavior between the two systems. So if the first is David
>>>>>>>> Chalmers, his functional duplicate would say and do all the same things as
>>>>>>>> the original, including stating his love of certain qualia like deep
>>>>>>>> purples and greens, and writing books about the mysterious nature of
>>>>>>>> consciousness. Could such a thing be a zombie? This is where you and I part
>>>>>>>> ways.
>>>>>>>>
>>>>>>>
>>>>>>> To me, the R system in the above image is a zombie, as it can be
>>>>>>> functionally isomorphic to the other 3,
>>>>>>>
>>>>>>
>>>>>> It's not functionally isomorphic at a fine-grained level.
>>>>>>
>>>>>>
>>>>>> it can simulate the other 3,
>>>>>>>
>>>>>>
>>>>>> It's not simulating the other three, it just happens to have the same
>>>>>> output. To be simulating one of the other three, in my view, its circuits
>>>>>> would have to be functionally isomorphic to one of the other brains at
>>>>>> perhaps the neuronal or molecular level.
>>>>>>
>>>>>> Note there is no way to simulate all three at the necessary level of
>>>>>> detail at the same time in your picture, because they have different qualia.
>>>>>> That two different fine-grained versions have different qualia implies
>>>>>> that they are not functionally isomorphic at the necessary substitution
>>>>>> level (i.e. they're not the same at the fine-grained level on which the
>>>>>> qualia supervene).
>>>>>>
>>>>>>> but its knowledge isn't like anything. Do functionalists think
>>>>>>> of a zombie as something different?
>>>>>>>
>>>>>>
>>>>>> Different from what?
>>>>>>
>>>>>> Functionalists seem to be saying that a zombie like R isn't possible,
>>>>>>> and they seem to be saying that A and C are the same, because they both know
>>>>>>> the strawberry is red.  That is true, but that is missing the point.
>>>>>>> "Functional organization" isn't the point, the redness is the point.
>>>>>>>
>>>>>>
>>>>>> I think you may be missing some points regarding functionalism, and
>>>>>> implore you to read all of the dancing qualia thought experiment -- and
>>>>>> consider what the consequences would be *if we could* simulate the brain's
>>>>>> behavior using an artificial substrate.
>>>>>>
>>>>>> I know you disagree with this premise, but if you truly want to
>>>>>> understand the functionalist perspective, you must temporarily accept the
>>>>>> premise for the purposes of following the thought experiment and seeing
>>>>>> where it leads *if* digital emulation were possible.
>>>>>>
>>>>>>
>>>>>>> Jason, what is redness, to you?  And why do you never talk about
>>>>>>> that, but only "functional organization?"
>>>>>>>
>>>>>>
>>>>>> I mention colors and qualia all the time. And moreover I have
>>>>>> provided many arguments for why they are neither communicable nor
>>>>>> shareable. Therefore I see little point in me talking about "redness for
>>>>>> me" because others who are not me (everyone else on this list) cannot know
>>>>>> what "redness for me" is, or whether or to what extent it mirrors or
>>>>>> approximates "redness for them".
>>>>>>
>>>>>> It may be that the best we can do is say that if we have two functionally
>>>>>> isomorphic versions of me, with identically organized brains, then the
>>>>>> redness for both will be the same, provided the functional organization is
>>>>>> identical at the necessary functional substitution level (i.e., it is
>>>>>> finely-enough grained).
>>>>>>
>>>>>>
>>>>>> Jason
>>>>>> _______________________________________________
>>>>>> extropy-chat mailing list
>>>>>> extropy-chat at lists.extropy.org
>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>>>
>>>>> _______________________________________________
>>>>> extropy-chat mailing list
>>>>> extropy-chat at lists.extropy.org
>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>>
>>>> _______________________________________________
>>>> extropy-chat mailing list
>>>> extropy-chat at lists.extropy.org
>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>
>>> _______________________________________________
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org
>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>