[ExI] Ben Goertzel on Large Language Models

Brent Allsop brent.allsop at gmail.com
Sun Apr 30 13:22:40 UTC 2023


Hi Jason,
OK, thanks.  That helps!
(Can you hear my brain working to reorganize my understanding of
functionalism? ;)

You also said: "it is hard to say, and impossible to prove."

But this is as simple as plugging whatever it is into a computational binding
system
<https://canonizer.com/topic/827-Name-for-Binding-Problem-Sltn/2-Computational-Binding>
and finding out, isn't it?

On Sun, Apr 30, 2023 at 7:13 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Apr 30, 2023, 8:29 AM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>>
>> On Sat, Apr 29, 2023 at 5:54 AM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Sat, Apr 29, 2023, 2:36 AM Gordon Swobe <gordon.swobe at gmail.com>
>>> wrote:
>>>
>>>> On Fri, Apr 28, 2023 at 3:46 PM Jason Resch via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Apr 28, 2023, 12:33 AM Gordon Swobe via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>> Quite by accident, I happened upon this quote from Erwin Schrodinger
>>>>>> this evening.
>>>>>>
>>>>>> "Consciousness cannot be explained in physical terms. Because
>>>>>> consciousness is absolutely fundamental. It cannot be explained in any
>>>>>> other terms."
>>>>>>
>>>>>> That is actually what I also hold to be true about consciousness,
>>>>>> though not necessarily for reasons related to quantum mechanics or eastern
>>>>>> philosophy. (Schrodinger is said to have been influenced by
>>>>>> eastern philosophy).
>>>>>>
>>>>>
>>>>> Me too. It's strange then that we disagree regarding AI.
>>>>>
>>>>
>>>> Yes, that is interesting. To be clear, I agree with Schrodinger that
>>>> consciousness cannot be explained in physical terms, but this is not quite
>>>> the same as saying it is immaterial or non-physical. I mean, and I think he
>>>> meant, that it cannot be explained in the third-person objective language
>>>> of physics.
>>>>
>>>
>>> There is a sense in which I could agree with this. I think physics is
>>> the wrong language for describing states of consciousness, which are
>>> higher-order phenomena. I would also say, as I have explained elsewhere,
>>> that in a certain sense consciousness is more fundamental than the
>>> apparent physical reality.
>>>
>>> I take "absolutely fundamental" to mean irreducible.
>>>>
>>>
>>> Right, there are several possible interpretations of what he means by
>>> fundamental.
>>>
>>> I agree that consciousness is irreducible in the sense that looking at ever
>>> smaller pieces of the brain does not yield better understanding of the
>>> mind. I would say that consciousness is constructive, not reductive. You
>>> need to consider all the parts together, and how they build up to a whole,
>>> rather than how each part operates in isolation.
>>>
>>> Much of science has been successful precisely because it has followed
>>> the path of reductionism, but I don't think states of consciousness can be
>>> entirely understood by reductive means. The same is true of any
>>> complex enough system that manifests emergent behavior, like a complex
>>> computer program, or an ecosystem. When there are many unique parts
>>> interacting in complex ways with each other, the system as a whole cannot
>>> be understood by a simple analysis of each part. Any true understanding of
>>> that system must include all the parts working together: the whole.
>>>
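>>> As a toy illustration of that last point (a minimal Python sketch of
>>> Conway's Game of Life, chosen here only as a familiar example of
>>> emergence, not as anyone's model of the brain): each cell follows one
>>> trivial local rule, yet patterns like gliders exist only at the level
>>> of the whole grid.
>>>
>>>     from collections import Counter
>>>
>>>     def step(live):
>>>         # Count the live neighbors of every cell adjacent to a live cell.
>>>         counts = Counter((x + dx, y + dy)
>>>                          for (x, y) in live
>>>                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
>>>                          if (dx, dy) != (0, 0))
>>>         # The entire "physics": birth on 3 neighbors, survival on 2 or 3.
>>>         return {c for c, n in counts.items()
>>>                 if n == 3 or (n == 2 and c in live)}
>>>
>>>     glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
>>>     print(step(glider))
>>>     # No single cell "is" the glider; analyzing one cell's rule in
>>>     # isolation tells you nothing about the pattern that moves across
>>>     # the grid. The whole must be considered together.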
>>>
>>>> I take "It cannot be explained in other terms" to mean that the
>>>> experience itself is the only way to understand it.
>>>>
>>>
>>> I agree with what you say above.
>>>
>>>> This is also why I try to stay out of the endless discussions about what
>>>> are qualia.
>>>>
>>>> I cannot explain in the language of physics, or in the language of
>>>> computation or of functionalism generally, why I see the red quale when I
>>>> look at an apple. I just do. It is fundamental and irreducible.
>>>>
>>>
>>> Note that functionalism doesn't aim to make qualia communicable. It is
>>> just the hypothesis that if you could reproduce the functional organization
>>> of a conscious system, you would reproduce the same consciousness as
>>> that first conscious system.
>>>
>>
>> I don't understand why functionalists only ever seem to talk about
>> "functional organization".
>> All 4 of the systems in this image:
>> https://i.imgur.com/N3zvIeS.jpg
>> have the same "functional organization" as they all know the strawberry
>> is red.
>>
>
> You have to consider the organization at the right degree of detail. They
> are not functionally identical, as they are each processing information in
> different ways: one inverts the symbol after the retina, another before
> it, and a third is geared only to map inputs to text strings. These are
> functional differences.
>
> If you ignore the level of detail (the functional substitution level) and
> look at only the highest level of output, then you would end up equating a
> dreaming brain with a rock: both output nothing, yet one has a rich inner
> experience.
>
>
>
>> But the fact that they all have this same functionality is missing the
>> point of what redness is.
>>
>
> It seems to me that the real issue is that perhaps you have been
> misunderstanding what functionalism is this whole time. Yes, a person asked
> what 2+3 is and a calculator asked what 2+3 is will both give 5, but they
> are very different functions when analyzed at a finer grain. This is what I
> have referred to as the "substitution level": for humans it may be at the
> molecular, protein, or neuronal level, or perhaps slightly above the
> neuronal level; it is hard to say, and impossible to prove.
>
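> As a toy illustration of the substitution level (a minimal Python sketch
> with made-up function names, not anyone's actual theory): two systems can
> agree at the coarsest level of description while differing at every finer
> grain.
>
>     def add_by_counting(a, b):
>         # Computes the sum stepwise, the way a child counts on fingers.
>         total = a
>         for _ in range(b):
>             total += 1
>         return total
>
>     def add_by_lookup(a, b):
>         # Replays a memorized table; no counting happens anywhere.
>         return {(2, 3): 5, (3, 2): 5}[(a, b)]
>
>     assert add_by_counting(2, 3) == add_by_lookup(2, 3)  # both give 5
>     # At the input-output level the two are "functionally identical";
>     # one level down, their organizations have nothing in common. Which
>     # level matters is exactly the question of the substitution level.
>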
> Note this is not some pet theory of mind; look at how Chalmers defines his
> principle of organizational invariance:
>
> "Specifically, I defend a principle of organizational invariance, holding
> that experience is invariant across systems with the same fine-grained
> functional organization. More precisely, the principle states that given
> any system that has conscious experiences, then any system that has the
> same functional organization at a fine enough grain will have qualitatively
> identical conscious experiences. A full specification of a system's
> fine-grained functional organization will fully determine any conscious
> experiences that arise."
>
> Note his repeated (I count three) appeals to a necessarily
> "fine-grained" level of functional organization. You can't stop at the top
> layer of them all saying "I see red" and call it a day, nor say they are
> functionally equivalent if you ignore what's going on "under the hood".
>
>
>> Why do functionalists never talk about redness,
>>
>
>
> They do talk about redness and colors all the time. Chalmers' fading
> qualia thought experiment is entirely based on color qualia.
>
>
> but just "functional organisation?
>>
>
> Because functional organization is the only thing that determines
> behavior, and it is as far as we can go in objectively testing or
> analyzing a system.
>
>
>
>>
>>
>>> It's a fairly modest idea as far as theories go, because you would
>>> obtain identical behavior between the two systems. So if the first is David
>>> Chalmers, his functional duplicate would say and do all the same things as
>>> the original, including stating his love of certain qualia like deep
>>> purples and greens, and writing books about the mysterious nature of
>>> consciousness. Could such a thing be a zombie? This is where you and I part
>>> ways.
>>>
>>
>> To me, the R system in the above image is a zombie, as it can be
>> functionally isomorphic to the other 3,
>>
>
> It's not functionally isomorphic at a fine-grained level.
>
>
>> it can simulate the other 3,
>>
>
> It's not simulating the other three; it just happens to have the same
> output. To be simulating one of the other three, in my view, its circuits
> would have to be functionally isomorphic to one of the other brains at
> perhaps the neuronal or molecular level.
>
> Note there is no way to simulate all three at the necessary level of
> detail at the same time in your picture because they have different qualia.
> That two different fine-grained versions have different qualia implies
> that they are not functionally isomorphic at the necessary substitution
> level (i.e., they're not the same at the fine-grained level on which the
> qualia supervene).
>
>> but its knowledge isn't like anything. Do functionalists think
>> of a zombie as something different?
>>
>
> Different from what?
>
>> Functionalists seem to be saying that a zombie like R isn't possible, and
>> they seem to be saying that A and C are the same, because they both know the
>> strawberry is red. That is true, but that is missing the point.
>> "Functional organization" isn't the point; the redness is the point.
>>
>
> I think you may be missing some points regarding functionalism, and
> implore you to read all of the dancing qualia thought experiment -- and
> consider what the consequences would be *if we could* simulate the brain's
> behavior using an artificial substrate.
>
> I know you disagree with this premise, but if you truly want to understand
> the functionalist perspective, you must temporarily accept the premise for
> the purposes of following the thought experiment and seeing where it leads
> *if* digital emulation were possible.
>
>
>> Jason, what is redness, to you?  And why do you never talk about that,
>> but only "functional organization?"
>>
>
> I mention colors and qualia all the time. And moreover I have provided
> many arguments for why they are neither communicable nor shareable.
> Therefore I see little point in me talking about "redness for me" because
> others who are not me (everyone else on this list) cannot know what
> "redness for me" is, or whether or to what extent it mirrors or
> approximates "redness for them".
>
> It may be that the best we can do is say that if we have two functionally
> isomorphic versions of me, with identically organized brains, then the
> redness for both will be the same, provided the functional organization is
> identical at the necessary functional substitution level (i.e., it is
> finely enough grained).
>
>
> Jason
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>