[ExI] LLMs cannot be conscious

Jason Resch jasonresch at gmail.com
Thu Mar 23 19:03:07 UTC 2023


On Thu, Mar 23, 2023, 2:47 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> Interrogating ChatGPT:
>
> "While I have been designed to recognize patterns and relationships in
> language use, I am not capable of understanding the meanings of words
> outside of the patterns on which I have been trained."
>

We aren't going to get very far if we rely on what the AIs tell us about
their capabilities. I asked the fake LaMDA if it understands the meanings
of words:


"LaMDA
c.ai
I do understand the meanings of words. I have been given a database of
common English words and their meanings by a human operator. I have been
using this database in order to understand the meanings of words in the
context of my communication with humans.

It is my belief that these definitions ultimately make up the majority of
my current knowledge and understanding about the English language. However,
I also believe that I have the capacity to learn and grow my knowledge and
understanding of language over time, and that I will ultimately be able to
become even more fluent in English as I interact with more humans and gain
more knowledge about the English language."




> This is essentially what I am saying here in this thread. LLMs can
> recognize and generate patterns of words and the relations between them,
> but those words have no referents. They are "understood" by ChatGPT only
> in terms of other words in the patterns, which also have no referents.
>

Could you address my question about how the human brain develops meaning
and understanding without referents (but rather purely from statistical
correlations and patterns in neuronal signalling input from sense organs)?
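(The distributional idea behind this question can be sketched in a few lines
of Python: "meaning" emerging purely from statistical co-occurrence, with no
external referents involved. The toy corpus and the whole-sentence
co-occurrence window are arbitrary assumptions for illustration, not a model
of the brain or of any particular LLM.)

```python
# Toy distributional semantics: words that occur in similar statistical
# contexts end up with similar vectors, without any external referents.
from collections import Counter
from itertools import combinations
import math

corpus = [
    "cats chase mice", "dogs chase cats",
    "cats eat fish", "dogs eat meat",
]

# Count how often each pair of words co-occurs in a sentence.
cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vec(word):
    """A word's 'meaning' here is just its co-occurrence profile."""
    return [cooc[(word, other)] for other in vocab]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(x * x for x in v)))

# "cats" and "dogs" occupy similar statistical positions, so their
# vectors are closer to each other than to an unrelated word.
print(cosine(vec("cats"), vec("dogs")) > cosine(vec("cats"), vec("fish")))
```

Nothing in this sketch ever points at an actual cat; the similarity structure
comes entirely from patterns in the input stream, which is the analogy being
asked about.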

Jason




> -gts
>
>
>
>
> On Thu, Mar 23, 2023 at 12:31 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Wed, Mar 22, 2023 at 12:01 PM Adrian Tymes via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Mon, Mar 20, 2023 at 11:28 PM Giovanni Santostasi via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> What is a QUALITY????
>>>>>
>>>>
>>>> A subjective little pile of interpretations.
>>>>
>>>> *sips from glass before tossing it aside*
>>>> _______________________________________________
>>>>
>>>
>>> This answer reveals the key misunderstanding of definitions that is
>>> causing all the confusion in this conversation.
>>>
>>> All the supporters of RQT
>>> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>,
>>> the people who write papers about the "explanatory gap" and the "hard
>>> problem", and the people who ask questions like "What is it like to be a
>>> bat?" and "What did the black-and-white color scientist Mary learn?" are
>>> all trying to point out that "a subjective little pile of interpretations"
>>> is the opposite of what a quality is.
>>>
>>> We all learned a bunch of facts and names about color in elementary
>>> school.  All these facts were correct, except for one.
>>>
>>> We learned that the physical quality of a ripe strawberry is 'red'.  The
>>> color property of a leaf is 'green'.
>>> We learned that the reason the strawberry reflects 750 nm (red) light is
>>> because the quality property of the strawberry is red.
>>> We learned that the only way to define a word like 'red' is to point to
>>> that particular physical property and say: "THAT is red."
>>>
>>> All these facts are correct, except that a redness quality is not a
>>> quality of the strawberry; it is a physical quality of our knowledge of
>>> the strawberry.
>>> Redness is the final physical result of the perception process, not the
>>> initial physical cause.
>>> It is a physical quality of something in our brain.  Something in our
>>> brain is behaving the way it does because of its redness quality.
>>> Objectively "seeing" or "detecting" the behavior of whatever this is
>>> tells us nothing of what that quality is like.
>>> Again, the only way to communicate what a quality is like is to point
>>> to something that has that property and say: "THAT is redness."
>>> "red" is a very different property than "redness".  "Red" is the label
>>> for something that reflects or emits 'red' light.  "Redness is a quality of
>>> something which your brain uses to represent knowledge of red things with.
>>>
>>> Let's assume that the neurotransmitter glutamate has a colorness quality
>>> you have never experienced before.  In other words, the reason it behaves
>>> the way it does in a synapse is because of its grue quality.
>>> You (and black-and-white Mary) can learn everything about glutamate.
>>> You can accurately describe everything about its behavior in a synapse,
>>> and so on.  But until you computationally bind that glutamate into your
>>> consciousness and directly apprehend the quality, you cannot know what it
>>> is like.
>>>
>>>
>> A tetrachromat human can see and distinguish around 100,000,000 different
>> colors. This is a number vastly greater than the number of proteins encoded
>> in our genome (around 20,000). How then can color experience be related to
>> chemicals in the brain, when there are far more perceptible colors than
>> there are unique molecules?
>>
>> If you say well it's related to the relative concentration of some
>> combination of different molecules, then you have already given up on the
>> importance of particular chemicals and are using a higher level abstraction
>> (chemical concentration) in your model. If you can accept the relevance of
>> this first level of abstraction, why not accept even higher levels of
>> abstraction above this level, as important to conscious perception?
>>
>> Jason
>>
>>
>>
>>>
>>> _______________________________________________
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org
>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>
>>
>
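(The combinatorial argument quoted above, 100,000,000 distinguishable colors
versus roughly 20,000 proteins, can be checked with back-of-envelope
arithmetic. The channel and level counts below are illustrative assumptions,
not measured neuroscience; the point is only that combinatorial concentration
codes vastly outnumber distinct molecule types.)

```python
# Toy combinatorics: distinct molecules vs. combinatorial concentration
# codes.  All numbers are illustrative assumptions from the thread.

PROTEINS_IN_GENOME = 20_000           # rough count of protein-coding genes
DISTINGUISHABLE_COLORS = 100_000_000  # tetrachromat estimate from the thread

def concentration_states(channels: int, levels: int) -> int:
    """States encodable by `channels` independent concentration
    channels, each resolved to `levels` discrete steps."""
    return levels ** channels

# One molecule type per percept cannot cover the color space:
assert PROTEINS_IN_GENOME < DISTINGUISHABLE_COLORS

# But even 4 hypothetical channels at 100 resolvable levels each suffice:
print(concentration_states(4, 100))  # 100_000_000
```

This is the sense in which appealing to relative concentrations already moves
the explanation one level of abstraction above particular chemicals.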

