[ExI] Another ChatGPT session on qualia

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 07:25:03 UTC 2023


Giovanni
By the way, this sentence seems to be contradictory. Can you tell me where
you think the contradiction is?

1) Language is not really that meaningful; in fact, we can use stats to
determine which word is most likely to follow another, and this will give
very meaningful text, so we are all stochastic parrots

GPT-4:
The contradiction in the sentence lies in the assertion that "language is
not really that meaningful" followed by the claim that using stats to
predict the next word will produce "very meaningful text."

The contradiction is as follows:

1) The first part of the sentence suggests that language lacks deep
meaning and can be reduced to simple statistics.
2) The latter part of the sentence claims that by using these statistics,
we can generate meaningful text.
These two statements are contradictory because if language were not
meaningful, it would not be possible to generate meaningful text using
statistics alone.

I don't think you can escape this.
The entire idea of stochastic parrots is bs. Parrots can be trained to
understand, but that is beside the point. When they repeat words they were
not trained to give meaning to, they don't produce full sentences and
paragraphs that are consistent and "seem" meaningful. If stats alone could
do that, then language would be pretty boring and... predictable.
Adding randomness would not help either.
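
For what it's worth, here is a toy sketch in Python of what stats plus
randomness actually buy you: a bigram Markov chain over a tiny made-up
corpus (both the corpus and the sampling are purely illustrative). It
strings together locally plausible word pairs, but nothing like a
consistent paragraph with a point to make.

import random
from collections import defaultdict

# A pure next-word-statistics model: a bigram Markov chain.
# The corpus is a placeholder; any plain-text file would do.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog run").split()

# Count which words were observed to follow which.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=12):
    # Sample each next word purely from observed follow-frequencies.
    word, out = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])  # the added randomness
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Output is locally plausible ("the dog sat on the...") but has no
# global coherence, argument, or intent.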






On Thu, Apr 27, 2023 at 12:14 AM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> See, this is how you get meaningful answers: have a conversation where
> you ask it to come to some logical conclusion.
>
> Giovanni: Think about this:
> 1) Language is not really that meaningful; in fact, we can use stats to
> determine which word is most likely to follow another, and this will give
> very meaningful text, so we are all stochastic parrots
> 2) Language is so complex and full of intrinsic meaning that to really be
> able to use it you need to understand it. You will never be able to use
> just stats to create meaningful content.
> GPT-4:
> Both statements present interesting perspectives on the nature of language
> and its meaning. Here's an analysis of each:
>
> Language as a stochastic process:
> This perspective argues that language can be understood and generated
> using statistical models that predict the likelihood of one word following
> another. This idea is the foundation of many natural language processing
> (NLP) algorithms, including n-grams and some deep learning models. While it
> is true that these models can generate coherent text, they still have
> limitations in understanding context, nuance, and ambiguity.
>
> Language as complex and intrinsically meaningful:
> This perspective emphasizes the importance of understanding the meaning
> and context in language. It suggests that statistical models alone cannot
> fully capture the intricacies of human language. For example, idioms,
> metaphors, and cultural references require knowledge beyond simple word
> probabilities.
> In reality, human language is a combination of both perspectives. Modern
> NLP techniques, such as the one used in this AI model, rely on a
> combination of statistical and semantic understanding to generate coherent
> and contextually relevant responses. However, it is worth noting that even
> these advanced models have limitations and are continuously being improved
> to better understand and generate human-like language.
>
> Giovanni Santostasi
>
> What I'm trying to understand is this. My friend Gordon claims that LLMs
> only use statistical patterns to guess the next word in a sentence. I told
> him this is just the starting point, because you cannot really measure the
> stats for combinations of 3, 4, 5 or more words: you soon run into an
> intractable combinatorial explosion. I told him that eventually they had to
> train the LLM via an ANN, by exposing it to a large body of text and
> building a model of how words follow each other. It is possible that, as a
> result, the ANN developed a deep understanding of language represented in
> its weights. Without this understanding you could not really achieve the
> level of mastery of language exhibited by LLMs.
> So this is why I wrote the previous sentences. Either language can be
> understood purely by stats (and I am not sure how one would overcome the
> combinatorial explosion), or, to actually create meaningful content, the
> LLM has to really understand the intricacies of language.
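>
> (To put a rough number on that explosion: with an assumed vocabulary of
> 50,000 words, picked purely for illustration, the number of distinct
> n-word sequences you would have to count grows as 50,000^n.)
>
> VOCAB = 50_000  # assumed vocabulary size, purely for illustration
> for n in range(1, 6):
>     # number of distinct n-word sequences you would need statistics for
>     print(n, VOCAB ** n)
> # by n = 5 that is already about 3.1e23 sequences, far more than any
> # training corpus could ever cover by direct counting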
>
> GPT-4:
>
> Your understanding is on the right track. While early NLP models relied
> heavily on statistical patterns to predict the next word in a sentence,
> they faced limitations due to the combinatorial explosion problem, as you
> mentioned. Modern language models, like the one used in this AI, are based
> on artificial neural networks (ANNs) and have evolved beyond just using
> statistical patterns.
>
> These models, such as GPT-4 (the model used here), are trained on vast
> amounts of text data and learn representations of words and their contexts,
> effectively encoding a deeper understanding of language within their
> weights. The training process allows the model to capture not just simple
> word co-occurrence patterns but also the nuances, context, and complexities
> of language. This is achieved through a combination of self-attention
> mechanisms, multi-layer neural networks, and other advanced techniques.
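>
> (As a concrete illustration of the self-attention step mentioned above,
> here is a minimal numpy sketch with toy sizes; it shows only the core
> scaled dot-product operation, not GPT-4's actual implementation.)
>
> import numpy as np
>
> def self_attention(X, Wq, Wk, Wv):
>     # Scaled dot-product self-attention over a sequence of word vectors X.
>     Q, K, V = X @ Wq, X @ Wk, X @ Wv
>     scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each word attends to each other
>     weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
>     return weights @ V  # each output vector mixes information from the whole context
>
> rng = np.random.default_rng(0)
> X = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings (toy sizes)
> Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
> print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)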
>
> However, it's essential to note that while these models have come a long
> way, they are not perfect and still have limitations. They can sometimes
> generate responses that may seem plausible but are incorrect or lack
> understanding of certain contexts. Researchers are continuously working to
> improve these models and help them better understand the intricacies of
> human language.
>
> On Thu, Apr 27, 2023 at 12:03 AM Giovanni Santostasi <
> gsantostasi at gmail.com> wrote:
>
>> Gordon,
>> This is why I put it in the form of a question: how do we know that our
>> way of using language doesn't rely on perceived patterns and regularities
>> (that are, in the end, statistical) in language?
>> Also, you never answered the question of whether you understand that the
>> LLMs make models, right? It is not just "this word comes after that one
>> with this probability, so I'm going to use the most likely word." They
>> don't do that. They used that as the starting point, but they needed to
>> create a model (via the training, and so the adjusting of the weights in
>> the ANN) that was predictive of how language works.
>> The main question is: is it possible that the final result, that is, the
>> trained ANN, actually spontaneously figured out language, and therefore
>> meaning, in order to achieve this level of accuracy and mastery of
>> language?
>> Think about this:
>> 1) Language is not really that meaningful; in fact, we can use stats to
>> determine which word is most likely to follow another, and this will give
>> very meaningful text, so we are all stochastic parrots
>> 2) Language is so complex and full of intrinsic meaning that to really be
>> able to use it you need to understand it. You will never be able to use
>> just stats to create meaningful content.
>>
>> I cannot prove it 100%, but basically I adhere to the second camp (as
>> Brent would call it). I think they used stats as a starting point, and
>> because it was impossible to handle the combinatorial explosion, they
>> started to train the system by exposing it to more and more text and
>> increasing the number of connections in the net. Eventually the net
>> figured out language, because the net is able to discriminate and select
>> meaningful responses EXACTLY as the net in our brain does.
>>
>> There are many examples of similar processes in nature where initially
>> there was some particular goal set by evolution but, as a side effect,
>> something else evolved. Take the human ability to do abstract thinking.
>> It evolved to handle simple tasks, like planning for a hunt or
>> understanding social structure, but then this ability developed to the
>> point where we can now send probes to Mars.
>> It is not obvious that you can get this capability by evolving good
>> hunters. This is how emergence works: it is unexpected and unpredictable
>> from the original components of the system that gave rise to that
>> complexity.
>>
>> Giovanni
>>
>> On Wed, Apr 26, 2023 at 11:49 PM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, Apr 27, 2023 at 12:20 AM Giovanni Santostasi via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>> *GPT "understands" words only insofar as it understands how they fit
>>>> into patterns, statistically and mathematically, in relation to other
>>>> words in the corpus on which it is trained, which is what it appears to
>>>> be saying here*
>>>> 1) How do you know humans do not do the same
>>>> https://www.fil.ion.ucl.ac.uk/bayesian-brain/#:~:text=The%20Bayesian%20brain%20considers%20the,the%20basis%20of%20past%20experience
>>>> .
>>>>
>>>
>>> This is an article about Bayesian reasoning, the theory that we innately
>>> use Bayesian reasoning to estimate the likelihood of hypotheses. It's
>>> all very interesting but nowhere does it even hint at the idea that humans
>>> are like language models, not knowing the meanings of words but only how
>>> they relate to one another statistically and in patterns. We know the
>>> meanings of the words in our hypotheses, for example, spoken or unspoken.
>>>
>>> -gts
>>>
>>>