[ExI] Another ChatGPT session on qualia

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 07:03:42 UTC 2023


Gordon,
This is why I put it in the form of a question: how do we know that our own
way of using language doesn't rely on perceived patterns and regularities in
language (which are, in the end, statistical)?
Also, you never answered my question about whether you understand that LLMs
build models, right? It is not just "this word comes after that one with this
probability, so I'm going to use the most likely word." They don't do that.
They use those statistics as the starting point, but they then have to build
a model (via training, that is, by adjusting the weights of the ANN) that is
predictive of how language works.
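To make the distinction concrete, here is a minimal toy sketch in Python (my
own illustration, not how GPT is actually built; the little corpus and the
single-layer softmax predictor are assumptions chosen just for brevity). It
contrasts a raw lookup table of next-word counts with a small model whose
weights are adjusted by gradient descent so that it predicts the next word:

from collections import Counter, defaultdict

import numpy as np

corpus = "the cat sat on the mat and the cat sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# --- Pure statistics: a lookup table of next-word counts ---
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1
print(bigram_counts["the"].most_common(1))   # most frequent word after "the"

# --- A model: weights adjusted by training to *predict* the next word ---
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))       # the model's adjustable weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

pairs = list(zip(corpus, corpus[1:]))
for _ in range(500):                         # training loop
    for prev, nxt in pairs:
        x = np.zeros(V)
        x[idx[prev]] = 1.0                   # one-hot input: the previous word
        p = softmax(W.T @ x)                 # predicted next-word distribution
        grad = np.outer(x, p)                # gradient of the cross-entropy loss
        grad[idx[prev], idx[nxt]] -= 1.0
        W -= 0.1 * grad                      # adjust the weights

x = np.zeros(V)
x[idx["the"]] = 1.0
p = softmax(W.T @ x)
print(vocab[int(p.argmax())])                # the trained model's prediction after "the"

The lookup table can only repeat counts it has already seen; the trained
weights are a compressed model of the same statistics. Scaled up enormously,
training by next-word prediction is what produces the LLM's model.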
The main question is: is it possible that the final result, that is, the
trained ANN, actually spontaneously figured out language, and therefore
meaning, in order to achieve this level of accuracy and mastery of language?
Think about these two possibilities:
1) Language is not really that meaningful; in fact, we can use statistics to
determine which word is most likely to follow another, and this will produce
very meaningful text, so we are all stochastic parrots.
2) Language is so complex and full of intrinsic meaning that to really be
able to use it you need to understand it. You will never be able to create
meaningful content using statistics alone.

I cannot prove it 100%, but basically I adhere to the second camp (as Brent
would call it). I think they used statistics as a starting point, and because
it was impossible to handle the combinatorial explosion, they trained the
system by exposing it to more and more text and by increasing the number of
connections in the net. Eventually the net figured out language, because the
net is able to discriminate and select meaningful responses, EXACTLY as the
net in our brain does.
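A back-of-envelope calculation shows why the combinatorial explosion rules
out a pure lookup table (the vocabulary size and corpus size below are my own
rough, hypothetical round numbers, just to show the scale of the problem):

vocab_size = 50_000       # rough order of magnitude for an LLM vocabulary (assumed)
corpus_tokens = 10**12    # on the order of a trillion training tokens (assumed)

for context_len in (2, 3, 5, 10):
    possible_contexts = vocab_size ** context_len
    coverage = corpus_tokens / possible_contexts
    print(f"{context_len:>2}-word contexts: {possible_contexts:.1e} possible, "
          f"corpus covers at most a fraction {coverage:.1e} of them")

Already at 3-word contexts there are vastly more possible word combinations
than any training corpus contains, so the system cannot simply memorize
frequencies; it has to compress the statistics into weights that generalize,
which is where something like "figuring out language" could happen.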

There are many examples of similar processes in nature, where evolution
initially set one particular goal but, as a side effect, something else
evolved. Take the human ability to do abstract thinking. It evolved for
simple tasks, maybe planning a hunt or understanding social structure, but
this ability then developed to the point where we can now send probes to Mars.
It is not obvious that you can get this capability by evolving good hunters.
This is how emergence works: it is unexpected and unpredictable from the
original components of the system that gave rise to that complexity.

Giovanni










On Wed, Apr 26, 2023 at 11:49 PM Gordon Swobe <gordon.swobe at gmail.com>
wrote:

>
>
> On Thu, Apr 27, 2023 at 12:20 AM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> *GPT "understands" words only in so much it understands how how they fit
>> into patterns, statistically and mathematically in relation to other words
>> in the corpus on which it is trained, which is what it appears to be saying
>> here*
>> 1) How do you know humans do not the same
>> https://www.fil.ion.ucl.ac.uk/bayesian-brain/#:~:text=The%20Bayesian%20brain%20considers%20the,the%20basis%20of%20past%20experience
>> .
>>
>
> This is an article about Bayesian reasoning, a theory that we use Bayesian
> reasoning innately to estimate the likelihood of hypotheses. It's all very
> interesting but nowhere does it even hint at the idea that humans are like
> language models, not knowing the meanings of words but only how they relate
> to one another statistically and in patterns. We know the meanings of the
> words in our hypotheses, for example, spoken or unspoken.
>
> -gts
>
>

