[ExI] Another ChatGPT session on qualia

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 27 06:48:33 UTC 2023


On Thu, Apr 27, 2023 at 12:20 AM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> *GPT "understands" words only insofar as it understands how they fit
> into patterns, statistically and mathematically in relation to other words
> in the corpus on which it is trained, which is what it appears to be saying
> here*
> 1) How do you know humans do not do the same?
> https://www.fil.ion.ucl.ac.uk/bayesian-brain/#:~:text=The%20Bayesian%20brain%20considers%20the,the%20basis%20of%20past%20experience
> .
>

This is an article about the Bayesian brain, the theory that we innately
use Bayesian reasoning to estimate the likelihood of hypotheses. It's all
very interesting, but nowhere does it even hint that humans are like
language models, knowing not the meanings of words but only how they relate
to one another statistically and in patterns. We know the meanings of the
words in our hypotheses, for example, spoken or unspoken.

-gts

