[ExI] Another ChatGPT session on qualia

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 27 05:57:08 UTC 2023


On Wed, Apr 26, 2023 at 11:34 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Understanding is not a binary yes/no thing.  Multiple degrees of
> understanding, and lack thereof, are possible.  Note that it says it does
> not "truly" understand.
>
> Perhaps it understands enough to know it lacks full understanding.
>

I think it includes the qualification "truly" because it is so common
for people to speak and write of language models, and of software
applications in general, as having understanding when, as I have argued
and try to practice myself, we really ought to put the word
"understanding" in scare quotes or use some other word altogether. This
is one of Professor Emily Bender's pet peeves, and I agree.

GPT "understands" words only in so much it understands how how they fit
into patterns, statistically and mathematically in relation to other words
in the corpus on which it is trained, which is what it appears to be saying
here:



> My responses are generated based on patterns in the text and data
> that I have been trained on, and I do not have the ability to truly
> understand the meaning of the words I generate.
> -GPT
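
To make "statistical patterns" concrete, here is a toy bigram model in
Python. This is my own illustration, not how GPT actually works
internally (GPT is a neural network, not a frequency table), but it
shows the general idea: the model's entire "understanding" of a word is
a record of which words tend to follow it in the training text.

import random
from collections import defaultdict, Counter

# A tiny stand-in for a training corpus.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # "Understanding" here is nothing but relative frequency:
    # sample the next word in proportion to how often it followed
    # `word` in the corpus. (Assumes `word` appeared in the corpus.)
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("the"))  # e.g. "cat" -- chosen by frequency, not meaning

Run it and you get a plausible next word like "cat", chosen by
frequency alone, with no grasp of what a cat is.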
-gts

