[ExI] LLMs cannot be conscious

Will Steinberg steinberg.will at gmail.com
Sun Mar 19 15:17:35 UTC 2023


Could you define "understand"?

Oh dear, this is going to be the Chinese room all over again...

It's a lot better than all the political bullshit people have been spewing
here, though, so by all means philosophize away.

On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I think those who think LLM AIs like ChatGPT are becoming conscious or
> sentient like humans fail to understand a very important point: these
> software applications only predict language. They are very good at
> predicting which word should come next in a sentence or question, but they
> have no idea what the words mean. They do not and cannot understand what
> the words refer to. In linguistic terms, they lack referents.
>
> Maybe you all already understand this, or maybe you have some reasons why
> I am wrong.
>
> -gts
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
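For what it's worth, the "predict which word should come next" idea can be illustrated with a deliberately simple sketch (my own toy illustration, not how an actual LLM is implemented): a bigram model that learns only which word tends to follow which, from raw co-occurrence counts, with no representation at all of what the words refer to.

```python
from collections import Counter, defaultdict

# Toy next-word predictor (illustrative only; real LLMs use learned
# neural networks over tokens, not raw bigram counts).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (seen twice after "the" above)
```

The model completes "the" with "cat" purely from statistics; whether the much larger statistical machinery in an LLM thereby gains referents is exactly the point under dispute.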
