[ExI] LLMs cannot be conscious

spike at rainier66.com
Sat Mar 18 12:56:45 UTC 2023

From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Gordon Swobe via extropy-chat
Subject: [ExI] LLMs cannot be conscious

> I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.
>
> Maybe you all already understand this, or maybe you have some reasons why I am wrong.
>
> -gts
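
For concreteness, "predicting which word should come next" looks roughly like the toy sketch below. It is only an illustration, assuming the Hugging Face transformers library and the small GPT-2 model as a stand-in for whatever model ChatGPT actually runs: the model scores every token in its vocabulary for the next position, and nothing in that calculation reaches outside the text toward a referent.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is only a small stand-in here; larger chat models differ in scale,
# not in the basic next-token machinery.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

# Turn the scores for the position after the prompt into probabilities
# and show the five most likely next tokens.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")

Whatever tokens come out on top, they come from statistics over training text, not from any acquaintance with cats or the things they sit on.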

I think you are right, gts.  I studied a bunch of experiments where chatbots were talking to each other, looking for any indication that they were inventing ideas the way two humans invent ideas when they talk to each other.  The two-bot discussions never blossomed into anything interesting that I could find, but rather devolved into the one-line fluff that often comes with a forced discussion with a dull, unimaginative person at a party you don’t have the option of leaving.
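
Roughly the kind of loop I mean is sketched below, with the Hugging Face text-generation pipeline and GPT-2 standing in for the actual bots (the experiments I looked at used different bots and settings, so treat this as an illustration only):

from transformers import pipeline, set_seed

set_seed(0)  # make the toy run repeatable
bot_a = pipeline("text-generation", model="gpt2")
bot_b = pipeline("text-generation", model="gpt2")

message = "What do you think machines will be able to do in ten years?"
for turn in range(4):
    speaker = bot_a if turn % 2 == 0 else bot_b
    # The pipeline returns prompt + continuation in "generated_text".
    full = speaker(message, max_new_tokens=40, do_sample=True,
                   pad_token_id=50256)[0]["generated_text"]
    # Pass only the newly generated continuation to the other bot.
    message = full[len(message):].strip() or message
    print(f"bot {'A' if turn % 2 == 0 else 'B'}: {message}")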

These chatbots are not yet ready to train each other and cause the singularity.  But there might be an upside to that: humans continue to survive as a species.  That’s a good thing methinks.  I like us.

spike
