[ExI] LLMs cannot be conscious

Gadersd gadersd at gmail.com
Mon Mar 20 17:57:15 UTC 2023


I wonder where the goalposts will be moved once we have embodied intelligent robots?

> On Mar 20, 2023, at 1:02 PM, Darin Sunley via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> If you ask ChatGPT to provide an "operational definition," it will hand you one.
> 
> Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1-dimensional space could possibly be conscious?
> 
> The inhabitants of Plato's Cave have entered the conversation (or at least, they're blinking furiously).
> 
> On Mon, Mar 20, 2023 at 9:26 AM William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. -gts
> 
> As John would say, we have to have examples to really understand meaning. But the words we are talking about are abstractions without any clear objective referent, so we and the AIs and the dictionary are reduced to trading synonyms for 'meaning' and 'understanding', etc. In science we use operational definitions to try to solve this problem.  bill w
> 
> 
> On Sun, Mar 19, 2023 at 1:05 AM Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand the *meaning* of any word's definition, you must look up the definitions of each word in that definition, and then look up each of the words in those definitions, which leads to an infinite regress.
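> 
> As a minimal sketch of this regress (a hypothetical toy dictionary; the entries are made up for illustration, not taken from any real lexicon):
> 
>     # Hypothetical toy dictionary, for illustration only:
>     # every word is "defined" purely in terms of other words.
>     toy_dictionary = {
>         "meaning": ["significance", "sense"],
>         "significance": ["importance", "meaning"],
>         "importance": ["significance", "value"],
>         "sense": ["meaning", "perception"],
>         "perception": ["sense", "awareness"],
>         "awareness": ["perception", "sense"],
>         "value": ["importance", "significance"],
>     }
> 
>     def chase(word, seen):
>         """Follow definitions until we revisit a word; the regress closes on itself."""
>         if word in seen:
>             print(f"cycle: back at {word!r}")
>             return
>         seen.add(word)
>         for defining_word in toy_dictionary.get(word, ()):
>             print(f"{word} -> {defining_word}")
>             chase(defining_word, seen)
> 
>     chase("meaning", set())
> 
> Every path either loops back or runs off the edge of the dictionary; it never bottoms out in anything but more words.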
> 
> Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different.
> 
> -gts
> 
> 
> On Sat, Mar 18, 2023, 3:39 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.
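> 
> To make "predict which word should come next" concrete, here is a minimal sketch using a toy bigram model rather than a real LLM (the corpus is made up; real models use vastly larger networks, but the objective is the same kind of next-word scoring):
> 
>     from collections import Counter, defaultdict
> 
>     # Hypothetical toy corpus, for illustration only.
>     corpus = "the cat sat on the mat the cat ate the food".split()
> 
>     # Count which word follows which.
>     bigram_counts = defaultdict(Counter)
>     for prev_word, next_word in zip(corpus, corpus[1:]):
>         bigram_counts[prev_word][next_word] += 1
> 
>     def predict_next(word):
>         """Return the most frequent next word after `word`, by raw counts."""
>         counts = bigram_counts[word]
>         return counts.most_common(1)[0][0] if counts else None
> 
>     print(predict_next("the"))  # -> 'cat' (it follows 'the' twice in the corpus)
> 
> The counts capture which words co-occur, but nothing in them points at an actual cat or mat, which is exactly the missing-referent worry above.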
> 
> Maybe you all already understand this, or maybe you have some reasons why I am wrong.
> 
> -gts
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
