[ExI] LLMs cannot be conscious

Tara Maya tara at taramayastales.com
Mon Mar 20 21:52:46 UTC 2023


If a flower can attract a male insect by presenting a facsimile of a female insect, it shows that both the flower and the insect have evolved to do what they do. The flower, like the insect, has a certain level of "intelligence," but it is not an intelligence anything like the insect's, because the reward system in which it evolved is nothing like that of an actual female insect.

The fact that we have created a facsimile of human intelligence in no way makes it anything like human intelligence. It could be some other kind of intelligence.

Tara Maya



> On Mar 18, 2023, at 3:29 PM, Darin Sunley via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> If a system can formulate a sentence that employs a word, 
> and I can then ask the system to tell me the definition of that word, 
> and it does, 
> and I can then ask the system to relate its definition to the original utterance, 
> and it does, 
> and the system goes on to continue to use that word,
> and all of the system's uses of that word are consistent with the definition,
> then I no longer even understand what it could possibly mean for that system to /not/ "really understand" that word.
> 
> You might just as well say humans don't "really understand" words, since all our neurons are doing is manipulating concentrations of neurotransmitters and calcium ions.
> 
> On Sat, Mar 18, 2023 at 10:15 AM William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> Is there any referent for the word 'understand' as it relates to an AI?  bill w
>> 
>> On Sat, Mar 18, 2023 at 4:42 AM Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>> I think those who believe LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.
>>> 
>>> Maybe you all already understand this, or maybe you have some reasons why I am wrong.
>>> 
>>> -gts
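
As a concrete illustration of the next-word prediction Gordon describes above, here is a minimal toy sketch in Python: a bigram model that, given a word, emits the word that most often followed it in its training text. This is only a sketch of the prediction objective, not of how ChatGPT is implemented (real LLMs are neural networks trained on vast corpora, not lookup tables), and the corpus and names below are invented for the example. Note that nothing in the model links a word to its referent: it records only which words follow which.

from collections import Counter, defaultdict

# A tiny invented training corpus. A real LLM trains on billions of words.
corpus = ("the flower attracts an insect "
          "the flower rewards an insect "
          "the flower mimics an insect").split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # -> 'flower'
print(predict_next("an"))      # -> 'insect'
print(predict_next("insect"))  # -> 'the'

The toy model "uses words correctly" in this narrow statistical sense while containing nothing that could count as knowing what a flower or an insect is, which is exactly the distinction at issue in the thread.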
