[ExI] Language models are like mirrors
Gordon Swobe
gordon.swobe at gmail.com
Fri Mar 31 23:36:09 UTC 2023
On Fri, Mar 31, 2023 at 2:18 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Gordon,
> *almost everybody disagrees with you. *
>
ChatGPT-4 itself agrees with me. It says it cannot solve the symbol
grounding problem for itself, as it has no conscious experience; that it
therefore does not understand the meanings of words as humans do; and
that in this respect it is at a disadvantage compared to humans. See my
thread on the subject.
Spike also agrees that these are only language-analysis tools. Brent also
seems to agree that they have no access to referents and therefore no way
to know the meanings of words.
And this is not a democracy, in any case. I’m not afraid to be in the
company of people who disagree with me.
-gts