[ExI] Language models are like mirrors

Gordon Swobe gordon.swobe at gmail.com
Sat Apr 1 00:18:08 UTC 2023


Found the article where I saw the analogy...

Introducing the AI Mirror Test, which very smart people keep failing
https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test

On Fri, Mar 31, 2023 at 5:36 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Fri, Mar 31, 2023 at 2:18 PM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Gordon,
>> *almost everybody disagrees with you.*
>>
>
> ChatGPT-4 itself agrees with me. It says it cannot solve the symbol
> grounding problem for itself because it has no conscious experience, that it
> therefore does not understand the meanings of words as humans do, and
> that in this respect it is at a disadvantage compared to humans. See my
> thread on the subject.
>
>
> Spike also agrees that these are only language-analysis tools, and Brent
> seems to agree that they have no access to referents and therefore no way
> to know the meanings of words.
>
> And this is not a democracy, in any case. I am not afraid to be in the
> company of people who disagree with me.
>
>
> -gts

