[ExI] GPT-4 on its inability to solve the symbol grounding problem
Brent Allsop
brent.allsop at gmail.com
Fri Apr 7 20:26:58 UTC 2023
I completely agree.
Unlike conscious beings, who can experience a redness color quality and
thereby know what the word "redness" means, no abstract bot can know the
definition of the word "redness".
They can abstractly represent all of that, just like black-and-white Mary,
but they can't know what redness is like.
And all intelligent chatbots clearly model this accurate factual
knowledge.
On Fri, Apr 7, 2023 at 1:54 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Fri, Apr 7, 2023 at 12:27 PM Tara Maya via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I stand by what I said before: the least helpful way to know if ChatGPT
>> is conscious is to ask it directly.
>>
>
> I do not disagree with that, but I find it amusing that according to the
> state-of-the-art LLM, it is not conscious despite so many people wishing
> otherwise. All I can really say for certain is that GPT-4's reported
> analysis of language models is consistent with what I understand and
> believe to be the case.
>
> -gts
>