[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Thu Apr 6 13:49:04 UTC 2023


On Thu, Apr 6, 2023, 9:30 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Thu, Apr 6, 2023 at 5:13 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Please see and understand the images in this Twitter thread.
>
> I know all about Sydney, thanks. As I have mentioned, LLMs can converse
> in the first person like conscious individuals because they are trained
> on vast amounts of text, much of it written in the first person by
> conscious individuals. They are parroting what a conscious person
> speaking in the first person looks like in language. This is why the
> founders of OpenAI say that the only true test of consciousness in an
> LLM would require that it be trained on material completely devoid of
> references to first-person experience, etc.
>
> I do not dispute that the developers have the capacity to turn this
> “first-person conscious individual” feature off, or that to some extent
> they might have done so with GPT-4. It’s just software, except in our
> extropian fantasies.
>


If you understand that this manipulation exists, then do you see why
quotes from GPT in which it denies being conscious hold no weight?

Jason
