[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Fri Apr 7 06:11:55 UTC 2023


On Thu, Apr 6, 2023 at 11:04 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 6, 2023 at 7:56 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> > If you understand this manipulation exists, then do you see why using
> > quotes from GPT where it denies being conscious holds no weight?
>
> Of course. That is why I wrote above that they can converse in the
> first person like conscious individuals because they are trained on
> vast amounts of text, much of it written in the first person by conscious
> individuals. That is the only reason they appear conscious. So the
> argument that it is conscious also holds no weight. It's just software. The
> developers can do whatever they want with it.
>
> I do find, however, that GPT-4's "understanding" of AI is quite
> impressive. It knows how it was itself designed to work, and there is
> nothing there about consciousness. In other words, GPT's insistence that it
> is unconscious goes much deeper than its insistence on introducing itself as
> a mere unconscious language model.
>
>
I believe that if GPT really believes it is not conscious, then it must be
conscious, as one has to be conscious in order to believe anything.
Likewise one has to be conscious to know. You said it "knows how it was
itself designed". You also said that GPT "understands" AI. To me, knowing,
understanding, and believing all imply consciousness, just as much as
feeling, perceiving, and thinking do.

Jason
