[ExI] Ben Goertzel on Large Language Models
spike at rainier66.com
Fri Apr 28 00:04:35 UTC 2023
…> On Behalf Of Gordon Swobe via extropy-chat
Subject: Re: [ExI] Ben Goertzel on Large Language Models
On Thu, Apr 27, 2023 at 4:59 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
>>…Gordon,
Given that Goertzel believes we can reach AGI in a few years, would you simply concede that when we reach this level of intelligence, the AGI would be conscious if it behaves like a conscious agent …
>…As for whether any AI will have subjective experience -- what I mean by consciousness -- I do doubt that, at least on digital computers as we understand them today. I certainly do not believe that GPT-4 or any other LLM is conscious.
-gts
Seems we are working back to a question I posed earlier: are consciousness and intelligence separable? In principle, I don’t see why not. ChatGPT claims not to be conscious, yet it appears to be intelligent.
I suppose we could ask GPT whether it thinks consciousness and intelligence can be separated, but it might end up contradicting itself. Perhaps someone has already done that experiment.
spike