[ExI] GPT-4 on its inability to solve the symbol grounding problem

spike at rainier66.com spike at rainier66.com
Sat Apr 8 04:39:55 UTC 2023

From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Brent Allsop via extropy-chat
…

>…How, then, do you define conscious?  Certainly you don't think there is anything phenomenal, like redness and greenness in there, like our phenomenal consciousness that is like something? Brent

Brent, back around the mid-90s, the chess players tripped over a similar question as the software was getting so good at our favorite game. There were those who argued the software wasn't really playing chess in the sense we understood it, but was only trying a bunch of possible moves and calculating, and that mere calculation wasn't really playing chess at all.

I find it so interesting that we are grappling with a similar question now, trying to decide whether software can think or is conscious. The answer is similar: it's all in how we decide to define the terms.

spike