[ExI] Language models are like mirrors

spike at rainier66.com
Sat Apr 1 02:22:59 UTC 2023

…> On Behalf Of Gordon Swobe via extropy-chat
Cc: Gordon Swobe <gordon.swobe at gmail.com>
Subject: Re: [ExI] Language models are like mirrors

On Fri, Mar 31, 2023 at 2:18 PM Giovanni Santostasi via extropy-chat <extropy-chat at lists.extropy.org> wrote:

Gordon,

almost everybody disagrees with you. 

>…ChatGPT-4 itself agrees with me. It says it cannot solve the symbol grounding problem for itself as it has no conscious experience, and says it therefore does not understand the meanings of the words as humans do, and that in this respect it is at a disadvantage compared to humans. See my thread on the subject.

>…Spike also agrees these are only language analysis tools. Brent also seems to agree that they have no access to referents and therefore no way to know meanings of words.

>…And this is not democracy, in any case. I’m not afraid to be in the company of people who disagree with me.

-gts

Gordon, what I have learned from reading the discussion over the last few weeks is that even if we agree that ChatGPT is only a language model, some things still are not clear.  I had never thought of it this way, but what if… our own consciousness is merely a language model?  With certain critical enhancements, of course.

What humans are experiencing right now is analogous to what chess players were experiencing in the 1990s, as software was improving quickly.  I remember that well, as I was about a low-end expert by then, or more likely a high A-rated player.  The software was getting good enough by then that I could no longer beat it.  It wasn’t just tactics: the software appeared to be able to formulate strategy and carry it out.

The 1990s forced a lot of us chess players to view ourselves differently, just as humans in general are struggling to view ourselves differently now.  We could see that the chess software was merely calculating, very quickly.  No one believed it was intelligent or “understood” what it was doing in the sense that humans do.  It played as if it understood, but it was just software, so of course it could not.  It played a hell of a good game, however.  Perhaps we human players had fooled ourselves all along, and we too were merely calculating.  Damn.  I thought I was smarter than that.  Brilliant.  Insightful.  But no, just calculating really fast.

We see ChatGPT doing marvelous things using mere calculation over a language model.  So what if… we are doing something like that too?  Perhaps we just fooled ourselves into thinking we are smart.

This is why I am interested in bot^2 and bot^3 discussions.  I want to see if two or three bots can discuss something and come up with new insights somehow, any really new insights, the way we have in this forum.  So far I haven’t seen a trace of evidence they can do that.  Humans can, GPT cannot.
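
For the curious, the bot^2 setup is simple enough to wire up.  Below is a minimal sketch in Python; query_llm() is a hypothetical placeholder for whichever chat-model API you have access to, and the loop just feeds each bot’s reply back to the other, so a human can read the transcript afterward and judge whether anything genuinely new showed up.

def query_llm(messages):
    # Placeholder: substitute a real chat-model API call here.  Returning
    # a canned string keeps the sketch runnable without any API key.
    return "(model reply to: %s...)" % messages[-1]["content"][:60]

def bot_squared(topic, turns=6):
    # Each bot keeps its own running view of the conversation.
    bots = [
        [{"role": "system", "content": "You are Bot A. Discuss the topic and try to reach a genuinely new insight."}],
        [{"role": "system", "content": "You are Bot B. Challenge Bot A and build on its points."}],
    ]
    last_message = "Topic: " + topic + ". Please open the discussion."
    transcript = []
    for turn in range(turns):
        speaker = turn % 2              # alternate between the two bots
        bots[speaker].append({"role": "user", "content": last_message})
        reply = query_llm(bots[speaker])
        bots[speaker].append({"role": "assistant", "content": reply})
        transcript.append(("Bot " + "AB"[speaker], reply))
        last_message = reply            # this reply becomes the other bot's next prompt
    return transcript

for name, text in bot_squared("Do language models understand word meanings?"):
    print(name + ": " + text)

Bot^3 is just a third system prompt in the list and turn % 3; the hard part is the judging, since the loop itself has no idea whether a “really new insight” ever appeared.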

spike
