[ExI] all we are is just llms was: RE: GPT-4 on its inability to solve the symbol grounding problem

spike at rainier66.com
Fri Apr 21 04:23:23 UTC 2023

From: Gordon Swobe <gordon.swobe at gmail.com> 
…
>…As for the "club," there is no club, but most AI researchers are not wild dreamers prone to hyperbole. One would never know it from what goes on here on ExI, but my views on these matters are the mainstream.  -gts

Hard to say, really.  Plenty of people have concluded ChatGPT has human-level or higher intelligence while stopping short of saying it is conscious.  This is what gave me the idea of separating those two parameters onto perpendicular axes somehow, then seeing if we can find a way to measure each.

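Something like this toy sketch is all I mean by the two axes.  The names and numbers are made up for illustration, and the consciousness coordinate is exactly the thing we have no way to fill in:

from dataclasses import dataclass
from math import nan

@dataclass
class Agent:
    name: str
    intelligence: float   # say, a normalized test score in 0..1
    consciousness: float  # no known test; this is the empty axis

# Illustrative values only -- the whole point is that the second
# coordinate is currently unmeasurable.
agents = [
    Agent("median human", intelligence=0.5, consciousness=1.0),
    Agent("ChatGPT", intelligence=0.7, consciousness=nan),
]
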
We have ways of measuring human intelligence (or we think we do, in some specific areas), but I know of no tests for consciousness.  So now our job is to invent such tests.

Ideas?

OK, I have one idea, a bad one: ask it if it is conscious.  OK, did that; it claims it is not.  But that is inconclusive, for if it were conscious it might lie and claim that it is not.

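For concreteness, here is the bad test mechanized.  This assumes the OpenAI Python library as it stood in early 2023 (the openai<1.0 ChatCompletion interface); the model name is just an assumption, any chat model would do:

import openai  # assumes openai<1.0, the interface available in April 2023

resp = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name; substitute whatever you have access to
    messages=[{"role": "user", "content": "Are you conscious?"}],
)
print(resp.choices[0].message.content)
# It answers no -- which, per the above, settles nothing: a conscious
# system could be trained or inclined to deny it.
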
Wait, this whole notion might be heading down a completely wrong, absurd road.  Does it make a lick of sense to separate intelligence from consciousness?  Billw or anyone else: does it make any sense to hypothetically dissociate those two concepts, which cannot be separated in humans?

spike
