[ExI] all we are is just llms was: RE: GPT-4 on its inability to solve the symbol grounding problem

efc at swisscows.email
Fri Apr 21 10:09:05 UTC 2023



On Thu, 20 Apr 2023, spike jones via extropy-chat wrote:

> Wait, this whole notion might be going down a completely wrong absurd road.  Does it make a lick of sense to separate intelligence
> from consciousness?  Billw or anyone else, does that make any sense to hypothetically dissociate those concepts, which cannot be
> separated in humans?

Well, I'm sure science has progressed a lot, but even during my
university years, intelligence was already branched out into emotional
intelligence, logical intelligence, and I'm sure you can find many more
types of intelligence.

So it's a language game. Take logical/analytical intelligence: a
computer is great at it, but not conscious. If we take the reverse, I'm
conscious (or so I claim at least ;) ) but I'm no math wizard
compared with the best engineers I know.

Looking at emotions, some (or all?) of those are more fundamental and
governed by a different part of the brain than logic/analytical
intelligence. So I can definitely see how having different scales might
make sense depending on the type of question you are trying to answer.

But at the root, today, determining if something is conscious is a
language game, tied up with the theory of consciousness you subscribe
to, and I do not think there is any consensus or proof that currently
settles this. Would be great though, if this mailing list did it. =)
