[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem
Ben Zaiboc
ben at zaiboc.net
Mon Apr 17 20:26:56 UTC 2023
On 17/04/2023 20:45, Giovanni Santostasi wrote:
> If you do cognitive tests that are used to test humans then GPT-4 has
> a similar performance to humans at different levels of development
> depending on the task.
Actually, that raises a rather disturbing thought: if Gordon, and
others, like the linguists he keeps telling us about, can dismiss these
cognitive tests when applied to GPT-4, then the tests can't be relied
upon to tell us anything about the humans taking them, either. For a
test to be any use, we have to treat the subject as a 'black box' and
make no prior assumptions about what's inside it; if a result only
counts when we already approve of the subject, there was no point
running the test. So presumably these people think such tests are no
use at all. Otherwise it's, what, racism? I don't know what to call it.
Looks like the old AI goalpost-moving means we're going to have to stop
doing cognitive tests on anybody, or anything. They're no use any more!
Ben