[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 17 20:38:39 UTC 2023


*Looks like the old AI goalpost-moving means we're going to have to stop
doing cognitive tests, on anybody/thing. They're no use anymore!*

Ben,
My thoughts exactly.
Giovanni


On Mon, Apr 17, 2023 at 1:33 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 17/04/2023 20:45, Giovanni Santostasi wrote:
> > If you do cognitive tests that are used to test humans then GPT-4 has
> > a similar performance to humans at different levels of development
> > depending on the task.
>
> Actually, that raises a rather disturbing thought: if Gordon, and
> others, like the linguists he keeps telling us about, can dismiss these
> cognitive tests when applied to GPT-4, it means the tests can't be
> relied upon to tell us about the humans taking them, either. For a test
> to be any use, we have to treat the subject as a 'black box' and not
> make any assumptions about them; otherwise there's no point doing the
> test. So presumably these people think that such tests are no use at
> all. Otherwise it's, what, racism? I don't know what to call it.
>
> Looks like the old AI goalpost-moving means we're going to have to stop
> doing cognitive tests, on anybody/thing. They're no use anymore!
>
> Ben
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>