[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem
gsantostasi at gmail.com
Mon Apr 17 20:38:39 UTC 2023
*Looks like the old AI goalpost-moving means we're going to have to
stop doing cognitive tests, on anybody/thing. They're no use anymore!*
My thoughts exactly.
On Mon, Apr 17, 2023 at 1:33 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 17/04/2023 20:45, Giovanni Santostasi wrote:
> > If you give GPT-4 the cognitive tests that are used on humans, it
> > performs comparably to humans at different levels of development,
> > depending on the task.
> Actually, that raises a rather disturbing thought: if Gordon and
> others, like the linguists he keeps telling us about, can dismiss these
> cognitive tests when applied to GPT-4, it means the tests can't be
> relied upon to tell us about the humans taking them, either. For a test
> to be any use, we have to treat the subject as a 'black box' and not
> make any assumptions about them; otherwise there's no point in doing
> the test. So presumably these people think that such tests are no use
> at all. Otherwise it's, what, racism? I don't know what to call it.
> Looks like the old AI goalpost-moving means we're going to have to stop
> doing cognitive tests, on anybody/thing. They're no use anymore!