[ExI] Symbol Grounding
gordon.swobe at gmail.com
Fri Apr 28 17:33:19 UTC 2023
On Wed, Apr 26, 2023 at 1:10 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 26/04/2023 18:32, extropy-chat-request at lists.extropy.org wrote:
> On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> I wrote to you that in my opinion you were conflating linguistics and
>> neuroscience.
>> Actually, you went further than that, arguing that linguistics is not
>> even the correct discipline. But you were supposedly refuting my recent
>> argument, which is entirely about what linguistics, the science of
>> language, can tell us about language models.
>> Yes, prior to my question. Which has a point. But you are still dodging it.
> I simply have no interest in it.
> OK, then. That clears that up. You have no interest in even listening to
> someone else's argument, much less engaging with it. I get it.
I explained that while your theory of spike trains in the brain and so on
is interesting, it tells me nothing about how a digital computer, with no
brain, no nervous system, and no sensory apparatus whatsoever, can
understand the meanings of words merely from analyzing how they appear in
relation to one another statistically in the corpus.
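To be concrete about what I mean by "analyzing how they appear in relation
to one another statistically," here is a minimal sketch in Python of the
distributional idea: count which words co-occur with which in a corpus, then
compare words by the company they keep. The toy corpus, window size, and
similarity measure are illustrative assumptions on my part, not a description
of how GPT-4 is actually built.

from collections import Counter, defaultdict
import math

# Toy corpus, purely for illustration; nothing like real training data.
corpus = "the cat sat on the mat the dog sat on the rug".split()

WINDOW = 2  # co-occurrence window size, an arbitrary choice

# Count how often each word appears near each other word.
cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1

def cosine(a, b):
    """Cosine similarity between two co-occurrence count vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" come out similar purely because they occur in
# similar contexts; no sensory grounding is involved anywhere.
print(cosine(cooc["cat"], cooc["dog"]))

On this toy data "cat" and "dog" score as highly similar, and that is the
whole trick: similarity of contexts, with nothing anywhere that touches the
world the words are about.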
The reality as I see it, and *as GPT-4 itself explains it*, is that it does
not truly understand the meanings of words. We all find that amazing and
difficult to believe, because the words appear meaningful to us, sometimes
even profoundly meaningful, but we, the end users of this technology, are
the ones assigning the meanings to the words. GPT-4 is merely generating
symbols that it has a high degree of confidence will have meaning to us.
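That last point about confidence is just next-token probability: the model
produces a distribution over candidate symbols and emits a likely one,
without ever checking what any symbol means. A toy sketch, with invented
probabilities standing in for a real model's output:

import random

# Hypothetical probabilities a language model might assign to the
# next token after "The cat sat on the" -- numbers are made up.
next_token_probs = {"mat": 0.62, "rug": 0.21, "floor": 0.12, "moon": 0.05}

# Sample in proportion to the model's confidence. The model never
# consults what any of these words mean; it only ranks likelihoods.
tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights, k=1)[0])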