[ExI] Symbol Grounding

Brent Allsop brent.allsop at gmail.com
Sat Apr 29 06:21:17 UTC 2023


On Fri, Apr 28, 2023 at 11:34 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Wed, Apr 26, 2023 at 1:10 PM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On 26/04/2023 18:32, extropy-chat-request at lists.extropy.org wrote:
>>
>> On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> I wrote to you that in my opinion you were conflating linguistics and
>>> neuroscience.
>>>
>>> Actually, you went further than that, arguing that linguistics is not
>>> even the correct discipline.  But you were supposedly refuting my recent
>>> argument which is entirely about what linguistics — the science of language
>>> — can inform us about language models.
>>>
>>> -gts
>>>
>>>
>>>
>>> Yes, prior to my question. Which has a point. But you are still dodging
>>> it.
>>>
>>
>> I simply have no interest in it.
>>
>>
>> OK, then. That clears that up. You have no interest in even listening to
>> someone else's argument, much less engaging with it. I get it.
>>
>
> I explained that while your theory of spike trails in the brain and so on
> is interesting, it tells me nothing about how a digital computer with no
> brain and no nervous system and no sense organs or sensory apparatus
> whatsoever can understand the meanings of words merely from analyzing how
> they appear in relation to one another statistically in the corpus.
>

Ben.  All spike trails, or trains, or whatever, begin and end with
neurotransmitters being dumped into a synapse, right?  Seems to me that
someone who predicts that someone's knowledge of [image: red_border.png]
is more likely to be spike trains than the quality of a chemical in a
synapse, like Giovani, has no ability to understand or model the true
nature of subjective qualities.  How the heck could a train of
spikes produce a redness experience?  Just as functionalists can't
provide a falsifiable "function" that would result in redness and still
pass the laugh test, there is no hypothetical example of any train of
spikes from which a redness experience would result.  I bet you can't give
me any example that would pass the laugh test.



> The reality as I see it, and *as GPT-4 itself explains it*, is that it does
> not truly understand the meanings of words. We all find that amazing and
> difficult to believe as the words appear meaningful to us and sometimes
> even profoundly meaningful, but we as the end-users of this technology are
> the ones finding/assigning the meanings to the words. GPT-4 is merely
> generating symbols that it has a high degree of confidence will have
> meaning to us.
>

I don't think I'd go this far.  The fact that GPT-4 is "merely generating
symbols that it has a high degree of confidence will have meaning to us,"
to me, says it has the ability to model exactly that meaning, and to know
what that meaning is.  And its models must be very isomorphic to a lot of
facts, both platonic and physical; otherwise, it couldn't do what it is
doing.  True, there is a lot of meaning missing.  But there is a lot of
meaning that must be understood and modeled in some way, otherwise it
couldn't do what it does.
