[ExI] Symbol Grounding
Giovanni Santostasi
gsantostasi at gmail.com
Sat Apr 29 07:05:41 UTC 2023
*How the heck could a train of spikes produce a redness experience?*
But it does, like everything else in our brain. Why could a chemical do
that any better? I don't get it. A chemical is just a means of transmitting
information. The air we use to communicate by voice is not the critical
thing in communicating a message; it is just a medium. There is no special
characteristic of air that makes communication more meaningful. If
anything, it has many limitations and hindrances, but it is what we had
available as we evolved.
The spikes convey information; the experience is information that informs
itself. This is the real miracle of awareness, this self-loop. It is no
more mysterious than other things that simply are, like the repulsion of
two electrical charges: how is that done? That is what irritates me about
the qualia fanatics: they think that qualia deserve an explanation that can
somehow reproduce the experience in others (or it is not even clear what
they hope a suitable explanation would look like), but they never apply
this demand to other phenomena in the universe. They ask what it feels like
to be a bat but not what it feels like to be an electron. How one feels is
not science, and it should not be.
On Fri, Apr 28, 2023 at 11:23 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Fri, Apr 28, 2023 at 11:34 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, Apr 26, 2023 at 1:10 PM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On 26/04/2023 18:32, extropy-chat-request at lists.extropy.org wrote:
>>>
>>> On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> I wrote to you that in my opinion you were conflating linguistics and
>>>> neuroscience.
>>>>
>>>> Actually, you went further than that, arguing that linguistics is not
>>>> even the correct discipline. But you were supposedly refuting my recent
>>>> argument, which is entirely about what linguistics, the science of
>>>> language, can tell us about language models.
>>>>
>>>> -gts
>>>>
>>>>
>>>>
>>>> Yes, prior to my question, which has a point. But you are still
>>>> dodging it.
>>>>
>>>
>>> I simply have no interest in it.
>>>
>>>
>>> OK, then. That clears that up. You have no interest in even listening to
>>> someone else's argument, much less engaging with it. I get it.
>>>
>>
>> I explained that while your theory of spike trails in the brain and so on
>> is interesting, it tells me nothing about how a digital computer with no
>> brain and no nervous system and no sense organs or sensory apparatus
>> whatsoever can understand the meanings of words merely from analyzing how
>> they appear in relation to one another statistically in the corpus.
>>
>
> Ben. All spike trails or trains, or whatever, begin and end with
> neurotransmitters being dumped into a synapse, right? It seems to me that
> someone like Giovanni, who predicts that someone's knowledge of [image:
> red_border.png] is more likely to be spike trains than the quality of a
> chemical in a synapse, has no ability to understand or model the true
> nature of subjective qualities. How the heck could a train of spikes
> produce a redness experience? Just as functionalists can't provide a
> falsifiable "function" that would result in redness and still pass the
> laugh test, there is no hypothetical example of any train of spikes from
> which a redness experience would result. I bet you can't give me any
> example that would pass the laugh test.
>
>
>
>> The reality as I see it, and *as GPT-4 itself explains it*, is that it
>> does not truly understand the meanings of words. We all find that
>> amazing and difficult to believe, as the words appear meaningful to us
>> and sometimes even profoundly meaningful, but we as the end-users of
>> this technology are the ones finding/assigning the meanings to the
>> words. GPT-4 is merely generating symbols that it has a high degree of
>> confidence will have meaning to us.
>>
>
> I don't think I'd go this far. The fact that GPT-4 is "merely generating
> symbols that it has a high degree of confidence will have meaning to us"
> says, to me, that it has the ability to model exactly that meaning and to
> know what that meaning is. And its models must be very isomorphic to a
> lot of facts, both platonic and physical; otherwise it couldn't do what
> it is doing. True, there is a lot of meaning missing. But there is a lot
> of meaning that must be understood and modeled in some way, otherwise it
> couldn't do what it does.
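
For concreteness, the mechanism the last two paragraphs are arguing over is
next-token sampling: the model scores every symbol in its vocabulary and
emits one in proportion to that score. Below is a minimal sketch in Python;
the vocabulary and scores are invented for illustration and merely stand in
for GPT-4's actual weights, which are not public.

import math
import random

# Toy next-token sampler: a stand-in for the final step of a language model.
# A real model computes scores ("logits") over roughly 100,000 tokens from
# the entire preceding context; these four words and numbers are made up.
vocab = ["red", "blue", "apple", "meaning"]
logits = [2.0, 0.5, 1.0, 3.2]  # higher score = higher "confidence"

# Softmax turns the raw scores into a probability distribution.
total = sum(math.exp(x) for x in logits)
probs = [math.exp(x) / total for x in logits]

# Emit the next symbol in proportion to those probabilities.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)

Whether computing and sampling from such a distribution, at sufficient
scale, amounts to "understanding" the emitted symbols is exactly the point
in dispute above.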