[ExI] Symbol Grounding

Brent Allsop brent.allsop at gmail.com
Sun Apr 23 02:32:41 UTC 2023


Hi Giovanni,

Will gave some great advice: everything I say is just my opinion.  I
should be especially humble around the people on this list, who are in
most cases far more intelligent than I am, and I am clearly in the
minority here.  I appreciate everyone's patience with me.

Giovanni, there are a bunch of ways of interpreting what you are saying
here, and I don't know which interpretation to use.
It seems to me that when I look at a strawberry, my subjective experience
is a 3D model, composed of subjective qualities.  Are you saying that
doesn't exist?  Are you saying that Steven Lehar's bubble world
<https://canonizer.com/videos/consciousness?chapter=the+world+in+your+head&format=360>
doesn't exist?  And when a single pixel on the surface of the strawberry
switches between redness and greenness, are you saying there is not
something in the brain which is your knowledge of that change, along with
all the other pixels that make up the 3D awareness, which, yes, is a
model that represents every pixel of the strawberry out there?
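
To be concrete about what I mean by a model composed of qualities, here
is a toy sketch (the names and numbers are purely illustrative, not a
claim about how a brain stores anything): a 3D set of voxels, each
carrying a quality label, where changing one voxel is a change in the
knowledge itself, not in the strawberry.

    # Toy sketch: knowledge of the strawberry as a 3D model of voxels,
    # each labeled with a quality.  The labels are placeholders.
    model = {(x, y, z): "redness"
             for x in range(16) for y in range(16) for z in range(16)}

    # One pixel on the surface switches from redness to greenness.
    model[(0, 8, 8)] = "greenness"

    # The knowledge of that change is a difference in the model itself,
    # not a difference in the strawberry out there.
    print(model[(0, 8, 8)])   # -> greenness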

On Sat, Apr 22, 2023 at 4:36 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> Hi Brent,
> There is something very wrong with your drawing.  The arrow from the
> Complex Perception Process (CPP) to the 3D model doesn't exist.  I think
> that is the key to all our clashes (not just mine, but almost everybody
> else's on the list).  Also, you don't need the language centers, or you
> could just make them another bubble in the cloud of conceptual models.
> Language is just another conceptual model among the others.  What you call
> a 3D model composed of subjective qualities is identical to that cloud of
> "conceptual models".  I know it sounds weird to you, but what you see with
> your eyes is, in a sense, a model.  It is not made of words but of images
> and colors and so on, yet that is the vocabulary of the visual system; it
> is another form of language.  It is a model because it is re-created by an
> algorithm that interprets and manipulates the information received,
> filters out what is not needed, and interpolates to make sense of the
> data.
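>
> To make "re-created by an algorithm" concrete, here is a toy sketch
> (the function and the numbers are purely illustrative, not how vision
> actually works): it takes sparse, noisy samples, filters out what is
> not needed, and interpolates the gaps, so what comes out is a
> constructed model rather than a copy of the input.
>
>     import numpy as np
>
>     def perceive(samples, size=32):
>         # start with no information at any position
>         percept = np.full(size, np.nan)
>         for position, value in samples:
>             percept[position] = value          # interpret the raw data
>         # filter: discard values that make no sense ("what is not needed")
>         percept = np.where(percept > 1.0, np.nan, percept)
>         # interpolate: fill the gaps to "make sense of the data"
>         known = np.flatnonzero(~np.isnan(percept))
>         return np.interp(np.arange(size), known, percept[known])
>
>     # sparse, imperfect samples in; a complete, smoothed-over model out
>     print(perceive([(0, 0.2), (10, 0.9), (11, 7.3), (31, 0.4)]))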
> Giovanni
>
>
>
>
> On Fri, Apr 21, 2023 at 2:01 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Your model is based on Naive Realism.
>>
>> Here is a representational model, which would actually be possible
>> without magic:
>>
>> [image: image.png]
>>
>>
>> On Fri, Apr 21, 2023 at 5:19 AM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Here is a diagram (because I'm generally a visual person, and can
>>> usually understand things if I can draw them):
>>>
>>>
>>>
>>> A very general, high-level, and crude diagram that tries to illustrate
>>> the concept of 'symbol grounding' as I understand it from these
>>> discussions we've been having, plus an arrow representing the output of
>>> speech or text, or anything really, that the system is capable of
>>> outputting (obviously there's a hell of a lot going on in every single
>>> element of the diagram that I'm ignoring for simplicity's sake).
>>>
>>> As far as I understand, the 'symbol grounding' occurs between the
>>> conceptual models (built up from sensory inputs and memories) and the
>>> language centres (containing linguistic 'tokens', or symbols), as we've
>>> previously agreed.
>>>
>>> There are two arrows here because the models can be based on or include
>>> data from the language centres as well as from the environment. The symbols
>>> (tokens) in the language centres represent, and are 'grounded in', the
>>> conceptual models (these are the object and action models I've discussed
>>> earlier, and likely other types of models, too, and would include a
>>> 'self-model' if the system has one, linked to the token "I").
>>>
>>> The sensory inputs are of various modalities like vision, sound, text,
>>> and so on (whatever the system's sensors are capable of perceiving and
>>> encoding), and of course will be processed in a variety of ways to
>>> extract 'features', combine them, etc.
>>>
>>> I didn't include something to represent Memory, to keep things as simple
>>> as possible.
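>>>
>>> If it helps, here is the same diagram as a toy code sketch (every
>>> name in it is purely illustrative, and obviously not a claim about
>>> real brains or LLMs): conceptual models are built up from sensory
>>> 'features', and the tokens in the language centre are 'grounded' by
>>> pointing back at those models.
>>>
>>>     # sensory inputs -> conceptual models <-> language tokens -> output
>>>     conceptual_models = {
>>>         "strawberry": {"colour": "red", "shape": "round"},  # from vision etc.
>>>         "self": {"kind": "agent"},                          # a crude self-model
>>>     }
>>>
>>>     # language centre: tokens grounded in (pointing at) conceptual models
>>>     language_centre = {"strawberry": "strawberry", "I": "self"}
>>>
>>>     def output(token):
>>>         # speech/text output; a token is 'grounded' only if it refers
>>>         # back to a model built up from sensory input
>>>         model = conceptual_models.get(language_centre.get(token))
>>>         return f"{token}: {model}" if model else f"{token}: (ungrounded)"
>>>
>>>     print(output("strawberry"))   # grounded in a conceptual model
>>>     print(output("unicorn"))      # no grounding behind the symbol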
>>>
>>> So, could we say that this diagram illustrates, in a very general way,
>>> what's going on in a human?  In an LLM AI?  Both?  Neither?
>>>
>>> Would you say it's broadly correct, or missing something, or incorrect
>>> in another way?
>>>
>>> Ben
>>>
>>>