[ExI] Symbol Grounding

Will Steinberg steinberg.will at gmail.com
Sun Apr 23 02:08:26 UTC 2023


Brent, my man, you really gotta change the way you state your opinions. I
don't think you do it on purpose, but in a dialogue I'm of the opinion (and
it has been borne out through experience) that it goes a long way to use
phrases like "I think" and "I feel" when stating your opinion.  It doesn't
matter if you think you're correct; other people don't assume that, and when
you assume it in your speech it makes people less willing to listen to
you.  No hate, just a tip (I think).

On Fri, Apr 21, 2023, 5:01 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Your model is based on Naive Realism.
>
> Here is a representational model which will actually be possible without
> magic:
>
> [image: image.png]
>
>
> On Fri, Apr 21, 2023 at 5:19 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Here is a diagram (because I'm generally a visual person, and can usually
>> understand things if I can draw them):
>>
>> [image: 8vllDMs5s2lJQuKB.png]
>>
>> It's a very general, high-level and crude diagram that tries to illustrate
>> the concept of 'symbol grounding' as I understand it from these discussions
>> we've been having, plus an arrow representing output of speech or text, or
>> anything really, that the system is capable of outputting (obviously there's
>> a hell of a lot going on in every single element of the diagram, which I'm
>> ignoring for simplicity's sake).
>>
>> As far as I understand, the 'symbol grounding' occurs between the
>> conceptual models (built up from sensory inputs and memories) and the
>> language centres (containing linguistic 'tokens', or symbols), as we've
>> previously agreed.
>>
>> There are two arrows here because the models can be based on or include
>> data from the language centres as well as from the environment. The symbols
>> (tokens) in the language centres represent, and are 'grounded in', the
>> conceptual models (these are the object and action models I've discussed
>> earlier, and likely other types of models, too, and would include a
>> 'self-model' if the system has one, linked to the token "I").
>>
>> The sensory inputs are of various modalities like vision, sounds, text,
>> and so on (whatever the system's sensors are capable of perceiving and
>> encoding), and of course will be processed in a variety of ways to extract
>> 'features' and combine them in various ways, etc.
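>>
>> For concreteness, here's a rough Python sketch of the arrangement I'm
>> describing. It's purely illustrative; every name in it is invented for the
>> example, and it's nothing like a real implementation of grounding:
>>
>> from dataclasses import dataclass, field
>>
>> @dataclass
>> class ConceptualModel:
>>     # An object/action/self model built up from sensory features (and language).
>>     name: str
>>     features: dict = field(default_factory=dict)
>>
>> @dataclass
>> class LanguageCentre:
>>     # Holds linguistic tokens; a token is 'grounded' by a link to a conceptual model.
>>     grounding: dict = field(default_factory=dict)  # token -> ConceptualModel
>>
>>     def ground(self, token, model):
>>         self.grounding[token] = model
>>
>>     def output(self, token):
>>         # Speech/text output: a token means something only if it's grounded.
>>         model = self.grounding.get(token)
>>         if model is None:
>>             return f"'{token}' is ungrounded"
>>         return f"'{token}' is grounded in the {model.name} model"
>>
>> # Sensory input (any modality) is processed into features, which build the models.
>> apple = ConceptualModel("apple", {"colour": "red", "shape": "round"})
>> me = ConceptualModel("self", {"kind": "the system's model of itself"})
>>
>> lang = LanguageCentre()
>> lang.ground("apple", apple)
>> lang.ground("I", me)            # the token "I" linked to the self-model
>> print(lang.output("apple"))
>> print(lang.output("unicorn"))   # a token with no grounding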
>>
>> I didn't include something to represent Memory, to keep things as simple
>> as possible.
>>
>> So, could we say that this diagram illustrates, in a very general way,
>> what's going on in a human? In an LLM AI? Both? Neither?
>>
>> Would you say it's broadly correct, or missing something, or incorrect in
>> another way?
>>
>> Ben
>>
>>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 8vllDMs5s2lJQuKB.png
Type: image/png
Size: 44150 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230422/90e40419/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 41220 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230422/90e40419/attachment-0003.png>

