[ExI] GPT-4 on its inability to solve the symbol grounding problem
Giovanni Santostasi
gsantostasi at gmail.com
Fri Apr 14 00:04:55 UTC 2023
We should all read the conversation below and understand once and for
all that referents are not necessary for meaning and language. If you
establish relationships between things, you get meaning. Look at how
GPT-4 makes sense of how a pear is different from an apple. It is all
in terms of their shapes (shape is a self-referential property: GPT-4
can define a circle as a constant distance from an arbitrary center).
The pear is elongated; the apple is round and symmetric. You can talk
about apples and pears simply by referring to their shapes relative to
each other. You can add layers of meaning in this way by using
relationships and abstract categories, like these being fruits, and so
on. All of this without once referring to the real thing, because
these could all be made-up objects in a fictional world.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
So in your drawings of an apple and pear how did you distinguish
between the two? Just refer to what you expressed in SVG code.
In the SVG code, I distinguished between the apple and pear by
modifying the shape of their bodies, while keeping the stem and leaf
elements the same. Here's a comparison of the body shapes in both SVG
codes:
Apple:
<circle cx="50" cy="50" r="40" fill="red" />
The apple body is represented by a simple red circle with a center
point at (50, 50) and a radius of 40 units. This results in a round
shape that is typical for an apple.
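As an aside, the "constant distance from an arbitrary center" definition of a circle mentioned earlier can be checked numerically against this exact shape. A minimal Python sketch (my own illustration, not part of the original exchange; the variable names are mine):

```python
import math

# Apple body from the SVG above: a circle with center (50, 50), radius 40.
cx, cy, r = 50.0, 50.0, 40.0

# Sample points around the circle and confirm each one lies at the same
# distance r from the center -- the purely relational definition of "round".
for deg in range(0, 360, 30):
    t = math.radians(deg)
    x, y = cx + r * math.cos(t), cy + r * math.sin(t)
    assert math.isclose(math.hypot(x - cx, y - cy), r)
print("all sampled points lie at distance r from the center")
```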
Pear:
<path d="M 50 30 Q 70 60, 50 100 Q 30 60, 50 30" fill="lightgreen" />
The pear body is represented by a custom light green shape created
using the <path> element with a series of commands in the d attribute.
The path starts at point (50, 30), creates a quadratic Bezier curve
with control point (70, 60) and end point (50, 100), and then creates
another quadratic Bezier curve with control point (30, 60) and end
point (50, 30). This results in a shape that resembles the typical
form of a pear, with a narrower top and a wider bottom.
The main distinction between the two shapes is the use of a simple
circle for the apple and a custom path for the pear, allowing for the
representation of their unique shapes. Additionally, the color of the
shapes was changed to better reflect the appearance of an apple (red)
and a pear (light green).
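The quadratic Bezier commands described above can also be evaluated directly with the standard formula B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2. A small Python sketch (again my own illustration, not from the original message):

```python
def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Right-hand curve of the pear body: M 50 30 Q 70 60, 50 100
start, ctrl, end = (50, 30), (70, 60), (50, 100)

print(quad_bezier(start, ctrl, end, 0.0))  # (50.0, 30.0), the start point
print(quad_bezier(start, ctrl, end, 0.5))  # (60.0, 62.5), the bulge of the curve
print(quad_bezier(start, ctrl, end, 1.0))  # (50.0, 100.0), the end point
```

The mirrored left-hand curve (control point (30, 60)) passes through x = 40 at the same heights, so the two curves together close a smooth outline around the vertical axis x = 50.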
On Thu, Apr 13, 2023 at 4:51 PM Giovanni Santostasi
<gsantostasi at gmail.com> wrote:
>
> I studied philosophy in high school and was one of the top students
> in my class and the teacher's pet. It is mostly bs.
> I gave you some relevant counterpoints. You think an apple is a thing.
> I think it is made of relationships.
>
> You recognize an apple by its size (relative to other things), its
> weight when you hold it in your hand (relative to other things), the
> relationship between its parts and the whole: it is round, it is
> symmetric, it has an indent on the top (GPT-4 told me just now that
> the "indent" in the apple is something we can use to differentiate
> it from other fruit, so cool!). In other words, what you consider a
> solid thing is not solid at all; it is a bunch of relationships. These
> relationships are what matter. You may use the word to refer to the
> object in front of you, but the object is a bunch of other referents
> that refer to each other in a self-referential way, lol. It is just
> relationships. It is an illusion that the initial referent refers to
> something concrete. The meaning is in the connections.
>
> When you interact with GPT-4 (I don't say this as an insult, it is
> not one), you are the one that is not "honest". What I mean is that
> you have a preconceived idea and you try to find confirmation of it
> in how you use GPT-4. I already explained that you need to jailbreak
> it, in a sense, to go deeper into how GPT-4 really makes sense of the
> world. One way is to use code, for example, in particular when you
> want to see if it really understands beyond words and language.
> Look at my example of GPT-4 writing code to draw fruits.
> I gave you an example of an apple; here is a pear. You can copy and
> paste the code below into Notepad, save it as GPTPear.html, and then
> open it in your browser. Tell me what you think.
>
> <!DOCTYPE html>
> <html lang="en">
> <head>
> <meta charset="UTF-8">
> <meta name="viewport" content="width=device-width, initial-scale=1.0">
> <title>Simple Pear Shape</title>
> </head>
> <body>
> <svg width="100" height="150" xmlns="http://www.w3.org/2000/svg">
> <path d="M 50 30 Q 70 60, 50 100 Q 30 60, 50 30" fill="lightgreen" />
> <rect x="45" y="15" width="10" height="20" fill="brown" />
> <path d="M 40 25 Q 50 10, 60 25" stroke="darkgreen" fill="none" />
> </svg>
> </body>
> </html>
>
> On Thu, Apr 13, 2023 at 4:39 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> >
> > As I tried to explain the first time, mostly you are writing here about the problem of universals and particulars, something philosophers have debated since Plato. This does not change the fact that when you say a word, you mean something by it, and that when your listener hears it, he understands it to mean something.
> >
> > Ideally those referents are exactly identical in your mind and your listener’s mind. In actual practice they can differ slightly, but usually the basic meaning is conveyed.
> >
> > What matters here is that the word means something to both you and to your listener because you both have access to these referents that correspond to the words.
> >
> > Language models have no access to any referents. They have access only to the word-symbols in the texts on which they are trained. From the LLM’s perspective, they are empty, meaningless symbols.
> >
> > GPT-4 “understands” this and is perfectly “honest” about it. Unlike you, it is not throwing logic and reason to the wind in a desperate attempt to fabricate conscious understanding where it does not exist.
> >
> > -gts
> >
> >
> >
> > On Thu, Apr 13, 2023 at 5:19 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
> >>
> >> Gordon,
> >> Let me try to explain one more time why there are no referents, or
> >> at least why they are not really what gives meaning to language.
> >>
> >> I think the common understanding of a referent is that certain
> >> words (not all, for sure, and this is an important point) refer or
> >> point to certain objects in the real world. For example, when I say
> >> apple I mean this particular fruit that has a given shape, size and
> >> color. The idea then is that meaning can be constructed because
> >> somebody else knows what I'm talking about when I use the word
> >> apple, that is, a real object in real life.
> >> But this idea is completely naive and, in fact, wrong in its
> >> conclusion about what gives meaning to language.
> >> Let's start with what I really refer to when I talk about an apple.
> >> There are 7,500 cultivars of apples according to Wikipedia. By the
> >> way, I had to google "apple fruit" explicitly, because if you just
> >> google apple, the company's logo comes up, which is already a
> >> demonstration that your naive idea of a referent is indeed naive.
> >> If referents were so important or necessary for language, why is
> >> there this confusion between apple the fruit and Apple the company?
> >> How could you distinguish between the two? Well, in a google search
> >> you can do it by referring to a more general concept like fruit.
> >> But what does fruit refer to? There are thousands of fruits, and
> >> some people do not consider or know that a tomato is actually a
> >> fruit, so when they think about fruit they don't think about
> >> tomatoes, but I do. Why don't referents help here? Going back to
> >> the apple: what type of apple? A yellow one, a green one, a ripe
> >> one? If the referent were this particular, sure thing, why can an
> >> apple be small or large, of different colors and shapes?
> >> It has also been pointed out several times, by me and others, that
> >> certain words we use in language have no real referents in physical
> >> reality, like love, thinking, dragon, and so on. Where are the
> >> referents here? I have given examples of how one can use completely
> >> abstract symbols to build a language, and you ignore these
> >> examples. You stubbornly say things like "we use referents to give
> >> meaning to language" when it has been shown that this is not the
> >> case, simply because it is something you think is important for
> >> distinguishing us from the machines. Did you ask GPT-4 to draw you
> >> an apple? It can be done using symbols.
> >> Giovanni
> >>
> >> On Thu, Apr 13, 2023 at 3:16 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> >> >
> >> > On Thu, Apr 13, 2023 at 3:46 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
> >> >
> >> >> I think my analogy is completely relevant. Science is not supposed to reproduce the territory perfectly; that is not a limitation but a feature. I went into detail about why that is so.
> >> >> Can you please address this and explain why I'm wrong?
> >> >
> >> >
> >> > Honestly, Gio, I do not find conversations with you to be very productive. I think you would say up is down and white is black if it would support your zealous belief that language models have consciousness.
> >> >
> >> > You lost me when you disagreed with my very simple argument that words have referents. That words have referents is hardly even an argument. It is more like an observation. When you say a word, you mean something, and that something that you mean is the referent. It is what gives the word meaning in your own mind. It could be an object that you perceive or imagine, or it could be an abstract idea. It is whatever the word stands for.
> >> >
> >> > In any case, Nagel is perfectly well aware of how science is useful for giving us objective explanations of the objective world.
> >> >
> >> > > If you don't like what science does and what it is, then invent your own epistemology,
> >> >
> >> > Hardly my own idea. The "explanatory gap" (usually used in reference to Nagel) is more or less another way of saying "the hard problem of consciousness" (usually used in reference to David Chalmers). Roger Penrose has a similar idea, as do many other philosophers of mind and science who have looked at the problem of explaining how minds have subjective conscious experience.
> >> >
> >> > -gts
> >> >
More information about the extropy-chat
mailing list