[ExI] GPT-4 on its inability to solve the symbol grounding problem
gsantostasi at gmail.com
Thu Apr 13 23:23:23 UTC 2023
GPT-4 and apples.
I asked GPT-4 to draw an apple. It has never seen an apple, and it was
trained only on text (at least the version we have access to). And yet
it was able to draw an apple.
How is this possible? Because an apple is also a bunch of
relationships. That is the other thing I wanted to say in my previous
discussion of referents. How do we recognize that an apple is an
apple? Exactly the way GPT-4 does it: by understanding the
relationship between the whole and the parts. Where are the referents
in this process? Nowhere. It is the relationships that count; let's
cut the Gordon knot once and for all!
Certainly! My intention was to create a very simplistic representation
of an apple using basic SVG elements. The SVG code consists of three
elements:
<circle cx="50" cy="50" r="40" fill="red" />: This element creates a
red circle that represents the body of the apple. The cx and cy
attributes define the center point of the circle, while the r
attribute defines the radius. In this case, the circle has a center at
(50, 50) and a radius of 40 units.
<rect x="45" y="15" width="10" height="20" fill="brown" />: This
element creates a brown rectangle that represents the stem of the
apple. The x and y attributes define the top-left corner of the
rectangle, while the width and height attributes define its
dimensions. In this case, the rectangle has a top-left corner at (45,
15), a width of 10 units, and a height of 20 units.
<path d="M 40 25 Q 50 10, 60 25" stroke="green" fill="none" />: This
element creates a green curved line that represents the leaf of the
apple. The d attribute contains a series of commands that define the
path of the curve. In this case, the path starts at point (40, 25)
with the M 40 25 command, then creates a quadratic Bezier curve with
control point (50, 10) and end point (60, 25) using the Q 50 10, 60 25
command. The stroke attribute sets the color of the curve, while the
fill attribute is set to "none" to ensure that the area enclosed by
the path is not filled with any color.
The combination of these three elements forms a very simplistic
representation of an apple. Note that this is a basic example and may
not accurately depict a real apple's shape, but it can be used as a
starting point for further refinement or customization.
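Putting the three elements described above together gives the complete
drawing. The enclosing <svg> root element, its xmlns attribute, and the
100-by-100 viewBox are assumptions for illustration (the original
message did not include the full markup, only the three child
elements), but the circle, rect, and path lines are exactly as
described:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <!-- body of the apple: red circle centered at (50, 50), radius 40 -->
  <circle cx="50" cy="50" r="40" fill="red" />
  <!-- stem: brown rectangle with top-left corner (45, 15), 10 x 20 units -->
  <rect x="45" y="15" width="10" height="20" fill="brown" />
  <!-- leaf: green quadratic Bezier curve from (40, 25) to (60, 25),
       with control point (50, 10); fill="none" leaves it unfilled -->
  <path d="M 40 25 Q 50 10, 60 25" stroke="green" fill="none" />
</svg>
```

Saved as a .svg file, this should render in any browser; elements are
painted in document order, so the stem and leaf appear on top of the
circle.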
Here is the final result. Look, an apple (attached).
On Thu, Apr 13, 2023 at 4:18 PM Giovanni Santostasi
<gsantostasi at gmail.com> wrote:
> Let me try to explain one more time why there are no referents, or
> at least why they are not really what gives meaning to language.
> I think the common understanding of a referent is that certain words
> (not all, for sure, and this is an important point) refer or point to
> certain objects in the real world. For example, when I say apple I
> mean this particular fruit that has a given shape, size, and color.
> The idea then is that meaning can be constructed because we both know
> what I'm talking about when I refer to the word apple, that is, a
> real object in real life.
> But this idea is completely naive and, in fact, wrong in its
> conclusion about what gives meaning to language.
> Let's start with what I really refer to when I talk about an apple.
> There are 7,500 cultivars of apples according to Wikipedia. By the
> way, I had to explicitly google "apple fruit" because if you just
> google "apple", the company's logo comes up, which is already a
> demonstration that your naive idea of referents is indeed naive. If
> referents were so important or necessary for language, why is there
> this confusion between apple the fruit and Apple the company?
> How could you distinguish between the two? Well, in a Google search
> you can do it by referring to a more general concept like fruit. But
> what does fruit refer to? There are thousands of fruits, and some
> people do not consider or know that a tomato is actually a fruit, so
> when they think about fruit they don't think about tomatoes, but I
> do. Why don't referents help here? Going back to the apple: what type
> of apple? A yellow one, a green one, a ripe one? If the referent were
> this particular sure thing, why can an apple be small or large, and
> of different colors and shapes?
> It has also been pointed out several times, by me and others, that
> certain words we use in language have no real referents in physical
> reality, like love, thinking, dragon, and so on. Where are the
> referents here? I have given examples of how one can use completely
> abstract symbols to build a language, and you ignore these examples.
> You stubbornly say things like "we use referents to give meaning to
> language" when it has been shown that this is not the case, simply
> because it is something you think is important to distinguish us from
> the machines. Did you ask GPT-4 to draw you an apple? It can be done
> using SVG code.
> On Thu, Apr 13, 2023 at 3:16 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> > On Thu, Apr 13, 2023 at 3:46 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
> >> I think my analogy is completely relevant. Science is not supposed to reproduce perfectly the territory, it is not a limitation but a feature. I went into detail about why it is so.
> >> Can you please address this and explain why I'm wrong?
> > Honestly, Gio, I do not find conversations with you to be very productive. I think you would say up is down and white is black if it would support your zealous belief that language models have consciousness.
> > You lost me when you disagreed with my very simple argument that words have referents. That words have referents is hardly even an argument. It is more like an observation. When you say a word, you mean something, and that something that you mean is the referent. It is what gives the word meaning in your own mind. It could be an object that you perceive or imagine, or it could be an abstract idea. It is whatever the word stands for.
> > In any case, Nagel is perfectly well aware of how science is useful for giving us objective explanations of the objective world.
> > > If you don't like what science does and what it is, then invent your own epistemology,
> > Hardly my own idea, the "explanatory gap" (usually used in reference to Nagel) is more or less another way of saying "the hard problem of consciousness" (usually used in reference to David Chalmers). Roger Penrose has a similar idea as do many other philosophers of mind and science who have looked at the problem of explaining how minds have subjective conscious experience.
> > -gts
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 4207 bytes
Desc: not available