[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 13 03:45:17 UTC 2023


Brent,
What is your evidence for such statements? Is there an experiment, not a
thought experiment that follows your made-up rules (like the color
converter, which has serious problems from a logical and scientific point
of view that I don't want to discuss here)? Can you find a paper, a real
experiment in neuroscience, that supports this statement:


*But there is no way you can communicate to someone what redness is like,
with text alone.*
What do you mean here by "communicate"? Is it my analogy about touching a
blue line on a map and saying the blue line is not wet?
Is that what you really mean?
In other words, do you mean that if I describe to you what happens in my
brain, or any other brain, what sequences of events, what relationships
between neurons, what neural code represents my perception of red, you are
not going to see red in your head?

If that is what you mean, do you realize how absurd an idea this is?
1) That is not what science is about. Science is not supposed to make you
feel red; it is supposed to make you understand what is fundamental about
this phenomenon of red. Science's job is to provide simplifications,
abstractions, maps, and models. This simplification is not a BUG but a
FEATURE. It is what gives science its power.
2) The usefulness of making a model is that you can carry the map in your
pocket, so to speak, bring it with you to another location, and
communicate everything essential (for whatever purpose) to somebody else
who has never been in that place. Yes, they are not going to experience
the landscape as if they were there, but that is not the point at all.
If we use the analogy of a blueprint instead of a map, I can recreate a
car or a building using the blueprint, and if somebody comes by, points
to the blueprint, and says "but this engine doesn't move," you will think
that person is crazy. If you want to ride in the car, let me build it from
the blueprint and then you can do that.

So your statement above is both crazy and obvious at the same time.
Science is not in the business of making you feel the original thing being
described. It is in the opposite business: it tries to abstract the
essential parts, which are mostly relational, how things are related to
each other. This is also how science can abstract away even the original
form of something. Think about how we abstracted the meaning of flight
away from birds. It is not about the feathers and the flapping wings but
the principles of aerodynamics. You can create a flying machine using
these principles, which are related to, but not in a 1-to-1 correspondence
with, how birds solved the problem of flight.
By the way, this also happens in nature. Think about how many living
beings independently rediscovered, over the course of evolution, sight,
camouflage, hydrodynamics, and photosynthesis.
Think about DNA.
Yes, think about DNA. Does DNA make you see my redness? No, but my redness
was somehow contained in the DNA as code. You can use the DNA to build a
Giovanni who will then experience red. And if you understand where in the
DNA the redness is represented, then you can use that information to
understand everything there is to understand about Giovanni's redness from
a scientific point of view.

In writing this down I think I may have understood an issue that could
give rise to some of your thinking: the idea of computational
irreducibility, which Wolfram developed. All phenomena in reality are a
sort of code, but in some instances you cannot predict the result of the
code in advance. You need to run the code to know what the result is.
Maybe this is something you have in mind when you talk about this business
of redness. I suspect you are thinking something like that, but you are
expressing it in a way that is not easy to understand, or that causes a
lot of confusion. If so, it is still code, but you raise an important and
relevant issue about computation: some computations are so complex that
they are irreducible. I'm OK with qualia being irreducible computations.
Maybe that is the only scientifically meaningful way to think about them.
Here is a summary of this issue by Wolfram himself:

https://writings.stephenwolfram.com/2021/09/charting-a-course-for-complexity-metamodeling-ruliology-and-more/
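As a toy illustration of what "you need to run the code to know the
result" means, here is a minimal sketch using Wolfram's Rule 30 elementary
cellular automaton. The starting configuration, number of steps, and
helper name are my own choices for illustration, not anything from the
thread or from Wolfram's essay:

# A toy sketch of computational irreducibility: Wolfram's Rule 30
# elementary cellular automaton. No known closed-form shortcut predicts
# its cells far in the future; the only way to know row N is to actually
# compute rows 1..N-1.

def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells, letting it grow."""
    padded = [0, 0] + cells + [0, 0]
    new_row = []
    for i in range(1, len(padded) - 1):
        left, center, right = padded[i - 1], padded[i], padded[i + 1]
        # Rule 30: new cell = left XOR (center OR right)
        new_row.append(left ^ (center | right))
    return new_row

row = [1]  # start from a single "on" cell
for step in range(200):
    row = rule30_step(row)
print(sum(row), "cells are on after 200 steps")  # known only by running it

The point is the same one Wolfram makes in the linked essay: the
description (the rule) is tiny and fully known, yet the outcome is only
available by executing it, which is the sense in which qualia could be
code and yet irreducible.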

On Wed, Apr 12, 2023 at 6:37 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Jason,
>
> On Wed, Apr 12, 2023 at 8:07 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Thus the simulation, like the isomorphic graph, by preserving all the
>> same relationships recovers all the same properties. If the glutamate
>> molecule possesses redness, then the perfect simulation of glutamate will
>> possess redness too.
>>
>
> ALL of our objective observations of physics can be fully described with
> abstract text.
> All of that which you could simulate, can also be described with abstract
> text.
>
> But there is no way you can communicate to someone what redness is like,
> with text alone.
> You MUST have pictures, to produce the subjective experience, before
> someone can know what redness is like.
>
> There must be certain stuff in the brain which can be computationally
> bound, which produces something beyond, what can be described via abstract
> text.
> You can abstractly describe all of it, you can objectively observe all of
> it with our senses, and you can abstractly simulate all of that.
> But until it is physically computationally bound with the rest of our
> consciousness, you can't know the true quality you are only abstractly
> describing and simulating.
>
> In other words, just as abstract text can't communicate the nature of
> qualities, an abstract simulation can't produce anything more than
> abstract text can describe.
> At least, that is what I predict.
>
>
>