[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Thu Apr 13 22:06:58 UTC 2023


On Thu, Apr 13, 2023 at 3:14 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 13, 2023 at 4:23 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> But recently it's been shown, somewhat technically, how for certain
>> complex recursive systems, these first person properties naturally emerge.
>> This happens without having to add new neuroscience, physics, or math, just
>> applying our existing understanding of the mathematical notion of
>> incompleteness.
>>
>> See: https://www.eskimo.com/~msharlow/firstper.htm
>>
>
> Thank you for this. I have spent several hours studying this paper. As you
> say, it is somewhat technical. I used GPT-4 as a research partner (a
> fantastic tool even if it has no idea what it is saying).
>

Great idea. I'll have to try that.


> I conclude that while it is interesting and might aid in understanding the
> brain and mind, and how subjectivity works in objective terms, it does not
> overcome the explanatory gap. Even if this author is correct on every
> point, it is still the case that for a reductionist account of
> consciousness like this to be successful, it must provide an explanation of
> how subjective experience arises from objective processes.
>

Indeed. I don't know if it was the author's goal to fully answer that
problem (which I think might require a different answer for each possible
mind). Rather, I think he was trying to show that the reductionists who
deny there is any room for subjectivity, as well as those who say the
existence of subjectivity proves the mind can't be described objectively,
are both missing an important piece of the puzzle: namely, that objective
processes can have properties which are not accessible to external
analysis. This understanding, if accepted as true (if it is true), should
close the gap between computationalists and weak-AI theorists (such as
Searle).

The author's examples were hard to follow, but I think I can come up with a
simpler example:
Imagine a machine that searches for a counterexample to Goldbach's
conjecture <https://en.wikipedia.org/wiki/Goldbach%27s_conjecture> (an even
number > 2 that's not the sum of two primes) and, once it finds one, turns
itself off. The program that does this can be defined in just four or five
lines of code; its behavior is incredibly simple when looked at
objectively. But let's say we want to know: does this machine have the
property that it runs forever? We have no way to determine this objectively
given our present mathematical knowledge (since it is unknown, and may not
be provable under existing mathematical theories, whether any such
counterexample exists). So even if we know everything there is to know
objectively about this simple machine and its simple program, there remain
truths and properties about it which lie beyond our capacity to determine.

Example code below:
Step 1: Set X = 4
Step 2: Set R = 0
Step 3: For each Y from 1 to X, if both Y and (X - Y) are prime, set R = 1
Step 4: If R = 1, Set X = X + 2 and go to Step 2
Step 5: If R = 0, print X and halt
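
For anyone who wants to run it, here is the same search in Python (my own
translation of the five steps above; the is_prime helper is an addition of
mine, using simple trial division):

def is_prime(n):
    # Trial division: n is prime if no d in [2, sqrt(n)] divides it.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

x = 4                                   # Step 1
while True:
    r = 0                               # Step 2
    for y in range(1, x + 1):           # Step 3
        if is_prime(y) and is_prime(x - y):
            r = 1
    if r == 1:                          # Step 4
        x += 2
    else:                               # Step 5: counterexample found
        print(x)
        break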

Note that around the year 2000, $1,000,000 was offered to anyone who could
prove or disprove Goldbach's conjecture. This is equivalent to determining
whether or not the above program ever reaches step 5. It's an incredibly
simple program, yet no one in the world was able to figure out whether it
ever gets to step 5. So we arguably have a definite property of the
program (it either halts or it doesn't) which is inaccessible to us even
when we know everything there is to know about the code itself.
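
Of course, nothing stops us from running a bounded version of the search;
we just learn nothing about halting from it. A sketch (the cutoff is an
arbitrary choice of mine; the conjecture has been verified computationally
far beyond it):

def is_prime(n):  # same trial-division helper as above
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

CUTOFF = 10_000  # arbitrary bound, for illustration only
for x in range(4, CUTOFF + 1, 2):
    if not any(is_prime(y) and is_prime(x - y) for y in range(2, x - 1)):
        print("Counterexample:", x)  # never reached below the cutoff
        break
else:
    print("No counterexample below", CUTOFF)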

I think "What is it like" questions concerning other people's quality are
also inaccessible in the same sense, even when we know every neuron in
their brain, the what is it like property of their subjectivity, is
unknowable to we who are not them (and who have different brains from the
person whose subjectivity is in question). It's much like two different
mathematical systems being able to prove, or not probe, certain things
about the other or themselves. If you had System A, and System B, A could
prove "This sentence cannot consistently be proved by B", but B could not
prove that. Likewise, I can consistently accept as true the sentence:

"People named Gordon Swobe cannot consistently believe this sentence is
true."

Others, not named Gordon Swobe, can also consistently believe it is true,
but anyone named Gordon Swobe cannot. This is only to illustrate that from
different vantage points (different conscious minds, or different
mathematical systems), certain things are not knowable or provable. This
opens the door to there being subjective truths for one subject which
remain unknowable or unprovable to those who are not that subject.

>
> I hope this paper might show that we can keep our inaccessible,
>> irreducible, real first person properties *and* have a rational description
>> of the brain and it's objectively visible behavior. We don't have to give
>> up one to have the other.
>>
>
> I suppose the real question is about one *or* the other. If the latter
> does not explain the former, then I would say it is incomplete, and I think
> it is.
>

I can agree with that; there's still a lot more to answer. This was just a
demonstration that those two views can be compatible.


>
> I would like to revisit a topic we discussed when I first (re)-entered
> this forum a few weeks ago:
>
> You were making the argument that because GPT can "understand" English
> words about mathematical relationships and translate them into the language
> of mathematics and even draw diagrams of houses and so on, that this was
> evidence that it had solved the grounding problem for itself with respect
> to mathematics. Is that still your contention?
>

I wouldn't say that it *solved* the symbol grounding problem. It would be
more accurate to say it demonstrates that it has *overcome* the symbol
grounding problem. It shows that it has grounded the meaning of English
words in objective mathematical structures (which is about as far down as
anything can be grounded). So it is no longer trading symbols for symbols;
it is converting symbols into objective mathematical structures (such as
connected graphs).
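
To make "objective mathematical structure" a bit more concrete, here is a
toy illustration in Python (my own, and not a claim about GPT-4's
internals): the word "triangle" traded for a structure rather than for
another word.

# The word "triangle" mapped to a connected graph: three vertices,
# each pair joined by an edge (the complete graph K3).
triangle = {frozenset(edge) for edge in [(1, 2), (2, 3), (3, 1)]}
square = {frozenset(edge) for edge in [(1, 2), (2, 3), (3, 4), (4, 1)]}

# Structural facts can now be read off the object itself rather than
# looked up as more text, e.g. the edge counts:
print(len(triangle), len(square))  # prints: 3 4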


> My thought at the time was that you must not have the knowledge to
> understand the problem, and so I let it go, but I've since learned that you
> are very intelligent and very knowledgeable. I am wondering how you could
> make what appears, at least to me, an obvious mistake.
>
> Perhaps you can tell me why you think I am mistaken to say you are mistaken.
>
>
My mistake is not obvious to me. If it is obvious to you, can you please
point it out?

Jason
