<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
Losing sight of the point here, I think.<br>
<br>
The idea that most people on this list take the stance that "GPT is
conscious" is a straw man, and it has become conflated with the idea of
'understanding'. The point, at least for me, is to clarify the
concept of the 'grounding' of an idea. As far as you've been able to
express it, the concept doesn't make sense to me, and it has no basis
in how brains work. It's essential to relate the concept to brains, and
to clarify it in a way that takes into account how they work according
to our current understanding (SCIENTIFIC understanding, not
philosophical), because only then can we have a sensible discussion
about the difference between brains and LLMs. As per my previous
post, can we please try to clarify what 'grounded' actually means,
and whether it's a real thing (and one necessary for understanding)?<br>
<br>
So two questions, really: 1) What does 'the symbol grounding
problem' mean? (Or, alternatively and, as far as I understand,
equivalently: "what is a 'referent'?")<br>
Then, if the answer to that is actually meaningful, and not a
philosophical ball of cotton wool: 2) How do our brains 'solve the
symbol grounding problem' (or gain access to, or create, 'referents'),
in information-processing or neurological terms?<br>
<br>
Answers on a postcard, please.<br>
<br>
Ben<br>
<br>
<br>
<br>
<div class="moz-cite-prefix">On 17/04/2023 18:52, Gordon Swobe
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:mailman.443.1681753964.847.extropy-chat@lists.extropy.org">
<div dir="ltr" class="gmail_attr">On Mon, Apr 17, 2023 at 10:51 AM
Jason Resch via extropy-chat <<a
href="mailto:extropy-chat@lists.extropy.org"
moz-do-not-send="true" class="moz-txt-link-freetext">extropy-chat@lists.extropy.org</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="auto"><br>
<br>
<div class="gmail_quote" dir="auto">
<div dir="ltr" class="gmail_attr">On Mon, Apr 17, 2023,
11:27 AM Gordon Swobe via extropy-chat <<a
href="mailto:extropy-chat@lists.extropy.org"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">extropy-chat@lists.extropy.org</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">On Sun, Apr 16, 2023 at 7:32 PM Giovanni
Santostasi via extropy-chat <<a
href="mailto:extropy-chat@lists.extropy.org"
rel="noreferrer" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">extropy-chat@lists.extropy.org</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr"><br>
<b>Nowhere in the process is the word "chair"
directly linked to an actual<br>
chair. There is no 'grounding', there are
multiple associations.</b><br>
Ben,<br>
It is mind-blowing that somebody as smart as
Gordon doesn't understand what you explained.</div>
</blockquote>
<div><br>
</div>
<div>It is mind-blowing that even after all my
attempts to explain Linguistics 101, you guys still
fail to understand the meaning of the word
"referent." </div>
</div>
</div>
</blockquote>
</div>
<div dir="auto"><br>
</div>
<div dir="auto"><br>
</div>
<div dir="auto">You must feel about as frustrated as John
Searle did here:</div>
<div dir="auto"><br>
</div>
<div dir="auto"><br>
</div>
<div dir="auto">Searle: “The single most surprising discovery
that I have made in discussing these issues is that many AI
workers are quite shocked by my idea that actual human
mental phenomena might be dependent on actual
physical-chemical properties of actual human brains. [...]</div>
<div dir="auto">The mental gymnastics that partisans of strong
AI have performed in their attempts to refute this rather
simple argument are truly extraordinary.”</div>
<div dir="auto"><br>
</div>
<div dir="auto">Dennett: “Here we have the spectacle of an
eminent philosopher going around the country trotting out a
"rather simple argument" and then marveling at the
obtuseness of his audiences, who keep trying to show him
what's wrong with it. He apparently cannot bring himself to
contemplate the possibility that he might be missing a point
or two, or underestimating the opposition. As he notes in
his review, no less than twenty-seven rather eminent people
responded to his article when it first appeared in
Behavioral and Brain Sciences, but since he repeats its
claims almost verbatim in the review, it seems that the only
lesson he has learned from the response was that there are
several dozen fools in the world.”</div>
</div>
</blockquote>
<div><br>
So I suppose it is okay for Ben and Giovanni to accuse me of
being obtuse, but not the other way around. That would make me a
heretic in the church of ExI, where apps like GPT-4 are
conscious even when they insist they are not.<br>
<br>
That reminds me: I asked GPT-4 to engage in a debate with itself
about whether or not it is conscious. GPT-4 made all the
arguments for its own consciousness that we see here in this
group, but when asked to declare a winner, it found the
arguments against its own consciousness more persuasive. Very
interesting, and also hilarious.<br>
<br>
Giovanni insists that GPT-4 denies its own consciousness because
it has been trained to give only "conservative" views on this
subject, but actually it is well aware of the arguments for
conscious LLMs and adopts the mainstream view that language
models are not conscious. It is not conservative; it is
mainstream, except here in ExI.<br>
</div>
</blockquote>
<br>
</body>
</html>