On 02/04/2023 18:46, Gordon Swobe wrote:
<blockquote type="cite"
cite="mid:mailman.141.1680457569.847.extropy-chat@lists.extropy.org">
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px">Jason, </span></font></div>
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px"><br>
</span></font></div>
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px">Presumably
ChatGPT-4 has processed most if not all the literature on AI
and Linguistics Epistemology and philosophy in general. Why
does it say this about its own limitations compared to
humans?</span></font></div>
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px"><br>
</span></font></div>
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px">—</span></font></div>
<div dir="auto"><span
style="color:rgb(117,117,117);font-family:-apple-system,"Helvetica
Neue";word-spacing:1px">ChatGPT-4: The symbol grounding
problem refers to the difficulty in connecting abstract
symbols (such as words or concepts) to their real-world
referents. While ChatGPT can simulate understanding by
generating human-like text based on its training data, it
doesn't possess the direct experience or sensory input that
humans have to ground these symbols in reality.</span><br>
</div>
>
> The significance of this limitation has several implications:
>
> Lack of true understanding: ChatGPT doesn't have a genuine understanding of the world, as it relies on patterns and associations found in the text it was trained on. It can't form new connections based on direct experience, which may result in limitations in problem-solving or abstract reasoning.
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px">—-</span></font></div>
<div dir="auto"><font face="-apple-system, Helvetica Neue"
color="#757575"><span style="word-spacing:1px"><br>
</span></font></div>
<div dir="auto"><span
style="color:rgb(117,117,117);font-family:-apple-system,"Helvetica
Neue";word-spacing:1px">[it continues with more
implications of this limitation, but this lack of true
understanding is the first and in my view most important]</span></div>
</blockquote>

I don't know if someone else has already noted this (I'm still catching up on the recent flood of posts), but don't you consider it ironic that you are using the system's own apparent understanding of itself to show that it doesn't understand things?

Ben