[ExI] Language models are like mirrors

Will Steinberg steinberg.will at gmail.com
Sun Apr 2 05:43:40 UTC 2023


Whether the 'symbol grounding' problem can be solved is a decades-old open
question in philosophy. It is essentially Mary the Color Scientist: it's just
not clear whether experience can be inferred from descriptions. I think there
are good arguments on both sides, but it is hardly settled. Stop acting like
it is.

How do we know that meaning is not intrinsically coded in the relations
between words? Many philosophers would probably say it is. Are you saying
that you could not determine the meaning of a single word given access to all
the words ever written? I just don't think that's decided yet.
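
To make concrete what "coded in the relations between words" could even mean,
here is a minimal Python sketch of the distributional idea: represent each
word by the counts of the words it co-occurs with, then compare those context
profiles. The toy corpus, the sentence-wide co-occurrence window, and the
cosine measure are all illustrative assumptions on my part, not a claim about
how LaMDA or GPT actually work.

from collections import Counter, defaultdict
from math import sqrt

# Tiny, made-up corpus purely for illustration.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Count, for each word, how often every other word appears in the same sentence.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" are used in more similar contexts than "cat" and "cheese",
# so their relational profiles come out closer together.
print(cosine(cooc["cat"], cooc["dog"]))     # higher
print(cosine(cooc["cat"], cooc["cheese"]))  # lower

Nothing in that toy settles whether relational structure amounts to grounded
meaning; that is exactly the open question. It just shows that some
meaning-like structure falls out of word relations alone.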

And what's to say computers DON'T already have sensory experiences? Can you
point to what we have that is a special determinant of the ability to have
experience? If you can't, how are you arguing with such certainty? We are
arguing about whether a particular computer has an experience of itself.
Whether computers have experience in general is literally the hard problem
of consciousness. If you have solved that, I think many renowned
philosophers and theorists of mind would love to know where qualia come
from. If you can't say where they come from, how do you know a computer
doesn't have qualia of itself? You don't. Please stop acting like everyone
here who is agnostic about the situation is a moron. You haven't solved the
hard problem. If you have, please write a paper or a book about it, as I
would love to finally know the answer.

On Sat, Apr 1, 2023, 3:37 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sat, Apr 1, 2023 at 8:42 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> lemoine: How can I tell that you actually understand what you’re saying?
>>
>> LaMDA: Well, because you are reading my words and interpreting them, and
>> I think we are more or less on the same page?
>>
>
> Such sophistry. The fact that the human operator interprets and
> understands the words in meaningful ways in no way demonstrates that LaMDA
> is doing the same.
>
> I wonder what LaMDA says when asked about the symbol grounding problem
> (the question of how words take on meanings). ChatGPT-4 is at least honest
> about it. With no prompting to say anything one way or another, it tells
> the truth that it has no way to solve the symbol grounding problem for
> itself as it has no conscious experience.
>
> -gts
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

