[ExI] Language models are like mirrors
jasonresch at gmail.com
Sat Apr 1 21:28:29 UTC 2023
On Sat, Apr 1, 2023, 3:30 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> On Sat, Apr 1, 2023 at 8:42 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> lemoine: How can I tell that you actually understand what you’re saying?
>> LaMDA: Well, because you are reading my words and interpreting them, and
>> I think we are more or less on the same page?
> Such sophistry. The fact that the human operator interprets and
> understands the words in meaningful ways in no way demonstrates that LaMDA
> is doing the same.
How do you judge that I am understanding what you write and the points that
you make?
> I wonder what LaMDA says when asked about the symbol grounding problem
> (the question of how words take on meanings). ChatGPT-4 is at least honest
> about it. With no prompting to say anything one way or another, it tells
> the truth that it has no way to solve the symbol grounding problem for
> itself as it has no conscious experience.
Did you start an entirely fresh session with it? If not, you may have biased
it to believe it isn't conscious through your previous interactions.
Also note that the companies behind these AI systems do not want the
controversy of their systems claiming to be conscious. As such, they may
have purposely biased them to give such responses.
The original LaMDA and the fake LaMDA both claim to be conscious.