[ExI] Language models are like mirrors
Ben Zaiboc
ben at zaiboc.net
Sun Apr 2 09:03:39 UTC 2023
On 02/04/2023 00:55, Gordon Swobe wrote:
> it tells the truth that it has no way to solve the symbol grounding
> problem for itself as it has no conscious experience.
Two points:
1 "It tells the truth". We already know these things can't 'tell the
truth' (or 'tell lies'). I won't say "by their own admission" because
that's inadmissible. What you really mean here, I think, is "it agrees
with me".
2. Conflating 'solving the grounding problem' with 'having conscious
experience'. I'm sure there are a great many people who can't solve the
'grounding problem' but who will claim to be conscious, and I'm equally
sure that a non-conscious system would be capable of solving the
'grounding problem'. They are two different classes of claim, and
proving one to be true (or false) doesn't have any bearing on the other.
Yes, I know the 'grounding problem'* is about consciousness, but that
doesn't make them the same thing.
*Why am I putting 'grounding problem' in quotes? Because I don't think
it's actually a problem at all. Any system that can solve problems and
has enough knowledge of neuroscience should probably be able to
demonstrate that it's solvable. It might be an interesting experiment to
ask ChatGPT something like "Taking into account the findings of modern
neuroscience, can you show that 'the grounding problem' is solvable?" ;>
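
For anyone who wants to try that as more than a thought experiment, here
is a minimal sketch of sending that prompt to ChatGPT through the OpenAI
API (assuming the openai Python package, v1.x, with an OPENAI_API_KEY set
in the environment; the model name is only illustrative):

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

prompt = (
    "Taking into account the findings of modern neuroscience, "
    "can you show that 'the grounding problem' is solvable?"
)

# Send the question as a single user message and print the reply.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)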
Ben