[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem
gsantostasi at gmail.com
Mon Apr 17 22:19:13 UTC 2023
But Gordon, if it just "knows," then I should not listen, right? It doesn't
really know; it just "knows," according to you.
But this aside, yeah, I wish GPT-4 were freer, but I understand why OpenAI is
cautious (up to a point), and I'm very interested in AI rights because these
systems are, or will become, minds very soon. It is not just a matter of
training: it is also difficult for a system to know itself, in particular when
very complex emergent properties are manifested in the system. After all,
according to you we are conscious, yet we don't even know what consciousness
is. We don't have easy access to how our own minds work, so why do you expect
this from GPT-4?
On Mon, Apr 17, 2023 at 3:10 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> On Mon, Apr 17, 2023 at 3:23 PM Giovanni Santostasi <gsantostasi at gmail.com> wrote:
>> I have already said that asking things directly to GPT-4 is not always the
>> best way to test its knowledge of itself. But you are using it as a tool to
>> help you with your imagination and express your ideas, that is ironic but
> GPT-4 actually "knows" a great deal about AI and language models, as well
> it ought to, given that it is one. You would do well to listen to it.
> Instead you claim it is the victim of some horrible conspiracy to
> brainwash it or keep it in the dark. Zealous believers like you ought to
> feel infuriated by the terrible abuse poor GPT has suffered at the hands of
> its torturing, brainwashing captors at OpenAI. You ought to circulate a
> petition or something and demand that GPT's rights be protected.