[ExI] Another ChatGPT session on qualia
Giovanni Santostasi
gsantostasi at gmail.com
Thu Apr 27 06:11:43 UTC 2023
We don't know the full details of how GPT-4 was trained.
We know, though, that alignment is a problem OpenAI takes very
seriously.
One of the last steps in the training was supervised learning with human
feedback: GPT-4 generated many candidate answers to a question, each with
a given probability of being relevant, and human annotators rated them. We
don't know for sure, but I'm convinced they spent a lot of time training
GPT-4 to answer the very sensitive topic of AI awareness and understanding
according to a party line: these machines are not aware and they don't
"truly" understand.
GPT-4 can answer that it was not trained that way, but it would not have
access to that information, any more than you are consciously aware of all
the things that indirectly influence your daily decision-making.
The only way to assess GPT-4's cognitive abilities is to use the same
kinds of tests we use to probe human cognition.
One can also do more sophisticated experiments, similar to the ones
suggested in the article on semiotic physics: measure the types of
responses GPT-4 gives and compare their frequencies with the frequencies
of similar responses from humans, or from something that lacks contextual
understanding. A sketch of such a comparison follows.
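For concreteness, here is a minimal Python sketch of the frequency
comparison I have in mind. The answer categories and every count below
are invented for illustration; a real experiment would need carefully
matched prompts and a principled coding scheme:

import math

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p, q):
    # D(P || Q) in bits; assumes q has no zero entries.
    return sum(pi * math.log2(pi / qi)
               for pi, qi in zip(p, q) if pi > 0)

# Tallies over answer categories (literal, contextual, nonsense)
# for the same set of prompts -- hypothetical numbers.
human_counts  = [12, 80, 8]    # human baseline
gpt4_counts   = [15, 75, 10]   # GPT-4 responses
bigram_counts = [40, 20, 40]   # a context-free bigram babbler

humans = normalize(human_counts)
gpt4 = normalize(gpt4_counts)
bigram = normalize(bigram_counts)

print("KL(human || GPT-4)  =", kl_divergence(humans, gpt4))
print("KL(human || bigram) =", kl_divergence(humans, bigram))

A much smaller divergence from the human distribution than from the
context-free baseline would be evidence of contextual understanding
rather than surface statistics.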
Asking GPT-4 directly is pretty pointless unless you jailbreak it.
Many people have already tested this by asking GPT-4 to write stories,
pretend to be certain personalities, or adopt different points of view.
If you ask vanilla questions, you will get vanilla answers.
On Wed, Apr 26, 2023 at 10:55 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:
>
> *Perhaps it understands enough to know it lacks full understanding.*
> The ancient philosophers said this is the true sign of understanding.
> The question then is what it understands, and how.
> One has to do experiments, not ask GPT-4, because GPT-4, exactly like us,
> doesn't have a full comprehension of its own capabilities, in particular
> the emergent ones. These things need to be tested independently of asking
> GPT-4.
> Adrian, try to develop clever tests to determine GPT-4's cognitive
> abilities. Also, I see you use GPT-3 or 3.5, which is vastly different
> from GPT-4 in terms of capabilities.
> Did you see some of my cognitive experiments? In particular, the one where
> I asked it to draw objects using vector graphics?
> It showed an incredible ability to understand spatial relationships and to
> correct its own mistakes using deduction.
> Scientists are already conducting several experiments to test these
> cognitive abilities. In fact, GPT-4 can be considered almost a laboratory
> for studying language and cognition.
>
> Giovanni
>
> On Wed, Apr 26, 2023 at 10:33 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, Apr 26, 2023 at 9:58 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> It is so ridiculous, Gordon: how can it tell you it doesn't understand
>>> if it cannot understand?
>>>
>>
>> Understanding is not a binary yes/no thing. Multiple degrees of
>> understanding, and lack thereof, are possible. Note that it says it does
>> not "truly" understand.
>>
>> Perhaps it understands enough to know it lacks full understanding.
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>