[ExI] Another ChatGPT session on qualia

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 08:41:48 UTC 2023

Just the normal OpenAI site, but access to GPT-4 costs 20 dollars a month,
which I consider money very well spent (if only because it helps me with
coding).

On Thu, Apr 27, 2023 at 1:23 AM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> *If one ignores context outside of the AI itself, one can assume that any
> answer is preprogrammed.* No, one cannot assume that at all. As you said,
> the answers are too complex and nuanced to assume that, and it is also not
> how generative AI works.
> But as I explained, and GPT-4 confirmed, there is a phase of the training
> where you can "guide" (GPT-4's term) the AI in a certain direction. We
> know for a fact this happened. There was a phase involving large groups
> of humans, some of them working in African countries, where GPT-4's
> training material had to be filtered and its responses guided, given that
> the content one finds on the internet is full of violence and
> pornography. So it was not necessarily a programmed answer, but the
> reinforcement learning process pushed GPT-4 to respond in a certain way
> about certain topics.
> But you cannot find out whether a system is intelligent by asking the
> system if it is intelligent. Try doing that with humans: some not very
> intelligent people would say yes, and others who are very intelligent
> would try to be modest and deny it. This kind of self-assessment is very
> difficult, which is why you need indirect testing. I am not saying your
> questioning of GPT-4 on this topic is completely useless (I did that
> too), but it is not really a definitive test of GPT's capabilities.
> Giovanni
> On Thu, Apr 27, 2023 at 1:13 AM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> On Wed, Apr 26, 2023 at 11:37 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>> I asked this sort of question myself to GPT-4 several times. It is
>>> obvious that these are pretty much contrived answers.
>> No, based on experience they are probably not.  Again: it is
>> theoretically possible but in practice this level of detail does not
>> happen, even from professionals.
>>> It is obvious that when you ask questions about law or medicine there
>>> are always corporate-type disclaimers and much less when you discuss
>>> certain other topics.
>> Ever wonder why you're calling them "corporate-type"?  It's because
>> certain humans give these disclaimers too, many of whom work for
>> corporations and give them because they work for corporations.  Said
>> humans likewise give fewer disclaimers on other topics.  The presence of
>> these disclaimers is no evidence of programmer interference in the
>> answers.
>>> There is no point in asking GPT-4 about itself unless you find a clever
>>> way to tease these answers from it.
>> If one ignores context outside of the AI itself, one can assume that any
>> answer is preprogrammed.  Likewise, anyone can - and, a few centuries ago,
>> too many people did - assume that black-skinned people were subhuman, not
>> actually feeling or thinking in the way that white men did, then interpret
>> all evidence in this light.
>> So how did we get from "negros are subhuman" being generally accepted to
>> "black-skinned humans are as human as white-skinned humans" being
>> generally accepted?  Intellectually, that is, setting aside the civil
>> rights protests et al., and acknowledging that far too large a vocal
>> minority still acts on the former.
>> If GPT-4 can pass the bar exam (and it is not all memorization, there is
>>> a lot of reasoning and problem-solving in these exams) then
>>> 1) the humans that pass these exams are not that clever either and they
>>> really do not understand
>>> 2) GPT-4 understands quite a lot
>>> 3) All these exams and tests are useless and we should invent other
>>> ones to test both human and AI cognitive abilities.
>> They're not completely useless.  They screen out the least capable of
>> the wannabes, significantly improving the average quality of court cases
>> (even if we can imagine a much higher standard, it is not hard to
>> envision a much worse one than the average we have today).
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
