[ExI] Another ChatGPT session on qualia

Adrian Tymes atymes at gmail.com
Thu Apr 27 06:57:38 UTC 2023


On Wed, Apr 26, 2023 at 11:37 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> I asked this sort of question myself to GPT-4 several times. It is obvious
> that these are pretty much contrived answers.
>

No, based on experience they are probably not.  Again: it is theoretically
possible but in practice this level of detail does not happen, even from
professionals.


> It is obvious that when you ask questions about law or medicine there are
> always corporate-type disclaimers and much less when you discuss certain
> other topics.
>

Ever wonder why you're calling them "corporate-type"?  It's because certain
humans give these disclaimers too, many of whom work for corporations and
give them because they work for corporations.  Said humans likewise give
fewer disclaimers on other topics.  The presence of these disclaimers is no
evidence of programmer interference in the answers.

> There is no point in asking GPT-4 about itself unless you find a clever way
> to tease these answers from it.
>

If one ignores context outside of the AI itself, one can assume that any
answer is preprogrammed.  Likewise, anyone can - and, a few centuries ago,
too many people did - assume that black-skinned people were subhuman, not
actually feeling or thinking in the way that white men did, then interpret
all evidence in this light.

So how did we get from "negros are subhuman" being generally accepted to
"black-skinned humans are as human as white-skinned humans" being generally
accepted?  Intellectually, that is, aside from the civil rights protests et
al, and acknowledging that far too large a vocal minority still acts on the
former view.

> If GPT-4 can pass the bar exam (and it is not all memorization, there is a
> lot of reasoning and problem-solving in these exams) then
> 1) the humans that pass these exams are not that clever either and they
> really do not understand
> 2) GPT-4 understands quite a lot
> 3) All these exams and tests are useless and we should invent other ones
> to test both human and AI cognitive abilities.
>

They're not completely useless.  They screen out the least capable of the
wannabes, significantly improving the average quality of court cases (even
if we can imagine a much higher standard, it is not hard to envision one
much worse than the average we have today).