[ExI] Another ChatGPT session on qualia

Adrian Tymes atymes at gmail.com
Thu Apr 27 20:05:27 UTC 2023


On Thu, Apr 27, 2023 at 1:24 AM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

>
> *If one ignores context outside of the AI itself, one can assume that any
> answer is preprogrammed. *One cannot assume that at all.
>

One can make that assumption quite easily.


> As you said, the answers are too complex and nuanced to assume that, and
> it is also not how generative AI works.
>

Which is why some people keep returning to that assumption, as an override
of, or exception to, what generative AI is.


> But as I explained, and GPT-4 confirmed, there is a phase of the training
> where you can guide (GPT-4's term) the AI in a certain direction. We know
> for a fact this happened. There was a phase involving large groups of
> humans, some of them working in African countries, where GPT-4's training
> material had to be filtered and its responses guided, given that the type
> of content one finds on the internet is full of violence and pornography.
> So it was not necessarily a programmed answer, but the reinforcement
> learning process pushed GPT-4 to respond in a certain way about certain
> topics.
>

This, on the other hand, is orthogonal to what I was testing for.  Yes,
something that is able to understand - to at least some degree - what it is
saying may have biases from the environment it learned in.  That does not
mean there is no degree of understanding at all.


> But you cannot find out if a system is intelligent by asking the system if
> it is intelligent.
>

Intelligence, as in the quantity measured by IQ (which appears to be what
you mean here), is not what I was testing for in this case.