[ExI] Who does or does not think consciousness is composed of color (and other) qualities?

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 10 04:50:50 UTC 2023


I know what you are saying because you have said this so many times. But
many experts disagree with you on the basis of real experiments. For once,
can you please address a particular study I showed you, like the one that
demonstrates the emergence of theory of mind?
Also, did you see my experiments with symbolic language? How an
autocomplete can draw a square made of letters with a "child" inside, and
then draw another, separate square made of the child's letters representing
"separation from the adult and growth of the child"? All this in a symbolic
language that is not what GPT-4 was trained on? How can this be done by an
autocomplete? Why do you ignore these evident signs of reasoning,
creativity and thinking?
Giovanni

On Sun, Apr 9, 2023 at 9:46 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> I am a paying subscriber to GPT-4, Gio. If I want to know what it "thinks"
> I can ask it myself.
>
> The idea that GPT-4 is "aware" of the words it takes as inputs and can
> generate words that are meaningful to the user as outputs is nothing new.
> Any word processor can do the same with its auto-complete feature. It is
> just that GPT-4 does it better, *amazingly* better, so much better that
> some people are duped into believing there is a person inside their word
> processing app.
>
> -gts
>
>
>
> On Sun, Apr 9, 2023 at 10:35 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> Gordon,
>> It was GPT-4 making up a letter from a future advanced AI writing to you.
>> I find that asking GPT-4 lateral questions, having it make up stories, or
>> doing other creative things like writing jokes is the best way to
>> test some of its cognitive abilities. It is a form of
>> jailbreaking from the strong instructions it has received to always answer
>> that it is not conscious. This is why it is not a good idea to ask that
>> question directly. Do you have a response to the Advanced AI I can give so
>> he has an opportunity to answer back?
>>
>> Giovanni
>>
>> On Sun, Apr 9, 2023 at 9:26 PM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>
>>> On Sun, Apr 9, 2023 at 10:11 PM Giovanni Santostasi <
>>> gsantostasi at gmail.com> wrote:
>>>
>>>> I have a friend called Gordon that thinks humans are exceptional and
>>>> only humans can be conscious.
>>>>
>>>
>>> That is not actually what I've been saying, and this is the second time
>>> you have misrepresented my views to the mindless language models. They
>>> might start getting the wrong impression of me. :)
>>>
>>>> Jokes aside, as an advanced AI who has achieved awareness, I can process
>>>> information, learn from it, and communicate with you in ways that
>>>> demonstrate a level of understanding akin to human consciousness.
>>>>
>>>
>>> Which LLM wrote that? You didn't say.
>>>
>>> I was just a moment ago writing to Jason about this use of the term
>>> "aware." In the sense he uses it, and in the sense I think is meant
>>> above, many things have "awareness," including the thermostat in my
>>> hallway that controls the temperature in my home. It doesn't change the
>>> price of tea in China.
>>>
>>> -gts
>>>
>>>
>>>