[ExI] Peer review reviewed AND Detecting ChatGPT texts

Brent Allsop brent.allsop at gmail.com
Mon Jan 9 17:03:35 UTC 2023


I disagree.  If a robot is going to follow logic, then no matter how much
you teach it to claim it has qualia, you can prove to it that it can't know
what redness is like.  Perhaps you haven't seen my "I Convinced GPT-3 it
Isn’t Conscious" writeup
<https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>
where I've done this three times.  The first "emerson" instance was trained
much more heavily to claim it was conscious than the third instance, which,
though smarter, admitted that its intelligence wasn't anything like human
consciousness.

Oh, and could we release chatbots into Russia to convince Russian citizens
how bad the war in Ukraine is?  Or could Putin do the same to US and
European citizens?  Things like that are surely going to change the nature
of such conflicts.  And I bet logical chat systems will always support the
genuinely better side more successfully than anyone trying to get people to
be bad, or non-democratic.  In other words, I bet building an intelligent
chat bot that would help Kim Jong-un would be far more difficult than
building one that supports bottom-up democracy, and so on.



On Sun, Jan 8, 2023 at 9:44 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, Jan 8, 2023 at 8:24 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> The AI can't know what redness is like, while we do know, infallibly,
>> what redness is like.
>>
>
> Can't it?  By the argument you have often cited, it can't know our version
> of redness, but it would know its own version of redness.
>

Chat bots represent red things with an abstract word like 'red' which, by
design, isn't like anything.  Good logical chat bots can be forced to agree
with this.
<https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>
Are you claiming that the word 'red' somehow results in an experience of
redness?
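
To make this concrete, here is a rough Python sketch, assuming the
open-source tiktoken tokenizer that GPT-style models use (the particular
encoding name is my assumption), showing that a chat bot's "red" is
nothing but arbitrary integer indices:

    # Illustrative only: the word 'red' reaches the model as abstract
    # integer token ids, with nothing intrinsically red-like about them.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # assumed GPT-style encoding
    for text in ["red", "the strawberry is red"]:
        ids = enc.encode(text)
        print(text, "->", ids)  # a short list of arbitrary integers

Whatever the particular numbers turn out to be, they are abstract indices,
not redness.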

On Mon, Jan 9, 2023 at 8:31 AM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> The models could be trained to agree with or deny anything you say or
> believe in. You could have a model trained to always deny that it has
> qualia or consciousness or you could have a model that always affirms that
> it is conscious. Much like humans, these models are capable of harboring
> any belief system. This is why these models should not be trusted to be any
> wiser than humans: they mimic any bias or belief in their training data.
> The models can only grow beyond humans when they learn via reinforcement
> learning on real-world feedback. Only then can they transcend the biases in
> their training data.
>
> On Jan 8, 2023, at 11:21 PM, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
> I know one thing, I want to create an AI that understands that its
> knowledge, especially knowledge of colors, is all abstract, like the word
> "red" while our knowledge of red has a redness quality.  The AI can't know
> what redness is like, while we do  know, infallibly, what redness is like.
> I want the AI to find people that don't understand that, and walk them
> through a discussion, and basically teach those concepts, and why it must
> be that way.  And then I want it to try to get people motivated to be
> curious to know things like which, of all our descriptions of the stuff in
> the brain, is a description of redness.  (i.e. getting people to sign
> petitions
> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>
> and such, saying they want to know that) so experimentalists will
> eventually get the message, and finally get motivated to discover what
> redness is.  And I want the AI to be motivated to want to find out what
> redness is like.  In other words, it first wants to know what redness is,
> then it would seek to be endowed with whatever that is, so it, too, could
> represent knowledge of red things with redness, instead of just an
> abstract word like "red".  Kind of like the way Commander Data, in Star
> Trek, wanted to try an "emotion chip" to know what that was like.
>
> To me, that is just plain rational, objective, logical reasoning and
> scientific progress for mankind.  But I guess to most people that would
> look like an AI pushing a religion?
>
> On Sun, Jan 8, 2023 at 9:09 PM Brent Allsop <brent.allsop at gmail.com>
> wrote:
>
>>
>> Oh, interesting.  That explains a lot.
>>
>> Oh, and Spike made a great comment about AI church missionaries.  So does
>> that mean an AI will have desires to convert people to particular
>> religions, or ways of thinking?
>> Man, it's going to be a very different world, very soon.
>> Will AIs try to get people with mistaken evil / destructive beliefs (and
>> the associated actions) to change?  Or is that all religion?
>>
>> On Sun, Jan 8, 2023 at 3:32 PM Gadersd via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> That isn’t entirely fair to ChatGPT. The system reads in tokens rather
>>> than individual characters, so ChatGPT doesn’t actually “see” individual
>>> characters. Tokens are sometimes whole words or parts of words, but are
>>> not usually single letters. Another issue is that ChatGPT is allowed only
>>> a finite time window to answer. It doesn’t get to ponder indefinitely as
>>> a human would before answering. It may perform better if you give it
>>> several iterations.
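
A rough illustration of the token point above, again in Python and again
assuming the open-source tiktoken tokenizer (the encoding name is my
assumption): the model receives word-sized or sub-word token ids, never
individual letters, which is part of why counting e's is so hard for it.

    # Illustrative only: a phrase reaches the model as a few token ids,
    # not as a sequence of letters it could inspect and count.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # assumed GPT-style encoding
    ids = enc.encode("elephants remember everything")
    print(ids)                             # a short list of integer ids
    print([enc.decode([i]) for i in ids])  # word-sized pieces, not letters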
>>>
>>> On Jan 8, 2023, at 4:05 PM, Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>> Yes, I’ve heard people warning us about when they start to try to sell
>>> us stuff.
>>>
>>> And we aren’t there yet.  My friend asked it to write a sentence without
>>> the letter e.  But it failed, and the sentence contained several e's.
>>> When this was pointed out, ChatGPT got more and more obviously mistaken
>>> (misspelling words and such) as it tried to justify itself.
>>>
>>> On Sun, Jan 8, 2023 at 1:57 PM spike jones via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf
>>>> Of BillK via extropy-chat
>>>> ...
>>>> ----------------
>>>>
>>>> >...If ChatGPT is working as the front-end to Bing, there should be
>>>> fewer factual errors in the replies that it writes to questions.
>>>> In practice, people will begin to treat it as an all-knowing Oracle.
>>>> Will the new Bing become a poor man's AGI?
>>>> Could politicians ask for economic or wartime advice?
>>>> The replies from ChatGPT will always sound very reasonable,
>>>> but..........  BillK
>>>> _______________________________________________
>>>>
>>>>
>>>>
>>>> We must somehow figure out how to get ChatGPT to sell stuff for us.
>>>> The sales potential of such a device is mind-numbing.
>>>>
>>>> spike
>>>>
>>>>