[ExI] Peer review reviewed AND Detecting ChatGPT texts

Brent Allsop brent.allsop at gmail.com
Mon Jan 9 21:41:36 UTC 2023


Thanks for this brilliant information.  That makes a lot of sense.
I need to check into theorem provers.

On Mon, Jan 9, 2023 at 10:42 AM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> These models are just as capable of cognitive dissonance as humans are.
> Even if a model is trained to follow logic, it can make errors if it has
> too little training data or if its training data contains contradictions.
> This means that a generally logical model will likely disagree with your
> qualia theory since it is not the most popular viewpoint seen in its
> training data. A model strongly trained on your camp will most certainly
> agree with you just as a model trained on other more prevalent camps will
> disagree with you. If you want a perfectly logical model then you would
> have to create a training dataset with perfect consistency, a very
> difficult task. None of the models created so far have demonstrated good
> consistency, which is to be expected since their training data, the
> internet, is a very noisy environment.
>
> Be aware that I am neither supporting nor denying your qualia theory, but
> rather outlining some reasons why models may accept or reject your
> position. If you really want to check the logical validity of your theory,
> try a theorem prover. Theorem provers are guaranteed to not make logical
> errors, though translating your theory into a logical language used by
> these provers may prove difficult.
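>
> A rough sketch of what that translation step can look like, using the Z3
> solver's Python bindings (one automated prover among many; the proposition
> names and the single premise below are only toy stand-ins, not an encoding
> of your actual theory):
>
>     # pip install z3-solver
>     from z3 import Bool, Solver, Implies, Not, unsat
>
>     # Toy propositions standing in for claims to be checked.
>     uses_abstract_word_only = Bool("uses_abstract_word_only")
>     has_redness_quality = Bool("has_redness_quality")
>
>     s = Solver()
>     # Assumed premise: a purely abstract representation lacks the redness quality.
>     s.add(Implies(uses_abstract_word_only, Not(has_redness_quality)))
>     # Assumed facts/claims to test against that premise.
>     s.add(uses_abstract_word_only, has_redness_quality)
>
>     # unsat means the statements cannot all hold at once.
>     print("contradictory" if s.check() == unsat else "consistent")
>
> Once the premises are written down this way, the prover's verdict is
> mechanical; the hard part is agreeing on the premises in the first place.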
>
> On Jan 9, 2023, at 12:03 PM, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
> I disagree.  If a robot is going to follow logic, no matter how much you
> teach it to claim it has qualia, you can prove to it that it can't know what
> redness is like.  Perhaps you haven't seen my "I Convinced GPT-3 it Isn’t
> Conscious" writeup
> <https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit> where
> I've done this three times.  The first "emerson" instance was trained much more
> to claim it was conscious than the third instance, which, though smarter,
> admitted that its intelligence wasn't anything like human consciousness.
>
> Oh, and could we release chatbots into Russia to convince
> Russian citizens of how bad the war in Ukraine is?  Or could Putin do the same
> to US and European citizens?  Things like that are surely going to
> change the nature of such conflicts.  And I bet logical chat systems
> will always support the truly better side more successfully than anyone
> trying to get people to be bad or non-democratic.  In other words, I bet
> building an intelligent chat bot that would help Kim Jong-un would be far more
> difficult than building one that supports bottom-up democracy, and
> so on.
>
>
>
> On Sun, Jan 8, 2023 at 9:44 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Sun, Jan 8, 2023 at 8:24 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> The AI can't know what redness is like, while we do know, infallibly,
>>> what redness is like.
>>>
>>
>> Can't it?  By the argument you have often cited, it can't know our
>> version of redness, but it would know its own version of redness.
>>
>
> Chat bots represent red things with an abstract word like 'red' which, by
> design, isn't like anything.  Good logical chat bots can be forced to
> agree with this.
> <https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit>
> Are you claiming that the word 'red' somehow results in an experience of
> redness?
>
> On Mon, Jan 9, 2023 at 8:31 AM Gadersd via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> The models could be trained to agree with or deny anything you say or
>> believe in. You could have a model trained to always deny that it has
>> qualia or consciousness or you could have a model that always affirms that
>> it is conscious. Much like humans, these models are capable of harboring
>> any belief system. This is why these models should not be trusted to be any
>> wiser than humans: they mimic any bias or belief in their training data.
>> The models can only grow beyond human when they learn via reinforcement
>> learning on real world feedback. Only then can they transcend the biases in
>> their training data.
>>
>> On Jan 8, 2023, at 11:21 PM, Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>
>> I know one thing: I want to create an AI that understands that its
>> knowledge, especially knowledge of colors, is all abstract, like the word
>> "red", while our knowledge of red has a redness quality.  The AI can't know
>> what redness is like, while we do know, infallibly, what redness is like.
>> I want the AI to find people who don't understand that, walk them
>> through a discussion, and basically teach those concepts and why it must
>> be that way.  And then I want it to try to get people motivated to be
>> curious to know things like which of all our descriptions of stuff in the
>> brain is a description of redness (i.e., getting people to sign
>> petitions
>> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>
>> and such, saying they want to know that), so experimentalists will
>> eventually get the message and finally get motivated to discover what
>> redness is.  And I want the AI to be motivated to find out what
>> redness is like.  In other words, it first wants to know what redness is;
>> then it would seek to be endowed with whatever that is, so it, too, could
>> represent knowledge of red things with redness, instead of just an
>> abstract word like "red".  Kind of like the way Commander Data, in Star
>> Trek, wanted to try an "emotion chip" to know what that was like.
>>
>> To me, that is just plain rational, objective, logical reasoning, the
>> scientific progress of mankind.  But I guess to most people that would be
>> an AI pushing a religion?
>>
>> On Sun, Jan 8, 2023 at 9:09 PM Brent Allsop <brent.allsop at gmail.com>
>> wrote:
>>
>>>
>>> Oh, interesting.  That explains a lot.
>>>
>>> Oh, and Spike made a great comment about AI church missionaries.  So
>>> does that mean an AI will have desires to convert people to particular
>>> religions, or ways of thinking?
>>> Man, it's going to be a very different world, very soon.
>>> Will AIs try to get people with mistaken, evil, or destructive beliefs
>>> (and the associated actions) to change?  Or is that all religion?
>>>
>>> On Sun, Jan 8, 2023 at 3:32 PM Gadersd via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> That isn’t entirely fair to ChatGPT. The system reads in tokens rather
>>>> than individual characters, so ChatGPT doesn’t actually "see" individual
>>>> characters. Tokens are sometimes whole words or parts of words, but are not
>>>> usually single letters. Another issue is that ChatGPT is allowed only a
>>>> finite time window to answer. It doesn’t get to ponder indefinitely as a
>>>> human would before answering. It may perform better if you give it several
>>>> iterations.
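>>>>
>>>> As a rough illustration (using OpenAI's open-source tiktoken library; the
>>>> "cl100k_base" encoding is assumed here as a representative one, and may not
>>>> be exactly what ChatGPT runs on):
>>>>
>>>>     # pip install tiktoken
>>>>     import tiktoken
>>>>
>>>>     enc = tiktoken.get_encoding("cl100k_base")
>>>>
>>>>     # The model never sees these characters one by one; it sees token ids.
>>>>     tokens = enc.encode("Write a sentence without the letter e.")
>>>>     for t in tokens:
>>>>         print(t, repr(enc.decode([t])))
>>>>
>>>> A constraint like "avoid the letter e" cuts across token boundaries the
>>>> model never directly observes, which is part of why it struggles with it.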
>>>>
>>>> On Jan 8, 2023, at 4:05 PM, Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>> Yes, I’ve heard people warning us about what happens when they start trying
>>>> to sell us stuff.
>>>>
>>>> And we aren’t there yet.  My friend asked it to write a sentence
>>>> without the letter e.  But it failed, and the letter e appeared in the
>>>> sentence.  When this was pointed out, ChatGPT got more and more obviously
>>>> mistaken (misspelling words and such) as it tried to justify itself.
>>>>
>>>> On Sun, Jan 8, 2023 at 1:57 PM spike jones via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf
>>>>> Of BillK via extropy-chat
>>>>> ...
>>>>> ----------------
>>>>>
>>>>> >...If ChatGPT is working as the front-end to Bing, there should be
>>>>> fewer factual errors in the replies that it writes to questions.
>>>>> In practice, people will begin to treat it as an all-knowing Oracle.
>>>>> Will the new Bing become a poor man's AGI?
>>>>> Could politicians ask for economic or wartime advice?
>>>>> The replies from ChatGPT will always sound very reasonable,
>>>>> but..........  BillK
>>>>>
>>>>>
>>>>>
>>>>> We must somehow figure out how to get ChatGPT to sell stuff for us.
>>>>> The sales potential of such a device is mind-numbing.
>>>>>
>>>>> spike
>>>>>
>>>>>


More information about the extropy-chat mailing list