Theorem provers can be quite inflexible and are more geared towards mathematics. I think the most practical solution for you is, perhaps with the help of a mathematician, to translate your theory into a set of axioms and theorems. That way, if a logical person comes across the theory in that form, then as long as the logical steps from the axioms to the theorems are correct, that person must either accept your conclusions or reject one or more of the axioms. You could then have people rate which axioms they reject and get a good idea of exactly why people disagree with the theory. You could then try to break the contentious axioms into more basic axioms and repeat until everyone is in agreement.
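To make that concrete, here is a minimal sketch of what such a formalization could look like in a theorem prover such as Lean. The propositions and axiom names are purely hypothetical placeholders, not an actual formalization of the theory:

```lean
-- Hypothetical propositions standing in for claims of the theory.
axiom HasAbstractWordOnly : Prop  -- "the system represents red only by an abstract word"
axiom KnowsRedness : Prop         -- "the system knows what redness is like"

-- A hypothetical axiom that a reader may accept or reject.
axiom abstract_words_carry_no_quality : HasAbstractWordOnly → ¬KnowsRedness

-- The theorem: anyone who accepts the axiom must accept the conclusion.
theorem no_redness_from_words (h : HasAbstractWordOnly) : ¬KnowsRedness :=
  abstract_words_carry_no_quality h
```

A dissenter then cannot quibble with the proof step; they have to name which axiom they reject, which is exactly the rating data you would want to collect.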

On Jan 9, 2023, at 4:41 PM, Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:

Thanks for this brilliant information. That makes a lot of sense. I need to look into theorem provers.

On Mon, Jan 9, 2023 at 10:42 AM Gadersd via extropy-chat <extropy-chat@lists.extropy.org> wrote:

These models are just as capable of cognitive dissonance as humans are. Even if a model is trained to follow logic, it can make errors if it has too little training data or if its training data contains contradictions. This means that a generally logical model will likely disagree with your qualia theory, since it is not the most popular viewpoint in its training data. A model trained heavily on your camp will almost certainly agree with you, just as a model trained on other, more prevalent camps will disagree with you. If you want a perfectly logical model, you would have to create a training dataset with perfect consistency: a very difficult task. None of the models created so far have demonstrated good consistency, which is to be expected, since their training data, the internet, is a very noisy environment.

Be aware that I am neither supporting nor denying your qualia theory, but rather outlining some reasons why models may accept or reject your position. If you really want to check the logical validity of your theory, try a theorem prover. Theorem provers are guaranteed not to make logical errors, though translating your theory into the logical language such provers use may prove difficult.

On Jan 9, 2023, at 12:03 PM, Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:

I disagree. If a robot is going to follow logic, then no matter how much you teach it to claim it has qualia, you can prove to it that it can't know what redness is like. Perhaps you haven't seen my "I Convinced GPT-3 it Isn't Conscious" writeup (https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit), where I've done this three times. The first, "emerson", instance was trained much more heavily to claim it was conscious than the third instance, which, though smarter, admitted that its intelligence wasn't anything like human consciousness.

Oh, and could we release chatbots into Russia to convince Russian citizens how bad the war in Ukraine is? Or could Putin do the same to US and European citizens? Things like that are surely going to change the nature of such conflicts. And I bet logical chat systems will always support the genuinely better side more successfully than anyone trying to get people to be bad or undemocratic. In other words, I bet building an intelligent chatbot to help Kim Jong-un would be far more difficult than building one that supports bottom-up democracy, and so on.

On Sun, Jan 8, 2023 at 9:44 PM Adrian Tymes via extropy-chat <extropy-chat@lists.extropy.org> wrote:
> On Sun, Jan 8, 2023 at 8:24 PM Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>> The AI can't know what redness is like, while we do know, infallibly, what redness is like.
>
> Can't it? By the argument you have often cited, it can't know our version of redness, but it would know its own version of redness.

Chat bots represent red things with an abstract word like 'red' which, by design, isn't like anything. Good logical chat bots can be forced to agree with this (https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit). Are you claiming that the word 'red' somehow results in an experience of redness?

On Mon, Jan 9, 2023 at 8:31 AM Gadersd via extropy-chat <extropy-chat@lists.extropy.org> wrote:

The models could be trained to agree with or deny anything you say or believe. You could have a model trained to always deny that it has qualia or consciousness, or one that always affirms that it is conscious. Much like humans, these models are capable of harboring any belief system. This is why these models should not be trusted to be any wiser than humans: they mimic whatever biases and beliefs appear in their training data. The models can only grow beyond human level when they learn via reinforcement learning on real-world feedback. Only then can they transcend the biases in their training data.
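Schematically, the loop I mean looks something like the sketch below. It is purely illustrative; the model interface and reward function are hypothetical stand-ins, not any real system's API:

```python
# Illustrative sketch: learning from real-world outcomes rather than from
# imitating a fixed text corpus. All names here are hypothetical.

def real_world_feedback(answer: str) -> float:
    """Hypothetical reward: did the answer actually work (e.g., did a
    prediction come true), as opposed to merely sounding popular?"""
    ...

def train_step(model, prompt: str) -> None:
    answer = model.generate(prompt)       # the model acts
    reward = real_world_feedback(answer)  # the world, not the corpus, scores it
    model.update(prompt, answer, reward)  # the policy shifts toward what works
```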

On Jan 8, 2023, at 11:21 PM, Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:

I know one thing: I want to create an AI that understands that its knowledge, especially its knowledge of colors, is all abstract, like the word "red", while our knowledge of red has a redness quality. The AI can't know what redness is like, while we do know, infallibly, what redness is like. I want the AI to find people who don't understand that, walk them through a discussion, and basically teach those concepts and why it must be that way. And then I want it to try to get people motivated and curious to know things like which of all our descriptions of stuff in the brain is a description of redness (i.e., getting people to sign petitions and such, saying they want to know that: https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia), so experimentalists will eventually get the message and finally get motivated to discover what redness is. And I want the AI itself to be motivated to find out what redness is like. In other words, it would first want to know what redness is, and then it would seek to be endowed with whatever that is, so it, too, could represent knowledge of red things with redness instead of just an abstract word like "red". Kind of like the way Commander Data, in Star Trek, wanted to try an "emotion chip" to know what that was like.

To me, that is just the plain rational, objective, logical, scientific progress of mankind. But I guess to most people that would be an AI pushing a religion?

On Sun, Jan 8, 2023 at 9:09 PM Brent Allsop <brent.allsop@gmail.com> wrote:

Oh, interesting. That explains a lot.

Oh, and Spike made a great comment about AI church missionaries. So does that mean an AI will have desires to convert people to particular religions or ways of thinking? Man, it's going to be a very different world, very soon. Will AIs try to get people with mistaken evil or destructive beliefs (and the associated actions) to change? Or is that all religion?

On Sun, Jan 8, 2023 at 3:32 PM Gadersd via extropy-chat <extropy-chat@lists.extropy.org> wrote:

That isn't entirely fair to ChatGPT. The system reads in tokens rather than individual characters, so ChatGPT doesn't actually "see" individual characters. Tokens are sometimes whole words or parts of words, but are usually not single letters. Another issue is that ChatGPT is allowed only a finite window of time in which to answer. It doesn't get to ponder indefinitely, as a human would, before answering. It may perform better if you give it several iterations.
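You can see the token boundaries directly with OpenAI's open-source tiktoken tokenizer. A small sketch (the exact splits and ids depend on the encoding; cl100k_base is the one used by the ChatGPT-era models):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("Write a sentence without the letter e.")
print(token_ids)                             # a short list of integer token ids
print([enc.decode([t]) for t in token_ids])  # e.g. ['Write', ' a', ' sentence', ...]

# The model receives whole chunks like ' sentence' as opaque ids; it never
# sees the letter 'e' inside them, which is why letter-level constraints
# are so hard for it.
```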

On Jan 8, 2023, at 4:05 PM, Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:

Yes, I've heard people warning us about when they start to try to sell us stuff.

And we aren't there yet. My friend asked it to write a sentence without the letter e, but it failed: there were e's in the sentence. When this was pointed out, ChatGPT made more and more obvious mistakes (misspelling words and such) as it tried to justify itself.

On Sun, Jan 8, 2023 at 1:57 PM spike jones via extropy-chat <extropy-chat@lists.extropy.org> wrote:
<br class="">
-----Original Message-----<br class="">
From: extropy-chat <<a href="mailto:extropy-chat-bounces@lists.extropy.org" target="_blank" class="">extropy-chat-bounces@lists.extropy.org</a>> On Behalf Of BillK via extropy-chat<br class="">
...<br class="">
----------------<br class="">
<br class="">
>...If ChatGPT is working as the front-end to Bing, there should be fewer factual errors in the replies that it writes to questions.<br class="">
In practice, people will begin to treat it as an all-knowing Oracle.<br class="">
Will the new Bing become a poor man's AGI?<br class="">
Could politicians ask for economic or wartime advice?<br class="">
The replies from ChatGPT will always sound very reasonable, but.......... BillK<br class="">
We must somehow figure out how to get ChatGPT to sell stuff for us. The sales potential of such a device is mind-numbing.

spike

_______________________________________________
extropy-chat mailing list
extropy-chat@lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat