<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
span.EmailStyle19
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN-US link=blue vlink=purple style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><div><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in'><p class=MsoNormal><b>…</b>> <b>On Behalf Of </b>Gadersd via extropy-chat<br><b>Subject:</b> Re: [ExI] Peer review reviewed AND Detecting ChatGPT texts<o:p></o:p></p></div></div><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>>…The models could be trained to agree with or deny anything you say or believe in. You could have a model trained to always deny that it has qualia or consciousness or you could have a model that always affirms that it is conscious. Much like humans, these models are capable of harboring any belief system. This is why these models should not be trusted to be any wiser than humans: they mimic any bias or belief in their training data. The models can only grow beyond human when they learn via reinforcement learning on real world feedback. Only then can they transcend the biases in their training data… Gadersd<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>The first thing one notices about ChatGPT is that it is a little too agreeable. It is polite and all that, but there ought to be some kind of control to make it a little more challenging than it is.<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Some of us who are older may remember a program from a long time ago called 60 Minutes. It theoretically still exists, but functionally ended on 8 September 2004. Back in the 70s, it carried a segment which was a miniature political debate between Jack Kilpatrick and Shana Alexander. We need something like a slider bar with which one could set the chatbot more towards Shana or more towards Jack. 
Such a device would be great for educating a prole in alternative views, or for reinforcing existing prejudice.<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>I am primarily thinking of it in terms of a super teacher, one which constantly offers scientific insights, as a preparation tool for the Science Olympiad team at the high school. If we could harness this technology somehow, we could SMASH Palo Alto and Cupertino and all those rich bigshot schools over on the west side of the valley, SMASH EM! Oh what a dream, gadersd: we could come out of nowhere and BOOM, those west side intellectual giants who win everything won’t know what hit ’em. It would be the blue and gold storm rising, the long-awaited scientific proletariat revolt!<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>spike<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p></div></body></html>