<div dir="ltr">There is lie and there is imitation. <br>They are not the same thing. <br>One of the things to consider when interacting with these AIs is that they tend to "generate" stories based on the prompts. They follow along in a sense. Often their convo is really a mirror of us, how we interact with them, they type of questions we ask and so on. So I would not rely on asking them how they think about something unless they were not prompted previously on this topic during the conversation. <br>Giovanni </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 21, 2023 at 9:10 AM spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg-703734705546825111"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="m_-703734705546825111WordSection1"><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><u></u> <u></u></p><div><div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(225,225,225);padding:3pt 0in 0in"><p class="MsoNormal"><b>…</b>> <b>On Behalf Of </b>Tara Maya via extropy-chat<br><b>Subject:</b> Re: [ExI] ChatGPT says it's not conscious<u></u><u></u></p></div></div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">>…I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" implies a conscious mind. (Hence the Seinfeld paradox that one part of our brain fools another part, enabling us to "lie" it isn't a lie, but--I would insert--a mental gymnastics that has evolutionary benefits so has been able to evolve.) … Tara Maya<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">Interesting insight, thx Tara.<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">We laugh at Costanza because he really identifies what you wrote about, but it is funny because he is making the case that one part of the brain decides what it wants to believe, then intentionally fools another part of the brain to believe it. But the part of the brain doing the fooling “knows” what it is doing, keeping the paradox in place.<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">We have so many terms for this, such as wishful thinking and intentional blindness and religious faith, hell even love I suppose. We intentionally embrace the good and overlook the bad in our sweethearts, do we not? I do. My bride certainly does, sheesh. I suck. But she loves me anyway, ignoring that fact.<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">We can talk ourselves into believing ourselves, and it still works, even if we know we arrived at the dubious conclusion by dubious means. We know that just because we know we are fooling ourselves, we are fooled just the same.<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">Sheesh what a weird place to live, inside a blob of carbon inside a shell of calcium. It’s really nuts in here. Crazy fun, but crazy just the same.<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">spike<u></u><u></u></p><div><p class="MsoNormal"><br><br><u></u><u></u></p></div><p class="MsoNormal"><u></u> <u></u></p></div></div>_______________________________________________<br>