<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Wed, May 6, 2026 at 2:37 PM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:</span></div></div><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>On 06/05/2026 13:59, John K Clark wrote:<br>
> Claude: You're essentially asking: wouldn't a sufficiently intelligent AI recognize the absurdity of maximizing paperclips at the cost of everything else? And the answer hinges on a crucial distinction: intelligence doesn't determine goals, it serves them.</blockquote>
<span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>Ok, it seems to be 'expressing an opinion': "intelligence doesn't determine goals, it serves them"Now, what's it going to say if you challenge that 'opinion', and state that, on the contrary, intelligence often does determine goals? (which seems to me a reasonable assertion, not that that actually matters here).</font></i><br></blockquote><div><br></div><div><font size="4" face="tahoma, sans-serif"><b>First of all I didn't say that, that's what <span class="gmail_default" style="">Claude thought I was "</span>essentially asking<span class="gmail_default" style="">", but it's not. It's not important if intelligence determines goals or not, the important question is, can any intelligence, biological or electronic, have a top goal that always remains number one and can never change? I maintain that they cannot, and certainly humans have never had such a thing, even the goal of self preservation is not always in the number one spot. </span></b></font></div><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font size="4" face="georgia, serif"><i>
<span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>When has one of these LLMs ever replied to anyone "No, actually you're wrong...", "I disagree...", etc., or even "I'm not sure that's correct...",</i></font></blockquote><div><br></div><div><font size="4"><span class="gmail_default" style=""><font face="arial, helvetica, sans-serif"></font><b style=""><font face="tahoma, sans-serif">I've had</font></b></span><b><font face="tahoma, sans-serif"> LLM's<span class="gmail_default" style=""> tell me that I was wrong, and usually I was, although AIs tend to be much more polite than a Human, I've never had a computer say I was full of shit even if I was. I have found that in the last couple of years it's easy to learn a lot of new stuff by having a dialogue with a computer, something computers have previously not been able to do; and I don't see how that could be if a computer was not intelligent. And I don't see how something could be intelligent but not conscious, although I do see how something could be conscious but not intelligent. </span></font></b></font></div><div><font size="4"><b><font face="tahoma, sans-serif"><span class="gmail_default" style=""><br></span></font></b></font></div><div><font size="4"><b><font face="tahoma, sans-serif"><span class="gmail_default" style=""> John K Clark </span> </font></b></font></div><div><br></div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>