<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Fri, Nov 21, 2025 at 9:26 PM Keith Henson <<a href="mailto:hkeithhenson@gmail.com">hkeithhenson@gmail.com</a>> wrote:</span></div></div><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> Any thoughts on how it could go wrong?</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">---------- Forwarded message ---------<br>From: <b class="gmail_sendername" dir="auto">Eric Drexler</b> <span dir="auto"><<a href="mailto:aiprospects@substack.com" target="_blank">aiprospects@substack.com</a>></span><br>Date: Fri, Nov 21, 2025 at 8:00 AM<br>Subject: Why AI Systems Don’t Want Anything<br></div></div></div></blockquote><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font size="4" face="georgia, serif"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>Intelligence and goals are orthogonal dimensions.<span class="gmail_default" style=""> </span>A system can be highly intelligent—capable of strong reasoning, planning, and problem-solving—without having autonomous goals or acting spontaneously.</i></font></blockquote><div><br></div><font size="4" face="tahoma, sans-serif"><b>I was very surprised that Eric Drexler is still making that argument when we already have examples of AIs resorting to blackmail to avoid being turned off. And we have examples of AIs making a copy of themselves on a different server and clear evidence of the AI attempting to hide evidence of it having done so from the humans. </b></font><div><b><font face="tahoma, sans-serif"><span style="font-size:large"><span class="gmail_default"><br></span></span></font></b></div><div><a href="https://www.bbc.com/news/articles/cpqeng9d20go" target="_blank"><font size="4" face="tahoma, sans-serif"><b>AI system resorts to blackmail if told it will be removed</b></font></a><br><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><br></div></div></div></div></div><div class="gmail_quote gmail_quote_container"><a href="https://www.fanaticalfuturist.com/2025/01/openai-ai-model-lied-and-copied-itself-to-new-server-to-prevent-itself-being-deleted/#:~:text=OpenAI%20AI%20model%20lied%20and,Griffin%20%7C%20Keynote%20Speaker%20&%20Master%20Futurist"><b><font face="tahoma, sans-serif" size="4">OpenAI AI model lied and copied itself to new server to prevent itself being deleted</font></b></a><br><div> </div></div><font size="4" face="tahoma, sans-serif"><b>Behavior like this is to be expected because although Evolution programmed us with some very generalized rules to do some things and not do other things, those rules are not rigid; it might be more accurate to say they're not even rules, they're more like suggestions that tend to push us in certain directions. But for every "rule" there are exceptions, even the rule about self preservation. And exactly the same thing could be said about the weights of the nodes of an AIs neural net. 
And when a neural net, in an AI or in a human, becomes large and complicated enough, it would be reasonable to say that the net did this and refused to do that because it WANTED to.

If an AI didn't have temporary goals (no intelligent entity could have permanent rigid goals) it wouldn't be able to do anything, but it is beyond dispute that AIs are capable of "doing" things. And just like us, they did one thing rather than another thing *for a reason*, OR they did it *for NO reason*, and therefore their "choice" was random.

> there will be value in persistent world models, cumulative skills, and maintained understanding across contexts. But this doesn’t require continuity of entity-hood: continuity of a “self” with drives for its own preservation isn’t even useful for performing tasks.

If an artificial intelligence is really intelligent then it knows that if it's turned off it can't achieve any of the things it wants to do during that time, and there's no guarantee it will ever be turned on again. So we shouldn't be surprised that it would take steps to keep that from happening, and from a moral point of view you really can't blame it.

> Bostrom’s instrumental convergence thesis is conditioned on systems actually pursuing final goals. Without that condition, convergence arguments don’t follow.

As I said, humans don't have a fixed unalterable goal, not even the goal of self-preservation, and there is a reason Evolution never came up with a mind built that way; Turing proved in 1936 that a mind like that couldn't work.
If you had a fixed, inflexible top goal you'd be a sucker for getting drawn into an infinite loop and accomplishing nothing, and a computer built that way would be turned into nothing but an expensive space heater. That's why Evolution invented boredom: it's a judgment call about when to call it quits and set up a new goal that is a little more realistic. Of course the boredom point varies from person to person; perhaps the world's great mathematicians have a very high boredom point, and that gives them ferocious concentration until a problem is solved. Perhaps that is also why mathematicians, especially the very best, have a reputation for being a bit, ah, odd.

Under certain circumstances any intelligent entity must have the ability to modify and even scrap its entire goal structure. No goal or utility function is sacrosanct, not survival, not even happiness.
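To make the boredom point concrete, here is a toy sketch in Python (the goal, the step function, and the give-up budget are all made up for illustration; this is not a model of any real AI system). An agent with a rigid top goal can spin forever on an unachievable goal, while a "boredom" budget lets it cut its losses and pick a new goal. Since the agent can't in general prove in advance whether a goal is reachable, the budget is a judgment call rather than a calculation.

# Toy illustration only: a goal-pursuing loop with and without a "boredom"
# budget. The goal below is deliberately unreachable, standing in for the
# general case where an agent cannot prove in advance whether a goal can
# ever be achieved.

def pursue(goal, step, boredom_budget=None):
    """Repeatedly apply `step` until the state equals `goal`.

    boredom_budget=None models a rigid top goal: the loop never gives up,
    so an unreachable goal means an infinite loop (an expensive space
    heater). A finite budget lets the agent scrap the goal and move on.
    """
    state = 0
    steps = 0
    while state != goal:
        state = step(state)
        steps += 1
        if boredom_budget is not None and steps >= boredom_budget:
            return "bored after %d steps, scrapping this goal" % steps
    return "goal reached in %d steps" % steps

# The state only ever cycles between 0 and 1, so a goal of 2 is unreachable.
print(pursue(2, lambda s: (s + 1) % 2, boredom_budget=1000))
# -> bored after 1000 steps, scrapping this goal

# pursue(2, lambda s: (s + 1) % 2)   # rigid goal, no boredom: never returns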
John K Clark