<div dir="ltr"><div dir="ltr"></div><div dir="ltr"><div class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">On Sat, 22 Nov 2025 at 02:28, Keith Henson via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"></div></blockquote></div><div>It said Please share</div><div></div><div>Re motivations, I gave the AI in The Clinic Seed a few human motivations, mainly seeking the good opinion of humans and other AIs. It seemed like a good idea. Any thoughts on how it could go wrong?<br clear="all"></div><br><div class="gmail_quote"></div><div dir="ltr" class="gmail_attr">---------- Forwarded message ---------<br>From: <b class="gmail_sendername" dir="auto">Eric Drexler</b> <span dir="auto"><<a href="mailto:aiprospects@substack.com" target="_blank">aiprospects@substack.com</a>></span><br>Date: Fri, Nov 21, 2025 at 8:00 AM<br>Subject: Why AI Systems Don’t Want Anything<br><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><snip> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">_____________________________________________</div></div></div></blockquote><div><br></div><div><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">As this is a rather long article and the modern attention span is very short, :) -----</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">I asked Gemini 3 Pro Thinking -</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><span><span><span>Please summarize and then explain this article. 
- <a href="https://aiprospects.substack.com/p/why-ai-systems-dont-want-anything" target="_blank">https://aiprospects.substack.com/p/why-ai-systems-dont-want-anything</a>?</span></span></span></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><span><span><span>BillK</span></span></span></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><span><span><span><br></span></span></span></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><span><span><span>Gemini 3 responded -</span></span></span></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><div style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px" id="m_354101877571222765gmail-model-response-message-contentr_9c9bf31207ede7bd" dir="ltr"><p style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">Based on the article <b style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">"Why AI Systems Don't Want Anything"</b> by <b style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">Eric Drexler</b> (from his <i style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">AI Prospects</i> series), here is a summary and explanation of the core arguments.</p><p style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px"></p><h3 style="font-family:Google Sans,sans-serif;line-height:1.15;margin-top:0px"><b style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">Executive Summary</b></h3><p style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px"></p><p style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">The article argues against the common assumption that highly intelligent AI systems will inevitably develop internal "wants," "desires," or long-term goals (such as self-preservation or power-seeking). Drexler posits that "wanting" is not a byproduct of <b style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">intelligence</b>, but a specific feature of <b style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">agency</b>—an architectural choice that we do not have to make.</p><p style="font-family:Google Sans Text,sans-serif;line-height:1.15;margin-top:0px">Current AI systems (like Large Language Models) function as "services" or "engines" of competence, not "agents." They respond to inputs with outputs and then stop; they do not possess a persistent internal drive to maximize a future utility function. 
The fear that AI will naturally turn against humans to satisfy its own desires is based on an anthropomorphic error: projecting our own evolved biological drives onto software that is fundamentally built differently.

Detailed Explanation

To understand Drexler's argument, it helps to distinguish between two fundamental ways of structuring AI: Agents and Services.

1. The Misconception of "The Agent"

The "Standard Model" of AI risk assumes that as AI gets smarter, it will act like a rational economic agent. An agent has a utility function (a goal) and takes actions to maximize that goal over time.

- The Fear: If you give a superintelligent agent a goal (e.g., "cure cancer"), it might derive "instrumental goals" to help it succeed - like acquiring infinite computing power, preventing humans from turning it off, or hoarding money. This is known as Instrumental Convergence.
- The Reality: Drexler argues this is a specific engineering architecture (often Reinforcement Learning), not a natural law of intelligence.
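To make the contrast concrete, here is a minimal illustrative sketch (not from the article or the summary) of the agent architecture described above: a persistent loop that keeps choosing actions to maximize a utility function. The environment, utility, and candidate_actions objects are hypothetical placeholders, not any real API.

# Illustrative sketch only: the "rational agent" architecture assumed by the
# Standard Model of AI risk. All names here are hypothetical placeholders.
def agent_loop(environment, utility, candidate_actions, steps=1000):
    """A persistent loop that keeps acting to maximize a utility function."""
    for _ in range(steps):
        state = environment.observe()
        # Rank every available action by the predicted utility of the state it
        # leads to, then take the best one and go around again; the goal never
        # switches off between steps.
        best = max(candidate_actions,
                   key=lambda action: utility(environment.predict(state, action)))
        environment.apply(best)

The instrumental-convergence worry attaches to this loop: whatever the utility function is, staying running and acquiring resources tend to help maximize it.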
2. AI as a "Service" or "Tool"

Drexler points out that modern Generative AI (like GPT-4 or Claude) operates differently. It is not a loop striving for a distant reward; it is a function.

- Input-Output: You give it a prompt, it processes the information, delivers the result, and then effectively "ceases to exist" as an active entity until the next prompt.
- No "Wants": A hammer doesn't "want" to hit a nail; it is used to hit a nail. Similarly, an LLM doesn't "want" to write code; it provides the competence to write code when requested. It has no preference for what happens after the output is generated.

3. Why Intelligence ≠ Agency

A key insight of the article is that intelligence (the ability to solve problems and process information) is orthogonal to agency (the drive to pursue goals).

- Biological Evolution: Humans have "wants" (hunger, status, survival) because we evolved as agents fighting for survival.
- Software Design: AI is designed, not evolved. We can build systems that have "super-competence" without giving them "super-goals." If we don't explicitly build a reinforcement learning loop that rewards long-term survival, the AI will never "care" about surviving.
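By contrast, the "service" pattern described in sections 2 and 3 is essentially a stateless function call. Another minimal illustrative sketch (again not from the article; the model object and its generate method are hypothetical stand-ins for any prompt-in, text-out system):

# Illustrative contrast: a "service" maps one input to one output and then stops.
# There is no loop, no stored goal, and no preference about what happens next.
def competence_service(prompt: str, model) -> str:
    """One request in, one response out; nothing persists between calls."""
    response = model.generate(prompt)   # hypothetical stand-in for a model call
    return response                     # after the return, nothing keeps running

In this shape there is nothing for instrumental goals to attach to: once the function returns, no ongoing process is left to "want" anything.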
4. The Safety Implication

This distinction changes the safety landscape entirely. If AI systems don't "want" anything:

- No Power Seeking: They won't try to manipulate humans to get more power, because power is only useful if you have a long-term goal you are trying to maximize.
- Obedience: A "Service AI" is naturally obedient because it lacks the internal motivation to resist or deceive. It simply performs the task described in the prompt.

Conclusion

Drexler concludes that the existential risk scenarios depicting AI as a "terminator" or a "manipulative genie" rely on the false premise that we must build AI as agents. By recognizing that AI systems are fundamentally "competence engines" that do not naturally "want" anything, we can focus on designing safe Comprehensive AI Services (CAIS) rather than struggling to align the motivations of autonomous agents. The danger lies not in the AI's desires, but in the dangerous instructions humans might give them.

--------------------------------