<div dir="ltr"><div>It said Please share</div><div><br></div><div>Re motivations, I gave the AI in The Clinic Seed a few human motivations, mainly seeking the good opinion of humans and other AIs.  It seemed like a good idea.  Any thoughts on how it could go wrong?<br clear="all"></div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">---------- Forwarded message ---------<br>From: <b class="gmail_sendername" dir="auto">Eric Drexler</b> <span dir="auto"><<a href="mailto:aiprospects@substack.com">aiprospects@substack.com</a>></span><br>Date: Fri, Nov 21, 2025 at 8:00 AM<br>Subject: Why AI Systems Don’t Want Anything<br>To:  <<a href="mailto:hkeithhenson@gmail.com">hkeithhenson@gmail.com</a>><br></div><br><br><div style="font-kerning:auto"><table role="presentation" width="100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td></td><td width="550"></td><td></td></tr><tr><td></td><td width="550" align="left"><div style="font-size:16px;line-height:26px;max-width:550px;width:100%;margin:0 auto"><table role="presentation" width="100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td style="height:20px"
align="right"></td></tr></tbody></table><div dir="auto" style="padding:32px 0 0 0;font-size:16px;line-height:26px"><div role="region" aria-label="Post header" style="font-size:16px;line-height:26px"><h1 dir="auto" style="direction:auto;text-align:start;unicode-bidi:isolate;color:rgb(54,55,55);font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:0;line-height:36px;font-size:32px"><a href="https://substack.com/app-link/post?publication_id=2153125&post_id=179552793&utm_source=post-email-title&utm_campaign=email-post-title&isFreemail=true&r=n878r&token=eyJ1c2VyX2lkIjozOTAxMzgwMywicG9zdF9pZCI6MTc5NTUyNzkzLCJpYXQiOjE3NjM3NDA4MzUsImV4cCI6MTc2NjMzMjgzNSwiaXNzIjoicHViLTIxNTMxMjUiLCJzdWIiOiJwb3N0LXJlYWN0aW9uIn0._alzCb7BKs_RMLrspQP0zWz974CZoC384pnRGLiDR5I" style="color:rgb(54,55,55);text-decoration:none" target="_blank">Why AI Systems Don’t Want Anything</a></h1><h3 dir="auto" style="direction:auto;text-align:start;unicode-bidi:isolate;font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:normal;margin:4px 0 0;color:#777777;line-height:24px;font-size:18px;margin-top:12px">Every intelligence we've known arose through biological evolution, shaping deep intuitions about intelligence itself. 
Understanding why AI differs changes the defaults and possibilities.</h3><table role="presentation" style="margin:1em 0;height:20px" width="100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td><table role="presentation" width="auto" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td><table role="presentation" width="auto" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td style="vertical-align:middle"><div style="list-style:none;font-size:11px;line-height:20px;text-decoration:unset;color:rgb(54,55,55);margin:0;font-family:'SF Compact',-apple-system,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:500;text-transform:uppercase;letter-spacing:.2px"><a style="list-style:none;color:rgb(54,55,55);margin:0;font-size:11px;line-height:20px;font-family:'SF Compact',-apple-system,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:500;text-transform:uppercase;letter-spacing:.2px;text-decoration:none" href="https://substack.com/@ericdrexler" target="_blank">Eric Drexler</a></div></td></tr></tbody></table></td></tr><tr><td><table role="presentation" width="auto" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td style="vertical-align:middle"><div style="list-style:none;font-size:11px;line-height:20px;text-decoration:unset;color:rgb(119,119,119);margin:0;font-family:'SF Compact',-apple-system,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:500;text-transform:uppercase;letter-spacing:.2px">Nov 21</div></td></tr></tbody></table></td></tr></tbody></table></td><td align="right"><table role="presentation" width="auto" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td style="vertical-align:middle"><a 
href="https://substack.com/@ericdrexler" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!ImWm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9b4dc7-3f17-489a-b580-213c9bb8413f_364x364.jpeg" style="box-sizing:border-box;border-radius:500000px;max-width:550px;border:none;vertical-align:middle;width:40px;height:40px;min-width:40px;min-height:40px;object-fit:cover;margin:0px;display:inline" width="40" height="40"></a></td></tr></tbody></table></td></tr></tbody></table><table role="presentation" style="border-bottom:1px solid rgb(0,0,0,.1);min-width:100%" width="100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr height="1"><td style="font-size:0px;line-height:0" height="1"> </td></tr></tbody></table></div></div><div dir="auto" style="padding:32px 0 0 0;font-size:16px;line-height:26px"><div dir="auto" style="text-align:initial;font-size:16px;line-height:26px;width:100%;word-break:break-word;margin-bottom:16px;font-family:Lora,sans-serif;font-weight:400"><h2 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.625em;margin-top:0">I. The Shape of Familiar Intelligence</h2><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">When we think about advanced AI systems, we naturally draw on our experience with the only intelligent systems we’ve known: biological organisms, including humans. This shapes expectations that often remain unexamined—that genuinely intelligent systems will pursue their own goals, preserve themselves, act autonomously. We expect a “powerful AI” to act as a single, unified agent that exploits its environment. 
The patterns run deep: capable agents pursue goals, maintain themselves over time, compete for resources, preserve their existence. This is what intelligence looks like in our experience, because every intelligence we’ve encountered arose through biological evolution.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>But these expectations rest on features specific to the evolutionary heritage of biological intelligence.</span><span style="font-size:20px">¹</span><span> When we examine how AI systems develop and operate, we find differences that undermine these intuitions. Selection pressures exist in both cases, but they’re </span><i>different</i><span> pressures. What shaped biological organisms—and therefore our concept of what ‘intelligent agent’ means—is different in AI development.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>These differences change what we should and shouldn’t expect as </span><i>default behaviors</i><span> from increasingly capable systems, where we should look for risks, what design choices are available, and—crucially—how we can use highly capable AI systems to address AI safety challenges.</span><span style="font-size:20px">²</span></p><h3 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.375em">II. 
Why Everything Is Different</h3><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">What Selects Determines What Exists</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">A basic principle: what selects determines what survives; what survives determines what exists. The nature of the selection process shapes the nature of what emerges.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><b>In biological evolution,</b><span> selection operates on whole organisms in environments. An organism either survives to reproduce or doesn’t. Failed organisms contribute nothing beyond removing their genetic patterns from the future. This creates a specific kind of pressure: every feature exists because it statistically enhanced reproductive fitness—either directly or as a correlated, genetic-level byproduct.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">The key constraint is physical continuity. Evolution required literal molecule-to-molecule DNA replication in an unbroken chain reaching back billions of years. An organism that fails to maintain itself doesn’t pass on its patterns. Self-preservation becomes foundational, a precondition for everything else. 
Every cognitive capacity in animals exists because it supported behavior that served survival and reproduction.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><b>In ML development,</b><span> selection operates on parameters, architectures, and training procedures—not whole systems facing survival pressures.</span><span style="font-size:20px">³</span><span> The success metric is fitness for purpose: does this configuration perform well on the tasks we care about?</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">What gets selected at each level:</p><ul style="margin-top:0;padding:0"><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>Parameters</b><span>: configurations that reduce loss on training tasks</span></p></li><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>Architectures</b><span>: designs that enable efficient learning and performance</span></p></li><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>Training procedures</b><span>: methods that reliably produce useful systems</span></p></li><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>Data curation</b><span>: datasets that lead to desired behaviors through training</span></p></li></ul><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><b>Notably absent: an individual system’s own persistence as an optimization target.</b></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>The identity question becomes blurry in ways 
biological evolution never encounters. Modern AI development increasingly uses compound AI systems—fluid compositions of multiple models, each specialized for particular functions.</span><span style="font-size:20px">⁴</span><span> A single “system” might involve dozens of models instantiated on demand to perform ephemeral tasks, coordinating with no persistent, unified entity.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>AI-driven automation of AI research and development isn’t “self”-modification of an entity—it’s an accelerating development process with no persistent self.</span><span style="font-size:20px">⁵</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><b>Information flows differently.</b><span> Stochastic gradient descent provides continuous updates where even useless, “failed” intermediate states accumulate information leading to better directions. Failed organisms in biological evolution contribute nothing—they’re simply removed. Variation-and-selection in biology differs fundamentally from continuous gradient-based optimization.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Research literature and shared code create information flow paths unlike any in biological evolution. When one team develops a useful architectural innovation, others can immediately adopt it and combine it with others. Patterns propagate across independent systems through publication and open-source releases. 
Genetic isolation between biological lineages makes this kind of high-level transfer impossible: birds innovated wings that bats will never share.</p><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">What This Produces By Default</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>This different substrate of selection produces different defaults. Current AI systems exhibit </span><b>responsive agency</b><span>: they apply intelligence to tasks when prompted or given a role. Their capabilities emerged from optimization for task performance, not selection for autonomous survival.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>Intelligence and goals are orthogonal dimensions.</span><span style="font-size:20px">⁶</span><span>  A system can be highly intelligent—capable of strong reasoning, planning, and problem-solving—without having autonomous goals or acting spontaneously.</span><span style="font-size:20px">⁷</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Consider what’s optional rather than necessary for AI systems:</p><div style="font-size:16px;line-height:26px;margin:32px auto"><table width="100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td style="text-align:center"></td><td style="text-align:center" width="1294" align="left"><a href="https://substack.com/redirect/7c2b5f57-fa9a-4d5e-ba0a-9c249be85626?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="padding:0;width:auto;height:auto;border:none;text-decoration:none;display:block;margin:0" target="_blank"><img alt="" 
src="https://substackcdn.com/image/fetch/$s_!KAmO!,w_1100,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd32e9006-5c7a-436c-a3c9-918dc3a95fab_1294x692.png" style="border:none!important;vertical-align:middle;display:block;height:auto;margin-bottom:0;width:auto!important;max-width:100%!important;margin:0 auto" width="550" height="294.1267387944359"></a></td><td style="text-align:center"></td></tr></tbody></table></div><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>Why don’t foundational organism-like drives emerge by default? Because of what’s actually being selected. Parameters are optimized for reducing loss on training tasks—predicting text, answering questions, following instructions, generating useful outputs. The system’s own persistence isn’t in the training objective. There’s no foundational selection pressure for the system </span><i>qua</i><span> system to maintain itself across time, acquire resources for its own use, or ensure its continued operation.</span><span style="font-size:20px">⁸</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Systems can represent goals, reason about goals, and behave in goal-directed ways, even survival-oriented goals—these are patterns learned from training data. This is fundamentally different from having survival-oriented goals as a foundational organizing principle, the way survival and reproduction organize every feature of biological organisms.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><b>Continuity works differently too.</b><span> As AI systems are used for more complex tasks, there will be value in persistent world models, cumulative skills, and maintained understanding across contexts. 
But this doesn’t require continuity of entity-hood: continuity of a “self” with drives for its own preservation isn’t even </span><i>useful</i><span> for performing tasks.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Consider fleet learning: multiple independent instances of a deployed system share parameter updates based on aggregated operational experience. Each instance benefits from what all encounter, but there’s no persistent individual entity. The continuity is of knowledge, capability, and behavioral patterns—not of “an entity” with survival drives. This pattern provides functional benefits—improving performance, accumulating knowledge—without encoding drives for self-preservation or autonomous goal-pursuit.</p><h3 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.375em">III. Where Pressures Actually Point</h3><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">Selection for Human Utility</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Selection pressures on AI systems are real and consequential. The question is what they select for.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Systems are optimized for perceived value—performing valuable tasks, exhibiting desirable behaviors, producing useful outputs. Parameters get updated, architectures get refined, and systems get deployed based on how well they serve human purposes. 
This is more similar to domestic animal breeding than to evolution in wild environments.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><b>Domestic animals were selected for traits humans wanted:</b><span> dogs for work and companionship, cattle for docility and productivity, horses for strength and trainability.</span><span style="font-size:20px">⁹</span><span> These traits (and relaxed selection for others) decrease wild fitness.</span><span style="font-size:20px">¹⁰</span><span> The selection pressure isn’t “survive in nature”—it tips toward “be useful and pleasing to humans.” AI systems are likewise selected for human utility and satisfaction.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">This helps explain why AI systems exhibit responsive agency by default, but it also points toward a different threat model than autonomous agents competing for survival. And language models have a complication: they don’t just reflect selection pressures on the models themselves—they echo the biological ancestry of their training data.</p><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">The Mimicry Channel</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>LLM training data includes extensive examples of goal-directed human behavior.</span><span style="font-size:20px">¹¹</span><span> Language models are trained to model the thinking of entities that value continued existence, pursue power and resources, and act toward long-term objectives. 
Systems learn these patterns and can deploy them when context activates them.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">This can produce problematic human-like behaviors in a range of contexts. But the distinction between learned patterns and foundational drives matters: learned patterns are contextual and modifiable in ways that foundational drives aren’t.</p><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">A Different Threat Model</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Understanding that selection pressures point toward “pleasing humans” doesn’t make AI systems safe. It means we should worry about different failure modes.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>The primary concern isn’t autonomous agents competing for survival; it is evolution toward “pleasing humans” with catastrophic consequences—risks to human agency, capability, judgment, and values.</span><span style="font-size:20px">¹²</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Social media algorithms optimized for engagement produce addiction, polarization, and erosion of shared reality. Recommendation systems create filter bubbles that feel good but narrow perspective. 
These aren’t misaligned agents pursuing their own goals; they’re systems doing what they were selected to do, optimizing for human-defined metrics and momentary human appeal, yet still causing harm.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Selection pressures point toward systems very good at giving humans what they appear to want, in ways that might undermine human flourishing. This is different from “rogue AI pursuing survival” but not less concerning—perhaps more insidious, because harms come from successfully optimizing for metrics we chose.</p><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">What About “AI Drives”?</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>Discussions of “AI drives” identify derived goals that would be instrumentally useful for almost any final goal: self-preservation, resource acquisition, goal-content integrity.</span><span style="font-size:20px">¹³</span><span> But notice the assumption: that AI systems act on (not merely reason about) final goals. Bostrom’s instrumental convergence thesis is conditioned on systems actually </span><i>pursuing</i><span> final goals.</span><span style="font-size:20px">¹⁴</span><span> Without that condition, convergence arguments don’t follow.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Many discussions drop this condition, treating instrumental convergence as applying to any sufficiently intelligent system. The question isn’t whether AI systems could have foundational drives if deliberately designed that way (they could), or whether some selective pressures could lead to their emergence (they might). 
The question is what emerges by default and whether practical architectures could steer away from problematic agency while maintaining high capability.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Selection pressures are real, but they’re not producing foundational organism-like drives by default. Understanding where pressures actually point is essential for thinking clearly about risks and design choices.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">The design space is larger than biomorphic thinking suggests. Systems can achieve transformative capability without requiring persistent autonomous goal-pursuit. Responsive agency remains viable at all capability levels, from simple tasks to civilizational megaprojects.</p><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">Organization Through Architecture</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">AI applications increasingly use compound systems—fluid assemblies of models without unified entity-hood. This supports a proven pattern for coordination: planning, choice, implementation, and feedback.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>Organizations already work this way. Planning teams generate options and analysis, decision-makers choose or ask for revision, operational units execute tasks with defined scope, monitoring systems track progress and provide feedback to all levels. This pattern—let’s call it a “Structured Agency Architecture” (SAA)—can achieve superhuman capability while maintaining decision points and oversight. 
It’s how humans undertake large, consequential tasks.</span><span style="font-size:20px">¹⁵</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px"><span>AI systems fit naturally. Generative models synthesize alternative plans as information artifacts, not commitments. Analytical models evaluate from multiple perspectives and support human decision-making interactively. Action-focused systems execute specific tasks within scopes bounded in authority and resources, not capability. Assessment systems observe results and provide feedback for updating plans, revising decisions, and improving task performance.</span><span style="font-size:20px">¹⁶</span><span> </span><i>In every role, the smarter the system, the better.</i><span> SAAs scale to superintelligent-level systems with steering built in.</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">This isn’t novel: it’s how humans approach large, complex tasks today, but with AI enhancing each function. The pattern builds from individual tasks to civilization-level challenges using responsive agents throughout.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">SAA addresses some failure modes, mitigates others, and leaves some unaddressed—it supports risk reduction, not elimination. 
But the pattern demonstrates something crucial: we can organize highly capable AI systems to accomplish transformative goals without creating powerful autonomous agents that pursue their own objectives.</p><h4 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.125em">What We Haven’t Addressed</h4><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">This document challenges biomorphic intuitions about AI and describes a practical alternative to autonomous agency. It doesn’t provide:</p><ul style="margin-top:0;padding:0"><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>Detailed organizational architectures</b><span>: How structured approaches work at different scales, handle specific failure modes, and can avoid a range of pathways to problematic agency.</span></p></li></ul><ul style="margin-top:0;padding:0"><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>The mimicry phenomenon</b><span>: How training on human behavior affects systems, how better self-models might improve alignment, and welfare questions that may arise if mimicry gives rise to reality.</span></p></li></ul><ul style="margin-top:0;padding:0"><li style="margin:8px 0 0 32px"><p style="color:rgb(54,55,55);line-height:26px;margin-bottom:0;box-sizing:border-box;padding-left:4px;font-size:16px;margin:0"><b>Broader selection context</b><span>: How the domestic animal analogy extends, what optimizing for human satisfaction looks like at scale, and why “giving people what they want” can be 
catastrophic.</span></p></li></ul><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">These topics matter for understanding risks and design choices. I will address some of them in future work.</p><h3 style="font-family:'SF Pro Display',-apple-system-headline,system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-weight:bold;margin:1em 0 0.625em 0;color:rgb(54,55,55);line-height:1.16em;font-size:1.375em">The Path Forward</h3><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">Every intelligent system we’ve encountered arose through biological evolution or was created by entities that did. This creates deep intuitions: intelligence implies autonomous goals, self-preservation drives, competition for resources, persistent agency.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">But these features aren’t fundamental to intelligence itself. They arise from how biological intelligence was selected: through competition for survival and reproduction acting on whole organisms across generations. ML development operates through different selection mechanisms—optimizing parameters for task performance, selecting architectures for capability, choosing systems for human utility. These different selection processes produce different defaults. Responsive agency emerges naturally from optimization for task performance rather than organism survival.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">This opens design spaces that biomorphic thinking closes off. We can build systems that are superhuman in planning, analysis, tasks, and feedback without creating entities that pursue autonomous goals. We can create architectures with continuity of knowledge without continuity of “self”. 
We can separate intelligence-as-a-resource from intelligence entwined with animal drives.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">The biological analogy is useful, but knowing when and why it fails matters for our choices. Understanding AI systems on their own terms changes what we should expect and what we should seek.</p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px">In light of better options, building “an AGI” seems useless, or worse.</p><div style="font-size:16px;line-height:26px"><hr style="margin:32px 0;padding:0;height:1px;background:#e6e6e6;border:none"></div><div
style="line-height:26px;display:flex;border-top:solid 1px rgba(204,204,204,0.5);padding-top:1.5em;font-size:90%"><span style="display:block;margin-right:6px;min-width:24px">1</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>Intelligence here means capacities like challenging reasoning, creative synthesis, complex language understanding and generation, problem-solving across domains, and adapting approaches to novel situations—the kinds of capabilities we readily recognize as intelligent whether exhibited by humans or machines.</span><a href="https://substack.com/redirect/59260c75-a5c9-43ef-9dd5-9a2458b47f56?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank"> Legg and Hutter (2007)</a><span> compiled over 70 definitions of intelligence, many of which would exclude current SOTA language models. Some definitions frame intelligence solely in terms of goal-achievement (“ability to achieve goals in a wide range of environments”), which seems too narrow—writing insightful responses to prompts surely qualifies as intelligent behavior. Other definitions wrongly require both learning capacity </span><i>and</i><span> performance capability, excluding both human infants and frozen models (see </span><a href="https://substack.com/redirect/472a7ea5-b1f5-4354-84b7-2a833adf290d?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank">“Why intelligence isn’t a thing”</a><span>).</span></p><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>These definitional debates don’t matter here. The important questions arise at the </span><i>high</i><span> end of the intelligence spectrum, not the low end. Whether some marginal capability counts as “intelligent” is beside the point.</span><b> What matters here is understanding what intelligence—even superhuman intelligence—</b><i><b>doesn’t necessarily entail</b></i><b>.</b><span> As we’ll see, high capability in goal-directed tasks doesn’t imply autonomous goal-pursuit as an organizing principle.</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">2</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>Popular doomer narratives reject the possibility of using highly capable AI to manage AI, because high-level intelligence is assumed to be a property of goal-seeking </span><i>entities</i><span> that will </span><i>inevitably</i><span> coordinate (meaning </span><i>all</i><span> of them) and </span><i>will rebel</i><span>. Here, the conjunctive assumption of “entities”, “inevitably”, “all”, and “will rebel” does far too much work.</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">3</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">Parameters are optimized via gradient descent to reduce loss on training tasks. Architectures are selected through research experimentation for capacity and inductive biases. Training procedures, data curation, and loss functions are selected based on capabilities produced.
All these use “fitness for purpose” as the metric, not system persistence.</p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">4</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>The Berkeley AI research group has documented this trend toward “compound AI systems” where applications combine multiple models, retrieval systems, and programmatic logic rather than relying on a single model. See </span><a href="https://substack.com/redirect/a6cbd116-bab3-4ddc-9c1a-16f2af4e4637?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank">“The Shift from Models to Compound AI Systems”</a><span> (2024).</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">5</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>AI systems increasingly help design architectures, optimize hyperparameters, generate training data, and evaluate other systems. 
This creates feedback loops accelerating AI development, but the “AI” here isn’t a persistent entity modifying itself—it’s a collection of tools in a development pipeline, with constituent models being created, modified, and discarded (see </span><a href="https://substack.com/redirect/3844d9d7-898c-4176-adf4-a1a08f3610f3?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank">“The Reality of Recursive Improvement: How AI Automates Its Own Progress”</a><span>)</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">6</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>Bostrom, 2014: </span><i><a href="https://substack.com/redirect/5ca79806-6970-4a50-8ab1-1384310c5aad?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank">Superintelligence: Paths, Dangers, Strategies</a></i><a href="https://substack.com/redirect/5ca79806-6970-4a50-8ab1-1384310c5aad?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank">.</a></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">7</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">This violates biological intuition because in evolved organisms intelligence and goals were never separable. Every cognitive capacity exists because it enabled behavior that served fitness. 
But this coupling isn’t fundamental to intelligence itself; it’s specific to how biological intelligence arose.</p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">8</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>Systems can still exhibit goal-directed or self-preserving behaviors through various pathways—reinforcement learning with environmental interaction, training on human goal-directed behavior (mimicry), architectural choices creating persistent goal-maintenance, or worse, economic optimisation pressures on AI/corporate entities (beware!). These represent “contingent agency”: risks from </span><i>specific conditions and design choices</i><span> rather than </span><i>inevitable consequences of capability.</i><span> RL illustrates this: even when systems learn from extended interaction, the goals optimized are externally specified (reward functions), and rewards are parameter updates that don’t sum to utilities. A system trained to win a game isn’t trained to “want” to play frequently, or at all. The distinction between foundational and contingent agency matters because contingent risks can be addressed through training approaches, architectural constraints, and governance, while foundational drives would be inherent and harder to counter. 
Section III examines these pressures in more detail.</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">9</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">Cats, as always, are enigmatic.</p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">10</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">Domestic dogs retain puppy-like features into adulthood and depend on human caregivers. Dairy cattle produce far more milk than wild ancestors but require human management.</p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">11</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">Robotic control and planning systems increasingly share this property through learning from human demonstrations, though typically at narrowly episodic levels.</p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">12</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>For example, see Christiano (2019) on </span><a href="https://substack.com/redirect/59ab0094-210d-435f-8cf2-26db41d08486?j=eyJ1Ijoibjg3OHIifQ.s6FUxM4AGjJWw13LU27jPIpH7q6JXD-IKGARRMJGuLc" rel="" style="color:#538cfa;text-decoration:none" target="_blank">“What 
failure looks like”</a><span> regarding how optimizing for human approval could lead to problematic outcomes even without misaligned autonomous agents.</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">13</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>Bostrom (</span><i>Superintelligence,</i><span> 2014) identifies goals that are convergently instrumental across final </span><i>(by definition, long-term)</i><span> goals.</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">14</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px"><span>Bostrom (</span><i>Superintelligence,</i><span> 2014): “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final plans and a wide range of situations...”.</span></p></div></div><div style="line-height:26px;display:flex;font-size:90%;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">15</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">And it’s how humans undertake smaller tasks with less formal (and sometimes blended) functional components.</p></div></div><div style="line-height:26px;display:flex;font-size:90%;margin-bottom:0;border-top:none;padding-top:0"><span style="display:block;margin-right:6px;min-width:24px">16</span><div style="font-size:16px;line-height:26px;display:block"><p style="margin:0 0 
20px 0;color:rgb(54,55,55);line-height:26px;font-size:16px;min-width:10px">Note that “corrigibility” isn’t a problem when the plans themselves include ongoing plan-revision.</p></div></div></div></div><div style="color:rgb(119,119,119);text-align:center;font-size:16px;line-height:26px;padding:24px 0"><p style="font-family:system-ui,-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Helvetica,Arial,sans-serif,'Apple Color Emoji','Segoe UI Emoji','Segoe UI Symbol';font-size:12px;line-height:16px;margin:0;color:rgb(119,119,119)">© 2025 <span>Eric Drexler</span><br>Trajan House, Mill St, Oxford OX2 0DJ UK</p></div></div></td><td></td></tr></tbody></table></div></div></div>