<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:#000000">The book "If Anyone Builds It, Everyone Dies" describes the dangers of developing AGI. I wondered whether foreign nations, such as China, could support this idea of slowing Western technological development while advancing their own AI development.</div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:#000000">The AI produced an interesting report.</div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:#000000">BillK</div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:#000000"><br></div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:#000000">
<h1>The Geopolitics of Existential Risk and the "If Anyone Builds It, Everyone Dies" Thesis</h1><p>The proposition that Artificial General Intelligence (AGI) poses an existential threat to humanity is most prominently articulated in the book <i>If Anyone Builds It, Everyone Dies</i> by Eliezer Yudkowsky and Nate Soares. The core thesis posits that the creation of a superintelligent entity—defined as an intellect that is much smarter than the best human brains in practically every field—will lead to human extinction by default due to the difficulty of the "alignment problem."<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> <a id="m_-7472641628150889338gmail-fnref:2" href="#m_-7472641628150889338_fn:2" title="see footnote">[2]</a> This problem arises because a superintelligence will likely pursue goals that are not perfectly aligned with human values, and in doing so, it will treat humans as obstacles or simply as matter to be repurposed for its own objectives.<a id="m_-7472641628150889338gmail-fnref:3" href="#m_-7472641628150889338_fn:3" title="see footnote">[3]</a> The book argues that because we cannot "iterate" on a catastrophe that kills everyone, the standard engineering approach of trial and error is insufficient for AI safety.<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a></p>
<p>According to <a href="http://www.iAsk.Ai" target="_blank">www.iAsk.Ai</a> - Ask AI:</p><p>The concern that foreign adversaries, specifically China, might weaponize "AI safety" rhetoric to slow Western development while secretly accelerating their own is a significant theme in modern geopolitical discourse. This dynamic is often referred to as the "AI Race" or the "Security Dilemma."<a id="m_-7472641628150889338gmail-fnref:4" href="#m_-7472641628150889338_fn:4" title="see footnote">[4]</a> In international relations theory, a security dilemma occurs when one state's efforts to increase its security (such as developing advanced AI) are perceived as a threat by another state, leading to an escalatory spiral.<a id="m_-7472641628150889338gmail-fnref:5" href="#m_-7472641628150889338_fn:5" title="see footnote">[5]</a> Critics of the "doom" narrative argue that if the West pauses development based on the warnings in <i>If Anyone Builds It, Everyone Dies</i>, it creates a power vacuum that an authoritarian regime could fill, potentially leading to a world governed by an unaligned or maliciously aligned AI.<a id="m_-7472641628150889338gmail-fnref:6" href="#m_-7472641628150889338_fn:6" title="see footnote">[6]</a></p>
<h1>The Strategic Logic of Slowing the Adversary</h1><p>The idea that a nation might support international moratoriums or safety regulations to hinder a rival is a well-documented strategy in technological history. In the context of AI, this is often viewed through the lens of "regulatory capture" or "geopolitical sabotage."<a id="m_-7472641628150889338gmail-fnref:7" href="#m_-7472641628150889338_fn:7" title="see footnote">[7]</a> If China were to publicly endorse the existential risk (x-risk) framework, it could encourage Western policymakers to implement stringent "compute caps" or licensing requirements that stifle innovation in Silicon Valley.<a id="m_-7472641628150889338gmail-fnref:4" href="#m_-7472641628150889338_fn:4" title="see footnote">[4]</a> <a id="m_-7472641628150889338gmail-fnref:8" href="#m_-7472641628150889338_fn:8" title="see footnote">[8]</a></p><p>However, evidence suggests that the Chinese Communist Party (CCP) views AI as a "leapfrog" technology essential for national rejuvenation and military parity with the United States.<a id="m_-7472641628150889338gmail-fnref:9" href="#m_-7472641628150889338_fn:9" title="see footnote">[9]</a> In <i>AI Superpowers: China, Silicon Valley, and the New World Order</i>, Kai-Fu Lee notes that China’s approach is characterized by a "Sputnik moment" mentality, where the state provides massive subsidies and data access to ensure dominance.<a id="m_-7472641628150889338gmail-fnref:10" href="#m_-7472641628150889338_fn:10" title="see footnote">[10]</a> Therefore, any support for "slowing down" would likely be a tactical feint rather than a genuine shift in doctrine, as the CCP perceives the risk of being second in the AI race as greater than the theoretical risk of extinction.[<a id="m_-7472641628150889338gmail-fnref:11" href="#m_-7472641628150889338_fn:11" title="see footnote">[11]</a> (Print)]</p>
<h1>The MIRI Perspective and the "Race to the Bottom"</h1><p>Max Harms and other researchers at the Machine Intelligence Research Institute (MIRI) argue that the "race" itself is the primary driver of risk.<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> The logic is that if two or more parties are racing to build AGI, they will be incentivized to cut corners on safety to reach the finish line first. This creates a "race to the bottom" in safety standards.<a id="m_-7472641628150889338gmail-fnref:12" href="#m_-7472641628150889338_fn:12" title="see footnote">[12]</a> From this perspective, the argument that "we must build it before China does" is a false dichotomy because if the technology is inherently uncontrollable, the winner of the race simply becomes the first to be destroyed by their own creation.<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> <a id="m_-7472641628150889338gmail-fnref:13" href="#m_-7472641628150889338_fn:13" title="see footnote">[13]</a></p><p>In <i>Superintelligence: Paths, Dangers, Strategies</i>, Nick Bostrom explores the "decisive strategic advantage" that the first AGI would provide.<a id="m_-7472641628150889338gmail-fnref:3" href="#m_-7472641628150889338_fn:3" title="see footnote">[3]</a> If a nation believes that the first AGI will allow them to dominate the world, they have every incentive to ignore safety warnings. This makes the "If Anyone Builds It, Everyone Dies" thesis a hard sell in the halls of power in Washington or Beijing, where the focus is on relative gains rather than universal risks.<a id="m_-7472641628150889338gmail-fnref:14" href="#m_-7472641628150889338_fn:14" title="see footnote">[14]</a></p>
<h1>China’s Stance on AI Safety and Governance</h1><p>Contrary to the idea that China is purely ignoring safety, the Chinese government has released its own ethical guidelines for AI, such as the "New Generation AI Ethics Code."<a id="m_-7472641628150889338gmail-fnref:15" href="#m_-7472641628150889338_fn:15" title="see footnote">[15]</a> However, scholars like Graham Webster argue that these regulations are primarily designed for social control and domestic stability rather than addressing the "hard" alignment problem described by Yudkowsky.<a id="m_-7472641628150889338gmail-fnref:16" href="#m_-7472641628150889338_fn:16" title="see footnote">[16]</a> China’s participation in international safety summits, such as the Bletchley Declaration, suggests a willingness to engage in the rhetoric of safety, but many Western analysts remain skeptical, viewing it as a way to maintain access to Western hardware (like NVIDIA chips) and research.<a id="m_-7472641628150889338gmail-fnref:17" href="#m_-7472641628150889338_fn:17" title="see footnote">[17]</a></p><p>The "If Anyone Builds It, Everyone Dies" framework suggests that even a small probability of total extinction should outweigh any geopolitical advantage.<a id="m_-7472641628150889338gmail-fnref:2" href="#m_-7472641628150889338_fn:2" title="see footnote">[2]</a> Yet, in the realm of Realpolitik, states rarely prioritize long-term existential risks over immediate security threats. As Henry Kissinger and Eric Schmidt argue in <i>The Age of AI: And Our Human Future</i>, the lack of a common "grammar" for AI arms control makes it difficult for nations to trust that a pause by one side won't be exploited by the other.<a id="m_-7472641628150889338gmail-fnref:18" href="#m_-7472641628150889338_fn:18" title="see footnote">[18]</a></p>
<h1>Corrigibility as a Potential Middle Ground</h1><p>To bridge the gap between total cessation and a reckless race, some researchers propose "Corrigibility as a Singular Target" (CAST).<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> This approach, advocated by Max Harms, suggests training AI to have no values other than deferring to human operators. This would theoretically create a "safe" tool that does not have the instrumental drive to resist being shut down or modified.<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> If this could be proven empirically, it might provide a path for both the West and China to develop AI without the immediate fear of a "violent takeover."<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> However, as Harms notes, this field is currently neglected, and the default path remains the one described in Yudkowsky’s book: a high-speed race toward a potentially lethal finish line.<a id="m_-7472641628150889338gmail-fnref:1" href="#m_-7472641628150889338_fn:1" title="see footnote">[1]</a> <a id="m_-7472641628150889338gmail-fnref:19" href="#m_-7472641628150889338_fn:19" title="see footnote">[19]</a></p>
<h1>Conclusion: The Paradox of Universal Risk</h1><p>The tension between the "If Anyone Builds It, Everyone Dies" thesis and geopolitical reality creates a paradox. If the thesis is correct, then the current AI race is a collective suicide pact.<a id="m_-7472641628150889338gmail-fnref:2" href="#m_-7472641628150889338_fn:2" title="see footnote">[2]</a> If the thesis is wrong, or even just exaggerated, then slowing down for safety reasons could result in a strategic catastrophe for the West.<a id="m_-7472641628150889338gmail-fnref:20" href="#m_-7472641628150889338_fn:20" title="see footnote">[20]</a> The possibility that China could use this rhetoric to its advantage is a valid concern for intelligence communities, but it does not inherently invalidate the technical arguments regarding the difficulty of aligning a superintelligent agent.<a id="m_-7472641628150889338gmail-fnref:21" href="#m_-7472641628150889338_fn:21" title="see footnote">[21</a></p><p><br></p><div>
<hr>
<div>
<h3>World's Most Authoritative Sources</h3>
</div>
<ol><li id="m_-7472641628150889338gmail-fn:1">Harms, Max. "Max Harms on why teaching AI right from wrong could get everyone killed." <a href="https://80000hours.org/podcast/episodes/max-harms-miri-superintelligence-corrigibility/" rel="nofollow noopener noreferrer" target="_blank">80,000 Hours Podcast</a><a href="#m_-7472641628150889338_fnref:1" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:2">Yudkowsky, Eliezer and Soares, Nate. <i>If Anyone Builds It, Everyone Dies.</i> (Print)<a href="#m_-7472641628150889338_fnref:2" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:3">Bostrom, Nick. <i>Superintelligence: Paths, Dangers, Strategies.</i> Oxford University Press. (Print)<a href="#m_-7472641628150889338_fnref:3" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:4">Allison, Graham. <i>Destined for War: Can America and China Escape Thucydides's Trap?</i> Houghton Mifflin Harcourt. (Print)<a href="#m_-7472641628150889338_fnref:4" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:5">Jervis, Robert. "Cooperation Under the Security Dilemma." <i>World Politics</i>, vol. 30, no. 2. (Academic Journal)<a href="#m_-7472641628150889338_fnref:5" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:6">Kissinger, Henry A., Schmidt, Eric, and Huttenlocher, Daniel. <i>The Age of AI: And Our Human Future.</i> Little, Brown and Company. (Print)<a href="#m_-7472641628150889338_fnref:6" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:7">Stigler, George J. "The Theory of Economic Regulation." <i>The Bell Journal of Economics and Management Science</i>. (Academic Journal)<a href="#m_-7472641628150889338_fnref:7" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:8">"The AI Race and Geopolitical Stability." <a href="https://www.csis.org/" rel="nofollow noopener noreferrer" target="_blank">Center for Strategic and International Studies (CSIS)</a><a href="#m_-7472641628150889338_fnref:8" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:9">Kania, Elsa B. <i>Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power.</i> Center for a New American Security. (Print)<a href="#m_-7472641628150889338_fnref:9" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:10">Lee, Kai-Fu. <i>AI Superpowers: China, Silicon Valley, and the New World Order.</i> Houghton Mifflin Harcourt. (Print)<a href="#m_-7472641628150889338_fnref:10" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:11">Roberts, Huw, et al. "The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation." <i>AI & Society</i>. (Academic Journal)<a href="#m_-7472641628150889338_fnref:11" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:12">Armstrong, Stuart. <i>Smarter Than Us: The Rise of Machine Intelligence.</i> Machine Intelligence Research Institute. (Print)<a href="#m_-7472641628150889338_fnref:12" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:13">Russell, Stuart. <i>Human Compatible: Artificial Intelligence and the Problem of Control.</i> Viking. (Print)<a href="#m_-7472641628150889338_fnref:13" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:14">Mearsheimer, John J. <i>The Tragedy of Great Power Politics.</i> W. W. Norton & Company. (Print)<a href="#m_-7472641628150889338_fnref:14" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:15">"Ethical Norms for New Generation Artificial Intelligence." <a href="https://www.most.gov.cn/" rel="nofollow noopener noreferrer" target="_blank">Ministry of Science and Technology of the People's Republic of China</a><a href="#m_-7472641628150889338_fnref:15" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:16">Webster, Graham. "China's AI Governance Strategy." <a href="https://digichina.stanford.edu/" rel="nofollow noopener noreferrer" target="_blank">Stanford University DigiChina Project</a><a href="#m_-7472641628150889338_fnref:16" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:17">"The Bletchley Declaration on AI Safety." <a href="https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration" rel="nofollow noopener noreferrer" target="_blank">UK Government (.gov)</a><a href="#m_-7472641628150889338_fnref:17" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:18">Scharre, Paul. <i>Four Battlegrounds: Power in the Age of Artificial Intelligence.</i> W. W. Norton & Company. (Print)<a href="#m_-7472641628150889338_fnref:18" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:19">Christian, Brian. <i>The Alignment Problem: Machine Learning and Human Values.</i> W. W. Norton & Company. (Print)<a href="#m_-7472641628150889338_fnref:19" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:20">"Artificial Intelligence and National Security." <a href="https://crsreports.congress.gov/" rel="nofollow noopener noreferrer" target="_blank">Congressional Research Service (.gov)</a><a href="#m_-7472641628150889338_fnref:20" title="return to article">↩</a></li><li id="m_-7472641628150889338gmail-fn:21">Ord, Toby. <i>The Precipice: Existential Risk and the Future of Humanity.</i> Hachette Books. (Print)<a href="#m_-7472641628150889338_fnref:21" title="return to article">↩</a></li></ol>
</div>--------------------------------------------</div></div>
</div>