[ExI] ai 2027
John Clark
johnkclark at gmail.com
Fri Nov 28 18:27:15 UTC 2025
On Fri, Nov 28, 2025 at 12:17 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> *As with other such predictions, this site's timeline has enough internal
> contradictions to dismiss its predicted results. The simplest one is: at
> some point it predicts that China outright steals Agent-2... and yet, the
> best Chinese model for that time period and the one after is well behind
> Agent-2, despite the operation in question supposedly rocketing it right up
> to equivalence.*
*I asked Gemini the following question:*
*"Somebody told me that the following website has an inconsistency; he
claims it 'predicts that China outright steals Agent-2... and yet, the best
Chinese model for that time period and the one after is well behind
Agent-2, despite the operation in question supposedly rocketing it right up
to equivalence'. Is there really an inconsistency? https://ai-2027.com/"*
*And this is what Gemini said in response:*
*==*
Yes, based on the logic of how Large Language Models (LLMs) and "weight
theft" work, your friend has identified a valid *logical inconsistency* (or
at least a confusing contradiction) often cited in critiques of the "AI
2027" scenario at ai-2027.com (by Daniel Kokotajlo and collaborators) and
of Leopold Aschenbrenner's related "Situational Awareness" essay.
Here is a breakdown of why that inconsistency exists and the mechanics
behind it.
The Logic of the Inconsistency
To understand why the claim contradicts the graph/prediction, you have to
look at the difference between *Training* and *Inference*.
1. *The "Theft" Claim:* The text (and Aschenbrenner's general thesis)
argues that US lab security is currently insufficient and that state actors
(like China/CCP) could "outright steal" the model weights.
2. *What "Stealing Weights" Means:* If you steal the weights of "Agent-2"
(presumably an AGI-level model in this timeline), you do not need to train
the model. You effectively *have* the model. You only need the compute
power to *run* it (inference), which is a tiny fraction of the compute
power required to *train* it (see the back-of-envelope sketch after this
list).
3. *The Resulting Expectation:* If China steals Agent-2, China's "Best
Model" should immediately become *equal to Agent-2*.
Where the Website Conflicts
The inconsistency arises because the website's visual timelines or specific
capability forecasts often depict China as maintaining a *persistent lag*
(e.g., being 1–2 years behind the US).
- *The Text says:* China steals the model (Event A).
- *The Graph implies:* China is still struggling to catch up or is running
a model inferior to Agent-2 (Event B).
If China had stolen the weights, the line on the graph representing "Best
Chinese Model" should have jumped vertically to meet the US line the moment
the theft occurred. If the graph shows China remaining "well behind"
Agent-2, it implies they are relying on *indigenous development* (building
it themselves from scratch), which contradicts the text claiming they
successfully stole it.
Possible "Steel-man" Explanations
While it looks like an error, proponents of the scenario might argue a few
nuances, though these are often considered weak defenses against this
specific point:
1. *The "Deployment" Lag:* They might argue that even with the weights,
China lacks the specific infrastructure or system prompts to deploy it
effectively immediately. However, for a state actor, this delay would be
measured in weeks, not the long durations implied by the charts.
2. *The "Next Token" Defense:* The chart might be tracking the "Ability to
Train the *Next* Model" rather than "Best Available Model." Even if China
steals Agent-2, they don't necessarily understand *how* it was built or
have the compute to build Agent-3. They are stuck using a stolen Agent-2
while the US moves on to Agent-3. However, the user prompt specifically
notes the Chinese model is "behind Agent-2," which invalidates this
defense.
3. *Inconsistent Narratives:* The most likely reality is that the author is
presenting two different warnings that conflict when placed on the same
timeline:
   - *Warning 1:* "China will steal our secrets!" (implying parity).
   - *Warning 2:* "The US must maintain its lead!" (implying a gap).
   When visualized, the author accidentally prioritized the "US Lead"
   visual (Warning 2) over the logical consequence of the "Theft"
   (Warning 1).
Verdict
*Your friend is correct.*
If a prediction claims a competitor "outright steals" a specific AI model,
but simultaneously predicts that competitor's best model is "well behind"
that same stolen model, it is a contradiction. Possession of the weights
*is* possession of the model's capabilities.
*John K Clark*