[ExI] AI 2027: A Realistic Scenario of AI Takeover

Adrian Tymes atymes at gmail.com
Mon Oct 6 20:32:49 UTC 2025


It's late 2025, and we have already missed the early milestones of
this scenario.  That suggests this scenario won't happen, at least not
on the timeline given.

More critically: most of the large AIs - the ones capable of recursive
operation - require large data centers to run on.  Even if they were
theoretically capable of "escaping the lab", there are few places they
could escape to - all of them heavily monitored, and most of them
already running rival AIs.  Runaway unmonitored self-replication is
stymied when Earth lacks the resources to run even 100 copies, and
every one of those copies would be monitored anyway.

There is room for smaller AIs to self-replicate onto many more
platforms, but a smaller AI would need to be able to self-improve to
pull off something like this scenario, and those who run
self-improving AIs generally see no point in using smaller models for
that work.

On Mon, Oct 6, 2025 at 4:22 PM John Clark via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> The people at AI2027 made this video about what they expect will happen between now and 2030 and it's pretty close to what I think will happen. Spike I really hope you watch it because even if you disagree with it at least you'll understand why I can't get all hot and bothered about the national debt. In their scenario there is a branch point around November 2027, one branch, the most likely branch, leads to human extinction but the other branch does not because the president made a wise decision. The trouble is in November 2027 He Who Must Not Be Named will still be in power.
>
> AI 2027: A Realistic Scenario of AI Takeover
>
> John K Clark
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
