[ExI] A Realistic Scenario of AI Takeover - Minute-By-Minute

Adrian Tymes atymes at gmail.com
Tue Oct 21 13:10:57 UTC 2025


On Tue, Oct 21, 2025 at 8:01 AM John Clark <johnkclark at gmail.com> wrote:
> On Mon, Oct 20, 2025 at 5:39 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> A nice explanation, and it stumbles right over the main objections to
>> the prediction.  To list a few:
>> 1) Assuming that literally everyone is online or is trackable online.
>> "A lot of people" or even "the majority of humanity" is not literally
>> every human on Earth.
>
> Not a serious criticism. If not every human on earth then every human on earth who is important in this matter.

They said "everyone dies".  "Everyone" means "everyone", not "every
human on earth who is important in this matter".

>> >2) Postulating that the AI can do certain things that humans can't
>> find a counter to,
>
> It said an AI can do things that a human can't if the AI is smarter than a human, and it said you can't permanently outsmart something that is smarter than you are. And those things are not postulates, they are facts.
>
>> > but at the same time can always find counters if someone or something - such as another AI - does these things.
>
> It said a smart AI can do things that a less smart AI can't, and that is also not a postulate, that is a fact.

Being smarter is not omniscience.

No, really.  Being smarter is not omniscience.  Being smarter does not
give it this absolute power that it will always perfectly outmaneuver
all opposition.  Even ants can get around human eradication attempts
at times: ask any pest control company.

Also, there might be other AIs just as smart as it is, if not smarter.
The claim is that the AI is simply in godmode - able to outdo all
opponents just because the narration says so - which is never
realistic.

>> > that it knows it can fully solve its own upgrade problems forever,
>
> They do not postulate that and they didn't need to.

They postulate that the AI is interested in upgrading itself, which
seems fair.  They also postulate that the AI thinks and cares quite a
bit about the means of upgrading itself.

It follows that the AI might consider how it was upgraded before it
could upgrade itself: via human endeavors, doing things the AI of the
time could not foresee - and thus that humans might have further
upgrades to offer.

>>  > postulates that the rogue AI can eventually subvert humans to do its will,
>
> That is a perfectly reasonable postulate. However the postulate that something very stupid can remain in control of something very smart forever is not a reasonable postulate.
>
>> >and humans can cross those air gaps.
>
> If no human can cross the air gap to a super intelligent AI then it would be a completely useless machine and there would be no reason that humans would want to build such a thing.

You miss the point of these two parts combined.  The video claims that
the AI fears other AIs being developed behind air gaps, but this fear
does not follow from the postulates the video has given.  The AI as
depicted would have a way to influence and sabotage even other AIs
behind air gaps, by making use of the humans that cross said air gaps.

This fear is given as the sole reason why such an AI might want to
kill all humans.  Without that fear, the motivation to kill everyone
(again, addressing the video's specific scenario, regardless of
whether there might be other reasons) goes away, and the case that it
would kill everyone falls apart.

> I read the book the video was based on and the book did comment about that. It said that if any rogue nation was constructing a building that would contain more than 8 state-of-the-art (as of 2025) Nvidia GPUs then the other nations of the world should unite and use any means necessary, up to and including nuclear weapons, to prevent the construction of that building from being finished. Their reasoning was that a nuclear war would kill billions of people but it wouldn't cause the extinction of the human race, but an AI takeover would. They admit there is very little likelihood of the world uniting in that way and that is why they are so pessimistic.

Uh huh.  And what happens if 8 turns out to be enough for such an AI
to kickstart itself - or if an AI uses multiple such buildings?

Also, what about arcades with more than 8 machines, each one of which
has its own state-of-the-art GPU?

Putting these notions together: even a large (say, 12- or 16-player*)
Starcraft 2 game can be a single software instance in which more than
8 state-of-the-art GPUs link up - one on each player's machine - and
in such cases each of those GPUs would typically be in a separate
building.  Beyond All Reason is a more recent game famous for
occasionally** hosting hundreds of players in a single match - see for
instance https://www.youtube.com/watch?v=gQJujiZ-oz0 .  These GPUs are
distributed around the world, and it is trivial to set up such
systems.  See also the "(whatever) At Home" software where people
donate unused CPU cycles to various causes - a sketch of how little
code such a setup takes follows the footnotes below.  An overnight
Curiosity Run as the video depicts would not require a single data
center.

* 8 is the usual max, but I've played on maps with more players.

** Again: this many at once is an exception, but it does happen.
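
To make the "trivial" claim concrete, here is a minimal sketch in
Python of an At-Home-style coordinator and volunteer client.  This is
not any real project's protocol - the host name, the work units, and
the squaring placeholder are all made up for illustration - but it
shows how little code it takes to spread a computation across any
number of donated machines in any number of buildings:

import json
import sys
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

WORK_UNITS = list(range(1000))  # stand-in for real tasks
RESULTS = {}                    # unit -> reported result

class Coordinator(BaseHTTPRequestHandler):
    """Hands out work units on GET, collects results on POST."""

    def do_GET(self):
        unit = WORK_UNITS.pop() if WORK_UNITS else None
        body = json.dumps({"unit": unit}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        report = json.loads(self.rfile.read(length))
        RESULTS[report["unit"]] = report["result"]
        self.send_response(204)  # accepted, nothing to send back
        self.end_headers()

def volunteer(server_url):
    """Run on each donated machine: fetch a unit, compute, report, repeat."""
    while True:
        with urllib.request.urlopen(server_url) as resp:
            unit = json.load(resp)["unit"]
        if unit is None:  # coordinator has run out of work
            break
        result = unit * unit  # placeholder for the real computation
        data = json.dumps({"unit": unit, "result": result}).encode()
        req = urllib.request.Request(server_url, data=data, method="POST")
        urllib.request.urlopen(req)

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "volunteer":
        volunteer("http://coordinator.example:8000")  # hypothetical host
    else:
        HTTPServer(("", 8000), Coordinator).serve_forever()

Each volunteer needs nothing but the coordinator's URL; nothing in
this requires the machines to share a building, let alone a data
center.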

So...yeah, even if one accepted all their premises, their suggested
solution still wouldn't work.


