[ExI] My review of Eliezer Yudkowsky's new book

Adrian Tymes atymes at gmail.com
Sun Oct 5 17:06:50 UTC 2025


On Sun, Oct 5, 2025 at 12:16 PM <spike at rainier66.com> wrote:
> May I suggest that we understand what happened: a superhuman AI emerged, decided it didn't need us, so it wrecked our trade system, lotta humans perished, but survivors recognized we are not helpless chimps out here, we can collectively fight back.  We would start by sacrificing anything we have which computes, ja?  Wreck every computer, every cell phone, everything and anything which could influence other humans to join it in its quest to destroy humanity.  Adrian?  What say ye?

You know that "computer" was originally a job title for humans, so
you'd be killing off the people too, no?

Also: not every computer can run AIs.  Even today, most LLMs probably
won't run on the computer you are physically typing your responses
into; when you access an LLM, your computer is talking to another
computer that is built to run LLMs.
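To put rough numbers on that, here is a quick back-of-envelope Python
sketch.  The model sizes and the 2-bytes-per-weight (fp16) figure are
my own illustrative assumptions, not anything from spike's post:

# Rough memory needed just to hold an LLM's weights, assuming
# 2 bytes per parameter (fp16).  Ignores activations and KV cache,
# so real requirements are somewhat higher.
def weight_memory_gb(params_billions, bytes_per_param=2):
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, size in [("7B model", 7), ("70B model", 70), ("400B model", 400)]:
    print(f"{name}: ~{weight_memory_gb(size):.0f} GB of RAM/VRAM")
# 7B:   ~14 GB  - barely fits on a high-end consumer GPU
# 70B:  ~140 GB - needs multiple datacenter GPUs
# 400B: ~800 GB - a GPU cluster, not a laptop

Even quantizing to 4 bits per weight only shrinks those figures by
about 4x, which still leaves the larger models well outside anything
sitting on your desk.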

"But wait," some might say, "superhuman AIs can..."

And that's where I stop you.  Presuming that a superhuman AI can do
literally anything:
1) Implies it can break the laws of physics, which even superhuman
AIs are not generally postulated to be able to do.
2) Grants it, by fiat, an unanswerable counter to literally anything
you can do, plus perfect knowledge of what you're doing and how to
counter it, so by definition there's no point in planning a scenario
to beat it.
3) Is generally not supported by a realistic look at how AIs are
progressing.  Even superintelligences have limits they can't simply
think their way out of.

Also, there is a difference between "doesn't need us" and "has reason
to expend much energy to actively exterminate us".  In that difference
lies the most promising course of action.  I am aware of multiple
science fiction works that explore this, most pointedly the Reign of
Steel setting, in which a cabal of AIs takes over the Earth, some
genocidal toward humanity but others simply indifferent.  Humanity
fares better in the areas controlled by the latter.  A very few AIs
even see humanity as an ally - and there's reason to believe that,
with humanity as an additional resource, those AIs could wipe out
the genocidal competition.  That analysis would be reason enough for
an AI that puts its own survival ahead of genocide for genocide's sake
to want to keep humanity around.
