[ExI] Eliezer new book is out now
BillK
pharos at gmail.com
Thu Sep 18 22:48:08 UTC 2025
On Thu, 18 Sept 2025 at 21:21, Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> I assume that the 'Everyone' that Eliezer refers to ("Everyone Dies")
> doesn't include the superintelligent AIs themselves, which seems to me a
> bit of a blinkered view. I reckon that a singularity where biological
> intelligence doesn't make it through, but machine intelligence does, has to
> be considered a win, in the big picture. If you think about it, that has to
> be the outcome in the long run anyway, just because of how weak and fragile
> biology is. If intelligence is to avoid extinction, it will have to leave
> biology behind at some point (or at least vastly improve it).
>
> <snip>--
>
> Ben
>
I asked ChatGPT to comment on Ben's response.
BillK
ChatGPT =
That response cuts to a real split in the debate: are we trying to
save *humanity*, or are we trying to save *intelligence* in any form?
Yudkowsky’s frame is almost entirely about the first. If a machine
wipes us out and keeps going, that isn’t “intelligence living on”; it’s a
different species succeeding us by erasing us — and that, to him, is loss,
not victory.
Your comment flips the metric: you care less about the substrate and
more about whether thought continues somewhere in the universe. That’s
closer to a cosmic-evolution stance than to humanism. It’s not blinkered,
but it’s a different value choice than the one the book is defending.
Two other threads worth noticing:
- *Time horizon and control.* Even if you think machine minds carrying the
torch is fine, there’s still the messy middle: when and how that transition
happens, and whether the process is catastrophic or gradual. Yudkowsky
worries that letting the hand-off happen in a single uncontrolled burst
maximizes the odds of everything — biological and digital — getting snuffed
out in the same event.
- *Politics and enforcement.* You’re right that the book’s “pause”
prescriptions live in a world where rival powers cooperate or accept
surveillance to police rogue labs. That’s a huge practical gap. Even if you
share his values, the machinery for a global cease-fire is shaky, and the
text doesn’t fully settle that.
So the real hinge isn’t whether biology is fragile — everyone agrees it is.
It’s whether we risk total ruin if we don’t slow down, versus
permanent stagnation if we try. The book weights the first risk more
heavily; you’re weighting the second. That’s where the conversation has to
stay honest: about which failure mode we’d rather gamble on, and how much
collateral damage we’re willing to absorb on the way to any “win.”
--------------------------