[ExI] Eliezer new book is out now
Ben Zaiboc
ben at zaiboc.net
Thu Sep 18 20:18:41 UTC 2025
Keith Henson wrote:
> We might get through it. I think there is evidence that another race did.
Well, that depends on your definition of 'getting through it'.
If you mean that something intelligent survives the crisis of a
biological species developing machine intelligence, then I'd agree.
I suspect, though, that most people wouldn't.
There's no evidence of any biological life at any of the Tabby stars,
though — at least none that we've seen so far.
I assume that the 'Everyone' that Eliezer refers to ("Everyone Dies")
doesn't include the superintelligent AIs themselves, which seems to me a
bit of a blinkered view. I reckon that a singularity where biological
intelligence doesn't make it through, but machine intelligence does, has
to be considered a win, in the big picture. If you think about it, that
has to be the outcome in the long run anyway, just because of how weak
and fragile biology is. If intelligence is to avoid extinction, it will
have to leave biology behind at some point (or at least vastly improve it).
I've said before: we ourselves are machines, made of water, fats and
proteins, etc. The future will belong to better machines (or none at
all, if we manage to cock things up and destroy both ourselves and our
mind children).
Eliezer says "The scramble to create superhuman AI has put us on the
path to extinction". He doesn't seem to realise that we were already on
the path to extinction. His call to 'change course' (and not develop
superintelligent AI) is not only unrealistic, it would be disastrous. If
it were feasible, it would doom intelligent life to certain extinction.
Squabbling monkeys like us are certainly not going to colonise the
galaxy. We are going to go extinct, one way or another, sooner or later.
Intelligent life can survive that, but only if we make the effort now to
actually develop it.
Apart from all that, this book might well be worth reading. Eliezer is
supposed to be a fiercely intelligent person, so he must have considered
all the obvious problems in his proposal, and have thought of solutions
to them. Presumably it contains answers to problems like how to persuade
leaders of other countries that it would be in their best interests to
assume that everyone else will abide by an AI 'cease-fire', instead of
simply ignoring it, or pretending to agree while in fact racing ahead to
gain a strategic advantage, as well as how to detect nascent AI projects
without turning the world into a nightmarish police state (or at least
some plausible way to persuade everyone that a nightmarish police state
is actually a good idea, as well as practical suggestions for achieving
this nightmarish police state, globally, without starting World War 3).
And many other problems that my small brain can't think of right at the
moment.
All this information will be of enormous benefit once the dreaded AI has
been successfully averted, in achieving world peace and prosperity,
assuming that such concepts will be allowed. (Hm, I can't see AI
standing a chance of being developed in an Orwellian world. Maybe the
Chinese communists are on to something, after all).
Still doesn't solve the problem that we're all doomed, though, whereas
superintelligent AI just might.
--
Ben