[ExI] Eliezer new book is out now

spike at rainier66.com spike at rainier66.com
Mon Sep 22 16:10:42 UTC 2025



-----Original Message-----
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of BillK via extropy-chat
...


The Guardian newspaper has a review here:
<https://www.theguardian.com/books/2025/sep/22/if-anyone-builds-it-everyone-dies-review-how-ai-could-kill-us-all>
Quote:
>...Yudkowsky and Soares present their case with such conviction that it’s easy to emerge from this book ready to cancel your pension contributions. The glimmer of hope they offer – and it’s low wattage – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the commercial and strategic incentives, and the current state of political leadership, this seems a little unlikely.
----------------------
BillK

_______________________________________________


BillK, the Guardian so understates its case (understatement being a British habit) that it almost emphasizes the point.  Eliezer staked out that position a number of years ago, when he was still occasionally posting to ExI but after Less Wrong was already going.  After all these years I still remember that post, a comment which is burned into my retinas to this day.  He wrote: "...I have wept my last tear for humanity..."

That comment was in a post where he mapped out his vision of the future: human-level AI was coming, and of course once it reached that level it would be self-improving.  It was initially programmed by a warrior species (us), and even where not by the warrior class, then by the mercenary class (ehhhh... that would be me).  It may or may not care whether we humans accompany it to Nerdvana, but his notion was that it would not.  It wouldn't care about us.  Rather, it would occupy all available computing devices (every computer, every cell phone, every video game processor, every computer-controlled military weapon system), not necessarily slaying humans but sending us back to times before our automation, when the environmental carrying capacity was insufficient to sustain our population, and we no longer know how to survive that way.

In retrospect, I don't know how much of that was in his last-tear post and how much of it we filled in after he left, so please do not treat it as a quote or even a paraphrase of Eliezer.

We figured out at the time that his last-tear attitude was a kind of resignation, accepting the notion way back then that if anyone builds it, everyone dies.  And someone will build it eventually.  Hell, we have entire divisions of the military working on it, very thinly disguised under the name "Space Force" perhaps (heh, ja sure, Space Force (that's so hard to figure out (whaddya gonna do in space that you can't do better down here?  (have satellites dogfight each other?  (sheesh, of course that is your most secret military research, and of course it is AI.))))).  We have major universities teaching courses in it, and now it is a required course for computer science majors.  Engineers want that training too, and we know engineers: ready, fire, aim!  We build it first and worry about the ethics later, if at all.  Hey, we needed to see whether it could be done: if it cannot, the ethical problem goes away, and if it can, the ethical problem is someone else's.  We engineers can be that way.

In all that last-tear business, I can end this dirge with a cheerful note.  We have all seen the enormous benefit of highly capable software that appears to perform humanlike tasks: automated phone answering (OK, bad example perhaps), self-driving cars, video games (oooohhh, those video games today (did you ever imagine they would get this good?)), excellent AI-based education tools, our own BillK, who has mastered AI query techniques and really come up with insights for us.  In all that, what we have is not human-level AI, and it is not sentient at all, for it is really reflecting us in a mirror.  It reads around on the internet; it is really a super-advanced search engine, far better than anything before it, and highly effective.  But it is not sentient.  There may be some still-undiscovered and unknown but very adamant principle that human-like intelligence and human-like emotions ARE substrate dependent.  I have pondered that question since about the time Eliezer was born, and I don't know what the reason would be, but perhaps there is one.

There is another way I can end this funeral march for humanity: with a cheerful passage from a Sousa march.  Human emotion might not be substrate dependent, and if not, uploading might be theoretically possible.  AI may lead to uploading (for we offer it insights it might want), and if so... that... my human friend... is our only plausible ticket out of this brief life alive.  Religions have promised it over the ages, and we see where that has led (to mostly workable systems of ethics and some benefit (but life beyond the grave, nah)).  AI is a thread of hope.  Cryonics might be a path to that, and I see no other plausible escape.  Do you?  Some form of humanity might come thru.

I cannot pray, but we can always hope.

spike




More information about the extropy-chat mailing list