[ExI] Yudkowsky - new book on AI dangers - Published Sept 2025

Keith Henson hkeithhenson at gmail.com
Wed May 14 23:49:46 UTC 2025


It doesn't matter.

Unless technical progress is stopped, we will face whatever problems AI
generates (including possible extinction) sooner or later.  AI has
upsides as well as downsides; it might prevent extinction if it
develops sooner.

In any case, you can't stop it in a world where you can run an AI on a
high-end laptop.

Best wishes,

Keith

On Wed, May 14, 2025 at 3:42 PM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
> by Eliezer Yudkowsky (Author), Nate Soares (Author)
>
> Comment from - Stephen Fry, actor, broadcaster, and writer -
>
> The most important book I’ve read for years: I want to bring it to
> every political and corporate leader in the world and stand over them
> until they’ve read it. Yudkowsky and Soares, who have studied AI and
> its possible trajectories for decades, sound a loud trumpet call to
> humanity to awaken us as we sleepwalk into disaster. Their brilliant
> gift for analogy, metaphor and parable clarifies for the general
> reader the tangled complexities of AI engineering, cognition and
> neuroscience better than any book on the subject I’ve ever read, and
> I’ve waded through scores of them.
> We really must rub our eyes and wake the **** up!
> -----------------
> Preorders here - <https://ifanyonebuildsit.com/?ref=nslmay>
>
> Amazon site -
> <https://www.amazon.co.uk/Anyone-Builds-Everyone-Dies-All/dp/0316595640>
> -----------------
> .
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
