[ExI] Yudkowsky - new book on AI dangers - Published Sept 2025

Brent Allsop brent.allsop at gmail.com
Thu May 15 21:30:24 UTC 2025


Yes, I agree with Keith.

And it'd sure help if people who really believe this would work to build
and track expert consensus around concise descriptions of the best
arguments, instead of just shouting one person's opinion into the void.

That is what we're doing on Canonizer with the "Friendly AI Importance"
topic. And as more people weigh in on this issue, the "Friendly AI is
Sensible
<https://canonizer.com/topic/16-Friendly-AI-Importance/9-FriendlyAIisSensible?is_tree_open=1>"
camp continues to fall further behind the "Such Concern Is Mistaken
<https://canonizer.com/topic/16-Friendly-AI-Importance/3-Such-Concern-Is-Mistaken?is_tree_open=1>"
camp.

Do any of the arguments in the book come anywhere close to the arguments in
the "AI can only be friendly
<https://canonizer.com/topic/16-Friendly-AI-Importance/2-AI-can-only-be-friendly?is_tree_open=1>"
camp?  Does he even address those arguments, which I find convincing,
anywhere in the book?

On Wed, May 14, 2025 at 5:51 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> It doesn't matter.
>
> Unless technical progress is stopped, we will face whatever problems AI
> generates (including possible extinction) sooner or later.  AI has
> upsides as well as downsides; it might prevent extinction if it
> develops sooner.
>
> In any case, you can't stop it in a world where you can run an AI on a
> high-end laptop.
>
> Best wishes,
>
> Keith
>
> On Wed, May 14, 2025 at 3:42 PM BillK via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
> > by Eliezer Yudkowsky (Author), Nate Soares (Author)
> >
> > Comment from - Stephen Fry, actor, broadcaster, and writer -
> >
> > The most important book I’ve read for years: I want to bring it to
> > every political and corporate leader in the world and stand over them
> > until they’ve read it. Yudkowsky and Soares, who have studied AI and
> > its possible trajectories for decades, sound a loud trumpet call to
> > humanity to awaken us as we sleepwalk into disaster. Their brilliant
> > gift for analogy, metaphor and parable clarifies for the general
> > reader the tangled complexities of AI engineering, cognition and
> > neuroscience better than any book on the subject I’ve ever read, and
> > I’ve waded through scores of them.
> > We really must rub our eyes and wake the **** up!
> > -----------------
> > Preorders here - <https://ifanyonebuildsit.com/?ref=nslmay>
> >
> > Amazon site -
> > <https://www.amazon.co.uk/Anyone-Builds-Everyone-Dies-All/dp/0316595640>
> > -----------------