[ExI] In the News

Keith Henson hkeithhenson at gmail.com
Thu Mar 6 02:47:24 UTC 2025


https://www.theguardian.com/global/ng-interactive/2025/mar/05/zizians-artificial-intelligence

Bloodthirsty Vegans, oh my.

(short snip)

In the 2000s, Yudkowsky began building on the work of earlier AI
theorists. In a series of blogposts, he argued that the tsunami was
coming – and would remake everything in its tidal path. By the time he
was 20, his writing won the attention of AI academics, who accepted
him into their ranks despite the fact that Yudkowsky never attended
high school.

Today Yudkowsky is regarded as a leader of the “doomers”, a faction
whose members believe that superintelligent AI will be unambiguously
bad for humanity and perhaps even cause our extinction. That wasn’t
always the case.

At first, Yudkowsky believed that the singularity had the potential to
be the best thing that ever happened to humanity. In the world he
hoped to bring about, a benevolent, centralized, god-like AI,
sometimes called a “singleton”, could end hunger and poverty and
protect the human species for eternity. But that AI, unless designed
carefully, could also prove to be disastrous to humanity.

Researchers call it the “alignment problem”: would a superintelligent
AI be hostile or benevolent? And is there any guarantee that its
understanding of benevolence aligns with ours?
^^^^^^^^^^^^^^

Between this and the bitcoin business, extropy has been getting
considerable attention.

Keith


