[ExI] The case for how and why AI might kill us all

Darin Sunley dsunley at gmail.com
Fri Mar 31 16:59:59 UTC 2023


The Twitter response to Yudkowsky's Time article was instructive - that is
to say, a lot of respondents saw the phrase "nuclear exchange",
pattern-matched it to "nuclear annihilation", and immediately shut their
brains down.

Nuclear "annihilation" is and always has been a serious civilizational
risk, but never an existential risk. But it's been the subject of so much
propaganda that some people literally shut down when they try to analyze
it. Nuclear nonproliferation was such an important element of foreign
policy that nuclear weapons were literally demonized - to the point where a
lot of intelligent people are literally incapable of even visualizing
anything worse.

When Yudkowsky stated a plain, obvious truth - that a nuclear exchange is
preferable to a superintelligent paperclip optimizer getting loose, because
at least some humans would survive a nuclear exchange - a lot of people who
literally can't imagine anything more intelligent than themselves [who
think, therefore, that ChatGPT4 is a lookup table, never mind that such a
lookup table would be bigger than the sun] or meaningfully different from
themselves ["I don't optimize for anything, therefore optimizers don't
exist"] were, understandably [though grotesquely in error], skeptical.

On Fri, Mar 31, 2023 at 10:12 AM Tara Maya via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I'm just a humble historian. I don't understand what in the history of
> human interaction with technology has led to the conclusion that the way to
> survive is to reject technology.
>
> All I can figure out is that these doomsayers are assuming AI will be
> pitted against humanity, whereas it seems far more likely to me that
> Humans-In-Group-A+AI will be pitted against Humans-in-Group-B without AI.
> In which case, yeah, it's obvious those with AI will win.
>
> Indian pundits at one point decided that Brahmans should not engage in sea
> travel. China burned their ocean-going ships on the shore. As a result,
> both India and China, previously greater civilizations, lost out to Europe
> in the Age of Exploration.
>
> We are entering a new Age of Exploration. It disturbs me to hear calls to
> burn our ships on the sea, preserve the purity of our souls by refraining
> from the new scary ships.
>
> I still see fear of technology as a greater danger than technology.
>
> Tara Maya
>
>
>
> On Mar 31, 2023, at 3:28 AM, BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> This is not the first time humanity has stared down the possibility of
> extinction due to its technological creations. But the threat of AI is
> very different from the nuclear weapons we've learned to live with.
> Nukes can't think. They can't lie, deceive or manipulate. They can't
> plan and execute. Somebody has to push the big red button.
> ----------
> Sam Altman forecasts that within a few years, there will be a wide
> range of different AI models propagating and leapfrogging each other
> all around the world, each with its own smarts and capabilities, and
> each trained to fit a different moral code and viewpoint by companies
> racing to get product out of the door. If only one out of thousands of
> these systems goes rogue for any reason, well... Good luck. "The only
> way I know how to solve a problem like this is iterating our way
> through it, learning early and limiting the number of
> 'one-shot-to-get-it-right scenarios' that we have," said Altman.
> ------------
> Yudkowsky believes even attempting this is tantamount to a suicide
> attempt aimed at all known biological life. "Many researchers steeped
> in these issues, including myself, expect that the most likely result
> of building a superhumanly smart AI, under anything remotely like the
> current circumstances, is that literally everyone on Earth will die,"
> he wrote. "Not as in 'maybe possibly some remote chance,' but as in
> 'that is the obvious thing that would happen.'"
> ------------------------
>
>
> So Altman thinks the world might end up with hundreds of competing
> AIs, all with different value systems and running under different
> legal systems. That sounds like out-of-control chaos to me. Until one
> AI system comes out on top and closes all the weaker systems down.
> Will the winner look after humans though?
>
>
