[ExI] Is AGI development going to destroy humanity?

Adrian Tymes atymes at gmail.com
Sat Apr 2 15:29:25 UTC 2022

On Sat, Apr 2, 2022 at 6:56 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> He is far more convinced than most of us that
> unfriendly AI will destroy humanity.

His main argument seems to be that AI will be unimaginably smarter than
humans (achieving superintelligence near-instantaneously through the
Technological Singularity process), and therefore able to do literally
anything it wants with effectively infinite resources (including time,
since it will act so much faster than humanity); further, that unfriendly
AI will have the same advantage over friendly AI, since it is easier to
destroy than to create.

Both parts of the argument fall flat, but in particular, he fails to
consider what kind of intelligence could take advantage of the
Technological Singularity process: inherently, one that is interested in
optimization and benefits.  This precludes active sociopathy, or malice
against the vast majority of humans, since it would never interact with
them.  (Not even to spread over the Earth, when there are so many more
accessible and uncontested resources off-planet.)

> In all of the grim scenarios I
> can easily foresee, most of the African continent survives.

I'd add in most of South America, and certain other parts too, but the
point stands: when talking about something affecting all of humanity, one
must discuss all of humanity, not just the first and maybe second world.