[ExI] Is AGI development going to destroy humanity?

BillK pharos at gmail.com
Sat Apr 2 17:19:53 UTC 2022


On Sat, 2 Apr 2022 at 16:31, Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> His main argument seems to be that AI will be unimaginably smarter than humans (achieving superintelligence near-instantaneously through the Technological Singularity process) therefore AI can do literally anything it wants with effectively infinite resources (including time since it will act so much faster than humanity), and unfriendly AI will have the same advantage over friendly AI since it is easier to destroy than to create.
>
<snip>


Assuming a super-intelligent, powerful AGI, perhaps it is not so much
likely to be unfriendly to humans as to hardly notice them. Humanity
could be destroyed almost accidentally when the AGI repurposed
resources essential to human life.

An opposite (but equally disastrous) possibility is an AGI programmed
to love humans that decides to protect and care for humanity
completely: no human evil permitted, no killing or violence, not even
verbal violence. Just quiet, total care.

There are so many ways an AGI could end humanity.


BillK


