[ExI] Is AGI development going to destroy humanity?

Brent Allsop brent.allsop at gmail.com
Sun Apr 3 18:29:38 UTC 2022


Just FYI: when we first created this topic
<https://canonizer.com/topic/16-Friendly-AI-Importance/1-Agreement>, over
ten years ago, to build and track as much consensus as possible on both
sides of this debate, support was roughly even between the two camps.
As time goes on, the Such Concern Is Mistaken
<https://canonizer.com/topic/16-Friendly-AI-Importance/3-Such-Concern-Is-Mistaken>
camp continues to extend its lead, and it now has more than twice the
support of the camp of those who are concerned about this
<https://canonizer.com/topic/16-Friendly-AI-Importance/9-FriendlyAIisSensible>.
It'd be great if a bunch more of you would weigh in on this; then we could
see whether this trend continues or reverses.

On Sat, Apr 2, 2022 at 11:21 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sat, 2 Apr 2022 at 16:31, Adrian Tymes via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > His main argument seems to be that AI will be unimaginably smarter than
> > humans (achieving superintelligence near-instantaneously through the
> > Technological Singularity), and can therefore do literally anything it
> > wants with effectively infinite resources (including time, since it will
> > act so much faster than humanity), and that unfriendly AI will have the
> > same advantage over friendly AI, since it is easier to destroy than to
> > create.
> >
> <snip>
>
>
> Assuming a super-intelligent, powerful AGI, perhaps it is not so much
> likely to be unfriendly to humans as to hardly notice them. Humanity
> would be destroyed almost incidentally when the AGI consumed something
> essential to human life.
> The opposite (but equally disastrous) outcome is an AGI that is
> programmed to love humans and decides to completely protect and care
> for humanity. No human evil is permitted, no killing or violence, not
> even verbal violence. Just quiet and complete care.
> There are so many ways an AGI could end humanity.
>
>
> BillK
>