[ExI] Paul vs Eliezer

BillK pharos at gmail.com
Tue Apr 5 10:02:39 UTC 2022


On Tue, 5 Apr 2022 at 06:45, Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
<big snip>
> This is not to say that we couldn't create an AI that had pathological motivations and no capacity to change them, but I think the fear that any intelligence explosion inevitably or naturally leads to unfriendly AI is overblown.
>
> Jason


One big problem is that some humans (with huge resources) don't want a
friendly AGI. They want to design a weapon to fight their wars on
their behalf. Sure, in theory, if the AGI ever reaches superhuman
intelligence it might become a peace-loving hippy AGI, but during the
directed-weapon stage of AI, much of humanity could be destroyed
before that point.
See: <https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx>
Quote:
AI suggested 40,000 new possible chemical weapons in just six hours
‘For me, the concern was just how easy it was to do’
By Justine Calma, Mar 17, 2022
--------------

Also, as I mentioned, even if a benevolent AGI decides to design a
heaven for humans, it will need to redesign humans considerably so
that they actually want to live in that impeccably designed heaven.
The end result might not bear much resemblance to present-day humans.


BillK


