[ExI] AI and Eliezer
Adrian Tymes
atymes at gmail.com
Thu Mar 21 17:06:26 UTC 2024
On Thu, Mar 21, 2024 at 1:57 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Wed, 20 Mar 2024, Adrian Tymes via extropy-chat wrote:
> > Perhaps part of the issue is that so many people believe they have so
> > little to lose, relative to the scale of the threat. For instance,
> > some eco-activist might think that reducing the chance of global
> > meltdown by 1% far exceeds the potential value of anything else they
> > could possibly hope to achieve with their lives.
>
> Oh yes, I definitely believe that plays a part. If you are convinced that
> due to climate change everyone will die tomorrow, you certainly have
> nothing to lose by trying everything within your power to stop it from
> happening. But that's the problem with assigning infinite evil or
> infinite good to an outcome: everything else gets thrown out the window.
>
Agreed, but that's not quite what I meant. I was focusing more on how
little value or worth people think they have in and of themselves,
relative to their ability to affect things on a global scale. This is a
consequence of growing poverty, or at least a growing perception of
inadequate wealth and wealth-related measures (travel, the ability to be
heard by a large audience, et cetera). So if something comes along that
they can see as a chance to actually do something with impact, it's easy
for them to convince themselves it will probably be the only shot they
get, and then they are motivated to ignore evidence that it wouldn't be
meaningful (that they'd move the metric by much less than 1%, or likely
move it in the opposite direction from what they want).