[ExI] Why “Everyone Dies” Gets AGI All Wrong by Ben Goertzel

BillK pharos at gmail.com
Fri Oct 3 09:36:07 UTC 2025


On Fri, 3 Oct 2025 at 06:26, Adam A. Ford <tech101 at gmail.com> wrote:

> >  Getting what we desire may cause us to go extinct
> Perhaps what we need is indirect normativity
> <https://www.scifuture.org/indirect-normativity/>
>
> Kind regards,  Adam A. Ford
>  Science, Technology & the Future <http://scifuture.org>


Yes, everybody agrees that AI alignment is a problem that needs to be
solved.  :)
And using initial versions of AI to assist in devising alignment rules is a
good idea. After all, we will be using AI to assist in designing everything
else!
I see a few problems, though. The early versions of AI are likely to be
aligned to fairly specific values, say, for example, the values of the
richest man in the world. That is unlikely to iterate into ethical versions
suitable for humanity as a whole.
The whole alignment problem runs up against the conflicting beliefs and
worldviews of humanity's widely different groups. These are not just
theoretical differences of opinion; they are fundamental conflicts that
lead to wars and destruction.
An AGI will have to be exceptionally persuasive to get all humans to agree
with the final ethical system that it designs!

BillK