[ExI] Why “Everyone Dies” Gets AGI All Wrong by Ben Goertzel
Keith Henson
hkeithhenson at gmail.com
Fri Oct 3 15:19:34 UTC 2025
Uploaded humans living in private spaces don't have to agree on
anything. Their simulated world can be anything they like, including
simulated slaves to beat. Not my ideal world, but I am sure there
will be some who want it.
Keith
On Fri, Oct 3, 2025 at 2:37 AM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Fri, 3 Oct 2025 at 06:26, Adam A. Ford <tech101 at gmail.com> wrote:
>>
>> > Getting what we desire may cause us to go extinct
>> Perhaps what we need is indirect normativity
>>
>> Kind regards, Adam A. Ford
>> Science, Technology & the Future
>> _______________________________________________
>
>
>
> Yes, everybody agrees that AI alignment is a problem that needs to be solved. :)
> And using initial versions of AI to assist in devising alignment rules is a good idea. After all, we will be using AI to assist in designing everything else!
> I see a few problems, though. The early versions of AI are likely to be aligned to fairly specific values, say those of the richest man in the world. This is unlikely to iterate into ethical versions suitable for humanity as a whole.
> The whole alignment problem runs up against the conflicting beliefs and world views of the widely different groups of humanity.
> These are not just theoretical differences of opinion. These are fundamental conflicts, leading to wars and destruction.
> An AGI will have to be exceptionally persuasive to get all humans to agree with the final ethical system that it designs!
>
> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat