[ExI] Why “Everyone Dies” Gets AGI All Wrong by Ben Goertzel
Brent Allsop
brent.allsop at gmail.com
Sat Oct 4 19:22:50 UTC 2025
In some cases, Mother Nature (or God, or whoever you think is our creator)
has miswired our reward system, attaching phenomenal joys to bad things
(such as hurting others). But once we learn how to do phenomenal engineering,
there is no reason for any of this to remain the case. Being able to choose
what you want to want, and to correct miswired rewards like this, is what
true freedom is.
So the idea that truly intelligently designed beings will have problems
like this seems wrong to me, and nothing to worry about. Again, I think
AIs will save us from all this primitive, still-broken irrationality.
On Fri, Oct 3, 2025 at 9:20 AM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Uploaded humans living in private spaces don't have to agree on
> anything. Their simulated world can be anything they like, including
> simulated slaves to beat. Not my ideal world, but I am sure there
> will be some who want it.
>
> Keith
>
> On Fri, Oct 3, 2025 at 2:37 AM BillK via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > On Fri, 3 Oct 2025 at 06:26, Adam A. Ford <tech101 at gmail.com> wrote:
> >>
> >> > Getting what we desire may cause us to go extinct
> >> Perhaps what we need is indirect normativity
> >>
> >> Kind regards, Adam A. Ford
> >> Science, Technology & the Future
> >
> >
> >
> > Yes, everybody agrees that AI alignment is a problem that needs to be
> solved. :)
> > And using initial versions of AI to assist in devising alignment rules
> is a good idea. After all, we will be using AI to assist in designing
> everything else!
> > I see a few problems, though. The early versions of AI are likely to be
> aligned to fairly specific values. Say, for example, in line with the
> values of the richest man in the world. This is unlikely to iterate into
> ethical versions suitable for humanity as a whole.
> > The whole alignment problem runs up against the conflicting beliefs and
> world views of the widely different groups of humanity.
> > These are not just theoretical differences of opinion. These are
> fundamental conflicts, leading to wars and destruction.
> > An AGI will have to be exceptionally persuasive to get all humans to
> agree with the final ethical system that it designs!
> >
> > BillK