<div dir="ltr"><br><div>In some cases, Mother Nature (or God, or whoever you think is our creator) has miss wired our reward system (phenomenal joyes) with bad things (hurting others). But once we learn how to do phenomenal engineering, there is no reason for any of this to be the case. Being able to choose what you want to want, and having the ability to correct miswired rewards like this is what true freedom is.</div><div><br></div><div>So to think that truly intelligently designed beings will have problems like this seems wrong to me, and nothing to worry about. Again, I think AI's will save us from all this primitive, still broken irrationality.</div><div><br></div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Fri, Oct 3, 2025 at 9:20 AM Keith Henson via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Uploaded humans living in private spaces don't have to agree on<br>
anything. Their simulated world can be anything they like, including<br>
simulated slaves to beat. Not my ideal world, but I am sure there<br>
will be some who want it.<br>
<br>
Keith<br>
<br>
On Fri, Oct 3, 2025 at 2:37 AM BillK via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> On Fri, 3 Oct 2025 at 06:26, Adam A. Ford <<a href="mailto:tech101@gmail.com" target="_blank">tech101@gmail.com</a>> wrote:<br>
>><br>
>> > Getting what we desire may cause us to go extinct<br>
>> Perhaps what we need is indirect normativity<br>
>><br>
>> Kind regards, Adam A. Ford<br>
>> Science, Technology & the Future<br>
><br>
><br>
><br>
> Yes, everybody agrees that AI alignment is a problem that needs to be solved. :)<br>
> And using initial versions of AI to assist in devising alignment rules is a good idea. After all, we will be using AI to assist in designing everything else!<br>
> I see a few problems though. The early versions of AI are likely to be aligned to fairly specific values, say, in line with the values of the richest man in the world. This is unlikely to iterate into ethical versions suitable for humanity as a whole.<br>
> The whole alignment problem runs up against the conflicting beliefs and world views of the widely different groups of humanity.<br>
> These are not just theoretical differences of opinion. These are fundamental conflicts, leading to wars and destruction.<br>
> An AGI will have to be exceptionally persuasive to get all humans to agree with the final ethical system that it designs!<br>
><br>
> BillK<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>