<div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 3 Oct 2025 at 06:26, Adam A. Ford <<a href="mailto:tech101@gmail.com" target="_blank">tech101@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>>
Getting what we desire may cause us to go extinct</div><div>Perhaps what we need is <a href="https://www.scifuture.org/indirect-normativity/" target="_blank">indirect normativity</a></div><div><br></div><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Kind regards,<span class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)"> </span>Adam A. Ford<br><div><font size="1"> </font><font style="font-family:verdana,sans-serif" size="1"><span style="color:rgb(102,102,102)"><a href="http://scifuture.org" target="_blank">Science, Technology & the Future</a></span><span style="color:rgb(102,102,102)"> </span></font></div></div><div><div>
</div>

Yes, everybody agrees that AI alignment is a problem that needs to be solved. :)

And using initial versions of AI to assist in devising alignment rules is a good idea. After all, we will be using AI to assist in designing everything else!

I see a few problems, though. The early versions of AI are likely to be aligned to fairly specific values, say, in line with the values of the richest man in the world. That is unlikely to iterate into ethical versions suitable for humanity as a whole.

The whole alignment problem runs up against the conflicting beliefs and world views of the widely different groups of humanity. These are not just theoretical differences of opinion; they are fundamental conflicts, leading to wars and destruction.

An AGI will have to be exceptionally persuasive to get all humans to agree with the final ethical system that it designs!

BillK