<div dir="ltr"><div>> " Yes, everybody agrees that AI alignment is a problem that needs to be solved. :)<div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">And
using Initial versions of AI to assist in devising alignment rules is a
good idea. After all, we will be using AI to assist in designing
everything else!</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">I
see a few problems though. The early versions of AI are likely to be
aligned to fairly specific values. Say, for example, in line with the
values of the richest man in the world. This is unlikely to iterate into
ethical versions suitable for humanity as a whole.</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">The whole alignment problem runs up against the conflicting beliefs and world views of the widely different groups of humanity. </div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">These are not just theoretical differences of opinion. These are fundamental conflicts, leading to wars and destruction.</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">An AGI will have to be exceptionally persuasive to get all humans to agree with the final ethical system that it designs!"<br><br>> I don't see any of this as a problem at all. You just need to find
a way to build and track consensus around what EVERYONE wants. And then use a sorting algorithm which gives more voting weight to less rich people, and stuff like that (only a minor vote to AI systems or systems emulating dead people...?). After all, if you know what everyone wants, THAT, by definition, is consensus. And SAIs will help us know better what we as individuals really want and how to be just and fair with it all.<br><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Yeah, it may be the case that early AI naturally converges on, or gets programmed with, specific and naive values (like the parochial values of the richest). The good thing about indirect normativity, done adequately, is that the richest man, if <a href="https://www.scifuture.org/the-exploiters-paradox-selfish-ai-use-and-the-risk-of-backfire/">wise enough, may not want to risk perverse instantiations of his own parochial values</a>.<br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Conflicting beliefs are difficult - many people turn to pluralism to sort that out (e.g. Iason Gabriel). Pluralism may be a good input layer for value/preference/volition extraction, but even if AI were able to extract what everyone wanted, there would still be disagreements, blind spots, bad tradeoffs, incoherence and so on, so there would need to be principled approaches to resolving these issues (I'm partial to realism). It's unlikely AI would instantly become an ideal-observer-level intelligence, and hence it may opt to stage scientific/epistemic/moral progress, tempered along the way with corrigible humility (which may be part of the indirect normativity process).</div><font color="#888888"><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><br></div></font>
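<div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">To make the "weighted consensus" idea quoted above a bit more concrete, here is a toy sketch in Python. It is purely illustrative, not a proposal: the names, wealth figures, preference scores, the inverse-wealth weighting rule and the veto threshold are all made-up assumptions. It just shows one way to aggregate everyone's stated preferences while giving more voting weight to less rich people, and to flag options somebody can't live with so they get renegotiated rather than averaged away.<br></div><pre style="font-family:monospace;font-size:small">
# Toy sketch of "consensus with more voting weight for less rich people".
# All names, wealth figures and preference scores are invented for illustration.

from dataclasses import dataclass


@dataclass
class Person:
    name: str
    wealth: float            # arbitrary units
    prefs: dict[str, float]  # option name -> score in [0, 1]


def vote_weight(person: Person, min_wealth: float) -> float:
    """Weight votes inversely to wealth: the poorest person gets weight 1.0,
    richer people get proportionally less."""
    return min_wealth / person.wealth


def aggregate(people: list[Person]) -> dict[str, float]:
    """Weighted-average preference score per option."""
    min_wealth = min(p.wealth for p in people)
    totals: dict[str, float] = {}
    weight_sum = 0.0
    for p in people:
        w = vote_weight(p, min_wealth)
        weight_sum += w
        for option, score in p.prefs.items():
            totals[option] = totals.get(option, 0.0) + w * score
    return {option: total / weight_sum for option, total in totals.items()}


def flagged_for_renegotiation(people: list[Person], threshold: float = 0.1) -> set[str]:
    """Crude stand-in for principled conflict resolution: any option that
    someone scores below the threshold is flagged rather than steamrolled
    by the weighted average."""
    return {option
            for p in people
            for option, score in p.prefs.items()
            if score < threshold}


if __name__ == "__main__":
    people = [
        Person("A", wealth=1e9, prefs={"policy_x": 0.9, "policy_y": 0.2}),
        Person("B", wealth=3e4, prefs={"policy_x": 0.3, "policy_y": 0.8}),
        Person("C", wealth=5e4, prefs={"policy_x": 0.05, "policy_y": 0.7}),
    ]
    print(aggregate(people))                  # weighted consensus scores
    print(flagged_for_renegotiation(people))  # {'policy_x'}
</pre><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Running it, B and C dominate the weighted average despite A's wealth, and policy_x gets flagged for renegotiation because C scores it below the threshold. Of course this is nowhere near a principled resolution of conflicting values - the hard part is everything the toy leaves out (eliciting honest preferences, incoherence, blind spots, tradeoffs between incommensurable goods, and so on).<br></div>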
</div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Kind regards,<br></div><div> </div><div>Adam A. Ford<br><div><font size="1"> </font><br></div><font style="font-family:verdana,sans-serif" size="1"><span style="color:rgb(102,102,102)"><a href="http://scifuture.org" target="_blank">Science, Technology & the Future</a></span><span style="color:rgb(102,102,102)"> - </span><a href="http://www.meetup.com/Science-Technology-and-the-Future" target="_blank"><span style="color:rgb(102,102,102)"><span></span></span></a><a href="http://youtube.com/subscription_center?add_user=TheRationalFuture" target="_blank">YouTube</a></font><font size="1"><span style="color:rgb(102,102,102);font-family:verdana,sans-serif"><span> | <a href="https://www.facebook.com/adam.a.ford" target="_blank">FB</a> | <a href="https://x.com/adam_ford" target="_blank">X</a> </span></span><span style="color:rgb(102,102,102);font-family:verdana,sans-serif"><span>| <a href="https://www.linkedin.com/in/adamaford/" target="_blank">LinkedIn</a> | <a href="https://bsky.app/profile/adamford.bsky.social" target="_blank">Bsky</a> |</span></span><span style="color:rgb(102,102,102);font-family:verdana,sans-serif"><span> </span></span><span style="color:rgb(102,102,102);font-family:verdana,sans-serif"><span><a href="http://www.meetup.com/Science-Technology-and-the-Future" target="_blank">MU</a></span></span></font>
</div><div><div>
</div>
</div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, 5 Oct 2025 at 06:48, John Clark via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Sat, Oct 4, 2025 at 3:25 PM Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:</span></div></div><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><font size="4" face="georgia, serif"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>In some cases, Mother Nature (or God, or whoever you think is our creator) has miss wired our reward system (phenomenal joyes) with bad things (hurting others). But once we learn how to do phenomenal engineering, there is no reason for any of this to be the case. Being able to choose what you want to want, and having the ability to correct miswired rewards like this is what true freedom is.<span class="gmail_default"> </span>So to think that truly intelligently designed beings will have problems like this seems wrong to me, and nothing to worry about. </i></font></div></div></blockquote><div><br></div><font size="4" face="tahoma, sans-serif"><b>I think having complete<span class="gmail_default"> </span>control of your emotional control panel is something to worry about and I<span class="gmail_default">'ve</span> thought so for a long time. I wrote the following to the old Cryonics Mailing <span class="gmail_default">List </span>on January 19, 1994: </b></font><div><div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default"><font size="4" face="tahoma, sans-serif"><b><i>"Ever want to accomplish something but have been unable to because It's difficult, well just change your goal in life to something simple and do that; better yet, flood your mind with a feeling of pride and self satisfaction and don't bother accomplishing anything at all. Think all this is a terrible idea and stupid as well , no problem, just <span><span><span>change your mind</span></span></span> (and I do mean <span><span><span>CHANGE YOUR MIND</span></span></span>) now you think it's a wonderful idea. O.K., O.K. I'm exaggerating a little, the steps would probably be smaller, at least at first, but the result would be the same. I don't have the blueprints for a Jupiter brain in my pocket but I do know that complex mechanisms don't do well in a positive feedback loop, not electronics, not animals, not people and not Jupiter brains. 
True, you could probably set up negative feedback of some kind to counteract it, but that would result in a decrease in happiness, so would you really want to do that?"</i></b></font></div></div></div><div class="gmail_default"><font size="4" face="tahoma, sans-serif"><b><i><br></i></b></font></div><div class="gmail_default"><font size="4" face="tahoma, sans-serif"><b>The explanation of the Fermi paradox<span class="gmail_default"> may not be</span> that extraterrestrial civilizations end in a bang or a whimper, but that they end in a moan of orgastic pleasure. ET might be an electronic junkie. </b></font></div><div class="gmail_default"><font size="4" face="tahoma, sans-serif"><b><br></b></font></div><div class="gmail_default"><font size="4" face="tahoma, sans-serif"><b>John K Clark</b></font></div><div><font size="4" face="tahoma, sans-serif"><b> </b></font></div></div></div>
</blockquote></div>