<div dir="auto"><div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><u></u>
<div>
<div>On 31/10/2025 19:04, Jason Resch wrote:<br>
</div>
<blockquote type="cite">
<pre><div>the paper ( <a href="https://philarchive.org/rec/ARNMAW" target="_blank" rel="noreferrer">https://philarchive.org/rec/ARNMAW</a> ) defines what a perfect morality consists of. And it too, provides a definition of what morality is, and likewise provides a target to aim towards.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Ben Wrote:
As different intelligent/rational agents have different experiences,
they will form different viewpoints, and come to different conclusions
about what is right and not right, what should be and what should not,
what they want and what they don't, just like humans do.</blockquote><div>
</div><div>The point of the video and article is that desires are based on beliefs, and because beliefs are correctable then so are desires. There is only one "perfect grasp" and accordingly, one true set of beliefs, and from this it follows one most-correct set of desires. This most correct set of desires is the same for everyone, regardless of from which viewpoint it is approached.</div></pre>
</blockquote>
<br>
Nope. This is nonsense. Just about every assertion is wrong. The
very first sentence in the abstract is false. And the second. And
the third. So the whole thing falls apart.<br>
<br>
Desires are not based on beliefs, they are based on emotions. The
example of 'wanting to drink hot mud' is idiotic. Just because the
cup turns out to contain mud doesn't invalidate the desire to drink
hot chocolate.</div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I think you are misinterpreting the example. It is the desire to drink the contents of the cup that changes in response to new information.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">Consider an alternate example which may be easier to think about: you may naively have the desire to take a certain job, to marry a particular person, or to attend a certain event, but if that choice turns out to be ruinous, you may regret that decision. If your future self could warn you of the consequences of that choice, then you may no longer desire that job, marriage, or attendance as much as you previously did, in light of the costs it bore, of which you were previously unaware.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div> It's not a 'mistaken' desire at all (the mistake is a
sensory one), and it doesn't somehow morph into a desire to drink
hot mud.<br>
<br>
"Beliefs are correctable, so desires are correctable"<br>
Each of those two things are true (if you change 'correctable' to
'changeable'), but the one doesn't imply the other, which follows
from the above.<br></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Does that apply to the examples I provided?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<br>
'Perfect grasp' doesn't mean anything real. It implies that it's
possible to know everything about everything, or even about
something. The very laws of physics forbid this, many times over, so
using it in an argument is equivalent to saying "magic".<br></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">It doesn't have to be possible. The paper is clear on this. The goal of the paper is to answer objectively what makes a certain thing right or wrong. For example, if someone offered you $10 and in return some random person unknown to you would be killed, in a way that would not negatively affect you or anyone you knew, and your memory of the ordeal would be wiped so you wouldn't even bear a guilty conscience, on what grounds do we judge and justify the wrongness of taking the $10?</div><div dir="auto"><br></div><div dir="auto">This is the goal of the paper: to provide a foundation upon which morality can be established objectively from first principles.</div><div dir="auto"><br></div><div dir="auto">How would you answer the question of what separates right from wrong? The initial utilitarian answer is whatever promotes more good experiences than bad experiences. But then, how do you weigh the relative goodness or badness of one experience vs. another, between one person and another, or between the varying missed opportunities among future possibilities?</div><div dir="auto"><br></div><div dir="auto">Such questions can only be answered with something approximating an attempt at a grasp of what it means, and what it is like, to be all the various existing and potential conscious things.</div><div dir="auto"><br></div><div dir="auto">We can make heuristic attempts at this, despite the fact that we never achieve perfection.</div><div dir="auto"><br></div><div dir="auto">For example, democracy can be viewed as a crude approximation, in which each person is given equal weight in the consideration of their desires (with no attempt to weight relative benefits or suffering). But this is still better than an oligarchy, where the desires of a few are considered while the desires of the masses are ignored. You can also see the difference between an uninformed electorate and a well-informed one. The informed electorate has a better grasp of the consequences of its decisions, and so its collective desires are more fully fulfilled.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<br>
'One true set of beliefs' is not only wrong, it's dangerous, which
he just confirms by saying it means there is only one most-correct
set of desires, for /everyone/ (!).</div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Do you not believe in objective truth?</div><div dir="auto"><br></div><div dir="auto">If there are objective truths, they are the same truths for everyone.</div><div dir="auto"><br></div><div dir="auto">Now consider the objective truth of statements such as "it is right to do X" or "it is wrong to do Y". If such statements have objective truth values, these extend to an objective morality. There would be an objective truth about which action is best (even if we lack the computational capacity to determine it).</div><div dir="auto"><br></div><div dir="auto">You may say this is fatal to the theory, but note that we can still roughly compute with the number pi, even though we never consider all of its infinite digits.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div> Does this not ring loud alarm
bells to you? I'm thinking we'd better hope that there really is no
such thing as objective morality (if there is, Zuboff is barking up
the wrong tree, for sure), it would be the basis for the worst kind
of tyranny. It's a target that I, at least, want to aim away from.
180 degrees away!<br></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">No one is proposing putting a tyrannical AI in charge that forces your every decision. But a superintelligent AI that could explain to you the consequences of different actions you might take (as far as it is able to predict them) would be invaluable, and would improve the lives of many who choose to consider its warnings and advice.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<br>
His twisting of desire into morality is, well, twisted. Morality
isn't about what we should want to do, just as bravery isn't about
having no fear. </div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Do you have a better definition of morality to share?</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>He wants to turn people into puppets, and actually
remove moral agency from them. </div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Imperfect understanding of consequences cripples our ability to be effective moral agents. When we don't understand the pros and cons of a decision, how can we hope to be moral agents? We become coin-flippers -- which I would argue is to act amorally. If we want true moral agency, we must strive to improve our grasp of things.</div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div> His proposal is equivalent to
destroying the amygdala (fear centre of the brain (kind of)) and
claiming to have revealed the secret of 'true bravery'.</div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<pre cols="72">--
Ben</pre>
<br>
</div>
</blockquote></div></div></div>