<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 01/11/2025 13:32, Jason Resch wrote:<br>
</div>
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div><div
class="gmail_quote gmail_quote_container"><div dir="ltr"
class="gmail_attr">On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat <<a
href="mailto:extropy-chat@lists.extropy.org"
moz-do-not-send="true" class="moz-txt-link-freetext">extropy-chat@lists.extropy.org</a>> wrote:
</div><blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> <div><div>On 31/10/2025 19:04, Jason Resch wrote:
</div><blockquote type="cite"><pre><div>the paper ( <a
href="https://philarchive.org/rec/ARNMAW" target="_blank"
rel="noreferrer" moz-do-not-send="true"
class="moz-txt-link-freetext">https://philarchive.org/rec/ARNMAW</a> ) defines what a perfect morality consists of. And it too, provides a definition of what morality is, and likewise provides a target to aim towards.</div><div> </div><blockquote
class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Ben wrote:
As different intelligent/rational agents have different experiences,
they will form different viewpoints, and come to different conclusions
about what is right and not right, what should be and what should not,
what they want and what they don't, just like humans do.</blockquote><div>
</div><div>The point of the video and article is that desires are based on beliefs, and because beliefs are correctable, so are desires. There is only one "perfect grasp" and, accordingly, one true set of beliefs, from which follows one most-correct set of desires. This most-correct set of desires is the same for everyone, regardless of the viewpoint from which it is approached.</div></pre>
</blockquote>
Nope. This is nonsense. Just about every assertion is wrong. The
very first sentence in the abstract is false. And the second. And
the third. So the whole thing falls apart.
Desires are not based on beliefs; they are based on emotions. The
example of 'wanting to drink hot mud' is idiotic. Just because the
cup turns out to contain mud doesn't invalidate the desire to drink
hot chocolate.</div></blockquote></div></div><div dir="auto">
</div><div dir="auto">I think you are misinterpreting the example. It is the desire to drink the contents of the cup is what changes in response to new information.</div><div
dir="auto">
</div></pre>
</blockquote>
I wouldn't have put it as 'the desire to drink the contents of the
cup', when the desire is to drink hot chocolate. There are
originating desires, and there are planned actions to satisfy them.
Drinking from the cup might turn out to be a bad idea (the plan is
faulty because of incorrect information), but the original desire is
not changed.<br>
If you want to see a Batman movie at a movie theatre, and find that
the only movie available is a romantic comedy, you don't say that
you had a desire to watch 'any movie' which has suddenly changed.
You still want to watch Batman, but can't, so your desire is
thwarted, not changed.<br>
<br>
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto">Think about this alternate example which may be easier to consider: you may naively have the desire to take a certain job, to marry a particular person, attend a certain event, but if that choice turns out to be ruinous, you may regret that decision. If your future self could warn you of the consequences of that choice, then you may no longer desire that job, marriage, or attendance, as much as you previously did, in light of the (unknown) costs they bore, but which you were unaware of.</div><div
dir="auto">
</div></pre>
</blockquote>
Decisions are often regretted. That is a fact of life. Future selves
warning you about bad decisions is not. That's time travel (aka
'magic'), and it should not feature in any serious consideration of
how to make good decisions. "If x could..." is no help when x is
impossible. We have workable tools to help people make better
decisions, but time travel isn't one of them.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote
class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div> It's not a 'mistaken' desire at all (the mistake is a
sensory one), and it doesn't somehow morph into a desire to drink
hot mud.
"Beliefs are correctable, so desires are correctable"
Each of those two things is true (if you change 'correctable' to
'changeable'), but the one doesn't imply the other, as follows
from the above.
</div></blockquote></div></div><div dir="auto">
</div><div dir="auto">Does it apply in the examples I provided?</div><div
dir="auto">
</div></pre>
</blockquote>
No. The examples are about decisions, not desires, and they don't
address the beliefs that lead to the decisions. "You may have the
desire to do X" is different to "You decide to do X". The desire may
drive the decision or at least be involved in it, but it isn't the
decision (some people act immediately on their desires, but that
still doesn't mean they are the same thing).<br>
Can you regret a desire? I don't think so, but it is arguable. It
would be regretting something that you have no direct control over,
so it would be rather silly.<br>
<br>
Apart from that, there is still no dependency of desires on beliefs.
A belief may well affect the plan you make to satisfy a desire, but
changing the belief doesn't change the desire. Can a belief give
rise to a desire? That's a more complicated question than it
appears, I think, and leads into various types of desires, but
still, there's no justification for the statement "beliefs can
change, therefore desires can".<br>
<br>
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto"><div
class="gmail_quote gmail_quote_container"><blockquote
class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div> 'Perfect grasp' doesn't mean anything real. It implies that it's
possible to know everything about everything, or even about
something. The very laws of physics forbid this, many times over, so
using it in an argument is equivalent to saying "magic".
</div></blockquote></div></div><div dir="auto">
</div><div dir="auto">It doesn't have to be possible. The paper is clear on this. The goal of the paper is to answer objectively what makes a certain thing right or wrong. For example, if someone offered you $10 and I return for some random person unknown to you would be killed, in a way that would not negatively affect you or anyone you knew, and your memory of the ordeal would be wiped so you wouldn't even bear a guilty conscience, for what reason do we judge and justify the wrongness of taking the $10?</div></pre>
</blockquote>
This is 'trolley problem' thinking: making up some ridiculous
scenario that would never, or could never, occur in the real world,
then claiming that it has relevance to the real world.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto">This is the goal of the paper to provide a foundation upon which morality can be established objectively from first principles.</div></pre>
</blockquote>
Let's see some examples, grounded in reality, that 'provide a
foundation upon which morality can be established objectively'. I'm
not closed to the possibility that such a thing can be done, but I'm
not holding my breath.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto">How would you and the question of what separates right from wrong? The initial utilitarian answer is whatever promotes more good experiences than bad experiences. But then, how do you weigh the relative goodness or badness of one experience vs. another, between one person and another, between the varying missed opportunities among future possibilities?</div><div
dir="auto">
</div><div dir="auto">Such questions can only be answered with something approximating an attempt at a grasp of what it means and what it is like to be all the various existing and potential conscious things.</div></pre>
</blockquote>
That's just another way of saying that it can't be answered.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto">We can make heuristic attempts at this, despite the fact that we never achieve perfection.</div></pre>
</blockquote>
Exactly. We always have to make decisions in the /absence/ of full
information. What we would do if we had 'all the information' is
irrelevant, if it even means anything.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto">For example, Democracy can be viewed as a crude approximation, by which each person is given equal weight in the consideration of their desires (with no attempt to weight relative benefits or suffering). But this is still better than an oligarchy, where the desires of few are considered while the desires of the masses are ignored. And also you can see the difference between uninformed electorate vs. a well informed one. The informed electorate has a better grasp of the consequences of their decisions, and so their collective desires are more fully fulfilled.</div><div
dir="auto">
</div></pre>
</blockquote>
I don't see the relevance to morality. Politics and morality are
rarely on speaking terms.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto"><div
class="gmail_quote gmail_quote_container"><blockquote
class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
'One true set of beliefs' is not only wrong, it's dangerous, which
he just confirms by saying it means there is only one most-correct
set of desires, for /everyone/ (!).</div></blockquote></div></div><div
dir="auto">
</div><div dir="auto">Do you not believe in objective truth?</div></pre>
</blockquote>
No.<br>
This is religious territory, and the road to dogmatism.<br>
This is the very reason why science is superior to religion. It
doesn't assume that there is any 'absolute truth' which can be
discovered, after which no further inquiry is needed or wanted.<br>
As to whether, for instance, the laws of physics are invariant
everywhere and at all times, that's a question we can't answer, and
probably will never be able to.<br>
<br>
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto">If there is objective truth, they are the same truths for everyone.</div><div
dir="auto">
</div><div dir="auto">Now consider the objective truths for statements such as "it is right to do X" or "it is wrong to do Y". If there are objective truths, these extend to an objective morality. There would be an objective truth to what action is best (even if we lack the computational capacity to determine it).</div><div
dir="auto">
</div><div dir="auto">You may say this is fatal to the theory, but note that we can still roughly compute with the number Pi, even though we never consider all of its infinite digits.</div><div
dir="auto">
</div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote
class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div> Does this not ring loud alarm
bells to you? I'm thinking we'd better hope that there really is no
such thing as objective morality (if there is, Zuboff is barking up
the wrong tree, for sure), it would be the basis for the worst kind
of tyranny. It's a target that I, at least, want to aim away from.
180 degrees away!
</div></blockquote></div></div><div dir="auto">
</div><div dir="auto">No one is proposing a putting a tyrannical AI in charge that forces your every decision. But a superintelligent AI that could explain to you the consequences of different actions you might take (as far as it is able to predict them) would be quite invaluable, and improve the lives of many who choose to consider its warnings and advice.</div><div
dir="auto">
</div></pre>
</blockquote>
Absolutely. I have no argument with that. But we were talking about
morality.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto"><div
class="gmail_quote gmail_quote_container"><blockquote
class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
His twisting of desire into morality is, well, twisted. Morality
isn't about what we should want to do, just as bravery isn't about
having no fear. </div></blockquote></div></div><div dir="auto">
</div><div dir="auto">Do you have a better definition of morality?</div><div
dir="auto">
</div></pre>
</blockquote>
I don't think that's the question you want to ask. A dictionary can
provide the answer.<br>
<br>
I do have my own moral code though, if that's what you want to know.
I don't expect everyone to see the value of it, or adopt it. And I
might change my mind about it in the future.
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto">
</div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote
class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>He wants to turn people into puppets, and actually
remove moral agency from them. </div></blockquote></div></div><div
dir="auto">
</div><div dir="auto">Imperfect understanding of consequences cripples our ability to be effective moral agents.</div></pre>
</blockquote>
Then you think we are crippled as effective moral agents, and doomed
to always be so (because we will always have imperfect understanding
of consequences).
<blockquote type="cite"
cite="mid:mailman.10.1762003957.18922.extropy-chat@lists.extropy.org">
<pre class="moz-quote-pre" wrap=""><div dir="auto"> When we don't understand the pros and cons of a decision, how can we hope to be moral agents? We become coin-flippers -- which I would argue is to act amorally. If we want true moral agency, we must strive to improve our grasp of things.</div></pre>
</blockquote>
This is taking an extreme position: saying that either we are
'perfect' or we are no use at all. We are neither. Acting with
incomplete information is inevitable. That doesn't mean morality is
impossible.<br>
<br>
Just as bravery is being afraid, but acting anyway, morality is not
knowing for sure what the best action is, but acting anyway. Making
the best decision you can, in line with your values. It's about
having a choice. If it were possible to have 'perfect knowledge',
there would be no morality, no choice. I'm not sure what you'd call
it. Predetermination, perhaps.
<pre class="moz-signature" cols="72">--
Ben</pre>
<br>
</body>
</html>