[ExI] Zuboff's morality

Ben Zaiboc ben at zaiboc.net
Fri Nov 7 22:18:25 UTC 2025


On 07/11/2025 06:17, Jason Resch wrote:
> On Wed, Nov 5, 2025, 9:24 AM Ben Zaiboc via extropy-chat 
> <extropy-chat at lists.extropy.org> wrote:
>
>     Ok, I've had a look at his paper, and made a few substitutions to
>     make
>     it easier to understand. Let me know if you object to any of these:
>
>     'desire' = intention
>     'belief' = anticipated result
>
>
> I'm fine with these.
>
>     'correctable' = changeable
>
>
> Okay. But I'll note this word loses the connotation of "an improvement."


Well, the reason for making that substitution is to lose the implication 
that the original thing /needs/ to be improved.


>
>     'real' = preferred
>
>
> I can go along with this, but keep in mind they would be the 
> actual/genuine preferences in light of accurate information of concern.


You mean /theoretical/ preferences, in light of accurate information of 
concern. I don't see how this can work. It would mean people often don't 
actually know what their preferences are. So how could they act on them? 
(Or anticipate an outcome from them?) Using the word "actual" to mean 
"theoretical" rather confuses things, don't you think?


>
>     'perfect grasp' = foreknowledge
>
>
> Just one thing to add: in the paper, a perfect grasp embodies not only 
> foreknowledge (e.g. perfect knowledge of future states; think 
> *depth*), but also perfect lateral knowledge concerning the 
> perspectives, impacts, and effects on other beings (think *breadth*).
>
> So the perfect grasp represents a near-omniscient understanding of all 
> the future consequences for all involved in and affected by a 
> particular action, including those who don't and won't exist.


So we could use 'omniscience' instead of 'foreknowledge'. Ok.

Er, consequences for those who don't and won't exist??

That kind of cancels itself out, doesn't it? There can't, by definition, 
be any such consequences.


>
>     The relevant passages now read, with my comments in brackets:
>
>     "Imagine that I have before me on a table a cup containing a thick,
>     brown, steaming liquid.
>
>     I want to drink that stuff because I think it is hot chocolate.
>     But it
>     is actually hot mud. Well, in that case I don’t really intend to
>     drink
>     it. And neither is it in my self-interest to do so.
>
>     This example brings out the way in which intentions depend on
>     anticipated results. I only ever intend to do a thing because of
>     what I
>     anticipate the result to be."
>
>
> Not bad, I can follow along with that substitution.
>
>
>     (this is not true. It's not uncommon to have an intention to do
>     something in order to /find out/ what the result will be rather
>     than in
>     anticipation of an expected result.
>
>
> I don't think this escapes the statement.
>
> Your example asks: why would a scientist ever desire (intend) to test 
> a hypothesis when he doesn't know the outcome?


Not quite. I'm saying that, in contrast to Zuboff's statement, 
intentions sometimes /don't/ depend on anticipated results; they aim to 
/discover/ the results instead (you don't have to be a scientist to do 
this. Non-scientists do it all the time, particularly young ones).


>
> My answer to this is that the scientist believes (anticipates) that 
> the outcome of the experiment will provide him with new information. 
> Certainly, if the scientist did not believe (anticipate) any 
> possibility of learning anything from the experiment, he would not 
> bother performing it.


You're making 'the result' include 'finding out the result'. So now we 
have two results: an actual result and a meta-result. That could apply 
to all intentions. I expect something to happen when I do x, and I also 
expect to find out if it actually happens. So you could say "I only ever 
intend to do a thing because I anticipate finding out if I'm right about 
what the result will be", and if you don't have an expectation, it's 
just 'to find out the result'.

A problem this presents for Zuboff's thesis is that this is not an 
anticipated result that can be changed. It applies to all intentions 
(except the case where someone decides to do something 'just because' 
and isn't thinking about any result, anticipated or discovered; or, I 
suppose, the case where something is purely a habit).


>
>     It would be more accurate to say
>     that intentions CAN be based on anticipated results, and that you
>     MAY do
>     a thing because of the anticipated result. In Zuboff's original
>     language, you would say 'to have a desire to form a belief about
>     something'. The 'desire' precedes the 'belief', rather than the other
>     way around, in this case. When A can cause B or B can cause A, you
>     can't
>     draw the conclusion that 'A depends on B')
>
>
> But to use your language, Zuboff is saying: intentions depend on 
> anticipated results.
>
> I still think that is true, given my scientist example.
>
> And I don't see how it makes sense to say the reverse, that 
> "anticipated results depend on intentions" -- perhaps only in the 
> wishful-thinking way, but not in any rational way (that I can see). 
> But perhaps you have an example.


Are you kidding? There are thousands of examples.

I intend to go to my auntie Susan's and anticipate getting a meal of 
roast chicken because that's what she usually cooks. If I change my 
intention and go to a restaurant instead, my anticipation could well 
change to getting a steak. My intention could have changed due to any 
number of things, even tossing a coin. Maybe I can't decide, and say 
"Head's it's Susan's, Tails it's Restaurant". The expectation changes 
accordingly, as a result of the changed intention. Random reasons 
determine or affect people's intentions all the time, and their 
expectations follow.


>
>     "And since anticipated results are changeable, so are intentions."
>
>     (this implies that intentions are changeable /because/ anticipated
>     results can change. It's possible to change your mind about the
>     anticipated results of an intended action, or to change your intended
>     action and anticipate the same result. It would be more accurate
>     to say
>     that both anticipations and intentions are changeable, but a
>     change in
>     one doesn't necessarily enforce a change in the other)
>
>
> True, not every revelation will justify a change in action or 
> intention. When playing chess you may find a better move, and change 
> your action without changing your intention to win. Or you may learn 
> that if you don't throw the game, the child will abandon chess 
> altogether, and therefore you may change your intention to win against 
> the child.
>
>
>     "From this observation I arrive at a sweeping principle: My only
>     preferred intentions are those I would have if I had a
>     foreknowledge of
>     everything involved."
>
>     (because of the above, this is a false conclusion)
>
>
> I'm sorry, which are you referring to when you say "the above"? Could 
> you better break down for me how you see this argument collapsing?


"And since anticipated results are changeable, so are intentions."

(this implies that intentions are changeable /because/ anticipated
results can change...)

A false premise, therefore a false conclusion.

(quite apart from "if I had a foreknowledge of
everything involved.")


>
>     "If there is any intention I have only because my foreknowledge of
>     the
>     outcome is imperfect, then that cannot be among my preferred
>     intentions."
>
>     (this would rule out any intention to find something out (because
>     if you
>     want to find something out, you necessarily don't already know the
>     answer).
>
>
> I think I addressed this with my scientist example.


Yes, by introducing a 'meta-result': that you find out something. But 
even this is not guaranteed, so the foreknowledge is not perfect. So the 
conclusion now becomes that NO intention can be preferred, regardless of 
the outcome.


>
>
>     I don't know about anyone else, but a lot of my 'preferred
>     intentions' have the aim of finding things out that I don't already
>     know. If you already knew, there would be no need to have an
>     intention
>     to find it out)
>
>
> These represent intentions to learn.
>
>
>     "And gratifying that intention cannot be in my preferred
>     self-interest.
>     The principle going along with this that governs my actions must
>     tell me
>     to act, as far as possible, as I would want myself to be acting
>     with a
>     foreknowledge of everything involved."
>
>     (it should be obvious now why this is nonsense, but nevertheless,
>     let's
>     follow this line of thought through (italics are mine):)
>
>
> It's not obvious to me yet, but I will follow along below.
>
>
>     "This foreknowledge that defines my preferred intentions and my best
>     course of action," /is of course impossible. He goes on to explain
>     why/.
>     "It would have to embrace not only the full experience, from
>     behind the
>     eyes (or other sensors), of every sentient being but also every
>     potential development of experience. It would include within it,
>     all the
>     motivations of all of the various systems of intention" /which would
>     simply conflict with each other. The overall result would be chaos
>     and
>     paralysis (in case this is not obvious, consider combining the
>     motivations of a religious fundamentalist with those of a
>     scientifically-literate materialist. These are conflicting value
>     systems. Objective facts can't reconcile them.
>
>
> It is knowledge of the subjective feeling of what it is like to be all 
> those concerned, what Zuboff describes as "the full experience, from 
> behind the eyes, of every sentient being" that provides such a resolution.


Such a thing doesn't, and can't, even in principle, exist.


>
> Think of it like this: in your own life there is a version of you that 
> goes to work, does chores, and prepares meals, and doesn't enjoy those 
> tasks. But also in your life there is a version of you that goes on 
> vacation, enjoys recreation and leisure, and enjoys the meals your 
> other self prepared.


Why are these different versions? They are both me. There's only one 
version of me. That may change in the future, but that's a different matter.


>
> You have knowledge of both of those states of existence, and that puts 
> you in a position to answer whether or not your life is a life worth 
> living.


Worth living according to whom? Me? I should think I'd consider my life 
worth living regardless, if that were something I was inclined to ponder.


>  And also it enables you to answer questions about what changes and 
> trade-offs are worth it. E.g. should the toiling-self take on extra 
> hours so that the leisure-self can enjoy a nicer vacation?


My understanding of my own life, from my own viewpoint, enables me to, 
etc. Well, I'd hardly call that a revelation. It's true of everyone, and 
hardly worth mentioning.


>
> From the vantage point of the perfect grasp,


This is my main problem with this whole thing. /There is no such thing 
as a 'perfect grasp' (omniscience)/. There can't be such a thing, or 
everything we know about the world is wrong, and I'm pretty certain that 
that's not true. We would soon realise.


> one could make such trade-off decisions between different individuals, 
> because in the same way you understand what it's like to work and be 
> on vacation, the vantage point of the perfect grasp understands what 
> it's like to be the scientific materialist *and* the religious 
> fundamentalist, and so for any actions that would affect their lives, 
> negatively or positively, this perfect grasp could decide on 
> appropriate trade-offs just as you make such trade-off decisions 
> within your own life.


One could wave a magic wand and utter "resolvio!" and all problems would 
be solved and everybody would be friends and we'd live happily ever 
after. Great. If such a thing was possible. It's not. Sometimes there is 
no trade-off.

I really don't get how anyone can take this seriously. "The perfect 
grasp understands..." is meaningless, because there is no such thing as 
a perfect grasp.


>
> Making such trade-off decisions is what is meant by the reconciliation 
> of all systems of desire. Think of it like all conscious perspectives 
> are part of a single life, and ask how one superintelligent being 
> would optimize that life (which embodies and includes all those many 
> perspectives). That optimization is what Zuboff contends is the aim 
> of morality.


No, not 'superintelligent'. That's something we think is possible. You 
mean 'omniscient'. And everyone except religious zealots knows that 
there's no such thing. There can't be such a thing. Physics forbids it. 
Common sense forbids it. Logic forbids it. Zuboff's theory of morality 
requires it. So Zuboff's theory of morality is, literally, forbidden by 
reality. He is claiming that something utterly impossible is the 'aim of 
morality'.

"Think of it like all conscious perspectives are part of a single 
life". The only 'optimisation' that would be possible for such a life 
would be to cut it short as soon as possible. As I said before, it would 
be chaos and paralysis. It would be the ultimate psychosis. Fortunately, 
it's not possible.


>
>
>     'Perfect foreknowledge'
>     can't do a thing when subjective values are involved. Let's say
>     that you
>     have the opportunity to punish/forgive someone who has stolen
>     something
>     from you. The values of one person (that you have, according to this
>     theory, magical access to) dictate that the thief should be punished
>     regardless of the circumstances of the crime, because 'STEALING IS
>     WRONG'. You also have access to the values that tell you that
>     stealing
>     is often wrong, but can be forgiven under certain circumstances.
>     How can
>     there be any reconciliation of these two views? What facts can
>     help?)/.
>
>
> I think my explanation above is sufficient but if not let me know.
>
>
>     So even if there were any possibility of this, it still couldn't
>     lead to any rational definition of morality. The requirements to
>     know all possible points of view and all outcomes of all actions
>     are impossible enough, but add on top the requirement to
>     /reconcile/ all points of view? And only then can you figure out
>     what's good and what's bad?
>
>
> Moral decisions are hard for exactly this reason. They involve 
> weighing consequences to subjective states to which most parties have 
> no access. I think we should be upfront in acknowledging that 
> difficulty, as it suggests paths for resolving age-old moral questions.
>
> Consider for example whether a law should be passed to increase the 
> square footage allotted to egg-laying hens. To answer the question 
> requires understanding the stress and emotional states of the chickens 
> with varying levels of room, and that has to be balanced against the 
> correspondingly higher price of eggs, the unaffordability, possible 
> hunger or nutritional deficiencies or worse health for those who can't 
> afford eggs at those prices, etc.
>
> None of these are easy problems to solve, but with this definition, it 
> makes it clearer how to organize a strategy to answer the question, 
> and balance the concerns of all involved (to "reconcile all the 
> systems of desire").


Really? So how? How do you organise a strategy to answer that question? 
How does that work?

I'd say this is quite easy to solve, with no omniscience required. Such 
a law would be immoral because it would tend to compel farmers to 
increase the price of eggs, and in practice would make more farmers 
criminals. Some of them would realise that the legal penalties would be 
insignificant compared to the economic ones, and that if they are going 
to break the law, they may as well make the maximum profit from it, so 
some of them would be even less likely to take good care of their hens 
than at present. Realising that, the authorities would have to put more 
resources into policing this law, taking them away from more useful 
things like solving murders and catching thieves. Sounds like a 
lose-lose situation. If the basis of your morality is "The greatest good 
for the greatest number of people", for example, this law fails in a big 
way. I suspect that the only way it could be regarded as morally good is 
if your morality gives higher priority to the welfare of chickens than 
to people. But I have to make a disclaimer: I know next to nothing about 
chicken farming, and may have it all woefully wrong.

So who knows? Maybe an omniscient being would see that a healthy 
chicken's eggs would make a crucial difference in the brain development 
of a person who eventually invents or discovers something fantastic that 
benefits all mankind forever after, and deem this law morally good.

-- 
Ben
