[ExI] Zuboff's morality

Jason Resch jasonresch at gmail.com
Sat Nov 8 00:19:48 UTC 2025


On Fri, Nov 7, 2025, 5:19 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 07/11/2025 06:17, Jason Resch wrote:
>
> On Wed, Nov 5, 2025, 9:24 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Ok, I've had a look at his paper, and made a few substitutions to make
>> it easier to understand. Let me know if you object to any of these:
>>
>> 'desire' = intention
>> 'belief' = anticipated result
>>
>
> I'm fine with these.
>
> 'correctable' = changeable
>>
>
> Okay. But I'll note this word loses the connotation of "an improvement."
>
>
>
> Well, the reason for making that substitution is to lose the implication
> that the original thing /needs/ to be improved.
>


Sure. If the action is already morally optimal, then it can't be corrected.



>
>
> 'real' = preferred
>>
>
> I can go along with this, but keep in mind they would be the
> actual/genuine preferences in light of accurate information of concern.
>
>
>
> You mean /theoretical/ preferences, in light of accurate information of
> concern. I don't see how this can work. It would mean people often don't
> actually know what their preferences are.
>

They often don't, as the parable of the Chinese farmer highlights:

"Once upon a time there was a Chinese farmer whose horse ran away. That
evening, all of his neighbors came around to commiserate. They said, “We
are so sorry to hear your horse has run away. This is most unfortunate.”
The farmer said, “Maybe.” The next day the horse came back bringing seven
wild horses with it, and in the evening everybody came back and said, “Oh,
isn’t that lucky. What a great turn of events. You now have eight horses!”
The farmer again said, “Maybe.”

The following day his son tried to break one of the horses, and while
riding it, he was thrown and broke his leg. The neighbors then said, “Oh
dear, that’s too bad,” and the farmer responded, “Maybe.” The next day the
conscription officers came around to conscript people into the army, and
they rejected his son because he had a broken leg. Again all the neighbors
came around and said, “Isn’t that great!” Again, he said, “Maybe.”

The whole process of nature is an integrated process of immense complexity,
and it’s really impossible to tell whether anything that happens in it is
good or bad — because you never know what will be the consequence of the
misfortune; or, you never know what will be the consequences of good
fortune."

— Alan Watts


This illustrates the dependence on knowledge for distinguishing good from
bad. And knowing good from bad is required to know what outcomes we prefer.
The farmer acknowledges his imperfect knowledge, which is why he always
answers "Maybe."


> So how could they act on them? (or anticipate an outcome from them?). Using
> the word "actual" to mean "theoretical" rather confuses things, don't you
> think?
>


As agents operating under constraints in the real world, we have to make our
best guess. But we should do so with the understanding that with better
knowledge, information, understanding, experience, etc., we can do better.
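
To make that concrete, here is a toy sketch in Python (my own illustration,
not anything from Zuboff's paper; the names and numbers are invented) of an
agent choosing the action with the best expected outcome under its current
beliefs. The choice is driven entirely by anticipated results, and improves
as the beliefs improve:

def best_action(beliefs):
    """beliefs: {action: {outcome_value: probability}} -> best action."""
    def expected_value(dist):
        return sum(value * prob for value, prob in dist.items())
    return max(beliefs, key=lambda a: expected_value(beliefs[a]))

# Believing the cup probably holds hot chocolate, drinking looks best...
vague = {'drink': {+1: 0.9, -5: 0.1}, 'abstain': {0: 1.0}}
print(best_action(vague))     # -> 'drink'

# ...but on learning it holds mud (Zuboff's cup example, quoted below),
# the same rule reverses the choice.
informed = {'drink': {-5: 1.0}, 'abstain': {0: 1.0}}
print(best_action(informed))  # -> 'abstain'

The same decision rule acts better once the information improves, which is
all that the "correctability" of intentions requires.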



>
>
> 'perfect grasp' = foreknowledge
>>
>
> Just one thing to add: in the paper, a perfect grasp embodies not only
> foreknowledge (i.e. perfect knowledge of future states; think *depth*), but
> also perfect lateral knowledge concerning the perspectives, impacts, and
> effects on other beings (think *breadth*).
>
> So the perfect grasp represents a near-omniscient understanding of all the
> future consequences for all involved in and affected by a particular action,
> including those who don't and won't exist.
>
>
>
> So we could use 'omniscience' instead of 'foreknowledge'. Ok.
>
> Er, consequences for those who don't and won't exist??
>
> That kind of cancels itself out, doesn't it? There can't, by definition,
> be any such consequences.
>

I've explained multiple times that the paper acknowledges the impossibility
of the perfect grasp, and that it explains why that impossibility doesn't
matter for the result the paper presents.

If you still think this is impossible, then consider that Alan Turing
defined a mathematical concept of computation using a device which is
impossible to build in practice. This is the same kind of thing: the paper
is presenting a definition, and the fact that what it defines is physically
impossible is irrelevant.
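
To see how an unbuildable device can still anchor a useful definition, here
is a minimal Turing machine sketch in Python (my own illustration, not
Turing's formalism verbatim). The definition presumes an unbounded tape,
which no physical machine can have; the sketch merely approximates that
ideal with a dictionary that grows on demand:

def run_turing_machine(rules, tape_input, start_state, halt_states,
                       max_steps=10_000):
    """rules: {(state, symbol): (new_symbol, move, new_state)}, move = -1 or +1."""
    tape = {i: s for i, s in enumerate(tape_input)}  # stands in for the infinite tape
    head, state = 0, start_state
    for _ in range(max_steps):
        if state in halt_states:
            break
        symbol = tape.get(head, '_')  # '_' marks a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return ''.join(tape[i] for i in sorted(tape))

# Example machine: flip every bit, halting at the first blank.
rules = {
    ('scan', '0'): ('1', +1, 'scan'),
    ('scan', '1'): ('0', +1, 'scan'),
    ('scan', '_'): ('_', +1, 'halt'),
}
print(run_turing_machine(rules, '1011', 'scan', {'halt'}))  # -> 0100_

The infinite tape is never realizable, yet the definition built on it
grounds all of computability theory. The perfect grasp plays the same
definitional role in Zuboff's argument.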


>
>
>> The relevant passages now read, with my comments in brackets:
>>
>> "Imagine that I have before me on a table a cup containing a thick,
>> brown, steaming liquid.
>>
>> I want to drink that stuff because I think it is hot chocolate. But it
>> is actually hot mud. Well, in that case I don’t really intend to drink
>> it. And neither is it in my self-interest to do so.
>>
>> This example brings out the way in which intentions depend on
>> anticipated results. I only ever intend to do a thing because of what I
>> anticipate the result to be."
>>
>
> Not bad, I can follow along with that substitution.
>
>
>> (this is not true. It's not uncommon to have an intention to do
>> something in order to /find out/ what the result will be rather than in
>> anticipation of an expected result.
>
>
> I don't think this escapes the statement.
>
> Your example asks: why would a scientist ever desire (intend) to test a
> hypothesis when he doesn't know the outcome?
>
>
>
> Not quite. I'm saying that, in contrast to Zuboff's statement, intentions
> sometimes /don't/ depend on anticipated results, they are intended to
> /discover/ the results instead (you don't have to be a scientist to do
> this. Non-scientists do it all the time, particularly young ones).
>

I agree one doesn't have to be a scientist.
However, I still disagree that your example provides a counterexample.


>
>
> My answer to this is that for the scientist, he believes (anticipates)
> that the outcome of the experiment will provide new information for the
> scientist. Certainly, if the scientist did not believe (anticipate) any
> possibility of learning anything from the experiment, he would not bother
> performing it.
>
>
>
> You're making 'the result' include 'finding out the result'.
>
> So now we have two results, an actual result, and a meta-result. That could
> apply to all intentions.
>

Exactly. I would say all intelligent actions are based on some
predicted/anticipated results of the action.

> I expect something to happen when I do x, and I also expect to find out if
> it actually happens. So you could say "I only ever intend to do a thing
> because I anticipate finding out if I'm right about what the result will
> be", and if you don't have an expectation, it's just 'to find out the
> result'.
>
> A problem that this presents to Zuboff's thesis is that this is not an
> anticipated result that can be changed. It applies to all intentions
> (except the case when someone decides to do something 'just because'.
>

Indeed.

> They aren't thinking about any result, anticipated or discovered. Or, I
> suppose, the case where something is purely a habit).
>
>
>
> It would be more accurate to say
>> that intentions CAN be based on anticipated results, and that you MAY do
>> a thing because of the anticipated result. In Zuboff's original
>> language, you would say 'to have a desire to form a belief about
>> something'. The 'desire' precedes the 'belief', rather than the other
>> way around, in this case. When A can cause B or B can cause A, you can't
>> draw the conclusion that 'A depends on B')
>>
>
> But to use your language, Zuboff is saying: intentions depend on
> anticipated results.
>
> I still think that is true, given my scientist example.
>
> And I don't see how it makes sense to say the reverse, that "anticipated
> results depend on intentions" -- perhaps only in the wishful thinking way,
> but not in any rational way (that I can see), but perhaps you have an
> example.
>
>
>
> Are you kidding? There are thousands of examples.
>
> I intend to go to my auntie Susan's and anticipate getting a meal of roast
> chicken because that's what she usually cooks.
>

I think you are confusing actions and intentions. Recall that you
replaced "desires" (what you want to happen) with "intentions" (what you
intellectually intend to happen).

Generally speaking, anticipated results (what one thinks *will* happen)
don't depend on what you want or intend to happen; they depend on the
actions one takes, the current state of reality, and one's modeling of that
reality.

It is wishful thinking that can lead one to believe that what one wants to
have happen influences what will happen (absent any intervening actions).


> If I change my intention and go to a restaurant instead, my anticipation
> could well change to getting a steak. My intention could have changed due
> to any number of things, even tossing a coin. Maybe I can't decide, and say
> "Head's it's Susan's, Tails it's Restaurant". The expectation changes
> accordingly, as a result of the changed intention. Random reasons determine
> or affect people's intentions all the time, and their expectations follow.
>
>
>
>> "And since anticipated results are changeable, so are intentions."
>>
>> (this implies that intentions are changeable /because/ anticipated
>> results can change. It's possible to change your mind about the
>> anticipated results of an intended action, or to change your intended
>> action and anticipate the same result. It would be more accurate to say
>> that both anticipations and intentions are changeable, but a change in
>> one doesn't necessarily enforce a change in the other)
>>
>
> True, not every revelation will justify a change in action or intention.
> When playing chess you may find a better move, and change your action
> without changing your intention to win. Or you may learn that if you don't
> throw the game, the child will abandon chess altogether, and therefore you
> may change your intention to win against the child.
>
>
>> "From this observation I arrive at a sweeping principle: My only
>> preferred intentions are those I would have if I had a foreknowledge of
>> everything involved."
>>
>> (because of the above, this is a false conclusion)
>>
>
> I'm sorry, which are you referring to when you say "the above"? Could you
> better break down for me how you see this argument collapsing?
>
>
>
> "And since anticipated results are changeable, so are intentions."
>
> (this implies that intentions are changeable /because/ anticipated
> results can change...)
>
> A false premise, therefore a false conclusion.
>

I don't see that it's false. Which part is false?


> (quite apart from "if I had a foreknowledge of
> everything involved.")
>
>
>
>> "If there is any intention I have only because my foreknowledge of the
>> outcome is imperfect, then that cannot be among my preferred intentions."
>>
>> (this would rule out any intention to find something out (because if you
>> want to find something out, you necessarily don't already know the
>> answer).
>
>
> I think I addressed this with my scientist example.
>
>
>
> Yes, by introducing a 'meta-result': that you find out something. But even
> this is not guaranteed,
>

It doesn't have to be guaranteed. If we needed guarantees to act, we would
never act.

> so the foreknowledge is not perfect.
>

I agree.

> So the conclusion now becomes that NO intention can be preferred,
> regardless of the outcome.
>

I can't make sense of this sentence.


>
>
>
> I don't know about anyone else, but a lot of my 'preferred
>> intentions' have the aim of finding things out that I don't already
>> know. If you already knew, there would be no need to have an intention
>> to find it out)
>>
>
> These represent intentions to learn.
>
>
>> "And gratifying that intention cannot be in my preferred self-interest.
>> The principle going along with this that governs my actions must tell me
>> to act, as far as possible, as I would want myself to be acting with a
>> foreknowledge of everything involved."
>>
>> (it should be obvious now why this is nonsense, but nevertheless, let's
>> follow this line of thought through (italics are mine):)
>>
>
> It's not obvious to me yet, but I will follow along below.
>
>
>> "This foreknowledge that defines my preferred intentions and my best
>> course of action," /is of course impossible. He goes on to explain why/.
>> "It would have to embrace not only the full experience, from behind the
>> eyes (or other sensors), of every sentient being but also every
>> potential development of experience. It would include within it, all the
>> motivations of all of the various systems of intention" /which would
>> simply conflict with each other. The overall result would be chaos and
>> paralysis (in case this is not obvious, consider combining the
>> motivations of a religious fundamentalist with those of a
>> scientifically-literate materialist. These are conflicting value
>> systems. Objective facts can't reconcile them.
>
>
> It is knowledge of the subjective feeling of what it is like to be all
> those concerned, what Zuboff describes as "the full experience, from behind
> the eyes, of every sentient being" that provides such a resolution.
>
>
>
> Such a thing doesn't, and can't, even in principle, exist.
>

That's well acknowledged by me, the paper, and Zuboff.



>
>
> Think of it like this: in your own life there is a version of you that
> goes to work, does chores, and prepares meals, and doesn't enjoy those
> tasks. But also in your life there is a version of you that goes on
> vacation, enjoys recreation and leisure, and enjoys the meals your other
> self prepared.
>
>
>
> Why are these different versions? They are both me. There's only one
> version of me. That may change in the future, but that's a different matter.
>

All I mean here is that there are different states of you at different
points in time.



>
>
> You have knowledge of both of those states of existence, and that puts you
> in a position to answer whether or not your life is a life worth living.
>
>
>
> Worth living according to who? Me? I should think that I'd think my life
> worth living regardless, if that was something I'd be inclined to ponder.
>

According to you.


>
> And it also enables you to answer questions about what changes and
> trade-offs are worth it. E.g. should the toiling-self take on extra hours
> so that the leisure-self can enjoy a nicer vacation.
>
>
>
> My understanding of my own life, from my own viewpoint enables me to, etc.
> Well, I'd hardly call that a revelation. It's true of everyone, and hardly
> worth mentioning.
>

Right, I am using this to establish a point below:


>
>
> From the vantage point of the perfect grasp,
>
>
>
> This is my main problem with this whole thing. /There is no such thing as
> a 'perfect grasp' (omniscience)/.
>

Correct.

There can't be such a thing, or everything we know about the world is
> wrong, and I'm pretty certain that that's not true. We would soon realise.
>

>
> one could make such trade-off decisions between different individuals,
> because in the same way you understand what it's like to work and to be on
> vacation, the vantage point of the perfect grasp understands what it's like
> to be the scientific materialist *and* the religious fundamentalist. So for
> any actions that would affect their lives, negatively or positively, this
> perfect grasp could decide on appropriate trade-offs just as you make such
> trade-off decisions within your own life.
>
>
>
> One could wave a magic wand and utter "resolvio!" and all problems would
> be solved and everybody would be friends and we'd live happily ever after.
> Great. If such a thing was possible. It's not. Sometimes there is no
> trade-off.
>
> I really don't get why anyone can take this seriously. "The perfect grasp
> understands..." is meaningless, because there is no such thing as a perfect
> grasp.
>

I'll demonstrate the utility of this definition below.



>
>
> Making such trade-off decisions is what is meant by the reconciliation of
> all systems of desire. Think of it as if all conscious perspectives were
> part of a single life, and ask how one superintelligent being would optimize
> that life (which embodies and includes all those many perspectives). That
> optimization is what Zuboff contends is the aim of morality.
>
>
>
> No, not 'superintelligent'. That's something we think is possible. You
> mean 'Omniscient'. And everyone except religious zealots knows that there's
> no such thing. There can't be such a thing. Physics forbids it. Common
> sense forbids it. Logic forbids it.
>

We agree.

> Zuboff's theory of morality requires it.
>

It doesn't. Zuboff acknowledges that the perfect grasp, as defined, is
logically impossible. From his paper:

"Such a perfect grasp would thus have to comprehend at once, and perfectly,
states of
consciousness that essentially exclude one another. Perhaps this means that
our hypothetical perfect grasp of reality is logically impossible. But,
possible or not, omniscience is the inevitable ideal of our knowledge and
the perfect grasp of reality is the inevitable
hypothetical basis of an appropriate responsiveness to reality, which is
the whole point of action. The perfect grasp need not be logically
consistent to have this significance."


So in the same sense that omniscience *as a concept* is significant as "the
ideal of knowledge", or that the Turing machine *as a concept* is
significant as "the ideal of computation", the reconciliation of all
systems of desire *as a concept* is significant as "the ideal of morality."

Or even just consider the concept of infinity, and how useful and important
it is in mathematics, despite it never being realizable or attainable by us
mere finite beings in a finite observable universe.
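
To give one concrete instance (my own example, not from Zuboff's paper):
every finite partial sum falls short of the value that only the infinite
ideal defines,

  \sum_{n=1}^{N} 2^{-n} = 1 - 2^{-N} < 1   for every finite N,

yet the ideal limit

  \sum_{n=1}^{\infty} 2^{-n} = \lim_{N \to \infty} (1 - 2^{-N}) = 1

is perfectly well defined, and it is what makes the finite sums intelligible
as approximations at all.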

Do you not agree that such unattainable/unrealizable ideals have utility as
concepts?


> So Zuboff's theory of morality is, literally, forbidden by reality.
>

We have access to the theory; it's in his paper.

> He is claiming that something utterly impossible is the 'aim of morality'.
>

He defines an ideal, an aim of morality. And despite that ideal being
unrealizable, it still has utility, just as computer scientists find
utility in the definition of Turing machines, or as mathematicians find
utility in the definition of infinity.



> "Think of it like all conscious perspectives are all part of a single
> life" The only 'optimisation' that would be possible for such a life would
> be to cut it short as soon as possible.
>

A rather nihilistic view.

> As I said before, it would be chaos and paralysis. It would be the ultimate
> psychosis. Fortunately, it's not possible.
>
>
>
>
> 'Perfect foreknowledge'
>> can't do a thing when subjective values are involved. Let's say that you
>> have the opportunity to punish/forgive someone who has stolen something
>> from you. The values of one person (that you have, according to this
>> theory, magical access to) dictate that the thief should be punished
>> regardless of the circumstances of the crime, because 'STEALING IS
>> WRONG'. You also have access to the values that tell you that stealing
>> is often wrong, but can be forgiven under certain circumstances. How can
>> there be any reconciliation of these two views? What facts can help?)/.
>>
>
> I think my explanation above is sufficient but if not let me know.
>
>
>> So even if there were any possibility of this, it still couldn't lead to
>> any rational definition of morality. The requirements to know all
>> possible points of view and all outcomes of all actions are impossible
>> enough, but add on top the requirement to /reconcile/ all points of
>> view? And only then can you figure out what's good and what's bad?
>>
>
> Moral decisions are hard for exactly this reason. They involve weighing
> consequences to subjective states to which most parties have no access.
> I think we should be upfront in acknowledging that difficulty, as it
> suggests paths for resolving age-old moral questions.
>
> Consider for example whether a law should be passed to increase the square
> footage allotted to egg-laying hens. Answering the question requires
> understanding the stress and emotional states of the chickens with varying
> levels of room, and that has to be balanced against the correspondingly
> higher price of eggs, its unaffordability, and the possible hunger,
> nutritional deficiencies, or worse health for those who can't afford eggs
> at those prices, etc.
>
> None of these are easy problems to solve, but with this definition, it
> makes it clearer how to organize a strategy to answer the question, and
> balance the concerns of all involved (to "reconcile all the systems of
> desire").
>
>
>
> Really? So how? How do you organise a strategy to answer that question?
> How does that work?
>

With a great deal of research in comparative psychology, comparative
biology, economic impact studies, etc. As I said, these are not easy
problems.


> I'd say this is quite easy to solve, with no omniscience required. Such a
> law would be immoral because it would tend to compel farmers to increase
> the price of eggs, and in practice would make more farmers criminals
> because some of them would realise that the legal penalties would be
> insignificant compared to the economic ones, and if they are going to
> break the law,
> they may as well make the maximum profit from it, so some of them would be
> even less likely to take good care of their hens than at present. Realising
> that, the authorities would have to put more resources into policing this
> law, taking them away from more useful things like solving murders and
> catching thieves. Sounds like a lose-lose situation.
>

By this logic, no animal welfare/anti-cruelty laws are justified.

> If the basis of your morality is "The greatest good for the greatest number
> of people", for example, this law fails in a big way. I suspect that the
> only way it could be regarded as morally good is if your morality gives
> higher priority to the welfare of chickens than people. But I have to make
> a disclaimer: I know next to nothing about chicken farming, and may have it
> all woefully wrong.
>

Current factory farming conditions are quite abhorrent, which we might
expect when the welfare of the animals isn't a factor in the economics of
making eggs as cheaply as possible.

But how many of us would choose to keep our pets in the cheapest kennel
possible, to send our kids to the cheapest school possible, or to live in
the cheapest house possible?


> So who knows? Maybe an omniscient being would see that a healthy chicken's
> eggs would make a crucial difference in the brain development of a person
> who eventually invents or discovers something fantastic that benefits all
> mankind forever after, and deem this law morally good.
>

If you think an omniscient mind is better positioned than any lesser mind
to make morally correct decisions, then you already tacitly accept the
utility of using a "perfect grasp" to define morally optimal actions.

Jason