[ExI] Zuboff's morality
Jason Resch
jasonresch at gmail.com
Sat Nov 8 13:31:17 UTC 2025
On Sat, Nov 8, 2025, 4:56 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> We're getting mired in confusing terminology, I think, and this is getting
> far too long. Let's zoom out and look at the essentials.
>
> On 08/11/2025 00:20, Jason Resch wrote:
>
> On Fri, Nov 7, 2025, 5:19 PM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
> So who knows? Maybe an omniscient being would see that a healthy chicken's
>> eggs would make a crucial difference in the brain development of a person
>> who eventually invents or discovers something fantastic that benefits all
>> mankind forever after, and deem this law morally good.
>>
>
> If you think an omniscient mind is better positioned than any lesser mind
> to make morally correct decisions, then you already tacitly accept the
> utility of using a "perfect grasp" to define morally optimal actions.
>
>
> Yeah, that was sarcasm. You're not supposed to take it seriously.
>
> What I accept is the truth of the trivial assertion that IF we knew more
> than we do, we'd be able to make better decisions.
>
It was sarcasm, but you also believe the sarcastic statement you made.
> If this is what Zuboff's idea boils down to,
>
That's part of it. Greater knowledge enables better decision making, and
therefore enables more correct moral decisions (but this is only half of
the picture). The other half is the basis on which good and bad, and right
and wrong, are defined.
> then I change my mind, the man's a genius (that was more sarcasm), he has
> discovered something we all knew all along, the obvious idea that more
> knowledge about a thing can enable you to make better decisions about it.
>
It is good to see that you understand and accept this half of Zuboff's
argument.
> What has that got to do with morality, though?
>
That half, alone, has nothing to do with morality.
> How is this idea, that everybody already knows, supposed to be a basis for
> a moral system?
>
You have to consider his connecting glue between desires (what we want),
corrected desires (what we would still want with a perfect grasp), and the
reconciliation of all systems of desire (a balancing act weighing how those
wants, and our attempts to obtain them, affect all other conscious beings
who have wants of their own). This is how he defines the ideal of good and
bad, right and wrong, and the proper aim of morality.
> Having better knowledge enables more /effective/ decisions, but that says
> nothing about whether they are 'good' or 'bad' decisions. It doesn't enable
> someone to define what 'good' means, for them.
>
That's what the paper does.
At this point we could have saved a lot of time if you had simply read it.
> "If you think an omniscient mind is better positioned than any lesser mind
> to make morally correct decisions..." Losing the 'omniscient', and
> replacing it with 'more knowledgeable', which puts things on a realistic
> footing, I'd have to say No, I don't think that. Is the morality of a less
> knowledgeable or less intelligent person less valid than that of a more
> knowledgeable or more intelligent one? I'd think (or certainly hope,
> anyway!) that the answer to this is obvious. If your answer is "yes", then
> you're already halfway down the slippery slope that leads to most of, if
> not all, the worst atrocities we are capable of. It's basically saying that
> some people are intrinsically inferior to others, because of their ability
> to know things. I don't think that was really the intention of whoever
> coined the phrase 'knowledge is power'.
>
Earlier you acknowledged that it was trivial that having less knowledge
means we make worse decisions. This is why so many quotes compare evil and
stupidity:
https://www.goodreads.com/quotes/230940-never-attribute-to-malice-that-which-is-adequately-explained-by
https://www.goodreads.com/quotes/8616320-stupidity-is-a-more-dangerous-enemy-of-the-good-than
But note that this says nothing of the moral value of persons. It doesn't
even say that more intelligent people act more morally than less
intelligent people.
An intelligent person who is unmotivated to make moral decisions does not
inherently behave better than a less intelligent person who tries to act
morally.
> More realistic moral foundations, in my opinion, can be found here:
> https://moralfoundations.org/
>
> Notice that 'knowledge' is not mentioned in any of these.
>
Nor is any attempt made to define good or evil.
> I think the important thing, going back to the distant original topic of
> this discussion, is to realise where morality (as actually practiced) comes
> from. It comes from our developmental past.
>
Historically yes. I don't dispute that.
> AIs are a future unfolding of that, and I reckon that, rather than
> speculating on their morality springing de-novo from their intelligence, it
> might be useful to consider it being a consequence of where they come from
> (humanity) and how it might develop, just as ours has developed over time.
>
Then you should see Zuboff's paper as a continuation of that development.
And those intelligences (human or artificial) capable of seeing the truth
in it (assuming it is sound) will be rationally motivated to adopt the
morality described in the paper.
Zuboff has experimented with having AIs of today read and evaluate this
paper. All the current models seem to accept its conclusions as valid.
That provides me some hope, at least.
Jason