[ExI] 1DIQ: an IQ metaphor to explain superintelligence
Ben Zaiboc
ben at zaiboc.net
Sun Nov 2 14:50:46 UTC 2025
On 01/11/2025 23:20, Jason Resch wrote:
> If you believe it will be good for you, you may desire it. If you
learn later that it will be bad for you, you may no longer desire it.
Here, what you desire has a dependency on what you believe.
Discuss that with a cigarette smoker. I think you'll find they disagree.
> It's to frame the problem: where does morality come from, what is its
basis, by what method do we determine right or wrong?
Well, that's easy, I can tell you. Morality comes from us. We make it up.
And the methods we use are various.
>> We always have to make decisions in the /absence/ of full
information. What we would do if we had 'all the information' is
irrelevant, if it even means anything.
> Yes, this is what I've been saying from the beginning. Perfect grasp
is used only to define the aim of morality, not to serve as a practical
theory.
We know what the aim of morality is: to distinguish right actions from
wrong ones. Nothing difficult about that, and no 'perfect grasp' is
needed to establish it. The difficulty is in deciding what 'right' and
'wrong' mean. Different people in different circumstances seem to have
different opinions.
> Consider weather prediction. We can't predict with 100% accuracy, nor
predict arbitrarily far into the future. Yet we can make near term
predictions with some modicum of accuracy.
This is how moral decisions can (and should) be approached.
Can, yes. Should? Who are you to say? You are now deciding for other
people. My morality tells me that this is immoral.
> Please consider what I wrote carefully. It is an example of putting
into practice a heuristic. And how better heuristics are based on the
same model and definition of morality as defined in that paper.
You may think so. I don't. That paper is nonsense. As I said, the first
three statements are flat-out wrong.
> > Without objective truth, by what measure is any theory in science
said to be better than any other?
Yes, I've addressed that in another post. I was too hasty in saying "No"
to the question, mistaking 'objective' for 'absolute'. My mistake.
> what is the definition of right or wrong, good or bad? Zuboff's paper
is an example of a theoretical basis on which we can form such
definitions, and define what we mean by right and wrong, good and bad.
Apart from the fact that Zuboff's paper is based on false premises, and
therefore worthless, the very question "what is right and what is
wrong?" can't be given a definitive answer that is true for everyone in
every circumstance. It's like trying to give a definitive answer to
"what is the tastiest food?", that applies to everyone in all
circumstances. You can't solve subjective problems with an objective
approach.
> Let us say you have a particular set of rules in your code.
I do.
> By [what] process do you decide what rules to adopt, or decide to
adopt one rule vs. another.
There is a hierarchy, built on a principle that I worked out a long time
ago. I just need to slot a problem into the right level of the
hierarchy, and the solution is obvious. I've never met a (real)
situation that it can't handle to my satisfaction (I'm not claiming to
have the answer to the trolley problem!).
> My contention is that to even form a moral code, one must hold some
meta-rule for optimizing what knew [one?] considers to be good while
minimizing or avoiding bad.
Indeed. And I'd say that the meta-rule is what defines 'good' and 'bad'.
> And I think if you explored this meta-rule, you would find it is not
all that different from the position Zuboff reaches in his paper.
On the contrary, it is totally different from, and much simpler than,
Zuboff's nonsense.
> Ultimately, what is good (for one individual) is what that individual
would want for themselves if they had a complete knowledge of everything
involved.
First, no.
Second, this would be reducing morality to what is good for an individual...
> And then this then extended to define good as a maximization of good
for all concerned, to achieve the most possible good among all beings
who have desires, by satisfying (to the maximum possible extent) the
desires each individual would still hold if they all had a perfect grasp
of everything. This he refers to as a reconciliation of all systems of
desire.
... then dragging everyone else into it (Golden Rule, and we know what's
wrong with that).
I really don't see the point of positing an impossible knowledge and
then using it as the basis of a system of morality (or anything at all).
Saying "Oh, but it's just theoretical, not real, don't take it too
literally" is basically the same as saying it's totally useless for any
practical purpose.
A 'reconciliation of all systems of desire' is equivalent to 'a
reconciliation of all systems of taste'.
That's apart from the fact that the whole paper is nonsense.
>> It's about having a choice. If it were possible to have 'perfect
knowledge', there would be no morality, no choice.
> I'm not sure that follows. Even with perfect knowledge, you could
still choose whether or not to act in accordance with morally best action.
That's true. People can choose to be evil. Does anyone actually do that?
We'd probably class it as mental illness.
I don't understand why you are taking Zuboff's paper seriously. Do you
take his first three statements in the paper's abstract at face value?
1) "If I desire to drink some stuff thinking it is hot chocolate when
actually it is hot mud, my desire is not a real one - it’s mistaken or
only apparent."
(misconstruing the desire to drink hot chocolate as a desire to drink
whatever is in the cup. If that were the case, he'd drink the mud)
2) "This example illustrates how a desire must always depend on a belief
about its object, a belief about what it is and what it’s like."
(false assumption that because one desire depends on a belief (something
I'd dispute, but it needs closer examination), all desires must always
depend on beliefs. Saying "This example illustrates..." deflects the
reader from the fact that he's making an assumption without showing why
it should be true)
3) "But beliefs are correctable, so desires are correctable"
(I don't know why he uses the term 'correctable', which implies
wrongness, but this statement just compounds the above errors and adds
one more: the false conclusion that if a belief can change, then a
desire can change)
I can understand someone saying that beliefs are sometimes based on
desires (I'm sure this is often the case), but not the reverse. That's
just daft. Desires are emotional, derived from feedback on bodily
states, and elaborated by memories and imagination. Beliefs about
various things can certainly contribute to the process, but you can't
reasonably claim that (all) desires are a result of (only) beliefs.
At best, Zuboff is guilty of grossly oversimplifying and
misattributing things. At worst, well, I'd be committing the
Internet Sin of Ad-Hominem Attack to say anything more, and that goes
against my moral code.
--
Ben