[ExI] 1DIQ: an IQ metaphor to explain superintelligence
Jason Resch
jasonresch at gmail.com
Sun Nov 2 16:50:55 UTC 2025
On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 01/11/2025 23:20, Jason Resch wrote:
>
> > If you believe it will be good for you, you may desire it. If you
> learn later that it will be bad for you, you may no longer desire it.
> Here, what you desire has a dependency on what you believe.
>
>
> Discuss that with a cigarette smoker. I think you'll find they disagree.
>
Compare that to a hypothetical reality in which cigarettes are healthy,
and you will see that my point stands.
>
> > It's to frame the problem: where does morality come from, what is its
> basis, by what method do we determine right or wrong?
>
>
> Well that's easy, I can tell you. Morality comes from us. We make it up.
That's the common view. But that common view lacks a foundation, and it
doesn't help answer the question of whether AIs or superintelligences will
tend towards any sort of morality, nor what it might be that they tend
towards. Hence the utility of this framework.
> And the methods we use are various.
>
And some methods, I would contend, are better than others.
>
> >> We always have to make decisions in the /absence/ of full
> information. What we would do if we had 'all the information' is
> irrelevant, if it even means anything.
>
> > Yes, this is what I've been saying from the beginning. Perfect grasp
> is used only to define the aim of morality, not to serve as a practical
> theory.
>
>
> We know what the aim of morality is: To distinguish right actions from
> wrong ones. Nothing difficult about that, and no 'perfect grasp' is
> needed to establish it. The difficulty is in deciding what 'right' and
> 'wrong' mean. Different people in various different circumstances seem
> to have different opinions.
>
Likewise, people used to disagree about what lightning was.
>
> > Consider weather prediction. We can't predict with 100% accuracy, nor
> predict arbitrarily far into the future. Yet we can make near term
> predictions with some modicum of accuracy.
> This is how moral decisions can (and should) be approached.
>
>
> Can, yes. Should? Who are you to say?
It seems you still haven't read the paper, as your question suggests you
still hold some imagined, caricatured version of the theory.
But note that here I am only saying that even though we can't predict the
future perfectly, nor arbitrarily far into the future, the basic idea
behind deciding which actions are right or wrong involves making some
attempt at predicting the future consequences of an action. All rational
decision-making processes work this way.
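To make that concrete, here is a loose sketch in Python (my own toy
example, not anything from the paper): each action has predicted outcomes
with estimated probabilities, and we pick the action whose expected value
is highest, just as a forecaster acts on imperfect near-term predictions.

    # A toy sketch of decision-making by predicted consequences.
    # All probabilities and values below are invented for illustration.

    def expected_value(outcomes):
        """outcomes: list of (probability, value) pairs for one action."""
        return sum(p * v for p, v in outcomes)

    def choose_action(actions):
        """actions: dict mapping action name to its (probability, value) pairs."""
        return max(actions, key=lambda a: expected_value(actions[a]))

    actions = {
        "act":     [(0.7, 10), (0.3, -5)],  # likely helps, might backfire
        "refrain": [(1.0, 0)],              # no predicted effect
    }
    print(choose_action(actions))  # -> "act" (7.0 - 1.5 = 5.5 > 0)

The numbers are never perfect, any more than a weather forecast is, but
the procedure (predict, weigh, choose) is the same.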
> You are now deciding for other
> people. My morality tells me that this is immoral.
>
If you understand the paper, you will see that this definition of morality
is based on the fulfillment of the desires of everyone, where those desires
are what each person would genuinely want for themselves when fully
informed about everything relevant. It has nothing to do with me, or anyone
else, telling you what to do. It is merely a definition.
>
> > Please consider what I wrote carefully. It is an example of putting
> into practice a heuristic. And how better heuristics are based on the
> same model and definition of morality as defined in that paper.
>
>
> You may think so. I don't. That paper is nonsense. As I said, the first
> three statements are flat-out wrong.
>
It seems you never read any more than the abstract. If you are constrained
by time, feed the paper into your favorite AI and ask what it thinks about
the paper.
>
> > > Without objective truth, by what measure is any theory in science
> said to be better than any other?
>
>
> Yes, I've addressed that in another post. I was too hasty in saying "No"
> to the question, mistaking 'objective' for 'absolute'. My mistake.
>
No worries! I appreciate the clarification.
>
> > what is the definition of right or wrong, good or bad? Zuboff's paper
> is an example of a theoretical basis on which we can form such
> definitions, and define what we mean by right and wrong, good and bad.
>
>
> Apart from the fact that Zuboff's paper is based on false premises, and
> therefore worthless, the very question "what is right and what is
> wrong?" can't be given a definitive answer that is true for everyone in
> every circumstance. It's like trying to give a definitive answer to
> "what is the tastiest food?", that applies to everyone in all
> circumstances. You can't solve subjective problems with an objective
> approach.
>
You can, by making it observer-relative. E.g., forget about trying to find
a "tastiest food" and instead consider "the tastiest food for this
particular person in this time and place."
That is what this paper does with morality: it starts by considering the
desires of individual subjects, moves on to correcting those individual
desires with better information, and ultimately shows how, with enough
information, including how one's own desires impact other observers, there
is an eventual convergence, where one's desires extend beyond merely
wanting what's best for oneself to a consideration of what's best for all
concerned. This full understanding of what's best for all concerned is the
same understanding, regardless of which initial subject you start from.
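To illustrate that convergence with a toy model (again, my own
construction, not the paper's): give each subject an informed valuation of
the options, and define the fully informed evaluation as one that weighs
everyone's informed desires. Because that evaluation sums over all
subjects, it yields the same ranking no matter which subject you start
from.

    # A toy model of "reconciliation"; all numbers are invented.
    INFORMED_DESIRES = {
        "alice": {"A": 5, "B": 1},
        "bob":   {"A": 2, "B": 4},
        "carol": {"A": 1, "B": 5},
    }

    def fully_informed_ranking(starting_subject):
        # Full information includes everyone's desires, not just one's
        # own, so the starting subject drops out of the final evaluation.
        options = INFORMED_DESIRES[starting_subject]
        totals = {o: sum(d[o] for d in INFORMED_DESIRES.values())
                  for o in options}
        return sorted(totals, key=totals.get, reverse=True)

    assert (fully_informed_ranking("alice")
            == fully_informed_ranking("bob")
            == fully_informed_ranking("carol"))
    print(fully_informed_ranking("alice"))  # -> ['B', 'A'] (B: 10 vs A: 8)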
>
> > Let us say you have a particular set of rules in your code.
>
>
> I do.
>
>
> > By [what] process do you decide what rules to adopt, or decide to
> adopt one rule vs. another.
>
>
> There is a hierarchy, built on a principle that I worked out a long time
> ago. I just need to slot a problem into the right level of the
> hierarchy, and the solution is obvious. I've never met a (real)
> situation that it can't handle to my satisfaction (I'm not claiming to
> have the answer to the trolley problem!).
>
If you don't mind sharing, I am curious what that principle is that you
worked out. But I also understand if you consider it private.
>
> > My contention is that to even form a moral code, one must hold some
> meta-rule for optimizing what knew [one?]
(yes "one" sorry for the typo)
considers to be good while
> minimizing or avoiding bad.
>
>
> Indeed. And I'd say that the meta-rule is what defines 'good' and 'bad'.
>
That seems a bit circular to me. I am not sure how it gets off the ground
without a way to distinguish good from bad.
>
> > And I think if you explored this meta-rule, you would find it is not
> all that different from the position Zuboff reaches in his paper.
>
>
> On the contrary, it is totally different, and much simpler, than
> Zuboff's nonsense.
>
It may seem that way, but I think you have swept the details of how to
distinguish good from bad under the rug.
>
> > Ultimately, what is good (for one individual) is what that individual
> would want for themselves if they had a complete knowledge of everything
> involved.
>
>
> First, No.
> Second, this would be reducing morality to what is good for an
> individual...
>
Note that I was careful to specify "good for the individual." I.e., start
with the simple model of only a single conscious being in all reality. Then
it becomes clear that this is a workable definition of good for that lone
being.
>
> > And then this then extended to define good as a maximization of good
> for all concerned, to achieve the most possible good among all beings
> who have desires, by satisfying (to the maximum possible extent) the
> desires each individual would still hold if they all had a perfect grasp
> of everything. This he refers to as a reconciliation of all systems of
> desire.
>
>
> ... then dragging everyone else into it
Where are you getting this "dragging into it" from?
Does the golden rule "drag everyone else into it"?
Does your moral code "drag everyone else into it"?
No, these are just alternate definitions of moral and immoral behavior.
That is what Zuboff's paper provides: a new definition.
> (Golden Rule, and we know what's
> wrong with that)
>
You could liken Zuboff's result to the platinum rule, corrected by better
information, weighted appropriately, modulated by future consequences, and
with further concern for possible/future beings who may not (yet) exist.
>
> I really don't see the point of positing an impossible knowledge then
> using this as the basis of a system of morality (or anything at all).
I've addressed this many times already. At this point all I can suggest is
to read the paper, or have an AI read it and then ask it to answer these
questions for you based on what the paper says.
> Saying "Oh, but it's just theoretical, not real, don't take it too
> literally" is basically the same as saying it's totally useless for any
> practical purpose.
>
I haven't said that.
> A 'reconciliation of all systems of desire' is equivalent to 'a
> reconciliation of all systems of taste'.
> That's apart from the fact that the whole paper is nonsense.
>
You say this as a person who has not read the whole paper.
>
> >> It's about having a choice. If it were possible to have 'perfect
> knowledge', there would be no morality, no choice.
>
> > I'm not sure that follows. Even with perfect knowledge, you could
> still choose whether or not to act in accordance with morally best action.
>
>
> That's true. People can choose to be evil. Does anyone actually do that?
All the time.
> We'd probably class it as mental illness.
>
We all do it in small ways all the time.
For example, we will choose to pay $20 to go see a movie instead of taking
the time to buy a $20 meal for a hungry person. We know the meal would be a
more moral way to spend the $20, but we choose the less moral action
instead.
>
> I don't understand why you are taking Zuboff's paper seriously. Do you
> take his first three statements in the paper's abstract at face value?:
>
> 1) "If I desire to drink some stuff thinking it is hot chocolate when
> actually it is hot mud, my desire is not a real one - it’s mistaken or
> only apparent."
>
> (misconstruing the desire to drink hot chocolate as a desire to drink
> whatever is in the cup. If that were the case, he'd drink the mud)
>
I think you are misreading and over-examining this. It is nothing more than
an example of how a desire ("ooh, that looks good, I want to drink it!")
can be corrected by new information.
I see no problem with that observation. To me it is obviously true.
>
> 2) "This example illustrates how a desire must always depend on a belief
> about its object, a belief about what it is and what it’s like."
>
> (false assumption that if any desire is dependent on a belief (something
> that I'd dispute, but it needs closer examination), all desires must
> always be dependent on beliefs. Saying "This example illustrates..." is
> deflecting the reader from the fact that he's making an assumption and
> failing to show why it should be true)
>
A desire is the will to fulfill some need or want. By definition, then, it
relates to some anticipated future state or experience, which is presently
unrealized.
Accordingly, that desire involves a belief (about what the future state or
experience will be like).
Again, this is all from the abstract, which does not give the full
exposition or justification. If the full argument and justification could
be made in abstracts, we wouldn't need papers. That is why I suggest you
read the paper if you have questions about it, as it is quite thorough in
addressing all the concerns you are raising.
>
> 3) "But beliefs are correctable, so desires are correctable"
>
> ( I don't know why he uses the term 'correctable', which implies
> wrongness, but this statement just compounds the above errors and adds
> one more: False conclusion that if a belief can change, this means that
> a desire can change)
>
I don't see what is controversial about this. Consider this example:
T1: Smoking looks cool, I want to smoke.
T2: You know smoking causes lung cancer, right?
T3: Oh it does? I suppose then I no longer want to smoke.
>
> I can understand someone saying that beliefs are sometimes based on
> desires (I'm sure this is often the case), but not the reverse.
> That's
> just daft. Desires are emotional, derived from feedback on bodily
> states, and elaborated by memories and imagination.
"Drives" might be a better word to use for such things, and it would also
help in understanding his paper to distinguish innate drives which we can't
decide or change, from the desires that we decide with our minds, which we
can change.
> Beliefs about
> various things can certainly contribute to the process, but you can't
> reasonably claim that (all) desires are a result of (only) beliefs.
>
There are instinctual and unconscious motivations and preferences, things
we find innately pleasurable or painful; those, I agree, are not based on
beliefs. They are inherent to what one is.
When Zuboff's paper refers to desires, I think it should be taken to refer
to wants and desires based on, or justified by, conscious thought.
> At the best, Zuboff is guilty of grossly oversimplifying and
> misattributing things. At the worst, well, I'd be committing the
> Internet Sin of Ad-Hominem Attack to say anything more, and that goes
> against my moral code.
>
I think you should give him a fair shake before judging him so harshly, and
read more than just the abstract:
https://drive.google.com/file/d/1l8T1z5dCQQiwJPlQlqm8u-1oWpoeth3-/view?usp=drivesdk
Jason
> --
> Ben
>