[ExI] 1DIQ: an IQ metaphor to explain superintelligence
    Ben Zaiboc 
    ben at zaiboc.net
       
    Mon Nov  3 14:11:01 UTC 2025
    
    
  
On 02/11/2025 16:51, Jason Resch wrote:
> On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat 
> <extropy-chat at lists.extropy.org> wrote:
>
>     On 01/11/2025 23:20, Jason Resch wrote:
>
>      > If you believe it will be good for you, you may desire it. If you
>     learn later that it will be bad for you, you may no longer desire it.
>     Here, what you desire has a dependency on what you believe.
>
>
>     Discuss that with a cigarette smoker. I think you'll find they
>     disagree.
>
>
> Compare to the hypothetical reality where cigarettes are healthy, and 
> you will see my point stands.
If cigarettes were healthy (and non-addictive), there would be no 
problem. Your point is that desire depends on belief. I see no logic 
here. When cigarette smokers learn that smoking is bad for their health, 
they may wish they didn't desire to smoke, but they still do. Whether 
they believe it's healthy or not, they still have the desire.
>
>     Morality comes from us. We make it up.
>
>
> That's the common view. But that common view is lacking a base,
It is solidly based on experience.
>  and it doesn't help answer the question of whether AI, or 
> superintelligences will tend towards any sort of morality, nor what it 
> might be that they tend towards.
No, it doesn't. And there's a good reason for that. It's the same reason 
that you can't answer whether Jenny in primary school will tend towards 
any sort of morality or what it might be.
>  Hence the utility of this framework.
What framework? There is no framework, just some half-baked assertions 
that have no basis in reality or logic.
>
>
>     And the methods we use are various.
>
>
> And some methods, I would contend, are better than others.
You'd have to define 'better' for that to mean anything, and that's a 
problem. Better according to whom? For whom?
This is where you say "for everyone if they knew what was really best 
for them", which leaves us where, exactly? Nowhere.
I'm at a loss to understand how this 'perfect grasp' concept, which you 
admit is impossible, can be used to derive any kind of moral system. 
Let's say, for a moment, that I agree that it does make some kind of 
sense, what then? How do we proceed to build a moral system based on it? 
How does it help me to decide whether to go back to the shop and give 
the cashier the extra change that she gave me by mistake, or to keep it? 
How does it give some guidance to the Ukrainian soldier faced with a 
dilemma about whether to use his drone bomb either to kill a group of 
Russian soldiers or to save a family by blowing up the drone threatening 
them?
How does it predict what kind of morals a superintelligent AI will display?
>
>      >> We always have to make decisions in the /absence/ of full
>     information. What we would do if we had 'all the information' is
>     irrelevant, if it even means anything.
>
>      > Yes, this is what I've been saying from the beginning. Perfect
>     grasp
>     is used only to define the aim of morality, not to serve as a
>     practical
>     theory.
>
>
>     We know what the aim of morality is: To distinguish right actions
>     from
>     wrong ones. Nothing difficult about that, and no 'perfect grasp' is
>     needed to establish it. The difficulty is in deciding what 'right'
>     and
>     'wrong' mean. Different people in various different circumstances
>     seem
>     to have different opinions.
>
>
> Likewise, people used to disagree about what lightning was.
Not 'likewise'. Not at all. Lightning is an objective phenomenon. We can 
examine it and figure out how it works. Opinions are subjective, and 
unless they are about objective facts, can't be falsified. "Does 
ice-cream taste good?" is a different kind of question to "Does 
convection cause charge separation in a cloud?".
Which category do you think the question "Should I lie to my kids about 
what happened to the cat?" falls into?
>
>
>      > Consider weather prediction. We can't predict with 100%
>     accuracy, nor
>     predict arbitrarily far into the future. Yet we can make near term
>     predictions with some modicum of accuracy.
>     This is how moral decisions can (and should) be approached.
>
>
>     Can, yes. Should? Who are you to say?
>
>
> It seems you still haven't read the paper, as your question suggests 
> you still hold some imagined caricatured version of the theory.
If I hold a caricatured version of the theory, blame the abstract. I 
assume that abstracts are reasonable summaries, and rely on them a lot. 
I rarely read a full paper, for a number of reasons. If you think the 
paper and its abstract are at odds, you should probably contact the author and let 
him know. And maybe create your own summary.
>
> But note here, I am only saying, that even though we can't predict the 
> future perfectly nor arbitrarily far into the future, the basic idea 
> behind deciding which actions are right or wrong, involves making some 
> attempt at predicting the future consequences of an action. All 
> rational decision making processes work this way.
You seem to be dismissing Kant as irrational (this is not an objection, 
just an observation).
>
>     You are now deciding for other
>     people. My morality tells me that this is immoral.
>
>
> If you understand the paper you will see this definition of morality 
> is based on the fulfillment of the desires of everyone, where those 
> desires are what each person would genuinely want for themselves when 
> fully informed about everything relevant. It has nothing to do with 
> me, or anyone else telling you what to do. It is merely a definition.
Yes, I understand the definition, and its implication that the same 
morality should apply to everyone.
I also understand that the definition is based upon an impossibility and 
several false premises, and I regard the implication as immoral.
>
> ...
> this paper ... starts with considering the desires of individual 
> subjects. Moves on to correcting those individual desires with better 
> information
Whoa!
You mean like how smokers, when told that smoking is harmful to their 
health, suddenly don't have any desire to smoke anymore?
What planet does this guy live on?
> , and ultimately shows how with enough information, including how one's 
> own desires impact other observers, there is an eventual convergence, 
> where one's desires extend beyond merely wanting what's best for 
> oneself, but also a consideration of what's best for all concerned. 
> This full understanding of what's best for all concerned is the same 
> understanding, regardless of which initial subject you start from.
Apart from the physical impossibility, how can that possibly be true?
>
>
>
>
>      > Let us say you have a particular set of rules in your code.
>
>      > By [what] process do you decide what rules to adopt, or decide to
>     adopt one rule vs. another.
>
>
>      > My contention is that to even form a moral code, one must hold
>     some
>     meta-rule for optimizing what one considers to be good while
>     minimizing or avoiding bad.
>
>
>     Indeed. And I'd say that the meta-rule is what defines 'good' and
>     'bad'.
>
>
> That seems a bit circular to me. I am not sure how it gets off the 
> ground without a way to distinguish good from bad.
The meta-rule is what defines 'good', 'better', 'bad' and 'worse'. 
Whatever that rule is (and it will differ between people, and between 
groups of people), it is the basis for the moral system.
Here's an example (admittedly a terrible one, with lots of problems, but 
still a real one): Whatever (my) god wants, is Good, whatever (my) god 
doesn't want, is Bad.
>
>
>      > And I think if you explored this meta-rule, you would find it
>     is not
>     all that different from the position Zuboff reaches in his paper.
>
>
>     On the contrary, it is totally different, and much simpler, than
>     Zuboff's nonsense.
>
>
> It may seem that way, but I think you have swept the details of how to 
> distinguish good from bad under the rug.
That is a very lumpy rug.
>
>
>
>      > Ultimately, what is good (for one individual) is what that
>     individual
>     would want for themselves if they had a complete knowledge of
>     everything
>     involved.
>
>
>     First, No.
>     Second, this would be reducing morality to what is good for an
>     individual...
>
>
> Note that I was careful to specify "good for the individual." I.e., 
> start with the simple model of only a single conscious being in all 
> reality. Then it becomes clear this is a working definition of good 
> that works for that lone being.
Still no.
Do you not recognise that someone's moral code can be based on something 
other than their own personal benefit?
>
>
>      > And then this then extended to define good as a maximization of
>     good
>     for all concerned, to achieve the most possible good among all beings
>     who have desires, by satisfying (to the maximum possible extent) the
>     desires each individual would still hold if they all had a perfect
>     grasp
>     of everything. This he refers to as a reconciliation of all
>     systems of
>     desire.
>
>
>     ... then dragging everyone else into it 
>
>
> Where are you getting this "dragging into it" from?
"extended ... for all concerned ... all beings who have desires"
>
> Does the golden rule "drag everyone else into it"?
Yes, it explicitly does. 'Treat /others/ as you would treat yourself'
>
> Does your moral code "drag everyone else into it"?
No, I apply it only to myself.
...
>
> You could liken Zuboff's result to the platinum rule, corrected by 
> better information, weighted appropriately, modulated by future 
> consequences, and with further concern for possible/future beings who 
> may not (yet) exist.
The platinum rule is the platinum rule. When you 'correct' it, you turn 
it into something else.
Where does this 'better information' come from? Who decides whether it's 
better or not? How is it weighted? How are the future consequences 
discovered and evaluated? And, oh, I won't even bother addressing 
non-existent beings. We're now drifting into the absurd.
>
>
>     I really don't see the point of positing an impossible knowledge then
>     using this as the basis of a system of morality (or anything at all).
>
>
> I've addressed this many times already.
As I have refuted it.
>
>     Saying "Oh, but it's just theoretical, not real, don't take it too
>     literally" is basically the same as saying it's totally useless
>     for any
>     practical purpose.
>
>
> I haven't said that.
Maybe not literally, but in essence?
If not, then it's real, and we should take it seriously?
I'm talking here about the idea of 'a perfect grasp'.
If this is a real thing, not just a fantasy, I'd like to know how it's 
done. I would certainly take that seriously.
>
>
>      >> It's about having a choice. If it were possible to have 'perfect
>     knowledge', there would be no morality, no choice.
>
>      > I'm not sure that follows. Even with perfect knowledge, you could
>     still choose whether or not to act in accordance with morally best
>     action.
>
>
>     That's true. People can choose to be evil. Does anyone actually do
>     that?
>
>
> All the time.
>
>
>     We'd probably class it as mental illness.
>
>
> We all do it in small ways all the time.
>
> For example, we will choose to pay $20 to go see a movie instead of 
> taking the time to buy a $20 meal for a hungry person. We know it 
> would be a more moral way to spend the $20, but will choose a less 
> moral action instead.
That's not choosing to be evil, even in a small way. That's prioritising 
what you decide to be the better outcome. That's your moral system in 
action. If you really think that it would be more moral to spend the 
money in a different way, then you have conflicting moral systems, and 
need to do some thinking.
>
>
>     I don't understand why you are taking Zuboff's paper seriously. Do you
>     take his first three statements in the paper's abstract at face
>     value?:
>
>     1) "If I desire to drink some stuff thinking it is hot chocolate when
>     actually it is hot mud, my desire is not a real one - it’s
>     mistaken or
>     only apparent."
>
>     (misconstruing the desire to drink hot chocolate as a desire to drink
>     whatever is in the cup. If that were the case, he'd drink the mud)
>
>
> I think you are misreading and over-examining this. It is nothing more 
> than an example of how a desire "ooh that looks good I want to drink 
> it!" can be corrected with new information.
>
> I see no problem with that observation. To me it is obviously true.
>
>
>
>
>     2) "This example illustrates how a desire must always depend on a
>     belief
>     about its object, a belief about what it is and what it’s like."
>
>     (false assumption that if any desire is dependent on a belief
>     (something
>     that I'd dispute, but it needs closer examination), all desires must
>     always be dependent on beliefs. Saying "This example
>     illustrates..." is
>     deflecting the reader from the fact that he's making an assumption
>     and
>     failing to show why it should be true)
>
>
> A desire is the will to fulfill some need or want. By definition, 
> then, it relates to some anticipated future state or experience, which 
> is presently unrealized.
>
> Accordingly, that desire concerns a belief (about what the future 
> state or experience will be like).
>
> Again, this is all from the abstract, for which I'll not give the full 
> exposition or justification. If the full argument and justification 
> could be made in abstracts, we wouldn't need papers. Which is why I 
> suggest you read the paper if you have questions about it, as it is 
> quite thorough in addressing all the concerns you are raising.
>
>
>
>
>     3) "But beliefs are correctable, so desires are correctable"
>
>     (I don't know why he uses the term 'correctable', which implies
>     wrongness, but this statement just compounds the above errors and
>     adds
>     one more: False conclusion that if a belief can change, this means
>     that
>     a desire can change)
>
>
> I don't know what about this is controversial. Consider this example:
>
> T1: Smoking looks cool, I want to smoke.
> T2: You know smoking causes lung cancer, right?
> T3: Oh it does? I suppose then I no longer want to smoke.
>
>
>
>
>
>     I can understand someone saying that beliefs are sometimes based on
>     desires (I'm sure this is often the case), but not the reverse.
>
>     That's
>     just daft. Desires are emotional, derived from feedback on bodily
>     states, and elaborated by memories and imagination.
>
>
> "Drives" might be a better word to use for such things, and it would 
> also help in understanding his paper to distinguish innate drives 
> which we can't decide or change, from the desires that we decide with 
> our minds, which we can change.
Ok, so we can change 'desires' to 'intentions'. Fair enough?
That, at least, makes the smoking example more reasonable.
This does mean, of course, that we are now interpreting Zuboff (he might 
say 'correcting'!), and he might not agree with the interpretation.
Now I'm going to have to go back over most of it again (apart from the 
silly 'perfect grasp' stuff).
I'll be back.
-- 
Ben