[ExI] 1DIQ: an IQ metaphor to explain superintelligence
    Jason Resch 
    jasonresch at gmail.com
       
    Sat Nov  1 23:20:21 UTC 2025
    
    
  
On Sat, Nov 1, 2025, 6:06 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Apologies for the formatting of this. I've just noticed that some email
> clients jam the text together, making it hard to read.
>
> Here is a better formatted version (I hope!):
>
> On 01/11/2025 21:42, Ben wrote:
>
>
> On 01/11/2025 13:32, Jason Resch wrote:
>
> On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>>     On 31/10/2025 19:04, Jason Resch wrote:
>>
>>
>> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a perfect morality consists of. And it too, provides a definition of what morality is, and likewise provides a target to aim towards.
>>
>>
>>>
>>> Ben Wrote:
>>>
>>> As different intelligent/rational agents have different experiences,
>>>
>>> they will form different viewpoints, and come to different conclusions
>>>
>>> about what is right and not right, what should be and what should not,
>>>
>>> what they want and what they don't, just like humans do.
>>
>> The point of the video and article is that desires are based on beliefs, and because beliefs are correctable then so are desires. There is only one "perfect grasp" and accordingly, one true set of beliefs, and from this it follows one most-correct set of desires. This most correct set of desires is the same for everyone, regardless of from which viewpoint it is approached.
>>
>> Nope. This is nonsense. Just about every assertion is wrong. The very
>> first sentence in the abstract is false. And the second. And the third. So
>> the whole thing falls apart. Desires are not based on beliefs, they are
>> based on emotions. The example of 'wanting to drink hot mud' is idiotic.
>> Just because the cup turns out to contain mud doesn't invalidate the desire
>> to drink hot chocolate.
>>
> I think you are misinterpreting the example. It is the desire to drink the
> contents of the cup that changes in response to new information.
>
>
>
> I wouldn't have put it as 'desire to drink the contents of the cup', when
> the desire is to drink hot chocolate. There are originating desires and
> there are planned actions to satisfy the desire. Drinking from the cup
> might turn out to be a bad idea (the plan is faulty because of incorrect
> information), but the original desire is not changed.
> If you want to see a Batman movie at a movie theatre, and find that the
> only movie available is a romantic comedy, you don't say that your desire
> has suddenly changed into a desire to watch any movie. You still want to
> watch Batman, but can't, so your desire is thwarted, not changed.
>
>
> Think about this alternate example, which may be easier to consider: you may naively have the desire to take a certain job, to marry a particular person, or to attend a certain event, but if that choice turns out to be ruinous, you may regret that decision. If your future self could warn you of the consequences of that choice, then you may no longer desire that job, marriage, or attendance as much as you previously did, in light of the costs it bore which you were unaware of.
>
>
>
> Decisions are often regretted. That is a fact of life. Future selves
> warning you about bad decisions is not. That's time-travel (aka 'magic'),
> and should not feature in any serious consideration of how to make good
> decisions. "If x could..." is no help when x is impossible. We have
> workable tools to help people make better decisions, but time-travel isn't
> one of them.
>
>
These are examples to communicate a point. They are not intended to be taken
literally.
The point is that you may desire a job, but, had you known more about the job,
you would not have desired it.
>  It's not a 'mistaken' desire at all (the mistake is a
>>     sensory one), and it doesn't somehow morph into a desire to drink
>>     hot mud.
>>
>>
>>
>>     "Beliefs are correctable, so desires are correctable"
>>
>>     Each of those two things are true (if you change 'correctable' to
>>     'changeable'), but the one doesn't imply the other, which follows
>>     from the above.
>>
> Does it apply in the examples I provided?
>
>
>
> No. The examples are about decisions, not desires, and they don't address
> the beliefs that lead to the decisions. "You may have the desire to do X"
> is different to "You decide to do X". The desire may drive the decision or
> at least be involved in it, but it isn't the decision (some people act
> immediately on their desires, but that still doesn't mean they are the same
> thing).
> Can you regret a desire? I don't think so, but it is arguable. It would be
> regretting something that you have no direct control over, so would be
> rather silly.
>
> The decision is irrelevant.
You either desire the job or you don't. The point is that this can change
based on new information.
> Apart from that, there is still no dependency of desires on beliefs.
>
>
If you believe it will be good for you, you may desire it. If you learn
later that it will be bad for you, you may no longer desire it. Here, what
you desire has a dependency on what you believe.
A belief may well affect the plan you make to satisfy a desire, but
> changing the belief doesn't change the desire. Can a belief give rise to a
> desire? That's a more complicated question than it appears, I think, and
> leads into various types of desires, but still, there's no justification
> for the statement "beliefs can change, therefore desires can".
>
>
>
>     'Perfect grasp' doesn't mean anything real. It implies that it's
>>     possible to know everything about everything, or even about
>>     something. The very laws of physics forbid this, many times over, so
>>     using it in an argument is equivalent to saying "magic".
>>
> It doesn't have to be possible. The paper is clear on this. The goal of the paper is to answer objectively what makes a certain thing right or wrong. For example, if someone offered you $10 and, in return, some random person unknown to you would be killed, in a way that would not negatively affect you or anyone you knew, and your memory of the ordeal would be wiped so you wouldn't even bear a guilty conscience, for what reason do we judge and justify the wrongness of taking the $10?
>
>
>
> This is 'Trolley problem thinking'. Making up some ridiculous scenario
> that would never, or even could never, occur in the real world, then
> claiming that it has relevance to the real world.
>
>
It's to frame the problem: where does morality come from, what is its
basis, and by what method do we determine right from wrong?
> This is the goal of the paper: to provide a foundation upon which morality can be established objectively from first principles.
>
>
>
> Let's see some examples that are grounded in reality that 'provide a
> foundation upon which morality can be established objectively'. I'm not
> closed to the possibility that such a thing can be done, but I'm not
> holding my breath.
>
>
> How would you answer the question of what separates right from wrong? The initial utilitarian answer is whatever promotes more good experiences than bad experiences. But then, how do you weigh the relative goodness or badness of one experience vs. another, between one person and another, or between the varying missed opportunities among future possibilities?
> Such questions can only be answered with something approximating an attempt at a grasp of what it means and what it is like to be all the various existing and potential conscious things.
>
> That's just another way of saying that it can't be answered.
>
> We can make heuristic attempts at this, despite the fact that we never achieve perfection.
>
>
>
> Exactly. We always have to make decisions in the /absence/ of full
> information. What we would do if we had 'all the information' is
> irrelevant, if it even means anything.
>
>
Yes, this is what I've been saying from the beginning. Perfect grasp is
used only to define the aim of morality, not to serve as a practical theory.
Consider weather prediction. We can't predict with 100% accuracy, nor
predict arbitrarily far into the future. Yet we can make near-term
predictions with some modicum of accuracy.
This is how moral decisions can (and should) be approached.
> For example, democracy can be viewed as a crude approximation, by which each person is given equal weight in the consideration of their desires (with no attempt to weigh relative benefits or suffering). But this is still better than an oligarchy, where the desires of a few are considered while the desires of the masses are ignored. You can also see the difference between an uninformed electorate and a well-informed one. The informed electorate has a better grasp of the consequences of their decisions, and so their collective desires are more fully fulfilled.
>
>
>
> I don't see the relevance to morality. Politics and morality are rarely on
> talking terms.
>
>
Please consider what I wrote carefully. It is an example of putting a
heuristic into practice, and of how better heuristics are based on the same
model and definition of morality as defined in that paper.
>
>>
>>
>>     'One true set of beliefs' is not only wrong, it's dangerous, which
>>     he just confirms by saying it means there is only one most-correct
>>     set of desires, for /everyone/ (!).
>>
> Do you not believe in objective truth?
>
>
>
> No.
> This is religious territory, and the road to dogmatism.
>
> Belief in objective truth is the basis of science.
This is the very reason why science is superior to religion.
>
> Without objective truth, by what measure is any theory in science said to
be better than any other? What is the meaning of "falsified" if there are
no objective truths or falsehoods? Science as a field and endeavor
collapses without a notion of objective truth (unless, perhaps, you
subscribe to some constructionist, relativist, post-modern notion of
reality/truth). But I take the view that most scientists consider their
work as something beyond a social interaction/game.
It doesn't assume that there is any 'absolute truth' which can be
> discovered, after which no further inquiry is needed or wanted.
>
>
I think you may be confusing the existence of objective truth with the
idea that we can access that objective truth and have certainty when we
hold it. One does not imply the other.
I believe there is objective truth, *and* I believe we can never be certain
if/when we have it.
We think it is objectively true that 2+2=4, but we can't prove it
mathematically, unless we assume some set of axioms (which themselves may
or may not be true), and we cannot prove the set of axioms are true. So
even on the most trivial matters, we never achieve certainty.
As to whether, for instance, the laws of physics are invariant everywhere
> and at all times, that's a question we can't answer, and probably will
> never be able to.
>
>
Many things are true that we will never know.
The 10^(googolplex)th digit of the binary representation of Pi is either 1
or 0. But we, in our finite universe, will never have the computational
resources to determine which.
Nevertheless, consider these two statements:
- The 10^(googolplex)th digit of the binary representation of Pi is 1.
- The 10^(googolplex)th digit of the binary representation of Pi is 0.
For those who believe in objective truth, exactly one of these statements is
true, even though no one will ever be able to say which.
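As an aside on why nearby digits are within reach while that one never will
be: the Bailey-Borwein-Plouffe formula lets you compute the n-th hexadecimal
(and hence binary) digit of Pi without computing any of the digits before it.
A minimal Python sketch (my own illustration, not anything from Zuboff's
paper; ordinary floats keep it honest only for modest n, nowhere near a
googolplex):

  def pi_hex_digit(n):
      """n-th hexadecimal digit of pi after the point (n >= 1), via the
      Bailey-Borwein-Plouffe digit-extraction formula."""
      def frac_series(j):
          # fractional part of the sum over k of 16^(n-1-k) / (8k + j)
          s = 0.0
          for k in range(n):            # head terms: exponent >= 0, use modular pow
              s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
          for k in range(n, n + 8):     # tail: a few rapidly shrinking terms
              s = (s + 16.0 ** (n - 1 - k) / (8 * k + j)) % 1.0
          return s
      x = (4 * frac_series(1) - 2 * frac_series(4)
           - frac_series(5) - frac_series(6)) % 1.0
      return int(x * 16)

  def pi_binary_digit(m):
      """m-th binary digit of pi after the point (m >= 1); each hex digit
      packs four binary digits."""
      d = pi_hex_digit((m + 3) // 4)
      return (d >> (3 - (m - 1) % 4)) & 1

  print(pi_hex_digit(1))     # 2  (pi = 3.243F6A88... in hex)
  print(pi_binary_digit(3))  # 1  (pi = 11.001001000011111... in binary)

The work still grows with n, so the 10^(googolplex)th digit stays forever
beyond any computation our universe could host, even though it has a
definite value.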
>
> If there is objective truth, they are the same truths for everyone.
> Now consider the objective truths for statements such as "it is right to do X" or "it is wrong to do Y". If there are objective truths, these extend to an objective morality. There would be an objective truth to what action is best (even if we lack the computational capacity to determine it).
> You may say this is fatal to the theory, but note that we can still roughly compute with the number Pi, even though we never consider all of its infinite digits.
>
>>  Does this not ring loud alarm
>>     bells to you? I'm thinking we'd better hope that there really is no
>>     such thing as objective morality (if there is, Zuboff is barking up
>>     the wrong tree, for sure), it would be the basis for the worst kind
>>     of tyranny. It's a target that I, at least, want to aim away from.
>>     180 degrees away!
>>
> No one is proposing putting a tyrannical AI in charge that forces your every decision. But a superintelligent AI that could explain to you the consequences of different actions you might take (as far as it is able to predict them) would be quite invaluable, and improve the lives of many who choose to consider its warnings and advice.
>
>
>
> Absolutely. I have no argument with that. But we were talking about
> morality.
>
>
Yes, and morality concerns which actions are right or wrong.
>
>
>>     His twisting of desire into morality is, well, twisted. Morality
>>     isn't about what we should want to do, just as bravery isn't about
>>     having no fear.
>>
> Do you have a better definition of morality?
>
> I don't think that's the question you want to ask. A dictionary can provide
> the answer.
>
> This is what the dictionary says:
"principles concerning the distinction between right and wrong or good and
bad behavior."
But this only pushes the problem back: what is the definition of right or
wrong, good or bad?
Zuboff's paper is an example of a theoretical basis on which we can form
such definitions, and define what we mean by right and wrong, good and
bad.
> I do have my own moral code though, if that's what you want to know. I
> don't expect everyone to see the value of it, or adopt it. And I might
> change my mind about it in the future.
>
>
Let us say you have a particular set of rules in your code.
By what process do you decide which rules to adopt, or decide to adopt one
rule vs. another?
My contention is that to even form a moral code, one must hold some
meta-rule for optimizing what one considers to be good while minimizing or
avoiding bad. And I think if you explored this meta-rule, you would find it
is not all that different from the position Zuboff reaches in his paper.
Ultimately, what is good (for one individual) is what that individual would
want for themselves if they had a complete knowledge of everything
involved. This is then extended to define good as a maximization of
good for all concerned, to achieve the most possible good among all beings
who have desires, by satisfying (to the maximum possible extent) the
desires each individual would still hold if they all had a perfect grasp of
everything. This he refers to as a reconciliation of all systems of desire.
He wants to turn people into puppets, and actually
>>     remove moral agency from them.
>>
> Imperfect understanding of consequences cripples our ability to be effective moral agents.
>
>
>
> Then you think we are crippled as effective moral agents, and doomed to
> always be so (because we will always have imperfect understanding of
> consequences).
>
> Indeed. That is why life is so hard, and why "to err is human." As
imperfect beings, we will inevitably and perpetually make mistakes.
But with greater knowledge, experience, and intelligence, we can strive to
minimize that error.
>
>  When we don't understand the pros and cons of a decision, how can we hope to be moral agents? We become coin-flippers -- which I would argue is to act amorally. If we want true moral agency, we must strive to improve our grasp of things.
>
>
>
> This is taking an extreme position, and saying either we are 'perfect' or
> no use at all.
>
> Not at all. I specified "when we don't understand..."
We are neither. Acting with incomplete information is inevitable.
>
> Yes.
That doesn't mean morality is impossible.
>
> I fully agree.
> Just as bravery is being afraid, but acting anyway, morality is not
> knowing for sure what the best action is, but acting anyway.
>
> Since we never know anything for sure, I'm not sure that qualifier adds
anything useful. I would instead say: moral action requires an attempt to
identify the morally best action, and then choosing that action.
Then, "amoral action" is action without attempting to identify what the
morally best action is, and "immoral action" would be an attempt to
identify the morally best action, but then choosing a different action.
Making the best decision you can, in line with your values. It's about
> having a choice. If it were possible to have 'perfect knowledge', there
> would be no morality, no choice.
>
>
I'm not sure that follows. Even with perfect knowledge, you could still
choose whether or not to act in accordance with the morally best action.
Jason

> I'm not sure what you'd call it. Predetermination, perhaps.
>
>
> --
> Ben
>
>
>