[ExI] 1DIQ: an IQ metaphor to explain superintelligence
Jason Resch
jasonresch at gmail.com
Fri Oct 31 21:42:34 UTC 2025
On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 31/10/2025 19:04, Jason Resch wrote:
>
> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a perfect morality consists of. And it too, provides a definition of what morality is, and likewise provides a target to aim towards.
>
>
>>
>> Ben Wrote:
>>
>> As different intelligent/rational agents have different experiences,
>> they will form different viewpoints, and come to different conclusions
>> about what is right and not right, what should be and what should not,
>> what they want and what they don't, just like humans do.
>
> The point of the video and article is that desires are based on beliefs, and because beliefs are correctable, so are desires. There is only one "perfect grasp" and, accordingly, one true set of beliefs, and from this it follows that there is one most-correct set of desires. This most-correct set of desires is the same for everyone, regardless of the viewpoint from which it is approached.
>
>
> Nope. This is nonsense. Just about every assertion is wrong. The very
> first sentence in the abstract is false. And the second. And the third. So
> the whole thing falls apart.
>
> Desires are not based on beliefs, they are based on emotions. The example
> of 'wanting to drink hot mud' is idiotic. Just because the cup turns out to
> contain mud doesn't invalidate the desire to drink hot chocolate.
>
I think you are misinterpreting the example. It is the desire to drink the
contents of the cup that changes in response to new information.
Consider this alternate example, which may be easier to think about: you may
naively desire to take a certain job, marry a particular person, or attend
a certain event, but if that choice turns out to be ruinous, you may regret
the decision. If your future self could warn you of the consequences of that
choice, then you may no longer desire that job, marriage, or attendance as
much as you previously did, in light of the costs it carried, of which you
were unaware.
> It's not a 'mistaken' desire at all (the mistake is a sensory one), and it
> doesn't somehow morph into a desire to drink hot mud.
>
> "Beliefs are correctable, so desires are correctable"
> Each of those two things is true (if you change 'correctable' to
> 'changeable'), but the one doesn't imply the other, which follows from the
> above.
>
Does that objection hold for the examples I provided?
> 'Perfect grasp' doesn't mean anything real. It implies that it's possible
> to know everything about everything, or even about something. The very laws
> of physics forbid this, many times over, so using it in an argument is
> equivalent to saying "magic".
>
It doesn't have to be possible. The paper is clear on this. The goal of the
paper is to answer objectively what makes a certain thing right or wrong.
For example, suppose someone offered you $10 and in return some random
person unknown to you would be killed, in a way that would not negatively
affect you or anyone you knew, and your memory of the ordeal would be wiped
so you wouldn't even bear a guilty conscience. On what grounds do we judge
and justify the wrongness of taking the $10?
This is the goal of the paper: to provide a foundation upon which morality
can be established objectively from first principles.
How would you answer the question of what separates right from wrong? The
initial utilitarian answer is whatever promotes more good experiences than
bad experiences. But then, how do you weigh the relative goodness or
badness of one experience vs. another, between one person and another, or
between the varying missed opportunities among future possibilities?
Such questions can only be answered by attempting something that
approximates a grasp of what it means, and what it is like, to be all the
various existing and potential conscious things.
We can make heuristic attempts at this, even though we never achieve
perfection.
For example, democracy can be viewed as a crude approximation, in which
each person's desires are given equal weight in the deliberation (with no
attempt to weigh relative benefits or suffering). But this is still better
than an oligarchy, where the desires of a few are considered while the
desires of the masses are ignored. You can also see the difference between
an uninformed electorate and a well-informed one: the informed electorate
has a better grasp of the consequences of its decisions, and so its
collective desires are more fully fulfilled.
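To make that "crude approximation" concrete, here is a minimal toy sketch of
my own (nothing from the paper or the video; the agents, options, and error
rates are invented purely for illustration). It aggregates preferences with
equal weight, one vote per agent, and shows how an electorate that misjudges
consequences more often expresses its collective desires less reliably:

import random

# Toy model: each agent has a true utility for each option, but votes based
# on a possibly mistaken belief about which option serves it best.
# Equal-weight aggregation (one agent, one vote) stands in for the
# "count every desire equally" idea; a better-informed electorate makes
# fewer belief errors.

random.seed(0)

OPTIONS = ["A", "B"]

def vote(true_utilities, error_rate):
    """Return the option an agent votes for, flipping away from its truly
    best option with probability error_rate (an uninformed agent misjudges
    consequences more often)."""
    best = max(true_utilities, key=true_utilities.get)
    if random.random() < error_rate:
        return random.choice([o for o in OPTIONS if o != best])
    return best

def election(agents, error_rate):
    """Equal-weight aggregation: tally one vote per agent, return the winner."""
    tally = {o: 0 for o in OPTIONS}
    for utilities in agents:
        tally[vote(utilities, error_rate)] += 1
    return max(tally, key=tally.get)

# 1000 hypothetical agents; 60% are genuinely better served by option A.
agents = [{"A": 1.0, "B": 0.0} if i < 600 else {"A": 0.0, "B": 1.0}
          for i in range(1000)]

print("well-informed electorate picks:  ", election(agents, error_rate=0.05))
print("poorly informed electorate picks:", election(agents, error_rate=0.45))

As the error rate approaches one half, the outcome approaches a coin flip,
which is roughly the situation of the "coin-flippers" mentioned further
below.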
> 'One true set of beliefs' is not only wrong, it's dangerous, which he just
> confirms by saying it means there is only one most-correct set of desires,
> for /everyone/ (!).
>
Do you not believe in objective truth?
If there are objective truths, they are the same truths for everyone.
Now consider the objective truths for statements such as "it is right to do
X" or "it is wrong to do Y". If there are objective truths, these extend to
an objective morality. There would be an objective truth to what action is
best (even if we lack the computational capacity to determine it).
You may say this is fatal to the theory, but note that we can still roughly
compute with the number Pi, even though we never consider all of its
infinite digits.
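As a loose numerical illustration of that point (my own sketch, not anything
from the paper): a finite number of terms already yields a usable
approximation of pi, even though the full expansion is infinite.

import math

# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# We never sum all infinitely many terms, yet a finite prefix is already useful.
def approx_pi(terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

for terms in (10, 1000, 100000):
    estimate = approx_pi(terms)
    print(f"{terms:>6} terms: {estimate:.6f}  (error {abs(estimate - math.pi):.6f})")

The estimate improves with every additional term without ever being exact,
which is all the analogy requires.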
> Does this not ring loud alarm bells to you? I'm thinking we'd better hope
> that there really is no such thing as objective morality (if there is,
> Zuboff is barking up the wrong tree, for sure), it would be the basis for
> the worst kind of tyranny. It's a target that I, at least, want to aim away
> from. 180 degrees away!
>
No one is proposing putting a tyrannical AI in charge that forces your
every decision. But a superintelligent AI that could explain to you the
consequences of different actions you might take (as far as it is able to
predict them) would be invaluable, and would improve the lives of many who
choose to consider its warnings and advice.
> His twisting of desire into morality is, well, twisted. Morality isn't
> about what we should want to do, just as bravery isn't about having no
> fear.
>
Do you have a better definition of morality to share?
> He wants to turn people into puppets, and actually remove moral agency from
> them.
>
Imperfect understanding of consequences cripples our ability to be
effective moral agents. When we don't understand the pros and cons of a
decision, how can we hope to be moral agents? We become coin-flippers --
which I would argue is to act amorally. If we want true moral agency, we
must strive to improve our grasp of things.
Jason
> His proposal is equivalent to destroying the amygdala (fear centre of the
> brain (kind of)) and claiming to have revealed the secret of 'true bravery'.
>
> --
> Ben
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>