[ExI] 1DIQ: an IQ metaphor to explain superintelligence
Ben Zaiboc
ben at zaiboc.net
Sat Nov 1 22:05:16 UTC 2025
Apologies for the formatting of this. I've just noticed that some email
clients jam the text together, making it hard to read.
Here is a better formatted version (I hope!):
On 01/11/2025 21:42, Ben wrote:
>
> On 01/11/2025 13:32, Jason Resch wrote:
>> On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>>
>> On 31/10/2025 19:04, Jason Resch wrote:
>>> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a
>>> perfect morality consists of. And it too, provides a definition
>>> of what morality is, and likewise provides a target to aim towards.
>>>
>>> Ben Wrote: As different intelligent/rational agents have
>>> different experiences, they will form different viewpoints,
>>> and come to different conclusions about what is right and
>>> not right, what should be and what should not, what they
>>> want and what they don't, just like humans do.
>>>
>>> The point of the video and article is that desires are based on
>>> beliefs, and because beliefs are correctable then so are
>>> desires. There is only one "perfect grasp" and accordingly, one
>>> true set of beliefs, and from this it follows one most-correct
>>> set of desires. This most correct set of desires is the same for
>>> everyone, regardless of from which viewpoint it is approached.
>> Nope. This is nonsense. Just about every assertion is wrong. The
>> very first sentence in the abstract is false. And the second. And
>> the third. So the whole thing falls apart. Desires are not based
>> on beliefs, they are based on emotions. The example of 'wanting
>> to drink hot mud' is idiotic. Just because the cup turns out to
>> contain mud doesn't invalidate the desire to drink hot chocolate.
>>
>> I think you are misinterpreting the example. It is the desire to
>> drink the contents of the cup that changes in response to new
>> information.
> I wouldn't have put it as 'desire to drink the contents of the cup',
> when the desire is to drink hot chocolate. There are originating
> desires and there are planned actions to satisfy the desire. Drinking
> from the cup might turn out to be a bad idea (the plan is faulty
> because of incorrect information), but the original desire is not changed.
> If you want to see a Batman movie at a movie theatre, and find that
> the only movie available is a romantic comedy, you don't say that your
> desire was really 'to watch any movie' and that it has suddenly
> changed. You still want to watch Batman, but can't, so your desire is
> thwarted, not changed.
>
>> Think about this alternate example which may be easier to consider:
>> you may naively have the desire to take a certain job, to marry a
>> particular person, attend a certain event, but if that choice turns
>> out to be ruinous, you may regret that decision. If your future self
>> could warn you of the consequences of that choice, then you may no
>> longer desire that job, marriage, or attendance, as much as you
>> previously did, in light of the costs they bore, which you had been
>> unaware of.
> Decisions are often regretted. That is a fact of life. Future selves
> warning you about bad decisions is not. That's time-travel (aka
> 'magic'), and should not feature in any serious consideration of how
> to make good decisions. "If x could..." is no help when x is
> impossible. We have workable tools to help people make better
> decisions, but time-travel isn't one of them.
>> It's not a 'mistaken' desire at all (the mistake is a sensory
>> one), and it doesn't somehow morph into a desire to drink hot
>> mud. "Beliefs are correctable, so desires are correctable" Each
>> of those two things are true (if you change 'correctable' to
>> 'changeable'), but the one doesn't imply the other, which follows
>> from the above.
>>
>> Does it apply in the examples I provided?
> No. The examples are about decisions, not desires, and they don't
> address the beliefs that lead to the decisions. "You may have the
> desire to do X" is different to "You decide to do X". The desire may
> drive the decision or at least be involved in it, but it isn't the
> decision (some people act immediately on their desires, but that still
> doesn't mean they are the same thing).
> Can you regret a desire? I don't think so, but it is arguable. It
> would be regretting something that you have no direct control over, so
> would be rather silly.
>
> Apart from that, there is still no dependency of desires on beliefs. A
> belief may well affect the plan you make to satisfy a desire, but
> changing the belief doesn't change the desire. Can a belief give rise
> to a desire? That's a more complicated question than it appears, I
> think, and leads into various types of desires, but still, there's no
> justification for the statement "beliefs can change, therefore desires
> can".
>
>> 'Perfect grasp' doesn't mean anything real. It implies that it's
>> possible to know everything about everything, or even about
>> something. The very laws of physics forbid this, many times over,
>> so using it in an argument is equivalent to saying "magic".
>>
>> It doesn't have to be possible. The paper is clear on this. The goal
>> of the paper is to answer objectively what makes a certain thing
>> right or wrong. For example, if someone offered you $10 and in return
>> some random person unknown to you would be killed, in a way that
>> would not negatively affect you or anyone you knew, and your memory
>> of the ordeal would be wiped so you wouldn't even bear a guilty
>> conscience, for what reason do we judge and justify the wrongness of
>> taking the $10?
> This is 'Trolley problem thinking'. Making up some ridiculous scenario
> that would never, or even could never, occur in the real world, then
> claiming that it has relevance to the real world.
>> This is the goal of the paper: to provide a foundation upon which
>> morality can be established objectively from first principles.
> Let's see some examples that are grounded in reality that 'provide a
> foundation upon which morality can be established objectively'. I'm not
> closed to the possibility that such a thing can be done, but I'm not
> holding my breath.
>> How would you answer the question of what separates right from wrong?
>> The initial utilitarian answer is whatever promotes more good
>> experiences than bad experiences. But then, how do you weigh the
>> relative goodness or badness of one experience vs. another, between
>> one person and another, between the varying missed opportunities
>> among future possibilities?
>> Such questions can only be answered with something approximating an
>> attempt at a grasp of what it means and what it is like to be all the
>> various existing and potential conscious things.
> That's just another way of saying that it can't be answered.
>> We can make heuristic attempts at this, despite the fact that we
>> never achieve perfection.
> Exactly. We always have to make decisions in the /absence/ of full
> information. What we would do if we had 'all the information' is
> irrelevant, if it even means anything.
>> For example, democracy can be viewed as a crude approximation, by
>> which each person is given equal weight in the consideration of their
>> desires (with no attempt to weight relative benefits or suffering).
>> But this is still better than an oligarchy, where the desires of few
>> are considered while the desires of the masses are ignored. You can
>> also see the difference between an uninformed electorate and a
>> well-informed one. The informed electorate has a better grasp of the
>> consequences of their decisions, and so their collective desires are
>> more fully fulfilled.
> I don't see the relevance to morality. Politics and morality are
> rarely on talking terms.
>> 'One true set of beliefs' is not only wrong, it's dangerous,
>> which he just confirms by saying it means there is only one
>> most-correct set of desires, for /everyone/ (!).
>>
>> Do you not believe in objective truth?
> No.
> This is religious territory, and the road to dogmatism.
> This is the very reason why science is superior to religion. It
> doesn't assume that there is any 'absolute truth' which can be
> discovered, after which no further inquiry is needed or wanted.
> As to whether, for instance, the laws of physics are invariant
> everywhere and at all times, that's a question we can't answer, and
> probably will never be able to.
>
>> If there are objective truths, they are the same truths for everyone.
>> Now consider the objective truths for statements such as "it is right
>> to do X" or "it is wrong to do Y". If there are objective truths,
>> these extend to an objective morality. There would be an objective
>> truth to what action is best (even if we lack the computational
>> capacity to determine it).
>> You may say this is fatal to the theory, but note that we can still
>> roughly compute with the number Pi, even though we never consider all
>> of its infinite digits.
>>
>> Does this not ring loud alarm bells for you? I'm thinking we'd
>> better hope that there really is no such thing as objective
>> morality (if there is, Zuboff is barking up the wrong tree, for
>> sure), because it would be the basis for the worst kind of tyranny. It's
>> a target that I, at least, want to aim away from. 180 degrees away!
>>
>> No one is proposing putting a tyrannical AI in charge that forces
>> your every decision. But a superintelligent AI that could explain to
>> you the consequences of different actions you might take (as far as
>> it is able to predict them) would be quite invaluable, and improve
>> the lives of many who choose to consider its warnings and advice.
> Absolutely. I have no argument with that. But we were talking about
> morality.
>> His twisting of desire into morality is, well, twisted. Morality
>> isn't about what we should want to do, just as bravery isn't
>> about having no fear.
>>
>> Do you have a better definition of morality?
> I don't think that's the question you want to ask. A dictionary can
> provide the answer.
>
> I do have my own moral code though, if that's what you want to know. I
> don't expect everyone to see the value of it, or adopt it. And I might
> change my mind about it in the future.
>>
>> He wants to turn people into puppets, and actually remove moral
>> agency from them.
>>
>> Imperfect understanding of consequences cripples our ability to be
>> effective moral agents.
> Then you think we are crippled as effective moral agents, and doomed
> to always be so (because we will always have imperfect understanding
> of consequences).
>> When we don't understand the pros and cons of a decision, how can we
>> hope to be moral agents? We become coin-flippers -- which I would
>> argue is to act amorally. If we want true moral agency, we must
>> strive to improve our grasp of things.
> This is taking an extreme position, and saying either we are 'perfect'
> or no use at all. We are neither. Acting with incomplete information
> is inevitable. That doesn't mean morality is impossible.
>
> Just as bravery is being afraid, but acting anyway, morality is not
> knowing for sure what the best action is, but acting anyway. Making
> the best decision you can, in line with your values. It's about having
> a choice. If it were possible to have 'perfect knowledge', there would
> be no morality, no choice. I'm not sure what you'd call it.
> Predetermination, perhaps.
--
Ben