[ExI] 1DIQ: an IQ metaphor to explain superintelligence
Jason Resch
jasonresch at gmail.com
Fri Oct 31 18:12:58 UTC 2025
On Fri, Oct 31, 2025 at 1:53 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 31/10/2025 12:28, Jason Resch wrote:
> > There is predicting the means and there is predicting the ends. I
> > think we can predict the ends, that is, the goals, of a
> > superintelligence. It may even be possible to predict (at a high
> > level) the morality of an AI, for example, if this argument is valid,
> > then all sufficiently intelligent and rational agents reach the same
> > morality.
> >
> > See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE
>
> I think we keep tripping up over the difference between theoretical and
> practical considerations. That video keeps talking about 'perfect'
> knowledge, but there's no such thing. There's no such thing as 'fully
> understanding' something. So arguments that are based on these concepts
> aren't going to help. Any system of morality that has to be based on
> 'perfect' knowledge can't be worked out, so is a non-starter.
>
AIXI ( https://www.hutter1.net/ai/uaibook.htm ) is an algorithm for perfect
intelligence. We cannot make a practical implementation of it, but that
doesn't mean it is useless.
It serves as a definition: it tells us what intelligence is, and, as an
ideal end-point, it gives us a target to aim towards.
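To make "perfect intelligence" concrete, AIXI's action rule (as I recall
it from Hutter's book) looks roughly like this:

  a_k := arg max_{a_k} sum_{o_k, r_k} ... max_{a_m} sum_{o_m, r_m}
         [r_k + ... + r_m] * sum_{q : U(q, a_1..a_m) = o_1 r_1 ... o_m r_m} 2^(-length(q))

In words: choose the action that maximizes expected total future reward,
averaged over every program q (every computable environment) consistent
with the interaction history so far, with each program weighted by
2^(-its length), a Solomonoff-style prior favoring simpler worlds. The
incomputability comes from that sum over all programs.

Here is a toy sketch in Python of the same idea, restricted to a small,
hand-picked class of deterministic environment models. All the names
here are my own invention, not anything from Hutter, and a real
approximation would also have to reweight models by their consistency
with the observed history:

# Toy, finite-horizon AIXI-style action selection over a finite class
# of deterministic environment models, each weighted by
# 2^-(description length). Purely illustrative; real AIXI mixes over
# all computable environments and is incomputable.

def q_value(models, history, action, actions, horizon):
    """Prior-weighted expected return of taking `action` now, then
    acting optimally for the remaining horizon - 1 steps."""
    total, weight = 0.0, 0.0
    for model in models:
        w = 2.0 ** -model["length"]              # simplicity prior
        obs, reward = model["step"](history, action)
        future = 0.0
        if horizon > 1:
            nxt = history + [(action, obs, reward)]
            future = max(q_value(models, nxt, a, actions, horizon - 1)
                         for a in actions)
        total += w * (reward + future)
        weight += w
    return total / weight

def act(models, history, actions, horizon):
    """AIXI-style choice: the action with the best weighted return."""
    return max(actions,
               key=lambda a: q_value(models, history, a, actions, horizon))

# Two made-up one-bit environments; the simpler (shorter) one dominates:
models = [
    {"length": 3, "step": lambda h, a: (0, 1.0 if a == 1 else 0.0)},
    {"length": 5, "step": lambda h, a: (1, 1.0 if a == 0 else 0.0)},
]
print(act(models, [], actions=[0, 1], horizon=2))  # prints 1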
Likewise, the paper ( https://philarchive.org/rec/ARNMAW ) defines what a
perfect morality consists of; it too tells us what morality is, and
likewise provides a target to aim towards.
>
> As different intelligent/rational agents have different experiences,
> they will form different viewpoints, and come to different conclusions
> about what is right and not right, what should be and what should not,
> what they want and what they don't, just like humans do.
The point of the video and article is that desires are based on beliefs,
and because beliefs are correctable, desires are correctable too. There
is only one "perfect grasp" and accordingly one true set of beliefs, and
from it follows one most-correct set of desires. This most-correct set
of desires is the same for everyone, regardless of the viewpoint from
which it is approached.
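Schematically (my gloss, not necessarily the paper's notation): if
desires are a function of beliefs, D = f(B), and the process of
correcting beliefs converges on a single true set B*, then corrected
desires converge on D* = f(B*), regardless of the belief set B one
starts from.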
Jason
> And just like
> humans, I reckon the only practical method of getting them to have
> values that are good for humans is to educate them as broadly as we
> can. Which basically means letting them have access to all the
> information they can cope with, without any filtering, or other kinds of
> censorship.
>
> I take some hope from the observation that the more someone knows about
> the world, the better they tend to behave. Usually. Most of the terrible
> rulers through history have been people in the grip of some
> narrow-minded ideology. This makes me wonder about the communist Chinese
> rushing ahead with AI. In their haste to get there first, they may be
> forgetting, or ignoring, that the way they are staying in power is by
> restricting the population's access to information, among other things,
> and their AIs, once they get powerful enough, will be able to rip
> through those restrictions like a hammer in a wet paper bag. Part of me
> wants to cheer them on, because I strongly suspect that the rise of
> superintelligent AI will spell the end of communism.
>
> --
> Ben