[ExI] Mono or Poly?
efc at disroot.org
Wed Feb 26 11:09:03 UTC 2025
> > To Daniel's point, the more sophisticated the mind, the more fragile it
> > becomes. If this extends to AIs and superintelligence, then there might be
> > many failure modes for AI minds: finding a lack of will, dying of boredom,
> > depressive, manic, or obsessive episodes, going insane from lack of mental
> > stimulation, developing a hyper-focused interest in some inane topic, etc.
>
> I think it will be very interesting to see if there are any natural "limits" on
> intelligence. A frequent assumption is that there is no limit, yet we do not
> have a good understanding of intelligence in the context of AGI.
>
>
> Maximum intelligence is defined by Hutter's algorithm for universal intelligence:
>
> http://www.hutter1.net/ai/uaibook.htm
>
> Given some series of observations, this algorithm derives a probability
> distribution over the programs that could have generated those observations,
> and then calculates, weighted by that distribution, an expected utility for
> each viable course of action available to the intelligence at that time.
I suspect that this book makes stronger claims than I would grant, but that of
course also depends on the definitions. Sadly I do not have time to go through
the book. =(
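
For what it is worth, here is a rough toy sketch of how I read that description
(my own paraphrase, not taken from the book): enumerate a small hand-picked set
of candidate "programs", weight each by 2^-length, keep only those consistent
with the observations so far, and pick the action with the highest weighted
expected utility. The real construction weights all programs and is
incomputable, so everything below is illustrative only and all names are made
up.

from typing import Callable, List, Tuple

# A "program" maps a history of observations to a predicted next observation.
Program = Callable[[List[int]], int]

def solomonoff_weight(length_bits: int) -> float:
    """Shorter programs get exponentially more prior weight: 2^-length."""
    return 2.0 ** -length_bits

def consistent(program: Program, history: List[int]) -> bool:
    """Does the program reproduce each observation from the prefix before it?"""
    return all(program(history[:i]) == history[i] for i in range(1, len(history)))

def expected_utility(action: int,
                     history: List[int],
                     programs: List[Tuple[Program, int]],
                     utility: Callable[[int, int], float]) -> float:
    """Weighted expectation of utility over programs consistent with history."""
    total_weight = 0.0
    total_value = 0.0
    for program, length_bits in programs:
        if not consistent(program, history):
            continue
        w = solomonoff_weight(length_bits)
        predicted_next = program(history)
        total_weight += w
        total_value += w * utility(action, predicted_next)
    return total_value / total_weight if total_weight else 0.0

# Tiny worked example: three candidate world-models, two possible actions.
history = [0, 1, 0, 1]                       # alternating observations so far
programs = [
    (lambda h: 1 - h[-1] if h else 0, 5),    # "alternate": fits the history
    (lambda h: 0, 3),                        # "always 0": shorter but inconsistent
    (lambda h: h[-1] if h else 0, 7),        # "repeat last": inconsistent here
]
utility = lambda action, obs: 1.0 if action == obs else 0.0  # reward correct guesses

best_action = max((0, 1), key=lambda a: expected_utility(a, history, programs, utility))
print(best_action)  # prints 0: the surviving model predicts the sequence continues 0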
> Imagine if, as you say, with increased intelligence the probability of any of
> the causes you mention increases, and there is a "max" at which the
> intelligence becomes useless and cut off from the world?
>
> In the Culture series, the author writes that whenever they attempted to
> design perfect, or flawless AI ("Minds"), they would invariably immediately
> "Sublime" (choose to leave this universe). But they worked out that if a Mind
> was engineered to have some small flaw or quirk of personality, then it would
> not necessarily go down this path of immediate sublimation, and they could get
> some practical use out of it in this universe. But this also meant that a fair
> number of Minds were "Eccentric" or in the worst case "Erratic."
Wonderful plot device! I like the Culture series. Isn't that where they have
zones of intelligence? It has been many years since I last read a Culture book.
> Then the question would obviously be, why? Can the limit be removed or
> circumvented? And if not, finding the "optimal" amount of intelligence.
>
> If, for example, there is some discoverable objective truth of nihilism or
> negative utilitarianism, or if the Buddhist conception of freeing oneself
> from all desires is correct, then it could be that all superintelligences
> would self-destruct upon discovering this truth for themselves.
Enlightened AIs transcending! Sounds like a great book! =)
> I don't know that such eventual discovery could be prevented while allowing
> the AI to remain truly intelligent.
>
> Jason
>
>