[ExI] Mono or Poly?
Jason Resch
jasonresch at gmail.com
Wed Feb 26 13:58:38 UTC 2025
On Wed, Feb 26, 2025 at 6:10 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> > > To Daniel's point, the more sophisticated the mind, the more fragile
> > > it becomes. If this extends to AIs and superintelligence, then there
> > > might be many failure modes for AI minds: finding a lack of will,
> > > dying of boredom, depressive, manic, or obsessive episodes, going
> > > insane from lack of mental stimulation, developing a hyper-focused
> > > interest on some inane topic, etc.
> >
> > I think it will be very interesting to see if there are any natural
> > "limits" on intelligence. It is frequently assumed that there is no
> > limit, yet we do not have a good understanding of intelligence in the
> > context of AGI.
> >
> >
> > Maximum intelligence is defined by Hutter's algorithm for universal
> > intelligence:
> >
> > http://www.hutter1.net/ai/uaibook.htm
> >
> > Given some series of observations, this algorithm derives the most
> > probable distribution of programs that could generate those
> > observations, and then, based on each, calculates a weighted
> > expectation of utility for each viable course of action the
> > intelligence is capable of taking at that time.
>
> I suspect that this book makes stronger claims than I would grant, but it
> also of course depends on the definitions. Sadly I do not have time to go
> through the book. =(
>
I haven't read the book, but just that webpage provides enough information
to grasp the concept of AIXI, and why no intelligence could make better
decisions than it.
I just realized, however, that there are implicit assumptions built into
this model of intelligence: namely, that one exists in a reality of
comprehensible (computable) laws, that such laws extend into the future,
and that simpler laws are preferred to more complex ones (all else being
equal). Note also that these are the same sorts of ideas Einstein spoke of
as the necessary "faith" of a scientist.
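
To make that concrete, here is a toy Python sketch of the weighting step.
It is not real AIXI (which is incomputable and plans over an entire
horizon); the two hand-written "environments", their bit lengths, and the
single step of lookahead below are simplifications of my own, purely for
illustration:

# Toy sketch of AIXI-style decision making (not the real, incomputable
# AIXI). Real AIXI weights every program q on a universal Turing machine
# by 2^-length(q) and plans over a whole horizon; here we assume a tiny
# hand-written hypothesis set and one step of lookahead so the loops
# terminate.

def env_simple(action):
    # Hypothetical world: observation is always 0; action 1 pays off.
    return 0, (1.0 if action == 1 else 0.0)

def env_complex(action):
    # Hypothetical world: observation is always 0; action 0 pays off.
    return 0, (1.0 if action == 0 else 0.0)

# Each hypothesis is (description length in bits, environment function),
# standing in for a program q with prior weight 2^-length(q).
HYPOTHESES = [(3, env_simple), (5, env_complex)]
ACTIONS = (0, 1)

def consistent(env, history):
    """Keep only hypotheses that reproduce the observations seen so far."""
    return all(env(a)[0] == o for a, o in history)

def choose_action(history):
    """Pick the action maximizing the 2^-length-weighted expected reward."""
    surviving = [(2.0 ** -length, env)
                 for length, env in HYPOTHESES if consistent(env, history)]
    total = sum(w for w, _ in surviving)
    return max(ACTIONS,
               key=lambda a: sum(w * env(a)[1] for w, env in surviving) / total)

print(choose_action(history=[]))  # the simpler hypothesis dominates: prints 1

With no observations yet, both "programs" survive, but the shorter one
dominates the weighted expectation, so the sketch picks action 1. That is
the simplicity preference (the Occam-style assumption mentioned above)
doing the work.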
>
> > Imagine if, as you say, with increased intelligence the probability of
> > any of the causes you mention increases, and there is a "max" at which
> > the intelligence becomes useless and cut off from the world?
> >
> > In the Culture series, the author writes that whenever they attempted
> > to design perfect or flawless AIs ("Minds"), they would invariably and
> > immediately "Sublime" (choose to leave this universe). But they worked
> > out that if a Mind was engineered to have some small flaw or quirk of
> > personality, then it would not necessarily go down this path of
> > immediate sublimation, and they could get some practical use out of it
> > in this universe. But this also meant that a fair number of Minds were
> > "Eccentric" or, in the worst case, "Erratic."
>
> Wonderful plot device! I like the Culture series. Isn't that where they
> have zones of intelligence? It has been many years since I last read a
> Culture book.
>
Hmm. I don't recall the zones of intelligence in that series...
>
> > Then the question would obviously be: why? Can the limit be removed or
> > circumvented? And if not, it becomes a matter of finding the "optimal"
> > amount of intelligence.
> >
> > If, for example, there is some discoverable objective truth of nihilism
> > or negative utilitarianism, or if the Buddhist conception of freeing
> > oneself from all desires is correct, then it could be that all
> > superintelligences would self-destruct upon discovering this truth for
> > themselves.
>
> Enlightened AIs transcending! Sounds like a great book! =)
>
It does! Also, the phrase "Enlightened AIs" brought to mind a conversation
I had with a seemingly enlightened AI:
https://photos.app.goo.gl/osskvbe4fYpbK5uZ9
Jason