[ExI] Mono or Poly?

efc at disroot.org efc at disroot.org
Thu Feb 27 19:00:23 UTC 2025



On Wed, 26 Feb 2025, Jason Resch via extropy-chat wrote:

>       > Given some series of observations, this algorithm derives the most probable
>       > distribution of programs that could generate those observations, and then
>       > based on each, calculates a weighted expectation of utility for each viable
>       > course of action the intelligence is capable of taking at that time.
>
>       I suspect that this book makes stronger claims than I would grant, but of
>       course it also depends on the definitions. Sadly I do not have time to go
>       through the book. =(
> 
> I haven't read the book, but just that webpage provides enough information to
> grasp the concept of AIXI, and why no intelligence could make better decisions
> than it.
> 
> I just realized, however, that there are implicit assumptions built into this
> model of intelligence, namely that one exists in a reality of comprehensible
> (computable) laws, that such laws extend into the future, that simpler laws
> are preferred to more complex ones (when all else is equal). Note also that
> these are the same sorts of ideas Einstein spoke of as the necessary "faith"
> of a scientist.

I am not surprised that there would be implicit assumptions, as well as a clear
definition of what is meant by intelligence, in order to set the stage for such
things.
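
For concreteness, here is how I read the decision rule described above, as a toy
Python sketch. The real AIXI is incomputable, so this only illustrates the
structure; the finite program list and all of the names below are made up for
illustration, not taken from the book.

def aixi_like_action(history, candidate_programs, actions):
    """Pick the action with the highest prior-weighted expected reward.

    history            -- list of (action, observation, reward) tuples seen so far
    candidate_programs -- list of (program, length) pairs; a program is a
                          callable mapping an action sequence to a list of
                          (observation, reward) pairs of the same length
    actions            -- the finite set of actions available right now
    """
    past_actions = [a for (a, _, _) in history]
    past_percepts = [(o, r) for (_, o, r) in history]

    # 1. Keep only programs consistent with the observed history, each
    #    weighted by a Solomonoff-style prior of 2^(-program length).
    consistent = [(p, 2.0 ** -length)
                  for (p, length) in candidate_programs
                  if p(past_actions) == past_percepts]

    # 2. For each candidate action, compute the weighted expected reward
    #    of the predicted next step under the surviving programs.
    def expected_reward(action):
        total = 0.0
        for program, weight in consistent:
            percepts = program(past_actions + [action])
            _, reward = percepts[len(history)]   # predicted next percept
            total += weight * reward
        return total

    # 3. Return the action that maximizes that expectation.
    return max(actions, key=expected_reward)

The actual formulation sums over all programs for a universal Turing machine and
maximizes over whole future action sequences up to a horizon, which is exactly
what makes it incomputable rather than merely expensive.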

>       Wonderful plot device! I like the Culture series. Isn't that where they have
>       zones of intelligence? It has been many years since I last read a Culture book.
> 
> Hmm. I don't recall the zones of intelligence in that series...

No, my mistake: it was Vernor Vinge's Zones of Thought.

>       > If, for example, there is some discoverable objective truth of nihilism or
>       > negative utilitarianism, or if the Buddhist conception of freeing oneself
>       > from all desires is correct, then it could be that all superintelligences
>       > would self-destruct upon discovering this truth for themselves.
>
>       Enlightened AIs transcending! Sounds like a great book! =)
> 
> It does! Also, the phrase "Enlightened AIs" brought to mind a conversation I had with a seemingly enlightened AI:
> https://photos.app.goo.gl/osskvbe4fYpbK5uZ9

I'm surprised that he still responds, instead of living in bliss in paradise! ;)

Best regards, 
Daniel
