[ExI] Mono or Poly?
Jason Resch
jasonresch at gmail.com
Tue Feb 25 20:47:45 UTC 2025
On Tue, Feb 25, 2025 at 7:24 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Mon, 24 Feb 2025, Jason Resch via extropy-chat wrote:
>
> > As to how to engineer a particular outcome, if no country has a dominant
> > control over the computing resources necessary to train/run AI, then it
> > is possible it will happen naturally. But if there is a sudden leap in
> > capability, such that one AI system hacks all the computers and somehow
> > shuts out humans from reclaiming them, then there might not be a choice
> > in the matter. But I think in the long run, Darwinian forces are a force
> > of nature that applies as much to AIs and robots as it does to humans.
> > If the AI is recursively self-improving, then each improvement is like a
> > next generation. If something goes wrong along the way (a bad mutation),
> > that will end the line of that AI. So the most stable courses of
> > self-improvement will involve a population of AIs, such that in the
> > course of developing future generations as they recursively
> > self-improve, there is less of a chance of a fatal misstep that ends
> > that particular line.
>
> I think this is a very common methodology and pattern. Products in
> competitive markets become better, and humans competing in the game of
> life converge on the best adaptation to their environment (product
> improvements tend to become fewer and smaller over time, and eventually
> more or less "settle" until a new demand or use comes into play). AIs,
> working under constrained resources, against (or with?) other AIs,
> would adapt and improve.
>
> > To Daniel's point, the more sophisticated the mind, the more fragile it
> > becomes. If this extends to AIs and superintelligence, then there might
> > be many failure modes for AI minds: finding a lack of will, dying of
> > boredom, depressive, manic, or obsessive episodes, going insane from
> > lack of mental stimulation, developing a hyper focused interest on some
> > inane topic, etc.
>
> I think it will be very interesting to see if there are any natural
> "limits" on intelligence. It is frequently assumed that there is no
> limit, yet we do not have a good understanding of intelligence in the
> context of AGI.
>
Maximum intelligence is defined by Hutter's algorithm for universal
intelligence:
http://www.hutter1.net/ai/uaibook.htm
Given some series of observations, this algorithm derives a probability
distribution over the programs that could have generated those
observations, and then, weighting by that distribution, calculates an
expected utility for each viable course of action the intelligence is
capable of taking at that time.
In short, intelligence (when operating within and interacting with some
environment) is simply a matter of pattern recognition (to adequately
model that environment) and extrapolation (to compute what will happen
given any particular action the intelligence is capable of effecting).
Note, however, that actions must also be prioritized according to some
utility function, which requires some definition of utility (a goal).
Without any goals, it is impossible to act intelligently.
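To make the weighting concrete, here is a minimal toy sketch of that
calculation (my own illustration, not Hutter's actual construction,
which ranges over all computable programs and is incomputable): a few
hypothetical candidate "programs" are filtered for consistency with the
observed history, weighted by an Occam prior of 2^-length, and the
action with the highest weighted expected utility is chosen.

    import math

    # Toy stand-in for Solomonoff induction: a few hypothetical candidate
    # "programs" (simple predictors), each with a description length in
    # bits. AIXI ranges over all computable programs and is incomputable;
    # this only illustrates the weighting scheme.
    CANDIDATES = [
        {"name": "always-0", "bits": 3,
         "predict": lambda history, action: 0},
        {"name": "always-1", "bits": 3,
         "predict": lambda history, action: 1},
        {"name": "echo-action", "bits": 5,
         "predict": lambda history, action: action},
    ]

    def consistent(program, history):
        # Keep only programs that reproduce the observed history.
        return all(program["predict"](history[:i], a) == o
                   for i, (a, o) in enumerate(history))

    def choose_action(history, actions, utility):
        viable = [p for p in CANDIDATES if consistent(p, history)]
        # Occam prior: weight 2^-K for a description length of K bits.
        weights = [2.0 ** -p["bits"] for p in viable]
        total = sum(weights)
        best, best_value = None, -math.inf
        for action in actions:
            # Expected utility of the action under the posterior.
            value = sum((w / total) * utility(p["predict"](history, action))
                        for p, w in zip(viable, weights))
            if value > best_value:
                best, best_value = action, value
        return best

    # History of (action, observation) pairs; utility prefers observing 1.
    history = [(0, 0), (1, 1)]
    print(choose_action(history, [0, 1], utility=lambda obs: float(obs)))

Everything in the sketch (the candidate list, the utility function, the
history format) is invented for illustration; the point is only the
shape of the calculation: a posterior over world-models, then an
expected utility for each available action.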
As to how close to this perfect intelligence we can physically get,
there seem to be diminishing returns once a certain amount of
computation has been thrown at the problem. There is a limit to how
complex the environment is, as well as a limit to how accurately it can
be measured (e.g., Heisenberg uncertainty). So in practice this may
limit how far into the future any intelligence can make reliable
predictions, no matter how great its computational resources are.
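A toy illustration of that last point (classical chaos rather than
quantum uncertainty, with made-up numbers): in the logistic map, an
initial measurement error of one part in ten billion swamps the
prediction within a few dozen steps, no matter how exactly the dynamics
are computed.

    # Chaotic toy system: a tiny error in the measured initial state grows
    # exponentially, so reliable prediction has a finite horizon regardless
    # of how much computation is spent iterating the exact equations.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    true_state = 0.4
    believed_state = 0.4 + 1e-10  # measurement error
    for step in range(1, 101):
        true_state = logistic(true_state)
        believed_state = logistic(believed_state)
        if abs(true_state - believed_state) > 0.1:
            print(f"Predictions useless after {step} steps")
            break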
>
> Imagine if, as you say, with increased intelligence the probability of
> any of the causes you mention increases, and there is a "max" at which
> the intelligence becomes useless and cut off from the world?
>
In the Culture series <https://en.wikipedia.org/wiki/Culture_series>, the
author writes that whenever the Culture attempted to design a perfect, or
flawless, AI (a "Mind <https://theculture.fandom.com/wiki/Mind>"), it
would invariably and immediately "Sublime
<https://theculture.fandom.com/wiki/The_Sublimed>" (choose to leave this
universe). But they worked out that if a Mind was engineered to have some
small flaw or quirk of personality, then it would not necessarily go down
this path of immediate sublimation, and they could get some practical use
out of it in this universe. This also meant, however, that a fair number
of Minds were "Eccentric <https://theculture.fandom.com/wiki/Eccentric>"
or, in the worst case, "Erratic."
>
> Then the question would obviously be, why? Can the limit be removed or
> circumvented? And if not, finding the "optimal" amount of intelligence.
>
If, for example, there is some discoverable objective truth of nihilism
or of negative utilitarianism, or if the Buddhist conception of freeing
oneself from all desires turns out to be correct, then it could be that
all superintelligences would self-destruct upon discovering this truth
for themselves.
I don't know that such eventual discovery could be prevented while allowing
the AI to remain truly intelligent.
Jason