[ExI] Mono or Poly?
efc at disroot.org
Tue Feb 25 12:23:26 UTC 2025
On Mon, 24 Feb 2025, Jason Resch via extropy-chat wrote:
> As to how to engineer a particular outcome, if no country has a dominant
> control over the computing resources necessary to train/run AI, then it is
> possible it will happen naturally. But if there is a sudden leap in
> capability, such that one AI system hacks all the computers and somehow shuts
> out humans from reclaiming them, then there might not be a choice in the
> matter. But I think in the long run, Darwinian forces are a force of nature
> that applies as much to AIs and robots as it does to humans. If the AI is
> recursively self-improving, then each improvement is like a next generation.
> If something goes wrong along the way (a bad mutation), that will end the line
> of that AI. So the most stable courses of self improvement will involve a
> population of AIs, such that in the course of developing future generations as
> they recursively self-improve, there is less of a chance of a fatal misstep
> that ends that particular line.
I think this is a very common methodology and pattern. Products in competitive
markets become better, and humans competing in the game of life converge on the
best adaptation to their environment (product improvements tend to become fewer
and smaller over time, and eventually more or less "settle" until a new demand
or use comes into play). AIs, working under constrained resources, against (or
with?) other AIs, would adapt and improve in the same way.
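
As a toy illustration of the population point (a minimal sketch with made-up
numbers, not a model of anything real): if each self-improvement generation
carries a probability p of a fatal misstep, a single line survives N
generations with probability (1-p)^N, while a population of K independent
lines only dies out if every line does.

    # Toy sketch (illustrative numbers only): robustness of a population of
    # independently self-improving AI lines vs. a single line, when each
    # generation carries a probability p of a fatal misstep.
    p = 0.02   # assumed per-generation probability of a fatal misstep
    N = 100    # number of self-improvement generations
    K = 20     # number of independent AI lines
    single_line = (1 - p) ** N
    population = 1 - (1 - single_line) ** K
    print(f"one line survives {N} generations: {single_line:.3f}")
    print(f"at least one of {K} lines survives: {population:.3f}")

With these made-up numbers a single line survives about 13% of the time, while
at least one of the 20 lines survives about 94% of the time, which is the sense
in which a population of lines is the more stable course.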
> To Daniel's point, the more sophisticated the mind, the more fragile it
> becomes. If this extends to AIs and superintelligence, then there might be
> many failure modes for AI minds: finding a lack of will, dying of boredom,
> depressive, manic, or obsessive episodes, going insane from lack of mental
> stimulation, developing a hyper focused interest on some inane topic, etc.
I think it will be very interesting to see if there are any natural "limits" on
intelligence. It is frequently assumed that there is no limit, yet we do not
have a good understanding of intelligence in the context of AGI.

Imagine if, as you say, the probability of each of the failure modes you
mention increases with intelligence, and there is a "max" beyond which the
intelligence becomes useless and cut off from the world.

Then the question would obviously be: why? Can the limit be removed or
circumvented? And if not, what is the "optimal" amount of intelligence?
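
As a back-of-the-envelope sketch of that last question (the functional forms
are pure assumptions, chosen only to make the point): if capability grows with
some intelligence parameter x while the chance of one of those failure modes
also grows with x, then the expected payoff capability(x) * P(no failure at x)
peaks at a finite x, which would be the "optimal" amount.

    # Toy sketch (assumed shapes, illustrative only): expected payoff of an
    # intelligence level x when capability rises with x but so does the
    # probability of a disabling failure mode.
    import math

    def capability(x):
        return x                      # assumed: capability grows linearly

    def p_failure(x, k=0.05):
        return 1 - math.exp(-k * x)   # assumed: failure risk rises with x

    best_x, best_value = max(
        ((x, capability(x) * (1 - p_failure(x))) for x in range(1, 201)),
        key=lambda pair: pair[1],
    )
    print(f"'optimal' x under these assumptions: {best_x}, "
          f"expected payoff: {best_value:.2f}")

Under those particular assumptions the peak lands at x = 20; the point is only
that a rising failure probability is enough to turn "more intelligence" into an
optimization problem rather than a monotone good.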
> Jason