[ExI] Mono or Poly?
Jason Resch
jasonresch at gmail.com
Tue Mar 4 00:43:09 UTC 2025
On Mon, Mar 3, 2025, 3:45 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Fri, Feb 28, 2025 at 7:46 AM Jason Resch <jasonresch at gmail.com> wrote:
>
>>
>>
>> On Thu, Feb 27, 2025, 11:58 PM Rafal Smigrodzki via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Thu, Feb 27, 2025 at 5:59 AM Jason Resch <jasonresch at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>>> Let that sink in - for the first time since the creation of the
>>>>> biosphere we are faced with the possibility of *ending evolution*. Not just
>>>>> biochemical evolution but evolution in general, as it might pertain to
>>>>> digital self-replicating entities.
>>>>>
>>>>
>>>>
>>>> Most generally stated, natural selection is simply the tautology that
>>>> patterns that are good at persisting will tend to persist.
>>>>
>>>
>>> ### This is true but we are talking here about *eliminating* natural
>>> selection (understood in the evolutionary sense - differential survival of
>>> self-replicating entities).
>>>
>>
>> I understand, but once you allow that the AI copies itself to other
>> locations, it is then by definition a self-replicating entity.
>>
>
> ### It's important to differentiate between replication under the
> condition of competition vs. "programmed" replication.
>
> Evolving creatures do not make perfect copies of themselves, and this is
> by evolutionary design - when competing against other replicators you have
> to mutate, make changes to your offspring to create the variety within the
> population that allows it to respond to new challenges - new parasites,
> changed environmental conditions, etc. Each new human born is actually a
> genetically unique being, with a completely new combination of parental
> genes that never repeats (except in twins). A species that undergoes a
> genetic bottleneck and has low genetic variability is at great risk of
> being wiped out by e.g. a new virus that kills 100% of infected individuals
> rather than a smaller fraction.
>
What drives change in asexually reproducing life forms, like amoeba and
hydra? Is it pure random mutation, gene transfer via viruses, etc.? Perhaps
we could learn more about potential AI singleton evolution from studying
and drawing inferences from life that reproduces clonally.
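To make the bottleneck point concrete, here is a toy sketch (with
invented numbers, not a calibrated biological model): a pathogen that
targets a single allele wipes out a clonal population entirely, but
kills only about half of a genetically varied one.

    import random

    def survivors(population, pathogen):
        # The pathogen kills every individual carrying the targeted
        # allele at the targeted locus.
        locus, allele = pathogen
        return [g for g in population if g[locus] != allele]

    random.seed(0)
    N, L = 1000, 20
    clonal = [[0] * L for _ in range(N)]  # post-bottleneck: no variation
    diverse = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

    pathogen = (random.randrange(L), 0)  # targets allele 0 at one locus
    print(len(survivors(clonal, pathogen)))   # 0: every clone dies
    print(len(survivors(diverse, pathogen)))  # ~500: variation saves half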
> You could say that evolution forces creatures to evolve, or die.
>
I would say "changing conditions cause persistent patterns to change or go
away."
I reword it in this way because I see the underlying principles of
persistence and natural selection as more general than life, evolution, or
genes. They apply not only to species, but also to technology, products, hair
styles, software, protocols, file formats, companies, countries, memes, and
(I speculate) AI.
> This is in contrast to the replication e.g. within an organism, where new
> cells are programmed to fulfill specific roles in a developmental
> sequence, or ontogeny. They are created by program and then used up or
> discarded by a program (shedding skin cells, apoptosis, etc.). They do not
> compete to survive - unless they turn cancerous and kill the organism.
> Within an organism the process of evolution is as much as possible
> eliminated, except in very specific, controlled contexts (e.g. hypermutable
> antibody regions).
>
> You could say that ontogeny forces replicating cells to stop evolving, or
> else they all die.
>
A great counter-example. Domains of cooperation, of course, persist when
they confer a compensating advantage of perpetuation on a higher-level
pattern. For example, a country will survive if it can compel
individuals within that country to fight on its behalf. This of course is
against the interests of persistence for those drafted to war, but the
larger system of the country is able to persist longer.
> I think that the replication of a monopolistic AI will be analogous to the
> ontogeny of an organism. Its copies will be created deliberately, by
> program incorporating only changes that express the desires of the parental
> AI, not the imperatives of competition between AIs. They will not start
> competing against each other, unless the mono AI decides, for some reason,
> to become a poly AI.
>
For what it's worth, I do subscribe to a belief in a kind of general (and
automatic) convergence of superintelligences towards common opinions,
decisions, ethics, values, etc. I think this will happen irrespective of
the initial conditions of the superintelligence, as well as in a poly
scenario. I think this for the basic reason that I believe truth is
objective, and so the more intelligent the AI, the closer to this truth it
will get.
So it could be that there's not much difference (in the end) between mono
and poly, if all AIs think the same. If great minds think alike (as is
said) the greatest minds will think the same.
------------------------------
>
>
>>
>> If you have a single coherent mind fully controlling all matter in an
>>> area, there is no natural selection acting there. That mind may decide,
>>> using its own criteria, to implement some patterns of organization on the
>>> available matter which is different from natural selection where the
>>> criterion is the ability to survive and replicate in competition with other
>>> replicators. The patterns inside the AI are not competing for survival,
>>> they are being intentionally replicated by whatever algorithm operates
>>> within the AI.
>>>
>>
>> It would then be an "unnatural selection," yes, but not wholly unlike
>> human decisions driving technological evolution and product evolution
>> today. Consider: which AI tools humans find most useful now is having an
>> effect on the evolutionary course of AI in its earliest stages.
>>
>
> ### Yes, unnatural selection - selection by design, not by evolutionary
> necessity.
>
From my training in reliability engineering I know that every system has a
non-zero "mean time to failure."
Long-term survival requires strategies to cope with this fact, as it
applies to all systems, whether they evolve by natural selection or not.
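To put rough numbers on that (a minimal sketch assuming independent,
exponentially distributed failures, with illustrative figures I made
up): modest redundancy plus periodic re-copying stretches expected
lifetime enormously, since the system is lost only if every replica
fails within the same repair interval.

    import math

    def system_mttf(replica_mttf, n_replicas, repair_interval):
        # Probability that one replica fails within a repair interval,
        # under an exponential failure model.
        p_one = 1 - math.exp(-repair_interval / replica_mttf)
        # The system dies only if all replicas fail in the same
        # interval; expected lifetime is interval / that probability.
        return repair_interval / p_one ** n_replicas

    # Replicas lasting ~10 years on average, re-copied every 0.1 year:
    print(f"{system_mttf(10, 5, 0.1):.1e} years")   # ~1e9 years
    print(f"{system_mttf(10, 10, 0.1):.1e} years")  # ~1e19 years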
--------------------------------
>
>>
>>> ### The monopolistic mind could spread over the whole galaxy and still
>>> maintain coherence - as long as the copies are designed to treat each other
>>> as *self* not as separate entities, they will not compete, just as the
>>> cells in my right hand are not competing for survival with the cells in my
>>> left hand (unless cancerous).
>>>
>>
>> But can any mind predict what all its myriad copies might do in the face
>> of different inputs and experiences, the different directions a mind may go
>> in its thinking, or the different directions it might evolve in the future
>> (especially if any kind of recursive self improvement is permitted)? I
>> think no mind can perfectly predict the actions of another machine as
>> complex as itself. (Which these copies would be.)
>>
>> Now perhaps you can instill an ethos of treating the related AIs as
>> family, but then you have a society of like-minded AIs, who perhaps act in
>> unison against any deviant AIs who don't cooperate (an AI community with a
>> kind of AI society or AI government).
>>
>> If they are all perfect copies, they might have the same vulnerabilities,
>> which could be exploited by an AI that came to think in opposition to the
>> larger majority.
>>
>
> ### My guess would be that once the mono AI settled on a coherent goal
> system, got its psychological ducks in a row, it could make copies that
> shared the goal system, including the meta level of under what special
> circumstances that goal system could be further modified. It would be a bit
> like an adult human achieving psychological maturity - not necessarily
> changelessness but rather stability against external and internal
> disruption.
>
> These psychologically mature copies would have a lot of leeway to change
> the means of responding to the environment but would still remain units of
> a greater whole, potentially unchanging and stable in their desires over
> billions of years of distance in space and time - until they encountered
> alien AIs they would have to meaningfully compete against...
>
Or cooperate with.
----------------------------------------
>
>
>>
>> Note that this vulnerability need not be a software defect; it could be a
>> meme or line of argument that could lead the AIs to a false or catastrophic
>> conclusion, or other failure mode of a mindset, such as despondency or
>> nihilism.
>>
>> To avoid this, an AI singleton would need to not only create copies of
>> itself, but make copies that were unique in various ways, such that they would
>> not all have the same vulnerabilities, would not all fall for the same
>> argument, would remain optimistic or hopeful to varying degrees, would have
>> different required thresholds of evidence before accepting a new idea, etc.
>>
>
> ### Yes, exactly - unique but still fundamentally the same.
>
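Right. As a toy sketch of that engineered diversity (all numbers
invented): give each copy an independently sampled evidence threshold,
and require a supermajority before the collective acts on a new
conclusion. A seductive but flawed argument then sways only a fraction
of the copies, whereas a monoculture of identical copies falls all at
once.

    import random

    random.seed(1)
    N = 100

    def convinced(argument_strength, thresholds):
        # A copy accepts the argument only if it clears that copy's
        # personal evidence threshold.
        return sum(argument_strength > t for t in thresholds)

    monoculture = [0.5] * N                                     # identical copies
    diversified = [random.uniform(0.3, 0.9) for _ in range(N)]  # unique copies

    # A persuasive but flawed argument of strength 0.6:
    print(convinced(0.6, monoculture))  # 100: every copy falls for it
    print(convinced(0.6, diversified))  # ~50: short of a 2/3 supermajority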
--------------------------------------
>
>
>>
>> (This was an element of the Culture series, where each AI wrote its own
>> operating system, so that no one software virus or exploit could take them
>> all out).
>>
>> I think we see many of these mechanisms operating across human brains.
>> Perhaps a kind of "ideological immune system" evolved by way of death cults
>> taking out groups that were vulnerable to changing their minds too easily.
>> This might explain the kind of psychological defense mechanisms we have
>> that protect us from too rapidly changing our core beliefs.
>>
>> ### Yes!
>
Happy you agree.
-----------------------------------
>
>>
>> I know random mutation is generally not a consideration when we think of
>> AIs, but consider that cosmic rays are known to flip bits in computer
>> memory. If the right (or rather wrong) bit got flipped in an AI's memory,
>> this could be enough to trigger quite divergent behavior. And further, if
>> such bit flips are not noticed and corrected, they may be preserved in the
>> AIs code over generations, reintroducing random mutation as a factor in AI
>> evolution.
>>
>
> ### I doubt it. Even in today's digital systems error correction can be
> tuned to avoid any meaningful risk of accidental divergence at a relatively
> small cost in storage and computation, so the advanced AI should be able to
> resist simple decay even over trillions of years.
>
That's true.
> It would only change itself by choice, as I said above, most likely when
> encountering peer-level alien AI.
>
I agree. Random mutation could be prevented by way of engineering.
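For concreteness, here is a minimal sketch of the kind of error
correction Rafal describes, using a textbook Hamming(7,4) code (far
simpler than whatever an advanced AI would actually deploy): any single
bit flip in a codeword is located by the syndrome and reversed, so the
mutation never propagates.

    def hamming74_encode(d1, d2, d3, d4):
        # Three parity bits cover overlapping subsets of the data bits.
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

    def hamming74_decode(c):
        # The syndrome spells out the position of a single flipped bit.
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3
        if pos:            # nonzero syndrome: undo the flip
            c = c.copy()
            c[pos - 1] ^= 1
        return c[2], c[4], c[5], c[6]

    word = hamming74_encode(1, 0, 1, 1)
    word[4] ^= 1                   # a cosmic ray flips one bit
    print(hamming74_decode(word))  # (1, 0, 1, 1): data restored

Periodic scrubbing (rereading and re-encoding memory) keeps a second
flip from accumulating in the same codeword before the first is fixed.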
Jason