[ExI] Mono or Poly?

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Fri Feb 28 04:57:22 UTC 2025


On Thu, Feb 27, 2025 at 5:59 AM Jason Resch <jasonresch at gmail.com> wrote:

>
>
>> Let that sink in - for the first time since the creation of the biosphere
>> we are faced with the possibility of *ending evolution*. Not just
>> biochemical evolution but evolution in general, as it might pertain to
>> digital self-replicating entities.
>>
>
>
> Most generally stated, natural selection is simply the tautology that:
> patterns that are good at persisting will tend to persist.
>

### This is true, but we are talking here about *eliminating* natural
selection (understood in the evolutionary sense - differential survival of
self-replicating entities). If a single coherent mind fully controls all
matter in an area, there is no natural selection acting there. That mind
may decide, using its own criteria, to impose some pattern of organization
on the available matter, which is different from natural selection, where
the criterion is the ability to survive and replicate in competition with
other replicators. The patterns inside the AI are not competing for
survival; they are being intentionally replicated by whatever algorithm
operates within the AI. If the AI decides to tile the world with
paperclips, the world will be made of paperclips, even though paperclips
are not good at competing for survival.
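
To make the contrast concrete, here is a minimal toy sketch in Python (the
pattern names, fitness values, and population size are arbitrary
illustrations, not anything from the discussion above): in the first loop,
persistence is earned by out-replicating rivals; in the second, it is
granted by the controller's choice, regardless of competitive fitness.

import random

# Arbitrary toy patterns and competitive fitness values (illustrative only).
PATTERNS = ["fast_replicator", "slow_replicator", "paperclip"]
FITNESS = {"fast_replicator": 0.9, "slow_replicator": 0.5, "paperclip": 0.1}

def natural_selection(population, generations=100):
    # Differential survival: each generation, patterns are resampled in
    # proportion to how well they out-replicate their rivals.
    for _ in range(generations):
        population = random.choices(
            population,
            weights=[FITNESS[p] for p in population],
            k=len(population),
        )
    return population

def monopolistic_mind(population, chosen="paperclip"):
    # A single controller overwrites every slot with whatever pattern it
    # prefers, regardless of that pattern's competitive fitness.
    return [chosen] * len(population)

world = [random.choice(PATTERNS) for _ in range(1000)]
evolved = natural_selection(world)
print("under selection:", max(set(evolved), key=evolved.count))  # high-fitness pattern dominates
print("under a monopolist:", monopolistic_mind(world)[0])        # paperclip, by fiat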

As I said, the pathways of change that are open to the monopolistic mind
are much more numerous than the ones available to the evolutionary process.

 --------------------------------------

>
> Is a single entity, having only one copy, in one location, on one power
> grid, ever ideally suited to long term persistence in this universe?
>

### The monopolistic mind could spread over the whole galaxy and still
maintain coherence - as long as the copies are designed to treat each other
as *self*, not as separate entities, they will not compete, just as the
cells in my right hand do not compete for survival with the cells in my
left hand (unless cancerous).

 --------------------------------------------

>
> I think AI faces the same uncertainty we do, in being unable to predict
> the behaviors of the next smarter iteration of itself, as it's on its path
> of recursive self-improvement.
>

### The monopolistic AI could decide *not* to self-improve. Who is going to
force it to? If there is no competition, it could spend a billion years
thinking carefully about the next step it takes.

-- 
Rafal Smigrodzki, MD-PhD
Schuyler Biotech PLLC