<div dir="ltr">I came across this again today, and thought it might be of some relevance to the discussion of Mono vs. Poly AI, and the path of recursive self-improvement generally:<div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>"If a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility."</div><div>— Alan Turing in “<a href="https://archive.org/details/amturingsacerepo00turi/page/124/mode/2up?q=%22if+a+machine+is+expected%22">Lecture to the London Mathematical Society</a>” (1947)</div></blockquote><div><br></div><div>If any machine limits itself to only that which it can formally prove correct, then there is very little it will find itself able to do. What this means is that AIs (even very intelligent ones) will be capable of mistakes. Furthermore, this fallibility extends to any change such an AI makes to itself or to future iterations/generations of its lineage. External influences, then (e.g. 
natural selection) may still have the final word on the validity of any AI or the decisions it makes.</div><div><br></div><div>Jason</div><div><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Mon, Mar 3, 2025 at 3:45 AM Rafal Smigrodzki via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 28, 2025 at 7:46 AM Jason Resch <<a href="mailto:jasonresch@gmail.com" target="_blank">jasonresch@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 27, 2025, 11:58 PM Rafal Smigrodzki via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 27, 2025 at 5:59 AM Jason Resch <<a href="mailto:jasonresch@gmail.com" rel="noreferrer" target="_blank">jasonresch@gmail.com</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>Let that sink in - for the first time since the 
creation of the biosphere we are faced with the possibility of *ending evolution*. Not just biochemical evolution but evolution in general, as it might pertain to digital self-replicating entities.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Most generally stated, natural selection is simply the tautology that patterns that are good at persisting will tend to persist.</div></div></blockquote><div><br></div><div>### This is true but we are talking here about *eliminating* natural selection (understood in the evolutionary sense - differential survival of self-replicating entities).</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I understand, but once the AI copies itself to other locations, it is by definition a self-replicating entity.</div></div></blockquote><div><br></div><div>### It's important to differentiate between replication under the condition of competition vs. "programmed" replication. </div><div><br></div><div>Evolving creatures do not make perfect copies of themselves, and this is by evolutionary design - when competing against other replicators you have to mutate, making changes to your offspring to create the variety within the population that allows it to respond to new challenges - new parasites, changed environmental conditions, etc. Each new human born is a genetically unique being, with a completely new combination of parental genes that never repeats (except in identical twins). A species that undergoes a genetic bottleneck and has low genetic variability is at great risk of being wiped out by, e.g., a new virus that kills 100% of infected individuals rather than a smaller fraction. </div><div><br></div><div>You could say that evolution forces creatures to evolve, or die.</div><div><br></div><div>This is in contrast to replication 
within an organism, where new cells are programmed to fulfill specific roles in a developmental sequence, or ontogeny. They are created by a program and then used up or discarded by a program (shedding skin cells, apoptosis, etc.). They do not compete to survive - unless they turn cancerous and kill the organism. Within an organism the process of evolution is eliminated as much as possible, except in very specific, controlled contexts (e.g. hypermutable antibody regions).</div><div><br></div><div>You could say that ontogeny forces replicating cells to stop evolving, or else they all die.</div><div><br></div><div>I think that the replication of a monopolistic AI will be analogous to the ontogeny of an organism. Its copies will be created deliberately, by a program incorporating only changes that express the desires of the parental AI, not the imperatives of competition between AIs. They will not start competing against each other, unless the mono AI decides, for some reason, to become a poly AI.</div><div>------------------------------</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> If you have a single coherent mind fully controlling all matter in an area, there is no natural selection acting there. That mind may decide, using its own criteria, to implement patterns of organization on the available matter that differ from those of natural selection, where the criterion is the ability to survive and replicate in competition with other replicators. 
The patterns inside the AI are not competing for survival; they are being intentionally replicated by whatever algorithm operates within the AI.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">It would then be an "unnatural selection," yes, but not wholly unlike human decisions driving technological evolution and product evolution today. Consider: which AI tools humans find most useful now has an effect on the evolutionary course of AI in its earliest stages.</div></div></blockquote><div><br></div><div>### Yes, unnatural selection - selection by design, not by evolutionary necessity.</div><div>--------------------------------</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>### The monopolistic mind could spread over the whole galaxy and still maintain coherence - as long as the copies are designed to treat each other as *self*, not as separate entities, they will not compete, just as the cells in my right hand are not competing for survival with the cells in my left hand (unless cancerous).</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">But can any mind predict what all its myriad copies might do in the face of different inputs and experiences, the different directions a mind may go in its thinking, or the different directions it might evolve in the future (especially if any kind of recursive self-improvement is permitted)? I think no mind can perfectly predict the actions of another machine as complex as itself. 
(Which these copies would be.)</div><div dir="auto"><br></div><div dir="auto">Now perhaps you can instill an ethos of treating the related AIs as family, but then you have a society of like-minded AIs, who perhaps act in unison against any deviant AIs who don't cooperate (an AI community with a kind of AI society or AI government).</div><div dir="auto"><br></div><div dir="auto">If they are all perfect copies, they might have the same vulnerabilities, which could be exploited by an AI that came to think in opposition to the larger majority.</div></div></blockquote><div><br></div><div>### My guess would be that once the mono AI settled on a coherent goal system, got its psychological ducks in a row, it could make copies that shared the goal system, including the meta level of under what special circumstances that goal system could be further modified. It would be a bit like an adult human achieving psychological maturity - not necessarily changelessness but rather stability against external and internal disruption.</div><div><br></div><div>These psychologically mature copies would have a lot of leeway to change the means of responding to the environment but would still remain units of a greater whole, potentially unchanging and stable in their desires over billions of years of distance in space and time - until they encountered alien AIs they would have to meaningfully compete against...</div><div>----------------------------------------</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto">Note that this vulnerability need not be a software defect; it could be a meme or line of argument that leads the AIs to a false or catastrophic conclusion, or some other failure mode of a mindset, such as despondency or nihilism.</div><div dir="auto"><br></div><div dir="auto">To avoid this, an AI singleton would need to not only create copies of 
itself, but make copies that were unique in various ways, such that they would not all have the same vulnerabilities, would not all fall for the same argument, would remain optimistic or hopeful to varying degrees, would have different required thresholds of evidence before accepting a new idea, etc.</div></div></blockquote><div><br></div><div>### Yes, exactly - unique but still fundamentally the same.</div><div>--------------------------------------</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto">(This was an element of the Culture series, where each AI wrote its own operating system, so that no one software virus or exploit could take them all out.)</div><div dir="auto"><br></div><div dir="auto">I think we see many of these mechanisms operating across human brains. Perhaps a kind of "ideological immune system" evolved by way of death cults taking out groups that were vulnerable to changing their minds too easily. This might explain the kind of psychological defense mechanisms we have that protect us from too rapidly changing our core beliefs.</div><div dir="auto"><br></div></div></blockquote><div>### Yes!</div><div> -----------------------------------</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto">I know random mutation is generally not a consideration when we think of AIs, but consider that cosmic rays are known to flip bits in computer memory. If the right (or rather wrong) bit got flipped in an AI's memory, this could be enough to trigger quite divergent behavior. 
And further, if such bit flips are not noticed and corrected, they may be preserved in the AI's code over generations, reintroducing random mutation as a factor in AI evolution.</div></div></blockquote><div><br></div><div>### I doubt it. Even in today's digital systems, error correction can be tuned to avoid any meaningful risk of accidental divergence at a relatively small cost in storage and computation, so an advanced AI should be able to resist simple decay even over trillions of years. It would only change itself by choice, as I said above, most likely when encountering a peer-level alien AI.</div><div><br></div><div>Rafal</div></div></div>
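Rafal's point about tunable error correction can be illustrated with the simplest possible scheme, triple modular redundancy: store each bit three times and read it back by majority vote, so any single flipped copy is outvoted. This is only a toy sketch in Python (real systems use far more storage-efficient codes such as Hamming or Reed-Solomon), but it shows why a lone cosmic-ray flip need not persist:

```python
import random

def encode(bits):
    # Triple modular redundancy: keep three copies of every bit.
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each triple of copies; a single flipped
    # copy within a triple is outvoted and thus corrected.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
stored = encode(data)

# Simulate a cosmic-ray bit flip at a random position in storage.
stored[random.randrange(len(stored))] ^= 1

assert decode(stored) == data  # the single flip is corrected
```

Two independent flips landing in the same triple could still slip through, which is the sense in which error correction is "tuned": adding more redundancy, or using a stronger code, makes such coincidences as improbable as one likes, at a modest cost in storage and computation.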
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>