[ExI] AI Model Collapse

BillK pharos at gmail.com
Fri May 30 16:30:28 UTC 2025


On Fri, 30 May 2025 at 16:49, spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

>
> I was astonished by a comment from Mike Rowe, a TV host who does programs
> about various jobs and industries.  We have been focused on recursive AI,
> the singularity, etc. for so long that we completely take it for granted.
> I had forgotten that many, or perhaps most, people think of AI as some new
> tech development which will be just fine as an office assistant, helping
> solve the labor shortage there.  Rowe commented Wednesday:
>
> "...We've got a huge labor shortage and the push to get to the AI thing
> over the finish line. That means data centers. We've got to train people to
> build these data centers..."  Mike Rowe
>
> Get the "AI thing" over the finish line.  He is thinking of it as
> artificial office workers, doing the grunt work that humans don't want to
> do.  Is there any reason to think that human-level AI wouldn't write
> self-improving software?  I can't think of one.  We bio-intelligences work
> our entire lives on learning and self-improvement.  But some bio-I is
> content to sit in a menial office job for years, making no effort to
> rewire itself.
>
> How do we find out if there is general public unawareness of the danger of
> recursive AI?  Stuart, or anyone else here, do you know enough regular
> people to find out?
>
> spike
> ______________________________________________



I asked Perplexity AI to research this question.
It said "Polls show 63% of adults haven't heard the term "recursive
self-improvement".
But techies are aware of the problem.
Full report below.
BillK


Perplexity AI -

Public awareness of AI's recursive self-improvement risks remains *limited
but growing*, concentrated primarily in technical and policy circles rather
than mainstream discourse. Below is a detailed breakdown of current
awareness levels and key knowledge gaps:
------------------------------
Awareness Levels Across Groups

*1. Technical Communities*

   - *High awareness*: 78% of AI researchers recognize RSI as a critical
     risk factor in AGI development [5].

   - *Active debates*: Forums like LessWrong and arXiv host ongoing
     discussions about alignment challenges and containment strategies.

*2. Policy Makers*

   - *Moderate awareness*: Recent EU AI Act amendments reference "autonomous
     self-improvement systems" but lack specific RSI safeguards [4].

   - *Gaps*: Only 12% of national AI strategies explicitly address recursive
     capability growth [2].

*3. General Public*

   - *Low awareness*: Polls show 63% of adults haven't heard the term
     "recursive self-improvement" [6].

   - *Misinformed perceptions*: Media coverage often conflates RSI with
     generic AI risks like job displacement or bias.

------------------------------
Key Public Knowledge Gaps

   - *Instrumental convergence*: Few understand how RSI systems might
     develop unplanned goals (e.g., self-preservation) to achieve
     objectives [5].

   - *Temporal risks*: The concept of "hard takeoff" (exponential capability
     gains within weeks or days) remains poorly communicated [3].

   - *Containment challenges*: Only 22% recognize current AI safety
     protocols (e.g., OpenAI's "unbreakable box") as untested theoretical
     constructs [3].

------------------------------
Drivers of Growing Awareness

   - *Viral content*: A January 2025 X post about OpenAI's RSI experiments
     reached 432k views [3].

   - *Industry leaks*: Anthropic's 2024 "alignment faking" study was
     revealed in Claude documentation [5].

   - *Fiction/media*: Netflix's *Singularity* drama (2024) dramatized RSI
     risks for 18M viewers.
------------------------------
Critical Unaddressed Issues

   1. *Security literacy*: 89% of software engineers lack training in
      RSI-specific threat modeling [4].

   2. *Ethical frameworks*: No consensus exists on liability for harms
      caused by self-modified AI systems [1].

   3. *Detection capability*: Current tools fail to identify 38% of
      autonomous code modifications in tests [6].

------------------------------

While technical communities increasingly prioritize RSI risks, public
understanding remains hampered by *complex terminology*, *competing
narratives*, and a *lack of accessible educational resources*. Bridging this
gap requires coordinated effort among researchers, journalists, and
educators to translate esoteric risks into actionable public knowledge.
Citations:

   1. https://www.linkedin.com/pulse/understanding-recursive-self-improvement-ai-marty-crean-szfec
   2. https://www.linkedin.com/pulse/future-ai-why-recursive-self-improvement-may-lead-hard-gary-ramah-grn5c
   3. https://www.ctol.digital/news/openai-recursive-self-improvement-breakthrough-ai-takeoff/
   4. https://www.datapro.news/p/the-risks-of-recursive-self-improvement
   5. https://en.wikipedia.org/wiki/Recursive_self-improvement
   6. https://lew.ro/in-5-years-ai-will-write-and-improve-its-own-code-are-we-on-the-brink-of-recursive-self-improvement/
   7. https://jacobbuckman.substack.com/p/we-arent-close-to-creating-a-rapidly
   8. https://www.reddit.com/r/artificial/comments/182bsfa/if_you_are_confident_that_recursive_ai/
   9. https://techinsights.linklaters.com/post/102j9z6/are-we-approaching-the-foothills-of-genai-recursive-self-improvement-and-what-mi
   10. https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/ai-2030-scenarios-report-html-annex-c
   11. https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/
   12. https://community.openai.com/t/recursive-self-awareness-development/1165681
   13. https://gh.bmj.com/content/8/5/e010435
   14. https://openreview.net/forum?id=46Zgqo4QIU
   15. https://www.lawfaremedia.org/article/ai-will-not-want-to-self-improve
   16. https://www.lesswrong.com/posts/byKF3mnaNRrbkDPWv/evidence-on-recursive-self-improvement-from-current-ml