<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Fri, Mar 21, 2025 at 4:10 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">This article from the Future of Life Institute suggests that the<br>
intelligence explosion could be only a few years away.<br>
BillK<br>
<br>
<<a href="https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/" rel="noreferrer" target="_blank">https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/</a>><br>
Quotes:<br>
Are we close to an intelligence explosion?<br></blockquote><div><br></div><div>Great article. I particularly liked this chart: <a href="https://futureoflife.org/wp-content/uploads/2025/03/Test-scores-of-Al-systems-capabilities-Our-World-in-Data-1024x723.png">https://futureoflife.org/wp-content/uploads/2025/03/Test-scores-of-Al-systems-capabilities-Our-World-in-Data-1024x723.png</a></div><div><br></div><div>I can't help but feel that we are presently in the midst of an intelligence explosion. More and more, people are relying on AI to make smarter, better-informed decisions in their day-to-day lives and jobs.</div><div><br></div><div>They use it to summarize books and articles so they can absorb more information more quickly.</div><div><br></div><div>AI researchers are using AI to write better, more optimized code for future iterations of AI. Humans, for now, remain part of this loop, which is why it is not going as quickly as it could. But as time progresses, I think humans are gradually stepping back, becoming an ever-smaller part of this recursive loop of self-improvement.</div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
AIs are inching ever-closer to a critical threshold. Beyond this<br>
threshold are great risks—but crossing it is not inevitable.<br>
Published: March 21, 2025 Author: Sarah Hastings-Woodhouse</blockquote><div><br></div><div>Even exponential functions are continuous, so there may never be a feeling of a discontinuous breakout moment; we'll just see an ever-increasing slope, with ever-faster progress per unit of time.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Timelines to superintelligent AI vary widely, from just two or three<br>
years (a common prediction by lab insiders) to a couple of decades<br>
(the median guess among the broader machine learning community). That<br>
those closest to the technology expect superintelligence in just a few<br>
years should concern us. We should be acting on the assumption that<br>
very powerful AI could emerge soon.<br></blockquote><div><br></div><div>AI is arguably already smarter than humans (by most metrics, anyway, according to that chart). How exactly are we to define "superintelligence" beyond the vaguely defined "smarter than any human"? There are many levels of superintelligence beyond human intelligence. Humans are much closer to the intelligence of a pocket calculator than to the level of a Matrioshka brain or a kilogram of computronium:</div><div><br></div><div><a href="https://docs.google.com/spreadsheets/d/1_8QfebbBvQXo_3OroBhOfp24RAJPKCM4e_q5njbfBbU/edit?usp=sharing">https://docs.google.com/spreadsheets/d/1_8QfebbBvQXo_3OroBhOfp24RAJPKCM4e_q5njbfBbU/edit?usp=sharing</a><br></div><div><br></div><div>Jason</div></div></div>