[ExI] what if... the singularity isn't near?

Jason Resch jasonresch at gmail.com
Wed Nov 5 03:44:47 UTC 2025


On Tue, Nov 4, 2025, 9:32 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> OK sure, this site is all about the singularity being near, and OK sure, I
> am a big heretic for even entertaining the notion it might not be.
>
>
>
> But what if… the Singularity is a coupla decades off still, and Kurzweil
> was mostly right, but off a little, and the Singularity is still coming but
> not right away?
>

Have you seen my megatrends presentation?

https://docs.google.com/presentation/d/18jn51f6DXMykCAL6gjZilK27TXAZielm5djcnHuh-7k/edit?usp=drivesdk
(Note there is additional information in the "slide notes," but you may need
to be on a desktop computer to see them).

The trends are (from what I can tell) consistent with a near-term
intelligence explosion: less than 6 years away (pessimistic), and possibly
less than 2 years away (optimistic).

Of course, there could be a derailment. We might hit some roadblock in
making faster chips, or have some kind of economic or energy shock which
stalls progress in AI. But until we see such signs, I think we can assume
we're on track for a singularity that's near.


>
> Then what?  Do I get to sell Singularity insurance?
>
I suppose anyone who worries about a delayed singularity should continue
saving for retirement.

>
>
> Because it appears to me that what we are calling AI might be a kind of
> false alarm: a big advance in smart search might make us think the
> Singularity is nearer than it really is.
>
Even if progress in LLMs stopped where it is today, they are already able to
upgrade the IQs of most of the population by 20-40 points, act as a
professor/tutor/expert in your pocket, knowledgeable on almost every
subject, and turn any natural language speaker into at least a modest
computer programmer.

So if the singularity was 20 years away before LLMs, when the world had just
50 million programmers, how much nearer does it become with 5 billion
programmers (or even just 500 million)?
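As a toy back-of-envelope (my own illustration, not Jason's math), suppose, very implausibly, that progress scaled linearly with the number of programmers; the numbers here are purely hypothetical:

```python
# Toy model: assumes progress toward the singularity scales linearly
# with the number of programmers. Purely illustrative; the real
# relationship is certainly far weaker than linear.
def years_remaining(baseline_years: float,
                    old_programmers: float,
                    new_programmers: float) -> float:
    """Scale the remaining timeline by the ratio of programmer counts."""
    return baseline_years * old_programmers / new_programmers

print(years_remaining(20, 50e6, 500e6))  # 10x the programmers -> 2.0 years
print(years_remaining(20, 50e6, 5e9))    # 100x the programmers -> 0.2 years
```

Even if the true scaling is much weaker than linear, the direction of the effect is the point: more programmers shortens the timeline.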



>
> Then what?  What if… intelligence really is substrate dependent for
> reasons we don’t currently understand,
>
We already know intelligence isn't substrate dependent, for we already have
intelligent software systems, and software can run on any computer,
whatever its physical substrate may be.

> or the Singularity depends on technology we don’t yet have.
>
Our brains, and all modern AI, are based on the technology of the neuron
(or the artificial neuron, respectively). I. J. Good, in his 1965 writing on
the ultraintelligent machine, predicted that the first ultraintelligent
machine would be based on artificial neural networks.

And further, we have discovered general-purpose learning algorithms.
DeepMind made a single AI that was able to master 57 different Atari games
entirely on its own, with no game-specific training or instruction.

I don't think there are any missing breakthroughs separating us from
superintelligence; all that remains is increasing training time and
collecting larger sets of training data.



>
> Then we would predict the Singularity sooner than it is, ja?
>
>
>
> Singularity thinkers come on, help me Obi wans, alla yas: might we be in a
> situation where we are fooling ourselves?  OK then what please?
>
The beauty (or horror) of exponential trends is that even if we
underestimate what's required to achieve superintelligence by 1000-fold,
that only postpones things by about 10 doublings (2^10 = 1024).
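The arithmetic behind that claim is just a base-2 logarithm: on a clean exponential trend with a fixed doubling time, a k-fold underestimate costs log2(k) doublings.

```python
import math

# A 1000-fold underestimate of the required capability costs log2(1000)
# doublings on an exponential trend.
delay_in_doublings = math.log2(1000)
print(round(delay_in_doublings, 2))  # 9.97
```

So at a hypothetical doubling time of one to two years, even a 1000-fold miss translates to roughly a decade or two of delay, not centuries.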

Jason