[ExI] what if... the singularity isn't near?
Ben Zaiboc
ben at zaiboc.net
Wed Nov 5 11:20:41 UTC 2025
On 05/11/2025 03:39, spike wrote:
> it appears to me that what we are calling AI might be a kind of false alarm
I for one think it is, at least in the sense that what we call AI now is
not going to develop into general intelligence, much less
superintelligence. It's far too narrow in the kind of information
processing it does, compared to the only generally-intelligent thing we
know, our own brains.
There are some people working on different models of AI though, and I
reckon that the current models will start to show their limits before
long, and will need to be changed. At some point soon, I suspect much
more capable and 'general' AIs will start to emerge.
I doubt that estimates of an imminent singularity are too far off, even
though they might be based on the wrong signs.
> What if... intelligence really is substrate dependent for reasons we don't currently understand
If intelligence is substrate-specific, and biology is the only thing
that can sustain it, then we're really in trouble, and I think all bets
are off.
In fact, if that is true, then all of science is in trouble. We might as
well start taking religion and other kinds of magical thinking seriously.
However, I think we've already proved that it isn't substrate-specific,
so I'm not worried about it.
> or the Singularity depends on technology we don't yet have
Doubtful, but possible, I suppose. In which case, we should all be
checking our cryonics contracts and/or preparing to survive a very
disruptive time in which AI-enabled humans are the disruptive force
(instead of a totally disruptive one driven by generally intelligent
machines).
Adrian Tymes:
> Consider: what if it is still a few decades off, so what we do today
> still matters. What can we do today to make it more likely that it will
> eventually come about, and that it will do so in a way that we benefit from?
I think that in that case, research (and implementation) in the
direction of life-extension and augmentation of biological critters like
us will be even more important than it is now, so I'd want to put more
effort into that. In the absence of an imminent singularity, biology
becomes the more important thing. Current AI models can help enormously
with that.
We would also need to take threats from hostile foreign actors much more
seriously. At the moment, I'm appalled at the general attitude towards
threats like Communist China, Russia and North Korea (Islam, while
troubling, probably isn't anywhere near as big a threat). I think we
have fallen foul of the fact that tolerance, while in general a good
idea, cannot include tolerance of intolerance. We have been tolerating
intolerance for too long now, and it's increasingly biting our
collective arses.
Burying our heads in the sand seems to be the most popular action.
Meanwhile these regimes are getting stronger (I'm not sure about Russia,
but it seems to be more resilient than I would have expected at the
beginning of the Ukrainian war. Extreme bluffing does seem to be a
Russian characteristic, though, so who can tell...). Communist China in
particular is ramping up its capabilities in just about everything,
and extending its tentacles around the entire world, and it is
completely antithetical to our 'western' values (and very good at hiding
or obfuscating that fact). We can expect the whole world to eventually
become another Hong Kong, Tibet or Eastern Turkistan ('Xinjiang'),
without a big change in attitude in Western governments. Or a
singularity.
--
Ben