[ExI] ai 2027

Ben Zaiboc ben at zaiboc.net
Sat Nov 29 14:04:39 UTC 2025


On 28/11/2025 18:28, spike wrote:
>
> I have always been a racy kinda guy.  That’s hard to stop.  Are you a 
> slow-down or a race?  John, Adrian, Ben, Mike, others please?  Racers 
> all?  Or… what?  Both of these choices have their challenges, but the 
> way the choice is framed is too Boolean for me.  Are there any other 
> branches on their flow chart?
>
> The study was published almost a year ago, so it might be far enough 
> along we can see which of their models is working:
>
> https://ai-2027.com/
>

I'd say I'm not really either, exactly, but I do tend away from the 'slow 
down' side, because slowing down is not only counterproductive, it's not 
even possible. But it doesn't really matter; nothing anyone can actually do 
in practice will have an effect. We are racing into the future, faster 
and faster. That's the nature of exponential progress. Personally, I 
welcome it, but as I've said before, for me the important thing is that 
intelligence survives and grows. Humans surviving would be very nice, 
but it's still secondary, so the 'existential risk' aspect is not so 
important, philosophically speaking. As long as intelligent awareness of 
some sort gets through the coming bottleneck, I'll count that as a win. 
Humans in general getting through it would be a bonus. Me personally 
getting through it comes a distant third. Still desirable, obviously, 
but not so important.

I think the ai-2027 attempt at prediction is just as wrong as any other. 
Whatever happens will probably surprise us all, regardless of what 
anyone currently thinks.

-- 
Ben
