[ExI] ai 2027
John Clark
johnkclark at gmail.com
Sat Nov 29 20:55:06 UTC 2025
On Sat, Nov 29, 2025 at 9:06 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> nothing anyone can actually do in practice will have an effect. We are
> racing into the future, faster and faster. That's the nature of exponential
> progress. Personally, I welcome it, but as I've said before, for me, the
> important thing is that intelligence survives and grows. Humans surviving
> would be very nice, but it's still secondary, so the 'existential risk'
> aspect is not so important, philosophically speaking. As long as
> intelligent awareness of some sort gets through the coming bottleneck, I'll
> count that as a win.
I agree with that 100%.
> Humans in general getting through it would be a bonus. Me personally
> getting through it comes a distant third.
My agreement is not quite as strong on that one. I'm a bit of a selfish
bastard, so I would switch priorities.
> I think the ai-2027 attempt at prediction is just as wrong as any
> other.
I think their precise predictions are almost certainly wrong, such as the
year and month in which the Chinese will steal the source code and the
weights of a bleeding-edge frontier AI model. But I think the general trend,
and the dates they predict certain AI milestones will be achieved, will
prove approximately correct.
> What will happen will probably surprise us all, regardless of what
> anyone currently thinks.
Yeah, whenever the Singularity happens, it's going to be a big surprise.
John K Clark