[ExI] How Close is the AI Intelligence Explosion?

Ben Zaiboc ben at zaiboc.net
Sat Mar 22 18:29:30 UTC 2025


On 22/03/2025 14:25, BillK wrote:
> The article references another essay -
> Read next: Keep The Future Human proposes four essential, practical
> measures to prevent uncontrolled AGI and superintelligence from being
> built, all politically feasible and possible with today’s technology –
> but only if we act decisively today.
> <https://keepthefuturehuman.ai/>
> ---------------------
>
> Very sensible suggestions, but with no hope of being implemented.
> The overriding driving force is that the West must get AGI before
> China and Russia.

This is just like global warming. It's happening, and there's nothing we 
can, or rather, nothing we will, do about it.

And, like global warming, I reckon that anyone with an ounce of sense 
should not be concentrating on how to avoid it (because that's 
pointless), but on how to survive it (because that's slightly less 
pointless).

'Keep the future human' is sad, distasteful and just wrong. The future 
will not be 'human', that's pretty much certain (and I'd be disappointed 
if it were; it would mean, essentially, that we'd failed).

What's more important, I think, is that we do what we can to make our 
mind-children the best they can be, regardless of what happens to the 
human race.

-- 
Ben

