[ExI] How Close is the AI Intelligence Explosion?

BillK pharos at gmail.com
Sat Mar 22 14:24:55 UTC 2025


On Sat, 22 Mar 2025 at 01:29, Keith Henson <hkeithhenson at gmail.com> wrote:
>
> Acting how?
>
> I have followed this subject for 20 years and have never seen a
> workable proposal.
>
> My unworkable proposal is to modify human nature because AIs are not
> the problem, humans are.
>
> Keith
> _________________________________________


The article references another essay:
Read next: Keep The Future Human proposes four essential, practical
measures to prevent uncontrolled AGI and superintelligence from being
built, all politically feasible and possible with today’s technology –
but only if we act decisively today.
<https://keepthefuturehuman.ai/>
---------------------

These are very sensible suggestions, but they have no hope of being
implemented. The overriding driving force is that the West must get
AGI before China and Russia do.

David Farragut: "Damn the torpedoes ... full speed ahead."

BillK
