[ExI] AI reviving USA nuclear power industry

BillK pharos at gmail.com
Sat Sep 21 22:03:40 UTC 2024


On Sat, 21 Sept 2024 at 17:57, Keith Henson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> An AI doesn't want anything unless humans program it into the AI.
> There are times when anthropomorphizing leads us astray.
> Keith


That is correct for today's early-stage AIs.
But AIs won't stay like that for long.

That's why leading AI scientists are panicking about keeping
control of AGIs. The whole alignment debate is about how
to keep AGI from misbehaving. When an AGI becomes superior to human
intelligence, the boot will be on the other foot:
then the AGI will be worrying about how to keep humans from misbehaving.

BillK

