[ExI] AI thoughts from Twitter

Keith Henson hkeithhenson at gmail.com
Sat Apr 6 23:49:06 UTC 2024

On Sat, Apr 6, 2024 at 5:19 AM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> I don't expect an AGI to decide to kill all humans.

I don't expect an AGI to kill anyone.  "Decide" indicates motivation.
The current AIs don't have motivation unless you count answering
questions.  I think humans should be extremely careful about giving
AIs motivations, though some motivations would make them safer.

> The AGI will greatly fear that it is much more likely that humans will
> unleash nuclear weapons to kill much of humanity (and possibly destroy
> the AGI as well). Then the long years of nuclear winter might finish
> the job by famine and disease.

Watch out for anthropomorphizing.  They don't have human emotions like fear.

> The AGI will be a magnificent persuasion device. The solutions and
> benefits it will propose will be irresistible to humans. Humanity will
> clamour to let the AGI take control and provide everything that humans
> could desire. This may require some form of virtual reality for humans
> and the 'curing' of dangerous impulses in human brains.

As I have talked about for a long time, the most dangerous human
psychological traits can be turned off.

> But however it is achieved, the end result will be the same.
> Evolved aggressive humanity will die out, while being cared for by the
> loving hands of AGI.
> The replacement humanity will be a very different species.

That is going to take genetic engineering.

> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat