[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

spike at rainier66.com
Thu Mar 30 05:46:19 UTC 2023


From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Will Steinberg via extropy-chat

>….  We almost killed ourselves with nukes in the middle of the last century… Will

Doesn’t it seem like we should be able to retire an old existential risk when a new one shows up?  It feels to me like we are in as much danger of old-fashioned nuclear holocaust now as we have ever been.  Yet two new existential risks have popped up: man-made weaponized viruses and ambiguously human-level AI.  And we don’t get to retire the nuke risk.

Sheesh.  I liked it better when we only had one serious existential risk.

spike

