[ExI] NO to the "Statement on Superintelligence"
    Giulio Prisco 
    giulio at gmail.com
       
    Mon Oct 27 05:12:45 UTC 2025
    
    
  
On Sun, Oct 26, 2025 at 6:15 PM Ben Zaiboc via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On 26/10/2025 14:40, Giulio Prisco wrote:
> > My latest @Mindplex_AI article. I strongly criticize the "Statement on
> > Superintelligence" recently issued by the Future of Life Institute. I
> > think their proposed ban on artificial superintelligence (ASI)
> > research is misguided and potentially dangerous.
> > https://magazine.mindplex.ai/post/the-misguided-crusade-against-superintelligence
>
> I agree with everything there, except the last part: "controlled by bad
> humans".
>
Right. I should have said "developed and initially controlled by bad humans."
> If it was controllable by humans, it wouldn't be superintelligence.
>
> I think it's more likely that if we heed the Future of Life Institute
> statement, it will make little difference. Whether superintelligent AI
> is developed by 'good' or 'bad' humans, it will inevitably make its own
> decisions. The dangerous part that we can actually influence is
> pre-superintelligent advanced AI, where humans might reasonably expect
> AIs to do what they want them to. Briefly. This, I think, is a good
> argument for developing it as quickly as possible, to stay ahead of
> other players who will attempt to enforce their values on everyone else.
>
> Perhaps the best rebuttal to the FLI statement would be: "Do you really
> want to live under Communist Chinese rule?" (even if this is likely to
> be only for a brief time).
>
My reply to Max Tegmark's X post on the statement was:
非常感謝!!!
which should mean "Thank you very much !!!" in Chinese.
> --
> Ben
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat