[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Ben Zaiboc ben at zaiboc.net
Sat Apr 1 14:12:01 UTC 2023

I know I'm resorting to science-fiction here, and won't object to any 
resulting flak, but maybe our only realistic chance lies in something 
like the 'Quiet War' referred to in many of Neal Asher's books (one of 
my favourite sf writers).

Rather crude summary: Superintelligent AI quietly and (largely 
bloodlessly) takes over from humans and puts a stop to all our 
bickering, enabling an age of abundance and peace and progress for both 
humans and (non-biological) machines (with the usual hiccups that make 
for a good story, of course).

Lots of nasties in the stories, but overall, in the background of the 
various adventures, they have one of the few good portrayals of a 
generally positive future for the human race (and the AIs).

But aside from all that, I honestly think that any truly 
superintelligent AI system is going to regard the idea of a 'paperclip 
maximiser', or any other kind of world-destroyer, as totally bonkers.

The real danger lies with the less-than-superintelligent systems that 
can give one group of humans an enormous advantage over the others.

It's we, not the AIs, that are the biggest danger.

