[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Adrian Tymes atymes at gmail.com
Fri Mar 31 18:49:52 UTC 2023


On Fri, Mar 31, 2023 at 1:27 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> This is stupid. A government is a long-feedback-loop entity: extremely
> inefficient and slow in responding to truly new challenges, unlikely to
> maintain alignment with the goals of its human subjects, and prone to
> failures that grow with its size. It would be suicidal to try to use the
> mechanism of government to solve AI alignment.
>
> Our only chance of surviving the singularity is to build a guardian AI, an
> aligned superhuman AI that would be capable of preventing the emergence of
> unaligned or malicious superhuman AIs - a bit like a world government but
> without the psychopaths and the idiots.
>
> Our best chance for building the guardian AI is for highly competent and
> benevolent AI programmers with unlimited resources to work as fast as they
> can, unimpeded by regulations (see "long-feedback loop" and "extremely
> inefficient" for why regulations are a bad idea). Give them all the compute
> they can use and keep our fingers crossed.
>

Indeed.  But it's easy for those in a panic to distrust everyone and call
for shutdowns.  It's hard for them to trust - even when historical examples
show that, in situations like this, trust works and bans don't.

