[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Adrian Tymes atymes at gmail.com
Thu Mar 30 04:25:26 UTC 2023


On Wed, Mar 29, 2023 at 8:34 PM Will Steinberg via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I think it's fair to say that haphazardly developing tech that even has
> possible total existential risk associated with it is bad.
>

That argument can be extended to anything.

It's true.  Any action you take has a mathematically non-zero chance of
leading to the destruction of all of humanity - a destruction you might
have averted had you taken some other action.

Choose this restaurant or that?  The waiter you tip might use that money
to bootstrap world domination - or hold a grudge if you don't tip,
inspiring an ultimately successful bid for world domination.

Wait a second before crossing the street, or don't?  Whom do you ever so
slightly inconvenience or help, and how might their lives change because
of that?

Make an AI, or don't make the AI that could have countered a genocidal AI?

"But it could possibly turn out bad" is not, by itself, reason to favor any
action over any other.  If you can even approximately quantify the level of
risk for each alternative, then perhaps - but I see no such calculations
based on actual data being done here, just guesswork and assumptions.  We
have no data showing whether developing or not developing better AI is the
riskier path.
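
To be concrete, the comparison that would actually settle this - which, as
noted, nobody here has made, so the terms below are placeholders rather
than estimates - is roughly:

  P(catastrophe | develop AI)  vs.  P(catastrophe | pause AI)

A pause is only the safer choice if the second probability is credibly
lower than the first, and so far neither has been estimated from anything
better than guesswork.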

We do, however, have data showing that if we hold off on developing AI,
then people who are more likely to develop genocidal AI will continue
unchallenged.