[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Will Steinberg steinberg.will at gmail.com
Thu Mar 30 05:09:20 UTC 2023


That's a bit silly.  At the very least, this has a very real possibility of
absolutely obliterating the global economy.  You talk about the zealots
against AI, but there are zealots on the opposite side as well.

Like I said, I don't think it's sensible or feasible to halt development,
but we should be fast-tracking regulations around this and pouring
billions of dollars into research on alignment and outcomes.

On Thu, Mar 30, 2023 at 12:26 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Wed, Mar 29, 2023 at 8:34 PM Will Steinberg via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I think it's fair to say that haphazardly developing tech that even has
>> possible total existential risk associated with it is bad.
>>
>
> That argument can be extended to anything.
>
> It's true.  Any action you take has a mathematically non-zero chance of
> leading to the destruction of all of humanity, a destruction you might
> have averted had you taken some other action instead.
>
> Choose this restaurant or that?  The waiter you tip might use that funding
> to bootstrap world domination - or hold a grudge if you don't tip,
> inspiring an ultimately successful bid for world domination.
>
> Wait a second before crossing the street, or don't?  Whom do you ever so
> slightly inconvenience or help, and how might their lives be different
> because of that?
>
> Make an AI, or don't make the AI that could have countered a genocidal AI?
>
> "But it could possibly turn out bad" is not, by itself, reason to favor
> any action over any other.  If you can even approximately quantify the
> level of risk for each alternative, then perhaps - but I see no such
> calculations based on actual data being done here, just guesswork and
> assumptions.  We have no data showing whether developing or not developing
> better AI is the riskier path.
>
> We do, however, have data showing that if we hold off on developing AI,
> then people who are more likely to develop genocidal AI will continue
> unchallenged.