[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'
Giovanni Santostasi
gsantostasi at gmail.com
Fri Mar 31 09:12:24 UTC 2023
Adrian,
Right, everything, even crossing the street, has existential risk.
The AI doomers would say this is different from everything else, though,
because... it is like God.
There is a religious overtone in their arguments. This superintelligence
can do everything, it can be everything, it cannot be contained, it cannot
be understood, and if it can get rid of humans, it will.
In their view AI is basically like God, but while the ancient religions
made God somehow benign (in a perverted way), this superintelligent God-AI
is super focused on killing everybody.
Their arguments seem logical but they are actually not. We already have bad
agents in the world, and they already have powers superior to those of any
particular individual or group of individuals. For example, nations.
Take Russia, or North Korea. Russia could destroy humanity or do
irreparable damage. Why doesn't it happen? Mutually assured destruction is
part of the reason. The same would apply to a rogue AI.
We know how to handle viruses, both biological and digital. We do have to
be aware and vigilant, but I'm pretty sure we can handle problems as they
present themselves. It would be nice to prepare for every possible
existential threat, but we have also done well overall as a species by
facing problems as they arose, because no matter how well we prepare, the
real problem is never exactly what the models predicted. We are good at
adapting and surviving. It is one thing to warn of possible dangers,
another to make these relentless and exaggerated doomsayer cries.
Giovanni
On Wed, Mar 29, 2023 at 9:27 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Wed, Mar 29, 2023 at 8:34 PM Will Steinberg via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I think it's fair to say that haphazardly developing tech that even has
>> possible total existential risk associated with it is bad.
>>
>
> That argument can be extended to anything.
>
> It's true. Any action you take has a mathematically non-zero chance of
> leading to the destruction of all of humanity, in a way that you could
> have helped prevent had you taken a certain other action.
>
> Choose this restaurant or that? The waiter you tip might use that funding
> to bootstrap world domination - or hold a grudge if you don't tip,
> inspiring an ultimately successful world domination.
>
> Wait a second before crossing the street, or don't? Whom do you ever so
> slightly inconvenience or help, and how might their lives be different
> because of that?
>
> Make an AI, or don't make the AI that could have countered a genocidal AI?
>
> "But it could possibly turn out bad" is not, by itself, reason to favor
> any action over any other. If you can even approximately quantify the
> level of risk for each alternative, then perhaps - but I see no such
> calculations based on actual data being done here, just guesswork and
> assumptions. We have no data showing whether developing or not developing
> better AI is the riskier path.
>
> We do, however, have data showing that if we hold off on developing AI,
> then people who are more likely to develop genocidal AI will continue
> unchallenged.