[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'
Darin Sunley
dsunley at gmail.com
Fri Mar 31 19:26:05 UTC 2023
Eliezer's position is extreme - and his rhetoric regarding nuclear
exchanges may be an intentionally extreme rhetorical reductio - but it is
not absurd.
An unaligned superintelligent AGI with access to the internet and the
capability to develop and use Drexlerian nanotech could trivially
deconstruct the planet. [Yes, all the way down to and past the extremophile
bacteria 10 miles down in the planetary crust.] This is a simple and
obvious conclusion. It /is/ vulnerable to attack at its constituent
premises - superintelligence may very well be impossible, unaligned
superintelligences may be impossible, Drexlerian nanotech may be
impossible, etc. But given Eliezer's premises, his conclusion is not
false.
As such, the overwhelming majority of voices in the resulting Twitter
discourse are just mouth noises - monkeys trying to shame a fellow monkey
for making a [to them] unjustified grab for social status by "advocating
violence". They aren't even engaging with the underlying logic. I'm not
certain they're capable of doing so.
On Fri, Mar 31, 2023 at 1:03 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Fri, Mar 31, 2023 at 2:13 AM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> The AI doomers would say, but this is different from everything else
>> because.... it is like God.
>>
>
> Indeed, and in so doing they make several errors often associated with
> religion, for example fallacies akin to Pascal's Wager (see: Roko's
> Basilisk).
>
>
>> Take Russia, or North Korea. Russia could destroy humanity or do
>> irreparable damage. Why doesn't it happen? Mutual Destruction is part of
>> the reason.
>>
>
> To be fair, given what's been revealed in their invasion of Ukraine (and
> had been suspected for a while), it is possible that Russia does not in
> fact - and never actually did - have all that many functioning long-range
> nuclear weapons. But your point applies to why we've never had to find out
> for sure yet.
>
>
>> It is one thing to warn of possible dangers, quite another to make these
>> relentless and exaggerated doomsayer cries.
>>
>
> Which, being repeated and exaggerated whenever the "honest" reports fail
> to incite the supposedly justified degree of alarm (rather than seriously
> considering that said justification might in fact be incorrect), gets
> melded into the long history of unfounded apocalypse claims and dismissed
> on that basis. The Year 2000 bug did not wipe out civilization. Many
> predicted dates for the Second Coming have come and gone with no apparent
> effect; new predictions rarely even acknowledge those prior predictions,
> let alone explain why those proved false while this prediction is
> different. Likewise for the 2012 Mayan Apocalypse, which was literally
> just their calendar rolling over (akin to going from 12/31/1999 to
> 1/1/2000) and may have had the wrong date anyway.
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>