[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Giovanni Santostasi gsantostasi at gmail.com
Fri Mar 31 21:23:01 UTC 2023


Darin,
As I pointed out earlier, the argument is based on attributing god-like
powers to the AGI. It makes a lot of assumptions, as you mentioned. It is a
reductio argument, a philosophical one based on taking an extreme position
and seeing what it "logically" leads to. But the premises are not based in
reality. We don't know what a fully conscious AI would look like, how we
would get there from where we are now, or what the steps would look like.
All we have is what we have observed so far.

It is not just that GPT-4 is benign (Bing can at most insult you or be
mean); it is also relatively simple to contain and to limit its activities.
Yes, GPT-4 was a quantum leap from the previous version, but not in a way
that all of a sudden took over humanity. As we approach those boundaries
we will understand better the nature of these systems, how to minimize the
risks, and how to adapt to the disruption they will create. We have done
this for 100,000 years so far. One may argue that AI is more disruptive
than the invention of fire, agriculture, and so on, and that the time
scales involved are very different, but we also have better tools to
understand and face such problems than we had in the past.
Fear-mongering and apocalyptic thinking are not going to help here. Yes, we
need to be vigilant and think about the possible problems ahead, but we
should also be open and curious, and fear creates the opposite effect.

Giovanni



On Fri, Mar 31, 2023 at 12:27 PM Darin Sunley via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Eliezer's position is extreme - and his rhetoric regarding nuclear
> exchanges may be an intentionally rhetorically extreme reductio - but it is
> not absurd.
>
> An unaligned superintelligent AGI with access to the internet and the
> capability to develop and use Drexlerian nanotech can trivially
> deconstruct the planet. [Yes, all the way down to and past the extremophile
> bacteria 10 miles down in the planetary crust.] This is a simple and
> obvious truth. This conclusion /is/ vulnerable to attack at its constituent
> points - superintelligence may very well be impossible, unaligned
> superintelligences may be impossible, Drexlerian nanotech may be
> impossible, etc. But Eliezer's position is objectively not false, given
> Eliezer's premises.
>
> As such, the overwhelming majority of voices in the resulting Twitter
> discourse are just mouth noises - monkeys trying to shame a fellow monkey
> for making a [to them] unjustified grab for social status by "advocating
> violence". They aren't even engaging with the underlying logic. I'm not
> certain if they're capable of doing so.
>
>
> On Fri, Mar 31, 2023 at 1:03 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Fri, Mar 31, 2023 at 2:13 AM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>
>>> The AI doomers would say, but this is different from everything else
>>> because.... it is like God.
>>>
>>
>> Indeed, and in so doing they make several errors often associated with
>> religion, for example fallacies akin to Pascal's Wager (see: Roko's
>> Basilisk).
>>
>>
>>> Take Russia, or North Korea. Russia could destroy humanity or do
>>> irreparable damage. Why doesn't it happen? Mutually assured destruction
>>> is part of the reason.
>>>
>>
>> To be fair, given what's been revealed in their invasion of Ukraine (and
>> had been suspected for a while), it is possible that Russia does not in
>> fact - and never actually did - have all that many functioning long-range
>> nuclear weapons.  But your point applies to why we've never had to find out
>> for sure yet.
>>
>>
>>> It is one thing to warn of possible dangers; it is another to keep up
>>> these relentless and exaggerated doomsayers' cries.
>>>
>>
>> Which cries, being repeated and exaggerated when the "honest" reports fail
>> to incite the supposedly justified degree of alarm (rather than seriously
>> considering that said justification might in fact be incorrect), get melded
>> into the long history of unfounded apocalypse claims and dismissed on that
>> basis.  The Year 2000 bug did not wipe out civilization.  Many predicted
>> dates for the Second Coming have come and gone with no apparent effect; new
>> predictions rarely even acknowledge those prior predictions, let alone give
>> a reason why those proved false where this prediction is different.
>> Likewise for the 2012 Mayan Apocalypse, which was literally just their
>> calendar rolling over (akin to going from 12/31/1999 to 1/1/2000) and may
>> have had the wrong date anyway.