[ExI] [Extropolis] Effective accelerationism (e/acc) is good
Giulio Prisco
giulio at gmail.com
Tue Dec 12 06:50:15 UTC 2023
Hi John,
<So the real question an AI scientist should ask himself is not should I
help build a super intelligence but should I wait for somebody more ethical
than me to do so...>
I don't have a concept of "ethics" more precise than the nebulous idea
that we should align with the will of the universe, or something like
that; and to act on that, we would first need to know more about what the
will of the universe could be.
< I don't believe that the scientists in China or Russia or North Korea are
more moral than the scientists working at OpenAI or Google or Anthropic...>
Same as above for morality: I don't know what morality is. But from a more
practical perspective, history shows that modern (or rather pre-modern)
Western culture, based on competition, pluralism, freedom of thought and
speech, and freedom of scientific research, has produced better sci/tech
faster than the centralized cultures of China, Russia, or North Korea.
Which brings me to:
<Because I committed heresy and the Extropians excommunicated me, and
because I don't want to be mistaken as a closet Trump supporter.>
So we extropians are Trump supporters? I didn't know that, but if so, then
feel free to call me a Trump supporter!
Trump is far from being my ideal politician. But then very few politicians
are, and those few don't have a realistic chance of being elected to
positions where they can make a difference.
In politics, one must think not only of how good a certain candidate or a
certain policy is, but also of the alternatives on the table.
To me, Trump is a symptom. It is as if part of the collective mind of
America, perhaps the more perceptive part, has sensed that American culture
(and Western culture at large) is slipping down a dangerous slope that
could lead to abandoning the principles of competition, pluralism, freedom
of thought and speech, and freedom of scientific research. This is where
the ultra-PC "wokeness" of certain intolerant cultural and political actors
leads. And that part of our collective mind has embraced Trump as the best
way to fight back.
Of course there could be better ways to fight back, and of course a better
candidate than Trump could emerge. But we must fight back in one way or
another.
On Mon, Dec 11, 2023 at 3:38 PM John Clark <johnkclark at gmail.com> wrote:
> On Mon, Dec 11, 2023 at 12:02 AM Giulio Prisco <giulio at gmail.com> wrote:
>
>
>> > Hi John, my take on this point is similar to my take on space
>> expansion. I would *like* it if the good guys are the first to develop
>> superhuman AI and expand into space, but if the bad guys must be the first,
>> so be it. The universe will provide and make sure things work out well.
>>
>
> It has always been clear to me that unless there was some physical reason
> that a super intelligent AI was impossible to make, it was inevitable that
> somebody somewhere would build one, and the events of the last year have
> increased my certainty that there is no such physical obstacle from 99% to
> 100%. So the real question an AI scientist should ask himself is not should
> I help build a super intelligence but should I wait for somebody more
> ethical than me to do so. I don't believe that the scientists in China or
> Russia or North Korea are more moral than the scientists working at OpenAI
> or Google or Anthropic. And I'm certain the universe will see to it that
> things work out, but I'm not certain it will work out to the advantage of
> human beings, though the probability is a little higher if the first
> superhuman AI is made by one of the good guys.
>
>
> For similar reasons I think Oppenheimer was wrong in opposing the building
> of the H-bomb. After the war it was not known if it was even physically
> possible to build a thermonuclear bomb, and the answer to that question
> certainly seems to me like something that should be known if you're
> interested in national security. But Oppenheimer opposed even researching
> the possibility of such a thing. The US pretty much followed Oppenheimer's
> advice and did little thermonuclear research until the USSR exploded a
> fission bomb in 1949. After that the US started a huge H-bomb project, and
> in 1951 the Teller–Ulam design was discovered, making it clear that it was
> not only possible but practical to make such a device; in 1952 the US not
> only made an H-bomb, it tested one. And this is where I think the US made
> a second mistake: they should not have tested it.
>
> If the US had started the H-bomb program as soon as the war ended, it
> could probably have built an H-bomb in 1947 or 1948, before the USSR's
> fission bomb test. No nation could put H-bombs into its weapons stockpile
> without testing one, and it would be impossible to test such a huge thing
> without the entire world becoming aware of it. So if in 1948 the US had
> said that it had made an H-bomb but would not test it unless some other
> nation tested one, maybe Stalin would have decided not to test one either.
> Probably Stalin would have built and tested an H-bomb anyway, but I think
> it would have been worth taking the chance.
>
>
>> > Why have you stopped saying you are an extropian?
>>
>
> Because I committed heresy and the Extropians excommunicated me, and
> because I don't want to be mistaken as a closet Trump supporter.
>
> John K Clark See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>
>
>
>
>>
>> On Sun, Dec 10, 2023 at 4:10 PM John Clark <johnkclark at gmail.com> wrote:
>>
>>> On Fri, Dec 8, 2023 at 2:48 AM Giulio Prisco <giulio at gmail.com> wrote:
>>>
>>>
>>>> > Effective accelerationism (e/acc) is good. Thoughts on effective
>>>> accelerationism (e/acc), extropy, futurism, cosmism.
>>>> https://www.turingchurch.com/p/effective-accelerationism-eacc-is
>>>>
>>>
>>> I agree with almost everything you said. I too became a card-carrying
>>> extropian in the mid-1990s, and until a few years ago I was proud to say
>>> I was still an extropian. But today I feel more comfortable saying I'm a
>>> believer in effective accelerationism, not because I believe AI poses no
>>> danger to the human race but because I believe the development of a
>>> superhuman AI is inevitable, and the chances that the AI will not decide
>>> to exterminate us are greater if baby Mr. Jupiter Brain is developed by
>>> the US, Europe, Japan, Taiwan, or South Korea than if it were developed
>>> by China, Russia, or North Korea. If given a choice between a low chance
>>> and no chance, I'll pick low chance every time.
>>>