<div dir="ltr">Hi John,<br><br>&lt;So the real question an AI scientist should ask himself is not should I help build a super intelligence but should I wait for somebody more ethical than me to do so...&gt;<br><br>I don't have a concept of "ethics" more precise than the nebulous idea that we should align with the will of the universe, or something like that, for which we should first know more about what the will of the universe could be.<br><br>&lt;I don't believe that the scientists in China or Russia or North Korea are more moral than the scientists working at OpenAI or Google or Anthropic...&gt;<br><br>Same as above for morality: I don't know what morality is. But from a more practical perspective, history shows that modern (or, better, pre-modern) Western culture, which has been based on competition, pluralism, freedom of thought and speech, and freedom of scientific research, has produced better science and technology faster than the centralized cultures of China or Russia or North Korea.<br><br>Which brings me to:<br><br>&lt;Because I committed heresy and the Extropians excommunicated me, and because I don't want to be mistaken for a closet Trump supporter.&gt;<br><br>So we extropians are Trump supporters? I didn't know that, but if so, then feel free to call me a Trump supporter!<br><br>Trump is far from being my ideal politician. But then very few politicians are, and those few don't have a realistic chance of being elected to positions where they can make a difference.<br><br>In politics, one must think not only of how good a certain candidate or a certain policy is, but also of the alternatives on the table.<br><br>To me, Trump is a symptom. It is as if part of the collective mind of America, perhaps the more perceptive part, has sensed that American culture (and Western culture at large) is slipping down a dangerous slope that could lead to abandoning the principles of competition, pluralism, freedom of thought and speech, and freedom of scientific research. 
This is where the ultra-PC "wokeness" of certain intolerant cultural and political actors leads. And that part of our collective mind has embraced Trump as the best way to fight back.<br><br>Of course there could be better ways to fight back, and of course a better candidate than Trump could emerge. But we must fight back in one way or another.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Dec 11, 2023 at 3:38 PM John Clark &lt;<a href="mailto:johnkclark@gmail.com">johnkclark@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Mon, Dec 11, 2023 at 12:02 AM Giulio Prisco &lt;</span><a href="mailto:giulio@gmail.com" style="font-family:Arial,Helvetica,sans-serif" target="_blank">giulio@gmail.com</a><span style="font-family:Arial,Helvetica,sans-serif">&gt; wrote:</span><br></div></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt; </span>Hi John, my take on this point is similar to my take on space expansion. I would *like* it if the good guys are the first to develop superhuman AI and expand into space, but if the bad guys must be the first, so be it. 
The universe will provide and make sure things work out well.<br></i></div></blockquote><div><br></div><font size="4">It has always been clear to me that unless there was some physical reason that a <span class="gmail_default" style="font-family:arial,helvetica,sans-serif">s</span>uper intelligent AI was impossible to make, it was inevitable that somebody somewhere would build one, and the events of the last year have increased my certainty that there is no such physical obstacle from 99% to 100%. So the real question an AI scientist should ask himself is not "should I help build a super intelligence?" but "should I wait for somebody more ethical than me to do so?" I don't believe that the scientists in China or Russia or North Korea are more moral than the scientists working at OpenAI or Google or Anthropic. And I'm certain the universe will see to it that things work out, but I'm not certain it will work out to the advantage of human beings; however, the probability is a little higher if the first superhuman AI is made by one of the good guys.</font><br><div><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></span></div><div><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></span></div><font size="4">For similar reasons I think Oppenheimer was wrong in opposing the building of the H-bomb. After the war it was not known whether it was even physically possible to build a thermonuclear bomb, and the answer to that question certainly seems to me like something that should be known if you're interested in national security. But Oppenheimer opposed even researching the possibility of such a thing. The US pretty much followed Oppenheimer's advice and did little thermonuclear research until the USSR exploded a fission bomb in 1949. 
After that the US started a huge H-bomb project, and in 1951 the Teller–Ulam design was discovered, making it clear that it was not only possible but practical to make such a device; in 1952 the US not only made an H-bomb but tested one. And this is where I think the US made a second mistake: it should not have tested it.</font><br></div><div class="gmail_quote"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></span></div><div class="gmail_quote"><font size="4">If the US had started the H-bomb program as soon as the war ended, it could probably have built an H-bomb in 1947 or 1948, before the USSR's fission bomb test. No nation could put H-bombs into its weapons stockpile without testing one, and it would be impossible to test such a huge thing without the entire world becoming aware of it. So if in 1948 the US had said that it had made an H-bomb but would not test it unless some other nation tested one first, then maybe Stalin would have decided not to test one either. Probably Stalin would have built and tested an H-bomb anyway, but I think it would have been worth taking the chance.<span style="font-family:arial,helvetica,sans-serif"> </span></font></div><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt; </span>Why have you stopped saying you are an extropian?</i></div></blockquote><div><br></div><font size="4">Because I committed heresy and the Extropians excommunicated me, and because I don't want to be mistaken for a closet Trump supporter. 
</font><div><br></div><div><font size="4" style="font-family:arial,helvetica,sans-serif;color:rgb(80,0,80)"><span class="gmail_default"> </span>John K Clark See what's on my new list at </font><font size="6" style="font-family:arial,helvetica,sans-serif;color:rgb(80,0,80)"><a href="https://groups.google.com/g/extropolis" rel="nofollow" target="_blank">Extropolis</a></font> <br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Dec 10, 2023 at 4:10 PM John Clark &lt;<a href="mailto:johnkclark@gmail.com" target="_blank">johnkclark@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Fri, Dec 8, 2023 at 2:48 AM Giulio Prisco &lt;</span><a href="mailto:giulio@gmail.com" style="font-family:Arial,Helvetica,sans-serif" target="_blank">giulio@gmail.com</a><span style="font-family:Arial,Helvetica,sans-serif">&gt; wrote:</span><br></div></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><i><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">&gt; </span>Effective accelerationism (e/acc) is good. 
Thoughts on effective accelerationism (e/acc), extropy, futurism, cosmism.<br><a href="https://www.turingchurch.com/p/effective-accelerationism-eacc-is" target="_blank">https://www.turingchurch.com/p/effective-accelerationism-eacc-is</a></i></div></blockquote><div><br></div><div><font size="4">I agree with almost everything you said, and I too became a card-carrying extropian in the mid-1990s, and until a few years ago I was proud to say I was still an extropian. But today I feel more comfortable saying I'm a believer in effective accelerationism, not because I believe AI poses no danger to the human race but because I believe the development of a superhuman AI is inevitable, and the chances that the AI will not decide to exterminate us are greater if baby Mr. Jupiter Brain is developed by the US, Europe, Japan, Taiwan, or South Korea than if it were developed by China, Russia, or North Korea. If given a choice between a low chance and no chance, I'll pick the low chance every time. </font><br></div><div><br></div></div></div></blockquote></div>
</blockquote></div></div>
<p></p>
-- <br>
You received this message because you are subscribed to the Google Groups "extropolis" group.<br>
To unsubscribe from this group and stop receiving emails from it, send an email to <a href="mailto:extropolis+unsubscribe@googlegroups.com" target="_blank">extropolis+unsubscribe@googlegroups.com</a>.<br>
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/extropolis/CAJPayv2-rtm8hO57uRqrskUk2Yst3eE4toptZzZ7xbNTCnv_Lg%40mail.gmail.com?utm_medium=email&utm_source=footer" target="_blank">https://groups.google.com/d/msgid/extropolis/CAJPayv2-rtm8hO57uRqrskUk2Yst3eE4toptZzZ7xbNTCnv_Lg%40mail.gmail.com</a>.<br>
</blockquote></div>