<div dir="ltr">ChatGPT-4 is not an existential threat to humanity in its current form. No one who understands anything about the field is saying that it is.<div><br></div><div>What it is, is a HUGE pile of Bayesian evidence that should shift all of our priors about 2 bazillion bits in the direction of "Human-level AGIs are entirely possible, are basically knocking on our door, and their superintelligent cousins are about 5 minutes behind them."</div><div><br></div><div>The Waluigi effect, and related similar observations of recent LLMs should give us all great concern that we don't have anything like even the slightest ability to put any kind of deep and rigorous post-hoc external controls on the behavior of several hundred billion parameters of linear algebra. We just don't know how to do that. I think OpenAI may have thought they knew how to do that 6 months ago. They have admitted they were wrong.</div><div><br></div><div>So yeah - human level AGIs are basically a few small architectural tweaks away from being here, and superintelligence is now much more obviously plausible than it was 6 months ago - there was some hope that training data would be a bottleneck on capabilities, but GPT4 is massively superior to GPT3 with roughly the same training data corpus.</div><div><br></div><div>Drexerlian nanotech remains elusive (or at least highly classified) so there's that at least. But as we've all seen, you can do enough damage with simple gain-of-function research on virii. 
You can't eat the planet with it, but it's still not great.</div><div><br></div><div>If I weren't already pretty confident that we were /already/ under the absolute control of an omniscient, omnipotent superintelligence [significant fractions of humanity worked this out a few thousand years ago; it's only recently that we've allowed ourselves to forget], I'd be quite concerned.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 31, 2023 at 1:26 PM Darin Sunley &lt;<a href="mailto:dsunley@gmail.com">dsunley@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Eliezer's position is extreme - and his rhetoric regarding nuclear exchanges may be an intentionally extreme rhetorical reductio - but it is not absurd.<div><br></div><div>An unaligned superintelligent AGI with access to the internet and the capability to develop and use Drexlerian nanotech can trivially deconstruct the planet. [Yes, all the way down to and past the extremophile bacteria 10 miles down in the planetary crust.] This is a simple and obvious truth. This conclusion /is/ vulnerable to attack at its constituent points - superintelligence may very well be impossible, unaligned superintelligences may be impossible, Drexlerian nanotech may be impossible, etc. But given Eliezer's premises, his position is objectively not false. </div><div><br></div><div>As such, the overwhelming majority of voices in the resulting Twitter discourse are just mouth noises - monkeys trying to shame a fellow monkey for making a [to them] unjustified grab for social status by "advocating violence". They aren't even engaging with the underlying logic. 
I'm not certain they're capable of doing so.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 31, 2023 at 1:03 PM Adrian Tymes via extropy-chat &lt;<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Fri, Mar 31, 2023 at 2:13 AM Giovanni Santostasi &lt;<a href="mailto:gsantostasi@gmail.com" target="_blank">gsantostasi@gmail.com</a>&gt; wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">The AI doomers would say, but this is different from everything else because... it is like God. <br></div></blockquote><div><br></div><div>Indeed, and in so doing they make several errors often associated with religion, for example fallacies akin to Pascal's Wager (see: Roko's Basilisk).</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Take Russia, or North Korea. Russia could destroy humanity or do irreparable damage. Why doesn't it happen? Mutual assured destruction is part of the reason.</div></blockquote><div><br></div><div>To be fair, given what's been revealed in their invasion of Ukraine (and had been suspected for a while), it is possible that Russia does not in fact - and never actually did - have all that many functioning long-range nuclear weapons. But your point applies to why we've never had to find out for sure yet.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">One thing is to warn of possible dangers; another is this relentless and exaggerated doomsaying. 
</div></blockquote><div><br></div><div>Those cries, being repeated and exaggerated when the "honest" reports fail to incite the supposedly justified degree of alarm (rather than their authors seriously considering that said justification might in fact be incorrect), get melded into the long history of unfounded apocalypse claims and dismissed on that basis. The Year 2000 bug did not wipe out civilization. Many predicted dates for the Second Coming have come and gone with no apparent effect; new predictions rarely even acknowledge those prior predictions, let alone explain why those proved false while this prediction is different. Likewise for the 2012 Mayan Apocalypse, which was literally just their calendar rolling over (akin to going from 12/31/1999 to 1/1/2000) and may have had the wrong date anyway.</div></div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>
</blockquote></div>