<div dir="ltr">Adrian, <br>Right, everything, even crossing the street has existential risk.<br>The AI doomers would say, but this is different from everything else because.... it is like God. <br>There is some religious overtone in their arguments. This superintelligence can do everything, it can be everything, it cannot be contained, it cannot be understood and if it can get rid of humans it will.<br>In their views AI is basically like God but while the ancient religions made God also somehow benign (in a perverted way), this superintelligent God AI is super focused in killing everybody. <br><br>Their arguments seem logical but they are actually not. We already have bad agents in the world and they already have powers that are superior to that of a particular individual or groups of individuals. For example, nations. Take Russia, or North Korea. Russia could destroy humanity or do irreparable damage. Why doesn't it happen? Mutual Destruction is part of the reason. Same would apply to a rogue AI. <br><br>We know how to handle viruses both biological and digital. We do have to be aware and vigilant but I'm pretty sure we can handle problems as they present themselves. It would be nice to prepare for every possible existential threat but we also did well overall as a species to face the problems when they presented themselves because no matter how well we can prepare, the real problem is never exactly how models predicted. We are good at adapting and surviving. One thing is to warn of the possible dangers, another this relentless and exaggerated doom sayers cries. <br>Giovanni <br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 29, 2023 at 9:27 PM Adrian Tymes via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Wed, Mar 29, 2023 at 8:34 PM Will Steinberg via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I think it's fair to say that haphazardly developing tech that even has possible total existential risk associated with it is bad.</div></blockquote><div><br></div><div>That argument can be extended to anything.</div><div><br></div><div>It's true. Any action you take has a mathematically non-zero chance of leading to the destruction of all of humanity, in a way that you would not have helped with had you taken a certain other action.</div><div><br></div><div>Choose this restaurant or that? The waiter you tip might use that funding to bootstrap world domination - or hold a grudge if you don't tip, inspiring an ultimately successful world domination.</div><div><br></div><div>Wait a second or don't to cross the street? Who do you ever so slightly inconvenience or help, and how might their lives be different because of that?</div><div><br></div><div>Make an AI, or don't make the AI that could have countered a genocidal AI?</div><div><br></div><div>"But it could possibly turn out bad" is not, by itself, reason to favor any action over any other. 
If you can even approximately quantify the level of risk for each alternative, then perhaps - but I see no such calculations based on actual data being done here, just guesswork and assumptions. We have no data showing whether developing or not developing better AI is the riskier path.</div><div><br></div><div>We do, however, have data showing that if we hold off on developing AI, then people who are more likely to develop genocidal AI will continue unchallenged.</div></div></div>
</blockquote></div>