<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 5, 2022 at 12:16 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto">Rafal,</div><div dir="auto"><br></div><div dir="auto">You raise many interesting questions.</div><div dir="auto"><br></div><div dir="auto">I think when it comes to motivational questions, a general and self-improving intelligence, will devote some resources to attempt to learn, adapt and grow, as otherwise it will eventually be outclassed by super intelligneces that do these things.</div></div></blockquote><div><br></div><div>### You are correct in the situation where competing AIs exist but the question we tried to address was the relative likelihood of dangerous vs. merely confused AI emerging from our attempts at creating the AI. I suggest that the first AIs we make will be weird and ineffectual but some could be dangerous to various degrees. If the first of these weird or dangerous AIs gains the ability to preempt the creation of additional independent AIs then the outcome could be something vastly different from what you outlined, locked in until an alien AI shows up and this alien AI could be also quite weird. In the absence of competition between AIs there is high likelihood of very unusual, deviant AI, since it is the Darwinian (or Lamarckian) competition that weeds out the weirdos.
One-off creatures are immune to evolution. Eliezer pointed this out on this list decades ago, so I am not saying anything new.

Rafal