<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 21, 2023 at 2:24 PM Keith Henson via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">The LLM of AI chatbots are trained on what humans know. They don't<br>
seem any more dangerous to me than exceptionally well-informed humans.<br>
<br>
Of course, I could be wrong.<br></blockquote><div><br></div><div>Nick Bostrom in "Superintelligence" defines three possible dimensions of superintelligence:</div><div><br></div><div>1. Speed</div><div>2. Depth</div><div>3. Breadth</div><div><br></div><div>You have identified that LLMs, having read everything on the Internet, every book, every Wikipedia article, etc., already vastly outstrip humans in breadth of knowledge. I think another dimension in which they already outpace us is speed. They can compose entire articles in a matter of seconds, where a human might take hours or days. This is another avenue by which superintelligence could run away from us (if a population of LLMs could drive technological progress forward 100 years in one year's time, then humans would no longer be in control, even if no individual LLM is smarter than a human engineer).</div><div><br></div><div>I think depth of reasoning may be one area where the best humans are still dominant, but a small tweak, such as giving LLMs a working memory and recursion, along with tightening their ability to chain multiple deductive steps, could quickly change this.</div><div><br></div><div>Jason</div></div></div>