<HTML><BODY style="word-wrap: break-word; -khtml-nbsp-mode: space; -khtml-line-break: after-white-space; "><BR><DIV><DIV>On May 17, 2006, at 1:34 PM, Russell Wallace wrote:</DIV><BR class="Apple-interchange-newline"><BLOCKQUOTE type="cite">On 5/17/06, <B class="gmail_sendername">Samantha Atkins</B> <<A href="mailto:sjatkins@mac.com">sjatkins@mac.com</A>> wrote:<BR> <DIV><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV>You seem to be confusing the Singularity with some particular extremely utopian or dystopian outcomes. I am not using the word in that sense, and I don't think many others in this thread are either. I don't expect a deity to manifest on Earth as part and parcel of the Singularity.</DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> Eliezer effectively does, though, and it's his view that I was arguing against here.<BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV>Actually, I think he abandoned his Sysop-type positions quite some time ago, and even that was no deity. What I was referring to was the Singularity as what comes of the emergence of >human intelligence (Vinge's singularity), as opposed to a variety of other notions. 
</DIV><DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> <BR> To recap, my position is:<BR> <BR> 1) The Singularity is a fine way of thinking about the distant future, a state of affairs we may ultimately reach.<BR> <BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV>>human intelligence is arguably not at all "distant".</DIV><DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> 2) However, we are not close enough to be able to make meaningful, specific predictions about questions like what society will be like during and after the Singularity.<BR> <BR> 3) Therefore it is not good to turn it into a political football at this time, because we lack the data to base a political debate on fact and reason, and such a debate based on conjecture and emotion is likely to be counterproductive.<BR> <BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV><DIV>Haven't I already agreed, a good way back, that AGI regulation would be quite wrong at this time? Are you attempting to insist that I agree with your reasoning and means (denial of the Singularity) for opposing it? If so, you can save yourself the carpal tunnel. It ain't gonna happen. :-)</DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> There are various memes floating around for versions of the idea that the Singularity can be achieved on a short timescale by currently known processes. I'm arguing that none of these are realistic; the post you're replying to here was the one in which I presented my reasons for believing that Eliezer's version involving recursive self-improvement, in particular, is unrealistic. I do this reluctantly, since when I studied the question of hard takeoff it was not with a view to disproving it. 
But I find the conclusion inescapable nonetheless.<BR> </DIV><BR><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV><SPAN class="q"><BLOCKQUOTE type="cite">Hard takeoff = a process by which seed AI (of complexity buildable by a team of very smart humans in a basement) undertakes self-improvement to increase its intelligence, without needing mole quantities of computronium, _without needing to interact with the real world outside its basement_, and without needing a long time; subsequently emerging in a form that is already superintelligent.<BR> <BR></BLOCKQUOTE><DIV><BR></DIV></SPAN></DIV><DIV><DIV>Who says this will happen with no interaction with the rest of the world on the part of the seed AI?</DIV></DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> Eliezer has said it can so happen, and you yourself support this conjecture in the next few sentences:<BR> </DIV><BR><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV><DIV>As you know, much has been written about the difficulty of keeping the AI sufficiently isolated from the world. But I see no reason to consider self-improvement, or even really clever hacking, insufficient to reach super-intelligence in that "basement" with just the corpus of information reachable in read-only mode from the internet and reasonably good self-improving code. The latter is, imho, key if in fact humans are incapable of designing the components of a super-intelligence.</DIV></DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> The concept of "self-improving code" is one of the big pitfalls in reasoning about this area (the other being the concept of "intelligence" as a formal property of a collection of bits). 
Let's stop to dissect it for a bit.<BR> <BR> When we think about self-improving code, the image that always comes to mind (not just mine; this is true of the examples people post when the idea is discussed) is that of tweaking code to produce the _same output_ using fewer CPU cycles or bytes of memory.<BR> <BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV>Not true. Improvement is not limited to getting the same answer faster; it includes getting better answers in some measurable way, as well as other things such as better learning and abstraction from inputs. </DIV><DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> I emphasize those words because they are the crux of this issue. Improving an AI's performance isn't about producing the same output faster, it's about producing _different output_ from the same input. In other words, it's not just about changing the code, but changing the _specification_, and that's a completely different thing. As you know yourself, changing the specification with a reasonable assurance that the result will be an improvement isn't the sort of thing that can be done by a smart compiler. It's something that requires domain (not just programming) expertise, and real-world testing - which is what I've been saying.<BR> </DIV><BR></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV>If it can be done by any intelligence, such as a biological human, then it can be done by other comparably competent intelligences. I was not talking about your constrained notion of self-improvement above, which I believe was obvious from my earlier post.</DIV><DIV><BR><BLOCKQUOTE type="cite"><DIV><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV><DIV>Why do you think computronium is required? It was not required to get to human intelligence, obviously.</DIV></DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> Yes it was. 
A single human brain contains more than an exaflop's worth of fault-tolerant, self-rewiring nanotech computronium (just looking at the neurons alone, never mind the as-yet-unquantified contributions of the glial cells, peripheral nervous system, and rest of the body). And getting to human intelligence took a large population of such entities over millions of years of interaction with, and live testing in, the real world.<BR> <BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV><DIV>There is nowhere to go if you are going to put forth such meaningless arguments as claiming that human intelligence depends on computronium. </DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> Since we are not restricted to blind Darwinian evolution, it shouldn't take us millions of years to create AI; but the requirement for interaction with the real world isn't going to go away.<BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV>Any kind of interaction is sufficient for learning that depends on interaction. It need not be with the "real" world. But that was hardly the point, was it? </DIV><DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> </DIV><BR><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV><SPAN class="q"><BLOCKQUOTE type="cite"> 1) I've underlined the most important condition, because it exposes the critical flaw: the concept of hard takeoff assumes "intelligence" is a formal property of a collection of bits. It isn't. It's an informal comment on the relationship between a collection of bits and its environment.<BR></BLOCKQUOTE><DIV><BR></DIV></SPAN></DIV><DIV>Huh? 
It is you who posited a completely isolated environment that cannot be breached.</DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> Actually the concept has been floating around over on SL4 since long before I joined the list; I was just summarizing.<BR> </DIV><BR></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV><DIV>Actually, SL4 has talked a lot about such isolation as being impossible to maintain indefinitely, and as a safety measure rather than a design criterion. </DIV><DIV><BR class="khtml-block-placeholder"></DIV><BR><BLOCKQUOTE type="cite"><DIV><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV>An initial environment does not have to include the entire world in order for the intelligence to grow.</DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> Sure. That doesn't change the fact that the AI will depend on an environment, and the rate at which it learns will depend on the rate at which it can do things in that environment. The reason I'm emphasizing this is to refute Eliezer's idea that the AI can learn at the rate at which transistors switch between 0 and 1, independent of the real world.<BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV>I find your argument on this weak.</DIV><DIV><BR><BLOCKQUOTE type="cite"><DIV><DIV> </DIV><BR> <BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV><DIV>Yes and no. What is the real-world constraint on building and testing software?</DIV></DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> The hard part is the specification: knowing what output your program should be producing in the first place. <BR> </DIV><BR></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV><DIV>This was not the point of the question. 
Given a set of desirability criteria and the ability to self-modify and test the results, it is possible to optimize the system. The desirability criteria can be quite broad and include things like extracting valid information and abstractions from inputs. </DIV><DIV><BR class="khtml-block-placeholder"></DIV><BR><BLOCKQUOTE type="cite"><DIV><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV><DIV>Yes, if you want to solve engineering problems you eventually need to get beyond the design tools and simulation, but these are still highly critical. Actually building and testing a new solution can be done by human beings from the design the AI came up with. Where is the crucial problem?</DIV></DIV></DIV></DIV></BLOCKQUOTE><DIV><BR> Design tools and simulation are critical, yes, and actually building and testing solutions by humans is also critical (I don't expect an early-stage AI to be able to effectively control robots).<BR> <BR> As long as these things are available, there is no crucial problem. Please bear in mind that I'm not saying strong AI isn't possible - on the contrary, I believe it is. I'm just listing the requirements for it.<BR> <BR></DIV></DIV></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV><DIV>OK. I still don't see these requirements (the ones I agree with) taking more than a decade or two. </DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>- samantha</DIV><DIV><BR class="khtml-block-placeholder"></DIV><BR></DIV></BODY></HTML>