<HTML><BODY style="word-wrap: break-word; -khtml-nbsp-mode: space; -khtml-line-break: after-white-space; "><BR><DIV><DIV>On May 16, 2006, at 1:00 PM, Russell Wallace wrote:</DIV><BR class="Apple-interchange-newline"><BLOCKQUOTE type="cite">On 5/16/06, <B class="gmail_sendername">Samantha Atkins</B> <<A href="mailto:sjatkins@mac.com">sjatkins@mac.com</A>> wrote:<BR> <DIV><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><DIV><DIV style=""><DIV>Surely you jest. Given a &gt;human AI capable of self-improvement, a hard takeoff is at least possible. </DIV></DIV></DIV></BLOCKQUOTE></DIV><BR> "Hard takeoff" as the term has been previously used typically denotes a process in which superintelligent AI is supposed to come about, not something that is supposed to happen after such AI is achieved by other means - what do you understand the term to mean?<BR></BLOCKQUOTE><DIV><BR class="khtml-block-placeholder"></DIV><DIV><BR class="khtml-block-placeholder"></DIV>I understand it to be what happens after such greater-than-human intelligence comes about. I don't see any way to a hard takeoff without this.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>- samantha</DIV><BR class="khtml-block-placeholder"></BODY></HTML>