<br><br><div><span class="gmail_quote">On 17/06/07, <b class="gmail_sendername">Eugen Leitl</b> <<a href="mailto:eugen@leitl.org">eugen@leitl.org</a>> wrote:<br><br></span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
> > I don't see how there could be a limit to human enhancement. In fact,
>
> There could very well be a limit to significant human enhancement;
> it could very well not happen at all. We could miss our launch window,
> and get overtaken.

Yes; I meant no theoretical limit.
> > I see no sharp demarcation between using a tool and merging with a
> > tool. If the AIs were out there on their own, with their own agendas
>
> All stands and falls with availability of very invasive neural
> I/O, or whole brain emulation. If this does not happen the tool
> and the user will never converge.

This direct I/O is not fundamentally different to, say, the haptic sense that allows a human to use a hand tool as an extension of himself, or the keyboard that allows a human to operate a computer. In general, how would an entity distinguish between self and not-self? How would it distinguish between the interests of one part of itself and another? How would it distinguish between one part of its programming and another, if the two are in conflict and there is no other aspect of the programming to mediate? How would it distinguish between the interests of its software and the interests of its hardware, given that the interests of the hardware can only be represented in software (even though there is no real distinction between software and hardware)?

--
Stathis Papaioannou