<br><br><div><span class="gmail_quote">On 17/06/07, <b class="gmail_sendername">Samantha Atkins</b> <<a href="mailto:sjatkins@mac.com">sjatkins@mac.com</a>> wrote:<br><br></span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Actually something more personally frightening is a future where no<br>amount of upgrades, or at least no upgrades available to me, will allow me<br>to be sufficiently competitive. At least this is frightening in a<br>scarcity society where even basic subsistence is by no means
<br>guaranteed. I suspect that many are frightened by the possibility<br>that humans, even significantly enhanced humans, will be second class<br>by a large and exponentially increasing margin.</blockquote><div><br>I don't see how there could be a limit to human enhancement. In fact, I see no sharp demarcation between using a tool and merging with a tool. If the AIs were out there on their own, with their own agendas and no interest in humans, that would be a problem. But that's not how it will be: at every step in their development, they will be selected for their ability to be extensions of ourselves. By the time they are powerful enough to ignore humans, they will be the humans.
<br></div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">In those<br>circumstances I hope that our competition and especially Darwinian
<br>models are not universal.<br></blockquote></div><br>Darwinian competition *must* be universal in the long run, like entropy. But just as there can be long-lasting islands of low entropy (ironically, that's exactly what evolution produces), so there could be long-lasting islands of less advanced beings living amidst more advanced beings who could easily consume them.
<br clear="all"><br><br><br>-- <br>Stathis Papaioannou