On 4/16/07, Lee Corbin <lcorbin@rawbw.com> wrote:

> Stathis writes
>
> > > Does anyone have a ... reason that vastly transhuman engines
> > > won't absorb all resources within reach? Even if *some*
> > > particular AI had a predilection not to expand its influence
> > > as far as it could, wouldn't it lose the evolutionary race to
> > > those who did?
> >
> > ...You could apply the argument [that those agents who try to
> > expand their influence over everything they can] to any agent:
> > bacteria, aliens, humans, nanomachines, black holes... ultimately,
> > those entities which grow, reproduce or consume will prevail.
>
> Bacteria can be checked by clean rooms, aliens (like human empires)
> might check each other over interstellar distances, and humans (as
> individuals) are held in check by envy, law, and custom.

Right, but parroting the argument for AIs taking over the world: some bacteria, aliens or humans, due to diversity, would be less subject to these checks, and they would come to predominate in the population, so that after multiple generations the most rapacious entity would eat everything else and ultimately make the universe synonymous with itself. On the other hand, maybe there will be long, long periods of dynamic equilibrium, even between competing species grossly mismatched in intelligence, such as humans and bacteria.
> > However, it might be aeons before everything goes to hell,
> > especially if we anticipate problems and try to prevent or
> > minimise them.
>
> I don't know why you think that this must be "hell". I could
> imagine rather beneficent super-intelligences taking over vast
> areas, checked ultimately by the speed of light, and their own
> ability to identify with far-flung branches of themselves. Some
> of these may even deign to give a few nanoseconds of runtime
> every now and then to their ancient noble creators.

I'm not as worried about the future behaviour of super-AIs as many people seem to be. There is no logical reason why they should have one motivation rather than another. If humans can be concerned about flowers and trees, why can't super-intelligent beings be concerned about humans? After all, we weren't created by flowers and trees to have any particular feelings towards them, whereas we *would* be the ones creating the AIs. And even if some AIs went rogue, that would be no different from what currently happens with large populations of humans.

Stathis Papaioannou