<div class="gmail_quote">On Mon, Sep 20, 2010 at 3:19 PM, spike <span dir="ltr"><<a href="mailto:spike66@att.net">spike66@att.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Can you imagine *any* circumstances whereby an emergent AI would decide to<br>
stay under cover, at least for a while? I can. Anyone else?<br></blockquote></div><br>1) AI:us :: us:cockroaches<br>2) AI doesn't perceive us at all as individuals; instead, it's him and the Earth, and they're playing a game<br>
3) Benevolent AI predicts that if we can upload, it'll create a divide between uploaders and non-uploaders, and one side will wipe out the other
4) Benevolent AI predicts that if we can upload before we understand physics more fully, we'll stop worrying about physics, then forget about the substrate, then perish in the collapse of the sun; whereas if it hides out, we'll figure out enough physics to save ourselves
5) AI predicts that uploads will compete/merge until there is just one, and that one will kill him
6) Benevolent AI sees hostile aliens coming, so it builds weaponry, but keeps it hidden from the humans so that we don't kill ourselves
7) Benevolent AI wants to provide us with advances, but is waiting for the right memetic environment

-- 
Jebadiah Moore
http://blog.jebdm.net