<p dir="ltr">On Jun 4, 2014 8:09 AM, "Aleksei Riikonen" <<a href="mailto:aleksei@iki.fi">aleksei@iki.fi</a>> wrote:<br>
> Fitting, that just today I happened to read the coolest explanation<br>
> for Fermi's paradox that I've yet heard:<br>
><br>
> <a href="http://www.raikoth.net/Stuff/story1.html">http://www.raikoth.net/Stuff/story1.html</a></p>
<p dir="ltr">Doesn't quite hold at a critical juncture: intelligences cannot precommit prior to their own existence. They could turn out to be less than perfectly rational despite being superintelligences, or follow unanticipated chains of logic (in essence being more rational than the intelligence speculating about them).</p>
<p dir="ltr">Parfit's Hitchhiker doesn't quite work either: it does not take place in an abstract environment, but in the real world, where there may be similar interactions in the future with the roles reversed.</p>
<p dir="ltr">(Also: do milliseconds matter when travel times run to thousands of years? No plan remains stable for that long, especially one that involves exploring previously unmapped spaces, such as a superintelligence discovering what it can do immediately after first activation.)</p>