[ExI] for the fermi paradox fans

Adrian Tymes atymes at gmail.com
Wed Jun 4 18:31:48 UTC 2014

On Jun 4, 2014 8:09 AM, "Aleksei Riikonen" <aleksei at iki.fi> wrote:
> Fitting, that just today I happened to read the coolest explanation
> for Fermi's paradox that I've yet heard:
> http://www.raikoth.net/Stuff/story1.html

Doesn't quite hold at a critical juncture: intelligences cannot precommit
prior to their own existence.  They could turn out to be less than perfectly
rational despite being superintelligences, or follow unanticipated chains
of logic (in essence, being more rational than the intelligence speculating
about them).

Parfit's Hitchhiker doesn't quite work either: it does not take place in an
abstract environment, but in the real world, where there may be similar
interactions in the future with the roles reversed.

(Also - do milliseconds matter when travel times of thousands of years are
involved?  No plan remains that stable for that long, especially when it
involves exploring previously unmapped spaces, such as superintelligences
finding out what they can do immediately after first
