[ExI] for the fermi paradox fans
atymes at gmail.com
Mon Jun 9 04:27:26 UTC 2014
On Sat, Jun 7, 2014 at 10:05 AM, Henry Rivera <hrivera at alumni.virginia.edu> wrote:
> I'm disappointed no one has commented on the story linked to in the second
> post of this thread:
So I'm no one?
Reposting in case my comment didn't go through:
Doesn't quite hold at a critical juncture: intelligences cannot precommit
prior to their existence. They could turn out to be less than perfectly
rational despite being superintelligences, or follow unanticipated chains
of logic (in essence, being more rational than the intelligence speculating
about them).
Parfit's Hitchhiker doesn't quite work either: it does not play out in an
abstract environment, but in the real world, where there may be similar
interactions in the future with the roles reversed.
(Also - milliseconds matter when there are travel times of thousands of
years involved? No plan remains that stable for that long, especially when
it involves exploration of previously unmapped spaces, such as
superintelligences finding out what they can do immediately after first
coming into existence.)