[ExI] for the fermi paradox fans
Henry Rivera
hrivera at alumni.virginia.edu
Tue Jun 10 00:02:03 UTC 2014
On Mon, Jun 9, 2014 at 12:27 AM, Adrian Tymes <atymes at gmail.com> wrote:
> On Sat, Jun 7, 2014 at 10:05 AM, Henry Rivera <hrivera at alumni.virginia.edu
> > wrote:
>
>> I'm disappointed no one has commented on the story linked to in the
>> second post of this thread:
>>
>
> So I'm no one?
>
> Reposting in case my comment didn't go through:
>
> Doesn't quite hold at a critical juncture: intelligences cannot precommit
> prior to their existence. They could turn out to be less than perfectly
> rational despite being superintelligences, or follow unanticipated chains
> of logic (in essence being more rational than the intelligence speculating
> about them).
>
> Parfit's Hitchhiker doesn't quite work either: it is not done in an
> abstract environment, but in the real world where there may be similar
> interactions in the future, with the roles reversed.
>
> (Also - milliseconds matter when there are travel times of thousands of
> years involved? No plan remains that stable for that long, especially when
> it involves exploration of previously unmapped spaces, such as
> superintelligences finding out what they can do immediately after first
> activation.)
>
>
Sorry if I missed your post earlier. If it never made it to the list, I'm
glad you re-posted it.
>intelligences cannot precommit prior to their existence
Yes, I agree a bit of artistic freedom was at work there. But could an
intelligence extrapolate, with sufficient confidence, how such a conversation
would go and precommit to a position based on sound logic?
>They could turn out to be less than perfectly rational
That's a good point. I guess one of the story's assumptions is that a
superintelligence would necessarily be capable of unbiased, error-free
logic and, in turn, rational behavior.
>No plan remains that stable for that long
There does appear to be considerable risk in assuming or predicting such a
long period of stability. Such a commitment could be made only after gaining
enough knowledge to calculate the probabilities of the risks, and gaining
that knowledge would take more time than the story allows. If we changed that
part of the story, adding some time to explore one's environment, could a
superintelligence justify making such a decision and commitment and turn out
to be right?
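The trade-off in that last paragraph can be put as a toy expected-value
check. This is only a sketch, and every probability and payoff in it is
hypothetical, not anything from the story:

```python
# A minimal sketch (all numbers hypothetical): an agent weighing whether a
# long-horizon precommitment is justified in expectation.

def precommit_justified(p_stable, payoff_commit, payoff_defect):
    """Return True if committing beats not committing in expectation.

    p_stable:      estimated probability the plan stays stable over the horizon
    payoff_commit: payoff if the commitment holds for the whole horizon
    payoff_defect: payoff from never committing at all
    """
    expected_commit = p_stable * payoff_commit
    return expected_commit > payoff_defect

# Over a horizon of thousands of years, the stability estimate dominates:
# even a large payoff is discounted away if p_stable is tiny.
print(precommit_justified(p_stable=0.99, payoff_commit=100, payoff_defect=90))  # True
print(precommit_justified(p_stable=0.10, payoff_commit=100, payoff_defect=90))  # False
```

The point of the sketch is just that the decision hinges on estimating
p_stable well, which is exactly the knowledge the story doesn't give its
superintelligences time to gather.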