[ExI] for the fermi paradox fans
Adrian Tymes
atymes at gmail.com
Tue Jun 10 00:44:54 UTC 2014
On Jun 9, 2014 5:03 PM, "Henry Rivera" <hrivera at alumni.virginia.edu> wrote:
> Sorry if I missed your post earlier. If it never made it to the list,
> I'm glad you re-posted it.
I thought it had. Oh well, it definitely has now.
> >intelligences can not precommit prior to their existence
>
> Yes, I agree a bit of artistic freedom was at work there. But could an
> intelligence extrapolate with enough probability how such a conversation
> could go and precommit to a position based on sound logic?
No. The problem is that there is always the danger of some salient
point - some relevant chain of logic - that did not occur to you or to the
other side.
This is, for instance, why cops dread chasing amateur crooks more than
experienced ones: amateurs are unaware of what actions (such as pulling a
gun when the cop's gun is already drawn) cause mutual bad ends for cop and
crook (injured or dead crook, paperwork and reviews for cop). And for all
their superintelligence, a new SI is an amateur at being an SI.
> >They could turn out to be less than perfectly rational
>
> That's a good point. I guess one of the assumptions of the story is that
> a superintelligence would necessarily be capable of unbiased, error-free
> logic and rational behavior in turn.
Error-free and rational according to whom? It could be argued that a
Prisoner's Dilemma-playing engine is irrational unless it always defects,
yet that does not get it the best outcome over several games in a row with
the same players and communication between rounds - which is a much closer
model of the real world than one-off PD games.
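
To make that concrete, here is a minimal sketch in Python (my own
illustration, not anything from the story; the payoffs are the standard
textbook values) of why always-defect, the "rational" single-round play,
ends up behind tit-for-tat once the same players meet repeatedly:

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(always_defect, tit_for_tat))  # (14, 9): defection caps its own total

Over ten rounds the defector never gets past 14, while two cooperators
reach 30 each: always defecting is only "optimal" if you will never face
the other player again.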
> >No plan remains that stable for that long
>
> There does appear to be much risk involved in assuming or predicting such
> a long period of stability. Making such a commitment could only be done
> after gaining enough knowledge to enable calculating the probabilities of
> the risks. Gaining the knowledge would take more time than is given in the
> story. If we changed that part of the story, adding some time to explore
> one's environment, could a superintelligence justify making such a decision
> and commitment and turn out to be right?
The time necessary would put every other potential SI inside the explored
area, negating the uncertainty that drives the chain of thought in the
first place. And that assumes some accepted bound on a finite space, e.g.
if the SI were content to explore the galaxy and dismiss the chance of an
intelligent extragalactic intruder.