<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Jun 9, 2014 at 12:27 AM, Adrian Tymes <span dir="ltr"><<a href="mailto:atymes@gmail.com" target="_blank">atymes@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Jun 7, 2014 at 10:05 AM, Henry Rivera <span dir="ltr"><<a href="mailto:hrivera@alumni.virginia.edu" target="_blank">hrivera@alumni.virginia.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div>I'm disappointed no one has commented on the story linked to in the second post of this thread: </div>
</div></blockquote><div><br></div><div>So I'm no one?<br><br>Reposting in case my comment didn't go through:<br><p dir="ltr">Doesn't quite hold at a critical juncture: intelligences cannot precommit prior to their existence. They could turn out to be less than perfectly rational despite being superintelligences, or follow unanticipated chains of logic (in essence, being more rational than the intelligence speculating about them).</p>
<p dir="ltr">Parfit's Hitchhiker doesn't quite work either: it takes place not in an abstract environment but in the real world, where there may be similar interactions in the future with the roles reversed.</p>
<p dir="ltr">(Also: do milliseconds matter when travel times of thousands of years are involved? No plan remains that stable for that long, especially when it involves exploring previously unmapped spaces, such as superintelligences finding out what they can do immediately after first activation.)</p></div></div></div></div>
<br></blockquote><div> </div></div><div>Sorry if I missed your post earlier. If it never made it to the list, I'm glad you re-posted it.<br><br>>intelligences cannot precommit prior to their existence<br><br></div>Yes, I agree a bit of artistic freedom was at work there. But could an intelligence extrapolate, with enough probability, how such a conversation would go, and precommit to a position based on sound logic?<br><br>>They could turn out to be less than perfectly rational<br><br></div><div class="gmail_extra">That's a good point. I guess one of the assumptions of the story is that a superintelligence would necessarily be capable of unbiased, error-free logic and, in turn, rational behavior.<br>
</div><div class="gmail_extra"><div><br>>No plan remains that stable for that long<br><br></div><div>There does appear to be considerable risk in assuming or predicting such a long period of stability. Such a commitment could only be made after gaining enough knowledge to calculate the probabilities of the risks, and gaining that knowledge would take more time than the story allows. If we changed that part of the story, giving the superintelligence some time to explore its environment, could it justify making such a decision and commitment, and turn out to be right?<br>
</div>
</div></div>