<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Jun 7, 2014 at 10:05 AM, Henry Rivera <span dir="ltr"><<a href="mailto:hrivera@alumni.virginia.edu" target="_blank">hrivera@alumni.virginia.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div>I'm disappointed no one has commented on the story linked to in the second post of this thread: </div>
So I'm no one?

Reposting in case my comment didn't go through:

The argument doesn't quite hold at a critical juncture: intelligences cannot precommit prior to their existence. They could turn out to be less than perfectly rational despite being superintelligences, or follow unanticipated chains of logic (in essence, being more rational than the intelligence speculating about them).
<p dir="ltr">Parfit's Hitchhiker doesn't quite work either: it is not
done in an abstract environment, but in the real world where there may
be similar interactions in the future, with the roles reversed.</p>
<p dir="ltr">(Also - milliseconds matter when there are travel times of
thousands of years involved? No plan remains that stable for that long,
especially when it involves exploration of previously unmapped spaces,
such as superintelligences finding out what they can do immediately
after first activation.)</p></div></div></div></div>