[ExI] Superrationality

Eliezer Yudkowsky sentience at pobox.com
Wed May 7 09:03:43 UTC 2008


On Wed, May 7, 2008 at 12:56 AM, Lee Corbin <lcorbin at rawbw.com> wrote:
> Eliezer writes
>
>  > Your condition is sufficient but not necessary.  It suffices for the
>  > players to know each other's algorithms.
>
>  That seems to me to fail.  First, for an algorithm to be of any
>  use here, it would have to terminate in a finite number of steps
>  and yield a "Y" or an "N".
>
>  But your solution sounds reflexive: each player must use the
>  output of the other player's algorithm as input to his own.
>  How could it ever get started?

Oh, that's what they say about *all* self-modification.
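
To make Lee's regress concrete, here is a toy sketch of my own
(Python, hypothetical names, not anything from the thread) in which
each player naively runs a full simulation of the other before moving.
Neither simulation can ever bottom out, so the interpreter's recursion
limit is the only thing that stops it:

    def player_a(opponent):
        # Naively run the opponent in full to learn its move first.
        their_move = opponent(player_a)
        return "C" if their_move == "C" else "D"

    def player_b(opponent):
        # Symmetric: also waits on a complete simulation of the opponent.
        their_move = opponent(player_b)
        return "C" if their_move == "C" else "D"

    try:
        player_a(player_b)
    except RecursionError:
        print("infinite regress: neither simulation ever terminates")

This is exactly the failure Lee describes; the interesting question is
what structure lets the loop bottom out.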

>  Even if you were to somehow explain that (good luck),
>  then the condition I stated is still necessary, to wit,
>  that the players somehow realize (in a way they can
>  explain) that their behaviors are highly correlated.

Yes, but in this case a *motive* exists to *deliberately* correlate
your behavior with that of your opponent, if the opponent is one who
will cooperate if your behaviors are highly correlated and defect
otherwise.  You might prefer to have the opponent merely *think* that
your behaviors are correlated, and then defect yourself; but if your
opponent knows enough about you to know you are thinking that, the
opponent knows whether your behaviors are really correlated or not.
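
One standard way to cash out "cooperate iff our behaviors are
correlated" (a program-equilibrium sketch of my own, not something
claimed in the thread) is to condition cooperation on literal
inspection of the opponent's source: cooperate only with an exact copy
of yourself.  An agent that merely *looks* correlated but plans to
defect necessarily has different source, so it gets defected against:

    import inspect

    def cliquebot(opponent):
        # Cooperate iff the opponent's source text is identical to ours.
        # (inspect.getsource needs the code to live in a file, not a REPL.)
        if inspect.getsource(opponent) == inspect.getsource(cliquebot):
            return "C"
        return "D"

    def impostor(opponent):
        # Hopes to pass as correlated, then defect -- but its source differs.
        return "D"

    print(cliquebot(cliquebot))  # C: playing against a true copy of itself
    print(cliquebot(impostor))   # D: the would-be defector is found out

Note where this terminates: no simulation happens at all, only a
syntactic check, which is why the regress never starts.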

I'm thinking here about two dissimilar superintelligences that happen
to know each other's source code.
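
For *dissimilar* programs a syntactic check is useless, but here is
one toy way (again my own illustrative sketch, with made-up names) to
let mutual simulation bottom out: give each nested simulation a
dwindling budget, with an optimistic base case.  Two syntactically
different players can then verify each other's conditional cooperation
and both play C:

    def careful_bot(opponent, depth=3):
        if depth == 0:
            return "C"  # out of budget: assume cooperation at the base
        # Simulate the opponent with a smaller budget and mirror its move.
        return "C" if opponent(careful_bot, depth - 1) == "C" else "D"

    def wary_bot(opponent, depth=3):
        if depth == 0:
            return "C"
        # Different source, same conditional-cooperation spirit.
        return opponent(wary_bot, depth - 1)

    print(careful_bot(wary_bot))  # C: the recursion bottoms out in cooperation

The optimistic base case is of course exploitable by a flat defector;
more serious versions of this idea replace the depth cutoff with
reasoning about the opponent's code rather than brute simulation.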

-- 
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence


