[ExI] Superrationality

Lee Corbin lcorbin at rawbw.com
Wed May 7 22:06:29 UTC 2008


Eliezer writes

>>  First, for an algorithm to be of any use here, it would have to
>>  terminate in a finite number of steps and yield a "Y" or an "N".
>>
>>  But your solution sounds reflexive: each player must use the
>>  output of the other player's algorithm as input to his own.
>>  How could it ever get started?
> 
> Oh, that's what they say about *all* self-modification.

It may or may not be accurate to criticize all attempts to
design self-modification as suffering from this defect (I
would guess it's not). But in this specific case, the question
above still stands (see below).
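
A minimal sketch of the regress, assuming each player's "algorithm"
is a function that must take the other player's output as its input.
All names here (decide_a, decide_b, naive_play) are hypothetical,
chosen only to make the non-termination concrete; Python serves as
pseudocode, with "C"/"D" standing in for the "Y"/"N" above:

    import sys

    def decide_a(b_move):
        # Player A cooperates exactly when it expects B to cooperate.
        return "C" if b_move == "C" else "D"

    def decide_b(a_move):
        # Player B applies the same rule to A.
        return "C" if a_move == "C" else "D"

    def naive_play():
        # Each call needs the other's result first, so the mutual
        # recursion never bottoms out: no answer is produced in a
        # finite number of steps.
        def a():
            return decide_a(b())
        def b():
            return decide_b(a())
        return a(), b()

    if __name__ == "__main__":
        sys.setrecursionlimit(100)  # fail fast for the demonstration
        try:
            naive_play()
        except RecursionError:
            print("reflexive definition: never terminates")

Any workable scheme has to break that loop somewhere, for instance
by reasoning about the other program's text instead of running it.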

>>  Even if you were to somehow explain that (good luck),
>>  then the condition I stated is still necessary, to wit,
>>  that the players realize somehow (can explain) that
>>  their behaviors are highly correlated.
> 
> Yes, but in this case a *motive* exists to *deliberately* correlate
> your behavior to that of your opponent, if the opponent is one who
> will cooperate if your behaviors are highly correlated and defect
> otherwise.

Yes, but without a communication channel---which is normally
stipulated to be absent in the NIPD (the non-iterated Prisoner's
Dilemma)---the entities have no way of achieving this cooperation.

> You might prefer to have the opponent think that your
> behaviors are correlated, and then defect yourself; but
> if your opponent knows enough about you to know you
> are thinking that, the opponent knows whether your
> behaviors are really correlated or not.
> 
> I'm thinking here about two dissimilar superintelligences
> that happen to know each other's source code.

Is this much different from the case of two humans who
already know each other pretty well? For example,
Hofstadter 2008 knows Hofstadter 2003 pretty well,
and actually the latter has quite a fair knowledge of
the former. Your AIs know each other's *code* but
they do not know each other's present *state*. Therefore
they cannot confidently complete a model of the other's
behavior, unless "always cooperate with an entity whose
source code I have read and whose source code contains
this statement or its equivalent" is indeed part of the source
code of each. But then, they would no longer have any
option to be affected by other memes, such as the following:

     "What will happen if I Defect, the a post *I* just
      read on Extropians implies I could?"
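
A rough sketch of that "always cooperate with an entity whose source
code contains this statement" rule, assuming the exchange of source
code is a literal exchange of text and that "or its equivalent" is
approximated by a plain substring test; CLAUSE and decide are
hypothetical names:

    import inspect

    CLAUSE = "cooperate with any entity whose source contains this clause"

    def decide(opponent_source):
        # cooperate with any entity whose source contains this clause
        if CLAUSE in opponent_source:
            return "C"
        return "D"

    if __name__ == "__main__":
        my_source = inspect.getsource(decide)
        # Two copies of this agent, each reading the other's source,
        # both find the clause and cooperate:
        print(decide(my_source))                    # -> C
        # An agent whose code lacks the clause gets defection:
        print(decide("def decide(s): return 'D'")) # -> D

Note how brittle the substring test is: "or its equivalent" cannot be
decided in general, since program equivalence is undecidable, which is
another way of putting the difficulty above.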

Isn't the condition I stated, namely, that "the players
realize somehow (and can explain) that their behaviors
are highly correlated", still quite necessary?

Lee



