[ExI] Next moment, everything around you will probably change
lcorbin at rawbw.com
Fri Jun 22 04:59:21 UTC 2007
> On 21/06/07, Lee Corbin <lcorbin at rawbw.com> wrote:
>> I admit that there is irony in the situation of a person or
>> program trying to destroy instances that are identical to
>> itself, even though it has been programmed to safeguard
>> "its own existence". But I consider the programs or persons
>> acting in such a fashion to simply be deeply mistaken.
>> All *outside* observers who are much less biased see them as
>> identical. Why aren't they identical? Why should we view them
>> as separate *people* or separate *programs* just because they're
>> at each other's throats?
> Interesting question.
> Perhaps they are quite literally at each other's throats *because*
> the context in which they are embedded lacks the default
> mechanisms to distinguish them in some formal way.
<META> A classic case of a single topic sentence that desperately
wants a follow-up sentence saying almost the same thing in different
words so as to remove ambiguity in the reader's mind </META>
Could you repeat the question? :-)
> From a game theoretical standpoint this seems to make sense,
> since the infrastructure in which we are embedded normally
> provides for some significant resource controls that guard our
> individual interests. In a system that cannot distinguish the
> difference between two agents,
I'm groping for an example here.... how about the law and the
courts for a "significant resource that guards our individual
interests"? Then you are perhaps saying that if somehow the
law could not distinguish two people (as in many works of
fiction)...
> then they are in full competition for all their shared resources.
> Sure, they may decide to cooperate, but the normal
> mechanisms that would enforce cooperation to some
> degree would be reduced or entirely lacking, allowing
> those agents to violate agreements between themselves
> without consequence.
Yes (and maybe my example works too). If you have a book
or movie in which two duplicates could for some reason
never go to the authorities about their condition, and all
the rest of society could not distinguish between them, then
I guess it would be just as you say: it could be an all-out
no-holds-barred game between them.
Even their "reputations" would not limit the deceit that they
could practice on each other, nor any other kind of foul play.
> I'd be wary of interacting with my twin if he wouldn't partially
> and voluntarily limit his ability to outright violate our agreements,
> and iterating this across the surface area of our ongoing
> interactions seems to amount to rebuilding the defunct
> capabilities normally provided by the infrastructure of society.
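The point about iterated interactions rebuilding the missing enforcement can be sketched in a toy repeated Prisoner's Dilemma: a retaliatory strategy like tit-for-tat makes breaking agreements costly over time, doing privately what courts and reputations would otherwise do. The payoff numbers and strategies here are illustrative assumptions, not anything from the discussion above.

```python
# Sketch: iterated interaction as a substitute for external enforcement.
# Standard (assumed) Prisoner's Dilemma payoffs: (row score, column score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=20):
    """Run a repeated game; each strategy sees the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

# Tit-for-tat: cooperate first, then mirror the opponent's last move.
tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

# Two agents who punish defection sustain cooperation between themselves:
print(play(tit_for_tat, tit_for_tat))    # (60, 60)
# An unconditional defector gains once, then forfeits the surplus:
print(play(tit_for_tat, always_defect))  # (19, 24)
```

In the one-shot case the defector comes out ahead, but over twenty rounds both retaliators end up far better off than the pair containing a defector, which is roughly the "rebuilt infrastructure" being described.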
Now I assume that you are focusing on the case (like Jef did)
of the peculiar situation wherein for some reason you don't
know how your twin would behave. After all, you *are*
your twin in the sense of being composed in exactly the same
way, and being exactly similar. So you should already know
whether or not your twin will behave himself (all you have to
ask yourself is how you would behave towards your duplicate).
> After providing for some enforceable contractual safeguards
> to eliminate mutual vulnerabilities in the zero-sum resource
Assuming that you found some way to do this
> I don't see why tightly bound cooperation wouldn't be highly
> likely toward the production of non-zero-sum gains.
> Does this make sense?
Yes to both. Tightly bound cooperation would ensue even
between these (in my eyes) pathological creatures who refused
to cooperate even with their identical counterparts, because
of the enforceable contractual safeguards you mention. (Speaking
of pathology, Raymond Smullyan once said that in a non-iterative
Prisoner's Dilemma, he would not cooperate EVEN WITH a
mirror image of himself!!)
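Smullyan's stance is at least internally consistent with the one-shot game's logic: with the usual payoff ordering, defection strictly dominates regardless of what the other player does. A minimal sketch, with assumed illustrative payoff values:

```python
# One-shot Prisoner's Dilemma, row player's payoffs with the
# standard ordering T > R > P > S (values here are assumptions).
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_response(opponent_move):
    """The move maximizing the row player's payoff against a fixed
    opponent move."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)])

# Defection is the best response to either move, i.e. dominant:
assert best_response("C") == "D"
assert best_response("D") == "D"
```

The mirror-image case is odd precisely because dominance reasoning ignores the correlation between the two players' choices: if your counterpart is guaranteed to choose as you do, only (C, C) and (D, D) are reachable, and cooperation then pays more.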
Your stipulation of "eliminating mutual vulnerabilities" may
be a bit of overkill, though.