[ExI] Repeated Experience (was Affecting Past Experience)

Vladimir Nesov robotact at mail.ru
Thu Jul 26 23:45:04 UTC 2007


Friday, July 27, 2007, Lee Corbin wrote:

LC> Vladimir writes


LC> Naturally, I should not worry if I know that the Earth is going
LC> to be thoroughly cooked by a gamma burst 15 minutes from now:
LC> I can't do anything about it. Still, I would consider it an event
LC> worth taking note of, and it definitely would affect my priorities.

(Only as a side effect of a heuristic that attracts attention to
processes that can significantly affect you.) In this case the
discussion is about the fuzzy definition of 'should' in "should worry".

LC> In order to make this a real choice, we have to introduce the
LC> possibility that through strenuous effort (say, for example, 
LC> praying very hard to the OS) you can avert the midlife termination
LC> of the 2nd run.  This, then, brings it back into the normal or usual
LC> range of "worry" (not that it's especially rational).  One would,
LC> for example, worry that one had not done quite enough praying.

That sounds like Pascal's wager, since feedback of this kind wasn't
considered in the original scenario.

>> Objective many worlds perspective is equivalent
>> to subjective reformulation of mind operation in the following terms.
>> Mind is an algorithm that selects an
>> action of an agent, or equivalently mind anticipates an action of an
>> agent, and anticipated action is performed.

LC> This is very hard to follow, sorry.  For one thing, I understand
LC> that "mind" has no equivalent in German.  (That's probably a
LC> very good thing, German metaphysics are already unendurable,
LC> so thank God they never stumbled upon "Mind".  I'm sure you
LC> know how philosophers have spent so much time and killed so
LC> many trees over the Mind/Body problem!)  At any rate, it's
LC> a sign that perhaps the term is not needed, and can be replaced
LC> with other phraseology.

I'm just reinventing the wheel here. The agent is a body which interacts
with the universe; the mind is an algorithmic process running in its 'brain'.
The mind isn't a person, but a framework for anticipation, predicting
among other things processes attributed to the self, in particular the
actions which as a result are executed by the agent (the body). A person
seems to correspond to agent+self.
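
To make this concrete, here is a toy sketch (the names and numbers are mine,
purely for illustration, not a claim about how a real mind works): the mind
holds a measure over the possible outcomes of each candidate action, and the
agent then executes whichever action the mind anticipates to be best.

    # Toy sketch in Python; names and numbers are hypothetical illustration only.
    from typing import Dict

    def select_action(anticipations: Dict[str, Dict[str, float]],
                      value: Dict[str, float]) -> str:
        """anticipations[action][outcome] is the mind's measure that taking
        `action` leads to `outcome`; value[outcome] is how much the agent
        cares about that outcome.  The mind anticipates, and the agent
        (the body) then performs the selected action."""
        def weighted_value(action: str) -> float:
            return sum(measure * value.get(outcome, 0.0)
                       for outcome, measure in anticipations[action].items())
        return max(anticipations, key=weighted_value)

    # Hypothetical example:
    anticipations = {
        "go_left":  {"finds_food": 0.2, "finds_nothing": 0.8},
        "go_right": {"finds_food": 0.7, "finds_nothing": 0.3},
    }
    value = {"finds_food": 1.0, "finds_nothing": 0.0}
    print(select_action(anticipations, value))  # -> "go_right"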

>> Mind also anticipates performance of universe (grounded to senses,
>> whatever). It doesn't know with certainty what will happen, but it

LC> by 'it' I guess you mean you, me, or someone

Since the mind itself is mechanical and without personality, I refer to it
as 'it'.

>> must select a single action for an agent, so it holds a measure over
>> possible states of the universe, selecting an action of agent with
>> greatest measure.

LC> Is this measure over the many-worlds, or over some state-space
LC> in our possibly infinite physical universe?

The general case (which doesn't prohibit assigning zero measure to vast
classes of universes).

>> MWI trick is that performing a quantum suicide experiment is
>> expected under some circumstances to be selected by a rational
>> mind [person?] over not performing an experiment.

LC> I really don't recall debating with anyone recently who held
LC> that quantum suicide is a good idea;  that is, were the options
LC> truly available (and I guess they are) then I don't know anyone
LC> who'd do it.  Hmm.  Of course!  I guess it's not surprising that
LC> I don't know someone like that.

Well, given not-that-bad odds there should be at least some successful
adopters in that case :). But as I see it, the gist of quantum suicide is
that if you are ideally egoistic you shouldn't care about the branches in
which you don't survive.
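
To spell out what I mean by "shouldn't care", here is a toy comparison (the
payoffs are hypothetical, my own illustration of the argument, not anything
from the original scenario): the same gamble looks bad if the branches where
you don't survive are counted, and good if the measure is renormalised over
the surviving branches only.

    # Toy sketch in Python; payoffs are hypothetical illustration only.
    def value_counting_all_branches(p_survive, v_win, v_dead, v_status_quo):
        """Ordinary expected value: branches where the agent dies count
        at v_dead."""
        gamble = p_survive * v_win + (1.0 - p_survive) * v_dead
        return gamble, v_status_quo

    def value_surviving_branches_only(p_survive, v_win, v_dead, v_status_quo):
        """Subjective-survival accounting: only branches with a surviving
        observer enter the expectation, so every experienced future is a win."""
        gamble = v_win
        return gamble, v_status_quo

    print(value_counting_all_branches(0.5, 10.0, -100.0, 1.0))
    # -> (-45.0, 1.0): the gamble loses to doing nothing
    print(value_surviving_branches_only(0.5, 10.0, -100.0, 1.0))
    # -> (10.0, 1.0): the gamble wins, which is the quantum-suicide "trick"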

>> In subjective interpretation of MWI it corresponds to agent
>> having a theory of its mind's operation, so that agent can
>> manipulate decision making procedure of its mind, allowing
>> otherwise irrational decisions.

LC> Although we may have trouble understanding each other here,
LC> I don't usually find "subjective" accounts to be very valuable,
LC> although there are exceptions. It's a lot easier, anyway, to
LC> concentrate on the objective, I think.

When the discussion involves observers it's inevitable... Also, viewing the
same issue from both points of view can help with consistency checking
(which is what I tried to do for MWI from the subjective point of view).

-- 
 Vladimir Nesov                            mailto:robotact at mail.ru



