[extropy-chat] Fragmentation of computations

Stathis Papaioannou stathisp at gmail.com
Sun Apr 1 08:52:30 UTC 2007


On 4/1/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> >
> But a 10% salary increase at no cost would be worth having. I know, it
> looks like you are living only half as much, but if you can't
> tell the difference and no one else can tell the difference, why not go
> for the salary increase? It would be analogous to the
> situation if teleportation became available. People would initially be
> reluctant to use it, but once they try it and see that they
> feel exactly the same as before - not at all as if they've died and been
> replaced by a copy - they will stop worrying about it. The
> obvious extension of this idea is to increase the zombie proportion so
> that you are only actually conscious, say, for one second in
> any year.
> <
>
> "It looks like you are living only half as much" --- the first part of the
> above statement ---
> seems correct to me. That is what is physically happening to the subject,
> no matter
> what the subject reports. Then you write "but if you [the subject] can't
> tell the difference",
> which shifts to the subjective mode. It is the properties of the
> subjective mode that
> are what this is all about ultimately, but to me it begs the question to
> introduce the
> conclusion so abruptly.


The subjective mode is the important part though, isn't it?  It presents
something of a paradox, because you feel that you are living a full life
when in fact you are not. What about the inverse situation where you are
granted an extra second of life for every second lived, but remember nothing
of that extra second? What about all the extra copies of you in branches of
the multiverse no longer in your potential future? I know that your view is
that copies are selves and it is just a matter of summing the total runtime.
This is a consistent way out of the paradoxes of personal identity raised by
the thought experiments with which we are familiar. My way is to deny that
there is any self persisting through time at all, but accept that there is
an illusion of this due to the way our brains have evolved, and "survival"
consists in maintaining this illusion.

> And then you write "and no one else can tell the difference...".  But I
> think that that
> is false.  I believe that the scientists observing the phenomenon (a
> subject getting
> runtime) determine that there *is* no subject (he doesn't exist as a
> conscious entity)
> during those times in which his states are merely being looked up.  [To
> harp on my
> view of this, why even bother to look them up?  I.e., why move the static
> image
> into a certain register?  Why not leave it on disk or in RAM?  Wouldn't it
> still be
> the same thing?  The sequence still exists.  In fact, why do anything at
> all, since
> the patterns are out there already? But I see below you already jumped to
> the
> freewill/determinism quandary.]


How would the scientists know that there is no consciousness present when
the behaviour is the same? It is like trying to decide if a robot is
conscious.

>      >>
>      It seems quite inescapable that conscious
>      robots could, and shortly will exist, and that it will be possible to
>      take such a program and single-step through its deterministic
>      execution.  And that such a program---either perhaps suffering
>      horribly or gaining a great deal of satisfaction---compels us to make
>      a moral choice.  But if rocks continue to be conscious whether
>      pulverized or not, as does any system that can take on many states,
>      (together with a fantastically loose definition of "system"), then of
>      what special status or value are humans and animals? Is caring for
>      another human being completely inconsequential because either
>      saving them from grief or inflicting grief upon them doesn't change
>      the platonic realities at all?
>      <<
>
> > This question can be applied to most multiverse theories: if everything
> > that can happen does happen, why should we bother doing anything
> > in particular?
>
> Because we increase the measure of favorable outcomes, the fraction
> of universes that develop in a desirable way. (Naturally, from a
> different viewpoint, it's just a machine that can act in no other way
> than it does, and the fraction of multiverses in which, say, I don't
> get killed in an auto accident is fixed.  Nonetheless, I think that I
> ought to drive as safely as I can.)
>
> > Even in a single universe, why should we worry about making
> > decisions when we know that the outcome has already been
> > determined by the laws of physics?
>
> To me,
> a totally deterministic program, say a weather forecasting program,
> has complete free will.  It takes in a huge amount of data, and after
> ruminating on it a long while, "decides" whether it's likely to rain or
> not tomorrow.   But that's all my brain does, too.
>
> So if I am predisposed to have great foresight and minimize my pain
> over the long run, then the measure of the universes that contain
> happy Lees is greater than it "otherwise" would have been.  In
> other words, it's good if I can allow memes such as prudence,
> civility, frugality, etc., to affect me. Or determine me.  Whatever.
>
> Although you probably already have your own explanations,
> to me, the basic error in the asking of such a question---
> like why should we worry about anything or exert ourselves---
> lies in its unconscious assumption that souls are possible, that not
> all events have causes, that there can be somewhere in the
> universe events that are completely uncaused (such as a
> decision that a certain human makes).  Once one has thoroughly
> purged oneself of the idea that such uncaused things can exist, and
> has internalized that we are all machines, only machines, and
> nothing else is conceivable then don't such questions lose their meaning?


The question is meaningless even without true randomness playing a part,
because the common sense view of free will (the sense of it we have when we
aren't concerned with analysing it) is that our decisions are neither
determined nor random, but something else; and there isn't anything else,
even in theory.

Returning to your example of driving to avoid an accident, imagine you are a
being in a Life simulation. You come to a point where you can either slow
down or keep going and run over a pedestrian. You decide to slow down,
because you think that running people over is bad and because you think you
have control over your life. In reality, you could not do other than slow
down: that was determined in the Life universe with the force of a
mathematical proof. Your feeling that you could have acted otherwise is
entirely illusory, as you could no more have changed what was to happen than
you could, through much mental straining, have changed 16 into a prime
number.
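The claim that the Life-being's fate is "determined with the force of a mathematical proof" rests on the fact that the Life update rule is a pure function of the grid state: identical starting configurations always evolve identically. A minimal sketch (the cell coordinates and the "blinker" pattern are just illustrative choices):

```python
# Conway's Game of Life as a pure function on sets of live cells.
# Because step() depends only on its input, every rerun from the same
# initial state produces the same history -- nothing in the Life
# universe "could have done otherwise".

def neighbours(cell):
    """The eight cells adjacent to a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Advance one generation under the standard B3/S23 rule."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# A vertical "blinker" flips to horizontal and back, in every run,
# from every identical starting state.
blinker = {(0, -1), (0, 0), (0, 1)}
assert step(blinker) == {(-1, 0), (0, 0), (1, 0)}
assert step(step(blinker)) == blinker
```

The determinism is visible in the signature alone: `step` takes a state and returns a state, with no randomness and no hidden input, so the entire future of the grid is fixed by its present.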

Stathis Papaioannou