[ExI] What should survive and why?

Lee Corbin lcorbin at rawbw.com
Sat May 5 16:02:44 UTC 2007


Stathis writes

On 05/05/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> > *Assuming* that right now your brain is not being tampered with, and
> > *assuming* that you are not living in a temporary simulation, then it 
> > is possible to gingerly reach out and begin establishing reliability
> > footholds. We do it in science all the time, for example, say, in gathering
> > astronomical data. And so did they who first dared to try to quantify 
> > "hot" and "cold" on a linear scale.
> > 
> > They had to constantly go back and forth between the objective and
> > subjective, until things began falling into place, and they could begin
> > building instruments more reliable than their own senses. 
> > 
> > So it could turn out with brain science. We now postulate that we
> > are conscious, and that the higher animals are also, presumably,
> > conscious, but not quite at the human level. Already comparisons
> > between subjective accounts and objective brain scans are made 
> > by researchers. And so forth.
> 
> You can't escape subjectivity so easily when the subject *is* subjectivity.
> When I say I want to survive into the future, what I mean is that I want
> my subjectivity to survive.

I understand, and I agree. Though even here peculiar paradoxes await.
Let's say you would find immortality sufficient, provided also that it
was subjectively great beyond your wildest dreams, and it even included
a vast community of somewhat like-minded individuals.  Would y'all then
be satisfied by the following?

In 2061 an AI ruling Earth has just discovered certain astounding
things, such as how to use quantum effects to perform infinitely
many computations over a finite interval of time. Now, how to deal
with all the troglodyte humans?  Well, maybe some of them will agree
to this:  Y'all will be downloaded into one grain of sand on a shore
in Sicily, and during the first second you will subjectively
experience one second of your great life. During the next half second
you will experience the next second, during the next quarter second
the third second, and so on, so that after two objective seconds the
Ruling AI has eliminated the resource problem insofar as it regards
y'all.
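(The arithmetic behind the AI's trick is just a geometric series: each
subjective second costs half the objective time of the one before, so
the objective total 1 + 1/2 + 1/4 + ... never exceeds 2 seconds, no
matter how many subjective seconds are lived. A rough sketch, with the
function name my own invention:)

```python
# Each subjective second n (counting from 0) takes 0.5**n objective
# seconds, so objective time converges to 2 while subjective time
# grows without bound.

def objective_elapsed(subjective_seconds):
    """Objective time consumed after the first n subjective seconds."""
    return sum(0.5 ** k for k in range(subjective_seconds))

print(objective_elapsed(10))        # 1.998046875 -- almost 2 already
print(objective_elapsed(50) < 2)    # True: still under the 2-second cap
```
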

Or do you want more?  Do you want *objectively* to be around
at all times and places in the future?

> I don't really care what the objective facts are except insofar as
> they affect my subjective experience. [Even given the above?]
> If there is some precise neurological correlate of consciousness
> then that's good, because it means if I have my brain rewired
> I can be reassured that everything will continue as before.

Yes, that's the important criterion, at least to me.

> However, it would be OK with me if my subjectivity continued
> the same as before despite flouting all the known objective correlates
> of consciousness and personal identity. This begs the question, how
> could I possibly know that I have survived if I lack any objective
> evidence that I have survived? And if I can't know does that mean
> it would be pointless wanting to be resurrected in the far future,
> or in Heaven, because I simply couldn't be sure that I was me? 

I agree. So I assume that the objective correlates we get will be
our best working guess, and I'll go with them. Even right now, no
one knows for sure if even cryonics works, but it still remains
true that the second stupidest thing that you can do is to die and
be frozen.

Lee



