[extropy-chat] Self-preservation and unyielding belief

Jef Allbright jef at jefallbright.net
Sun Dec 3 07:21:15 UTC 2006


Lee Corbin wrote:
>>> You should make it clear that this is only a 
>>> conjecture on your part that such a form of
>>> collaborative social decision-making exists or 
>>> will exist. Yes, it's probably tied up with
>>> identity, but I've taken a stand on what I
>>> mean by identity, and so long as I'm Lee Corbin, 
>>> that view, which I've held since 1966, is not
>>> going to change. I'll turn into someone else
>>> the day it does.
>> 
>> Lee, I appreciate the frankness and clarity of
>> this last paragraph in which you have stated
>> the main reason why such discussion tends not to 
>> proceed.
>>
>> Self-preservation is the root of much unyielding belief.
> 
> Again, just how the hell do you think that people are going 
> to fall completely over to your views in real time?  It almost
> *never* happens, and I will go so far to say that indeed it 
> does never happen on any deep issue at all. 


The metaphor I most often use is of planting seeds of thought that may
occasionally flourish.


> As for my words above, apparently you simply have failed to 
> understand--despite all my effort--what I mean by identity 
> and how unconnected with relatively peripheral issues such as 
> morality it is. 

My understanding of your view of identity (in rough overview) is that it
has everything to do with the idea of your personal survival into the
indefinite future.  That the most important element of your self is your
memories (your current memories, right?) and that if a suitable
computational substrate implementing these memories were given runtime
(the more the better) to process experience in a Lee Corbin-esque way,
effectively to be aware of the ongoing experience of self (relative to
those preserved memories) in a future setting, you would feel
satisfied that survival had been accomplished.  Further, if multiple
copies, each with its own runtime, were to exist, you would be even more
satisfied due to the "increased measure" of your personal identity in
active existence.

Just a few weeks ago you said that my summary of your position on
personal identity was excellent, the best you had ever seen, or some
similar words.  Here I have provided a similar synopsis, but with more
words and thus more potential points of failure.  If you please, let us
know where you agree or would make changes to more clearly state the
essentials of your position as concisely as possible.

I see self-identification over expanding scope as absolutely essential
to a rational description of morality, but no need to explain that again
here and now.


> All my examples of what ORDINARY people mean 
> when they stare death in the face seem to have utterly no 
> effect on you. 


As discussed, popular thinking on these issues will be challenged
dramatically by anticipated technological developments.  Our present
common-sense notions are biased by our linguistic idioms, cultural
patterns, and a very strong evolved drive to maintain homeostasis.
These biases lead to incoherence and breakdown when extrapolated beyond
present common usage.  

You want to discuss ORDINARY on this list?  Seriously, I don't know what
effect you were looking for.  I take it for granted that we share such
common understanding and try to keep my focus on the leading edge when
posting in this public forum. 


> As I mentioned offlist, do you think that 
> Nathan Hale believed that *he* was going to survive when he 
> said "I regret that I have but one life to give for my country?".


The issue offlist concerned an idealized agent, and which is more
fundamental to its decision-making:  its values or its survival.
My point has always been that survival of an entity is just one of its
many values, and thus values are more fundamental.  I actually thought
we had achieved understanding and agreement on this until today, when
you said something to the effect that survival and other values are like
apples and oranges.  I can understand that someone might think that
survival is essential to promoting the other values, but it is easy to
show that while commonly true today, it isn't true in general.  Values,
such as Nathan Hale's, can be promoted into the future quite
independently of the agent's existence in that future.

With regard to Nathan Hale, I think your point may be to ask whether I
think he is in some sense "surviving" by promoting his values, perhaps
in the same sense that we say a great artist lives on through his works
or a parent lives on through his children.  I have never taken such a
position because I think it is a very narrow and distorted sense of the
concept of personal survival.

Where we really seem to disagree is where I say that survival without
change is not possible even in principle within an open coevolutionary
environment.


> 
> You also have a distressing tendency not to answer direct 
> questions of the above kind.

My children have often told me the same thing.  So, as simply as I can,
with regard to Nathan Hale, I would suppose that he did not expect that
*he* was going to survive.  Please let me know where you want to go with
that.

- Jef



