[extropy-chat] Bully Magnets

Jef Allbright jef at jefallbright.net
Thu Dec 14 18:02:14 UTC 2006


Thanks Rafal.  I very much enjoy the clarity and coherence of your
thinking, which I can't criticize within your intended context.  

You made an interesting (and eye-opening, to me) statement a short while
ago about your scoring very low on self-transcendence.  Studies of human
values place self-transcendence diametrically opposite self-enhancement.
I think we can agree on which way the extropian list tends to lean.

In my case I tend very strongly, almost off the scale, toward
self-transcendence. I highly value what works over increasing scope, far
more than I value what works for my own (nominal) scope.  For this
reason I value systems thinking (but principles more than practice),
rationality (but always in terms of context), moral thinking (but beyond
conventional morality), and human enhancement (but beyond enhancement of
individual self).

It all comes down to values, which can't be directly argued or denied.

[Actually I think that values /can/ be argued in evolutionary terms of
some working better than others over increasing scope, but this appears
to be a value-less argument to those who don't value
self-transcendence.]

Thanks again Rafal. By engaging and responding in your highly rational
style you've helped clarify a disconnect that I've been experiencing for
a long time.

- Jef

 
Rafal Smigrodzki wrote:
> 
> On 12/13/06, Jef Allbright <jef at jefallbright.net> wrote:
> 
>>
>> I understand your point about the rational narrow-context 
>> benefits of promoting hate.
>> Do you understand my point about the moral broader-context 
>> detriments of promoting hate?
>>
>> If so, then how do you rationalize such a discontinuity in 
>> the ethics function over expanding scope?  If this is a
>> general principle of truth, then what general principle
>> determines the dividing line?
> 
> ### I do not recognize expanding the scope of ethics (i.e.
> constructing ethics so as to maximize the size of the in-group, 
> if I understand you correctly) as an independent value. The 
> basis for all of my ethical reasoning is satisfaction of my 
> goals. The inclusiveness of my ethics is then a function of my 
> assessment of the relationship between inclusiveness and 
> satisfaction. The in-group is redefined as needed to achieve 
> optimal satisfaction. This implies that satisfaction 
> trade-offs in the in-group must always be a positive-sum game, 
> or else the group would be redefined so as to exclude some members.
> 
> The dividing line will then exclude those neural networks 
> whose inclusion would reduce my satisfaction. There are no 
> useful (for me) trade-offs between me and snails, which is 
> why snails are not a part of my in-group. Regrettably, there 
> is a certain number of humans who are not members of my 
> in-group, for the same reason.




