[extropy-chat] Bully Magnets
jef at jefallbright.net
Wed Dec 13 20:17:48 UTC 2006
Rafal Smigrodzki wrote:
> On 12/13/06, Jef Allbright <jef at jefallbright.net> wrote:
>> Hate is a heuristic, most useful in cases of insufficient
> ### I see hate as an important public good, at least until we
> develop truth machines capable of assessing levels of
> commitment to a course of action. A member of your group who
> hates your enemies is less likely to defect when the chips
> come down. It is a simple matter of rational calculation that
> cooperation with haters is under many circumstances likely to
> be more stable than cooperation with wusses.
Hate certainly acts to promote solidarity within a specified group, but
at the same time it draws a sharper line between in-group and out-group.
The hate dynamic tends toward immoral actions because hate motivates
narrower-context decision-making, evaluating consequences over a
narrower scope of possible agents and possible interactions.
I understand your point about the rational narrow-context benefits of
hate. Do you understand my point about the moral broader-context
detriments of hate?
If so, then how do you rationalize such a discontinuity in the ethics
function over expanding scope? If this is a general principle of truth,
then what general principle determines the dividing line?
If we teach our Marines to kill the Gooks in order to win the battle, do
we later reverse their programming somehow when their mission becomes
one of peacekeeping? If we teach that hate is an important public good,
then are we intentionally encouraging others to hate us?
Or should we aspire to develop and spread methods of rational
problem-solving that surpass the simpler methods of our evolutionary
ancestors in their simpler world?
It's the age-old question of whether the ends justify the means. In any
specified narrow context we can argue that they do. But real life is
not a closed context, and with increasing context the question morphs
into whether we value what we have become in order to achieve those
short-term ends.
All paradox is due to insufficient context. In the bigger picture all
the pieces must fit.