[extropy-chat] Playing Go & demandingness in ethics
Samantha Atkins
sjatkins at mac.com
Thu Jan 5 20:24:22 UTC 2006
On Jan 5, 2006, at 6:54 AM, Jef Allbright wrote:
>
> This question appears to be the same as what I asked myself when I was
> about eight years old, had just finished reading the Bible and was
> trying to reconcile the Christian words and teachings (in which I was
> immersed) with easily apparent Christian hypocrisy.
>
The teachings are full of contradictions, invalid assumptions and
fairy tales. It is not possible or desirable to live fully in
accordance with them.
> And I eventually came to understand that people have only limited
> awareness of themselves and their relationship to others, and to
> compound the matter, they have only limited motivation to increase
> their awareness, or their awareness of their awareness, and so on.
Human beings have limited abilities and intellect. Only when we
think of humans as some nearly infinite "soul" housed in flesh do we
wonder why we don't do so much better. We did not evolve to do
better in every way that someone may claim or feel we should, or
that someone thinks would be desirable.
>
> It became clear to me that while a completely objective, big-picture
> view of any particular domain will facilitate rational decision-making
> (and thus behavior) within that domain, in the real world there is
> always a larger possible domain within which, even under the best
> conditions, we must operate under conditions of incomplete
> information.
Complete objectivity and complete knowledge are mystical fantasies.
> Consistency is the ultimate measure of morality, but we
> can only approach and never reach that level of objectivity (and
> gladly so, because then our subjective values would be nil.)
Who says that consistency is the ultimate measure of morality?
Consistency within what context and acknowledged limitations? What
sort of morality? I don't think 2000+-year-old prescriptions, drilled
into us long before we could intellectually resist memetic plaques,
qualify as any sort of morality that rational people should be
worrying themselves about.
>
> Later it became clear to me that even the Self, which is assumed to be
> evaluating and making these rational decisions, doesn't exist in the
> discrete and independent sense that is assumed by most people raised
> within western culture.
The capital-S "Self" is a mystical fantasy. It doesn't exist.
>
> So, in objective terms, as systems operating within the physical
> universe, our behavior certainly is consistent, but in terms of
> subjective agents, operating with only an approximate internal model
> of reality, our behavior is quite naturally inconsistent.
>
Yes! You see it, at least in part. Our models are approximations
with limits to their accuracy and correctness. We are creatures with
limits to our thinking, modeling and understanding of ourselves and
everything else. We are evolved creatures with many proclivities
that are relatively difficult to address. It is amazing we have as much
self-control over our behavior as we do. Can most of us do better
than we do? Probably. But the means to do so are not so obvious as
just deciding we should do X rather than Y. There is a bit more to
it than that.
> I hope the foregoing provides a useful response to your question about
> why people do not meet the demands of their professed ethics.
>
> To your second question, "should and can we do something to change,
> acting more in accord with the demands of our ethics?", I would point
> out that for the last four or so years I have been something of a
> broken record on this very subject.
>
> It is disturbing to see intelligent thinkers simply assume that giving
> to charity is a fundamental good, that all humans or all "sentient
> beings" possess equal moral value, or that "rights" are somehow
> inherent in the structure of our world.
>
Hear, hear on most of that. I do think the concept of natural rights,
defined as what is essential to the full use of the human mind,
our main enabler of survival and advancement, has some validity.
> These values and the actions they imply, like all others, are "good"
> to the extent that they work to promote increasingly shared values
> into the future.
Why is it important that the values be shared? Particularly as some
humans seriously augment, it is very unlikely that most humans will
even understand >human values in any real depth, much less share them.
> Charity, for example, is seen as a fundamental good
> because altruism is woven deeply into the fabric of our culture and
> our genes -- because it tends to work, and our values have been
> thereby shaped.
It is not a fundamental or unlimited good. It is an approximate good
in limited circumstances and context. Much evil has been done in the
name of "altruism". Sometimes it seems to me that much evil would be
avoided if people weren't so keen to pull on the mantle of "saving"
or "uplifting" all of humanity. Ask first whether what you are doing
is a good and compelling thing that you would do for yourself and
those who understand it. Ask if doing it for that limited group is
compelling enough to get you to take action. If it is not, then
dressing it up as being for all of humanity will not improve it or
even necessarily energize you. It will simply make it vastly more
dangerous. It is much more difficult to remain honest when out to
save the world.
> Given a different environment of evolutionary
> adaptation -- or an imminent environment of rapid technological change
> -- charity may be seen and evaluated differently.
Yep.
>
> Certain values become increasingly shared because they work, meaning
> they persist and grow. These increasingly shared values are the basis
> of our morality, what we tend to agree is good.
>
This seems to say that certain values persist and grow because they
persist and grow, and that we agree the ones that persist and grow
are good. But this is surely not a valid way to determine what is
good. It is an average across humanity with its current limitations.
> Certain actions are considered good, to the extent that they promote
> our increasingly shared values.
>
> What can we do? We can continue to build a framework of social
> decision-making that increases our awareness of our increasingly shared
> subjective values, and that increases our increasingly objective
> instrumental knowledge applied to the promotion of those values.
>
How about knowledge applied to examining those values themselves,
which are not necessarily benign or conducive to our transhuman goals?
- samantha