[extropy-chat] Engineered Religion--Your Mom and the Machine
Jef Allbright
jef at jefallbright.net
Wed Mar 30 03:08:41 UTC 2005
john-c-wright at sff.net wrote:
>Some comments on comments on my little Mom story.
>
>
>
<snip>
>The second problem, an old one in philosophy, is how to deduce normative
>statements (what I ought to do) from descriptive statements (what is). Ayn Rand
>claimed to have done this by arguing that all normative statements presuppose
>survival (in that only for living organisms can things be good or bad, i.e.
>preservative or destructive of life). Her argument would not satisfy those who
>think there are things beloved more than life, worth dying to preserve. Cicero,
>for example, argues that if mere survival is the foundation and source of
>virtue, then courage is not a virtue. Empiricism will tell you what the
>physical structure of the universe is, but says nothing, by itself, about the
>moral order of the universe.
>
>No empirical observation, by itself, can lead to a normative conclusion.
>
>
>
<snip>
>Same as above. The observation that unthinking Nature, red in tooth and claw,
>struggles without quarter for raw survival has no bearing on how thoughtful men
>should teach their children, or instruct their thoughtful Jupiter Brains, to act.
>
>Mr. Allbright says:
>"Rule One, stating that determination of "truth" shall be empirically
>based, is nearly fundamental and a fine foundation for a meta-ethics."
>
>Same as above. No possible empirical observation can serve as a basis, in and of
>itself, for any conclusion about a moral norm, metaethical or otherwise. You can
>observe that such-and-such promotes survival, but not that survival OUGHT to be
>promoted. You can observe that so-and-so is ruthlessly aggressive, but not that
>you OUGHT to impersonate him.
>
Yes, the Naturalistic Fallacy of deriving "ought" from "is" is well
known, and its fatal weakness understood. However, when I said that an
empirical basis for determining "truth" is a fine foundation, I meant
only as part of the lowest layer of a meta-ethics, essentially the
interface layer between "Self" and "Reality"; I did not mean to imply
that this foundation in any way produces the value judgments that emerge
from the higher layers. My point was that "truth" (in scare quotes
because all knowledge is subjective, approximate, and contingent) must
be grounded in the measurable evidence of our senses (and their
extensions), and to the extent that any observation is not thus
grounded, it must be discounted.
The above is simply a statement in support of the scientific method,
whose results are always incomplete. It acknowledges that there may be
other forms of knowing, most of which fall under the umbrella of the
mystical or supernatural, and which should not be ignored entirely, but
should be discounted. I think I also mentioned a deeper way of knowing,
essentially through the structure of the environment in which we evolved
and now find ourselves, but by mentioning it I am afraid I may now be
diluting my message.
I'll get to the "value" layer under Rule Two.
>
>
>"Updating Kant, (Rule Two) can be more effectively stated as follows: That
>which is found to be "true" within a given context, may always be
>superseded by a greater "truth" within a larger encompassing context.
>This is the essence of what I refer to as the Arrow of Morality."
>
>
As mentioned earlier, universal truth is not realizable, but we
certainly do recognize subjective subcontexts as "true", according to
Rule One. What is considered "true" is what appears to work within a
given context.
Now, let's approach the subject of values.
Some have suggested that survival is the basis of moral goodness. This
is useful to some extent, but ultimately limited in its applicability as
has been pointed out here and elsewhere.
In the Arrow of Morality, I propose that we can all agree that *what is
good is what works*, and that this is intrinsically subjective (limited
to a context of awareness). Further, I say that we can all agree that
what works over a wider context is better than what works over a
narrower context. The words "good" and "better" highlight the value
aspect of this statement.
Now, please bear with me a little longer before raising objections.
Note that I am being pragmatic by not postulating a universal morality
-- there is no such universal viewpoint, although we can gradually
approach it -- and I am not postulating an objective morality -- again,
no such objective viewpoint exists. But I am saying that we can all
agree that what works is good, and what works over a wider context is
better. And I am saying that we have an empirical basis for evaluating
what works within any given context. I am also saying that, in an
evolutionary way, that which works tends to overcome and supersede that
which does not work as well, and that this ratcheting forward of
progress -- of what works, and is therefore considered good -- can be
seen as an arrow of growth: an arrow of morality.
>I am delighted with the idea of an Arrow of Morality. The idea here seems to be
>that a more universal maxim or norm is better than a less universal norm. If I
>may add an argument to support this notion: any subjective norm, by definition,
>establishes a boundary (such as between Us and Them) between where moral rules
>are obeyed and where they are not; since any ambiguous case is itself decided
>by a moral rule, only a universal moral rule admits of no ambiguous cases of
>application: a rule is "morality" when it applies to everybody. If it is meant
>only to apply to us and not to them, or to me and not to you, it is merely an
>expression of taste or expediency.
>
>
As I said, I am updating Kant, and others, by pointing out that a
universal rule for all actors and all situations does not correspond to
the inherent subjectivity and limited context of awareness of any Self
who would make a choice about right action in any given circumstance.
My formulation is actually simpler than one postulating a universal
absolute, because it scales continuously. I realize that people raised
in the culture and traditions of the western hemisphere expect their
Truth to be universal and their Self to be discrete, and these cultural
biases add to the difficulty of grasping what I see as a simpler and
more encompassing concept of morality.
>"(Rule Three) is superfluous given Rules One and Two above, and ultimately
>dangerous."
>
>
A useful Rule Three would address the relationship between Self and
Other, and principles of effective (read: synergetic, positive-sum,
cooperative) interaction between them. But that would be enough to fill
another chapter.
>Actually, Rule Three is the only one that acts as its own justification. The
>other two rules require something beyond themselves to support themselves. Rule
>One needs metaphysical axioms concerning the reliability of the senses, the
>universality of reason, and the universality of the laws of cause and effect.
>Rule Two needs an additional moral rule that one OUGHT to obey moral rules. But
>a child obeys his mother because she is, in fact, his mother, and he depends on
>her: this authority is natural, needing no further justification.
>
>
Given that we all experience the universe from a subjective, limited
context, it is interesting that you can criticize empiricism as lacking
justification, but claim that a mother's authority stands on its own. I
acknowledge the dependency of a child upon its mother, but as mentioned
earlier this only works well until the child is capable of independence.
>"It is effective only in the case that Self's context of awareness is smaller
>than, and encompassed within Mother's context of awareness, as is commonly and
>currently the case with small children."
>
>My hypothetical is only concerned with Nomad and M-5, Colossus and Skynet and
>Ultron at the first hour when they first wake up. After they reach the age of
>majority (let us say, in the four hours it takes them to read all human
>literature and science) then they are adults, able to govern themselves. Free
>men yield to the authority of other men only so much self-sovereignty as is
>needed to maintain a disciplined system of liberty: I assume free machines will
>do the same.
>
>
I expect that we will live surrounded by intelligent machines greatly
exceeding human cognitive capability, but lacking the evolved drives
that we anthropomorphically assign to them and then fear. A greater and
more likely near-term danger is that a human individual or group will
use the superhuman cognitive power of such a machine for immoral
purposes (purposes that may appear to work within his own relatively
narrow context, but don't work well over a larger context of actors,
interactions, or time). Our best defense in such a scenario will be the
wide dispersal of such intelligence amplification in the service of the
broader population. I think this is a natural and likely scenario, but
still fraught with risk, as life always is.
- Jef
http://www.jefallbright.net