[extropy-chat] On the communication of moral wisdom

Adrian Tymes wingcat at pacbell.net
Tue Sep 28 02:36:20 UTC 2004


--- The Avantguardian <avantguardian2020 at yahoo.com>
wrote:
> This is, I believe, the best strategy to defeat the
> Luddites. Essentially one has to defend these
> technologies by staying within their own religious
> paradigms. Otherwise, mere logic will be lost on them,
> because they won't believe your premises.

Very true.  All the logic in the world doesn't matter
if the facts you're arguing from are false.  Many
Luddites tend to think, "Because this logic proves my
beliefs false, and I cannot easily see any flaws in
the logic itself, the easiest path - dismissing the
facts the logic is based on - must be the correct
path."  Arguing from facts they themselves defend
closes off that escape.

> I have researched and compiled quite a lot of
> references from the bible, the koran, and other
> religious sources that support transhumanist views on
> these technologies. When I am done, I will write a
> comprehensive essay on them and submit it to the ExI
> list.

I am not sure that would do much good.  Religious
types are already used to people invoking passages of
their texts out of context to defend all manner of
views, and have developed memetic defenses against
this.  After all, they do it themselves.  ("Even Satan
can quote scripture.")  It might be better to argue
from the core concepts that the religions share than
to seek out specific sayings.

> I believe that in order to win over public opinion
> and support for our endeavors, we must try to embrace
> and assimilate spirituality and religion instead of
> trying to antagonize it as many on this list have done
> in the past. It doesn't really matter if you actually
> believe it, so long as you are willing to exploit it.

Or at least cite it as not inconsistent.  "I might not
believe as you do about God, but it doesn't matter.  I
won't bore you with why this is the right way if you
don't believe in God, but here is why this is the
right way if you do believe in God."

> In this vein, I have been working toward a concept of
> "spiritual transhumanism" which lays the framework for
> achieving our goals without pissing off God and his
> faithful. Hell, if, as an extropian, I am going to be
> accused of being a cultist anyways, I might as well
> try to start a cult.

Maybe it would be better to view this as a mod for our
beliefs.  We have a well-developed main version for
atheists/agnostics, but what about a belief path for
Christians?  For Muslims?  For Buddhists?  A path that
their minds can follow, which gets them to support our
beliefs without having to uninstall their previous
memeset - and, indeed, one that merges with it to draw
support.  (Imagine what would happen if a high-ranking
bishop declared that we were created to "play God" -
responsibly and so forth, since God is not
irresponsible and punishes those who attempt to
imitate Him without that crucial component, but within
that limit...)

Side note: as I've looked at various religions over
the years, I've noticed that a number of their core
concepts are the same.  They are expressed differently,
and with some variations to account for the local
conditions when each religion was founded (for example,
dietary restrictions where eating or raising certain
animals was a Bad Idea for economic or ecological
reasons, in an era before economics or ecology were
recognized disciplines), but they are mostly attempts
to encode basic rules of civilization in forms that
many people can understand.  Although the main
surviving ones have been successful at the basics (no
surprise: else they wouldn't be around any longer),
there has been a consistent problem with
misinterpretation and misunderstanding.

Even today, language does not fully encode moral
meaning; it can attempt to inspire certain chains of
thought, for instance by leading people along those
chains with examples, but it cannot guarantee that
people will not form incorrect chains.  One might fail
to see certain steps of the chain, or find
unanticipated ways of linking the steps of the example
together, or incorrectly apply some example-specific
qualification that makes the chain inapplicable
elsewhere.  Or one might simply fail to make the
connection that one is supposed to apply the chain to
other situations one encounters in life.

So I wonder... how can one accurately express moral
wisdom, given the inaccuracies of the most common
method?  By this I mean specific chains of thought
that guide the evaluation process for actions.  (In
other words, one's moral wisdom is one's method of
answering the question, "Should I do this?")  Jesus
Christ, Mohammed, Buddha, and others had this process
highly developed, and sought to share what they had
refined with their fellows.  I keep thinking of some
kind of AI notation, but every method I come up with
runs into the definition problem: one can define "if
X will cause harm to others, do not do X", but how
does one define "harm"?  Even human beings can't do
that (in the same concrete sense as, say, defining
"one" and "zero").

Or might it be possible to induce thoughts - say, find
the brain centers responsible for forming concepts and
trigger certain template paths?  Hopefully, the most
this could do would be to add memories of being guided
through the chain to whatever memories the person
already had, so a child could learn but an adult could
not be involuntarily reprogrammed.  (Although the child
would have to learn to follow the lessons first, which
itself might be a problem.)

Just something to think about.


