[ExI] Fermi Paradox and Transcension

Jeff Davis jrd1415 at gmail.com
Tue Sep 11 00:52:34 UTC 2012


On Mon, Sep 10, 2012 at 3:51 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 9 September 2012 23:17, Jeff Davis <jrd1415 at gmail.com> wrote:
>>
>> Humans know what constitutes ethical behavior, they just refuse to
>> practice it, and the higher up in the power hierarchy, the more
>> lawless they become.
>
>
> I cannot disagree more.

First, let me thank you.  This is an issue I can sink my teeth into,
and I appreciate the opportunity to do so.

>
> Let us distinguish morality, moral and moral philosophy.
>
> As to the first, we are all more or less in breach of our own principles, what else is new?

If you mean we are all guilty of moral lapses, I agree.  As you say,
"What else is new?"

> But this should not in least hide the fact that ethics (ie, moral systems)
> are not about doing the right thing, but about identifying it in the first place.

Doing the right thing seems to be the purpose of ethics, with social
harmony as the higher goal.  The value to the individual would seem to
be social acceptance and inclusion.  Feeling good about oneself is
pleasant and all, but survival is primary.

But I take your point to be that the difficulty lies not in willfully
acting in bad faith, but in knowing what the right thing is so that
one can do it.  I disagree.  I think the instances of knowing the
difference between right and wrong and willfully acting wrongly
greatly outnumber the instances where one has trouble figuring out
what is right and then innocently makes the "wrong" choice.

> And, even though different moral philosophies may (sometimes) converge into
> roughly equivalent solutions, what makes moral systems interesting, and
> above all *plural* and *diverse*, is exactly the fact that they give
> radically different answers to moral dilemmas.

Please help me out here with some examples of different moral
philosophies, and of some of those moral dilemmas.

I acknowledge the concept of a moral dilemma, but that often seems to
be a matter of having to choose among several bad -- ie ethically
defective -- options.

Also, I am for the most part not talking about competing ethical
systems.  Of course if you have two divergent systems, what is ethical
in one may be unethical in another, creating a situation where it may
not be possible to satisfy both standards.  But I would like to talk
about acting ethically within one's own system, where you know the
difference between right and wrong.

The advanced -- ie more intelligent than humans -- AI would have some
concept of ethics, derived from its "upbringing" by humans, from its
comprehensive study of all human knowledge, and from taking that
"baseline" to a higher level through its own superior deliberative
evaluation.

I grant you there is a gulf of unknown unknowns between human
intelligence and transcendent intelligence, but there is no denying --
or is there? -- that it starts with the collected works of humanity.
Even if later on it should develop its own system, something beyond
human comprehension, nevertheless, "the child is father to the man",
and whatever comes later must bear the imprint of those origins.  Can
it be otherwise?

Thanks again, Stefano.  I look forward to your response.

Best, Jeff Davis

              "You are what you think."
                                Jeff Davis
