[ExI] Fermi Paradox and Transcension

Jeff Davis jrd1415 at gmail.com
Tue Sep 11 19:36:52 UTC 2012


On Tue, Sep 11, 2012 at 7:07 AM, BillK <pharos at gmail.com> wrote:

> Clever humans, on the other hand, can devise magnificent
> justifications for any wrong act that they want to do.
>
> Intelligence level is not linked to morality.

Absolutely.  (I take your meaning to be "More intelligent does not
imply more moral.")

The problem originates in the inherent conflict between the
constraints on behavior imposed by an ethical system and the pursuit
of naked self-interest.  Social groupings in primates, herding
animals, fish, birds, and others evolved because they enhance
survival.  Ethical behavior evolved within such groupings because it
enhances the stability of the group.

Dominance hierarchies based on power -- the "big dog" concept --
clearly manifest in social groups.  These contribute to stability by
forcing acquiescence to the order of dominance.  Males (and perhaps
females) challenge each other and thereby establish that order.
Recent studies, however, seem to confirm that social animals also
have a genetically based sense of equity -- justice, fairness, call
it what you will -- which helps to maintain the stability of the
force-built dominance hierarchy.  In humans this "fairness" sense
would be the built-in source of ethical behavior and thinking.

It seems to me that having and employing a "sense of fairness" would
tend to reduce conflict within the group, thus enhancing group
stability.

In the case of an AI, one would -- at least initially -- have a
designed, not an evolved, entity.  Consequently, unless designed in,
it would have none of the evolved drives -- no survival instinct, no
(sexual) competitive impulse.  So it seems to me there would be no
countervailing impulse-driven divergence from consistently
ethics-based behavior.  The concept and adoption of ethics would, as
I have suggested, be developed in the formative stage -- the
"upbringing" -- of the AI, as it becomes acquainted with the history
and nature of ethics, first at the human level of intelligence and
later at a greater-than-human level.

Others, substantially more dedicated to this subject, have pondered
the friendly AI question (in my view "friendly" is equivalent to
"ethical") and reached no confident conclusion that it is possible.
So I'm sticking my neck way out here in suggesting, for the reasons I
have laid out, that absent "selfish" drives, a focus on ethics will
logically lead to a super-ethical (effectively "friendly") AI.

Fire at will.

Best, Jeff Davis
                  "Everything's hard till you know how to do it."
                                                       Ray Charles
