[ExI] Fermi Paradox and Transcension

Jeff Davis jrd1415 at gmail.com
Tue Sep 11 20:57:47 UTC 2012


On Mon, Sep 10, 2012 at 6:02 AM, Ben Zaiboc <bbenzai at yahoo.com> wrote:
> Jeff Davis <jrd1415 at gmail.com> wrote:

>> An advanced ai would have no such problems, and would be far more likely to conform to a higher ethical standard.

>> That's what I was saying.
>
>
> OK, I get that.  Sort of.  With a reservation on the idea that "Humans know what constitutes ethical behaviour".  Do we?

I had to pause and give the question some thought.  I realized that my
assertion -- that "Humans know what constitutes ethical behavior" --
was just my "legacy" assumption, unvetted and unexamined.  Upon
examination, I see that it isn't something I know for a fact, but
rather something I have come to believe, without having looked at it
closely.

So, "Do we?"

I seem to.  At least I have a robust notion of the difference between
right and wrong.  Does that qualify?  And I extend that knowledge of
myself to the rest of humanity.  Am I wrong?  The test of culpability
-- indeed, sanity -- in a (TV) court of law is found in the phrase
"Did the defendant know the difference between right and wrong?"  This
suggests that the courts at least think anyone not mentally defective
can reasonably be assumed to know the difference between right and wrong.

So I'm pretty confident that most folks understand at least their own
ethical system, and acknowledge the obligatory nature of adherence to
"right" behavior.  But I'm open to challenges.

> If so, why can't we all agree on it?

Different cultures have different values, and within cultures there
are subcultures with different values.  Different values result in
different ethical systems.  There's the source of the disagreement.

> (which is a different question to "why don't we all act in accordance with it?")  When you look at this, there's very little that we can all agree on, even when it comes to things like murder, stealing, and granting others the right to decide things for themselves.

I see it as a cultural conflict, not an ethical one.  Independent of
culture, if you ask someone if adherence to their values -- obedience
to the law(?) -- is obligatory in order to remain in good standing
within their society, won't they say "Yes"?

> Religion causes the biggest conflicts here, of course, but even if you ignore religious 'morality', there are still some pretty big differences of opinion.  Murder is bad.  Yes, of course.  But is it always bad?  Opinions differ.  When is it not bad?  Opinions differ widely.  Thieving is bad.  Yes, of course.  But is it always bad?  Opinions differ.  Etc.

Again, cross-cultural conflicts, all.  Within their own
culturally-distinct ethical system, all will agree that right behavior
is obligatory (though they will all retain the option to defect when
survival is at stake; a man's gotta do, etc.).

> The question still remains:  What would constitute ethical behaviour for a superintelligent being?  I suspect we have no idea.

Not being able to know the meaning of "superintelligent" is the first
problem.  We can project in that direction through our experience with
exceptionally intelligent humans, but beyond that darkness begins to
fall. And beyond that, where an advanced level of complexity predicts
the emergence of the unpredictable, we're friggin' totally in the
dark.  I would very much like to hear someone attempt to penetrate the
first level -- the penetrable level -- of darkness.

Intelligence: an iterative process of data collection and processing
for pattern recognition.

More intelligent than that: more of the same, a change in degree, not kind.

Super-intelligent: ?  A change in kind.
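
To make that definition concrete, here's a toy sketch in Python (my
own illustration for this post; every name in it is made up): one
loop that collects data, processes it, and pulls patterns out of it.
"More intelligent" is then just the same loop scaled up.

import random

def collect_data(n):
    """Collect n noisy observations from two hidden sources (the 'world')."""
    return [random.gauss(mu, 1.0) for mu in random.choices([0.0, 10.0], k=n)]

def recognize_patterns(data, k=2, steps=20):
    """Crude 1-D k-means: iteratively refine k cluster centers, the 'patterns'."""
    centers = random.sample(data, k)
    for _ in range(steps):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# "Intelligence": one pass through the collect-and-recognize loop.
print(recognize_patterns(collect_data(100)))      # roughly [0.0, 10.0]

# "More intelligent": the same loop with more data.  Degree, not kind.
print(recognize_patterns(collect_data(10000)))    # tighter [0.0, 10.0]

The point of the toy: a hundred times more data sharpens the
recovered patterns, but the loop itself is unchanged; degree, not
kind.  What a change in kind would look like, I can't write down,
which is rather the point.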

> We can't assume it would just take our ideas as being correct (assuming it could even codify a 'universal human ethics' in the first place).  It would almost certainly create its own from scratch.

If its "upbringing", training, intellectual development is similar to
that of a human child, then it will gradually absorb human-provided
information.  I will achieve intellectual maturity in stages.  But it
will follow this developmental process with human knowledge as its
seed.  Like a child it will at first accept everything as "true".
Then later it seems definitive that it will self-enhance, part of
which would include a re-examination of all prior knowledge and
revision as called for.  Even so, revision cannot erase the causal
priors.  You have to have something to start from, an empty mind has
emptied itself of the tools for revision.  So "starting from scratch"
is not possible.  I will however grant you something close to it.

> We simply can't predict what that would lead to.

Not with any confidence, anyway.

Best, Jeff Davis

Aspiring Transhuman / Delusional Ape
    (Take your pick)
          Nicq MacDonald



