[ExI] Losing control (was: Unfrendly AI is a mistaken idea.)
Jef Allbright
jef at jefallbright.net
Mon Jun 18 14:25:37 UTC 2007
On 6/17/07, Samantha Atkins <sjatkins at mac.com> wrote:
>
> On Jun 17, 2007, at 3:06 PM, Jef Allbright wrote:
>
<snip>
> >> Good question and difficult to answer. Do you protect everyone
> >> cradle
> >> to [vastly remote] grave from their own stupidity? How exactly do
> >> they grow or become wiser if you do? As long as they can recover
> >> (which can be very advanced in the future) to be a bit smarter, I am
> >> not at all sure that direct intervention is wise or moral or best for
> >> its object.
> >
<snip>
> > In each case, the morality, or perceived rightness, of a course of
> > action corresponds to the extent to which the action is assessed as
> > promoting, in principle, over an increasing scope of consequences, an
> > increasingly coherent set of values of an increasing context of agents
> > identified with the decision-making agent as self.
> >
>
> Do you think this makes it a great deal clearer than mud? That
> assessment, "in principle" over some increasing and perhaps unbounded
> scope of consequences pretty well sums up to "difficult to answer".
> You only said it in a fancier way without really gaining any clarity.
Samantha, as long as I've known you, it's been apparent to me that we
have sharply different preferences in how we make sense of the world
we each perceive. You see a blue bicycle, whereas I see an instance of
a class of human-powered vehicle. Which is clearer, or more
descriptive? It depends on what you're going to do with your model.
If you're shopping for a good bike, concrete is better. If you're
trying to think about variations, extensions, and limits to a concept,
then abstract is better.
When I think about morality as a concept, it's nearly as precise --
and devoid of content -- as the quadratic formula. I may not know or
care about the actual values of the variables, but I will know very
clearly how to proceed and that there will always be two solutions.
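For concreteness, the analogy is to the familiar formula: for any
equation of the form ax^2 + bx + c = 0 with a nonzero,

\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
\]

and the plus-or-minus guarantees exactly two solutions (possibly
coincident or complex) no matter what particular values a, b, and c
turn out to have.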
In this thread I tried to show some of the boundaries and deficiencies
of the present problem statement, trying to clarify the path rather
than futilely trying to clarify the destination.
My formula for morality, above, is very terse, but I'm hesitant to
expand on it here since I've done so many times before and don't wish
to overstep my share of this email commons.
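If a sketch helps, here is one toy way to read the structure of that
formulation as a scoring procedure. It is purely illustrative, with
invented function and parameter names; all of the hard content lives
in the unspecified inputs, which is exactly the sense in which the
formulation is precise but empty.

def moral_assessment(action, consequence_scopes, agent_contexts,
                     promotes, coherence, values_of):
    # consequence_scopes: increasing scopes of consequences to consider
    # agent_contexts:     increasing contexts of agents identified as self
    # promotes(action, values, scope) -> degree in [0, 1]
    # coherence(values)               -> degree in [0, 1]
    # values_of(agent_context)        -> the value set of that context
    score = 0.0
    for scope, context in zip(consequence_scopes, agent_contexts):
        values = values_of(context)
        score += coherence(values) * promotes(action, values, scope)
    # Normalize so results stay comparable across different numbers of scopes.
    return score / max(len(consequence_scopes), 1)

Like the quadratic formula, the procedure itself is unambiguous; the
difficulty is entirely in supplying promotes, coherence, and values_of.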
>
> > In the context of an individual agent acting in effective isolation,
> > there is no distinction between "moral" and simply "good."
>
> Where are there any such agents though?
This is a very important point -- no one of us is an island -- but the
problem statement seemed to specify first the case of an isolated
individual, then to introduce society, and then a superintelligent AI.
> > The
> > individual agent should (in the moral sense), following the
> > formulation above, take whatever course of action appears to best
> > promote its individual values. In the first case above, we have no
> > information about the individual's value set other than what we might
> > assign from our own "common sense"; in particular we lack any
> > information about the relative perceived value of the advice of the
> > AI, so we are unable to draw any specific normative conclusions.
> >
>
> Sure.
>
> > In the second and third cases above, it's not clear whether the
> > subject is intended to be moral actor, assessor, or agent (both).
> > I'll assume here (in order to remain within practical email length)
> > that only passive moral assessment of the human's neurohacking was
> > intended.
> >
> > The second case illustrates our most common view of moral judgment,
> > with the values of our society defining the norm.
>
> I am unclear that those are well defined.
Here we see again our different cognitive preferences. I made the
abstract statement that in this common view, the values of society
define the norm. To me, this statement is clear and meaningful and
stands on its own. Your response indicates that you perceive a
deficiency in my statement, namely that its referent is not concrete.
In the next paragraph I make the point that the value set of
contemporary society is quite incoherent, so I feel a bit disappointed
that you criticized the statement without tying these points together.
> > Most of our values
> > in common are encoded into our innate psychology and aspects of our
> > culture such as language and religion as a result of evolution, but
> > the environment has changed significantly over time, leaving us with a
> > relatively incoherent mix of values such as "different is dangerous"
> > vs. "growth thrives on diversity" and "respect authority" vs. "respect
> > truth", and countless others. To the question at hand we can presume
> > to assign society's common-sense values set and note that the
> > neurohacking will have little congruence with common values, what
> > congruence exists will suffer from significant incoherence, and the
> > scope of desirable consequences will be largely unimaginable. Given
> > this assessment in today's society, the precautionary principle would
> > be expected to prevail.
> >
>
> Really? That principle is not held in high esteem around here. I
> would point out that roughly the same argument is put forward to
> justify the war on some drugs.
Please note that I said this straw-man result was based on a
presumption of [contemporary] society's common-sense values set. I'm
disappointed that you mistook my intention here, but glad of course
that we concur in deploring the current state of our society's moral
framework.
[I had hoped that the response would have been in the direction of how
we might intentionally improve our society's framework for moral
reasoning.]
> > The third case, of a superintelligent but passive AI, would offer a
> > vast improvement in coherence over human capacity, but would be
> > critically dependent on an accurate model of the present values of
> > human society. When applied and updated in an **incremental** fashion
> > it would provide a superhuman adjunct to moral reasoning. Note the
> > emphasis on "incremental", because coherence does not imply
> > truth within any practical computational bounds.
> >
>
> Assuming that the SAI really had a deep understanding of humans then
> perhaps. But I am not at all sure I would want to live in the
> ultimate nanny state.
Didn't my phrase "critically dependent on an accurate model of
human..." register with you? How about the specific words "passive",
and "incremental"? I was explicitly NOT addressing the issue of
active closed-loop intervention by an AI since it has never been
well-defined.
> Most likely that statement qualifies me for a
> major psychological adjustment come singularity. Are you sure that
> forceful intervention is justified by an increasingly nuanced moral
> reasoning? Within what limits?
>
> Still scans as difficult to answer.
Shit. Thanks, Samantha, for helping me to see (remember) why these
public email discussions are mostly a waste of time.
- Jef