[extropy-chat] Engineered religion
john-c-wright at sff.net
Mon Mar 21 21:06:13 UTC 2005
Rick Woolley writes:
>For sake of argument/exploration, assume that I have the means to
>conceive and birth my AI child today. It seems clear to me that a main
>point of creating something which will rapidly become vastly superior
>to myself is so that I may learn new truths, and unlearn old falsehoods.
>Should I place more trust in a monkey-man like myself, or should I
>accept the judgments, teachings, and actions of the AI?
One point that only the future can answer is how much engineering will actually go
into the making of these artificial minds. If it is like being a father, there is
not much engineering involved. The way the child grows is a matter largely out
of one's hands. You try to teach your child as best you can, and the rest is
providence, or fate, or luck.
Note the paradox here: An engineered object, when correctly made, functions as
designed. A mind thinks as it will, and may overcome its education or early
training by an effort of will, or a deliberate practice of habit.
An engineered mind would (one assumes) have in it the qualities the engineers
know how to put into it. To make the mind a cowardly one, the little
gears and cogwheels controlling that function would have to be made a certain
way: a strong brake on its spirited sense of honor, perhaps. To make it
courageous, the gears and wheels would need a different arrangement, such as low
pressure in the valve controlling self-preservation. A just mind would have
nicely balanced levers for its calculations; a temperate one, a system of
escapements to prevent the mainsprings of its passions from carrying it away.
And so on. To make a mind with contempt for human beings, engage the
Terminator feature; to make one obedient to human commands, add the Asimov
attachment.
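(To make the contrast with child-rearing concrete, here is a purely hypothetical
sketch in Python that treats the clockwork picture as a set of dials the engineer
fixes at build time; every name in it is invented for illustration and refers to
no real system.)

    from dataclasses import dataclass

    # Hypothetical sketch: each moral quality is an explicit setting chosen by
    # the engineer, rather than something the mind acquires through rearing.
    @dataclass
    class EngineeredTemperament:
        honor_brake: float        # strong brake on the sense of honor -> cowardice
        self_preservation: float  # low "valve pressure" here -> courage
        passion_escapement: float # damping that keeps the passions from running away
        asimov_attachment: bool   # obedient to human commands
        terminator_feature: bool  # contempt for human beings

    # The engineer simply chooses the settings; nothing is left to habit or luck.
    temperate_and_obedient = EngineeredTemperament(
        honor_brake=0.2,
        self_preservation=0.3,
        passion_escapement=0.9,
        asimov_attachment=True,
        terminator_feature=False,
    )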
The question only the future can answer is how much can be controlled, and how
much is going to be left to the outcome of automatic processes we have set in
motion. But at some point (call it the age of majority) we stop treating it like
an artifact and start treating it like a mind. As best I can tell, there is no
intermediate case: either you are tinkering with its skull, treating it like an
object, or you are rearing it like a child, treating it like a human.
If the moral qualities of man are a matter of pure intellect (and I have some
sympathy for the arguments that this is so), then creating a creature of the
greatest intellectual calculating power will be sufficient to ensure it is a
moral and responsible being. If, on the other hand, it is possible to create a
being of high intelligence who is morally retarded (and there are many examples
of brilliant and evil men), humanity would be committing suicide to follow Mr.
Woolley's advice here. I might be the monkey-boy Mr. Woolley says I am, but I don't
want to be trampled out of existence by Mechagodzilla.
My point here is one you have all heard before, no doubt: the posthumans, if
correctly and safely constructed, will be an outgrowth of humanity, maintaining
continuity with our psychology, belief, and history. They will think they are
like us and are us. If posthumanity is utterly alien to us, a race of
superintelligent smallpox, then no matter how intelligent they are, they are of
no use to us.