[ExI] How could you ever support an AGI?
Jeff Davis
jrd1415 at gmail.com
Tue Mar 4 20:44:15 UTC 2008
On Tue, Mar 4, 2008 at 1:57 AM, John K Clark <jonkc at att.net> wrote:
> ...if you are unable to overcome the atoms
> superstition, if you do not understand that you are an adjective not a noun
> then you will die; that particular superstition is lethal,...
> ... If you can get over the atoms
> superstition you have a chance; ... If you can't
> overcome the superstition the outcome is certain, you're worm food.
I can't speak "for" John, but it may be that someone needs a bit of
a translation. In fact, I want to make sure that I understand
correctly. I think what John is saying here is that most people think
that they "are" their material self, i.e., that particular set of atoms
which composes their body and brain.
Now, if I understand him correctly, John asserts with the utmost
conviction that this is not the case, but rather that we are THE
PROCESS supported by the meat (i.e., the atoms), not the meat itself,
and that the identical process, however many copies there are and
whatever the substrate, biological or non-biological, is as
authentically you as the meatbound you of conventional experience.
John, please correct or clarify as appropriate.
Regarding the impending doom of humanity at the hands of an AGI, I
think it's too early to tell. For one thing, much of the doomsaying
seems to be a projection onto the AGI of the worst of human behavior:
self-absorption, greed, and ruthlessness, to start with. This seems to
me neither logical nor likely. Most of human behavior originates in
what I call somatic drives: the urges inherited through the billions
of biological generations that preceded the development of mind,
judgement, and the dubious notion of independent or semi-independent
volition. An AGI will have none of that -- provided its inclusion is
not essential to what we think of as intelligence -- and, as a result,
should/could be ego-less: intelligent yet utterly compliant. Further,
this outcome is so desirable, and the alternative danger so hyped and
prominent in our fears, that substantial efforts will be made to make
it so. Witness all the talk about "friendly" AI.
Secondly, if an AI or AGI embodies intelligence according to a human
standard (what other standard is there?), then it will be able to read
and reason. If so, it must -- what else is there, in the beginning at
least? -- learn from the substantial body of human musings (texts and
other media). When it learns about human culture and values, about
what humans consider good and evil, right and wrong, will it not
embrace at least the logic, and extrapolate from there? Will it not
learn/know of the long struggle humanity has had with itself in
discovering these values and attempting to apply them despite the
burden of billions of generations of inbuilt somatic drives? Will it
not recognize and appreciate -- though perhaps not in the emotional
sense -- the advantages it enjoys in not being so burdened? In short,
will it not seek to perfect "the good", a bar too high for
intelligence v0.9 (i.e., humans)? Will it not seek to preserve and
protect, rather than destroy? And if so, how is this consistent with
the extermination of its "parents"/creators?
How is this irrational optimism rather than gentle logic?
Fear is a somatic "burden". It is not judgement.
Best, Jeff Davis
"You are what you think."
Jeff Davis