[ExI] An old text on "singularitarianism"...

Stefano Vaj stefano.vaj at gmail.com
Thu Jan 6 15:41:41 UTC 2011


... has re-emerged on the Associazione Italiana Transumanisti's mailing
list. I believe it expresses a rather authoritative point of view on
issues recently discussed again and again on this list concerning possible
AGI-related "rapture" and "doom" visions.

<<From: Daniel C. Dennett
Date: October 4, 2000
A friendly alert to Jaron Lanier
Unalloyed enthusiasm for anything is bound to be a mistake, so thank
goodness for the critics, the skeptics, the second-thought-havers, and even
the outright apostates. Apparently the price one must pay for jumping off a
fast-moving bandwagon is missing the target somewhat, since it seems that
apostates usually overstate the case and land somewhere rather far from
where they aimed. Reading Jaron Lanier's half a manifesto, I was reminded of
an earlier critic of digital dreams, Joseph Weizenbaum, whose 1976 book,
Computer Power and Human Reason, was an uneven mix of serious criticism in
the tradition of Norbert Wiener and ill-developed jeremiads. Weizenbaum, in
spite of my efforts (for which I was fulsomely thanked in his preface),
could never figure out if he was trying to say that AI was impossible, or
all-too-possible but evil. Was AI something we couldn't develop or shouldn't
develop? Entirely different cases, requiring different arguments. There is a
similar tension in Lanier's writing: are the Cybernetic Totalists just
hopelessly wrong—their dream is, for deep reasons, impossible—or are they
cheerleaders we must not follow—because we/they might succeed? There is an
interesting middle course, combining both options in a coherent possibility,
and I take it that this is the best reading of Lanier's manifesto: the
Cybernetic Totalists are wrong and if we take them seriously we will end up
creating something—not what they dream of, but something else—that is evil.
But who are the Cybernetic Totalists? I'm glad that Lanier entertains the
hunch that Dawkins and I (and Hofstadter and others) "see some flaw in logic
that insulates [our] thinking from the eschatological implications" drawn by
Kurzweil and Moravec. He's right. I, for one, do see such a flaw, and I
expect Dawkins and Hofstadter would say the same. My reason has always been
that the visionaries who imagine self-reproducing robots taking over in the
near future have bizarrely underestimated the complexities of life. Consider
the parallel flaw in the following passage, which slides from truth to foolishness:
TRUE: living bodies are made up of nothing but millions of varieties of
organic molecules organized by the trillions into complex dynamic structures
such as cells and larger assemblies (there is no élan vital, in other
words).
FOOLISH CONCLUSION: therefore we shall soon achieve immortality; all we have
to do is direct all our research and development into molecular biology with
the goal of replacing those individual molecules, one at a time, as they
break or wear out.
You don't have to be a vitalist to reject this technocratic fantasy, and you
don't have to be a dualist, an anti-mechanist, to reject simplistic visions
of some AI utopia just around the corner. Lanier is wistful about the
possibility "that in rational thought the brain does some as yet
unarticulated thing that might have originated in a Darwinian process, but
that cannot be explained by it [my italics]," but why should it matter?
Lanier is too clever to ask for a skyhook, but he can't keep himself from
yearning for ... half a skyhook.
It is ironic that when Lanier succumbs to temptation and indulges in a bit
of cybernetic totalism of his own, he's pretty good at it. His speculative
analysis of the inevitability of what might be called legacy inertia,
creating diminishing returns that will always blunt Moore's law, is
insightful, and I welcome these new reasons his essay gives me for my
skepticism about the cybernetic future. But I wish he didn't also indulge in
so much presumptive caricature of those positions he finds threatening. He
apparently doesn't want there to be subtle, nuanced, modest versions of the
theses he resists, since those would be so hard to sweep away, so he follows
the example of one of his heroes, Stephen Jay Gould, and stoops to the
demagogic stunt of creating strawpeople and then blasting away at them. He's
got me wrong, and Dawkins, and Thornhill and Palmer, to name the most
obvious cases. It's child's play to hoot at parodies of me on consciousness,
Dawkins on memes, Thornhill and Palmer on rape. Grow up and do some real
criticism, worth responding to. We're not the bad guys; we hold positions
that are entirely congenial to his trenchant criticisms of simplistic
thinking about computation and evolution.
Joseph Weizenbaum soon found himself drowning under a wave of fans, the
darling of a sloppy-thinking gaggle of Euro-intellectuals who struck
fashionable Luddite poses while comprehending almost nothing about the
technology engulfing them. Weizenbaum had important, reasoned criticisms to
offer, but all they heard was a Voice on Our Side against the Godless
Machines. Jaron, these folks will love your message, but they are not your
friends. Aren't your criticisms worthy of the attention of people who
actually will try to understand them?>>

-- 
Stefano Vaj