[extropy-chat] Fw: Deconstruction deconstructed....
Harvey Newstrom
mail at HarveyNewstrom.com
Sun Jan 11 22:24:02 UTC 2004
Anders Sandberg wrote,
> Just another paper about hoax papers that I found absolutely
> delicious: The Cartesian Conspiracy: How to do Post-Modernism
> With Marquis de Sade http://web.nwe.ufl.edu/~jdouglas/S03finart9.pdf
This paper didn't really produce nonsense. It took the logical sequences by
which de Sade tried to justify his activities and translated them into a
non-sexual subject. The fact that the authors could find non-sexual subjects
that can be justified in parallel with de Sade's arguments doesn't break the
logical sequence. Instead of seeking different sex
partners, they say we seek different intellectual associates. Instead of
wanting physical stimulation, we are seeking intellectual stimulation.
Instead of demanding more and more challenging physical acts, we demand more
and more intellectual exercises. The resulting paper makes perfect sense in
many places and was not produced by any method guaranteed to produce
nonsense.
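To make the kind of translation I mean concrete, here is a toy sketch in
Python (my own invented example, not the paper's actual method or wording):
a consistent term-for-term substitution that leaves the structure of the
argument untouched.

# Toy illustration: swap each sexual term for an intellectual counterpart
# while preserving the logical form of the sentence.
SUBSTITUTIONS = {
    "sex partners": "intellectual associates",
    "physical stimulation": "intellectual stimulation",
    "physical acts": "intellectual exercises",
}

def translate(argument):
    """Apply every substitution consistently across the argument."""
    for old, new in SUBSTITUTIONS.items():
        argument = argument.replace(old, new)
    return argument

premise = ("We seek ever more varied sex partners because familiar "
           "physical stimulation fades, demanding more challenging physical acts.")
print(translate(premise))
# -> We seek ever more varied intellectual associates because familiar
#    intellectual stimulation fades, demanding more challenging intellectual exercises.

If the original argument was valid, the translated one is too, which is why
the result reads sensibly rather than as gibberish.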
> "In an age where nonsense is identical to scholarship,
> text-generating computers such as the Dada Engine are no less
> useful than postmodern scholars, and the necessity of the
> latter becomes difficult to substantiate."
I also question whether the Dada Engine always produces nonsense. I haven't
seen this particular engine, but I have seen similar ones. Consider these
points about its non-randomness. It doesn't produce random bits, because
that will screw up our screens. It doesn't produce random characters,
because it is forced to choose real words. It doesn't produce random
phrases, because it uses phrase books to put words together that actually go
together. It doesn't produce random sentences, because it uses the phrase
books to make sure that the ending words of one phrase fit into the next
phrase. It doesn't produce invalid sentences, because it uses grammar
checkers to enforce agreement between subjects and verbs, or between
adjectives and the nouns they modify. It doesn't produce totally random
paragraphs, because keywords are
repeated so that other sentences about the same topic are selected.
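A rough sketch of what I mean, in Python (this is not the Dada Engine's own
code or grammar format, which I haven't seen; the grammar and phrases here
are invented for illustration): the generator only ever emits whole phrases
that already fit together, and it repeats one chosen keyword so consecutive
sentences appear to stay on topic.

import random

# Phrase book: sentence templates containing placeholders, plus the
# phrases that can fill each placeholder.
GRAMMAR = {
    "SENTENCE": [
        "The notion of TOPIC is ATTITUDE by THINKER.",
        "THINKER argues that TOPIC PREDICATE.",
        "In a sense, TOPIC PREDICATE.",
    ],
    "ATTITUDE": ["deconstructed", "problematized", "recontextualized"],
    "THINKER": ["the critic", "the reader", "the author"],
    "PREDICATE": [
        "is itself a form of discourse",
        "cannot be separated from its own critique",
    ],
}

def expand(token, topic):
    """Recursively replace placeholders with phrases from the grammar."""
    core = token.rstrip(".,")
    punct = token[len(core):]
    if core == "TOPIC":
        return topic + punct          # keyword repetition keeps sentences on topic
    if core in GRAMMAR:
        phrase = random.choice(GRAMMAR[core])
        return " ".join(expand(t, topic) for t in phrase.split()) + punct
    return token                      # ordinary word, emit as-is

def paragraph(topic, sentences=3):
    return " ".join(expand("SENTENCE", topic) for _ in range(sentences))

print(paragraph("textual identity"))

Nothing in a generator like this is random below the phrase level, which is
exactly why its output reads as plausibly as it does.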
So much work goes into making this output appear non-random that I am not
surprised some people mistake the generated text for the real thing. If we
keep improving the engine to remove clues to its nonsensical nature, its
output moves further and further away from nonsense. The more we improve it,
the less we will be able to detect its artificial origins. I don't see how
this is any indictment of the reader if they can't tell. It is, rather, a
credit to the engine designers and their ability to simulate human language.
--
Harvey Newstrom, CISSP, CISA, CISM, IAM, IBMCP, GSEC
Certified IS Security Pro, Certified IS Auditor, Certified InfoSec Manager,
NSA Certified Assessor, IBM Certified Consultant, SANS Certified GIAC
<HarveyNewstrom.com> <Newstaff.com>