[Paleopsych] Edge: The Pancake People, or, "The Gods are Pounding My Head"

Premise Checker checker at panix.com
Wed Apr 20 22:57:33 UTC 2005


The Pancake People, or, "The Gods are Pounding My Head"
http://www.edge.org/3rd_culture/foreman05/foreman05_index.html
    ______________________________________________________________________

     But today, I see within us all (myself included) the replacement of
       complex inner density with a new kind of self-evolving under the
    pressure of information overload and the technology of the "instantly
    available". A new self that needs to contain less and less of an inner
      repertory of dense cultural inheritance--as we all become "pancake
    people"--spread wide and thin as we connect with that vast network of
             information accessed by the mere touch of a button.


    THE PANCAKE PEOPLE, OR, "THE GODS ARE POUNDING MY HEAD" [3.8.05]
    Richard Foreman

    vs.

    THE GÖDEL-TO-GOOGLE NET [3.8.05]
    George Dyson

    As Richard Foreman so beautifully describes it, we've been pounded
    into instantly-available pancakes, becoming the unpredictable but
    statistically critical synapses in the whole Gödel-to-Google net. Does
    the resulting mind (as Richardson would have it) belong to us? Or does
    it belong to something else?

    THE REALITY CLUB: [11]Kevin Kelly, [12]Jaron Lanier, [13]Steven
    Johnson, [14]Marvin Minsky, [15]Douglas Rushkoff, [16]Roger Schank,
    [17]James O'Donnell, and [18]Rebecca Goldstein respond to Richard
    Foreman and George Dyson.

                                     ___

    Introduction
    In early 2001, avant-garde playwright and director Richard Foreman
    called to inquire about Edge's activities. He had noticed the optimism
    of the Edge crowd and the range of intellectual interests and
    endeavors, and felt that he needed to begin a process to explore
    these areas. Then 9/11 happened. We never had our planned meeting.

    Several years have gone by, and Foreman recently opened a new play
    for his Ontological-Hysteric Theater at St. Mark's Church in the
    Bowery in New York City. He also announced that the play--The Gods Are
    Pounding My Head--would be his last.

    Foreman presents Edge with a statement and a question. The statement
    appears in his program and frames the sadness of The Gods Are Pounding
    My Head. The question is an opening to the future. With both, Foreman
    belatedly hopes to engage Edge contributors in a discussion, and in
    this regard George Dyson has written the initial response, entitled
    "The Gödel-to-Google Net".

    [19]--JB

    RICHARD FOREMAN, Founder-Director, Ontological-Hysteric Theater, has
    written, directed and designed over fifty of his own plays, both in New
    York City and abroad. Five of his plays have received "OBIE" awards
    as best play of the year, and he has received five other OBIEs for
    directing and for 'sustained achievement'.

    [20]RICHARD FOREMAN's [21]Edge Bio Page
      _________________________________________________________________

    THE PANCAKE PEOPLE, OR, "THE GODS ARE POUNDING MY HEAD"

    A Statement
    When I began rehearsing, I thought The Gods Are Pounding My Head would
    be totally metaphysical in its orientation. But as rehearsals
    continued, I found echoes of the real world of 2004 creeping into many
    of my directorial choices. So be it.

    Nevertheless, this very--to my mind--elegiac play does delineate my
    own philosophical dilemma. I come from a tradition of Western culture
    in which the ideal (my ideal) was the complex, dense and
    "cathedral-like" structure of the highly educated and articulate
    personality--a man or woman who carried inside themselves a personally
    constructed and unique version of the entire heritage of the West.

    And such multi-faceted evolved personalities did not hesitate--
    especially during the final period of "Romanticism-Modernism"--to cut
    down, like lumberjacks, large forests of previous achievement in
    order to heroically stake new claim to the ancient inherited land--
    this was the ploy of the avant-garde.

    But today, I see within us all (myself included) the replacement of
    complex inner density with a new kind of self-evolving under the
    pressure of information overload and the technology of the "instantly
    available". A new self that needs to contain less and less of an inner
    repertory of dense cultural inheritance--as we all become "pancake
    people"--spread wide and thin as we connect with that vast network of
    information accessed by the mere touch of a button.

    Will this produce a new kind of enlightenment or
    "super-consciousness"? Sometimes I am seduced by those proclaiming
    so--and sometimes I shrink back in horror at a world that seems to
    have lost the thick and multi-textured density of deeply evolved
    personality.

    But, at the end, hope still springs eternal...

                                     ___

    A Question
    Can computers achieve everything the human mind can achieve?
    Human beings make mistakes. In the arts--and in the sciences, I
    believe?--those mistakes can often open doors to new worlds, new
    discoveries and developments--the mistake itself becoming the basis of
    a whole new world of insights and procedures.
    Can computers be programmed to 'make mistakes' and turn those mistakes
    into new and heretofore unimaginable developments?

                        ______________________________

    As Richard Foreman so beautifully describes it, we've been pounded
    into instantly-available pancakes, becoming the unpredictable but
    statistically critical synapses in the whole Gödel-to-Google net. Does
    the resulting mind (as Richardson would have it) belong to us? Or does
    it belong to something else?


    THE GÖDEL-TO-GOOGLE NET
    George Dyson

    GEORGE DYSON, science historian, is the author of Darwin Among the
    Machines.

    [22]George Dyson's Edge Bio Page

                                     ___

    THE GÖDEL-TO-GOOGLE NET

    Richard Foreman is right. Pancakes indeed!

    He asks the Big Question, so I've enlisted some help: the Old
    Testament prophets Lewis Fry Richardson and Alan Turing; the New
    Testament prophets Larry Page and Sergey Brin.

    Lewis Fry Richardson's answer to the question of creative thinking by
    machines is a circuit diagram, drawn in the late 1920s and published
    in 1930, illustrating a self-excited, non-deterministic circuit with
    two semi-stable states, captioned "Electrical Model illustrating a
    Mind having a Will but capable of only Two Ideas."

    [Figure: Richardson's 1930 circuit diagram]
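
    As a reading aid only--not a claim about Richardson's actual
    circuit--here is a minimal Python sketch of such a two-state element:
    random "electrical noise" accumulates until it tips the element from
    one semi-stable state to the other, so the device unpredictably holds
    exactly one of two "ideas" at a time. The names and threshold below
    are invented for the illustration.

        import random

        def two_idea_mind(steps=30, threshold=3.0, seed=None):
            """Toy model: a noisy element that can hold only two 'ideas'."""
            rng = random.Random(seed)
            idea = 0        # which of the two semi-stable states is currently held
            charge = 0.0    # accumulated noise exciting the circuit
            history = []
            for _ in range(steps):
                charge += rng.gauss(0.0, 1.0)   # the noise source
                if abs(charge) > threshold:     # the semi-stable state gives way...
                    idea = 1 - idea             # ...and the "will" flips to the other idea
                    charge = 0.0
                history.append(idea)
            return history

        print(two_idea_mind(seed=1))  # an unpredictable run of 0s and 1s
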
    Machines that behave unpredictably tend to be viewed as
    malfunctioning, unless we are playing games of chance. Alan Turing,
    namesake of the infallible, deterministic, Universal machine,
    recognized (in agreement with Richard Foreman) that true intelligence
    depends on being able to make mistakes. "If a machine is expected to
    be infallible, it cannot also be intelligent," he argued in 1947,
    drawing this conclusion as a direct consequence of Kurt Gödel's 1931
    results.

    "The argument from Gödel's [theorem] rests essentially on the
    condition that the machine must not make mistakes," he explained in
    1948. "But this is not a requirement for intelligence." In 1949, while
    developing the Manchester Mark I for Ferranti Ltd., Turing included a
    random number generator based on a source of electronic noise, so that
    the machine could not only compute answers, but occasionally take a
    wild guess.

    "Intellectual activity consists mainly of various kinds of search,"
    Turing observed. "Instead of trying to produce a programme to simulate
    the adult mind, why not rather try to produce one which simulates the
    child's? Bit by bit one would be able to allow the machine to make
    more and more `choices' or `decisions.' One would eventually find it
    possible to program it so as to make its behaviour the result of a
    comparatively small number of general principles. When these became
    sufficiently general, interference would no longer be necessary, and
    the machine would have `grown up.'"
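
    A rough sketch of that idea (mine, not anything Turing wrote): a
    hill-climbing search over an invented landscape that mostly takes the
    cautious local step but occasionally hazards a random "wild guess,"
    much as the Mark I's noise source allowed. The landscape and all
    parameters are made up for the example.

        import random

        def landscape(x):
            # Two peaks separated by a valley: a low one near x = 3, a tall one near x = 15.
            return max(3 - abs(x - 3), 10 - abs(x - 15))

        def climb(steps=300, guess_rate=0.1, seed=0):
            rng = random.Random(seed)
            x = 0
            for _ in range(steps):
                if rng.random() < guess_rate:
                    candidate = rng.randint(-20, 20)      # the occasional deliberate "mistake"
                else:
                    candidate = x + rng.choice([-1, 1])   # the cautious, near-infallible step
                if landscape(candidate) >= landscape(x):  # keep only non-worsening moves
                    x = candidate
            return landscape(x)

        # The purely cautious searcher tops out on the small peak (value 3);
        # the one allowed to guess usually reaches the tall one (value 10).
        print(climb(guess_rate=0.0), climb(guess_rate=0.1))
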
    That's the Old Testament. Google is the New.

    Google (and its brethren metazoans) are bringing to fruition two
    developments that computers have been awaiting for sixty years. When
    John von Neumann's gang of misfits at the Institute for Advanced Study
    in Princeton fired up the first 32 x 32 x 40 bit matrix of random
    access memory, no one could have imagined that the original scheme for
    addressing these 40,960 ephemeral bits of information, conceived in
    the annex to Kurt Gödel's office, would now have expanded, essentially
    unchanged, to address all the information contained in all the
    computers in the world. The Internet is nothing more (and nothing
    less) than a set of protocols for extending the von Neumann address
    matrix across multiple host machines. Some 15 billion transistors are
    now produced every second, and more and more of them are being
    incorporated into devices with an IP address.

    As all computer users know, this system for Gödel-numbering the
    digital universe is rigid in its bureaucracy, and every bit of
    information has to be stored (and found) in precisely the right place.
    It is a miracle (thanks to solid-state electronics, and
    error-correcting coding) that it works. Biological information
    processing, in contrast, is based on template-based addressing, and is
    consequently far more robust. The instructions say "do X with the next
    copy of Y that comes around" without specifying which copy, or where.
    Google's success is a sign that template-based addressing is taking
    hold in the digital universe, and that processes transcending the von
    Neumann substrate are starting to grow. The correspondence between
    Google and biology is not an analogy, it's a fact of life. Nucleic
    acid sequences are already being linked, via Google, to protein
    structures, and direct translation will soon be underway.
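
    A toy contrast, using invented names and data (nothing below describes
    how Google or any cell actually stores things): rigid address-based
    lookup fails unless you cite exactly the right location, while
    template-based lookup simply takes the next copy that matches a
    pattern, wherever it happens to sit.

        # Address-based: every datum lives at one exact numbered location.
        memory = {0x2A3F: "nucleic acid sequence", 0x2A40: "protein structure"}
        print(memory[0x2A3F])   # must cite the precise address; 0x2A41 would raise KeyError

        # Template-based: "do X with the next copy of Y that comes around."
        stream = ["noise", "protein structure", "noise", "nucleic acid sequence"]

        def next_match(items, template):
            return next(item for item in items if template in item)

        print(next_match(stream, "nucleic"))  # finds a copy without knowing where it is
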
    So much for the address limitation. The other limitation of which von
    Neumann was acutely aware was the language limitation, that a formal
    language based on precise logic can only go so far amidst real-world
    noise. "The message-system used in the nervous system... is of an
    essentially statistical character," he explained in 1956, just before
    he died. "In other words, what matters are not the precise positions
    of definite markers, digits, but the statistical characteristics of
    their occurrence... Whatever language the central nervous system is
    using, it is characterized by less logical and arithmetical depth than
    what we are normally used to [and] must structurally be essentially
    different from those languages to which our common experience refers."

    Although Google runs on a nutrient medium of von Neumann processors,
    with multiple layers of formal logic as a base, the higher-level
    meaning is essentially statistical in character. What connects where,
    and how frequently, is more important than the underlying code that
    the connections convey.
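
    A minimal sketch of what "statistical in character" can mean here--a
    simplified PageRank-style iteration on a made-up four-node web, not
    Google's actual algorithm: the ranking that emerges depends only on
    which nodes link to which, and how often they are linked to, never on
    what the pages themselves say.

        links = {            # hypothetical toy web: page -> pages it links to
            "A": ["B", "C"],
            "B": ["C"],
            "C": ["A"],
            "D": ["C"],
        }
        rank = {page: 1.0 / len(links) for page in links}
        damping = 0.85
        for _ in range(50):  # repeated averaging over the link structure
            rank = {
                page: (1 - damping) / len(links)
                + damping * sum(rank[p] / len(links[p]) for p in links if page in links[p])
                for page in links
            }
        # "C", the most linked-to page, comes out on top.
        print(sorted(rank, key=rank.get, reverse=True))
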
    As Richard Foreman so beautifully describes it, we've been pounded
    into instantly-available pancakes, becoming the unpredictable but
    statistically critical synapses in the whole Gödel-to-Google net. Does
    the resulting mind (as Richardson would have it) belong to us? Or does
    it belong to something else?

    Turing proved that digital computers are able to answer most--but not
    all--problems that can be asked in unambiguous terms. They may,
    however, take a very long time to produce an answer (in which case you
    build faster computers) or it may take a very long time to ask the
    question (in which case you hire more programmers). This has worked
    surprisingly well for sixty years.

    Most of real life, however, inhabits the third sector of the
    computational universe: where finding an answer is easier than
    defining the question. Answers are, in principle, computable, but, in
    practice, we are unable to ask the questions in unambiguous language
    that a computer can understand. It's easier to draw something that
    looks like a cat than to describe what, exactly, makes something look
    like a cat. A child scribbles indiscriminately, and eventually
    something appears that happens to resemble a cat. A solution finds the
    problem, not the other way around. The world starts making sense, and
    the meaningless scribbles are left behind.

    "An argument in favor of building a machine with initial randomness is
    that, if it is large enough, it will contain every network that will
    ever be required," advised Turing's assistant, cryptanalyst Irving J.
    Good, in 1958. Random networks (of genes, of computers, of people)
    contain solutions, waiting to be discovered, to problems that need not
    be explicitly defined. Google has answers to questions no human being
    may ever be able to ask.
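
    An illustrative sketch of that claim (my own toy example, with an
    arbitrary made-up "problem"): generate a large random pool first and
    only afterwards pose a question; if the pool is big enough, answers to
    the never-anticipated question are already sitting in it, waiting to
    be found.

        import random

        rng = random.Random(42)
        # Random "scribbles," produced with no problem in mind.
        pool = ["".join(rng.choice("01") for _ in range(8)) for _ in range(500)]

        # The problem arrives later and played no part in generating the pool.
        def problem(s):
            return s.startswith("101") and s.endswith("1")

        solutions = [s for s in pool if problem(s)]
        print(len(solutions), solutions[:3])  # the answers existed before the question was asked
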
    Operating systems make it easier for human beings to operate
    computers. They also make it easier for computers to operate human
    beings. (Resulting in Richard Foreman's "pancake effect.") These views
    are complementary, just as the replication of genes helps reproduce
    organisms, while the reproduction of organisms helps replicate genes.
    Same with search engines. Google allows people with questions to find
    answers. More importantly, it allows answers to find questions. From
    the point of view of the network, that's what counts. For obvious
    reasons, Google avoids the word "operating system." But if you are
    ever wondering what an operating system for the global computer might
    look like (or a true AI), a primitive but fully metazoan system like
    Google is the place to start.

    Richard Foreman asked two questions. The answer to his first question
    is no. The answer to his second question is yes.
      _________________________________________________________________

    Kevin Kelly:

    "Can computers achieve everything the human mind can achieve?"
    Computers can't, but the children of computers will.

    [23]KEVIN KELLY is Editor-At-Large, Wired; Author, Out of Control: The
    New Biology of Machines, Social Systems, and the Economic World.
      _________________________________________________________________

    The only way to deal with other people's brains or bodies sanely is to
    grant them liberty as far as you're concerned, but to not lose hope
    for them. Each person ought to decide whether to be a pancake or not,
    and some of those pre-pancakes Foreman misses were actually vacuous
    soufflés anyway. Remember? There are plenty of creamy rich three
    dimensional digitally literate people out there, even a lot of young
    ones. There is a lot of hope and beauty in digital culture, even if
    the prevalent fog is sometimes heavy enough to pound your head.

    Jaron Lanier:

    In the 1990s, I used to complain about the "suffocating nerdiness and
    blandness" of Silicon Valley. This was how the pioneer days of Richard
    Foreman's pancake personhood felt to me. I fled to live in New York
    City precisely for the antidote of being around venues like Foreman's
    Ontological-Hysteric Theater and his wonderful shows. But computer
    culture broke out of its cage and swallowed Manhattan whole only a few
    years later.
    The very first articulate account of what information technology would
    be like was written by E.M. Forster in 1909, in a story called "The
    Machine Stops". The characters have something like the Internet with
    video phones and the web and email and the whole shebang and they
    become pancake people, but realize it, and long for a way out, which
    ultimately involves a left-wing tableau of the machine smashed. The
    pancakes walk outside into a pastoral scene, and the story ends. We
    don't really learn if they leaven.

    Computer culture's reigning cool/hip wing of the moment, the free
    software--or "open source"--movement, uses the idea of the Cathedral
    as a metaphorical punching bag. In a famous essay by Eric Raymond
    ("The Cathedral and the Bazaar"), the Cathedral is compared
    unfavorably to an anarchic village market, and the idea is that true
    brilliance is to be found in the "emergent" metapersonal wisdom of
    neo-Darwinian competition. The Cathedral is derided as a monument to a
    closed, elitist, and ultimately constricting kind of knowledge. It's a
    bad metaphor.

    All this supports Foreman's pancake premise, but I recommend adding a
    wing to Foreman's mental cathedral. In this wing there would be a
    colossal fresco of two opposing armies. On one side, there would be a
    group led by Doug Engelbart. He'd be surrounded by some eccentric
    characters such as the late Jef Raskin, Ted Nelson, David Gelernter,
    Alan Kay, Larry Tesler, Andy van Dam, Ben Shneiderman, among others.
    They are facing an opposing force made up of both robots and people
    and mechochimeras.

    The first group consists of members of the humanist tradition in
    computer science, and they are people that Foreman might enjoy. They are
    not pancakes and they don't make others into pancakes. They are no
    more the cause of mental shrinkage than the written word (despite
    Plato's warnings to the contrary) or Gutenberg.

    The only way to deal with other people's brains or bodies sanely is to
    grant them liberty as far as you're concerned, but to not lose hope
    for them. Each person ought to decide whether to be a pancake or not,
    and some of those pre-pancakes Foreman misses were actually vacuous
    soufflés anyway. Remember? There are plenty of creamy rich three
    dimensional digitally literate people out there, even a lot of young
    ones. There is a lot of hope and beauty in digital culture, even if
    the prevalent fog is sometimes heavy enough to pound your head.

    If Foreman is serious about quitting the theater, he will be missed.
    But that's not a reason to offer computers, arbiters on their own of
    nothing but insubstantiality, the power to kick his butt and pound his
    head. The only reality of a computer is the person on the other side
    of it.

    [24]JARON LANIER is a Computer Scientist and Musician.
      _________________________________________________________________

    ...the kind of door-opening exploration that Google offers is in fact
    much more powerful and unpredictable than previous modes of
    exploration. It's a lot easier to stumble across something totally
    unexpected--but still relevant and interesting--using Google than it
    is walking through a physical library or a bookstore. A card catalogue
    is a terrible vehicle for serendipity. But hypertext can be a
    wonderful one. You just have to use it the right way.

    Steven Johnson:

    I think it's a telling sign of how far the science of information
    retrieval has advanced that we're seriously debating the question of
    whether computers can be programmed to make mistakes. Rewind the tape
    8 years or so--post-Netscape, pre-Google--and the dominant complaint
    would have been that the computers were always making mistakes: you'd
    plug a search query into AltaVista and you'd get 53,000 results, none
    of which seemed relevant to what you were looking for. But sitting
    here now in 2005, we've grown so accustomed to Google's ability to
    find the information we're looking for that we're starting to yearn
    for a little fallibility.

    But the truth is most of our information tools still have a fuzziness
    built into them that can, in Richard Foreman's words, "often open
    doors to new worlds." It really depends on how you choose to use the
    tool. Personally, I have two modes of using Google: one very directed
    and goal-oriented, the other more open-ended and exploratory.
    Sometimes I use Google to find a specific fact: an address, the
    spelling of a name, the number of neurons estimated to reside in the
    human brain, the dates of the little ice age. In those situations, I'm
    not looking for mistakes, and thankfully Google's quite good at
    avoiding them. But I also use Google in a far more serendipitous way,
    when I'm exploring an idea or a theme or an author's work: I'll start
    with a general query and probe around a little and see what the oracle
    turns up; sometimes I'll follow a trail of links out from the original
    search; sometimes I'll return and tweak the terms and start again.
    Invariably, those explorations take me to places I wasn't originally
    expecting to go--and that's precisely why I cherish them. (I have a
    similar tool for exploring my own research notes--a program called
    DevonThink that lets me see semantic associations between the
    thousands of short notes and quotations that I've assembled on my hard
    drive.)

    In fact, I would go out on a limb here and say that the kind of
    door-opening exploration that Google offers is in fact much more
    powerful and unpredictable than previous modes of exploration. It's a
    lot easier to stumble across something totally unexpected--but still
    relevant and interesting--using Google than it is walking through a
    physical library or a bookstore. A card catalogue is a terrible
    vehicle for serendipity. But hypertext can be a wonderful one. You
    just have to use it the right way.
    [25]STEVEN JOHNSON, columnist, Discover; Author, Emergence: The Connected
    Lives of Ants, Brains, Cities, and Software.
      _________________________________________________________________

    I don't see any basic change; there always was too much information.
    Fifty years ago, if you went into any big library, you would have been
    overwhelmed by the amounts contained in the books therein.
    Furthermore, that "touch of a button" has improved things in two ways:
    (1) it has changed the time it takes to find a book from perhaps
    several minutes into several seconds, and (2) in the past it usually
    took many minutes, or even hours, to find what you wanted inside
    that book--but now a computer can help you search through the
    text, and I see this as nothing but good.

    Marvin Minsky:

    Mr. Foreman complains that he is being replaced (by "the pressure of
    information overload") with "a new self that needs to contain less and
    less of an inner repertory of dense cultural inheritance" because he
    is connected to "that vast network of information accessed by the mere
    touch of a button."

    I think that this is ridiculous because I don't see any basic change;
    there always was too much information. Fifty years ago, if you went
    into any big library, you would have been overwhelmed by the amounts
    contained in the books therein. Furthermore, that "touch of a button"
    has improved things in two ways: (1) it has changed the time it takes
    to find a book from perhaps several minutes into several seconds, and
    (2) in the past it usually took many minutes, or even hours, to find
    what you wanted inside that book--but now a computer can help
    you search through the text, and I see this as nothing but good.

    Indeed, it seems to me that only one thing has gone badly wrong. I do
    not go to libraries any more, because I can find most of what I want
    by using that wonderful touch of a button! However, the copyright laws
    have gotten worse--and I think that the best thoughts still are in
    books because, frequently, in those ancient times, the authors
    developed their ideas for years before they started to publicly
    babble. Unfortunately, not much of that stuff from the past fifty
    years is in the public domain, because of copyrights.

    So, in my view, it is not the gods, but Foreman himself who has been
    pounding on his own head. Perhaps if he had stopped longer to think,
    he would have written something more sensible. Or on second thought,
    perhaps he would not--if, in fact, he actually has been replaced.

    [26]MARVIN MINSKY is a mathematician and computer scientist; Cofounder
    of MIT's Artificial Intelligence Laboratory; Author, The Society of
    Mind.
      _________________________________________________________________

    We give up the illusion of our power as deriving from some notion of
    the individual collecting data, and find out that having access to data
    through our network-enabled communities gives us an entirely more
    living flow of information that is appropriate to the ever changing
    circumstances surrounding us. Instead of growing high, we grow wide.
    We become pancake people.

    Douglas Rushkoff:
    I don't think it's the computer itself enabling the pancake people,
    but the way networked computers give us access to other people. It's
    not the data--for downloaded data is just an extension of the wealthy
    gentleman in his library, enriching himself as a "self." What creates
    the pancake phenomenon is our access to other people, and the
    corresponding dissolution of our perception of knowledge as an
    individual's acquisition.

    Foreman is hinting at a "renaissance" shift I've been studying for the
    past few years.

    The original Renaissance invented the individual. With the development
    of perspective in painting came the notion of perspective in
    everything. The printing press fueled this even further, giving
    individuals the ability to develop their own understanding of texts.
    Each man now had his own take on the world, and a person's storehouse
    of knowledge and arsenal of techniques were the measure of the man.

    The more I study the original Renaissance, the more I see our own era
    as having at least as much renaissance character and potential. Where
    the Renaissance brought us perspective painting, the current one
    brings virtual reality and holography. The Renaissance saw humanity
    circumnavigating the globe; in our own era we've learned to orbit it
    from space. Calculus emerged in the 17th century, while systems theory
    and chaos math emerged in the 20th. Our analog to the printing press
    is the Internet, our equivalent of the sonnet and extended metaphor is
    hypertext.

    Renaissance innovations all involve an increase in our ability to
    contend with dimension: perspective. Perspective painting allowed us
    to see three dimensions where there were previously only two.
    Circumnavigation of the globe changed the world from a flat map to a
    3D sphere. Calculus allowed us to relate points to lines and lines to
    objects; integrals move from x to x-squared, to x-cubed, and so on.
    The printing press promoted individual perspectives on religion and
    politics. We all could sit with a text and come up with our own,
    personal opinions on it. This was no small shift: it's what led to the
    Protestant wars, after all.

    Out of this newfound experience of perspective was born the notion of
    the individual: the Renaissance Man. Sure, there were individual
    people before the Renaissance, but they existed mostly as parts of
    small groups. With literacy and perspective came the abstract notion
    of the person as a separate entity. This idea of a human being as a
    "self," with independent will, capacity, and agency, was pure
    Renaissance--a rebirth and extension of the Ancient Greek idea of
    personhood. And from it, we got all sorts of great stuff like the
    autonomy of the individual, agency, and even democracy and the
    republic. The right to individual freedom is what led to all those
    revolutions.

    But thanks to new emphasis on the individual, it was also during the
    first great Renaissance that we developed the modern concept of
    competition. Authorities became more centralized, and individuals
    competed for how high they could rise in the system. We like to think
    of it as a high-minded meritocracy, but the rat-race that ensued only
    strengthened the authority of central command. We learned to compete for
    resources and credit made artificially scarce by centralized banking
    and government.

    While our renaissance also brings with it a shift in our relationship
    to dimension, the character of this shift is different. In a
    hologram, fractal, or even an Internet web site, perspective is no
    longer about the individual observer's position; it's about that
    individual's connection to the whole. Any part of a holographic plate
    recapitulates the whole image; bringing all the pieces together
    generates greater resolution. Each detail of a fractal reflects the
    whole. Web sites live not by their own strength but the strength of
    their links. As Internet enthusiasts like to say, the power of a
    network is not the nodes, it's the connections.

    That's why new models for both collaboration and progress have emerged
    during our renaissance--ones that obviate the need for competition
    between individuals, and instead value the power of collectivism. The
    open source development model, shunning the corporate secrets of the
    competitive marketplace, promotes the free and open exchange of the
    codes underlying the software we use. Anyone and everyone is invited
    to make improvements and additions, and the resulting projects--like
    the Firefox browser--are more nimble, stable, and user-friendly.
    Likewise, the development of complementary currency models, such as
    Ithaca Hours, allows people to agree together on what their goods and
    services are worth to one another without involving the Fed. They
    don't need to compete for currency in order to pay back the central
    creditor--currency is an enabler of collaborative efforts rather than
    purely competitive ones.

    For while the Renaissance invented the individual and spawned many
    institutions enabling personal choices and freedoms, our renaissance
    is instead reinventing the collective in a new context. Originally,
    the collective was the clan or the tribe--an entity defined no more by
    what members had in common with each other than by what they had in
    opposition to the clan or tribe over the hill.

    Networks give us a new understanding of our potential relationships to
    one another. Membership in one group does not preclude membership in a
    myriad of others. We are all parts of a multitude of overlapping
    groups with often paradoxically contradictory priorities. Because we
    can contend with having more than one perspective at a time, we
    needn't force them to compete for authority in our hearts and
    minds--we can hold them all, provisionally. That's the beauty of
    renaissance: our capacity to contend with multiple dimensions is
    increased. Things don't have to be just one way or directed by some
    central authority, alive, dead or channeled. We have the capacity to
    contend with spontaneous, emergent reality.

    We give up the illusion of our power as deriving from some notion of
    the individual collecting data, and find out that having access to data
    through our network-enabled communities gives us an entirely more
    living flow of information that is appropriate to the ever changing
    circumstances surrounding us. Instead of growing high, we grow wide.
    We become pancake people.

    [27]DOUGLAS RUSHKOFF is a media analyst; Documentary Writer; Author,
    Media Virus.
      _________________________________________________________________

    As to Dyson's remarks: "Turing proved that digital computers are able
    to answer most--but not all--problems that can be asked in unambiguous
    terms." Did he? I missed that. Maybe he proved that computers could
    follow instructions, which is neither here nor there. It is difficult
    to give instructions about how to learn new stuff or get what you
    want. Google's "allowing people with questions to find answers" is
    nice but irrelevant. The Encyclopedia Britannica does that as well and
    no one makes claims about its intelligence or draws any conclusion
    whatever from it. And, Google is by no means an operating system--I
    can't even imagine what Dyson means by that or does he just not know
    what an operating system is?

    Roger Schank:
    When I hear people talk about artificial intelligence (AI), I am
    constantly astounded by people who use computers but who really don't
    understand them at all. I shouldn't be surprised by most folks' lack of
    comprehension, I suppose, since the people inside AI often fail to get
    it as well. I recently attended a high-level meeting in Washington
    where the AI people and the government people were happily dreaming
    about what computers will soon be able to do and promising that they
    would soon make it happen when they really had no idea what was
    involved in what they were proposing. So, that being said, let me talk
    simply about what it would mean and what it would look like for a
    computer to be intelligent.

    Simple point number 1: A smart computer would have to be able to
    learn.

    This seems like an obvious idea. How smart can you be if every
    experience seems brand new? Each experience should make you smarter,
    no? If that is the case, then any intelligent entity must be capable of
    learning from its own experiences, right?

    Simple point number 2: A smart computer would need to actually have
    experiences. This seems obvious too and follows from simple point
    number 1. Unfortunately, this one isn't so easy. There are two reasons
    it isn't so easy. The first is that real experiences are complex, and
    the typical experience that today's computers might have is pretty
    narrow. A computer that walked around the moon and considered
    seriously what it was seeing and decided where to look for new stuff
    based on what it had just seen would be having an experience. But,
    while current robots can walk and see to some extent, they aren't
    figuring out what to do next and why. A person is doing that. The best
    robots we have can play soccer. They play well enough but really not
    all that well. They aren't doing a lot of thinking. So there really
    aren't any computers having much in the way of experiences right now.

    Could there be computer experiences in some future time? Sure. What
    would they look like? They would have to look a lot like human
    experiences. That is, the computer would have to have some goal it was
    pursuing and some interactions caused by that goal that caused it to
    modify what it was up to in mid-course and think about a new strategy
    to achieve that goal when it encountered obstacles to the plans it had
    generated to achieve that goal. This experience might be
    conversational in nature, in which case it would need to understand
    and generate complete natural language, or it might be physical in
    nature, in which case it would need to be able to get around and see,
    and know what it was looking at. This stuff is all still way too hard
    today for any computer. Real experiences, ones that one can learn
    from, involve complex social interactions in a physical space, all of
    which is being processed by the intelligent entities involved. Dogs
    can do this to some extent. No computer can do it today. Tomorrow
    maybe.

    The problem here is with the goal. Why would a computer have a goal it
    was pursuing? Why do humans have goals they are pursuing? They might
    be hungry or horny or in need of a job, and that would cause goals to
    be generated, but none of this fits computers. So, before we begin to
    worry about whether computers would make mistakes, we need to
    understand that mistakes come from complex goals not trivially
    achieved. We learn from the mistakes we make when the goal we have
    failed at satisfying is important to us and we choose to spend some
    time thinking about what to do better next time. To put this another
    way, learning depends upon failure and failure depends upon having had
    a goal one cares about achieving and that one is willing to spend time
    thinking about how to achieve it next time using another plan.
    Two-year-olds do this when they realize saying "cookie" works better
    than saying "wah" when they want a cookie.

    The second part of the experience point is that one must know one has
    had an experience and know the consequences of that experience with
    respect to one's goals in order to even think about improving. In
    other words, a computer that thinks would be conscious of what had
    happened to it, or would be able to think it was conscious of what had
    happened to it which may not be the same thing.

    Simple point number 3: Computers that are smart won't look like you
    and me.

    All this leads to the realization that human experience depends a lot
    on being human. Computers will not be human. Any intelligence they
    ever achieve will have to come by virtue of their having had many
    experiences that they have processed and understood and learned from
    that have helped them better achieve whatever goals they happen to
    have.

    So, to Foreman's question: Computers will not be programmed to make
    mistakes. They will be programmed to attempt to achieve goals and to
    learn from experience. They will make mistakes along the way, as does
    any intelligent entity.

    As to Dyson's remarks: "Turing proved that digital computers are able
    to answer most--but not all--problems that can be asked in unambiguous
    terms." Did he? I missed that. Maybe he proved that computers could
    follow instructions, which is neither here nor there. It is difficult
    to give instructions about how to learn new stuff or get what you
    want. Google's "allowing people with questions to find answers" is
    nice but irrelevant. The Encyclopedia Britannica does that as well and
    no one makes claims about its intelligence or draws any conclusion
    whatever from it. And, Google is by no means an operating system--I
    can't even imagine what Dyson means by that or does he just not know
    what an operating system is?

    People have nothing to fear from smart machines. With the current
    state of understanding of AI, I suspect they won't have to even see any
    smart machines any time soon. Foreman's point was about people, after
    all, and people are being changed by the computer's ubiquity in their
    lives. I think the change is, like all changes in the nature of man's
    world, interesting and potentially profound, and probably for the
    best. People may well be more pancake-like, but the syrup is going to
    be very tasty.

    [28]ROGER SCHANK is a Psychologist & Computer Scientist; Author,
    Designing World-Class E-Learning.
      _________________________________________________________________

    I have trouble imagining what students will know fifty years from now,
    when devices in their hands spare them the need to know multiplication
    tables or spelling or dates of the kings of England. That probably
    leaves us time and space for other tasks, but the sound of the gadgets
    chasing us is palpable. What humans will be like, accordingly, in 500
    years is just beyond our imagining.

    James O'Donnell:

    Can computers achieve everything the human mind can achieve? Can they,
    in other words, even make fruitful mistakes? That's an ingenious
    question.

    Of course, computers never make mistakes--or rather, a computer's
    "mistake" is a system failure, a bad chip or a bad disk or a power
    interruption, resulting in some flamboyant mis-step, but computers can
    have error-correcting software to rescue them from those. Otherwise, a
    computer always does the logical thing. Sometimes it's not the thing
    you wanted or expected, and so it feels like a mistake, but it usually
    turns out to be a programmer's mistake instead.

    It's certainly true that we are hemmed in constantly by technology.
    The technical wizardry in the graphic representation of reality that
    generated a long history of representative art is now substantially
    eclipsed by photography and later techniques of imaging and
    reproduction. Artists and other humans respond by doing more and more
    creatively in the zone that is still left un-competed, but if I want
    to know what George W. Bush looks like, I don't need to wait for a
    Holbein to track him down. We may reasonably expect to continue to be
    hemmed in. I have trouble imagining what students will know fifty
    years from now, when devices in their hands spare them the need to
    know multiplication tables or spelling or dates of the kings of
    England. That probably leaves us time and space for other tasks, but
    the sound of the gadgets chasing us is palpable. What humans will be
    like, accordingly, in 500 years is just beyond our imagining.

    So I'll ask what I think is the limit case question: can a computer be
    me? That is to say, could there be a mechanical device that embodied
    my memory, aptitudes, inclinations, concerns, and predilections so
    efficiently that it could replace me? Could it make my mistakes?

    I think I know the answer to that one.

    [29]JAMES O'DONNELL is a classicist; cultural historian; Provost,
    Georgetown University; Author, Avatars of the Word.
      _________________________________________________________________

    The complexity suddenly facing us can feel overwhelming and perhaps
    such souls as Lugubrioso's will momentarily shrink at how much they
    must master in order to appropriate this complexity and make it their
    own. It's that shrinkage that Lugubrioso is feeling, confusing his
    own inadequacy to take in the new forms of knowing with the inadequacy
    of the forms themselves. Google doesn't kill people, Rosa admonished
    him. People kill people.

    Rebecca Goldstein:
    I admit that I'm of two distinct minds on the question posed by
    Richard Foreman as to whether the technological explosion has led to
    an expansion or a flattening of our selves. In fact, a few years ago
    when I was invited to represent the humanities at Princeton
    University's celebration of the centenary of their graduate studies, I
    ended up writing a dialogue to express my inner bifurcation. My way of
    posing the question was to wonder whether the humanities, those
    "soul-explorations," had any future at all, given that the soul had
    been all but pounded out of existence, or in any case pounded into a
    very attenuated sort of existence.

    My one character, dubbed Lugubrioso, had a flair for elaborate
    phraseology that rivaled the Master's, and he turned it to deploring
    the loss of the inner self's solemn, silent spaces, the hushed
    corridors where the soul communes with itself, chasing down the
    subtlest distinctions of fleeting consciousness, catching them in
    finely wrought nets of words, each one contemplated for both its
    precise meaning and euphony, its local and global qualities, one's
    flight after that expressiveness which is thought made surer and
    fleeter by the knowledge of all the best that had been heretofore
    thought, the cathedral-like sentences (to change the metaphor) that
    arose around the struggle to do justice to inexhaustible complexity
    themselves making of the self a cathedral of consciousness.
    (Lugubrioso spoke in long sentences.)

    He contemplated with shuddering horror the linguistic impoverishment
    of our technologically abundant lives, arguing that privation of
    language is both an effect and a cause of privation of thought. Our
    vocabularies have shrunk and so have we. Our expressive styles have
    lost all originality and so have we. The passivity of our image-heavy
    forms of communication--too many pictures, not enough words,
    Lugubrioso cried out, pointing his ink-stained finger at the popular
    culture--substitutes an all-too-pleasant anodyne for the rigors of
    thinking itself, and our weakness for images encourages us to reduce
    people, too--even our very own selves--to images, which is why we are
    drunk on celebrityhood and feel ourselves to exist only to the extent
    that we exist for others.

    What is left but image when the self has stopped communing with
    itself, so that in a sad gloss on Bishop Berkeley's apothegm, our esse
    has become percipi, our essence is to be perceived? Even the torrents
    of words posted on "web-related locations" (the precise nature of
    which Lugubrioso had kept himself immaculately ignorant) are not words
    that are meant for permanence; they are pounded out on keyboards at
    the rate at which they are thought, and will vanish into oblivion just
    as quickly, quickness and forgetfulness being of the whole essence of
    the futile affair, the long slow business of matching coherence to
    complexity unable to keep up, left behind in the dust.

    My other character was Rosa and she pointed out that at the very
    beginning of this business that Lugubrioso kept referring to, in
    stentorian tones, as "Western Civilization," Plato deplored the
    newfangled technology of writing and worried that it tolled the death
    of thought. A book, Plato complained in Phaedrus, can't answer for
    itself. (Rosa found the precise quotation on the web. She found
    'stentorian,' too, when she needed it, on her computer's thesaurus.
    She sort of knew the word, thought it might be "sentorian," or
    "stentorious"but she'll know where to find it if she ever needs it
    again, a mode of knowing that Lugubrioso regards as epistemologically
    damnable.)

    When somebody questions a book, Plato complained, it just keeps
    repeating the same thing over and over again. It will never, never, be
    able to address the soul as a living breathing interlocutor can, which
    is why Plato, committing his thoughts to writing with grave
    misgivings, adopted the dialogue form, hoping to approximate something
    of the life of real conversation. Plato's misgivings are now
    laughable--nobody is laughing harder than Lugubrioso at the thought
    that books diminish rather than enhance the inner life--and so, too,
    will later generations laugh at Lugubrioso's lamentations that the
    cognitive enhancements brought on by computers will make of us less
    rather than more.

    Human nature doesn't change, Rosa tried to reassure Lugubrioso,
    backing up her claims with the latest theories of evolutionary
    psychology propounded by Steven Pinker et al. Human nature is
    inherently expansive and will use whatever tools it develops to grow
    outward into the world. The complexity suddenly facing us can feel
    overwhelming and perhaps such souls as Lugubrioso's will momentarily
    shrink at how much they must master in order to appropriate this
    complexity and make it their own. It's that shrinkage that Lugubrioso
    is feeling, confusing his own inadequacy to take in the new forms of
    knowing with the inadequacy of the forms themselves. Google doesn't
    kill people, Rosa admonished him. People kill people.

    Lugubrioso had a heart-felt response, but I'll spare you.

    [30]REBECCA GOLDSTEIN is a philosopher and novelist; Author,
    Incompleteness.

References

   11. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#kelly
   12. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#lanier
   13. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#johnson
   14. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#minsky
   15. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#rushkoff
   16. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#schank
   17. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#odonnell
   18. http://www.edge.org/3rd_culture/foreman05/foreman05_index.html#goldstein
   19. http://www.edge.org/3rd_culture/bios/brockman.html
   20. http://www.edge.org/3rd_culture/bios/foreman.html
   21. http://www.edge.org/3rd_culture/bios/foreman.html
   22. http://www.edge.org/3rd_culture/bios/dysong.html
   23. http://www.edge.org/3rd_culture/bios/kelly.html
   24. http://www.edge.org/3rd_culture/bios/lanier.html
   25. http://www.edge.org/3rd_culture/bios/johnson.html
   26. http://www.edge.org/3rd_culture/bios/minsky.html
   27. http://www.edge.org/3rd_culture/bios/rushkoff.html
   28. http://www.edge.org/3rd_culture/bios/schank.html
   29. http://www.edge.org/3rd_culture/bios/odonnell.html
   30. http://www.edge.org/3rd_culture/bios/goldstein.html


More information about the paleopsych mailing list