[ExI] Wall Street Journal on the Singularity Summit

Anne Corwin sparkle_robot at yahoo.com
Sun Sep 23 01:26:03 UTC 2007

OK, a few thoughts -- not all are necessarily relevant to the exact question asked, but I've been trying to put certain thoughts into words for a long time anyhow, and I figure this is as good an opportunity as any.

Firstly, it sounds like the guy who wrote the WSJ article didn't do much in the way of actually listening to the speakers.  There *is* a kind of millenarianist mystique surrounding the whole AGI/Singularity topic-space, but the thing is, once you get past that mystique (and the various fringe elements it tends to attract), there's plenty in the way of reasonable, coherent discussion. Not perfect discussion, mind you (I'll get to that later), but reasonable discussion nonetheless.

I think that reporters and various others tend to respond primarily to the mystique without bothering to explore beyond it -- probably because it makes for more exciting news copy to postulate throngs of "12 year old sci-fi addicts" than it does to describe how a guy from some computer company stood up and described neural architecture.  

It's clear that AGI folks in the public sphere are dealing with general anti-intellectualism as well as prejudice against nerds (and the accompanying desire some people have to disassociate themselves from anything potentially "nerdy"). I'm sure many people on this list can recall being picked on for being a "nerd" in junior high, and being told that their interests either didn't matter or were "stupid and pointless".  It isn't much of a stretch to consider that we're still living in a similar social demographic, albeit on a larger scale. 

Hence, assertions that it is somehow "weird" to spend a day in a room talking about robots and algorithms, but perfectly normal and acceptable to drive to a large round outdoor arena and pay to spend the day stuffing yourself with nitrate-soaked dead animals and brain-liquefying fermented corn juice while watching grown men attack one another for possession of an inflated, oblong piece of animal skin.  

I don't really care about nerd stigma.  I've been a nerd my whole life, and I like how my brain works, and I am going to be interested in my interests regardless of the fact that it's more common for 28-year-old women to be curious about Britney's latest hairdo than about robots or neuroscience or computing.  Nerd stigma exists, and probably informs some of the less sophisticated critiques of topics that tend to attract nerds, but I don't think nerds need to give it the time of day.  We have better things to do with our time than protest that we're "not really sci-fi addicts".  Many of us, myself included, probably *are* sci-fi addicts.  More power to us. :P

But -- then there's the whole "distrust of privilege" factor, which is quite a bit more complicated and serious than the junior-high-level "you're a NERD!" jibe.  Not all criticisms of AGI focus and "singularitarianism" can be dismissed as *mere* anti-intellectualism.  

I can't speak for anyone else, obviously, but personally I am painfully aware of how "privileged" I am.  Despite the fact that being (a) on the autistic spectrum, and (b) female have meant that I've lacked certain kinds of power while growing up, I am still sitting here typing this via my always-on Internet connection at 5 PM on a Saturday evening.  Which means that I am not toiling somewhere in a field, or being ordered around by some dictatorial husband I was sold and married off to at the age of thirteen.  

This is not something I take for granted.  

Just by being born in a developed nation (USA) into a family that cared about things like education, I started out life with a massive head start as compared to a whopping percentage of the world's population.  

I'm sure the same is true for most AI researchers, and most people on this list.  We ARE privileged, and we have a certain degree of power that millions do not share.  This cannot be ignored, even if the people accusing us of privilege-blindness are themselves hypocrites (seeing as writers for prestigious newspapers can't exactly claim membership on the poverty wagon).  Where you start out in life, and what advantages you have as a result of starting out there, can shape the course of your life in extremely significant ways.  And like it or not, the mindset that people go into AGI development with probably doesn't include a sense of what it might have been like to grow up in a grass hut where your toilet consisted of a hole in the dirt.

I'm not saying here that we all ought to go off and live in grass huts.  Nor am I saying that we ought to air-drop questionnaires regarding preferred AGI features into third-world villages (obviously, those people would benefit far more from a freakin' sandwich).  Rather, I am saying that acknowledging privilege (and power) is an important component of any endeavor as ambitious as AGI development.  As in, people involved in such projects would do well to ask themselves (repeatedly and rigorously) what factors have enabled them to get involved in those projects in the first place.

This might sound like "applause lights", but I assure you it is not -- if you read this and come away from it thinking I'm trying to draw cheers from some particular political demographic, I've obviously done a poor job of communicating.  What I'm trying to say is that if any of us in this hyper-privileged techie demographic are going to claim that we can solve the world's major problems (including the potential for AI-borne risks) through investing in AGI research, we *cannot* assume that we can do this without learning about other cultures and socioeconomic populations.  We need to learn what systems in place in the real world contribute toward the oppression of some populations and the benefit of others.

And even if we *aren't* taking our privileges for granted, not acknowledging that we *have* those privileges could lead to serious blind spots in our idea of what a good AI ought to feature (or in our idea of what a "bad" AI might be capable of breaking).  

Now, I highly doubt that most newspaper reporters have a very sophisticated understanding of power and privilege dynamics.  

And I don't think that the mass media in general cares very much about truth -- they care more about getting stories that will sell papers, and marketability has very little to do with integrity.  

But if AGI folks are tired of being called elitists, or of defending themselves against accusations of being simultaneously wealthy and clueless, there are two obvious choices: (1) ignore the critics, or (2) figure out how to speak the language of the media well enough that future reports on AGI conferences land in the "science and technology" section of the paper, rather than in whatever section wannabe cultural anthropologists are presently using to tell tales of their travels among the weirdos.

It's been done before.  People used to laugh at the idea of "home computers", but at this point, computers that you can hold in the palm of your hand are old news.  

The way I see it, if there is any real substance to the AGI/Singularity discourse, people who are sufficiently perceptive will pick up on it eventually.  More so if someone actually builds something that makes an AGI seem more plausible (including an actual, if highly limited, AGI).

I don't know if there's much that can be done in the meantime aside from working busily in the lab to hopefully produce tangible results, and avoiding overwrought "THE END IS NIGH!" sentiments.  People are going to think whatever they want to think, so it would probably be best to devote maximum energies toward actual technical progress as opposed to toward "image management".  If the substance is there, respect will follow, albeit not on the timescale most here would probably prefer.  

- Anne

Tyler Emerson <tyleremerson at gmail.com> wrote: Feedback appreciated on this.


---------- Forwarded message ----------
From: The Singularity Institute Blog <tyleremerson at gmail.com>
Date: Sep 22, 2007 7:41 AM
Subject: The Singularity Institute Blog
To: tyleremerson at gmail.com

The Singularity Institute Blog
Summit Coverage in The Wall Street Journal raises questions
Posted: 21 Sep 2007 11:55 PM CDT
Earlier this week SIAI and the Singularity Summit got some major coverage in The Wall Street Journal.  Lee Gomes, the Portals columnist for The Journal, attended the Summit, and has some challenging thoughts about our movement and its perceived relevance to the business community and the public at large. 
 In his article, Gomes likens Singularitarians at times to 12-year-old sci-fi addicts, alien worshipers, and even gynephobics (don't tell my 3 daughters).   While it is always fun to play "knock the nerds" in the popular press, I think Gomes raises key issues that point out why we sometimes struggle for credibility outside of our safety net in The Valley. 
 As we start to organize our thoughts about next year's Singularity Summit, it is apparent that we need to focus more on bridging the knowledge and perception gaps between the scientific community, the business and investment community, and the public at large.  Our success in crossing this chasm over the next couple of years will dictate how successfully the mission of the Singularity Institute will be embraced by broader segments of humanity. 
 I'd like to open this discussion up to our community at large to get your ideas and feedback.  How do we stay true to the vision of Singularity Institute, and at the same time create a partnership with the business community that creates an exciting and positive perspective on what we can accomplish? And how do we shake some of the more adverse associations to the lunatic fringe? 
I look forward to your thoughts.  I've posted Lee's article below.  Leave a comment to this post or contact me directly at lamis at singinst.org.
 Reprinted from The Wall Street Journal
 The Singular Question Of Human vs. Machine Has a Spiritual Side
 The Wall Street Journal
 September 19, 2007; Page B1
 You can tell a lot about people from what they worry about. A few Saturdays ago, I spent the day in an auditorium full of fellow citizens concerned with "singularity." The word refers to the day when the intelligence of computers will exceed our own. 
 The auditorium was filled with people who listed many things that might occur with singularity, such as a human-machine synthesis into a new, superintelligent life-form. The date has been projected as anytime from nine to 40 years hence. 
 Singularity-believers say humanity urgently needs to begin preparing for this moment, if only to make sure that humans don't become kabobs at the first post-singularity tailgate party held by supersmart computers. There is even a Singularity Institute, bankrolled by Silicon Valley wealthoids. 
 The weekend session featured speeches, panel discussions and informal chatting. About 800 people were on hand, more, frankly, than I would have expected. Who but 12-year-old sci-fi addicts still fret over malevolent, superintelligent machines? Most of us, living every day with computers, appreciate how even the world's most powerful one not only is incapable of an autonomous thought, it can't even distinguish spam from real email. 
 To get to the singularity that we are supposed to be preparing for, we are going to need AGI, or Artificial General Intelligence, a topic the singularists go on about endlessly.
 A computer with AGI thinks and reasons the same way a human being does, only much more quickly. But don't singularity people know that AI researchers have been trying to make such machines since the 1950s, without much success? 
 It turns out, there is a schism between the AGI and the AI worlds. The AGI faction thinks AI researchers have sold out, abandoning their early dreams of "general" intelligence to concentrate on more attainable (and more lucrative) projects. 
 They're right. The machines today that recognize speech or play chess are one-trick wonders. Of course, AI researchers defend that approach by saying their early dreams of general intelligence were naïve.
 The singularists, though, don't seem bothered by those earlier AI failures; new approaches will bear fruit, they insist. They thus didn't think it a waste of either time or carbon offsets to be gathering at a conference to ask such questions as, "If you made a superintelligent robot, then forced it to work only for you, would that be slavery?" 
 Robots are just computers with moving parts, of course, but the public is still confused about them, just like they used to be about computers themselves. The Great Metallic Hope of the robotics industry, for example, is currently a small, round vacuum cleaner that ambles across the floor by itself. 
 A high-tech wonder? Actually, Consumer Reports said that even cheap vacuum cleaners did better than the first model. A little more of this, and no one will ever again worry about enslaving robots.
 There is another way of thinking about the obsession with robots. John Huntington, professor of English, University of Illinois, has studied the genre and says sci-fi authors, especially the early ones who wrote about robots or aliens, were working out their own unacknowledged anxieties about closer-to-home topics. 
 Most commonly, he said, these anxieties involved women, who were seen as becoming threatening as they gained social power. Racial and class tensions also were involved, he added.
 I have a supplemental theory: that the discussion of singularity involves a sublimated spiritual yearning for some form of eternal life and an all-powerful being, but one articulated by way of technical, secular discourse. 
 As it happens, there is considerable overlap between the singularity and the "life extension" communities. Ray Kurzweil, the best-known singularity writer, also co-wrote a lengthy guide to life extension. He once told me he expects literally to live forever — first by prolonging his life via a daily regimen that includes hundreds of pills and the nonstop consumption of green tea, then, once super-powerful computers arrive, by uploading his consciousness into one. 
 Singularists also have an affinity for the Search for Extraterrestrial Intelligence, or SETI, program, which scans the skies looking for other civilizations. Isn't that a longing by some for an intergalactic messiah? 
  Then, consider a poem read at the singularity conference that described an Aquarian Age scene in which humans and other mammals frolicked in a "cybernetic meadow … all watched over by machines of loving grace." Those computer protectors sound a lot like the guardian angels my grade-school nuns told us about. 
Years ago, a friend and I spent an evening with Arthur C. Clarke, the creator in "2001" of HAL, the malevolent computer of every singularist's nightmare. He brought along slides, showing himself with some astronauts and with the authors of the musical "Hair." 
 We talked about science and had our picture taken, which I still have. It proves that while I may have reached a different conclusion, at least I studied with the master.
extropy-chat mailing list
extropy-chat at lists.extropy.org

