[extropy-chat] Singularity Summit at Stanford, 5/13

Damien Sullivan phoenix at ugcs.caltech.edu
Fri Apr 14 02:59:36 UTC 2006


On Thu, Apr 13, 2006 at 03:56:37PM -0700, Eliezer S. Yudkowsky wrote:
> http://sss.stanford.edu/

> Among the issues to be addressed:
> 
> Bostrom: Will superintelligence help us reduce or eliminate existential 
> risks, such as the risk that advanced nanotechnology will be used by 
> humans in warfare or terrorism?
> 
> Doctorow: Will our technology serve us, or control us?
> 
> Drexler: Will productive nanosystems enable the development of more 
> intricate and complex productive systems, creating a feedback loop that 
> drives accelerating change?
> 
> Hofstadter: What is the likelihood of our being eclipsed by (or absorbed 
> into) a vast computational network of superminds, in the course of the 
> next few decades?
> 
> Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, 
> and how will humans stay in control?
> 
> More: Will our emotional, social, psychological, ethical intelligence 
> and self-awareness keep up with our expanding cognitive abilities?
> 
> Peterson: How can we safely bring humanity and the biosphere through the 
> Singularity?
> 
> Thrun: Where does AI stand in comparison to human-level skills, in light 
> of the recent autonomous robot race, the DARPA Grand Challenge?
> 
> Yudkowsky: How can we shape the intelligence explosion for the benefit 
> of humanity?

Heh.  Note how a majority of the questions clearly take the Singularity for
granted.  Oddly, Drexler's question is as open as Hofstadter's, though I expect
they'd give opposite answers.  Thrun's question could be that of a skeptic.

-xx- Damien X-) 
http://mindstalk.net 


