[ExI] Best case, was Hard Takeoff

Samantha Atkins sjatkins at mac.com
Mon Nov 29 04:58:10 UTC 2010


On Nov 28, 2010, at 10:25 AM, Stefano Vaj wrote:

> 2010/11/26 Michael Anissimov <michaelanissimov at gmail.com>:
>> Contrary to consensus, we have people in the transhumanist community calling
>> us cultists and claiming we are as deluded as fundamentalist Christians.
> 
> I think that the obsession with "friendly AI" is criticised from three
> perspectives:
> - the first has to do with technological eschatologism ("the Robot-God
> shall save us in its infinite goodness, how can you dare to doubt its
> wisdom");

Not really.  Friendly AI is posited as a better alternative to Unfriendly AI, given that AI powerful enough to be dangerous is likely.  The wonderful things some ascribe to what FAI will, or at least may, do for us are quite beside that fundamental point.

> - the second has to do with technological skepticism as to the reality
> and imminence of any perceived threat;

AGI is very likely if, and only if, we don't first destroy the technological base beyond repair.  How soon is a separable question.  Expert opinions range from 10 to 100 years, with a median of around 30 years.

> - the third has to do with a more fundamental, philosophical
> questioning of such a perception of threat and the underlying
> psychology and value system, not to mention its possible neoluddite
> implications.

That is irrelevant to whether AGI friendliness is important to think about.  Calling such concern neoluddite prejudges the entire question in an unhelpful manner.

- s


