[ExI] Best case, was Hard Takeoff
stefano.vaj at gmail.com
Sun Dec 5 10:55:56 UTC 2010
On 29 November 2010 05:58, Samantha Atkins <sjatkins at mac.com> wrote:
> Not really. Friendly AI is posited as a better alternative than Unfriendly AI given that AI of great enough power to be dangerous is likely. All the wonderful things that some ascribe to what FAI will or at least may do for us are quite beside the fundamental point.
Such an idea, however, requires a less uncritical definition of
"friendly/unfriendly", of "danger", and of "AI". If I were to say that
unfriendly word processors are dangerous, most people would ask what I
really mean. For AGI, many seem to make a number of factual and
value-laden assumptions that do not bear closer inspection, IMHO.
>> - the second has to do with technological skepticism as to the reality
>> and imminence of any perceived threat;
> AGI is very very likely IFF we don't destroy the technological base beyond repair first. How soon is a separable question. Expert opinions range from 10 - 100 years with a median at around 30 years.
I certainly hope you are right. But I am quite opposed to making it an
article of faith that AGI will be achieved anyway, irrespective of any
actual effort to that end.
>> - the third has to do with a more fundamental, philosophical
>> questioning of such a perception of threat and the underlying
>> psychology and value system, not to mention its possible neoluddite
> Irrelevant to whether AGI friendliness is important to think about or not. Calling it neoluddite to be concerned is prejudging the entire question in an unhelpful manner.
Fear of the machines as such is the very definition of Luddism.
Admittedly, most of those concerned with unfriendly AGI imagine that
friendly AGI could and should be developed in its stead. From a
practical point of view, however, they risk becoming objective allies
of those who would like to restrict AGI research on a precautionary
basis.