[extropy-chat] Fools building AIs

Eliezer S. Yudkowsky sentience at pobox.com
Fri Oct 6 01:59:34 UTC 2006


Samantha Atkins wrote:
> 
> Perhaps they aren't even evil.  Perhaps they are so disgusted with  
> human foibles that they really don't care much anymore whether much  
> brighter artificial minds are particularly friendly to humanity or not.

Sounds to me like an affective evaluation driven by biased recall and 
biased search over particular negative characteristics, with a few 
exemplars dominating the affective evaluation of the whole - in other 
words, carrying out a biased search for negative examples, then 
remembering a few outstanding negative examples rather than attending 
to the vast statistical majority of cases.  Anyone who knows about 
heuristics and biases is going to be on guard against that.

The reaction, then, is more an instinctive expression of disgust than 
an attempt to solve or optimize anything.  If you were seriously 
trying to search for a plan, you wouldn't stop after deciding that 
exterminating humanity was superior to leaving it exactly as it is now 
(itself a rather unlikely conclusion!) but would continue your search 
for a third alternative.

A rationalist would also know about the Bayesian value of information, 
and so would be willing to spend some time thinking about the problem 
rather than reacting in 0.5 seconds.
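
(As a toy illustration of the value-of-information point - the plan 
names, probabilities, and payoffs below are purely hypothetical, just 
to show the arithmetic of why deliberation can be worth the time:)

  # Expected value of perfect information (EVPI), toy numbers only.
  # If EVPI exceeds the cost of thinking longer, thinking is worth it.
  p = {"third_alt_works": 0.5, "third_alt_fails": 0.5}
  payoff = {
      "first_impulse":     {"third_alt_works":  0, "third_alt_fails":   0},
      "third_alternative": {"third_alt_works": 10, "third_alt_fails": -10},
  }

  def expected(plan):
      # Expected payoff of committing to one plan before learning the state.
      return sum(p[s] * payoff[plan][s] for s in p)

  best_now = max(expected(plan) for plan in payoff)  # react in 0.5 seconds: 0
  best_informed = sum(p[s] * max(payoff[plan][s] for plan in payoff)
                      for s in p)                    # decide after learning: 5
  evpi = best_informed - best_now                    # 5 units gained by thinking first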

"Disgusted with human foibles" makes a nice little snappy phrase.  But 
someone seriously capable of building and shaping an AGI would know 
better, I suspect.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


