[extropy-chat] Fools building AIs

Eliezer S. Yudkowsky sentience at pobox.com
Thu Oct 5 03:48:50 UTC 2006

Olie Lamb wrote:
> Oh, that was comic gold.  *Snork*
> Just one point, tho:
> On 10/5/06, *Eliezer S. Yudkowsky* <sentience at pobox.com> wrote:
>     No Sharia zombie could get one tenth of the way to independently
>     discovering how to build and shape an AGI, and still remain a Sharia
>     zombie. 
> The key word there being _independent_
> Otherwise, I wouldn't share quite so much... uh... "optimism".

Well, yes, if someone else does the geniusing and then writes it up as a 
textbook, odd things might happen.

> The emergence fairy might not do much with raw data, but I can't prove 
> that {AGI can't be built from lots of separate (incomplete) tools/narrow 
> AI bits / other stuff}, and that an idiot with only dodgy rationality 
> might try to do just that.  Possibly not a problem for now, but maybe a 
> problem in a decade or several.
> That's the idiot-at-Google.Inc scenario that I worry about.

That's the point I was trying to make - the key phrase above is "build 
*and shape*", that is, solve FAI, not just AGI.  That's what you can't 
do and remain a Sharia zombie, because to do that, you have to 
understand what you're doing, not just throw around a bunch of tools.

Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
