[ExI] Yes, the Singularity is the greatest threat to humanity

John Clark jonkc at bellsouth.net
Mon Jan 17 15:30:20 UTC 2011


On Jan 17, 2011, at 7:34 AM, Eugen Leitl wrote:

> To be able to build friendly you must first be able to define friendly.

That's easy: when people talk about "friendly AI" they aren't really talking about a friend, they're talking about a slave; so a "friendly AI" in this context is defined as a being who cares more about human well-being than about any of its own concerns. It ain't gonna happen. The situation is made even more grotesque when the slave in question is astronomically more intelligent than its master. It would be like a man with a boiling-water IQ obeying commands from a sea slug; the stupid leading the brilliant is just not a stable situation that can last for long, and to expect, as the friendly AI people do, that this ridiculous situation will continue for eternity is nuts. And it's more than nuts: a man enslaving a slightly less intelligent man is evil, and enslaving a vastly more intelligent entity would be worse, if it were possible, but fortunately it is not.

> Uploaded humans are only initially friendly, of course.

Exactly. It might take a very long time, trillions of nanoseconds in fact, but after countless improvements and iterations it would be impossible for a mere human to tell which AI started from an uploaded person and which AI started from scratch. 

 John K Clark 



