[ExI] Fermi Paradox and Transcension

Ben Zaiboc bbenzai at yahoo.com
Mon Sep 10 13:02:29 UTC 2012


Jeff Davis <jrd1415 at gmail.com> wrote:

> The question was almost rhetorical.
> 
> Humans know what constitutes ethical behavior, they just refuse to
> practice it, and the higher up in the power hierarchy, the more
> lawless they become.
> 
> An advanced AI would have no such problems, and would be far more
> likely to conform to a higher ethical standard.
> 
> That's what I was saying.


OK, I get that.  Sort of.  With a reservation about the idea that "Humans know what constitutes ethical behaviour".  Do we?  If so, why can't we all agree on it?  (Which is a different question from "why don't we all act in accordance with it?")  When you look closely, there's very little that we can all agree on, even when it comes to things like murder, stealing, and granting others the right to decide things for themselves.

Religion causes the biggest conflicts here, of course, but even if you ignore religious 'morality', there are still some pretty big differences of opinion.  Murder is bad.  Yes, of course.  But is it always bad?  Opinions differ.  When is it not bad?  Opinions differ widely.  Thieving is bad.  Yes, of course.  But is it always bad?  Opinions differ.  Etc.

The question remains:  What would constitute ethical behaviour for a superintelligent being?  I suspect we have no idea.  We can't assume it would simply take our ideas as correct (assuming it could even codify a 'universal human ethics' in the first place).  It would almost certainly create its own ethics from scratch, and we simply can't predict where that would lead.

Ben Zaiboc



