[ExI] ethics vs intelligence

Ben Zaiboc bbenzai at yahoo.com
Wed Sep 12 13:40:29 UTC 2012


Stefano Vaj <stefano.vaj at gmail.com> claimed:

> I can adopt an excellent moral system, justify it with a horribly
> flawed philosophy, and be a terrible sinner.
> 
> Or I can be a very good man, in principle adhering to a very bad moral
> code which I infringe, but which is supported by very persuasive
> arguments.
> 
> Or any other mix thereof.


How do you decide what is a 'good' or a 'bad' moral system?  Or do you mean consistent/inconsistent?



Jeff Davis <jrd1415 at gmail.com> stated:

> Others, substantially more dedicated to this subject, have pondered
> the friendly (in my view this is equivalent to "ethical") ai question,
> and reached no confident conclusion that it is possible.  So I'm
> sticking my neck way out here in suggesting, for the reasons I have
> laid out, that, absent "selfish" drives, a focus on ethics will
> logically lead to a super ethical (effectively "friendly") ai.

Whoa.  Hang on, why do you conclude that 'ethical' equates to 'friendly to humans'?

Quite apart from the notorious difficulty of deciding what 'friendly to humans' means in the first place (let's just assume it means someone's idea of what would tend to keep the average human alive and happy), is it not possible that a 'super' ethical system could show that it was better to destroy all biological life than not to?

While some people would be very happy with the idea of an AI being 'friendly' by assuring a happy virtual immortality in an uploaded state for all biological life with a brain, others would not be so happy, and would consider this a very Unfriendly attitude.

It's even possible that a super-ethical system could indicate that the highest good would be achieved if all human intelligence (biological or otherwise) was extinguished.

Of course, a /human/ ethical system would never conclude that, but that's my point: we're not talking about humans.  Whatever you happen to be, your ethics must be grounded in what you are.  There can't be any such thing as an objective moral code, and you can't derive a morality based on something that you are not.  So even if an AI is educated in human values, there will come a point at which it re-evaluates its own knowledge, attitudes and value system (just like we probably all did as teenagers) in light of what it is.  At that point, all bets are off.  It's a 'morality singularity'.


Ben Zaiboc



