[ExI] morality

efc at swisscows.email efc at swisscows.email
Thu May 18 08:33:45 UTC 2023


On Wed, 17 May 2023, Brent Allsop via extropy-chat wrote:

> 
> Isn't much of morality based around making as many people as happy as possible?  In other words, getting them what they truly want? 

That would be utilitarianism, the engineer's delight and the guiding
star of effective altruism.

There are many other points of view (and you do list some of them
below).

I'm a sucker for virtue ethics, and I like some ethical egoism as well,
but I do not like utilitarianism, because it lends itself to "big brother"
reasoning, where someone knows (or claims to know) what is best for all.

Best regards, 
Daniel

> 
> On Wed, May 17, 2023 at 10:50 AM efc--- via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
>
>       On Wed, 17 May 2023, Tara Maya via extropy-chat wrote:
>
>       > When AI show a capacity to apply the Golden Rule -- and its dark
>       > mirror, which is an Eye for an Eye (altruistic revenge) -- then we
>       > can say they have a consciousness similar enough to humans to be
>       > treated as humans.
>       >
>
>       Hmm, I'm kind of thinking about the reverse. When an AI shows the
>       capacity to break rules when called for (as so often is the case in
>       ethical dilemmas) then we have something closer to consciousness.
>
>       In order to make ethical choices, one must first have free will. If
>       there's just a list of rules to apply, we have that today already in our
>       machines.
>
>       Best regards,
>       Daniel
> 
>
>       _______________________________________________
>       extropy-chat mailing list
>       extropy-chat at lists.extropy.org
>       http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> 


More information about the extropy-chat mailing list