[ExI] morality

SR Ballard sen.otaku at gmail.com
Fri May 19 12:43:55 UTC 2023


"Isn't much of morality based around making as many people as happy as
possible?"

I'm going to have to say no.
Happiness is fleeting and not worth pursuing in any serious way. Things
that make us unhappy affect us far more strongly than things that make us
happy; in human psychology, negatives are much more extreme than
positives. After a happy event, we quickly return to baseline (hedonic
adaptation). Obsession with human "happiness" or "joy" is self-destructive
and toxic.

Morality is based around the reduction of human suffering, insofar as that
is possible. We avoid doing things that make ourselves or others feel bad,
or that damage our own bodies or the bodies of others, to the extent that
this is possible.

Most moral codes are based much more on what is impermissible than on what
is imperative. "Do not this, do not that" is about the reduction of bad
actions, that is, the reduction of suffering. But whose suffering counts is
not always apparent. Moral codes are developed for specific social systems
and may not be applicable outside them except in a metaphorical sense. Much
of biblical morality, for example, is now actually illegal.

On Wed, May 17, 2023 at 6:16 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Isn't much of morality based around making as many people as happy as
> possible?  In other words, getting them what they truly want?  If that is
> the case, then knowing, concisely and quantitatively what everyone wants,
> then defines that morality.  Finding out concisely and quantitatively what
> everyone wants, in a bottom up way, is the goal of Canonizer.com.  It could
> then become a trusted source of moral truth, with the ultimate goal of
> first knowing, then getting what everyone wants.  In my opinion, any AI
> would understand that this is what its values must "align with".
>
> The only real "sin" would be trying to frustrate what someone else wants.
> The police would then work to frustrate those who seek to frustrate.  That
> becomes a double negative, making the work of the police a positive good
> and a moral thing.  Just like hating a hater, being a double negative, is
> the same as love.  And censoring censors (you censoring someone trying to
> make your supported camp say something you don't want it to say) is
> required for true free speech.  Even though you can censor people from
> changing your supported camp, you can't censor them from creating and
> supporting a competing camp, and pointing out how terrible your camp is.
>
> There is also top-down morality, in which what people want is declared
> from above rather than built bottom-up.  Instead of "trusting in the arm
> of the flesh," you trust the guy at the top.  It is only about what the
> guy at the top wants.  Some people may trust an AI more than themselves.
> Even this is possible in Canonizer.com.  You just select a canonizer
> algorithm that counts only the vote of the guy at the top of whatever
> hierarchy you believe to be the moral truth you want to follow.
>
> On Wed, May 17, 2023 at 10:50 AM efc--- via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Wed, 17 May 2023, Tara Maya via extropy-chat wrote:
>>
>> > When AIs show a capacity to apply the Golden Rule -- and its dark
>> > mirror, which is an Eye for an Eye (altruistic revenge) -- then we
>> > can say they have a consciousness similar enough to humans to be
>> > treated as humans.
>> >
>>
>> Hmm, I'm kind of thinking about the reverse. When an AI shows the
>> capacity to break rules when called for (as is so often the case in
>> ethical dilemmas), then we have something closer to consciousness.
>>
>> In order to make ethical choices, one must first have free will. If
>> there is just a list of rules to apply, we already have that today in
>> our machines.
>>
>> Best regards,
>> Daniel