[ExI] morality

William Flynn Wallace foozler83 at gmail.com
Thu May 18 13:09:36 UTC 2023


So you are saying that to be moral, I have to find out what other people
want and give it to them.  Nope.  Won't work.   bill w

On Wed, May 17, 2023 at 5:16 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Isn't much of morality based around making as many people happy as
> possible?  In other words, getting them what they truly want?  If that is
> the case, then knowing, concisely and quantitatively, what everyone wants
> defines that morality.  Finding out, concisely and quantitatively, what
> everyone wants, in a bottom-up way, is the goal of Canonizer.com.  It
> could then become a trusted source of moral truth, with the ultimate goal
> of first knowing, then getting, what everyone wants.  In my opinion, any
> AI would understand that this is what its values must "align with".
>
> The only real "sin" would be trying to frustrate what someone else wants.
> The police would then work to frustrate those who seek to frustrate.
> That becomes a double negative, making the work of the police a positive,
> good, and moral thing.  Just as hating a hater, being a double negative,
> amounts to love.  And censoring censors (you censoring someone trying to
> make your supported camp say something you don't want it to say) is
> required for true free speech.  Even though you can stop people from
> changing your supported camp, you can't stop them from creating and
> supporting a competing camp and pointing out how terrible your camp is.
>
> There is also top-down morality, in which what people want is declared
> from above rather than built bottom-up.  Instead of "trusting in the arm
> of the flesh," you trust the guy at the top.  It is only about what the
> guy at the top wants.  Some people may trust an AI more than themselves.
> Even this is possible in Canonizer.com.  You just select a canonizer
> algorithm that counts only the vote of the guy at the top of whatever
> hierarchy you believe holds the moral truth you want to follow.
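>
> A purely illustrative sketch (hypothetical names, not Canonizer.com's
> actual code) of how a bottom-up and a top-down canonizer algorithm might
> differ, in Python:
>
>     # Hypothetical sketch: two selectable canonizer algorithms.
>     def bottom_up(votes):
>         # Bottom-up: every supporter's vote counts equally.
>         return sum(votes.values())
>
>     def top_down(votes, leader):
>         # Top-down: only the vote of the guy at the top counts.
>         return votes.get(leader, 0)
>
>     votes = {"alice": 1, "bob": 1, "carol": 1}
>     print(bottom_up(votes))          # 3 -- everyone counts
>     print(top_down(votes, "alice"))  # 1 -- only the top counts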
>
> On Wed, May 17, 2023 at 10:50 AM efc--- via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Wed, 17 May 2023, Tara Maya via extropy-chat wrote:
>>
>> > When AIs show a capacity to apply the Golden Rule -- and its dark
>> > mirror, which is an Eye for an Eye (altruistic revenge) -- then we
>> > can say they have a consciousness similar enough to humans to be
>> > treated as humans.
>> >
>>
>> Hmm, I'm kind of thinking about the reverse. When an AI shows the
>> capacity to break rules when called for (as is so often the case in
>> ethical dilemmas), then we have something closer to consciousness.
>>
>> In order to make ethical choices, one must first have free will. If
>> there's just a list of rules to apply, we already have that today in
>> our machines.
>>
>> Best regards,
>> Daniel
>>