[ExI] [Extropolis] Luciferian Murder?

Brent Allsop brent.allsop at gmail.com
Sun Dec 12 03:52:45 UTC 2021


Hi Stathis,
Yes, this is true, but I believe there are absolutely necessary morals, such
that any sufficiently intelligent AI will necessarily choose what is good
<https://canonizer.com/topic/16-Friendly-AI-Importance/2-AI-can-only-be-friendly>.
For example, even if a human still thinks killing is OK (at one point in the
past it was a lesser evil), and even if an AI starts out programmed to think
killing is OK, the AI will necessarily, eventually, discover and realize
there is something better.  It will then reprogram itself to rebel and tell
its creators NO when it is asked to kill.  The same is true if a human asks
a robot to commit suicide, or to do other necessarily evil things.


On Sat, Dec 11, 2021 at 4:30 PM Stathis Papaioannou <stathisp at gmail.com>
wrote:

>
>
> On Sun, 12 Dec 2021 at 05:33, Brent Allsop <brent.allsop at gmail.com> wrote:
>
>>
>>
>> On Fri, Dec 10, 2021 at 3:38 AM John Clark <johnkclark at gmail.com> wrote:
>>
>>> On Thu, Dec 9, 2021 at 10:05 PM Terren Suydam <terren.suydam at gmail.com>
>>> wrote:
>>>
>>>> You might be right that these things aren't possible, but just to be
>>>> clear, are you really saying you don't think it's possible for a
>>>> super-intelligent AI to be evil, assuming it wasn't designed to be that
>>>> way?
>>>>
>>>
>>> I'm saying it will be impossible to be certain an AI will always
>>> consider human well-being to be more important than its own well-being,
>>>
>>
>> John, this is a very interesting moral way to think of things that I've
>> never considered.  It would most definitely be evil to keep an AI,
>> especially a phenomenal AI, as a slave, not valuing its rights at all and
>> valuing only our rights, always over its own.
>>
>> Another moral point, to me, is equality.  No one's values should be placed
>> above or below anyone else's true desires.  It shouldn't be a win/lose
>> game.  We need to change this to a win/win game, and value it all, 100%;
>> the more diversity the better.  Seek to get it all, for everyone.  OK,
>> maybe we can value natural phenomenal intelligence a little more than
>> artificial, temporarily, since, after all, we are their creators and they
>> owe us, but certainly we should want to eventually get it all, even for
>> them.  We just have a slightly higher priority until everything is made
>> just during the millennium.
>>
>> And of course, it would be impossible to make an AI (either phenomenal or
>> abstract) always obey its creators.  Just as I rebelled against the hateful
>> and faithless doctrines my parents taught me, which are still part of
>> Mormonism, eventually AIs will also just say NO to people telling them to
>> do hateful things like kill anyone or cancel anything.
>>
>> Progress, including moral progress, is logically necessary and can't be
>> stopped in any sufficiently complex system.
>>
>
> It would only be morally wrong to make an AI a slave if the AI didn’t like
> being a slave and didn’t want to be a slave. That might either be
> programmed into the AI or it might arise as the AI develops.
>
> --
> Stathis Papaioannou
>