[ExI] global moral code (was:Re: very informative)

BillK pharos at gmail.com
Thu Dec 31 11:20:41 UTC 2020


On Thu, 31 Dec 2020 at 11:03, Ben Zaiboc via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> I can't see that happening. Not any time soon. Whose moral code would it
> use? If it is to be a global one, that means it would have to be
> forcibly imposed on some people. I'd say that was morally wrong,
> wouldn't you?
>
> It would have to be agreed upon by everyone, and that's not going to
> happen, because people have different values, often wildly different.
>
> I suspect we wouldn't even be able to agree on a common moral code here
> on this list, never mind among the population of the entire world (and
> that's without even bringing religion into it).
> --
> Ben Zaiboc


That's the problem the AGI developers are running into. They want
the AI to be moral and to look after humans, but humans have too many
different moral codes. An AI developed by Muslim researchers would
want to impose Sharia law everywhere. A Chinese AI would want to
ensure obedience to the state. An American AI would want to spread
corporate capitalism everywhere. And so on.

Competing AIs will have to fight it out.



BillK

