[ExI] Self-driving cars to make moral and ethical decisions like humans
William Flynn Wallace
foozler83 at gmail.com
Fri Jul 14 22:29:21 UTC 2017
My worry is that everyone has different ethical systems.
BillK
Surely any system will prefer saving lives to preserving objects. Whether
it should give equal weight to its own riders and the other car's riders,
or prefer its own riders, is a problem to be worked out.
Another issue: there are many different ways to get into an accident, and
many other types of vehicles to get into one with. If the other vehicle is
a smart car, that's one thing; if it's a rig (lorry), it's another thing
entirely. So many different situations to program for.
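
To make the "single value-of-life" idea from the quoted study below a bit
more concrete, here is a rough sketch of what such a model could look like.
Every name and number in it is hypothetical, made up purely for
illustration; the paper does not publish any such values.

    # Sketch of a value-of-life collision model (all values hypothetical).
    VALUE_OF_LIFE = {
        "adult": 1.0,
        "child": 1.0,
        "dog": 0.3,
        "traffic_cone": 0.01,
    }

    def outcome_cost(entities_harmed):
        # Total value lost if these entities are harmed.
        return sum(VALUE_OF_LIFE[e] for e in entities_harmed)

    def choose_action(options):
        # options maps an action name to the list of entities it would harm;
        # pick the action that loses the least total value.
        return min(options, key=lambda action: outcome_cost(options[action]))

    # Example dilemma: swerve into a cone and a dog, or stay and hit a pedestrian.
    options = {"swerve": ["traffic_cone", "dog"], "stay": ["adult"]}
    print(choose_action(options))  # -> swerve

Of course, whose numbers go into that table is exactly the problem:
different ethical systems produce different tables.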
bill w
On Fri, Jul 14, 2017 at 2:37 PM, BillK <pharos at gmail.com> wrote:
> By Jonathan Wilson Published Wednesday, July 5, 2017
> <https://eandt.theiet.org/content/articles/2017/07/self-driving-cars-to-make-moral-and-ethical-decisions-like-humans/>
>
> Quote:
> A new study has demonstrated that human ethical decisions can be
> implemented into machines using morality modelling. This has strong
> implications for how autonomous vehicles could effectively manage the
> moral dilemmas they will face on the road.
>
> The results were conceptualised by statistical models leading to
> rules, with an associated degree of explanatory power to explain the
> observed behavior. The research showed that moral decisions in the
> scope of unavoidable traffic collisions can be explained well, and
> modelled, by a single value-of-life for every human, animal or
> inanimate object.
>
> Leon Sütfeld, the first author of the study, says that until now it
> has been assumed that moral decisions are strongly context dependent
> and therefore cannot be modelled or described algorithmically.
>
> “We found quite the opposite”, he said. “Human behavior in dilemma
> situations can be modelled by a rather simple value-of-life-based
> model that is attributed by the participant to every human, animal, or
> inanimate object.”
>
> This implies that human moral behavior can be well described by
> algorithms that could be used by machines as well.
>
> Prof. Gordon Pipa, a senior author of the study, says that since it
> now seems to be possible that machines can be programmed to make
> human-like moral decisions, it is crucial that society engages in an
> urgent and serious debate.
>
> “We need to ask whether autonomous systems should adopt moral
> judgements,” he said. “If yes, should they imitate moral behavior by
> imitating human decisions, should they behave along ethical theories
> and if so, which ones and critically, if things go wrong who or what
> is at fault?”
>
> ----------------------
>
>
> My worry is that everyone has different ethical systems.
>
> I'm not sure that I would buy a 'Jesus-freak' car that says "I'm very
> sorry Bill, but my ethics module says that in this situation saving
> your life is not the most efficient solution".
>
>
> BillK
>