[ExI] Self-driving cars to make moral and ethical decisions like humans

BillK pharos at gmail.com
Mon Jul 17 14:38:01 UTC 2017

On 14 July 2017 at 23:29, William Flynn Wallace  wrote:
> Surely any system will prefer saving lives to preserving objects.  Whether
> it should put equal weight to its riders and the other car's riders, or
> prefer its riders is a problem to be worked out.
> Another issue:  there are many different ways to get into an accident, and
> many other types of vehicles to get into one with.  If the other is a smart
> car, then it's one thing, if it's a rig (lorry) then it's another thing
> entirely.  So many different situations to program for.

I agree that saving lives is preferable to preventing material damage.
That preference is already designed into cars by way of crumple zones
and safety cages that protect the passengers while sacrificing the vehicle.

My concern was that designing 'one-size-fits-all' rules for the car AI
would stop people from buying these cars because they disagree with the
imposed morality choices.

Many people would want a rule to save the driver's life wherever
possible, regardless of who else might be killed. Where there is a
choice of who, or how many, might be killed or injured, people would
like some say in how the car AI is programmed before entrusting their
lives and their families' lives to it.
