[ExI] self-driving cars again

Anders Sandberg anders at aleph.se
Mon Jul 16 20:42:45 UTC 2012


Here is something that came up at the robotics conference, and that I might 
write a paper about if I get enough ideas:

Robotic cars will occasionally end up in situations like the trolley problem
http://marginalrevolution.com/marginalrevolution/2012/06/the-google-trolley-problem.html
- an accident is imminent, and the car(s) will need to make split-second 
decisions that would have been called moral decisions had humans made 
them.

First, the cars will have to act under uncertain information, and the 
effects of their decisions can be unpredictable. This is not too 
different from current airbag deployment systems, which already make a 
rudimentary judgement of when a crash is likely enough that deploying 
the airbag is safer than not doing so. The car might, for example, have 
to guess whether swerving off the road is a better choice than being 
hit by a truck.
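That kind of judgement can be sketched as a simple expected-harm comparison between two evasive actions, in the spirit of an airbag controller's "deploy or not" decision. Every probability and harm value below is invented for illustration:

```python
# Toy expected-harm comparison between two evasive maneuvers.
# All probabilities and harm scores are hypothetical.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm) pairs for one action."""
    return sum(p * harm for p, harm in outcomes)

# Stay in lane and brake: probably a hard impact from the truck.
stay = [(0.7, 8.0), (0.3, 2.0)]
# Swerve off the road: usually a mild scrape, occasionally a bad rollover.
swerve = [(0.9, 1.0), (0.1, 10.0)]

choice = "swerve" if expected_harm(swerve) < expected_harm(stay) else "stay"
print(choice)  # with these made-up numbers, swerving has lower expected harm
```

The interesting part is of course not the arithmetic but where the probabilities and harm scores come from, and whose harm counts.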

Second, in the moments before a multi-car collision there is plenty of 
time for the cars to hold a brief negotiation and coordinate their 
actions. This can turn into really fun issues, like the cars estimating 
which joint action saves the most lives - but what if one car just 
wants to protect its own passengers? Or one of them is dumb?
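One way to picture such a negotiation is as a choice over joint maneuvers scored by expected casualties. The sketch below contrasts a cooperative rule (minimize total casualties) with a self-interested car that protects only its own passengers; all the casualty numbers are made up for illustration:

```python
# Toy pre-crash negotiation between two cars choosing a joint maneuver.
# Each entry maps (car1_action, car2_action) to (car1_casualties,
# car2_casualties). All numbers are hypothetical.

ACTIONS = ["brake", "swerve"]
CASUALTIES = {
    ("brake", "brake"):   (2.0, 2.0),
    ("brake", "swerve"):  (1.5, 0.5),
    ("swerve", "brake"):  (0.2, 3.5),
    ("swerve", "swerve"): (0.5, 2.5),
}

def cooperative_choice():
    """Both cars agree on the joint maneuver minimizing total casualties."""
    return min(CASUALTIES, key=lambda joint: sum(CASUALTIES[joint]))

def best_response(a1):
    """Car 2's best move for its own passengers, given car 1's move."""
    return min(ACTIONS, key=lambda a2: CASUALTIES[(a1, a2)][1])

def selfish_choice():
    """Car 1 protects only its own passengers, anticipating car 2's reply."""
    a1 = min(ACTIONS, key=lambda a: CASUALTIES[(a, best_response(a))][0])
    return (a1, best_response(a1))

print(cooperative_choice())  # total expected casualties: 2.0
print(selfish_choice())      # total expected casualties: 3.0
```

With these numbers the selfish car does better for itself (0.5 instead of 1.5 expected casualties) while making the overall outcome worse - exactly the kind of conflict a negotiation protocol would have to resolve.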

Third, the above negotiation complexities are fun from a philosophical 
and game-theoretic standpoint, but in practice there will likely be a 
decision on some standard behaviour shared by all smart cars. What 
principles would make sense as the basis for such a system? Minimizing 
the number of people killed or hurt, obviously - but which minimization 
principle? Or should car behaviour actually reflect what the owners 
would have done in their place? How should uncertainty and risk be 
balanced, and damage to one's own or others' property?
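Even "minimize harm" underdetermines the behaviour: different minimization principles can pick different maneuvers from the same options. A minimal sketch with invented numbers, contrasting expected-value minimization with worst-case (minimax) minimization:

```python
# Two candidate minimization principles applied to the same maneuvers.
# Each maneuver is a list of (probability, casualties) outcomes;
# all numbers are hypothetical.

MANEUVERS = {
    # Usually harmless, but a small chance of a catastrophic outcome.
    "risky":    [(0.95, 0.0), (0.05, 10.0)],
    # Guaranteed moderate harm, no catastrophe possible.
    "cautious": [(1.0, 1.0)],
}

def minimize_expected(maneuvers):
    """Pick the maneuver with the lowest expected casualties."""
    return min(maneuvers, key=lambda m: sum(p * c for p, c in maneuvers[m]))

def minimize_worst_case(maneuvers):
    """Pick the maneuver whose worst possible outcome is least bad."""
    return min(maneuvers, key=lambda m: max(c for _, c in maneuvers[m]))

print(minimize_expected(MANEUVERS))    # expected casualties: 0.5 vs 1.0
print(minimize_worst_case(MANEUVERS))  # worst case: 10.0 vs 1.0
```

The two principles disagree here: expected-value minimization accepts a small chance of catastrophe, while minimax refuses it at the price of guaranteed moderate harm. Any standard behaviour would have to commit to one such principle.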

Overall, what I am thinking of is what kind of auto-morality and 
auto-ethics we should (or ought to) implement beyond the more obvious 
engineering concerns of safety and reliability.

Thoughts?

-- 
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University



