[ExI] self-driving cars again

Anders Sandberg anders at aleph.se
Mon Jul 16 21:51:45 UTC 2012


On 16/07/2012 21:56, Adrian Tymes wrote:
> On Mon, Jul 16, 2012 at 1:42 PM, Anders Sandberg <anders at aleph.se> wrote:
>> Robotic cars will occasionally end up in situations like the trolley problem
>> http://marginalrevolution.com/marginalrevolution/2012/06/the-google-trolley-problem.html
>> - an accident is imminent,
> The programming will likely reject that hypothesis.  Being in a
> state where any action will result in your vehicle impacting
> people harmfully is itself a failure condition, so the
> programming tries to avoid ever getting into such a state, for
> example by never driving faster than would let it come to a
> complete halt within the visible distance.

That kind of careful programming might make it impossible to drive at all.
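To see how binding the stay-stoppable rule already is, here is a
back-of-the-envelope sketch (my own illustration; the deceleration
and reaction-time figures are assumptions, not anything from an
actual system):

import math

def max_safe_speed(visible_m, decel=6.0, reaction=0.5):
    # Largest v such that reaction distance v*t plus braking
    # distance v^2/(2a) fits inside the visible distance d,
    # i.e. the positive root of v^2 + 2*a*t*v - 2*a*d = 0.
    return -decel * reaction + math.sqrt(
        (decel * reaction) ** 2 + 2 * decel * visible_m)

print(max_safe_speed(100.0))  # ~31.8 m/s: fine on a clear motorway
print(max_safe_speed(20.0))   # ~12.8 m/s (~46 km/h) in 20 m fog

And that is before adding worst-case assumptions about occluded
pedestrians or cross traffic, which shrink the admissible speed much
further; push the worst case hard enough and the only provably safe
speed is zero.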

Applying reachability analysis to failure conditions means that every 
state from which a worst-case disturbance can force a failure must 
also be avoided, and so must every state that can lead to one of 
those, and so on. What remains is the safe set. If you find yourself 
in the unsafe set you can of course apply some control strategy to 
get out of it as fast as possible; disaster is not guaranteed. But 
the problem remains: the safe set often ends up very small. This is 
especially true for vehicles dealing with an unknown environment 
where weird random events can happen. There is some current robotics 
work on keeping the safety guarantees from becoming too conservative 
( http://www.roboticsproceedings.org/rss08/p11.html ), but it is not 
clear that the approach would scale to a robotic car that has to 
drive in normal traffic.
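As a toy illustration of how the pruning works (my construction, not
the one in the linked paper): treat it as a game against the
disturbance and compute the greatest fixpoint of "some control keeps
us safe against every disturbance".

from itertools import product

# Toy 1-D car on road cells 0..19 with a wall at cell 15.
# State = (position, speed); control u = accel in {-1,0,+1};
# disturbance d = worst-case speed perturbation in {-1,0,+1}
# (grade, slippery surface, sensor error, ...).
POSITIONS, SPEEDS = range(20), range(5)
CONTROLS = DISTURBANCES = (-1, 0, 1)
WALL = 15  # reaching cell 15 or beyond is the failure condition

def step(pos, vel, u, d):
    vel = max(0, min(4, vel + u + d))
    return min(19, pos + vel), vel

def compute_safe_set():
    # Start from all non-failing states, then repeatedly prune any
    # state where EVERY control can be defeated by SOME disturbance.
    # The fixpoint is the maximal safe set.
    safe = {(p, v) for p, v in product(POSITIONS, SPEEDS) if p < WALL}
    changed = True
    while changed:
        changed = False
        for s in list(safe):
            ok = any(all(step(*s, u, d) in safe for d in DISTURBANCES)
                     for u in CONTROLS)
            if not ok:
                safe.discard(s)
                changed = True
    return safe

safe = compute_safe_set()
print((13, 0) in safe)  # True: full braking holds the car in place
print((13, 1) in safe)  # False: a +1 disturbance cancels braking and
                        # marches the car into the wall

In this toy the disturbance can exactly cancel the brake command, so
the fixpoint keeps only the stationary states: the provably safe car
never moves at all. Weaken the disturbance and a band of slow states
survives, but the flavour is the same.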

I am interested in what the system does when the careful programming 
fails. Not adding safeguards for those states would be pretty stupid, 
even if 99.9% of the car's actual safety comes from never getting 
close to failure states in the first place.
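In code, the safeguard layer could be as simple as the following (a
hypothetical structure continuing the toy model above, not anything
from an actual car):

def recovery(state):
    # Outside the safe set: pick the control whose worst-case
    # successor speed is lowest, i.e. brake as hard as the
    # disturbance allows.
    return min(CONTROLS,
               key=lambda u: max(step(*state, u, d)[1]
                                 for d in DISTURBANCES))

def safeguarded_controller(state, safe, nominal, fallback):
    # Nominal policy while the state is provably safe; best-effort
    # recovery once it is not. No guarantees hold out there, but
    # minimizing worst-case harm beats having no plan at all.
    return nominal(state) if state in safe else fallback(state)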


-- 
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University



