[ExI] self-driving cars again
Adrian Tymes
atymes at gmail.com
Mon Jul 16 20:56:52 UTC 2012
On Mon, Jul 16, 2012 at 1:42 PM, Anders Sandberg <anders at aleph.se> wrote:
> Robotic cars will occasionally end up in situations like the trolley problem
> http://marginalrevolution.com/marginalrevolution/2012/06/the-google-trolley-problem.html
> - an accident is imminent,
The programming will likely reject that hypothesis. A state in
which every available action results in your vehicle harmfully
striking people is, by definition, a failure condition, so the
programming tries to avoid ever entering that state - for
instance, by never driving faster than would allow it to come to
a complete halt within the visible distance.
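That "stop within the visible distance" constraint can be sketched
numerically. The function name, the deceleration figure, and the
reaction-time parameter below are my own illustrative assumptions,
not taken from any actual vehicle stack:

```python
import math

def max_safe_speed(visible_distance_m, decel_mps2=6.0, reaction_s=0.2):
    """Largest speed v (m/s) such that reaction distance plus braking
    distance still fits inside the visible distance:

        v * reaction_s + v**2 / (2 * decel_mps2) <= visible_distance_m

    Rearranged as v**2 + 2*a*t*v - 2*a*d = 0 and solved for the
    positive root of the quadratic.
    """
    a, t, d = decel_mps2, reaction_s, visible_distance_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)
```

With 50 m of visibility and firm braking at 6 m/s^2 this works out
to roughly 23 m/s; halve the visibility (fog, a blind curve) and the
planner's speed cap drops accordingly, which is the whole point of
designing the failure state out rather than deciding inside it.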
> and the car(s) will need to make split second
> decisions that would have been called moral decisions if humans had made
> them.
And they may yet be moral decisions for self-driving cars - but
the morality rests with the programmer.