[ExI] self-driving cars again

BillK pharos at gmail.com
Mon Jul 16 22:27:23 UTC 2012


On Mon, Jul 16, 2012 at 10:51 PM, Anders Sandberg wrote:
<snip>
> I am interested in what the system does when the careful programming has
> failed. Not adding safeguards for those states would be pretty stupid, even
> if 99.9% of the actual safety of the car comes from avoiding getting even
> close to failure states.

What the system does when a crash is inevitable is limited by the
choices available. Basically, in a car about to crash, the choice is
maximum braking and/or turning to minimize damage. If a crash can be
avoided by driving off-road into a field, then it is not a failure
situation. Communication with other vehicles won't happen in a crash
situation; it may have occurred earlier, while trying to avoid the
crash, but in failure mode it is every man for himself. Each robot
will try to minimize the damage to its own occupants, send an
emergency call for assistance, and fire the airbags at the best
possible time.
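
To make that concrete, here is a toy sketch in Python of the
"maximum braking and/or turning" choice. The harm model, the numbers,
and every name in it are invented for illustration; nothing here
corresponds to a real vehicle API.

    from dataclasses import dataclass
    from itertools import product

    @dataclass
    class Action:
        brake: float   # fraction of maximum braking, 0.0 to 1.0
        steer: float   # steering angle in degrees, negative = left

    def predicted_occupant_harm(action: Action, speed_mps: float) -> float:
        # Toy harm model: harm grows with the speed left at impact;
        # hard swerves at speed add a rollover/side-impact penalty.
        impact_speed = max(0.0, speed_mps - 8.0 * action.brake)
        return impact_speed ** 2 + abs(action.steer) * speed_mps * 0.5

    def choose_crash_response(speed_mps: float) -> Action:
        # Enumerate a small grid of brake/steer combinations and keep
        # the one predicted to hurt the occupants least.
        candidates = [Action(b, s)
                      for b, s in product((0.5, 1.0), (-20.0, 0.0, 20.0))]
        return min(candidates,
                   key=lambda a: predicted_occupant_harm(a, speed_mps))

    best = choose_crash_response(speed_mps=25.0)
    print(f"brake={best.brake:.0%}, steer={best.steer:+.0f} deg")
    # The real car would then also place the emergency call and time
    # the airbags to the predicted impact; both are vehicle-specific
    # and omitted here.

The point is only that once a crash is unavoidable, the "decision" is
just a minimization over the two or three controls that still do
anything.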

In theory, in some circumstances some priority system could be
considered, such as choosing the one car that could escape the worst
of the accident. But, as you say, some sort of value system would
have to be programmed in: for example, the car containing a baby
should escape. But then every driver would lie and tell their robot
that they always carried a baby in the car. When survival is at
stake, a small lie costs nothing, and all the other drivers are
strangers, worth less to you than yourself and your family.
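
A minimal sketch of why self-reporting breaks such a ranking
(Python again; the cars and the values are made up for illustration):

    def pick_car_to_spare(declarations: dict[str, float]) -> str:
        # Spare the car whose occupants are declared most valuable.
        return max(declarations, key=declarations.get)

    honest = {"car_A": 1.0, "car_B": 3.0}  # car_B really carries a baby
    print(pick_car_to_spare(honest))       # -> car_B

    # When survival is at stake, a small lie costs nothing:
    gamed = {"car_A": 3.0, "car_B": 3.0}   # now everyone claims a baby
    print(pick_car_to_spare(gamed))        # tie: the signal is worthless

Once everyone declares the maximum value, the declaration carries no
information and the priority system is back to every man for himself.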


BillK


