[ExI] Autonomous car ethics

Anders anders at aleph.se
Sun Jun 26 12:54:58 UTC 2016


The problem with much proactive AI ethics discussion is that there 
needs to be agreement on what level of technology we are talking 
about. Things often get confused when people assume different levels 
and then waste time talking past each other about something that is on 
the wrong level (both too high and too low).

A car that senses intentions/situations, as in Mike's example, seems 
theoretically doable - we can imagine a human driver doing something 
heroic to save others. But it also requires a level of human 
understanding, common sense, and reliability that is pretty far ahead: 
by that point, we have bigger problems with powerful AI than the 
morality of cars. The real issue is whether we want cars that act as 
moral agents to this degree: should the car try to enforce some common 
sense morality (let alone a more specific morality)? It seems we can 
agree that cars should act to avoid causing damage, but it is less 
clear that they should drive criminals straight to the police station 
upon figuring out they are being used as getaway cars.

I have often argued that we may want to ensure that our technology has 
loyalties to *us* individual humans rather than to some abstract 
society or state, since otherwise we become trapped by it. But the 
ethical issue lies on the technology-deployment side, not so much in 
the rules and designs for the machines themselves.


On 2016-06-24 14:00, Mike Dougherty wrote:
> On Thu, Jun 23, 2016 at 6:16 PM, BillK <pharos at gmail.com> wrote:
>> Should a self-driving car kill its passengers for the greater good –
> Oh, the many ways that can be exploited.
>
> Imagine a passenger gets into the car with a machine gun and a bomb
> vest (or some equally absurd situation) ...
>
> Can we expect the car to drive off the cliff "for the greater good" ?
> Of course the guy is a terrorist, and of course the car is equipped
> with enough intention-sensors to know it.
> (yes, intention-sensors: because intelligence is effectively
> prediction and anticipation, so AI is about reliably guessing the
> future from clues available in the present)
>
> idk, I feel like discussing 'ethics' for cars is taking the
> conversation down a literal wrong road.
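Mike's "intention-sensor" framing above, AI as reliably guessing the future from clues available in the present, can be caricatured as Bayesian belief updating over intent hypotheses. The sketch below is purely illustrative: the hypotheses, clues, and all prior/likelihood numbers are invented for the example and correspond to no real system.

```python
# Toy sketch: "intention-sensing" as Bayesian inference, updating a
# belief about a passenger's intent from clues observed in the present.
# All probabilities are made-up illustrative numbers.

# Prior belief over hypotheses about the passenger's intent.
PRIOR = {"ordinary": 0.99, "hostile": 0.01}

# P(clue | intent): hypothetical likelihoods, for illustration only.
LIKELIHOOD = {
    "carrying_weapon": {"ordinary": 0.001, "hostile": 0.6},
    "erratic_route":   {"ordinary": 0.05,  "hostile": 0.4},
}

def update(belief, clue):
    """One Bayes update: belief'(h) is proportional to P(clue|h) * belief(h)."""
    unnorm = {h: LIKELIHOOD[clue][h] * p for h, p in belief.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

belief = dict(PRIOR)
for clue in ("carrying_weapon", "erratic_route"):
    belief = update(belief, clue)

print(belief)  # posterior shifts heavily toward "hostile"
```

The point of the caricature is that the "sensor" never observes intent directly; it only accumulates probabilistic evidence, which is exactly why acting drastically on its output (driving off a cliff "for the greater good") is so fraught.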

-- 
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University




More information about the extropy-chat mailing list