[ExI] Do robot cars need ethics as well?

Anders Sandberg anders at aleph.se
Sun Dec 2 00:00:13 UTC 2012

On 01/12/2012 18:00, Tom Nowell wrote:
> Does my autonomic system's tendency to drive me to panic make me a less ethical person than someone who is calmer? Is a computerised car, free from an evolved adrenaline-based system, in a much better situation to make decisions in a car-based emergency than me?

There is a distinction between acting morally - you do the right thing - 
and acting ethically - you think about what the moral thing to do is. In 
an emergency it is usually stupid to try to be ethical, but using cached 
good behaviours or dispositions may allow you to act morally. This is 
part of what makes nurses so useful in emergencies. And I suspect that, 
insofar as we can cram good behaviours into cars, they could execute 
them with amazing speed.

The first problem is that cars are *dumb*. They will apply their rules 
without insight, and this might lead to bad results due to the attempt 
to do the right thing. The car that veers off the road to avoid hitting 
a pedestrian might not realise that it will now hit people in a tent 
next to the road, since it does not know about tents. Second, we are not
always certain about what the right thing to do is.
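The tent example can be made concrete with a toy sketch. Everything here is 
hypothetical - the object classes and rules are invented for illustration, 
not taken from any real driving system - but it shows how a rule follower 
with no world knowledge can "do the right thing" wrongly:

```python
# Toy illustration of a naive rule engine. The hazard classes and the
# decision rules are hypothetical, invented purely for this example.

HAZARD_CLASSES = {"pedestrian", "cyclist", "car"}  # things the rules know about

def choose_action(on_road, on_shoulder):
    """Pick 'brake', 'swerve', or 'continue' from sets of detected labels."""
    if HAZARD_CLASSES & set(on_road):
        # Rule: never hit a known hazard; swerve if the shoulder looks clear.
        if HAZARD_CLASSES & set(on_shoulder):
            return "brake"
        return "swerve"
    return "continue"

# The shoulder holds a tent with people inside, but "tent" is not a known
# hazard class, so the rules see a clear shoulder and swerve into it.
print(choose_action(on_road={"pedestrian"}, on_shoulder={"tent"}))  # swerve
```

The failure is not in the rule's logic but in its ontology: the rule is 
applied perfectly, to a world description that is missing the one fact 
that matters.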

Dumb humans have, as far as I know, not been studied much in ethics. 
Ethicists assume everybody is as smart as they are, and will hence be 
able to perform the right logical operations to reach their conclusions. 
This is clearly not the case. And ethicists typically assume the rules 
they analyse will be implemented by humans who can reflect on them and 
will not just naively do what they are told - again not true for AI 
ethics. Ethicists are hence rather unprepared to suggest rules for 
machine ethics.

> Anders also wrote: " But they might show other morally relevant behaviours (largely due to design, of course) like politeness in traffic or helping."
>   Well, humans with little ethical thinking might show morally relevant behaviours like politeness if there were laws enacted to enforce them. In the Netherlands, there are traffic systems where you have to give way to joining traffic, and the "zipper" system - where you let someone in from the left and later let in someone from the right - sounded very complex to me, but it seems to work there. All developed countries have quite complex laws regarding roads and behaviour on them; perhaps the development of computerised cars will encourage lawmakers to rationalise the rules, or to formalise some things which haven't been set down yet.

I think this is a real problem. A lot of the functionality of the road 
system is due to "politeness" and shared understanding of what goes on. 
Tonight I witnessed two cars on a narrow street 'conversing' by blinking 
lights to decide who would go where. I doubt a machine could have 
replicated that communication (sure, two AI cars would just talk over 
wifi, but what about one AI and one human?). Many of the rules of the road are
informal and hard to strictly pin down: making them exact enough to run 
machines by would make them unwieldy for humans. It is just like the 
attempts at formalising human social relationships in social media 
platforms: "friends" and "circles" do not reflect the full complexity, 
and forcing people to relate and identify according to some prescribed 
system tends to impair their social functions.
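Some informal conventions do formalise cleanly, though. The "zipper" 
merge Tom mentions is really just alternation, and a minimal sketch (a 
toy model, not any actual traffic protocol) makes its exact form obvious:

```python
def zipper_merge(lane_a, lane_b):
    """Interleave two queues of cars: one from A, one from B, and so on."""
    merged = []
    a, b = list(lane_a), list(lane_b)
    while a or b:
        if a:
            merged.append(a.pop(0))  # let in the next car from the left
        if b:
            merged.append(b.pop(0))  # then the next car from the right
    return merged

print(zipper_merge(["A1", "A2", "A3"], ["B1", "B2"]))
# ['A1', 'B1', 'A2', 'B2', 'A3']
```

The contrast is instructive: the zipper reduces to five lines, while the 
blinking-lights negotiation above resists any such reduction - which is 
exactly the divide between rules we can hand to machines and rules we 
cannot.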

I suspect that the solution will be a bit more formalisation of 
previously informal rules, but also making autonomous cars very 
noticeable: it matters what is driving, since it tells you what to expect.

Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University
