[ExI] Do robot cars need ethics as well?

Anders Sandberg anders at aleph.se
Sat Dec 1 00:10:42 UTC 2012


On 30/11/2012 18:25, Stefano Vaj wrote:
> Let me confess that I miss the point as well.

Yes, but at least you miss it in a way that makes ethicists want to 
discuss it with you rather than facepalm :-)

> If the car is programmed to sacrifice its passengers for the sake of
> increasing the number of survivors, or the other way around, it is no
> more nor less ethical than a car with a software-limited maximum
> speed, as your everyday Merc or BMW has had for decades now.
>
> The "ethical" choice squarely remains in the manufacturers' ballpark
> or in that of legislators enforcing rules on them. The rest is just
> the accuracy with which the car is able to reflect it: the efficiency
> and comprehensiveness of the old, boring calculations to be made.

Yes and no. The car is not doing any real ethics in the sense of 
reflecting on its behaviour and choosing what principles it ought to 
follow, so it is by no means a moral agent in the strong sense of the 
word used by Kantians. But Deborah G. Johnson has a good paper, 
"Computer Systems: Moral Entities but not Moral Agents"
http://www.nyu.edu/projects/nissenbaum/papers/computer_systems.pdf
where she points out that while autonomous systems are not really moral 
agents, they are still moral entities: the interaction between the 
designer, user and the system can certainly be morally relevant and the 
"morality" of the system does matter. Yes, the designer might get part 
or all of the blame when things go wrong, but it still makes sense to 
say that different car software is "good", "immoral" or 
"consequentialist". You can have a self-sacrificing car, even if it is 
not conscious. The question is whether you should want one.
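
To make that concrete, here is a minimal sketch in Python (all names 
and numbers are hypothetical, not anyone's actual implementation) of 
how "self-sacrificing" versus "passenger-first" can be nothing more 
than a parameter set at the factory, with no consciousness involved:

# A maneuver is (name, expected occupant harm, expected bystander harm),
# the harms being the car's estimated probabilities of serious injury.
def choose_maneuver(maneuvers, occupant_weight):
    # Pick the maneuver with the lowest weighted expected harm.
    # occupant_weight < 1 discounts the passengers relative to bystanders
    # (a "self-sacrificing" car); > 1 protects them at others' expense.
    def cost(m):
        name, occupant_harm, bystander_harm = m
        return occupant_weight * occupant_harm + bystander_harm
    return min(maneuvers, key=cost)

options = [("brake straight", 0.2, 0.6),
           ("swerve into wall", 0.8, 0.0)]
print(choose_maneuver(options, occupant_weight=0.5)[0])  # swerve into wall
print(choose_maneuver(options, occupant_weight=2.0)[0])  # brake straight

All of the "ethics" lives in occupant_weight, which is exactly the 
choice Stefano places with the manufacturers and legislators.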

The smarter the cars are, the better they will likely be at avoiding 
getting involved in accidents at all. But they might also show other 
morally relevant behaviours (largely due to design, of course), such as 
politeness in traffic or helping other road users.

It seems that the challenge of deciding what to put into the cars is 
that the software needs to 1) reduce risk as much as possible for 
everyone in the traffic system, 2) work in an environment full of 
erratic humans, other machines and random events, and 3) not interfere 
with efficiency, property rights, comfort and people's intuitions. I 
suspect we should be happy if we could even get two out of three. Most 
likely we will get a little bit of each but be forced to compromise, 
and that compromise is again an ethical decision, since we are talking 
about how people and their tools ought to behave.
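
A toy illustration of why the compromise is forced (Python again; the 
policies and their scores are invented for the example): score a few 
candidate driving policies on the three criteria above, and none 
dominates the others, so someone has to pick weights, and picking the 
weights is the ethical decision.

# Each policy scored 0..1 on (risk reduction, robustness to erratic
# traffic, non-interference with efficiency/comfort/intuitions).
policies = {
    "cautious":  (0.9, 0.8, 0.4),
    "assertive": (0.6, 0.7, 0.9),
    "balanced":  (0.8, 0.7, 0.7),
}

def dominates(a, b):
    # a dominates b if it is at least as good on every criterion
    # and strictly better on at least one.
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

for name, scores in policies.items():
    others = [s for n, s in policies.items() if n != name]
    print(name, "dominates the rest:",
          all(dominates(scores, o) for o in others))  # False for all three

weights = (0.5, 0.3, 0.2)  # hypothetical: safety weighted most heavily
best = max(policies,
           key=lambda n: sum(w * s for w, s in zip(weights, policies[n])))
print("chosen under these weights:", best)  # cautious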

From a consequentialist standpoint this is just messy but cool and fun 
engineering. From a Kantian standpoint it is worrisome: one should not 
abdicate moral responsibility to tools, ever! From a virtue standpoint, 
human learning and psychology might matter: we should not make an 
automated system that leaves us dependent and unable to take 
responsibility, but rather aim for cars that extend our abilities.

-- 
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University



