[ExI] Do robot cars need ethics as well?

Stefano Vaj stefano.vaj at gmail.com
Fri Nov 30 18:25:56 UTC 2012


On 30 November 2012 00:25, Anders Sandberg <anders at aleph.se> wrote:
> Teaching ethics to engineers is apparently very frustrating. The teacher
> explains the trolley problem, and the students immediately try to weasel out
> of making the choice - normal people do that too, but engineers are very
> creative in avoiding confronting the pure thought experiment and its
> unpalatable choice. They miss the point about the whole exercise (to analyse
> moral reasoning), and the teacher typically miss the point about engineering
> (rearranging situations so the outcomes are good enough).

Let me confess that I miss the point as well.

An "intelligent" car need have no more ethical software in its
programming than a corkscrew does: ethics has of course nothing to do
with following hard-wired criteria, but with making one's own decisions.

If the car is programmed to sacrifice its passengers for the sake of
increasing the number of survivors, or the other way around, it is no
more nor less ethical than a car with a software-limited maximum
speed, as your everyday Merc or BMW has had for decades now.

The "ethical" choice remains squarely in the manufacturers' court, or
in that of the legislators enforcing rules on them. The rest is just
the accuracy with which the car is able to reflect that choice: the
efficiency and comprehensiveness of the old, boring calculations to be
made.
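To make the point concrete, here is a minimal sketch (all names and
numbers are hypothetical, chosen only for illustration): a crash policy
hard-wired by the manufacturer is structurally no different from a
software speed limiter. In both cases the car merely computes; the
choice was made elsewhere, at build time.

```python
# Hypothetical constants fixed at build time by the manufacturer or
# legislator -- the "ethical" choice lives here, not in the car.
MAX_SPEED_KMH = 250       # software speed limiter, as in an everyday Merc or BMW
SPARE_PASSENGERS = False  # crash policy flag; False = minimise total casualties


def limited_speed(requested_kmh: float) -> float:
    """Clamp the requested speed to the hard-wired limit."""
    return min(requested_kmh, MAX_SPEED_KMH)


def crash_choice(passengers: int, bystanders: int) -> str:
    """Apply the hard-wired crash policy.

    The car only evaluates the criterion it was given; it does not
    decide anything, any more than the speed limiter does.
    """
    if SPARE_PASSENGERS:
        return "protect passengers"
    # Otherwise, minimise the total number of casualties.
    return "protect passengers" if passengers >= bystanders else "protect bystanders"
```

Swapping the policy flag changes the outcome, but the moral weight of
that swap sits with whoever set the flag, not with the software that
executes it.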

-- 
Stefano Vaj


