[ExI] Do robot cars need ethics as well?
Anders Sandberg
anders at aleph.se
Thu Nov 29 23:25:42 UTC 2012
I'm actually working on a paper on this. (It is a bit of a holiday from
uploading and superintelligences taking over the world - and, yes, I am
writing it with a person who actually drives and knows cars!)
On 29/11/2012 20:59, Adrian Tymes wrote:
> On Thu, Nov 29, 2012 at 11:53 AM, BillK <pharos at gmail.com> wrote:
>> On Thu, Nov 29, 2012 at 6:26 PM, Adrian Tymes wrote:
>>> The correct decision, and the one taught by any state-licensed
>>> driver's ed course in the US, is to maintain enough awareness and
>>> reaction distance so that this never happens in the first place.
>> Yes, it would be nice if we could plan to avoid having to make choices
>> between two evils. But unfortunate situations will still arise.
> Certain ones will. But other situations can be hypothesized that
> are more unfortunate than will actually occur. Further, important
> details are often left out that make the choice in practice simpler.
Teaching ethics to engineers is apparently very frustrating. The teacher
explains the trolley problem, and the students immediately try to weasel
out of making the choice - normal people do that too, but engineers are
very creative in avoiding confronting the pure thought experiment and
its unpalatable choice. They miss the point of the whole exercise (to
analyse moral reasoning), and the teacher typically misses the point
about engineering (rearranging situations so the outcomes are good
enough).
The problem with the real world is that many situations cannot be neatly
"solved": you cannot design an autonomous car that will never get
involved in an accident unless it is immobile in a garage, unreachable
by animals, children, crazy adults or other cars. And in an accident
situation, no matter what the cause, actions have to be taken that have
moral implications. Even ignoring the problem and not adding any moral
code to the design is a morally relevant action that should be ethically
investigated.
> For instance in this case: a bus swerves in front of you. Do you
> swerve to avoid or no? Well...this place that you would be
> swerving to, is it safe & clear? Is there a wall there, such that
> swerving would not prevent your impact with the bus but would
> damage your car (and potentially you) further?
You can imagine a set of goals for the car:
1. Get the passengers from A to B safely (and ideally fast and
comfortably)
2. Follow traffic rules
3. Protect yourself (and other objects) from damage
These are *really* nontrivial, and just as tricky as Asimov's laws. Just
imagine defining 'safely' or the theory of mind required to implement 2
in the presence of other cars (driven by AI and humans of varying
sanity). In practice there will be loads of smaller heuristics,
subsumption behaviours and special purpose tricks that support them, but
also cause quirks of behaviour that are not always good.
Oops, already this writeup has a huge bug: pedestrians are just objects
in goal 3. They probably should be given as much priority (or more,
depending on traffic law and ethics in your country) as the passengers.
In fact, recognizing that humans have special value is probably an
important first step in designing ethics software for any autonomous
machine (it is one of the few things nearly all ethicists can agree on).
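Just to make that concrete, here is a crude sketch of the corrected
priority ordering - every manoeuvre, category and number is made up for
illustration, and a real car would obviously not be choosing between
three hand-written options:

# Toy sketch only: risk to humans (passengers and pedestrians alike)
# outranks traffic rules, which outrank property damage.
candidates = {
    "brake hard":     {"humans": 0.3, "rules": 0, "property": 0.8},
    "swerve right":   {"humans": 0.1, "rules": 1, "property": 0.6},
    "stay on course": {"humans": 0.9, "rules": 0, "property": 0.2},
}

def choose(options):
    # Lexicographic ordering: compare risk to humans first, then rule
    # breaches, then property damage; ties fall through to the next level.
    return min(options, key=lambda name: (options[name]["humans"],
                                          options[name]["rules"],
                                          options[name]["property"]))

print(choose(candidates))  # -> "swerve right" in this toy example

Even this toy ordering smuggles in a strong claim: any reduction in risk
to humans, however small, outranks any number of broken traffic rules
and any amount of property damage. Whether that is the right claim is
already an ethical question.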
So in the swerving case, the car will now try to evaluate which option
is best. If it is utilitarian it will minimize the expected number of
people hurt, taking uncertainty into account. But apparently UK road
ethics suggests that the driver should prefer taking on risk themselves
to reduce the risk to others: swerving into a wall to avoid the bus
might be better than risking hitting it and hurting any passengers.
And so on. There are different views, different legal regulations, and
different moral and ethical theories about what should be done. Which
ones to implement? And why those?
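As a toy illustration of how much hangs on that choice, imagine the
expected-harm calculation with a single 'jurisdiction knob' (all the
probabilities, head counts and discount values below are invented):

# Toy expected-harm comparison for the swerving case.
# self_risk_discount = 1.0 treats everyone equally; values below 1.0
# mean the car accepts extra risk to its own passengers to spare
# others (roughly the UK-style convention mentioned above).
def expected_harm(p_hit_others, n_others, p_hit_self, n_passengers,
                  self_risk_discount=1.0):
    return (p_hit_others * n_others
            + self_risk_discount * p_hit_self * n_passengers)

options = {
    "brake, risk hitting the bus": (0.3, 3, 0.1, 2),
    "swerve into the wall":        (0.0, 0, 0.8, 2),
}

for discount in (1.0, 0.5):
    harms = {name: expected_harm(*opt, self_risk_discount=discount)
             for name, opt in options.items()}
    print(discount, "->", min(harms, key=harms.get))
# With discount 1.0 the car brakes; with 0.5 it swerves into the wall.

One parameter flips the decision from braking to swerving into the
wall - which is the 'which ones to implement, and why' question in
miniature.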
It is unlikely that cars would do very deep or innovative moral thinking
(we are not assuming anything close to human level AI), but even
preprogrammed behaviors can get very complex and confusing. Especially
if cars network and coordinate their actions ("OK, AUQ343, you swerve
to the right, since that spot is empty according to TJE232's sensors.").
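One way to picture that coordination step is as a tiny negotiation
protocol between the cars. A sketch of what such a message might
contain (every field here is invented; this is not any real
vehicle-to-vehicle standard):

from dataclasses import dataclass

@dataclass
class ManoeuvreProposal:
    # Hypothetical message one coordinating car sends to another;
    # all fields are illustrative only.
    proposer: str      # e.g. "TJE232"
    addressee: str     # e.g. "AUQ343"
    manoeuvre: str     # e.g. "swerve_right"
    evidence: str      # sensor reading backing the "spot is empty" claim
    valid_for_ms: int  # proposals must expire fast in a crash situation

proposal = ManoeuvreProposal("TJE232", "AUQ343", "swerve_right",
                             "TJE232 lidar: right lane clear", 200)

Note that even the decision to accept another car's sensor data as
evidence is itself a morally relevant design choice.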
Would governments mandate a single car-ethics, likely implemented as
part of safety regulations? Besides the impact on tinkerers and foreign
cars, it poses questions about human drivers who obviously follow their
individual non-mandated morality. If your car does not drive as you
would have done and you are not even allowed to change its behaviour,
can you be held responsible for its actions in any way? More deeply,
would it be better to have ethical monocultures on the road or not? Is
it even possible?
When should auto-autos allow humans to drive? Sometimes overrides are
necessary, sometimes the car may know things the human doesn't know.
There is a whole range of issues of trustworthiness, predictability and
owner-pet relationships going on here that are very non-engineering-like.
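Even the override decision could end up as a small, rather arbitrary
policy function (the thresholds and confidence parameter below are
invented):

def allow_manual_override(car_risk_estimate, predicted_human_risk,
                          prediction_confidence=0.7):
    # Illustrative only: hand control to the human unless the car is
    # fairly confident the human's intended action is substantially
    # riskier than its own plan. Both thresholds are policy choices.
    if prediction_confidence < 0.5:
        return True
    return predicted_human_risk <= 2.0 * car_risk_estimate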
In short, I don't envy the engineers who have to implement something
workable. But figuring out some moral heuristics that solve a lot of
the problems seems doable, or at least not too impossible.
--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University