<div class="moz-cite-prefix">I'm actually working on a paper on
this. (It is a bit of a holiday from uploading and
superintelligences taking over the world - and, yes, I am writing
it with a person who actually drives and knows cars!)<br>
<br>
On 29/11/2012 20:59, Adrian Tymes wrote:<br>
</div>
> On Thu, Nov 29, 2012 at 11:53 AM, BillK <pharos@gmail.com> wrote:
>> On Thu, Nov 29, 2012 at 6:26 PM, Adrian Tymes wrote:
>>> The correct decision, and the one taught by any state-licensed
>>> driver's ed course in the US, is to maintain enough awareness and
>>> reaction distance so that this never happens in the first place.
>>
>> Yes, it would be nice if we could plan to avoid having to make
>> choices between two evils. But unfortunate situations will still
>> arise.
>
> Certain ones will. But other situations can be hypothesized that
> are more unfortunate than will actually occur. Further, important
> details are often left out that make the choice in practice simpler.

Teaching ethics to engineers is apparently very frustrating. The
teacher explains the trolley problem, and the students immediately
try to weasel out of making the choice - normal people do that too,
but engineers are very creative in avoiding the pure thought
experiment and its unpalatable choice. The students miss the point
of the whole exercise (to analyse moral reasoning), and the teacher
typically misses the point of engineering (rearranging situations so
that the outcomes are good enough).

The problem with the real world is that many situations cannot be
neatly "solved": you cannot design an autonomous car that will never
get involved in an accident unless it is immobile in a garage,
unreachable by animals, children, crazy adults or other cars. And in
an accident situation, no matter what the cause, actions have to be
taken that have moral implications. Even ignoring the problem and
not adding any moral code to the design is a morally relevant action
that should be ethically investigated.

> For instance in this case: a bus swerves in front of you. Do you
> swerve to avoid or no? Well...this place that you would be
> swerving to, is it safe & clear? Is there a wall there, such that
> swerving would not prevent your impact with the bus but would
> damage your car (and potentially you) further?

You can imagine a set of goals for the car:

  1. Get the passengers from A to B safely (and ideally fast and
     comfortably)
  2. Follow traffic rules
  3. Protect yourself (and other objects) from damage

These are *really* nontrivial, and just as tricky as Asimov's laws.
Just imagine defining 'safely', or the theory of mind required to
implement 2 in the presence of other cars (driven by AIs and by
humans of varying sanity). In practice there will be loads of
smaller heuristics, subsumption behaviours and special-purpose
tricks that support them, but that also cause quirks of behaviour
which are not always good.
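
To show roughly what I mean, here is a toy sketch in Python (every
name and threshold is invented for illustration, not anybody's
actual design): the three goals sit in a subsumption stack, where a
higher-priority layer grabs control from the ones below it.

    # Toy subsumption arbiter for the three goals above.
    # The sensors interface and all thresholds are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        steering: float  # steering adjustment; negative = left
        braking: float   # 0.0 (none) to 1.0 (full)

    def avoid_damage(sensors) -> Optional[Action]:
        """Goal 3: claims control only when a collision looks imminent."""
        if sensors.time_to_collision() < 1.5:  # threshold is a pure guess
            return Action(steering=sensors.escape_direction(), braking=1.0)
        return None

    def follow_rules(sensors) -> Optional[Action]:
        """Goal 2: e.g. ease off when over the speed limit."""
        if sensors.speed() > sensors.speed_limit():
            return Action(steering=0.0, braking=0.2)
        return None

    def drive_route(sensors) -> Action:
        """Goal 1: the default layer; make progress towards B."""
        return Action(steering=sensors.route_heading_error(), braking=0.0)

    def arbitrate(sensors) -> Action:
        # Higher-priority layers subsume the ones below them.
        for layer in (avoid_damage, follow_rules):
            action = layer(sensors)
            if action is not None:
                return action
        return drive_route(sensors)

Note how goal 1 only gets to act when nothing above it objects - and
how all the interesting ethics hides inside time_to_collision() and
escape_direction().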

Oops, already this writeup has a huge bug: pedestrians are just
objects in goal 3. They probably should be given as much priority
(or more, depending on traffic law and ethics in your country) as
the passengers. In fact, recognizing that humans have special value
is probably an important first step of any autonomous machine
ethics-software design (it is one of the few things nearly all
ethicists can agree on).
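
One crude way of patching that bug (the weights are entirely made up
for illustration): score potential harms so that any person
dominates any amount of property.

    # Hypothetical harm weights: humans first, by a wide margin.
    HARM_WEIGHT = {
        "pedestrian": 1000.0,  # possibly even higher than passengers,
        "passenger":  1000.0,  # depending on local law and ethics
        "animal":       10.0,
        "property":      1.0,
    }

    def harm_score(entities_at_risk):
        """Total weighted harm over everything a manoeuvre might hit."""
        return sum(HARM_WEIGHT[kind] for kind in entities_at_risk)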

So in the swerving case, the car will now try to evaluate which
option is best. If it is utilitarian it will minimize the expected
number of people hurt, taking uncertainty into account. But
apparently UK road ethics suggests that drivers should prefer to
take risks on themselves to reduce risks to others: swerving into a
wall to avoid the bus might be better than risking hitting the bus
and hurting its passengers. And so on. There are different views,
different legal regulations, and different moral and ethical
theories about what should be done. Which ones to implement? And why
those?
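
As a sketch of the utilitarian version (all numbers invented): pick
the manoeuvre with the lowest expected harm, with a knob for how
much the car discounts risk to its own passengers relative to others
- the UK-style preference would set it below one.

    # Discount on harm to our own passengers; < 1.0 means the car
    # accepts extra risk to itself to spare others. A policy choice!
    SELF_RISK_DISCOUNT = 0.5

    # For each manoeuvre: collision probability, expected harm to our
    # own passengers, expected harm to others. Invented numbers.
    options = {
        "brake_straight": {"p": 0.7, "self": 2.0, "others": 5.0},
        "swerve_to_wall": {"p": 0.9, "self": 4.0, "others": 0.0},
    }

    def expected_harm(o):
        return o["p"] * (SELF_RISK_DISCOUNT * o["self"] + o["others"])

    best = min(options, key=lambda name: expected_harm(options[name]))
    print(best)  # with these numbers: swerve_to_wall (1.8 vs 4.2)

The hard part is of course not the arithmetic but producing honest
probabilities and harm estimates in the fraction of a second before
impact.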

It is unlikely that cars would do very deep or innovative moral
thinking (we are not assuming anything close to human-level AI), but
even preprogrammed behaviours can get very complex and confusing.
Especially if cars network and coordinate their actions ("OK,
AUQ343, you swerve to the right, since that spot is empty according
to TJE232's sensors.").
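
The messages themselves need not be fancy - a toy sketch (the
protocol is entirely made up; the plates are from the example
above):

    import json

    def propose_swerve(sender, target, direction, witness):
        """Ask another car to swerve, citing whose sensors vouch for
        the target spot being empty."""
        return json.dumps({
            "type": "swerve_request",
            "from": sender,
            "to": target,
            "direction": direction,
            "spot_clear_according_to": witness,
        })

    msg = propose_swerve("TJE232", "AUQ343", "right", "TJE232")

A real protocol would also need authentication and some way of
deciding whose sensors to trust.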

Would governments mandate a single car-ethics, likely implemented as
part of safety regulations? Besides the impact on tinkerers and
foreign cars, it raises questions about human drivers, who obviously
follow their own individual, non-mandated morality. If your car does
not drive as you would have done and you are not even allowed to
change its behaviour, can you be held responsible for its actions in
any way? More deeply, would it be better to have ethical
monocultures on the road or not? Is it even possible?

When should auto-autos allow humans to drive? Sometimes overrides
are necessary; sometimes the car may know things the human doesn't.
There is a whole range of issues of trustworthiness, predictability
and owner-pet relationships going on here that are very
non-engineering-like.

In short, I don't envy the engineers who have to implement something
workable. But figuring out some moral heuristics that solve a lot of
problems seems doable and not too impossible.

--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University