[ExI] Do robot cars need ethics as well?

BillK pharos at gmail.com
Thu Nov 29 14:31:14 UTC 2012

Moral Machines
Posted by Gary Marcus   November 27, 2012
Your car is speeding along a bridge at fifty miles per hour when an
errant school bus carrying forty innocent children crosses its path.
Should your car swerve, possibly risking the life of its owner (you),
in order to save the children, or keep going, putting all forty kids
at risk? If the decision must be made in milliseconds, the computer
will have to make the call.
An all-powerful computer that was programmed to maximize human
pleasure, for example, might consign us all to an intravenous dopamine
drip; an automated car that aimed to minimize harm would never leave
the driveway. Almost any easy solution that one might imagine leads to
some variation or another on the Sorcerer’s Apprentice, a genie that’s
given us what we’ve asked for, rather than what we truly desire. A
tiny cadre of brave-hearted souls at Oxford, Yale, and the Berkeley,
California, Singularity Institute are working on these problems, but
the annual amount of money being spent on developing machine morality
is tiny.

So it is not just a matter of making sure the car doesn't crash (as at
present). Humans will require the robots to make judgement calls. And
that is definitely not easy.

Consider the bus crash example. What if it is a prison bus with
convicted murderers in it? Does each human carry a value tag for the
computer to use? Or is it personal survival that always wins? As the
writer says, human morality is still a work in progress, so
programming ethics into robots is problematic.
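To see why the "value tag" idea is so fraught, here is a toy sketch of
what such a rule would literally amount to: a utilitarian calculation
that minimizes expected harm. Every name, weight, and probability below
is hypothetical, invented purely for illustration — it is not anyone's
proposed policy.

```python
# Toy sketch of a "value tag" decision rule. All weights and risk
# probabilities are made up for illustration only.

def expected_harm(outcome, value_tags):
    """Expected moral cost of an action: sum over everyone put at
    risk of (their value tag) x (their probability of death)."""
    return sum(value_tags[kind] * risk for kind, risk in outcome)

# The hard part the post raises: who assigns these numbers, and how?
value_tags = {"owner": 1.0, "child": 1.0}

# Candidate actions as lists of (person kind, probability of death).
swerve = [("owner", 0.5)]            # risk the owner's life
keep_going = [("child", 0.3)] * 40   # risk all forty children

actions = {"swerve": swerve, "keep_going": keep_going}
choice = min(actions, key=lambda name: expected_harm(actions[name], value_tags))
print(choice)  # with these made-up numbers, swerving minimizes harm
```

Note that the answer flips entirely if the tags change — give the bus
passengers a lower tag (the prison-bus case) and the same arithmetic
tells the car to keep going. The code is trivial; choosing the numbers
is the unsolved moral problem.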
