[ExI] A Vindication of the Rights of Machines
Anders Sandberg
anders at aleph.se
Tue Feb 12 13:21:41 UTC 2013
On 12/02/2013 10:20, BillK wrote:
> Now that Watson is starting to produce recommendations for cancer
> treatment plans, who gets blamed for mistakes?
Blame can be moral and legal. No doubt the doctor responsible for the
treatment is legally responsible: if Watson suggests something crazy he
is expected to catch it. If things go badly the doctor might get sued
for negligence, which he will then try to deflect by arguing that he was
following best practice (in the UK, likely according to the Bolam test
http://en.wikipedia.org/wiki/Bolam_v_Friern_Hospital_Management_Committee )
and perhaps even try to pass the blame on to the IBM team (as per
http://en.wikipedia.org/wiki/Bolitho_v_City_and_Hackney_Health_Authority
). Messy, but basically a question of what expertise a doctor should
follow. (Can you tell that I attended a lecture on this a few days ago? :-)
Moral blame is more fun for us ethicists. Moral agents can be
blameworthy, since they perform moral actions and are commonly thought
to be able to respond to the blame. If you cannot change your actions
because of praise or blame, you are likely not a moral agent. So they
need 1) to be able to change their behavior by learning new
information, and 2) to understand moral blame - merely having
reinforcement learning doesn't count.
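To make that distinction concrete, here is a toy Python sketch (my own
illustration, nothing to do with Watson or any real system): a plain
reinforcement learner just shifts its action values when it is punished,
while a "blame-aware" agent also records which norm the blame invoked, so
it can avoid untried actions that would violate the same norm. Only the
latter does anything resembling understanding the blame. All the names
here are invented for the example.

    class RLAgent:
        """Changes behaviour from reward alone; blame is just a negative number."""
        def __init__(self):
            self.value = {"recommend": 0.0, "escalate": 0.0, "withhold": 0.0}

        def update(self, action, reward, lr=0.1):
            # Standard incremental value update: no representation of *why*
            # the outcome was good or bad, only how good or bad it was.
            self.value[action] += lr * (reward - self.value[action])


    class BlameAwareAgent(RLAgent):
        """Also stores the norm that the blame invoked, so it generalises
        to actions it has never actually been punished for."""
        def __init__(self):
            super().__init__()
            self.violated_norms = set()

        def receive_blame(self, action, norm):
            # Blame carries content (the norm), not just a penalty signal.
            self.violated_norms.add(norm)
            self.update(action, reward=-1.0)

        def permits(self, action, norms_action_would_violate):
            # Refuses any action that would breach a norm it has been
            # blamed over before, even if that exact action was never punished.
            return not (norms_action_would_violate & self.violated_norms)

The two-class split is of course contrived; the point is only that
condition 2) requires the content of the blame to do work inside the
agent, not merely its sign.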
> For many years staff have used the 'computer error' excuse for every
> incompetent treatment of customers. Even big banks losing millions in
> wild trading deals blame the computer.
>
> So, yes, machines will get the blame until they can argue back and
> make a case for the defence.
Exactly. Right now the complicated moral proxyhood of machines means
that responsibility gets so spread out that nobody is responsible. This
is called the "many hands problem", and is of course quite the opposite
of a problem to many organisations: the inability to find a responsible
party means that they have impunity. In fact, a machine that could take
the blame might be *bad* from this perspective, since it would mean that
responsibility stops being diffuse.
I think blameworthy machines are going to be hard to make, however. An
AGI might very well be able to change its behavior based on its own
decisions or what it learns, but might have a hard time understanding
human blame concepts since it has a fundamentally alien outlook and
motivation system.
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University