[ExI] A Vindication of the Rights of Machines
pharos at gmail.com
Tue Feb 12 10:20:23 UTC 2013
On Tue, Feb 12, 2013 at 9:38 AM, Anders Sandberg wrote:
> Personally I think machine rights make sense when the machine can
> understand them, something that is pretty far away (AGI complete?). Some
> machines might be moral patients (i.e. we might not be morally allowed to
> treat them badly, for some kinds of bad) much earlier - I am arguing this
> especially for early uploading experiments, but it might apply to some other
> systems. Many machines are also moral proxies: they are not moral agents nor
> responsible, but they are proxies for a moral agent and that person extends
> their responsibility through the machine.
Now that Watson is starting to produce recommendations for cancer
treatment plans, who gets blamed for mistakes?
For many years, staff have used the 'computer error' excuse for every
incompetent treatment of customers. Even big banks that lose millions
in wild trading deals blame the computer.
So, yes, machines will get the blame until they can argue back and
make a case for the defence.