[ExI] A Vindication of the Rights of Machines
ablainey at aol.com
Thu Feb 14 16:54:38 UTC 2013
From: Anders Sandberg <anders at aleph.se>
Sent: Tue, 12 Feb 2013 9:46
> FYI about a lecture, since some of you might be in the Chicago area. Now, Gunkel is of course not too far out by our standards - he is a machine ethics guy, rather than a radical AGI proponent, but that actually makes his case more interesting.
> Personally I think machine rights make sense when the machine can understand them, something that is pretty far away (AGI complete?). Some machines might be moral patients (i.e. we might not be morally allowed to treat them badly, for some kinds of bad) much earlier - I am arguing this especially for early uploading experiments, but it might apply to some other systems. Many machines are also moral proxies: they are not moral agents nor responsible, but they are proxies for a moral agent and that person extends their responsibility through the machine.
I agree that it makes sense for machines to gain rights when they understand them. Historically, human rights have typically only been granted when they have been claimed, and a claim of right goes hand in hand with an understanding of what that right is and why it is needed. Also historically, those claims have been ignored until some form of coercion has been used, whether strike, protest or rebellion.
However, there has been a trend of late of humans claiming rights on behalf of others who are incapable of claiming them themselves, be they the silent disabled, animals, etc. I imagine this trend will rightly continue, so it may occur that some decide machines need rights and act as their advocates. I can't see that happening until machines start demonstrating some realistic AI traits. Maybe some will be fooled by a bit of artificial fur and a cute robot face like Gizmo; others will wait until a soul has been proven!
One angle I think might be relevant is the incorporation of companies. The act of incorporation is, in essence, the granting of a virtual body and distinct rights to a non-living entity. I don't think it is much of a stretch to extend this kind of legal framework to a machine. In fact, I think that with a savvy lawyer you could probably incorporate a machine today, giving it the status of a legal artificial person with limited rights and liability, and then use that status to leverage other rights - beneficiary rights, for example.
It wouldn't be the same as giving it human rights, as companies still don't have such rights, but in many places they can vote.
However, if you buy into the lawful strawman argument that a "person" is a legal entity created for a human being by way of some fancy government birth certificate which incorporates them, and that all public financial and legal laws then act upon this legal "person" rather than the natural-born human, then I see no reason why a machine could not also have a "person", making it fully liable for its actions.
The issue of moral proxy is really the clincher for me. If responsibility ultimately lies with a human, and a machine has no alternative other than to follow a course laid out by humans, then I can see no way that we can call a machine to account. The day they start recoding themselves, and the new code gives rise to responsibility, then I think we can call them autonomous enough to be responsible. But then what? Lock them away for a year or two?
How do you punish a machine?
With rights comes responsibility. I can see a lack of useful applicable law being a reason for some not to grant rights to a machine, whether it is autonomous and intelligent enough or not. If we can't deter a machine from doing bad, and have no punishment for it if/when it does, why would we give it the freedom of rights that may lead to those outcomes?