[ExI] A Vindication of the Rights of Machines
anders at aleph.se
Thu Feb 14 18:12:26 UTC 2013
On 14/02/2013 16:54, ablainey at aol.com wrote:
> However there has been a trend of late of humans claiming rights on
> behalf of others who are incapable, be they the silent disabled,
> animals, etc. I imagine this trend will rightly continue, so it may
> occur that some decide machines need rights and act as their advocates.
> I can't see that happening until machines start demonstrating some
> realistic AI traits. Maybe some will be fooled by a bit of artificial
> fur and a cute robot face like Gizmo; others will wait until a soul
> has been proven!
You can also have analogies. In my upcoming paper on upload ethics I
argue that emulations of animals should be treated as if they had the
same moral standing as the animal, unless we can prove that the
emulation lacks the relevant properties for moral patienthood. But this
is because they are analogous to the original. If the AI is something
unique, we have a harder time figuring out its moral status.
> One angle I think might be of relevance is the incorporation of
> companies. The act of incorporation is in essence giving a virtual
> body and distinct rights to a non living entity. I don't think it is
> much of a stretch to extend this kind of legal framework to a machine.
> In fact I think with a savvy lawyer you could probably incorporate a
> machine today giving it a legal artificial person with limited rights
> and liability. Then use that status for leveraging other rights, say
> for example beneficiary rights.
Legal persons are, however, not moral persons. Nobody says it is wrong
for the government to dissolve or split a company, despite the
misgivings we have about capital punishment. The same goes for legal
rights: ideally they should track moral rights, but the mapping is
somewhat arbitrary.
> It wouldn't be the same as giving it human rights as companies still
> don't have such rights, but in many places they can vote.
Where else but the City of London?
> The issue of moral proxy is really the clincher for me. If
> responsibility ultimately lies with a human and a machine has no
> alternative other than to follow a course laid out by humans, then I
> can see no way that we can call a machine to account. The day when
> they start recoding themselves and the new code gives rise to
> responsibility, then I think we can call them autonomous enough to be
> responsible. But then what? Lock them away for a year or two?
> How do you punish a machine?
This is a real problem. If there is nothing like punishment, there might
not be any real moral learning. You can have a learning machine that
gets negative reinforcement and *behaves* right due to this, but it is
just like a trained animal. The interesting thing is that the negative
reinforcement doesn't have to be a punishment by our standards, just an
error signal.
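The point that negative reinforcement can be a bare training signal
rather than punishment can be made concrete. Here is a minimal sketch
(the two-action setup, the action names, and the reward values are
hypothetical, chosen only for illustration): the "lesson" the machine
learns is nothing but an arithmetic update on a number.

```python
import random

# Hypothetical setup: an agent picks between two actions and learns
# from a scalar reward. The "negative reinforcement" is just a number
# in an update rule, not suffering in any moral sense.
random.seed(0)

values = {"right": 0.0, "wrong": 0.0}  # estimated value of each action
alpha = 0.5                            # learning rate

def reward(action):
    # -1.0 is the "negative reinforcement": a bare error signal
    return 1.0 if action == "right" else -1.0

for _ in range(100):
    # epsilon-greedy choice: mostly exploit, occasionally explore
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # the entire "moral lesson" is this one arithmetic step
    values[action] += alpha * (reward(action) - values[action])

print(max(values, key=values.get))  # the trained behaviour: "right"
```

The agent ends up *behaving* right, but nothing resembling punishment or
moral understanding is involved anywhere in the loop, which is exactly
the trained-animal worry above.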
Moral proxies can also misbehave: I tell my device to do A, but it does
B. This can be because I failed at programming it properly, but it can
also be because I did not foresee the consequences of my instructions,
or the interaction between the instructions and the environment. My
responsibility depends 1) on how much causal control I have over the
consequences, and 2) on how much I allow consequences to ensue outside
my control.
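The instruction/environment interaction can be sketched too. In this
hypothetical example (the device, its parameters, and the container
scenario are all invented for illustration), the machine executes
instruction A faithfully, yet the outcome is B because of an
environmental fact the instructor did not anticipate:

```python
# Hypothetical moral-proxy sketch: the device does exactly what it was
# told, but the instruction interacts badly with the environment.
def device(instruction, environment):
    # Faithful execution: pour the requested amount into the container.
    level = environment["current_level"] + instruction["amount"]
    # Unforeseen interaction: the container has finite capacity.
    spill = max(0, level - environment["capacity"])
    return {"level": min(level, environment["capacity"]), "spill": spill}

# The instructor asked for 10 units, unaware the container held 5 of 8.
result = device({"amount": 10}, {"current_level": 5, "capacity": 8})
print(result)  # the instruction was followed, yet 7 units spilled
```

The device is blameless in the causal-control sense: the fault lies with
the instructor's foresight, which is the point about where
responsibility attaches.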
Future of Humanity Institute
Philosophy Faculty of Oxford University