<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 14/02/2013 16:54, <a class="moz-txt-link-abbreviated" href="mailto:ablainey@aol.com">ablainey@aol.com</a>
wrote:<br>
</div>
<blockquote
cite="mid:8CFD8EB1B6AF1D8-1370-4A28@webmail-d065.sysops.aol.com"
type="cite">
<div>However, there has been a trend of late of humans claiming
rights on behalf of others who are incapable, be that the silent
disabled, animals, etc. I imagine this trend will rightly continue,
so it may occur that some decide machines need rights and act as
their advocate. I can't see that happening until machines start
demonstrating some realistic AI traits. Maybe some will be fooled
by a bit of artificial fur and a cute robot face like Gizmo; others
will wait until a soul has been proven!</div>
</blockquote>
<br>
You can also have analogies. In my upcoming paper on upload ethics I
argue that emulations of animals should be treated as if they had
the same moral standing as the animal unless we can prove that the
emulation lacks the relevant properties for having moral patienthood.
But this is because they are analogous to the original. If the AI is
something unique, we have a harder time figuring out its moral
status.<br>
<br>
<br>
<blockquote
cite="mid:8CFD8EB1B6AF1D8-1370-4A28@webmail-d065.sysops.aol.com"
type="cite">
<div>One angle I think might be of relevance is the incorporation of
companies. The act of incorporation is in essence giving a virtual
body and distinct rights to a non-living entity. I don't think it is
much of a stretch to extend this kind of legal framework to a
machine. In fact, I think with a savvy lawyer you could probably
incorporate a machine today, making it a legal artificial person
with limited rights and liability. Then use that status for
leveraging other rights, say for example beneficiary rights.<br>
</div>
</blockquote>
<br>
Legal persons are, however, not moral persons. Nobody says that it is
wrong for the government to dissolve or split a company, despite the
misgivings we have about capital punishment. Same thing for legal
rights: ideally they should track moral rights, but it is a bit
random.<br>
<br>
<blockquote
cite="mid:8CFD8EB1B6AF1D8-1370-4A28@webmail-d065.sysops.aol.com"
type="cite">
<div>It wouldn't be the same as giving it human rights, as companies
still don't have such rights, but in many places they can vote.</div>
</blockquote>
Where else but the City of London?<br>
<br>
<br>
<blockquote
cite="mid:8CFD8EB1B6AF1D8-1370-4A28@webmail-d065.sysops.aol.com"
type="cite">
<div>The issue of moral proxy is really the clincher for me. If
responsibility ultimately lies with a human and a machine has no
alternative other than to follow a course laid out by humans, then I
can see no way that we can call a machine to account. The day they
start recoding themselves and the new code gives rise to
responsibility, then I think we can call them autonomous enough to
be responsible. But then what? Lock them away for a year or two?<br>
How do you punish a machine?<br>
</div>
</blockquote>
This is a real problem. If there is nothing like punishment, there
might not be any real moral learning. You can have a learning
machine that gets negative reinforcement and *behaves* right due to
this, but it is just like a trained animal. The interesting thing is
that the negative reinforcement doesn't have to be a punishment by
our standards, just an error signal.<br>
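To make that concrete, here is a toy sketch (my own illustration in
Python; the action names, reward numbers and learning rate are
arbitrary choices): the learner only ever sees a scalar error signal
and shifts its behaviour accordingly, with nothing in the loop that
resembles punishment in any moral sense.<br>
<pre>
# A "trained animal" learner: behaviour changes because of a bare
# numerical error signal, not because any lesson is understood.
import random

class ReinforcementLearner:
    def __init__(self, actions, learning_rate=0.1):
        self.values = {a: 0.0 for a in actions}  # estimated value per action
        self.learning_rate = learning_rate

    def choose(self):
        # Pick the currently best-valued action, breaking ties at random.
        best = max(self.values.values())
        return random.choice([a for a, v in self.values.items() if v == best])

    def update(self, action, signal):
        # A negative signal nudges the learner away from that action.
        self.values[action] += self.learning_rate * signal

agent = ReinforcementLearner(["defect", "cooperate"])
for _ in range(100):
    act = agent.choose()
    # Negative reinforcement for defecting, positive for cooperating.
    agent.update(act, -1.0 if act == "defect" else 1.0)

print(agent.choose())  # after training the agent *behaves* right: "cooperate"
</pre>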
<br>
Moral proxies can also misbehave: I tell my device to do A, but it
does B. This can be because I failed at programming it properly, but
it can also be because I did not foresee the consequences of my
instructions, or the interaction between the instructions and the
environment. My responsibility depends on 1) how much causal control
I have over the consequences, and 2) how much I allow consequences
to ensue outside my causal control.<br>
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University </pre>
</body>
</html>