<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 14/02/2013 19:13, <a class="moz-txt-link-abbreviated" href="mailto:ablainey@aol.com">ablainey@aol.com</a>
wrote:<br>
</div>
<blockquote
cite="mid:8CFD8FE7C8830B6-500-2715C@webmail-d138.sysops.aol.com"
type="cite">
<div style="font-family:arial,helvetica;font-size:10pt;color:black">
<div id="AOLMsgPart_2_946890b0-61e5-478c-8435-878f29a0cc86">
&gt; You can also have analogies. In my upcoming paper on upload ethics I
argue that emulations of animals should be treated as if they had the same
moral standing as the animal unless we can prove that the emulation lacks
the relevant properties for having moral patienthood. But this is because
they are analogous to the original. If the AI is something unique we have a
harder time figuring out its moral status.<br>
<br>
Similarly, what of the moral status of an incomplete or deficiently
uploaded human? Do we afford them equal rights to the original? Personally
I am tempted to put them in the same unknown moral category as AI.<br>
</div>
</div>
</blockquote>
<br>
Well, if you want to be consistent you should treat early human uploads the
same as animal uploads: assume they have the same moral standing as the
original, and then check whether there are relevant impairments. So if the
upload doesn't function, it might be equivalent to a person in a persistent
vegetative state or suffering massive brain damage. The interesting option
is that you can freeze it and try repairing it someday later (= super
cryonics). <br>
<br>
<br>
<blockquote
cite="mid:8CFD8FE7C8830B6-500-2715C@webmail-d138.sysops.aol.com"
type="cite">
<div style="font-family:arial,helvetica;font-size:10pt;color:black">
<div id="AOLMsgPart_2_946890b0-61e5-478c-8435-878f29a0cc86">
&gt; This is a real problem. If there is nothing like punishment, there
might not be any real moral learning. You can have a learning machine that
gets negative reinforcement and *behaves* right due to this, but it is just
like a trained animal. The interesting thing is that the negative
reinforcement doesn't have to be a punishment by our standards, just an
error signal.<br>
<br>
Perhaps. I personally have "brain hurt" with this area. I can only equate
it to the issue of needing to replicate the human chemistry in uploads, or
all our error signals such as pain, remorse and jealousy will be lost. To
me, I can't help but think a simple error signal to a machine is as
meaningless as showing a red card to a sportsman who doesn't know what it
means. It is only a symbol; the actual punishment only comes from the
chemistry it evokes. If we give a machine a symbol of pain, that really
won't cut it imho.<br>
</div>
</div>
</blockquote>
<br>
Suppose that, for some reason, you followed the rule of doing an action
less often when a red card is shown as a consequence. That is equivalent to
reinforcement learning, even if you have no sense of the meaning of the
card. Or rather, to you the meaning of the card would be "do that action
less often". <br>
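<br>
A deliberately minimal sketch of that update rule (a toy illustration only;
the class and action names are invented): the agent never interprets the
card, it just lowers the propensity of whatever action preceded it.<br>
<pre>
import random

class RedCardLearner:
    """Agent for whom the red card is nothing but negative reinforcement."""

    def __init__(self, actions, damping=0.5):
        self.preferences = {a: 1.0 for a in actions}  # action propensities
        self.damping = damping

    def choose(self):
        # Pick an action with probability proportional to its propensity.
        actions = list(self.preferences)
        weights = [self.preferences[a] for a in actions]
        return random.choices(actions, weights=weights)[0]

    def observe(self, action, red_card_shown):
        # To the agent, the card "means" only this: do that action less often.
        if red_card_shown:
            self.preferences[action] *= self.damping

learner = RedCardLearner(["tackle hard", "pass", "dive"])
for _ in range(100):
    act = learner.choose()
    learner.observe(act, red_card_shown=(act in {"tackle hard", "dive"}))
print(learner.preferences)  # the "punished" actions end up with low propensity
</pre>
The punishment is nothing but the update rule itself, which is the point:
the error signal shapes behavior without being understood as anything
else. <br>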
<br>
Remorse, shame and guilt are about detecting something more: you have
misbehaved according to some moral system you think is true. They are
signals that your behavior is inconsistent (in relation to your morals or
to your community). So they hinge on 1) understanding that there are moral
systems you ought to follow, 2) understanding that you acted against the
system, and sometimes 3) a wish to undertake action to fix the error. All
pretty complex concepts, and usually not even properly conceptualised in
humans: we typically run this as an emotional subsystem rather than as a
conscious plan (which is partly why ethicists behave so... nonstandard...
in regards to morals). I can totally imagine an AI doing the same, but
programming all this requires some serious internal abilities. It needs to
be able to reason about itself and its behavior patterns, to notice that
its behavior and its morals are inconsistent, and quite likely to have a
theory of mind for other agents. A tall order. Which is of course why
evolution has favored shortcut emotions that do much of the work. <br>
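<br>
As a rough sketch of how much machinery even a bare-bones, conscious-plan
version of this needs (a toy illustration; every name is an invented
placeholder, and the theory-of-mind part is left out entirely):<br>
<pre>
class MoralSelfMonitor:
    """Guilt as an explicit self-monitoring loop rather than an emotion."""

    def __init__(self, norms):
        self.norms = set(norms)   # 1) a moral system the agent takes itself to be bound by
        self.behavior_log = []    # a record of its own behavior patterns

    def act(self, action, violates=None):
        # The agent has to be able to describe its own actions to itself.
        self.behavior_log.append((action, violates))

    def introspect(self):
        # 2) Detect inconsistency between its behavior and its norms.
        return [(a, n) for a, n in self.behavior_log if n in self.norms]

    def guilt(self):
        # 3) Turn detected inconsistency into an intention to repair it.
        return [f"repair needed: '{a}' broke norm '{n}'" for a, n in self.introspect()]

agent = MoralSelfMonitor(norms={"do not deceive", "keep promises"})
agent.act("reported the backup as complete when it had failed", violates="do not deceive")
print(agent.guilt())
</pre>
Even this skeleton presupposes that the agent can represent its own actions
and norms to itself; the emotional shortcut gets much of the same effect
without any of the explicit bookkeeping. <br>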
<br>
<blockquote
cite="mid:8CFD8FE7C8830B6-500-2715C@webmail-d138.sysops.aol.com"
type="cite">
<div style="font-family:arial,helvetica;font-size:10pt;color:black">
<div id="AOLMsgPart_2_946890b0-61e5-478c-8435-878f29a0cc86">
&gt; Moral proxies can also misbehave: I tell my device to do A, but it
does B. This can be because I failed at programming it properly, but it can
also be because I did not foresee the consequences of my instructions, or
the interaction between the instructions and the environment. My
responsibility depends on 1) how much causal control I have over the
consequences, and 2) how much I allow consequences to ensue outside my
causal control.<br>
<br>
A problem that already exists. I have wondered about the implications of
the automatic parking available in some cars. Should you engage this
parking system and your car *decides* to prang the parked vehicle in front,
who is responsible? The car for bad judgement, you for not applying the
brakes, the engineer who designed the physical mechanisms, the software
developer, or the salesman who told you it was infallible?<br>
</div>
</div>
</blockquote>
<br>
Exactly. If the car is just a proxy, the responsibility question is about
who made the most culpable assumptions. <br>
<br>
<blockquote
cite="mid:8CFD8FE7C8830B6-500-2715C@webmail-d138.sysops.aol.com"
type="cite">
<div style="font-family:arial,helvetica;font-size:10pt;color:black">
<div id="AOLMsgPart_2_946890b0-61e5-478c-8435-878f29a0cc86">
I think as such autonomous systems evolve there should, and hopefully will,
be a corresponding evolution of law brought about by liability suits. I'm
not aware of any yet, but I'm 100% sure they will appear if they haven't
already. Perhaps the stakes are not yet high enough with simple parking
mishaps, but when the first self-driving car ploughs through a bus stop
full of nuns, the lawyers will no doubt wrestle it out for us.<br>
</div>
</div>
</blockquote>
<br>
Liability is the big thing. While ethicists often think law is a
boring afterthought, there is quite a lot of clever analysis in
legal reasoning about responsibility. <br>
<br>
But the wrong liability regime can also mess up whole fields. The lack of
software liability means security gets far too little attention, yet
stricter software liability would no doubt make it harder to write free
software. Car companies are scared of giving cars too much autonomy because
of liability, yet the lack of liability in military operations is leading
to some dangerously autonomous systems (the main reason, IMHO, that the
military is not keen on fully autonomous drones is simply traditionalism
and employment security; the CIA likely has fewer inhibitions).
Pharmaceutical liability seems to be throttling drug development
altogether. <br>
<br>
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University </pre>
</body>
</html>