[ExI] A Vindication of the Rights of Machines
Anders Sandberg
anders at aleph.se
Thu Feb 14 21:39:20 UTC 2013
On 14/02/2013 19:13, ablainey at aol.com wrote:
>
> >You can also have analogies. In my upcoming paper on upload ethics I
> >argue that emulations of animals should be treated as if they had the
> >same moral standing as the animal unless we can prove that the
> >emulation lacks the relevant properties for having moral patienthood.
> >But this is because they are analogous to the original. If the AI is
> >something unique we have a harder time figuring out its moral status.
>
> Similarly, what of the moral status of an incomplete or deficiently
> uploaded human? Do we afford them equal rights to the
> original? Personally I am tempted to put them in the same unknown moral
> category as AI.
Well, if you want to be consistent you should treat early human uploads
the same as animal uploads: assume they have the same moral standing as
the original, and then check whether there are relevant impairments. So if
the upload doesn't function, it might be equivalent to a person in a
persistent vegetative state or suffering massive brain damage. The
interesting option is that you can freeze it and try repairing it someday
later (= super cryonics).
>
> >This is a real problem. If there is nothing like punishment, there
> >might not be any real moral learning. You can have a learning machine
> >that gets negative reinforcement and *behaves* right due to this, but
> >it is just like a trained animal. The interesting thing is that the
> >negative reinforcement doesn't have to be a punishment by our
> >standards, just an error signal.
>
> Perhaps. I personally have "Brain Hurt" with this area. I can only
> equate it to the issue of needing to replicate the human chemistry in
> uploads, or all our error signals such as pain, remorse and jealousy will
> be lost. I can't help but think a simple error signal to a machine
> is as meaningless as showing a red card to a sportsman who doesn't
> know what it means. It is only a symbol; the actual punishment only
> comes from the chemistry it evokes. If we give a machine a symbol of
> pain, that really won't cut it imho.
Suppose that, for some reason, you followed the rule of doing an action
less often whenever it results in you being shown a red card. That is
equivalent to reinforcement learning, even if you have no sense of the
meaning of the card. Or rather, to you the meaning would be "do that
action less often".
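
To make that concrete, here is a minimal Python sketch (purely my own
illustration, not anything from the thread; the action names and the
learning rate are made up). The "red card" is nothing but an error
signal, and the learner's entire "understanding" of it is a lowered
propensity for the action that preceded it:

    import random

    class RedCardLearner:
        """Learner whose only grasp of the red card is: do that action less often."""

        def __init__(self, actions, learning_rate=0.1, explore=0.1):
            # start indifferent between the available actions
            self.preferences = {a: 0.0 for a in actions}
            self.learning_rate = learning_rate
            self.explore = explore

        def choose(self):
            # mostly pick the currently preferred action, occasionally explore
            if random.random() < self.explore:
                return random.choice(list(self.preferences))
            return max(self.preferences, key=self.preferences.get)

        def observe(self, action, red_card_shown):
            # the whole "meaning" of the card: lower the propensity of the
            # action that preceded it; no card mildly reinforces the action
            delta = -self.learning_rate if red_card_shown else self.learning_rate
            self.preferences[action] += delta

    # hypothetical usage: "dive" always earns a red card, so it fades out
    learner = RedCardLearner(["tackle", "pass", "dive"])
    for _ in range(200):
        a = learner.choose()
        learner.observe(a, red_card_shown=(a == "dive"))
    print(learner.preferences)

The learner never knows what a red card "is"; it just ends up showing the
card-avoiding behavior, which is exactly the trained-animal case above.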
Remorse, shame and guilt are about detecting something more: you have
misbehaved according to some moral system you think is true. They are
signals that your behavior is inconsistent (in relation to your morals
or to your community). So they hinge on 1) understanding that there are
moral systems you ought to follow, 2) understanding that you acted
against the system, and sometimes 3) a wish to undertake action to fix
the error. All pretty complex concepts, and usually not even properly
conceptualised in humans - we typically run this as an emotional
subsystem rather than as a conscious plan (this is partly why ethicists
behave so... nonstandard... in regard to morals). I can totally imagine
an AI doing the same, but programming all this requires some serious
internal abilities. It needs to be able to reason about itself and its
behavior patterns, to detect that its behavior is inconsistent with its
morals, and quite likely to have a theory of mind for other agents. A
tall order. Which is of course why evolution has favored shortcut
emotions that do much of the work.
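
As a purely illustrative sketch (mine, and resting on the big assumption
that norms could be written down as explicit predicates rather than run
as an emotional subsystem; it also leaves out the theory-of-mind part),
the three ingredients might be wired together like this:

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class MoralSystem:
        # 1) an explicit system the agent takes itself to be bound by
        norms: List[Callable[[str], bool]]   # each norm: action -> permissible?

    @dataclass
    class ReflectiveAgent:
        morals: MoralSystem
        history: List[str] = field(default_factory=list)

        def act(self, action: str) -> None:
            self.history.append(action)

        def reflect(self) -> List[str]:
            # 2) notice actions inconsistent with the endorsed system
            violations = [a for a in self.history
                          if not all(norm(a) for norm in self.morals.norms)]
            # 3) the crude analogue of wanting to fix the error:
            #    hand the violations to some repair routine
            return violations

    # hypothetical usage with a single made-up norm
    agent = ReflectiveAgent(MoralSystem(norms=[lambda a: a != "lie"]))
    agent.act("help")
    agent.act("lie")
    print(agent.reflect())   # ['lie'] - the "guilt" signal, minus the feeling

Even this toy version needs a self-model (the history) and an explicit
representation of its own norms, which is what makes it a tall order.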
>
> >Moral proxies can also misbehave: I tell my device to do A, but it
> >does B. This can be because I failed at programming it properly, but
> >it can also be because I did not foresee the consequences of my
> >instructions. Or the interaction between the instructions and the
> >environment. My responsibility is 1) due to how much causal control I
> >have over the consequences, and 2) how much I allow consequences to
> >ensue outside my causal control.
>
> A problem that already exists. I have wondered about the implications
> of the automatic parking available in some cars. Should you engage this
> parking system and your car *decides* to prang the parked vehicle in
> front, who is responsible? The car for a bad judgement, you for not
> applying the brakes, the engineer who designed the physical
> mechanisms, the software developer, or the salesman who told you it was
> infallible?
Exactly. If the car is just a proxy the responsibility question is about
who made the most culpable assumptions.
> I think as such autonomous systems evolve there should, and hopefully
> will, be a corresponding evolution of law brought about by liability
> suits. I'm not aware of any yet, but I'm 100% sure they will appear if
> they haven't already. Perhaps the stakes are not yet high enough with
> simple parking mishaps, but when the first self-driving car ploughs
> through a bus stop full of nuns, the lawyers will no doubt wrestle it
> out for us.
Liability is the big thing. While ethicists often think law is a boring
afterthought, there is quite a lot of clever analysis in legal reasoning
about responsibility.
But the wrong liability regime can also mess up whole fields. The lack
of software liability means security is of far too little concern, yet
stricter software liability would no doubt make it harder to write free
software. Car companies are scared of giving cars too much autonomy
due to liability, yet the lack of liability in military operations is
leading to some dangerously autonomous systems (the main reason, IMHO,
that the military is not keen on fully autonomous drones is simply
traditionalism and employment security; the CIA likely has fewer
inhibitions).
Pharmaceutical liability seems to be throttling drug development
altogether.
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University