[extropy-chat] Neural Engineering
Anders Sandberg
asa at nada.kth.se
Fri Mar 19 10:11:21 UTC 2004
On Friday, 19 March 2004 at 06.10, Robert J. Bradbury wrote:
> Now what I am interested in from the perspective of the
> "Neural Engineering" perspective is the possibility of
> extracting (non-destructively and non-painfully) information
> that leads directly to the people who are responsible for
> the direct (and most probably painful) termination of
> innocent people.
It seems that the question here is how strong our right to our own mindstates
is compared to other rights. This is actually a good question.
As I tend to think of it, we own the contents of our minds just as we own our
bodies. We have the sole right to determine what to do with our internal
information, whether to reveal it or remain silent. This right appears to be
stronger than ordinary ownership rights, just as the right to one's body is
stronger than ordinary ownership of external objects.
The freedom to choose is part of this; the ability to make one's own decisions
is tightly linked not just to the mindstate but to being a moral subject at
all. If that is infringed, it is even more serious than merely 'stealing'
someone's mental information. But the current scenario is not about limiting
this freedom, just about checking that the mind does not contain any plans for
killing someone, or the memories of having done so.
My basic heuristic for which rights may be infringed in the pursuit of
justice is that only weaker rights may be violated when dealing with a rights
violation (e.g. theft (a violation of the right to property) should not be
punished by death (a violation of the right to life)), and that coercion may
not be initiated. Being a minarchist, I acknowledge that we can delegate a
monopoly on coercion to the (local) state if we are careful.
From this standpoint it is entirely OK to make a social contract where the
police are allowed to check the minds of people when investigating a crime. It
is not truly different from all the other investigatory powers we allow the
police, powers which we accept since they produce a safer society (assuming
the police to be competent, not corrupt, etc.). The integrity violation is
less serious than the violation of the right to life committed by the
murderer. Such scanning is only moral if the crime is at least a crime against
the right to one's body or mind - no scanning for thieves. But these
investigations are after the fact: scanning brains for murderers is OK, but it
is not clear it is moral to look for would-be murderers.
In current legal practice it is often problematic even to look too broadly for
suspects; there has to be a reasonable amount of evidence to allow invasions
of privacy. One could of course assume a kind of transparent-society
contract, where random scanning for serious misdeeds (murder, rape,
brainhacking) is accepted. But we are a long way from that, and building
institutions that could handle that power without leaks or corruption is
hard.
It is the would-be murderers that are problematic. Innocent until proven
guilty is an important heuristic for open societies - it embodies the trust/
niceness part of the reciprocal altruism we need to keep society functioning,
and it shows that we value freedom more highly than punishment. This means
that coercive scanning for potential crimes is not really acceptable: it both
violates the non-initiation-of-force principle and produces violations of
rights. Voluntarily undergoing a scan to show that one is "nice" is of course
OK, and can be included in voluntary contracts. It is the forcing of
scanning that makes it problematic, and this is compounded by the idea of
pre-emptive justice. Pre-emption places the burden of proof on the accused
("Prove that you *won't* do it!") and tends to lead to unstable situations
where it is better to act before anybody else has a chance to act, producing
rash decisions.
Would the benefits of finding these dangerous persons outweigh the risks? I
doubt it. Assume that one would-be murderer in 10 actually does murder
someone (the exact numbers are of course uncertain, but 1:10 doesn't sound
that strange - it could be 1:100), that one person in a hundred is a would-be
murderer (probably a bit too high :-), and that we use scanning on the entire
population. That means that 99% of people get their mental privacy violated.
Then we have the 0.9% potential murderers we now have to deal with. Most are
simply in need of some help (psychological, economic, social, whatever). If
they can all get it, that isn't so bad, but this is very unlikely to happen in
the near term. More likely they are treated in ways that limit the risk to
others, which in general means limitations on them. So we end up with 0.1%
stopped would-be murderers, and 0.9% of people who would actually never have
murdered anybody but now have their freedom circumscribed anyway, plus the
basic 99% of coerced citizens. Is preventing 1 murder worth the price of
putting 9 innocents (possibly nasty people, but innocent) in custody or under
permanent monitoring, plus 990 people who got privacy invasions? I think most
people would say it is not.
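
Just to make the bookkeeping explicit, here is the same back-of-the-envelope
calculation as a small Python sketch. The rates are only the illustrative
assumptions above, not data:

population = 1000
p_would_be = 1 / 100      # assumed fraction who are would-be murderers
p_acts = 1 / 10           # assumed fraction of would-be murderers who act

would_be = population * p_would_be          # 10 people flagged by scanning
actual = would_be * p_acts                  # 1 murder prevented
harmless_flagged = would_be - actual        # 9 restricted but harmless people
merely_scanned = population - would_be      # 990 privacy invasions only

print(merely_scanned, harmless_flagged, actual)   # 990.0 9.0 1.0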
Maybe it would be acceptable if that single murderer really was so dangerous
that his damage was "worth" 9 innocents in jail, but that leaves only some of
the nastier terrorists. While it is hard to compare rights (they are
qualitative things), one could perhaps view them as orders of magnitude of
utility. If a wrongful incarceration is worth 1/10 of a wrongful death and a
privacy invasion 1/100, the total "cost" of the above scheme would be about 10
lives (9 x 0.1 + 990 x 0.01 = 10.8) - and that is in units of *real* lives. If
we are weighing risks, it means the probability of the crime times the number
of lives lost must be > 10 (assuming the above 1000-person scenario). Which
means that we need to be pretty certain (more than 1 chance in 100 of the
crime actually happening) even when dealing with a potential ~1000-victim
attack.
[I think the reasoning in this paragraph is seriously flawed, but maybe one
could make something better out of this mess. The assumptions about the
"sizes" of the rights are downright arbitrary.]
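
For whatever those weights are worth, the order-of-magnitude arithmetic looks
like this in Python (the 1/10 and 1/100 weights are just the guesses above):

w_jail = 0.1              # wrongful incarceration, in "wrongful deaths"
w_privacy = 0.01          # privacy invasion, in "wrongful deaths"

cost = 9 * w_jail + 990 * w_privacy       # ~10.8 "lives" per 1000 scanned

# The scheme only pays off if the expected harm prevented exceeds the cost:
#   p_crime * victims > cost
victims = 1000                            # a hypothetical ~1000-victim attack
p_needed = cost / victims
print(cost, p_needed)                     # 10.8, ~0.011 (about 1 in 100)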
To sum up, I think looking for would-be murderers using neuroengineering is a
dangerous step both morally and socially. Using it to find actual murderers is
far more acceptable; we only need to set up proper safeguards around the
information and the institutions handling it.
> There is a converse side of this perhaps -- i.e. those
> individuals/companies who offer full disclosure ("go ahead
> read my mind") so that it is completely obvious that they
> are dealing from an up-front perspective.
I ran an RPG scenario where one culture (of course, it was a
libertarian-transhumanist planet) had something like this. Everybody wore
wearables containing micro-fMRI that allowed them to display their mental
state in augmented reality. Not exactly truth machines, but enough to give a
sense of the mind behind the poker face. Not showing a mindstate was a sign of
untrustworthiness, and faking it was a serious social gaffe (making sudden,
unexpected remarks and checking that the response was plausible had become a
part of social interaction; offworlders found the Atlanteans annoyingly rude).
It is still mostly a symbolic sign, just like shaking hands to show there is
no weapon there. But it helps a bit.
--
Anders Sandberg
http://www.nada.kth.se/~asa
http://www.aleph.se/andart/
The sum of human knowledge sounds nice. But I want more.