[ExI] Singletons
Anders Sandberg
anders at aleph.se
Tue Jan 4 10:22:23 UTC 2011
On 2011-01-04 02:28, Samantha Atkins wrote:
>> Which could be acceptable if the rules are acceptable. Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. The system monitors all activity, and stomps on attempts at making the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object to.
>>
>> Now, is this really unacceptable and/or untenable?
>
> It is unacceptable to have any body enforce a ban on examining the possibility when said body has no idea whatsoever whether there is any particular danger. Such regulating bodies, on the other hand, are a clear and very present danger to any real progress forward.
I am literally a card-carrying libertarian (OK, the card is a joke card
saying "No to death and taxes!"), so I am not fond of unnecessary
regulation or coercion. But in order to protect freedoms there may be
necessary and rationally desirable forms of coercion (the classic
example is of course self-defense).
If everybody had the potential to cause a global terminal disaster
through some action X, would it really be unacceptable to institute some
form of jointly agreed coercion that prevented people from doing X? It
seems that even from very minimalist libertarian principles this would
be OK. We might have serious practical concerns about how to actually
implement it, but ethically it would be the right thing to set up the
safeguard (unless the safeguard managed to be a cure worse than the
illness, of course).
[ Also, there is the discussion about how to handle lone holdouts - I'm
not sure I agree with Nozick's solution in ASU, or even whether it is
applicable to the singleton issue, but let's ignore this headache for
the time being. ]
So unless you think there is no level of existential threat that can
justify coordinated coercion, there exists *some* (potentially very high)
level of threat where it makes sense. And clearly there are other lower
levels where it does *not*. Somewhere in between there is a critical
level where the threat does justify the coercion. The fact that we (or
the future coercive system) do not know everything doesn't change things
much; it just makes this a matter of decision-making under uncertainty. That is not
an insurmountable obstacle. Just plug in your favorite model of decision
theory and see what it tells you to do.
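To make the shape of that calculation concrete, here is a toy sketch in
Python. Every probability and utility in it is an invented placeholder for
illustration, not a real estimate:

# Toy expected-utility comparison: allow action X, or enforce a ban on it.
# All numbers are made-up placeholders.
p_disaster_if_allowed = 1e-6   # chance someone triggers the disaster if X stays legal
value_of_future = 1e12         # utility of an intact future (arbitrary units)
cost_of_ban = 1e3              # utility lost to the coercion and its enforcement

eu_allow = (1 - p_disaster_if_allowed) * value_of_future
eu_ban = value_of_future - cost_of_ban

print("allow:", eu_allow, "ban:", eu_ban)
# With these numbers the ban wins; make the disaster unlikely enough, or the
# coercion costly enough, and the ordering flips. The point is that the
# answer falls out of the model rather than being settled a priori.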
It might be true that a galactic civilization has no existential threats
and no need for enforcing global coordination. It might even be true for
smaller civilizations. But I think this is a claim that needs to be
justified by risk assessment in the real world, not simply asserted a priori.
> No singleton can have information feeds effective and localized enough to let it outperform any/all more localized decision-making systems. A singleton is by design a single point of failure.
These are two good criticisms.
The first works when talking about economics. However, it is not clear
that a single/local agent will outperform the singleton. Who has the
advantage likely depends on the technology involved and the relative
power ratio: this is going to be different from case to case. A ban on
nuke production is relatively easy to enforce; a ban on computer virus
production isn't.
The single point of failure is IMHO a much deeper problem. This is
where I think singletons may be fatally flawed - our uncertainty in
designing them correctly and the large consequences of mistakes *might*
make them incoherent as a means of xrisk reduction (if the risk from the
singleton itself is too large, then it should not be used; however, this
likely depends sensitively on your decision theory).
To really say something about the permissibility and desirability of
singletons we need to have:
1. A risk spectrum - what xrisks exist, how harmful they are, how likely
they are, how uncertain we are about them.
2. An estimate of the costs of implementing a singleton that can deal
with the risk spectrum, and our uncertainty about the costs. This
includes an estimate of the xrisks from the singleton.
3. A decision theory, telling us how to weigh up these factors.
Our moral theories will come in by setting the scale of the harms and
costs (some moral theories also make claims about the proper decision
theory, it seems).
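As a back-of-envelope illustration of how 1-3 combine (again, every number
below is invented; a real assessment needs the actual risk spectrum and
cost estimates):

# Toy comparison of survival prospects with and without a singleton.
# Placeholder probabilities only, for illustration.
p_xrisk_without = 0.10          # chance the risk spectrum wipes us out if unguarded
p_xrisk_from_singleton = 0.02   # chance the singleton itself fails catastrophically
p_xrisk_residual = 0.01         # external risk the singleton fails to stop anyway
running_cost = 0.001            # ongoing cost of the singleton (fraction of total value)

value_without = 1 - p_xrisk_without
value_with = (1 - p_xrisk_from_singleton) * (1 - p_xrisk_residual) - running_cost

print("without:", value_without, "with:", value_with)
# Here the singleton comes out ahead (0.900 vs roughly 0.969), but push
# p_xrisk_from_singleton above about 0.09 and the single point of failure
# makes it a cure worse than the disease.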
My claim in this post is that most reasonable moral theories will allow
nonzero-cost singletons for sufficiently nasty risk spectra, and that it
can be rational to implement one. I do not know whether our best xrisk
estimates make this look likely to be the case in the real world: we
likely need to wait for Nick to finish his xrisk book before properly
digging into it.
--
Anders Sandberg
Future of Humanity Institute
Oxford University