[ExI] uploads again
Anders Sandberg
anders at aleph.se
Sun Dec 23 15:47:39 UTC 2012
On 22/12/2012 11:15, BillK wrote:
> On 12/22/12, Anders Sandberg wrote:
> <snip>
>> (Especially if some other constraints on security are weak, leading to a
>> conflict-prone situation; I am working on a paper about the link between
>> computer security and risks of brain emulation - bad computer security
>> means that a war of everybody against everybody is more likely, due to
>> first-strike advantages and winner-take-all dynamics).
> Computer security seems to be getting worse as systems become more complex.
> Partly because governments *want* weak security on all computers
> (except their own, of course).
Blaming governments for bad security gives them too much credit
(besides, if that were their strategy, they would also be reaping
insecure software running their own systems and essential
infrastructure). Instead, consider Bruce Schneier's observation that the
lack of liability for insecure software removes the incentive to improve
security: lots of after-the-fact patching, little reason to make systems
secure from the start.
> It is the old 'power corrupts' problem. If you realise that hitting
> the button means your upload will be able to have total power, what do
> you do? If you hesitate, and say you don't want to do that, then very
> likely the button will be pressed by the next researcher in a
> competing team and your research will disappear. Once you get to that
> stage then you will hit the button. Better your upload than another
> with qualities that you don't know about. Once something this powerful
> becomes possible, then it will happen. Saying sorry afterwards is
> always easier than trying to get permission first.
Yes, if there is a big first-mover advantage you should expect a race
for it.
However, there is another problem (we call it the unilateralist's
curse): in some situations it is enough that one agent decides to act
for the full effects of the action to affect everyone. When choosing
whether to act, each agent tries to judge whether the consequences will
be good, but has a certain risk of being wrong. The more agents there
are, the more likely it is that at least one will be overly optimistic
and act, even when the consequences are likely bad and are correctly
judged as bad by all the other agents. So in the case of unleashing new
technologies we should expect mistakes to be made, especially if there
is a tight race.
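The effect is easy to see numerically. Here is a toy Monte Carlo sketch
(my own illustration, not part of the argument's formal treatment):
suppose the action's true value is negative, each agent sees that value
plus independent Gaussian noise, and any agent who estimates a positive
value acts unilaterally. The chance that *someone* acts climbs quickly
with the number of agents. All parameter values below are arbitrary
assumptions chosen for illustration.

```python
import random

def p_someone_acts(n_agents, true_value=-1.0, noise_sd=1.0,
                   trials=20000, seed=0):
    """Estimate the probability that at least one of n_agents acts,
    when each agent acts iff its noisy estimate of the (actually
    negative) true value comes out positive."""
    rng = random.Random(seed)
    acted = 0
    for _ in range(trials):
        # Each agent's estimate = true value + independent noise.
        if any(true_value + rng.gauss(0.0, noise_sd) > 0
               for _ in range(n_agents)):
            acted += 1
    return acted / trials

for n in (1, 5, 25):
    print(n, p_someone_acts(n))
```

With these numbers a single agent acts only about 16% of the time, but
with 25 agents someone almost certainly acts, even though the median
agent correctly judges the action to be bad.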
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University