[ExI] The second step towards immortality
Anders Sandberg
anders at aleph.se
Fri Jan 3 23:35:36 UTC 2014
On 2014-01-03 16:09, Rafal Smigrodzki wrote:
> On 1/3/14, Martin Sustrik <sustrik at 250bpm.com> wrote:
>> In any case, I believe that we have drifted too far away from the
>> original topic and this discussion would fit better on some
>> crypto-related mailing list.
> ### On the contrary! This is fascinating - fully homomorphic
> encryption in a computer is like a haunting, an alien presence
> inhabiting a house nominally under our control, perhaps expungeable by
> burning the house down but not corruptible to our purposes. With
> enough knowledge and computing power this approach gives security to
> our souls - one should remember that our biological hardware is
> protected against hacking only by our poor knowledge of the mechanics
> of biological computing, a situation likely to change in the next 50
> years.
Exactly. Peter Eckersley and I have been looking at computer security
for uploaded minds, and it is a worrying problem. If you can be edited
or copied, you are in deep trouble. If homomorphic encryption can be
made effective enough to keep uploads running, the future looks much
brighter.
It is hard to tell whether homomorphic encryption is easy or hard to
do. Current methods carry fairly heavy slowdown factors (see
http://www.ijetae.com/files/Conference_ICMTSET_2013/IJETAE_ICMTSET_08.pdf ),
but the overhead looks polynomial. If they can be parallelized on
quantum hardware, things would be awesome (there you can also protect
data by having it decay if anybody looks at it, since the intermediate
states might carry no information).
Even if it all stays classical, it wouldn't surprise me if cleverness
made it efficient... I would be more surprised (but still not very) if
there were some kind of bound saying that an L-level gate network
*requires* something like L^3 operations, since 3 is a weirdly
arbitrary number. But intuition and computational complexity don't mix.
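
To make the idea concrete, here is a minimal sketch of a partially
(additively) homomorphic scheme in the Paillier style: multiplying two
ciphertexts adds the hidden plaintexts, so anyone can compute on the
data without reading it. It is Python 3.8+, with toy primes far too
small for real security; fully homomorphic schemes extend this trick
from addition to arbitrary gate networks, which is where the slowdown
factors above come in.

    import random
    from math import gcd

    # Toy primes for illustration; real Paillier needs large random primes.
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                                       # standard simple generator
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # decryption constant

    def encrypt(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    a, b = encrypt(17), encrypt(25)
    # Multiplying ciphertexts adds the hidden plaintexts: E(17)*E(25) ~ E(42).
    assert decrypt(a * b % n2) == 42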
> It is absolutely fascinating to think that privacy could again exist
> but in fine gradations not achievable using our present embodiments -
> you can imagine a place for different levels of privacy in the same
> mind. Parts of the mind might be completely or partially transparent,
> to enable rigorous verification of its sincere willingness to acquit
> obligations and fulfill promises, thus allowing highly advanced,
> trusted-agent cooperative ventures. Parts might be opaque,
> unpredictable, making the society itself less likely to be corrupted
> by a single idea or a single wielder of power.
Exactly. However, partial transparency might not work: a mind with a
transparent part X and an opaque part Y might have X sincerely willing
to fulfil an obligation while X+Y as a whole is not, since the opaque
part can quietly override whatever the transparent part intends. There
are ways around it, though.
If you are OK with "spurs", temporary short branches that get deleted,
then you and your negotiation partner might send spurs into an encrypted
black box where they credibly bare their minds to each other, check that
they both agree, and then send a cryptographically signed agreement bit
out before being deleted. That way you can show somebody a secret you
know, and he can offer you a fair price for it, without the secret being
leaked if there is no trade.
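
As a toy model of that negotiation (ordinary Python, with an honest
function standing in for the encrypted black box and an HMAC tag
standing in for the signature; every name here is invented for the
sketch):

    import hmac, hashlib, os

    BOX_KEY = os.urandom(32)        # known only to the black box

    def black_box(seller_spur, buyer_spur):
        """Both spurs bare their minds inside; only a signed bit leaves."""
        # The buyer's spur sees the secret and values it honestly...
        offer = buyer_spur["value_of"](seller_spur["secret"])
        deal = offer >= seller_spur["reserve_price"]
        tag = hmac.new(BOX_KEY, b"deal" if deal else b"no-deal",
                       hashlib.sha256).digest()
        # ...and both spurs are "deleted" here: nothing else leaves this scope.
        return deal, tag

    deal, tag = black_box(
        {"secret": "a shortcut through parameter space", "reserve_price": 80},
        {"value_of": lambda s: 100 if "shortcut" in s else 0})
    print("trade?", deal)   # one signed bit comes out; the secret never leaks

In the real protocol the box itself would be built from homomorphic
encryption or secure multiparty computation, so that not even its
operator could peek inside.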
--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University