[ExI] Uploading cautions, "Speed Up"

Anders Sandberg anders at aleph.se
Thu Dec 22 08:28:44 UTC 2011


On 2011-12-22 02:30, Keith Henson wrote:
> On Wed, Dec 21, 2011 at 2:50 PM, Anders Sandberg <anders at aleph.se> wrote:
>> You are assuming very mature nanotech. It is quite likely that long
>> before that we will have devices like Kenneth Hayworth's ATLUM, that use
>> microtomes and electron microscopy to automatically scan tissue.
>
> We already have them.

Yes, and there are projects aiming at brain emulation already. There are 
lots of scaling-up issues and big question marks about what needs to be 
simulated, but these are early days. Once you have a proof of concept 
for a fairly big mammal, how long will it take before you get a human 
volunteer? I know several people who have so far said they would.

> Unless making copies is illegal and strongly enforced, for example, by
> AIs.

But that requires a singleton (to use Nick's term), an agency that can 
enforce global coordination. If you have a singleton a lot of the 
existential risks are reduced (at the price of the risks from the 
singleton itself). If you have AIs before brain emulation, fine, the 
most dangerous hurdle is already in the past. But I think there is a 
decent chance of emulation before useful AI (and other technology that 
would enable/induce singleton formation). All possibilities have to be 
analyzed.


> Think about it this way, how many copies of Keith Henson could
> you put up with?

I don't mind a population of 90% Keiths, as long as you don't mind a lot 
of Anderses around. I think the problem is the "one person"-persons who 
can't stand the threat to their concept of individuality.


>> A 5x5x5 nm^3 scan of the 1.4 liters of the brain is about 10^22 bits,
>> roughly one zettabyte. Kryder's Law will eventually get there (?), but
>> it will take decades. Kenneth suggests using fixed pieces of the brain
>> as a library for itself, but it seems likely that most non-nanotech
>> scanning methods will burn it.
>
> I still don't see where you need a zettabyte.  Biological information
> storage has just got to be rotten low density.  A lifetime of memory
> has been estimated at only 140 M bytes.  It's been more than a decade
> since I had a disk that small.

But that is the information embodied in that zettabyte of volume data, a 
bit like the ~1 kilobyte of text information in a high-resolution 
scanned page. You need the big dataset to extract the important dataset.
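
For concreteness, the back-of-envelope arithmetic in Python (assuming 
one bit per voxel, which is of course a simplification):

    # Raw scan volume at 5x5x5 nm^3 voxels, ~1 bit per voxel (assumed)
    brain_volume_nm3 = 1.4e-3 * 1e27   # 1.4 liters, in cubic nanometers
    voxel_nm3 = 5 * 5 * 5
    bits = brain_volume_nm3 / voxel_nm3
    print(f"{bits:.1e} bits ~= {bits / 8 / 1e21:.1f} zettabytes")
    # -> 1.1e+22 bits ~= 1.4 zettabytes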

The exact size of the information that needs to be extracted is 
uncertain: 140 MB is a lower bound, and I would suspect it is actually 
on the order of terabytes (neural connectivity plus ~1 bit per synapse). 
In any case it is small compared to the actual raw scan data. And the 
problem is that if you get early uploading you cannot store the raw data 
permanently, so you had better know what you want to extract, since you 
will throw away most of the rest.
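
To put rough numbers on that guess (the neuron and synapse counts below 
are my assumptions, and the real answer could easily be an order of 
magnitude off either way):

    # Extracted connectome: per synapse, a pointer to the target neuron
    # plus ~1 bit of state (all numbers are rough assumptions)
    neurons  = 1e11            # ~human neuron count
    synapses = neurons * 1e3   # ~1000 synapses per neuron
    addr_bits  = 37            # ~log2(neurons), to name a target neuron
    state_bits = 1
    total_bytes = synapses * (addr_bits + state_bits) / 8
    print(f"{total_bytes / 1e12:.0f} terabytes")   # -> 475 terabytes

Even with generous per-synapse state, that stays some six orders of 
magnitude below the zettabyte of raw scan data.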



-- 
Anders Sandberg
Future of Humanity Institute
Oxford University


