[ExI] Uploading cautions, "Speed Up"

Anders Sandberg anders at aleph.se
Fri Dec 23 07:54:33 UTC 2011


On 2011-12-22 20:01, Keith Henson wrote:
> On Thu, Dec 22, 2011 at 5:00 AM,  Anders Sandberg<anders at aleph.se>  wrote:
>>> Unless making copies is illegal and strongly enforced, for example, by
>>> AIs.
>>
>> But that requires a singleton (to use Nick's term), an agency that can
>> enforce global coordination.
>
> Or really widespread agreement that something is a bad idea.

A single defector is enough to ruin the agreement. The only way of 
making the low-forking strategy evolutionarily stable is to coordinate 
so that deviations are punished enough *everywhere*. And that likely 
requires a global singleton, not just an agreement among all the nice 
governments and companies.
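
A toy calculation (all numbers invented for illustration) shows how 
fast a single free-copying defector swamps a compliant population:

    # One defector doubling itself each period vs. a million
    # compliant uploads honoring the no-copy agreement.
    compliant = 1_000_000
    defectors = 1
    for period in range(30):
        defectors *= 2
    print(f"defector share after 30 doublings: "
          f"{defectors / (defectors + compliant):.3f}")   # ~0.999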


>> The exact size of the information that needs to be extracted is
>> uncertain: 140M is a lower bound, and I would suspect it is actually on
>> the order of terabytes (neural connectivity plus ~1 bit per synapse). In
>> any case it is small compared to the actual raw scan data. And the
>> problem is that if you get early uploading you cannot store the raw data
>> permanently, so you better know what you want to extract since you will
>> throw away most of the rest.
>>
> I suspect that emulation at the level of cortical columns will be good enough.

Maybe. In that case we need about a petabyte of storage for the synaptic 
weight matrix. Not too bad.
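
For what it is worth, the kind of back-of-envelope arithmetic behind 
such figures looks like this (the neuron and synapse counts are rough 
literature values I am assuming here, not measurements):

    import math

    neurons  = 1e11   # ~10^11 neurons in a human brain
    synapses = 1e14   # estimates run around 10^14-10^15 synapses

    addr_bits  = math.ceil(math.log2(neurons))  # ~37 bits per target
    state_bits = 1                              # ~1 bit of synapse state

    total_bytes = synapses * (addr_bits + state_bits) / 8
    print(f"~{total_bytes / 1e12:.0f} TB")      # ~475 TB

Push the synapse count toward 10^15 and you land at the petabyte 
scale.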

At present we cannot deduce the right level of emulation, so we should 
investigate the consequences of different required levels (and ways of 
settling the question).

If it is on the column level, the computational demands will be fairly 
modest, but figuring out the correct internal dynamics of the columns 
might be tricky - a risk of a late breakthrough and hence a lot of 
hardware overhang, making for a pretty sharp transition from no 
uploads to a lot of fast uploads.
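
The overhang effect is easy to quantify under a toy assumption about 
hardware growth (both the doubling time and the delay are made-up 
numbers for illustration):

    # If compute per dollar doubles every ~1.5 years while the column
    # dynamics stay unsolved for a decade, the eventual breakthrough
    # lands on ~100x cheaper hardware - hence many fast uploads at once.
    doubling_years = 1.5
    delay_years = 10.0
    overhang = 2 ** (delay_years / doubling_years)
    print(f"~{overhang:.0f}x more compute per dollar at breakthrough")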

If it is on the electrophysiological level (compartment models, 
simulated ion currents), then the computational demands will be higher 
and we will need much more brain data for the emulation, but most of 
the dynamics is likely close to what is already known. An earlier 
breakthrough, with slower and more expensive uploads?
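
To make "compartment models, simulated ion currents" concrete, here is 
a minimal single-compartment membrane update (generic textbook-style 
parameters; a sketch, not a serious model). A brain-scale emulation 
repeats updates like this - with several gating variables per 
compartment - for every compartment of every neuron at every timestep, 
which is where the higher demands come from:

    C_m = 1.0     # membrane capacitance, uF/cm^2
    g_L = 0.1     # leak conductance, mS/cm^2
    E_L = -65.0   # leak reversal potential, mV
    I_ext = 1.0   # injected current, uA/cm^2
    dt = 0.01     # timestep, ms
    V = -70.0     # membrane potential, mV

    # Forward Euler on C_m dV/dt = -g_L (V - E_L) + I_ext
    for step in range(int(5.0 / dt)):   # simulate 5 ms
        V += dt * (-g_L * (V - E_L) + I_ext) / C_m
    print(f"V after 5 ms: {V:.1f} mV")  # relaxing toward -55 mV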

If it requires a lot more biochemical detail - states of protein 
phosphorylation, receptor complexes and whatnot - then we need new 
ways of scanning that we currently know nothing about, and the 
computational demands are high too. So scanning might end up as the 
last thing to be developed well, in which case we might get a 
transition with few initial uploads (slow and/or expensive scanning) 
that get copied widely, or one where, once the right scanning tech is 
developed, the neuroscience takes off and leads to a breakthrough like 
the first case above.

If the brain really uses weird computation, like microtubule quantum 
states, then we need to wait for the right hardware. That might take a 
long time, especially if it needs to be dedicated to brain emulation - a 
fairly niche interest.

The longer it takes to get brain emulation the higher the chance that AI 
gets there first, of course.

-- 
Anders Sandberg
Future of Humanity Institute
Oxford University


