[ExI] Uploading cautions, "Speed Up"
Keith Henson
hkeithhenson at gmail.com
Thu Dec 22 19:01:24 UTC 2011
On Thu, Dec 22, 2011 at 5:00 AM, Anders Sandberg <anders at aleph.se> wrote:
> On 2011-12-22 02:30, Keith Henson wrote:
snip
>
>> Unless making copies is illegal and strongly enforced, for example, by
>> AIs.
>
> But that requires a singleton (to use Nick's term), an agency that can
> enforce global coordination.
Or really widespread agreement that something is a bad idea.
> If you have a singleton a lot of the
> existential risks are reduced (at the price of the risks from the
> singleton itself). If you have AIs before brain emulation, fine, the
> most dangerous hurdle is already in the past. But I think there is a
> decent chance of emulation before useful AI (and other technology that
> would enable/induce singleton formation). All possibilities have to be
> analyzed.
That's true.
The singularity is generally taken to involve both AI and nanotechnology.
I have argued before that no matter which one comes first, it will be
used to bootstrap the other, so we get both close together in time.
I.e., if we get nanotech first, we bootstrap it into AI by taking brains
apart and using the information to build AIs based on human models; if
we get AI first, it does the engineering needed to develop nanotech.
>
>> Think about it this way, how many copies of Keith Henson could
>> you put up with?
>
> I don't mind a population of 90% Keiths. As long as you don't mind a lot
> of Anderses around. I think the problem is the "one person"-persons who
> can't stand the threat to their concept of individuality.
That doesn't bother me. Crew for an interstellar spacecraft is one
place where copies would probably be required. Running people
through the duplicator in a world of limited space is what concerns me.
>>> A 5x5x5 nm^3 scan of the 1.4 liters of brain is about 10^22 bits,
>>> roughly one zettabyte. Kryder's Law will eventually get there (?), but
>>> it will take decades. Kenneth suggests using fixated pieces of the brain
>>> as a library for itself, but it seems likely that most non-nanotech
>>> scanning methods will burn it.
>>
>> I still don't see where you need a zettabyte. Biological information
>> storage has just got to be rotten low density. A lifetime of memory
>> has been estimated at only 140 M bytes. It's been more than a decade
>> since I had a disk that small.
>
> But that is the information embodied in that zettabyte of volume data, a
> bit like the ~1 kilobyte of information in the text of a high resolution
> scanned page. You need the big dataset to extract the important dataset.
>
> The exact size of the information that needs to be extracted is
> uncertain: 140M is a lower bound, and I would suspect it is actually on
> the order of terabytes (neural connectivity plus ~1 bit per synapse). In
> any case it is small compared to the actual raw scan data. And the
> problem is that if you get early uploading you cannot store the raw data
> permanently, so you better know what you want to extract since you will
> throw away most of the rest.
>
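For what it's worth, the numbers check out on the back of an envelope.
A quick sketch (Python; the bit-per-voxel figure, the neuron and
synapse counts, and the Kryder doubling time are my round-number
assumptions, not measurements):

import math

# Raw scan: 1.4 L at 5x5x5 nm^3 voxels, assuming 1 bit per voxel.
NM3_PER_LITER = 1e24                   # 1 L = 1e24 nm^3
voxels = 1.4 * NM3_PER_LITER / 5**3
raw_bits = voxels * 1                  # 1 bit per voxel (assumption)
print("raw scan: %.1e bits, ~%.1f ZB" % (raw_bits, raw_bits / 8e21))
# -> ~1.1e22 bits, ~1.4 ZB: Anders's "about one zettabyte"

# Kryder's Law: years from a ~3 TB (2011) drive to 1 ZB,
# assuming capacity doubles every ~1.5 years.
years = 1.5 * math.log2(1e21 / 3e12)
print("one ZB per drive in ~%.0f years" % years)
# -> ~40 years: decades, as Anders says

# Extracted connectome: ~1e11 neurons x ~1e4 synapses each.
synapses = 1e11 * 1e4
state_only = synapses * 1 / 8e12       # ~1 state bit per synapse, in TB
with_addresses = synapses * 38 / 8e12  # 1 state bit + ~37 address bits
print("extracted: ~%.0f TB (state) to ~%.0f TB (with connectivity)"
      % (state_only, with_addresses))
# -> ~125 TB to a few thousand TB: terabytes-ish, vs. a zettabyte raw

Either way, the extracted dataset is many orders of magnitude smaller
than the raw scan, which is Anders's point: you can afford to keep the
extraction, not the scan.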
I suspect that emulation at the level of cortical columns will be good enough.
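Rough numbers on that (loose assumptions, not settled neuroscience;
column counts vary by author and by what you call a column):

# Column-level state, assuming ~2e8 minicolumns (~100 neurons each
# out of ~2e10 cortical neurons) and ~1 KB of parameters per column.
cols = 2e8
col_bytes = cols * 1e3
print("column-level state: ~%.0e bytes, ~%.0f GB"
      % (col_bytes, col_bytes / 1e9))
# -> ~2e11 bytes, ~200 GB: far below the synapse-level terabytes,
#    though still well above the 140 Mbyte behavioral estimate.

If something like that holds, the extraction problem gets easier by
another three or four orders of magnitude.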
Keith