[ExI] Uploading cautions, "Speed Up"

Keith Henson hkeithhenson at gmail.com
Thu Dec 22 01:30:09 UTC 2011


On Wed, Dec 21, 2011 at 2:50 PM,  Anders Sandberg <anders at aleph.se> wrote:

> On 2011-12-21 16:43, Keith Henson wrote:
>>> You shouldn't try to upload your brain before we have full-brain
>>> emulation since the methods are likely going to be 1) destructive,
>>
>> I have argued that, for marketing reasons alone, destructive uploads
>> are going to be a hard sell.  Especially since the technology to make
>> uploading fully reversible with no memory loss (or even loss of
consciousness) is no harder.  (See "The Clinic Seed.")
>
> You are assuming very mature nanotech. It is quite likely that long
> before that we will have devices like Kenneth Hayworth's ATLUM, which use
> microtomes and electron microscopy to automatically scan tissue.

We already have them.

> Sure, *most* people will keep their brains far away from this
> slice-and-dice approach. But it is enough that a few are willing to
> hazard the chance and they will be the first in cyberspace. Now, if
> Robin's analysis of upload economics is anywhere near reality, it is
> enough that *one* of these outlier people is OK with having a lot of
> copies and we will see a total transformation of the economy.

Unless making copies is illegal and the ban is strongly enforced, for
example by AIs.  Think about it this way: how many copies of Keith
Henson could you put up with?

>>> 2)
>>> have to throw away information during processing due to storage
>>> constraints until at least mid-century,
>>
>> I don't see why.  The information in your brain fits in your skull.
>
> A 5x5x5 nm^3 scan of the 1.4 liters of brain is about 10^22 bits, roughly
> one zettabyte. Kryder's Law will eventually get there (?), but it will take
> decades. Kenneth suggests using fixated pieces of the brain as a library
> for itself, but it seems likely that most non-nanotech scanning methods
> will burn it.

I still don't see why you need a zettabyte.  Biological information
storage has just got to be rotten low density.  A lifetime of memory
has been estimated at only 140 megabytes.  It's been more than a decade
since I had a disk that small.
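
For reference, here is the back-of-envelope arithmetic behind the numbers in
this exchange, as a rough Python sketch.  The one-bit-per-voxel resolution,
the 2011 disk size, and the Kryder doubling time are illustrative assumptions,
not established figures; the 140 megabyte number is just the estimate cited
above.

import math

# Rough arithmetic only; every constant below is an assumption.
brain_volume_liters = 1.4
voxel_edge_nm = 5.0

nm3_per_liter = 1e24                      # 1 liter = 10^-3 m^3 = 10^24 nm^3
voxels = brain_volume_liters * nm3_per_liter / voxel_edge_nm ** 3
scan_bytes = voxels / 8                   # assume 1 bit per voxel

lifetime_memory_bytes = 140e6             # the lifetime-memory estimate above

drive_2011_bytes = 3e12                   # ~3 TB consumer disk, circa 2011
doubling_years = 1.5                      # assumed Kryder doubling time
doublings = math.log2(scan_bytes / drive_2011_bytes)

print(f"voxels:            {voxels:.1e}")                        # ~1.1e22
print(f"raw scan:          {scan_bytes / 1e21:.2f} zettabytes")  # ~1.4 ZB
print(f"lifetime memory:   {lifetime_memory_bytes / 1e6:.0f} MB")
print(f"scan/memory ratio: {scan_bytes / lifetime_memory_bytes:.0e}")
print(f"Kryder doublings:  {doublings:.0f} (~{doublings * doubling_years:.0f} years)")

The point of the ratio line is that a raw structural scan and the functional
information content are some thirteen orders of magnitude apart, which is the
gap being argued over here.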

>>> 3) we will not have evidence it
>>> works before it actually works. Of course, some of us might have no
>>> choice because we are frozen in liquid nitrogen...
>>
>> The technology to do any of this is so similar that we should be able
>> to revive the cryonics patients and let them decide if they want to
>> upload.
>
> You are assuming really mature medical nanotechnology. I am assuming far
> cruder technology. While the largest influx of new people to cyberspace
> will occur once the technology is proven, safe, and convenient, the big
> changes are likely to happen when the early adopters mature, possibly
> decades earlier.

I kind of doubt anyone will be uploaded before the singularity.  And
yes, medical nanotechnology would mature very rapidly indeed.

>> Iain M. Banks had a good deal of this in "Surface Detail."
>
> But the neural lace (which is very similar to Freitas' nice idea for a
> nanotech scan) requires an amazing level of understanding of how to
> interface with the sloppy, floppy, and messy biological system without
> causing problems. I'd love to have it, but in order to get it we will
> have to insert nanofibers into a lot of living brains and learn from the
> messes produced...

Learning is the key word; neural interfaces would infiltrate the brain
and learn what is going on.

> This is one of the big question marks I have with the classic Drexlerian
> vision - how much good designahead we can do for systems that interact
> strongly with the messy real world. I agree completely with Eric that we
> can prove certain systems will work (through theory and simulation) and
> develop CAM files long before we get our manufactories, which will likely
> work when we start them. Bang, quick transformation of a lot of fields.
> But I think systems that do complex interaction with the environment
> (especially adaptive parts of it, like bodies) are hard to impossible to
> design properly without testing/interaction/probing, and hence will not
> gain anything from designahead.

I don't think it will make a lot of difference.

One of the warning signs of the singularity will be the year that a
few scientific papers name AIs as coauthors.  My guess is that it will
take only a few years from there to the point where humans can't keep
up at all.

Keith



