[ExI] uploads again

Anders Sandberg anders at aleph.se
Sat Dec 22 07:56:47 UTC 2012


On 2012-12-22 01:39, ablainey at aol.com wrote:
> The time dilation/compression experienced by the first upload would
> allow them to learn and evolve at an exponential rate. In the time it
> would take us to ask "how is it in there" they could well have decided
> to wipe us out.

Whether brain emulation would enable a hard takeoff remains to be seen. 
There are some pretty good arguments against it, IMHO, but there is also 
a vigorous debate in the community (see Robin Hanson's and Carl 
Shulman's talks and papers).

The time compression will depend on the amount of hardware overhang when 
the final key technology arrives; the scary case is when the computers 
and scanning have been available for a long time, but successful 
modelling has been lagging due to some missing component. But even with 
a lot of time compression, upgrading actual performance requires real 
research. I am skeptical that this can just be automated easily by a 
single fast mind: improving the performance of complex systems is 
typically itself a complex process, requiring lots of different 
disciplines, various rare insights, and plenty of work. Even when sped 
up and copied many times, a single mind is unlikely to be great at all 
of it. There is only so much you can do with parameter tweaking of an 
opaque computational neuroscience model, and even running evolutionary 
algorithms for improvement is limited by your ability to design good 
fitness functions.
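To make the fitness-function point concrete, here is a minimal toy 
sketch in Python (not real emulation code; the "opaque model" and its 
parameters are purely invented for illustration) of an evolutionary 
loop tweaking parameters of a black-box model. The search machinery is 
trivial; everything the loop can "improve" is defined by fitness(), 
which is exactly where the hard research hides.

import random

# Toy stand-in for an opaque computational neuroscience model: we can
# run it with a parameter vector and observe a behavioural score, but
# we cannot inspect *why* it behaves as it does.
def run_opaque_model(params):
    # Hypothetical behaviour: best performance at an unknown setting.
    target = [0.3, -1.2, 0.7, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

# The fitness function is where the real design work lives. If this
# proxy does not track what we actually care about, the loop below
# happily optimizes the wrong thing.
def fitness(params):
    return run_opaque_model(params)

def evolve(pop_size=50, n_params=4, generations=200, sigma=0.1):
    population = [[random.uniform(-3, 3) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop_size // 5]              # keep the top 20%
        population = parents + [
            [p + random.gauss(0, sigma) for p in random.choice(parents)]
            for _ in range(pop_size - len(parents))   # mutated offspring
        ]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best parameters:", [round(p, 2) for p in best])
    print("fitness:", round(fitness(best), 4))

Swap in a fitness proxy that misses what matters and the loop still 
"succeeds" by its own lights, which is the limitation the paragraph 
above is pointing at.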

Carl pointed out something more relevant: the "early days" of the 
emulation economy Robin has been analyzing in detail might last a fairly 
short time as seen from the outside, especially if we have the hardware 
overhang scenario. After a few weeks the emulation sector could have 
found massive improvements and would be a de facto singularity. Not 
implausibly instant or based on a lone brain, but still about as 
disruptive.
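Just to put rough numbers on "a few weeks" under a hardware overhang 
(the speedup factor below is an assumed figure for illustration, not a 
number from Robin's or Carl's analysis):

speedup = 10_000    # assumed emulation speed relative to biological real time
outside_weeks = 3   # elapsed calendar time as seen from the outside
subjective_years = outside_weeks * 7 * speedup / 365.25
print(f"{outside_weeks} outside weeks ~ {subjective_years:,.0f} subjective years")
# With these assumed numbers: 3 outside weeks ~ 575 subjective years of
# research time inside the emulation sector, before counting copying.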

(Especially if some other constraints on security are weak, leading to a 
conflict-prone situation; I am working on a paper about the link between 
computer security and risks of brain emulation - bad computer security 
means that a war of everybody against everybody is more likely, due to 
first-strike advantages and winner-take-all dynamics).

Also note that this presupposes the most extreme scenario. If models and 
scans come ahead of computing power, we are going to see a gradual (over 
one or two decades) emergence of emulations of ever-higher mammals, 
followed by slow and centralized human emulations. Plenty of time to 
understand and get a handle on things. The big risk in this scenario is 
that the neuroscience will trigger fast neuromorphic AGI instead.


> A solution would be to place uploads into a protected space. However
> when you have a bunch of them and finally open the doors, it could be
> digital carnage.

There is also the ethical problem: software people are people too. There 
were some angry responses to Carl's suggestions in this direction at the 
Winter Intelligence conference, especially since I had earlier given a 
talk on emulation ethics.


-- 
Anders Sandberg
Future of Humanity Institute
Oxford University


