[ExI] Forking

Kelly Anderson kellycoinguy at gmail.com
Sat Dec 31 09:54:38 UTC 2011


On Tue, Dec 27, 2011 at 2:55 AM, Anders Sandberg <anders at aleph.se> wrote:
>> The inverse of forking is merging.  That could just as well become a
>> competing process or maybe even the dominant one for people who get
>> bored but don't exactly want to die.
>
> Assuming merging is doable. You cannot just add together two neural
> networks. It works in a few special cases like the Hopfield attractor
> network, since its learning is perfectly additive; but given that that
> kind of network also crashes completely in performance when it gets
> overloaded, precisely because of that additiveness, it is not a good
> model for an upload mind. If you take a typical neural network, copy
> it, train the copies separately to learn maps A and B on domains X and
> Y, and then try to merge the copies back together, there doesn't seem
> to be any general method that produces a network with output A on
> domain X and B on Y.
>
> I would love to see some results here.
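
[The additivity of Hopfield learning is easy to see concretely. Below is a minimal sketch using classic Hebbian outer-product learning; the pattern counts and network size are arbitrary illustrative choices. Two weight matrices trained on separate pattern sets sum exactly to the matrix trained on all patterns at once, which is why "merging" is trivial here and nowhere else.]

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of neurons (arbitrary)

def hebbian_weights(patterns):
    """Hopfield outer-product (Hebbian) learning: purely additive in the patterns."""
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

# Two disjoint sets of random +/-1 patterns, standing in for domains X and Y.
set_a = [rng.choice([-1, 1], size=N) for _ in range(3)]
set_b = [rng.choice([-1, 1], size=N) for _ in range(3)]

W_a = hebbian_weights(set_a)
W_b = hebbian_weights(set_b)

W_merged = W_a + W_b                       # "merging" = plain matrix addition
W_joint = hebbian_weights(set_a + set_b)   # network trained on everything at once

print(np.allclose(W_merged, W_joint))  # the two are identical
```

[The same additivity is the network's downfall: keep adding patterns past roughly 0.14*N and recall collapses entirely, which is the overload crash mentioned above.]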

Anders, I don't think it would be possible to merge an "Anders" neural
network with a "Kelly" network, at least not for a VERY long time.
However, if I take two copies of "Kelly" and run them for a day or two
of subjective time, then merging those two nearly identical networks
back together seems a much easier job. It would be easier still if both
are uploads, but potentially possible even if the main one is still
meat, via neural stimulation from some kind of (as yet to be invented)
nanotechnology. All you have to do is spend some dream/sleep time
constructing the connections that differ between the two "Kelly"s... Of
course, the longer they diverge, the harder the merging would become.

Loved the rest of your post too.

-Kelly



