[ExI] A million lines of code
ablainey at aol.com
Tue Sep 7 00:59:24 UTC 2010
A million lines of code may well be enough to act as a blueprint for the brain. However, the problem as I see it is that this is what is needed in the real world, which already contains the rule set to make use of those plans. By that I mean DNA is a self-replicating catalyst which physically interfaces with the atoms of the real world. Those atoms move and interact according to the laws of physics (the rule set), something we don't yet fully understand.
In order for a digital model of the brain to work and develop from our digital blueprint, we must also include the rule set for the physics. Otherwise we can add digital ingredients to our hearts' content, but the brain would never use them. The digital building blocks would not come together, digital oxygen would never get to the neurons, and so on.
Some things that come to mind are Lego, magnets and the mandelbrot set.
You can create a blueprint for Lego blocks very easily, but they can then be arranged in almost infinite variation to build different structures. So unless you include a rule set for how those structures are developed, you won't get a specific result like a house. Our DNA has instructions on what proteins, amino acids, etc. to produce, but the nitty gritty of this is done by chemistry and ultimately physics, something which will not be included in the computer model if we just give it a blueprint.
Magnets are a closer analogy to the chemistry of the brain. You could define the magnet in the blueprint, but in order for the digital magnets to act as per the real world and stick together in the right orientation, you also need to include models of the laws of magnetism. Otherwise the digital blueprint will just turn out magnet models that float around freely without sticking together.
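To make the point concrete, here's a toy sketch (entirely hypothetical; the attraction rule is a made-up stand-in, not real magnetostatics) of two "magnets" defined only by a blueprint of positions. Nothing moves until a rule set is supplied alongside the blueprint:

```python
# Toy illustration: two "magnets" on a line, given only as positions.
# The blueprint alone is inert; supplying a rule set makes them interact.

def step(positions, rules=None, dt=0.1):
    """Advance one timestep. With rules=None the blueprint alone does nothing."""
    if rules is None:
        return positions  # no physics: the magnets just float
    a, b = positions
    force = rules(a, b)
    return (a + force * dt, b - force * dt)

# Stand-in attraction rule (hypothetical, not a real force law).
attract = lambda a, b: (b - a) * 0.5

pos = (0.0, 4.0)
for _ in range(50):
    pos = step(pos, rules=attract)
# With the rule set, the pair converges; with rules=None it never would.
```

The blueprint (the positions) is identical in both cases; only the presence of the rule set determines whether anything resembling magnetism happens.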
I mention the Mandelbrot set because of the fractal nature of the biological world, where we see things like the golden ratio virtually everywhere. A simple equation is all that is needed to create infinitely detailed patterns, so our blueprint may be many orders of magnitude simpler than the thing it creates. Even if we collectively understand our blueprint, will we ever understand the resultant model? If not, can we make it work properly?
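The Mandelbrot recurrence, z -> z^2 + c, is about as small as a blueprint gets, yet iterating it yields unbounded detail. A minimal escape-time sketch:

```python
# The entire "blueprint" of the Mandelbrot set is one recurrence:
# z = z*z + c, with c a point in the complex plane.

def mandelbrot_iterations(c, max_iter=100):
    """Return how many iterations of z = z*z + c stay bounded (|z| <= 2)."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

inside = mandelbrot_iterations(complex(0, 0))   # never escapes: in the set
outside = mandelbrot_iterations(complex(2, 2))  # escapes immediately
```

The infinitely detailed boundary between "inside" and "outside" all comes from that one line of recurrence, which is the asymmetry between blueprint and result the paragraph above is pointing at.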
I believe a digital brain lacking a sufficient physics rule set will not develop or behave as a normal brain does. It may be a good approximation, however, and only time will tell. Personally I predict that modelling the brain will only be the start of the problem; getting the thing to run properly will prove very difficult.
Now this whole thing throws up a glaring problem for me. If we need to include the full rule set of every law of nature in our model, our digital brain would die. It would need a virtual life support system: a body, or a system that replicates the essential functions that keep the brain alive. Then it would need an environment which supplies everything that body needs. It's not too difficult to imagine that the logical conclusion to this is a perfect digital model of the living world. A Matrix.
And then one last problem. If our digital model were to function according to the laws of physics in order to work like a real brain, it would inevitably age and die, which kind of defeats the point ;o)
From: Ryan Rawson <ryanobjc at gmail.com>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Wed, 18 Aug 2010 1:56
Subject: Re: [ExI] A million lines of code
Unlike most of the blog writers, I actually attended Kurzweil's talk on
Saturday (sadly, though, it was by videoconference). He didn't really
say anything new there; he was just pointing out that if you use an
information-theoretic analysis of the unique available information
(i.e. DNA) you can get an estimate of how much _information_ goes into
constructing the brain. He was NOT saying "1 million lines of code =
adult human", and I don't think anyone there got that sense. What he
said is that with a million lines of code you can have a program that
_builds_ a brain, and then you have to go forth and teach it from that
point. You know, what we do to develop a neural network in all new
humans. Oh yes, and his argument was that we'll see this full reverse
engineering come to fruition in 2030 (not 2020).
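For anyone curious, the arithmetic behind that estimate runs roughly like this. The figures are Kurzweil's rough public numbers; the compression ratio, brain fraction, and bytes-per-line values are assumptions chosen for illustration, not measurements:

```python
# Back-of-envelope sketch of the information-theoretic argument.

genome_raw_mb = 800          # ~3e9 base pairs * 2 bits, / 8 bits per byte
compressed_mb = 50           # assumed lossless-compression figure
brain_fraction = 0.5         # assumed share relevant to brain construction
bytes_per_line = 25          # assumed average bytes per line of code

lines_of_code = (compressed_mb * brain_fraction * 1_000_000) / bytes_per_line
# On the order of a million lines: the figure under discussion.
```

Vary any of the assumed inputs and the answer moves by a small constant factor, which is why the claim is an order-of-magnitude estimate rather than a precise count.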
I used to be a sceptic, and when you are caught up in the daily
struggles of making silicon nanolithography work it can be easy to be
pessimistic. But even so, the data looks good: every time a previous
generation of computing architecture hits its limit, a new one comes
on the scene. We might as well be talking about the limits of vacuum
tube computing and the upcoming computation disaster/crunch.
And Kurzweil's prediction is based on solid projections of
exponential growth in computing technology. Exponential trends are
powerful and sometimes difficult to spot.
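A quick toy calculation shows why they're easy to underestimate. Assuming a constant doubling period (a Moore's-law-style assumption picked for illustration, not a measured figure), a trend that looks modest over a decade is overwhelming over four:

```python
# Constant doubling turns a modest head start into orders of magnitude.

def growth(years, doubling_period=2.0):
    """Growth factor after the given number of years of steady doubling."""
    return 2 ** (years / doubling_period)

decade = growth(10)       # 32x after ten years
four_decades = growth(40) # over a million-fold after forty
```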