[ExI] A million lines of code

Keith Henson hkeithhenson at gmail.com
Tue Aug 17 22:15:05 UTC 2010

On Tue, Aug 17, 2010 at 2:12 PM,   Bryan Bishop <kanzure at gmail.com> wrote:
> ---------- Forwarded message ----------
> From: Seth Woodworth <seth at isforinsects.com>
> Date: Tue, Aug 17, 2010 at 11:37 AM
> Subject: Re: [Body Hacking] Reverse-Engineering of Human Brain Likely
> by 2030, Expert Predicts
> To: bodyhacking at lists.caughq.org
> Cc: Bryan Bishop <kanzure at gmail.com>
> Um, no?
>> Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying lossless compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
>> About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
> How does the genome explain protein folding?
> Just because a million lines of code describe the genesis of the
> brain's biological systems, doesn't mean that we understand the
> interactions of the subsequent structures.
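
For what it's worth, the arithmetic in the quoted passage does check
out as back-of-envelope math.  A quick sketch (the 25-bytes-per-line
figure is my assumption, chosen because it makes his "million lines"
come out even):

```python
# Sanity-check Kurzweil's back-of-envelope numbers.
# Assumption: ~25 bytes per average line of code (not from the quote).

base_pairs = 3_000_000_000           # human genome
bits = base_pairs * 2                # 2 bits per base (A/C/G/T)
raw_bytes = bits // 8                # 750,000,000 -- "about 800 million"

compressed_bytes = 50_000_000        # his lossless-compression estimate
brain_bytes = compressed_bytes // 2  # "about half of that is the brain"

bytes_per_line = 25                  # assumed average line length
lines_of_code = brain_bytes // bytes_per_line

print(raw_bytes)      # 750000000
print(brain_bytes)    # 25000000
print(lines_of_code)  # 1000000
```

Of course, checking the arithmetic says nothing about whether the
genome-to-code analogy itself holds, which is the real question.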

There is an interesting analogy here.  People do understand
microprocessors, which are up in this class of complexity (it takes
about a million lines of code to describe one).

Sort of.

Anyone can grasp the functional level of how a microprocessor works,
but as you get deeper into the modules (pipeline circuits, for
example) the understanding fades out for all but a small number of
experts on that particular section--and they don't understand the next
section over.  *Nobody* understands a modern microprocessor from the
highest levels down through the modules to the transistors.
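
To make the abstraction-level point concrete, here is a toy sketch:
the same 1-bit full adder described at the functional level anyone can
grasp, and at the gate level an implementer has to reason about.  The
names and structure are purely illustrative, not from any real design;
a real pipelined 64-bit datapath widens this gap by orders of
magnitude.

```python
# The same device at two levels of abstraction.

def adder_functional(a, b, cin):
    """Functional level: it just adds three bits."""
    total = a + b + cin
    return total % 2, total // 2          # (sum, carry-out)

def adder_gates(a, b, cin):
    """Gate level: the same device as XOR/AND/OR gates."""
    half = a ^ b
    s = half ^ cin
    cout = (a & b) | (half & cin)
    return s, cout

# The two descriptions agree on every input combination.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            assert adder_functional(a, b, cin) == adder_gates(a, b, cin)
```

Understanding the top function does not mean you understand the bottom
one, and in a real chip there are many more layers between them.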

Incidentally, I can think of no need to invoke protein folding as a
problem for specifying brain development.  In fact, you probably don't
even need to know what the proteins are; "diffusible attractor
proteins A-Z" is probably detail enough.
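
The kind of thing I mean is the textbook morphogen-gradient (French
flag) picture: one diffusible protein forms a gradient from a source,
and cells pick a fate by thresholding the local concentration.  A toy
sketch, with every number made up for illustration:

```python
import math

def concentration(x, decay_length=2.0, source_level=1.0):
    """Steady-state morphogen level at distance x from its source."""
    return source_level * math.exp(-x / decay_length)

def cell_fate(x):
    """Cells read the gradient against two arbitrary thresholds."""
    c = concentration(x)
    if c > 0.5:
        return "type A"
    elif c > 0.1:
        return "type B"
    return "type C"

# A row of ten cells self-organizes into three bands of fates.
print([cell_fate(x) for x in range(10)])
```

The point is that a handful of parameters (one source, one decay
length, two thresholds) specifies detailed spatial structure; nothing
about how the protein folds enters into it.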

In any case, modeling the brain is going to be a big enough problem
that the simulation hardware/software that results may be
"understandable" only in the same sense that people "understand"
microprocessors.

Keith Henson

PS apologies to anyone who speaks up and says they understand a modern
microprocessor from the top level all the way down to the transistors.
