[ExI] Tim May and DNA

John Clark johnkclark at gmail.com
Thu Feb 7 17:40:56 UTC 2019

On Wed, Feb 6, 2019 at 12:49 PM Stuart LaForge <avant at sollegro.com> wrote:


> That figure of 750 MB is more the maximally-compressed information
> content of the haploid genome and not so much the recipe. Much of the
> redundancy in DNA is functional and so the code would likely have to be
> decompressed to be executable. For example, there might be little
> difference in the Shannon information content of the sequence TTAGGG
> repeated thousands of times or just once, but the information content does
> not consider that the purpose of repeating the (TTAGGG)n telomeric sequence is
> to give the ends of chromosomes the flexibility to fold over themselves
> repeatedly to hide the tip of the chromosome in the center of a
> complicated knot where DNA-degrading enzymes called exonucleases can't
> access and digest them.

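The compressibility point above is easy to demonstrate: a general-purpose
compressor shrinks a telomeric repeat to almost nothing, while a random
sequence of the same length stays thousands of bytes. This is just an
illustrative sketch using Python's zlib, not anything from the original post:

```python
import random
import zlib

random.seed(0)

# 60,000 bases of the telomeric repeat vs. 60,000 random bases.
repeat = b"TTAGGG" * 10_000
scrambled = bytes(random.choice(b"ACGT") for _ in range(60_000))

# The repeat compresses to a few hundred bytes (little Shannon information);
# the random sequence still needs roughly 2 bits per base, thousands of bytes.
print(len(zlib.compress(repeat)))
print(len(zlib.compress(scrambled)))
```

Of course, as the quote notes, the cell can't run the compressed form; the
repeats have to physically exist on the chromosome to do their job.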
True, but there is reason to think much of the genome really is nothing but
parasitical junk, at least from our point of view; after all, the entire
point of Evolution is to get genes duplicated, and our phenotype, aka our
bodies, is just a means to that end. And the fact that some very
commonplace-looking creatures can have huge genomes supports the
idea that there must be a lot of junk in genomes. The human genome has
about 3 billion base pairs, but a Mexican salamander called the axolotl has 32
billion base pairs, the marbled lungfish has 130 billion base pairs, and a
humdrum-looking Japanese flowering plant called Paris japonica has 150 billion
base pairs, 50 times the size of the human genome. It's hard to believe that
little plant or the body of a lungfish is inherently more complex than a
human even if its genome is.

John K Clark