[Paleopsych] is evolutionary change stockpiled?

HowlBloom at aol.com HowlBloom at aol.com
Sat Nov 27 01:47:30 UTC 2004


In a message dated 11/24/2004 9:31:36 AM Eastern Standard Time, 
shovland at mindspring.com writes:
It could be that the accretion of microscopic changes 
in the genes without external implementation does in 
fact represent a period of testing the changes to see 
if they are appropriate.

Software enhancements are done this way.  We get
feedback from users of the existing version, we build
their perceptions into the system, we test it, and
eventually we go live.
The whole concept of natural selection gets very iffy if something like this 
is true.  A genetic suite can extend the skin of a small mammal, can give the 
mammal wings, and can turn a tree-climbing mammal into a bat.  But if that 
genetic suite can only test its ability to survive within the team of a genome and 
in the environment of a nucleus--if the gene suite remains hidden, or cryptic, 
to use the term applied by researchers on this topic--how can it test the 
viability of its product—the skin flaps connecting front limbs to hind limbs that 
are wings?
 
How can that suite of genes be "certain" that it will turn out a malformation 
of skin that's aerodynamically sound?  How can it be sure it will turn out a 
malformation that will serve any useful purpose, much less one that gives 
rodents the ability to fly and, with it, an edge?
 
How, for that matter, does a suite of genes for a body segment of an insect 
"learn" how to produce a head if it shows up in one place, a thorax if the gene 
suite shows up in another, and an abdomen if it shows up third in line?
 
How could gene suites possibly learn to produce these things without trial 
and error, without testing, and without practice?
 
Or, to put it in Stephen Jay Gould's terms, if Darwin's gradualism is right, 
why do we not see a plethora of "hopeful monsters"--random experiments that 
don't work out?
 
Is it possible that when animals—including humans—are exposed to stress or 
to opportunity, gene suites that have never been tried out before suddenly 
appear, we have a flood of hopeful monsters, and those that are able to find or to 
invent a new way of making a living, a new niche, become fruitful and 
multiply?
 
If so, do we have any evidence for this among multicellular creatures?  We DO 
have evidence of this sort of body-plasticity among bacteria.  When bacteria 
are exposed to stress they become more open to new genetic inserts from phages 
and from bacterial sex.
 
In the ancient days when John Skoyles was among us, he pointed to research on 
heat-shock genes demonstrating that there are gene police that keep the 
genome rigidly in order under normal circumstances, but that loosen their grip when 
life gets tough and open the genome to new solutions to old problems, 
including solutions that turn old problems into new forms of food.
 
But is there plasticity of this sort in the bodies of multicellular 
organisms?  There’s some that comes from shifting the amount of time an embryo stays in 
the womb.  Eject your infant when it’s still highly plastic and you get 
neoteny, you get a lot of extra wiggle room.  And the brain is extremely plastic…at 
least in humans.  But how far can bodies stretch and bend without trial and 
error?
 
The two papers that relate to this issue are Eshel’s on “Meaning-Based 
Natural Intelligence” and Greg’s on “When Genes Go Walkabout”, so I’ll append 
them below.
 
Onward—Howard
 
________
WHEN GENES GO WALKABOUT
 
By Greg Bear
 
 
I’m pleased and honored to be asked to appear before the American 
Philosophical Society, and especially in such august company. Honored... and more than a 
little nervous! I am not, after all, a scientist, but a writer of fiction--and 
not just of fiction, but of science fiction. That means humility is not my 
strong suit. Science fiction writers like to be provocative. That’s our role. 
What we write is far from authoritative, or final, but science fiction works 
best when it stimulates debate.
I am an interested amateur, an English major with no degrees in science. And 
I am living proof that you don’t have to be a scientist to enjoy deep 
exploration of science. So here we go--a personal view.
A revolution is under way in how we think about the biggest issues in 
biology--genetics and evolution. The two are closely tied, and viruses--long regarded 
solely as agents of disease--seem to play a major role.
For decades now, I’ve been skeptical about aspects of the standard theory of 
evolution, the neo-Darwinian Modern Synthesis. But without any useful 
alternative--and since I’m a writer, and not a scientist, and so my credentials are 
suspect--I have pretty much kept out of the debate. Nevertheless, I have lots of 
time to read--my writing gives me both the responsibility and the freedom to 
do that, to research thoroughly and get my facts straight. And over ten years 
ago, I began to realize that many scientists were discovering key missing 
pieces of the evolutionary puzzle. 
Darwin had left open the problem of what initiated variation in species. 
Later scientists had closed that door and locked it. It was time to open the door 
again.
Collecting facts from many sources--including papers and texts by the 
excellent scientists speaking here today--I tried to assemble the outline of a modern 
appendix to Darwin, using ideas derived from disciplines not available in 
Darwin’s time: theories of networks, software design, information transfer and 
knowledge, and social communication--lots of communication. 
My primary inspiration and model was variation in bacteria. Bacteria initiate 
mutations in individuals and even in populations through gene transfer, the 
swapping of DNA by plasmids and viruses. 
Another inspiration was the hypothesis of punctuated equilibrium, popularized 
by Stephen Jay Gould and Niles Eldredge. In the fossil record--and for that 
matter, in everyday life--what is commonly observed are long periods of 
evolutionary stability, or equilibrium, punctuated by sudden change over a short span 
of time, at least geologically speaking--ten thousand years or less. And the 
changes seem to occur across populations.  
Gradualism--the slow and steady accumulation of defining mutations, a 
cornerstone of the modern synthesis--does not easily accommodate long periods of 
apparent stability, much less  rapid change in entire populations. If punctuated 
equilibrium is a real phenomenon, then it means that evolutionary change can be 
put on hold. How is that done? How is the alleged steady flow of mutation 
somehow delayed, only to be released all at once? 
I was fascinated by the possibility that potential evolutionary change could 
be stored up. Where would it be kept? Is there a kind of genetic library where 
hypothetical change is processed, waiting for the right moment to be 
expressed? Does this imply not only storage, but a kind of sorting, a critical editing 
function within our DNA, perhaps based on some unknown genetic syntax and 
morphology? 
If so, then what triggers the change? 
Most often, it appears that the trigger is either environmental challenge or 
opportunity. Niches go away, new niches open up. Food and energy become 
scarce. New sources of food and energy become available. Lacking challenge or 
change, evolution tends to go to sleep--perhaps to dream, and sometimes to rumple 
the covers, but not to get out of bed and go for coffee.
Because bacteria live through many generations in a very short period of 
time, their periods of apparent stability are not millennia, but years or months 
or even days.
The most familiar mutational phenomenon in bacteria--resistance to 
antibiotics--can happen pretty quickly. Bacteria frequently exchange plasmids that carry 
genes that counteract the effects of antibiotics. Bacteria can also absorb 
and incorporate raw fragments of DNA and RNA, not packaged in nice little 
chromosomes. The members of the population not only sample the environment, but 
exchange formulas, much as our grandmothers might swap recipes for soup and bread 
and cookies. How these recipes initially evolve can in many instances be 
attributed to random mutation--or to the fortuitous churning of gene 
fragments--acting through the filter of natural selection.  Bacteria do roll the dice, but 
recent research indicates that they roll the dice more often when they’re under 
stress--that is, when mutations will be advantageous. Interestingly, they 
also appear to roll the dice predominantly in those genetic regions where 
mutation will do them the most good! Bacteria, it seems, have learned how to change 
more efficiently.
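To make this concrete, here is a minimal toy simulation (my own sketch, not 
anything from Bear's talk; the population size, baseline mutation rate, and 
stress multiplier are illustrative assumptions).  It asks how many generations 
pass before the first resistance mutation appears, at a baseline mutation rate 
versus a stress-elevated one:

import random

def generations_to_resistance(stress_boost, pop_size=1000,
                              base_rate=1e-4, seed=1):
    # Each generation, every cell independently has a small chance of
    # acquiring the resistance mutation; stress multiplies that chance.
    rng = random.Random(seed)
    rate = base_rate * stress_boost
    for generation in range(1, 100_000):
        if any(rng.random() < rate for _ in range(pop_size)):
            return generation  # generation in which the first mutant appears
    return None

print(generations_to_resistance(1))    # baseline mutation rate
print(generations_to_resistance(100))  # stress-elevated rate: far fewer generations on average
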
Once these bacterial capabilities evolve, they spread rapidly. However, they 
spread only when a need arises--again, natural selection. No advantage, no 
proliferation. No challenge, no change.
But gene swapping is crucial. And it appears that bacteria accept these 
recipes not just through random action, but through a complicated process of 
decision-making. Bacterial populations are learning and sharing. In short, bacteria 
are capable of metaevolution--self-directed change in response to 
environmental challenges. 
Because of extensive gene transfer, establishing a strict evolutionary tree 
of bacterial types has become difficult, though likely not impossible. We’re 
just going to have to be clever, like detectives solving crimes in a town where 
everyone is a thief. 
Perhaps the most intriguing method of gene swapping in bacteria is the 
bacteriophage, or bacterial virus. Bacteriophages--phages for short--can either kill 
large numbers of host bacteria, reproducing rapidly, or lie dormant in the 
bacterial chromosome until the time is right for expression and release. Lytic 
phages almost invariably kill their hosts. But these latter types--known as 
lysogenic phages--can actually transport useful genes between hosts, and not just 
randomly, but in a targeted fashion. In fact, bacterial pathogens frequently 
rely on lysogenic phages to spread toxin genes throughout a population. 
Cholera populations become pathogenic in this fashion. In outbreaks of E. coli that 
cause illness in humans, lysogenic phages have transported genes from 
Shigella--a related bacterial type--conferring the ability to produce Shiga toxin, a 
potent poison. 
Thus, what at first glance looks like a disease--viral infection--is also an 
essential method of communication--FedEx for genes.
When genes go walkabout, bacteria can adapt quickly to new opportunities. In 
the case of bacterial pathogens, they can rapidly exploit a potential 
marketplace of naïve hosts. In a way, decisions are made, quorums are reached, genes 
are swapped, and behaviors change.
What lies behind the transfer of bacterial genes? Again, environmental 
challenges and opportunities. While some gene exchange may be random, bacterial 
populations overall appear to practice functions similar to education, 
regimentation, and even the execution of uncooperative members. When forming bacterial 
colonies, many bacteria--often of different types--group together and exchange 
genes and chemical signals to produce an organized response to environmental 
change. Often this response is the creation of a biofilm, a slimy polysaccharide 
construct complete with structured habitats, fluid pathways, and barriers 
that discourage predators. Biofilms can even provide added protection against 
antibiotics. Bacteria that do not go along with this regimen can be forced to 
die--either by being compelled to commit suicide or by being subjected to other 
destructive measures. If you don’t get with the picture, you break down and 
become nutrients for those bacterial brothers who do, thus focusing and 
strengthening the colony.
A number of bacteriologists have embraced the notion that bacteria can behave 
like multicellular organisms. Bacteria cooperate for mutual advantage. Today, 
in the dentist’s office, what used to be called plaque is now commonly 
referred to as a biofilm. They’re the same thing--bacterial cities built on your 
teeth.
In 1996, I proposed to my publishers a novel about the coming changes in 
biology and evolutionary theory. The novel would describe an evolutionary event 
happening in real-time--the formation of a new sub-species of human being. What 
I needed, I thought, was some analog to what happens in bacteria. And so I 
would have to invent ancient viruses lying dormant in our genome, suddenly 
reactivated to ferry genes and genetic instructions between humans. 
To my surprise, I quickly discovered I did not have to invent anything. Human 
endogenous retroviruses are real, and many of them have been in our DNA for 
tens of millions of years. Even more interesting, some have a close 
relationship to the virus that causes AIDS, HIV. 
The acronym HERV--human endogenous retrovirus--became my mantra. In 1997 and 
1998, I searched the literature (and the internet) for more articles about 
these ancient curiosities--and located a few pieces here and there, occasional 
mention in monographs, longer discussions in a few very specialized texts. I was 
especially appreciative of the treatment afforded to HERV in the Cold Spring 
Harbor text Retroviruses, edited by Drs. Coffin, Varmus, and Hughes. But to my 
surprise, the sources were few, and there was no information about HERV 
targeted to the general layman.
As a fiction writer, however, I was in heaven--ancient viruses in our genes! 
And hardly anyone had heard of them. 
If I had had any sense, I would have used that for what it seemed at face 
value--a ticking time bomb waiting to go off and destroy us all. But I had 
different ideas. I asked, what do HERV do for us? Why do we allow them to stay in 
our genome?  
In fact, even in 1983, when I was preparing my novel Blood Music, I asked 
myself--what do viruses do for us? Why do we allow them to infect us? I suspected 
they were part of a scheme involving computational DNA, but could not fit 
them in...not just then. HIV was just coming into the public consciousness, and 
retroviruses were still controversial. 
I learned that HERV express in significant numbers in pregnant women, 
producing defective viral particles apparently incapable of passing to another human 
host. So what were they--useless hangers-on? Genetic garbage? Instinctively, I 
could not believe that. I’ve always been skeptical of the idea of junk DNA, 
and certainly skeptical of the idea that the non-coding portions of DNA are 
deserts of slovenly and selfish disuse. 
HERV seemed to be something weird, something wonderful and 
counter-intuitive--and they were somehow connected with HIV, a species-crossing retrovirus that 
had become one of the major health scourges on the planet. I couldn’t 
understand the lack of papers and other source material on HERV. Why weren’t they 
being investigated by every living biologist?
In my rapidly growing novel, I wrote of Kaye Lang, a scientist who charts the 
possible emergence of an HERV capable of producing virions--particles that 
can infect other humans. To her shock, the HERV she studies is connected by 
investigators at the CDC with a startling new phenomenon, the apparent mutation 
and death of infants.  The infectious HERV is named SHEVA. But SHEVA turns out 
to be far more than a disease. It’s a signal prompting the expression of a new 
phenotype, a fresh take on humanity--a signal on Darwin’s Radio.
In 1999, the novel was published. To my gratified surprise, it was reviewed 
in Nature and other science journals. Within a very few months, news items 
about HERV became far more common. New scientific papers reported that ERV-related 
genes could help human embryos implant in the womb--something that has 
recently been given substantial credence. And on the web, I encountered the 
fascinating papers of Dr. Luis P. Villarreal.
I felt as if I had spotted a big wave early, and jumped on board just in 
time. Still, we have not found any evidence of infectious HERV--and there is 
certainly no proof that retroviruses do everything I accuse them of in Darwin’s 
Radio. But after four years, the novel holds up fairly well. It’s not yet 
completely out of date. 
And the parallel of HERV with lysogenic phages is still startling.  
But back to the real world of evolution and genetics.
The picture we see now in genetics is complex. Variation can occur in a 
number of ways. DNA sequence is not fate; far from it. The same sequence can yield 
many different products. Complexes of genes lie behind most discernible 
traits. Genes can be turned on and off at need. Non-coding DNA is becoming extremely 
important to understanding how genes do their work. 
As well, mutations are not reliable indicators of irreversible change. In 
many instances, mutations are self-directed responses to the environment. Changes 
can be reversed and then reenacted at a later time--and even passed on as 
reversible traits to offspring.
Even such neo-Darwinian no-nos as the multiple reappearances of wings in 
stick insects point toward the existence of a genetic syntax, a phylogenetic 
toolbox, rather than random mutation. Wings are in the design scheme, the bauplan. 
When insects need them, they can be pulled from the toolbox and implemented 
once again.
We certainly don’t have to throw out Mr. Darwin. Natural selection stays 
intact. Random variation is not entirely excised. But the neo-Darwinian dogma of 
random mutation as a cause of all variation, without exception, has been proven 
wrong.
Like genetics, evolution is not just one process, but a collaboration of many 
processes and techniques. And evolution is not entirely blind. Nor must 
evolution be directed by some outside and supernatural intelligence to generate the 
diversity and complexity we see. Astonishing creativity, we’re discovering, 
can be explained by wonderfully complicated internal processes.
These newer views of evolution involve learning and teamwork. Evolution is in 
large part about communication--comparing notes and swapping recipes, as it 
were.
It appears that life has a creative memory, and knows when and how to use it. 
Let’s take a look at what the scientists have discovered thus far. 
Viruses can and do ferry useful genes between organisms. Viruses can also act 
as site-specific regulators of genetic expression. Within a cell, 
transposable elements--jumping genes similar in some respects to endogenous 
retroviruses--can also be targeted to specific sites and can regulate specific genes. Both 
viruses and transposable elements can be activated by stress-related 
chemistry, either in their capacity as selfish pathogens--a stressed organism may be a 
weakened organism--or as beneficial regulators of gene expression--a stressed 
organism may need to change its nature and behavior. 
Viral transmission occurs not just laterally, from host to host (often during 
sex), but vertically through inherited mobile elements and endogenous 
retroviruses.
Chemical signals between organisms can also change genetic expression. As 
well, changes in the environment can lead to modification of genetic expression 
in both the individual and in later generations of offspring. These changes may 
be epigenetic--factors governing which genes are to be expressed in an 
organism can be passed on from parent to offspring--but also genetic, in the 
sequence and character of genes.
Our immune system functions as a kind of personal radar, sampling the 
environment and providing information that allows us to adjust our immune 
response--and possibly other functions, as well.
These pathways and methods of regulation and control point toward a massive 
natural network capable of exchanging information--not just genes themselves, 
but how genes should be expressed, and when. Each gene becomes a node in a 
genomic network that solves problems on the cellular level. Cells talk to each 
other through chemistry and gene transfer. And through sexual recombination, 
pheromonal interaction, and viruses, multicellular organisms communicate with each 
other and thus become nodes in a species-wide network.
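As a cartoon of this "genes as nodes in a network" picture, here is a minimal 
random Boolean network in the spirit of Kauffman's models -- my illustration, 
not anything Bear describes.  Each "gene" is on or off, its next state is a 
fixed Boolean function of two other genes, and iterating the whole network 
from any starting state settles into a repeating attractor, one toy sense in 
which a network of genes collectively "solves a problem":

import random

N = 8
rng = random.Random(0)
inputs = [rng.sample(range(N), 2) for _ in range(N)]                 # who regulates whom
tables = [[rng.randint(0, 1) for _ in range(4)] for _ in range(N)]   # each gene's rule

def step(state):
    # each gene reads its two regulators and looks up its next on/off state
    return tuple(tables[i][2 * state[a] + state[b]]
                 for i, (a, b) in enumerate(inputs))

state, seen = tuple(rng.randint(0, 1) for _ in range(N)), {}
while state not in seen:
    seen[state] = len(seen)
    state = step(state)
print("settles into an attractor of length", len(seen) - seen[state])
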
On the next level, through predation and parasitism, as well as through 
cross-species exchange of genes, an ecosystem becomes a network in its own right, 
an interlinking of species both cooperating and competing, often at the same 
time.
Neural networks from beehives to brains solve problems through the exchange 
and the selective cancellation and modification of signals. Species and 
organisms in ecosystems live and die like signals in a network. Death--the ax of 
natural selection--is itself a signal, a stop-code, if you will.
Networks of signals exist in all of nature, from top to bottom--from gene 
exchange to the kinds of written and verbal communication we see at this event. 
Changes in genes can affect behavior. Sometimes even speeches can affect 
behavior.
Evolution is all about competition and cooperation--and communication. 
Traditional theories of evolution emphasize the competitive aspect and 
de-emphasize or ignore the cooperative aspect. But developments in genetics and 
molecular biology render this emphasis implausible. 
Genes go walkabout far too often. We are just beginning to understand the 
marvelous processes by which organisms vary and produce the diversity of living 
nature.
For now, evolution is a wonderful mystery, ripe for further scientific 
exploration. The gates have been blown open once again. 
And as a science fiction writer, I’d like to make two provocative and 
possibly ridiculous predictions.
The first is that the more viruses may be found in an organism and its 
genome, the more rapid will be that organism’s rate of mutation and evolution.
And the second: Bacteria are such wonderful, slimmed-down organisms, lacking 
introns and all the persiflage of eukaryotic biology. It seems to me that 
rather than bacteria being primitive, with nucleated cells having evolved from them, 
the reverse could be true. Bacteria may once have occupied large, primitive 
eukaryotic cells, perhaps similar to those seen in the fossil Vendobionts--or 
the xenophyophores seen on ocean bottoms today. There, they evolved and swam 
within the relative safety of the membranous sacs, providing various services, 
including respiration. They may have eventually left these sacs and become both 
wandering minstrels and predators, serving and/or attacking other sacs in the 
primitive seas. 
Eventually, as these early eukaryotic cells advanced, and perhaps as the 
result of a particularly vicious cycle of bacterial predation, they shed nearly 
all their bacterial hangers-on in a protracted phase of mutual separation, 
lasting hundreds of millions or even billions of years. 
And what the now trim and super-efficient bacteria--the sports cars of modern 
biology--left behind were the most slavish and servile members of that former 
internal community: the mitochondria.
Which group will prove to have made the best decision, to have taken the 
longest and most lasting road?
________
Meaning-Based Natural Intelligence
Vs.
Information-Based Artificial Intelligence
By
Eshel Ben Jacob and Yoash Shapira
School of Physics and Astronomy
Raymond & Beverly Sackler Faculty of Exact Sciences
Tel Aviv University, 69978 Tel Aviv Israel
Abstract
In this chapter, we reflect on the concept of Meaning-Based Natural 
Intelligence - a
fundamental trait of Life shared by all organisms, from bacteria to humans, 
associated with:
semantic and pragmatic communication, assignment and generation of meaning, 
formation of
self-identity and of associated identity (i.e., of the group the individual 
belongs to),
identification of natural intelligence, intentional behavior, decision-making 
and intentionally
designed self-alterations. These features place the Meaning-Based Natural 
Intelligence
beyond the realm of Information-based Artificial Intelligence. Hence, 
organisms are beyond
man-made pre-designed machinery and are distinguishable from non-living 
systems.
Our chain of reasoning begins with the simple distinction between intrinsic 
and
extrinsic contextual causations for acquiring intelligence. The first, 
associated with natural
intelligence, is required for the survival of the organism (the biotic 
system) that generates it.
In contrast, artificial intelligence is implemented externally to fulfill a 
purpose for the benefit
of the organism that engineered the “Intelligent Machinery”. We explicitly 
propose that the
ability to assign contextual meaning to externally gathered information is an 
essential
requirement for survival, as it gives the organism the freedom of contextual 
decision-making.
By contextual, we mean relating to the external and internal states of the 
organism and the
internally stored ontogenetic knowledge it has generated. We present the view 
that contextual
interpretation of information and consequent decision-making are two 
fundamentals of
natural intelligence that any living creature must have.
A distinction between extraction of information from data vs. extraction of 
meaning from
information is drawn while trying to avoid the traps and pitfalls of the “
meaning of meaning”
and the “emergence of meaning” paradoxes. The assignment of meaning (internal
interpretation) is associated with identifying correlations in the 
information according to the
internal state of the organism, its external conditions and its purpose in 
gathering the
information. Viewed this way, the assignment of meaning implies the existence 
of intrinsic
meaning, against which the external information can be evaluated for 
extraction of meaning.
This leads to the recognition that the organism has self-identity.
We present the view that the essential differences between natural 
intelligence and
artificial intelligence are a testable reality, untested and ignored since they 
have been wrongly
perceived as inconsistent with the foundations of physics. We propose that 
the inconsistency
arises within the current, gene-network picture of the Neo-Darwinian paradigm 
(that regards
organisms as equivalent to a Turing machine) and not from an in-principle 
contradiction with
physical reality. Once the ontological reality of organisms’ natural 
intelligence is verified, a
paradigm shift should be considered, where inter- and intra-cellular 
communication and
genome plasticity (based on “junk DNA” and the abundance of transposable 
elements) play
crucial roles. In this new paradigm, communication and gene plasticity might 
be able to
sustain the organisms with regulated freedom of choice between different 
available
responses.
There have been many attempts to attribute the cognitive abilities of 
organisms (e.g.,
consciousness) to underlying quantum-mechanical mechanisms, which can 
directly affect the
“mechanical” parts of the organism (i.e., atomic and molecular excitations) 
despite thermal
noise. Here, organisms are viewed as continuously self-organizing open 
systems that store
past information, external and internal. These features enable 
macroscopic organisms to
display properties analogous to some features of quantum mechanical systems. Yet, 
they are
essentially different and should not be mistaken to be a direct reflection of 
quantum effects.
On the conceptual level, the analogy is very useful as it can lead to some 
insights from the
knowledge of quantum mechanics. We show, for example, how it enables us to 
metaphorically
bridge the Aharonov-Vaidman and Aharonov-Albert-Vaidman concepts of
Protective and Weak Measurements in quantum mechanics (no destruction of the 
quantum
state) with Ben Jacob’s concept of Weak-Stress Measurements (e.g., exposure 
to non-lethal
levels of antibiotic) in the study of organisms. We also reflect on the 
metaphoric analogy
between the Aharonov-Anandan-Popescu-Vaidman Quantum Time-Translation Machine and
the ability of an external observer to distinguish an organism’s 
decision-making from arbitrary
fluctuations. Inspired by the concept of Quantum Non-Demolition measurements 
we propose
to use biofluoremetry (the use of bio-compatible fluorescent molecules to 
study intracellular
spatio-temporal organization and functional correlations) as a future 
methodology of
Intracellular Non-Demolition Measurements. We propose that the latter, 
performed during
Weak-Stress Measurements of the organism, can provide proper schemata to test 
the special
features associated with natural intelligence.
Prologue - From Bacteria Thou Art
Back in 1943, a decade before the discovery of the structure of the DNA, 
Schrödinger, one of
the founders of quantum mechanics, delivered a series of public lectures, 
later collected in a
book entitled “What is Life? The Physical Aspect of the Living Cell” [1]. The 
book begins
with an “apology” and an explanation of why he, as a physicist, took the liberty 
to embark on a
quest related to Life sciences.
A scientist is supposed to have a complete and thorough knowledge, at 
first hand, of
some subjects and, therefore, is usually expected not to write on any topic 
of which he is
not a life master. This is regarded as a matter of noblesse oblige. For the 
present
purpose I beg to renounce the noblesse, if any, and to be freed of the 
ensuing
obligation. …some of us should venture to embark on a synthesis of facts and 
theories,
albeit with second-hand and incomplete knowledge of some of them -and at the 
risk of
making fools of ourselves. So much for my apology.
Schrödinger proceeds to discuss the most fundamental issue of Mind from 
Matter [1-3]. He
avoids the trap associated with a formal definition of Life and poses instead 
more pragmatic
questions about the special features one would associate with living 
organisms - to what
extent these features are or can be shared by non-living systems.
What is the characteristic feature of life? When is a piece of matter said to 
be alive?
When it goes on 'doing something', moving, exchanging material with its 
environment,
and so forth, and that for a much longer period than we would expect of an 
inanimate
piece of matter to 'keep going' under similar circumstances.
…Let me use the word 'pattern' of an organism in the sense in which the 
biologist calls
it 'the four-dimensional pattern', meaning not only the structure and 
functioning of that
organism in the adult, or in any other particular stage, but the whole of its 
ontogenetic
development from the fertilized egg to the stage of maturity, when 
the organism
begins to reproduce itself.
To explain how the organism can keep alive and not decay to equilibrium, 
Schrödinger
argues from the point of view of statistical physics. It should be kept in 
mind that the
principles of non-equilibrium statistical physics [4-6] with respect to 
organisms, and
particularly to self-organization in open systems [7-12], were to be 
developed only a decade
later, following Turing’s papers, “The chemical basis of morphogenesis”, “
The morphogen
theory of phyllotaxis” and “Outline of the development of the daisy” [13-15].
The idea Schrödinger proposed was that, to maintain life, it was not 
sufficient for organisms
just to feed on energy, like man-made thermodynamic machines do. To keep the 
internal
metabolism going, organisms must absorb low-entropy energy and exude 
high-entropy waste
products.
How would we express in terms of the statistical theory the marvelous faculty 
of a living
organism, by which it delays the decay into thermodynamic equilibrium 
(death)? We
said before: 'It feeds upon negative entropy', attracting, as it were, a stream 
of negative
entropy upon itself, to compensate the entropy increase it produces by living 
and thus
to maintain itself on a stationary and fairly low entropy level. Indeed, in 
the case of
higher animals we know the kind of orderliness they feed upon well enough, 
viz. the
extremely well-ordered state of matter in more or less complicated organic 
compounds,
which serve them as foodstuffs. After utilizing it they return it in a very 
much degraded
form -not entirely degraded, however, for plants can still make use of it.
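Schrödinger's point can be written as a schematic entropy balance for an open 
system (a paraphrase in standard notation, not a formula from the lectures or 
from this chapter):

  \frac{dS_{org}}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt}, \qquad \frac{d_i S}{dt} \ge 0,

where d_i S is the entropy the organism's own metabolism produces and d_e S is 
the entropy exchanged with the surroundings. The organism can hold its entropy 
steady, or lower it, only if the exchange term is negative and at least as 
large in magnitude as the internal production: it must export entropy, as heat 
and degraded waste, at least as fast as it generates it.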
The idea can be continued down the line to bacteria - the most fundamental 
independent form
of life on Earth [16-18]. They are the organisms that know how to reverse the 
second law of
thermodynamics in converting high-entropy inorganic substance into 
low-entropy living
matter. They do this cooperatively, so they can make use of any available 
source of low-entropy 
energy, from electromagnetic fields to chemical imbalances, and release 
high-entropy 
energy to the environment, thus acting as the only Maxwell Demons of nature. 
The
existence of all other creatures depends on these bacterial abilities, since 
no other organism
on earth can do it on its own. Today we understand that bacteria utilize 
cooperatively the
principles of self-organization in open systems [19-36]. Yet bacteria must 
thrive on
imbalances in the environment; in an ideal thermodynamic bath with no local 
and global
spatio-temporal structure, they can only survive a limited time.
In 1943, the year Schrödinger delivered his lectures, Luria and Delbruck 
performed a
cornerstone experiment to prove that random mutation exists [37]: 
non-resistant bacteria
were exposed to a lethal level of bacteriophage, and the idea was that only 
those that
happened to go through random mutation would survive and be observed. Their 
experiments
were then taken as a crucial support for the claim of the Neo-Darwinian dogma 
that all
mutations are random and can occur during DNA replication only [38-41]. 
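The statistical logic of that experiment (the "fluctuation test") can be 
sketched in a few lines of code -- a toy Monte Carlo of my own, with 
illustrative culture sizes and mutation rates, not the original analysis. If 
resistance arises at random during growth, before the phage is ever applied, 
occasional early "jackpot" mutations make the number of resistant colonies 
vary wildly from culture to culture (variance much larger than the mean); if 
exposure itself induced the mutations, the counts would be Poisson-like 
(variance roughly equal to the mean):

import random, statistics

rng = random.Random(0)

def random_mutation_culture(generations=17, rate=1e-5):
    # Grow from one cell by repeated doubling; each newly made non-resistant
    # cell may mutate, and resistant lineages double along with everyone else.
    resistant, total = 0, 1
    for _ in range(generations):
        new_mutants = sum(1 for _ in range(total - resistant)
                          if rng.random() < rate)
        resistant = resistant * 2 + new_mutants
        total *= 2
    return resistant

def induced_mutation_culture(final_size=2**17, rate=1e-4):
    # Mutations occur only at the moment of phage exposure.
    return sum(1 for _ in range(final_size) if rng.random() < rate)

for model in (random_mutation_culture, induced_mutation_culture):
    counts = [model() for _ in range(30)]
    print(model.__name__, "variance/mean =",
          round(statistics.variance(counts) / statistics.mean(counts), 1))
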
Schrödinger
proposed that random mutations and evolution can in principle be accounted 
for by the laws
of physics and chemistry (at his time), especially those of quantum mechanics 
and chemical
bonding. He was troubled by other features of Life, those associated with the 
organisms’
ontogenetic development during life. The following are additional extracts 
from his original
lectures about this issue:
Today, thanks to the ingenious work of biologists, mainly of geneticists, 
during the last
thirty or forty years, enough is known about the actual material structure of 
organisms
and about their functioning to state that, and to tell precisely why 
present-day physics
and chemistry could not possibly account for what happens in space and time 
within a
living organism.
…I tried to explain that the molecular picture of the gene made it at least 
conceivable
that the miniature code should be in one-to-one correspondence with a highly
complicated and specified plan of development and should somehow contain the 
means
of putting it into operation. Very well then, but how does it do this? How 
are we going
to turn ‘conceivability’ into true understanding?
…No detailed information about the functioning of the genetic mechanism can 
emerge
from a description of its structure as general as has been given above. That 
is obvious.
But, strangely enough, there is just one general conclusion to be obtained 
from it, and
that, I confess, was my only motive for writing this book. From Delbruck's 
general
picture of the hereditary substance it emerges that living matter, while not 
eluding the
'laws of physics' as established up to date, is likely to involve 'other laws 
of physics'
hitherto unknown, which, however, once they have been revealed, will form 
just as
integral a part of this science as the former. This is a rather subtle line 
of thought, open
to misconception in more than one respect. All the remaining pages are 
concerned with
making it clear.
With the discovery of the structure of DNA, the evidence for the 
one-gene-one-protein
scheme and the discoveries of the messenger RNA and transfer RNA led to the 
establishment
of the gene-centered paradigm in which the basic elements of life are the 
genes. According to
this paradigm, Schrödinger’s old dilemma is due to lack of knowledge at the 
time, so the new
findings render it obsolete. The dominant view since has been that all 
aspects of life can be
explained solely based on the information stored in the structure of the 
genetic material. In
other words, the dominant paradigm was largely assumed to be a 
self-consistent and a
complete theory of living organisms [38-41], although some criticism has been 
raised over
the years [42-47], mainly with emphasis on the role of bacteria in 
symbiogenesis of species.
The latter was proposed in 1926 by Mereschkovsky in a book entitled 
"Symbiogenesis and
the Origin of Species" and by Wallin in a book entitled "Symbionticism and 
the Origins of
Species". To quote Margulis and Sagan [44]:
The pioneering biologist Konstantin S. Merezhkovsky first argued in 1909 that 
the little
green dots (chloroplasts) in plant cells, which synthesize sugars in the 
presence of
sunlight, evolved from symbionts of foreign origin. He proposed that “
symbiogenesis”—
a term he coined for the merger of different kinds of life-forms into new 
species—was a
major creative force in the production of new kinds of organisms. A Russian 
anatomist,
Andrey S. Famintsyn, and an American biologist, Ivan E. Wallin, worked
independently during the early decades of the twentieth century on similar 
hypotheses.
Wallin further developed his unconventional view that all kinds of symbioses 
played a
crucial role in evolution, and Famintsyn, believing that chloroplasts were 
symbionts,
succeeded in maintaining them outside the cell. Both men experimented with the
physiology of chloroplasts and bacteria and found striking similarities in 
their structure
and function. Chloroplasts, they proposed, originally entered cells as live 
food—
microbes that fought to survive—and were then exploited by their ingestors. 
They
remained within the larger cells down through the ages, protected and always 
ready to
reproduce. Famintsyn died in 1918; Wallin and Merezhkovsky were ostracized by 
their
fellow biologists, and their work was forgotten. Recent studies have 
demonstrated,
however, that the cell’s most important organelles—chloroplasts in plants and
mitochondria in plants and animals—are highly integrated and well-organized 
former
bacteria.
The main thesis is that microbes, live beings too small to be seen without 
the aid of
microscopes, provide the mysterious creative force in the origin of species. 
The
machinations of bacteria and other microbes underlie the whole story of 
Darwinian
evolution. Free-living microbes tend to merge with larger forms of life, 
sometimes
seasonally and occasionally, sometimes permanently and unalterably. 
Inheritance of
«acquired bacteria» may ensue under conditions of stress. Many have noted 
that the
complexity and responsiveness of life, including the appearance of new 
species from
differing ancestors, can be comprehended only in the light of evolution. But 
the
evolutionary saga itself is legitimately vulnerable to criticism from within 
and beyond
science. Acquisition and accumulation of random mutations simply are, of 
course,
important processes, but they do not suffice. Random mutation alone does not 
account
for evolutionary novelty. Evolution of life is incomprehensible if microbes 
are omitted
from the story. Charles Darwin (1809-1882), in the absence of evidence, 
invented
«pangenes» as the source of new inherited variation. If he and the first 
evolutionist, the
Frenchman Jean Baptiste de Lamarck, only knew about the subvisible world 
what we
know today, they would have chuckled, and agreed with each other and with us.
The Neo-Darwinian paradigm began to draw some additional serious questioning 
following
the human genome project, which revealed fewer genes than expected and more 
transposable elements than expected. The following is a quote from the Celera 
team [18].
Taken together the new findings show the human genome to be far more than a
mere sequence of biological code written on a twisted strand of DNA. It is a 
dynamic
and vibrant ecosystem of its own, reminiscent of the thriving world of tiny 
Whos
that Dr. Seuss' elephant, Horton, discovered on a speck of dust . . . One of 
the
bigger surprises to come out of the new analysis is that some of the "junk" DNA 
scattered
throughout the genome that scientists had written off as genetic detritus 
apparently
plays an important role after all.
Even stronger clues can be deduced when social features of bacteria are 
considered: Eons
before we came into existence, bacteria already invented most of the features 
that we
immediately think of when asked to distinguish life from artificial systems: 
extracting
information from data, assigning existential meaning to information from the 
environment,
internal storage and generation of information and knowledge, and inherent 
plasticity and
self-alteration capabilities [9].
Let’s keep in mind that about 10% of our genes in the nucleus came, almost 
unchanged,
from bacteria. In addition, each of our cells (like the cells of any 
eukaryotes and plants)
carries its own internal colony of mitochondria - the intracellular multiple 
organelles that
carry their own genetic code (assumed to have originated from symbiotic 
bacteria), inherited
only through the maternal line. One of the known and well studied functions 
of mitochondria
is to produce energy via respiration (oxidative phosphorylation), where 
oxygen is used to
turn extracellular food into internally usable energy in the form of ATP. The 
present
fluorescence methods allow video recording of the dynamical behavior of 
mitochondria within
living cells, and reveal that they play additional crucial roles, for example 
in the generation of
intracellular calcium waves in glial cells [48-50].
Looking at the spatio-temporal behavior of mitochondria, it appears very much 
like that of
bacterial colonies. It looks as if they all move around in a coordinated 
manner, replicate, and
even conjugate like bacteria in a colony. From Schrödinger’s perspective, it 
seems that not
only do they provide the rest of the cell with internal digestible energy and 
negative entropy
but they also make available relevant information embedded in the 
spatio-temporal
correlations of localized energy transfer. In other words, each of our cells 
carries hundreds to
thousands of former bacteria as colonial Maxwell Demons with their own 
genetic codes, self-identity,
associated identity with the mitochondria in other cells (even if they belong to 
different
tissues), and their own collective self-interest (e.g., to initiate 
programmed death of their host
cell).
Could it be, then, that the fundamental, causality-driven schemata of our 
natural intelligence
have also been invented by bacteria [9,47], and that our natural intelligence 
is an ‘evolution-improved
version’, which is still based on the same fundamental principles and shares 
the
same fundamental features? If so, perhaps we should also learn something from 
bacteria
about the fundamental distinction between our own Natural Intelligence and 
the Artificial
Intelligence of our created machinery.
Introduction
One of the big ironies of scientific development in the 20th century is that 
its burst of
creativity helped establish the hegemony of a paradigm that regards 
creativity as an illusion.
The independent discovery of the structure of DNA (Universal Genetic Code), 
the
introduction of Chomsky’s notion about human languages (Universal Grammar – 
Appendix
B) and the launching of electronic computers (Turing Universal Machines- 
Appendix C), all
occurring during the 1950’s, later merged and together established the 
dominance of
reductionism. Western philosophy, our view of the world and our scientific 
thought have been
under its influence ever since, to the extent that many hold the deep 
conviction that the
Universe is a Laplacian, mechanical universe in which there is no room for 
renewal or
creativity [47].
In this Universe, concepts like cognition, intelligence or creativity are 
seen as mere
illusion. The amazing process of evolution (from inanimate matter, through 
organisms of
increasing complexity, to the emergence of intelligence) is claimed to be no 
more than a
successful accumulation of errors (random mutations) enhanced by natural 
selection (the
Darwinian picture). Largely due to the undeniable creative achievements of 
science,
unhindered by the still unsolved fundamental questions, the hegemony of 
reductionism
reached the point where we view ourselves as equivalent to a Universal Turing 
machine.
Now, by the logical reasoning inherent in reductionism, we are not and can 
not be essentially
different ‘beings’ from the machinery we can create, such as complex adaptive 
systems [51]. The
fundamental assumption is that of top-level emergence: a system consists of a 
large number of
autonomous entities called agents that individually have very simple 
behavior and that
interact with each other in simple ways. Despite this simplicity, a system 
composed of large
numbers of such agents often exhibits what is called emergent behavior that 
is surprisingly
complex and hard to predict. Moreover, in principle, we can design and build 
machinery that
can even be made superior to human cognitive abilities [52]. If so, we 
represent living
examples of machines capable of creating machines (a conceptual hybrid of 
ourselves and
our machines) ‘better’ than themselves, which is in contradiction with the 
paradigmatic idea
of natural evolution: that all organisms evolved according to a “Game of 
Random Selection”
played between a master random-number generator (Nature) and a collection of 
independent,
random number generators (genomes). The ontological reality of Life is 
perceived as a game
with two simple rules – the second law of thermodynamics and natural 
selection. Inherent
meaning and causality-driven creativity have no existence in such a reality – 
the only
meaning of life is to survive. If true, how come organisms have inherent 
programming to
stop living? So here is the irony: that the burst of real creativity was used 
to remove
creativity from the accepted epistemological description of Nature, including 
life.
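A standard minimal illustration of the top-level emergence described above 
(our example, not one drawn from the cited references): an elementary cellular 
automaton. Each cell is an "agent" that looks only at itself and its two 
neighbors and applies one fixed lookup rule, yet the pattern the row as a 
whole produces is intricate and hard to predict (rule 110 is even known to be 
computationally universal):

RULE = 110
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1                     # start with a single "on" agent

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # each cell's next state depends only on (left, self, right)
    row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                     + 2 * row[i]
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
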
The most intriguing challenge associated with natural intelligence is to 
resolve the
difficulty of the apparent contradiction between its fundamental concepts of 
decision-making
and creativity and the fundamental principle of time causality in physics. 
Ignoring the trivial
notion that the above concepts have no ontological reality, intelligence is 
assumed to reflect
Top-Level-Emergence in complex systems. This commonly accepted picture 
represents the
“More is Different” view [53], of the currently hegemonic reductionism-based
constructivism paradigm. Within this paradigm, there are no primary 
differences between
machinery and living systems, so the former can, in principle, be made as 
intelligent as the
latter and even more. Here we argue that constructivism is insufficient to 
explain natural
intelligence, and all-level generativism, or a “More is Different on All 
Levels” principle, is
necessary for resolving the “emergence of meaning” paradox [9]. The idea is 
the co-generation
of meaning on all hierarchical levels, which involves self-organization and
contextual alteration of the constituents of the biotic system on all levels 
(down to the
genome) vs. top-level emergence in complex systems with pre-designed and 
pre-prepared
elements [51,52].
We began in the prologue with the most fundamental organisms, bacteria,
building the argument towards the conclusion that recent observations of 
bacterial collective
self-identity place even them, and not only humans, beyond a Turing machine: 
Everyone
agrees that even the most advanced computers today are unable to fully 
simulate even an
individual, most simple bacterium of some 150 genes, let alone more advanced 
bacteria
having several thousands of genes, or a colony of about 1010 such bacteria. 
Within the current
Constructivism paradigm, the above state of affairs reflects technical or 
practical rather than
fundamental limitations. Namely, the assumption is that any organelle, our 
brain included, as
well as any whole organism, is in principle equivalent to, and thus may in 
principle be
mapped onto, a universal Turing Machine – the basis of all man-made digital 
information
processing machines (Appendix C). We argue otherwise. Before doing so we will 
place
Turing’s notions about “Intelligent Machinery” [54] and “Imitation Game” 
[55] within a new
perspective [56], in which any organism, including bacteria, is in principle 
beyond machinery
[9,47]. This realization will, in turn, enable us to better understand 
ourselves and the
organisms our existence depends on – the bacteria.
To make the argument sound, we take a detour and reflect on the philosophical
question that motivated Turing to develop his conceptual computing machine: 
We present
Turing’s universal machine within the causal context of its invention [57], 
as a manifestation
of Gödel’s theorem [58-60], by itself developed to test Hilbert’s idea about 
formal axiomatic
systems [61]. Then we continue to reexamine Turing’s seminal papers that 
started the field
of Artificial Intelligence, and argue that his “Imitation Game”, perceived 
ever since as an
“Intelligence Test”, is actually a “Self-Non-Self Identity Test”, or “
Identity Game” played
between two humans competing with a machine by rules set from machines 
perspective, and
a machine built by another human to win the game by presenting a false 
identity.
We take the stand that Artificial and Natural Intelligence are 
distinguishable, but not
by Turing’s imitation game, which is set from the machine’s perspective - the rules of the game 
of the game
simply do not allow expression of the special features of natural 
intelligence. Hence, for
distinction between the two versions of Intelligence, the rules of the game 
must be modified
in various ways. Two specific examples are presented, and it is proposed that 
it’s unlikely for
machines to win these new versions of the game.
Consequently, we reflect on the following questions about natural 
intelligence: 1. Is it a
metaphor or overlooked reality? 2. How can its ontological reality be tested? 
3. Is it
consistent with the current gene-networks picture of the Neo-Darwinian 
paradigm? 4. Is it
consistent with physical causal determinism and time causality? To answer the 
questions, we
first present the current accepted picture of organisms as ‘watery Turing 
machines’ living in
a predetermined Laplacian Universe. We then continue to describe the ‘
creative genome’
picture and a new perspective of the organism as a system with special 
built-in means to
sustain ‘learning from experience’ for decision-making [47]. For that, we 
reflect on the
analogy between the notions of the state of multiple options in organisms, 
the choice function
in the Axiom of Choice in mathematics (Appendix D) and the superposition of 
states in
quantum mechanics (Appendix E). According to the analogy, destructive quantum
measurements (that involve collapse of the wave function) are equivalent to 
strong-stress
measurements of the organisms (e.g., lethal levels of antibiotics) and to 
intracellular
destructive measurements (e.g., gene-sequencing and gene-expression in which 
the organism
is disassembled). Inspired by the new approach of protective quantum 
measurements, which
do not involve collapse of the wave function (Appendix E), we propose new 
conceptual
experimental methodologies of biotic protective measurements - for example, 
by exposing
the organisms to weak stress, like non-lethal levels of antibiotic [62,63], 
and by using
fluoremetry to record the intracellular organization and dynamics keeping the 
organism intact
[64-66].
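The quantum side of that analogy can be written in one line (standard 
notation; this is a schematic paraphrase, not a formula taken from the cited 
references): before measurement the system holds several options at once,

  |\psi\rangle = \sum_i c_i \, |option_i\rangle ,

and a destructive ("strong") measurement collapses it onto a single 
|option_k\rangle with probability |c_k|^2, destroying the mixture, whereas 
protective and weak measurements extract partial information while leaving the 
superposition essentially intact. The proposed biological analogue is that 
lethal stress or disassembling the cell plays the role of the collapsing 
measurement, while weak, non-lethal stress plays the role of a protective one.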
Formation of self-identity and of associated identity (i.e., of the group the 
individual belongs
to), identification of natural intelligence in other organisms, intentional 
behavior, decision-making
[67-75] and intentionally designed self-alterations require semantic and 
pragmatic
communication [76-80], and are typically associated with the cognitive 
abilities and meaning-based
natural intelligence of humans. One might accept their existence in the “
language of dolphins”
but regard them as well beyond the realm of bacterial communication 
abilities. We propose
that this notion should be reconsidered: New discoveries about bacterial 
intra- and intercellular
communication [81-92], colonial semantic and pragmatic language [9,47,93,94], 
the
above mentioned picture of the genome [45-47], and the new experimental 
methodologies
led us to consider bacterial natural intelligence as a testable reality.
Can Organisms be Beyond Watery Turing Machines
in Laplace’s Universe?
The objection to the idea about organisms’ regulated freedom of choice can be 
traced to the
Laplacian description of Nature. In this picture, the Universe is a 
deterministic and
predictable machine composed of matter parts whose functions obey a finite 
set of rules with
specified locality [95-98]. Laplace has also implicitly assumed that 
determinism,
predictability and locality go hand in hand with computability (using current 
terminology),
and suggested that:
“An intellect which at any given moment knew all the forces that animate 
Nature and
the mutual positions of the beings that comprise it, if this intellect were 
vast enough to
submit its data to analysis, could condense into a single formula the 
movement of the
greatest bodies of the universe and that of the lightest atom: for such an 
intellect
nothing could be uncertain: and the future just like the past would be 
present before its
eyes.”
Note that this conceptual intellect (Laplace’s demon) is assumed to be an 
external observer,
capable, in principle, of performing measurements without altering the state 
of the system,
and, like Nature itself, equivalent to a universal Turing machine.
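Laplace's claim can be compressed into one standard formula (a textbook 
rendering, not part of the quoted text): if the complete state of the world at 
time t_0 is x(t_0) and the forces are encoded in F, then

  \dot{x}(t) = F(x(t)), \qquad x(t) = x(t_0) + \int_{t_0}^{t} F(x(\tau)) \, d\tau ,

so the entire trajectory, future and past alike, is fixed once x(t_0) and F 
are given; any remaining 'uncertainty' reflects only the observer's ignorance, 
not the world itself.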
In the subsequent two centuries, every explicit and implicit assumption in the
Laplacean paradigm has proven to be wrong in principle (although sometimes a 
good
approximation on some scales). For example, quantum mechanics ruled out 
locality and the
implicit assumption about simultaneous and non-destructive measurements. 
Studies in
computer sciences illustrate that a finite deterministic system (with 
sufficient algorithmic
complexity) can be beyond Turing machine computability (the size of the 
required machine
should be comparable with that of the whole universe or the computation time 
of a smaller
machine would be comparable with the time of the universe). Computer 
sciences, quantum
measurements theory and statistical physics rule out backward computability 
even if the
present state is accurately known.
Consequently, systems’ unpredictability to an external observer is commonly
accepted. Yet, it is still largely assumed that nature itself as a whole and 
any of its parts must
in principle be predetermined, that is, subject to causal determinism 
[98], which must go hand
in hand with time causality [96]:
Causal determinism is the thesis that all events are causally necessitated by 
prior
events, so that the future is not open to more than one possibility. It seems 
to be
equivalent to the thesis that the future is in principle completely 
predictable (even if
in practice it might never actually be possible to predict with complete 
accuracy).
Another way of stating this is that for everything that happens there are 
conditions
such that, given them, nothing else could happen, meaning that a completely
accurate prediction of any future event could in principle be given, as in 
the famous
example of Laplace’s demon.
Clearly, a decomposable state of mixed multiple options and hence 
decision-making
can not have ontological reality in a universe subject to ‘causal 
determinism’. Moreover, in
this Neo-Laplacian Universe, the only paradigm that does not contradict the 
foundations of
logic is the Neo-Darwinian one. It is also clear that in such a clockwork 
universe there can not
be an essential difference, for example, between self-organization of a 
bacterial colony and
self-organization of a non living system such as electro-chemical deposition 
[99,100].
Thus, all living organisms, from bacteria to humans, could be nothing but 
watery Turing
machines created and evolved by random number generators. The conviction is 
so strong that
it is pre-assumed that any claim regarding essential differences between 
living organisms and
non-living systems is an objection to the foundations of logic, mathematics, 
physics and
biology. The simple idea, that the current paradigm in life sciences might be 
the source of the
apparent inconsistency and hence should be reexamined in light of the new 
discoveries, is
mostly rejected point-blank.
In the next sections we present a logical argument to explain why the 
Neo-Laplacian
Universe (with a built-in Neo-Darwinian paradigm) can not provide a complete and self-consistent description of Nature even if random number generators are called to the rescue.
The chain of reasoning is linked with the fact that formal axiomatic systems 
cannot provide
complete bases for mathematics and the fact that a Universal Turing Machine 
cannot answer
all the questions about its own performance.
Hilbert’s Vision –
Meaning-Free Formal Axiomatic Systems
Computers were invented to clarify Gödel's theorem, which was itself triggered by the philosophical question about the foundations of mathematics raised by Russell's logical
paradoxes [61]. These paradoxes attracted much attention, as they appeared to 
shatter the
solid foundations of mathematics, the most elegant creation of human 
intelligence. The best
known paradox has to do with the logical difficulty of including the intuitive concept of self-reference within the foundations of Principia Mathematica: if one attempts to define the set of all sets that are not elements of themselves, a paradox arises - if the set is an element of itself, it should not be, and vice versa.
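In standard set-builder notation the paradoxical set can be written compactly as

    R = \{\, x \mid x \notin x \,\}, \qquad \text{so that} \qquad R \in R \iff R \notin R,

a contradiction either way.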
As an attempt to eliminate such paradoxes from the foundations of 
mathematics, Hilbert
invented his meta-mathematics. The idea was to lay aside the causal 
development of
mathematics as a meaningful ‘tool’ for our survival, and set up a formal 
axiomatic system so
that a meaning-independent mathematics can be built starting from a set of 
basic postulates
(axioms) and well-defined rules of deduction for formulating new definitions 
and theorems
clean of paradoxes. Such a formal axiomatic system would then be a perfect 
artificial
language for reasoning, deduction, computing and the description of nature. 
Hilbert’s vision
was that, with the creation of a formal axiomatic system, the causal meaning 
that led to its
creation could be ignored and the formal system treated as a perfect, 
meaning-free game
played with meaning-free symbols on paper.
His idea seemed very elegant - with “superior” rules, “uncontaminated” by 
meaning, at
our disposal, any proof would no longer depend on the limitations of human natural
language with its imprecision, and could be executed, in principle, by some 
advanced,
meaning-free, idealized machine. It didn't occur to him that the built-in imprecision of human language (associated with its semantic and pragmatic levels) is not a limitation but rather provides the basis for the flexibility required for the existence of
our creativity-based
natural intelligence. He overlooked the fact that the intuitive (semantic) 
meanings of
intelligence and creativity have to go hand in hand with the freedom to err – 
there is no room
for creativity in a precise, clockwork universe.
Gödel’s Incompleteness/Undecidability Theorem
In 1931, in a monograph entitled “On Formally Undecidable Propositions of 
Principia
Mathematica and Related Systems” [58-61], Gödel proved that Hilbert’s vision 
was in
principle wrong - an ideal ‘Principia Mathematica’ that is both 
self-consistent and complete
can not exist.
Two related theorems are formulated and proved in Gödel’s paper: 1. The
Undecidability Theorem - within formal axiomatic systems there exist 
questions that are
neither provable nor disprovable solely on the basis of the axioms that 
define the system. 2.
The Incompleteness Theorem - if all questions are decidable then there must 
exist
contradictory statements. Namely, a formal axiomatic system can not be both 
self-consistent
and complete.
What Gödel showed was that a formal axiomatic system is either incomplete or
inconsistent even if just the elementary arithmetic of the whole numbers 0, 1, 2, 3, ... is considered (not to mention all of mathematics). He built a bridge between self-referential statements like "This statement is false" and Number Theory. Clearly,
mathematical statements in Number Theory are about the properties of whole 
numbers,
which by themselves are not statements, nor are their properties. However, a 
statement of
Number Theory could be about a statement of Number Theory and even about 
itself (i.e.,
self-reference). To show this, he constructed a one-to-one mapping between
statements about
numbers and the numbers themselves. In Appendix D, we illustrate the spirit 
of Gödel’s
code.
Gödel’s coding allows regarding statements of Number Theory on two different 
levels:
(1) as statements of Number Theory, and (2) as statements about statements of 
Number
Theory. Using his code, Gödel transformed the Epimenides paradox ("This statement is false") into a Number Theory version: "This statement of Number Theory is unprovable". Once such a statement of Number Theory that describes itself is constructed, it proves Gödel's theorems. If the statement is provable then it is false, and thus the system is inconsistent. Alternatively, if the statement is unprovable, it is true, but then the system is incomplete.
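The spirit of such a coding can be conveyed with a toy construction in Python (this is only an illustrative sketch with an invented miniature alphabet, not Gödel's actual scheme nor the coding of Appendix D): each symbol of a small formal alphabet is assigned a number, and a formula, i.e. a finite string of symbols, is mapped to the product of successive primes raised to those numbers, so that the formula is uniquely recoverable from its number by factorization.

# Toy illustration (not Goedel's actual scheme) of mapping a formula,
# i.e. a finite string of symbols, to a single natural number via primes.
SYMBOLS = ['0', 'S', '+', '*', '=', '(', ')', 'x', '~']   # invented toy alphabet
CODE = {s: i + 1 for i, s in enumerate(SYMBOLS)}          # symbol -> positive integer

def nth_prime(k):
    """Return the k-th prime (1-indexed) by trial division; fine for toy sizes."""
    count, n = 0, 1
    while count < k:
        n += 1
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return n

def godel_number(formula):
    """Encode a list of symbols as the product of nth_prime(i)**CODE[symbol_i]."""
    number = 1
    for i, sym in enumerate(formula):
        number *= nth_prime(i + 1) ** CODE[sym]
    return number

def decode(number):
    """Recover the symbol string from its number (unique by prime factorization)."""
    inverse = {v: k for k, v in CODE.items()}
    symbols, i = [], 1
    while number > 1:
        p, exponent = nth_prime(i), 0
        while number % p == 0:
            number //= p
            exponent += 1
        symbols.append(inverse[exponent])
        i += 1
    return symbols

g = godel_number(['0', '=', '0'])
print(g, decode(g))   # the single number g encodes the formula '0=0'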
One immediate implication of Gödel’s theorem is that no man-made formal 
axiomatic
system, no matter how complex, is sufficient in principle to capture the 
complexity of the
simplest of all systems of natural entities – the natural whole numbers. In 
simple words, any
mathematical system we construct can not be perfect (self-consistent and
complete) on its
own – some of its statements rely on external human intervention to be 
settled. It is thus
implied that either Nature is not limited by causal determinism (which can be 
mapped onto a
formal axiomatic system), or it is limited by causal determinism and there 
are statements
about nature that only an external Intelligence can resolve.
The implications of Gödel’s theorem regarding human cognition are still under
debate [108]. According to the Lucas-Penrose view presented in “Minds, 
Machines and
Gödel” by Lucas [101] and in “The emperor’s new mind: concerning computers, 
minds and
the laws of physics" by Penrose [73], Gödel's theorems imply that some of the
brain functions
must act non-algorithmically. The popular version of the argumentation is: 
There exist
statements in arithmetic which are undecidable for any algorithm yet are 
intuitively decidable
for mathematicians. The objection is mainly to the notion of ‘intuition-based 
mathematical
decidability’. For example, Nelson in “Mathematics and the Mind” [109], 
writes:
For the argumentation presented in later sections, we would like to highlight 
the
following: Russell’s paradoxes emerge when we try to assign the notion of 
self-reference
between the system and its constituents. Unlike living organisms, the sets of 
artificial
elements or Hilbert’s artificial systems of axioms are constructed from fixed 
components
(they do not change due to their assembly in the system) and with no internal 
structure that
can be a functional of the system as a whole as it is assembled. The system 
itself is also fixed
in time or, more precisely, has no temporal ordering. The set is constructed 
(or the system of
axioms is defined) by an external spectator who has the information about the 
system, i.e.,
the system doesn’t have internally stored information about itself and there 
are no intrinsic
causal links between the constituents.
Turing’s Universal Computing Machine
Gödel’s theorem, though relating to the foundations of mathematical 
philosophy, led Alan
Turing to invent the concept of computing machinery in 1936. His motivation 
was to test the
relevance of three possibilities for formal axiomatic systems that are left 
undecidable in
Gödel’s theorems: 1. they can not be both self consistent and complete but 
can be either; 2.
they can not be self-consistent; 3. they can not be complete. Turing proved 
that formal
axiomatic systems must be at least incomplete.
To prove his theorem, Gödel used his code to map both symbols and operations. 
The
proof itself, which is quite complicated, utilizes many recursively defined 
functions. Turing’s
idea was to construct a mapping between the natural numbers and their binary
representation
and to include all possible transformations between them to be performed by a 
conceptual
machine. The latter performs the transformation according to a given set of 
pre-constructed
instructions (program). Thus, while Gödel used the natural numbers themselves 
to prove his
theorems, Turing used the space of all possible programs, which is why he 
could come up
with even stronger statements. For later reflections, we note that each program can be perceived as a functional correlation between two numbers. In other words, the inherent limitations of formal axiomatic systems are more transparent in the higher-dimensional space of functional correlations between the numbers.
Next, Turing looked for the kinds of questions that the machine in principle
can’t
solve irrespective of its physical size. He proved that the kinds of 
questions the machine can
not solve are about its own performance. The best known is the ‘halting 
problem’: the only
way a machine can know if a given specific program will stop within a finite 
time is by
actually running it until it stops.
The proof is in the spirit of the previous “self-reference games”: assume 
there is a
program that can check whether any computer program will stop (Halt program). 
Prepare
another program which makes an infinite loop i.e., never stops (Go program). 
Then, make a
third Dual program which is composed of the first two such that a positive 
result of the Halt-
Buster part will activate the Go-Booster part. Now, if the Dual program is 
fed as input to the
Halt-Buster program it leads to a paradox: the Dual program is constructed so that, if it is to stop, the Halt-Buster part will activate the Go-Booster part so it shouldn't stop, and vice versa. In a similar manner it can be proven that a Turing machine in principle
can not answer
questions associated with running a program backward in time.
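The construction can be sketched in a few lines of Python (a hedged illustration of the argument above; halt_buster stands for the hypothetical Halt program, assumed only for the sake of contradiction and not actually implementable).

# Sketch of the self-reference argument above.  halt_buster stands for the
# hypothetical program that decides whether any given program halts on a
# given input; no such function can actually be written.
def halt_buster(program, argument):
    """Hypothetical oracle: return True iff program(argument) eventually stops."""
    raise NotImplementedError("assumed to exist only for the sake of the argument")

def go_booster():
    """The 'Go' part: an infinite loop, i.e. a program that never stops."""
    while True:
        pass

def dual(program):
    """The 'Dual' program: if the oracle predicts that program(program) halts,
    activate the Go-Booster; otherwise stop immediately."""
    if halt_buster(program, program):
        go_booster()   # predicted to halt, so refuse to halt
    return             # predicted to run forever, so halt at once

# Feeding dual to itself yields the paradox described in the text:
# halt_buster(dual, dual) can be neither True nor False without contradiction.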
Turing’s proof illustrates the fact that the notion of self-reference can not 
be part of
the space of functional correlations generated by a Universal Turing machine.
In this sense,
Turing proved that if indeed Nature is equivalent to his machine (the 
implicit assumption
associated with causal determinism), we, as parts of this machine, can not in 
principle
generate a complete description of its functioning - especially so with 
regard to issues related
to systems’ self-reference.
The above argumentations might appear to be nothing more than, at best, an amusing game. Four years later (in 1940), Turing helped convert his conceptual machine into a real one – the electromechanical Bombe (a forerunner of the electronic computer), which helped its human users decipher the codes produced by another machine, the German Enigma. For later discussion we emphasize the following: the Bombe provided the first illustration that, while a Turing machine is limited in answering, on its own, questions
own questions
about itself, it can provide a useful tool to aid humans in answering 
questions about other
systems, both artificial and natural. In other words, a Turing machine can be a
very useful tool
to help humans design another, improved Turing machine, but it is not capable 
of doing so on
its own - it can not answer questions about itself. In this sense, stand-alone machines can not
have in principle the features we proposed to associate with natural 
intelligence.
The Birth of Artificial Intelligence –
Turing’s Imitation Game
In his 1936 paper [57], Turing claims that a universal computing machine of 
the kind he
proposed can, in principle, perform any computation that a human being can 
carry out. Ten
years later, he began to explore the potential range of functional 
capabilities of computing
machinery beyond computing and in 1950 he published an influential paper, “
Computing
Machinery and Intelligence” [55], which led to the birth of Artificial 
Intelligence. The paper
starts with a statement:
“I propose to consider the question, ‘Can machines think?’ This should begin
with
definitions of the meaning of the terms ‘machine’ and ‘think’. The 
definitions might be
framed so as to reflect so far as possible the normal use of the words, but 
this attitude is
dangerous.”
So, in order to avoid the pitfalls of definitions of terms like ‘think’ and
‘intelligence’, Turing suggested replacing the question by another, which he 
claimed
“...is closely related to it and is expressed in relatively unambiguous 
words. The new
form of the problem can be described in terms of a game which we call the ‘
imitation
game’...”
This proposed game, known as Turing’s Intelligence Test, involves three 
players: a
human examiner of identities I, and two additional human beings, each having 
a different
associated identity. Turing specifically proposed to use gender identity: a 
man A and a
woman B. The idea of the game is that the identifier I knows (A;B) as (X;Y) 
and he has to
identify, by written communication, who is who, aided by B (a cooperator) 
against the
deceiving communication received from A (a defector). The purpose of I and B 
is that I will
be able to identify who is A. The identity of I is not specified in Turing's paper, which says only that the interrogator may be of either sex.
It is implicitly assumed that the three players have a common language, which 
can be used
also by machines, and that I, A, and B also have a notion about the identity 
of the other
players. Turing looked at the game from a machinery vs. human perspective, 
asking
‘What will happen when a machine takes the part of A in this game?’
He proposed that a machine capable of causing I to fail in his 
identifications as often as a
man would, should be regarded as intelligent. That is, the rate of false
identifications of A made
by I with the aid of B is a measure of the intelligence of A.
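This measure can be written down directly; the following minimal sketch in Python (with invented trial records, not taken from Turing's paper) simply counts how often I, aided by B, names the wrong player as A.

# Minimal sketch of the scoring rule stated above: A's success is measured by
# how often the interrogator I, aided by B, misidentifies A.  The trial records
# below are invented placeholders.
def deception_rate(trials):
    """Fraction of games in which I failed to identify A correctly."""
    failures = sum(1 for t in trials if t["named_as_A"] != t["true_A"])
    return failures / len(trials)

trials = [
    {"true_A": "player1", "named_as_A": "player2"},   # I was fooled
    {"true_A": "player1", "named_as_A": "player1"},   # I identified A
    {"true_A": "player2", "named_as_A": "player1"},   # I was fooled
]
print(deception_rate(trials))   # 2/3: A fooled I in two of the three games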
So, Turing’s intelligence test is actually about self identity and associated 
identity and
the ability to identify non-self identity of different kinds! Turing himself 
referred to his game
as an ‘imitation game’. Currently, the game is usually presented in a 
different version - an
intelligent being I has to identify who the machine is, while the machine A attempts to imitate an intelligent being. Moreover, it is perceived that the Inquirer I bases his identification on which player appears to him more intelligent. Namely, the game is presented as
an intelligence competition, and not about Self-Non-Self identity as was 
originally proposed
by Turing.
>From Kasparov’s Mistake to Bacterial Wisdom
Already in 1947, in a public lecture [15], Turing presented a vision that within 50 years computers would be able to compete with people at chess. The victory of Deep Blue over Kasparov exactly 50 years later is perceived today by many, scientists and laymen alike, as clear proof of computers' Artificial Intelligence [109,110]. Turing
himself considered
success in a chess game only a reflection of superior computational 
capabilities (the
computer’s ability to compute very fast all possible configurations). In his 
view, success in
the imitation game was a greater challenge. In fact, the connection between 
success in the
imitation game and intelligence is not explicitly discussed in his 1950 
paper. Yet, it has
come to be perceived as an intelligence test and led to the current
dominant view of
Artificial Intelligence, that in principle any living organism is equivalent 
to a universal
Turing machine [107-110].
Those who view the imitation game as an intelligence test of the machine
usually assume that the machine’s success in the game reflects the machine’s 
inherent talent.
We follow the view that the imitation game is not about the machine’s talent 
but about the
talent of the designer of the machine who ‘trained it’ to play the role of A.
The above interpretation is consistent with Kasparov’s description of his 
chess
game with Deep Blue. According to him, he lost because he failed to foresee 
that after the
first match (which he won) the computer was rebuilt and reprogrammed to play 
positional
chess. So Kasparov opened with the wrong strategy, thus losing because of wrong decision-making not in chess but in predicting the intentions of his human opponents (he wrongly assumed that computer design still hadn't reached the level of playing positional chess).
Thus he lost because he underestimated his opponents. The ability to properly 
evaluate one's own intelligence in comparison to that of others is an essential feature of
natural intelligence. It
illustrates that humans with higher analytical skills can have lower skills 
associated with
natural intelligence and vice versa: the large team that designed and 
programmed Deep Blue
properly evaluated Kasparov’s superior talent relative to that of each one of 
them on their own.
So, before the second match, they extended their team. Bacteria, being the 
most primordial
organisms, had to adopt a similar strategy to survive when higher organisms 
evolved. The
“Bacterial Wisdom” principle [9,47] is that proper cooperation of
individuals driven by a
common goal can generate a new group-self with superior collective 
intelligence. However,
the formation of such a collective self requires that each of the individuals be able to alter its own self and adapt it to that of the group (Appendix A).
Information-Based Artificial Intelligence vs.
Meaning-Based Natural Intelligence
We propose to associate (vs. define) meaning-based, natural intelligence 
with: conduction of
semantic and pragmatic communication, assignment and generation of meaning, 
formation of
self-identity (distinction between intrinsic and extrinsic meaning) and of 
associated identity
(i.e., of the group the individual belongs to), identification of natural 
intelligence in other
organisms, intentional behavior, decision-making and intentionally designed self-alterations. Below we explain why these features are not likely to be sustained by a
universal Turing
machine, irrespective of how advanced its information-based artificial 
intelligence might be.
Turing set his original imitation game to be played by machine rules: 1. The self-identities
are not allowed to be altered during the game. So, for example, the 
cooperators can
not alter together their associated identity - the strategy bacteria adopt to 
identify defectors. 2.
The players use fixed-in-time, universal-machine-like language (no semantic 
and pragmatic
aspects). In contrast, the strategy bacteria use is to modify their dialect 
to improve the
semantic and pragmatic aspect of their communication. 3. The efficiency of 
playing the game
has no causal drive, i.e., there is no reward or punishment. 4. The time 
frame within which
the game is to be played is not specified. As a result, there is inherent 
inconsistency in the
way Turing formulated his imitation game, and the game can not let the 
special features of
natural intelligence be expressed.
As Turing proved, computing machines are equivalent to formal axiomatic 
systems
that are constructed to be clean of meaning. Hence, by definition, no 
computer can generate
its own intrinsic meanings that are distinguishable from externally imposed 
ones. This, in turn, implies the obvious – computers can not have inherent notions of identity and self-identity. So, if the statement 'When a machine takes the part of A in this
game’ refers to the
machine as an independent player, the game has to be either inconsistent or 
undecidable. By
independent player we mean the use of some general-purpose machine (i.e., 
designed without
specific task in mind, which is analogous to the construction of a 
meaning-free, formal
axiomatic system). The other possibility is that Turing had in mind a 
specific machine,
specially prepared for the specific game with the specific players in mind. 
In this case, the
formulation of the game has no inconsistency/undecidability, but then the 
game is about the
meaning-based, causality-driven creativity of the designer of the machine and 
not about the
machine itself. Therefore, we propose to interpret the statement ‘When a 
machine takes the
part of A’ as implying that ‘A sends a Pre-designed and Pre-programmed 
machine to play
his role in the specific game’.
The performance of a specific machine in a specific game is information-based
Artificial Intelligence. The machine can even perform better than some humans 
in the
specific game, with agreed-upon, fixed (time-invariant) rules, that it has been designed to play.
However, the machine is the product of the meaning-based Natural Intelligence 
and the
causality-driven creativity of its designer. The designer can design 
different machines
according to the causal needs he foresees. Moreover, by learning from his 
experience and by
using purposefully gathered knowledge, he can improve his skills to create 
better machines.
It seems that Turing did realize the essential differences between some of 
the features
we associate here with Natural Intelligence vs. Artificial Intelligence. So, 
for example, he
wouldn’t have classified Deep Blue as an Intelligent Machine. In an 
unpublished report from
1948, entitled “Intelligent Machinery”, machine intelligence is discussed 
mainly from the
perspective of human intelligence. In this report, Turing explains that 
intelligence requires
learning, which in turn requires the machine to have sufficient flexibility, 
including self-alteration
capabilities (the equivalent of today’s neuro-plasticity). It is further 
implied that
the machine should have the freedom to make mistakes. The importance of 
reward and
punishment in machine learning is emphasized (see the report summary
shown below).
Turing also relates the machine’s learning capabilities to what today would 
be referred to as
a genetic algorithm, one which would fit the recent realizations about the
genome (Appendix
F).
In this regard, we point out that organisms’ decision-making and creativity 
which are
based on learning from experience (explained below) must involve learning 
from past
mistakes. Hence, an inseparable feature of natural intelligence is the 
freedom to err with
readiness to bear the consequences.
Beyond Machinery - Games of Natural Intelligence
Since the rules of Turing’s imitation game do not let the special features of 
natural
intelligence be expressed, the game can not be used to distinguish natural
from artificial
intelligence. The rules of the game must be modified to let the features of 
natural intelligence
be expressed, but in a manner machines can technically imitate.
First, several kinds of communication channels that can allow exchange of
meaning-bearing messages should be included, in addition to the written 
messages. Clearly,
all communication channels should be such that they can be transferred and
synthesized by a
machine; speech, music, pictures and physiological information (like that 
used in polygraph
tests) are some examples of such channels. We emphasize that two-way
communication is
used so, for example, the examiner (I) can present to (B) a picture he asked 
(A) to draw and
vice versa. Second, the game should be set to test the ability of human (I) 
vs. machine (I) to
make correct identification of (A) and (B), instead of testing the ability of 
human (A) vs.
machine (A) to cause human (I) false identifications. Third, the game should 
start after the
examiner (I) has had a training period. Namely, a period of time during which 
he is allowed to
communicate with (A) and (B) knowing who is who, to learn from his own 
experience about
their identities. Both the training period and the game itself should be for 
a specified
duration, say an hour each. The training period can be used by the examiner
in various
ways; for example, he can expose the players to pictures, music pieces, 
extracts from
literature, and ask them to describe their impressions and feelings. He can 
also ask each of
them to reflect on the response of the other one or explain his own response. 
Another
efficient form of training can be to ask each player to create his own art piece and
reflect on the one
created by the other. The training period can also be used by the examiner 
(I) to train (B) in
new games. For example, he could teach the other players a new game with 
built-in rewards
for the three of them to play. What we suggest is a way to instill in the 
imitation game
intrinsic meaning for the player by reward and decision-making.
The game can be played to test the ability of machine (I) vs. human (I) to
distinguish correctly between various kinds of identities: machine vs. human 
(in this case, the
machine should be identical to the one who plays the examiner), or two 
associated human
identities (like gender, age, profession, etc.).
The above are examples of natural intelligence games we expect machinery to
lose, and as such they can provide proper tests to distinguish their 
artificial intelligence from
the natural intelligence of living systems.
Let Bacteria Play the Game of Natural Intelligence
We proposed that even bacteria have natural intelligence beyond machinery: 
unlike a
machine, a bacterial colony can improve itself by alteration of gene 
expression, cell
differentiation and even generation of new inheritable genetic ‘tools’. 
During colonial
development, bacteria collectively use inherited knowledge together with 
causal information
they gather from the environment, including other organisms (Appendix A). For
that, semantic
chemical messages are used by the bacteria to conduct dialogue, to 
cooperatively assess their
situation and make contextual decisions accordingly for better colonial 
adaptability
(Appendix B). Should these notions be understood as useful metaphors or as 
a disregarded
reality?
Another example of a natural intelligence game could be a Bridge game between a machine-and-human team playing against a team of two human players.
This
version of the game is similar to the real life survival ‘game’ between 
cooperators and
cheaters (cooperative behavior of organisms goes hand in hand with cheating, 
i.e., selfish
individuals who take advantage of the cooperative effort). An efficient way 
cooperators can
single out the defectors is by using their natural intelligence - semantic 
and pragmatic
communication for collective alteration of their own identity, to outsmart 
the cheaters who
use their own natural intelligence for imitating the identity of the 
cooperators [111-114].
In Appendix A we describe how even bacteria use communication to generate
evolvable self-identity together with a special "dialect", so fellow bacteria can find each other in a crowd of strangers (e.g., biofilms of different colonies of the same and
different species).
For that, they use semantic chemical messages that can initiate specific 
alteration only with
fellow bacteria and with shared common knowledge (Appendix C). So in the 
presence of
defectors they modify their self-identity in a way unpredictable to an 
external observer not
having the same genome and specific gene-expression state. The external 
observer can be
other microorganisms, our immune system or our scientific tools.
The experimental challenge to demonstrate the above notions is to devise an 
identity
game bacteria can play to test if bacteria can conduct a dialogue to 
recognize self vs. non-self
[111-114]. Inspired by Turing’s imitation game, we adopted a new conceptual 
methodology
to let the bacteria tell us about their self-identity, which indeed they do: 
Bacterial colonies
from the same culture are grown under the same growth conditions to show that 
they exhibit
similar-looking patterns (Fig 1), as is observed during self-organization of 
azoic systems
[7,8,99,100]. However, unlike for azoic systems, each of the colonies 
develops its own self-identity in a manner no azoic system is expected to do.
Fig 1. Observed level of reproducibility during colonial development: growth of two colonies of Paenibacillus vortex taken from the same parent colony and grown under the same growth conditions.
For that, the next stage is to grow four colonies on the same plate. In
one case all are
taken from the same parent colony and in the other case they are taken from 
two different yet
similar-looking colonies (like those shown in Fig 1). In preliminary 
experiments we found
that the growth patterns in the two cases are significantly different. These 
observations imply
that the colonies can recognize if the other colonies came from the same 
parent colony or
from a different one. We emphasize that this is a collective phenomenon, and 
if the bacteria
taken from the parent colonies are first grown as isolated bacteria in fluid, 
the effect is
washed out.
It has been proposed that such colonial self-identity might be generated 
during the
several hours of stationary ‘embryonic stage’ or collective training 
duration of the colonies
between the time they are placed on the new surface and the time they start to expand. During this period, they collectively generate their own specific colonial self-identity [62,63]. These
findings revive Schrödinger’s dilemma, about the conversion of genetic 
information
(embedded in structural coding) into a functioning organism. A dilemma 
largely assumed to
be obsolete in light of the new experimental findings in life sciences when 
combined with the
Neo-Darwinian the Adaptive Complex Systems paradigms [51,115-120]. The 
latter, currently
the dominant paradigm in the science of complexity is based on the ‘top-level 
emergence’
principle which has evolved from Anderson’s constructivism (‘More is 
Different’ [53]).
Beyond Neo-Darwinism – Symbiogenesis on All Levels
Accordingly, it is now largely assumed that all aspects of life can in
principle be explained
solely on the basis of information storage in the structure of the genetic 
material. Hence, an
individual bacterium, bacterial colony or any eukaryotic organism is in 
principle analogous
to a pre-designed Turing machine. In this analogy, the environment provides 
energy (electric
power of the computer) and absorbs the metabolic waste products (the 
dissipated heat), and
the DNA is the program that runs on the machine. Unlike in an ordinary Turing 
machine, the
program also has instructions for the machine to duplicate and disassemble 
itself and
assemble many machines into an advanced machine – the dominant Top-Level 
Emergence
view in the studies of complex systems and systems biology based on the
Neo-Darwinian
paradigm.
However, recent observations during bacterial cooperative self-organization 
show features
that can not be explained by this picture (Appendix A). Ben Jacob reasoned 
that Anderson’s
constructivism is insufficient to explain bacterial self-organization. Hence, 
it should be
extended to a “More is Different on All Levels” or all-level generativism 
[9]. The idea is that
biotic self-organization involves self-organization and contextual alteration 
of the
constituents of the biotic system on all levels (down to the genome). The 
alterations are based
on stored information, external information, information processing and 
collective decision-making
following semantic and pragmatic communication on all levels. Intentional
alterations (neither pre-designed nor due to random changes) are possible, 
however, only if
they are performed on all levels. Unlike the Neo-Darwinian-based top-level emergence, all-level
emergence can account for the features associated with natural intelligence. 
For
example, in the colony, communication allows collective alterations of the 
intracellular state
of the individual bacteria, including the genome, the intracellular gel and 
the membrane. For
a bacterial colony as an organism, all-level generativism requires collective ‘
natural genetic
engineering’ together with ‘creative genomic webs’ [45-47]. In a manuscript 
entitled:
“Bacterial wisdom, Gödel’s theorem and Creative Genomic Webs”, Ben Jacob 
refers to the
following special genomic abilities of individual bacteria when they are the
building agents of a
colony.
In the prologue we quoted Margulis’ and Sagan’s criticisms of the 
incompleteness of the
Neo-Darwinian paradigm and the crucial role of symbiogenesis in the 
transition from
prokaryotes to eukaryotes and the evolution of the latter. With regard to 
eukaryotic
organisms, an additional major difficulty arises from the notion that all the 
required
information to sustain the life of the organism is embedded in the structure 
of its genetic
code: this information seems useless without the surrounding cellular 
machinery [123,124].
While the structural coding contains basic instructions on how to prepare 
many components
of the machinery – namely, proteins – it is unlikely to contain full 
instructions on how to
assemble them into multi-molecular structures to create a functional cell. We 
mentioned
mitochondria that carry their own genetic code. In addition, membranes, for 
example, contain
lipids, which are not internally coded but are absorbed from food intake 
according to the
functional state of the organism.
Thus, we are back to Schrödinger’s chicken-and-egg paradox – the coding 
parts of the DNA
require pre-existing proteins to create new proteins and to make them 
functional. The
problem may be conceptually related to Russell’s self-reference paradoxes and 
Gödel’s
theorems: it is possible in principle to construct a mapping between the
genetic information
and statements about the genetic information. Hence, according to a proper 
version of
Gödel’s theorem (for finite system [47]), the structural coding can not be 
both complete and
self-consistent for the organism to live, replicate and have programmed cell 
death. In this
sense, the Neo-Darwinian paradigm can not be both self-consistent and 
complete to describe
the organism’s lifecycle. In other words, within this paradigm, the 
transition from the coding
part of the DNA to the construction of a functioning organism is 
metaphorically like the
construction of mathematics from a formal axiomatic system. This logical 
difficulty is
discussed by Winfree [125] in his review on Delbruck’s book “Mind from 
Matter? An Essay
on Evolutionary Epistemology”.
New discoveries about the role of transposable elements and the abilities of 
the Junk DNA to
alter the genome (including generation of new genes) during the organism’s 
lifecycle support
the new picture proposed in the above-mentioned paper. So, it seems more
likely now that
indeed the Junk DNA and transposable elements provide the necessary 
mechanisms for the
formation of creative genomic webs. The human genome project provided 
additional clues
about the functioning of the genome, and in particular the Junk DNA in light 
of the
unexpectedly low number of coding genes together with equally unexpectedly 
high numbers
of transposable elements, as described in Appendix B. These new findings on 
the genomic
level together with the new understanding about the roles played by 
mitochondria [126-132]
imply that the current Neo-Darwinian paradigm should be questioned. Could it 
be that
mitochondria, the intelligent intracellular bacterial colonies in eukaryotic
cells, provide a
manifestation of symbiogenesis on all levels?
Learning from Experience –
Harnessing the Past to Free the Future
Back to bacteria, the colony as a whole and each of the individual bacteria 
are continuously
self-organized open systems: the colonial self-organization is coupled to the internal self-organization process of each of the individual bacteria. Three intermingled elements are
involved in the internal process: 1. The genetic components, including the chromosomal genetic sequences and additional free genetic elements like transposons and plasmids. 2. The membrane, including the integrated proteins and attached networks of proteins, etc. 3. The
intracellular gel, including the machinery required to change its 
composition, to reorganize
the genetic components, to reorganize the membrane, to exchange matter, 
energy and
information with the surroundings, etc. In addition, we specifically follow the assumption that usable information can be stored in the cell's internal state of spatio-temporal
structures and
functional correlations. The internal state can be self-altered, for example 
via alterations of
the part of the genetic sequences which store information about transcription 
control. Hence,
the combination of the genome and the intra-cellular gel is a system with 
self-reference.
Hence, the following features of genome cybernetics [9,50] can be sustained.
1. Storage of past external information and its contextual internal interpretation.
2. Storage of information about the system's past selected and possible states.
3. Hybrid digital-analog processing of information.
4. Hybrid hardware-software processing of information.
The idea is that the hardware can be self-altered according to the needs and 
outcome of the
information processing, and part of the software is stored in the structure 
of the hardware
itself, which can be self-altered, so the software can have self-reference
and change itself.
Such mechanisms may take a variety of different forms. The simplest 
possibility is by
ordinary genome regulation – the state of gene expression and 
communication-based
collective gene expression of many organisms. For eukaryotes, the 
mitochondria acting like a
bacterial colony can allow such collective gene expression of their own 
independent genes.
In this regard, it is interesting to note that about 2/3 of the mitochondria's genetic material is
not coding for proteins.
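A toy sketch in Python (ours, and in no way a biological model) may make the hardware-software point above concrete: the rule table (the 'software') is stored inside the very structure it operates on (the 'hardware'), and the outcome of each processing step may rewrite the rules themselves.

# Toy sketch of 'software stored in self-alterable hardware' (not a biological
# model): the rule table lives inside the same state it acts on, and the outcome
# of each processing step can rewrite the rules themselves.
state = {
    "memory": [0, 0, 0],                    # the 'hardware' part being processed
    "rules": {"threshold": 2, "gain": 1},   # the 'software', stored in the same structure
}

def step(state, external_input):
    """Process one input; if the outcome crosses the current threshold,
    the rules themselves are altered (self-reference)."""
    state["memory"].append(external_input * state["rules"]["gain"])
    outcome = sum(state["memory"][-3:])
    if outcome > state["rules"]["threshold"]:
        # self-alteration: the result of processing rewrites the 'program'
        state["rules"]["threshold"] = outcome
        state["rules"]["gain"] += 1
    return outcome

for x in [1, 2, 0, 3]:
    print(step(state, x), state["rules"])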
Genome cybernetics has been proposed to explain the reconstruction of the 
coding DNA
nucleus in ciliates [133,134]. The specific strains studied have two nuclei, one containing only DNA that codes for proteins and one containing only non-coding DNA. Upon replication, the coding nucleus disintegrates and the non-coding one is replicated. After replication, the non-coding nucleus builds a new coding nucleus. It has been shown that this is done using transposable elements in a computational process.
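The flavor of that computational process can be conveyed by a toy sketch (invented segment names and pointer sequences, not the actual mechanism reported in [133,134]): each scrambled segment carries an incoming and an outgoing pointer, and chaining matching pointers restores the coding order.

# Toy sketch (invented sequences) of pointer-guided descrambling: each segment
# carries an incoming and an outgoing pointer; following matching pointers
# recovers the coding order of the segments.
segments = [
    # (incoming_pointer, payload, outgoing_pointer)
    ("AC", "segment3", "TT"),
    ("GG", "segment1", "CA"),
    ("CA", "segment2", "AC"),
]

def descramble(segments, start_pointer="GG", end_pointer="TT"):
    """Chain the segments by matching each outgoing pointer to the next
    segment's incoming pointer, starting from start_pointer."""
    by_incoming = {incoming: (payload, outgoing)
                   for incoming, payload, outgoing in segments}
    order, pointer = [], start_pointer
    while pointer != end_pointer:
        payload, pointer = by_incoming[pointer]
        order.append(payload)
    return order

print(descramble(segments))   # ['segment1', 'segment2', 'segment3']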
More recent work shows that transposable elements can effectively re-program 
the genome
between replications [135]. In yeast, these elements can insert themselves 
into messenger
RNA and give rise to new proteins without eliminating old ones [136]. These
findings
illustrate that rather than wait for mutations to occur randomly, cells can 
apparently keep some genetic variations on tap and move them to 'hard disk' storage in the
coding part of the
DNA if they turn out to be beneficial over several life cycles. Some 
observations hint that the
collective intelligence of the intracellular mitochondrial colonies plays a
crucial role in these
processes of self-improvement [128-132].
Here, we further assume the existence of the following features:
5. Storage of the information and the knowledge explicitly in its internal spatio-temporal structural organizations.
6. Storage of the information and the knowledge implicitly in functional organizations (composons) in its corresponding high-dimensional space of affinities.
7. Continuous generation of models of itself by reflecting its stored knowledge forward (in the space of affinities).
The idea of a high-dimensional space of affinities (renormalized correlations) has been developed by Baruchi and Ben Jacob [137] for analyzing multi-channel
recorded activity
(from gene expression to human cortex). They have shown the coexistence of 
functional
composons (functional sub-networks) in the space of affinities for recorded 
brain activity.
With this picture in mind, the system’s models of itself are not necessarily
dedicated ‘units’ of the system in the real space but in the space of 
affinities, so the models
should be understood as a caricature of the system in real space including 
the models themselves - a caricature in the sense that maximal meaningful information is represented.
In addition, the
system’s hierarchical organization enables the smaller scales to contain 
information about the
larger scale they themselves form – metaphorically, like the formation of 
meanings of words
in sentences as we explain in Appendix B. The larger scale, the analog of the 
sentence and
the reader’s previous knowledge, selects between the possible lower scale 
organizations. The
system’s real time is represented in the models by a faster internal time, so 
at every moment
in real time the system has information about possible caricatures of itself 
at later times.
The reason that internal multiple composons (that serve as models) can 
coexist has to do
with the fact that going backward in time is undecidable for an external observer (e.g., solving reaction-diffusion equations backward in time is an ill-posed problem). So what we suggest is
that, by
projecting the internally stored information about the past (which can not be 
reconstructed by an external observer), living organisms utilize the fact that going backward in
time is
undetermined for regulated freedom of response: to have a range of possible 
courses of future
behavior from which they have the freedom to select intentionally according 
to their past
experience, present circumstances, and inherent predictions of the future. In 
contrast, the
fundamental assumption in the studies of complex adaptive systems according 
to Gell-Mann
[115] is that the behavior of organisms is determined by accumulations of
accidents.
Any entity in the world around us, such as an individual human being, owes its
existence not only to the simple fundamental law of physics and the boundary 
condition
on the early universe but also to the outcomes of an inconceivably long 
sequence of
probabilistic events, each of which could have turned out differently. Now a 
great many
of those accidents, for instance most cases of the bouncing of a particular 
molecule in a
gas to the right rather than the left in a molecular collision, have few 
ramifications for
the future coarse-grained histories. Sometimes, however, an accident can have
widespread consequences for the future, although those are typically 
restricted to
particular regions of space and time. Such a "frozen accident" produces a 
great deal of
mutual algorithmic information among various parts or aspects of a future 
coarse-grained
history of the universe, for many such histories and for various ways of
dividing them up.
We propose that organisms use stored relevant information to generate an 
internal
mixed yet decomposable (separable) state of multiple options analogous to 
quantum
mechanical superposition of states. In this sense, the process of
decision-making to select a
specific response to external stimuli is conceptually like the projection of 
the wave function
in quantum mechanical measurement. There are two fundamental differences, 
though: 1. In
quantum measurement, the external observer directly causes the collapse of the system onto an eigenstate of an observable he pre-selects. Namely, the set of possible eigenstates is predetermined while the specific outcome is not. In the organism's decision-making, the
external stimuli
initiate the selection of a specific state (collapse on a specific response). 
The selected state is
in principle unknown directly to an external observer. The initiated internal 
decomposition of
the mixed states and the selection of a specific one are performed according 
to stored past
information. 2. In quantum measurement, the previous possible (expected) 
eigenvalues of the
other eigenstates are erased and assigned new uncertainties. In the organism's decision-making, the process is qualitatively different: the external stimuli initiate
decomposition of
the mixed states by the organism itself. The information about the other 
available options is
stored after the selection of the specific response. Therefore, the 
unselected past options are
expected to affect consequent decision-making.
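A toy sketch of difference 2 (our illustration in Python, with invented options and weights): each stimulus triggers a selection among weighted internal options, but the options that were not chosen are retained, and may even be reinforced, so that they bias later decisions; in a quantum measurement, by contrast, the unselected alternatives are simply discarded.

# Toy sketch (invented options and weights) of decision-making that keeps its
# unselected options: a stimulus triggers selection among weighted internal
# options, but the options not chosen are retained and mildly reinforced, so
# they can bias later decisions.
import random

random.seed(1)
options = {"swarm": 1.0, "sporulate": 1.0, "disperse": 1.0}   # internal mixed state

def decide(options, stimulus_bias):
    """Select one option with probability proportional to weight * bias,
    then keep (and mildly reinforce) the unselected options."""
    weights = {name: w * stimulus_bias.get(name, 1.0) for name, w in options.items()}
    choice = random.choices(list(weights), weights=list(weights.values()))[0]
    for name in options:
        if name != choice:
            options[name] *= 1.1   # unselected past options remain in the mixture
    return choice

print(decide(options, {"swarm": 3.0}))   # the stimulus favours swarming...
print(decide(options, {}))               # ...but earlier unselected options still matter
print(options)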
Decomposable Mixed State of Multiple-Options –
A Metaphor or Testable Reality?
The above picture is rejected on the grounds that in principle the existence 
of a mixed and
decomposable state of multiple options can not be tested experimentally. In 
this sense, the
objection is similar in spirit to the objections to the existence of the 
choice function in
mathematics (Appendix D), and the wave function in physics (Appendix E).
The current experimental methodology in life science (disintegrating the 
organism
or exposing it to lethal stress) is conceptually similar to the notion of “strong measurements”
or “destructive measurements” in quantum mechanics in which the wave 
function is forced to
collapse. Therefore, the existence of an internal state decomposable only by 
the organism
itself can not be tested by that approach. A new conceptual methodology is 
required, of
protective biotic measurements. For example, biofluorometry can be used to
measure the
intracellular spatio-temporal organization and functional correlations in a 
living organism
exposed to weak stress. Conceptually, fluorometry is similar to quantum
non-demolition and
weak stress is similar to the notion of weak quantum measurements. Both allow 
the
measurement of the quantum state of a system without forcing the wave 
function to collapse.
Bacterial collective learning when exposed to non-lethal levels of 
antibiotics provides an
example of protective biotic measurements (Appendix E).
Fig 2. Confocal image of mitochondria within a single cultured rat cortical 
astrocyte
stained with the calcium-sensitive dye rhod-2 which partitions into 
mitochondria, permitting
direct measurements of intramitochondrial calcium concentration (courtesy of Michael Duchen).
It should be kept in mind that the conceptual analogy with quantum mechanics 
is subtle and
can be deceiving rather than inspiring if not properly used. For 
clarification, let us consider
the two-slit experiment for electrons. When the external observer measures 
through which of
the slits the electron passes, the interference pattern is washed out - the 
measurement causes
the wave function of the incoming electron to collapse on one of the two 
otherwise available
states.
Imagine now an equivalent two-slit experiment for organisms. In this thought
experiment, the organisms arrive at a wall with two closely located narrow 
open gates.
Behind the wall there are many bowls of food placed along an arc so that they 
are all at equal
distance from the gates. The organisms first choose through which of the two 
gates to pass
and then select one bowl of food. The experiment is performed with many 
organisms, and the
combined decisions are presented in a histogram of the selected bowls. In the 
control
experiment, two independent histograms are measured, for each door separately 
(no decisionmaking
is required). The distribution when the two gates are open is compared with 
the sum
of the distributions for the single gates. A statistically significant 
difference will indicate that
past unselected options can influence consequent decision-making even if the 
following
decision involves a different choice altogether (gates vs. food bowls).
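The proposed comparison can be sketched numerically (a hedged illustration with invented placeholder counts; scipy's chi-squared test is assumed to be available): the bowl-choice histogram observed with both gates open is tested against the expectation obtained by summing the two single-gate control histograms.

# Sketch of the comparison proposed above, with invented placeholder counts:
# the histogram observed with both gates open is tested against the expectation
# formed by summing the two single-gate control histograms.
from scipy.stats import chisquare   # assumed available

gate_A_only = [30, 25, 20, 15, 10]   # control: only gate A open
gate_B_only = [10, 15, 20, 25, 30]   # control: only gate B open
both_gates  = [55, 30, 20, 30, 65]   # experiment: both gates open

expected = [a + b for a, b in zip(gate_A_only, gate_B_only)]  # 'no interaction' prediction
scale = sum(both_gates) / sum(expected)                       # match the observed total
expected = [e * scale for e in expected]

statistic, p_value = chisquare(both_gates, f_exp=expected)
print(statistic, p_value)   # a small p-value would indicate that the combined-decision
                            # distribution differs from the sum of the two controls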
Upon completion of this monograph, the development of a Robot-Scientist has 
just been
reported [138]. The machine was given the problem of discovering the function 
of different
genes in yeast, to demonstrate its ability to generate a set of hypotheses 
from what is known
about biochemistry and then design experiments and interpret the results 
(assign meaning)
without human help. Does this development provide the ultimate proof that 
there is no
distinction between Artificial Intelligence and Natural Intelligence? 
Obviously, advanced
automated technology interfaced with learning software can make an important contribution. It may relieve human researchers of doing what machines can do, thus freeing
them to be
more creative and to devote more effort to their beyond-machinery thinking. 
We don’t
expect, however, that a robot scientist will be able to design experiments to 
test, for example,
self-identity and decision-making, for the simple reason that it could not 
grasp these
concepts.
Epilogue – From Bacteria Shalt Thou Learn
Mutations as the causal driving force for the emergence of the diversity and 
complexity of
organisms and biosystems became the most fundamental principle in life 
sciences ever since
Darwin gave mutations a key role in natural selection.
Consequently, research in life sciences has been guided by the assumption 
that the
complexity of life can become comprehensible if we accumulate sufficient 
amounts of
detailed information. The information is to be deciphered with the aid of 
advanced
mathematical methods within the Neo-Darwinian schemata. To quote Gell-Mann,
Life can perfectly well emerge from the laws of physics plus accidents, and 
mind, from
neurobiology. It is not necessary to assume additional mechanisms or hidden 
causes.
Once emergence is considered, a huge burden is lifted from the inquiring mind. We don't
need something more in order to get something more.
This quote represents the currently dominant view of life as a unique
physical phenomenon
that began as a colossal accident, and continues to evolve via sequences of 
accidents selected
by random number generators – the omnipotent idols of science. We reason 
that, according to
this top-level emergence picture, organisms could not have evolved to have 
meaning-based,
natural intelligence beyond that of machinery.
Interestingly, Darwin himself didn’t consider mutations to be necessarily 
random, and
thought the environment could trigger adaptive changes in organisms – a notion
associated
with Lamarckism. Darwin did comment, however, that it is reasonable to treat 
alterations as
random, so long as we do not know their origin. He says:
“I have hitherto sometimes spoken as if the variations were due to chance. 
This, of
course, is a wholly incorrect expression, but it serves to acknowledge 
plainly our
ignorance of the cause of each particular variation… lead to the conclusion 
that
variability is generally related to the conditions of life to which each 
species has been
exposed during several successive generations”.
In 1943, Luria and Delbruck performed a cornerstone experiment to prove that 
random
mutations exist by exposing bacteria to lethal conditions – a bacteriophage that immediately kills non-resistant bacteria. Therefore, only cells with pre-existing specific mutations could survive. The other cells didn't have the chance to alter themselves - a
possibility that could
not be ruled out by the experiments. Nevertheless, these experiments were 
taken as crucial
support for the Neo-Darwinian dogma which states that all mutations are 
random, and can
occur only during DNA replication. To bridge between these experiments, Turing's imitation game and the notion of weak measurements in quantum mechanics, we suggest testing natural
intelligence by first giving the organisms a chance to learn from hard but 
non-lethal
conditions. We also proposed to let the bacteria play an identity game suited to testing their natural intelligence, similar in spirit to the real-life games played between
different colonies
and even with other organisms [139].
In Turing’s footsteps, we propose to play his imitation game with the reverse 
goal in
mind. Namely, human players participate in the game to learn about 
themselves. By playing
this reverse game with bacteria, - Nature’s fundamental organisms from which 
all life
emerged - we should be able to learn about the very essence of our self. This 
is especially so
when keeping in mind that the life, death and well-being of each of our cells
depend on the
cooperation of its own intelligent bacterial colony – the mitochondria. 
Specifically, we
believe that understanding bacterial natural intelligence as manifested in 
mitochondria might
be crucial for understanding the meaning-based natural intelligence of the 
immune system
and the central nervous system, the two intelligent systems we use for 
interacting with other
organisms in the game of life. Indeed, it has recently been demonstrated that 
mice with
identical nuclear genomes can have very different cognitive functioning if 
they do not have
the same mitochondria in their cytoplasm. The mitochondria are not 
transferred with the
nucleus during cloning procedures [140].
To quote Schrödinger,
Democritus introduces the intellect having an argument with the senses about 
what is
'real'. The intellect says; 'Ostensibly there is color, ostensibly sweetness, 
ostensibly
bitterness, actually only atoms and the void.' To which the senses retort; 
'Poor intellect,
do you hope to defeat us while from us you borrow your evidence? Your victory 
is your
defeat.'
Acknowledgment
We thank Ben Jacob’s student, Itay Baruchi, for many conversations about the 
potential
implications of the space of affinities, the concept he and Eshel have 
recently developed
together. Some of the ideas about bacterial self-organization and collective 
intelligence were
developed in collaboration with Herbert Levine. We benefited from enlightening
conversations, insights and comments by Michal Ben-Jacob, Howard Bloom, Joel 
Isaacson,
Yuval Neeman and Alfred Tauber. The conceptual ideas could be converted into 
concrete
observations thanks to the devoted and precise work of Inna Brainis. This 
work was
supported in part by the Maguy-Glass Chair in Physics of Complex Systems.
Personal Thanks by Eshel Ben-Jacob
About twenty-five years ago, when I was a physics graduate student, I read 
the book “The
Myth of Tantalus” and discovered there a new world of ideas. I went to seek 
the author, and
found a special person with vast knowledge and a humane approach. Our dialogue
led to the
establishment of a unique, multidisciplinary seminar, where themes like “the 
origin of
creativity” and “mind and matter” were discussed from different 
perspectives. Some of the
questions have remained with me ever since, and are discussed in this 
monograph.
Over the years I have had illuminating dialogues with my teacher Yakir 
Aharonov about the
foundations of quantum mechanics and with my friend Adam Tenenbaum about 
logic and
philosophy.
In my Post-Doctoral years, I was very fortunate to meet the late Uri Merry, 
who introduced
me to the world of social science and linguistics and to Buber’s philosophy. 
Among other
things, we discussed the role of semantic and pragmatic communication in the 
emergence of
individual and group self.
References
[1] Schrödinger, E. (1943) What is life? The Physical Aspect of the Living 
Cell. Based on
lectures delivered under the auspices of the Dublin Institute for Advanced 
Studies at Trinity
College, Dublin, in February 1943. home.att.net/~p.caimi/Life.doc ; (1944) 
What is life?
The Physical Aspect of the Living Cell, Cambridge University Press. (1958) Mind
and Matter.
Cambridge University Press, Cambridge. (1992) What Is Life? The Physical 
Aspect of the
Living Cell with Mind and Matter and Autobiographical Sketches with foreword
by R.
Penrose
[2] Delbrück, M. (1946) Heredity and variations in microorganisms. Cold 
Spring Harbor
Symp. Quant. Biol., 11 ; Delbruck, M. (1986) Mind from Matter? An Essay on 
Evolutionary
Epistemology Blackwell Scientific Publication
[3]Winfree,A. T. (1988) Book review on Mind from Matter? An Essay on 
Evolutionary
Epistemology Bul. Math. Biol 50, 193-207
[4] Hemmer, P.C., Holden, H. and Ratkje, S.K. (1996) The Collected Works of
Lars Onsager
World Scientific
[5] Prigogine, I. and Nicolis, G. (1977) Self-Organization in Non-Equilibrium Systems: From Dissipative Structures to Order through Fluctuations, Wiley & Sons; Prigogine, I. (1980) From Being to Becoming: Time and Complexity in the Physical Sciences, W.H. Freeman & Co
[6] Cross, M.C. and Hohenberg, P.C. (1993) Pattern formation outside of 
equilibrium , Rev.
Mod. Phys. 65
[7] Ben-Jacob, E. and Garik, P. (1990) The formation of patterns in
non-equilibrium growth
Nature 33 523-530
[8] Ben Jacob, E. (1993) From snowflake formation to growth of bacterial 
colonies. I.
Diffusive patterning in azoic systems Contemp Physics 34 247-273 ; (1997) II. 
Cooperative
formation of complex colonial patterns Contem. Physics 38 205-241
[9] Ben-Jacob, E. (2003) Bacterial self-organization: co-enhancement of 
complexification
and adaptability in a dynamic environment. Phil. Trans. R. Soc. Lond. 
A361,1283-1312
[10] Schweitzer, F. (1997) Self-Organization of Complex Structures from 
Individual to
Collective Dynamics Gordon&Breach
[11] Ball, P. (1999) The Self-Made Tapestry: Pattern Formation in Nature 
Oxford University
Press
[12] Camazine, S. et al (2001) Self-Organization in Biological Systems 
Princeton University
Press
[13] Turing, A.M. (1952) The Chemical Basis of Morphogenesis, Philosophical 
Transactions
of the Royal Society B (London), 237, 37-72, 1952.
[14] Saunders, P.T. (1992) Morphogenesis: Collected Works of A.M. Turing, Vol. 3 of Furbank, P.N. (1992) The Collected Works of A.M. Turing. North Holland Publications
[15] Turing, A.M. Unpublished material Turing archive at King's College 
Cambridge, and
the Manchester National Archive for the History of Computing
[16]Lovelock, James. 1995. Gaia: A New Look at Life on Earth. Oxford 
University Press:
Oxford.Lovelock, James. 1988. The Ages of Gaia: A Biography of Our Living 
Earth. New
York: W.W. Norton.
[17] Margulis, L. and Dolan, M.F. (2002) Early life, Jones and Bartlett ; (1998) Five 
(1998) Five
Kingdoms ; (2002) Early Life: Evolution on the Precambrian Earth (with Dolan, 
M. F.) ;
(1997) Microcosmos; Four Billion Years of Evolution from Our Microbial 
Ancestors (with
Sagan, D.)
[18] Sahtouris, E. (2001) What Our Human Genome Tells Us. EcoISP ; Sahtouris, 
Elisabet,
with Swimme, Brian and Liebes, Sid. (1998) A Walk Through Time: From Stardust 
to Us.
Wiley: New York.; Harman, Willis and Sahtouris, Elisabet. 1998. Biology 
Revisioned. North
Atlantic Books: Berkeley, CA.
[19]E. Ben-Jacob, I. Cohen, H. Levine, Cooperative self-organization of 
microorganisms,
Adv. Phys. 49 (2000) 395-554
[20]Microbiology: A human perspective E.W. Nester, D.G. Anderson, C.E. 
Roberts, N.N
Pearsall, M.T. Nester, (3rd Edition), McGraw Hill, New York 2001;
[21]Shapiro, J.A. and Dworkin, M. (Eds.), (1997) Bacteria as Multicellular 
Organisms
Oxford University Press, New York
[22]Shapiro, J.A. (1988) Bacteria as multicellular organisms, Sci. Am. 258 
62-69; J.
Shapiro, J.A. (1995) The significance of bacterial colony patterns, 
BioEssays, 17 597-
607. Shapiro, J.A. (1998) Thinking about bacterial populations as 
multicellular
organisms, Annu. Rev. Microbiol. 52 81-104
[23] Losick, R. and Kaiser, D. (1997) Why and how bacteria communicate, Sci. 
Am. 276 68-
73; Losick, R. and Kaiser, D. (1993) How and Why Bacteria talk to each other, 
Cell 73
873-887
[24]Ben-Jacob, E., Cohen, I. and Gutnick, D.L. (1998) Cooperative 
organization of bacterial
colonies: From genotype to morphotype. Annu. Rev. Microbiol., 52 779-806
[25] Rosenberg, E. (Ed.), (1999) Microbial Ecology and Infectious Disease, 
ASM Press
[26] Crespi, B.J. (2001) The evolution of social behaviour in microorganisms. 
TrendsEcol.
Evol. 16, 178-183
[27] Kolenbrander, P.E. et al (2002) Communication among oral bacteria. 
Microbiol. Mol.
Biol. Rev. 66, 486-505
[28] Ben-Jacob, E. et al. (1994) Generic modeling of cooperative growth 
patterns in
bacterial colonies. Nature 368, 46-49
[29] Matsushita, M. and Fujikawa, H. (1990) Diffusion-limited growth in 
bacterial colony
formation. Physica A 168, 498-506
[30] Ohgiwari, M. et al. (1992) Morphological changes in growth of bacterial 
colony
patterns. J. Phys. Soc. Jpn. 61, 816-822
[31] Komoto, A. et al (2003) Growth dynamics of Bacillus circulans colony. J. 
Theo. Biology
225, 91-97
[32] Di Franco, C. et al. (2002) Colony shape as a genetic trait in the 
pattern-forming
Bacillus mycoides. BMC Microbiol 2(1):33
[33]Ben-Jacob, E., Cohen, I. and A. Czirók. (1997) Smart bacterial colonies. 
In Physics of
Biological Systems: From Molecules to Species, Lecture Notes in Physics, 
pages 307-324.
Springer-Verlag, Berlin,
[34]Ben-Jacob, E. et al. (1995) Complex bacterial patterns. Nature, 
373:566-567,
[35]Budrene, E.O. and Berg, H.C. (1991) Complex patterns formed by motile 
cells of
Esherichia coli. Nature, 349:630-633 ; (1995) Dynamics of formation of 
symmetrical
patterns by chemotactic bacteria. Nature, 376:49-53
[36]Blat, Y.and Eisenbach, M. (1995). Tar-dependent and -independent pattern 
formation by
Salmonella typhimurium . J. Bac., 177(7):1683-1691
[37] S. E. Luria and M. Delbrück. Mutations of bacteria from virus 
sensitivity to virus
resistance. Genetics, 28:491-511, 1943.
[38] Dawkins, R. (1986) The Blind Watchmaker. W.W. Norton, New York ; (1982) The Extended Phenotype. W.H. Freeman, Oxford ; (1976) The Selfish Gene. Oxford University Press, Oxford
[39]Gould, S. J. (1977) Ever Since Darwin. W.W. Norton, New York
[40] Jacob, F. (1993) The Logic of Life: A History of Heredity. Princeton 
University Press.
[41]Joset, F. and Guespin-Michel, J. (1993) Prokaryotic Genetics. Blackwell 
Scientific
Publishing, London
[42]Keller, E.F. (1983) A Feeling for The Organism: The Life and Work of 
Barbara
McClintock. W.H. Freeman&Company
[43] Margulis, L. (1992) Symbiosis in Cell Evolution: Microbial Communities 
in the Archean
and Proterozoic Eons W.H. Freeman&Company ;Margulis, L., Sagan, D. and 
Morrison, P.
(1997) Slanted Truths: Essays on Gaia, Symbiosis, and Evolution Copernicus 
Books ;
Margulis, L. and Sagan, D. (1999) Symbiotic Planet: A New Look at Evolution. Basic Books
[44] Margulis, L. and Sagan, D. (2003) Acquiring Genomes: A Theory of the 
Origins of
Species Perseus Publishing ; Chapman, M.J. and Margulis, L. (1998) 
Morphogenesis and
symbiogenesis Intl. Microbiol. 1 319-329
[45] Shapiro, J.A. (1992) Natural genetic engineering in evolution. Genetica 
86, 99-111
[46]Wesson, R. (1993) Beyond Natural Selection. The MIT Press, London
[47] Ben-Jacob, E. (1998) Bacterial wisdom, Gödel’s theorem and creative 
genomic webs.
Physica A 248, 57-76
[48] Duchen, M.R., Leyssens, A. and Crompton, M. (1998). Transient 
mitochondrial
depolarisations in response to focal SR calcium release in single rat 
cardiomyocytes., J. Cell
Biol., 142(4), 1-14.
[49] Leyssens, A., Nowicky, A.V., Patterson, D.L., Crompton, M., and Duchen, 
M.R.,
(1996). The relationship between mitochondrial state, ATP hydrolysis, [Mg2+]i 
and [Ca2+]i
studied in isolated rat cardiomyocytes. J. Physiol., 496, 111-128
[50] Palmer, J.D. (1997) The Mitochondrion that Time Forgot,
Nature, 387. 454-455.
[51] Holland, J.H. (2000) Emergence: From Chaos to Order. Oxford University Press
[52] Kurzweil, R. (1992) The Age of Intelligent Machines MIT Press ; (2000) 
The Age of
Spiritual Machines: When Computers Exceed Human Intelligence Penguin
[53] Anderson, P. (1972) More is different Science 177, 393-396
[54]Turing, A.M. (1948) Intelligent Machinery unpublished report.
[55]Turing, A.M. (1950) Computing machinery and intelligence Mind 59 no 236, 
433-460
[56] Siegelmann, H.T. (1995) Computation beyond the Turing machine. Science, 
268:545-
548
[57]Turing, A.M. (1936) On computable numbers, with an application to the
Entscheidungsproblem Proc. London. Math. Soc. 42, 230-265
[58] Gödel, K. (1931) On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Monatshefte für Mathematik und Physik 38, 173-198
[59] Nagel, E. and Newman, J.R. (1958) Gödel's Proof. New York University Press ; (1995) Gödel’s Collected Works, Unpublished Essays and Lectures. Oxford University Press
[60] Hofstadter, D.R. (1979) Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books
[61]Chaitin, G.J. (2002) Computers, Paradoxes and the foundations of 
mathematics
American Scientist March-April issue
[62] Ben Jacob, E. et al. (2002) Bacterial cooperative organization under 
antibiotic stress.
Physica A 282, 247-282
[63]Golding, I. and Ben Jacob, E. (2001) The artistry of bacterial colonies 
and the antibiotic
crisis in Coherent Structures in Complex Systems. Selected Papers of the XVII 
Sitges
Conference on Statistical Mechanics. Edited by Reguera, D., Bonilla, L.L. and 
Rubi, J.M.
[64] Alimova, A. et al. (2003) Native Fluorescence and Excitation 
Spectroscopic Changes in
Bacillus subtilis and Staphylococcus aureus Bacteria Subjected to Conditions 
of Starvation
Applied Optics, 42, 4080-4087
[65]Katz, A. et al. (2002) Noninvasive native fluorescence imaging of head 
and neck tumors,
Technology in Cancer Research and Treatment, 1, 9-16
[66]Deutsch, M.; Zurgil, N. and Kaufman, M. (2000) Spectroscopic Monitoring 
of Dynamic
Processes in Individual Cells. In: Laser Scanning Technology. Oxford, Oxford 
University
Press
[67] Tauber, A. (1991) Organisms and the Origin of Self. Dordrecht: Kluwer 
Academic
Publishers
[68] Tauber, A. (1994) The Immune Self: Theory or Metaphor? Cambridge 
University Press
[69]Shoham, S.G. (1979) The Myth of Tantalus: scaffolding for an ontological 
personality
University of Queensland Press
[70]Bohm, D. (1996) On Dialogue, Routledge
[71]Merry, U. (1995) Coping with uncertainty, Praeger Publishers
[72]Rose, S. (1976) The Conscious Brain. Vintage Books, New-York, 1976.
[73]Penrose, R. (1996) Shadows of the Mind: A Search for the Missing Science 
of
Consciousness Oxford University Press ; Penrose, R. and Gardner, M. (2002) 
The Emperor's
New Mind: Concerning Computers, Minds, and the Laws of Physics Oxford 
University Press
; Penrose, R. (2000) The Large, the Small and the Human Mind (with Longair, 
M., Shimony,
A., Cartwright, N. and Hawking, S.) Cambridge University Press
[74] Bloom, H. (2001) Global Brain John Wiley&sons
[75] Kauffman, S. (1995) At Home in the Universe: The Search for the Laws of 
Self-
Organization and Complexity Oxford University Press ; (2002) Investigations 
Oxford
University Press
[76] Sperber, D. and Wilson, D. (1986) Relevance: Communication and Cognition. Basil Blackwell, Oxford
[77] Aitchison, J. (1999) Linguistics. NTC Contemporary Pub. Group, Chicago
[78] Grice, H.P. (1989) Studies in the Way of Words. Harvard University Press, Cambridge, MA
[79]Steiner, G. (1975) After Babel: Aspects of Language and Translation. 
Oxford University
Press, New York.
[80] Pinker, S. (1994). The Language Instinct: How the Mind Creates Language. 
New York:
HarperCollins
[81] Jones, S. (1993) The Language of the Genes. Flamingo, Glasgow
[82]Peng, C. K. et al. (1992)Long-range correlations in nucleotide sequences. 
Nature,
356:168-171
[83] Mantegna, R.N. et al. (1994) Linguistic features of noncoding DNA 
sequences. Phys.
Rev. Lett. 73, 3169-3172
[84] Ptashne, M. and Gann, A. (2002) Genes and signals, Cold Spring Harbor 
Press
[85] Nowak, M.A. et al. (2002) Computational and evolutionary aspects of 
language. Nature
417, 611-617
[86] Searls, D.B. (2002) The Language of genes. Nature 420, 211-217
[87] Losick, R. and Kaiser, D. (1997) Why and how bacteria communicate. Sci. 
Am. 276, 68-73
[88] Wirth, R. et al.. (1996) The Role of Pheromones in Bacterial 
Interactions. Trends Microbiol. 4,
96-103
[89] Salmond, G.P.C. et al. (1995) The bacterial enigma: Cracking the code of 
cell-cell
communication. Mol. Microbiol. 16, 615-624
[90] Dunny, G.M. and Winans, S.C. (1999) Cell-Cell Signaling in Bacteria, ASM 
Press
[91] Shimkets, L.J. (1999) Intercellular signaling during fruiting-body 
development of
Myxococcus xanthus. Annu. Rev. Microbiol. 53, 525-549
[92] Bassler, B.L. (2002) Small talk: cell-to-cell communication in bacteria. 
Cell 109, 421-424
[93] Ben Jacob, E. et al. (2003) Communication-based regulated freedom of 
response in
bacterial colonies Physica A 330 218-231
[94]Raichman, N. et al. (2004) Engineered self-organization of natural and 
man-made
systems in Continuum Models and Discrete Systems (in press)
[95] The Open University (2004) The Clock Work Universe in The Physical World 
series
[96] Collier, John. (2003) Hierarchical Dynamical Information Systems
With a Focus on Biology Entropy 5(2): 100-124 ; Holism and Emergence: 
Dynamical
Complexity Defeats Laplace's Demon (unpublished)
[97] Swartz, N. (1997) Philosophical Notes
URL http://www.sfu.ca/philosophy/swartz/freewill1.htm
[98] Hoefer, C. (2004) Causal Determinism, The Stanford Encyclopedia of 
Philosophy
[99] Ben-Jacob, E. and Garik, P. (1990) The formation of patterns in 
non-equilibrium
growth. Nature, 343: 523-530
[100] Ben-Jacob, E. and Levine, H. (2001) The artistry of nature. Nature 409, 985-986
[101] Searle, John R. (1984). Minds, Brains and Science. Harvard University 
Press
[102] Dennett, Daniel C. (1978). Brainstorms: Philosophical Essays on Mind and
Psychology. MIT Press, Cambridge, Mass.
[103] Johnson-Laird, P. N. (1988). The Computer and the Mind. Harvard 
University Press,
Cambridge Mass.
[104]Lucas, J.R. (1964) Minds, Machines and Gödel, in Minds and Machines, ed. 
Alan R.
Anderson Englewood Cliffs
[105]Dennett, D. (1993). Book Review: Allen Newell, Unified Theories of 
Cognition,
Artificial Intelligence, 59, 285-294.
[106]Rapaport, W.J. (1995). Understanding Understanding: Syntactic Semantics 
and
Computational Cognition, Philosophical Perspectives 9.
[107] Searle, J.R. (2001) Is the Brain a Digital Computer? McGraw-Hill
[108] Kay, K. (2001) Machines and the Mind: Do artificial intelligence 
systems incorporate
intrinsic meaning? The Harvard Brain Vol 8
[109] Larson, E. Rethinking Deep Blue: Why a Computer Can't Reproduce a Mind. Access Research Network, Origins & Design Archives
[110] Schaeffer, J. and Plaat, A. (1997) Kasparov versus Deep Blue: The 
Re-match ICCA
Journal vol. 20,. 95-102
[110]Nelson, E. (1999) Mathematics and the Mind in Toward a Science of 
Consciousness -
Fundamental Approaches
[111] Velicer, G.J. (2003) Social strife in the microbial world. Trends 
Microbiol. 7, 330-337
[112] Strassmann, J.E. (2000) Bacterial Cheaters. Nature 404, 555-556
[113] Strassmann, J.E., Zhu, Y. and Queller, D.C. (2000) Altruism and social cheating in the social amoeba Dictyostelium discoideum. Nature 408, 965-967
[114] Queller, D.C. and Strassmann, J.E. (2002) The many selves of social 
insects Science
296 311-313
[115]Gell-Mann, M. (1992) Nature Conformable To Herself The Bulletin of the 
Santa Fe
Institute, 7,1, 7-10, (1992) ; (1995/6) Complexity, 1,4. In these 
publications, Gell-Mann
refers to top-level emergence (i.e., the basic constituents are not altered 
during the emergence
process itself) in adaptive complex systems as sufficient mechanism together 
with the
principles of the Neo-Darwinian paradigm to explain Life saying that: “In my 
opinion, a
great deal of confusion can be avoided, in many different contexts, by making 
use of the
notion of emergence. Some people may ask, "Doesn't life on Earth somehow 
involve more
than physics and chemistry plus the results of chance events in the history 
of the planet and
the course of biological evolution? Doesn't mind, including consciousness or 
self-awareness,
somehow involve more than neurobiology and the accidents of primate 
evolution? Doesn't
there have to be something more?" But they are not taking sufficiently into 
account the
possibility of emergence. Life can perfectly well emerge from the laws of 
physics plus
accidents, and mind, from neurobiology. It is not necessary to assume 
additional mechanisms
or hidden causes. Once emergence is considered, a huge burden is lifted from 
the inquiring
mind. We don't need something more in order to get something more. Although 
the
"reduction" of one level of organization to a previous one – plus specific 
circumstances
arising from historical accidents – is possible in principle, it is not by 
itself an adequate
strategy for understanding the world. At each level, new laws emerge that 
should be studied
for themselves; new phenomena appear that should be appreciated and valued at 
their own
level”. He further explains that: “Examples on Earth of the operation of 
complex adaptive
systems include biological evolution, learning and thinking in animals 
(including people), the
functioning of the immune system in mammals and other vertebrates, the 
operation of the
human scientific enterprise, and the behavior of computers that are built or 
programmed to
evolve strategies, for example by means of neural nets or genetic algorithms. 
Clearly,
complex adaptive systems have a tendency to give rise to other complex 
adaptive systems”.
[116] Gell-Mann, M. (1994) The quark and the Jaguar: Adventures in the Simple 
and the
Complex W. H. Freeman&Company,
[117] Wolfram, S. (2002) A New Kind of Science Wolfram Media Inc
[118] Langton, C.G.(Editor) (1997) Artificial Life: An Overview (Complex 
Adaptive Systems) MIT
Press
[119] Dooley, K. (1997) A Complex Adaptive Systems Model of Organization 
Change,
Nonlinear Dynamics, Psychology, & Life Science, 1, p. 69-97.
[120] Waldrop, M.M. (1992) Complexity: The Emerging Science at the Edge of 
Chaos.
Simon and Schuster
[121] Mitchell, M. (1998) An Introduction to Genetic Algorithms (Complex 
Adaptive
Systems) MIT Press
[122] Holland, J.H. (1995) Hidden Order, Addison-Wesley
[123]Berlinski, D. (2001) What Brings a World into Being?
Commentary 111, 17-24
[124]Feitelson, D.G. and Treinin, M. (2002) The Blueprint for Life?
IEEE Computer, July 34-40. Feitelson's and Treinin's article shows that DNA 
is a rather
incomplete code for life. DNA does not even completely specify a protein. 
Special peptides,
chaperons, are needed to help fold a newly synthesized protein into the 
correct form.
Furthermore, DNA has "multiple readings". A particular transcription is 
selected based on
the mix of the proteins in the cytoplasm – the current state of a cell. 
"Thus, DNA is only
meaningful in a cellular context in which it can express itself and in which 
there is an
iterative, cyclic relationship between the DNA and the context."
[125] Winfree, A.T. (1988) Book review on “Mind from Matter? An Essay on 
Evolutionary
Epistemology” Bull. Math. Biol. 50 193-207
[126]Abelson, J., Simon, M., Attardi, G. and Chomyn, A. (1995) Mitochondrial 
Biogenesis
and Genetics, Academic Press
[127] Holt, I.J.Editor (2003) Genetics of Mitochondrial Diseases Oxford 
Monographs on
Medical Genetics, No. 47 Oxford University Press
[128] Knight, R.D., Landweber, L.F., and Yarus, M. (2001) How mitochondria 
redefine the
code J. Mol. Evol. 53 299-313
[129]Burger, G.I. et al (1995) The mitochondrial DNA of the amoeboid 
protozoon,
Acanthamoeba castellanii. Complete sequence, gene content and genome 
organization J.
Mol. Biol. 245:522-537.
[130]Gray, M.W. (1992) The endosymbiont hypothesis revisited Mitochondrial 
Genomes
141:233-357.
[131]Wolff, G. et al (1993) Mitochondrial genes in the colorless alga 
Prototheca wickerhamii
resemble plant genes in their exons but fungal genes in their introns. 
Nucleic Acids Research
21:719-726. ;
[132]Wolf, G. et al, (1994) Complete sequence of the mitochondrial DNA of the 
chlorophyte
alga Prototheca wickerhamii. Gene content and genome organization." J. Mol. 
Biol. 237:74-
86.
[133] Landweber, L.F. and Kari, L. (1999) The evolution of cellular 
computing: nature’s 
solution to a computational problem, Biosystems 52, 3-13
[134] Kari, L. and Landweber, L.F. (2003) Biocomputing in ciliates. In 
Cellular Computing,
edited by Amos, M. Oxford University Press
[135] Makalowski, W. (2003) Not junk after all. Science 300, 1246-7
[136] Lev-Maor, G. et al. (2003) The birth of an alternatively spliced exon: 3
’ splice-site
selection in Alu exons. Science 300, 1288-91
[137] Baruchi, I. and Ben Jacob, E. (2004) Hidden causal manifolds in the space of functional correlations. Neuroinformatics (invited). To evaluate the affinities for correlations recorded from N locations, the Euclidean distances between every two locations in the N-dimensional space of correlations are calculated. The affinities are defined as the correlations normalized by the distances in the space of correlations. Next, the information is projected on low-dimensional manifolds which contain maximal information about the functional correlations. The space of affinities can be viewed as the analog of a Banach space generalization (to include self-reference) of quantum field theory. From a mathematical perspective, the composons can be viewed as a Banach-Tarski decomposition of the space of correlations into functional sets according to the Axiom of Choice (Appendix C).
[138] Oliver, S.G. et al, (2004) Functional genomic hypothesis generation and
experimentation by a robot scientist. Nature, 427, 247 - 252,
[139] Klironomos, J.N. and Hart, M.M. (2001) Animal nitrogen swap for plant carbon. Nature 410, 651-652 ; Klironomos, J.N. (2002) Feedback with soil biota contributes to plant rarity and invasiveness in communities. Nature 417, 67-70. This study showed that 
soil
microorganisms can significantly affect the growth of plants in natural 
ecosystems.
Furthermore, these microorganisms can determine the degree to which plants 
spread and
invade within communities.
[140] Roubertoux, P.L. (2003) Mitochondrial DNA modifies cognition in 
interaction with the
nuclear genome and age in mice Nature genetics 35 65-69
[141] Chomsky, N. (1957) Syntactic Structures, The Hague: Mouton
[142] Bambrook, G. (1996) Language and computers, Edinburgh University Press,
Edinburgh
[143] Warnow, T. (1997) Mathematical approaches to comparative linguistics. 
Proc. Natl.
Acad. Sci. USA 94, 6585-6590
[144] Schechter, E. (1997) Handbook of Analysis and Its Foundations Academic 
Press and
references therein
[145] Aharonov, Y., Anandan, J. and Vaidman, L. (1996) The Meaning of 
Protective
Measurements Found. Phys. 26, 117
[146]Aharonov, Y., Anandan, J. and Vaidman, L. (1993) Meaning of the Wave 
Function
Phys. Rev. A 47, 4616
[147]Aharonov, Y. and Vaidman, L. (1993)The Schrödinger Wave is Observable 
After All!
in Quantum Control and Measurement, H. Ezawa and Y. Murayama (eds.) Elsevier 
Publ
[148] Aharonov, Y., Massar, S., Popescu, S., Tollaksen, J. and Vaidman, L. 
(1996) Adiabatic
Measurements on Metastable Systems Phys. Rev. Lett. 77, 983
[149] Aharonov, Y. and Bohm, D. (1961) Time in the Quantum Theory and the 
Uncertainty
Relation for Time and Energy Phys. Rev. 122, 1649
[150]Aharonov, Y., Anandan, J., Popescu, S. and Vaidman, L. (1990) 
Superpositions of
Time Evolutions of a Quantum System and a Quantum Time-Translation Machine
Phys. Rev. Lett. 64, 2965
[151]Aharonov, Y. and Vaidman, L. (1990) Properties of a Quantum System 
During the
Time Interval Between Two Measurements Phys. Rev. A 41, 11
[152] Orzag, M. (2000) Quantum Optics: Including Noise Reduction, Trapped 
Ions,
Quantum Trajectories, and Decoherence
[153]Yamamoto, Y. and Imamoglu, A. (1999) Mesoscopic Quantum Optics 
Wiley-Interscience
[154]Einstein, A., Podolsky, B. and Rosen, N. (1935) Can quantum-mechanical 
description
of physical reality be considered complete?, Physical Review 47 777
[155] 't Hooft, G. (2002) Determinism beneath quantum mechanics. Preprint
xxx.lanl.gov/abs/quant-ph/0212095, (2002). Talk presented at 'Quo vadis 
quantum
mechanics' conference, Temple University, Philadelphia.
[156] Ball, P. (2003) Physicist proposes deeper layer of reality Nature News 
8 January
Appendix A: Bacterial Cooperation – The Origin of Natural Intelligence
Under natural conditions, bacteria tend to cooperatively self-organize into 
hierarchically
structured colonies (10^9-10^13 bacteria each), acting much like multi-cellular 
organisms
capable of coordinated gene expressions, regulated cell differentiation, 
division of tasks, and
more. Moreover, the colony behaves as a new organism with its own new self, 
although the
building blocks are living organisms, each with its own self, as illustrated 
in the figure below.
To achieve the proper balance of individuality and cooperation, bacteria 
communicate using
sophisticated communication methods which include a broad repertoire of 
biochemical
agents, such as simple molecules, polymers, peptides, proteins, pheromones, 
genetic
materials, and even “cassettes” of genetic information like plasmids and 
viruses. At the same
time, each bacterium has equally intricate intracellular communication means 
(signal
transduction networks and genomic plasticity) of generating intrinsic meaning 
for contextual
interpretation of the chemical messages and for formulating its appropriate 
response.
Collective decision-making: When the growth conditions become too stressful, 
bacteria can
transform themselves into inert, enduring spores. Sporulation is executed 
collectively and
begins only after "consultation" and assessment of the colonial stress as a 
whole by the
individual bacteria. Simply put, starved cells emit chemical messages to 
convey their stress.
Each of the other bacteria uses the information for contextual interpretation 
of the state of the
colony relative to its own situation. Accordingly, each of the cells decides 
to send a message
for or against sporulation. After all the members of the colony have sent out 
their decisions
and read all the other messages, if the “majority vote” is pro-sporulation, 
sporulation occurs.
Thus, sporulation illustrates semantic and pragmatic levels in bacterial 
communication, i.e.,
bacteria can transmit meaning-bearing messages to other bacteria to conduct a 
dialogue for
collective decision making (Appendix B).
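To make the voting scheme just described concrete, here is a deliberately crude toy sketch in Python (not from the original work; the reserve levels, the 0.3 starvation threshold and the simple majority rule are all invented for illustration): each cell compares the colony-wide stress signal with its own reserves and casts a vote, and sporulation begins only if the majority is in favour.

```python
# Toy model of the sporulation "majority vote" described above.
# All numbers (reserves, the 0.3 starvation threshold, the majority rule)
# are invented for illustration only.
import random

def sporulation_vote(nutrient_level, n_cells=1000, seed=0):
    rng = random.Random(seed)
    # each cell's nutrient reserve -- a stand-in for "its own situation"
    reserves = [rng.uniform(0.0, nutrient_level) for _ in range(n_cells)]
    # starved cells emit stress messages; colony-wide stress = their fraction
    colony_stress = sum(1 for r in reserves if r < 0.3) / n_cells
    # each cell interprets the colony stress relative to its own reserve
    # and sends its own message for or against sporulation
    votes_pro = sum(1 for r in reserves if colony_stress > r)
    return votes_pro > n_cells / 2          # "majority vote" pro sporulation

print(sporulation_vote(nutrient_level=1.0))  # plentiful medium -> False
print(sporulation_vote(nutrient_level=0.4))  # starved colony   -> True
```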
Although spores can endure extreme conditions (e.g., high temperatures, toxic
materials, etc.), all they need for germination is to be placed under mild 
growth conditions.
How they can sense the environment so accurately while in an almost non-living state, surrounded by a very solid membrane, is an unsolved and little-studied enigma.
Exchange of genetic information: Another example of bacterial special 
abilities has to do
with the rapid development of bacterial resistance to antibiotics: The 
emergence of bacterial
strains with multiple drug resistance has become one of the major health 
problems
worldwide. Efficient resistance rapidly evolves through the cooperative 
response of bacteria,
utilizing their sophisticated communication capabilities. Bacteria exchange 
resistance
information within the colony and between colonies, thus establishing a “
creative genomic
web”. Maintenance and exchange of the resistance genetic information is 
costly and might be
hazardous to the bacteria. Therefore, the information is given and taken on a 
“need to know”
basis. In other words, the bacteria prepare, send and accept the genetic 
message when the
information is relevant to their existence.
One of the tools for genetic communication is the direct physical transfer of conjugal plasmids. These bacterial mating events, which can also include inter-colonial and even interspecies conjugations, follow a chemical courtship conducted by the potential partners. Naively presented, bacteria with valuable information (say, resistance to an antibiotic) emit chemical signals to announce this fact. Bacteria in need of that information, upon receiving the signal, emit pheromone-like peptides to declare their willingness to mate. Sometimes, the decision to mate is followed by an exchange of competence factors (peptides). This pre-conjugation communication modifies the membrane of the partner cell into a penetrable state needed for conjugation, allowing the exchange of genetic information.
Hierarchical organization of vortices: Some bacteria cope with hazards by 
generating
module structures - vortices, which then become building blocks used to 
construct the colony
as a higher entity (Fig 2). To maintain the integrity of the module while it 
serves as a higher-order 
building block of the colony requires an advanced level of communication. 
Messages
must be passed to inform each cell in the vortex that it is now playing a 
more complex role,
being a member of the specific module and the colony as a whole, so it can 
adjust its
behavior accordingly.
Once the vortex is recognized as a possible spatial structure, it becomes 
easy to
understand that vortices can be used as subunits in a more complex colonial 
structure for
elevated colonial plasticity. In Fig 3, we demonstrate how the P. vortex 
bacteria utilize their
cooperative, complexity-based plasticity to alter the colony structure to 
cope with antibiotic
stress, making use of some simple yet elegant solutions. The bacteria simply 
increase
cooperation (by intensifying both attractive and repulsive chemical 
signaling), leading to
larger vortices (due to stronger attraction) that move faster away from the 
antibiotic stress
(due to stronger repulsion by those left behind). Moreover, once they’ve 
encountered the
antibiotic, the bacteria seem to generate a collective memory so that in the 
next encounter
they can respond even more efficiently.
Fig. A1: Hierarchical colonial organization: Patterns formed during colonial 
development of the
swarming and lubricating Paenibacillus vortex bacteria. (Left) The vortices 
(modules) are the leading dots seen
on a macro-scale (~10 cm^2). The picture shows part of a circular colony composed of about 10^12 bacteria, roughly the number of cells of our immune system, ten times the number of neurons in the brain and a hundred times the human population on Earth. Each vortex is composed of many cells that swarm 
collectively around their
common center. These vortices vary in size from tens to millions of bacteria, 
according to their location in the
colony and the growth conditions. The vortex shown on the right 
(magnification x500, hence each bar is a
single bacterium) is a relatively newly formed one. After formation, the 
cells in the vortex replicate, the vortex
expands in size and moves outward as a unit, leaving behind a trail of motile 
but usually non-replicating cells –
the vortex tail. The vortex dynamics are quite complicated and include 
attraction, repulsion, merging and
splitting of vortices. Yet, from this complex, seemingly chaotic movement, a 
colony with complex but non-arbitrary 
organization develops (left). To maintain the integrity of the vortex while 
it serves as a higher-order
building block of the colony requires an advanced level of communication. 
Messages must be passed to inform
each cell in the vortex that it is now playing a more complex role, being a 
member of the specific vortex and the
colony as a whole, so it can adjust its behavior accordingly. New vortices 
emerge in the trail behind a vortex
following initiation signals from the parent vortex. The entire process 
proceeds as a continuous dialogue: a
vortex grows and moves, producing a trail of bacteria and being pushed 
forward by the very same bacteria
behind. At some point the process stalls, and this is the signal for the 
generation of a new vortex behind the
original one, which leaves home (the trail) as a new entity that serves as a 
living building block of the colony as a
whole.
Fig. A2: Collective memory and learning: Self-organization of the P.vortex 
bacteria in the
presence of non-lethal levels of antibiotic added to the substrate. In the 
picture shown, bacteria were exposed to
antibiotic before the colonial development. Note that it resulted in a more organized pattern (in comparison with Fig. A1).
From multi-cellularity to sociality: In fact, bacteria can go a step higher; 
once an entire
colony becomes a new multi-cellular being with its own identity, colonies 
functioning as
organisms cooperate as building blocks of even more complex organizations of 
bacterial
communities or societies, such as species-rich biofilms. In this situation, 
cells should be able
to identify their own self, both within the context of being part of a 
specific colony-self and
part of a higher entity - a multi-colonial community to which their colony 
belongs. Hence, to
maintain social cooperation in such societies with species diversity, the 
bacteria need “multilingual”
skills for the identification and contextual interpretation of messages 
received from
colony members and from other colonies of the same species and of other 
species, and to
have the necessary means to sustain the highest level of dialogue within the “
chattering” of
the surrounding crowd.
Incomprehensible complexity: For perspective, the oral cavity, for example, 
hosts a
large assortment of unicellular prokaryotic and various eukaryotic 
microorganisms. Current
estimates suggest that sub-gingival plaque contains 20 genera of bacteria 
representing
hundreds of different species, each with its own colony of ~10^10 bacteria, 
i.e., together
about a thousand times the human population on Earth. Thus, the level of complexity 
of such
a microbial system far exceeds that of computer networks, electric 
networks, transportation
and all other man-made networks combined. Yet bacteria of all those colonies 
communicate
for tropism in shared tasks, coordinated activities and exchange of relevant 
genetic bacterial
information using biochemical communication of meaning-bearing, semantic 
messages. The
current usage of “language” with respect to intra- and inter-bacteria 
communication is mainly
in the sense that one would use in, for example, “computer language” or “
language of
algebra”. Namely, it refers to structural aspects of communication, 
corresponding to the
structural (lexical and syntactic) linguistic motifs. Higher linguistic 
levels - assigning
contextual meaning to words and sentences (semantic) and conducting 
meaningful dialogue
(pragmatic) - are typically associated with cognitive abilities and 
intelligence of humans.
Hence, currently one might accept their existence in the “language of dolphins
” but regard
them as well beyond the realm of bacterial communication abilities. We 
propose that this
notion should be reconsidered.
Appendix B: Clues and Percepts Drawn from Human Linguistics
Two independent discoveries in the 1950s later bridged linguistics and 
genetics: Chomsky’s
proposed universal grammar of human languages [141] and the discovery of the 
structural
code of the DNA. The first suggested universal structural motifs and 
combinatorial principles
(syntactic rules) at the core of all natural languages, and the other 
provided analogous
universals for the genetic code of all living organisms. A generation later, 
these paradigms
continue to cross-pollinate these two fields. For example, Neo-Darwinian and 
population
genetics perspectives as well as phylogenetic methods are now used for 
understanding the
structure, learning, and evolution of human languages. Similarly, Chomsky’s 
meaning-independent 
syntactic grammar view combined with computational linguistic methods are
widely used in biology, especially in bioinformatics and structural biology 
but increasingly in
biosystemics and even ecology.
The focus has been on the formal, syntactic structural levels, which are also 
applicable to
“machine languages”: Lexical – formation of words from their components 
(e.g., characters
and phonemes); Syntactic – organization of phrases and sentences in 
accordance with well-specified 
grammatical rules [142,143].
Linguistics also deals with a higher-level framework, the semantics of human 
language.
Semantics is connected to contextual interpretation, to the assignment of 
context-dependent
meaning to words, sentences and paragraphs. For example, one is often able to 
capture the
meaning of a text only after reading it several times. At each such 
iteration, words, sentences
and paragraphs may assume different meanings in the reader's mind; iteration 
is necessary,
since there is a hierarchical organization of contextual meaning. Namely, 
each word
contributes to the generation of the meaning of the entire sentence it is 
part of, and at the
same time the generated whole meaning of the sentence can change the meaning 
of each of
the words it is composed of. By the same token, the meanings of all sentences 
in a paragraph
are co-generated along with the created meaning of the paragraph as a whole, 
and so on, for
all levels.
Readers have semantic plasticity, i.e., a reader is free to assign 
individualistic contextual and
causal meanings to the same text, according to background knowledge, 
expectations, or
purpose; this is accomplished using combined analytical and synthetic skills. 
Beyond this,
some linguists identify the conduct of a dialogue among conversers using 
shared semantic
meaning as pragmatics. The group usage of a dialogue can vary from activity 
coordination
through collective decision-making to the emergence of a new group self. To 
sustain such cognitive abilities might require analogous iterative processes of self-organization-based generation of composons of meaning within the brain, which will be discussed elsewhere.
Drawing upon human linguistics with regard to bacteria, semantics would imply
contextual interpretation of chemical messages, i.e., each bacterium has some 
freedom
(plasticity) to assign meaning according to its own specific, internal and 
external, contextual
state. For that, a chemical message is required to initiate an intra-cellular 
response that
involves internal restructuring - self-organization of the intracellular gel 
and/or the gene-network 
or even the genome itself. To sustain a dialogue based on semantic messages, 
the
bacteria should have a common pre-existing knowledge (collective memory) and 
abilities to
collectively generate new knowledge that is transferable upon replication. 
Thus, the ability to
conduct a dialogue implies that there exist some mechanisms of collective 
gene expression,
analogous to that of cell differentiation during embryonic development of 
multi-cellular
organisms, in which mitochondria might play an important role.
Appendix C: Gödel’s Code and the Axiom of Choice
Hilbert’s second problem
Gödel’s theorems provided an answer to the second of the 23 problems posed by 
Hilbert.
2. Can it be proven that the axioms of logic are consistent?
Gödel’s theorems say that the answer to Hilbert’s second question is 
negative. For that he invented the following three-step code:
1. Gödel assigned a number to each logical symbol, e.g.,
Not ≡ 1
Or ≡ 2
If then ≡ 3
∃ ≡ 4
2. He assigned prime numbers to variables, e.g.,
x ≡ 11
y ≡ 13
3. He assigned a number to any statement according to the following example: “
There is a
number not equal to zero”.
In logic symbols: ( ∃ x ) ( x ~= 0 )
In Gödel’s numbers: 8 4 11 9 8 11 1 5 6 9
The statement’s number is 2^8 · 3^4 · 5^11 · 7^9 · 11^8 · 13^11 · 17^1 · 19^5 · 23^6 · 29^9
Note that it is a product of the sequence of prime numbers, each to the power 
of the
corresponding Gödel’s number. This coding enables one-to-one mapping between 
statements
and the whole numbers.
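The same bookkeeping is easy to reproduce. The short Python sketch below (ours, not Gödel's) multiplies out the worked example above, raising the successive primes 2, 3, 5, ... to the symbol codes 8 4 11 9 8 11 1 5 6 9; because prime factorization is unique, the statement can be recovered from the resulting number.

```python
# Gödel numbering of the statement ( ∃ x ) ( x ~= 0 ), whose symbol codes
# are 8 4 11 9 8 11 1 5 6 9 as listed above.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # first ten primes

def goedel_number(symbol_codes):
    """Product of successive primes, each raised to the corresponding code."""
    n = 1
    for p, code in zip(PRIMES, symbol_codes):
        n *= p ** code
    return n

codes = [8, 4, 11, 9, 8, 11, 1, 5, 6, 9]
print(goedel_number(codes))   # = 2^8 * 3^4 * 5^11 * ... * 29^9
```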
Hilbert’s first problem and the Axiom of Choice
Gödel also studied the first of the 23 essential problems posed by Hilbert.
1.a Is there a transfinite number between that of a denumerable set and the 
numbers of
the continuum? 1.b Can the continuum of numbers be considered a well ordered 
set?
In 1940, Gödel proved that a negative answer to 1.a (the continuum hypothesis) is consistent with the axioms of von Neumann-Bernays-Gödel set theory, and in 1963 Cohen demonstrated that a positive answer is also consistent with Zermelo-Fraenkel set theory. Thus, the answer is undecidable – it is independent of the axioms of set theory. The second question is related 
to an important
and fundamental axiom in set theory sometimes called Zermelo's Axiom of Choice. It 
was
formulated by Zermelo in 1904 and states that, given any set of mutually 
exclusive nonempty
sets, there exists at least one set that contains exactly one element in 
common with each of
the nonempty sets. The axiom of choice can be demonstrated to be independent 
of all other
axioms in set theory. So the answer to 1.b is also undecidable.
The popular version of the Axiom of Choice is that [144]:
Let C be a collection of nonempty sets. Then we can choose a member from each 
set in
that collection. In other words, there exists a choice function f defined on 
C with the
property that, for each set S in the collection, f(S) is a member of S.
There is an ongoing controversy over how to interpret the words "choose" and
"exists" in the axiom: If we follow the constructivists, and "exists" means “
to find," then the
axiom is false, since we cannot find a choice function for the nonempty 
subsets of the real
numbers. However, most mathematicians give "exists" a much weaker meaning, 
and they
consider the Axiom to be true: To define f(S), just arbitrarily "pick any 
member" of S.
In effect, when we accept the Axiom of Choice, this means we are agreeing to 
the convention
that we shall permit ourselves to use a choice function f in proofs, as 
though it "exists" in
some sense, even though we cannot give an explicit example of it or an 
explicit algorithm for
it.
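For a concrete, finite collection of nonempty sets no axiom is needed at all, since a choice function can simply be written down; the whole force of the Axiom lies in the infinite cases, such as the nonempty subsets of the reals, where no such explicit rule can be given. A minimal illustration (our own, purely pedagogical sketch) of the "pick any member" reading:

```python
# An explicit choice function for a finite collection of nonempty sets.
# This is exactly the unproblematic case; the Axiom of Choice is only needed
# where no explicit rule like this exists.
def f(S):
    """Return an (arbitrarily chosen) member of the nonempty set S."""
    return next(iter(S))

C = [{1, 2, 3}, {"a", "b"}, {(0, 0)}]        # a collection of nonempty sets
print([f(S) for S in C])                     # one member chosen from each set
```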
The choice function merely exists in the mental universe of mathematics. Many 
different
mathematical universes are possible. When we accept or reject the Axiom of 
Choice, we are
specifying which universe we shall work in. As was shown by Gödel and Cohen, 
both
possibilities are feasible – i.e., neither accepting nor rejecting AC yields 
a contradiction.
The Axiom of Choice implies some conclusions which seem to be counter-intuitive 
or to contradict "ordinary" experience. One example is the Banach-Tarski
Decomposition, in which the Axiom of Choice is assumed to prove that it is 
possible to take
the 3-dimensional closed unit ball,
B = {(x,y,z) ∈ R^3 : x^2 + y^2 + z^2 ≤ 1}
and partition it into finitely many pieces, and move those pieces in rigid 
motions (i.e.,
rotations and translations, with pieces permitted to move through one 
another) and
reassemble them to form two copies of B.
At first glance, the Banach-Tarski Decomposition seems to contradict some of 
our intuition
about physics – e.g., the Law of Mass Conservation from classical 
Newtonian physics.
Consequently, the Decomposition is often called the Banach-Tarski Paradox. 
But actually, it
only yields a complication, not a contradiction. If we assume a uniform 
density, only a set
with a defined volume can have a defined mass. The notion of "volume" can be 
defined for
many subsets of R3, and beginners might expect the notion to apply to all 
subsets of R3, but it
does not. More precisely, Lebesgue measure is defined on some subsets of R3, 
but it cannot
be extended to all subsets of R3 in a fashion that preserves two of its most 
important
properties: the measure of the union of two disjoint sets is the sum of their 
measures, and
measure is unchanged under translation and rotation. Thus, the Banach-Tarski 
Paradox does
not violate the Law of Conservation of Mass; it merely tells us that the 
notion of "volume" is
more complicated than we might have expected.
We emphasize that the sets in the Banach-Tarski Decomposition cannot be 
described
explicitly; we are merely able to prove their existence, like that of a 
choice function. One or
more of the sets in the decomposition must be Lebesgue unmeasurable; thus a 
corollary of
the Banach-Tarski Theorem is the fact that there exist sets that are not 
Lebesgue measurable.
The idea we lean toward is that, in the space of affinities, the composons represent a similar decomposition, but of information, which is the extensive functional in this space corresponding to the volume in the system's real space.
Appendix D: Description of Turing’s Conceptual Machinery
To support our view of the limits of Artificial Intelligence or Machine Intelligence, we present here a relatively detailed description of Turing’s Universal Machine. Turing proved that any discrete, finite-state process with a fixed-in-time, finite set of instructions can be mapped onto his conceptual 
machine. Note that there can be self-reference in the execution of the 
instructions but not in their
logical structure.
The process of computation was graphically depicted in Turing's paper when he 
asked the reader to
consider a device that can read and write simple symbols on a paper tape that 
is divided into
squares. The "reading/writing head" can move in either direction along the 
tape, one square at a
time, and a control unit that directs the actions of the head can interpret 
simple instructions about
reading and writing symbols in squares. The single square that is "scanned" 
or "read" at each stage
is known as the Active Square. Imagine that new sections can be added at 
either end of the existing
tape, so it is potentially infinite.
Suppose the symbols are "X" and "O". Suppose that the device can erase either 
symbol when it
reads it in the Active Square and replace it with the other symbol (i.e., 
erase an X and replace it with
an O, and vice versa). The device also has the ability to move left or right, 
one square at a time,
according to instructions interpreted by the control unit. The instructions 
cause a symbol to be
erased, written, or left the same, depending on which symbol is read.
Any number of games can be constructed using these rules, but they would not 
all necessarily be
meaningful. One of the first things Turing demonstrated was that some of the 
games constructed
under these rules can be very sophisticated, considering how crude and 
automaton-like the primitive
operations seem to be. The following example illustrates how this game can be 
used to perform a
simple calculation.
The rules of the game to be played by this Turing machine are simple: Given a 
starting position in
the form of a section of tape with some Xs and Os on it, and a starting 
square indicated, the device
is to perform the actions dictated by a list of instructions and follows the 
succeeding instructions
one at a time until it reaches an instruction that forces it to stop. (If 
there is no explicit instruction in
the table of instructions for a particular tape configuration, there is 
nothing that the machine can do
when it reaches that configuration, so it has to stop.)
Each instruction specifies a particular action to be performed if there is a 
certain symbol on the
active square at the time it is read. There are four different actions; they 
are the only legal moves of
this game. They are:
Replace O with X.
Replace X with O.
Go one square to the right.
Go one square to the left.
An example of an instruction is: "If there is an X on the active square 
replace it with O." This
instruction causes the machine to perform the second action listed above. In 
order to create a
"game," we need to make a list that specifies the number of the instruction 
that is being followed at
every step as well as the number of the instruction that is to be followed 
next. That is like saying
"The machine is now following (for example) instruction seven, and the 
instruction to be followed
next is (for example) instruction eight" (as is illustrated in appendix 3).
Here is a series of instructions, given in coded form and the more 
English-like translation.
Taken together, these instructions constitute an "instruction table" or a 
"program" that tells a
Turing machine how to play a certain kind of game:
1XO2 (Instruction #1:if an X is on the active square, replace it with O, then 
execute instruction #2.)
2OR3 (Instruction #2: if an O is on the active square, go right one square 
and then execute instruction #3.)
3XR3 (Instruction #3: if an X is on the active square, go right one square and then execute instruction #3;
3OR4 but if an O is on the active square, go right one square and then 
execute instruction #4.)
4XR4 (Instruction #4: if an X is on the active square, go right one square 
and then execute instruction #4;
4OX5 but if an O is on the active square, replace it with X and then execute 
instruction #5.)
5XR5 (Instruction #5: if an X is on the active square, go right one square 
and then execute instruction #5;
5OX6 but if an O is on the active square, replace it with X and then execute 
instruction #6.)
6XL6 (Instruction #6: if an X is on the active square, go left one square and 
then execute instruction #6
6OL7 but if an O is on the active square, go left one square and then execute 
instruction #7.)
7XL8 (Instruction #7: if an X is on the active square, go left one square and 
then execute instruction #8.)
8XL8 (Instruction #8: if an X is on the active square, go left one square and 
then execute instruction #8;
8OR1 but if an O is on the active square, go right one square and then 
execute instruction #1.)
Note that if there is an O on the active square in instruction #1 or #7, or 
if there is an X on the active square in
instruction #2, the machine will stop.
In order to play the game (run the program) specified by the list of 
instructions, one more
thing must be provided: a starting tape configuration. For our example, let 
us consider a tape
with two Xs on it, bounded on both sides by an infinite string of Os. The 
changing states of a
single tape are depicted here as a series of tape segments, one above the 
other. The Active
Square for each is denoted by a capital X or O. When the machine is started 
it will try to
execute the first available instruction, instruction #1. The following series 
of actions will then
occur
Instruction Tape What the Machine Does
#1 ...ooXxooooooo... One (of two) Xs is erased.
#2 ...ooOxooooooo...
#3 ...oooXooooooo... Tape is scanned to the right
#3 ...oooxOoooooo...
#4 ...oooxoOooooo...
#5 ...oooxoXooooo... Two Xs are written.
#5 ...oooxoxOoooo...
#6 ...oooxoxXoooo...
#6 ...oooxoXxoooo... Scanner returns to the other original X
#6 ...oooxOxxoooo...
#7 ...oooXoxxoooo...
#8 ...ooOxoxxoooo... Scanner moves to the right and execute #1
#1 ...oooXoxxoooo...
#2 ...oooOoxxoooo...
#3 ...ooooOxxoooo... Scanner moves to the right of the two Xs that were 
written earlier.
#4 ...oooooXxoooo...
#4 ...oooooxXoooo...
#4 ...oooooxxOooo...
#5 ...oooooxxXooo... Two more Xs are written.
#5 ...oooooxxxOoo...
#6 ...oooooxxxXoo...
#6 ...oooooxxXxoo... Scanner looks for any more original Xs
#6 ...oooooxXxxoo...
#6 ...oooooXxxxoo...
#6 ...ooooOxxxxoo...
#7 ...oooOoxxxxoo... The machine stops because there is no instruction for #7 
if O is being scanned.
This game may seem rather mechanical. The fact that it is mechanical was one 
of the points
Turing was trying to make. If you look at the starting position, note that 
there are two
adjacent Xs. Then look at the final position and note that there are four Xs. 
If you were to use
the same instructions, but start with a tape that had five Xs, you would wind 
up with ten Xs.
This list of instructions is the specification for a calculating procedure 
that can double the
input and display the output. It can, in fact, be done by a machine.
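To check that the instruction table really does double the input, here is a short Python simulation (our own sketch, not Turing's or Rheingold's code); the tape, the eight instructions and the halting rule follow the description above exactly.

```python
# Simulation of the doubling program above. Each rule maps
# (instruction number, symbol read) to (action, next instruction);
# the machine halts when no rule matches, as described in the text.
from collections import defaultdict

RULES = {
    (1, 'x'): ('O', 2), (2, 'o'): ('R', 3),
    (3, 'x'): ('R', 3), (3, 'o'): ('R', 4),
    (4, 'x'): ('R', 4), (4, 'o'): ('X', 5),
    (5, 'x'): ('R', 5), (5, 'o'): ('X', 6),
    (6, 'x'): ('L', 6), (6, 'o'): ('L', 7),
    (7, 'x'): ('L', 8),
    (8, 'x'): ('L', 8), (8, 'o'): ('R', 1),
}

def run(n_xs):
    tape = defaultdict(lambda: 'o')          # unwritten squares read as O
    for i in range(n_xs):                    # starting tape: n_xs adjacent Xs
        tape[i] = 'x'
    pos, instr = 0, 1                        # start on the leftmost X
    while (instr, tape[pos]) in RULES:
        action, instr = RULES[(instr, tape[pos])]
        if action == 'R':
            pos += 1
        elif action == 'L':
            pos -= 1
        else:                                # write X or O on the active square
            tape[pos] = action.lower()
    return sum(1 for s in tape.values() if s == 'x')

print(run(2))   # -> 4
print(run(5))   # -> 10
```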
(This Appendix is edited with the author’s permission from “Tools for Thought: The People and Ideas Behind the Next Computer Revolution” by Howard Rheingold, 1985.)
Appendix E: Non-Destructive Quantum Measurements
Protective Quantum Measurements and Hardy’s Paradox
The debate about the existence of the choice function in the Axiom of Choice 
is in the same
spirit as the debated questions about the reality of the wave function and 
paradoxes
connected with quantum entanglement like the one proposed by Hardy (see 
references in the
extract below). It has been proven by Aharonov and his collaborators [145-148] that it is 
possible in principle to perform quantum measurements to extract information 
beyond
quantum uncertainty while the wave function is protected (for the case of 
eigenstate with
discrete spectrum of eigenvalue they refer to it as protective measurements, 
and for
continuous spectrum as weak measurements). The protective, weak and 
non-demolition
(described later) quantum measurements provide different methods for 
non-destructive
measurements of quantum systems – there is no destruction of the quantum 
state of the
system due to externally imposed measurement. These kinds of measurements 
enable the
observations of unexpected quantum phenomena. For example, the thought 
experiment
proposed in Hardy’s paradox can be tested as illustrated in [Quantum Physics, abstract quant-ph/0104062].
As with a multiple-options state for an organism, Hardy’s paradox is usually 
assumed to be resolved on
the grounds that the thought experiment doesn't correspond to any possible 
real experiment and is
therefore meaningless. The only way to find out what really happens to the 
particles in the
experiment would be to measure their routes, rather than simply inferring 
them from the final result.
But, as soon as a particle detector is placed in any of the paths, standard 
strong quantum
measurement will cause the collapse of its wave function and wash out any 
possible future
interference between the electron and positron states.
However, Hardy’s thought experiment can be converted into a real one if the 
assumed strong
quantum measurement is replaced with weak measurements. The idea is to 
exploit quantum
uncertainty by using a quantum detector which is weakly coupled to the 
measured system to the
degree that it reads eigenvalues smaller than the expected quantum 
uncertainty. It was proved that
by doing so quantum superposition of states can be preserved (i.e., there is 
no collapse of the wave
function). Clearly, a single weak measurement cannot, on its own, provide 
any meaningful
information. However, it was proved theoretically that, when repeated many 
times, the average of
these measurements approximates to the true eigenvalue that would be obtained 
by a single strong
measurement involving a collapse of the wave function [145-148].
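The statistical point can be illustrated with a toy calculation (a classical stand-in of our own, not a simulation of the actual quantum protocol): each "weak" reading is the eigenvalue buried in detector noise far larger than the signal, so a single reading is meaningless, yet the average of many readings recovers it.

```python
# Toy illustration of the averaging argument above: single weak readings are
# dominated by noise, but their mean converges to the underlying eigenvalue.
# (A classical caricature only; it does not model the quantum measurement.)
import random

def weak_readings(eigenvalue=1.0, noise=50.0, n=100_000, seed=1):
    rng = random.Random(seed)
    return [eigenvalue + rng.gauss(0.0, noise) for _ in range(n)]

readings = weak_readings()
print(readings[0])                       # one reading: swamped by noise
print(sum(readings) / len(readings))     # the average: close to 1.0
```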
Therefore, when weak measurements are assumed, not only does the original 
paradox remain, but
an additional difficulty arises. The theoretical investigations imply that 
two electron-positron pairs 
can coexist in the apparatus at the same time: A detector located in the part 
of the
interferometer in which the particle trajectories are non-overlapping can 
yield a final reading of -1,
i.e., a "negative presence" of a pair of particles! To quote Aharonov:
The -1 result illustrates that there is a way to carry out experiments on the 
counter-intuitive
predictions of quantum theory without destroying all the interesting results. 
A single quantum
particle could have measurable effects on physical systems in two places at 
once, for instance.
Moreover, when you get a good look inside, quantum theory is even more 
bizarre than we
thought. Quantum particles can assume far more complex identities than simply 
being in two
places at once: pairs of particles are fundamentally different from single 
particles and they
can assume a negative presence. And the fact that weak measurements transform 
the paradox
from a mere technicality into an unavoidable truth suggests that they could 
provide a
springboard for new understanding of quantum mechanics. There are 
extraordinary things
within ordinary quantum mechanics; the negative presence result might be just 
the tip of the
iceberg: every paradox in quantum theory may simply be a manifestation of 
other strange
behaviors of quantum objects that we have not yet detected - or even thought 
of.
The Quantum Time-Translation Machine
Another unexpected quantum reality about the concept of time [149] can be 
viewed as being
metaphorically related to the organism’s internal model of itself, which acts 
on different time scales
for educated decision-making. We refer to the Aharonov, Anandan, Popescu and 
Vaidman
(AAPV) Quantum Time-Translation Machine [150,151]:
Quantum Non-Demolition Measurements
Another approach to measure the eigenvalue of a specific observable without 
demolition of the
quantum state of the observed system is referred to as QND measurements used 
mainly in quantum
optics [152,153]. The idea can be traced back to the Einstein, Podolsky and 
Rosen paradox [154],
presented in their 1935 paper entitled "Can quantum-mechanical description of 
physical reality be
considered complete?” They have shown that, according to quantum mechanics, 
if two systems in a
combined state (e.g., two half-spin particles in a combined-spin state) are 
at a large distance from
each other, a measurement of the state of one system can provide information 
about that of the other
one. The conceptual idea of the QND measurements is to first prepare the 
observed system and a
quantum detector (e.g., Polarized light) in an entangled state and then to 
extract information about
the observed system by using ordinary destructive measurement on the quantum 
detector. This way,
the state of the detector is demolished but that of the system of interest is 
protected. In this sense,
the newly developed biofluoremetry method for studying the intracellular 
spatio-temporal
organization and functional correlations is actually a version of QND 
measurements and not just an
analogy.
Proceeding with the same metaphor, bacterial colonies make it possible to perform new 
real
experiments in analogy with Aharonov’s ‘back from the future’ notion about 
the backward
propagation of the wave function. For example, several colonies taken from 
the same culture
in a stationary phase, or even better, from spores, can be grown at 
successive intervals of
time while exposed to the same constraints. The new concept is to let, for 
example, bacteria
taken from the future (the older colonies) communicate with colonies at 
the present and
compare their consequent development with those who were not exposed to their 
own future.
Albeit simple, the detailed setup and interpretations of the experiments 
should be done
keeping in mind that (as we have shown), even similar colonies grown at the 
same time
develop distinguishable self-identities.
To Be is to Change
The picture of the decomposable mixed state of multiple options is also metaphorically analogous to ’t Hooft’s Universe [155,156], composed of underlying Be-able and Change-able non-commuting observables at the Planck length scale (10^-35 meter). His motivation was the paradox posed by the in-principle contradiction of simulating backward in time a unified theory of gravity and quantum mechanics based on the current Copenhagen interpretation: there is no deeper reality, hidden variables do not exist, and the world is simply probabilistic. It holds that we are not ignorant about quantum objects; it is just that there is nothing further to be known. This stands in contradiction with Einstein’s picture, later named ‘hidden variables’. The EPR paradox mentioned earlier was an attempt to illustrate that, unless the existence of unknown and non-measurable variables is assumed, one runs into contradiction with our intuitive perception of reality. Simply phrased, according to the ‘hidden variables’ picture, quantum uncertainty reflects some underlying deterministic reality that in principle can be measured. Following the EPR paradox, Bell proposed a specific inequality that, if measured, can distinguish between the Copenhagen and hidden-variables interpretations of quantum mechanics. The subsequent experiments were in agreement with the Copenhagen interpretation. In 2002, ’t Hooft presented a new approach to the problem that most had perceived as resolved; his answer to the Copenhagen interpretation is given in [155].
To solve the paradox, he proposed a third approach based on the idea that, on the Planckian level, reality might be essentially different from that on the larger scales of interest. The idea is to define equivalence classes of states: two states are defined as equivalent if and only if they evolve in the near future into the same state. We emphasize that this is the analogy (in reverse) to our picture of ‘harnessing the past to free the future’ during the internal self-organization of organisms. Metaphorically, for reasons similar (in reverse) to those by which loss of information leads to quantum uncertainty for an external observer, the storage of past information by the organism affords it an internal state of multiple options inaccessible to an external observer. To take into consideration the crucial role of information loss, ’t Hooft proposes that two kinds of observables exist on the Planckian scale. The ones that describe the equivalence classes are the be-able ones. With regard to organisms, the corresponding observables are those connected with information registered in the structural organization or in statistically averaged dynamics (e.g., gene-expression measurements from several organisms under the same conditions). According to ’t Hooft, all other operators are the change-able ones, which do not commute with the be-able operators. In this picture, reality on the very fundamental level is associated with information rather than matter.
This picture of nature is metaphorically similar to the picture we propose for organisms – a balance between intrinsic and extrinsic flows of information. The essential difference is that organisms are self-organizing open systems that can store information, including information about their own self.
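A toy sketch in Python (our own illustration, not ’t Hooft’s actual construction) may help fix the idea of the equivalence classes: take a deterministic, information-losing evolution on a handful of "primordial" states, declare two states equivalent when one time step sends them to the same state, and note that the induced dynamics on the classes remains well defined even though the underlying map forgets which member of a class one started from:

# Toy sketch (assumed illustration): a deterministic, information-losing
# map on a few "primordial" states.  States are equivalent when one time
# step sends them to the same state; the induced dynamics on the
# equivalence classes is what the be-able variables are meant to track.

step = {            # deterministic evolution with information loss
    "a": "c",
    "b": "c",       # a and b become indistinguishable after one step
    "c": "d",
    "d": "a",
}

# Group states into equivalence classes according to their image under one step.
classes = {}
for s, image in step.items():
    classes.setdefault(image, set()).add(s)
classes = list(classes.values())

def class_of(state):
    return next(c for c in classes if state in c)

# Induced dynamics on classes: evolve any representative, then look up its class.
for c in classes:
    rep = next(iter(c))                       # all members of c share the same image
    print(sorted(c), "->", sorted(class_of(step[rep])))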
Appendix F: Turing’s Child Machine
In the 1950s the three interchangeable terms ‘Machine Intelligence’, ‘Artificial Intelligence’ and ‘Machine Learning’ referred to the causal goal of learning about humans by building machines that exhibit behavior which, if performed by humans, would be assumed to involve the use of intelligence. Over the following five decades, “Machine Intelligence” and its associated terms evolved away from their original causal meaning. These terms are now primarily associated with particular methodologies for attempting to achieve the goal of getting computers to automatically solve problems. Thus, the term “artificial intelligence” is associated today primarily with efforts to design and utilize computers to solve problems using methods that rely on knowledge, logic, and various analytical and mathematical techniques. Only in some spin-off branches of research, such as genetic programming and evolvable hardware, does Turing’s term still communicate the broad goal of getting computers to automatically solve problems in a human-like, or even broader biological-like, manner.
In his 1948 paper, Turing identified three strategies by which human-competitive machine intelligence might be achieved. The first is a logic-driven search, which is the causal reason (described earlier) that led Turing to develop the idea of his conceptual machine, i.e., to learn about the foundations of mathematics and logic. The second is what he called a “cultural search,” in which previously acquired knowledge is accumulated, stored in libraries, and used in problem solving - the approach taken by modern knowledge-based expert systems. These first two approaches of Turing’s have been pursued over the past 50 years by the vast majority of researchers using the methodologies that are today primarily associated with the term “artificial intelligence.”
Turing also identified a third approach to machine intelligence in his 1948 paper, saying: “There is the genetical or evolutionary search by which a combination of genes is looked for, the criterion being the survival value.” Note that this remarkable realization preceded the discovery of the structure of DNA and the rise of modern genetics. Thus Turing could not have specified in 1948 how to conduct the “genetical or evolutionary search” for solutions to problems, and could not mention concepts like population genetics and recombination. However, he did point out in his 1950 paper that:
We cannot expect to find a good child-machine at the first attempt. One must
experiment with teaching one such machine and see how well it learns. One can 
then try
another and see if it is better or worse. There is an obvious connection 
between this
process and evolution, by the identifications
“Structure of the child machine = Hereditary material”;
“Changes of the child machine = Mutations”;
“Natural selection = Judgment of the experimenter”.
Thus, Turing correctly perceived in 1948 and 1950 that machine intelligence can be achieved by an evolutionary process in which a description of computer hardware and software (the hereditary material) undergoes progressive modification (mutation) under the guidance of natural selection (i.e., selective pressure in the form of what is now usually called “fitness”). The measurement of fitness in modern-day genetic and evolutionary computation is usually performed by automated means, as opposed to a human passing judgment on each individual candidate, as Turing suggested.
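To make Turing’s identifications concrete, here is a minimal sketch in Python (our own illustration, with a made-up target task) of the mutate-and-select loop he describes, with the experimenter’s judgment replaced by an automated fitness function, as is usual in modern evolutionary computation:

import random

random.seed(1)

# Minimal sketch of Turing's identifications (illustrative only):
#   structure of the child machine = hereditary material (a bit string here)
#   changes of the child machine   = mutations (random bit flips)
#   natural selection              = judgment of the experimenter,
#                                    here automated as a fitness function.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]            # hypothetical task the machine must match

def fitness(machine):
    """Automated 'judgment': how many positions agree with the target behavior."""
    return sum(m == t for m, t in zip(machine, TARGET))

def mutate(machine, rate=0.1):
    """Random changes to the hereditary material."""
    return [1 - bit if random.random() < rate else bit for bit in machine]

machine = [random.randint(0, 1) for _ in TARGET]    # first child machine, chosen blindly
for generation in range(200):
    child = mutate(machine)
    if fitness(child) >= fitness(machine):          # keep the better (or equal) variant
        machine = child
    if fitness(machine) == len(TARGET):
        print("solved at generation", generation)
        break

print("final machine:", machine, "fitness:", fitness(machine))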
From this perspective, Turing’s vision is actually closer to our view of organisms’ intelligence, provided that the external “teacher” is replaced by an inner one, and that the organism has freedom of response to the external information it gathers rather than being forced to follow specific instructions.
----------
Howard Bloom
Author of The Lucifer Principle: A Scientific Expedition Into the Forces of 
History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 
21st Century
Visiting Scholar-Graduate Psychology Department, New York University; Core 
Faculty Member, The Graduate Institute
www.howardbloom.net
www.bigbangtango.net
Founder: International Paleopsychology Project; founding board member: Epic 
of Evolution Society; founding board member, The Darwin Project; founder: The 
Big Bang Tango Media Lab; member: New York Academy of Sciences, American 
Association for the Advancement of Science, American Psychological Society, Academy 
of Political Science, Human Behavior and Evolution Society, International 
Society for Human Ethology; advisory board member: Youthactivism.org; executive 
editor -- New Paradigm book series.
For information on The International Paleopsychology Project, see: 
www.paleopsych.org
for two chapters from 
The Lucifer Principle: A Scientific Expedition Into the Forces of History, 
see www.howardbloom.net/lucifer
For information on Global Brain: The Evolution of Mass Mind from the Big Bang 
to the 21st Century, see www.howardbloom.net