[Paleopsych] Seth Lloyd: Ultimate physical limits to computation
Premise Checker
checker at panix.com
Fri Jan 13 16:55:42 UTC 2006
Seth Lloyd: Ultimate physical limits to computation
http://puhep1.princeton.edu/~mcdonald/examples/QM/lloyd_nature_406_1047_00.pdf
d'Arbeloff Laboratory for Information Systems and Technology, MIT
Department of Mechanical Engineering, Massachusetts Institute of
Technology 3-160, Cambridge, Massachusetts 02139, USA (slloyd at mit.edu)
NATURE 406 (2000 August 31):1047-1054
An insight review article
Abstract: Computers are physical systems: the laws of physics dictate what
they can and cannot do. In particular, the speed with which a physical
device can process information is limited by its energy and the amount of
information that it can process is limited by the number of degrees of
freedom it possesses. Here I explore the physical limits of computation as
determined by the speed of light c, the quantum scale h-bar and the
gravitational constant G. As an example, I put quantitative bounds to the
computational power of an 'ultimate laptop' with a mass of one kilogram
confined to a volume of one litre.
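[The abstract's speed limit comes from the Margolus-Levitin theorem, which bounds the rate of elementary operations by 2E/(pi h-bar) per second for a system of average energy E. A minimal sketch of that arithmetic for the 1 kg laptop (constants rounded; the function name is mine, not the paper's):

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
C = 2.998e8           # speed of light, m/s

def max_ops_per_second(mass_kg):
    """Margolus-Levitin bound: at most 2*E/(pi*hbar) ops/s, with E = m*c^2."""
    energy = mass_kg * C ** 2
    return 2 * energy / (math.pi * HBAR)

ops = max_ops_per_second(1.0)  # the 1 kg "ultimate laptop"
print(f"{ops:.2e}")            # ~5.4e50 operations per second
```

This reproduces the paper's headline figure of roughly 5 x 10^50 logical operations per second.]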
[I did some different calculations, on an upper bound for the number of
possible computations since the Big Bang. This is simply the number of
photons times the number of Planck-time instants (the time for a photon to
cross the Planck distance) since the Big Bang.
[a. The Planck length is sqrt(h-bar G/c^3) = 1.62 x 10^-35 meters.
[b. The Planck time is the Planck length divided by c, or sqrt(h-bar G/c^5)
= 5.4 x 10^-44 seconds.
[c. The universe is 4 x 10^17 seconds old, or 10^61 Planck time units old.
[d. There are 10^89 photons in the universe.
[I cannot easily recall where I got these numbers, but I don't think they
are in serious doubt. So the number of photon crossings of a Planck
distance since the Big Bang is the product of c and d, which is
10^150.
[10^150 = (10^3)^50 ~ (2^10)^50 = 2^500.
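[The arithmetic in items a through d can be checked in a few lines; a sketch using standard constants (the 4 x 10^17 s age and the 10^89 photon count are the round numbers above, taken as given):

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8           # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C ** 3)  # ~1.6e-35 m (item a)
planck_time = planck_length / C               # ~5.4e-44 s (item b)

AGE_OF_UNIVERSE_S = 4e17  # round figure from item c
N_PHOTONS = 1e89          # round figure from item d

instants = AGE_OF_UNIVERSE_S / planck_time  # ~10^61 Planck times
crossings = instants * N_PHOTONS            # ~10^150 total crossings
print(f"{crossings:.0e} ~ 2^{math.log2(crossings):.0f}")
```
]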
[The number of actual calculations is vastly smaller than this, since
particles do not communicate their movements instantaneously. How much
smaller doesn't really matter right now. What does matter is that cracking
a truth-table puzzle that has 512 = 2^9 variables is impossible and that
brute-forcing a 512-bit encryption scheme is likewise impossible.
[All this sets sharp limits on the ability of anything whatever to
calculate. It would not matter much, philosophically, if the figure were
128 or 256 or 1024 or 2048 or even 1024^2. It is a humanly comprehensible
number. We know from the results of Gödel, Church, Turing, and so on about
the limitations of any finite calculator.^ But when we come to people, the
limits on reason are much sharper, since our brains weigh only three
pounds.
^[Recall that only sentences of finite length are admitted in the
"formal systems" of the title of Gödel's 1931 paper, which, not so by the
way, means that finite has suddenly become an undefined term! This is true
of Raymond Smullyan's _Formal Systems_ and all similar books I have seen.
Of course, there are plenty of definitions of finite *inside* mathematics,
logic, and set theory, definitions that are not equivalent, but every one
of these theories is handled only by formal systems, with their undefined
finite sentences of finite length. And so in mathematics, one of the core
terms is left as a KNOW-HOW term and never actually gets defined. The same
is true of cost in economics. I could go on.]
[The Enlightenment optimism of using reason to solve all problems is dead.
No one in the Enlightenment thought about solving 512-variable truth
tables, and all, except maybe Laplace, would have refused to admit limits
on reason. Socially, though, the weakening of this optimism took place on a
different plane. World War I, with its senseless trench warfare, destroyed
much optimism about human nature, as far as governing itself goes.
Relativity theory and quantum mechanics did their part, too, in the realm
of science.
[The Enlightenment was killed for good during six seconds in Dealey
Plaza.
[Before the article here is a talk by Lloyd with Edge.]
HOW FAST, HOW SMALL, AND HOW POWERFUL?
http://www.edge.org/3rd_culture/lloyd/lloyd_print.html
REBOOTING CIVILIZATION
"Something else has happened with computers. What's happened with society
is that we have created these devices, computers, which already can
register and process huge amounts of information, which is a significant
fraction of the amount of information that human beings themselves, as a
species, can process. When I think of all the information being processed
there, all the information being communicated back and forth over the
Internet, or even just all the information that you and I can communicate
back and forth by talking, I start to look at the total amount of
information being processed by human beings -- and their artifacts -- we
are at a very interesting point of human history, which is at the stage
where our artifacts will soon be processing more information than we
physically will be able to process."
SETH LLOYD -- HOW FAST, HOW SMALL, AND HOW POWERFUL?: MOORE'S LAW AND THE
ULTIMATE LAPTOP [7.23.01]
Introduction
"Lloyd's Hypothesis" states that everything that's worth understanding
about a complex system can be understood in terms of how it processes
information. This is a new revolution that's occurring in science.
Part of this revolution is being driven by the work and ideas of Seth
Lloyd, a Professor of Mechanical Engineering at MIT. Last year, Lloyd
published an article in the journal Nature -- "Ultimate Physical Limits to
Computation" (vol. 406, no. 6788, 31 August 2000, pp. 1047-1054) -- in
which he sought to determine the limits the laws of physics place on the
power of computers. "Over the past half century," he wrote, "the amount of
information that computers are capable of processing and the rate at which
they process it has doubled every 18 months, a phenomenon known as Moore's
law. A variety of technologies -- most recently, integrated circuits --
have enabled this exponential increase in information processing power.
But there is no particular reason why Moore's law should continue to hold:
it is a law of human ingenuity, not of nature. At some point, Moore's law
will break down. The question is, when?"
His stunning conclusion?
"The amount of information that can be stored by the ultimate laptop, 10
to the 31st bits, is much higher than the 10 to the 10th bits stored on
current laptops. This is because conventional laptops use many degrees of
freedom to store a bit whereas the ultimate laptop uses just one. There
are considerable advantages to using many degrees of freedom to store
information, stability and controllability being perhaps the most
important. Indeed, as the above calculation indicates, to take full
advantage of the memory space available, the ultimate laptop must turn all
its matter into energy. A typical state of the ultimate laptop's memory
looks like a plasma at a billion degrees Kelvin -- like a thermonuclear
explosion or a little piece of the Big Bang! Clearly, packaging issues
alone make it unlikely that this limit can be obtained, even setting aside
the difficulties of stability and control."
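The 10^31-bit figure quoted above can be reproduced from black-body entropy: if all the laptop's mass becomes radiation of energy E = mc^2 in volume V, the entropy is S = (4/3) a^(1/4) E^(3/4) V^(1/4), where a is the radiation constant, and the memory is I = S/(k_B ln 2) bits. A sketch with rounded constants (function name is mine):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K
A_RAD = 7.566e-16  # radiation constant a, J m^-3 K^-4
C = 2.998e8        # speed of light, m/s

def photon_gas_bits(mass_kg, volume_m3):
    """Bits storable when all rest mass becomes black-body radiation."""
    energy = mass_kg * C ** 2                       # E = m c^2
    entropy = (4 / 3) * A_RAD ** 0.25 * energy ** 0.75 * volume_m3 ** 0.25
    return entropy / (KB * math.log(2))             # I = S / (k_B ln 2)

bits = photon_gas_bits(1.0, 1e-3)  # 1 kg confined to one litre
print(f"{bits:.2e}")               # ~2.1e31 bits
```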
Ask Lloyd why he is interested in building quantum computers and you will
get a two part answer. The first, and obvious one, he says, is "because we
can, and because it's a cool thing to do." The second concerns some
interesting scientific implications. "First," he says, "there are
implications in pure mathematics, which are really quite surprising, that
is that you can use quantum mechanics to solve problems in pure math that
are simply intractable on ordinary computers." The second scientific
implication is a use for quantum computers first suggested by Richard
Feynman in 1982: that one quantum system could simulate another quantum
system. Lloyd points out that "if you've ever tried to calculate Feynman
diagrams and do quantum dynamics, simulating quantum systems is hard. It's
hard for a good reason, which is that classical computers aren't good at
simulating quantum systems."
Lloyd notes that Feynman suggested the possibility of making one quantum
system simulate another. He conjectured that it might be possible to do
this using something like a quantum computer. In the 1990s Lloyd showed that in
fact Feynman's conjecture was correct, and that not only could you
simulate virtually any other quantum system if you had a quantum computer,
but you could do so remarkably efficiently. So by using quantum computers,
even quite simple ones, once again you surpass the limits of classical
computers when you get down to, say, 30 or 40 bits in your quantum
computer. You don't need a large quantum computer to get a huge
speedup over classical simulations of physical systems.
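The 30-or-40-bit crossover is easy to see from memory requirements alone: a general n-qubit state needs 2^n complex amplitudes, so the cost of classical simulation doubles with every added qubit. A small illustration (the 16 bytes per amplitude assumes complex double precision):

```python
def classical_memory_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold a full n-qubit state vector (complex128 amplitudes)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    gib = classical_memory_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB")  # 16 GiB, then 16,384 GiB, then 16,777,216 GiB
```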
"A salt crystal has around 10 to the 17 possible bits in it," he points
out. "As an example, let's take your own brain. If I were to use every one
of those spins, the nuclear spins, in your brain that are currently being
wasted and not being used to store useful information, we could probably
get about 10 to the 28 bits there."
Sitting with Lloyd in the Ritz Carlton Hotel in Boston, overlooking the
tranquil Boston Public Gardens, I am suddenly flooded with fantasies of
licensing arrangements regarding the nuclear spins of my brain. No doubt
this would be a first in distributed computing.
"You've got a heck of a lot of nuclear spins in your brain," Lloyd says.
"If you've ever had magnetic resonance imaging, MRI, done on your brain,
then they were in fact tickling those spins. What we're talking about in
terms of quantum computing, is just sophisticated 'spin tickling'."
This leads me to wonder how "spin tickling" fits into intellectual
property law. How about remote access? Can you in theory designate and
exploit people who would have no idea that their brains were being used
for quantum computation?
Lloyd points out that so far as we know, our brains don't pay any
attention to these nuclear spins. "You could have a whole parallel
computational universe going on inside your brain. This is, of course,
fantasy. But hey, it might happen."
"But it's not a fantasy to explore this question about making computers
that are much, much, more powerful than the kind that we have sitting
around now -- in which a grain of salt has all the computational powers of
all the computers in the world. Having the spins in your brain have all
the computational power of all the computers in a billion worlds like ours
raises another question which is related to the other part of the research
that I do."
In the '80s, Lloyd began working on how large complex systems process
information. How things process information at a very small scale, and how
to make ordinary stuff, like a grain of salt or a cube of sugar, process
information, relates to the complex systems work in his thesis that he did
with the late physicist Heinz Pagels, his advisor at Rockefeller
University. "Understanding how very large complex systems process
information is the key to understanding how they behave, how they break
down, how they work, what goes right and what goes wrong with them," he
says.
Science is being done in new and different ways, and these changes
accelerate the exchange of ideas and the development of new ones. Until
a few years ago, it was very important for a young scientist to be "in
the know" -- that is, to know the right people, because results were
distributed primarily by preprints, and if you weren't on the right
mailing list, then you weren't going to get the information, and you
wouldn't be able to keep up with the field.
"Certainly in my field, and fundamental physics, and quantum mechanics,
and physics of information," Lloyd notes, "results are distributed
electronically, the electronic pre-print servers, and they're available to
everybody via the World Wide Web. Anybody who wants to find out what's
happening right now in the field can go to http://xxx.lanl.gov and
find out. So this is an amazing democratization of knowledge which I think
most people aren't aware of, and its effects are only beginning to be
felt."
"At the same time," he continues, "a more obvious way in which science has
become public is that major newspapers such as The New York Times have all
introduced weekly science sections in the last ten years. Now it's hard to
find a newspaper that doesn't have a weekly section on science. People are
becoming more and more interested in science, and that's because they
realize that science impacts their daily lives in important ways."
A big change in science is taking place, and that's that science is
becoming more public -- that is, belonging to the people. In some sense,
it's a realization of the capacity of science. Science in some sense is
fundamentally public.
"A scientific result is a result that can be duplicated by anybody who has
the right background and the right equipment, be they a professor at
M.I.T. like me," he points out, "or be they from an underdeveloped
country, or be they an alien from another planet, or a robot, or an
intelligent rat. Science consists exactly of those forms of knowledge that
can be verified and duplicated by anybody. So science is basically, at its
most fundamental level, a public form of knowledge, a form of knowledge
that is in principle accessible to everybody. Of course, not everybody's
willing to go out and do the experiments, but for the people who are
willing to go out and do that -- if the experiments don't work, then it
means it's not science.
"This democratization of science, this making it public, is in a sense the
realization of a promise that science has held for a long time. Instead of
having to be a member of the Royal Society to do science, the way you had
to be in England in the 17th and 18th centuries, today pretty much anybody
who wants to do it can, and the information that they need to do it is
there. This is a great thing about science. That's why ideas about the
third culture are particularly apropos right now, as you are concentrating
on scientists trying to take their case directly to the public. Certainly,
now is the time to do it."
--JB
SETH LLOYD is an Associate Professor of Mechanical Engineering at MIT and
a principal investigator at the Research Laboratory of Electronics. He is
also adjunct assistant professor at the Santa Fe Institute. He works on
problems having to do with information and complex systems from the very
small -- how do atoms process information, how can you make them compute,
to the very large -- how does society process information? And how can we
understand society in terms of its ability to process information?
SETH LLOYD: Computation is pervading the sciences. I believe it began
about 400 years ago, if you look at the first paragraph of Hobbes's famous
book Leviathan. He says that just as we consider the human body to be like
a machine, like a clock where you have sinews and muscles to move energy
about, a pulse beat like a pendulum, and a heart that pumps energy in,
similar to the way a weight supplies energy to a clock's pendulum, then we
can consider the state to be analogous to the body, since the state has a
prince at its head, people who form its individual portions, legislative
bodies that form its organs, etc. In that case, Hobbes asked, couldn't we
consider the state itself to have an artificial life?
To my knowledge that was the first use of the phrase artificial life in
the form that we use it today. If we have a physical system that's
evolving in a physical way, according to a set of rules, couldn't we
consider it to be artificial and yet living? Hobbes wasn't talking about
information processing explicitly, but the examples he used were, in fact,
examples of information processing. He used the example of the clock as
something that is designed to process information, as something that gives
you information about time. Most pieces of the clock that he described are
devices not only for transforming energy, but actually for providing
information. For example, the pendulum gives you regular, temporal
information. When he next discusses the state and imagines it having an
artificial life, he first talks about the brain, the seat of the state's
thought processes, and that analogy, in my mind, accomplishes two things.
First, Hobbes is implicitly interested in information. Second, he is
constructing the fundamental metaphor of scientific and technological
inquiry. When we think of a machine as possessing a kind of life in and of
itself, and when we think of machines as doing the same kinds of things
that we ourselves do, we are also thinking the corollary, that is, we are
doing the same kinds of things that machines do. This metaphor, one of the
most powerful of the Enlightenment, in some sense pervaded the popular
culture of that time. Eventually, one could argue, that metaphor gave rise
to Newton's notions of creating a dynamical picture of the world. The
metaphor also gave rise to the great inquiries into thermodynamics and
heat, which came 150 years later, and, in some ways, became the central
mechanical metaphor that has informed all of science up to the 20th
century.
The real question is, when did people first start talking about
information in such terms that information processing rather than
clockwork became the central metaphor for our times? Because until the
20th century, this Enlightenment mode of thinking of physical things such
as mechanical objects with their own dynamics as being similar to the body
or the state was really the central metaphor that informed much scientific
and technological inquiry. People didn't start thinking about this
mechanical metaphor until they began building machines, until they had
some very good examples of machines, like clocks for instance. The 17th
century was a fantastic century for clockmaking, and in fact, the 17th and
18th centuries were fantastic centuries for building machines, period.
Just as people began conceiving of the world using mechanical metaphors
only when they had themselves built machines, people began to conceive of
the world in terms of information and information-processing, only when
they began dealing with information and information processing. All the
mathematical and theoretical materials for thinking of the world in terms
of information, including all the basic formulas, were available at the
end of the 19th century, because all these basic formulas had been created
by Maxwell, Boltzmann and Gibbs for statistical mechanics. The formula for
information was known back in the 1880s, but people didn't know that it
dealt with information. Instead, because they were familiar with things
like heat and mechanical systems that processed heat, they called
information in its mechanical or thermodynamic manifestation, entropy. It
wasn't until the 1930s that people like Claude Shannon and Norbert
Wiener, and before them Harry Nyquist, started to think about information
processing for the primary purpose of communication, or for the purpose
of controlling systems through information and feedback. Then came the
notion of constructing machines that actually
processed information. Babbage tried to construct one back in the early
19th century, which was a spectacular and expensive failure, and one which
did not enter into the popular mainstream.
Another failure concerns the outgrowth of the wonderful work regarding
Cybernetics in other fields such as control theory, back in the late
1950s, early 1960s, when there was this notion that cybernetics was going
to solve all our problems and allow us to figure out how social systems
work, etc. That was a colossal failure -- not because that idea was
necessarily wrong, but because the techniques for doing so didn't exist at
that point -- and, if we're realistic, may in fact never exist. The
applications of Cybernetics that were spectacularly successful are not
even called Cybernetics because they're so ingrained in technology, in
fields like control theory, and in the aerospace techniques that were used
to put men on the moon. Those were the great successes of Cybernetics,
remarkable successes, but in a more narrow technological field.
This brings us to the Internet, which in some sense is almost like
Anti-Cybernetics, the evil twin of Cybernetics. The word Cybernetics comes
from the Greek word kybernetes, which means a governor -- helmsman,
actually; the kybernetes was the pilot of a ship. Cybernetics, as initially
conceived, was about governing, or controlling, or guiding. The great
thing about the Internet, as far as I'm concerned, is that it's completely
out of control. In some sense the fact of the Internet goes way beyond and
completely contradicts the Cybernetic ideal. But, in another sense -- the
way in which the Internet and cybernetics are related, Cybernetics was
fundamentally on the right track. As far as I'm concerned what's really
going on in the world is that there's a physical world where things
happen. I'm a physicist by training and I was taught to think of the world
in terms of energy, momentum, pressure, entropy. You've got all this
energy, things are happening, things are pushing on other things, things
are bouncing around.
But that's only half the story. The other half of the story, its
complementary half, is the story about information. In one way you can
think about what's going on in the world as energy, stuff moving around,
bouncing off each other -- that's the way people have thought about the
world for over 400 years, since Galileo and Newton. But what was missing
from that picture was what that stuff was doing: how, why, what? These are
questions about information. What is going on? It's a question about
information being processed. Thinking about the world in terms of
information is complementary to thinking about it in terms of energy. To
my mind, that is where the action is, not just thinking about the world as
information on its own, or as energy on its own, but looking at the
confluence of information and energy and how they play off against each
other. That's exactly what Cybernetics was about. Wiener, who is the real
father of the field of Cybernetics, conceived of Cybernetics in terms of
information, things like feedback control. How much information, for
example, do you need to make something happen?
The first physicists studying these problems were scientists who happened
to be physicists, and the first person who was clearly aware of the
connection between information, entropy, and physical quantities such as
energy was Maxwell. Maxwell, in the 1850s and '60s, was the first
person to write down formulas that related what we would now call
information -- ideas of information -- to things like energy and entropy.
He was also the first person to make such an explicit connection.
He also had this wonderfully evocative, far-out, William Gibsonesque notion
of a demon. "Maxwell's Demon" is this hypothetical being that was able to
look very closely at the molecules of gas whipping around in a room, and
then rearrange them. Maxwell even came up with a model in which the demon
was sitting at a partition, a tiny door, between two rooms and he could
open and shut this door very rapidly. If he saw fast molecules coming from
the right and slow molecules coming from the left, then he'd open the door
and let the fast molecules go in the lefthand side, and let the slow
molecules go into the righthand side.
And since Maxwell already knew about this connection between the average
speed of molecules and entropy, and he also knew that entropy had
something to do with the total number of configurations, the total number
of states a system can have, he pointed out, that if the demon continues
to do this, the stuff on the lefthand side will get hot, and the stuff on
the righthand side will get cold, because the molecules over on the left
are fast, and the molecules on the right are slow.
He also pointed out that there is something screwy about this because the
demon is doing something that shouldn't take a lot of effort since the
door can be as light as you want, the demon can be as small as you want,
the amount of energy you use to open and shut the door can be as small as
you desire, and yet somehow the demon is managing to make something hot on
one side. Maxwell pointed out that this was in violation of all the laws
of thermodynamics -- in particular the second law of thermodynamics which
says that if you've got a hot thing over here and a cold thing over there,
then heat flows from the hot thing to the cold thing, and the hot thing
gets cooler and the cold thing gets hotter, until eventually they end up
the same temperature. And it never happens the opposite way: you never see
something that's all the same temperature spontaneously separate into a hot
part and a cold part. Maxwell pointed
out that there was something funny going on, that there was this
connection between entropy and this demon who was capable of processing
information.
To put it all in perspective, as far as I can tell, the main thing that
separates humanity from most other living things, is the way that we deal
with information. Somewhere along the line we developed methods,
sophisticated mechanisms, for communicating via speech. Somewhere along
the line we developed natural language, which is a universal method for
processing information. Anything that you can imagine is processed with
information and anything that could be said, can be said using language.
That probably happened around a hundred thousand years ago, and since
then, the history of human beings has been the development of ever more
sophisticated ways of registering, processing, transforming, and dealing
with information. Society through this methodology creates an
organizational formula that is totally wild compared with the
organizational structures of most other species, which makes the human
species distinctive, if there is something at all that makes us
distinctive. In some sense we're just like any ordinary species out there.
The extent to which we are different has to do with having more
sophisticated methods for processing information.
Something else has happened with computers. What's happened with society
is that we have created these devices, computers, which already can
register and process huge amounts of information, which is a significant
fraction of the amount of information that human beings themselves, as a
species, can process. When I think of all the information being processed
there, all the information being communicated back and forth over the
Internet, or even just all the information that you and I can communicate
back and forth by talking, I start to look at the total amount of
information being processed by human beings -- and their artifacts -- we
are at a very interesting point of human history, which is at the stage
where our artifacts will soon be processing more information than we
physically will be able to process. So I have to ask, how many bits am I
processing per second in my head? I could estimate it: with around ten
billion neurons, it comes to something like 10 to the 15 bits per second,
around a million billion bits per second.
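Lloyd's 10^15 figure is the usual order-of-magnitude estimate: neurons times synapses times an assumed per-synapse update rate. All three numbers below are rough assumptions; only their product matters:

```python
NEURONS = 1e10               # neurons in a human brain (order of magnitude)
SYNAPSES_PER_NEURON = 1e4    # typical synapse count per neuron (rough)
BIT_OPS_PER_SYNAPSE_S = 1e1  # assumed bit-operations per synapse per second

bits_per_second = NEURONS * SYNAPSES_PER_NEURON * BIT_OPS_PER_SYNAPSE_S
print(f"{bits_per_second:.0e}")  # 1e+15 -- "a million billion bits per second"
```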
Hell if I know what it all means -- we're going to find out. That's the
great thing. We're going to be around to find out some of what this means.
If you think that information processing is where the action is, it may
mean in fact that human beings are not going to be where the action is
anymore. On the other hand, given that we are the people who created the
devices that are doing this mass of information processing, we, as a
species, are uniquely poised to make our lives interesting and fun in
completely unforeseen ways.
Every physical system, just by existing, can register information. And
every physical system, just by evolving according to its own peculiar
dynamics, can process that information. I'm interested in how the world
registers information and how it processes it. Of course, one way of
thinking about all of life and civilization is as being about how the
world registers and processes information. Certainly that's what sex is
about; that's what history is about. But since I'm a scientist who deals
with the physics of how things process information, I'm actually
interested in that notion in a more specific way. I want to figure out not
only how the world processes information, but how much information it's
processing. I've recently been working on methods to assign numerical
values to how much information is being processed, just by ordinary
physical dynamics. This is very exciting for me, because I've been working
in this field for a long time trying to come up with mathematical
techniques for characterizing how things process information, and how much
information they're processing.
About a year or two ago, I got the idea of asking the question, given the
fundamental limits on how the world is put together -- (1) the speed of
light, which limits how fast information can get from one place to
another, (2) Planck's constant, which tells you what the quantum scale is,
how small things can actually get before they disappear altogether, and
finally (3) the last fundamental constant of nature, which is the
gravitational constant, which essentially tells you how large things can
get before they collapse on themselves -- how much information can
possibly be processed. It turned out that the difficult part of this
question was thinking it up in the first place. Once I'd managed to pose
the question, it only took me six months to a year to figure out how to
answer it, because the basic physics involved was pretty straightforward.
It involved quantum mechanics, gravitation, perhaps a bit of quantum
gravity thrown in, but not enough to make things too difficult.
The other motivation for trying to answer this question was to analyze
Moore's Law. Many of our society's prized objects are the products of this
remarkable law of miniaturization -- people have been getting extremely
good at making the components of systems extremely small. This is what's
behind the incredible increase in the power of computers, what's behind
the amazing increase in information technology and communications, such as
the Internet, and it's what's behind pretty much every advance in
technology you can possibly think of -- including fields like material
science. I like to think of this as the most colossal land grab that's
ever been done in the history of mankind.
From an engineering perspective, there are two ways to make something
bigger: One is to make it physically bigger, (and human beings spent a lot
of time making things physically bigger, working out ways to deliver more
power to systems, working out ways to actually build bigger buildings,
working out ways to expand territory, working out ways to invade other
cultures and take over their territory, etc.) But there's another way to
make things bigger, and that's to make things smaller. Because the real
size of a system is not how big it actually is, the real size is the ratio
between the biggest part of a system and the smallest part of a system. Or
really the smallest part of a system that you can actually put to use in
doing things. For instance, the reason that computers are so much more
powerful today than they were ten years ago is that every year and a half
or so, the basic components of computers, the basic wires, logic chips
etc., have gone down in size by a factor of two. This is known as Moore's
Law, which is just a historical fact about the history of technology.
Every time something's size goes down by a factor of two, you can cram
twice as many of them into a box, and so every two years or so, the power
of computers doubles, and over the course of fifty years the power of
computers has gone up by a factor of a million or more. The world has
gotten a million times bigger because we've been able to make the smallest
parts of the world a million times smaller. This makes this an exciting
time to live in, but a reasonable question to ask is, where is all this
going to end? Since Gordon Moore proposed it in 1965, Moore's Law has
been written off numerous times. It was written off in the early 1970s
because people thought that fabrication techniques for integrated circuits
were going to break down and you wouldn't be able to get things smaller
than a scale size of ten microns.
Now Moore's Law is being written off again because people say that the
insulating barriers between wires in computers are getting to be only a
few atoms thick, and when you have an insulator that's only a few atoms
thick, electrons can tunnel through it and it's not a very good
insulator. Well, perhaps that will stop Moore's Law, but so far nothing
has stopped it.
So at some point, Moore's Law has to stop. This question involves the ultimate
physical limits to computation: you can't send signals faster than the
speed of light, you can't make things smaller than the laws of quantum
mechanics tell you that you can, and if you make things too big, then they
just collapse into one giant black hole. As far as we know, it's
impossible to fool Mother Nature. I thought it would be interesting to see
what the basic laws of physics said about how fast, how small, and how
powerful, computers can get. Actually these two questions: given the laws
of physics, how powerful can computers be; and where must Moore's Law
eventually stop -- turn out to be exactly the same, because they stop at
the same place, which is where every available physical resource is used
to perform computation. So every little subatomic particle, every ounce of
energy, every photon in your system -- everything is being devoted towards
performing a computation. The question is, how much computation is that?
So in order to investigate this, I thought that a reasonable form of
comparison would be to look at what I call the ultimate laptop. Let's ask
just how powerful this computer could be.
The idea here is that we can actually relate the laws of physics and the
fundamental limits of computation to something that we are familiar with
-- something of human scale that has a mass of about a kilogram, like a
nice laptop computer, and has about a liter in volume, because kilograms
and liters are pretty good to hold in your lap, are a reasonable size to
look at, and you can put it in your briefcase, et cetera. After working on
this for nearly a year what I was able to show was that the laws of
physics give absolute answers to how much information you could process
with a kilogram of matter confined to a volume of one liter. Not only
that, surprisingly, or perhaps not so surprisingly, the amount of
information that can be processed, the number of bits that you could
register in the computer, and the number of operations per second that you
could perform on these bits are related to basic physical quantities, and
to the aforementioned constants of nature, the speed of light, Planck's
constant, and the gravitational constant. In particular you can show
without much trouble that the number of ops per second -- the number of
basic logical operations per second that you can perform using a certain
amount of matter is proportional to the energy of this matter.
For those readers who are technically-minded, it's not very difficult to
whip out the famous formula E = mc^2 and show, using work of Norm Margolus
and Lev Levitin here in Boston, that the total number of elementary logical
operations that you can perform per second using a kilogram of matter is
the amount of energy, mc^2, times two, divided by h-bar, Planck's reduced
constant, times pi. Well, you don't have to be Einstein to do the calculation: the
mass is one kilogram, the speed of light is 3 times ten to the eighth
meters per second, so mc^2 is about ten to the 17th joules, quite a lot of
energy (I believe it's roughly the amount of energy used by all the
world's nuclear power plants in the course of a week or so), a lot of
energy, but let's suppose you could use it to do a computation. So you've
got ten to the 17th joules, and h-bar, the quantum scale, is ten to the
minus 34 joule-seconds, roughly. So there you go. I have ten to the
17th joules, I divide by ten to the minus 34 joule-seconds, and I have
the number of ops: ten to the 51 ops per second. So you can perform 10 to
the 51 operations per second, and ten to the 51 is about a million billion
billion billion billion billion ops per second -- a lot faster
than conventional laptops. And this is the answer. You can't do any better
than that, so far as the laws of physics are concerned.
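The arithmetic just described can be checked in a few lines. This is a sketch of the same back-of-the-envelope calculation; the constant values are the ones quoted later in the article:

```python
import math

# Margolus-Levitin bound: a system with average energy E can perform at most
# 2E / (pi * hbar) elementary logical operations per second.
m = 1.0             # mass of the ultimate laptop, kg
c = 2.9979e8        # speed of light, m/s
hbar = 1.0545e-34   # reduced Planck constant, J*s

energy = m * c**2                             # ~9.0e16 J locked up in 1 kg of matter
ops_per_second = 2 * energy / (math.pi * hbar)
print(f"E = {energy:.4e} J")                  # E = 8.9874e+16 J
print(f"max ops/s = {ops_per_second:.4e}")    # ~5.4e50, i.e. about 10^51
```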
Of course, since publication of the Nature article, people keep calling me
up to order one of these laptops; unfortunately the fabrication plant to
build it has not yet been constructed. You might then actually ask why it
is that our conventional laptops, are so slow by comparison when we've
been on this Moore's Law track for 50 years now? The answer is that they
make the mistake, which could be regarded as a safety feature of the
laptop, of locking up most of their energy in the form of matter, so that
rather than using that energy to manipulate information and transform it,
most of it goes into making the laptop sit around and be a laptop.
As you can tell, if I were to take a week's energy output of all the
world's nuclear power plants and liberate it all at once, I would have
something that looked a lot like a thermonuclear explosion, because a
thermonuclear explosion is essentially taking roughly a kilogram of matter
and turning it into energy. So you can see right away that the ultimate
laptop would have some relatively severe packaging problems. Since I am a
professor of mechanical engineering at MIT, I think packaging problems are
where it's at. We're talking about some very severe material and
fabrication problems to prevent this thing from taking not only you but
the entire city of Boston out with it when you boot it up the first time.
Needless to say, we didn't actually figure out how we were going to put
this thing into a package, but that's part of the fun of doing
calculations according to the ultimate laws of physics. We decided to
figure out how many ops per second we could perform, and to worry about
the packaging afterwards. Now that we've got 10 to the 51 ops per second
the next question is: what's the memory space of this laptop?
When I go out to buy a new laptop, I first ask how many ops per second can
it perform? If it's something like a hundred megahertz, it's pretty slow
by current standards; if it's a gigahertz, that's pretty fast though we're
still very far away from the 10 to the 51 ops per second. With a
gigahertz, we're approaching 10 to the 10th, 10 to the 11th, 10 to the
12th, depending on how ops per second are currently counted. Next, how many
bits do I have -- how big is the hard drive for this computer, or how big
is its RAM? We can also use the laws of physics to calculate that figure.
And computing memory capability is something that people could have done
back in the early decades of this century.
We know how to count bits. We take the number of states, and the number of
states is two raised to the power of the number of bits. Ten bits, two to
the tenth, 1024 states. Twenty bits, two to the twentieth, roughly a
million states. You keep on going and you find that with about 300 bits --
two to the 300, which is about ten to the ninetieth -- you have more states
than there are particles in the universe. If
you had 300 bits, you could assign every particle in the universe a serial
number, which is a powerful use of information. You can use a very small
number of bits to label a huge number of things.
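The counting here is easy to verify. In this sketch, the ~10^80 particle count for the visible universe is a commonly quoted estimate and my own assumption, not a figure from the text:

```python
import math

particles = 10**80                    # assumed estimate of particles in the visible universe
bits = math.ceil(math.log2(particles))
print(bits)                           # 266 -- so 300 bits label every particle with room to spare

print(2**10)                          # 1024 states from ten bits
print(2**300 > particles)             # True: 2^300 (~10^90) exceeds the particle count
```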
How many bits does this ultimate laptop have?
I have a kilogram of matter confined to the volume of a liter; how many
states, how many possible states for matter confined to the volume of a
liter can there possibly be?
This happened to be a calculation that I knew how to do, because I had
studied cosmology, and in cosmology there's this event, called the Big
Bang, which happened a long time ago, about 13 billion years ago, and
during the Big Bang, matter was at extremely high densities and pressures.
I learned from cosmology how to calculate the number of states for matter
of very high densities and pressures. In actuality, the density is not
that great. I have a kilogram of matter in a liter. The density is similar
to what we might normally expect today. However, if you want to ask what
the number of states is for this matter in a liter, I've got to calculate
every possible configuration, every possible elementary quantum state for
this kilogram of matter in a liter of volume. It turns out, when you count
most of these states, that this matter looks like it's in the midst of a
thermonuclear explosion. Like a little piece of the Big Bang -- a few
seconds after the universe was born -- when the temperature was around a
billion degrees. At a billion degrees, if you ask what most states for
matter are if it's completely liberated and able to do whatever it wants,
you'll find that it looks a lot like a plasma at a billion degrees
Kelvin. Electrons and positrons are forming out of nothing, going back
into photons again, there's a lot of elementary particles whizzing about
and it's very hot. Lots of stuff is happening and you can still count the
number of possible states using the conventional methods that people use
to count states in the early universe; you take the logarithm of the
number of states, get a quantity that's normally thought of as being the
entropy of the system (the entropy is simply the logarithm of the number
of states; the logarithm of the number of states, taken base 2, is the
number of bits, because 2 raised to the power of the number of bits is the
number of states. What more do I need to say?)
When we count them, we find that there are roughly 10 to the 31 bits
available. That means that there are 2 to the 10 to the 31 possible states
that this matter could be in. That's a lot of states -- but we can count
them. The interesting thing about that is that you notice we've got 10 to
the 31 bits, we're performing 10 to the 51 ops per second, so each bit can
perform about 10 to the 20 ops per second. What does this quantity mean?
It turns out that this quantity -- the number of ops per second per bit --
is essentially the temperature of this plasma. I take the temperature,
multiply it by Boltzmann's constant, and what I get is the energy per bit;
that's what temperature is. It tells you how much
energy is available for a bit to perform a logical operation. Since a
certain amount of energy lets me perform a certain number of operations
per second (just divide by Planck's constant), the temperature tells me
how many ops per bit per second I can perform.
Then I know not only the number of ops per second, and the number of bits,
but also the number of ops per bit per second that can be performed by
this ultimate laptop, a kilogram of matter in a liter volume; it's the
number of ops per bit per second that could be performed by these
elementary particles back at the beginning of time by the Big Bang; it's
just the total number of ops that each bit can perform per second. The
number of times it can flip, the number of times it can interact with its
neighboring bits, the number of elementary logical operations. And it's a
number, right? 10 to the 20. Just the way that the total number of bits,
10 to the 31, is a number -- it's a physical parameter that characterizes
a kilogram of matter and a liter of volume. Similarly, 10 to the 51 ops
per second is the number of ops per second that characterize a kilogram of
matter, whether it's in a liter volume or not.
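As a rough consistency check of the ops-per-bit-per-second figure, here is a sketch assuming the ~10^9 kelvin plasma temperature mentioned above:

```python
hbar = 1.0545e-34   # reduced Planck constant, J*s
kB = 1.3805e-23     # Boltzmann's constant, J/K

ops_per_second = 5.4258e50   # total ops/s from the energy bound
bits = 1e31                  # memory of the ultimate laptop
ops_per_bit = ops_per_second / bits
print(f"ops per bit per second: {ops_per_bit:.1e}")   # ~5e19, of order 10^20

# Temperature sets the energy per bit (~kB*T), and energy sets speed (~E/hbar),
# so kB*T/hbar should land in the same ballpark:
T = 1e9                      # ~a billion kelvin
print(f"kB*T/hbar: {kB * T / hbar:.1e}")              # ~1.3e20
```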
We've gone a long way down this road, so there's no point in stopping --
at least in these theoretical exercises where nobody gets hurt. So far all
we've used are the elementary constants of nature. The speed of light
tells us the rate of converting matter into energy via E = mc^2 -- how
much energy we get from a particular mass.
Then we use the Planck scale, the quantum scale, because the quantum scale
tells you both how many operations per second you can get from a certain
amount of energy, and it also tells you how to count the number of states
available for a certain amount of energy. So by taking the speed of light,
and the quantum scale, we are able to calculate the number of ops per
second that a certain amount of matter can perform, and we're able to
calculate the amount of memory space that we have available for our
ultimate computer.
Then we can also calculate all sorts of interesting issues, like what's
the possible input-output rate for all these bits in a liter of volume.
That can actually be calculated quite easily from what I've just
described, because to get all this information into and out of a liter
volume -- take a laptop computer -- you can say okay, here's all these
bits, they're sitting in a liter volume, let's move this liter volume
over, by its own distance, at the speed of light. You're not going to be
able to get the information in or out faster than that.
We can find out how many bits per second we get into and out of our
ultimate laptop. And we find we can get around 10 to the 40, or 10 to the
41, or perhaps, in honor of Douglas Adams and his mystical number 42, even
10 to the 42 bits per second in and out of our ultimate laptop. So you can
calculate all these different parameters that you might think are
interesting, and that tells you how good a modem you could possibly have
for this ultimate laptop -- how many bits per second can you get in and
out over the Ultimate Internet, whatever the ultimate Internet would be. I
guess the Ultimate Internet is just space/time itself in this picture.
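The input-output estimate works out as follows (a sketch; the liter volume is idealized as a cube about 10 cm on a side):

```python
c = 2.9979e8      # speed of light, m/s
size = 0.1        # side of a liter cube, m
bits = 1e31       # bits registered in the volume

# All the bits can leave no faster than the volume itself could move over
# by its own size at the speed of light.
transit_time = size / c            # ~3.3e-10 s
io_rate = bits / transit_time
print(f"{io_rate:.1e} bits/s")     # ~3e40, in the 10^40-10^41 range
```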
I noted that you can't possibly do better than this, right? These are the
laws of physics. But you might be able to do better in other ways. For
example, let's think about the architecture of this computer. I've got
this computer that's doing 10 to the 51 ops per second, or 10 to the 31
bits. Each bit can flip 10 to the 20 times per second. That's pretty fast.
The next question is how long does it take a bit on this side of the
computer to send a signal to a bit on that side of the computer in the
course of time it takes it to do an operation.
As we've established, it has a liter volume, which is about ten
centimeters on each side, so it takes about ten to the minus ten seconds
for light to go from one side to another -- one ten billionth of a second
for light to go from this side to the other. These bits are flipping 10 to
the 20 times per second -- a hundred billion billion times per second.
This bit is flipping ten billion times in the course of time it takes a
signal to go from one side of the computer to the other. This is not a
very serial computation. A lot of action is taking place over here in the
time it takes to communicate when all the action is taking place over on
this side of the computer. This is what's called a parallel computation.
You could say that in the kinds of densities of matter that we're familiar
with, like a kilogram per liter volume, which is the density of water, we
find that we can only perform a very parallel computation, if we operate
at the ultimate limits of computation; lots of computational action takes
place over here during the time it takes a signal to go from here to there
and back again.
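The degree of parallelism can be quantified by comparing the light-crossing time of the box with the bit-flip time (a sketch using the figures above):

```python
c = 2.9979e8           # speed of light, m/s
size = 0.1             # 10 cm across, m
flip_rate = 1e20       # ops per bit per second

crossing_time = size / c                     # ~3.3e-10 s for a signal to cross the box
flips_per_crossing = flip_rate * crossing_time
print(f"{flips_per_crossing:.1e}")           # ~3e10: tens of billions of flips per transit
```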
How can we do better? How could we make the computation more serial?
Let's suppose that we want our machine to do more serial computation, so
in the time it takes to send a signal from one side of the computer to the
other, there are fewer ops that are being done. The obvious solution is to
make the computer smaller, because if I make the computer smaller by a
factor of two, it only takes half the time for light, for a signal, for
information, to go from one side of the computer to the other.
If I make it smaller by a factor of ten billion, it only takes one ten
billionth of the time for its signal to go from one side of the computer
to the other. You also find that when you make it smaller, these pieces of
the computer tend to speed up, because you tend to have more energy per
bit available in each case. If you go through the calculation you find out
that as the computer gets smaller and smaller, as all the mass is
compressed into a smaller and smaller volume, you can do a more serial
computation.
When does this process stop? When can every bit in the computer talk with
every other bit, in the course of time it takes for a bit to flip? When
can everybody get to talk with everybody else in the same amount of time
that it takes them to talk with their neighbors?
As you make the computer smaller and smaller, it gets denser and denser,
until you have a kilogram of matter in an ever smaller volume. Eventually
the matter assumes more and more interesting configurations, until it's
actually going to take a very high pressure to keep this system down at
this very small volume. The matter assumes stranger and stranger
configurations, and tends to get hotter and hotter and hotter, until at a
certain point a bad thing happens. The bad thing that happens is that it's
no longer possible for light to escape from it -- it becomes a black hole.
What happens to our computation at this point? This is probably very bad
for the computation, right? Or rather, it's going to be bad for
input-output. Input is fine, because stuff can still go in, but output is
bad because nothing comes out of a black hole. Luckily, however, we're
saved by the very laws of quantum mechanics that we were using to
calculate how much information a physical system can register and how
fast it can process that information: those laws still hold for black
holes.
Stephen Hawking showed, in the 1970s, that black holes, if you treat them
quantum-mechanically, actually can radiate out information. There's an
interesting controversy as to whether that information has anything to do
with the information that went in. Stephen Hawking and John Preskill have
a famous bet, where Preskill says yes -- the information that comes out of
a black hole reflects the information that went in. Hawking says no -- the
information that comes out of a black hole when it radiates doesn't have
anything to do with the information that went in; the information that
went in goes away. I don't know the answer to this.
But let's suppose for a moment that Hawking is wrong and Preskill is
right. When a black hole evaporates, it radiates; the wavelength of the
radiation coming out is comparable to the radius of the black hole. This
kilogram black hole is really radiating at a whopping rate; it's
radiating out photons with wavelengths of around 10 to the minus 27
meters. This is not something you would actually wish to be close to --
it would be very dangerous. In fact it would look a lot like a huge
explosion. But let's suppose that the information that's
being radiated out by the black hole is in fact the information that went
in to construct it, but simply transformed in a particular way. What you
then see is that the black hole can be thought of in some sense as
performing a computation.
You take the information about the matter that's used to form the black
hole, you program it in the sense that you give it a particular
configuration, you put this electron here, you put that electron there,
you make that thing vibrate like this, and then you collapse this into a
black hole. Some 10 to the minus 27 seconds later -- a billion billion
billionth of a second -- the thing goes cablooey, and you get all this
information out again, but now the information has been transformed, by
some dynamics, and we don't know what this dynamics is, into a new form.
In fact we would need to know something like string theory or quantum
gravity to figure out how it's been transformed. But you can imagine that
this could in fact function as a computer. We don't know how to make it
compute, but indeed, it's taking in information, it's transforming it in a
systematic form according to the laws of physics, all right, and then
poop! It spits it out again.
It's a dangerous thing -- the Ultimate Laptop was already pretty
dangerous, because it looked like a thermonuclear explosion inside of a
liter bottle of Coca-Cola. This is even worse, because in fact it looks
like a thermonuclear explosion except that it started out at a radius of
10 to the minus 27 meters, one billion billion billionth of a meter, so
it's really radiating at a very massive rate. But suppose you could
somehow read information coming out of the black hole. You would indeed
have performed the ultimate computation that you could have performed
using a kilogram of matter, in this case confining it to a volume of 10 to
the minus 81 cubic meters. Pretty minuscule but we're allowed to imagine
this happening.
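The black-hole numbers follow from the Schwarzschild radius r = 2Gm/c^2 (a sketch; the volume is taken to be of order r^3):

```python
G = 6.673e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.9979e8     # speed of light, m/s
m = 1.0          # mass of the collapsed laptop, kg

r = 2 * G * m / c**2
print(f"radius: {r:.2e} m")              # ~1.5e-27 m, the 10^-27 m quoted above
print(f"volume ~ r^3: {r**3:.1e} m^3")   # ~3e-81 m^3, of order 10^-81
```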
Is there anything more to the story?
After writing my paper on the ultimate laptop in Nature, I realized this
was insufficiently ambitious; that of course the obvious question to ask
at this point is not what is the ultimate computational capacity of a
kilogram of matter, but instead to ask what is the ultimate computational
capacity of the universe as a whole? After all, the universe is processing
information, right? Just by existing, all physical systems register
information, just by evolving their own natural physical dynamics, they
transform that information, they process it. So the question then is how
much information has the universe processed since the Big Bang?
_________________________________________________________________
[Now to the famous article by Seth Lloyd.]
-------------------
Over the past half century, the amount of information that computers are
capable of processing and the rate at which they process it has doubled
every 18 months, a phenomenon known as Moore's law. A variety of
technologies -- most recently, integrated circuits -- have enabled this
exponential increase in information processing power. But there is no
particular reason why Moore's law should continue to hold: it is a law of
human ingenuity, not of nature. At some point, Moore's law will break
down. The question is, when?
The answer to this question will be found by applying the laws of physics
to the process of computation (1-85). Extrapolation of current exponential
improvements over two more decades would result in computers that process
information at the scale of individual atoms. Although an Avogadro-scale
computer that can act on 10^23 bits might seem implausible, prototype
quantum computers that store and process information on individual atoms
have already been demonstrated (64,65,76-80). Existing quantum computers
may be small and simple, and able to perform only a few hundred operations
on fewer than ten quantum bits or 'qubits', but the fact that they work at
all indicates that there is nothing in the laws of physics that forbids
the construction of an Avogadro-scale computer.
The purpose of this article is to determine just what limits the laws of
physics place on the power of computers. At first, this might seem a
futile task: because we do not know the technologies by which computers
1000, 100, or even 10 years in the future will be constructed, how can we
determine the physical limits of those technologies? In fact, I will show
that a great deal can be determined concerning the ultimate physical
limits of computation simply from knowledge of the speed of light, c =
2.9979 x 10^8 ms^-1, Planck's reduced constant, h-bar= h/2pi = 1.0545 x
10^-34 Js, and the gravitational constant, G = 6.673 x 10^-11 m^3 kg^-1
s^-2. Boltzmann's constant, k-sub-B = 1.3805 x 10^-23 J K^-1, will also be
crucial in translating between computational quantities such as memory
space and operations per bit per second, and thermodynamic quantities such
as entropy and temperature. In addition to reviewing previous work on how
physics limits the speed and memory of computers, I present results -- which
are new except as noted -- of the derivation of the ultimate speed limit to
computation, of trade-offs between memory and speed, and of the analysis
of the behaviour of computers at physical extremes of high temperatures
and densities.
Before presenting methods for calculating these limits, it is important to
note that there is no guarantee that these limits will ever be attained,
no matter how ingenious computer designers become. Some extreme cases such
as the black-hole computer described below are likely to prove extremely
difficult or impossible to realize. Human ingenuity has proved great in
the past, however, and before writing off physical limits as unattainable,
we should realize that certain of these limits have already been attained
within a circumscribed context in the construction of working quantum
computers. The discussion below will note obstacles that must be
sidestepped or overcome before various limits can be attained.
Energy limits speed of computation
To explore the physical limits of computation, let us calculate the
ultimate computational capacity of a computer with a mass of 1 kg
occupying a volume of 1 litre, which is roughly the size of a conventional
laptop computer. Such a computer, operating at the limits of speed and
memory space allowed by physics, will be called the 'ultimate laptop'
(Fig. 1).
[Figure 1 The ultimate laptop. The 'ultimate laptop' is a computer with a
mass of 1 kg and a volume of 1 liter operating at the fundamental limits
of speed and memory capacity fixed by physics. The ultimate laptop
performs 2mc^2/pi h-bar = 5.4258 x 10^50 logical operations per second on
~10^31 bits. Although its computational machinery is in fact in a highly
specified physical state with zero entropy, while it performs a
computation that uses all its resources of energy and memory space it
appears to an outside observer to be in a thermal state at ~10^9 kelvin.
The ultimate laptop looks like a small piece of the Big Bang.]
First, ask what limits the laws of physics place on the speed of such a
device. As I will now show, to perform an elementary logical operation in
time Delta-t requires an average amount of energy E greater than or equal
to pi h-bar/2Delta-t. As a consequence, a system with average energy E can
perform a maximum of 2E/pi h-bar logical operations per second. A 1-kg
computer has average energy E = mc^2 = 8.9874 x 10^16 J. Accordingly, the
ultimate laptop can perform a maximum of 5.4258 x 10^50 operations per
second.
Maximum speed per logical operation
For the sake of convenience, the ultimate laptop will be taken to be a
digital computer. Computers that operate on nonbinary or continuous
variables obey similar limits to those that will be derived here. A
digital computer performs computation by representing information in
terms of binary digits, or bits, which can take the value 0 or 1, and then
processes that information by performing simple logical operations such as
AND, NOT and FANOUT. The operation, AND, for instance, takes two binary
inputs X and Y and returns the output 1 if and only if both X and Y are 1;
otherwise it returns the output 0. Similarly, NOT takes a single binary
input X and returns the output 1 if X = 0 and 0 if X = 1. FANOUT takes a
single binary input X and returns two binary outputs, each equal to X. Any
boolean function can be constructed by repeated application of AND, NOT
and FANOUT. A set of operations that allows the construction of arbitrary
boolean functions is called universal. The actual physical device that
performs a logical operation is called a logic gate.
How fast can a digital computer perform a logical operation? During such
an operation, the bits in the computer on which the operation is performed
go from one state to another. The problem of how much energy is required
for information processing was first investigated in the context of
communications theory by Levitin (11-16), Bremermann (17-19), Bekenstein
(20-22) and others, who showed that the laws of quantum mechanics
determine the maximum rate at which a system with spread in energy Delta-E
can move from one distinguishable state to another. In particular, the
correct interpretation of the time-energy Heisenberg uncertainty principle
Delta-E Delta-t is greater than or equal to h-bar is not that it takes
time Delta-t to measure energy to an accuracy Delta-E (a fallacy that was
put to rest by Aharonov and Bohm (23,24)), but rather that a quantum state
with spread in energy Delta-E takes time at least Delta-t = pi
h-bar/2Delta-E to evolve to an orthogonal (and hence distinguishable)
state(23-26). More recently, Margolus and Levitin (15,16) extended this
result to show that a quantum system with average energy E takes time at
least Delta-t = pi h-bar/2E to evolve to an orthogonal state.
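To get a feel for the Margolus-Levitin bound Delta-t = pi h-bar/2E at gate scale, here is a sketch for a gate with 1 eV of average energy (the 1 eV figure is my own illustrative assumption, not from the article):

```python
import math

hbar = 1.0545e-34    # reduced Planck constant, J*s
eV = 1.602e-19       # one electronvolt in joules

E = 1 * eV                          # assumed average gate energy
dt = math.pi * hbar / (2 * E)       # minimum time to reach an orthogonal state
print(f"minimum switching time: {dt:.2e} s")   # ~1.0e-15 s: about a femtosecond
```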
Performing quantum logic operations
As an example, consider the operation NOT performed on a qubit with
logical states |0> and |1>. (For readers unfamiliar with quantum
mechanics, the 'bracket' notation |> signifies that whatever is contained
in the bracket is a quantum-mechanical variable; |0> and |1> are vectors
in a two-dimensional vector space over the complex numbers.) To flip the
qubit, one can apply a potential H = E-sub-0|E-sub-0><E-sub-0|+
E-sub-1|E-sub-1><E-sub-1| with energy eigenstates |E-sub-0> =
(1/sqrt2)(|0> + |1>) and |E-sub-1> = (1/sqrt2)(|0> - |1>). Because |0> =
(1/sqrt2)(|E-sub-0> + |E-sub-1>) and |1> = (1/sqrt2)(|E-sub-0>-
|E-sub-1>), each logical state |0>, |1> has spread in energy Delta-E =
(E-sub-1 - E-sub-0)/2. It is easy to verify that after a length of time
Delta-t = pi h-bar/2Delta-E the qubit evolves so that |0> -> |1> and |1>
-> |0>. That is, applying the potential effects a NOT operation in a time
that attains the limit given by quantum mechanics. Note that the average
energy E of the qubit in the course of the logical operation is <0|H|0> =
<1|H|1> = (E-sub-0 + E-sub-1)/2 = E-sub-0 + Delta-E. Taking the ground-state
energy E-sub-0 = 0 gives E = Delta-E. So the amount of time it takes to
perform a NOT operation can also be written as Delta-t = pi h-bar/2E. It
is straightforward to show (15,16) that no quantum system with average
energy E can move to an orthogonal state in a time less than Delta-t. That
is, the speed with which a logical operation can be performed is limited
not only by the spread in energy, but also by the average energy. This
result will prove to be a key component in deriving the speed limit for
the ultimate laptop.
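The NOT-gate example can be simulated directly. This sketch uses natural units with h-bar = 1 and the arbitrary energies E-sub-0 = 0, E-sub-1 = 2 (my own choices for illustration), and checks that |0> evolves to |1> in time Delta-t = pi h-bar/2Delta-E:

```python
import numpy as np

hbar = 1.0
E0, E1 = 0.0, 2.0                       # energy eigenvalues (illustrative choice)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
e0 = (ket0 + ket1) / np.sqrt(2)         # |E0> = (|0> + |1>)/sqrt2
e1 = (ket0 - ket1) / np.sqrt(2)         # |E1> = (|0> - |1>)/sqrt2

dE = (E1 - E0) / 2                      # spread in energy of each logical state
t = np.pi * hbar / (2 * dE)             # the quantum speed-limit time

# Time-evolution operator exp(-i H t / hbar), built from the eigendecomposition
# of H = E0|E0><E0| + E1|E1><E1|.
U = (np.exp(-1j * E0 * t / hbar) * np.outer(e0, e0)
     + np.exp(-1j * E1 * t / hbar) * np.outer(e1, e1))

out = U @ ket0
print(np.round(np.abs(out), 6))         # [0. 1.]: |0> -> |1> up to a global phase
```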
AND and FANOUT can be enacted in a way that is analogous to the NOT
operation. A simple way to perform these operations in a
quantum-mechanical context is to enact a so-called Toffoli or
controlled-controlled-NOT operation (31). This operation takes three
binary inputs, X, Y and Z, and returns three outputs, X', Y' and Z'.
The first two inputs pass through unchanged, that is, X' = X, Y' = Y. The
third input passes through unchanged unless both X and Y are 1, in which
case it is flipped. This is universal in the sense that suitable choices
of inputs allow the construction of AND, NOT and FANOUT. When the third
input is set to zero, Z = 0, then the third output is the AND of the first
two: Z' = X AND Y. So AND can be constructed. When the first two inputs
are 1, X = Y = 1, the third output is the NOT of the third input, Z' = NOT
Z. Finally, when the second input is set to 1, Y = 1, and the third to
zero, Z = 0, the first and third output are the FANOUT of the first input,
X' = X, Z' = X. So arbitrary Boolean functions can be constructed from the
Toffoli operation alone.
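The classical truth-table behaviour of the Toffoli gate, and the constructions of AND, NOT and FANOUT described above, can be sketched in a few lines (the function names are mine):

```python
def toffoli(x, y, z):
    """Controlled-controlled-NOT: flip z if and only if x = y = 1."""
    return x, y, z ^ (x & y)

# AND: set the third input to 0; the third output is x AND y.
def AND(x, y):
    return toffoli(x, y, 0)[2]

# NOT: set the first two inputs to 1; the third output is NOT z.
def NOT(z):
    return toffoli(1, 1, z)[2]

# FANOUT: set y = 1, z = 0; the first and third outputs are copies of x.
def FANOUT(x):
    out = toffoli(x, 1, 0)
    return out[0], out[2]
```

Note that toffoli is its own inverse: applying it twice returns the original inputs, which is what makes it suitable for reversible (and quantum) computation.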
By embedding a controlled-controlled-NOT gate in a quantum context, it is
straightforward to see that AND and FANOUT, like NOT, can be performed at
a rate 2E/pi h-bar times per second, where E is the average energy of the
logic gate that performs the operation. More complicated logic operations
that cycle through a larger number of quantum states (such as those on
non-binary or continuous quantum variables) can be performed at a rate
E/pi h-bar--half as fast as the simpler operations (15,16). Existing
quantum logic gates in optical-atomic and nuclear magnetic resonance (NMR)
quantum computers actually attain this limit. In the case of NOT, E is the
average energy of interaction of the qubit's dipole moment (electric
dipole for optical-atomic qubits and nuclear magnetic dipole for NMR qubits)
with the applied electromagnetic field. In the case of multiqubit
operations such as the Toffoli operation, or the simpler two-bit
controlled-NOT operation, which flips the second bit if and only if the
first bit is 1, E is the average energy in the interaction between the
physical systems that register the qubits.
Ultimate limits to speed of computation
We are now in a position to derive the first physical limit to
computation, that of energy. Suppose that one has a certain amount of
energy E to allocate to the logic gates of a computer. The more energy one
allocates to a gate, the faster it can perform a logic operation. The
total number of logic operations performed per second is equal to the sum
over all logic gates of the operations per second per gate. That is, a
computer can perform no more than
BigSigma-over-l 1/Delta-t-sub-l <= BigSigma-over-l 2E-sub-l/pi h-bar
= 2E/pi h-bar
operations per second. In other words, the rate at which a computer can
compute is limited by its energy. (Similar limits have been proposed by
Bremermann in the context of the minimum energy required to communicate a
bit (17-19), although these limits have been criticized for
misinterpreting the energy-time uncertainty relation (21), and for failing
to take into account the role of degeneracy of energy eigenvalues (13,14)
and the role of nonlinearity in communications (7-9).) Applying this result
to a 1-kg computer with energy E = mc^2 = 8.9874 x 10^16 J shows that the
ultimate laptop can perform a maximum of 5.4258 x 10^50 operations per
second.
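The 5.4 x 10^50 figure follows directly from the bound just derived. A quick numerical check (constants are standard CODATA values; the variable names are mine):

```python
from math import pi

hbar = 1.0545718e-34   # reduced Planck constant, J s
c = 2.99792458e8       # speed of light, m/s
m = 1.0                # mass of the ultimate laptop, kg

E = m * c**2                      # total energy, ~8.99e16 J
ops_per_sec = 2 * E / (pi * hbar) # the 2E/pi h-bar bound
print(f"{ops_per_sec:.4e}")       # ~5.4e50 operations per second
```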
Parallel and serial operation
An interesting feature of this limit is that it is independent of computer
architecture. It might have been thought that a computer could be speeded
up by parallelization, that is, by taking the energy and dividing it up
among many subsystems computing in parallel. But this is not the case: if
the energy E is spread among N logic gates, each one operates at a rate
2E/pi h-bar N, and the total number of operations per second, N x 2E/pi
h-bar N = 2E/pi h-bar, remains the same. If the energy is allocated to
fewer logic gates (more serial operation), the rate at which they operate
and the spread in energy per gate increase; if the energy is allocated to
more logic gates (more parallel operation), then the rate at which they
operate and the spread in energy per gate decrease. Note that in this
parallel case, the overall spread in energy of the computer as a whole is
considerably smaller than the average energy: in general Delta-E =
sqrt(BigSigma-over-l Delta-E-sub-l^2) ~ sqrt(N) Delta-E-sub-l, whereas E =
BigSigma-over-l E-sub-l = N E-sub-l. Spreading the energy among many logic
gates may allow the computation to be performed more efficiently, but it
does not alter the total number of operations per second. As I will show
below, the degree of parallelizability of the computation to be performed
determines the most efficient distribution of energy among the parts of
the computer. Computers in which energy is relatively evenly distributed
over a larger volume are better suited for performing parallel
computations. More compact computers and computers with an uneven
distribution of energy are better suited to performing serial
computations.
Comparison with existing computers
Conventional laptops operate much more slowly than the ultimate laptop.
There are two reasons for this inefficiency. First, most of the energy is
locked up in the mass of the particles of which the computer is
constructed, leaving only an infinitesimal fraction for performing logic.
Second, a conventional computer uses many degrees of freedom (billions and
billions of electrons) for registering a single bit. From the physical
perspective, such a computer operates in a highly redundant fashion. There
are, however, good technological reasons for such redundancy, with
conventional designs depending on it for reliability and
manufacturability. But in the present discussion, the subject is not what
computers are but what they might be, and in this context the laws of
physics do not require redundancy to perform logical operations--recently
constructed quantum microcomputers use one quantum degree of freedom for
each bit and operate at the Heisenberg limit Delta-t = pi h-bar/2 Delta-E
for the time needed to flip a bit (64,65,76-80). Redundancy is, however,
required for error correction, as will be discussed below.
In sum, quantum mechanics provides a simple answer to the question of how
fast information can be processed using a given amount of energy. Now it
will be shown that thermodynamics and statistical mechanics provide a
fundamental limit to how many bits of information can be processed using a
given amount of energy confined to a given volume. Available energy
necessarily limits the rate at which a computer can process information.
Similarly, the maximum entropy of a physical system determines the amount
of information it can process. Energy limits speed. Entropy limits memory.
Entropy limits memory space
The amount of information that a physical system can store and process is
related to the number of distinct physical states that are accessible to
the system. A collection of m two-state systems has 2^m accessible states
and can register m bits of information. In general, a system with N
accessible states can register log (base 2) N bits of information. But it
has been known for more than a century that the number of accessible
states of a physical system, W, is related to its thermodynamic entropy by
the formula S = k-sub-B lnW, where k-sub-B is Boltzmann's constant.
(Although this formula is inscribed on Boltzmann's tomb, it is attributed
originally to Planck; before the turn of the century, k-sub-B was often
known as Planck's constant.)
The amount of information that can be registered by a physical system is I
= S(E)/k-sub-B ln2 where S(E) is the thermodynamic entropy of a system
with expectation value for the energy E. Combining this formula with the
formula 2E/pi h-bar for the number of logical operations that can be
performed per second, we see that when it is using all its memory, the
number of operations per bit per second that our ultimate laptop can
perform is 2ln(2)k-sub-B E/pi h-bar S ~ k-sub-B T/h-bar, where T =
(partialderivativeS/partialderivativeE)^-1 is the temperature of 1 kg of
matter in a maximum entropy state in a volume of 1 litre. The entropy governs
the amount of information the system can register and the temperature
governs the number of operations per bit per second that it can perform.
Because thermodynamic entropy effectively counts the number of bits
available to a physical system, the following derivation of the memory
space available to the ultimate laptop is based on a thermodynamic
treatment of 1 kg of matter confined to a volume of 1 l in a maximum
entropy state. Throughout this derivation, it is important to remember
that although the memory space available to the computer is given by the
entropy of its thermal equilibrium state, the actual state of the ultimate
laptop as it performs a computation is completely determined, so that its
entropy remains always equal to zero. As above, I assume that we have
complete control over the actual state of the ultimate laptop, and are
able to guide it through its logical steps while insulating it from all
uncontrolled degrees of freedom. But as the following discussion will make
clear, such complete control will be difficult to attain (see Box 1).
[Box 1: The role of thermodynamics in computation
[The fact that entropy and information are intimately linked has been
known since Maxwell introduced his famous 'demon' well over a century ago
(1). Maxwell's demon is a hypothetical being that uses its
information-processing ability to reduce the entropy of a gas. The first
results in the physics of information processing were derived in attempts
to understand how Maxwell's demon could function (1-4). The role of
thermodynamics in computation has been examined repeatedly over the past
half century. In the 1950s, von Neumann (10) speculated that each logical
operation performed in a computer at temperature T must dissipate energy
k-sub-B T ln2, thereby increasing entropy by k-sub-B ln2. This speculation
proved to be false. The precise, correct statement of the role of entropy
in computation was attributed to Landauer (5), who showed that reversible,
that is, one-to-one, logical operations such as NOT can be performed, in
principle, without dissipation, but that irreversible, many-to-one
operations such as AND or ERASE require dissipation of at least k-sub-B
ln2 for each bit of information lost. (ERASE is a one-bit logical
operation that takes a bit, 0 or 1, and restores it to 0.) The argument
behind Landauer's principle can be readily understood (37). Essentially,
the one-to-one dynamics of hamiltonian systems implies that when a bit is
erased the information that it contains has to go somewhere. If the
information goes into observable degrees of freedom of the computer, such
as another bit, then it has not been erased but merely moved; but if it
goes into unobservable degrees of freedom such as the microscopic motion
of molecules it results in an increase of entropy of at least k-sub-B ln2.
[In 1973, Bennett (28-30) showed that all computations could be performed
using only reversible logical operations. Consequently, by Landauer's
principle, computation does not require dissipation. (Earlier work by
Lecerf (27) had anticipated the possibility of reversible computation, but
not its physical implications. Reversible computation was discovered
independently by Fredkin and Toffoli(31).) The energy used to perform a
logical operation can be 'borrowed' from a store of free energy such as a
battery, 'invested' in the logic gate that performs the operation, and
returned to storage after the operation has been performed, with a net
'profit' in the form of processed information. Electronic circuits based
on reversible logic have been built and exhibit considerable reductions in
dissipation over conventional irreversible circuits (33-35).
[Under many circumstances it may prove useful to perform irreversible
operations such as erasure. If our ultimate laptop is subject to an error
rate of epsilon bits per second, for example, then error-correcting codes
can be used to detect those errors and reject them to the environment at a
dissipative cost of epsilon k-sub-B T-sub-E ln2 J s^-1, where T-sub-E is
the temperature of the environment. (k-sub-B T ln2 is the minimal amount
of energy required to send a bit down an information channel with noise
temperature T (ref. 14).) Such error-correcting routines in our ultimate
computer function as working analogues of Maxwell's demon, getting
information and using it to reduce entropy at an exchange rate of k-sub-B
T ln2 joules per bit. In principle, computation does not require
dissipation. In practice, however, any computer--even our ultimate
laptop--will dissipate energy.
[The ultimate laptop must reject errors to the environment at a high rate
to maintain reliable operation. To estimate the rate at which it can
reject errors to the environment, assume that the computer encodes
erroneous bits in the form of black-body radiation at the characteristic
temperature 5.87 x 10^8 K of the computer's memory (21). The
Stefan-Boltzmann law for black-body radiation then implies that the number
of bits per unit area that can be sent out to the environment is B = pi^2
k-sub-B^3 T^3/60ln(2)h-bar^3 c^2 = 7.195 x 10^42 bits per square metre per
second. As the ultimate laptop has a surface area of 10^-2 m^2 and is
performing ~10^50 operations per second, it must have an error rate of
less than 10^-10 per operation in order to avoid over-heating. Even if it
achieves such an error rate, it must have an energy throughput (free
energy in and thermal energy out) of 4.04 x 10^26 W--turning over its own
resting mass energy of mc^2 ~ 10^17 J in a nanosecond! The thermal load of
correcting large numbers of errors clearly indicates the necessity of
operating at a slower speed than the maximum allowed by the laws of
physics.
[End of Box 1]
Entropy, energy and temperature
To calculate the number of operations per second that could be performed
by our ultimate laptop, I assume that the expectation value of the energy
is E. Accordingly, the total number of bits of memory space available to
the computer is S(E,V)/k-sub-B ln2 where S(E,V) is the thermodynamic
entropy of a system with expectation value of the energy E confined to
volume V. The entropy of a closed system is usually given by the so-called
microcanonical ensemble, which fixes both the average energy and the
spread in energy DeltaE, and assigns equal probability to all states of
the system within a range [E, E + Delta-E]. In the case of the ultimate
laptop, however, I wish to fix only the average energy, while letting the
spread in energy vary according to whether the computer is to be more
serial (fewer, faster gates, with larger spread in energy) or parallel
(more, slower gates, with smaller spread in energy). Accordingly, the
ensemble that should be used to calculate the thermodynamic entropy and
the memory space available is the canonical ensemble, which maximizes S
for fixed average energy with no constraint on the spread in energy
Delta-E. The canonical ensemble shows how many bits of memory are
available for all possible ways of programming the computer while keeping
its average energy equal to E. In any given computation with average
energy E, the ultimate laptop will be in a pure state with some fixed
spread of energy, and will explore only a small fraction of its memory
space.
In the canonical ensemble, a state with energy E-sub-i has probability
p-sub-i = (1/Z(T)) e^(-E-sub-i/k-sub-B T), where Z(T) = BigSigma-over-i
e^(-E-sub-i/k-sub-B T) is the partition function, and the temperature T is
chosen so that E = BigSigma-over-i p-sub-i E-sub-i. The entropy is S =
-k-sub-B BigSigma-over-i p-sub-i ln p-sub-i = E/T + k-sub-B lnZ. The
number of bits of memory space available to the computer is S/k-sub-B
ln2. The difference between the entropy as calculated using the canonical
ensemble and that calculated using the microcanonical ensemble is minimal.
But there is some subtlety involved in using the canonical ensemble rather
than the more traditional microcanonical ensemble. The canonical ensemble
is normally used for open systems that interact with a thermal bath at
temperature T. In the case of the ultimate laptop, however, it is applied
to a closed system to find the maximum entropy given a fixed expectation
value for the energy. As a result, the temperature T =
(partialS/partialE)^-1 has a somewhat different role in the context of
physical limits of computation than it does in the case of an ordinary
thermodynamic system interacting with a thermal bath. Integrating the
relationship T = (partialS/partialE)^-1 over E yields T = CE/S, where C is
a constant of order unity (for example, C = 4/3 for black-body radiation,
C = 3/2 for an ideal gas, and C = 1/2 for a black hole). Accordingly, the
temperature governs the number of operations per bit per second, k-sub-B
ln(2) E/h-barS ~ k-sub-B T/h-bar, that a system can perform. As I will
show later, the relationship between temperature and operations per bit
per second is useful in investigating computation under extreme physical
conditions.
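The claim that integrating T = (partialS/partialE)^-1 yields T = CE/S with C = 4/3 for black-body radiation can be checked numerically. This sketch uses the black-body entropy formula S(E) = aE^(3/4) derived in the next section; the variable names and the finite-difference method are mine:

```python
from math import pi

hbar, c, kB = 1.0545718e-34, 2.99792458e8, 1.380649e-23
V, r = 1e-3, 2          # 1-litre volume, photon gas (r = 2)

# Black-body entropy as a function of energy: S(E) = a * E^(3/4).
a = (4 / 3) * kB * (r * pi**2 * V / (30 * hbar**3 * c**3))**0.25

def S(E):
    return a * E**0.75

E = 8.9874e16           # J, rest energy of 1 kg
# T = (dS/dE)^-1 via a central finite difference.
h = E * 1e-6
T = 1.0 / ((S(E + h) - S(E - h)) / (2 * h))
C = T * S(E) / E
print(C)   # ~4/3: T = (4/3) E/S for black-body radiation
```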
Calculating the maximum memory space
To calculate exactly the maximum entropy for a kilogram of matter in a
litre volume would require complete knowledge of the dynamics of
elementary particles, quantum gravity, and so on. Although we do not
possess such knowledge, the maximum entropy can readily be estimated by a
method reminiscent of that used to calculate thermodynamic quantities in
the early Universe (86). The idea is simple: model the volume occupied by
the computer as a collection of modes of elementary particles with total
average energy E. The maximum entropy is obtained by calculating the
canonical ensemble over the modes. Here, I supply a simple derivation of
the maximum memory space available to the ultimate laptop. A more detailed
discussion of how to calculate the maximum amount of information that can
be stored in a physical system can be found in the work of Bekenstein
(19-21).
For this calculation, assume that the only conserved quantities other than
the computer's energy are angular momentum and electric charge, which I
take to be zero. (One might also ask that the number of baryons be
conserved, but as will be seen below, one of the processes that could take
place within the computer is black-hole formation and evaporation, which
does not conserve baryon number.) At a particular temperature T, the
entropy is dominated by the contributions from particles with mass less
than k-sub-B T/2c^2. The lth such species of particle contributes energy E
= r-sub-l pi^2 V(k-sub-B T)^4/30 h-bar^3 c^3 and entropy S = 2r-sub-l
k-sub-B pi^2 V(k-sub-B T)^3/45 h-bar^3 c^3 = 4E/3T, where r-sub-l is equal
to the number of particles/antiparticles in the species (that is, 1 for
photons, 2 for electrons/positrons) multiplied by the number of
polarizations (2 for photons, 2 for electrons/positrons) multiplied by a
factor that reflects particle statistics (1 for bosons, 7/8 for fermions).
As the formula for S in terms of T shows, each species contributes (2pi)^5
r-sub-l/90ln2 ~ 10^2 bits of memory space per cubic thermal wavelength
lambda-sub-T^3, where lambda-sub-T = 2pi h-bar c/k-sub-B T. Re-expressing
the formula for entropy as a function of energy, the estimate for the
maximum entropy is
S = (4/3)k-sub-B(pi^2rV/30h-bar^3 c^3)^(1/4) E^(3/4) = k-sub-B ln(2)I
where r = BigSigma-over-l r-sub-l. Note that S depends only insensitively
on the total number of species with mass less than k-sub-B T/2c^2.
A lower bound on the entropy can be obtained by assuming that energy and
entropy are dominated by black-body radiation consisting of photons. In
this case, r = 2, and for a 1-kg computer confined to a volume of 1
litre we have k-sub-B T = 8.10 x 10^-15 J, or T = 5.87 x 10^8 K. The
entropy is S = 2.04 x 10^8 J K^-1, which corresponds to an amount of
available memory space I = S/k-sub-Bln2 = 2.13 x 10^31 bits. When the
ultimate laptop is using all its memory space it can perform
2ln(2)k-sub-B E/pi h-bar S = 3ln(2)k-sub-B T/2pi h-bar ~ 10^19 operations
per bit per second. As the number of operations per second 2E/pi h-bar is
independent of the number of bits available, the number of operations per
bit per second can be increased by using a smaller number of bits. In
keeping with the prescription that the ultimate laptop operates at the
absolute limits given by physics, in what follows I assume that all
available bits are used.
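The photon-gas numbers quoted above (k-sub-B T = 8.10 x 10^-15 J, S = 2.04 x 10^8 J/K, I = 2.13 x 10^31 bits) can be reproduced by inverting the energy formula for the temperature. A minimal sketch with standard constants (variable names are mine):

```python
from math import pi, log

hbar, c, kB = 1.0545718e-34, 2.99792458e8, 1.380649e-23
m, V, r = 1.0, 1e-3, 2   # 1 kg, 1 litre, photons only (r = 2)

E = m * c**2
# Invert E = r pi^2 V (kB T)^4 / 30 hbar^3 c^3 for the temperature.
kT = (30 * hbar**3 * c**3 * E / (r * pi**2 * V))**0.25  # ~8.1e-15 J
T = kT / kB                    # ~5.9e8 K
S = 4 * E / (3 * T)            # ~2.0e8 J/K
I = S / (kB * log(2))          # ~2.1e31 bits of memory space
ops_per_bit = 3 * log(2) * kB * T / (2 * pi * hbar)  # ~1e19 per second
```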
This estimate for the maximum entropy could be improved (and slightly
increased) by adding more species of massless particles (neutrinos and
gravitons) and by taking into effect the presence of electrons and
positrons. Note that k-sub-B T/2c^2 = 4.51 x 10^-32 kg, compared with the
electron mass of 9.1 x 10^-31 kg. That is, the ultimate laptop is close to
a phase transition at which electrons and positrons are produced
thermally. A more exact estimate of the maximum entropy and hence the
available memory space would be straightforward to perform, but the
details of such a calculation would detract from my general exposition,
and could serve to alter S only slightly. S depends insensitively on the
number of species of effectively massless particles: a change of r by a
factor of 10,000 serves to increase S by only a factor of 10.
Comparison with current computers
The amount of information that can be stored by the ultimate laptop,
~10^31 bits, is much higher than the ~10^10 bits stored on current
laptops. This is because conventional laptops use many degrees of freedom
to store a bit whereas the ultimate laptop uses just one. There are
considerable advantages to using many degrees of freedom to store
information, stability and controllability being perhaps the most
important. Indeed, as the above calculation indicates, to take full
advantage of the memory space available, the ultimate laptop must turn all
its matter into energy. A typical state of the ultimate laptop's memory
looks like a plasma at a billion kelvin--like a thermonuclear
explosion or a little piece of the Big Bang! Clearly, packaging issues
alone make it unlikely that this limit can be obtained, even setting aside
the difficulties of stability and control.
Even though the ultimate physical limit to how much information can be
stored in a kilogram of matter in a litre volume is unlikely to be
attained, it may nonetheless be possible to progress some way towards such
bit densities. In other words, the ultimate limits to memory space may
prove easier to approach than the ultimate limits to speed. Following
Moore's law, the density of bits in a computer has gone up from
approximately one per square centimetre 50 years ago to one per square
micrometre today, an improvement of a factor of 10^8. It is not
inconceivable that a similar improvement is possible over the course of
the next 50 years. In particular, there is no physical reason why it
should not be possible to store one bit of information per atom. Indeed,
existing NMR and ion-trap quantum computers already store information on
individual nuclei and atoms (typically in the states of individual nuclear
spins or in hyperfine atomic states). Solid-state NMR with high gradient
fields or quantum optical techniques such as spectral hole-burning provide
potential technologies for storing large quantities of information at the
atomic scale. A kilogram of ordinary matter holds on the order of 10^25
nuclei. If a substantial fraction of these nuclei can be made to register
a bit, then we could get close to the ultimate physical limit of memory
without having to resort to thermonuclear explosions. If, in addition, we
make use of the natural electromagnetic interactions between nuclei and
electrons in the matter to perform logical operations, we are limited to a
rate of ~10^15 operations per bit per second, yielding an overall
information processing rate of ~10^40 operations per second in ordinary
matter. Although less than the ~10^51 operations per second in the ultimate
laptop, the maximum information processing rate in 'ordinary matter' is
still quite respectable. Of course, even though such an 'ordinary matter'
ultimate computer need not operate at nuclear energy levels, other
problems remain--for example, the high number of bits still indicates
substantial input/output problems. At an input/output rate of 10^12 bits
per second, an Avogadro-scale computer with 10^23 bits would take about
10,000 years to perform a serial read/write operation on the entire
memory. Higher throughput and parallel input/output schemes are clearly
required to take advantage of the entire memory space that physics makes
available.
Size limits parallelization
Up until this point, I have assumed that the ultimate laptop occupies a
volume of 1 liter. The previous discussion, however, indicates that
benefits are to be obtained by varying the volume to which the computer is
confined. Generally speaking, if the computation to be performed is highly
parallelizable or requires many bits of memory, the volume of the computer
should be greater and the energy available to perform the computation
should be spread out evenly among the different parts of the computer.
Conversely, if the computation to be performed is highly serial and
requires fewer bits of memory, the energy should be concentrated in
particular parts of the computer.
A good measure of the degree of parallelization in a computer is the ratio
between the time it takes to communicate from one side of the computer to
the other, and the average time it takes to perform a logical operation.
The amount of time it takes to send a message from one side of a computer
of radius R to the other is t-sub-com = 2R/c. The average time it takes a
bit to flip in the ultimate laptop is the inverse of the number of
operations per bit per second calculated above: t-sub-flip = pi
h-barS/k-sub-B2ln(2)E. The measure of the degree of parallelization in the
ultimate laptop is then
t-sub-com/t-sub-flip = 4ln(2)k-sub-B RE/pi h-bar cS, which is
approximately equal to k-sub-B RT/h-bar c = 2piR/lambda-sub-T.
That is, the amount of time it takes to communicate from one side of the
computer to the other, divided by the amount of time it takes to flip a
bit, is approximately equal to the ratio between the size of the system
and its thermal wavelength. For the ultimate laptop, with 2R = 10^-1 m,
2E/pi h-bar ~ 10^51 operations per second, and S/k-sub-B ln2 ~ 10^31 bits,
t-sub-com/t-sub-flip ~ 10^10. The ultimate laptop is highly parallel. A
greater degree
of serial computation can be obtained at the cost of decreasing memory
space by compressing the size of the computer or making the distribution
of energy more uneven. As ordinary matter obeys the Bekenstein bound
(20-22), k-sub-B RE/h-bar cS > 1/2pi, as the computer is compressed,
t-sub-com/t-sub-flip ~ k-sub-BRE/h-bar cS will remain greater than one,
that is, the operation will still be somewhat parallel. Only at the
ultimate limit of compression--a black hole--is the computation entirely
serial.
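The parallelization measure t-sub-com/t-sub-flip ~ 10^10 for the 1-litre laptop follows from the quantities already computed. A quick check, assuming a radius of 5 cm and the photon-gas entropy estimate S/k-sub-B ~ 1.48 x 10^31 from the previous section (values and names are mine):

```python
from math import pi, log

hbar, c, kB = 1.0545718e-34, 2.99792458e8, 1.380649e-23
R = 0.05                 # radius of the 1-litre laptop, ~5 cm
E = 8.9874e16            # J, rest energy of 1 kg
S_over_kB = 1.48e31      # photon-gas entropy estimate, S/kB

t_com = 2 * R / c                                   # light-crossing time
t_flip = pi * hbar * S_over_kB / (2 * log(2) * E)   # average bit-flip time
ratio = t_com / t_flip
print(f"{ratio:.1e}")    # ~1e10: the ultimate laptop is highly parallel
```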
Compressing the computer allows more serial computation
Suppose that we want to perform a highly serial computation on a few bits.
Then it is advantageous to compress the size of the computer so that it
takes less time to send signals from one side of the computer to the other
at the speed of light. As the computer gets smaller, keeping the energy
fixed, the energy density inside the computer increases. As this happens,
different regimes in high-energy physics are necessarily explored in the
course of the computation. First the weak unification scale is reached,
then the grand unification scale. Finally, as the linear size of the
computer approaches its Schwarzschild radius, the Planck scale is reached
(Fig. 2). (No known technology could possibly achieve such compression.)
At the Planck scale, gravitational effects and quantum effects are both
important: the Compton wavelength of a particle of mass m, l-sub-C = 2pi
h-bar/mc, is on the order of its Schwarzschild radius, 2Gm/c^2. In other
words, to describe behaviour at length scales of the size l-sub-P =
sqrt(h-bar G/c^3) = 1.616 x 10^-35 m, timescales t-sub-P = sqrt(h-bar G/c^5)
= 5.391 x 10^-44 s, and mass scales of m-sub-P = sqrt(h-bar c/G) ~ 2.177 x
10^-8 kg, a unified theory of quantum gravity is required. We do not
currently possess such a theory. Nonetheless, although we do not know the
exact number of bits that can be registered by a 1-kg computer confined to
a volume of 1 l, we do know the exact number of bits that can be
registered by a 1-kg computer that has been compressed to the size of a
black hole (87). This is because the entropy of a black hole has a
well-defined value.
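The Planck scales quoted above follow from the three constants h-bar, G and c. A quick check (the variable names are mine):

```python
from math import sqrt

hbar = 1.0545718e-34   # J s
c = 2.99792458e8       # m/s
G = 6.67430e-11        # m^3 kg^-1 s^-2

l_P = sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_P = sqrt(hbar * G / c**5)   # Planck time, ~5.4e-44 s
m_P = sqrt(hbar * c / G)      # Planck mass, ~2.2e-8 kg
print(l_P, t_P, m_P)
```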
[Figure 2. Computing at the black-hole limit. The rate at which the
components of a computer can communicate is limited by the speed of light.
In the ultimate laptop, each bit can flip ~10^19 times per second, whereas
the time taken to communicate from one side of the 1-litre computer to the
other is on the order of 10^-9 s--the ultimate laptop is highly parallel.
The computation can be speeded up and made more serial by compressing the
computer. But no computer can be compressed to smaller than its
Schwarzschild radius without becoming a black hole. A 1-kg computer that
has been compressed to the black-hole limit of R-sub-S = 2Gm/c^2 = 1.485 x
10^-27 m can perform 5.4258 x 10^50 operations per second on its I =
4piGm^2/ln(2)h-barc = 3.827 x 10^16 bits. At the black-hole limit,
computation is fully serial: the time it takes to flip a bit and the time
it takes a signal to communicate around the horizon of the hole are the
same.]
In the following discussion, I use the properties of black holes to place
limits on the speed, memory space and degree of serial computation that
could be approached by compressing a computer to the smallest possible
size. Whether or not these limits could be attained, even in principle, is
a question whose answer will have to await a unified theory of quantum
gravity (see Box 2).
[Box 2: Can a black hole compute?
[No information can escape from a classical black hole: what goes in does
not come out. But the quantum mechanical picture of a black hole is
different. First of all, black holes are not quite black; they radiate at
the Hawking temperature T given above. In addition, the well-known
statement that 'a black hole has no hair'--that is, from a distance all
black holes with the same charge and angular momentum look essentially
alike--is now known to be not always true (89-91). Finally, research in
string theory (92-94) indicates that black holes may not actually destroy
the information about how they were formed, but instead process it and
emit the processed information as part of the Hawking radiation as they
evaporate: what goes in does come out, but in an altered form.
[If this picture is correct, then black holes could in principle be
'programmed': one forms a black hole whose initial conditions encode the
information to be processed, lets that information be processed by the
planckian dynamics at the hole's horizon, and extracts the answer to the
computation by examining the correlations in the Hawking radiation emitted
when the hole evaporates. Despite our lack of knowledge of the precise
details of what happens when a black hole forms and evaporates (a full
account must await a more exact treatment using whatever theory of quantum
gravity and matter turns out to be the correct one), we can still provide
a rough estimate of how much information is processed during this
computation (95-96). Using Page's results on the rate of evaporation of a
black hole (95), we obtain a lifetime for the hole t-sub-life = G^2 m^3 /
3C h-bar c^4, where C is a constant that depends on the number of species
of particles with a mass less than k-sub-BT, where T is the temperature of
the hole. For O (10^1 to 10^2) such species, C is on the order of 10^-3 to
10^-2, leading to a lifetime for a black hole of mass 1 kg of ~10^-19 s,
during which time the hole can perform ~10^32 operations on its ~10^16
bits. As the actual number of effectively massless particles at the
Hawking temperature of a 1-kg black hole is likely to be considerably
larger than 10^2, this number should be regarded as an upper bound on the
actual number of operations that could be performed by the hole. Although
this hypothetical computation is performed at ultra-high densities and
speeds, the total number of bits available to be processed is not far from
the number available to current computers operating in more familiar
surroundings.
[End of Box 2]
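The lifetime and operation-count estimates in Box 2 can be reproduced numerically. This sketch takes C = 10^-2 (the upper end of the range quoted for ~10^2 particle species); the constant values and variable names are mine:

```python
from math import pi

hbar, c, G = 1.0545718e-34, 2.99792458e8, 6.67430e-11
m = 1.0      # kg
C = 1e-2     # Page's constant for ~10^2 effectively massless species

t_life = G**2 * m**3 / (3 * C * hbar * c**4)   # hole lifetime, ~1e-19 s
ops_rate = 2 * m * c**2 / (pi * hbar)          # ~5.4e50 ops per second
total_ops = ops_rate * t_life                  # ~1e32 operations in total
print(t_life, total_ops)
```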
The Schwarzschild radius of a 1-kg computer is R-sub-S = 2Gm/c^2 = 1.485 x
10^-27 m. The entropy of a black hole is Boltzmann's constant multiplied by its
area divided by 4, as measured in Planck units. Accordingly, the amount of
information that can be stored in a black hole is I = 4piGm^2/ln(2)h-bar c
= 4pim^2/ln(2)m-sub-P^2. The amount of information that can be stored by
the 1-kg computer in the black-hole limit is 3.827 x 10^16 bits. A
computer compressed to the size of a black hole can perform 5.4258 x 10^50
operations per second, the same as the 1-l computer.
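These values follow directly from the formulas; a quick numerical check with
SI constants and m = 1 kg:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
m = 1.0            # kg

R_S = 2 * G * m / c**2                                 # Schwarzschild radius
I = 4 * math.pi * G * m**2 / (math.log(2) * hbar * c)  # memory capacity, bits
ops = 2 * m * c**2 / (math.pi * hbar)                  # operations per second

print(f"R_S = {R_S:.3e} m")     # ~1.485e-27 m
print(f"I   = {I:.3e} bits")    # ~3.83e16 bits
print(f"ops = {ops:.3e} /s")    # ~5.43e50 operations per second
```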
In a computer that has been compressed to its Schwarzschild radius, the
energy per bit is E/I = mc^2/I = ln(2) h-bar c^3/(4pi m G) = ln(2) k-sub-B
T/2, where T = (partial S/partial E)^-1 = h-bar c/(4pi k-sub-B R-sub-S) is
the temperature of the Hawking radiation emitted by the hole. As a result,
the time it takes to flip a bit on average is t-sub-flip = pi h-bar I/(2E)
= pi^2 R-sub-S/(c ln2). In other words, according to a distant observer,
the amount of time it takes to flip a bit, t-sub-flip, is of the same order
as the amount of time t-sub-com = pi R-sub-S/c it takes to communicate from
one side of the hole to the other by going around the horizon:
t-sub-com/t-sub-flip = ln2/pi. In contrast to computation at lesser
densities, which is highly parallel, computation at the horizon of a black
hole is highly serial: every bit is essentially connected to every other
bit over the course of a single logic operation. As noted above, the serial
nature of computation at the black-hole limit can be deduced from the fact
that black holes attain the Bekenstein bound (20-22), k-sub-B R E/(h-bar c
S) = 1/2pi.
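The serial character is just the statement that the two timescales coincide
up to a constant. A short check that the ratio ln2/pi is independent of the
hole's radius:

```python
import math

def timescales(R_S, c=2.998e8):
    """Bit-flip time and around-the-horizon communication time for radius R_S."""
    t_flip = math.pi**2 * R_S / (c * math.log(2))  # t_flip = pi^2 R_S / (c ln2)
    t_com = math.pi * R_S / c                      # t_com  = pi R_S / c
    return t_flip, t_com

# The ratio t_com/t_flip = ln2/pi for any Schwarzschild radius
for R_S in (1.485e-27, 1.0, 1e3):
    t_flip, t_com = timescales(R_S)
    assert abs(t_com / t_flip - math.log(2) / math.pi) < 1e-12
```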
Constructing ultimate computers
Throughout this entire discussion of the physical limits to computation,
no mention has been made of how to construct a computer that operates at
those limits. In fact, contemporary quantum 'microcomputers' such as those
constructed using NMR (76-80) do indeed operate at the limits of speed and
memory space described above. Information is stored on nuclear spins, with
one spin registering one bit. The time it takes a bit to flip from a state
|uparrow> to an orthogonal state |downarrow> is given by pi h-bar/(2 mu B)
= pi h-bar/2E, where mu is the spin's magnetic moment, B is the magnetic
field, and E = mu B is the average energy of interaction between the spin
and the magnetic field. To perform a quantum logic operation between two
spins takes a time pi h-bar/(2 E-sub-gamma), where E-sub-gamma is the
energy of interaction between the two spins.
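As an illustration (the specific numbers are not from the paper), plugging
in the proton magnetic moment with an assumed 10-T field gives the
single-spin timescale, while an assumed scalar coupling of J ~ 100 Hz gives
the millisecond scale typical of two-spin NMR gates:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
h = 2 * math.pi * hbar

mu_p = 1.411e-26    # proton magnetic moment, J/T
B = 10.0            # static field, tesla (assumed for illustration)
t_single = math.pi * hbar / (2 * mu_p * B)  # single-spin flip time, ~1 ns

J = 100.0           # spin-spin scalar coupling, Hz (assumed typical value)
E_gamma = h * J     # interaction energy between the two spins
t_gate = math.pi * hbar / (2 * E_gamma)     # two-spin gate time, ~1 ms

print(f"single-spin flip ~ {t_single:.1e} s")
print(f"two-spin gate    ~ {t_gate:.1e} s")
```

The six-order-of-magnitude gap between the two times is why two-qubit gates
dominate the running time of NMR quantum algorithms.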
Although NMR quantum computers already operate at the limits to
computation set by physics, they are nonetheless much slower and process
much less information than the ultimate laptop described above. This is
because their energy is locked up largely in mass, thereby limiting both
their speed and their memory. Unlocking this energy is of course possible,
as a thermonuclear explosion indicates. But controlling such an 'unlocked'
system is another question. In discussing the computational power of
physical systems in which all energy is put to use, I assumed that such
control is possible in principle, although it is certainly not possible in
current practice. All current designs for quantum computers operate at low
energy levels and temperatures, exactly so that precise control can be
exerted on their parts.
As the above discussion of error correction indicates, the rate at which
errors can be detected and rejected to the environment by error-correction
routines places a fundamental limit on the rate at which errors can be
committed. Suppose that each logical operation performed by the ultimate
computer has a probability epsilon of being erroneous. The total number of
errors committed by the ultimate computer per second is then 2 epsilon E/pi
h-bar. The maximum rate at which information can be rejected to the
environment is, up to a geometric factor, ln(2)cS/R (all bits in the
computer moving outward at the speed of light). Accordingly, the maximum
error rate that the ultimate computer can tolerate is epsilon less than or
equal to pi ln(2) h-bar cS/(2ER) = 2 t-sub-flip/t-sub-com. That is, the maximum
error rate that can be tolerated by the ultimate computer is the inverse
of its degree of parallelization.
Suppose that control of highly energetic systems were to become possible.
Then how might these systems be made to compute? As an example of a
'computation' that might be performed at extreme conditions, consider a
heavy-ion collision that takes place in the heavy-ion collider at
Brookhaven (S. H. Kahana, personal communication). If one collides 100
nucleons on 100 nucleons (that is, two nuclei with 100 nucleons each) at
200 GeV per nucleon, the operation time is pi h-bar/2E ~ 10^-29 s. The
maximum entropy can be estimated to be ~4k-sub-B per relativistic pion (to
within a factor of less than 2 associated with the overall entropy
production rate per meson), and there are ~10^4 relativistic pions per
collision. Accordingly, the total amount of memory space available is
S/k-sub-B ln2 ~ 10^4 to 10^5 bits. The collision time is short: in the
centre-of-mass frame the two nuclei are Lorentz-contracted to D/gamma
where D = 12-13 fermi and gamma = 100, giving a total collision time of
~10^-25 s. During the collision, then, there is time to perform
approximately 10^4 operations on 10^4 bits--a relatively simple
computation. (The fact that only one operation per bit is performed
indicates that there is insufficient time to reach thermal equilibrium, an
observation that is confirmed by detailed simulations.) The heavy-ion
system could be programmed by manipulating and preparing the initial
momenta and internal nuclear states of the ions. Of course, we would not
expect to be able to do word processing on such a 'computer'. Rather it would
be used to uncover basic knowledge about nuclear collisions and
quark-gluon plasmas: in the words of Heinz Pagels, the plasma 'computes
itself ' (88).
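The figures in this paragraph follow from a few lines of arithmetic. A
sketch, taking the total energy to be 200 nucleons at 200 GeV each and a
nuclear diameter of 12.5 fermi (the midpoint of the quoted range):

```python
import math

hbar = 1.0546e-34        # reduced Planck constant, J s
c = 2.998e8              # speed of light, m/s
GeV = 1.602e-10          # one GeV in joules

E = 200 * 200 * GeV      # total energy: 200 nucleons at 200 GeV each
t_op = math.pi * hbar / (2 * E)   # time per operation, ~10^-29 s

D = 12.5e-15             # nuclear diameter, ~12-13 fermi
gamma = 100              # Lorentz factor in the centre-of-mass frame
t_coll = D / (gamma * c)          # contracted transit time, ~10^-25 s

n_ops = t_coll / t_op             # operations during the collision, ~10^4

print(f"t_op   ~ {t_op:.1e} s")
print(f"t_coll ~ {t_coll:.1e} s")
print(f"ops    ~ {n_ops:.1e}")
```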
At the greater extremes of a black-hole computer, I assumed that whatever
theory (for example, string theory) turns out to be the correct theory of
quantum matter and gravity, it is possible to prepare initial states of
such systems that cause their natural time evolution to carry out a
computation. What assurance do we have that such preparations exist, even
in principle?
Physical systems that can be programmed to perform arbitrary digital
computations are called computationally universal. Although computational
universality might at first seem to be a stringent demand on a physical
system, a wide variety of physical systems--ranging from
nearest-neighbour Ising models (52) to quantum electrodynamics (84) and
conformal field theories (M. Freedman, unpublished results)--are known to
be computationally universal (51-53,55-65). Indeed, computational
universality seems to be the rule rather than the exception. Essentially
any quantum system that admits controllable nonlinear interactions can be
shown to be computationally universal (60,61). For example, the ordinary
electrostatic interaction between two charged particles can be used to
perform universal quantum logic operations between two quantum bits. A bit
is registered by the presence or absence of a particle in a mode. The
strength of the interaction between the particles, e^2/r, determines the
amount of time t-sub-flip = pi h-bar r/2e^2 it takes to perform a quantum
logic operation such as a controlled-NOT on the two particles. The time it
takes to perform such an operation divided by the amount of time it takes
to send a signal at the speed of light between the bits t-sub-com = r/c is
a universal constant, t-sub-flip/t-sub-com = pi h-bar c/2e^2 = pi/2alpha,
where alpha = e^2/h-bar c ~ 1/137 is the fine-structure constant. This
example shows the degree to which the laws of physics and the limits to
computation are entwined.
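Both timescales grow linearly with the separation r, so the ratio is the
same for any pair of charges at any distance; numerically:

```python
import math

alpha = 1 / 137.036            # fine-structure constant
ratio = math.pi / (2 * alpha)  # t_flip / t_com for electrostatic qubits

print(f"{ratio:.1f}")          # ~215: a gate takes ~200x the light-travel time
```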
In addition to the theoretical evidence that most systems are
computationally universal, the computer on which I am writing this article
provides strong experimental evidence that whatever the correct underlying
theory of physics is, it supports universal computation. Whether or not it
is possible to make computation take place in the extreme regimes
envisaged in this paper is an open question. The answer to this question
lies in future technological development, which is difficult to predict.
If, as seems highly unlikely, it is possible to extrapolate the
exponential progress of Moore's law into the future, then it will take
only 250 years to make up the 40 orders of magnitude in performance
between current computers that perform 10^10 operations per second on
10^10 bits and our 1-kg ultimate laptop that performs 10^51 operations per
second on 10^31 bits.
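For the arithmetic behind the 250-year figure: 40 orders of magnitude is
about 133 doublings, so the figure corresponds to assuming performance
doubles roughly every 1.9 years (an assumed Moore's-law rate):

```python
import math

orders = 40                         # orders of magnitude to make up
doublings = orders * math.log2(10)  # ~133 doublings
doubling_time = 1.88                # assumed years per doubling
years = doublings * doubling_time

print(f"{doublings:.0f} doublings, ~{years:.0f} years")
```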
Notes
1. Maxwell, J. C. Theory of Heat (Appleton, London, 1871).
2. Smoluchowski, F. Vorträge über die kinetische Theorie der Materie und
Elektrizität (Leipzig, 1914).
3. Szilard, L. Über die Entropieverminderung in einem thermodynamischen
System bei Eingriffen intelligenter Wesen. Z. Physik 53, 840-856 (1929).
4. Brillouin, L. Science and Information Theory (Academic Press, New York,
1953).
5. Landauer, R. Irreversibility and heat generation in the computing
process. IBM J. Res. Dev. 5, 183-191 (1961).
6. Keyes, R. W. & Landauer, R. Minimal energy dissipation in logic. IBM J.
Res. Dev. 14, 152-157 (1970).
7. Landauer, R. Dissipation and noise-immunity in computation and
communication. Nature 335, 779-784 (1988).
8. Landauer, R. Information is physical. Phys. Today 44, 23-29 (1991).
9. Landauer, R. The physical nature of information. Phys. Lett. A 217,
188-193 (1996).
10. von Neumann, J. Theory of Self-Reproducing Automata Lect. 3 (Univ.
Illinois Press, Urbana, IL, 1966).
11. Lebedev, D. S. & Levitin, L. B. Information transmission by
electromagnetic field. Inform. Control 9, 1-22 (1966).
12. Levitin, L. B. in Proceedings of the 3rd International Symposium on
Radio Electronics part 3, 1-15 (Varna, Bulgaria, 1970).
13. Levitin, L. B. Physical limitations of rate, depth, and minimum energy
in information processing. Int. J. Theor. Phys. 21, 299-309 (1982).
14. Levitin, L. B. Energy cost of information transmission (along the path
to understanding). Physica D 120, 162-167 (1998).
15. Margolus, N. & Levitin, L. B. in Proceedings of the Fourth Workshop on
Physics and Computation--PhysComp96 (eds Toffoli, T., Biafore, M. & Leão,
J.) (New England Complex Systems Institute, Boston, MA, 1996).
16. Margolus, N. & Levitin, L. B. The maximum speed of dynamical evolution.
Physica D 120, 188-195 (1998).
17. Bremermann, H. J. in Self-Organizing Systems (eds Yovits, M. C.,
Jacobi, G. T. & Goldstein, G. D.) 93-106 (Spartan Books, Washington DC,
1962).
18. Bremermann, H. J. in Proceedings of the Fifth Berkeley Symposium on
Mathematical Statistics and Probability (eds LeCam, L. M. & Neymen, J.)
Vol. 4, 15-20 (Univ. California Press, Berkeley, CA, 1967).
19. Bremermann, H. J. Minimum energy requirements of information transfer
and computing. Int. J. Theor. Phys. 21, 203-217 (1982).
20. Bekenstein, J. D. Universal upper bound on the entropy-to-energy
ratio for bounded systems. Phys. Rev. D 23, 287-298 (1981).
21. Bekenstein, J. D. Energy cost of information transfer. Phys. Rev.
Lett. 46, 623-626 (1981).
22. Bekenstein, J. D. Entropy content and information flow in systems with
limited energy. Phys. Rev. D 30, 1669-1679 (1984).
23. Aharonov, Y. & Bohm, D. Time in the quantum theory and the uncertainty
relation for the time and energy domain. Phys. Rev. 122, 1649-1658 (1961).
24. Aharonov, Y. & Bohm, D. Answer to Fock concerning the time-energy
indeterminacy relation. Phys. Rev. B 134, 1417-1418 (1964).
25. Anandan, J. & Aharonov, Y. Geometry of quantum evolution. Phys. Rev.
Lett. 65, 1697-1700 (1990).
26. Peres, A. Quantum Theory: Concepts and Methods (Kluwer, Hingham,
1995).
27. Lecerf, Y. Machines de Turing réversibles. C.R. Acad. Sci. 257,
2597-2600 (1963).
28. Bennett, C. H. Logical reversibility of computation. IBM J. Res. Dev.
17, 525-532 (1973).
29. Bennett, C.H. Thermodynamics of computation--a review. Int. J. Theor.
Phys. 21, 905-940 (1982).
30. Bennett, C. H. Demons, engines and the second law. Sci. Am. 257, 108
(1987).
31. Fredkin, E. & Toffoli, T. Conservative logic. Int. J. Theor. Phys. 21,
219-253 (1982).
32. Likharev, K. K. Classical and quantum limitations on energy
consumption in computation. Int. J. Theor. Phys. 21, 311-325 (1982).
33. Seitz, C. L. et al. in Proceedings of the 1985 Chapel Hill Conference
on VLSI (ed. Fuchs, H.) (Computer Science Press, Rockville, MD, 1985).
34. Merkle, R. C. Reversible electronic logic using switches.
Nanotechnology 34, 21-40 (1993).
35. Younis, S. G. & Knight, T. F. in Proceedings of the 1993 Symposium on
Integrated Systems, Seattle, Washington (eds Berrielo, G. & Ebeling, C.)
(MIT Press, Cambridge, MA, 1993).
36. Lloyd, S. & Pagels, H. Complexity as thermodynamic depth. Ann. Phys.
188, 186-213 (1988).
37. Lloyd, S. Use of mutual information to decrease entropy--implications
for the Second Law of Thermodynamics. Phys. Rev. A 39, 5378-5386 (1989).
38. Zurek, W. H. Thermodynamic cost of computation, algorithmic complexity
and the information metric. Nature 341, 119-124 (1989).
39. Leff, H. S. & Rex, A. F. Maxwell's Demon: Entropy, Information,
Computing (Princeton Univ. Press, Princeton, 1990).
40. Lloyd, S. Quantum mechanical Maxwell's demon. Phys. Rev. A 56,
3374-3382 (1997).
41. Benioff, P. The computer as a physical system: a microscopic quantum
mechanical Hamiltonian model of computers as represented by Turing
machines. J. Stat. Phys. 22, 563-591 (1980).
42. Benioff, P. Quantum mechanical models of Turing machines that
dissipate no energy. Phys. Rev. Lett. 48, 1581-1585 (1982).
43. Feynman, R. P. Simulating physics with computers. Int. J. Theor. Phys.
21, 467 (1982).
44. Feynman, R. P. Quantum mechanical computers. Optics News 11, 11
(1985); reprinted in Found. Phys. 16, 507 (1986).
45. Zurek, W. H. Reversibility and stability of information-processing
systems. Phys. Rev. Lett. 53, 391-394 (1984).
46. Peres, A. Reversible logic and quantum computers. Phys. Rev. A 32,
3266-3276 (1985).
47. Deutsch, D. Quantum-theory, the Church-Turing principle, and the
universal quantum computer. Proc. R. Soc. Lond. A 400, 97-117 (1985).
48. Margolus, N. Quantum computation. Ann. N.Y. Acad. Sci. 480, 487-497
(1986).
49. Deutsch, D. Quantum computational networks. Proc. R. Soc. Lond. A 425,
73-90 (1989).
50. Margolus, N. in Complexity, Entropy, and the Physics of Information,
Santa Fe Institute Studies in the Sciences of Complexity Vol. VIII (ed.
Zurek, W. H.) 273-288 (Addison Wesley, Redwood City, 1991).
51. Lloyd, S. Quantum-mechanical computers and uncomputability. Phys. Rev.
Lett. 71, 943-946 (1993).
52. Lloyd, S. A potentially realizable quantum computer. Science 261,
1569-1571 (1993).
53. Lloyd, S. Necessary and sufficient conditions for quantum computation.
J. Mod. Opt. 41, 2503-2520 (1994).
54. Shor, P. in Proceedings of the 35th Annual Symposium on Foundations of
Computer Science (ed. Goldwasser, S.) 124-134 (IEEE Computer Society, Los
Alamitos, CA, 1994).
55. Lloyd, S. Quantum-mechanical computers. Sci. Am. 273, 140-145 (1995).
56. DiVincenzo, D. Quantum computation. Science 270, 255-261 (1995).
57. DiVincenzo, D. P. 2-Bit gates are universal for quantum computation.
Phys. Rev. A 51, 1015-1022 (1995).
58. Sleator, T. & Weinfurter, H. Realizable universal quantum logic gates.
Phys. Rev. Lett. 74, 4087-4090 (1995).
59. Barenco, A. et al. Elementary gates for quantum computation. Phys.
Rev. A 52, 3457-3467 (1995).
60. Lloyd, S. Almost any quantum logic gate is universal. Phys. Rev. Lett.
75, 346-349 (1995).
61. Deutsch, D., Barenco, A. & Ekert, A. Universality in quantum
computation. Proc. R. Soc. Lond. A 449, 669-677 (1995).
62. Cirac, J. I. & Zoller, P. Quantum computation with cold ion traps.
Phys. Rev. Lett. 74, 4091-4094 (1995).
63. Pellizzari, T., Gardiner, S. A., Cirac, J. I. & Zoller, P.
Decoherence, continuous observation, and quantum computing--a cavity QED
model. Phys. Rev. Lett. 75, 3788-3791 (1995).
64. Turchette, Q. A., Hood, C. J., Lange, W., Mabuchi, H. & Kimble, H. J.
Measurement of conditional phase-shifts for quantum logic. Phys. Rev.
Lett. 75, 4710-4713 (1995).
65. Monroe, C., Meekhof, D. M., King, B. E., Itano, W. M. & Wineland, D.
J. Demonstration of a fundamental quantum logic gate. Phys. Rev. Lett. 75,
4714-4717 (1995).
66. Grover, L. K. in Proceedings of the 28th Annual ACM Symposium on the
Theory of Computing 212-218 (ACM Press, New York, 1996).
67. Lloyd, S. Universal quantum simulators. Science 273, 1073-1078 (1996).
68. Zalka, C. Simulating quantum systems on a quantum computer. Proc. R.
Soc. Lond A 454, 313-322 (1998).
69. Shor, P. W. A scheme for reducing decoherence in quantum memory. Phys.
Rev. A 52, R2493-R2496 (1995).
70. Steane, A. M. Error correcting codes in quantum theory. Phys. Rev.
Lett. 77, 793-797 (1996).
71. Laflamme, R., Miquel, C., Paz, J. P. & Zurek, W. H. Perfect quantum
error correcting code. Phys. Rev. Lett. 77, 198-201 (1996).
72. DiVincenzo, D. P. & Shor, P. W. Fault-tolerant error correction with
efficient quantum codes. Phys. Rev. Lett. 77, 3260-3263 (1996).
73. Shor, P. in Proceedings of the 37th Annual Symposium on the
Foundations of Computer Science 56-65 (IEEE Computer Society Press, Los
Alamitos, CA, 1996).
74. Preskill, J. Reliable quantum computers. Proc. R. Soc. Lond. A 454,
385-410 (1998).
75. Knill, E., Laflamme, R. & Zurek, W. H. Resilient quantum computation.
Science 279, 342-345 (1998).
76. Cory, D. G., Fahmy, A. F. & Havel, T. F. in Proceedings of the Fourth
Workshop on Physics and Computation--PhysComp96 (eds Toffoli, T., Biafore,
M. & Leão, J.) 87-91 (New England Complex Systems Institute, Boston, MA,
1996).
77. Gershenfeld, N. A. & Chuang, I. L. Bulk spin-resonance quantum
computation. Science 275, 350-356 (1997).
78. Chuang, I. L., Vandersypen, L. M. K., Zhou, X., Leung, D. W. & Lloyd,
S. Experimental realization of a quantum algorithm. Nature 393, 143-146
(1998).
79. Jones, J. A., Mosca, M. & Hansen, R. H. Implementation of a quantum
search algorithm on a quantum computer. Nature 393, 344-346 (1998).
80. Chuang, I. L., Gershenfeld, N. & Kubinec, M. Experimental
implementation of fast quantum searching. Phys. Rev. Lett. 80, 3408-3411
(1998).
81. Kane, B. A silicon-based nuclear-spin quantum computer. Nature 393,
133 (1998).
82. Nakamura, Y., Pashkin, Yu. A. & Tsai, J. S. Coherent control of
macroscopic quantum states in a single-Cooper-pair box. Nature 398,
786-788 (1999).
83. Mooij, J. E. et al. Josephson persistent-current qubit. Science 285,
1036-1039 (1999).
84. Lloyd, S. & Braunstein, S. Quantum computation over continuous
variables. Phys. Rev. Lett. 82, 1784-1787 (1999).
85. Abrams, D. & Lloyd, S. Nonlinear quantum mechanics implies
polynomial-time solution for NP- complete and P problems. Phys. Rev. Lett.
81, 3992-3995 (1998).
86. Zel'dovich, Ya. B. & Novikov, I. D. Relativistic Astrophysics (Univ.
of Chicago Press, Chicago, 1971).
87. Novikov, I. D. & Frolov, V. P. Black Holes (Springer, Berlin, 1986).
88. Pagels, H. The Cosmic Code: Quantum Physics as the Language of Nature
(Simon and Schuster, New York, 1982).
89. Coleman, S., Preskill, J. & Wilczek, F. Growing hair on black-holes.
Phys. Rev. Lett. 67, 1975-1978 (1991).
90. Preskill, J. Quantum hair. Phys. Scr. T 36, 258-264 (1991).
91. Fiola, T. M., Preskill, J. & Strominger, A. Black-hole thermodynamics
and information loss in 2 dimensions. Phys. Rev. D 50, 3987-4014 (1994).
92. Susskind, L. & Uglum, J. Black-hole entropy in canonical
quantum-gravity and superstring theory. Phys. Rev. D 50, 2700-2711 (1994).
93. Strominger, A. & Vafa, C. Microscopic origin of the Bekenstein-Hawking
entropy. Phys. Lett. B 379, 99-104 (1996).
94. Das, S. R. & Mathur, S. D. Comparing decay rates for black holes and
D-branes. Nucl. Phys. B 478, 561-576 (1996).
95. Page, D. N. Particle emission rates from a black hole: massless
particles from an uncharged, non-rotating black hole. Phys. Rev. D 13, 198
(1976).
96. Thorne, K. S., Zurek, W. H. & Price R. H. in Black Holes: The Membrane
Paradigm Ch. VIII (eds Thorne, K. S., Price, R. H. & Macdonald, D. A.)
280-340 (Yale Univ. Press, New Haven, CT, 1986).