[ExI] Von Neumann Probes

Jason Resch jasonresch at gmail.com
Mon Jan 26 00:19:21 UTC 2026


On Sun, Jan 25, 2026 at 6:46 AM John Clark <johnkclark at gmail.com> wrote:

> On Sat, Jan 24, 2026 at 5:47 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> *> Computational speed (in operations per second) is a function of mass,
>> not of the amount of energy the computer consumes.*
>
>
> *Since E=MC^2, energy and mass are two different sides of the same thing,
> which one is more convenient to use depends on circumstances. If you're
> talking about how fast a bit can flip (∆t) then it's energy dependent, the
> formula is ∆t ≥ h/(4E), where h is Planck's constant. But if you're talking
> about the maximum possible number of bits a physical object can process
> then you're talking about mass and Bremermann's Limit.*
>

Bremermann's limit is derived from the minimum time to detect a change,
which is based on the "∆t ≥ h/(4E)" formula you cite. I agree mass=energy,
but the question was whether computational speed depends on the mass-energy
of the computer, or on the amount of energy it consumes (power). It seems
you agree it depends on the mass-energy of the computer, and not on the
power consumption (at least for an ideally engineered computer that doesn't
leak or waste energy).
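
To put rough numbers on that distinction, here is a back-of-the-envelope
sketch in Python (the 1 kg, 100 W, 300 K figures are purely illustrative
assumptions, not anyone's actual design):

# Compare the power-limited erasure rate (irreversible computing) with the
# mass-energy-limited operation rate (reversible computing).
# Assumed figures for illustration only: 1 kg computer, 100 W, 300 K.
from math import log

k = 1.380649e-23      # Boltzmann's constant, J/K
h = 6.62607015e-34    # Planck's constant, J*s
c = 2.99792458e8      # speed of light, m/s

mass, power, T = 1.0, 100.0, 300.0   # kg, W, K

landauer_per_erasure = k * T * log(2)        # ~2.9e-21 J per erased bit
erasure_rate = power / landauer_per_erasure  # ~3.5e22 erasures/s at 100 W

max_op_rate = 4 * mass * c**2 / h            # from dt >= h/(4E): ~5.4e50 ops/s

print(f"Power-limited erasure rate:  {erasure_rate:.2e} bits/s")
print(f"Mass-energy-limited op rate: {max_op_rate:.2e} ops/s")

The ~28 orders of magnitude between those two rates is the point: once the
logic is reversible, the power budget is no longer the bottleneck, and the
computer's mass-energy sets the speed limit.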


> * It's derived from the relationship between Shannon information entropy
> and the energy-time uncertainty principle; it states that the maximum
> number of bits a physical object can process is 1.36*10^50 bits per second
> per kilogram.*
>

I am not sure Shannon comes into it. It can be stated more concisely: the
minimum time to detect a change (which is required for any sequential
operation involving bits) depends on mass-energy. More simply, you can
consider the case of a single photon. Its frequency is proportional to its
energy (ν = E/h), and if you apply that same relation to a computer's total
mass-energy (E = mc²), you get Bremermann's limit.
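
As a quick sanity check on that arithmetic, here is a minimal sketch
applying ν = E/h to one kilogram of mass-energy:

# Bremermann's limit: apply nu = E/h to the mass-energy E = m*c^2 of 1 kg.
h = 6.62607015e-34    # Planck's constant, J*s
c = 2.99792458e8      # speed of light, m/s

E = 1.0 * c**2                       # mass-energy of 1 kg, ~9.0e16 J
print(f"{E / h:.2e} bits/s per kg")  # ~1.36e50, Bremermann's limit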



> * If you try to go beyond Bremermann's Limit the energy/mass density would
> become so high that your computer would collapse into a Black Hole, and
> then information could go in but it couldn't get out so the machine
> wouldn't be of much use. *
>

I think here you are thinking of the Bekenstein bound.

Note that Bremermann's limit is independent of density; you can have a
distributed, diffuse mass of computronium that achieves Bremermann's limit.
However, such a computer would be inherently geared more for parallel,
rather than sequential, computation. If you want to maximize sequential
computation (maximum clock speed per bit of memory) then you must use
computronium that is essentially a black hole. See:
https://cse.buffalo.edu/~rapaport/111F04/lloyd-ng-sciam-04.pdf

Or what I have written about computronium here:
https://drive.google.com/file/d/1LJuOQooUaVN0eHvPcL0zuKUT9Z0CLKic/view?usp=sharing
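
To put rough numbers on that serial/parallel trade-off, here is a sketch
along the lines of Lloyd and Ng's 1 kg example (the mass is an illustrative
assumption, not a design):

# A 1 kg computer compressed into a black hole: maximum operation rate
# (Margolus-Levitin) versus maximum memory (Bekenstein-Hawking entropy).
from math import pi, log

G    = 6.67430e-11        # gravitational constant, m^3/(kg*s^2)
hbar = 1.054571817e-34    # reduced Planck's constant, J*s
c    = 2.99792458e8       # speed of light, m/s
m    = 1.0                # kg (illustrative)

ops_per_s = 2 * m * c**2 / (pi * hbar)          # ~5.4e50 ops/s
bits = 4 * pi * G * m**2 / (hbar * c * log(2))  # ~3.8e16 bits

print(f"ops/s: {ops_per_s:.2e}, memory: {bits:.2e} bits")
print(f"ops/s available per bit: {ops_per_s / bits:.2e}")

Nearly all of the mass-energy shows up as clock speed rather than memory,
which is why the black-hole configuration is the maximally serial computer;
diffuse computronium of the same total mass has the same aggregate operation
rate, but spread across enormously more bits.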

Jason



>
> *John K Clark *
>
>
>
>
>
>
>
>>> *assuming you don't have infinite memory or infinite time available, and
>>> by infinite I mean infinite and not just astronomically large. If your
>>> memory is finite then after you finish a calculation you're going to need
>>> to erase all the scratchpad stuff in memory you use to produce the answer
>>> and just keep the answer, but that takes energy. Landauer's principle
>>> allows us to calculate the fundamental lower bound of the energy needed to
>>> erase one bit of information, it is k*T*ln2 (k is Boltzmann's constant, T
>>> is the temperature of the computer in kelvin, and ln2 is the natural
>>> logarithm of 2). **At room temperature it takes at least 2.9 x 10^-21
>>> joules of energy to erase one bit of information. Of course if you had
>>> infinite memory at your disposal then you wouldn't need to erase anything,
>>> but unfortunately you don't. *
>>>
>>> *There is one way around this, Landauer’s bound only applies to
>>> information erasure not to logic steps, so if your computer is made in a
>>> way that allows for reversible computing (everyday computers are not) then
>>> once you finish a computation you could keep the answer and then run the
>>> computer backwards to get back to the starting state, so no information is
>>> erased.*
>>>
>>
>> That's what I was referring to.
>>
>> * If you do that then, although you could never get to zero, you could
>>> perform a calculation using an arbitrarily small amount of energy. But the
>>> trouble is thermodynamics tells us the process needs to be as close to
>>> adiabatic as possible, so the less energy you use the slower your
>>> computation.*
>>>
>>
>> Computational speed (in operations per second) is a function of mass, not
>> of the amount of energy the computer consumes.
>>
>> Unless the computer is wasting energy for something other than
>> computation, which is what you seem to be suggesting here.
>>
>> Jason
>>
>>
>> * Of course if you had infinite time at your disposal it wouldn't matter
>>> how slow the computation is, but unfortunately you don't.*
>>>
>>> *John K Clark*
>>>
>>>
>>>
>>>
>>>
>>>>
>>>>
>>>>>
>>>>>> * > I appreciate the 'von Neumann probe' argument, but not all
>>>>>> civilisations are going to go that route*
>>>>>
>>>>>
>>>>> *It would only take one. And I'm not talking about one civilization,
>>>>> I'm talking about one individual in a civilization. It is simply not tenable to
>>>>> maintain that precisely 100% of the technologically savvy individuals in
>>>>> the observable universe have decided not to make a Von Neumann Probe. I
>>>>> think William of Ockham would agree with me that the best explanation of
>>>>> the Fermi Paradox is simply we are the first. And as I keep saying,
>>>>> somebody has to be. *
>>>>>
>>>>> *> I have a hunch that we tend to vastly underestimate the difficulty
>>>>>> of interstellar travel.*
>>>>>
>>>>>
>>>>> *You don't need interstellar travel to make a Dyson sphere/swarm, and
>>>>> something like that should be very noticeable, but we have noticed nothing.
>>>>> And any technological civilization worth its salt should be able to get a
>>>>> Von Neumann Probe moving at 1% the speed of light because its mass would be
>>>>> very small, and so it could get from one side of the galaxy to the other in
>>>>> just 10 million years, a blink of the eye cosmically speaking. But just how
>>>>> much would a Von Neumann Probe weigh? *
>>>>>
>>>>>
>>>>> *Estimates vary, Freeman Dyson thought it would be about a kilogram
>>>>> but George Church and Zaza Osmanov think that's much too high, they think
>>>>> with advanced Nanotechnology one Von Neumann Probe could be about the size
>>>>> of a bacterium and, depending on various engineering considerations, weigh
>>>>> between a trillionth of a gram (10^-12) and a thousandth (10^-3) of a gram;
>>>>> and, if it had access to raw materials and light energy from a star, it
>>>>> could make a copy of itself in about a year. Then after 79 years there
>>>>> would be an Avogadro's number of Von Neumann Probes, 6.02*10^23. And one
>>>>> year after that it would be obvious to a blind man in a fog bank that not
>>>>> all the technologically knowledgeable minds in the galaxy were on the
>>>>> Earth. But we have seen nothing like that. I think I know why. *
>>>>>
>>>>> *John K Clark*
>>>>>
>>>>
>>>>
>>
>