[ExI] matrioshka brains again, was: RE: Symbol Grounding
Jason Resch
jasonresch at gmail.com
Mon Apr 24 03:59:46 UTC 2023
On Sun, Apr 23, 2023 at 6:40 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> …
>
> >…Do you have a copy of this available online? I am interested…
>
> I don’t, but I do have a website now, with almost nothing on it, so I will
> find a way to digitize that content and put it on there. Thx for the good
> idea Jason.
>
Thank you spike, I look forward to that!
>
> >…Isn't incident solar radiation at 1 AU around 1300 W/m^2? 100 cm^2
> should have 13 W available. I'm just making sure I'm not missing
> something; not sure why your estimate is 3 orders of magnitude less than
> my rough estimate…
>
> I accounted for the considerable shielding required for any
> super-long-lived solar cell, but I probably over-accounted. Significant
> spectral filtering is needed, as well as a physical barrier for
> micrometeoroids.
>
We could consider some self-healing mechanism: perhaps nanobots, or some
type of liquid material that can be reshaped as micrometeoroids ablate it.
Consider how trees regrow leaves as they are lost.
> I am assuming being away from earth orbit to reduce space debris problems,
> but we need something that will produce power with a half-life of a
> thousand years or more.
>
As long as each node, on average, can generate more energy than it takes to
rebuild and replace that node before its mean time to failure, the swarm
can be self-replicating and self-sustaining.
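As a toy illustration of that balance condition (every number below is a
placeholder, not an estimate):

# Toy viability check for a self-sustaining node: does it harvest more
# energy over its lifetime than it costs to build its replacement?
power_w = 13.0                        # assumed harvested power per node, W
lifetime_s = 1000 * 365.25 * 86400    # assumed ~1000-year mean time to failure
energy_harvested_j = power_w * lifetime_s        # ~4.1e11 J
energy_to_rebuild_j = 1e11            # assumed cost to fabricate a replacement
print(energy_harvested_j > energy_to_rebuild_j)  # True: swarm sustains itself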
> Even then, it isn’t entirely clear to me what technology is needed for
> adequate shielding from cosmic rays, which punch right on thru your
> favorite mechanical barrier.
>
Electronics can be hardened to tolerate such things. There is ECC RAM, for
example, which uses a few extra bits per word as an error-correcting code
to detect and recover from random bit flips.
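For a flavor of how that works, here is a minimal Hamming(7,4) code in
Python. It corrects any single flipped bit in a 7-bit codeword; real ECC
RAM uses a wider SECDED variant of the same idea, so treat this as a
sketch:

def encode(d):  # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def correct(c):  # c: 7-bit codeword, possibly with one flipped bit
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
        | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1 \
        | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2
    if s:                  # nonzero syndrome gives the 1-indexed flip position
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1               # simulate a cosmic-ray bit flip
assert correct(code) == word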
>
> with a minimum latency between adjacent nodes of about 3 microseconds,
>
> >…A node here is a processor/solar cell pair? If so they should be ~10 cm
> from each other. At c the latency would be 3 nanoseconds rather than
> microseconds, but I think I am missing something…
>
> Eh, that’s what I get for doing this all from memory rather than
> recalculating, or even doing the dang calcs in my head to see if they are
> about in the right order of magnitude. Oh the ignominy, oy vey, sheesh.
>
> My strawman design had nodes spaced at 1 meter, so inherent latency would
> be 3 nanoseconds. Thx for the sanity check Jason: no sanity was detected.
>
I miscalculated as well; I should have said 0.3 ns based on my assumptions
(0.1 m / (3*10^8 m/s) ≈ 0.33 ns). ;-)
>
> with a cell-phone-ish 256GB of on-board memory per node, and given a
> trillion such nodes, can we park an effective GPT-4 chatbot on that?
>
> >…GPT-4 has a trillion parameters. At 8 bits per parameter you should be
> able to park it across 4 such nodes…
>
> OK here’s the design challenge. There is a tradeoff between memory
> availability, power use of the processor, signal bandwidth between nodes,
>
There are forms of memory that do not require more energy for greater
storage capacity, so there is not necessarily a trade-off between energy
and memory. Likewise, signal bandwidth seems mostly unrelated: a 100 Gbps
NIC doesn't use 100 times the power of a 1 Gbps NIC. For processing speed
there will be a relationship, but not necessarily a linear one. Note that
Moore's law would have been unworkable if the million-fold increase in
operations/second had not been accompanied by a million-fold reduction in
energy per operation; if it hadn't, our current PCs would each need a power
plant to run. So long as we continue to miniaturize our systems, we can
perform more computations not only faster but more efficiently.
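A quick back-of-envelope version of that point, with illustrative numbers
rather than measurements:

# If today's throughput came at a million-fold worse energy-per-operation,
# a desktop would need a power plant.
ops_per_sec = 1e12          # rough combined throughput of a modern desktop
watts_today = 100.0         # rough desktop power draw
joules_per_op = watts_today / ops_per_sec        # ~1e-10 J per operation
watts_old = ops_per_sec * joules_per_op * 1e6    # million-fold worse efficiency
print(f"{watts_old / 1e6:.0f} MW")  # 100 MW: small power plant territory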
> and I do not know how to optimize that function other than just try some
> combinations and see how it works.
>
I've heard this rule of thumb, and it seems to ring true: a computer will
feel sluggish if it takes the CPU(s) more than about 1 second to traverse
all the memory in RAM. So if we presume these CPUs are running programs
that interface with humans in real time (as opposed to uploaded human minds
running however fast they can), you would want the total processing
capacity of the CPUs to be roughly on par with the available RAM. If you
had 8 GB of RAM, for example, you might look for 2 CPU cores at 4 GHz.
Although I should clarify: I think when you said 256 GB you were referring
to non-volatile memory, which is for long-term storage, rather than the RAM
that holds running programs.
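That rule of thumb as arithmetic, using my assumed numbers:

# Can two 4 GHz cores sweep 8 GB of RAM in about a second?
ram_bytes = 8 * 2**30       # 8 GB
cores, clock_hz = 2, 4e9    # 2 cores at 4 GHz
bytes_per_cycle = 1         # crude assumption: one byte touched per core per cycle
sweep_s = ram_bytes / (cores * clock_hz * bytes_per_cycle)
print(f"{sweep_s:.2f} s")   # ~1.07 s: right at the edge of "sluggish"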
> What I don’t know is how to optimize GPT with regard to number of
> processors, capability of each processor and so on.
>
Neural networks are implemented today as multiplications of huge 2D
matrices holding floating point numbers. Graphics cards are well suited to
this operation, far more so than CPUs are, which is why running deep neural
networks benefits greatly from having a fast graphics card.
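As a minimal sketch (with made-up dimensions), this is the whole core
operation:

import numpy as np

# One dense neural-network layer: activations times weights.
batch, d_in, d_out = 32, 1024, 4096
activations = np.random.randn(batch, d_in).astype(np.float32)
weights = np.random.randn(d_in, d_out).astype(np.float32)
outputs = activations @ weights   # the matrix multiply GPUs are built for
print(outputs.shape)              # (32, 4096)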
>
>
> >…My last estimates on this (using 2020 numbers) was that we're about
> 10^34 away from the best physically possible computers. So Moore's law has
> another 115 years left to go…
>
> OK well, it might require AI to figure out how to do it, for it appears we
> are approaching the limits to what BI can do with electronics, at least for
> now.
>
It's quite incredible we've gotten this far without it. :-)
>
>
> >…Using 2016 tech estimates for such megastructures, feels to me a bit
> like a 1910 estimate of how many bits we could store in the future given
> the constraints that forests impose on the number of punch cards we can
> make…
>
> Ja, I have found the work on this generally disheartening without Robert’s
> constant goading and schmoding (he was a rather insistent chap when he
> wanted calculations done (a process I refer to as Bradburyish goading and
> schmoding.))
>
Not to derail this project, but have you looked into the potential of using
small black holes as power plants? (
https://www.livescience.com/53627-hawking-proposes-mini-black-hole-power-source.html
)
I think it is promising for a number of reasons:
Building a Dyson swarm around a star requires vast amounts of matter and
energy; entire planets would need to be disassembled to provide the raw
materials. In the end, the Dyson swarm would capture only 0.7% of the
energy present in the mass-energy of the star, and capturing it would take
the entire lifetime of the star. An advanced civilization could much more
easily construct a black hole engine. Such an engine can turn 100% of mass
into energy, 142 times the efficiency of fusion. Moreover, anything you
feed it is fuel: just drop something in, and the black hole turns it into
pure energy in the form of Hawking radiation.
"A mountain-sized black hole would give off X-rays and gamma rays, at a
rate of about 10 million megawatts, enough to power the world’s electricity
supply."
-- Stephen Hawking <https://www.bbc.com/news/science-environment-35421439>
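As a sanity check, the standard formula for the power radiated by a
Schwarzschild black hole, P = hbar*c^6 / (15360*pi*G^2*M^2), reproduces
that figure for a black hole of a few billion kilograms (the mass below is
my assumption, chosen to match the quote):

import math

hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11  # SI units
M = 6e9  # kg: roughly the mass scale Hawking's figure implies
P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)
print(f"{P:.1e} W")  # ~1.0e13 W, i.e. about 10 million megawatts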
>
>
> Never mind the other rings for now, let's look at just one ring, for I am
> told GPT4 needs jillions of processors to do its magic,
>
> >…So long as the memory is there, you could use a pocket calculator to run
> GPT-4. It would just take a long time to produce its response…
>
> Ja, I don’t think the current GPT-4 is what we need on there eventually,
> but I don’t understand the memory/processor balance with the transformers
> or really even how to estimate that. I am told Elon is buying these GPUs
> and such, but at some point we need a collaborator who does understand that
> balance for working LLMs and other types of calculations.
>
Yes, that's not my area of expertise. But note that there is a massive
difference between the cost of training the system and the cost of running
it once trained. Training GPT-4 cost many millions of dollars in processing
time, but invoking the trained GPT-4 through their API costs only around
$0.03 per prompt, if I remember correctly.
>
>
> …
>
> >…Isn't thermodynamic efficiency just a matter of the fraction of the sky
> filled with star vs. the fraction of sky with ~3K vacuum?
>
> Robert thought so, but I fear that he persistently failed (or rather he
> flatly refused) to take into account something important: the thermal
> gradient. I worked for a while on estimating that using Bessel functions,
> but eventually gave up on that approach because it was too easy for me to
> punch holes in my own reasoning.
>
Can this be resolved by just making the layer very thin?
>
>
> >… If I remember correctly then a Dyson sphere can at best utilize 50% of
> the energy present in the solar radiation. A ring, assuming rings don't
> fill most of the sky (from the point of view of the node on the ring)
> should be able to use closer to 100%...
>
> Disagree, but if you have some calculations which would return the
> equilibrium temperature of the innermost nodes, I am all eyes. The Bessel
> function approach predicts the inner nodes get hotter than blazes unless
> the entire device (collection of devices?) is quite diffuse. This might
> not be a problem, in fact I think it is a solution. It is a solution which
> comes with a cool bonus: it would explain why, if these things exist
> somewhere, we have never seen one, when they would be easily detectable if
> they used even 50% of the energy from the star (because it would have a
> weird-looking spectral signature.)
>
I think we can calculate what the temperature of the ring would be at 1 AU
using the Stefan-Boltzmann Law
<http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/stefan.html#c1>.
Incident solar radiation at 1 AU is 1360 W/m^2. To reach an equilibrium
temperature, the ring needs to radiate 1360 W per square meter to stop
increasing in temperature. According to the Stefan-Boltzmann Law, we need
to solve for T in this equation: 2*(5.6704*10^-8)*(T^4) = 1360. Note I use
2 here because the ring has two sides to radiate from, one facing the sun
and one facing away from the sun.
To solve for the equilibrium temperature we compute: (1360 /
((5.6704*10^-8) * 2))^(1/4) = 330.9 K = 57.8 degrees C.
In other words, a flat 1 square meter sheet at 1 AU would radiate 1360
Watts in blackbody radiation (equal to the radiation it receives from the
sun) when it is at 57.8 degrees C.
The formula gets a little more complicated for a Dyson sphere, as then only
one side can radiate away to space (whatever the inward-facing side
radiates is reabsorbed by the sphere or the sun). A rough approximation is
the same formula above with the 2 changed to a 1: (1360 / ((5.6704*10^-8)
* 1))^(1/4) = 393.5 K = 120.4 degrees C.
I don't know whether the Carnot efficiency is the appropriate formula for
collecting and using solar radiation, but assuming it is valid here, the
maximum theoretical efficiency of collection for the sphere would be:
(5772 - 393.5) / 5772 = 93%, and for the ring: (5772 - 330.9) / 5772 = 94%
(using 5772 K as the temperature of the sun's surface).
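Here is the same calculation as a short Python sketch, so the assumed
inputs (1360 W/m^2 at 1 AU, 5772 K solar surface) are easy to vary:

SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
FLUX = 1360.0       # solar constant at 1 AU, W/m^2
T_SUN = 5772.0      # effective temperature of the sun's surface, K

def equilibrium_temp(radiating_sides):
    # Radiated power balances absorbed power: sides * SIGMA * T^4 = FLUX
    return (FLUX / (radiating_sides * SIGMA)) ** 0.25

for label, sides in [("ring (radiates from 2 sides)", 2),
                     ("sphere (radiates from 1 side)", 1)]:
    T = equilibrium_temp(sides)
    carnot = (T_SUN - T) / T_SUN
    print(f"{label}: {T:.1f} K ({T - 273.15:.1f} C), Carnot limit {carnot:.0%}")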
>
> Robert and I never did agree on this while he was with us. But for one
> ring, we don't care about that open question. Thermodynamic details
> cheerfully available on request.
>
> >…A single ring has as much space as it needs behind the ring for a long
> tail of a heatsink. I wouldn't imagine cooling a single ring would be much
> of a problem. But the temperature the computer operates at does set a floor
> on the efficiency of irreversible computations (by Landauer's limit).
> Jason
>
> Ja of course, but with a single ring we don’t care about heat sink
> capabilities. We couldn’t overheat if we tried. Even with a single shell,
> which consists of a billion rings, thermal considerations are irrelevant.
> A billion rings with a trillion nodes per ring, if they can’t figure out
> the thermal heat sink problem, then we are just busted.
>
Are there any gains from multiple Dyson shells as compared with just using
the single biggest outer shell? It seems to me any intermediate shells
would lose substantially to the Carnot efficiency: the (N+1)th shell would
not be much colder than the Nth shell, hurting the ability to radiate, and
the (N-1)th shell would not be very hot compared to the sun.
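To put an illustrative number on that (shell temperatures assumed, not
derived):

# Carnot limit for a heat engine run between two adjacent shells that are
# only slightly different in temperature.
t_hot, t_cold = 400.0, 380.0     # K, assumed neighboring shell temperatures
print((t_hot - t_cold) / t_hot)  # 0.05: at most 5% of the flux is usable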
>
>
> So let’s set aside the heat sink problem for now and just think about how
> to optimize one ring, or even a slightly different problem: see what
> happens with a million nodes co-orbiting a common barycenter.
>
Perhaps they could occasionally reach out to nearby nodes with filaments
and trade momentum to stabilize their orbits. Perhaps they could use
magnetic fields to deflect the solar wind to perform course corrections, or
collect particles of solar wind to replenish the propellant stores for
their ion drives. I don't know whether any of these are workable, but it
seems there's room for some kind of solution when there are many watts of
power to play with and thousands of years over which to deploy them.
Jason