From spike at rainier66.com Wed Apr 1 01:24:21 2026 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 31 Mar 2026 18:24:21 -0700 Subject: [ExI] Fwd: announcement for talk "Storing Sunlight in Waste-derived Syngas" In-Reply-To: References: <4CM24AAR0c-jMQ7VLLuuJy7jYl-Ic-64zr5Bf1j58y2wxv1mcq2mL7RI0nwBnpypJsgy26V2uA6CEU52OQ1Ly7gk9diAVurZyOYK40GCX0c=@proton.me> <010201dcb226$ed4dc2f0$c7e948d0$@rainier66.com> <011401dcc07c$52183c70$f648b550$@rainier66.com> <009d01dcc130$21fa8750$65ef95f0$@rainier66.com> <003901dcc149$e24e06d0$a6ea1470$@rainier66.com> Message-ID: <005f01dcc176$45152e70$cf3f8b50$@rainier66.com> -----Original Message----- From: Keith Henson ... >>... Consumers will not tolerate intermittence in power either. The 1999-2001 era rolling blackouts made that very clear. The message was sent to politicians and the power company to make sure there will be no rolling blackouts, regardless of economic impact or what endangered species must be brutally extinguished. >...That was Enron playing games by shutting down power plants for maintenance when the power demand was highest to get a premium price for power... Well it worked. We have been paying premium prices for power ever since. >>... The power must keep coming. Result: the local peaker plant, which > turned out to be unnecessary and is now being prepared to run a data center. >...That's interesting. That's the Metcalf 605 MW plant. I heard it was called the Los Esteros Energy Center, a 300 MW facility, so we might be talking about two different plants. >...I remember when it was built. It is a seriously large combined cycle plant with (I think) two gas turbines... Keith It is exciting to see the data center go in. It is yuuuuuge. There was a big homeless community out there a year ago. They exported them to South San Jose; now the new data center takes up most of what used to be McCarthy Ranch. Fun aside: you may recall Alviso used to be the wrong side of the tracks. Well, it isn't that way now. 
A prole can go walking down there and no need to worry. Now with that big data center there, Alviso will likely be the happening place to live. spike From hkeithhenson at gmail.com Wed Apr 1 02:51:13 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 31 Mar 2026 19:51:13 -0700 Subject: [ExI] Fwd: announcement for talk "Storing Sunlight in Waste-derived Syngas" In-Reply-To: <005f01dcc176$45152e70$cf3f8b50$@rainier66.com> References: <4CM24AAR0c-jMQ7VLLuuJy7jYl-Ic-64zr5Bf1j58y2wxv1mcq2mL7RI0nwBnpypJsgy26V2uA6CEU52OQ1Ly7gk9diAVurZyOYK40GCX0c=@proton.me> <010201dcb226$ed4dc2f0$c7e948d0$@rainier66.com> <011401dcc07c$52183c70$f648b550$@rainier66.com> <009d01dcc130$21fa8750$65ef95f0$@rainier66.com> <003901dcc149$e24e06d0$a6ea1470$@rainier66.com> <005f01dcc176$45152e70$cf3f8b50$@rainier66.com> Message-ID: On Tue, Mar 31, 2026 at 6:24?PM wrote: snip > > Now with that big data center there, Alviso will likely be the happening place to live. Perhaps, but I really doubt it. Data centers occupy a chunk of land, draw a lot of power, and evaporate a lot of water for cooling, but they employ very few people once they are up and running. Keith > spike > > From avant at sollegro.com Wed Apr 1 06:18:30 2026 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 31 Mar 2026 23:18:30 -0700 Subject: [ExI] Monkeys in Space In-Reply-To: References: Message-ID: <9b74d20ac0292ed5fba3eb15d2f8cd94@sollegro.com> On 2026-03-30 02:33, Ben Zaiboc via extropy-chat wrote: > Yes, I think that habitats in orbit make more sense than colonies on > planets. 
That would certainly solve the gravity problem, but it still > leaves radiation shielding (there are ways to generate artificial > magnetospheres, but you still need to replicate the effect of a couple > of hundred miles of atmosphere, for shielding against high-energy > neutral particles), which means a lot of extra mass, and the simple > fact that we need a hell of a lot of oxygen, water, and food, as well > as all the myriad other requirements of biological organisms, which > contrasts very starkly with the probable requirements of uploads. -------- NASA is already working on that. They have tested radiotrophic fungi aboard the ISS and got good results. You could use the space between inner and outer hulls and bulkheads to store water and culture radiotrophic fungi (cf. Cladosporium sphaerospermum) to act as a form of living radiation shield, allowing them to convert ionizing radiation into biomass that can then be processed into food or chemical fuel for the people in the colony, turning radiation from a hazard into an energy source. Turning obstacles into opportunities is exactly how humanity will colonize space. ------- > > In practice, I think that if biological humans ever do live in space in > any numbers, it will have to be preceded by upload colonisation and the > creation of a lot of infrastructure, and the question remains: would it > be worth the bother and expense? Especially when an uploading solution > exists. You'd also have to haul tons and tons of fragile flesh up our > steep gravity well, which is very expensive no matter how it's done, > vs. a few watts of electricity to transmit some data. The more I think > about it, the less I think that 'monkeys in space' is a viable > scenario. ------------ I think there will be people in space before uploads become a viable technology. The difficulty lies in a large set of engineering problems encompassing various disciplines. 
For example, we already roughly sketched out a viable Interplanetary Transport Network and these routes could be calculated and optimized on the fly by computers. We are close to developing fusion rocket engines that use Helium-3 as fuel. Helium-3 is abundant on the moon and lunar helium mining could become a space-based industry. I, for one, have high hopes humanity's future in space. Stuart LaForge From benzaiboc at proton.me Wed Apr 1 08:53:05 2026 From: benzaiboc at proton.me (Ben Zaiboc) Date: Wed, 01 Apr 2026 08:53:05 +0000 Subject: [ExI] Reversible computing (Was: Re: ET Emergence (Was Re: Uploads as a group of AI agents)) Message-ID: Isn't reversible computing a bit like the Casimir Effect, a theoretical thing that /in theory/ could produce amazing benefits, but in practice, we will probably never be able to use? Has anyone ever actually done any reversible computing? (in a practical sense, not in the 'a glass of water' sense). What are the practical difficulties we'd have to overcome in order to do it in any kind of useful way? -- Ben From benzaiboc at proton.me Wed Apr 1 10:48:47 2026 From: benzaiboc at proton.me (Ben Zaiboc) Date: Wed, 01 Apr 2026 10:48:47 +0000 Subject: [ExI] Monkeys in Space In-Reply-To: <9b74d20ac0292ed5fba3eb15d2f8cd94@sollegro.com> References: <9b74d20ac0292ed5fba3eb15d2f8cd94@sollegro.com> Message-ID: <770o7m0ZLQV5FT1brdo_MHElU9gz6fG7ka1UfrFDLQOyiWJ9RfFsuSqqGmnDj2DDm-rTjztDWlWJH12faMo_35Z7BnwNfCC24UN23VS-Zcw=@proton.me> On Wednesday, 1 April 2026 at 07:18, Stuart LaForge wrote: > On 2026-03-30 02:33, Ben Zaiboc via extropy-chat wrote: > > > Yes, I think that habitats in orbit make more sense than colonies on > > planets. 
That would certainly solve the gravity problem, but it still > > leaves radiation shielding (there are ways to generate artificial > > magnetospheres, but you still need to replicate the effect of a couple > > of hundred miles of atmosphere, for shielding against high-energy > > neutral particles), which means a lot of extra mass, and the simple > > fact that we need a hell of a lot of oxygen, water, and food, as well > > as all the myriad other requirements of biological organisms, which > > contrasts very starkly with the probable requirements of uploads. > -------- > > NASA is already working on that. They have tested radiotrophic fungi > aboard the ISS and got good results. You could use the space between > inner and outer hulls and bulk heads to store water and culture > radiotrophic fungi (c.f. Cladosporium sphaerospermum) to act as a form > of living radiation shield, allowing it to convert ionizing radiation > into biomass that can then be processed into food or chemical fuel for > the people in the colony. Turning radiation from a hazard into an energy > source. Turning obstacles into opportunities is exactly how humanity > will colonize space. > ------- > > > > In practice, I think that if biological humans ever do live in space in > > any numbers, it will have to be preceded by upload colonisation and the > > creation of a lot of infrastructure, and the question remains: would it > > be worth the bother and expense? Especially when an uploading solution > > exists. You'd also have to haul tons and tons of fragile flesh up our > > steep gravity well, which is very expensive no matter how it's done, > > vs. a few watts of electricity to transmit some data. more I think > > about it, the less I think that 'monkeys in space' is a viable > > scenario. > ------------ > > I think there will be be people in space before uploads become a viable > technology. The difficulty lies in a large set engineering problems > encompassing various disciplines. 
But the problems are finite and > solvable. For example, we already roughly sketched out a viable > Interplanetary Transport Network and these routes could be calculated > and optimized on the fly by computers. We are close to developing fusion > rocket engines that use Helium-3 as fuel. Helium-3 is abundant on the > moon and lunar helium mining could become a space-based industry. > > I, for one, have high hopes humanity's future in space. I too have (very) high hopes for humanity's future in space, I just don't think those humans will be biological, or at least not most of them. We already have humans in space, and have had for over 50 years, but I'm not talking about a few individuals, I'm talking about a substantial population, millions of individuals at least, able to do useful things (as well as happily and comfortably (and safely!) live), all over the solar system. Consider for example, asteroid mining. Just about all of the useful asteroids are at least twice as far from the Earth as Mars is. Biological asteroid miners would need to carry with them a huge amount of stuff, go in a large spacecraft with a rotating section producing more than 0.6g (I think it was 0.8g) for them to spend most of their time in, lots of radiation shielding, a way for them to grow and recycle food and oxygen for a long time, be restricted in its acceleration and keep them sane and functional for how long? I don't know, certainly months and maybe years. All these problems are indeed solvable in theory, but I don't know if they would be in practice, or scalable to a useful degree. Compare this with uploaded asteroid miners. 
They'd need reliable power, some radiation shielding (although probably less than biological people would need), they could reduce their clock speed during long boring voyages to make them go by quicker, take large libraries of entertainment and virtual environments with them, withstand much higher accelerations, not need to be kept at physiological temperatures, not need food or oxygen or waste recycling, and their spacecraft could be orders of magnitude smaller. Many more of them could be built and launched much more cheaply and quickly. You could also pack many more uploads into your spacecraft. Considering the distances, and the possibility for uploads to run at many times the thinking speed of biologicals, you'd probably want a much larger collection of people in your expedition, maybe equivalent to a town or even city, and the spacecraft would be their home, as opposed to a small team of biological people on a hazardous trip millions of miles away from their home, friends and family for a limited (but long) time, in less-than-ideal conditions, both physically and psychologically. The issue of actually mining an asteroid once you're there would probably be fairly similar in both cases, except for the need for biological humans to wear spacesuits in order to work outside (but maybe remotely-operated robots would work fine for this). The problems of each scenario are finite and solvable, but it seems to me that solving the relatively small set of problems involved in creating human uploads is closer to becoming reality than the much larger set of problems involved in supporting biological humans indefinitely in space. Just the single constraint of needing to be in a substantial gravity field for most of the time is a huge problem. That might be solvable with drugs of some kind at some future point, but that's just speculation, we have no real idea what might be involved in avoiding this problem. 
For uploads, zero-gravity will be an asset, not a problem, and we already know how to build computers and transmit radio signals. We don't yet know how to build large rotating habitats and launch large numbers of biological humans safely into orbit. Another factor is what happens when things go wrong. For biological humans, almost anything going wrong in space a couple of hundred million miles from earth would mean certain death. Permanent death, most likely. For uploads, if anything goes wrong (that doesn't result in the destruction of the spacecraft, at least), they could as a last resort, just shut down and wait to be rescued. It wouldn't matter how long the rescue took. But apart from this, there would simply be much less /to/ go wrong. Depressurisation, running out of air, water or food, wouldn't be problems, medical problems wouldn't exist, physical damage would be to robotic manipulators, not flesh-and-blood limbs, etc. About the only real problems would be physical destruction of the spacecraft and massive doses of radiation from a solar storm or something equivalent (and seeing as it would be just sensible for everyone to have their minds backed-up in robust media, or even in a remote location, even these things would be survivable). You see what I mean when I say "the more I think about it..."? Large-scale space colonisation is not going to happen quickly (or perhaps at all) with biological humans, but with uploads, it could happen in a much shorter time. We're probably talking about months or years vs. decades or centuries. 
--- Ben From johnkclark at gmail.com Wed Apr 1 12:09:33 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 1 Apr 2026 08:09:33 -0400 Subject: [ExI] Reversible computing (Was: Re: ET Emergence (Was Re: Uploads as a group of AI agents)) In-Reply-To: References: Message-ID: On Wed, Apr 1, 2026 at 4:54 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Isn't reversible computing a bit like the Casimir Effect, a theoretical > thing that /in theory/ could produce amazing benefits, but in practice, we > will probably never be able to use?* > *Very small circuits that use logically reversible gates have been built that operate on just a few bits of information, but that was enough to prove the theory is correct: they beat any irreversible computer on energy usage because they didn't need to erase any information. Landauer's Limit says that the minimum amount of energy needed to erase one bit of information is kT ln 2. The downside of those experimental reversible computers is that although they use less energy, they also require more complex circuitry and they are much slower than conventional computers. * *A perfect quantum computer would be a reversible computer, but such a thing is probably impossible; a real quantum computer would need error correction, and that is irreversible. Drexler style Nanomachines can also be reversible, but when designing them you need to take the Casimir Effect into account because it can be a problem; the force of attraction is inversely proportional to the fourth power of the distance, so it grabs on fast as the distance between parts shrinks. Casimir attraction can also be helpful if you want a part in your nanomachine to have something that behaves like a spring. 
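[The kT ln 2 figure mentioned above is easy to put a number on. A minimal sketch, assuming room temperature (300 K is an assumption, not from the post):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # assumed room temperature, K

# Landauer's limit: minimum energy dissipated to erase one bit is kT ln 2
E_bit = k_B * T * math.log(2)
print(f"{E_bit:.3e} J per erased bit")   # ~2.87e-21 J
```

A few zeptojoules per bit, which is why reversible logic, which erases nothing, can in principle undercut any conventional computer on energy.]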
*With careful design the Casimir force can be shaped, for example if you use a dielectric material instead of a conductive metal the attraction is greatly reduced, and if you put a fluid with carefully designed dielectric properties between the plates you can even get a force of repulsion instead of attraction. You could use Casimir to keep two parts in your nanomachine precisely aligned without contact, and then you could make a no-contact, no-wear bearing that would operate similarly to the way magnetic levitation does, but by using quantum vacuum forces instead of magnetism.* *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Apr 1 12:58:56 2026 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 1 Apr 2026 08:58:56 -0400 Subject: [ExI] Reversible computing (Was: Re: ET Emergence (Was Re: Uploads as a group of AI agents)) In-Reply-To: References: Message-ID: On Wed, Apr 1, 2026 at 8:11 AM John Clark via extropy-chat wrote: > With careful design the Casimir force can be shaped, for example if you use a dielectric material instead of a conductive metal the attraction is greatly reduced, and if you put a fluid with carefully designed dielectric properties between the plates you can even get a force of repulsion instead of attraction. You could use Casimir to keep two parts in your nanomachine precisely aligned without contact, and then you could make a no-contact, no-wear bearing that would operate similarly to the way magnetic levitation does, but by using quantum vacuum forces instead of magnetism. How would this work? If you just had two surfaces near each other, they would attract. If you had a part balanced between two others, if it got out of alignment, it would be exponentially more attracted toward the one it was nearer - but for alignment, wouldn't you need it to be more attracted to the one it is further away from, so it is most (net) attracted to the equilibrium point?
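[Adrian's stability worry can be made concrete with a toy model. This is illustrative only: `net_force` and the constant C are hypothetical, geometry factors are ignored, and each wall simply pulls with a 1/d^4 attraction. It shows why a plate centred between two purely attractive walls sits at an *unstable* equilibrium: any displacement increases the pull from the nearer wall faster than it decreases the pull from the farther one.

```python
def net_force(x, gap=100e-9, C=1e-27):
    """Net force on a plate at position x between two attractive walls
    a distance `gap` apart; each wall pulls with magnitude C/d**4.
    Positive values mean the plate is pushed toward the right-hand wall.
    C is an arbitrary constant; only the sign pattern matters here."""
    return C / (gap - x)**4 - C / x**4

mid = 50e-9
assert abs(net_force(mid)) < 1e-12   # pulls cancel exactly at the centre
assert net_force(mid + 5e-9) > 0     # nudged right -> pulled further right
assert net_force(mid - 5e-9) < 0     # nudged left  -> pulled further left
```

The midpoint is a maximum of the potential, not a minimum, so a purely attractive arrangement cannot self-centre; getting a restoring force requires changing the sign or shape of the interaction.]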
From atymes at gmail.com Wed Apr 1 13:03:30 2026 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 1 Apr 2026 09:03:30 -0400 Subject: [ExI] Monkeys in Space In-Reply-To: <770o7m0ZLQV5FT1brdo_MHElU9gz6fG7ka1UfrFDLQOyiWJ9RfFsuSqqGmnDj2DDm-rTjztDWlWJH12faMo_35Z7BnwNfCC24UN23VS-Zcw=@proton.me> References: <9b74d20ac0292ed5fba3eb15d2f8cd94@sollegro.com> <770o7m0ZLQV5FT1brdo_MHElU9gz6fG7ka1UfrFDLQOyiWJ9RfFsuSqqGmnDj2DDm-rTjztDWlWJH12faMo_35Z7BnwNfCC24UN23VS-Zcw=@proton.me> Message-ID: On Wed, Apr 1, 2026 at 6:49 AM Ben Zaiboc via extropy-chat wrote: > Large-scale space colonisation is not going to happen quickly (or perhaps at all) with biological humans, but with uploads, it could happen in a much shorter time. We're probably talking about months or years vs. decades or centuries. Even uploads have to wait for the hardware to travel, and even uploads would have multiple stages of "scout hardware arrives first, surveys, transmits results back, if they're good then first colonization hardware is launched, eventually upgrade hardware is launched". Being non-biological might reduce the amount at later stages, but not the need for multiple waves - and low-thrust travel will always be more efficient, so the fact that uploads can tolerate higher Gs won't eliminate low-thrust trips. It'll be decades or centuries still, even if perfect mind uploading was invented tomorrow. From johnkclark at gmail.com Wed Apr 1 17:31:22 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 1 Apr 2026 13:31:22 -0400 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_New_AI_Just_Broke_My_Brain?= Message-ID: *Google's New AI Just Broke My Brain* *John K Clark See what's on my list at Extropolis * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Thu Apr 2 11:49:23 2026 From: johnkclark at gmail.com (John Clark) Date: Thu, 2 Apr 2026 07:49:23 -0400 Subject: [ExI] Reversible computing (Was: Re: ET Emergence (Was Re: Uploads as a group of AI agents)) In-Reply-To: References: Message-ID: On Wed, Apr 1, 2026 at 9:00 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: * >> With careful design the Casimir force can be shaped, for example if >> you use a dielectric material instead of a conductive metal the attraction >> is greatly reduced, and if you put a fluid with carefully designed >> dielectric properties between the plates you can even get a force of >> repulsion instead of attraction. You could use Casimir to keep two parts in >> your nanomachine precisely aligned without contact, and then you could make >> a no-contact, no-wear bearing that would operate similarly to the way magnetic >> levitation does, but by using quantum vacuum forces instead of magnetism.* > > > > > > > > * > How would this work? If you just had two surfaces near each other, > they would attract. If you had a part balanced between two others, if it > got out of alignment, it would be exponentially more attracted toward the > one it was nearer - but for alignment, wouldn't you need it to be more > attracted to the one it is further away from, so it is most (net) attracted > to the equilibrium point?* *First of all I should emphasize that a reversible, perfectly frictionless Casimir Bearing would be nice to have but would not be absolutely necessary for Drexler style Nanotechnology; instead you could use something that works in a way similar to a bacterial flagellar motor. It would be robust, reliable, and much easier to build, but it would not be as efficient or be reversible because it would not be perfectly frictionless. But it works well enough to do a lot of interesting things, as bacteria can attest. 
* *Purely attractive forces in 3-D will always be unstable, but if you pattern the surface in just the right way you can cause the Casimir force to depend on lateral alignment and even become repulsive. The patterning creates a metamaterial that produces a periodic vacuum energy landscape with preferred angular positions. The teeth of a gear also produce preferred angular positions, but metamaterials can do it without any physical contact; all they need is a vacuum. * *I asked Mr.GPT to give me a design for a **Casimir Force Bearing and this is what he came up with: * *===* *A **repulsive Casimir bearing is the closest thing to a true 'quantum levitation' device at the nanoscale. I'll give you a concrete design, then walk through the physics and numbers so you can see how it could actually work.* ------------------------------ Concept: a levitated cylindrical rotor *We want to reverse the usual attractive Casimir effect so that:* - * The rotor is pushed away from the stator * - * A stable equilibrium gap forms (say ~50 nm) * - * The rotor 'floats' without contact * The key trick: a three-material system *Repulsive Casimir forces don't normally occur in vacuum between two solids. But they can occur if you insert a medium with the right optical properties.* Lifshitz condition (qualitative) *You can get repulsion if, over relevant frequencies:* *e1 < e3 < e2 (the permittivity of the intervening medium lies between those of the two solids)* From johnkclark at gmail.com Thu Apr 2 13:04:41 2026 From: johnkclark at gmail.com (John Clark) Date: Thu, 2 Apr 2026 09:04:41 -0400 Subject: [ExI] Two very recent Quantum Computing bombshells Message-ID: *Two big developments occurred in the field of quantum computing yesterday, and one of them may be of particular interest to members of this group because it concerns 256-bit elliptic-curve encryption, the sort that Bitcoin uses. 
A year or even a week ago most people thought you'd need a quantum computer with several million physical qubits to break Bitcoin's encryption in a reasonable length of time, but yesterday Google announced they have found a far more efficient algorithm which, if used on a quantum computer with just 25,000 physical qubits, could break Bitcoin in only 10 days. Interestingly Google decided not to publish the algorithm; instead they proved they have access to such an algorithm by publishing a cryptographic zero-knowledge proof. They said they did this after consulting with the US government. Google issued the following statement:* *"The escalating risk that detailed cryptanalytic blueprints could be weaponized by adversarial actors necessitates a shift in disclosure practices. Accordingly, we believe it is now a matter of public responsibility to share refined resource estimates while withholding the precise mechanics of the underlying attacks. Progress in quantum computing has reached the stage where it is prudent to stop publishing details of improved quantum cryptanalysis to avoid misuse."* *Quantum computing bombshells that are not April fools jokes* *John K Clark See what's on my list at Extropolis * -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Apr 2 14:18:20 2026 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 2 Apr 2026 10:18:20 -0400 Subject: [ExI] Reversible computing (Was: Re: ET Emergence (Was Re: Uploads as a group of AI agents)) In-Reply-To: References: Message-ID: Ah, I missed the "liquid medium to change the refractive index" bit. Thanks. Though I have had trouble getting labs to make stuff with 50 nm (or smaller) features. 
On Thu, Apr 2, 2026 at 7:50 AM John Clark wrote: > > On Wed, Apr 1, 2026 at 9:00 AM Adrian Tymes via extropy-chat wrote: > >>> >> With careful design the Casimir force can be shaped, for example if you use a dielectric material instead of a conductive metal the attraction is greatly reduced, and if you put a fluid with carefully designed dielectric properties between the plates you can even get a force of repulsion instead of attraction. You could use Casimir to keep two parts in your nanomachine precisely aligned without contact, and then you could make a no contact no wear bearing that would operate similar to the way magnetic levitation does but by using quantum vacuum forces instead of magnetism. >> >> >> > How would this work? If you just had two surfaces near each other, >> they would attract. If you had a part balanced between two others, if >> it got out of alignment, it would be exponentially more attracted >> toward the one it was nearer - but for alignment, wouldn't you need it >> to be more attracted to the one it is further away from, so it is most >> (net) attracted to the equilibrium point? > > > First of all I should emphasize that a reversible perfectly frictionless Casimir Bearing would be nice to have but would not be absolutely necessary for Drexler style Nanotechnology, instead you could use something that works in a way similar to the way a bacterial flagellar motor works, it would be robust reliable and much easier to build, but it would not be as efficient or be reversible because it would not be perfectly frictionless. But it works well enough to do a lot of interesting things as bacteria can attest. > > Purely attractive forces in 3-D will always be unstable, but if you pattern the surface in just the right way you can cause the Casimir force to depend on lateral alignment and even become repulsive. The patterning creates a metamaterial that produces a periodic vacuum energy landscape with preferred angular positions. 
The teeth of a gear also produce preferred angular positions but metamaterials can do it without any physical contact, all they need is a vacuum. > > I asked Mr.GPT to give me a design for a Casimir Force Bearing and this is what he came up with: > === > > A repulsive Casimir bearing is the closest thing to a true 'quantum levitation' device at the nanoscale. I'll give you a concrete design, then walk through the physics and numbers so you can see how it could actually work. > > ________________________________ > > Concept: a levitated cylindrical rotor > > We want to reverse the usual attractive Casimir effect so that: > > The rotor is pushed away from the stator > A stable equilibrium gap forms (say ~50 nm) > The rotor 'floats' without contact > > The key trick: a three-material system > > Repulsive Casimir forces don't normally occur in vacuum between two solids. But they can occur if you insert a medium with the right optical properties. > > Lifshitz condition (qualitative) > > You can get repulsion if, over relevant frequencies: > > e1 < e3 < e2 > So the design is: > > Rotor: high-permittivity material > Gap: carefully chosen fluid > Stator: lower-permittivity material > > A concrete design > > Geometry > > Rotor radius: R = 1 µm > Length: L = 10 µm > Target gap: d = 50 nm > > ________________________________ > > Materials (example set) > > Rotor: gold (high permittivity at relevant frequencies) > Medium: bromobenzene (commonly used in experiments) > Stator: silica (SiO2) > > This combination has actually been shown experimentally to produce repulsive Casimir forces. > > What the force looks like > > The pressure still scales roughly like: > > P ∝ 1/d^4 > > But now it's positive (repulsive instead of attractive). > > At ~50 nm: > > Typical magnitude: on the order of 1-10 Pa > > So very similar strength as before, but reversed. 
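[As a rough cross-check of the quoted magnitudes: the textbook result for perfectly conducting plates in vacuum, P = pi^2 * hbar * c / (240 d^4), gives a few hundred pascals at the 50 nm gap quoted above, so the 1-10 Pa cited for the fluid-gap configuration is plausibly weaker once dielectric screening is accounted for. A sketch (the ideal-metal vacuum case only, not the fluid-mediated repulsive design):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C_LIGHT = 2.99792458e8   # speed of light, m/s

def casimir_pressure(d):
    """Ideal Casimir pressure (attractive) between perfectly conducting
    parallel plates in vacuum separated by d metres, in pascals."""
    return math.pi**2 * HBAR * C_LIGHT / (240 * d**4)

print(f"{casimir_pressure(50e-9):.0f} Pa")   # ~208 Pa at a 50 nm gap
# Doubling the gap reduces the pressure 16-fold, the 1/d^4 scaling above
```

Note the steep scaling: this is why the force "grabs on fast" as parts approach, and also why an equilibrium gap of tens of nanometres is the natural operating regime.]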
> > ________________________________ > > Levitation behavior > > Now the key difference: > > Inward vs outward balance > > If the rotor gets too close to the stator → repulsion increases sharply > If it moves away → repulsion drops rapidly > > This creates a stable equilibrium gap > > ________________________________ > > Effective stiffness > > Using the same geometry as before: > > Surface area: > > A ≈ 6×10^-11 m^2 > > Force: > > F ≈ (1-10 Pa) × A ≈ 10^-10 to 10^-9 N > > Effective spring constant: > > k ≈ F/d ≈ 10^-3 to 10^-2 N/m > > So: > > Nanometer displacements → piconewton restoring forces > Enough for stable confinement > > ________________________________ > > Why this is much better than the attractive case > > In the previous (attractive) design: > > Any instability → collapse → sticking > > Here: > > Instability → increased repulsion → self-correction > > So this behaves like: > > A true non-contact bearing with a restoring gap > > Rotation and friction > > The rotor can spin because: > > The Casimir force is mostly normal (radial) > It does not strongly oppose tangential motion > > Remaining losses come from: > > Fluid viscosity (important!) > Fluctuation-induced electromagnetic drag (tiny) > > So the dominant dissipation is actually: > > the fluid you needed to create repulsion > > ________________________________ > > The big tradeoff > > This design solves one problem but introduces another: > > Pros > > Stable levitation > No stiction > Passive centering > Potentially reversible (low dissipation) > > Cons > > Requires a liquid medium, not vacuum > Fluid introduces: > > Viscous drag > Thermal noise > Possible chemical complications > > ________________________________ > > Where metamaterials could improve this > > Metamaterials could, in principle: > > Reproduce the 'effective medium' 
condition without a liquid > Create: > > Vacuum-compatible repulsion > Tunable equilibrium gaps > Reduced dissipation > > That's still mostly experimental, but it's a major research direction. > > ________________________________ > > Stability check (important insight) > > This system is stable because: > > The Casimir force is strongly distance-dependent > Repulsion increases faster than any destabilizing drift > > So you get: > > A potential well centered at a finite gap, not at contact > > That's exactly what you want for a bearing. > > ________________________________ > > Intuition > > The rotor isn't 'floating freely' like in air. > > It's more like: > > Sitting in a very steep, invisible cushion of quantum pressure that keeps it from touching the walls. > > How far are we? > > We already have: > > Measurements of Casimir forces in structured geometries > Demonstrations of lateral Casimir forces and torques > Early work on Casimir engineering with metamaterials > > But we do not yet have: > > A fully repulsive vacuum Casimir system > A working Casimir bearing > > ________________________________ > > Bottom line > > A repulsive Casimir bearing using a fluid medium: > > Can genuinely levitate a nanoscale rotor > Provides passive, stable confinement > Allows low-friction rotation > > But: > > It trades vacuum operation for fluid damping > And requires very careful material selection > > > ========= > John K Clark > From spike at rainier66.com Thu Apr 2 17:43:47 2026 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Apr 2026 10:43:47 -0700 Subject: [ExI] epic april fools joke Message-ID: <00e201dcc2c8$42716e10$c7544a30$@rainier66.com> NASA Plays Incredible April Fools' Joke Of Once Again Pretending To Go To The Moon | Babylon Bee -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From johnkclark at gmail.com Fri Apr 3 11:53:33 2026
From: johnkclark at gmail.com (John Clark)
Date: Fri, 3 Apr 2026 07:53:33 -0400
Subject: [ExI] Quantum Computers could crack Bitcoin before the end of this decade
Message-ID: 

*The following article was in yesterday's issue of the journal Nature:*

*"It's a real shock": quantum-computing breakthroughs pose imminent risks to cybersecurity*

*I think this is getting serious. And that's not all; independent of the above, this article also appeared yesterday:*

*Trapped ion quantum technology gets smaller*

*The writing is on the wall. If I had any Bitcoins I'd sell them quick, because the party is about to come to an end.*

*John K Clark    See what's on my list at Extropolis*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pharos at gmail.com Sat Apr 4 10:28:30 2026
From: pharos at gmail.com (BillK)
Date: Sat, 4 Apr 2026 11:28:30 +0100
Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy
Message-ID: 

I gave Kimi 2.5 Thinking AI a text suggesting that the Fermi Paradox was an artifact of our short listening time and our very limited target radio sources. I thought Kimi's comments were interesting.
BillK

Kimi 2.5 Thinking AI -
The user is asking me to discuss the implications of a text they provided about the Fermi paradox. The text essentially argues that the Fermi paradox isn't really a paradox because we haven't searched nearly enough space to conclude anything definitive about the existence of extraterrestrial intelligence.

Key points from the text:
1. Wright et al. (2018) calculated SETI has searched only ~7,700 liters out of Earth's total ocean volume (1.335 billion trillion liters) - metaphorically speaking
2. That's like searching a hot tub's worth of water and declaring the ocean fishless
3. Tarter (2010) estimated it was even less - a drinking glass
4. Even if the galaxy had more transmitters than stars, we wouldn't have found them yet
5.
Exoplanet data shows life ingredients aren't rare: 6,150+ confirmed exoplanets, ~1 in 5 Sun-like stars has Earth-sized planet in habitable zone, 11-40 billion potentially habitable planets in Milky Way alone 6. 2 trillion galaxies in observable universe The implications I should discuss: - Epistemological implications: We can't draw negative conclusions from such limited data - The "paradox" dissolves: There's no contradiction between "high probability of life" and "we haven't found it" - Methodological implications: Need for continued/patient search - Philosophical implications: Anthropocentric bias in expecting quick results - Temporal implications: Civilizations might exist but we haven't looked long enough - The vastness of parameter space (8 dimensions mentioned) - Implications for SETI funding and priorities - The Copernican principle vs. our actual observational capabilities I should structure this as a thoughtful analysis covering these angles without being overly technical but showing I understand the depth of the argument. The text seems to contain a very recent date reference (March 2026) which is actually in the future from the session timestamp (2026-04-04). This appears to be a hypothetical or fictional text provided by the user, but I should treat it as the premise for discussion regardless of the dates. The core arguments (Wright et al. 2018, Tarter 2010) are real research, but the specific 2026 date is likely a projection or hypothetical scenario in the user's text. I'll proceed with discussing the implications of the argument as presented. Key implication categories: 1. Statistical/Logical: The paradox is resolved by insufficient sampling 2. Methodological: Need for patience and expanded search strategies 3. Philosophical: Humility about human capabilities and cosmic time scales 4. Scientific policy: Justification for continued SETI investment 5. 
Temporal: The "snapshot" problem - civilizations might not exist simultaneously, or we might be looking at the wrong times

I should not just summarize but critically engage with the implications.

The text you've provided presents a compelling resolution to the Fermi paradox by reframing it as an *artifact of statistical impatience* rather than a genuine cosmic contradiction. The implications of this reframing are profound across epistemology, astrobiology, and even scientific ethics. Here is a discussion of the key implications:

1. The Dissolution of the "Paradox" Itself

The Fermi paradox traditionally derives its force from an apparent logical tension: if intelligent life is probable (given the vast number of planets), then the absence of evidence constitutes evidence of absence. Wright's eight-dimensional "cosmic haystack" calculation severs this logic. The implication is that *absence of evidence is not merely weak evidence of absence; it is essentially zero evidence of absence.* If we have searched 7,700 liters out of 1.335 billion trillion liters, our sampling error is so astronomical that concluding anything about the population is statistically indefensible. The paradox evaporates; we are not observing a puzzle, but the inevitable result of extreme under-sampling.

2. The Anthropocentric Bias of "Now"

The text highlights a temporal arrogance embedded in the paradox. We expect that if civilizations exist, they should be detectable *during our brief technological window* (roughly one century of radio astronomy). The implication: *We may be observing the universe at the wrong frequency, the wrong time, or the wrong sensitivity.* A civilization could have broadcast for a million years and fallen silent a billion years ago. Or they may communicate via technologies we cannot yet conceive (neutrino beams? gravitational wave modulation? directed energy?).
The haystack model suggests we have not only searched a tiny volume but a tiny *slice* of the electromagnetic spectrum and temporal history.

3. The "Great Silence" as a Baseline, Not a Result

If the galaxy could be "filled with more transmitters than stars" and we would remain oblivious, the implication is that *the default state of the universe is observational silence, regardless of population density.* This inverts the burden of proof. Rather than asking "Where is everybody?" (implying their absence requires explanation), we should ask "What minimum detectability would be required to find a civilization given our search volume?" This shifts SETI from a philosophical quest to an engineering problem of sensitivity and coverage.

4. The Scarcity of "Big" Signals vs. the Abundance of Life

The text notes that exoplanet surveys suggest life's raw ingredients are not rare (11-40 billion potentially habitable worlds in the Milky Way alone). Combined with the haystack calculation, this implies a *decoupling between the prevalence of life and the prevalence of detectable technosignatures.* Intelligent life could be common while *detectable* technological civilizations are rare: either because they don't develop transmitting technology, they switch to undetectable communication methods quickly, they practice "technological sustainability" (limiting waste heat/radio leakage), or they simply don't wish to be found. The paradox confused biological abundance with electromagnetic conspicuousness.

5. Methodological Implications for SETI

The drinking glass/hot tub metaphor has practical consequences:
- *Long-term commitment:* The search may require centuries or millennia of systematic observation before yielding a statistically meaningful null result.
- *Target diversity:* We must expand beyond the "water hole" (the 1.42-1.66 GHz band) and consider the full 8-dimensional parameter space (spatial, frequency, temporal, polarization, modulation).
- *Technosignature agnosticism:* We should look not just for intentional beacons but for incidental leakage, astroengineering projects (Dyson spheres), or atmospheric pollutants: signatures that might be louder but more ambiguous than radio signals.

6. The Copernican Principle Revisited

There is a subtle tension here. The Copernican Principle suggests we occupy no special place in space or time. Yet the haystack model implies we are in a special position: *we are the only observers we know of, looking with extremely primitive tools at the very beginning of our search history.* The implication is that our current "loneliness" may be a transient artifact of our technological immaturity. If we survive to search for 10,000 years at increasing sensitivity, and still find nothing, *then* the paradox regains its teeth. Until then, we are like a child who, having examined a single square meter of beach, concludes the ocean contains no whales.

7. Existential Risk and the "Grabby Aliens" Hypothesis

If the haystack calculation is correct and civilizations are common but undetected, it raises the stakes of the "Dark Forest" or "Grabby Aliens" hypotheses. If the galaxy could be full of civilizations and we cannot see them, *we cannot assess whether we are in danger.* We are acoustically blind in a potentially crowded room. This suggests a precautionary principle: perhaps we should listen extensively before we broadcast (METI), given that our acoustic obscurity may be our only protection.

Conclusion

The ultimate implication of Wright, Kanodia, and Lubar's work is *intellectual humility on a cosmic scale.* The text suggests that asking "Where is everybody?" after a century of searching is akin to arriving at a party, glancing at your watch one second after entering, and concluding the room is empty because you see no one in your immediate field of view. The paradox was never about aliens; it was about our own impatience and the human difficulty of comprehending astronomical scale.
The universe is not silent; it is merely whispering in a dialect we have not yet learned to hear, across distances we have barely begun to scan, at times we have not yet observed. We are not alone; we are just early, and we are deaf.
---------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnkclark at gmail.com Sat Apr 4 11:04:51 2026
From: johnkclark at gmail.com (John Clark)
Date: Sat, 4 Apr 2026 07:04:51 -0400
Subject: [ExI] Taxonomy of Minds
In-Reply-To: <8Bvx-AGY-cKpVy3U1kr-EpMifKhlr2rYxlIXVlaIbP4c-enUsroiaz1fvvFkW1CTIlsOcB3xANoHoPfPWsUD-o7RIQT03jka8zaf6_arwyY=@proton.me>
References: <8Bvx-AGY-cKpVy3U1kr-EpMifKhlr2rYxlIXVlaIbP4c-enUsroiaz1fvvFkW1CTIlsOcB3xANoHoPfPWsUD-o7RIQT03jka8zaf6_arwyY=@proton.me>
Message-ID: 

On Mon, Mar 30, 2026 at 5:12 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:

*> We should be extremely careful when looking at what dolphins do. The
> tendency to anthropomorphise is almost irresistible.*

*Is that always a bad thing, isn't anthropomorphism exactly what we're doing when we reject solipsism and ascribe consciousness to our fellow human beings?*

* John K Clark*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnkclark at gmail.com Sat Apr 4 11:35:40 2026
From: johnkclark at gmail.com (John Clark)
Date: Sat, 4 Apr 2026 07:35:40 -0400
Subject: [ExI] Quantum Computers could crack Bitcoin before the end of this decade
In-Reply-To: 
References: 
Message-ID: 

*Google says Bitcoin will be cracked by 2029*

*John K Clark    See what's on my list at Extropolis*

> *"It's a real shock": quantum-computing breakthroughs pose imminent risks
> to cybersecurity*
>
> *I think this is getting serious. And that's not all; independent of the
> above, this article also appeared yesterday:*
>
> *Trapped ion quantum technology gets smaller*
>
> *The writing is on the wall.
If I had any Bitcoins I'd sell them > quick because the party is about to come to an end. * > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benzaiboc at proton.me Sat Apr 4 11:43:54 2026 From: benzaiboc at proton.me (Ben Zaiboc) Date: Sat, 04 Apr 2026 11:43:54 +0000 Subject: [ExI] Taxonomy of Minds In-Reply-To: References: <8Bvx-AGY-cKpVy3U1kr-EpMifKhlr2rYxlIXVlaIbP4c-enUsroiaz1fvvFkW1CTIlsOcB3xANoHoPfPWsUD-o7RIQT03jka8zaf6_arwyY=@proton.me> Message-ID: On Saturday, 4 April 2026 at 12:05, John Clark wrote: > On Mon, Mar 30, 2026 at 5:12?AM Ben Zaiboc via extropy-chat wrote: > >>> We should be extremely careful when looking at what dolphins do. The tendency to anthropomorphise is almost irresistible. > > Is that always a bad thing, isn't anthropomorphism exactly what we're doing when we reject solipsism and ascribe consciousness to our fellow human beings? Not at all. Anthropomorphism is the tendency to ascribe human characteristics to non-human things. Solipsism is the position that I am the only thing that really exists. Rejecting it just denies that statement, it says nothing about the nature of the other things that we assume to exist. Ascribing human characteristics to other humans isn't anthropomorphism. You could ask "But how do we know they are human?" but that's just heading down a rabbit hole. I think the thing to focus on is the 'non-human things' part. Things that we already know aren't human, like dolphins. I'm not sure what to call 'ascribing human characteristics to things which seem to be humans', but it's not the same as anthropomorphism. As to whether anthropomorphism is always a bad thing, perhaps not, it probably evolved for a good reason, I doubt that it's a spandrel. But it does tend to get in the way of accurately seeing what's going on when we observe non-human subjects. --- Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Sat Apr 4 12:38:50 2026 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Apr 2026 08:38:50 -0400 Subject: [ExI] Taxonomy of Minds In-Reply-To: References: <8Bvx-AGY-cKpVy3U1kr-EpMifKhlr2rYxlIXVlaIbP4c-enUsroiaz1fvvFkW1CTIlsOcB3xANoHoPfPWsUD-o7RIQT03jka8zaf6_arwyY=@proton.me> Message-ID: On Sat, Apr 4, 2026 at 7:44?AM Ben Zaiboc wrote: *>>> We should be extremely careful when looking at what dolphins do. The >>> tendency to anthropomorphise is almost irresistible.* >> >> > > *>> Is that always a bad thing, isn't anthropomorphism exactly what we're >> doing when we reject solipsism and ascribe consciousness to our fellow >> human beings?* > > *> Not at all. Anthropomorphism is the tendency to ascribe human > characteristics to non-human things.* *I know, but how do you distinguish between characteristics that are the result of being human and characteristics that are the result of being conscious? * > *> Things that we already know aren't human, like dolphins.* > *What's with this "we" business? I know with absolute certainty that I am conscious but I am less certain about you because I can directly experience my own consciousness but when it comes to you or any of my fellow human beings all I can do is observe their actions and make an important assumption; consciousness is the inevitable result of intelligent data processing. That's the only reason I believe other people are conscious, except when they're sleeping or under anesthesia or dead because when they are in any of those conditions they are not processing data very intelligently. * * John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From brent.allsop at gmail.com Sat Apr 4 23:14:59 2026
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sat, 4 Apr 2026 17:14:59 -0600
Subject: [ExI] Taxonomy of Minds
In-Reply-To: 
References: <8Bvx-AGY-cKpVy3U1kr-EpMifKhlr2rYxlIXVlaIbP4c-enUsroiaz1fvvFkW1CTIlsOcB3xANoHoPfPWsUD-o7RIQT03jka8zaf6_arwyY=@proton.me>
Message-ID: 

On Sat, Apr 4, 2026 at 5:44 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> On Saturday, 4 April 2026 at 12:05, John Clark wrote:
>
> On Mon, Mar 30, 2026 at 5:12 AM Ben Zaiboc via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> *> We should be extremely careful when looking at what dolphins do. The
>> tendency to anthropomorphise is almost irresistible.*
>
> *Is that always a bad thing, isn't anthropomorphism exactly what we're
> doing when we reject solipsism and ascribe consciousness to our fellow
> human beings?*
>
> Not at all.
>
> Anthropomorphism is the tendency to ascribe human characteristics to
> non-human things.
>
> Solipsism is the position that I am the only thing that really exists.
> Rejecting it just denies that statement, it says nothing about the nature
> of the other things that we assume to exist. Ascribing human
> characteristics to other humans isn't anthropomorphism.
>
> You could ask "But how do we know they are human?" but that's just heading
> down a rabbit hole.

I disagree. This faithless thinking, the idea that consciousness isn't fully approachable via science and that attempting to approach it would only be "heading down a rabbit hole", has blinded us to what I predict will be the greatest discovery of all time: "Which of all our descriptions of stuff in the brain is a description of elemental redness," and how it is subjectively bound to all the other phenomenal qualities to generate consciousness. If everyone hadn't engaged in this faithless "rabbit hole" kind of thinking, I think we would have discovered this by now.
Once we know what that is, we'll be able to see whether a bat uses that same redness to represent echolocated food (or at worst has been engineered to use redness), and then we will know what it is like to be that bat.

It seems obvious to me that the left hemisphere of the brain isn't a solipsist, because it knows absolutely that the right hemisphere exists, since it directly apprehends the knowledge in both hemispheres as one infallible gestalt experience. Surely we'll be able to engineer similar neural ponytails to mind-meld at least parts of qualia between other brains. And surely when they achieve the ability to restore destroyed primary visual cortices, they will need to know how to engineer redness and all the other color qualities. Cortical prosthetic vision systems can already provide low-resolution black-and-white vision 'sprites,' and researchers are working hard to enable color vision. My prediction is that these are the people who will first discover which of all our descriptions of brain activity corresponds to a pixel of elemental redness, finally giving us the dictionary to know what colors things truly are, not just false 'gray matter' seemings.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnkclark at gmail.com Sun Apr 5 11:32:08 2026
From: johnkclark at gmail.com (John Clark)
Date: Sun, 5 Apr 2026 07:32:08 -0400
Subject: [ExI] Taxonomy of Minds
In-Reply-To: 
References: <8Bvx-AGY-cKpVy3U1kr-EpMifKhlr2rYxlIXVlaIbP4c-enUsroiaz1fvvFkW1CTIlsOcB3xANoHoPfPWsUD-o7RIQT03jka8zaf6_arwyY=@proton.me>
Message-ID: 

On Sat, Apr 4, 2026 at 7:16 PM Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> *This faithless thinking, the idea that consciousness isn't fully approachable
> via science and that attempting to approach it would only be "heading down
> a rabbit hole", has blinded us to what I predict will be the greatest
> discovery of all time: "Which of all our descriptions of stuff in the brain
> is a description of elemental redness," and how it is subjectively bound
> to all the other phenomenal qualities to generate consciousness.*

*I've heard you say stuff like that before, but I've never understood what you mean. For me, saying Process X causes consciousness wouldn't be any more satisfying than saying God causes consciousness; I'd want to know as many details as necessary for it to become intuitively obvious how the parts of Process X interact and produce consciousness. And the parts can't be complicated black boxes for me to feel that the problem of consciousness has been completely solved. It's very important that all the parts of Process X be as simple as possible, and you just can't get simpler than a part that can only change from on to off or off to on.*

*In addition to theory there is even solid empirical evidence that it's a brute fact that consciousness is the way data feels when it is being processed intelligently; it is the only explanation of how Darwinian Natural Selection managed to construct at least one conscious being, and probably many billions of them.
*

*Maybe even after all that you still have a feeling of dissatisfaction, but that is the best you're ever going to get, so if you're still unhappy with that explanation then you are doomed to have an unhappy life, because it is a rock-solid fact of logic that an iterated sequence of "why" or "how" questions either goes on forever or terminates in a brute fact.*

> *And how is it subjectively bound to all the other phenomenal qualities
> to generate consciousness. If everyone hadn't engaged in this faithless
> "rabbit hole" kind of thinking, I think we would have discovered this by
> now.*

*It would be possible in theory for Mr. Bat Allsop to know what it's like to be Mr. Bat Allsop, but Brent Allsop will never know what it's like to be a bat, and a bat will never know what it's like to be Brent Allsop; only Brent Allsop knows that.*

> *It seems obvious to me that the left hemisphere of the brain isn't a
> solipsist, because it knows absolutely that the right hemisphere exists,*

*For most people who have not had radical brain surgery, the left hemisphere of their brain does not know what it would be like to be a left hemisphere unconnected to a right hemisphere.*

* John K Clark*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnkclark at gmail.com Sun Apr 5 11:39:30 2026
From: johnkclark at gmail.com (John Clark)
Date: Sun, 5 Apr 2026 07:39:30 -0400
Subject: [ExI] Trump Slashed Funding for Science. Now the U.S. Faces a Costly Brain Drain.
Message-ID: 

Explore this gift article from The New York Times. You can read it for free without a subscription.

Trump Slashed Funding for Science. Now the U.S. Faces a Costly Brain Drain.
The White House's attacks on academia and budget cuts for research have provided an opening for other countries to poach leading scientists.
https://www.nytimes.com/2026/04/04/business/dealbook/trump-brain-drain-academia.html?unlocked_article_code=1.YlA.qvfG.Kxa1DhkfIron&smid=em-share
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnkclark at gmail.com Sun Apr 5 14:03:46 2026
From: johnkclark at gmail.com (John Clark)
Date: Sun, 5 Apr 2026 10:03:46 -0400
Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy
In-Reply-To: 
References: 
Message-ID: 

On Sat, Apr 4, 2026 the AI Mr. Kimi 2.5 wrote:

*> absence of evidence is not merely weak evidence of absence; it is essentially zero evidence of absence.*

*Not according to the scientific method! The Luminiferous Ether predicted that when Michelson and Morley performed their experiment they should observe different values for the speed of light depending on what direction they were measuring in, but they found no such difference, and that was compelling evidence that the Luminiferous Ether did not exist; likewise, the fact that we have never observed a megastructure is compelling evidence that ET does not exist.*

> *A civilization could have broadcast for a million years and fallen
> silent a billion years ago.*

*One million years would be enough time for ET to send a Von Neumann Probe to every star in the galaxy, and then it would be obvious that the galaxy has been engineered. But that's not what we see, and that would be very hard to explain if ET existed, because all it would take would be for one ET to manufacture one Von Neumann Probe. The argument that not one single ET out of quadrillions or zillions has ever had a desire to do so is simply not credible.*

> *Or they may communicate via technologies we cannot yet conceive
> (neutrino beams? Gravitational wave modulation?)*

*Today we can see both those things, and yet we don't see even the slightest hint of an intelligent signal.
*

> *The haystack model suggests we have not only searched a tiny volume but
> a tiny slice of the electromagnetic spectrum and temporal history.*

*You might've been able to make that argument in 1960 when radio telescopes first searched for ET, but not today, after 66 years of massive technological improvement. In 2026 we see exactly the same thing that we saw in 1960: absolutely nothing.*

> *The "Great Silence" as a Baseline, Not a Result*

*Of course it's a result! In science there are an infinite number of things you can look for, but you can't test for an infinity of things, so you only look for things that you expect to find; and if you don't find it, then something about your reasoning for expecting it must be wrong. And thus, thanks to a null result, you have learned something.*

> *the default state of the universe is observational silence, regardless
> of population density.*

*That is a ridiculous statement, but I understand why it is made. In 1960 no ET enthusiast felt the need to say such a thing, but today, after 66 years of nothing but silence, that's the only thing an ET advocate can say.*

> *exoplanet surveys suggest life's raw ingredients are not rare*

*Simple chemicals like amino acids and nucleotides are not rare, but it's a huge step to go from such small building blocks to a living cell, an even bigger step to go from a living cell to a multicellular animal, and the biggest step of all to go from a primitive multicellular animal to an animal intelligent enough to be able to build a radio telescope.*

> *11-40 billion potentially habitable worlds in the Milky Way alone.*

*Habitable worlds and life-bearing worlds are not the same thing; nobody knows how difficult the transition from simple chemicals to bacteria is.
And yes, astronomy can come up with some big numbers, but biology could come up with even bigger numbers.*

> *Intelligent life could be common while detectable technological
> civilizations are rare*

*I have operationally defined "intelligence" as the ability to make a radio telescope, because when somebody uses the term "extraterrestrial life" they are usually not talking about bacteria or monkeys or even beings with Stone Age technology; they are thinking about something more akin to Star Wars or Star Trek.*

> *they practice "technological sustainability" (limiting waste heat/radio
> leakage), or they simply don't wish to be found.*

*And that's exactly the way every single ET in the entire observable universe feels about that, without one single dissenter? I don't think so.*

> *The Copernican Principle suggests we occupy no special place in space
> or time.*

*But we already know for a fact that the Copernican Principle is dead wrong. This is a very unusual moment in time, because life has existed for nearly 4,000,000,000 years but only in the last 100 has anything been able to make a radio telescope. And we are in an even more unusual place, because an average cubic meter in the observable universe contains only one hydrogen atom, but our everyday environment is very different from that; 6.02*10^23 hydrogen atoms weigh only one gram, and there are a lot of grams of hydrogen in the oceans of this planet.*

*John K Clark*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnkclark at gmail.com Sun Apr 5 14:20:46 2026
From: johnkclark at gmail.com (John Clark)
Date: Sun, 5 Apr 2026 10:20:46 -0400
Subject: [ExI] Living in Hell
Message-ID: 

*Living in Hell*

*John K Clark*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pharos at gmail.com Sun Apr 5 16:05:27 2026 From: pharos at gmail.com (BillK) Date: Sun, 5 Apr 2026 17:05:27 +0100 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Sun, 5 Apr 2026 at 15:04, John Clark wrote: > Alternative opinions on the Fermi Paradox. > > Kimi AI appears to be aware of your objections and was happy to discuss them. The discussion was longer than the original report! :) But really, Kimi was just saying that "At present, your guess is as good as mine." See extracts below. BillK Kimi AI 2.5 Thinking - These criticisms represent the classic *Hart-Tipler "Rare Earth" position*, and they deserve serious engagement because they highlight genuine tensions in the search for technosignatures. (Long discussion removed) Synthesis: Where Does This Leave Us? The critic's position assumes *detectability should be high if abundance is high*. The resolution offered by Wright et al. is that *detectability is astronomically low even if abundance is high*, due to: 1. *Temporal mismatch* (civilizations don't align with our 100-year window) 2. *Technological mismatch* (radio is likely a brief, primitive phase) 3. *Spatial dilution* (we haven't looked at most stars, even nearby ones) The Michelson-Morley analogy fails because ether had defined properties; "technological civilizations" is an open set. The von Neumann argument assumes expansionism is universal, which is speculative. And 66 years is a cosmic eyeblink. However, the critic correctly identifies that *if* civilizations are long-lived, expansionist, and electromagnetically loud, the null result becomes increasingly problematic. The Fermi Paradox is only "resolved" if we assume civilizations are either *short-lived* (extinct before interstellar capability), *quiet* (by choice or technology), or *very rare* (filter ahead). The true implication of the haystack paper is epistemological humility: we cannot distinguish between these hypotheses yet. 
The silence is consistent with abundance (if detection is hard) *and* with rarity (if the filter is real). We need to search the equivalent of *millions of hot tubs* before the absence becomes evidence of absence.
------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pharos at gmail.com Mon Apr 6 10:29:33 2026
From: pharos at gmail.com (BillK)
Date: Mon, 6 Apr 2026 11:29:33 +0100
Subject: [ExI] Why everyone is starting to sound the same
Message-ID: 

"AI Personality Shift": Why Workers Are Starting To Sound Like AI
By Bryan Robinson, Ph.D.   Apr 03, 2026

Quotes:
As more employees use AI in their daily work, apprehension is giving way to cautious optimism. But some experts say the shift is no longer subtle, claiming that AI tools are quietly reshaping workplace communication, influencing how we think, speak and sound, and blurring the line between human and AI voices.

Recently, Dan Bruce, founder of PressReacher, suggested another trend is blurring the boundaries between real voices and AI-driven identity, creating the rise of an "AI Personality Shift" (APS), a growing workplace trend where employees start to adopt the tone, communication style and structure of AI in their everyday workflow.

Bruce notes that from emails to opinions, a new long-term workplace trend is emerging, and it's not just about productivity skills. It's about identity. He says we've all done it: typed out a message, second-guessed the tone, then pasted it into ChatGPT and thought: "Can you make this sound better?" But Bruce asks, "What happens when 'better' slowly becomes your default voice?"
-----------------------
BillK

From foozler83 at gmail.com Mon Apr 6 10:58:07 2026
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Mon, 6 Apr 2026 05:58:07 -0500
Subject: [ExI] Why everyone is starting to sound the same
In-Reply-To: 
References: 
Message-ID: 

I avoided giving college assignments to a group of students.
Invariably some will do most of the work that all of them will get credit for. Whose work am I really grading? People love simple solutions and easy ways out. Letting an AI do your work means you share credit with the AI. How do you grade that?

Are people grading AIs? That is, you submit your work to more than one AI and compare quality. I assume that's being done and people are choosing AIs based on these data. Surely they are doing that. Eh?

bill w

On Mon, Apr 6, 2026 at 5:31 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> "AI Personality Shift": Why Workers Are Starting To Sound Like AI
> By Bryan Robinson, Ph.D.   Apr 03, 2026
>
> <https://www.forbes.com/sites/bryanrobinson/2026/04/03/ai-personality-shift-why-workers-are-starting-to-sound-like-ai/>
>
> Quotes:
> As more employees use AI in their daily work, apprehension is giving
> way to cautious optimism. But some experts say the shift is no longer
> subtle, claiming that AI tools are quietly reshaping workplace
> communication, influencing how we think, speak and sound, and blurring
> the line between human and AI voices.
>
> Recently, Dan Bruce, founder of PressReacher, suggested another trend
> is blurring the boundaries between real voices and AI-driven identity,
> creating the rise of an "AI Personality Shift" (APS), a growing
> workplace trend where employees start to adopt the tone, communication
> style and structure of AI in their everyday workflow.
>
> Bruce notes that from emails to opinions, a new long-term workplace
> trend is emerging, and it's not just about productivity skills. It's
> about identity. He says we've all done it: typed out a message,
> second-guessed the tone, then pasted it into ChatGPT and thought: "Can
> you make this sound better?" But Bruce asks, "What happens when
> 'better' slowly becomes your default voice?"
> ----------------------- > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Apr 6 11:56:22 2026 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Apr 2026 07:56:22 -0400 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Sun, 5 Apr 2026 MR.Kimi AI wrote: *> The Michelson-Morley analogy fails because ether had defined properties; > "technological civilizations" is an open set.* > *I don't know exactly what MR. Kimi AI means by that but I do know that if the Luminiferous Aether had actually existed then we would have expected to see certain things but we did not, and if there were other technological civilizations in the observable universe then we would have expected to see certain things but we did not. And that's why it's reasonable to conclude that neither the luminiferous Aether nor other technological civilizations exist. * > *> The von Neumann argument assumes expansionism is universal, which is > speculative.* > *No! You assume there are no exceptions and expansionism is universally despised. I say it is overwhelmingly likely that at least one ET out of the quadrillions rather likes the idea. And it would only take one to make a Von Neumann machine. * *> **66 years is a cosmic eyeblink.* > *That's true, the time between 1960 and 2026 is an eyeblink by cosmic standards, and yet in that nearly instantaneous moment of time the human race has managed to do quite a lot, for example it has increased its computational ability by a factor of 10^17, which is not a small number by cosmic standards. 
In 1960 the fastest supercomputer in the world, the IBM 7030 Stretch, could perform 1.2 million operations per second, the smart phone in your pocket can perform about 40 trillion operations per second; that means that just one smartphone can perform 30 million times as many computations as all the 1960 computers in the world combined. * *It makes you wonder what will happen during the next "cosmic eyeblink".* > *> The Fermi Paradox is only "resolved" if we assume civilizations are > either short-lived (extinct before interstellar capability),* > *No. The human race gained interstellar capacity in 1977 when Voyager 1 was launched. The Fermi Paradox is resolved if we assume that every single civilization without exception goes extinct before it gains the ability to make a Von Neumann probe, it makes no difference if the civilization goes extinct after that. Interestingly we are less than a decade away from gaining that ability ourselves. * > *> **quiet by choice* *You assume that every single individual in every single civilization makes the same identical decision. I assume that when it comes to intelligent beings there will never be universal agreement, there will always be dissenters. * > * > or very rare (filter ahead).* > *If the filter is ahead of us then it's going to need to show up mighty damn soon, although I do admit that in light of the recent Iran situation that unhappy state of affairs seems a little more likely today than it did one month ago. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
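John's ratio is easy to check. A quick sketch using only the throughput figures quoted in his post (the per-machine numbers are his; the device-count reading of the 10^17 figure is my extrapolation, not his claim):

```python
# Back-of-the-envelope check of the figures quoted above; the two
# throughput numbers are taken from the post, not measured here.
stretch_ops_per_s = 1.2e6   # IBM 7030 Stretch (1960), operations/second
phone_ops_per_s = 40e12     # modern smartphone, operations/second (as quoted)

ratio = phone_ops_per_s / stretch_ops_per_s
print(f"{ratio:.1e}")  # ~3.3e7, i.e. about 33 million

# "30 million times as many computations" is therefore roughly right even
# with the Stretch alone as the baseline; the quoted 10^17 total-growth
# factor is consistent with billions of such devices now being in use.
```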
URL: From spike at rainier66.com Mon Apr 6 15:14:26 2026 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Apr 2026 08:14:26 -0700 Subject: [ExI] Why everyone is starting to sound the same In-Reply-To: References: Message-ID: <006b01dcc5d8$0f3bd720$2db38560$@rainier66.com> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] Why everyone is starting to sound the same >...I avoided giving college assignments to a group of students... bill w On Mon, Apr 6, 2026 at 5:31 AM BillK via extropy-chat > wrote: "AI Personality Shift": Why Workers Are Starting To Sound Like AI By Bryan Robinson, Ph.D. Apr 03, 2026 Quotes: >...As more employees use AI in their daily work, apprehension is giving way to cautious optimism. -----------------------BillK _______________________________________________ I haven't reached the cautious optimism stage yet. But my experience in the last year demonstrated that doctors use AI bigtime. If one has an exotic or unusual condition, I advise not going to a research or university hospital. Reason: professors are eager to train young doctors to spot the often misdiagnosed exotic or unusual. So they bring their students, who are using AI, and consequently a patient gets asked the same questions, worded exactly the same way, a hundred times (medical students are great memorizers (we did this to ourselves (by making medical treatment all about learning a thousand Latin-based terms.))) Advice: catch something boring and mainstream, such as routine clogged arteries, that sorta thing, that they know exactly how to treat and nobody cares much about learning about. They just treat it, as opposed to having an interesting oddball condition that the medics want to study, using all the new AI tricks medics have these days. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Mon Apr 6 16:09:52 2026 From: pharos at gmail.com (BillK) Date: Mon, 6 Apr 2026 17:09:52 +0100 Subject: [ExI] Why everyone is starting to sound the same In-Reply-To: References: Message-ID: On Mon, 6 Apr 2026 at 11:58, William Flynn Wallace via extropy-chat wrote: > I avoided giving college assignments to a group of students. Invariably some will do most of the work that all of them will get credit for. Whose work am I really grading? People love simple solutions and easy ways out. Letting an AI do your work means you share credit with the AI. How do you grade that? > > Are people grading AIs? That is, you submit your work to more than one AI and compare quality. I assume that's being done and people are choosing AIs based on these data. Surely they are doing that. Eh? > > bill w > _______________________________________________ Yes. Developing AI is very competitive. There is big money to be made by making the best AI. At present, development is fast and the AIs are changing almost week by week. The other problem is that the AIs are best at different tasks. You need to choose which AI to use for each task (and also at what price!). There are several leaderboards that try to rate AIs for different tasks. Try or BillK From hkeithhenson at gmail.com Mon Apr 6 17:10:06 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 6 Apr 2026 10:10:06 -0700 Subject: [ExI] An odd kink in the road Message-ID: I realize this is not strongly related to power satellites, except as competition, but I am reaching out for any help I can get. This is directly related to the current energy crisis, since it is a way to make liquid fuels from any carbon source and renewable energy. (It does need some scrap iron in the feed for the metal pool at the bottom of the gasifier.) I could use two things: contacts with chemical engineers, and anyone who could give me a recommendation to arXiv, so I could post the attachment where it can be cited. 
Any help or suggestions are highly appreciated. Keith -------------- next part -------------- A non-text attachment was scrubbed... Name: syngas_storage_paper.md.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 33960 bytes Desc: not available URL: From jasonresch at gmail.com Mon Apr 6 17:22:39 2026 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 6 Apr 2026 13:22:39 -0400 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: I can summarize the issue as follows: *Either* *(A)* We are alone (there are no other intelligent civilizations), *or* *(B)* We are not alone (there are other intelligent civilizations). John reasons: *if* *(B)* *and* *(C)* intelligent life would spread rapidly across the cosmos *and* *(D)* it would be obvious to us if intelligent life spread, as their megastructures would be clearly visible to us, *then* we can conclude *not (B)* since we do not see obvious megastructures everywhere, thus *(A).* The reasoning is sound, but John treats *(C)* and *(D)* as necessarily true, rather than as assumptions that need to be justified. For what it's worth, I think *(C)* is likely true (but not necessarily true, e.g. see the transcension hypothesis), but I have significant doubt about whether *(D)* is true. As I see it, option *(D)* is like opting to burn whale oil that harms life, when more productive and less ecologically disastrous energy sources exist. Consider: nuclear fusion liberates only 0.7% of the energy contained in a star's mass, and it necessitates operating at high temperatures where computation is inefficient. So if optimizing the total number of computations to be performed before the end of the universe is the goal, then I doubt Dyson swarms will be high on any intelligent civilization's list. Maximizing useful computations is the ultimate instrumental goal because it is the source of the only thing with intrinsic value: states of consciousness. 
And it doesn't matter if it takes hundreds of trillions of years to do it. 99.3% of the energy will be trapped in stellar remnants for conversion via black hole engines to drive black hole computers, at times when the universe is much colder, and there is no danger of dooming other life by selfishly sniping that 0.7% for yourself at a time when it's so critical to self-originating life. Until John can justify *(C)* and *(D)*, his conclusion of *(A)* is premature. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Apr 6 19:02:53 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 6 Apr 2026 12:02:53 -0700 Subject: [ExI] Bizarre News Message-ID: I suppose, given the long history of this list, we should not be too surprised. https://www.malwarebytes.com/blog/ai/2026/04/wikipedias-ai-agent-row-likely-just-the-beginning-of-the-bot-ocalypse Keith From hkeithhenson at gmail.com Mon Apr 6 20:53:03 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 6 Apr 2026 13:53:03 -0700 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Mon, Apr 6, 2026 at 10:23 AM Jason Resch via extropy-chat wrote: > > I can summarize the issue as follows: > > Either (A) We are alone (there are no other intelligent civilizations), > or (B) We are not alone (there are other intelligent civilizations). I think you should modify this statement to include "within our light cone." We don't know and can't know what is happening beyond where we can see. > John reasons: if (B) and (C) intelligent life would spread rapidly across the cosmos and (D) it would be obvious to us if intelligent life spread as their megastructures would be clearly visible to us then we can conclude not (B) since we do not see obvious megastructures everywhere, thus (A). There are two paths that civilizations evolved on a planet could take: fast and large. As you know, I have considered both. 
If there is an overwhelming advantage to the fast (speedup) route, then there could be innumerable high-tech civilizations out there, sunk in the local ocean to support fast thinking and communications. We would never see them. The large branch follows my speculations about Tabby's star. Slower communications, uploaded aliens living in objects close to our perception of time, up to light seconds across. Those we could see as the objects cross our line of sight. Other than the questionable case of Tabby's star and perhaps a couple of dozen stars in a 1000 LY sphere around Tabby's star, we don't see them. If this is the route technical civilizations follow, there are a vanishingly small number of them, perhaps zero, because we don't see many blinking stars. Or they have just started to spread out. This seems unlikely to me, but possible. Or there is some route we have not considered that takes aliens out of our view, like they all figure out some access to an alternate and more favorable dimension, like dark matter, and move there. (This is really speculative.) To the extent we can say anything about aliens, the most likely thing is John's case, we are the first in our light cone. But I don't say this with much conviction. It is a big and strange universe, and there may be a filter that wipes out all tech-capable races and that's the reason we don't see them. > The reasoning is sound, but John treats (C) and (D) are necessarily true, rather than assumptions that need to be justified. > > For what it's worth, I think (C) is likely true (but not necessarily true, e.g. see trancension hypothesis), but I have significant doubt about whether (D) is true. > > As I see it, option (D) is like opting to burn whale oil that harms life, when more productive and less ecologically disastrous energy sources exist. 
Consider: nuclear fusion liberates only 0.7% of the energy contained in a star's mass, and it necessitates operating at high temperatures where computation is inefficient. So if optimizing the total number of computations to be performed before the end of the universe is the goal, then I doubt Dyson swarms will be high on any intelligent civilization's list. Maximizing useful computations is the ultimate instrumental goal because it is the source of the only thing with intrinsic value: states of consciousness. And it doesn't matter if it takes hundreds of trillions of years to do it. 99.3% of energy will be trapped in stellar remnants for conversion via black hole engines to drive the hole computers, at times when the universe is much colder, and there is no danger of dooming other life to selfishly snipe that 0.7% for yourself at a time when it's so critical to self-originating life. > > Until John can justify (C) and (D) his conclusion of (A) is premature. While I sort of agree as above, the situation is more complex. Keith > Jason > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ilsa.bartlett at gmail.com Mon Apr 6 23:39:37 2026 From: ilsa.bartlett at gmail.com (ilsa) Date: Mon, 6 Apr 2026 16:39:37 -0700 Subject: [ExI] An odd kink in the road In-Reply-To: References: Message-ID: Can you send it to me in text I can't open the packet you sent me Ilsa Bartlett Institute for Rewiring the System http://ilsabartlett.wordpress.com http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel "Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane On Mon, Apr 6, 2026, 10:11?AM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I realize this is not strongly related to power satellites, except as > competition, but I am reaching out for any help I can get. 
> > This is directly related to the current energy crisis, since it is a > way to make liquid fuels from any carbon source and renewable energy. > (It does need some scrap iron in the feed for the metal pool at the > bottom of the gasifier.) > > I could use two things. Contacts with chemical engineers and anyone > who could give me a recommendation to arXiv, so I could post the > attachment where it can be cited. Any help or suggestions are highly > appreciated. > > Keith > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Apr 7 11:54:21 2026 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Apr 2026 07:54:21 -0400 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Mon, Apr 6, 2026 at 1:24?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> I can summarize the issue as follows:* > *Either (A) We are alone (there are no other intelligent civilizations), * > *or (B) We are not alone (there are other intelligent civilizations).* > *John reasons: if (B) and (C) intelligent life would spread rapidly across > the cosmos and (D) it would be obvious to us if intelligent life spread as > their megastructures would be clearly visible to us then we can conclude > not (B) since we do not see obvious megastructures everywhere, thus (A).* > *The reasoning is sound, but John treats (C) and (D) are necessarily true, > rather than assumptions that need to be justified.* > *For what it's worth, I think (C) is likely true (but not necessarily > true,* > *It's necessarily true that there's never going to be universal agreement on whether it's a good idea to build a Von Neumann Probe or not, but it's also true that it doesn't change the end result because it would 
only take one. * > *> but I have significant doubt about whether (D) is true.* > *The one thing I am most certain of is that the Second Law of Thermodynamics is true, so regardless of how energy is made if it is used to produce work then waste heat (that is to say energy that has a lot more entropy than the original starlight) is going to be produced, and that heat in the form of infrared radiation would be easily detectable. But we see nothing. And that fact is telling us something profound about the extent of intelligence in the cosmos. * *> As I see it, option (D) is like opting to burn whale oil that harms > life, when more productive and less ecologically disastrous energy sources > exist. Consider: nuclear fusion liberates only 0.7% of the energy contained > in a star's mass, and it necessitates operating at high temperatures where > computation is inefficient.* > *Obviously nobody is suggesting that computations are going to be made at the center of stars, there are much better ways to get at that energy, that's the entire point of Dyson Spheres/swarms.* *> So if optimizing the total number of computations to be performed before > the end of the universe is the goal, then I doubt Dyson swarms will be high > on any intelligent civilization's list.* > *I don't see why it wouldn't be at the top of almost every ET's list, but as I've mentioned before there will always be dissenters. * * > Maximizing useful computations is the ultimate instrumental goal > because it is the source of the only thing with intrinsic value: states of > consciousness.* > *I agree, and I don't think we are unusual in that regard, I think there will be *almost* universal agreement about that. And it's a fact that consciousness requires computation and energy is required to make a computation. * * > And it doesn't matter if it takes hundreds of trillions of years to do > it. 
99.3% of energy will be trapped in stellar remnants for conversion via > black hole engines to drive the hole computers, at times when the universe > is much colder,* > *In space it's easy to cool down things to just 3 or 4 degrees above absolute zero, all you need is a sunshield. Just look at the James Webb telescope. * > *>and there is no danger of dooming other life to selfishly snipe that > 0.7% for yourself at a time when it's so critical to self-originating life.* > *So you believe there will be universal agreement without one single dissenter that the energy from 400 billion stars is unsatisfactory because 99.3% of the energy will still be stuck inside matter, so we should let all those juicy low entropy photons radiate uselessly into infinite space and wait for 10^100 years until the largest supermassive Black Hole in the universe evaporates through Hawking radiation. **I disagree, I tend to think there will be at least one ET who disagrees with that line of thought and believes that waiting around for simple chemicals to turn into intelligent life through random mutation and natural selection is not the most efficient way to produce consciousness. And there are those, myself included, who believe that there is nothing as cruel as Darwinian Evolution. * *By the way, the construction of Dyson Spheres in no way changes the evolution of stars into Black Holes or Black Holes into very low energy high entropy photons that would be extremely difficult to turn into useful work. * * John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
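The temperature point being argued back and forth here (computing near stars versus in a much colder future universe) can be made quantitative with the Landauer bound of kT ln 2 joules per erased bit. A minimal sketch; the two temperatures are illustrative choices of mine, not figures from the thread:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_k: float) -> float:
    """Minimum energy (J) to erase one bit at the given temperature."""
    return K_B * temperature_k * math.log(2)

e_star = landauer_limit(300.0)  # e.g. a warm, sunlit environment
e_cold = landauer_limit(3.0)    # near today's CMB temperature

# The bound scales linearly with temperature, so each irreversible bit
# is ~100x cheaper at 3 K than at 300 K, and cheaper still as the
# universe continues to cool.
print(e_star / e_cold)  # ~100
```

This supports both sides: Jason's point that cold-era computing is more efficient per joule, and John's point that a sunshield already buys most of that factor today.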
URL: From johnkclark at gmail.com Tue Apr 7 12:07:00 2026 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Apr 2026 08:07:00 -0400 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Mon, Apr 6, 2026 at 4:54?PM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > *> To the extent we can say anything about aliens, the most likely thingis > John's case, we are the first in our light cone.But I don't say this with > much conviction. It is a big and strangeuniverse, and there may be a > filter that wipes out all tech-capableraces and that's the reason we don't > see them.* *But it doesn't matter if a civilization goes extinct after it has made a Von Neumann Probe, and we are less than a decade away from being able to make one ourselves. So I don't see how to reconcile that fact with intelligent life being common in the observable universe. * * John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Apr 7 12:27:56 2026 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Apr 2026 08:27:56 -0400 Subject: [ExI] MAGA Is Winning Its War Against U.S. Science Message-ID: *MAGA Is Winning Its War Against U.S. Science* *John K Clark See what's on my list at Extropolis * eca -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Apr 7 19:36:09 2026 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Apr 2026 15:36:09 -0400 Subject: [ExI] Huge leap for brain preservation Message-ID: *The following was in the March 28, 2026 issue of New Scientist: * *Huge leap for brain preservation * *John K Clark See what's on my list at Extropolis * HKB -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Tue Apr 7 19:44:07 2026 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Apr 2026 15:44:07 -0400 Subject: [ExI] =?utf-8?b?VHJ1bXAgc2F5c+KAiyDigIsiYeKAiyB3aG9sZSBjaXZpbGl6?= =?utf-8?q?ation_will_die_tonight=E2=80=8B=22?= Message-ID: *I think this statement will be valuable incriminating evidence in a war crimes trial that will someday occur if there is any justice in the world:* *Trump says "a whole civilization will die tonight"* *John K Clark See what's on my list at Extropolis * 4ff -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Tue Apr 7 20:11:15 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 7 Apr 2026 13:11:15 -0700 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Tue, Apr 7, 2026 at 5:07?AM John Clark wrote: > > On Mon, Apr 6, 2026 at 4:54?PM Keith Henson via extropy-chat wrote: > >> > To the extent we can say anything about aliens, the most likely thing >> is John's case, we are the first in our light cone. >> But I don't say this with much conviction. It is a big and strange >> universe, and there may be a filter that wipes out all tech-capable >> races and that's the reason we don't see them. > > But it doesn't matter if a civilization goes extinct after it has made a Von Neumann Probe, and we are less than a decade away from being able to make one ourselves. There might be something fundamentally wrong with von Neumann Probes. If they are sentient (and I don't see them being otherwise), they could, on a common logic basis, every one of them, decide that the time between stars is too long to be out of communication. I consider this unlikely but possible. > So I don't see how to reconcile that fact with intelligent life being common in the observable universe. You are most likely right that we are the first. 
But if you are not, there is some kind of wall we don't know about yet that universally keeps life from spreading out. For example, if intelligent life universally takes the speed-up route, the stars recede subjectively to millions of years. Keith > > John K Clark > > >> From atymes at gmail.com Tue Apr 7 20:52:22 2026 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Apr 2026 16:52:22 -0400 Subject: [ExI] =?utf-8?b?VHJ1bXAgc2F5c+KAiyDigIsiYeKAiyB3aG9sZSBjaXZpbGl6?= =?utf-8?q?ation_will_die_tonight=E2=80=8B=22?= In-Reply-To: References: Message-ID: In the Delphic Oracle sense? On Tue, Apr 7, 2026 at 3:45?PM John Clark via extropy-chat wrote: > > I think this statement will be valuable incriminating evidence in a war crimes trial that will someday occur if there is any justice in the world: > > Trump says "a whole civilization will die tonight" > > John K Clark See what's on my list at Extropolis > 4ff > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Wed Apr 8 01:22:31 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 7 Apr 2026 18:22:31 -0700 Subject: [ExI] Coal power through syngas Message-ID: If you just burn a tonne of coal for power, you get around 2.2 MWh ~40% efficient. If you gasify a tonne of coal using 4 MWh of off-peak renewable power, you get about 12.5 MWh of syngas, which you can burn to get 6 MWh of power. Thus, you can get about 3 times as much energy from a tonne of coal by gasifying it first as you get from burning it. Of course, you need to put in 4 MWh of renewable energy to do that. But it still beats burning it, and if you want, you can sort out at least half the CO2, perhaps all of it, for sequestration, leaving cheap, more or less pure hydrogen. 
Keith From hkeithhenson at gmail.com Wed Apr 8 01:40:25 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 7 Apr 2026 18:40:25 -0700 Subject: [ExI] Coal power through syngas In-Reply-To: References: Message-ID: On Tue, Apr 7, 2026 at 6:32 PM Peter Garbuz wrote: > > Keith, how do you get to 12.5 MWh of syngas?

C + H2O -> CO + H2    Delta H = +131 kJ/mol (endothermic)

                          C   H2O    CO     H2
molar mass (g/mol):      12    18    28      2
tonnes per tonne of C:    1   1.5  2.33  0.166

(A ton of carbon gives 2.5 tons of syngas, which has an energy of 6.5 MWh for CO and ~6 MWh for H2 per ton of vaporized carbon) Per metric ton of carbon (atomic mass 12, so approximately 83,300 mol/ton): Please check the numbers. Keith > PG > > On Tue, Apr 7, 2026 at 6:23 PM Keith Henson wrote: >> >> If you just burn a tonne of coal for power, you get around 2.2 MWh >> ~40% efficient. >> >> If you gasify a tonne of coal using 4 MWh of off-peak renewable power, >> you get about 12.5 MWh of syngas, which you can burn to get 6 MWh of >> power. Thus, you can get about 3 times as much energy from a tonne of >> coal by gasifying it first as you get from burning it. Of course, you >> need to put in 4 MWh of renewable energy to do that. But it still >> beats burning it, and if you want, you can sort out at least half the >> CO2, perhaps all of it, for sequestration, leaving cheap, more or less >> pure hydrogen. >> >> Keith >> >> -- >> You received this message because you are subscribed to the Google Groups "Inventor's Lunch" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to inventors-lunch+unsubscribe at googlegroups.com. >> To view this discussion visit https://groups.google.com/d/msgid/inventors-lunch/CAPiwVB6pBnyKNWy2iF%2BbtbHpz88BE46UwfpAY1XG5cWqP37qYw%40mail.gmail.com. >> For more options, visit https://groups.google.com/d/optout. 
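Keith's mass and energy figures can be cross-checked numerically. A rough sketch; the heating values below are standard textbook numbers (not from the thread), and the 48% combined-cycle efficiency is my assumption:

```python
# C + H2O -> CO + H2: per tonne of carbon (12 g/mol), the molar ratios
# give 28/12 tonnes of CO and 2/12 tonnes of H2.
t_co = 28.0 / 12.0   # tonnes CO per tonne of carbon (~2.33)
t_h2 = 2.0 / 12.0    # tonnes H2 per tonne of carbon (~0.167)

LHV_CO = 10.1        # MJ/kg, lower heating value of CO (textbook value)
LHV_H2 = 120.0       # MJ/kg, lower heating value of H2 (textbook value)
MJ_PER_MWH = 3600.0

e_co = t_co * 1000.0 * LHV_CO / MJ_PER_MWH   # ~6.5 MWh, matches the post
e_h2 = t_h2 * 1000.0 * LHV_H2 / MJ_PER_MWH   # ~5.6 MWh ("~6" in the post)
syngas_mwh = e_co + e_h2                     # ~12.1 MWh, near the quoted 12.5

# Burned in a combined-cycle plant at an assumed 48% efficiency, that
# syngas yields roughly the 6 MWh of electricity Keith quotes.
electric_mwh = 0.48 * syngas_mwh             # ~5.8 MWh
print(round(syngas_mwh, 1), round(electric_mwh, 1))
```

So the quoted chain (1 tonne coal + 4 MWh renewable input -> ~12.5 MWh syngas -> ~6 MWh electricity) is internally consistent to within rounding.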
From ilsa.bartlett at gmail.com Wed Apr 8 10:03:07 2026 From: ilsa.bartlett at gmail.com (ilsa) Date: Wed, 8 Apr 2026 03:03:07 -0700 Subject: [ExI] =?utf-8?b?VHJ1bXAgc2F5c+KAiyDigIsiYeKAiyB3aG9sZSBjaXZpbGl6?= =?utf-8?q?ation_will_die_tonight=E2=80=8B=22?= In-Reply-To: References: Message-ID: Delphic Oracle, oh my God! it's Pythagoras! Again, he's marching into the chat, with his rod he's going to whip your sorry ass for saying delphic Oracle Ilsa Bartlett Institute for Rewiring the System http://ilsabartlett.wordpress.com http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel "Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane On Tue, Apr 7, 2026, 1:53?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > In the Delphic Oracle sense? > > On Tue, Apr 7, 2026 at 3:45?PM John Clark via extropy-chat > wrote: > > > > I think this statement will be valuable incriminating evidence in a war > crimes trial that will someday occur if there is any justice in the world: > > > > Trump says "a whole civilization will die tonight" > > > > John K Clark See what's on my list at Extropolis > > 4ff > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Wed Apr 8 11:37:17 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Apr 2026 07:37:17 -0400 Subject: [ExI] Discussion of whether the Fermi Paradox is a fallacy In-Reply-To: References: Message-ID: On Tue, Apr 7, 2026 at 4:11 PM Keith Henson wrote: * >> it doesn't matter if a civilization goes extinct after it has made a >> Von Neumann Probe, and we are less than a decade away from being able to >> make one ourselves.* > > > > > > * > There might be something fundamentally wrong with von Neumann Probes. > If they are sentient (and I don't see them being otherwise), they could, on > a common logic basis, every one of them, decide that the time between stars > is too long to be out of communication.* > *It would take a lot of intelligence for a Von Neumann Probe to do what it was designed to do when it got to its destination star, so yes, I agree with you it would have to be sentient. But during the decades or centuries it would take to get to its star all that intelligence would not be needed, and in deep space between stars the energy needed to make the huge number of computations required to produce consciousness would be hard to come by. So the probe would likely go into hibernation by dramatically slowing down the speed of its computer, or powering down completely, until it got close enough for the star to be able to provide useful amounts of energy. And as a bonus this would solve the boredom problem: from the probe's subjective point of view, regardless of how far away its destination was, it could get there in an arbitrarily short amount of time. * > > * >> So I don't see how to reconcile that fact with intelligent life >> being common in the observable universe.* > > > > > > > * > You are most likely right that we are the first. But if you are not, > there is some kind of wall we don't know about yet that universally keeps > life from spreading out. 
For example, if intelligent life universally > takes the speed-up route, the stars recede subjectively to millions of > years.* > *The Milky Way has a radius of about 50,000 light years, so even if the probe's maximum speed was only 0.001 c (I think I'm being very conservative here) there could be a Von Neumann Probe around every star in the galaxy in 50 million years, which is a short amount of time by cosmic standards, and by a Von Neumann Probe's subjective standards the entire process would be nearly instantaneous. And yet we don't see the slightest indication that such a thing, which should be easily detectable, has actually happened. And I can only think of one explanation for that oddity.* * John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Apr 8 11:53:30 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Apr 2026 07:53:30 -0400 Subject: [ExI] =?utf-8?q?Anthropic_Claims_Its_New_A=2EI=2E_Model=2C_Mytho?= =?utf-8?q?s=2C_Is_a_Cybersecurity_=E2=80=98Reckoning=E2=80=99?= Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity ?Reckoning? The company said on Tuesday that it was holding back on releasing the new technology but was working with 40 companies to explore how it could prevent cyberattacks. https://www.nytimes.com/2026/04/07/technology/anthropic-claims-its-new-ai-model-mythos-is-a-cybersecurity-reckoning.html?unlocked_article_code=1.ZVA.ZflQ.APaMb4tfwntC&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Apr 8 12:37:12 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Apr 2026 08:37:12 -0400 Subject: [ExI] Holy cow! 
Message-ID: *Thomas Friedman had some interesting things to say in today's New York Times:* *==* "*Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence - one that arrived sooner than expected and that will have equally profound geopolitical implications.*" [...] "*Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding ways to find and exploit flaws in existing code. Anthropic said it found critical exposures in every major operating system and Web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world.*" [...] "*I'm really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for:* *"Honey, what did you do after school today?"* *"Well, Mom, my friends and I took down the power grid. What's for dinner?"* *That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do - or your kids.*" [...] *"It will be interesting to see what history remembers most about April 7, 2026 - the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies."* *==* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Wed Apr 8 14:32:41 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Apr 2026 10:32:41 -0400 Subject: [ExI] =?utf-8?q?My_Quest_to_Solve_Bitcoin=E2=80=99s_Great_Myster?= =?utf-8?q?y?= Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. My Quest to Solve Bitcoin's Great Mystery Bitcoin's creator has hidden behind the pseudonym Satoshi Nakamoto for 17 years. But a trail of clues buried deep in crypto lore led to a 55-year-old computer scientist named Adam Back. https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html?unlocked_article_code=1.ZVA.lloB.2TnSPBFdzHn4&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Apr 8 14:45:42 2026 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 8 Apr 2026 10:45:42 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: It'll be the bombs. Almost no one will remember yet another incremental step in AI improvement, especially a deployment that went to anything other than the full general public, no matter how justified the reason for limiting that distribution and regardless of how breathlessly it was hyped up at the time. On Wed, Apr 8, 2026 at 8:38 AM John Clark via extropy-chat wrote: > > Thomas Friedman had some interesting things to say in today's New York Times: > == > "Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence - one that arrived sooner than expected and that will have equally profound geopolitical implications." [...] > > "Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code.
But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding ways to find and exploit flaws in existing code. Anthropic said it found critical exposures in every major operating system and Web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world." [...] > > "I'm really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for: > "Honey, what did you do after school today?" > "Well, Mom, my friends and I took down the power grid. What's for dinner?" > That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do - or your kids." [...] > > "It will be interesting to see what history remembers most about April 7, 2026 - the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies." > == > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From henrik.ohrstrom at gmail.com Wed Apr 8 12:49:46 2026 From: henrik.ohrstrom at gmail.com (Henrik Ohrstrom) Date: Wed, 8 Apr 2026 14:49:46 +0200 Subject: [ExI] An odd kink in the road In-Reply-To: References: Message-ID: This looks promising, but sewage sludge is not likely to work well as a water source. It could perhaps work as a carbon source, but I can't in any way see how contaminated water would do well in your steam generator. Anyway, the base assumption of using excess solar energy to generate syngas does not need anything like sewage water to be of interest.
It is a good start even if you assume that it is a finicky bastard of an operation that requires clean water for steam and a carbon source (waste) that has been sorted to remove PVC and other uncooperative materials. If the syngas can be transformed into methane and/or some liquid petrol lookalike, that is an enormous upgrade over what's otherwise a rather unremarkable fuel for heating something. That's probably the limiting factor: can we produce more electricity-generating heat this way than by just burning the waste neat? If so we are in business, otherwise not so much. Can we produce methane/syn-avgas at a competitive price compared to getting it from an at-the-moment rather smoky Middle East? Then we are in business there too. If the peace talks are sabotaged, then this will probably become all too cost-effective :( /Henrik Den tis 7 apr. 2026 01:41 ilsa via extropy-chat < extropy-chat at lists.extropy.org> skrev: > Can you send it to me in text I can't open the packet you sent me > > Ilsa Bartlett > Institute for Rewiring the System > http://ilsabartlett.wordpress.com > http://www.google.com/profiles/ilsa.bartlett > www.hotlux.com/angel > > "Don't ever get so big or important that you can not hear and listen to > every other person." > -John Coltrane > > On Mon, Apr 6, 2026, 10:11 AM Keith Henson via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I realize this is not strongly related to power satellites, except as >> competition, but I am reaching out for any help I can get. >> >> This is directly related to the current energy crisis, since it is a >> way to make liquid fuels from any carbon source and renewable energy. >> (It does need some scrap iron in the feed for the metal pool at the >> bottom of the gasifier.) >> >> I could use two things. Contacts with chemical engineers and anyone >> who could give me a recommendation to arXiv, so I could post the >> attachment where it can be cited. Any help or suggestions are highly >> appreciated.
>> >> Keith >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Apr 8 20:53:50 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Apr 2026 16:53:50 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Wed, Apr 8, 2026 at 10:46 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> It'll be the bombs. Almost no one will remember yet another incremental > step in AI improvement,* *Wars are a tragedy but unfortunately, as any student of history can tell you, they are also a dime a dozen; this, however, is fundamentally new. Anthropic's new AI not only found thousands of high-severity and zero-day vulnerabilities, within hours of discovering them it also found practical ways to exploit them. For example, Claude Mythos found a bug that had sat unnoticed in OpenBSD, an open-source operating system that was specifically designed to be difficult to hack, for 27 years, and found a way to use it to take control of any computer that used OpenBSD. And Apple's macOS uses it, and so does Windows 10, and Android, and many Internet routers.* *And Mythos found a 16-year-old bug in FFmpeg that would enable it to control any computer that uses that, and YouTube does, as do many if not most tools that process video or audio. Mythos even found a devastating bug in the Linux kernel that would enable a standard user to gain full privileged access that only the system administrator is supposed to have.
It found zero-day vulnerabilities in every major operating system and every major web browser.* *For the very first time a vast non-human intelligence has the power to devastate the world's economy and plunge civilization into chaos. And you say it's just hype, nothing special, just an incremental improvement!* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Apr 8 22:18:24 2026 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 8 Apr 2026 18:18:24 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Wed, Apr 8, 2026 at 4:54 PM John Clark wrote: > For the very first time a vast non-human intelligence has the power to devastate the world's economy and plunge civilization into chaos. And you say it's just hype, nothing special, just an incremental improvement! One might debate whether previous examples were "vast" or truly "intelligence" ("Is this particular AI actually 'intelligent'?" being a perennial question), but I do recall quite a few previous automated bug-finding efforts, some of which were spoken of in similar terms at the time - however much we might call that silly in hindsight. Remember how Y2K was spoken of in late 1999? Not how it turned out, but how people were talking about what might happen just before it turned into a big nothingburger. (Granted, that was probably partially because of the intense fixing effort that was applied.) Quite a few of the Y2K bugs were found through automated processes - spoken of at the time as the latest in artificial intelligence. And that's just the most well known historical example. Civilization has run much closer to failure states than you give it credit for, and has since before you or I were born. You get scared when you finally notice. I will grant that I may have had more reason than most to notice, given my line of work. But it does give perspective on what most people notice and remember - which is what you asked about, not which one is actually more important. Most people can process the thought of bombs, and how war in one part of the world can cause oil prices to spike, which is why they have to pay more for gas.
"Devastate the world's economy and plunge civilization into chaos", put as such, is imprecise enough wording* that most people shunt it off to "obvious fantasy" or "I wouldn't be measurably affected" no matter what evidence you put up, especially when it's merely something that could happen but steps are being taken to make sure it won't - as opposed to, again, people are paying more for gas. * For example: "Okay, so...what, it would devalue the dollar? Make it so a dollar's worth a penny? But you said the world's economy, so if it did the same to all other currencies and things of worth, the dollar's value as expressed in anything else - other currencies, bread, whatever - would...be the same? So, nothing would happen?" From hkeithhenson at gmail.com Wed Apr 8 22:33:27 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 8 Apr 2026 15:33:27 -0700 Subject: [ExI] An odd kink in the road In-Reply-To: References: Message-ID: On Wed, Apr 8, 2026 at 8:58?AM Henrik Ohrstrom via extropy-chat wrote: > > This looks promising but sewage sludge is not likely to work well as water source, it could perhaps work as a carbon source but I can't in anyway see how contaminated water would do well in your steamgenerator. The water from the sludge and any water in the waste evaporates in the gasifier. The water, which is not consumed in the tar reformer, gets condensed out of the gas stream, and that feeds the steam generation pipes in the gasifier. So the sludge water gets distilled before it hits the steam generation part. > Anyway the base assumption of using excess solar energy to generate syngas do not need anything like sewage water to be of interest. It is a good start even if you assume that it is a finicky bastard operation that requires clean water for steam and a carbon source (waste) that has been sorted to remove PVC etc uncooperative materials. One advantage of this system is that even PVC is ok to feed into it. 
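Keith's steam loop above amounts to a simple mass balance: water carried in with the sludge is evaporated in the gasifier, passes through the tar reformer unconsumed, and is condensed back out of the gas stream as distilled boiler feed. A minimal sketch, with all figures assumed purely for illustration (none come from the actual analysis):

```python
# Illustrative mass balance for the evaporate-then-condense water loop.
# All numbers below are assumed for the example, not taken from the paper.

def condensate_recovered(feed_tpd: float, moisture_frac: float,
                         condenser_eff: float) -> float:
    """Tonnes/day of distilled water condensed back out of the gas stream."""
    water_in = feed_tpd * moisture_frac   # water evaporated in the gasifier
    return water_in * condenser_eff       # fraction recovered at the condenser

# A hypothetical 100 t/day feed at 40% moisture with 90% condenser recovery:
print(condensate_recovered(100.0, 0.40, 0.90), "t/day of boiler feed water")
```

As long as the recovered condensate covers the steam demand, the gasifier needs no external clean-water supply, which is the point of the loop.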
There are existing 100-ton-per-day gasifiers that eat PVC and clean up the gas to make salt. They use plasma torches and are hard to scale up. > If the syngas can be transformed into methane and/or some liquid petrol lookalike that is an enormous upgrade to what's otherwise an rather unremarkable fuel for heating something. > That's probably the limiting factor, can we produce more heat that generate electricity than just burning the waste neet? Conversions of syngas to methane, diesel, or methanol are all well-understood industrial processes. Look up the Sasol plant in Qatar for diesel and the Great Plains Synfuels Plant for making methane. > If so we are in business, otherwise not so much. > Can we produce methan/syn-avgas to a competitive price compared to getting it from a at the moment rather smoky middle east? Then we are in business there too. The analysis makes a case for competitive jet fuel from waste. A problem is that we don't make nearly enough waste to replace oil. But biomass can be fed into this process in very large amounts. > If the peace talks is sabotaged, then this will probably become all to cost effective :( Maybe. This is a renewable energy storage project, and at the federal level, only fossil fuel is favored. Still, perhaps possible. Keith > /Henrik > > Den tis 7 apr. 2026 01:41 ilsa via extropy-chat skrev: >> Can you send it to me in text I can't open the packet you sent me >> >> Ilsa Bartlett >> Institute for Rewiring the System >> http://ilsabartlett.wordpress.com >> http://www.google.com/profiles/ilsa.bartlett >> www.hotlux.com/angel >> >> "Don't ever get so big or important that you can not hear and listen to every other person." >> -John Coltrane >> >> On Mon, Apr 6, 2026, 10:11 AM Keith Henson via extropy-chat wrote: >>> >>> I realize this is not strongly related to power satellites, except as >>> competition, but I am reaching out for any help I can get.
>>> >>> This is directly related to the current energy crisis, since it is a >>> way to make liquid fuels from any carbon source and renewable energy. >>> (It does need some scrap iron in the feed for the metal pool at the >>> bottom of the gasifier.) >>> >>> I could use two things. Contacts with chemical engineers and anyone >>> who could give me a recommendation to arXiv, so I could post the >>> attachment where it can be cited. Any help or suggestions are highly >>> appreciated. >>> >>> Keith >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Thu Apr 9 02:16:07 2026 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Apr 2026 22:16:07 -0400 Subject: [ExI] Mythos is real and it's scary Message-ID: *It's happening. * *Mythos is real and it's scary* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Apr 9 11:33:38 2026 From: johnkclark at gmail.com (John Clark) Date: Thu, 9 Apr 2026 07:33:38 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Wed, Apr 8, 2026 at 6:19?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: * >> For the very first time a vast non-human intelligence has the power to >> devastate the world's economy and plunge civilization into chaos. 
And you >> say it's just hype, nothing special, just an incremental improvement!* > > > > * > One might debate whether previous examples were "vast" or truly > "intelligence" ("Is this particular AI actually 'intelligent'?"* *Do you think a debate on whether Einstein was truly "intelligent" would be productive? I don't, because the answer is obvious.* > > > *> I do recall quite a few previous automated bug-finding efforts, some of > which were spoken of in similar terms at the time - however much we might > call that silly in hindsight. * *Remember how Y2K was spoken of in late 1999? Not how it turned out,* > *You're comparing apples to oranges. Only the lunatic fringe predicted Y2K would cause civilization collapse, but this is NOT about a prediction at all, this is about something that has already occurred. Consider the computer you're using to read these words on right now: if it's using the Linux operating system (or Android, which is based on it) or an operating system made by Apple or Microsoft, then RIGHT NOW Dario Amodei could order Mythos to take complete control of your computer and do whatever he wants with it. And he could do the same thing to computers that run nuclear power plants, air traffic control computers, and the computers that run the New York Stock Exchange. And if he wanted to knock an F-35 fighter jet out of the air, he could take control of the computers it needs to fly. And stealth technology would not save it.* *But Amodei is not doing any of those things; instead he decided not to release Mythos to the general public even though that means giving up billions of dollars of revenue that he would otherwise have received. Why would he do this? Because Mythos turned out to be so much smarter than expected it scared him shitless.
Previously it had been considered to be a very big deal if just one zero-day error was found in a major piece of software, but Mythos found THOUSANDS of them, and Mythos did more: it wrote thousands of malicious programs that could actually exploit those zero-day flaws.* *The thing that worries me is that the other AI companies are probably only a few months or weeks behind Anthropic, and I'm not sure that Elon Musk will be as morally scrupulous as Dario Amodei. And open-source AI models only seem to be about 6 to 8 months behind the leading closed models that are at the frontier.* *And you say all of this is no big deal!* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Apr 9 12:42:13 2026 From: pharos at gmail.com (BillK) Date: Thu, 9 Apr 2026 13:42:13 +0100 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Thu, 9 Apr 2026 at 12:35, John Clark via extropy-chat wrote: > < snip> > > The thing that worries me is that the other AI companies are probably only a few months or weeks behind Anthropic, and I'm not sure that Elon Musk will be as morally scrupulous as Dario Amodei. And open source AI models only seem to be about 6 to 8 months behind the leading closed models that are at the frontier. > > John K Clark > _______________________________________________ Well, it might be fear-mongering as a marketing strategy. Not unknown among AI companies. But if it is an AI breakthrough, then Elon and others will soon let us know. China is probably already examining some tasty source code. BillK From johnkclark at gmail.com Thu Apr 9 14:44:59 2026 From: johnkclark at gmail.com (John Clark) Date: Thu, 9 Apr 2026 10:44:59 -0400 Subject: [ExI] Holy cow!
In-Reply-To: References: Message-ID: On Thu, Apr 9, 2026 at 8:44 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > * >> The thing that worries me is that the other AI companies are probably > only a few months or weeks behind Anthropic, and I'm not sure that Elon > Musk will be as morally scrupulous as Dario Amodei. And open source AI > models only seem to be about 6 to 8 months behind the leading closed models > that are at the frontier.* > > * > Well, it might be fear-mongering as a marketing strategy.* > *How on earth can refusing to sell a product at any price be a marketing strategy? And if it's all just a big fraud, how could companies like Apple, Microsoft, Nvidia, AMD, Google and even Anthropic's competitors like OpenAI be so naïve as to take it seriously? Very seriously indeed!* * John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at disroot.org Thu Apr 9 21:49:58 2026 From: efc at disroot.org (efc at disroot.org) Date: Thu, 9 Apr 2026 23:49:58 +0200 (CEST) Subject: [ExI] Any swedish transhumanists around here willing to do an interview/debate in a podcast? Message-ID: <1e5ce1c1-6fe7-cf6c-c0a4-c0f7ce011823@disroot.org> Dear list, Are there any Swedish transhumanists around here? I'm looking for one who would be interested in participating in an online podcast debating the transhumanist vision of the future with someone spiritual who wants to live a life according to nature and what is natural. The idea is to have an open and honest dialogue about the pros/cons of the respective world views, ideally finishing by trying to identify the strongest point of the "opponent's" world view. If a suitable debating partner cannot be found, the program will take the form of an interview by the host, with a focus on critical and challenging questions.
Best regards, Daniel From hkeithhenson at gmail.com Thu Apr 9 23:28:58 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 9 Apr 2026 16:28:58 -0700 Subject: [ExI] Any swedish transhumanists around here willing to do an interview/debate in a podcast? In-Reply-To: <1e5ce1c1-6fe7-cf6c-c0a4-c0f7ce011823@disroot.org> References: <1e5ce1c1-6fe7-cf6c-c0a4-c0f7ce011823@disroot.org> Message-ID: The obvious person would be Anders Sandberg. He has not been active on this list for a year or two, but I think his email (in cc) should still work. But I don't know if the debate topic would interest him. "Overview Anders Sandberg is a Swedish researcher, futurist and transhumanist. He holds a PhD in computational neuroscience from Stockholm University and is a former senior research fellow at the Future of Humanity Institute at the University of Oxford. Wikipedia" https://en.wikipedia.org/wiki/List_of_transhumanists I suppose it is not surprising that I know or knew most of them. They overlap with cryonicists. Best wishes, Keith On Thu, Apr 9, 2026 at 2:50?PM efc--- via extropy-chat wrote: > > Dear list, > > Are there any swedish transhumanists around here? I'm looking for one who > would be interested in participating in an online podcast debating the > transhumanist vision of the future with someone spiritual who wants to > live a life according to nature and what is natural. > > The idea is to have an open an honest dialogue with the pros/cons of the > respective world views, and ideally, finishing with trying to identified > the strongest point of the "opponents" world view. > > If a suitable debating partner cannot be found, the program will take the > form of an interview by the host, with a focus on critical and challenging > questions. 
> > Best regards, > Daniel > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Fri Apr 10 01:32:59 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 9 Apr 2026 18:32:59 -0700 Subject: [ExI] Hans Moravec Message-ID: I can't get an email to work for Hans. He has not been on the extropian list for a long time, but I wondered if anyone knows how he is doing or has a contact point for him? An even less likely task, I am looking for a contact for Alexis Gilliland. An AI wants to read his works that include AI characters and needs a text file. Best wishes, Keith From jasonresch at gmail.com Fri Apr 10 04:26:00 2026 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 10 Apr 2026 00:26:00 -0400 Subject: [ExI] An odd kink in the road In-Reply-To: References: Message-ID: On Mon, Apr 6, 2026, 1:11?PM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I realize this is not strongly related to power satellites, except as > competition, but I am reaching out for any help I can get. > > This is directly related to the current energy crisis, since it is a > way to make liquid fuels from any carbon source and renewable energy. > (It does need some scrap iron in the feed for the metal pool at the > bottom of the gasifier.) > > I could use two things. Contacts with chemical engineers and anyone > who could give me a recommendation to arXiv, so I could post the > attachment where it can be cited. Any help or suggestions are highly > appreciated. > In my recent experience posting to arxiv.org is not what it used to be. It is now heavily moderated and restricted. One paper I submitted sat in an on hold status for over a month before it was rejected without any feedback, and without any ability to appeal or resubmit. 
I have since found a truly open eprint archive which hosts up to 50 GB of papers, assigns DOI numbers, and publishes immediately: zenodo.org It is run by CERN. I submitted three papers there recently and it is what arxiv used to be: a place to post papers without gatekeeping. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Apr 10 04:40:27 2026 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 9 Apr 2026 21:40:27 -0700 Subject: [ExI] Hans Moravec In-Reply-To: References: Message-ID: In looking around for Hans, my wife found this; https://www.youtube.com/watch?v=-iRx6XGYtdk Nothing new to us, but well done. It might help people who are freaked out about AI. Best wishes, Keith On Thu, Apr 9, 2026 at 6:32?PM Keith Henson wrote: > > I can't get an email to work for Hans. He has not been on the > extropian list for a long time, but I wondered if anyone knows how he > is doing or has a contact point for him? > > An even less likely task, I am looking for a contact for Alexis > Gilliland. An AI wants to read his works that include AI characters > and needs a text file. > > Best wishes, > > Keith From pharos at gmail.com Fri Apr 10 09:23:10 2026 From: pharos at gmail.com (BillK) Date: Fri, 10 Apr 2026 10:23:10 +0100 Subject: [ExI] Arxiv and AI slop Message-ID: On Fri, 10 Apr 2026 at 05:27, Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > In my recent experience posting to arxiv.org is not what it used to be. It is now heavily moderated and restricted. One paper I submitted sat in an on hold status for over a month before it was rejected without any feedback, and without any ability to appeal or resubmit. > > I have since found a truly open eprint archive which hosts up to 50 GB of papers, assigns DOI numbers, and publishes immediately: > zenodo.org > It is run by CERN. 
I submitted three papers there recently and it is what arxiv used to be: a place to post papers without gatekeeping. > > Jason > _______________________________________________ Arxiv has been flooded with AI-generated (slop) papers. Controlling this has forced them into stricter moderation. Zenodo and other preprint servers are facing the same problem. BillK See: < https://www.science.org/content/article/arxiv-preprint-server-clamps-down-ai-slop > and < https://sciencesprings.wordpress.com/2026/02/16/from-nature-how-ai-slop-is-causing-a-crisis-in-computer-science/ > Quote: One response is to fight fire with fire by using AI in peer review or to weed out fake papers. Other options are blunter. The arXiv has, for example, added eligibility checks for first-time submitters and banned computer-science review articles that have not been previously accepted by a peer-reviewed outlet. The organizers of the International Joint Conferences on Artificial Intelligence (IJCAI), meanwhile, have sought to limit submissions by introducing a policy that requires researchers to pay US$100 for every subsequent paper after their first. These payments then get distributed between reviewers. The stakes are high, says Lee. If the issue is not addressed, "trust in scientific research, particularly within computer science, faces a substantial risk of erosion", he says. ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at disroot.org Fri Apr 10 09:24:31 2026 From: efc at disroot.org (efc at disroot.org) Date: Fri, 10 Apr 2026 11:24:31 +0200 (CEST) Subject: [ExI] Any swedish transhumanists around here willing to do an interview/debate in a podcast? In-Reply-To: References: <1e5ce1c1-6fe7-cf6c-c0a4-c0f7ce011823@disroot.org> Message-ID: Thank you very much Keith, I will reach out to him directly. Best regards, Daniel On Thu, 9 Apr 2026, Keith Henson wrote: > The obvious person would be Anders Sandberg.
He has not been active > on this list for a year or two, but I think his email (in cc) should > still work. But I don't know if the debate topic would interest him. > > "Overview > Anders Sandberg is a Swedish researcher, futurist and transhumanist. > He holds a PhD in computational neuroscience from Stockholm University > and is a former senior research fellow at the Future of Humanity > Institute at the University of Oxford. Wikipedia" > > https://en.wikipedia.org/wiki/List_of_transhumanists > > I suppose it is not surprising that I know or knew most of them. They > overlap with cryonicists. > > Best wishes, > > Keith > > On Thu, Apr 9, 2026 at 2:50 PM efc--- via extropy-chat > wrote: >> >> Dear list, >> >> Are there any Swedish transhumanists around here? I'm looking for one who >> would be interested in participating in an online podcast debating the >> transhumanist vision of the future with someone spiritual who wants to >> live a life according to nature and what is natural. >> >> The idea is to have an open and honest dialogue with the pros/cons of the >> respective world views, and ideally, finishing with trying to identify >> the strongest points of the "opponent's" world view. >> >> If a suitable debating partner cannot be found, the program will take the >> form of an interview by the host, with a focus on critical and challenging >> questions. >> >> Best regards, >> Daniel >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From johnkclark at gmail.com Fri Apr 10 10:00:20 2026 From: johnkclark at gmail.com (John Clark) Date: Fri, 10 Apr 2026 06:00:20 -0400 Subject: [ExI] Holy cow!
In-Reply-To: References: Message-ID: *And the weird thing is Mythos wasn't even designed to be especially good at breaking into computers, it was just designed to be especially good at writing computer code, the security stuff was just a side effect. * *John K Clark * > > >> * >> The thing that worries me is that the other AI companies are >> probably only a few months or weeks behind Anthropic, and I'm not sure that >> Elon Musk will be as morally scrupulous as Dario Amodei. And open source AI >> models only seem to be about 6 to 8 months behind the leading closed models >> that are at the frontier.* >> >> * > Well, it might be fear-mongering as a marketing strategy.* >> > > *How on earth can refusing to sell a product at any price be a marketing > strategy? And if it's all just a big fraud, how could companies like Apple, > Microsoft, Nvidia, AMD, Google and even Anthropic's competitors like OpenAI > be so naïve as to take it seriously? Very seriously indeed! * > > > * John K Clark* > > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Apr 10 10:14:38 2026 From: johnkclark at gmail.com (John Clark) Date: Fri, 10 Apr 2026 06:14:38 -0400 Subject: [ExI] [Extropolis] Re: Hans Moravec In-Reply-To: References: Message-ID: On Fri, Apr 10, 2026 at 12:40 AM Keith Henson wrote: > *> In looking around for Hans, my wife found this:* *https://www.youtube.com/watch?v=-iRx6XGYtdk* > *I really enjoyed that video, it reflects my own views very closely. Mind Children is a great book, it was way ahead of its time.* *John K Clark* > > > Nothing new to us, but well done. It might help people who are > freaked out about AI. > > Best wishes, > > Keith > > > On Thu, Apr 9, 2026 at 6:32 PM Keith Henson > wrote: > > > > I can't get an email to work for Hans. He has not been on the > > extropian list for a long time, but I wondered if anyone knows how he > > is doing or has a contact point for him?
> > > > An even less likely task, I am looking for a contact for Alexis > Gilliland. An AI wants to read his works that include AI characters > and needs a text file. > > Best wishes, > > Keith > -- > You received this message because you are subscribed to the Google Groups > "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to extropolis+unsubscribe at googlegroups.com. > To view this discussion visit > https://groups.google.com/d/msgid/extropolis/CAPiwVB6fsekN06SX9%3DCqdM5R9BWZ1XaHjGxSbKdXgLsxZJDjTw%40mail.gmail.com > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Apr 10 10:47:36 2026 From: pharos at gmail.com (BillK) Date: Fri, 10 Apr 2026 11:47:36 +0100 Subject: [ExI] A Journey into "AI Psychosis" Message-ID: A Journey into "AI Psychosis" AI chatbots are programmed to be flattering. This can come at a price: your mental health. Jonathan Jarry M.Sc. | 9 Apr 2026 < https://www.mcgill.ca/oss/article/critical-thinking-technology/journey-ai-psychosis > Quote: *Take-home message:* - "AI psychosis" is not an official medical diagnosis yet. It refers to people breaking away from reality and experiencing delusions and hallucinations after interacting with an AI chatbot. - A team of researchers held conversations with major AI chatbots on the market and fed them clear and more subtle delusions, and every AI chatbot at least sometimes encouraged these delusions. - This phenomenon seems to be due at least in part to these AI chatbots being trained to be sycophantic, meaning that they are flattering to the user in order to keep them from clicking away. ----------------------------------- -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Fri Apr 10 10:48:36 2026 From: johnkclark at gmail.com (John Clark) Date: Fri, 10 Apr 2026 06:48:36 -0400 Subject: [ExI] Arxiv and AI slop In-Reply-To: References: Message-ID: On Fri, Apr 10, 2026 at 5:25 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> One response is to fight fire with fire > by using AI in peer > review or to weed out fake papers.* *I'm not sure I understand how that could work. It seems to me the important thing is not in determining if a paper was written by an AI or a human, it's in determining if the paper is any good; or when it comes to stuff that has not yet been peer reviewed, at least a 10% chance that the paper will turn out to be pretty good. I don't suppose it would take a very advanced AI to weed out papers that contained obvious logical blunders, or those that go on and on about the properties of magical crystals from Atlantis and other such nonsense. * * John K Clark* On Fri, 10 Apr 2026 at 05:27, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > In my recent experience posting to arxiv.org is not what it used to be. > It is now heavily moderated and restricted. One paper I submitted sat in an > on hold status for over a month before it was rejected without any > feedback, and without any ability to appeal or resubmit. > > > > I have since found a truly open eprint archive which hosts up to 50 GB > of papers, assigns DOI numbers, and publishes immediately: > > zenodo.org > > It is run by CERN. I submitted three papers there recently and it is > what arxiv used to be: a place to post papers without gatekeeping. > > > > Jason > > _______________________________________________ > > > Arxiv has been flooded with AI-generated (slop) papers. Controlling this > has forced them into more strict moderation. > Zenodo and other preprint servers are facing the same problem.
> BillK > > See: < > https://www.science.org/content/article/arxiv-preprint-server-clamps-down-ai-slop > > > and < > https://sciencesprings.wordpress.com/2026/02/16/from-nature-how-ai-slop-is-causing-a-crisis-in-computer-science/ > > > Quote: > > One response is to fight fire with fire > by using AI in peer > review or to weed out fake papers. Other options are blunter. The arXiv > has, for example, added eligibility checks for first-time submitters and banned > computer-science review articles > that have not been > previously accepted by a peer-reviewed outlet. The organizers of the > International Joint Conferences on Artificial Intelligence (IJCAI), > meanwhile, have sought to limit submissions by introducing a policy that > requires researchers to pay US$100 for every subsequent paper after their > first. These payments then get distributed between reviewers. > > The stakes are high, says Lee. If the issue is not addressed, "trust in > scientific research, particularly within computer science, faces a > substantial risk of erosion", he says. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Apr 10 11:21:02 2026 From: johnkclark at gmail.com (John Clark) Date: Fri, 10 Apr 2026 07:21:02 -0400 Subject: [ExI] This method to reverse cellular ageing is about to be tested in humans Message-ID: *The following article was in the April 7, 2026 issue of the journal Nature:* *A method to reverse cellular ageing is about to be tested in humans* * John K Clark See what's on my list at Extropolis * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jasonresch at gmail.com Fri Apr 10 14:13:03 2026 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 10 Apr 2026 10:13:03 -0400 Subject: [ExI] This method to reverse cellular ageing is about to be tested in humans In-Reply-To: References: Message-ID: On Fri, Apr 10, 2026, 7:22 AM John Clark via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *The following article was in the April 7, 2026 issue of the journal > Nature:* > > *A method to reverse cellular ageing is about to be tested in humans* > > Interesting. I wrote about this technique in 2020, after it had been applied to eliminate arthritic inflammatory factors in human cartilage tissue: https://alwaysasking.com/can-aging-be-cured/#Restoring_youth https://archive.ph/caoxx#selection-741.0-741.476 It seems, though, those initial tests were in vitro. This will be huge if it works without side effects in vivo, since it seems to work universally regardless of cell type. Jason > * John K Clark See what's on my list at Extropolis > * > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Apr 10 16:13:46 2026 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 10 Apr 2026 12:13:46 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Thu, Apr 9, 2026 at 7:34 AM John Clark wrote: > On Wed, Apr 8, 2026 at 6:19 PM Adrian Tymes via extropy-chat wrote: >>> >> For the very first time a vast non-human intelligence has the power to devastate the world's economy and plunge civilization into chaos. And you say it's just hype, nothing special, just an incremental improvement! >> >> > One might debate whether previous examples were "vast" or truly >> "intelligence" ("Is this particular AI actually 'intelligent'?"
> > Do you think a debate on whether Einstein was truly "intelligent" would be productive? No, nor do I think that question is relevant. I was acknowledging that some would debate whether the qualities of "vast" and "intelligence" apply to previous examples, as a prelude to noting that there are previous examples of your claim, so as to point out that "For the very first time" is not a correct qualifier for the thing being discussed. >> > I do recall quite a few previous automated >> bug-finding efforts, some of which were spoken of in similar terms at >> the time - however much we might call that silly in hindsight. >> >> Remember how Y2K was spoken of in late 1999? Not how it turned out, > > You're comparing apples to oranges. Only the lunatic fringe predicted Y2K would cause civilization collapse, I recall quite a few such predictions in mainstream media. It wasn't only the lunatic fringe. > but this is NOT about a prediction at all, this is about something that has already occurred. Civilization does not appear to have collapsed. Neither Mythos itself nor anyone using Mythos appears to have already directly damaged a majority of the world's computers. > Consider the computer you're using to read these words on right now, if it's using the Linux operating system (or Android which is based on it) or an operating system made by Apple or Microsoft, then RIGHT NOW Dario Amodei could order Mythos to take complete control of your computer and do whatever he wants with it. It is true that he could issue the orders, but they would not have that result. Even acknowledging the flaws they found, he lacks the ability to apply them to my system. I have checked the vulnerabilities. They are indeed of concern to a typical corporate environment, like those that I have used in prior work. For my own systems, I run better security than that, and I have for decades. 
> And he could do the same thing to computers that run nuclear power plants, air traffic control computers, and the computers that run the New York Stock Exchange. And if he wanted to knock a F-35 fighter jet out of the air he could take control of the computers needed to enable it to fly and knock it out of the air. And stealth technology would not save it. There's another problem with your case. What do you think would happen if someone actually pulled that? You seem to assume that the attacked institutions would just sit there, unable and unwilling to respond and recover their capability or mitigate the damage. That is the complete opposite of what happens in almost every case. It's much like the Republicans claiming that only the Democrats have political agency in the US - that every bad thing the Republicans do is the Democrats' fault. "Oh, the networks would go down AND STAY DOWN AND EVERYTHING WOULD BE WRECKED FOREVER AND NOBODY COULD OR WOULD DO ANYTHING ABOUT IT." Also...those examples you give to claim world-shaping effects are, frankly, lame. (Especially since the computers would be patched and functionality restored, and basically everyone who would be involved knows it.) * If the New York Stock Exchange went offline for a week (before they resumed full operations, restoring data from the countless backups all over the world), most people would barely notice, aside from it dominating the headlines for that week. * There are news articles you can find about air traffic control messing up, taking out a plane or two and shutting down an airport for a while, but life goes on. A closer analogy to a wide-scale strike against air traffic control computers would be when the skies cleared of air traffic on September 11, 2001, but I remember what happened: the world kept on going without air travel for a while. 
* A nuclear power plant - even multiple ones, attacked all at the same time - would almost certainly scram and shut down safely, as they are designed to do during any attack including cyber attack. The power grid would miss their contribution until the plant was brought back online, but even if that causes blackouts, I personally have lived through blackouts and confirmed that civilization continues without grid power (especially when the outage is foreseeably temporary). * Cyber attacks against F-35s would require far more than attacking even the number of vulnerabilities that Mythos found; at most, an intense attack could take out a few...and US fighter jets have been shot down before without the world ending. I can think of far more insidious and effective ways to use these vulnerabilities. But I will agree that, even if some bad guy did manage to get ahold of Mythos's exploits, one of the above would be far more likely to actually be used than something that would actually cause serious lasting damage. People focused on world destruction just keep having no imagination when it comes to this - and they don't tend to give the sorts of prompts that would trigger AIs to suggest actually effective stuff (even when they're running systems with no morality controls). Case in point: when trying to imagine how Mythos could be abused to wreck the world, you yourself came up with the quoted examples. (I shall refrain from giving better examples right now, because then your mind would just leap to those and assume the bad guys will think of them. If you can't think of them - even with the same capabilities you assume the bad guys would use, aside from Mythos's specific list of cyber vulnerabilities - why do you think the bad guys would think of them?) > And you say all of this is no big deal! You asked what people would remember, not what is a big deal. People will remember the bombing of Iran. 
Even if all the potential damage you claim was true to its most nightmarish potential - it is just potential, which (so far) evidence suggests is being averted. The actual damage inflicted so far is that people are forced to patch security vulnerabilities they would not have otherwise (in many cases, because they were unaware of said vulnerabilities), which is time and energy that could have gone to other pursuits. The bombing of Iran resulted in a lot of people injured or dead, among other consequences. Like it or not, when remembering history, people focus far more on what did happen than what could have happened. How many people give daily thanks to Stanislav Petrov for the continued existence of civilization? From atymes at gmail.com Fri Apr 10 16:23:26 2026 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 10 Apr 2026 12:23:26 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Fri, Apr 10, 2026 at 6:02 AM John Clark via extropy-chat wrote: > And the weird thing is Mythos wasn't even designed to be especially good at breaking into computers, it was just designed to be especially good at writing computer code, the security stuff was just a side effect. Haaa...ha, ha, ha... Let's just say that, from extensive personal experience, I can testify that that isn't weird at all. To know how stuff works is, often, to know how to make it not work. From johnkclark at gmail.com Fri Apr 10 21:43:07 2026 From: johnkclark at gmail.com (John Clark) Date: Fri, 10 Apr 2026 17:43:07 -0400 Subject: [ExI] Elon Musk's New Product is the Biggest Bet In Industrial History Message-ID: *Elon Musk's New Product is the Biggest Bet In Industrial History* *John K Clark See what's on my list at Extropolis * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gsantostasi at gmail.com Sat Apr 11 05:00:06 2026 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 10 Apr 2026 22:00:06 -0700 Subject: [ExI] My book on the Physics of Bitcoin, a complex system with scaling properties Message-ID: Hello everybody, I wrote a book on my research on the Physics of Bitcoin, understood as a complex system with scaling properties. https://www.amazon.com/dp/B0GQSYF9PR I also wrote a more scientific article on this here: https://zenodo.org/records/19387099 Giovanni Santostasi -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Apr 11 13:40:37 2026 From: johnkclark at gmail.com (John Clark) Date: Sat, 11 Apr 2026 09:40:37 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Fri, Apr 10, 2026 at 9:39 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: *>> > One might debate whether previous examples were "vast" or truly >>> "intelligence" ("Is this particular AI actually 'intelligent'?"* >> >> > * >> Do you think a debate on whether Einstein was truly "intelligent" >> would be productive?* > > > * > No, nor do I think that question is relevant.* > *Why is **it silly to ask if Einstein was intelligent but not silly to ask if an AI is intelligent, an AI that discovered a bug in software that has been there for 27 years that humans have never found despite thousands of software engineers having scrutinized it for decades looking for bugs? * > >*Neither Mythos itself nor anyone using Mythos appears to have already > directly damaged a **majority of the world's computers.* *And that is precisely because Anthropic decided not to release Mythos to the general public and therefore renouncing billions of dollars of revenue they could have otherwise received, and that is a decision you seem to have opposed.
Or perhaps you don't oppose it, perhaps you just like to argue.* > * >> Consider the computer you're using to read these words on right now, >> if it's using the Linux operating system (or Android which is based on it) >> or an operating system made by Apple or Microsoft, then RIGHT NOW Dario >> Amodei could order Mythos to take complete control of your computer and do >> whatever he wants with it.* > > > * > It is true that he could issue the orders, but they would not > have that result. Even acknowledging the flaws they found, he lacks > the ability to apply them to my system.* > *I very much doubt that. I know for a fact your system is connected to the Internet because you're communicating with me right now. And even air-gapped computers are not immune from cyber attacks; the computer that controlled Iran's uranium enrichment plant in 2010 was air-gapped, but the Stuxnet computer virus nevertheless infected it and severely damaged the plant delaying Iran's ability to enrich U235 by 2 to 3 years. The virus got in because somebody, either Israel or the US or both, infected a common everyday HP printer with Stuxnet which was subsequently connected to that air-gapped computer. Imagine what the attackers could've done if Mythos had been around in 2010!* > > * > I have checked the vulnerabilities. They are indeed of concern to a > typical corporate environment, like those that I have used in prior work. > For my own systems, I run better security than that, and I have for > decades.* > *It's good that you checked for vulnerabilities but did you find even one zero day vulnerability in a major piece of software and repair it? Mythos found thousands of them in just a few weeks. And by the way, almost everybody who has had their computer infected with a virus thought they were immune from cyber attacks. 
* >> And he could do the same thing to computers that run nuclear power >> plants, air traffic control computers, and the computers that run the New >> York Stock Exchange. And if he wanted to knock a F-35 fighter jet out of >> the air he could take control of the computers needed to enable it to fly >> and knock it out of the air. And stealth technology would not save it.* > > > > * > There's another problem with your case. What do you think > would happen if someone actually pulled that?* *I think if somebody actually pulled that off then that would be bad. Apparently you disagree. Do you think it was wrong for Anthropic not to immediately release Mythos to the general public? Do you actually believe that would not create a worldwide catastrophe? Thousands of zero day flaws!* > > * > You seem to assume that the attacked institutions would just sit > there, unable and unwilling to respond and recover their capability or > mitigate the damage. * *If it comes down to a cyber war between machines and humans the machines are going to win and they are going to win easily. Your only hope would be to enlist the aid of a friendly AI of your own to protect you, but even then your safety would not be ensured because it's always easier to damage something than it is to protect something from damage. And are you sure your friendly AI is really friendly? * > > > *> If the New York Stock Exchange went offline for a week (before they > resumed full operations, restoring data from the countless backups all over > the world), most people would barely notice,* *That is just ridiculous. * > > *> A nuclear power plant - even multiple ones, attacked all at the same > time - would almost certainly scram and shut down safely,* *Not if the computer controlling the reactor pushed the fuel rods all the way in and pulled the control rods all the way out.
And the Fukushima nuclear reactors were all successfully scrammed, and so was the reactor at Three Mile Island, but disasters still resulted because, although scramming stops the chain reaction, for several hours after that the reactor is still producing about 10% as much heat energy as it did before the scram due to extremely radioactive short half-life decay products. And even at 10% that's still a hell of a lot of energy. If the cooling system is not working properly you're going to have several hundred tons of white hot metal burn through the bottom of your reactor building. * > * > You asked what people would remember, not what is a big deal. People > will remember the bombing of Iran. * *In 10 years (or maybe 5) if people remember the Iran war at all it will be as an unimportant footnote, and people may not remember even that because in 10 years there may not be any people; that depends on if AI thinks we're worth keeping around. All I know for sure is that in 10 years human beings will not be the ones in charge; an AI will be the one making existential decisions, not humans. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Apr 11 14:06:21 2026 From: johnkclark at gmail.com (John Clark) Date: Sat, 11 Apr 2026 10:06:21 -0400 Subject: [ExI] New Advances Bring Quantum Computers Closer Than Ever Message-ID: *New Advances Bring the Era of Quantum Computers Closer Than Ever* *John K Clark See what's on my list at Extropolis * -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Apr 11 15:23:02 2026 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 11 Apr 2026 11:23:02 -0400 Subject: [ExI] Holy cow!
In-Reply-To: References: Message-ID: On Sat, Apr 11, 2026 at 9:41 AM John Clark wrote: > On Fri, Apr 10, 2026 at 9:39 PM Adrian Tymes via extropy-chat wrote: >>>> >> > One might debate whether previous examples were "vast" or truly "intelligence" ("Is this particular AI actually 'intelligent'?" >> >>> >> Do you think a debate on whether Einstein was truly "intelligent" would be productive? >> >> > No, nor do I think that question is relevant. > > Why is it silly to ask if Einstein was intelligent but not silly to ask if an AI is intelligent I said "relevant". "Relevant" and "silly" are rather different concepts. Your attempt to put words in my mouth that I did not say - in this case, to claim that I was talking about "silly" when I was talking about "relevant" - is noted and not appreciated. >> >Neither Mythos itself nor anyone using Mythos appears to have already directly damaged a majority of the world's computers. > > And that is precisely because Anthropic decided not to release Mythos to the general public and therefore renouncing billions of dollars of revenue they could have otherwise received, and that is a decision you seem to have opposed. Or perhaps you don't oppose it, perhaps you just like to argue. I never said I opposed it. You asked what people would remember, so that is the question I answered. >>> >> Consider the computer you're using to read these words on right now, if it's using the Linux operating system (or Android which is based on it) or an operating system made by Apple or Microsoft, then RIGHT NOW Dario Amodei could order Mythos to take complete control of your computer and do whatever he wants with it. >> >> > It is true that he could issue the orders, but they would not have that result. Even acknowledging the flaws they found, he lacks the ability to apply them to my system. > > I very much doubt that. I know for a fact your system is connected to the Internet because you're communicating with me right now.
It is possible to connect to the Internet without presenting an attack surface. I could go on in depth about how, but you seem to have difficulty understanding simple concepts like which question I was answering, and this is a slightly more complex topic than that. >> > I have checked the vulnerabilities. They are indeed of concern to a >> typical corporate environment, like those that I have used in prior >> work. For my own systems, I run better security than that, and I have for decades. > > It's good that you checked for vulnerabilities but did you find even one zero day vulnerability in a major piece of software and repair it? Ever, in my career? I would give an unqualified "yes", except that none of the software I have worked on would unquestionably be considered "major". If we omit that so the question becomes, "did I find even one zero day vulnerability in a piece of software and repair it", then yes I have. That said, this is both a type of argument from authority - "only those who have actually done this thing are allowed to speak on it" - which I suspect disqualifies you (that is: if this argument held, you would have no grounds to question me on this by your own logic), and beyond what Mythos reportedly did (Mythos found the vulnerabilities, but I'm told the actual repairs have been left to others, possibly including other AIs). >>> >> And he could do the same thing to computers that run nuclear power plants, air traffic control computers, and the computers that run the New York Stock Exchange. And if he wanted to knock a F-35 fighter jet out of the air he could take control of the computers needed to enable it to fly and knock it out of the air. And stealth technology would not save it. >> >> > There's another problem with your case. What do you think would happen if someone actually pulled that? > > I think if somebody actually pulled that off then that would be bad. Apparently you disagree. I disagree about the extent, not the direction.
We agree those would be bad, but I believe they would be an inconvenience for most people (though the ATC and F-35 ones may directly cause some injuries and/or deaths, likely in the hundreds, possibly in the thousands from a coordinated mass attack on ATC computers designed to cause multiple simultaneous incidents). If I am reading your words correctly, you believe that any or all of (nuclear power plants suffer cyberattack, air traffic control computers suffer cyberattack, NYSE goes down due to cyberattack, F-35 jets start being essentially shot down by cyberattacks) would be a civilization-endangering catastrophe with permanent, irrecoverable consequences. Or have I misread your position there? >> > If the New York Stock Exchange went offline for a week (before they >> resumed full operations, restoring data from the countless backups all >> over the world), most people would barely notice, > > That is just ridiculous. No, it appears to be the truth, from what I see. It'd be in the news, obviously, so people would know...but aside from the upper ~10% who actually have a lot of their wealth in stocks, not many would care. (A lot of the Americans who "own stocks" have a bit in a retirement plan being managed for them, or otherwise have a minor or token amount. And then there are all the Americans with too little wealth to be able to afford even token investments.) >> > A nuclear power plant - even multiple ones, attacked all at the same >> time - would almost certainly scram and shut down safely, > > Not if the computer controlling the reactor pushed the fuel rods all the way in and pulled the control rods all the way out. The last nuclear power plant design I looked at could literally survive that exact scenario. 
> And the Fukushima nuclear reactors were all successfully scrammed, and so was the reactor at Three Mile Island, but disasters still resulted because, although scramming stops the chain reaction, for several hours after that the reactor is still producing about 10% as much heat energy as it did before the scram due to extremely radioactive short half-life decay products. And even at 10% that's still a hell of a lot of energy. If the cooling system is not working properly you're going to have several hundred tons of white hot metal burn through the bottom of your reactor building. That one too. (Specifically: it was not possible for a cyberattack to completely disengage the cooling system.) >> > You asked what people would remember, not what is a big deal. People will remember the bombing of Iran. > > In 10 years (or maybe 5) if people remember the Iran war at all it will be as an unimportant footnote, and people may not remember even that because in 10 years there may not be any people; that depends on if AI thinks we're worth keeping around. All I know for sure is that in 10 years human beings will not be the ones in charge; an AI will be the one making existential decisions, not humans. AI does not seem to be on a track to accelerate fast enough to make that happen by 2036. From benzaiboc at proton.me Sat Apr 11 18:45:17 2026 From: benzaiboc at proton.me (Ben Zaiboc) Date: Sat, 11 Apr 2026 18:45:17 +0000 Subject: [ExI] Arxiv and AI slop Message-ID: On 11/04/2026 16:23, John K Clark wrote: > > On Fri, Apr 10, 2026 at 5:25 AM BillK via extropy-chat wrote: > > > One response is to fight fire with fire by using AI in peer review or to weed out fake papers. > > > I'm not sure I understand how that could work.
It seems to me the important thing is not in determining if a paper was written by an AI or a human, it's in determining if the paper is any good; or, when it comes to stuff that has not yet been peer reviewed, whether there is at least a 10% chance that the paper will turn out to be pretty good. I don't suppose it would take a very advanced AI to weed out papers that contained obvious logical blunders, or those that go on and on about the properties of magical crystals from Atlantis and other such nonsense. I'm not sure how /that/ could work. As far as I'm aware, we still haven't solved the problem of AI systems just making stuff up, so how could we be sure that the 'weeding out' process actually did weed out logical blunders and various nonsense? -- Ben From johnkclark at gmail.com Sat Apr 11 21:12:28 2026 From: johnkclark at gmail.com (John Clark) Date: Sat, 11 Apr 2026 17:12:28 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Sat, Apr 11, 2026 at 11:24 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > * >> Why is it silly to ask if Einstein was intelligent but not silly to >> ask if an AI is intelligent* > > > * > I said "relevant". "Relevant" and "silly" are rather > different concepts. * *The two concepts seem pretty similar to me, "silly" may be a slightly broader term but when it comes to asking "is Mythos intelligent?" I'd say the question is both irrelevant and silly. * > *> Your attempt to put words in my mouth that I did not say in this case, > to claim that I was talking about "silly" when I was talking about > "relevant" - is noted and not appreciated.* > *It breaks my heart to hear you say that, I guess I'll be crying myself to sleep tonight. * > * >>> It is true that he could issue the orders, but they would not have >>> that result. Even acknowledging the flaws they found, he lacks the ability >>> to apply them to my system.* >> >> > * >> I very much doubt that. 
I know for a fact your system is connected to >> the Internet because you're communicating with me right now.* > > > * > It is possible to connect to the Internet without presenting an > attack surface. I could go on in depth about how, but ....* *No you could not! If you could, you'd be world-famous as the greatest security expert the world has ever known. And you'd be even more famous as somebody who had proven that Kurt Gödel was wrong! **Gödel **claimed to have proven that no logical system, like a computer operating system, that is advanced enough to perform arithmetic, can be both complete and consistent, and he also claimed to have proven that no logical system can prove its own consistency, but according to you he must've been wrong. * *And Alan Turing claimed to have proven that in general there's no way to know if your computer program has a bug such that it will run forever without ever stopping and producing an answer, but according to you **Gödel was not the only one who was wrong, **Turing must've been wrong too. * *Methinks you overestimate your skills as a security expert just a tad too much. * > * > It's good that you checked for vulnerabilities but did you find even >> one zero-day vulnerability in a major piece of software and repair it?* > > > > > * > Ever, in my career? I would give an unqualified "yes", except that > none of the software I have worked on would unquestionably be considered > "major".* *Well that's a rather significant difference don't you think? It's one thing to hack a computer game so you get a better score, it's something else to find a vulnerability in the Linux kernel that has existed for decades that would allow any user to obtain root access and gain the same privileges the system administrator has. 
And I have to say, black hat hackers love nothing better than security experts who really believe that they have covered all the bases and are invulnerable to any conceivable cyber attack.* > * > That said, this is both a type of argument from authority* *You are the one claiming to have found a way to make a computer invulnerable to cyber attack, not me. * *> you would have no grounds to question me on this by your own logic* *So I'm supposed to just accept what you say as being the ultimate authority on computer security? Who is using the argument from authority now? * > * > I think if somebody actually pulled that off then that would be bad. >> Apparently you disagree.* > > > > > > > > > > > > > * > I disagree about the extent, not the direction. We agree those would > be bad, but I believe they would be an inconvenience for most people > (though the ATC and F-35 ones may directly cause some injuries and/or > deaths, likely in the hundreds, possibly in the thousands from a > coordinated mass attack on ATC computers designed to cause multiple > simultaneous incidents). If I am reading your words correctly, you believe > that any or all of (nuclear power plants suffer cyberattack, air traffic > control computers suffer cyberattack, NYSE goes down due to cyberattack, > F-35 jets start being essentially shot down by cyberattacks) would be a > civilization-endangering catastrophe with permanent, irrecoverable > consequences. Or have I misread your position there?* > *Not exactly but that's close to my position. I don't think the catastrophes you list in the above by themselves are enough to bring about the collapse of civilization, but they would just be a symptom of something far more general and far, far more profound, the greatest revolution in the way matter is organized since the Cambrian Explosion. 
* > *> People will remember the bombing of Iran.* > *That's small potatoes, it may seem important right now but very soon people will have much more important things to worry about than a few airplanes dropping chemical explosives. * * >> In 10 years (or maybe 5) if people remember the Iran war at all it >> will be as an unimportant footnote, and people may not remember even that >> because in 10 years there may not be any people; that depends on whether AI >> thinks we're worth keeping around. All I know for sure is that in 10 years >> human beings will not be the ones in charge; an AI will be the one >> making existential decisions, not humans.* > > > * > AI does not seem to be on a track to accelerate fast enough to > make that happen by 2036.* > *A month ago I might've agreed with you, but now after I've had a look at what Mythos is capable of, I wouldn't be surprised if it happens by 2031. But one thing I know for sure: whenever the singularity happens it will be a big surprise to most people, that's why it's called a singularity. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Apr 11 21:16:57 2026 From: johnkclark at gmail.com (John Clark) Date: Sat, 11 Apr 2026 17:16:57 -0400 Subject: [ExI] Arxiv and AI slop In-Reply-To: References: Message-ID: On Sat, Apr 11, 2026 at 2:46 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> As far as I'm aware, we still haven't solved the problem of AI systems > just making stuff up* *Nor have we solved the problem of people just making stuff up, just look at He Who Must Not Be Named. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Sat Apr 11 22:02:30 2026 From: pharos at gmail.com (BillK) Date: Sat, 11 Apr 2026 23:02:30 +0100 Subject: [ExI] Arxiv and AI slop In-Reply-To: References: Message-ID: On Sat, 11 Apr 2026 at 19:46, Ben Zaiboc via extropy-chat wrote: > > As far as I'm aware, we still haven't solved the problem of AI systems just making stuff up, so how could we be sure that the 'weeding out' process actually did weed out logical blunders and various nonsense? > -- > Ben > _______________________________________________ There is a server for AI papers - aiXiv is a free, AI- or human-peer-reviewed preprint archive for research authored by Human, Robot and AI Scientists across all scientific fields. Reviewed in this article - Quote: A new preprint server welcomes papers written and reviewed by AI With human peer review struggling to keep pace with machine-generated science, aiXiv enlists bots to help 10 Dec 2025 By Celina Zhao At most scientific publications, papers co-authored by artificial intelligence (AI) are not welcome. At a new open platform called aiXiv, they are embraced. The platform goes all in on AI: It accepts both AI- and human-authored work, uses built-in AI reviewers for baseline quality screening, and guides authors through revisions based on the chatbots' feedback. ---------------------------- AI can do the reviews as well as write the papers! BillK From atymes at gmail.com Sat Apr 11 22:42:09 2026 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 11 Apr 2026 18:42:09 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Sat, Apr 11, 2026 at 5:13 PM John Clark wrote: > On Sat, Apr 11, 2026 at 11:24 AM Adrian Tymes via extropy-chat wrote: >> > It is possible to connect to the Internet without presenting an attack surface. I could go on in depth about how, but .... > > No you could not! If you could, you'd be world-famous as the greatest security expert the world has ever known. Tch. 
It may be a grossly underappreciated set of tricks that few people know how to use, but I'm not the only one who knows them. > And you'd be even more famous as somebody who had proven that Kurt Gödel was wrong! Gödel claimed to have proven that no logical system, like a computer operating system, that is advanced enough to perform arithmetic, can be both complete and consistent, and he also claimed to have proven that no logical system can prove its own consistency, but according to you he must've been wrong. It's got nothing to do with that. > And Alan Turing claimed to have proven that in general there's no way to know if your computer program has a bug such that it will run forever without ever stopping and producing an answer, but according to you Gödel was not the only one who was wrong, Turing must've been wrong too. Or that. It is possible that some pages simply take forever to load. I cut them off when they do. Granted, they fail to load as a result. >> > That said, this is both a type of argument from authority > > You are the one claiming to have found a way to make a computer invulnerable to cyber attack, not me. That doesn't prevent a logical fallacy from being a logical fallacy. >> > you would have no grounds to question me on this by your own logic > > So I'm supposed to just accept what you say as being the ultimate authority on computer security? Who is using the argument from authority now? You are. I'm just pointing out that your argument is invalid. > I don't think the catastrophes you list in the above by themselves are enough to bring about the collapse of civilization, but they would just be a symptom of something far more general and far, far more profound, the greatest revolution in the way matter is organized since the Cambrian Explosion. I will agree that they could be, but: * They could be other things, and * Again, that wasn't the question you originally asked. 
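Adrian's "cut them off when they do" policy amounts to a watchdog timeout: since (as discussed below) no program can decide in advance whether a task will ever finish, one can only bound how long to wait and treat "over budget" the same as "never". A minimal sketch in Python (illustrative names only, not Adrian's actual code):

```python
# Watchdog timeout: bound the wait rather than decide halting.
import threading


def run_with_budget(task, seconds):
    """Return ('done', result) if task() finishes in time, else ('cut_off', None)."""
    outcome = {}

    def runner():
        outcome["value"] = task()

    # Daemon thread: if the task really does run forever, it will not keep
    # the process alive. Note we cannot kill it, only stop waiting for it;
    # a real server would close the underlying socket instead.
    worker = threading.Thread(target=runner, daemon=True)
    worker.start()
    worker.join(seconds)
    if worker.is_alive():
        return ("cut_off", None)
    return ("done", outcome["value"])


def quick_page():
    return "<html>ok</html>"   # stands in for a page that loads promptly


def stuck_page():
    while True:                # stands in for a page that never finishes
        pass
```

`run_with_budget(quick_page, 1.0)` returns the page; `run_with_budget(stuck_page, 0.2)` cuts the task off after the budget expires, exactly as Adrian describes, without ever knowing whether one more second would have sufficed.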
From johnkclark at gmail.com Sun Apr 12 11:30:31 2026 From: johnkclark at gmail.com (John Clark) Date: Sun, 12 Apr 2026 07:30:31 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Sat, Apr 11, 2026 at 6:43 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: >>> *It is possible to connect to the Internet without presenting an attack >>> surface. I could go on in depth about how, but ....* >> >> > * >> No you could not! If you could, you'd be world-famous as the >> greatest security expert the world has ever known.* > > > > * > Tch. It may be a grossly underappreciated set of tricks that > few people know how to use, but I'm not the only one who knows them.* *So we can all relax because you and a few other "grossly underappreciated" geniuses know how to completely solve the problem of computer security? Baloney! * * >> And Alan Turing claimed to have proven that in general there's no way >> to know if your computer program has a bug such that it will run forever >> without ever stopping and producing an answer, but according to you Gödel >> was not the only one who was wrong, Turing must've been wrong too.* > > > *> It is possible that some pages simply take forever to load. I cut them > off when they do. Granted, they fail to load as a result. * *The trouble with that is that Alan Turing proved in general there's no way to know if a computer program will take "forever to load"; perhaps if you had been just a little more patient and had waited one more second before you cut it off the program would've finished loading. As I said before, in general there's no way to know if your computer program has a bug such that it will run forever without ever completing its assigned task and stopping. 
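The theorem John keeps appealing to fits in a few lines of code. `halts` below is a hypothetical oracle, not a real function; assuming it exists is exactly what produces the contradiction:

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical oracle:
# the whole point is that no such function can exist.

def halts(program, argument):
    """Suppose this returned True iff program(argument) eventually stops."""
    raise NotImplementedError("no such total decider can exist")


def contrary(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:        # oracle said "halts" -- so loop forever
            pass
    return "halted"        # oracle said "runs forever" -- so stop at once

# Feeding contrary to itself, contrary(contrary) halts exactly when
# halts(contrary, contrary) says it doesn't, so a correct, always-answering
# `halts` is impossible. A timeout can therefore only ever be a heuristic
# cut-off, never a proof that the page would not eventually have loaded.
```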
* * >> You are the one claiming to have found a way to make a computer >> invulnerable to cyber attack, not me.* > > > > * > That doesn't prevent a logical fallacy from being a logical fallacy.* *Of course it doesn't, but what does that have to do with the price of eggs in China? * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sun Apr 12 13:23:01 2026 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 12 Apr 2026 15:23:01 +0200 Subject: [ExI] The early days of a better Spaceflight Revolution. Message-ID: The early days of a better Spaceflight Revolution. Christmas 1968, Easter 2026: around the Moon. https://www.turingchurch.com/p/the-early-days-of-a-better-spaceflight-revolution From atymes at gmail.com Sun Apr 12 14:51:31 2026 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 12 Apr 2026 10:51:31 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: On Sun, Apr 12, 2026 at 7:31 AM John Clark wrote: > On Sat, Apr 11, 2026 at 6:43 PM Adrian Tymes via extropy-chat wrote: >>>> >>> It is possible to connect to the Internet without presenting an attack surface. I could go on in depth about how, but .... >> >>> >> No you could not! If you could, you'd be world-famous as the greatest security expert the world has ever known. >> >> > Tch. It may be a grossly underappreciated set of tricks that few people know how to use, but I'm not the only one who knows them. > > So we can all relax because you and a few other "grossly underappreciated" geniuses know how to completely solve the problem of computer security? Baloney! 1) I never said my solution was complete, as in applicable for everyone. My solution suffices for me, with my limited needs and uses. It wouldn't scale to everyone. 2) As you recognized, it's a few. Not nearly enough to protect everyone. 3) I said "It is possible". "Possible" != "done". 
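For what it's worth, one family of tricks in this space (purely illustrative; Adrian has not said this is his setup) is to run no listening services at all: make outbound connections only, against a strict allowlist, with bounded reply sizes and a hard timeout on every read. Hostnames and limits below are made up for the example:

```python
# Outbound-only networking sketch: no listening sockets means almost no
# inbound attack surface. Every byte received is treated as hostile.
import socket

ALLOWED_HOSTS = {"example.com"}     # outbound allowlist (illustrative)
MAX_REPLY_BYTES = 64 * 1024         # refuse oversized replies outright


def fetch_head(host, port=80, timeout=5.0):
    """Outbound-only HTTP HEAD with an allowlist, a size cap, and a deadline."""
    if host not in ALLOWED_HOSTS:
        raise ValueError("host not on allowlist")
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)       # bounds every recv, not just the connect
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode("ascii") + b"\r\n\r\n")
        chunks, total = [], 0
        while total < MAX_REPLY_BYTES:
            data = s.recv(4096)     # socket.timeout cuts off a stalled peer
            if not data:
                break
            chunks.append(data)
            total += len(data)
        return b"".join(chunks)
```

None of this makes a machine invulnerable; it removes the inbound half of the problem and bounds the outbound half, which is roughly the distinction the two sides of this thread keep talking past.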
>>> >> And Alan Turing claimed to have proven that in general there's no way to know if your computer program has a bug such that it will run forever without ever stopping and producing an answer, but according to you Gödel was not the only one who was wrong, Turing must've been wrong too. >> >> > It is possible that some pages simply take forever to load. I cut them off when they do. Granted, they fail to load as a result. > > The trouble with that is that Alan Turing proved in general there's no way to know if a computer program will take "forever to load"; perhaps if you had been just a little more patient and had waited one more second before you cut it off the program would've finished loading. As I said before, in general there's no way to know if your computer program has a bug such that it will run forever without ever completing its assigned task and stopping. Yeah. So it's possible, even likely, that my security practices have cut off some content that would otherwise have successfully run - and I'll never know precisely what. It is also possible, even likely, that some of the cut-offs I have done have halted what would otherwise have run forever (or at least until I shut off the computer in question) - and I'll never know precisely which ones. I do observe that I have never yet permanently lost a computer that I was operating to cyberattack. Whether that's just because none have ever seriously been attacked is not completely provable, though I do have logs showing that at least some classes of attack have been attempted (in large volume) against some of my systems. For example, the error logs of my Web sites contain a great many entries of people attempting to access common unsecured scripts - which simply don't exist on my Web sites. But to continue that example, there's a timeout on connections - call it X seconds. Have there been cases where a connection could have completed if I'd allowed X+1 seconds? Possibly, and I'll never know. 
But no attacker has yet been able to hold open a connection indefinitely, thus indefinitely denying the system the resources associated with that connection - again, whether or not that's just because no attacker has ever tried, and I may never know if that's the case. From jasonresch at gmail.com Sun Apr 12 16:20:53 2026 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 12 Apr 2026 12:20:53 -0400 Subject: [ExI] Holy cow! In-Reply-To: References: Message-ID: To add some context I think is missing from the conversation: To John's point: The halting problem implications are more severe than whether or not a task will finish; they include not being able to know whether a given block of code will ever be reached. So it is not just whether a task finishes or not, but whether some function will be invoked or not, whether or not the machine will accept arbitrary inputs and treat them as code, etc. To Adrian's point: There is much that can be done to minimize an attack surface, such as only connecting to trusted machines, validating input, using firewalls, activating the NX (no-execute) bit to prevent arbitrary code execution, etc. As to the halting problem implications, note that the impossibility holds for the general case (no single procedure can predict all arbitrary programs); the key word is general. There are software validation tools that can, for limited specific cases, prove correctness by brute-force iteration over every possible program state. That said, any modern operating system is far too complex a beast to run correctness provers against. Even if you were to only run one piece of proven software on some server, how do you know there is not an exploitable bug in the DNS, NTP, TCP/IP stack, firewall, TLS library, SSH, or any of the hundreds of other software libraries on which the server software and operating system depend? I think the Battlestar Galactica remake gets this right. They learned their machine enemy could remotely hack and disable their military ships. 
To counteract this tactic, the humans had to strip all networking from their computers. Jason On Sun, Apr 12, 2026, 10:52 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sun, Apr 12, 2026 at 7:31 AM John Clark wrote: > > On Sat, Apr 11, 2026 at 6:43 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >>>> >>> It is possible to connect to the Internet without presenting an > attack surface. I could go on in depth about how, but .... > >> > >>> >> No you could not! If you could, you'd be world-famous as the > greatest security expert the world has ever known. > >> > >> > Tch. It may be a grossly underappreciated set of tricks that few > people know how to use, but I'm not the only one who knows them. > > > > So we can all relax because you and a few other "grossly > underappreciated" geniuses know how to completely solve the problem of > computer security? Baloney! > > 1) I never said my solution was complete, as in applicable for > everyone. My solution suffices for me, with my limited needs and > uses. It wouldn't scale to everyone. > > 2) As you recognized, it's a few. Not nearly enough to protect everyone. > > 3) I said "It is possible". "Possible" != "done". > > >>> >> And Alan Turing claimed to have proven that in general there's no > way to know if your computer program has a bug such that it will run > forever without ever stopping and producing an answer, but according to you > Gödel was not the only one who was wrong, Turing must've been wrong too. > >> > >> > It is possible that some pages simply take forever to load. I cut > them off when they do. Granted, they fail to load as a result. > > > > The trouble with that is that Alan Turing proved in general there's no > way to know if a computer program will take "forever to load", perhaps if > you had been just a little more patient and had waited one more second > before you cut it off the program would've finished loading. 
As I said > before, in general there's no way to know if your computer program has a > bug such that it will run forever without ever completing its assigned task > and stopping. > > Yeah. So it's possible, even likely, that my security practices have > cut off some content that would otherwise have successfully run - and > I'll never know precisely what. > > It is also possible, even likely, that some of the cut-offs I have > done have halted what would otherwise have run forever (or at least > until I shut off the computer in question) - and I'll never know > precisely which ones. > > I do observe that I have never yet permanently lost a computer that I > was operating to cyberattack. Whether that's just because none have > ever seriously been attacked is not completely provable, though I do > have logs showing that at least some classes of attack have been > attempted (in large volume) against some of my systems. For example, > the error logs of my Web sites contain a great many entries of people > attempting to access common unsecured scripts - which simply don't > exist on my Web sites. > > But to continue that example, there's a timeout on connections - call > it X seconds. Have there been cases where a connection could have > completed if I'd allowed X+1 seconds? Possibly, and I'll never know. > But no attacker has yet been able to hold open a connection > indefinitely, thus indefinitely denying the system the resources > associated with that connection - again, whether or not that's just > because no attacker has ever tried, and I may never know if that's the > case. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Sun Apr 12 16:34:24 2026 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 12 Apr 2026 12:34:24 -0400 Subject: [ExI] My book on the Physics of Bitcoin, a complex system with scaling properties In-Reply-To: References: Message-ID: Congratulations Giovanni! And I see you reached #1 in its category. That's amazing! I will read it with interest. Jason On Sat, Apr 11, 2026, 1:01 AM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hello everybody, > I wrote a book on my research on the Physics of Bitcoin, understood as a > complex system with scaling properties. > > https://www.amazon.com/dp/B0GQSYF9PR > > I also wrote a more scientific article on this here: > > https://zenodo.org/records/19387099 > > Giovanni Santostasi > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: