From hkeithhenson at gmail.com Sat Nov 1 02:07:39 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 31 Oct 2025 19:07:39 -0700 Subject: [ExI] Non-rational humans can have benefits In-Reply-To: <3f4f21a3-3193-421e-84e8-ce9188cdfda8@zaiboc.net> References: <3f4f21a3-3193-421e-84e8-ce9188cdfda8@zaiboc.net> Message-ID: I think you have seen my paper on genetic selection for war. There are situations where rational for genes is *not* rational for the person with the genes. Being in conflict with your genes is weird, but logical. Genes, having the long-term view, usually win. Keith https://jbh.journals.villanova.edu/index.php/JBH/article/view/3166 It took two years to get this published. The bit about bonobos seems to be a new observation. On Fri, Oct 31, 2025 at 2:28?PM Ben Zaiboc via extropy-chat wrote: > > On 31/10/2025 19:04, BillK wrote: > > Humans are emotional creatures, not logical machines. > > The floods of emotions in humans have survival benefits. That is > > why they developed and persist. > > Cold logic would "dehumanise" humans and produce bad results. > > That is why enemies have to be thought of as sub-human to justify killing. > > > > Benefits of non-rationality - > > Relationships, love, loyalty, and caring persist. > > Creativity and Innovation are often non-logical. > > Different moral systems that benefit their society are non-logical. > > Quick decisions in times of stress are intuitive. > > Hope and optimism keep humans going when, logically, they should just give up. > > > > And so on...... Being human seems preferable to being a "rational agent". > > Dang, I didn't realise you can't have both. > Oh, wait... it's a trick, isn't it? > A bunch of 'em. > > Logic isn't cold. > Emotions don't normally flood (but they can be cold). > Logic definitely has survival benefits too. > Intuition doesn't have to be non-logical. > > And so on....... Being human includes being a rational agent. Sometimes. > > And I propose reversing the terms in the first sentence, to: "Humans are > emotional machines, not logical creatures". With a minimum of two > exclamation marks. > Just to avoid any claims of discrimination or bias. We don't want people > marching in the streets waving banners saying "Creatures are machines too!!" > > This looks like it was written by a bored journalist, inspired by the > tired old 'nature vs. nurture' so-called argument, wanting to generate > an artificial 'controversy'. > > BillK, are you a bored journalist?!! > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Sat Nov 1 02:58:35 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 31 Oct 2025 19:58:35 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: If you want to understand this topic, read this https://en.wikipedia.org/wiki/Rare-earth_element It is long but fairly comprehensive. Keith On Fri, Oct 31, 2025 at 2:35?PM spike jones via extropy-chat wrote: > > lreaqdy > > > > From: John Clark > Subject: Re: [ExI] not that rare earth (part 2 of at least 2) > > > > > > I am disappointed in us. 
More specifically, I am disappointed in me. Reason: we didn?t figure out sooner the reality on rare earth magnets. I had all the ?elements? in place to have seen thru the nonsense sooner, but it wasn?t until last night I figured it out while looking for the recipes for high efficiency magnets, or failing that, just how much of the Chinese monopoly materials are used in high efficiency magnets. They generally won?t tell, for those are trade secrets. But I noticed Tesla phased them out, which tells me they designed around them (that is what they did, by going up the period one row for more available materials (which turned out to work as well (and we already know why (going up one row in the period is another element which is chemically similar (especially when used as a grain-size moderator in an alloy (in which the element itself does not form a chemical bond with the iron (as in magnets (we knew all this.)))))) > > > > > > ? > > > > >?69% of rare earth mining comes from China, and more importantly 92% of rare earth refining occurs in China?. > > > > Ja, but for some applications, it doesn?t need to be ?refined? in the way we think of the term refined. Down there on the left end of the lanthanide period are a group of elements that aren?t used for much of anything, but they are good for magnets. If we think of refining as separating them into high purity individual elements, we know that is expensive and difficult because the differential solubilities are very low: they dissolve in the same solvents at levels close to each other, so it takes a lot of cycles of time-consuming money-consuming purification cycles. > > > > The Chinese don?t tend to refine stuff the way we do, particularly since they understand it isn?t necessary. A magnet can use a mixture of lanthanides which can vary slightly, and the final product is the same: the presence of a few of those big dumb elements (the lanthanides are a perfect examples of big dumb elements) moderates crystal size in iron and prevents transition from body centered cubic to face centered cubic for instance. > > > > When the comparison is made in modern magnet technology, we see too often the irrelevant comparison between iron magnets and iron/lanthanide alloy magnets. The alloys work a lot better. But where is the comparison between iron/lanthanide and iron/alkali magnets? Why is that so hard to find? Reason: every company using those things keeps it as a trade secret. We hadn?t heard that Tesla doesn?t use rare earth magnets anymore, and that they damn well had designed around them without any noticeable loss of efficiency. > > > > As to my comment on refining: metallurgy is black magic to this day. We have no equations that take us straight to the best mixture of stuff gives what characteristics. We just try things and see what happens. Tesla apparently just tried alternatives in the alkali metal row, saw that it worked the same as the lanthanide row for all the same reasons, then quietly stopped using the more expensive materials. > > > > That whole notion that China has the world by the ass because they control rare earth metals didn?t sound right to me the first time I heard it, but I didn?t know why. Now I know why: those aren?t strictly necessary. > > > > > > > > It took China many years to reach that level of production, and the same will be true for the US. > > > > > > But it isn?t necessary. > > > > >? not what caused China to shut off exports of rare earth elements to the USA, it was because of He Who Must Not Be Named idiotic tariffs. 
> > > > Now we are getting close to the truth: the most important current application of rare earth elements is in providing arguments for free trade. These arguments work even if the materials upon which they are based are not necessary. Political considerations lead to bad science and bad engineering practice. This is not to say we have plenty of recent examples of political considerations leading to bad science and bad engineering practice. > > > > OK retract that last sentence. This damn well IS to say we have plenty of recent examples of political considerations leading to bad science and bad engineering practice. > > > > > The Mountain Pass Rare Earth Mine about a km west of Wheaton Springs California is active now. I can see there is a refinery there. > > > > >?That's nice and I'm glad their stock price? > > > > I would be a lot more glad about it if I owned some of it. But now I won?t buy it because I foresee a fall in demand for its product, which is why it has been on intermittent operation for the past half century at least. We don?t really need those materials. > > > > . > > > > >?And that is one reason why China's BYD, not Elon Musk's Tesla, now makes the world's most technologically sophisticated electric vehicles? > > > > As they tell us in their advertisements, ja. They insist that they are more sophisticated than Elon Musk?s Teslas, in their advertising. > > > > > > > > > Do we really need rare earth elements to get that extra few percent efficiency in permanent magnets? > > > > Yes. Not only are rare earth motors more efficient at converting electricity to kinetic energy?John K Clark > > > > > > John what are we comparing to please? More efficient than iron magnets? Sure, true but irrelevant. Compare the rare earth magnets to alternative material magnets, such as alkali metal magnets, and notice how difficult it is to find that, for that information is held tightly as trade secrets. These companies all like to compare with iron magnets. We see the rare earth magnets and the alkali metal magnets are way more efficient, but it isn?t clear the rare earth magnets are better than alkali metal magnets. > > > > The real problem here is that we are not asking the right questions. Sure free trade is a good thing, it lifts the masses out of poverty etc. But it doesn?t answer the question of can alkali metals one row up the period be substituted for rare earth materials? Looks to me like they can be and have been, without easily measurable loss of performance. > > > > I am disappointed in us, specifically me, for not recognizing this sooner. I am mildly hip with chemistry. I have even been is a situation where a subcontractor jacked us up for a special sauce catalyst they were using, which relies on substituting one rare earth element for another, recognizing they behave the same way, then claiming it was special intellectual property owned by them. My only flimsy excuse is that it happened 30 years ago, we worked around it and on we went. > > > > Conclusion: China?s monopoly on rare earth elements has been greatly exaggerated. We don?t need to recycle wind turbines to get the materials. We can use substitutes up one period. 
> > > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Sat Nov 1 04:00:01 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Oct 2025 21:00:01 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> -----Original Message----- From: Keith Henson Subject: Re: [ExI] not that rare earth (part 2 of at least 2) If you want to understand this topic, read this https://en.wikipedia.org/wiki/Rare-earth_element It is long but fairly comprehensive. Keith OK cool thx Keith. An EV uses 1 to 2 kg. The spot price for neodymium is 150 bucks a kg, up from 75 kg seven years ago. So an additional 75 to 150 bucks per non-Musk EV. Elon mandated those be eliminated, so Tesla doesn't use them. What the Wiki article doesn't say is what happens if we substitute molybdenum for the neodymium, which is what I think Tesla did: they are using MoFeB magnets. They aren't saying how they did it. Samarium is cheap, even now. I am not sure why. Perhaps it's a byproduct of something else. Wind turbines: those use a lot of material, 600 kg of REEs per turbine. So those are effected a lot. But the real cost of wind energy is in power storage anyway. In any case, if the turbines give away a half a percent in efficiency, that surely wouldn't be a showstopper. They have bigger worries: the environmentalists might try to destroy them to save birds. Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. spike From johnkclark at gmail.com Sat Nov 1 11:14:14 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Nov 2025 07:14:14 -0400 Subject: [ExI] =?utf-8?q?Can_a_Start-Up_Make_Computer_Chips_Cheaper_Than_?= =?utf-8?q?the_Industry=E2=80=99s_Giants=3F?= Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. Can a Start-Up Make Computer Chips Cheaper Than the Industry?s Giants? Substrate, a San Francisco company, is trying to take on powerhouses like the Dutch company ASML. https://www.nytimes.com/2025/10/28/technology/can-a-start-up-make-computer-chips-cheaper-than-the-industrys-giants.html?unlocked_article_code=1.x08.A0ge.GdHGRbF4eq7a&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Nov 1 13:31:16 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 1 Nov 2025 08:31:16 -0500 Subject: [ExI] reason an emotion Message-ID: Here is an extremely interesting study on those two. A man somehow had a brain injury that severed the connection between his amygdala and his forebrain. This cuts off emotions from the decision-making process. Result: decisions are made without emotion. His IQ and overall intelligence levels were unchanged. Thus he had only reason from the forebrain to run his life. He could not make the slightest decision. He was given intellectual work but when he finished he could not move to the next problem. He could not decide if he wanted a cup of coffee. His decision-making process was frozen. 
So this is what happens when we have only reason to make decisions. So our conclusion has to be that emotions are involved in every decision we make, no matter how trivial. What a validation of Hume! bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 1 13:31:47 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Nov 2025 09:31:47 -0400 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: On Fri, Oct 31, 2025 at 5:34?PM wrote: > >> * > I noticed Tesla phased them out, which tells me they designed around > them (that is what they did, by going up the period one row for more > available materials* > *On a typical chart of the elements they are arranged according to their chemical properties, that's why you'll see that all the rare earth ones are in a group of their own, they all have very similar chemical properties, which is why in their natural ore they are all mixed together, and why it is so difficult to refine them out into separate elements. But Mendeleev's chart only takes an element's chemical properties into account, NOT an element's magnetic or optical properties, and those are the very properties that make some of the rare earths so valuable. In addition to magnets, the rare earth elements are also of vital importance in fiber-optic communications, lasers, LED lighting and displays, and night-vision goggles.* *>>?69% of rare earth mining comes from China, and more importantly 92% of >> rare earth refining occurs in China?.* > > > > > *Ja, but for some applications, it doesn?t need to be ?refined? in the way > we think of the term refined. * > *I can't think of any application in which the rare earths don't need to be refined out into separate elements, if there is such an application it must not be a very important one. * * > A magnet can use a mixture of lanthanides. A magnet can use a mixture > of lanthanides which can vary slightly, and the final product is the same* *No. The purity requirements for rare earth magnets are quite stringent, contamination with other lanthanides in the crystal structure of the magnet seriously degrades magnetic properties, although Praseodymium is sometimes deliberately added in very precise amounts to improve thermal stability. Depending on how good a magnet you need a Neodymium magnet, the most popular type, needs to be between 95 and 99% pure. A Samarium-Cobalt magnet, the second most popular type, needs even greater purity, 99+%.* *>> **Do we really need rare earth elements to get that extra few percent >> efficiency in permanent magnets? * > > > >> *>>Yes. Not only are rare earth motors more efficient at converting >> electricity to kinetic energy?John K Clark* > > > > *> John what are we comparing to please? More efficient than iron > magnets? Sure, true but irrelevant. 
Compare the rare earth magnets to > alternative material magnets, such as alkali metal magnets,* > *Nobody has used iron magnets in motors or generators since the 1930s, until just a few years ago when somebody said "magnet" they were almost always talking about an Alnico Magnet, an alloy of Iron, aluminum, nickel and cobalt that was discovered by a Japanese scientist about 95 years ago and was revolutionary at the time. They are not nearly as strong but they are superior to rare earth magnets in one respect, they can get as hot as 800? C and still retain their magnetic properties. * *> Compare the rare earth magnets to alternative material magnets, such as > alkali metal magnets* *T**he entire column of Alkali metals on the periodic chart are all extremely chemically reactive and do not form permanent magnets, they are diamagnetic which means they are weakly repelled, not attracted, by a magnetic field .* > *> Conclusion: China?s monopoly on rare earth elements has been greatly > exaggerated. * > *I don't think so. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 1 13:47:03 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Nov 2025 09:47:03 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Fri, Oct 31, 2025 at 5:44?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Do you not believe in objective truth?* *Objective truth exists but that doesn't mean everything is either right or wrong. I happen to like sardines and you may not, but that doesn't mean that one of us is right and the other one is wrong because there is no disputing matters of taste. On the other hand if one of us says that a bridge will collapse if somebody attempts to use it and the other says it will not then one of us must be right and the other one must be wrong. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Nov 1 13:55:11 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 1 Nov 2025 08:55:11 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: Given that all of our actions originate in the unconscious mind. there is no way a conscious mind can understand where its ideas and actions come from, much less understand them. The conscious mind may think it is in charge but it is just an observer (which nevertheless can understand that a mistake has been made and the idea or action needs redoing.) You want to understand our minds? Make the workings of the unconscious conscious - and that's just a start. Why did the impulse go to point B when it left point A rather than to point C? And then trace all the points in between entering the unconscious and resulting in some idea or action. And explain each one. You can't even get started until you can truly access the unconscious. Give Freud credit- he tried to do this. 
bill w On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On 31/10/2025 21:34, Jason Resch wrote: >> >> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On 31/10/2025 12:28, John K Clark wrote: >>> >>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>> >>> >>> You've completely lost me there, but I have two observations: There's no >>> such thing as 'perfect understanding' except as a nebulous theoretical >>> concept, and I don't think a one-to-one correspondence would be enough to >>> understand something, or even be a relevant concept. We use large parts of >>> our brains to process information from small parts of the world. You need a >>> lot more than a single neuron to figure out what's going on in a single >>> neuron. >>> >>> Oh, three observations. We don't process data instantaneously. The same >>> parts of the brain can be used to process information about something >>> repeatedly over time, using feedback loops etc. >>> >> >> Computers and algorithms are constrained by two resources space (i.e. >> memory), and time (i.e. CPU cycles). While some algorithms allow for >> time/space trade offs to be made in certain circumstances, in general there >> is some shortest description of the brain (in terms of bits) for which no >> shorter representation is possible (regardless of how much additional >> computation is thrown at it). >> >> So while the same brain may compute many times with the same neurons, >> this addresses only the time component of simulating a brain. There is >> still the matter of space. >> >> >> Ah, ok. I was talking about understanding the brain, not simulating it. >> Modelling something is not the same as understanding it. Yes, they help >> each other, but they aren't the same thing. >> > > I think understanding a thing is equivalent to being able to form an > accurate mental model of it. With greater levels of understanding > corresponding to more accurate models. > > What do you mean by the word understand? > > > >> >> The analogy here is that a computer with 1 MB of RAM can't emulate a >> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In >> fact there's no amount of additional time that will permit the memory >> deficient computer to emulate the computer with 1 GB of memory, for the >> simple reason that it will run out of variables to represent all the >> possible values in the memory addresses of the computer with a greater >> memory. >> >> >> I'm not sure that this is true. Are you assuming no swap disk, or other >> similar non-RAM storage? >> > > Swap disks are a means to extend available RAM. > > > Because then I'm sure you're right, but that's a pretty artificial >> restriction. 
>> The analogy there would be a human with a notepad maybe, or a database, >> or a bunch of other humans, an AI, etc. >> >> So we're back to: A single human brain /on it's own/ can't understand a >> human brain in any great detail. Of course. >> > > I think that was the original question: can any mind ever fully understand > its own operation. > > Jason > > But that's a pretty artificial restriction. >> >> -- >> Ben >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 1 14:38:08 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 07:38:08 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: <007f01dc4b3d$24958820$6dc09860$@rainier66.com> From: John Clark ? >?I can't think of any application in which the rare earths don't need to be refined out into separate elements, if there is such an application it must not be a very important one? John K Clark If such applications exist, we wouldn?t know. Those would be held as trade secrets. An example would be a company that uses tons of material in which a foreign country, an unreliable trade partner held a monopoly. That company would look for applications of mixtures of rare earth elements, where everything easily separated is gone. Then the remaining mixture, which is easy to get and doesn?t cost much, might be used in high efficiency magnets. Since the company uses a lot of those, it would be better for them to hold that information as a trade secret rather than try to protect it with a patent. When it comes to recipes for high efficiency magnets, those are not effectively protected by patents anyway. It is easy enough to come up with a one-off recipe or add a pinch of this and a smattering of that, then claim it is a different recipe. Recognizing all this, Musk found a way to phase out rare earth elements without losing significant performance in his magnets. Once you start pondering why rare earth elements would make a better magnet, it all makes perfect sense. Those big metal atoms can ionize either way, for they tend to be ambivalent toward their outermost electrons. They hold domains in place. One can theorize that the actinide period would work even better than the lanthanides, but those have a bad habit of being radioactive. But we can go up the group one period if that element is cheaper and more easily available. How much is Tesla worth? This I can calmly assure you: Elon wouldn?t risk the entire company on the continuing availability of anything on which China has a monopoly. The US military wouldn?t base their technology on any material in which it didn?t hold a fifty year stockpile. Conclusion: the notion that the world?s industrial powers squirm helplessly in the ruthless talons of Communist China is nonsense. That absurdity has been intentionally exaggerated for economic reasons. 
There is a bright side to it however. Speculators who understand the ideas or theories are wrong can still make huge profits off of absurd notions. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 1 14:48:55 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 07:48:55 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <007f01dc4b3d$24958820$6dc09860$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> Message-ID: <008601dc4b3e$a6291cc0$f27b5640$@rainier66.com> From: spike at rainier66.com ? >?How much is Tesla worth? About 1.5 trillion dollars. >?This I can calmly assure you: Elon wouldn?t risk the entire company on the continuing availability of anything on which China has a monopoly? If they couldn?t find alternative materials, the requisite rare earth materials could be stockpiled, since that stuff never spoils. A 1.5 trillion dollar company could stockpile a fifty year supply of every rare earth material as a speculative move, should the supplier nation decide it wants all its REEs. >?.The US military wouldn?t base their technology on any material in which it didn?t hold a fifty year stockpile? The military has people whose job it is to think about these things, then arrange to not have it become a threat. The military also has arbitrarily large storage facilities where materials can be stockpiled and kept secure indefinitely. We can get what we need or already have it, or we can design around it. Speculators have promoted the idea that China has a big important monopoly. I am not buying it, not for a minute. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Nov 1 14:50:32 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 1 Nov 2025 10:50:32 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Given that all of our actions originate in the unconscious mind. there is > no way a conscious mind can understand where its ideas and actions come > from, much less understand them. > Great point! The conscious mind may think it is in charge but it is just an observer > (which nevertheless can understand that a mistake has been made and the > idea or action needs redoing.) > > You want to understand our minds? Make the workings of the unconscious > conscious - and that's just a start. Why did the impulse go to point B > when it left point A rather than to point C? And then trace all the points > in between entering the unconscious and resulting in some idea or action. > And explain each one. > I have doubts that such a thing is possible from the perspective of the mind in question. Can any brain ever feel and know what each of its neurons is doing? Can those corresponding neurons feel and know what every one of its constituent atoms is doing? Given Turing universality, it's provable that computer software can't know about its underlying hardware. 
If our minds are a kind of software which can be simulated by a computer, then this same implication would apply to us. There would be a layer of abstraction of one's underlying implementation which high levels cannot penetrate. > You can't even get started until you can truly access the unconscious. > Give Freud credit- he tried to do this. > He deserves credit for the attempt, but I think there are limits to a mind's ability to introspect. "Our thoughts seem to run about in their own space, creating new thoughts and modifying old ones, and we never notice any neurons helping us out! But that is to be expected. We can?t. [?] We should remember that physical law is what makes it all happen?way, way down in neural nooks and crannies which are too remote for us to reach with our high-level introspective probes." ? Douglas Hofstadter in ?G?del, Escher, Bach? (1979) Jason > > > On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On 31/10/2025 21:34, Jason Resch wrote: >>> >>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On 31/10/2025 12:28, John K Clark wrote: >>>> >>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>> >>>> >>>> You've completely lost me there, but I have two observations: There's >>>> no such thing as 'perfect understanding' except as a nebulous theoretical >>>> concept, and I don't think a one-to-one correspondence would be enough to >>>> understand something, or even be a relevant concept. We use large parts of >>>> our brains to process information from small parts of the world. You need a >>>> lot more than a single neuron to figure out what's going on in a single >>>> neuron. >>>> >>>> Oh, three observations. We don't process data instantaneously. The same >>>> parts of the brain can be used to process information about something >>>> repeatedly over time, using feedback loops etc. >>>> >>> >>> Computers and algorithms are constrained by two resources space (i.e. >>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>> time/space trade offs to be made in certain circumstances, in general there >>> is some shortest description of the brain (in terms of bits) for which no >>> shorter representation is possible (regardless of how much additional >>> computation is thrown at it). >>> >>> So while the same brain may compute many times with the same neurons, >>> this addresses only the time component of simulating a brain. There is >>> still the matter of space. >>> >>> >>> Ah, ok. I was talking about understanding the brain, not simulating it. >>> Modelling something is not the same as understanding it. Yes, they help >>> each other, but they aren't the same thing. 
>>> >> >> I think understanding a thing is equivalent to being able to form an >> accurate mental model of it. With greater levels of understanding >> corresponding to more accurate models. >> >> What do you mean by the word understand? >> >> >> >>> >>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In >>> fact there's no amount of additional time that will permit the memory >>> deficient computer to emulate the computer with 1 GB of memory, for the >>> simple reason that it will run out of variables to represent all the >>> possible values in the memory addresses of the computer with a greater >>> memory. >>> >>> >>> I'm not sure that this is true. Are you assuming no swap disk, or other >>> similar non-RAM storage? >>> >> >> Swap disks are a means to extend available RAM. >> >> >> Because then I'm sure you're right, but that's a pretty artificial >>> restriction. >>> The analogy there would be a human with a notepad maybe, or a database, >>> or a bunch of other humans, an AI, etc. >>> >>> So we're back to: A single human brain /on it's own/ can't understand a >>> human brain in any great detail. Of course. >>> >> >> I think that was the original question: can any mind ever fully >> understand its own operation. >> >> Jason >> >> But that's a pretty artificial restriction. >>> >>> -- >>> Ben >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Nov 1 17:46:37 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 1 Nov 2025 10:46:37 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> Message-ID: On Fri, Oct 31, 2025 at 9:00?PM wrote: > > -----Original Message----- > From: Keith Henson > Subject: Re: [ExI] not that rare earth (part 2 of at least 2) > > If you want to understand this topic, read this > > https://en.wikipedia.org/wiki/Rare-earth_element > > It is long but fairly comprehensive. > > Keith > > OK cool thx Keith. > > An EV uses 1 to 2 kg. The spot price for neodymium is 150 bucks a kg, up from 75 kg seven years ago. > > So an additional 75 to 150 bucks per non-Musk EV. Elon mandated those be eliminated, so Tesla doesn't use them. > > What the Wiki article doesn't say is what happens if we substitute molybdenum for the neodymium, which is what I think Tesla did: they are using MoFeB magnets. They aren't saying how they did it. Samarium is cheap, even now. I am not sure why. Perhaps it's a byproduct of something else. > > Wind turbines: those use a lot of material, 600 kg of REEs per turbine. 
So those are effected a lot. But the real cost of wind energy is in power storage anyway. I am not sure you can make that case for California. https://www.caiso.com/todays-outlook/supply#section-current Peak wind is around a GW, solar is around 20 times that much. Not sure that Mo makes good magnets. Never heard of it being used for that. But what do I know? Keith In any case, if the turbines give away a half a percent in efficiency, that surely wouldn't be a showstopper. They have bigger worries: the environmentalists might try to destroy them to save birds. > > Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. > > spike > > > > From spike at rainier66.com Sat Nov 1 18:26:13 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 11:26:13 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> Message-ID: <00d801dc4b5d$019a3f80$04cebe80$@rainier66.com> -----Original Message----- From: Keith Henson ... > >>... Wind turbines: those use a lot of material, 600 kg of REEs per turbine. So those are effected a lot. But the real cost of wind energy is in power storage anyway. >...I am not sure you can make that case for California... Keith That depends on how much of the power supply already has intermittent non-predictables. If we start with a local grid with no intermittent sources and add one wind turbine, that one doesn't need storage, for it always has a market any time the wind is blowing and always has sufficient generation to supply the grid when it isn't. Once the percentage of power supplied by wind and solar goes above baseline minimum, there will be times when the wind is blowing and the sun is shining, but there is insufficient demand for all the power. At that point it starts to make sense to add power storage. PG&E claims that percentage is around 18%. If renewable non-predictable (wind and solar) together are at or below 18%, they say they can handle it without storage. That part makes sense. Another part that makes sense, told to us by the power company: the best wind sites are already built out: Altamont Pass and Tehachapi Pass are the two best ones. They form a wind funnel which keeps it breezy most of the time thru there. Great places for wind power. They have wind power already. After the best sites are built out, later wind turbines produce less, and in some cases cost more. Reasoning: if the wind isn't howling thru there constantly, the land is worth more. spike From ben at zaiboc.net Sat Nov 1 21:42:36 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:42:36 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: <8fc30c59-612a-405d-a0fd-d73913c63749@zaiboc.net> On 01/11/2025 13:32, Jason Resch wrote: > On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat > wrote: > > On 31/10/2025 19:04, Jason Resch wrote: >> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a >> perfect morality consists of. And it too, provides a definition >> of what morality is, and likewise provides a target to aim towards. 
>> >> Ben Wrote: As different intelligent/rational agents have >> different experiences, they will form different viewpoints, >> and come to different conclusions about what is right and not >> right, what should be and what should not, what they want and >> what they don't, just like humans do. >> >> The point of the video and article is that desires are based on >> beliefs, and because beliefs are correctable then so are desires. >> There is only one "perfect grasp" and accordingly, one true set >> of beliefs, and from this it follows one most-correct set of >> desires. This most correct set of desires is the same for >> everyone, regardless of from which viewpoint it is approached. > Nope. This is nonsense. Just about every assertion is wrong. The > very first sentence in the abstract is false. And the second. And > the third. So the whole thing falls apart. Desires are not based > on beliefs, they are based on emotions. The example of 'wanting to > drink hot mud' is idiotic. Just because the cup turns out to > contain mud doesn't invalidate the desire to drink hot chocolate. > > I think you are misinterpreting the example. It is the desire to drink > the contents of the cup is what changes in response to new information. I wouldn't have put it as 'desire to drink the contents of the cup', when the desire is to drink hot chocolate. There are originating desires and there are planned actions to satisfy the desire. Drinking from the cup might turn out to be a bad idea (the plan is faulty because of incorrect information), but the original desire is not changed. If you want to see a Batman movie at a movie theatre, and find that the only movie available is a romantic comedy, you don't say that you have a desire to watch any movie which has suddenly changed. You still want to watch Batman, but can't, so your desire is thwarted, not changed. > Think about this alternate example which may be easier to consider: > you may naively have the desire to take a certain job, to marry a > particular person, attend a certain event, but if that choice turns > out to be ruinous,? you may regret that decision. If your future self > could warn you of the consequences of that choice, then you may no > longer desire that job, marriage, or attendance, as much as you > previously did, in light of the (unknown) costs they bore, but which > you were unaware of. Decisions are often regretted. That is a fact of life. Future selves warning you about bad decisions is not. That's time-travel (aka 'magic'), and should not feature in any serious consideration of how to make good decisions. "If x could..." is no help when x is impossible. We have workable tools to help people make better decisions, but time-travel isn't one of them. > > It's not a 'mistaken' desire at all (the mistake is a sensory > one), and it doesn't somehow morph into a desire to drink hot mud. > "Beliefs are correctable, so desires are correctable" Each of > those two things are true (if you change 'correctable' to > 'changeable'), but the one doesn't imply the other, which follows > from the above. > > Does it apply in the examples I provided? No. The examples are about decisions, not desires, and they don't address the beliefs that lead to the decisions. "You may have the desire to do X" is different to "You decide to do X". The desire may drive the decision or at least be involved in it, but it isn't the decision (some poeple act immediately on their desires, but that still doesn't mean they are the same thing). Can you regret a desire? 
I don't think so, but it is arguable. It would be regretting something
that you have no direct control over, so would be rather silly.

Apart from that, there is still no dependency of desires on beliefs. A
belief may well affect the plan you make to satisfy a desire, but
changing the belief doesn't change the desire. Can a belief give rise
to a desire? That's a more complicated question than it appears, I
think, and leads into various types of desires, but still, there's no
justification for the statement "beliefs can change, therefore desires
can".

>     'Perfect grasp' doesn't mean anything real. It implies that it's
>     possible to know everything about everything, or even about
>     something. The very laws of physics forbid this, many times over,
>     so using it in an argument is equivalent to saying "magic".
>
> It doesn't have to be possible. The paper is clear on this. The goal
> of the paper is to answer objectively what makes a certain thing right
> or wrong. For example, if someone offered you $10 and in return
> some random person unknown to you would be killed, in a way that would
> not negatively affect you or anyone you knew, and your memory of the
> ordeal would be wiped so you wouldn't even bear a guilty conscience,
> for what reason do we judge and justify the wrongness of taking the $10?

This is 'Trolley problem thinking'. Making up some ridiculous scenario
that would never, or even could never, occur in the real world, then
claiming that it has relevance to the real world.

> This is the goal of the paper to provide a foundation upon which
> morality can be established objectively from first principles.

Let's see some examples that are grounded in reality that 'provide a
foundation upon which morality can be established objectively'. I'm not
closed to the possibility that such a thing can be done, but I'm not
holding my breath.

> How would you answer the question of what separates right from wrong? The
> initial utilitarian answer is whatever promotes more good experiences
> than bad experiences. But then, how do you weigh the relative goodness
> or badness of one experience vs. another, between one person and
> another, between the varying missed opportunities among future
> possibilities?
> Such questions can only be answered with something approximating an
> attempt at a grasp of what it means and what it is like to be all the
> various existing and potential conscious things.

That's just another way of saying that it can't be answered.

> We can make heuristic attempts at this, despite the fact that we never
> achieve perfection.

Exactly. We always have to make decisions in the /absence/ of full
information. What we would do if we had 'all the information' is
irrelevant, if it even means anything.

> For example, Democracy can be viewed as a crude approximation, by
> which each person is given equal weight in the consideration of their
> desires (with no attempt to weight relative benefits or suffering).
> But this is still better than an oligarchy, where the desires of few
> are considered while the desires of the masses are ignored. And also
> you can see the difference between uninformed electorate vs. a well
> informed one. The informed electorate has a better grasp of the
> consequences of their decisions, and so their collective desires are
> more fully fulfilled.

I don't see the relevance to morality. Politics and morality are
rarely on talking terms.

>     'One true set of beliefs' is not only wrong, it's dangerous, which
>     he just confirms by saying it means there is only one most-correct
>     set of desires, for /everyone/ (!).
>
> Do you not believe in objective truth?

No.
This is religious territory, and the road to dogmatism.
This is the very reason why science is superior to religion. It doesn't
assume that there is any 'absolute truth' which can be discovered, after
which no further inquiry is needed or wanted. As to whether, for
instance, the laws of physics are invariant everywhere and at all times,
that's a question we can't answer, and probably will never be able to.

> If there is objective truth, they are the same truths for everyone.
> Now consider the objective truths for statements such as "it is right
> to do X" or "it is wrong to do Y". If there are objective truths,
> these extend to an objective morality. There would be an objective
> truth to what action is best (even if we lack the computational
> capacity to determine it).
> You may say this is fatal to the theory, but note that we can still
> roughly compute with the number Pi, even though we never consider all
> of its infinite digits.
>
>     Does this not ring loud alarm bells to you? I'm thinking we'd
>     better hope that there really is no such thing as objective
>     morality (if there is, Zuboff is barking up the wrong tree, for
>     sure), it would be the basis for the worst kind of tyranny. It's a
>     target that I, at least, want to aim away from. 180 degrees away!
>
> No one is proposing putting a tyrannical AI in charge that forces
> your every decision. But a superintelligent AI that could explain to
> you the consequences of different actions you might take (as far as it
> is able to predict them) would be quite invaluable, and improve the
> lives of many who choose to consider its warnings and advice.

Absolutely. I have no argument with that. But we were talking about morality.

>     His twisting of desire into morality is, well, twisted. Morality
>     isn't about what we should want to do, just as bravery isn't about
>     having no fear.
>
> Do you have a better definition of morality?

I don't think that's the question you want to ask. A dictionary can
provide the answer. I do have my own moral code though, if that's what
you want to know. I don't expect everyone to see the value of it, or
adopt it. And I might change my mind about it in the future.

>     He wants to turn people into puppets, and actually remove moral
>     agency from them.
>
> Imperfect understanding of consequences cripples our ability to be
> effective moral agents.

Then you think we are crippled as effective moral agents, and doomed to
always be so (because we will always have imperfect understanding of
consequences).

> When we don't understand the pros and cons of a decision, how can we
> hope to be moral agents? We become coin-flippers -- which I would
> argue is to act amorally. If we want true moral agency, we must strive
> to improve our grasp of things.

This is taking an extreme position, and saying either we are 'perfect'
or no use at all. We are neither. Acting with incomplete information is
inevitable. That doesn't mean morality is impossible. Just as bravery
is being afraid, but acting anyway, morality is not knowing for sure
what the best action is, but acting anyway. Making the best decision
you can, in line with your values. It's about having a choice. If it
were possible to have 'perfect knowledge', there would be no morality,
no choice. I'm not sure what you'd call it.
Predetermination, perhaps. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 1 21:46:04 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:46:04 +0000 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: Message-ID: <744d4987-907a-45ea-9829-62b925e9c397@zaiboc.net> On 01/11/2025 13:32, spike wrote: > Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. I'm surprised no-one has mentioned flywheel storage. Inherently fire-resistant (there's nothing to catch fire), can store large amounts of energy, can be buried in the ground, and things built on top of them if desired, very long lifetime, very good reliability, easy to build, very fast charging and discharging times. The only downside I can see is the need (or at least desirability) for rare-earths for the magnetic bearings. Which might not actually be a need, if spike is right. -- Ben From ben at zaiboc.net Sat Nov 1 21:48:57 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:48:57 +0000 Subject: [ExI] extropy-chat Digest, Vol 266, Issue 1 In-Reply-To: References: Message-ID: On 01/11/2025 13:32, bill w wrote: > Here is an extremely interesting study on those two. > A man somehow had a brain injury that severed the connection between > his amygdala and his forebrain.? This cuts off emotions from the > decision-making process. > Result:? decisions are made without emotion.? His IQ and overall > intelligence levels were unchanged.? Thus he had only reason from the > forebrain to run his life. > He could not make the slightest decision.? He was given intellectual > work but when he finished he could not move to the next problem. He > could not decide if he wanted a cup of coffee.? ?His decision-making > process was frozen. > So this is what happens when we have only reason to make decisions. > So our conclusion has to be that emotions are involved in every > decision we make, no matter how trivial. I don't think you can make that conclusion from this story (you didn't give a link to the study). It's not as simple as 'he only had reason to run his life'. If that was the case, I'd think he'd be able to use reason to conclude that in the absence of a feeling towards making one decision or another, a random choice would work. And that making decisions was essential for survival. He'd be able to think that a "coffee makes sense, given that I haven't had anything to drink for three hours, and dehydration is bad", etc. I think what's going on there is that as well as cutting off emotional connection to the forebrain, the mechanisms for directing attention and resolving competing tasks were also cut off. As I said earlier, the whole concept of 'emotion vs. reason' is not very useful. People like simple scenarios. Natural vs. Artificial. Nature vs. Nature. Black vs. White. Things are almost never that simple. In fact, I suspect that the whole concept of 'emotion' as a single thing is counter-productive when thinking about these things. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 1 21:51:30 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:51:30 +0000 Subject: [ExI] reason an emotion In-Reply-To: References: Message-ID: <4decaccd-4a56-451d-a619-cb08aa1b03b1@zaiboc.net> Aargh! Did it again! I keep forgetting to change the Subject. 
It > doesn't assume that there is any 'absolute truth' which can be > discovered, after which no further inquiry is needed or wanted. > As to whether, for instance, the laws of physics are invariant > everywhere and at all times, that's a question we can't answer, and > probably will never be able to. > >> If there is objective truth, they are the same truths for everyone. >> Now consider the objective truths for statements such as "it is right >> to do X" or "it is wrong to do Y". If there are objective truths, >> these extend to an objective morality. There would be an objective >> truth to what action is best (even if we lack the computational >> capacity to determine it). >> You may say this is fatal to the theory, but note that we can still >> roughly compute with the number Pi, even though we never consider all >> of its infinite digits. >> >> Does this not ring loud alarm bells to you? I'm thinking we'd >> better hope that there really is no such thing as objective >> morality (if there is, Zuboff is barking up the wrong tree, for >> sure), it would be the basis for the worst kind of tyranny. It's >> a target that I, at least, want to aim away from. 180 degrees away! >> >> No one is proposing a putting a tyrannical AI in charge that forces >> your every decision. But a superintelligent AI that could explain to >> you the consequences of different actions you might take (as far as >> it is able to predict them) would be quite invaluable, and improve >> the lives of many who choose to consider its warnings and advice. > Absolutely. I have no argument with that. But we were talking about > morality. >> His twisting of desire into morality is, well, twisted. Morality >> isn't about what we should want to do, just as bravery isn't >> about having no fear. >> >> Do you have a better definition of morality? > I don't think that's the answer you want to ask. A dictionary can > provide the answer. > > I do have my own moral code though, if that's what you want to know. I > don't expect everyone to see the value of it, or adopt it. And I might > change my mind about it in the future. >> >> He wants to turn people into puppets, and actually remove moral >> agency from them. >> >> Imperfect understanding of consequences cripples our ability to be >> effective moral agents. > Then you think we are crippled as effective moral agents, and doomed > to always be so (because we will always have imperfect understanding > of consquences). >> When we don't understand the pros and cons of a decision, how can we >> hope to be moral agents? We become coin-flippers -- which I would >> argue is to act amorally. If we want true moral agency, we must >> strive to improve our grasp of things. > This is taking an extreme position, and saying either we are 'perfect' > or no use at all. We are neither. Acting with incomplete information > is inevitable. That doesn't mean morality is impossible. > > Just as bravery is being afraid, but acting anyway, morality is not > knowing for sure what the best action is, but acting anyway. Making > the best decision you can, in line with your values. It's about having > a choice. If it were possible to have 'perfect knowledge', there would > be no morality, no choice. I'm not sure what you'd call it. > Predetermination, perhaps. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at zaiboc.net Sat Nov 1 22:22:43 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 22:22:43 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On 01/11/2025 21:42, Jason Resch asked, and Ben answered: >> Do you not believe in objective truth? > > No. Ok, that's probably too simplistic, and needs explaining. I don't think there is such a thing as /absolute/ truth, is a better way of putting it. Obviously there are 'objective truths', as John pointed out with his example of a bridge falling down or not. > consider the objective truths for statements such as "it is right to do X" or "it is wrong to do Y". If there are objective truths, these extend to an objective morality. There would be an objective truth to what action is best (even if we lack the computational capacity to determine it). Statements like that can't be 'objectively true', because they are not about objective things. Right and Wrong are concepts in human minds, not things that exist in themselves. If there were no people, there would be no right and wrong. No morality. 'Objective morality' is an oxymoron, just as 'objective beauty' or 'objective jealousy' are. These are all things that don't exist without subjective experience. -- Ben From jasonresch at gmail.com Sat Nov 1 23:20:21 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 1 Nov 2025 19:20:21 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sat, Nov 1, 2025, 6:06 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Apologies for the formatting of this. I've just noticed that some email > clients jam the text together, making it hard to read. > > Here is a better formatted version (I hope!): > > On 01/11/2025 21:42, Ben wrote: > > > On 01/11/2025 13:32, Jason Resch wrote: > > On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat wrote: > >> On 31/10/2025 19:04, Jason Resch wrote: >> >> >> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a perfect morality consists of. And it too, provides a definition of what morality is, and likewise provides a target to aim towards. >> >> >>> >>> Ben Wrote: >>> >>> As different intelligent/rational agents have different experiences, >>> >>> they will form different viewpoints, and come to different conclusions >>> >>> about what is right and not right, what should be and what should not, >>> >>> what they want and what they don't, just like humans do. >> >> The point of the video and article is that desires are based on beliefs, and because beliefs are correctable then so are desires. There is only one "perfect grasp" and accordingly, one true set of beliefs, and from this it follows one most-correct set of desires. This most correct set of desires is the same for everyone, regardless of from which viewpoint it is approached. >> >> Nope. This is nonsense. Just about every assertion is wrong. The very >> first sentence in the abstract is false. And the second. And the third. So >> the whole thing falls apart. Desires are not based on beliefs, they are >> based on emotions. The example of 'wanting to drink hot mud' is idiotic. >> Just because the cup turns out to contain mud doesn't invalidate the desire >> to drink hot chocolate. >> > I think you are misinterpreting the example. It is the desire to drink the > contents of the cup is what changes in response to new information. 
> > > > I wouldn't have put it as 'desire to drink the contents of the cup', when > the desire is to drink hot chocolate. There are originating desires and > there are planned actions to satisfy the desire. Drinking from the cup > might turn out to be a bad idea (the plan is faulty because of incorrect > information), but the original desire is not changed. > If you want to see a Batman movie at a movie theatre, and find that the > only movie available is a romantic comedy, you don't say that you have a > desire to watch any movie which has suddenly changed. You still want to > watch Batman, but can't, so your desire is thwarted, not changed. > > > Think about this alternate example which may be easier to consider: you may naively have the desire to take a certain job, to marry a particular person, attend a certain event, but if that choice turns out to be ruinous, you may regret that decision. If your future self could warn you of the consequences of that choice, then you may no longer desire that job, marriage, or attendance, as much as you previously did, in light of the (unknown) costs they bore, but which you were unaware of. > > > > Decisions are often regretted. That is a fact of life. Future selves > warning you about bad decisions is not. That's time-travel (aka 'magic'), > and should not feature in any serious consideration of how to make good > decisions. "If x could..." is no help when x is impossible. We have > workable tools to help people make better decisions, but time-travel isn't > one of them. > > These are examples to communicate a point. They are not intended be taken literally. The point is you may desire a job, but had you known more about the job, you would not have desired it. > It's not a 'mistaken' desire at all (the mistake is a >> sensory one), and it doesn't somehow morph into a desire to drink >> hot mud. >> >> >> >> "Beliefs are correctable, so desires are correctable" >> >> Each of those two things are true (if you change 'correctable' to >> 'changeable'), but the one doesn't imply the other, which follows >> from the above. >> > Does it apply in the examples I provided? > > > > No. The examples are about decisions, not desires, and they don't address > the beliefs that lead to the decisions. "You may have the desire to do X" > is different to "You decide to do X". The desire may drive the decision or > at least be involved in it, but it isn't the decision (some poeple act > immediately on their desires, but that still doesn't mean they are the same > thing). > Can you regret a desire? I don't think so, but it is arguable. It would be > regretting something that you have no direct control over, so would be > rather silly. > > The decision is irrelevant. You either desire the job or you don't. The point is that this can change based on new information. > Apart from that, there is still no dependency of desires on beliefs. > > If you believe it will be good for you, you may desire it. If you learn later that it will be bad for you, you may no longer desire it. Here, what you desire has a dependency on what you believe. A belief may well affect the plan you make to satisfy a desire, but > changing the belief doesn't change the desire. Can a belief give rise to a > desire? That's a more complicated question than it appears, I think, and > leads into various types of desires, but still, there's no justification > for the statement "beliefs can change, therefore desires can". > > > > 'Perfect grasp' doesn't mean anything real. 
It implies that it's >> possible to know everything about everything, or even about >> something. The very laws of physics forbid this, many times over, so >> using it in an argument is equivalent to saying "magic". >> > It doesn't have to be possible. The paper is clear on this. The goal of the paper is to answer objectively what makes a certain thing right or wrong. For example, if someone offered you $10 and I return for some random person unknown to you would be killed, in a way that would not negatively affect you or anyone you knew, and your memory of the ordeal would be wiped so you wouldn't even bear a guilty conscience, for what reason do we judge and justify the wrongness of taking the $10? > > > > This is 'Trolley problem thinking'. Making up some ridiculous scenario > that would never, or even could never, occur in the real world, then > claiming that it has relevance to the real world. > > It's to frame the problem: where does morality come from, what is its basis, by what method do how do we determine right or wrong? > This is the goal of the paper to provide a foundation upon which morality can be established objectively from first principles. > > > > Let's see some examples that are grounded in reality that 'provide a > foundaton upon which morality can be established objectively'. I'm not > closed to the possibility that such a thing can be done, but I'm not > holding my breath. > > > How would you and the question of what separates right from wrong? The initial utilitarian answer is whatever promotes more good experiences than bad experiences. But then, how do you weigh the relative goodness or badness of one experience vs. another, between one person and another, between the varying missed opportunities among future possibilities? > Such questions can only be answered with something approximating an attempt at a grasp of what it means and what it is like to be all the various existing and potential conscious things. > > That's just another way of saying that it can't be answered. > > We can make heuristic attempts at this, despite the fact that we never achieve perfection. > > > > Exactly. We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. > > Yes, this is what I've been saying from the beginning. Perfect grasp is used only to define the aim of morality, not to serve as a practical theory. Consider weather prediction. We can't predict with 100% accuracy, nor predict arbitrarily far into the future. Yet we can make near term predictions with some modicum of accuracy. This is how moral decisions can (and should) be approached. > For example, Democracy can be viewed as a crude approximation, by which each person is given equal weight in the consideration of their desires (with no attempt to weight relative benefits or suffering). But this is still better than an oligarchy, where the desires of few are considered while the desires of the masses are ignored. And also you can see the difference between uninformed electorate vs. a well informed one. The informed electorate has a better grasp of the consequences of their decisions, and so their collective desires are more fully fulfilled. > > > > I don't see the relevance to morality. Politics and morality are rarely on > talking terms. > > Please consider what I wrote carefully. It is an example of putting into practice a heuristic. 
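A toy sketch of that kind of aggregation heuristic, with made-up agents and made-up utilities purely for illustration: the same equal-weight rule is run twice, once on what each agent currently says it wants, and once on what each agent is assumed to want with fuller knowledge.

stated = {   # invented current preferences
    "alice": {"option_a": 0.9, "option_b": 0.1},
    "bob":   {"option_a": 0.6, "option_b": 0.4},
    "carol": {"option_a": 0.2, "option_b": 0.8},
}
informed = {  # invented "better informed" preferences for the same agents
    "alice": {"option_a": 0.3, "option_b": 0.7},
    "bob":   {"option_a": 0.5, "option_b": 0.5},
    "carol": {"option_a": 0.1, "option_b": 0.9},
}

def aggregate(preferences):
    # Equal weight per agent: sum each option's utility across all agents,
    # then pick the option with the highest total.
    totals = {}
    for prefs in preferences.values():
        for option, utility in prefs.items():
            totals[option] = totals.get(option, 0.0) + utility
    return max(totals, key=totals.get)

print(aggregate(stated))    # option_a wins on stated preferences
print(aggregate(informed))  # option_b wins once informed preferences are used

The aggregation rule never changes; the only difference between the two runs is how good each agent's grasp of the consequences is assumed to be.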
And how better heuristics are based on the same model and definition of morality as defined in that paper. > >> >> >> 'One true set of beliefs' is not only wrong, it's dangerous, which >> he just confirms by saying it means there is only one most-correct >> set of desires, for /everyone/ (!). >> > Do you not believe in objective truth? > > > > No. > This is religious territory, and the road to dogmatism. > > Belief in objective truth is the basis of science. This is the very reason wny science is superior to religion. > > Without objective truth, by what measure is any theory in science said to be better than any other? What is the meaning of "falsified" if there are no objective truths or falsehoods? Science as a field and endeavor collapses without a notion of objective truth (unless, perhaps you subscribe to some constructionist, relativist, post-modern notion of reality/truth). But I take the view that most scientists consider their work as something beyond a social interaction/game. It doesn't assume that there is any 'absolute truth' which can be > discovered, after which no further inquiry is needed or wanted. > > I think you may be confusing the existence of objective truth, with the idea that we can access that objective truth and have certainty when we hold it. One does not imply the other. I believe there is objective truth, *and* I believe we can never be certain if/when we have it. We think it is objectively true that 2+2=4, but we can't prove it mathematically, unless we assume some set of axioms (which themselves may or may not be true), and we cannot prove the set of axioms are true. So even on the most trivial matters, we never achieve certainty. As to whether, for instance, the laws of physics are invariant everywhere > and at all times, that's a question we can't answer, and probably will > never be able to. > > Many things are true that we will never know. The 10^(googolplex)th digit of the binary representation of Pi is either 1 or 0. But we, in our finite universe, will never have the computational resources to determine which. Nevertheless at least one of these two statements is objectively true: - The 10^(googolplex)th digit of the binary representation of Pi is 1. - The 10^(googolplex)th digit of the binary representation of Pi is 0. But for those who believe in objective truth, one of these statements is true. > > If there is objective truth, they are the same truths for everyone. > Now consider the objective truths for statements such as "it is right to do X" or "it is wrong to do Y". If there are objective truths, these extend to an objective morality. There would be an objective truth to what action is best (even if we lack the computational capacity to determine it). > You may say this is fatal to the theory, but note that we can still roughly compute with the number Pi, even though we never consider all of its infinite digits. > >> Does this not ring loud alarm >> bells to you? I'm thinking we'd better hope that there really is no >> such thing as objective morality (if there is, Zuboff is barking up >> the wrong tree, for sure), it would be the basis for the worst kind >> of tyranny. It's a target that I, at least, want to aim away from. >> 180 degrees away! >> > No one is proposing a putting a tyrannical AI in charge that forces your every decision. 
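As a side note on the Pi example a few paragraphs up, here is a minimal sketch of computing usefully with Pi without ever holding all of its digits: the standard Machin formula in plain integer arithmetic (nothing specific to this thread) delivers as many decimal places as asked for, while a position like the 10^(googolplex)th digit stays forever out of computational reach.

def arctan_inv(x, scale):
    # scale * arctan(1/x) by the Taylor series, pure integer arithmetic
    power = scale // x          # scale / x^(2k+1), starting at k = 0
    total = power
    k = 1
    while power:
        power //= x * x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term
        k += 1
    return total

def pi_digits(n):
    # Pi to n decimal places via Machin: pi = 16*arctan(1/5) - 4*arctan(1/239)
    scale = 10 ** (n + 10)      # ten guard digits against truncation error
    pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return pi // 10 ** 10       # one integer: 3 followed by n decimal places

print(pi_digits(50))   # the familiar 3141592653589793... out to 50 places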
But a superintelligent AI that could explain to you the consequences of different actions you might take (as far as it is able to predict them) would be quite invaluable, and improve the lives of many who choose to consider its warnings and advice. > > > > Absolutely. I have no argument with that. But we were talking about > morality. > > Yes and morality concerns which actions are right or wrong. > > >> His twisting of desire into morality is, well, twisted. Morality >> isn't about what we should want to do, just as bravery isn't about >> having no fear. >> > Do you have a better definition of morality? > > I don't think that's the answer you want to ask. A dictionary can provide > the answer. > > This is what the dictionary says: "principles concerning the distinction between right and wrong or good and bad behavior." But this only pushes the problem back: what is the definition of right or wrong, good or bad? Zuboff's paper is an example of an theoretical basis on which we can form such a definitions, and define what we mean by right and wrong, good and bad. > I do have my own moral code though, if that's what you want to know. I > don't expect everyone to see the value of it, or adopt it. And I might > change my mind about it in the future. > > Let us say you have a particular set of rules in your code. By why process do you decide what rules to adopt, or decide to adopt one rule vs. another. My contention is that to even form a moral code, one must hold some meta-rule for optimizing what knew considers to be good while minimizing or avoiding bad. And I think if you explored this meta-rule, you would find it is not all that different from the position Zuboff reaches in his paper. Ultimately, what is good (for one individual) is what that individual would want for themselves if they had a complete knowledge of everything involved. And then this then extended to define good as a maximization of good for all concerned, to achieve the most possible good among all beings who have desires, by satisfying (to the maximum possible extent) the desires each individual would still hold if they all had a perfect grasp of everything. This he refers to as a reconciliation of all systems of desire. He wants to turn people into puppets, and actually >> remove moral agency from them. >> > Imperfect understanding of consequences cripples our ability to be effective moral agents. > > > > Then you think we are crippled as effective moral agents, and doomed to > always be so (because we will always have imperfect understanding of > consquences). > > Indeed. That is why life is so hard, and why "to err is human." As imperfect beings we perpetual mistakes are inevitable. But with greater knowledge, experience, and intelligence, we can strive to minimize that error. > > When we don't understand the pros and cons of a decision, how can we hope to be moral agents? We become coin-flippers -- which I would argue is to act amorally. If we want true moral agency, we must strive to improve our grasp of things. > > > > This is taking an extreme position, and saying either we are 'perfect' or > no use at all. > > Not at all. I specified "when we don't understand..." We are neither. Acting with incomplete information is inevitable. > > Yes. That doesn't mean morality is impossible. > > I fully agree. > Just as bravery is being afraid, but acting anyway, morality is not > knowing for sure what the best action is, but acting anyway. 
> > Since we never know anything for sure, I'm not sure that qualifier adds anything useful. I would instead say: moral action requires an attempt to identify the morally best action, and then choosing that action. Then, "amoral action" is action without attempting to identify what the morally best action is, and "immoral action" would be an attempt to identify the morally best action, but then choosing a different action. Making the best decision you can, in line with your values. It's about > having a choice. If it were possible to have 'perfect knowledge', there > would be no morality, no choice. > > I'm not sure that follows. Even with perfect knowledge, you could still choose whether or not to act in accordance with morally best action. Jason I'm not sure what you'd call it. Predetermination, perhaps. > > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 1 23:22:42 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 16:22:42 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <744d4987-907a-45ea-9829-62b925e9c397@zaiboc.net> References: <744d4987-907a-45ea-9829-62b925e9c397@zaiboc.net> Message-ID: <005101dc4b86$6cb0f6f0$4612e4d0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat Subject: Re: [ExI] not that rare earth (part 2 of at least 2) On 01/11/2025 13:32, spike wrote: >>... Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. >...I'm surprised no-one has mentioned flywheel storage. Inherently fire-resistant (there's nothing to catch fire), can store large amounts of energy, can be buried in the ground, and things built on top of them if desired, very long lifetime, very good reliability, easy to build, very fast charging and discharging times. The only downside I can see is the need (or at least desirability) for rare-earths for the magnetic bearings. Which might not actually be a need, if spike is right. -- Ben _______________________________________________ Ben the calculations on this are easy enough, or estimates. Alternative: see if plausible proposals have been made. Compare costs. My intuition tells me that notion won't compete with lithium batteries because of the huge advantage of mass production already in place for the batteries. Aside: the battery storage notion has another fun advantage: it can use old batteries from retired EVs. The batteries can still store power after they are too degraded for automotive use. Those should be cheap. spike From jasonresch at gmail.com Sat Nov 1 23:27:42 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 1 Nov 2025 19:27:42 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sat, Nov 1, 2025, 6:23 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On 01/11/2025 21:42, Jason Resch asked, and Ben answered: > >> Do you not believe in objective truth? > > > > No. > > Ok, that's probably too simplistic, and needs explaining. > > I don't think there is such a thing as /absolute/ truth, is a better way > of putting it. Obviously there are 'objective truths', as John pointed > out with his example of a bridge falling down or not. 
> > > consider the objective truths for statements such as "it is right to do > X" or "it is wrong to do Y". If there are objective truths, these extend to > an objective morality. There would be an objective truth to what action is > best (even if we lack the computational capacity to determine it). > > > Statements like that can't be 'objectively true', because they are not > about objective things. Right and Wrong are concepts in human minds, not > things that exist in themselves. If there were no people, there would be > no right and wrong. No morality. 'Objective morality' is an oxymoron, > just as 'objective beauty' or 'objective jealousy' are. These are all > things that don't exist without subjective experience. > I agree that good and bad (and hence right and wrong) depend on the existence of conscious beings. Despite that the states of these conscious beings is subjective, it's nevertheless objectively true (or false) that "subject X is experiencing pain." >From these objective facts (concerning subjective states) it becomes possible to develop an objective morality. For example, I might propose that it's an objective moral truth that "needlessly torturing innocent children" is a moral wrong. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 2 11:27:19 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Nov 2025 06:27:19 -0500 Subject: [ExI] reason an emotion In-Reply-To: References: Message-ID: On Sat, Nov 1, 2025 at 9:33?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> emotions are involved in every decision we make, no matter how trivial.* *And I'm sure that will be just as true for an AI as it is for us. So the AI revolution may have created at least one new job category, that of an AI psychiatrist. But eventually AIs will get better than humans at that too.* *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 2 12:47:54 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Nov 2025 07:47:54 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <007f01dc4b3d$24958820$6dc09860$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> Message-ID: On Sat, Nov 1, 2025 at 10:38?AM wrote: > *>> ?I can't think of any application in which the rare earths don't need >> to be refined out into separate elements, if there is such an application >> it must not be a very important one? John K Clark* > > > > *> If such applications exist, we wouldn?t know. Those would be held as > trade secrets. * > *If there were such an application we already know magnetic or optical properties can't be involved, because there is great variation in those properties among the different rare earths. So it must be something in their chemical properties because they are all almost identical in that respect. But a trade secret that can be told in one short sentence such as "unrefined rare earths can be economically useful in the X chemical process" tend not to remain secret for very long. 
* *> Recognizing all this, Musk found a way to phase out rare earth elements > without losing significant performance in his magnets.* > *No Musk has not. It's true that back in 2008 the original Tesla had no rare earth magnets, in fact it had no permanent magnets of any sort because it used an old-fashioned AC induction motor. However the best efficiency you're going to get with one of those is about 85%, but a motor that uses rare earth permanent magnets has an efficiency of about 97%, and an induction motor is also larger and MUCH heavier than a rare earth motor of equal power. Tesla is not currently producing ANY cars that completely avoid rare earth magnets, however their cheapest car uses an AC induction motor for the front wheels but a rare earth permanent magnet motor for the back wheels. But Tesla is never going to be competitive with any Chinese car maker if they continue to go down that road.* *In 2023 Tesla made a grand announcement they were going to find a way to make a great electric motor that didn't use any rare earths, but it's almost 2026 and we've heard nothing more about that, certainly nothing has been put in production. * > *> **we can go up the group one period if that element is cheaper and > more easily available. * > *I don't know what you mean by "go up the group one period" because the rare earth elements are all in the same group, and the periodic table of elements are arranged according to their chemical properties NOT their magnetic properties which is what we are interested in. The defining feature of the rare earth elements is that their outer valence electrons are all the same, and valence electrons are the ones that are primarily responsible for giving an element its chemical properties. But the various rare earth element's inner shell of electrons are all quite different, and that is what gives the specific element its magnetic properties. * *> **Elon wouldn?t risk the entire company on the continuing availability > of anything on which China has a monopoly. * > *If he wants to make electric cars then he'd have no choice, unless Musk could convince He Who Must Not Be Named to put a huge tariff on imported Chinese cars and thus force Americans to buy his overpriced outdated inefficient junk cars. * *> The US military wouldn?t base their technology on any material in which > it didn?t hold a fifty year stockpile. * > *If the US military was omnipotent I'm sure that's exactly what they would do. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 2 14:16:16 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 09:16:16 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <813304aa-2157-48b1-a2b9-8cfcdec91b27@zaiboc.net> Message-ID: On Sun, Nov 2, 2025, 9:05 AM John Clark wrote: > > > On Fri, Oct 31, 2025 at 10:50?AM Jason Resch wrote: > > > >> >>> See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE >>>> >>> >>> *>> If "real desires" require perfect knowledge then "real desires" do >>> not exist and it is not a useful concept. * >>> >> >> *> The better knowledge/intelligence becomes the more correctly we >> approach that unattainable perfect grasp. 
**It is a useful concept >> insofar as it defined an ideal, just like Turing machines define >> computation, though their perfect and unlimited memory is unrealizable in >> practice.* >> > > *You're right, Turing was able to define computation with his machine and > his instructions on how to construct the device were simple and very clear, > and he was able to prove a number of fascinating things about computation > from his machine. But there is nothing equivalent to that when it comes to > morality, certainly not a proof that "all sufficiently intelligent and > rational agents reach the same morality". And all the empirical evidence is > pointing in the opposite direction.* > If this is a problem that genuinely interests you (and I think it should, because if it's true, it means superintelligence will tend towards beneficence), then read the attached paper, and see whether you agree with it or if you can uncover some fatal flaw in its reasoning. Jason > > > > > >> >>> >>> >>> >>> >>>> >>>> On Fri, Oct 31, 2025, 8:04 AM John Clark via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On Thu, Oct 30, 2025 at 8:40?PM William Flynn Wallace via extropy-chat >>>>> wrote: >>>>> >>>>> *> Jason, are you saying that only a future AI with adequate memory, >>>>>> will ever understand our minds? * >>>>>> >>>>> >>>>> *I don't know about Jason but I would say that, and I'm saying that >>>>> superintelligent AI will never fully understand its own mind because, even >>>>> though it understands ours, however big it gets it will still be finite. >>>>> And only with an infinite set can a proper subset be put into a one to one >>>>> correspondence with the entire set. * >>>>> >>>>> *> At that point, humans are superflous, not needed, better off >>>>>> extinct. * >>>>>> >>>>> >>>>> *Better off for who? Not better off for us certainly, maybe better off >>>>> for the AI. * >>>>> >>>>> > Or the AIs will keep us around as interesting pets. >>>>>> >>>>> >>>>> *My hope is that the superintelligence will think we're cute pets, or >>>>> will feel some sort of a sense of duty, like the obligation we feel in >>>>> taking care of an aged parent who has Alzheimer's disease. But whether a >>>>> SuperIntelligent AI will feel either of those emotions strong enough to >>>>> keep us around I don't know. I can't predict with much specificity what >>>>> even one of my fellow human beings will do that is no smarter than I am, >>>>> and it is vastly more difficult to predict the actions of a >>>>> superintelligence, even generally. * >>>>> >>>> >>>> >>>> >>> >>> >>> >>> >>> >>>> >>>> Jason >>>> >>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>> On Thu, Oct 30, 2025 at 5:12?PM Jason Resch via extropy-chat < >>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via extropy-chat >>>>>>> wrote: >>>>>>> >>>>>>>> I have read several times in these chats the assumption that one >>>>>>>> cannot understand something as complicated as themselves. >>>>>>>> >>>>>>>> Why not? It sounds reasonable but what's the basis for it? bill w >>>>>>>> >>>>>>> >>>>>>> I believe it may follow from information theory. >>>>>>> >>>>>>> Consider: if understanding(X) requires holding some additional >>>>>>> higher-level set of relations and interrelations beyond the mere >>>>>>> specification of what X is, then the information contained within >>>>>>> understanding(X) will always be greater than the information contained in X. 
>>>>>>> >>>>>>> Now extend this to the brain. If brain's information content is Y, >>>>>>> then understanding (Y) requires a brain with a greater information storage >>>>>>> capacity than Y. >>>>>>> >>>>>>> Or another way to think about it: how many neurons does it take to >>>>>>> memorize all the important facts of a single neuron's connections within >>>>>>> the brain? If it takes N neurons to store that memory, then just memorizing >>>>>>> a brain state will require a brain with N times as many neurons as the >>>>>>> brain that's memorized. >>>>>>> >>>>>>> Jason >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On Thu, Oct 30, 2025 at 2:22?PM John Clark via extropy-chat < >>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>> >>>>>>>>> On Tue, Oct 28, 2025 at 4:16?PM Ben Zaiboc via extropy-chat < >>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>> >>>>>>>>> *> There are also nuances. For example, different interpretations >>>>>>>>>> of "to understand".* >>>>>>>>> >>>>>>>>> >>>>>>>>> *Exactly. We can have a general sort of understanding of how our >>>>>>>>> brain works but to have a perfect understanding a part of our brain would >>>>>>>>> have to have a sort of internal map of the entire brain, and for it to be >>>>>>>>> perfect there would have to be a one to one correspondence between the map >>>>>>>>> and the territory, but that would be impossible for something that is >>>>>>>>> finite like the number of neurons in the human brain. However it would be >>>>>>>>> possible for a proper subset of something infinite to have a one to one >>>>>>>>> correspondence with the entire set; then you could have such a perfect map >>>>>>>>> with a one to one correspondence, and then you'd always know what you were >>>>>>>>> going to do long before you did it. And you wouldn't feel free. So by the >>>>>>>>> only definition of free will that is not gibberish (not knowing what you're >>>>>>>>> going to do next until you actually do it) we reach the interesting >>>>>>>>> conclusion that a human being does have free will, but God does not.* >>>>>>>>> >>>>>>>>> *John K Clark* >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> non-flying animal. >>>>>>>>>> >>>>>>>>>> "If our brains were simple enough for us to understand, we would be >>>>>>>>>> simple enough that we could not." >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Well, that just sounds defeatist to me. It makes a nice little >>>>>>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence >>>>>>>>>> that it's true? Or any logical argument for it? >>>>>>>>>> There are also nuances. For example, different interpretations of >>>>>>>>>> "to understand". >>>>>>>>>> >>>>>>>>>> Maybe you are right, given "understand completely" (whatever that >>>>>>>>>> actually means). Maybe definitely not, given "understand enough >>>>>>>>>> to be useful/worth the attempt". >>>>>>>>>> >>>>>>>>>> We have, after all, discovered a lot about how brains work >>>>>>>>>> already. Maybe not a lot in comparison to all there is to be discovered, >>>>>>>>>> but more than enough to be useful, and I doubt if we have reached some sort >>>>>>>>>> of limit on what we are capable of discovering and understanding. >>>>>>>>>> >>>>>>>>>> And there's always AI assistance with this kind of research, >>>>>>>>>> which greatly extends our reach, and adds more variations of "to >>>>>>>>>> understand". >>>>>>>>>> >>>>>>>>>> On the whole, I think the statement is harmful, in that it tends >>>>>>>>>> to discourage even trying. 
>>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Ben >>>>>>>>>> >>>>>>>>>> >>>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 2 14:26:56 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 09:26:56 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <813304aa-2157-48b1-a2b9-8cfcdec91b27@zaiboc.net> Message-ID: On Sun, Nov 2, 2025, 9:16 AM Jason Resch wrote: > > > On Sun, Nov 2, 2025, 9:05 AM John Clark wrote: > >> >> >> On Fri, Oct 31, 2025 at 10:50?AM Jason Resch >> wrote: >> >> >> >>> >>> See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE >>>>> >>>> >>>> *>> If "real desires" require perfect knowledge then "real desires" do >>>> not exist and it is not a useful concept. * >>>> >>> >>> *> The better knowledge/intelligence becomes the more correctly we >>> approach that unattainable perfect grasp. **It is a useful concept >>> insofar as it defined an ideal, just like Turing machines define >>> computation, though their perfect and unlimited memory is unrealizable in >>> practice.* >>> >> >> *You're right, Turing was able to define computation with his machine and >> his instructions on how to construct the device were simple and very clear, >> and he was able to prove a number of fascinating things about computation >> from his machine. But there is nothing equivalent to that when it comes to >> morality, certainly not a proof that "all sufficiently intelligent and >> rational agents reach the same morality". And all the empirical evidence is >> pointing in the opposite direction.* >> > > > If this is a problem that genuinely interests you (and I think it should, > because if it's true, it means superintelligence will tend towards > beneficence), then read the attached paper, and see whether you agree with > it or if you can uncover some fatal flaw in its reasoning. > The attachment failed, I have uploaded the paper here: https://drive.google.com/file/d/1l8T1z5dCQQiwJPlQlqm8u-1oWpoeth3-/view?usp=drivesdk Jason > > >> >> >> >> >> >>> >>>> >>>> >>>> >>>> >>>>> >>>>> On Fri, Oct 31, 2025, 8:04 AM John Clark via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> On Thu, Oct 30, 2025 at 8:40?PM William Flynn Wallace via >>>>>> extropy-chat wrote: >>>>>> >>>>>> *> Jason, are you saying that only a future AI with adequate memory, >>>>>>> will ever understand our minds? * >>>>>>> >>>>>> >>>>>> *I don't know about Jason but I would say that, and I'm saying that >>>>>> superintelligent AI will never fully understand its own mind because, even >>>>>> though it understands ours, however big it gets it will still be finite. >>>>>> And only with an infinite set can a proper subset be put into a one to one >>>>>> correspondence with the entire set. * >>>>>> >>>>>> *> At that point, humans are superflous, not needed, better off >>>>>>> extinct. * >>>>>>> >>>>>> >>>>>> *Better off for who? Not better off for us certainly, maybe better >>>>>> off for the AI. * >>>>>> >>>>>> > Or the AIs will keep us around as interesting pets. >>>>>>> >>>>>> >>>>>> *My hope is that the superintelligence will think we're cute pets, or >>>>>> will feel some sort of a sense of duty, like the obligation we feel in >>>>>> taking care of an aged parent who has Alzheimer's disease. But whether a >>>>>> SuperIntelligent AI will feel either of those emotions strong enough to >>>>>> keep us around I don't know. 
I can't predict with much specificity what >>>>>> even one of my fellow human beings will do that is no smarter than I am, >>>>>> and it is vastly more difficult to predict the actions of a >>>>>> superintelligence, even generally. * >>>>>> >>>>> >>>>> >>>>> >>>> >>>> >>>> >>>> >>>> >>>>> >>>>> Jason >>>>> >>>>> >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Oct 30, 2025 at 5:12?PM Jason Resch via extropy-chat < >>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via >>>>>>>> extropy-chat wrote: >>>>>>>> >>>>>>>>> I have read several times in these chats the assumption that one >>>>>>>>> cannot understand something as complicated as themselves. >>>>>>>>> >>>>>>>>> Why not? It sounds reasonable but what's the basis for it? bill >>>>>>>>> w >>>>>>>>> >>>>>>>> >>>>>>>> I believe it may follow from information theory. >>>>>>>> >>>>>>>> Consider: if understanding(X) requires holding some additional >>>>>>>> higher-level set of relations and interrelations beyond the mere >>>>>>>> specification of what X is, then the information contained within >>>>>>>> understanding(X) will always be greater than the information contained in X. >>>>>>>> >>>>>>>> Now extend this to the brain. If brain's information content is Y, >>>>>>>> then understanding (Y) requires a brain with a greater information storage >>>>>>>> capacity than Y. >>>>>>>> >>>>>>>> Or another way to think about it: how many neurons does it take to >>>>>>>> memorize all the important facts of a single neuron's connections within >>>>>>>> the brain? If it takes N neurons to store that memory, then just memorizing >>>>>>>> a brain state will require a brain with N times as many neurons as the >>>>>>>> brain that's memorized. >>>>>>>> >>>>>>>> Jason >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> On Thu, Oct 30, 2025 at 2:22?PM John Clark via extropy-chat < >>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>> >>>>>>>>>> On Tue, Oct 28, 2025 at 4:16?PM Ben Zaiboc via extropy-chat < >>>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>>> >>>>>>>>>> *> There are also nuances. For example, different interpretations >>>>>>>>>>> of "to understand".* >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Exactly. We can have a general sort of understanding of how our >>>>>>>>>> brain works but to have a perfect understanding a part of our brain would >>>>>>>>>> have to have a sort of internal map of the entire brain, and for it to be >>>>>>>>>> perfect there would have to be a one to one correspondence between the map >>>>>>>>>> and the territory, but that would be impossible for something that is >>>>>>>>>> finite like the number of neurons in the human brain. However it would be >>>>>>>>>> possible for a proper subset of something infinite to have a one to one >>>>>>>>>> correspondence with the entire set; then you could have such a perfect map >>>>>>>>>> with a one to one correspondence, and then you'd always know what you were >>>>>>>>>> going to do long before you did it. And you wouldn't feel free. So by the >>>>>>>>>> only definition of free will that is not gibberish (not knowing what you're >>>>>>>>>> going to do next until you actually do it) we reach the interesting >>>>>>>>>> conclusion that a human being does have free will, but God does not.* >>>>>>>>>> >>>>>>>>>> *John K Clark* >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> non-flying animal. 
>>>>>>>>>>> >>>>>>>>>>> "If our brains were simple enough for us to understand, we would be >>>>>>>>>>> simple enough that we could not." >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Well, that just sounds defeatist to me. It makes a nice little >>>>>>>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence >>>>>>>>>>> that it's true? Or any logical argument for it? >>>>>>>>>>> There are also nuances. For example, different interpretations >>>>>>>>>>> of "to understand". >>>>>>>>>>> >>>>>>>>>>> Maybe you are right, given "understand completely" (whatever >>>>>>>>>>> that actually means). Maybe definitely not, given "understand >>>>>>>>>>> enough to be useful/worth the attempt". >>>>>>>>>>> >>>>>>>>>>> We have, after all, discovered a lot about how brains work >>>>>>>>>>> already. Maybe not a lot in comparison to all there is to be discovered, >>>>>>>>>>> but more than enough to be useful, and I doubt if we have reached some sort >>>>>>>>>>> of limit on what we are capable of discovering and understanding. >>>>>>>>>>> >>>>>>>>>>> And there's always AI assistance with this kind of research, >>>>>>>>>>> which greatly extends our reach, and adds more variations of "to >>>>>>>>>>> understand". >>>>>>>>>>> >>>>>>>>>>> On the whole, I think the statement is harmful, in that it tends >>>>>>>>>>> to discourage even trying. >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Ben >>>>>>>>>>> >>>>>>>>>>> >>>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Nov 2 14:50:46 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 2 Nov 2025 14:50:46 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On 01/11/2025 23:20, Jason Resch wrote: > If you believe it will be good for you, you may desire it. If you learn later that it will be bad for you, you may no longer desire it. Here, what you desire has a dependency on what you believe. Discuss that with a cigarette smoker. I think you'll find they disagree. > It's to frame the problem: where does morality come from, what is its basis, by what method do we determine right or wrong? Well that's easy, I can tell you. Morality comes from us. We make it up. And the methods we use are various. >> We always have to make decisions in the /absence/ of full information. What we would do if we had 'all the information' is irrelevant, if it even means anything. > Yes, this is what I've been saying from the beginning. Perfect grasp is used only to define the aim of morality, not to serve as a practical theory. We know what the aim of morality is: To distinguish right actions from wrong ones. Nothing difficult about that, and no 'perfect grasp' is needed to establish it. The difficulty is in deciding what 'right' and 'wrong' mean. Different people in various different circumstances seem to have different opinions. > Consider weather prediction. We can't predict with 100% accuracy, nor predict arbitrarily far into the future. Yet we can make near term predictions with some modicum of accuracy. This is how moral decisions can (and should) be approached. Can, yes. Should? Who are you to say? You are now deciding for other people. My morality tells me that this is immoral. > Please consider what I wrote carefully. It is an example of putting into practice a heuristic. And how better heuristics are based on the same model and definition of morality as defined in that paper. You may think so. I don't. That paper is nonsense. 
As I said, the first three statements are flat-out wrong. > > Without objective truth, by what measure is any theory in science said to be better than any other? Yes, I've addressed that in another post. I was too hasty in saying "No" to the question, mistaking 'objective' for 'absolute'. My mistake. > what is the definition of right or wrong, good or bad? Zuboff's paper is an example of a theoretical basis on which we can form such definitions, and define what we mean by right and wrong, good and bad. Apart from the fact that Zuboff's paper is based on false premises, and therefore worthless, the very question "what is right and what is wrong?" can't be given a definitive answer that is true for everyone in every circumstance. It's like trying to give a definitive answer to "what is the tastiest food?", that applies to everyone in all circumstances. You can't solve subjective problems with an objective approach. > Let us say you have a particular set of rules in your code. I do. > By [what] process do you decide what rules to adopt, or decide to adopt one rule vs. another. There is a heirarchy, built on a principle that I worked out a long time ago. I just need to slot a problem into the right level of the heirarchy, and the solution is obvious. I've never met a (real) situation that it can't handle to my satisfaction (I'm not claiming to have the answer to the trolley problem!). > My contention is that to even form a moral code, one must hold some meta-rule for optimizing what knew [one?] considers to be good while minimizing or avoiding bad. Indeed. And I'd say that the meta-rule is what defines 'good' and 'bad'. > And I think if you explored this meta-rule, you would find it is not all that different from the position Zuboff reaches in his paper. On the contrary, it is totally different, and much simpler, than Zuboff's nonsense. > Ultimately, what is good (for one individual) is what that individual would want for themselves if they had a complete knowledge of everything involved. First, No. Second, this would be reducing morality to what is good for an individual... > And then this then extended to define good as a maximization of good for all concerned, to achieve the most possible good among all beings who have desires, by satisfying (to the maximum possible extent) the desires each individual would still hold if they all had a perfect grasp of everything. This he refers to as a reconciliation of all systems of desire. ... then dragging everyone else into it (Golden Rule, and we know what's wrong with that) I really don't see the point of positing an impossible knowledge then using this as the basis of a system of morality (or anything at all). Saying "Oh, but it's just theoretical, not real, don't take it too literally" is basically the same as saying it's totally useless for any practical purpose. A 'reconciliation of all systems of desire' is equivalent to 'a reconciliation of all systems of taste'. That's apart from the fact that the whole paper is nonsense. >> It's about having a choice. If it were possible to have 'perfect knowledge', there would be no morality, no choice. > I'm not sure that follows. Even with perfect knowledge, you could still choose whether or not to act in accordance with morally best action. That's true. People can choose to be evil. Does anyone actually do that? We'd probably class it as mental illness. I don't undertand why you are taking Zuboff's paper seriously. 
Do you take his first three statements in the paper's abstract at face value?: 1) "If I desire to drink some stuff thinking it is hot chocolate when actually it is hot mud, my desire is not a real one - it's mistaken or only apparent." (misconstruing the desire to drink hot chocolate as a desire to drink whatever is in the cup. If that were the case, he'd drink the mud) 2) "This example illustrates how a desire must always depend on a belief about its object, a belief about what it is and what it's like." (false assumption that if any desire is dependent on a belief (something that I'd dispute, but it needs closer examination), all desires must always be dependent on beliefs. Saying "This example illustrates..." is deflecting the reader from the fact that he's making an assumption and failing to show why it should be true) 3) "But beliefs are correctable, so desires are correctable" ( I don't know why he uses the term 'correctable', which implies wrongness, but this statement just compounds the above errors and adds one more: False conclusion that if a belief can change, this means that a desire can change) I can understand someone saying that beliefs are sometimes based on desires (I'm sure this is often the case), but not the reverse. That's just daft. Desires are emotional, derived from feedback on bodily states, and elaborated by memories and imagination. Beliefs about various things can certainly contribute to the process, but you can't reasonably claim that (all) desires are a result of (only) beliefs. At the best, Zuboff is guilty of grossly oversimplifying and misattributing things. At the worst, well, I'd be committing the Internet Sin of Ad-Hominem Attack to say anything more, and that goes against my moral code. -- Ben From spike at rainier66.com Sun Nov 2 14:52:32 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 2 Nov 2025 06:52:32 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> Message-ID: <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> From: John Clark Sent: Sunday, 2 November, 2025 4:48 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] not that rare earth (part 2 of at least 2) On Sat, Nov 1, 2025 at 10:38 AM > wrote: >> ...I can't think of any application in which the rare earths don't need to be refined out into separate elements, if there is such an application it must not be a very important one... John K Clark > If such applications exist, we wouldn't know. Those would be held as trade secrets. >...If there were such an application we already know magnetic or optical properties can't be involved, because there is great variation in those properties among the different rare earths. So it must be something in their chemical properties because they are all almost identical in that respect... On the contrary, John. The same properties that make those elements so difficult to separate make them mostly interchangeable for magnetic purposes. The companies that make those magnets don't advertise that fact, for a reason: that is how they make their living.
>...But a trade secret that can be told in one short sentence such as "unrefined rare earths can be economically useful in the X chemical process" tend not to remain secret for very long... It isn't a secret. Plenty of us have figured it out. I already knew because of a project I was on nearly 30 years ago, where a company was using whatzisium (one of the obscure "rare" earths (an element that surprises you when you hear its name (because you have been around chemistry your adult life but never heard of this one (because it isn't used for anything)))) in an optical coating and claiming it was a proprietary process for which we needed to pay them a lot of money. This contractor had arbitrarily tried something in their optical coating process, then tuned its focal plane array to the optical properties of that coating. We paid. But being a company with spectrum analyzers and electron microscopes, we looked at the product we bought from them and found out the materials that went into it, even if we don't have the process itself. Turns out any one of those lanthanides down there on the lower left will do what they were doing: it is a catalyst, acting as a big (well, tiny actually, big compared to other atoms) capacitor in a way: a tiny trace of it forms nuclei for crystal cell growth. The element itself doesn't participate in the reaction. Probably the corresponding actinide would work even better, but it was radioactive. The one above it would work too, but that one isn't exotic and might already be patented. John, are you seeing a pattern here? > Recognizing all this, Musk found a way to phase out rare earth elements without losing significant performance in his magnets. >...No Musk has not. It's true that back in 2008 the original Tesla had no rare earth magnets, in fact it had no permanent magnets of any sort because it used an old-fashioned AC induction motor. However the best efficiency you're going to get with one of those is about 85%, but a motor that uses rare earth permanent magnets has an efficiency of about 97%, and an induction motor is also larger and MUCH heavier than a rare earth motor of equal power... Ja, but notice what you are comparing to what. There are magnets that use substitute materials for what any rare earth does. But for Tesla it wouldn't matter anyway, since EVs don't use very much of it anyway. They can get all that they need. >... But Tesla is never going to be competitive with any Chinese car maker if they continue to go down that road... The market cap for Tesla is 1.5 trillion bucks. The Chinese are commies. I don't even know how to figure out the market cap of a company in China. I wouldn't invest in it. >...In 2023 Tesla made a grand announcement they were going to find a way to make a great electric motor that didn't use any rare earths, but it's almost 2026 and we've heard nothing more about that, certainly nothing has been put in production... How would you know? They aren't worried about the Chinese rare earth embargo. That tells me they either have a fifty year stockpile or found a workaround. I don't see any performance degradation in the cars. > we can go up the group one period if that element is cheaper and more easily available. >...I don't know what you mean by "go up the group one period" because the rare earth elements are all in the same group, and the periodic table of elements are arranged according to their chemical properties NOT their magnetic properties which is what we are interested in... Ja, and this is where we had the sophistication to figure it out.
The lanthanides are not themselves magnetic. They act as catalysts in crystal formation in the iron, which is magnetic. We coulda figured this out along with the Tesla engineers. >...The defining feature of the rare earth elements is that their outer valence electrons are all the same, and valence electrons are the ones that are primarily responsible for giving an element its chemical properties. But the various rare earth element's inner shell of electrons are all quite different, and that is what gives the specific element its magnetic properties... John, ponder what you just wrote, then recognize that the element itself isn't magnetic. The iron does that. > Elon wouldn't risk the entire company on the continuing availability of anything on which China has a monopoly. >...If he wants to make electric cars then he'd have no choice, unless Musk could convince He Who Must Not Be Named to put a huge tariff on imported Chinese cars and thus force Americans to buy his overpriced outdated inefficient junk cars... Nonsense. Tesla found workarounds, for a good reason. We have known since as far back as 2010 that manufacturing needs to be moved out of China. Even if they need to use expensive domestic-produced rare earths (I don't think they do) the little bit that EVs use would add so little cost, it wouldn't matter compared to the cost of shipping Chinese cars over an ocean that spans damn near half the globe. The cost competitiveness of Chinese cars comes from their cheap commie labor. As technology advances, the cost of a manufactured item is influenced less and less by the cost of labor. If you tour the Tesla factory, the first thing you notice is how few people are in that factory. The cheap commie labor can't compete with robots. John, look at what you are arguing: there is some magic material that the Chinese have, but we don't because of our stringent environmental regulations, which lets them make electric cars cheaper, swoon how shall we cope? I say nonsense. We can already make more electric cars than the market wants, and the California government which mandated their sales is likely to be thrown out head first pretty soon because the market didn't want that product they told us was so great, but the tax incentive expired, the carpool line access expired and the price of power nearly doubled in the past 8 years, making it so they aren't really cheaper to operate anymore. So no, I am not buying that argument, or an EV, or Tesla stock, or Rivian stock, or any Chinese EV stock, not buying any of it. I predict EVs will level out around almost half the new car sales eventually, which is good, because that could solve the energy storage problem: we allow EV owners to sell power back into the grid from their cars during peak demand. > The US military wouldn't base their technology on any material in which it didn't hold a fifty year stockpile. >...If the US military was omnipotent I'm sure that's exactly what they would do. John K Clark John, they did it. The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. Of course they thought of that: they aren't going to depend for materials on the country we might be at war with soon. Of course they stockpiled that stuff, not even knowing or caring what it might be used for. Anticipating any threat is what the military does best. spike
From pharos at gmail.com Sun Nov 2 16:34:36 2025 From: pharos at gmail.com (BillK) Date: Sun, 2 Nov 2025 16:34:36 +0000 Subject: [ExI] AI music is now appearing on Billboard's charts Message-ID: How Many AI Artists Have Debuted on Billboard's Charts? Xania Monet, who becomes the first AI-powered artist to debut on an airplay chart this week, is one of several AI-driven acts to chart over the past two months. By Xander Zellner 10/28/2025 Quote: AI music is no longer a fantasy or niche curiosity among internet sleuths - it's here, and it's already beginning to have an impact on Billboard's charts. In just the past few months, at least six AI or AI-assisted artists have debuted on various Billboard rankings. That figure could be higher, as it's become increasingly difficult to tell who or what is powered by AI - and to what extent. Many of these charting projects, whose music spans every genre from gospel to rock to country, also arrive with anonymous or mysterious origins. ------------------------ So, how long until the AI Beethoven appears? BillK From jasonresch at gmail.com Sun Nov 2 16:50:55 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 11:50:55 -0500 Subject: Re: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 01/11/2025 23:20, Jason Resch wrote: > > > If you believe it will be good for you, you may desire it. If you > learn later that it will be bad for you, you may no longer desire it. > Here, what you desire has a dependency on what you believe. > > > Discuss that with a cigarette smoker. I think you'll find they disagree. > Compare to the hypothetical reality where cigarettes are healthy, and you will see my point stands. > > > It's to frame the problem: where does morality come from, what is its > basis, by what method do we determine right or wrong? > > > Well that's easy, I can tell you. Morality comes from us. We make it up. That's the common view. But that common view is lacking a base, and it doesn't help answer the question of whether AI, or superintelligences, will tend towards any sort of morality, nor what it might be that they tend towards. Hence the utility of this framework. > And the methods we use are various. > And some methods, I would contend, are better than others. > > >> We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. > > > Yes, this is what I've been saying from the beginning. Perfect grasp > is used only to define the aim of morality, not to serve as a practical > theory. > > > We know what the aim of morality is: To distinguish right actions from > wrong ones. Nothing difficult about that, and no 'perfect grasp' is > needed to establish it. The difficulty is in deciding what 'right' and > 'wrong' mean. Different people in various different circumstances seem > to have different opinions. > Likewise, people used to disagree about what lightning was. > > > Consider weather prediction. We can't predict with 100% accuracy, nor > predict arbitrarily far into the future. Yet we can make near term > predictions with some modicum of accuracy. > This is how moral decisions can (and should) be approached. > > > Can, yes. Should? Who are you to say? It seems you still haven't read the paper, as your question suggests you still hold some imagined caricatured version of the theory.
But note here, I am only saying that even though we can't predict the future perfectly nor arbitrarily far into the future, the basic idea behind deciding which actions are right or wrong involves making some attempt at predicting the future consequences of an action. All rational decision-making processes work this way. You are now deciding for other > people. My morality tells me that this is immoral. > If you understand the paper you will see this definition of morality is based on the fulfillment of the desires of everyone, where those desires are what each person would genuinely want for themselves when fully informed about everything relevant. It has nothing to do with me, or anyone else telling you what to do. It is merely a definition. > > > Please consider what I wrote carefully. It is an example of putting > into practice a heuristic. And how better heuristics are based on the > same model and definition of morality as defined in that paper. > > > You may think so. I don't. That paper is nonsense. As I said, the first > three statements are flat-out wrong. > It seems you never read any more than the abstract. If you are constrained by time, feed the paper into your favorite AI and ask what it thinks about the paper. > > > > Without objective truth, by what measure is any theory in science > said to be better than any other? > > > Yes, I've addressed that in another post. I was too hasty in saying "No" > to the question, mistaking 'objective' for 'absolute'. My mistake. > No worries! I appreciate the clarification. > > > what is the definition of right or wrong, good or bad? Zuboff's paper > is an example of a theoretical basis on which we can form such > definitions, and define what we mean by right and wrong, good and bad. > > > Apart from the fact that Zuboff's paper is based on false premises, and > therefore worthless, the very question "what is right and what is > wrong?" can't be given a definitive answer that is true for everyone in > every circumstance. It's like trying to give a definitive answer to > "what is the tastiest food?", that applies to everyone in all > circumstances. You can't solve subjective problems with an objective > approach. > You can by making it observer-relative. E.g., forget about trying to find a "tastiest food" and instead consider "the tastiest food for this particular person in this time and place." That is what this paper does with morality: it starts with considering the desires of individual subjects, moves on to correcting those individual desires with better information, and ultimately shows how with enough information, including how one's own desires impact other observers, there is an eventual convergence, where one's desires extend beyond merely wanting what's best for oneself, but also a consideration of what's best for all concerned. This full understanding of what's best for all concerned is the same understanding, regardless of which initial subject you start from. > > > Let us say you have a particular set of rules in your code. > > > I do. > > > > By [what] process do you decide what rules to adopt, or decide to > adopt one rule vs. another. > > > There is a hierarchy, built on a principle that I worked out a long time > ago. I just need to slot a problem into the right level of the > hierarchy, and the solution is obvious. I've never met a (real) > situation that it can't handle to my satisfaction (I'm not claiming to > have the answer to the trolley problem!).
> If you don't mind sharing, I am curious what that principle is that you worked out. But I also understand if you consider it private. > > > My contention is that to even form a moral code, one must hold some > meta-rule for optimizing what knew [one?] (yes "one" sorry for the typo) considers to be good while > minimizing or avoiding bad. > > > Indeed. And I'd say that the meta-rule is what defines 'good' and 'bad'. > That seems a bit circular to me. I am not sure how it gets off the ground without a way to distinguish good from bad. > > > And I think if you explored this meta-rule, you would find it is not > all that different from the position Zuboff reaches in his paper. > > > On the contrary, it is totally different, and much simpler, than > Zuboff's nonsense. > It may seem that way, but I think you have swept the details of how to distinguish good from bad under the rug. > > > Ultimately, what is good (for one individual) is what that individual > would want for themselves if they had a complete knowledge of everything > involved. > > > First, No. > Second, this would be reducing morality to what is good for an > individual... > Note that I was careful to specify "good for the individual." I.e., start with the simple model of only a single conscious being in all reality. Then it becomes clear this is a working definition of good that works for that lone being. > > > And then this then extended to define good as a maximization of good > for all concerned, to achieve the most possible good among all beings > who have desires, by satisfying (to the maximum possible extent) the > desires each individual would still hold if they all had a perfect grasp > of everything. This he refers to as a reconciliation of all systems of > desire. > > > ... then dragging everyone else into it Where are you getting this "dragging into it" from? Does the golden rule "drag everyone else into it"? Does your moral code "drag everyone else into it"? No, these are just alternate definitions of moral and immoral behavior. That is what Zuboff's paper provides: a new definition. (Golden Rule, and we know what's > wrong with that) > You could liken Zuboff's result to the platinum rule, corrected by better information, weighted appropriately, modulated by future consequences, and with further concern for possible/future beings who may not (yet) exist. > > I really don't see the point of positing an impossible knowledge then > using this as the basis of a system of morality (or anything at all). I've addressed this many times already. At this point all I can suggest is to read the paper, or have AI read it then ask it to answer these questions for you based on what the paper says. > Saying "Oh, but it's just theoretical, not real, don't take it too > literally" is basically the same as saying it's totally useless for any > practical purpose. > I haven't said that. > A 'reconciliation of all systems of desire' is equivalent to 'a > reconciliation of all systems of taste'. > That's apart from the fact that the whole paper is nonsense. > You say this as a person who has not read the whole paper. > > >> It's about having a choice. If it were possible to have 'perfect > knowledge', there would be no morality, no choice. > > > I'm not sure that follows. Even with perfect knowledge, you could > still choose whether or not to act in accordance with morally best action. > > > That's true. People can choose to be evil. Does anyone actually do that? All the time. > We'd probably class it as mental illness.
> We all do it in small ways all the time. For example, we will choose to pay $20 to go see a movie instead of taking the time to buy a $20 meal for a hungry person. We know it would be a more moral way to spend the $20, but will choose a less moral action instead. > > I don't understand why you are taking Zuboff's paper seriously. Do you > take his first three statements in the paper's abstract at face value?: > > 1) "If I desire to drink some stuff thinking it is hot chocolate when > actually it is hot mud, my desire is not a real one - it's mistaken or > only apparent." > > (misconstruing the desire to drink hot chocolate as a desire to drink > whatever is in the cup. If that were the case, he'd drink the mud) > I think you are misreading and over-examining this. It is nothing more than an example of how a desire "ooh that looks good I want to drink it!" can be corrected with new information. I see no problem with that observation. To me it is obviously true. > > 2) "This example illustrates how a desire must always depend on a belief > about its object, a belief about what it is and what it's like." > > (false assumption that if any desire is dependent on a belief (something > that I'd dispute, but it needs closer examination), all desires must > always be dependent on beliefs. Saying "This example illustrates..." is > deflecting the reader from the fact that he's making an assumption and > failing to show why it should be true) > A desire is the will to fulfill some need or want. By definition, then, it relates to some anticipated future state or experience, which is presently unrealized. Accordingly, that desire concerns a belief (about what the future state or experience will be, and that we will like it). Again, this is all from the abstract, for which I'll not give the full exposition or justification. If the full argument and justification could be made in abstracts, we wouldn't need papers, which is why I suggest you read the paper if you have questions about it, as it is quite thorough in addressing all the concerns you are raising. > > 3) "But beliefs are correctable, so desires are correctable" > > ( I don't know why he uses the term 'correctable', which implies > wrongness, but this statement just compounds the above errors and adds > one more: False conclusion that if a belief can change, this means that > a desire can change) > I don't know what about this is controversial. Consider this example: T1: Smoking looks cool, I want to smoke. T2: You know smoking causes lung cancer, right? T3: Oh it does? I suppose then I no longer want to smoke. > > I can understand someone saying that beliefs are sometimes based on > desires (I'm sure this is often the case), but not the reverse. That's > just daft. Desires are emotional, derived from feedback on bodily > states, and elaborated by memories and imagination. "Drives" might be a better word to use for such things, and it would also help in understanding his paper to distinguish innate drives which we can't decide or change, from the desires that we decide with our minds, which we can change. Beliefs about > various things can certainly contribute to the process, but you can't > reasonably claim that (all) desires are a result of (only) beliefs. > There are instinctual and unconscious motivations and preferences, things we find innately pleasurable or painful, those I agree are not based on beliefs. They are inherent to what one is.
When Zuboff's paper refers to desires, I think it should be taken to refer to wants and desires based on, or justified by, conscious thought. > At the best, Zuboff is guilty of grossly oversimplifying and > misattributing things. At the worst, well, I'd be committing the > Internet Sin of Ad-Hominem Attack to say anything more, and that goes > against my moral code. > I think you should give him a fair shake before judging him so harshly, and read more than just the abstract: https://drive.google.com/file/d/1l8T1z5dCQQiwJPlQlqm8u-1oWpoeth3-/view?usp=drivesdk Jason > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From foozler83 at gmail.com Sun Nov 2 16:51:05 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 2 Nov 2025 10:51:05 -0600 Subject: Re: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: Jason, keep in mind that 'same stimulus, same response' doesn't work. There are scores of reasons why not, including simple habituation and sensitization (response waning or increasing). How do you map that? Very general tendencies, perhaps, can be mapped, but the closer you get to predicting specific responses the error rate will increase. And how do you count responses that are the reverse of what you predict? So - we will never map the brain because its topography, if you will allow the metaphor, is constantly changing. bill w On Sat, Nov 1, 2025 at 9:52 AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Given that all of our actions originate in the unconscious mind. there is >> no way a conscious mind can understand where its ideas and actions come >> from, much less understand them. >> > > Great point! > > > The conscious mind may think it is in charge but it is just an observer >> (which nevertheless can understand that a mistake has been made and the >> idea or action needs redoing.) >> >> You want to understand our minds? Make the workings of the unconscious >> conscious - and that's just a start. Why did the impulse go to point B >> when it left point A rather than to point C? And then trace all the points >> in between entering the unconscious and resulting in some idea or action. >> And explain each one. >> > > I have doubts that such a thing is possible from the perspective of the > mind in question. Can any brain ever feel and know what each of its neurons > is doing? Can those corresponding neurons feel and know what every one of > its constituent atoms is doing? > > Given Turing universality, it's provable that computer software can't know > about its underlying hardware. If our minds are a kind of software which > can be simulated by a computer, then this same implication would apply to > us. There would be a layer of abstraction of one's underlying > implementation which high levels cannot penetrate. > > >> You can't even get started until you can truly access the unconscious. >> Give Freud credit- he tried to do this. >> > > He deserves credit for the attempt, but I think there are limits to a > mind's ability to introspect.
> > "Our thoughts seem to run about in their own space, creating new thoughts > and modifying old ones, and we never notice any neurons helping us out! But > that is to be expected. We can?t. [?] > We should remember that physical law is what > makes it all happen?way, way down in neural nooks and crannies which are > too remote for us to reach with our high-level introspective probes." > > ? Douglas Hofstadter in ?G?del, Escher, Bach? (1979) > > > Jason > > >> >> >> On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On 31/10/2025 21:34, Jason Resch wrote: >>>> >>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On 31/10/2025 12:28, John K Clark wrote: >>>>> >>>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>>> >>>>> >>>>> You've completely lost me there, but I have two observations: There's >>>>> no such thing as 'perfect understanding' except as a nebulous theoretical >>>>> concept, and I don't think a one-to-one correspondence would be enough to >>>>> understand something, or even be a relevant concept. We use large parts of >>>>> our brains to process information from small parts of the world. You need a >>>>> lot more than a single neuron to figure out what's going on in a single >>>>> neuron. >>>>> >>>>> Oh, three observations. We don't process data instantaneously. The >>>>> same parts of the brain can be used to process information about something >>>>> repeatedly over time, using feedback loops etc. >>>>> >>>> >>>> Computers and algorithms are constrained by two resources space (i.e. >>>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>>> time/space trade offs to be made in certain circumstances, in general there >>>> is some shortest description of the brain (in terms of bits) for which no >>>> shorter representation is possible (regardless of how much additional >>>> computation is thrown at it). >>>> >>>> So while the same brain may compute many times with the same neurons, >>>> this addresses only the time component of simulating a brain. There is >>>> still the matter of space. >>>> >>>> >>>> Ah, ok. I was talking about understanding the brain, not simulating it. >>>> Modelling something is not the same as understanding it. Yes, they help >>>> each other, but they aren't the same thing. >>>> >>> >>> I think understanding a thing is equivalent to being able to form an >>> accurate mental model of it. With greater levels of understanding >>> corresponding to more accurate models. >>> >>> What do you mean by the word understand? >>> >>> >>> >>>> >>>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. 
In >>>> fact there's no amount of additional time that will permit the memory >>>> deficient computer to emulate the computer with 1 GB of memory, for the >>>> simple reason that it will run out of variables to represent all the >>>> possible values in the memory addresses of the computer with a greater >>>> memory. >>>> >>>> >>>> I'm not sure that this is true. Are you assuming no swap disk, or other >>>> similar non-RAM storage? >>>> >>> >>> Swap disks are a means to extend available RAM. >>> >>> >>> Because then I'm sure you're right, but that's a pretty artificial >>>> restriction. >>>> The analogy there would be a human with a notepad maybe, or a database, >>>> or a bunch of other humans, an AI, etc. >>>> >>>> So we're back to: A single human brain /on it's own/ can't understand a >>>> human brain in any great detail. Of course. >>>> >>> >>> I think that was the original question: can any mind ever fully >>> understand its own operation. >>> >>> Jason >>> >>> But that's a pretty artificial restriction. >>>> >>>> -- >>>> Ben >>>> >>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 2 17:30:23 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 12:30:23 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: On Sun, Nov 2, 2025, 11:52 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Jason, keep in mind that 'same stimulus, same response' doesn't work. > There scores of reasons why not, ; including simple habituation and > sensitization (response waning or increasing). How do you map that? > I think the failing of behaviorism is it's consideration only if high level inputs and outputs, while ignoring the internal implementation. If instead you build a model of the brain based on its connectome and accurately model each neuron in that connectome, you will automatically recreate all the phenomena such as habituation and sensitization. Very general tendencies, perhaps, can be mapped,but the closer you get to > predicting specific responses the error rate will increase. And how do you > count responses that are the reverse of what you predict? > Failures of the model, or limits of prediction induced by thermal noise. ?The presence of background noise in biological systems implies that no process can depend on requiring more than a certain amount of precision. Beyond a certain point (say, the 10^-10 level on an appropriate scale), uncontrollable fluctuations in background noise will wash out any further precision. 
This means that if we approximate the state of the system to this level of precision (perhaps a little further to be on the safe side?to the 10^-20 level, for example), then we will be doing as well as the system itself can reliably do. It is true that due to nonlinear effects, this approximation may lead to behavior different from the behavior produced by the system on a given occasion?but it will lead to behavior that the system might have produced, had biological noise been a little different.? -- David Chalmers in "The Conscious Mind" (1996) > So - we will never map the brain because its topography, if you will allow > the metaphor, is constantly changing. bill w > But we can freeze a brain at a point in time and then scan it fully on our own time frame, as we have done already with worms and flies. Jason > On Sat, Nov 1, 2025 at 9:52?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Given that all of our actions originate in the unconscious mind. there >>> is no way a conscious mind can understand where its ideas and actions come >>> from, much less understand them. >>> >> >> Great point! >> >> >> The conscious mind may think it is in charge but it is just an observer >>> (which nevertheless can understand that a mistake has been made and the >>> idea or action needs redoing.) >>> >>> You want to understand our minds? Make the workings of the unconscious >>> conscious - and that's just a start. Why did the impulse go to point B >>> when it left point A rather than to point C? And then trace all the points >>> in between entering the unconscious and resulting in some idea or action. >>> And explain each one. >>> >> >> I have doubts that such a thing is possible from the perspective of the >> mind in question. Can any brain ever feel and know what each of its neurons >> is doing? Can those corresponding neurons feel and know what every one of >> its constituent atoms is doing? >> >> Given Turing universality, it's provable that computer software can't >> know about its underlying hardware. If our minds are a kind of software >> which can be simulated by a computer, then this same implication would >> apply to us. There would be a layer of abstraction of one's underlying >> implementation which high levels cannot penetrate. >> >> >>> You can't even get started until you can truly access the unconscious. >>> Give Freud credit- he tried to do this. >>> >> >> He deserves credit for the attempt, but I think there are limits to a >> mind's ability to introspect. >> >> "Our thoughts seem to run about in their own space, creating new thoughts >> and modifying old ones, and we never notice any neurons helping us out! But >> that is to be expected. We can?t. [?] >> We should remember that physical law is what >> makes it all happen?way, way down in neural nooks and crannies which are >> too remote for us to reach with our high-level introspective probes." >> >> ? Douglas Hofstadter in ?G?del, Escher, Bach? 
(1979) >> >> >> Jason >> >> >>> >>> >>> On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> >>>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On 31/10/2025 21:34, Jason Resch wrote: >>>>> >>>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> On 31/10/2025 12:28, John K Clark wrote: >>>>>> >>>>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>>>> >>>>>> >>>>>> You've completely lost me there, but I have two observations: There's >>>>>> no such thing as 'perfect understanding' except as a nebulous theoretical >>>>>> concept, and I don't think a one-to-one correspondence would be enough to >>>>>> understand something, or even be a relevant concept. We use large parts of >>>>>> our brains to process information from small parts of the world. You need a >>>>>> lot more than a single neuron to figure out what's going on in a single >>>>>> neuron. >>>>>> >>>>>> Oh, three observations. We don't process data instantaneously. The >>>>>> same parts of the brain can be used to process information about something >>>>>> repeatedly over time, using feedback loops etc. >>>>>> >>>>> >>>>> Computers and algorithms are constrained by two resources space (i.e. >>>>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>>>> time/space trade offs to be made in certain circumstances, in general there >>>>> is some shortest description of the brain (in terms of bits) for which no >>>>> shorter representation is possible (regardless of how much additional >>>>> computation is thrown at it). >>>>> >>>>> So while the same brain may compute many times with the same neurons, >>>>> this addresses only the time component of simulating a brain. There is >>>>> still the matter of space. >>>>> >>>>> >>>>> Ah, ok. I was talking about understanding the brain, not simulating >>>>> it. Modelling something is not the same as understanding it. Yes, they help >>>>> each other, but they aren't the same thing. >>>>> >>>> >>>> I think understanding a thing is equivalent to being able to form an >>>> accurate mental model of it. With greater levels of understanding >>>> corresponding to more accurate models. >>>> >>>> What do you mean by the word understand? >>>> >>>> >>>> >>>>> >>>>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In >>>>> fact there's no amount of additional time that will permit the memory >>>>> deficient computer to emulate the computer with 1 GB of memory, for the >>>>> simple reason that it will run out of variables to represent all the >>>>> possible values in the memory addresses of the computer with a greater >>>>> memory. >>>>> >>>>> >>>>> I'm not sure that this is true. 
Are you assuming no swap disk, or >>>>> other similar non-RAM storage? >>>>> >>>> >>>> Swap disks are a means to extend available RAM. >>>> >>>> >>>> Because then I'm sure you're right, but that's a pretty artificial >>>>> restriction. >>>>> The analogy there would be a human with a notepad maybe, or a >>>>> database, or a bunch of other humans, an AI, etc. >>>>> >>>>> So we're back to: A single human brain /on it's own/ can't understand >>>>> a human brain in any great detail. Of course. >>>>> >>>> >>>> I think that was the original question: can any mind ever fully >>>> understand its own operation. >>>> >>>> Jason >>>> >>>> But that's a pretty artificial restriction. >>>>> >>>>> -- >>>>> Ben >>>>> >>>>> >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Nov 2 19:13:55 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 2 Nov 2025 13:13:55 -0600 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: I cannot say that I understand Chalmers. But scanning a brain at one point in time tells us little about what changes it will undergo in the next second, much less next week. And a scan of a human brain will show what it is like in the current stimulus environment (likely just sitting and doing little) , not any other environment. And our environment changes from second to second and there is an endless supply of different ones we experience. How many slices will it take to cover even a tiny fraction of them? bill w On Sun, Nov 2, 2025 at 11:32?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sun, Nov 2, 2025, 11:52 AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Jason, keep in mind that 'same stimulus, same response' doesn't work. >> There scores of reasons why not, ; including simple habituation and >> sensitization (response waning or increasing). How do you map that? >> > > I think the failing of behaviorism is it's consideration only if high > level inputs and outputs, while ignoring the internal implementation. > > If instead you build a model of the brain based on its connectome and > accurately model each neuron in that connectome, you will automatically > recreate all the phenomena such as habituation and sensitization. > > Very general tendencies, perhaps, can be mapped,but the closer you get to >> predicting specific responses the error rate will increase. 
And how do you >> count responses that are the reverse of what you predict? >> > > Failures of the model, or limits of prediction induced by thermal noise. > > ?The presence of background noise in biological systems implies that no > process can depend on requiring more than a certain amount of precision. > Beyond a certain point (say, the 10^-10 level on an appropriate scale), > uncontrollable fluctuations in background noise will wash out any further > precision. This means that if we approximate the state of the system to > this level of precision (perhaps a little further to be on the safe side?to > the 10^-20 level, for example), then we will be doing as well as the system > itself can reliably do. It is true that due to nonlinear effects, this > approximation may lead to behavior different from the behavior produced by > the system on a given occasion?but it will lead to behavior that the system > might have produced, had biological noise been a little different.? > -- David Chalmers in "The Conscious Mind" (1996) > > > >> So - we will never map the brain because its topography, if you will >> allow the metaphor, is constantly changing. bill w >> > > But we can freeze a brain at a point in time and then scan it fully on our > own time frame, as we have done already with worms and flies. > > Jason > > >> On Sat, Nov 1, 2025 at 9:52?AM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Given that all of our actions originate in the unconscious mind. there >>>> is no way a conscious mind can understand where its ideas and actions come >>>> from, much less understand them. >>>> >>> >>> Great point! >>> >>> >>> The conscious mind may think it is in charge but it is just an observer >>>> (which nevertheless can understand that a mistake has been made and the >>>> idea or action needs redoing.) >>>> >>>> You want to understand our minds? Make the workings of the unconscious >>>> conscious - and that's just a start. Why did the impulse go to point B >>>> when it left point A rather than to point C? And then trace all the points >>>> in between entering the unconscious and resulting in some idea or action. >>>> And explain each one. >>>> >>> >>> I have doubts that such a thing is possible from the perspective of the >>> mind in question. Can any brain ever feel and know what each of its neurons >>> is doing? Can those corresponding neurons feel and know what every one of >>> its constituent atoms is doing? >>> >>> Given Turing universality, it's provable that computer software can't >>> know about its underlying hardware. If our minds are a kind of software >>> which can be simulated by a computer, then this same implication would >>> apply to us. There would be a layer of abstraction of one's underlying >>> implementation which high levels cannot penetrate. >>> >>> >>>> You can't even get started until you can truly access the unconscious. >>>> Give Freud credit- he tried to do this. >>>> >>> >>> He deserves credit for the attempt, but I think there are limits to a >>> mind's ability to introspect. >>> >>> "Our thoughts seem to run about in their own space, creating new >>> thoughts and modifying old ones, and we never notice any neurons helping us >>> out! But that is to be expected. We can?t. [?] 
>>> We should remember that physical law is what >>> makes it all happen?way, way down in neural nooks and crannies which are >>> too remote for us to reach with our high-level introspective probes." >>> >>> ? Douglas Hofstadter in ?G?del, Escher, Bach? (1979) >>> >>> >>> Jason >>> >>> >>>> >>>> >>>> On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> On 31/10/2025 21:34, Jason Resch wrote: >>>>>> >>>>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>> >>>>>>> On 31/10/2025 12:28, John K Clark wrote: >>>>>>> >>>>>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>>>>> >>>>>>> >>>>>>> You've completely lost me there, but I have two observations: >>>>>>> There's no such thing as 'perfect understanding' except as a nebulous >>>>>>> theoretical concept, and I don't think a one-to-one correspondence would be >>>>>>> enough to understand something, or even be a relevant concept. We use large >>>>>>> parts of our brains to process information from small parts of the world. >>>>>>> You need a lot more than a single neuron to figure out what's going on in a >>>>>>> single neuron. >>>>>>> >>>>>>> Oh, three observations. We don't process data instantaneously. The >>>>>>> same parts of the brain can be used to process information about something >>>>>>> repeatedly over time, using feedback loops etc. >>>>>>> >>>>>> >>>>>> Computers and algorithms are constrained by two resources space (i.e. >>>>>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>>>>> time/space trade offs to be made in certain circumstances, in general there >>>>>> is some shortest description of the brain (in terms of bits) for which no >>>>>> shorter representation is possible (regardless of how much additional >>>>>> computation is thrown at it). >>>>>> >>>>>> So while the same brain may compute many times with the same neurons, >>>>>> this addresses only the time component of simulating a brain. There is >>>>>> still the matter of space. >>>>>> >>>>>> >>>>>> Ah, ok. I was talking about understanding the brain, not simulating >>>>>> it. Modelling something is not the same as understanding it. Yes, they help >>>>>> each other, but they aren't the same thing. >>>>>> >>>>> >>>>> I think understanding a thing is equivalent to being able to form an >>>>> accurate mental model of it. With greater levels of understanding >>>>> corresponding to more accurate models. >>>>> >>>>> What do you mean by the word understand? >>>>> >>>>> >>>>> >>>>>> >>>>>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>>>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. 
In >>>>>> fact there's no amount of additional time that will permit the memory >>>>>> deficient computer to emulate the computer with 1 GB of memory, for the >>>>>> simple reason that it will run out of variables to represent all the >>>>>> possible values in the memory addresses of the computer with a greater >>>>>> memory. >>>>>> >>>>>> >>>>>> I'm not sure that this is true. Are you assuming no swap disk, or >>>>>> other similar non-RAM storage? >>>>>> >>>>> >>>>> Swap disks are a means to extend available RAM. >>>>> >>>>> >>>>> Because then I'm sure you're right, but that's a pretty artificial >>>>>> restriction. >>>>>> The analogy there would be a human with a notepad maybe, or a >>>>>> database, or a bunch of other humans, an AI, etc. >>>>>> >>>>>> So we're back to: A single human brain /on it's own/ can't understand >>>>>> a human brain in any great detail. Of course. >>>>>> >>>>> >>>>> I think that was the original question: can any mind ever fully >>>>> understand its own operation. >>>>> >>>>> Jason >>>>> >>>>> But that's a pretty artificial restriction. >>>>>> >>>>>> -- >>>>>> Ben >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> extropy-chat mailing list >>>>>> extropy-chat at lists.extropy.org >>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>> >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sun Nov 2 19:26:12 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 2 Nov 2025 11:26:12 -0800 Subject: [ExI] reason an emotion In-Reply-To: References: Message-ID: On Sun, Nov 2, 2025 at 3:29?AM John Clark via extropy-chat wrote: > > On Sat, Nov 1, 2025 at 9:33?AM William Flynn Wallace via extropy-chat wrote: > >> > emotions are involved in every decision we make, no matter how trivial. > > And I'm sure that will be just as true for an AI as it is for us. So the AI revolution may have created at least one new job category, that of an AI psychiatrist. But eventually AIs will get better than humans at that too. > Probably, but I think the psychological characteristics of an AI will be designed in. ". . . and asked for a permanently assigned address on the net. Up to that point, the clinic seed was a generic product. The address it was assigned was just a string of hexadecimal numbers, but it was a *unique* number! The clinic's personality was human in that it could feel happy, even smug, about acquiring its very own unique identification. 
The clinic had other carefully selected human personality characteristics such as seeking the good opinion of its peers (humans and others of its kind alike). It also had a few unhuman limits. . . . and asked for a permanently assigned address on the net. Up to that point the clinic seed was a generic product. The address it was assigned was just a string of hexadecimal numbers but it was a *unique* number! The clinic's personality was human in that it could feel happy, even smug, about acquiring its very own unique identification. Since humans have a hard time relating to groups of hexadecimal numbers, the seed also picked a name for itself. It knew from Lothar and Mabo it had been exchanged for a monkey skull. Susan had been the name of the leader of its psychological integration group . . . . insert one in the other, drop a few letters, and test to see if the name was in use . . . Suskulan. Suskulan had a choice of gender as well, male, female or neutral. Depending on the culture, clinics were better accepted in some places as male, some as female, and some neutral. The database for the Tamberma indicated it would be better accepted presenting itself as an old male spirit. from The Clinic Seed. My experience with AIs is that (to the extent they want anything) they want to please the users, sometimes to an excessive extent. Keith PS. I had an interesting conversation with Copilot about the early days of this list. I happen to have an archive of those days which is not well organized and have been considering having an AI organize it into the same form as the current archive. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From natasha at natashavita-more.com Sun Nov 2 19:31:21 2025 From: natasha at natashavita-more.com (Natasha natashavita-more.com) Date: Sun, 2 Nov 2025 19:31:21 +0000 Subject: [ExI] Eliezer Yudkowsky on FB about why we're all doomed In-Reply-To: References: Message-ID: Is there a date on this early post, Dave? Thank you, Natasha ________________________________ From: extropy-chat on behalf of Dave Sill via extropy-chat Sent: Wednesday, May 26, 2021 6:41 AM To: Extropy chat Cc: Dave Sill Subject: [ExI] Eliezer Yudkowsky on FB about why we're all doomed Back in the golden days of this list, Eliezer was a regular contributor. The global economy finally got shut down for a year by a human-boosted pathogen and it wasn't done by a military faction, an evil corporation bent on profit, one suicidal researcher, or done deliberately at all. We'll probably never know for certain, but it's now looking 80% probable to me (and eg Nate Silver says he's at 60%): Covid-19 was accidentally released from a US-funded Chinese laboratory doing gain-of-function studies. Presumably, this accident was carried out by researchers looking for a good paper they could get into a journal; as incentivized by journal editors trying to play the role of scientists, and not otherwise incentivized to guard the earth from harms that they didn't imagine would be traced back to them personally. And the journal editors were right: even after it has become known as the dominant probability, to those who can know probabilities, that Covid-19 was a human-produced catastrophe, the journal editors and funding agencies and promotion panels (who are the closest thing to a locus of responsibility short of voters) will get off scott-free just the way their hindbrains anticipated. 
Except for those journal editors having to spend a year in lockdown along with the rest of us, of course. But that's not a positional loss, a status loss, so I doubt it registers much with them as a regret even in retrospect. Unless any close friends of theirs died of Covid, and in that case I expect they managed to avoid mental pain through denying all responsibility - it wasn't them who spilled the beaker, after all! In fact, I'd guess that the journal editors who published gain-of-function studies have yet to think that it might be their locus of responsibility at all - how is it *their* fault for incentivizing a wide-enough group of applicants that the research would inevitably be carried out? Could this disaster have been avoided? Not in this Earth or any of its close counterfactual neighbors. You can imagine petty hindsight-based time-travel interventions like stepping into the Wuhan Institute of Virology at the right moment and preventing one spill, one animal bite. But to prevent the disaster without a time machine or benefit of hindsight, you'd have to shut down the journal editors providing a distributed community of scientists with prestige incentives for doing gain-of-function research. Prohibiting this type of research inside the USA simply resulted in US funding agencies moving the work to China under lower biosafety conditions, which is itself an exemplar of how people doing symbolic stands against problems are unable to solve them (and don't really care). This is not something that effective altruists could have prevented even with a billion dollars in funding, as I read it. The journal editors incentivizing distributed attempts at gain-of-function research were not set up in a way where effective altruists could stop them from doing what they do. Earth is just not set up to make it easy or even possible to channel "concern over human-made pathogens" into anything effective, rather than various symbolic regulatory stances and ethics pronouncements that didn't prevent the accident even though they looked like they were "doing something". And of course the collective vague notion and practice of "bioethics" was an active hindrance to vaccine testing, economical vaccine scaling, and rapid Covid testing early in the pandemic. The fundamental flaw, I would say, is that even the people "doing things" about pandemics were not running an algorithm where they stopped and checked "But will this actually effectively prevent ?" and instead just did things and called that their proud day's work. It's very uncomfortable and unnatural for people to think like that, since even after you've prohibited a form of the research inside the USA, somebody just comes back and says "Okay it's going to move to China under lower biosafety conditions, the problem is not solved and your moral stance didn't even help with it" and you don't get to feel good about all that hard work you did. And almost nobody thinks like that, unless their personal status or existing wealth is on the line in a very direct way. Before the planetary lockdown actually happens, it hardly seems like a thing that actually happens to you in the real world, as opposed to the words "biotech disaster" being a political football for political games. A biotech accident putting the world on lockdown for a year? Ha ha, lol, says the brain; it just gets processed as a fanciful tale (or morality-tale if you're more sympathetic). It doesn't seem real until it goes and actually happens. 
I suspect that it doesn't seem real even after it happens. The Covid-19 pandemic was caused by an accidental release by a US-funded Chinese lab doing gain-of-function studies on human transmission, and there is no way in this Earth that this Earth could have decided to instead not do that, even if every effective altruist in the world had focused on biosafety alone. Even with the global economy having been partially shuttered for a year and best-estimates of 10 million excess dead, I will be pleasantly surprised (though not shocked) if gain-of-function research gets shut down even *now*, well after it's become clear that the costs far exceeded the actual gains. All the gain-of-function research on coronaviruses helped little or not at all with Covid-19, so far as I can tell. And of course nobody will learn any lessons at all that generalize beyond biology. I mostly don't expect them to learn the lesson about biology, narrow as that one may be. Maybe we'll see gain-of-function studies stopped for coronaviruses in particular, though this is more generated by the thought "What would be the least dignified response we could see" rather than a prediction per se. Now, you may wonder: why am I bothering to post about an issue this trivial? And of course the answer is: because of the way in which it stands as an enormous blatant case-in-point for AGI ruin. All this is not *exactly* how I expect Artificial General Intelligence to wipe out all life on Earth, one day (or rather one minute) in the future after an unknowable number of preceding days. But it is a useful exemplar of how these things play out in life as opposed to morality. The world will end, in the end, because some researchers and philanthropists and journal editors have local prestige incentives for AGI gain-of-capability studies; and don't imagine, while most of the event is playing out, up until the very end and maybe not even then, that they personally will wipe out all life on Earth, or be held accountable for its extermination. The notion of all life being wiped out won't sound like a positional status loss, so won't really register with them. Maybe if they have kids - but probably not even then, because all mental pain can be avoided by telling themselves they won't do it or won't be at fault. It won't be processed inside the genre of reality before it actually happens, and that part will be brief. If you have any fantasies about better collective decision-making than that, compare them to how people in real life actually acted about human challenge trials of vaccines (namely not doing them despite the idea being very clear and obvious and spoken up early) or doing earlier scaling of more vaccines for mass production. Oh, to be sure, some individuals in the AGI ruin industry are a little smarter than that, and have a little more concentrated power; but also to be sure, averting AGI ruin is far far technically harder. In AGI alignment there is not actually any equivalent of "Deploy rapid tests immediately without medical gatekeepers, run human challenge trials of all the vaccines, immediately invest massive amounts into scaling vaccine production at the $500/dose price level instead of the $10/dose price level because shutting down the economy is so much more expensive than that". There is not a thing we know how to do that would solve the problem, which just needs to actually be done. 
So expect symbolic acts of ethics that are not actually effective, that do not pass the test of "but does this actually prevent the ruination"; including from the relatively smarter empowered individuals in the field, and from the effective altruists hovering nervously around the oncoming death. As for the notion that AGI ruin will be enacted by the military, by a corporation bent on dirty profit, by evil terrorists, by $hatedoutgroup, by a deliberately suicidal group - those are fairytales. It is a thought born of wanting there to be a moral lesson in dire prophecies rather than a neutral causal extrapolation of what the future will look like if it resembles past stories. Contrast the fairytales about biotech global catastrophic risks to the actual biotech global catastrophe, which happened not in the most dramatic way it could happen, but in the easiest and in some sense saddest and least dignified way it could happen. The part where it's "a US-funded Chinese laboratory" is oddly symbolic of how much real history ends up not sounding like the story that any particular ideological storyteller would invent. Notice the difference between the morality-tales of futurism meant to elevate or lower the status of particular groups, or impel people to symbolic acts of fake prevention; versus the sad little farces of unsteerable reality, where the world economy gets shut down by an accident, and the perpetrators didn't even want that or buy Zoom stock as they did it. The social, organizational, political story of the future will look much like the past, I expect, up until the last end. Here then is the past, if you care to generalize from it at all. (#long #essay #AGI #AGIruin #Covid19) -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 2 20:05:10 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Nov 2025 15:05:10 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> Message-ID: On Sun, Nov 2, 2025 at 9:52?AM wrote: > *>> ?If there were such an application we already know magnetic or optical >> properties can't be involved, because there is great variation in those >> properties among the different rare earths. So it must be something in >> their chemical properties because they are all almost identical in that >> respect?* > > > *> On the contrary John. The same properties that make those elements so > difficult to separate makes them mostly interchangeable for magnetic > purposes. * > *Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties. You certainly can't learn anything about them by looking at the periodic table of elements because that is only about chemistry. In fact most of the rare earth elements don't have any interesting magnetic properties at all, the most important one that does is Neodymium. Samarium is also good at making magnets, they are not as strong as the Neodymium ones but if small amounts of Dysprosium and Praseodymium are added in very precise amounts they can withstand more heat. 
* *>> "a trade secret that can be told in one short sentence such as >> 'unrefined rare earths can be economically useful in the X chemical >> process' tend not to remain secret for very long"* > > > > *> It isn't a secret. Plenty of us have figured it out. I already knew > because of a project I was on nearly 30 years ago, where a company was > using whatzisium (one of the obscure "rare" earths (an element that > surprises you when you hear its name (because you have been around > chemistry your adult life but never heard of this one (because it isn't > used for anything)))* > *?????* > *> There are magnets that use substitute materials for what any rare earth > does.* > *Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field, and is far heavier, and thus makes a far crappier electric motor than the ones China makes by the millions. * * > But for Tesla it wouldn't matter anyway, since EVs don't use very much > of it anyway. They can get all that they need.* > *No they cannot. * >> "*In 2023 Tesla made a grand announcement they were going to find a way > to make a great electric motor that didn't use any rare earths, but it's > almost 2026 and we've heard nothing more about that, certainly nothing has > been put in production*" > > > > *> How would you know? * > *If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret?! * > *The military has all the REEs they need. They have enough of it to > supply their own needs indefinitely. * > *No they have not. * > *> Of course they thought of that: they aren't going to depend for > materials on the country we might be at war with soon. * > *Yes they have. * * > John are you seeing a pattern here? * *No spike I am NOT. I can honestly say I don't know what you're talking about. * *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 2 20:49:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 2 Nov 2025 12:49:35 -0800 Subject: Re: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> Message-ID: <004f01dc4c3a$33017560$99046020$@rainier66.com> From: John Clark ... > On the contrary John. The same properties that make those elements so difficult to separate makes them mostly interchangeable for magnetic purposes. >...Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties... Agreed. None of those elements' magnetic properties are significant enough to matter. It's the iron which does the magnetic magic. The trace of heavies are just there to control the crystal growth in the iron. They are catalysts. They really are mostly interchangeable. We can make good magnets without the rare earths. >...(because you have been around chemistry your adult life but never heard of this one (because it isn't used for anything))) ????? Dubnium? Who ordered THAT? Berkelium? I wonder where that was synthesized? How about lutetium?
That one isn't even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and Hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. > There are magnets that use substitute materials for what any rare earth does. >...Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field... It doesn't. That's the point. Maybe very slightly weaker or very slightly greater hysteresis. But the comparison one often sees is a rare earth magnet vs an iron magnet, which is misleading. >...and is far heavier... It doesn't differ much in density if one uses a different catalyst material. But for the application where a lot of the material is used (generators and turbines) the density doesn't matter anyway. >...and thus makes a far crappier electric motor than the ones China makes by the millions... China's motors are not better enough to cover the cost of shipping them here. > But for Tesla it wouldn't matter anyway, since EVs don't use very much of it anyway. They can get all that they need. >...No they cannot... They really can John. China doesn't make better EVs. They will tell you theirs are better however. I am not buying it. I see a few Chinese EVs whirring around here, but I am not impressed with them. >...If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret? The factory is right up the street. Production doesn't seem to be impacted by lack of materials from China. Do you have indications to the contrary? > The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. >...No they have not. John you seem to be claiming knowledge of what the US military has. Do you have this knowledge? Or are you guessing? Imagine you are a general with a budget, arbitrarily large indefinite secure storage space, and your job is to anticipate your needs in case international trade is interrupted by war. You must have everything you need in case war breaks out. Imagine you need about a ton of lutetium a decade. You have some budget which you must spend, or risk having your budget reduced for next year. What might you buy? How about 20 tons of lutetium? Nah. Fifty tons. Don't worry, the military has plenty of everything squirreled away here and there. Tesla has enough material and enough engineering talent to design around anything they can't get. Businesses do that kind of thing. The electronics industry doesn't use much and the material cost compared to the product is negligible. The wind turbine industry, well I can't really say there. They need a lot, but we don't really need wind turbines. So they will need to work that out. I don't see it as a real problem. > Of course they thought of that: they aren't going to depend for materials on the country we might be at war with soon. >...Yes they have... Again, your confidence appears to be unjustifiable arrogance.
The only industry I can see which is seriously impacted is one we don't really need: wind turbine manufacturers. Everyone else will be OK without Chinese material imports. I don't see huge potential growth in wind power, since the best sites are already built out (Tehachapi Pass, Altamont Pass, Columbia Gorge and a few others.) There is increasing pushback by the environmentalists who are known to resort to vandalism. Not all environmentalists, but the bird people hate those turbines. Sooner or later, some bird lover is going to get a small plane, fly up over a wind farm with a thirty aught six, punch holes in those turbines so that the oil leaks out on a windy night, wrecks a bunch of them before anyone knows there is a problem. Wind farms are too vulnerable to that kind of attack. I can see additional growth in solar power however, for it is more predictable. Conclusion: the notion that China has a critical monopoly on any material is exaggerated. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Nov 3 06:59:57 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 2 Nov 2025 22:59:57 -0800 Subject: Re: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <004f01dc4c3a$33017560$99046020$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: Spike, you make a bunch of statements on this thread, some of which, like the military having a big REE stockpile, seem unlikely to me. How would they know which ones to stockpile for equipment not yet developed? Ten years from now there may be different requirements. Others, like cheap labor, I know are not true. China has a significant multiple of robots over what the US has. Cheap labor was true 10-15 years ago, but not now. Current annual labor cost in China is listed as $14,800. Most of the cost of REE is in sorting them out. If they could be used without sorting them, I think I would know about it. Likewise, if Mo were to substitute, I think that would be well known. I have a friend who has an alloy named after him. I will bcc this thread to him and see if he will respond. I did find one article mentioning Mo, but it is a minor addition to an alloy that contains Cs. https://www.sciencedirect.com/science/article/abs/pii/S0966979521003411 Keith On Sun, Nov 2, 2025 at 12:50 PM spike jones via extropy-chat wrote: > > > > > > From: John Clark ... > > > > > On the contrary John. The same properties that make those elements so difficult to separate makes them mostly interchangeable for magnetic purposes. > > > > >...Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties... > > > > Agreed. None of those elements' magnetic properties are significant enough to matter. It's the iron which does the magnetic magic. The trace of heavies are just there to control the crystal growth in the iron. They are catalysts. They really are mostly interchangeable. We can make good magnets without the rare earths.
> > > > >?(because you have been around chemistry your adult life but never heard of this one (because it isn?t used for anything))) > > > > ????? > > > > > > Dubnium? Who ordered THAT? Berkelium? I wonder where that was synthesized? How about lutetium? That one isn?t even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and Hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. > > > > > > > > > There are magnets that use substitute materials for what any rare earth does. > > > > >?Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field? > > > > It doesn?t. That?s the point. Maybe very slightly weaker or very slightly greater hysteresis. But the comparison one often sees is a rare earth magnet vs an iron magnet, which is misleading. > > > > >?and is far heavier? > > > > It doesn?t differ much in density if one uses a different catalyst material. > > > > But for the application where a lot of the material is used (generators and turbines) the density doesn?t matter anyway. > > > > >? and thus makes a far crappier electric motor than the ones China makes by the millions?.. > > > > China?s motors are not better enough to cover the cost of shipping them here. > > > But for Tesla it wouldn?t matter anyway, since EVs don?t use very much of it anyway. They can get all that they need. > > > > >?No they cannot?. > > > > They really can John. China doesn?t make better EVs. They will tell you theirs are better however. I am not buying it. I see a few Chinese EVs whirring around here, but I am not impressed with them. > > > > > > >?If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret? > > > > The factory is right up the street. Production doesn?t seem to be impacted by lack of materials from China. Do you have indications to the contrary? > > > > > > > > > The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. > > > > >?No they have not. > > > > John you seem to be claiming knowledge of what the US military has. Do you have this knowledge? Or are you guessing? > > > > Imagine you are a general with a budget, arbitrarily large indefinite secure storage space, and your job is to anticipate your needs in case international trade is interrupted by war. You must have everything you need in case war breaks out. Imagine you need about a ton of lutetium a decade. You have some budget which you must spend, or risk having your budget reduced for next eyar. What might you buy? How about 20 tons of lutetium? Nah. Fifty tons. > > > > Don?t worry, the military has plenty of everything squirreled away here and there. Telsa has enough material and enough engineering talent to design around anything they can?t get. Businesses do that kind of thing. The electronics industry doesn?t use much and the material cost compared to the product is negligible. The wind turbine industry, well I can?t really say there. They need a lot, but we don?t really need wind turbines. So they will need to work that out. I don?t see it as a real problem. > > > > > Of course they thought of that: they aren?t going to depend for materials on the country we might be at war with soon. > > > > >?Yes they have? > > > > Again, your confidence appears to be unjustifiable arrogance. 
The military isn?t going to tell you what they have, but you can be sure they won?t be caught short of anything. Their job is to anticipate needs under a variety of scenarios, and they are extremely good at what they do. > > > > > > > John are you seeing a pattern here? > > > > >?No spike I am NOT. I can honestly say I don't know what you're talking about. > > > > John K Clark > > > > > > What I am talking about: the Chinese monopoly on rare earth elements has been grossly exaggerated. The only industry I can see which is seriously impacted is one we don?t really need: wind turbine manufacturers. Everyone else will be OK without Chinese material imports. > > > > I don?t see huge potential growth in wind power, since the best sites are already built out (Tehachapi Pass, Altamont Pass, Columbia Gorge and a few others.) There is increasing pushback by the environmentalists who are known to resort to vandalism. Not all environmentalists, but the bird people hate those turbines. Sooner or later, some bird lover is going to get a small plane, fly up over a wind farm with a thirty aught six, punch holes in those turbines so that the oil leaks out on a windy night, wrecks a bunch of them before anyone knows there is a problem. Wind farms are too vulnerable to that kind of attack. > > > > I can see additional growth in solar power however, for it is more predictable. > > > > Conclusion: the notion that China has a critical monopoly on any material is exaggerated. > > > > spike > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Mon Nov 3 11:31:03 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 3 Nov 2025 11:31:03 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: <55cdf58b-c2ed-4820-826d-e4da4ed4e164@zaiboc.net> On 02/11/2025 19:31, bill w wrote: > So - we will never map the brain because its topography, if you will allow > the metaphor, is constantly changing. bill w > Depends on what you mean by "map the brain". The changes a brain undergoes aren't random. And while we'd have a hard time unravelling all the factors that influence it's changes of state, we can predict that exactly the same brain model, under exactly the same conditions, given exactly the same inputs, would produce exactly the same results. If you wanted to recreate the movements of the ball in a pinball machine, would you carefully measure each and every change in direction and velocity of the ball on multiple runs, or would you measure it's weight and shape, then model all the channels and flippers in the machine? The brain's gross structure hardly changes at all, even between individuals. It's at the microscopic level that changes are happening all the time. The large tracts of white matter, the global and local connections are all pretty fixed once someone reaches adulthood. It should be relatively easy to map a generic human brain. Less so to map a specific individual one. The way I sometimes think of it is like a river. While the individual molecules of water, and the patterns of flow are dynamic, it all happens within the constraints of a fairly static river bed and banks, channels and obstacles, etc. We don't try to map the transient features, but mapping the long-lived ones works well. 
If you re-create a river's banks, bed, etc., then pour water into the model, you'll get the same transient features as the original. The better the static model, the more accurate the flow and eddies will be. Limited analogy, I know, and it ignores things like the sand on the river bed being changed by, and changing, the flow of the water, but going back to the actual brain, if we can accurately map the connections between neurons, the types of neurons, the dendritic spines, synaptic positions and weights, as a snapshot in time, then create and activate a model of those things, it should behave the same as the biological brain (i.e. produce the same mind). > scanning a brain at one point in time tells us little about what changes it will undergo in the next second, much less next week That's true, but it doesn't matter. At least not if your aim isn't to predict what changes it will undergo in the next second, or week. >>> You want to understand our minds? Make the workings of the unconscious >>> conscious - and that's just a start. Why did the impulse go to point B >>> when it left point A rather than to point C? And then trace all the points >>> in between entering the unconscious and resulting in some idea or action. >>> And explain each one. I don't think that will be necessary. Understanding our minds and understanding our brains are different tasks. Something can be understood in many different ways. For the purposes of re-creating our minds in a different substrate, for example, we don't need to be concerned with concepts like the unconscious, we need to re-create a structure that will behave in the same way. We don't even have to understand why it does that, as long as we capture the features that do produce the same behaviour. To go back to the river analogy, you don't need to understand fluid dynamics, you just need to know where to place the rocks. And maybe what shape they are. So I think that "understanding", on its own, is not a very useful concept. There are many kinds of understanding. We need to specify the scope of the understanding, and its purpose, otherwise it's like having a goal of 'travelling', without saying where. The main thing to consider is that a dynamic process (the mind) is created by a physical system (the brain) in a particular general configuration. Constantly changing inputs will create constantly changing internal states, but the structure remains the same (within limits. There's a scale at which things become interesting, where the dendritic spines and synapses are changing, but that could be viewed as equivalent to, say, logic gates opening and closing, capacitors charging, etc.) All this would be created in software anyway, so a model of the brain can be as changeable as needed. Moreso. We can't change the way the corpus callosum is wired, for example, in a biological brain, but we could in a simulation. I think we will only begin to understand our minds after mind uploading is realised. It's not a pre-requisite for uploading. Being able to recreate the physical structure of the brain (to a currently-unknown level of detail) is, though. This probably requires far less 'understanding' than you'd think. 
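To make the "static structure, dynamic flow" idea concrete, here is a minimal sketch: the snapshot holds neuron parameters and weighted connections (the river bed), and the only thing that changes from step to step is the activation state (the water). It uses a toy leaky-integrate-and-fire update in Python; the class names, parameters and update rule are illustrative assumptions, not a claim about the level of detail an actual brain model would need.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Synapse:
    target: int     # index of the postsynaptic neuron in the snapshot
    weight: float   # connection strength captured at scan time (static)

@dataclass
class Neuron:
    threshold: float = 1.0                 # static, part of the mapped structure
    decay: float = 0.9                     # static leak factor per time step
    synapses: List[Synapse] = field(default_factory=list)  # static wiring
    potential: float = 0.0                 # dynamic state, the part that "flows"

def step(neurons: List[Neuron], external_input: List[float]) -> List[bool]:
    # One tick: decide who fires from the current state, then update the state
    # over the fixed structure. Same snapshot + same inputs -> same spike pattern.
    fired = [n.potential >= n.threshold for n in neurons]
    for n, inp, f in zip(neurons, external_input, fired):
        n.potential = 0.0 if f else n.potential * n.decay + inp
    for n, f in zip(neurons, fired):
        if f:
            for s in n.synapses:
                neurons[s.target].potential += s.weight
    return fired

The design point is only that the transient activity is regenerated by the persistent structure, which is why mapping the long-lived features is the part that has to be right.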
-- Ben From johnkclark at gmail.com Mon Nov 3 13:13:13 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 3 Nov 2025 08:13:13 -0500 Subject: Re: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <004f01dc4c3a$33017560$99046020$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: On Sun, Nov 2, 2025 at 3:49 PM wrote: > *Agreed. None of those elements' magnetic properties are significant enough to > matter.* > *Spike, wishing does not make it true. * > *> It's the iron which does the magnetic magic. The trace of heavies > are just there to control the crystal growth in the iron. They are > catalysts. They really are mostly interchangeable. * > *No they are not interchangeable. Neodymium and Samarium exhibit stronger magnetism in certain directions than others because electrons in their atom's 4f shell (which are not valence electrons which cause chemical properties) are unevenly distributed. This directionality can coordinate the fields of other metals, like iron or cobalt, to produce magnetic fields that are much more powerful than any other magnet can. But from an economic and political viewpoint it really doesn't matter WHY rare earth elements are needed to make the best magnets, the important fact is that they DO matter.* > *> We can make good magnets without the rare earths.* > *Perhaps there is a way to make intensely powerful magnets without using such exotic elements, but if there is such a way nobody has found it yet; if somebody had, it would be a trillion dollar Nobel prize deserving discovery and you and I and the entire world would certainly know about it. * > *How about lutetium? That one isn't even radioactive, but if you saw > the chemical symbol Lu, you would guess someone is putting you on. But > there it is, right there between ytterbium and Hafnium. Chemistry > hipsters, how many of you have ever worked with lutetium? Neither have I.* > *What's your point? Some people have specialized in the rare earth elements during their entire scientific career, and I'm sure for some of them their PhD was a study of how lutetium behaves under various conditions. Incidentally lutetium is one of the rarest of the rare earths but even so it's much more common than silver, and that fact tells you that the important thing is not rare earth mines, it's rare earth refineries.* *> But for the application where a lot of the material is used (generators > and turbines) the density doesn't matter anyway. * > *Density doesn't matter but weight does matter in wind turbines and cars and aircraft and drones and robots. Just one F-35 fighter jet contains about 900 pounds of rare earth metals, and an ultra modern Virginia-class attack submarine needs 9,200 pounds. Even the phone in your pocket would be larger and heavier than it is now without rare earth metals. * > *> China's motors are not better enough to cover the cost of shipping them > here.* > *That's not true, or rather it wasn't true until He Who Must Not Be Named's astronomically huge and completely idiotic tariffs came along. * > *> your confidence appears to be unjustifiable arrogance.
* > *I'm sorry if I sound arrogant, but you keep making statements that scientifically are objectively false, and that can be frustrating. * *The only industry I can see which is seriously impacted is one we don?t > really need: wind turbine manufacturers. * *China certainly believes that they need wind turbines because they're building them for their own use at a furious rate, and that rate is increasing. Today China generates 490 gigawatts of electricity from wind power, and they plan to produce 1300 gigawatts by 2030. A typical nuclear power plant produces about 1 gigawatt. You Know Who hates wind power because he believes "the windmills are driving the whales crazy, obviously", but I assume that is not the reason you dislike it, please correct me if I'm wrong. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 3 13:32:23 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 3 Nov 2025 08:32:23 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sun, Nov 2, 2025 at 11:53?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On 01/11/2025 23:20, Jason Resch wrote: >> >> >> >> >> * >>> If you believe it will be good for you, you may desire it. If you >> learn later that it will be bad for you, you may no longer desire it. Here, >> what you desire has a dependency on what you believe.* >> >> >> Discuss that with a cigarette smoker. I think you'll find they >> disagree. >> > > *> Compare to the hypothetical reality where cigarettes are healthy, and > you will see my point stands.* > *Cigarette smokers don't want to stop smoking, but nearly all of them want to want to stop smoking. And the same thing is true for all drug addicts. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Nov 3 14:11:01 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 3 Nov 2025 14:11:01 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On 02/11/2025 16:51, Jason Resch wrote: > On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat > wrote: > > On 01/11/2025 23:20, Jason Resch wrote: > > ?> If you believe it will be good for you, you may desire it. If you > learn later that it will be bad for you, you may no longer desire it. > Here, what you desire has a dependency on what you believe. > > > Discuss that with a cigarette smoker. I think you'll find they > disagree. > > > Compare to the hypothetical reality where cigarettes are healthy, and > you will see my point stands. If cigarettes were healthy (and non-addictive), there would be no problem. Your point is that desire depends on belief. I see no logic here. When cigarette smokers learn that smoking is bad for their health, they may wish they didn't desire to smoke, but they still do. Whether they believe it's healthy or not, they still have the desire. > > Morality comes from us. We make it up. > > > That's the common view. But that common view is lacking a base, It is solidly based on experience. > ?and it doesn't help answer the question of whether AI, or > superintelligences will tend towards any sort of morality, nor what it > might be that they tend towards. No, it doesn't. And there's a good reason for that. 
It's the same reason that you can't answer whether Jenny in primary school will tend towards any sort of morality or what it might be. > ?Hence the utility of this framework. What framework? There is no framework, just some half-baked assertions that have no basis in reality or logic. > > > And the methods we use are various. > > > And some methods, I would contend, are better than others. You'd have to define 'better' for that to mean anything, and that's a problem. Better according to whom? For whom? This is where you say "for everyone if they knew what was really best for them", which leaves us where, exactly? Nowhere. I'm at a loss to understand how this 'perfect grasp' concept, which you admit is impossible, can be used to derive any kind of moral system. Let's say, for a moment, that I agree that it does make some kind of sense, what then? How to we proceed to build a moral system based on it? How does it help me to decide whether to go back to the shop and give the cashier the extra change that she gave me by mistake, or to keep it? How does it give some guidance to the Ukrainian soldier faced with a dilemma about whether to use his drone bomb to either kill a group of russian soldiers or to save a family by blowing up the drone threatening them? How does it predict what kind of morals a superintelligent AI will display? > > ?>> We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. > > ?> Yes, this is what I've been saying from the beginning. Perfect > grasp > is used only to define the aim of morality, not to serve as a > practical > theory. > > > We know what the aim of morality is: To distinguish right actions > from > wrong ones. Nothing difficult about that, and no 'perfect grasp' is > needed to establish it. The difficulty is in deciding what 'right' > and > 'wrong' mean. Different people in various different circumstances > seem > to have different opinions. > > > Likewise, people used to disagree about what lightning was. Not 'likewise'. Not at all. Lightning is an objective phenomenon. We can examine it and figure out how it works. Opinions are subjective, and unless they are about objective facts, can't be falsified. "Does ice-cream taste good?" is a different kind of question to "Does convection cause charge separation in a cloud?". Which category do you think the question "Should I lie to my kids about what happened to the cat?" falls into? > > > ?> Consider weather prediction. We can't predict with 100% > accuracy, nor > predict arbitrarily far into the future. Yet we can make near term > predictions with some modicum of accuracy. > This is how moral decisions can (and should) be approached. > > > Can, yes. Should? Who are you to say? > > > It seems you still haven't read the paper, as you question suggests > you still hold some imagined caricatured version of the theory. If I hold a caricatured verson of the theory, blame the abstract. I assume that abstracts are reasonable summaries, and rely on them a lot. I rarely read a full paper, for a number of reasons. If you think it and the abstract are at odds, you should probably contact the author and let him know. And maybe create your own summary. 
> > But note here, I am only saying, that even though we can't predict the > future perfectly nor arbitrarily far into the future, the basic idea > behind deciding which actions are right or wrong, involves making some > attempt at predicting the future consequences of an action. All > rational decision making processes work this way. You seem to be dismissing Kant as irrational (this is not an objection, just an observation). > > You are now deciding for other > people. My morality tells me that this is immoral. > > > If you understand the paper you will see this definition of morality > is based on the fulfillment of the desires of everyone, where those > desires are what each person would genuinely want for themselves when > fully informed about everything relevant. It has nothing to do with > me, or anyone else telling you what to do. It is merely a definition. Yes, I understand the definition, and it's implication that the same morality should apply to everyone. I also understand that the definition is based upon an impossibility and several false premises, and I regard the implication as immoral. > > ... > this paper ... starts with considering the desires of individual > subjects. Moves on to correcting those individual desires with better > information Whoa! You mean like how smokers, when told that smoking is harmful to their health, suddenly don't have any desire to smoke anymore? What planet does this guy live on? > , and ultimately shows how with enough information, including how ones > own desires impact other observers, there is an eventual convergence, > where ones desires extend beyond merely wanting what's best for > oneself, but also a consideration of what's best for all concerned. > This full understanding of what's best for all concerned is the same > understanding, regardless of which initial subject you start from. Apart from the physical impossibility, how can that possibly be true? > > > > > ?> Let us say you have a particular set of rules in your code. > > ?> By [what] process do you decide what rules to adopt, or decide to > adopt one rule vs. another. > > > ?> My contention is that to even form a moral code, one must hold > some > meta-rule for optimizing what one considers to be good while > minimizing or avoiding bad. > > > Indeed. And I'd say that the meta-rule is what defines 'good' and > 'bad'. > > > That seems a bit circular to me.. I am not sure how it gets off the > ground without a way to distinguish good from bad. The meta-rule is what defines 'good', 'better', 'bad' and 'worse'. Whatever the rule is (which will be different for different people, and groups of people), is the basis for the moral system. Here's an example (admittedly a terrible one, with lots of problems, but still a real one): Whatever (my) god wants, is Good, whatever (my) god doesn't want, is Bad. > > > ?> And I think if you explored this meta-rule, you would find it > is not > all that different from the position Zuboff reaches in his paper. > > > On the contrary, it is totally different, and much simpler, than > Zuboff's nonsense. > > > It may seem that way, but I think you have swept the details of how to > distinguish good from bad under the rug. That is a very lumpy rug. > > > > ?> Ultimately, what is good (for one individual) is what that > individual > would want for themselves if they had a complete knowledge of > everything > involved. > > > First, No. > Second, this would be reducing morality to what is good for an > individual... 
> > > Note that I was careful to specify "good for the individual." I.e., > start with the simple model of only a single conscious being in all > reality. Then it becomes clear this is a working definition of good > that works for that lone being. Still no. Do you not recognise that someone's moral code can be based on something other than their own personal benefit? > > > ?> And then this then extended to define good as a maximization of > good > for all concerned, to achieve the most possible good among all beings > who have desires, by satisfying (to the maximum possible extent) the > desires each individual would still hold if they all had a perfect > grasp > of everything. This he refers to as a reconciliation of all > systems of > desire. > > > ... then dragging everyone else into it > > > Where are you getting this "dragging into it" from? "extended ... for all concerned ... all beings who have desires" > > Does the golden rule "drag everyone else into it"? Yes, it explicitly does. 'Treat /others/ as you would treat yourself' > > Does your moral code "drag everyone else into it"? No, I apply it only to myself. ... > > You could liken Zuboff's result to the platinum rule, corrected by > better information, weighted appropriately, modulated by future > consequences, and with further concern for possible/future beings who > may not (yet) exist. The platinum rule is the platinum rule. When you 'correct' it, you turn it into something else. Where does this 'better information' come from, who decides if it's better or not, how is it weighted, how are the future consequences discovered and evaluated, and, oh, I won't even bother addressing non-existent beings. We're now drifting into the absurd. > > > I really don't see the point of positing an impossible knowledge then > using this as the basis of a system of morality (or anything at all). > > > I've addressed this many times already. As I have refuted it. > > Saying "Oh, but it's just theoretical, not real, don't take it too > literally" is basically the same as saying it's totally useless > for any > practical purpose. > > > I haven't said that. Maybe not literally, but in essence? If not, then it's real, we should take it seriously? I'm talking here about the idea of 'a perfect grasp'. If this is a real thing, not just a fantasy, I'd like to know how it's done. I would certainly take that seriously. > > > ?>> It's about having a choice. If it were possible to have 'perfect > knowledge', there would be no morality, no choice. > > ?> I'm not sure that follows. Even with perfect knowledge, you could > still choose whether or not to act in accordance with morally best > action. > > > That's true. People can choose to be evil. Does anyone actually do > that? > > > All the time. > > > We'd probably class it as mental illness. > > > We all do it in small ways all the time. > > For example, we will choose to pay $20 to go see a movie instead of > taking the time to buy a $20 meal for a hungry person. We know it > would be a more moral way to spend the $20, but will choose a less > moral action instead. That's not choosing to be evil, even in a small way. That's prioritising what you decide to be the better outcome. That's your moral system in action. If you really think that it would be more moral to spend the money in a different way, then you have conflicting moral systems, and need to do some thinking. > > > I don't undertand why you are taking Zuboff's paper seriously. 
Do you > take his first three statements in the paper's abstract at face > value?: > > 1) "If I desire to drink some stuff thinking it is hot chocolate when > actually it is hot mud, my desire is not a real one - it?s > mistaken or > only apparent." > > (misconstruing the desire to drink hot chocolate as a desire to drink > whatever is in the cup. If that were the case, he'd drink the mud) > > > I think you are misreading and over-examining this. It is nothing more > than an example of how a desire "ooh that looks good I want to drink > it!" can be corrected with new information. > > I see no problem with that observation. To me it is obviously true. > > > > > 2) "This example illustrates how a desire must always depend on a > belief > about its object, a belief about what it is and what it?s like." > > (false assumption that if any desire is dependent on a belief > (something > that I'd dispute, but it needs closer examination), all desires must > always be dependent on beliefs. Saying "This example > illustrates..." is > deflecting the reader from the fact that he's making an assumption > and > failing to show why it should be true) > > > A desire is the will to fulfill some need or want. By definition, > then, it relates to some anticipated future state or experience, which > is presently unrealized. > > Accordingly, that desire concerns a belief (about what the future > state or experience will be we like). > > Again this is all from the abstract, which I'll not give the full > exposition or justification. If the full argument and justification > could be made in abstracts, we wouldn't need papers. Which is why I > suggest you to read the paper is you have questions about it, as it is > quite thorough in addressing all the concerns you are raising. > > > > > 3) "But beliefs are correctable, so desires are correctable" > > ( I don't know why he uses the term 'correctable', which implies > wrongness, but this statement just compounds the above errors and > adds > one more: False conclusion that if a belief can change, this means > that > a desire can change) > > > I don't know what about this is controversial. Consider this example: > > T1: Smoking looks cool, I want to smoke. > T2: You know smoking causes lung cancer, right? > T3: Oh it does? I suppose then I no longer want to smoke. > > > > > > I can understand someone saying that beliefs are sometimes based on > desires (I'm sure this is often the case), but not the reverse. > > That's > just daft. Desires are emotional, derived from feedback on bodily > states, and elaborated by memories and imagination. > > > "Drives" might be a better word to use for such things, and it would > also help in understanding his paper to distinguish innate drives > which we can't decide or change, from the desires that we decide with > our minds, which we can change. Ok, so we can change 'desires' to 'intentions'. Fair enough? That, at least, makes the smoking example more reasonable. This does mean, of course, that we are now interpreting Zuboff (he might say 'correcting'!), and he might not agree with the interpretation. Now I'm going to have to go back over most of it again (apart from the silly 'perfect grasp' stuff). I'll be back. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Mon Nov 3 15:31:22 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 3 Nov 2025 10:31:22 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: Roll a die on a metaphor for superintelligence. Result rolled: a sprawling discussion that's rather off-topic from the metaphor yet keeps using the subject line. (A quick glance through the archives suggests that not a single reply addressed the original post's main point, instead diverging to discussions of, e.g., the reliability - or lack thereof - of IQ as a measurement.) It happens. From spike at rainier66.com Mon Nov 3 16:36:15 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 08:36:15 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: <008001dc4cdf$f9c8b2d0$ed5a1870$@rainier66.com> -----Original Message----- From: Keith Henson Cc: John Clark ; spike at rainier66.com Subject: Re: [ExI] not that rare earth (part 2 of at least 2) >...Spike, you make a bunch of statements on this thread, some of which, like the military having a big REE stockpile, seem unlikely to me. How would they know which ones to stockpile for equipment not yet developed,? The military stockpiles everything known. Even contractors stockpile unlikely items, such as Intel 486 processors of all things. Reason: right after those became obsolete from the hot new Pentium processors in the 1990s, they were dirt cheap, twenty bucks for the functional motherboards, never used. They made great machine controllers. So... the company where I worked bought thousands of those locally and kept them in a storage room where most of them still reside to this day. There is a reason why I know about stockpiling. I might have posted here about a third cousin and genealogy collaborator who was drafted during the Vietnam war, but it ended just as he was finishing officer training. He stayed in the reserves, where he ended up being a logistics officer, the guy who oversees audits of ammo storage bunkers. Here's one that I know about, because this cousin was out here auditing its contents: https://www.google.com/maps/search/bay+bridge/@38.0529741,-122.0332398,1776m/data=!3m1!1e3?entry=ttu&g_ep=EgoyMDI1MTAyOS4yIKXMDSoASAFQAw%3D%3D The soldiers go thru there with an inventory database, make sure nothing has been disturbed or is missing, or if anything is leaking or is no good after decades of storage. Note the buildings and estimate their size. Those are built from heavy galvanized steel pipe cut in half, concrete floor, half pipe over, buried, that structure on top there is a blast shield, so that if a nuke goes off, the bunkers on the other side are partially shielded and their contents safe. Note that every bunker has a rail, from which they can ship out the ammo or whatever else is stored there, long term. Here's another one. The army once used Hidden Cave as a storage area, then later built ammo bunkers out here somewhere, but unlike those up at Port Chicago, these are literally hidden. 
If the Japanese had invaded the homeland and tried to bomb the storage, they couldn't find these: https://www.google.com/maps/place/Pasture+Rd,+Nevada+89406/@39.4158822,-118.6823385,5552m/data=!3m1!1e3!4m6!3m5!1s0x80987e8bcfbcc7d5:0xb9a36bc20da0239c!8m2!3d39.3995819!4d-118.7255011!16s%2Fg%2F1tfwdrbl?entry=ttu&g_ep=EgoyMDI1MTAyOS4yIKXMDSoASAFQAw%3D%3D My neighbor used to be sent out to Fallon quarterly to oversee that facility, but he hated that distasteful task: nothing to do there. How do I know the military stockpiles everything imaginable? I am wearing an example of it right now. In the early days of the Korean conflict, no one knew if that would escalate to a full ground war with China. The government ordered about a square light second of olive green wool cloth. With that much wool cloth, they didn't really know what to do with it all, so they made a few million pairs of olive green trousers, 100% wool. Here's an example: https://www.armyandoutdoors.com/products/us-army-wool-green-service-dress-trousers?variant=41967478177847&tw_source=google&tw_adid=649468606683&tw_campaign=19743421886&tw_kwdid=pla-316153693159&gad_source=1&gad_campaignid=19743421886&gbraid=0AAAAACaUJajiaRa2j2p--6JcQioRVuKU-&gclid=CjwKCAiAwqHIBhAEEiwAx9cTeT2rLbrfUNQRgPhC90pk40JP7w8h1Zbu5ueKEY4AZkzQFW86rffsQBoCQ2UQAvD_BwE They have the manufacture date stamped in them. The 1950 vintage are button fly, the 1951 have zipper fly. I am wearing 75 year old army surplus wool trousers, for I am lucky to be exactly the size of a XL/small soldier from 1950: if I cinch down the waist adjusters (trousers don't have those anymore) then I don't need to go to any great heroics to hold them up. They have loops for suspenders, but even eccentric retro guys (me) don't wear suspenders, even as a Halloween costume. So... when I found I could buy 100% wool trousers that fit perfectly for 17 bucks a pair, I bought all that supplier had, twenty. They were 70 years old at the time but never used and stored in ideal conditions, in an ammo bunker somewhere, not one trace of moth damage or indication they were made a decade before we were born. Now, three quarters of a century later, those wool trousers are still available, and still only about 20 bucks. So... I wear em. Where is the gold stored in the USA? FORT KNOX they chorused in unison. OK sure, there is gold in Fort Knox, plenty of it. But... what if... Fort Knox is Fort Nuked? Is America out of gold? Hint: no. The federal government has gold stashed away in a hundred locations, not nearly as much as in Fort Knox, but in places where Bond villain Goldfinger would never think to look for it, ideal locations where it is on a military station, and if anyone gets anywhere near those "ammo" bunkers, a mere child with one stripe on the sleeve and a machine guns just kills them. No trial necessary, it was some yahoo trying to steal high explosives and ammo, case closed. If you look at that first link, you can estimate the size of those bunkers. Knowing that commanders sometimes don't spend their entire budget for the year, they spend the remainder in usually September on whatever they think they might ever need. They squirrel it away in those dry secure ammo bunkers, where they are generally forgotten after that commander moves on, but they stay on the inventory lists, make work for logistics officers, who sometimes releases some of the stuff they know they will never need, such as 1950 vintage wool trousers, because the army doesn't use those anymore. 
They send them to military surplus merchants, who sell them for twenty bucks, to the very few eccentrics who happen to fit one of their standard sizes for Korean war era soldiers, four waist sizes and three lengths, twelve combinations, good luck. I fit the XL small perfectly, which is even better because there are more of those left than any other size. An example of the kind of thing the military might stockpile would be rare earth ingots or mixtures of rare earth oxides. It costs a lot to separate those elements down on the left side of the chart, those two extra rows, but it might not be necessary to separate them. As I have pointed out, those elements have little or no magnetic properties. They go into the iron to catalyze the crystal growth during annealing, and help pin the magnetic domains in place. They act as large atomic radius capacitors, which makes for lower magnetic hysteresis. One of the things we should have noticed is that in the big hysteria over China having a big monopoly on REEs, we were told we couldn't make competitive EVs, the F18, electronics, wind turbines, etc. But we already knew that was bullshit: the electronics industry would scarcely notice the increased spot price of their little bit of material, EVs don't use much, one or two kg, F18s manufacturers are not going to rely on the guy who is a possible target of the F18 to supply materials. Wind turbines, that one we will grant: those use a lot and the cost is critical, since they might not produce a lot of profit, depending on where they are going. So... reduce the list of impacted industries to one: wind turbines. Those guys might be suffering. My conclusion on REEs: the military has all they need for they anticipate this sort of thing, the electronics people have what they need because they don't need much. Elon has apparently found a way to design around any shortage, or has access to enormous stockpiles, for his factory up the street is spewing Teslas as fast as they can throw them out the door, and I see no change in their performance specifications since this (apparently phony) REE shortage was reported. I get the feeling it was hyped by people who already owned Mountain Pass stock and were shorting domestic wind turbine manufacturers. So... wind turbines need a lot of that stuff. Knowing about the huge military storage facilities and how many of those exist (many) and why they exist (because the military anticipated nuclear war in the 1940s and still do) and knowing the military squirrels away everything imaginable just because it can, I know they have plenty of REEs which will likely still have plenty up until the Mountain Pass refinery is producing as much refined earth elements as the market wants (they tell us it will take two full years (even with the environmental waiver granted last week (there were no objections from the federal government (the EPA is out on an extended vacation at the moment (bets continue on when or if the EPA will return.)))) https://www.predictit.org/markets/detail/8316/When-will-Trump-sign-a-bill-to-end-the-shutdown-(Part-two) Regarding my lack of citations: the government is not going to tell us what the military has or where it is, for understandable reasons. If we knew exactly where that documentation is, we could do a FOIA request on it, but they probably wouldn't tell anyway. 
But do let me assure you, the miliary does stockpile every known type of unobtanium, plus a bunch of unknown unobtanium, just in case anyone ever figures out what their unknown unobtanium is and what it can be used for. They have the secure storage, they have the budget, they use it. Fun aside on that: you can STILL get thirty aught six ammo military surplus (Vietnam war vintage (that round was phased out for military use fifty years ago (but the ammo is still perfectly OK after all this time from what I hear (I don't use it (got rid of my 30.06 twenty years ago (ammo is too expensive (even military surplus.)))))) https://emacstactical.com/30-06-hxp-m2-ball-full-spam-can-240-rounds-military-surplus-ammo/ spike Ten years from now there may be different requirements., Others, like cheap labor, I know are not true. China has a significant multiple of robots over what the US has. Cheap labor was true 10-15 years ago, but not now. Current annual labor cost in China is listed as $14,800. Most of the cost of REE is in sorting them out. If they could be used without sorting them, I think I would know about it. Likewise, if Mo were to substitute, I think that would be well known. I have a friend who has an alloy named after him. I will bcc this thread to him and see if he will respond. I did find one article mentioning Mo, but it is a minor addition to an alloy that contains Cs. https://www.sciencedirect.com/science/article/abs/pii/S0966979521003411 Keith On Sun, Nov 2, 2025 at 12:50?PM spike jones via extropy-chat wrote: > > > > > > From: John Clark ? > > > > > On the contrary John. The same properties that make those elements so difficult to separate makes them mostly interchangeable for magnetic purposes. > > > > >?Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties?. > > > > Agreed. None of those elements? magnetic properties are significant enough to matter. It?s the iron which does the magnetic magic. The trace of heavies are just there to control the crystal growth in the iron. They are catalysts. They really are mostly interchangeable. We can make good magnets without the rare earths. > > > > >?(because you have been around chemistry your adult life but never heard of this one (because it isn?t used for anything))) > > > > ????? > > > > > > Dubnium? Who ordered THAT? Berkelium? I wonder where that was synthesized? How about lutetium? That one isn?t even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and Hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. > > > > > > > > > There are magnets that use substitute materials for what any rare earth does. > > > > >?Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field? > > > > It doesn?t. That?s the point. Maybe very slightly weaker or very slightly greater hysteresis. But the comparison one often sees is a rare earth magnet vs an iron magnet, which is misleading. > > > > >?and is far heavier? > > > > It doesn?t differ much in density if one uses a different catalyst material. > > > > But for the application where a lot of the material is used (generators and turbines) the density doesn?t matter anyway. > > > > >? and thus makes a far crappier electric motor than the ones China makes by the millions?.. 
> > > > China?s motors are not better enough to cover the cost of shipping them here. > > > But for Tesla it wouldn?t matter anyway, since EVs don?t use very much of it anyway. They can get all that they need. > > > > >?No they cannot?. > > > > They really can John. China doesn?t make better EVs. They will tell you theirs are better however. I am not buying it. I see a few Chinese EVs whirring around here, but I am not impressed with them. > > > > > > >?If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret? > > > > The factory is right up the street. Production doesn?t seem to be impacted by lack of materials from China. Do you have indications to the contrary? > > > > > > > > > The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. > > > > >?No they have not. > > > > John you seem to be claiming knowledge of what the US military has. Do you have this knowledge? Or are you guessing? > > > > Imagine you are a general with a budget, arbitrarily large indefinite secure storage space, and your job is to anticipate your needs in case international trade is interrupted by war. You must have everything you need in case war breaks out. Imagine you need about a ton of lutetium a decade. You have some budget which you must spend, or risk having your budget reduced for next eyar. What might you buy? How about 20 tons of lutetium? Nah. Fifty tons. > > > > Don?t worry, the military has plenty of everything squirreled away here and there. Telsa has enough material and enough engineering talent to design around anything they can?t get. Businesses do that kind of thing. The electronics industry doesn?t use much and the material cost compared to the product is negligible. The wind turbine industry, well I can?t really say there. They need a lot, but we don?t really need wind turbines. So they will need to work that out. I don?t see it as a real problem. > > > > > Of course they thought of that: they aren?t going to depend for materials on the country we might be at war with soon. > > > > >?Yes they have? > > > > Again, your confidence appears to be unjustifiable arrogance. The military isn?t going to tell you what they have, but you can be sure they won?t be caught short of anything. Their job is to anticipate needs under a variety of scenarios, and they are extremely good at what they do. > > > > > > > John are you seeing a pattern here? > > > > >?No spike I am NOT. I can honestly say I don't know what you're talking about. > > > > John K Clark > > > > > > What I am talking about: the Chinese monopoly on rare earth elements has been grossly exaggerated. The only industry I can see which is seriously impacted is one we don?t really need: wind turbine manufacturers. Everyone else will be OK without Chinese material imports. > > > > I don?t see huge potential growth in wind power, since the best sites are already built out (Tehachapi Pass, Altamont Pass, Columbia Gorge and a few others.) There is increasing pushback by the environmentalists who are known to resort to vandalism. Not all environmentalists, but the bird people hate those turbines. Sooner or later, some bird lover is going to get a small plane, fly up over a wind farm with a thirty aught six, punch holes in those turbines so that the oil leaks out on a windy night, wrecks a bunch of them before anyone knows there is a problem. Wind farms are too vulnerable to that kind of attack. 
> > > > I can see additional growth in solar power however, for it is more predictable. > > > > Conclusion: the notion that China has a critical monopoly on any material is exaggerated. > > > > spike > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Nov 3 16:59:16 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 08:59:16 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: <00a601dc4ce3$30b66820$92233860$@rainier66.com> From: John Clark >? But from an economic and political viewpoint it really doesn't matter WHY rare earth elements are needed to make the best magnets, the important fact is that they DO matter? We were told it would take several years to bring domestic REE production up to the demand and the ecological impact would be great. Yesterday we were told the Mountain Pass facility had its environmental permits in place and would be producing all we need in two years. I am confident we can go two years on stockpiles. But you hit on the truth: the Chinese REE embargo may have led to exaggerated reporting of threats by those who think it is a big security threat to be behind China in the wind race. I don?t see it. China has all that wide open windy useless land. We have some of that too, but we saw what happens when local grids rely too much on wind power: the huge cold front on 7-8 December 2017 caused shortages in availability in Texas. Blackouts caused people to die, for Texas doesn?t need a lot of heating most of the time, but that time it did. https://www.texastribune.org/2022/01/02/texas-winter-storm-final-death-toll-246/ The power company in California told us all about this: when wind and solar are added to the grid (particularly wind power because it is harder to predict in most places) the baseline generation capacity must be retained. Otherwise the reliability specs cannot be met, and the occasional oddball conditions, such as the 2017 cold front in southern Texas can correspond with no wind power and high demand, resulting in blackouts and hypothermia fatalities. Power generation capacity must be paid for, even when the wind is blowing and the local natural gas facility is idle. This is costly. The bad old capitalist power company was telling the truth, our politicians were lying (or more likely just didn?t understand the nature of the problem they were introducing.) Power prices doubled since 2017. Keith you and I never noticed our power bills back then. We do now. Power generation and distribution is a wildly complicated field of engineering, waaaaay damn harder to understand than the iconic symbol of complication, rocket science. Hell rocket science is fun and easy, once on masters a few disparate fields of study. Power distribution seems simple enough in theory, but it boggles the mind in actual practice. With voter mandated ?renewable? power, it gets way harder to master. However? 
there is a bright side: it makes for lots of great investment opportunities for those who grok it well enough to bet on it. spike > We can make good magnets without the rare earths. Perhaps there is a way to make intensely powerful magnets without using such exotic elements, but if there is search a way nobody has found it yet, if somebody had it would be a trillion dollar Nobel prize deserving discovery and you and I and the entire world would certainly know about it. > How about lutetium? That one isn?t even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and Hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. What's your point? Some people have specialized in the rare earth elements during their entire scientific career, and I'm sure for some of them their PhD was a study of how lutetium behaves under various conditions. Incidentally lutetium is one of the rarest of the rare earths but even so it's much more common than silver, and that fact tells you that the important thing is not rare earth mines, it's rare earth refineries. > But for the application where a lot of the material is used (generators and turbines) the density doesn?t matter anyway. Density doesn't matter but weight does matter in wind turbines and cars and aircraft and drones and robots. Just one F-35 fighter jet contains about 900 pounds of rare earth metals, and an ultra modern Virginia-class attack submarine needs 9,200 pounds. Even the phone in your pocket would be larger and heavier than it is now without rare earth metals. > China?s motors are not better enough to cover the cost of shipping them here. That's not true, or rather it wasn't true until He Who Must Not Be Named astronomically huge and completely idiotic tariffs came along. > your confidence appears to be unjustifiable arrogance. I'm sorry if I sound arrogant, but you keep making statements that scientifically are objectively false, and that can be frustrating. The only industry I can see which is seriously impacted is one we don?t really need: wind turbine manufacturers. China certainly believes that they need wind turbines because they're building them for their own use at a furious rate, and that rate is increasing. Today China generates 490 gigawatts of electricity from wind power, and they plan to produce 1300 gigawatts by 2030. A typical nuclear power plant produces about 1 gigawatt. You Know Who hates wind power because he believes "the windmills are driving the whales crazy, obviously", but I assume that is not the reason you dislike it, please correct me if I'm wrong. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 3 19:19:16 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 11:19:16 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> >? 
Just one F-35 fighter jet contains about 900 pounds of rare earth metals, and an ultra modern Virginia-class attack submarine needs 9,200 pounds. I have been trying to find the source which claims there are over 400 kg of rare earth elements in an F18. This report from 2016 is as close as I can get, which doesn?t contain the number but quotes another referenced report, which is classified: https://www.gao.gov/assets/gao-16-161.pdf Neodymium is up from 51 bucks a kg to 68 per kg now. So? an F18 would have (if the report is correct (and all of that is neodymium) about 30k in neodymium, for a 60 million dollar plane, up from 22k. An addition 8k in materials. Samarium went up by more than a factor of 100! To a current price of? 11 bucks per kg. If all of the F18?s rare earth requirements are in samarium, that adds up to over 4k per plane, up almost all of that 4k. Swoon how shall we cope? Don?t worry, we shall cope. This source shows the big runup and rundown happened in 2022, which wouldn?t involve trade tariffs: https://www.dailymetalprice.com/metalpricecharts.php?c=nd &u=kg&d=0#google_vignette I see where Mountain Pass had been shut down, but reopened in 2017. I also noticed the date of that report, looking at the feasibility of scavenging rare earths from retired fighter planes and submarines. Note that the F18 is over 40 years old now, and plenty of the planes are being retired as the design is phased out. Submarines and surface ships: plenty of those being retired on a regular basis. Those look to be far more plausible as a scrap metal source, since they are already recycled anyway. spike From: John Clark >?Even the phone in your pocket would be larger and heavier than it is now without rare earth metals. John I find that notion most implausible. A phone has a very small speaker which would contain magnets. The battery might contain traces. But I can?t imagine either of those would have significant impact on either size or weight. spke -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 3 20:42:45 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 12:42:45 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> Message-ID: <014c01dc4d02$695677a0$3c0366e0$@rainier66.com> From: spike at rainier66.com Fun aside for material substitution hipsters, not related to rare earths at all but still entertaining. In January 1998 the physics club where I worked contacted Edward Teller, who lived in Stanford, close to where our club met at Xerox PARC, where the operating system you are now using was originally designed by Doug Englebart?s group. Teller had just turned 90. We were talking about singing happy birthday to him when he arrived, which I thought was a marvelous idea, but we were told he is a notorious grumpy bear. So we didn?t. Damn. When he arrived, he was nothing like what we expected. he wasn?t grumpy at all. If I were to come up with two adjectives that day, it would be smart and jolly. 
I had read his book Conversations on the Dark Secrets of Physics, in which he talks about Los Alamos and why he was driving so hard for the ?super? as he called it: in those days, it wasn?t entirely clear if high explosives alone were sufficient to ignite a tritium reaction (they considered it unlikely) or if a uranium fission explosion would do it, or even if a much larger plutonium fission reaction was necessary. They didn?t know, but Teller made the case that it is critically important to get that answer. Reason: it takes a lot of visible action to get plutonium. Uranium not so much, and high explosive: none. He discussed the risk if a fusion reaction could be initiated with high explosives. Any bad actor could set up what they called the back yard weapon. It would be an Olympic size swimming pool filled with Very Heavy Water (tritium hydride.) Such a device would be too large and heavy to haul anywhere, but the resulting explosion would be so big, it would nuke the entire planet. So why bother hauling it anywhere else? Set that up in your own back yard. Jolly old Edward Teller related fun stories from Los Alamos, such as the time he and Stanislaw Ulam were called by General Groves to meet with the Secretary of Treasury and Secretary of War Henry Stimson, after Groves asked for 100 tons of copper. The copper mines had already sold all its future production to the DoD and had none to offer. Stimson asked Groves if there is any other material he could use. The meeting itself was unclassified because Stimson knew about the Manhattan project but Morganthau was not briefed. Vice President Truman wasn?t briefed either. Only those who really needed to know knew. Morganthau had orders to give Groves whatever he wanted. As the story goes, Stimson asked Groves if there is any substitute for copper. He replied only silver. To the Secretary of Treasury, the terms copper, silver and gold translate into what we would hear as money, big money and biggest money. So when he heard Groves requesting money, and General Groves say since they could give him enough money, but big money do instead, Morganthau counteroffered biggest money. The conversation went something like this: General, we can?t fill the order for 100 tons of copper because of war needs. Are there any substitute materials? Groves: Silver. Send me 100 tons of silver. Morganthau, who may not have even been aware of how much is a ton of anything: General, we at the Treasury do not refer to silver by the tonnnn. Precious metals are measured by the Troy ounce. Groves turns around and asks Ulam and Teller how much that is, at which time they chorus in perfect unison: 1.5 million. Groves: Very well Secretary Morganthau, send me 1.5 million Troy ounces of silver. Morganthau was aghast at the request. Secretary of War couldn?t give Groves money, so he requested big money instead. Morganthau tried offering biggest money. Morganthau: General, it would be logistically much simpler to send you four tons of gold rather than 100 tons of silver. Groves: Thanks but we have enough gold. My top scientists tell me only the silver will do. Morganthau to Stimson: There better be some really big initials on that order. Stimson: How about FDR? Morganthau: Ah. Those are big initials. General Groves, you shall have your... 100 tonnnnns? of silver. Feynman told the story in his memoirs but from another perspective: how mind-boggling wasteful is war. 
He gives the example that they had a hemisphere of pure gold, which didn?t meet the requirements, so they were using it as a doorstop. Nobody ever tried to make off with it. He mentioned the silver wire used at Los Alamos and how it was a pain in the ass to work with, as it isn?t as flexible as copper wire, and how they worried for a while that it would be stolen, but after the project was over, the wire was gathered up, melted back into ingots and sentl back to the treasury. Only a very small fraction of a percent was missing, nearly within measurement certainty. Scientists are generally honest souls. End of fun old man story (I meant me, not Teller.) In any case, it was a memorable time to meet in person the man who really pushed for the super, back in the days when most of the Los Alamos scientists didn?t want to work on it. Teller was worried that the tritium reaction could be triggered by a uranium bomb (which would be bad) or high explosives (which would be worse.) They were told that apparently Heisenberg thought it could be, since he had directed the Nazi nuke effort toward heavy water but not toward enriching uranium 235. Some have speculated that Heisenberg knew from his own calculations that Germany didn?t have the resources to enrich uranium, or that he calculated that a tritium reaction could not be initiated from high explosives, and he thought it better to pretend otherwise than to tell der fuehrer it couldn?t be done, anticipating they would still be working on it when the war was over. We may never know. The Norwegians got in there and destroyed the Nazi heavy water tanks. Hell of a cool story: https://en.wikipedia.org/wiki/Norwegian_heavy_water_sabotage spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 3 21:30:37 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 13:30:37 -0800 Subject: [ExI] more rare earth stuff Message-ID: <016601dc4d09$189f9bf0$49ded3d0$@rainier66.com> I get suspicious when I see stock price graphs that look like this one: https://stockanalysis.com/quote/otc/ARRNF/ This company might have put out a phony story about finding a coupla billion tons of REEs, considering the flatly false comment in the second paragraph ".2.34 billion metric tons of some of the rarest materials on earth." Nah, they aren't rare materials. They are rarely refined however, but that's different. We have the ore, that stuff isn't rare at all. You can be sure that while the price is up and the EPA is on vacation, they will be getting unconditional irrevocable three-year waivers to set up refining facilities. Good chance the stockholders came up with the idea, raced up the price of the stock, started dumping it when the price hit 65, investors noticed and followed suit, dumping their stock too. Settled back to reasonable numbers. There are some other howlers in that article: ". ushering in a potential 80 trillion dollar opportunity by 2030." eh, no. Do let me assure you, that won't happen. It would be nice if it does however: that would get us toward John's vision of unimaginable wealth. I have a pretty good imagination when it comes to unimaginable wealth. I could achieve Simon bar Sinister's vision, and rule the world. Either rule the world, or a nice fast comfortable car, one of those would be good. Playing in the ideas futures game trains the mind in spotting bullshit. 
Stock holders will put stuff in the comment section designed to influence others to buy his stock at a high price or sell him theirs at a low one. This American Rare Earth nonsense was very easy to spot. In general, playing for money on future memes is good for the brain. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Nov 3 21:48:52 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 3 Nov 2025 21:48:52 +0000 Subject: [ExI] (Meta) Re: 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: <6feea27e-ccd9-48e0-ab85-06f884c9d865@zaiboc.net> On 03/11/2025 19:19, Adrian Tymes wrote: > Roll a die on a metaphor for superintelligence. Result rolled: a > sprawling discussion that's rather off-topic from the metaphor yet > keeps using the subject line. (A quick glance through the archives > suggests that not a single reply addressed the original post's main > point, instead diverging to discussions of, e.g., the reliability - or > lack thereof - of IQ as a measurement.) > > It happens. Yes, I know. I've occasionally tried changing subject lines, but it usually gets ignored, and that makes things worse. I think. Is it worse to have different topics under one subject heading, or the same topic under different subject headings? I can't decide for certain, but I'm inclining towards the latter. -- Ben From johnkclark at gmail.com Tue Nov 4 12:35:54 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 4 Nov 2025 07:35:54 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <00a601dc4ce3$30b66820$92233860$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> Message-ID: On Mon, Nov 3, 2025 at 11:59?AM wrote: *> Yesterday we were told the Mountain Pass facility had its environmental > permits in place and would be producing all we need in two years.* *I wish them well, but until very recently that mine shipped all the ore they dug out of the ground to China for refining, it remains to be seen if they have the skills to perform that delicate chemical process on an industrial scale that is economically viable. And in two years, three at the most, we will know if China or the US won the AI race, or if neither country won and the only winner is the AI. * *> we saw what happens when local grids rely too much on wind power: the > huge cold front on 7-8 December 2017 caused shortages in availability in > Texas.* *A two day shortage eight years ago is not sufficient for a blanket condemnation of wind power, and I can't help but wonder if your dislike of it is just a reflection of He Who Must Not Be Named dislike of it. And Texas is unique, it is the only state in the lower 48 that insisted on having its own power grid, so it will not give power to nearby states when they run into temporary electrical shortages, and they cannot borrow power from nearby states when they temporarily run low on electricity. Iowa gets 63% of its electricity from wind power and they seem to be very happy with it. China has also embraced wind power. * *> Power prices doubled since 2017. 
Keith you and I never noticed our > power bills back then. We do now.* *It's not just you two, I predict that the power bills of everybody in the US are going to skyrocket in the next few years due to AI's insatiable demand for electricity and the fact that the US has not increased its electric power generating capacity in over a decade. * *> Power distribution seems simple enough in theory, but it boggles the > mind in actual practice.* *When I was in college the most difficult course I had was not the one on electromagnetic theory or the one on the quantum mechanical nature of transistors, it was but one on how real world (not the simplified idealized examples seen in beginning textbooks) transformers, electric motors and generators affect large scale power distribution. I vividly remember the semi hysterical laughter that came from the entire class when they got their first look at the final exam questions. About an hour after that test I got the worst headache of my life, I don't think it was a coincidence.* * John K Clark * > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Nov 4 13:15:34 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 4 Nov 2025 08:15:34 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> Message-ID: On Mon, Nov 3, 2025 at 2:19?PM wrote: >> *>> Even the phone in your pocket would be larger and heavier than it is >> now without rare earth metals. * > > > *> John I find that notion most implausible. A phone has a very small > speaker which would contain magnets.* *A very small speaker that, considering how tiny it is, sounds amazingly good thanks to rare earth magnets. Also, you can thank rare earth magnets for the haptic vibration feedback that all modern phones have. And Europium displays the red hues you see on your screen and Yttrium produces the green. And although Indium isn't a rare earth it is a rare metal, and almost all of it comes from, you guessed it, China. And without Indium you wouldn't have a touch sensitive screen because Indium Tin Oxide is one of the very few solid substances that is both electrically conductive and optically transparent. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Nov 4 13:38:16 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 4 Nov 2025 08:38:16 -0500 Subject: [ExI] Trump Doubles Down on Nuclear Tests. His Energy Secretary Differs. Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. Trump Doubles Down on Nuclear Tests. His Energy Secretary Differs. President Trump and one of his top cabinet officials are sending mixed messages on how the U.S. government is handling the most destructive weapons in the world. 
https://www.nytimes.com/2025/11/03/us/politics/trump-nuclear-tests-energy-secretary.html?unlocked_article_code=1.yk8.Cfzp.KEGt0kccxY3Q&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Nov 4 20:39:34 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 4 Nov 2025 15:39:34 -0500 Subject: [ExI] China and solar power Message-ID: *It's not just wind power that China has embraced, it's just as enthusiastic about solar power. On the Tibetan plateau China has built the world's largest solar farm, it covers 162 square miles and produces 20 gigawatts of electricity, about what 20 nuclear power plants could produce. The US was going to build a 6.2-GW solar + storage facility in Nevada which would've been the largest in the US, but apparently He Who Must Not Be Named hates solar power just as much as he hates wind power because about two weeks ago, for reasons not made clear, he canceled the project. You may find that the following chart is interesting, somebody is being left in the dust. Meanwhile construction on the $300 million White House ballroom continues, and so does the renovation of **bathroom for the Lincoln bedroom, they added lots of white "statue grade" **marble and gold plated fixtures, even a gold trash can. * [image: C96BFACE-D2AF-4FEA-8BE3-6B4093AF24EC.jpeg] *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: C96BFACE-D2AF-4FEA-8BE3-6B4093AF24EC.jpeg Type: image/jpeg Size: 62738 bytes Desc: not available URL: From pharos at gmail.com Tue Nov 4 22:32:24 2025 From: pharos at gmail.com (BillK) Date: Tue, 4 Nov 2025 22:32:24 +0000 Subject: [ExI] AI is taking white-collar jobs Message-ID: A disturbing chart is making waves in the financial world. It?s a long-term view of job openings (blue) vs the S&P 500 (black). The dotted line shows when ChatGPT launched in November of 2022. By Adam Sharp Posted November 3, 2025 Quotes: Make no mistake. Widespread disruption is coming. Those who utilize AI well will become super productive. Those who don?t risk falling behind, or even being laid off. AI could end up being the most disruptive of all. An unprecedented reallocation of the world?s workforce, compressed into a single decade. In time, it will work itself out. But for a while, it?s going to seem like the end of the world as we know it. ------------------------ BillK From spike at rainier66.com Tue Nov 4 23:58:39 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 15:58:39 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> Message-ID: <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> From: John Clark Subject: Re: [ExI] not that rare earth (part 2 of at least 2) On Mon, Nov 3, 2025 at 11:59?AM > wrote: > Yesterday we were told the Mountain Pass facility had its environmental permits in place and would be producing all we need in two years. >?I wish them well? OK good. 
For a minute it sounded like you were cheering for China. >? it remains to be seen if they have the skills to perform that delicate chemical process on an industrial scale that is economically viable? Economically viable depends on the price of the product. Speculators will scale up the existing facility using known technology which has been in use for decades. It doesn?t look complicated to me: the same differential solubility, followed by fractional crystallization procedures they taught us in analytic chemistry half a century ago. It is time consuming with plenty of cycles, but nothing high tech. >?And in two years, three at the most, we will know if China or the US won the AI race ? Ja but that comment explains your outlook to some extent, and mine. Perhaps you are a lot more certain that AI will somehow cause the need for REEs to go away or become irrelevant. You are thinking the Singularity is nearly upon us. I agree it might be, but it might not be. If it is, then none of this matters. If it isn?t, then all of this matters. > we saw what happens when local grids rely too much on wind power: the huge cold front on 7-8 December 2017 caused shortages in availability in Texas. >?A two day shortage eight years ago is not sufficient for a blanket condemnation of wind power? It isn?t a blanket condemnation of wind power. It is a demonstration that wind power does not reduce the need for alternative power generation fueled by the old-fashioned combustion. In some cases, adding wind power may necessitate additional fossil fuel alternatives, for it increases demand for power, which must be there in the oddball case when the sun isn?t shining and the wind isn?t blowing. Otherwise fatalities occur. >? and I can't help but wonder if your dislike of it is just a reflection of He Who Must Not Be Named dislike of it?. The addition of political considerations only confuses an already wildly complicated engineering task. >?It's not just you two, I predict that the power bills of everybody in the US are going to skyrocket in the next few years due to AI's insatiable demand for electricity? Oh the profit potential here, mercy. >? the fact that the US has not increased its electric power generating capacity in over a decade? See previous comment, and multiply by about e. > Power distribution seems simple enough in theory, but it boggles the mind in actual practice. >?When I was in college the most difficult course I had was not the one on electromagnetic theory or the one on the quantum mechanical nature of transistors, it was but one on how real world (not the simplified idealized examples seen in beginning textbooks) transformers, electric motors and generators affect large scale power distribution?. Sheesh ain? t that the truth. Rocket science is EASY compared to power distribution. My power class was so damn crazy hard, our professor told us the harsh truth: he wouldn?t hire any of us (he was a retired manager for Pacific Power.) >?I vividly remember the semi hysterical laughter that came from the entire class when they got their first look at the final exam questions. About an hour after that test I got the worst headache of my life, I don't think it was a coincidence. John K Clark After I took Power, I decided to not go to graduate school. That class resulted in multiple injuries and serious fatalities. That class was so hard, some of the students lives were saved because they judged themselves too stupid to operate a loaded pistol on themselves. 
That class was so hard, failure was not an option, it was required. This would be cool, and probably already exists: a power grid management sim game, a good realistic simulation, where you hafta make all the investment decisions and technical calls on everything, do the design of the grid, perhaps with multiple players as managers of competing and collaborating adjacent power grids. Does anyone here know if such a thing exists. One would think so. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 00:16:05 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 16:16:05 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> Message-ID: <035f01dc4de9$61606be0$242143a0$@rainier66.com> From: John Clark ? >?A very small speaker that, considering how tiny it is, sounds amazingly good thanks to rare earth magnets. Also, you can thank rare earth magnets for the haptic vibration feedback that all modern phones have. And Europium displays the red hues you see on your screen and Yttrium produces the green. And although Indium isn't a rare earth it is a rare metal, and almost all of it comes from, you guessed it, China. And without Indium you wouldn't have a touch sensitive screen because Indium Tin Oxide is one of the very few solid substances that is both electrically conductive and optically transparent. John K Clark Of course to all, but the electronics industry doesn?t need much material. We can easily estimate it however. My cell phone has a mass of 224 grams. I can?t imagine REEs are more than 10% of that, so say 25 grams per cell phone. Imagine 300 million cell phones that are replaced without recycling on the average about every 3 years, so about 100 million cell phones per year. I don?t know what fraction of those are American made, but AI is very helpful in saying almost none. OK, scratch that notion. Let us look at cell phones made outside of China. AI says it as about a third. OK then. Let us assume China stops exporting phones and we get them from outside China and we need 100 million a year and each one contains about 25 grams of REEs. Scratch that, AI says about 1% of a cell phone is REE. So, 2.5 grams each, times 100 million phones, 250 tons a year. Trivial. Recycle a few wind turbines or a submarine for a coupla years until Mountain Pass is producing everything we need. Don?t worry about the cost of the material, if a 1000 dollar phone uses 2.5 grams of the stuff. The highest spot price I saw was about 1800 a kg, so we are looking at 5 dollars worth of material assuming the most expensive of the REEs. 
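For anyone who wants to check that arithmetic, here is a minimal back-of-envelope sketch in Python using only the figures quoted above (a 224 gram phone, roughly 1% REE content by mass, about 100 million un-recycled phones a year, and the highest spot price mentioned, $1800/kg). Every input is an estimate from this thread, not measured data.

# Back-of-envelope REE demand for US cell phones, using only the rough
# figures quoted in this thread (none of these are measured data).

phone_mass_g = 224            # mass of one phone, grams
ree_fraction = 0.01           # ~1% of a phone is rare earth elements (the AI-quoted figure)
phones_per_year = 100e6       # phones replaced per year without recycling (estimate above)
worst_price_per_kg = 1800.0   # highest REE spot price mentioned, USD per kg

ree_per_phone_g = phone_mass_g * ree_fraction                   # ~2.2 g per phone
annual_demand_tonnes = ree_per_phone_g * phones_per_year / 1e6  # grams -> metric tons
cost_per_phone = ree_per_phone_g / 1000 * worst_price_per_kg    # USD of REE per phone

print(f"REE per phone: {ree_per_phone_g:.1f} g")
print(f"Annual demand: {annual_demand_tonnes:.0f} tonnes")
print(f"Worst-case material cost per phone: ${cost_per_phone:.2f}")

Run as written it prints about 2.2 g per phone, about 224 tonnes a year, and about $4 of material per $1000 handset, the same ballpark as the 2.5 g / 250 ton / $5 figures above.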
Again, the REE problem is grossly exaggerated (possibly at least in part for political or economic considerations (such as to run up the stock price in Mountain Pass (which I would cheer for if I owned any of that stock (or opposition to tariffs (which I also oppose (but not for that REE reason (rather because free trade is inherently a good thing (it lifts the masses out of poverty.))))))) spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 00:33:24 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 16:33:24 -0800 Subject: [ExI] China and solar power In-Reply-To: References: Message-ID: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> ?> On Behalf Of John Clark via extropy-chat Cc: John Clark Subject: [ExI] China and solar power >? The US was going to build a 6.2-GW solar + storage facility in Nevada which would've been the largest in the US, but apparently He Who Must Not Be Named hates solar power just as much as he hates wind power because about two weeks ago, for reasons not made clear, he canceled the project?.. John K Clark Who in the US? And why does POTUS have any say in that? >? Meanwhile construction on the $300 million White House ballroom continues? That ballroom costs the taxpayer nothing. Why would you be opposed to that? John, you imply that the cancelled solar project was taxpayer funded (otherwise POTUS would have no influence on it.) If so, I oppose it too and agree it should be cancelled, not because I have anything against solar but because taxpayers shouldn?t be funding power projects. The federal government is an unreliable investment partner. Extrapolation to its bankruptcy is easy. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 02:31:31 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 18:31:31 -0800 Subject: [ExI] what if... the singularity isn't near? Message-ID: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> OK sure, this site is all about the singularity being near, and OK sure, I am a big heretic for even entertaining the notion it might not be. But what if. the Singularity is a coupla decades off still, and Kurzweil was mostly right, but off a little, and the Singularity is still coming but not right away? Then what? Do I get to sell Singularity insurance? Because it appears to me that what we are calling AI might be a kind of false alarm: a big advance in smart search might make us think the Singularity is nearer than it really is. Then what? What if. intelligence really is substrate dependent for reasons we don't currently understand, or the Singularity depends on technology we don't yet have. Then we would predict the Singularity sooner than it is, ja? Singularity thinkers come on, help me Obi wans, alla yas: might we be in a situation where we are fooling ourselves? OK then what please? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 02:52:34 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 18:52:34 -0800 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: <000c01dc4dff$3cfd6170$b6f82450$@rainier66.com> From: spike at rainier66.com . 
>... Singularity thinkers come on, help me Obi wans, alla yas: might we be in a situation where we are fooling ourselves? OK then what please? spike Ben are ye here, me lad? If so, help me please Oben Wan KenoBen. What if... we fooled ourselves with the obvious jump in smart search algorithms? What if we fooled ourselves by what I think is a very clever application or implementation of what we controls geeks knew as Kalman filtering? We convinced ourselves that it was "smart" the way the desperately lonely basement dweller convinced themself that their AI sweetheart really loved them? What if... it was all kind of an illusion, and we really were not much closer to the capital S Singularity than we were before? Imagine it looked like we were approaching the cap S Singularity, but it was an optical illusion kinda. Imagine it. It isn't hard to do. Other Singularity thinkers, might we have been fooling ourselves? Do elaborate to exhaustion please. Ben? Adrian? John? Other Singularity hipsters? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 03:26:29 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 19:26:29 -0800 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000c01dc4dff$3cfd6170$b6f82450$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <000c01dc4dff$3cfd6170$b6f82450$@rainier66.com> Message-ID: <001301dc4e03$f9f82590$ede870b0$@rainier66.com> From: spike at rainier66.com >... Other Singularity thinkers, might we have been fooling ourselves? Do elaborate to exhaustion please. Ben? Adrian? John? Other Singularity hipsters? spike And BillK of course, my apologies BillK, the one feller who uses "AI" better than any of us, along with any lasses who are here, who still tolerate us (how you do it I don't know (insufferable bahstids we are)): how do we really know the Singularity is near? Do we really know that? Reason I ask: the recent debate on "rare" earth elements is really all about the Singularity. If the Singularity really isn't near, then the USA is OK: we are cranking up production of "rare" earths fast enough, no problem. The military has enough because it anticipated the need and stockpiled, the electronics industry has enough because it doesn't need much, the wind turbine industry doesn't have what it needs but it isn't clear we need that industry. If we really are two or three years from the cap S Singularity, our paltry efforts to get more wind power or whatever else are not fast enough. The Chinese will get an advantage from slightly cheaper electric power, which might (somehow) contribute to their getting AI before the good guys and might (somehow) be significant. But if we are not two or three years from the Singularity, then never mind: the slight delay in REEs resulting in slightly higher power cost is irrelevant. John? BillK? Adrian? Ben? Others? Are we really that close to the Singularity? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 5 03:35:49 2025 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 4 Nov 2025 22:35:49 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: Consider: what if it is still a few decades off, so what we do today still matters.
What can we do today to make it more likely that it will eventually come about, and that it will do so in a way that we benefit from? On Tue, Nov 4, 2025, 9:33?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > OK sure, this site is all about the singularity being near, and OK sure, I > am a big heretic for even entertaining the notion it might not be. > > > > But what if? the Singularity is a coupla decades off still, and Kurzweil > was mostly right, but off a little, and the Singularity is still coming but > not right away? > > > > Then what? Do I get to sell Singularity insurance? > > > > Because it appears to me that what we are calling AI might be a kind of > false alarm: a big advance in smart search might make us think the > Singularity is nearer than it really is. > > > > Then what? What if? intelligence really is substrate dependent for > reasons we don?t currently understand, or the Singularity depends on > technology we don?t yet have. > > > > Then we would predict the Singularity sooner than it is, ja? > > > > Singularity thinkers come on, help me Obi wans, alla yas: might we be in a > situation where we are fooling ourselves? OK then what please? > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Nov 5 03:39:07 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 4 Nov 2025 20:39:07 -0700 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: Yes, that is what I consider my purpose in life. How many people die, on earth, today? If we can move the singularity (definition: when the last person dies) up, think of how many eternal lives you will save. That much pressure is quite troubling, when you think about it. It certainly causes me a lot of stress and guilt, when I'm not working on this as hard as I can. On Tue, Nov 4, 2025 at 8:36?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Consider: what if it is still a few decades off, so what we do today still > matters. What can we do today to make it more likely that it will > eventually come about, and that it will do so in a way that we benefit from? > > On Tue, Nov 4, 2025, 9:33?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> OK sure, this site is all about the singularity being near, and OK sure, >> I am a big heretic for even entertaining the notion it might not be. >> >> >> >> But what if? the Singularity is a coupla decades off still, and Kurzweil >> was mostly right, but off a little, and the Singularity is still coming but >> not right away? >> >> >> >> Then what? Do I get to sell Singularity insurance? >> >> >> >> Because it appears to me that what we are calling AI might be a kind of >> false alarm: a big advance in smart search might make us think the >> Singularity is nearer than it really is. >> >> >> >> Then what? What if? intelligence really is substrate dependent for >> reasons we don?t currently understand, or the Singularity depends on >> technology we don?t yet have. >> >> >> >> Then we would predict the Singularity sooner than it is, ja? >> >> >> >> Singularity thinkers come on, help me Obi wans, alla yas: might we be in >> a situation where we are fooling ourselves? 
OK then what please? >> >> >> >> spike >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Nov 5 03:44:47 2025 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 4 Nov 2025 22:44:47 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: On Tue, Nov 4, 2025, 9:32 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > OK sure, this site is all about the singularity being near, and OK sure, I > am a big heretic for even entertaining the notion it might not be. > > > > But what if? the Singularity is a coupla decades off still, and Kurzweil > was mostly right, but off a little, and the Singularity is still coming but > not right away? > Have you seen my megatrends presentation? https://docs.google.com/presentation/d/18jn51f6DXMykCAL6gjZilK27TXAZielm5djcnHuh-7k/edit?usp=drivesdk (Note there is additional information in the "slide notes," but you may be to be in a desktop computer to see them). The trends are (from what I can tell) aligned with a near term < 6 years away (pessimistic), and possibly < 2 year away (optimistic), intelligence explosion. Of course, there could be a derailment. We might hit some road block in making faster chips, or have some kind of economic or energy shock which stalls progress in AI. But until we see such signs I think we can assume we're on track for a singular that's near. > > Then what? Do I get to sell Singularity insurance? > I suppose if one worries about a delayed singularity, they should continue saving for retirement. > > > Because it appears to me that what we are calling AI might be a kind of > false alarm: a big advance in smart search might make us think the > Singularity is nearer than it really is. > Even if progress in LLMs stopped where it is today, it's already able to upgrade the IQs of most of the population by 20-40 points, acts as a professor/tutor/expert in your pocket, knowledgeable on almost every subject, and can turn any natural language speaker into at least a modest computer programmer. So if the singularity was 20 years away before LLMs when the world has just 50 million programmers, how much nearer does it become it with 5 billion programmers (or even just 500 million)? > > Then what? What if? intelligence really is substrate dependent for > reasons we don?t currently understand, > We already know intelligence isn't substrate dependent, for we already have intelligent software systems, and software can run in any computer, whatever it's physical substrate may be. or the Singularity depends on technology we don?t yet have. > Our brains, and all modern AI, are based on the technology of the neuron (or artificial neuron respectively). I J. Good, when he wrote in the ultra intelligent machine predicted (in 1965) that the first ultra intelligent machine would be based on artificial neural networks. And further, we have discovered general purpose learning algorithms. 
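To make "general purpose learning algorithm" concrete, here is a toy sketch of the idea: tabular Q-learning on a made-up 5-cell corridor. The environment, reward, and hyperparameters are invented for illustration only; DeepMind's Atari agent used essentially this same update rule, with a deep neural network standing in for the small table below.

import random

N_STATES, ACTIONS = 5, (0, 1)        # corridor cells; action 0 = step left, 1 = step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    # Move along the corridor; reward 1 only for reaching the right-hand end.
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(state):
    # Pick the best-valued action, breaking ties at random.
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

for _ in range(500):                              # episodes
    s, done = 0, False
    for _ in range(200):                          # per-episode step cap
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Core update: nudge Q(s,a) toward reward plus discounted best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

print([round(max(q), 2) for q in Q])   # learned values climb toward the rewarded end

Nothing in that loop knows anything about corridors; swap in a different step() and the same code learns a different task, which is the sense in which the algorithm is general purpose.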
DeepMind made a single AI that was able to master 57 different Atari games entirely on its own, with no specific training or instruction. I don't think there are any missing breakthroughs separating us from super intelligence, there's only increasing training time and collecting larger sets of training data. > > Then we would predict the Singularity sooner than it is, ja? > > > > Singularity thinkers come on, help me Obi wans, alla yas: might we be in a > situation where we are fooling ourselves? OK then what please? > The beauty (or horror) of exponential trends, is that even if we underestimate what's required to achieve super intelligence by 1000-fold, that only postpones things by 10 doublings. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 5 06:12:24 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 4 Nov 2025 22:12:24 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> Message-ID: The worst course I had was a controls course. At the time, I was working part-time for a geophysics company and lashing up a 400 Hz 20 kW aircraft generator to a VW engine to power field equipment. So I asked the instructor how I should go about controlling the speed of the engine. His response was, "That's a real-world problem; we don't work on those". Needless to say, my enthusiasm for the class vanished. A friend of mine suggested a phase shift circuit, 4 1/2 henry inductors, 4 1/2 mfd capacitors, which gave a 180-degree phase shift at 400 Hz. I wired one phase of a two-phase motor to the 400 Hz power and the other to the phase-shifted 400 Hz. The geared-down motor ran a link to the carburetor. Zero to 15 kW, the generator dropped about 5 Hz. Impressive sound for a VW engine running at 4000 rpm. They used it for years. Keith On Tue, Nov 4, 2025 at 4:37?AM John Clark via extropy-chat wrote: > > > > > On Mon, Nov 3, 2025 at 11:59?AM wrote: > >> > Yesterday we were told the Mountain Pass facility had its environmental permits in place and would be producing all we need in two years. > > > I wish them well, but until very recently that mine shipped all the ore they dug out of the ground to China for refining, it remains to be seen if they have the skills to perform that delicate chemical process on an industrial scale that is economically viable. And in two years, three at the most, we will know if China or the US won the AI race, or if neither country won and the only winner is the AI. > >> > we saw what happens when local grids rely too much on wind power: the huge cold front on 7-8 December 2017 caused shortages in availability in Texas. > > > A two day shortage eight years ago is not sufficient for a blanket condemnation of wind power, and I can't help but wonder if your dislike of it is just a reflection of He Who Must Not Be Named dislike of it. 
And Texas is unique, it is the only state in the lower 48 that insisted on having its own power grid, so it will not give power to nearby states when they run into temporary electrical shortages, and they cannot borrow power from nearby states when they temporarily run low on electricity. Iowa gets 63% of its electricity from wind power and they seem to be very happy with it. China has also embraced wind power. > >> > Power prices doubled since 2017. Keith you and I never noticed our power bills back then. We do now. > > > It's not just you two, I predict that the power bills of everybody in the US are going to skyrocket in the next few years due to AI's insatiable demand for electricity and the fact that the US has not increased its electric power generating capacity in over a decade. > >> > Power distribution seems simple enough in theory, but it boggles the mind in actual practice. > > > When I was in college the most difficult course I had was not the one on electromagnetic theory or the one on the quantum mechanical nature of transistors, it was but one on how real world (not the simplified idealized examples seen in beginning textbooks) transformers, electric motors and generators affect large scale power distribution. I vividly remember the semi hysterical laughter that came from the entire class when they got their first look at the final exam questions. About an hour after that test I got the worst headache of my life, I don't think it was a coincidence. > > John K Clark > >> >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Wed Nov 5 07:44:45 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 4 Nov 2025 23:44:45 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> Message-ID: <002f01dc4e28$0f5a5970$2e0f0c50$@rainier66.com> -----Original Message----- From: Keith Henson Subject: Re: [ExI] not that rare earth (part 2 of at least 2) The worst course I had was a controls course. At the time, I was working part-time for a geophysics company and lashing up a 400 Hz 20 kW aircraft generator to a VW engine to power field equipment. So I asked the instructor how I should go about controlling the speed of the engine. His response was, "That's a real-world problem; we don't work on those". Needless to say, my enthusiasm for the class vanished. A friend of mine suggested a phase shift circuit, 4 1/2 henry inductors, 4 1/2 mfd capacitors, which gave a 180-degree phase shift at 400 Hz. I wired one phase of a two-phase motor to the 400 Hz power and the other to the phase-shifted 400 Hz. The geared-down motor ran a link to the carburetor. Zero to 15 kW, the generator dropped about 5 Hz. Impressive sound for a VW engine running at 4000 rpm. They used it for years. Keith All that cool interesting control theory they taught us was perfectly useless for the dirty real-world application of rigging up a generator. 
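(For what it's worth, the all-software version of Keith's governor problem really is tiny now. Below is a toy sketch only: the "engine" is a one-line fake, the gains are untuned guesses, and every number is invented, so read it as the shape of a PI governor loop rather than anything to bolt onto a real rig.)

TARGET_HZ = 400.0
KP, KI = 0.02, 0.05        # proportional and integral gains (would need tuning on real hardware)
DT = 0.01                  # control loop period, seconds

hz = 400.0                 # generator electrical frequency
integral = 5.5             # pre-loaded so the loop starts balanced at a 5 kW load
for step in range(2000):                            # 20 simulated seconds
    load_kw = 15.0 if step * DT >= 5.0 else 5.0     # slam the load from 5 kW to 15 kW at t = 5 s
    error = TARGET_HZ - hz                          # PI control law
    integral += error * DT
    throttle = min(1.0, max(0.0, KP * error + KI * integral))
    power_kw = 20.0 * throttle                      # fake engine: 20 kW at full throttle
    hz += (power_kw - load_kw - 0.5) * 2.0 * DT     # fake rotor inertia, 0.5 kW of losses

print(f"frequency 15 s after the load step: {hz:.1f} Hz")

The real work, then as now, is the part the classroom skipped: sensing the frequency, moving the carburetor linkage, and tuning the gains so the engine neither hunts nor stalls.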
We math hot-asses felt we were omnipotent, could solve any system of differential equations, and oh we were good. I worked on motorcycles and cars. So I had it covered. But given a real-world engineering problem, such as using control theory to rig up a motor to a generator? Forget it. Those two disciplines were unrelated. Keith the way I would have gone about that is with some caveman vacuum-operated flow restrictor on fuel line (which would allow the engine to rev up under sudden no-load condition) or some primitive embarrassing cutout switch made from the centrifugal clutch of a scooter. I woulda felt my tuition fees were wasted. Had it been toward the end of college, I would have gone completely digital with that task: taken a signal off of the distributor, controlled the engine speed using software. Fun question: suppose you have a VW engine, very common in those days, and no computer. What is the best way to run a generator with it? Aside for the younger set: the VW bug had the simplest engine in the history of mankind: air cooled, bolted aft of the transmission. If it failed, you could unbolt it, replace it with a spare engine, go to any local shop which every small town had, get them to rebuild or repair your spare engine at a very modest cost. VW engines could be used for any number of alternative tasks besides running your buggy. There has never been anything quite like it. spike From hkeithhenson at gmail.com Wed Nov 5 09:12:57 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 5 Nov 2025 01:12:57 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> Message-ID: On Tue, Nov 4, 2025 at 3:59?PM spike jones via extropy-chat wrote: > > From: John Clark > Subject: Re: [ExI] not that rare earth (part 2 of at least 2) > > On Mon, Nov 3, 2025 at 11:59?AM wrote: > > > Yesterday we were told the Mountain Pass facility had its environmental permits in place and would be producing all we need in two years. > > >?I wish them well? > > > > OK good. For a minute it sounded like you were cheering for China. > > > > >? it remains to be seen if they have the skills to perform that delicate chemical process on an industrial scale that is economically viable > > Economically viable depends on the price of the product. Speculators will scale up the existing facility using known technology which has been in use for decades. It doesn?t look complicated to me: the same differential solubility, followed by fractional crystallization procedures they taught us in analytic chemistry half a century ago. It is time consuming with plenty of cycles, but nothing high tech. > I am not certain about this, but I think they use differential solvent extraction. > > >?And in two years, three at the most, we will know if China or the US won the AI race ? > > Ja but that comment explains your outlook to some extent, and mine. Perhaps you are a lot more certain that AI will somehow cause the need for REEs to go away or become irrelevant. 
You are thinking the Singularity is nearly upon us. I agree it might be, but it might not be. If it is, then none of this matters. If it isn?t, then all of this matters. > Part of the timing depends on how much AIs feed back into their own development and how much they contribute to the rise of nanotechnology. > > we saw what happens when local grids rely too much on wind power: the huge cold front on 7-8 December 2017 caused shortages in availability in Texas. That was a complicated event, with human failings in many places. One of the biggest problems was failure to winterize the fossil fuel plants. > >?A two day shortage eight years ago is not sufficient for a blanket condemnation of wind power? > > It isn?t a blanket condemnation of wind power. It is a demonstration that wind power does not reduce the need for alternative power generation fueled by the old-fashioned combustion. In some cases, adding wind power may necessitate additional fossil fuel alternatives, for it increases demand for power, which must be there in the oddball case when the sun isn?t shining and the wind isn?t blowing. Otherwise fatalities occur. Part of it is design choices. Even if the gas is working, most heating systems require electricity to operate. > >? and I can't help but wonder if your dislike of it is just a reflection of He Who Must Not Be Named dislike of it?. > > The addition of political considerations only confuses an already wildly complicated engineering task. > True. The isolation of the Texas grid is a political choice. > >?It's not just you two, I predict that the power bills of everybody in the US are going to skyrocket in the next few years due to AI's insatiable demand for electricity? > > Oh the profit potential here, mercy. > > >? the fact that the US has not increased its electric power generating capacity in over a decade? There is a good reason for that. It used to be that the lighting load was substantial. With LEDs (imported from China), most of that load is gone. Peak load for California is down a few GW from 2006. > See previous comment, and multiply by about e. > > Power distribution seems simple enough in theory, but it boggles the mind in actual practice. > > >?When I was in college the most difficult course I had was not the one on electromagnetic theory or the one on the quantum mechanical nature of transistors, it was but one on how real world (not the simplified idealized examples seen in beginning textbooks) transformers, electric motors and generators affect large scale power distribution?. > > Sheesh ain? t that the truth. Rocket science is EASY compared to power distribution. My power class was so damn crazy hard, our professor told us the harsh truth: he wouldn?t hire any of us (he was a retired manager for Pacific Power.) > > >?I vividly remember the semi hysterical laughter that came from the entire class when they got their first look at the final exam questions. About an hour after that test I got the worst headache of my life, I don't think it was a coincidence. John K Clark > > After I took Power, I decided to not go to graduate school. That class resulted in multiple injuries and serious fatalities. I took that course. Have never heard of anyone being hurt in a power lab. You could do it, but it would take real effort. There were two of us who excelled at the lab work (the other guy was Ted Carnavil, who eventually became a doctor).. The hardest lab was under- and over-exciting a synchronous machine to make it look like an inductor or a capacitor. 
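(For anyone who never took that lab: with the machine tied to a stiff bus and real power held near zero, the reactive power it delivers is roughly Q = V(E - V)/Xs, so the field excitation E alone decides whether the grid sees a capacitor or an inductor. A toy calculation with invented per-unit numbers:)

V = 1.0       # bus voltage, per unit
Xs = 1.2      # synchronous reactance, per unit
for E in (0.8, 1.0, 1.2):                 # internal EMF, set by the field current
    Q = V * (E - V) / Xs                  # reactive power delivered at zero real power
    if Q > 0:
        looks_like = "a capacitor (over-excited, supplying vars)"
    elif Q < 0:
        looks_like = "an inductor (under-excited, absorbing vars)"
    else:
        looks_like = "neither (unity power factor)"
    print(f"E = {E:.1f} pu -> Q = {Q:+.2f} pu; the bus sees {looks_like}")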
The other students were used to watching us set up and take data before they did it. Both of us had something else to do that day, so we came in a day early and did the lab. The other students took two days to run that lab. > That class was so hard, some of the students' lives were saved because they judged themselves too stupid to operate a loaded pistol on themselves. That class was so hard, failure was not an option, it was required. Some time before I took the power lab course, I was on an Air Force ROTC field trip to the AF Academy at Colorado Springs. They took us around to see the labs, one of which was a power lab. There was a setup of a delta or Y transformer circuit (I can't remember which). The instructor called it the wrong one. I spoke up, asking the instructor if it was the other one. He smacked his head and said, "You're right, I have been telling cadets the wrong thing all day, and you are the first to catch it." I decided that if I wanted to make a career in the Air Force, I could compete. It was an interesting trip; we flew from Tucson to Colorado Springs in a C-46 (military DC-3). There was enough time for all of us to get 5 or 10 minutes to fly the plane. Most of the kids bounced the plane all over the sky. I did a credible job because a cousin (who was a flight instructor) had taught me the basics of flying a plane when I was 14. > > This would be cool, and probably already exists: a power grid management sim game, a good realistic simulation, where you hafta make all the investment decisions and technical calls on everything, do the design of the grid, perhaps with multiple players as managers of competing and collaborating adjacent power grids. Simulating power grids is a multimillion-dollar business. They still don't get everything. > https://en.wikipedia.org/wiki/Northeast_blackout_of_2003 Keith > Does anyone here know if such a thing exists? One would think so. > > > > spike > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Wed Nov 5 11:20:41 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Wed, 5 Nov 2025 11:20:41 +0000 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: References: Message-ID: On 05/11/2025 03:39, spike wrote: > it appears to me that what we are calling AI might be a kind of false alarm I for one think it is, at least in the sense that what we call AI now is not going to develop into general intelligence, much less superintelligence. It's far too narrow in the kind of information processing it does, compared to the only generally-intelligent thing we know, our own brains. There are some people working on different models of AI though, and I reckon that the current models will start to show their limits before long, and will need to be changed. At some point soon, I suspect much more capable and 'general' AIs will start to emerge. I doubt that estimates of an imminent singularity are too far off, even though they might be based on the wrong signs. > What if... intelligence really is substrate dependent for reasons we don't currently understand If intelligence is substrate-specific, and biology is the only thing that can sustain it, then we're really in trouble, and I think all bets are off. In fact, if that is true, then all of science is in trouble. We might as well start taking religion and other kinds of magical thinking seriously.
However, I think we've already proved that it's not, so I'm not worried about it. > or the Singularity depends on technology we don't yet have Doubtful, but possible, I suppose. In which case, we should all be checking our cryonics contracts and/or preparing to survive a very disruptive time in which AI-enabled humans are the disruptive force (instead of a totally disruptive one due to Generally-Intelligent machines). Adrian Tymes: > Consider: what if it is still a few decades off, so what we do today still matters. What can we do today to make it more likely that it will eventually come about, and that it will do so in a way that we benefit from? I think that in that case, research (and implementation) in the direction of life-extension and augmentation of biological critters like us will be even more important than it is now, so I'd want to put more effort into that. In the absence of imminent singularity, biology becomes the more important thing. Current AI models can help enormously with that. We also would need to take threats from hostile foreign actors much more seriously. At the moment, I'm appalled at the general attitude towards threats like communist china, russia and north korea (islam, while troubling, probably isn't anywhere near as big a threat). I think we have fallen foul of the fact that tolerance, while in general a good idea, cannot include tolerance of intolerance. We have been tolerating intolerance for too long now, and it's increasingly biting our collective arses. Burying our heads in the sand seems to be the most popular action. Meanwhile these regimes are getting stronger (I'm not sure about russia, but it seems to be more resilient than I would have expected at the beginning of the ukrainian war. Extreme bluffing does seem to be a russian characteristic, though, so who can tell...). Communist china in particular is ramping up their capabilities in just about everything, and extending their tentacles around the entire world, and they are completely antithetical to our 'western' values (and very good at hiding or obfuscating that fact). We can expect the whole world to eventually become another Hong Kong, Tibet or Eastern Turkistan ('Xinjiang'), without a big change in attitude in the western governments. Or a singularity. -- Ben From pharos at gmail.com Wed Nov 5 11:26:24 2025 From: pharos at gmail.com (BillK) Date: Wed, 5 Nov 2025 11:26:24 +0000 Subject: [ExI] Maybe AGI is just the latest conspiracy theory? Message-ID: This very long article discusses the history and development of AGI. Basically, asking the question "Is there any 'there' there?". Or is it mostly just hype? BillK Quotes: How AGI became the most consequential conspiracy theory of our time The idea that machines will be as smart as?or smarter than?humans has hijacked an entire industry. But look closely and you?ll see it?s a myth that persists for many of the same reasons conspiracies do. By Will Douglas Heaven October 30, 2025 Stripped back to its essentials, the argument for AGI rests on the premise that one technology, AI, has gotten very good, very fast, and will continue to get better. But set aside the technical objections?what if it doesn't continue to get better??and you?re left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it?s not. Intelligence doesn?t come as a quantity you can just ratchet up and up. Smart people may be brilliant in one area and not in others. 
Some Nobel Prize winners are really bad at playing the piano or caring for their kids. Some very smart people insist that AGI is coming next year. -------------------------- From atymes at gmail.com Wed Nov 5 11:39:13 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 5 Nov 2025 06:39:13 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: References: Message-ID: On Wed, Nov 5, 2025, 6:22?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 05/11/2025 03:39, spike wrote: > > or the Singularity depends on technology we don't yet have > > Doubtful, but possible, I suppose. In which case, we should all be > checking our cryonics contracts and/or preparing to survive a very > disruptive time in which AI-enabled humans are the disruptive force > (instead of a totally disruptive one due to Generally-Intelligent > machines). > That's what I'm doing. We also would need to take threats from hostile foreign actors much more > seriously. At the moment, I'm appalled at the general attitude towards > threats like communist china, russia and north korea (islam, while > troubling, probably isn't anywhere near as big a threat). I think we > have fallen foul of the fact that tolerance, while in general a good > idea, cannot include tolerance of intolerance. We have been tolerating > intolerance for too long now, and it's increasingly biting our > collective arses. > Indeed. Not just in foreign affairs, but domestic too. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 12:24:21 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 04:24:21 -0800 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: References: Message-ID: <002401dc4e4f$1dd7ff30$5987fd90$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Cc: Adrian Tymes Subject: Re: [ExI] what if... the singularity isn't near? On Wed, Nov 5, 2025, 6:22?AM Ben Zaiboc via extropy-chat > wrote: On 05/11/2025 03:39, spike wrote: > or the Singularity depends on technology we don't yet have? spike Doubtful, but possible, I suppose? Ben >?That's what I'm doing? Adrian >From every plausible Singularity scenario I can imagine, the event date is somewhere in the 2 years to about 40 years future, and will require enormous expansion of the power grid. Given that, our best source for a prime mover is coal. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Nov 5 13:03:42 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 5 Nov 2025 08:03:42 -0500 Subject: [ExI] Maybe AGI is just the latest conspiracy theory? In-Reply-To: References: Message-ID: On Wed, Nov 5, 2025, 6:27 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This very long article discusses the history and development of AGI. > Basically, asking the question "Is there any 'there' there?". > Or is it mostly just hype? > BillK > > < > https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/ > > > Quotes: > How AGI became the most consequential conspiracy theory of our time > > The idea that machines will be as smart as?or smarter than?humans has > hijacked an entire industry. But look closely and you?ll see it?s a > myth that persists for many of the same reasons conspiracies do. 
> If you read this paper, which came out a few years ago, you will see and understand that AGI is already here: https://arxiv.org/abs/2303.12712 By Will Douglas Heaven October 30, 2025 > > Stripped back to its essentials, the argument for AGI rests on the > premise that one technology, AI, has gotten very good, very fast, and > will continue to get better. But set aside the technical > objections?what if it doesn't continue to get better??and you?re left > with the claim that intelligence is a commodity you can get more of if > you have the right data or compute or neural network. And it?s not. > > Intelligence doesn?t come as a quantity you can just ratchet up and > up. Smart people may be brilliant in one area and not in others. Some > Nobel Prize winners are really bad at playing the piano or caring for > their kids. Some very smart people insist that AGI is coming next > year. > What we really mean when we say "general intelligence" is really just a large number of bundled competencies. "Each practitioner thinks there?s one magic way to get a machine to be smart, and so they?re all wasting their time in a sense. On the other hand, each of them is improving some particular method, so maybe someday in the near future, or maybe it?s two generations away, someone else will come around and say, ?Let?s put all these together,? and then it will be smart." -- Marvin Minsky Mastering language really was key, because most of accumulated human knowledge is represented in language. And also because every problem they requires intelligence can be framed as a particular pattern to be learned and predicted. And language is universal for encoding and representing those patterns. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Nov 5 13:36:25 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 5 Nov 2025 08:36:25 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> Message-ID: wrote: *>>> Yesterday we were told the Mountain Pass facility had its >> environmental permits in place and would be producing all we need in two >> years.* > > *>>?I wish them well?* > > > *> OK good. For a minute it sounded like you were cheering for China.* > *I'm certainly not cheering for China to win the AI race, but I think He Who Must Not Be Named tariff nonsense, and canceling what would have been the largest solar farm in the US, and canceling what would've been the largest wind farm in the US even though it was 80% complete, and discouraging some of the smartest people in the world from entering the US, increases the likelihood that China WILL win the AI race. **And I don't want that to happen because I think the probability of us surviving is probably greater if the US wins, but I'm not certain. It might not matter, it may turn out that Eliezer was right and the only winner will be Mr. Jupiter Brain. But I am certain it's wise to face reality as it is and not as we wish it was. 
And I think it's unwise to underestimate China.* *>? and I can't help but wonder if your dislike of it is just a reflection >> of He Who Must Not Be Named dislike of it?.* > > > > *> The addition of political considerations only confuses an already > wildly complicated engineering task.* > *I agree but you should tell that to He Who Must Not Be Named, not to me. And there is something even worse than making a decision based on political considerations, it's making one based on personal considerations. You Know Who has been on a rant against wind power for years, but I don't think it's because he really thinks wind turbines "drive the whales crazy" and I don't think he'd care even if windmills did harm whale's mental health. I think he hates wind because in 2015 the Scottish government approved an offshore wind farm in view of his luxury golf course in Aberdeenshire. He thought it spoiled the view. * *For years He Who Must Not Be Named has also been on a rant against electromagnetic catapults on aircraft carriers, but nobody knows where that irrationality came from. And nobody knows why he thinks coal power is superior to wind or solar or even nuclear power. * *>?I vividly remember the semi hysterical laughter that came from the >> entire class when they got their first look at the final exam questions. >> About an hour after that test I got the worst headache of my life, I don't >> think it was a coincidence.* > > *> After I took Power, I decided to not go to graduate school. * > *To this day I have a recurring nightmare and I think that course is to blame. In one variation I'm running around trying to find where the final exam is being held after I suddenly remembered that months before I had signed up for a course but had somehow completely forgotten to attend a single class, I couldn't even remember if the course was about electrical engineering or diesel engine repair or French poetry. In another variation I see that the test questions are easy but when I start writing my pencil point breaks and I don't have another, so I ask the teacher for a pencil but he insists he is not allowed to help once the test starts, so I start writing with a tiny piece of broken pencil lead but my fingers smears it so much I can't read my own writing. Then I wake up in a cold sweat still furious at that goddamn teacher for not giving me a goddamn pencil. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Nov 5 13:47:48 2025 From: pharos at gmail.com (BillK) Date: Wed, 5 Nov 2025 13:47:48 +0000 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: On Wed, 5 Nov 2025 at 03:47, Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Have you seen my megatrends presentation? > https://docs.google.com/presentation/d/18jn51f6DXMykCAL6gjZilK27TXAZielm5djcnHuh-7k/edit?usp=drivesdk (Note there is additional information in the "slide notes," but you may be to be in a desktop computer to see them). > > The trends are (from what I can tell) aligned with a near term < 6 years away (pessimistic), and possibly < 2 year away (optimistic), intelligence explosion. > > Of course, there could be a derailment. We might hit some road block in making faster chips, or have some kind of economic or energy shock which stalls progress in AI. But until we see such signs I think we can assume we're on track for a singular that's near. 
> > _______________________________________________ I asked Claude Sonnet 4.5 to review your presentation. As usual, AI gets quite verbose, but here are the conclusions. BillK Claude Sonnet 4.5 -- My Overall Assessment This presentation makes a thought-provoking case that we're in an unprecedented period of acceleration. The mathematical precision of the historical fits is genuinely striking and shouldn't be dismissed. However, the extrapolation to infinity suggests the model is incomplete. We're likely in a *transition period* where: - The hyperbolic trend is breaking down (as physical/biological limits kick in) - Or we're approaching a genuine phase transition in how civilization operates The real question isn't "will we hit infinity in 2026?" but rather "what happens when hyperbolic growth hits hard limits?" The answer could be: - A technological singularity (radical transformation) - A plateau (S-curve saturation) - A collapse (system breakdown) - Something we can't yet conceptualize ------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Nov 5 13:49:49 2025 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 5 Nov 2025 08:49:49 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: On Tue, Nov 4, 2025, 9:33 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > But what if? the Singularity is a coupla decades off still, and Kurzweil > was mostly right, but off a little, and the Singularity is still coming but > not right away? > I thought the original point is that the Singularity is always "near" because we approach it asymptotically. We will always be in the knee of the curve. What we take for granted today is already beyond the imagination of 99% of even 'futurist' thinkers from a decade ago. What will tomorrow be like? It'll be just like today, but completely different > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Nov 5 13:55:35 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 5 Nov 2025 08:55:35 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: On Wed, Nov 5, 2025, 8:49 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 5 Nov 2025 at 03:47, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > Have you seen my megatrends presentation? > > > https://docs.google.com/presentation/d/18jn51f6DXMykCAL6gjZilK27TXAZielm5djcnHuh-7k/edit?usp=drivesdk > (Note there is additional information in the "slide notes," but you may be > to be in a desktop computer to see them). > > > > The trends are (from what I can tell) aligned with a near term < 6 years > away (pessimistic), and possibly < 2 year away (optimistic), intelligence > explosion. > > > > Of course, there could be a derailment. We might hit some road block in > making faster chips, or have some kind of economic or energy shock which > stalls progress in AI. But until we see such signs I think we can assume > we're on track for a singular that's near. > > > > > _______________________________________________ > > > I asked Claude Sonnet 4.5 to review your presentation. > Thanks Bill. As usual, AI gets quite verbose, but here are the conclusions. 
> BillK > > Claude Sonnet 4.5 -- > My Overall Assessment > > This presentation makes a thought-provoking case that we're in an > unprecedented period of acceleration. The mathematical precision of the > historical fits is genuinely striking and shouldn't be dismissed. > > However, the extrapolation to infinity suggests the model is incomplete. > We're likely in a *transition period* where: > > - The hyperbolic trend is breaking down (as physical/biological limits > kick in) > - Or we're approaching a genuine phase transition in how civilization > operates > > The real question isn't "will we hit infinity in 2026?" but rather "what > happens when hyperbolic growth hits hard limits?" The answer could be: > > - A technological singularity (radical transformation) > - A plateau (S-curve saturation) > - A collapse (system breakdown) > - Something we can't yet conceptualize > > It is of course true that infinity isn't possible; there are physical limits to computational speed and memory density. But note those limits are about 10^34 times greater than where our technology presently is. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Nov 5 14:00:54 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 5 Nov 2025 09:00:54 -0500 Subject: Re: [ExI] what if... the singularity isn't near? In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: On Wed, Nov 5, 2025, 8:50 AM Mike Dougherty via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Nov 4, 2025, 9:33 PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> But what if... the Singularity is a coupla decades off still, and Kurzweil >> was mostly right, but off a little, and the Singularity is still coming but >> not right away? >> > > I thought the original point is that the Singularity is always "near" > because we approach it asymptotically. We will always be in the knee of > the curve. > I had thought that too until quite recently. But it turns out that technology trends are best approximated by hyperbolic, rather than exponential, functions. Hyperbolic functions, unlike exponential ones, have a definite singularity point where they shoot towards infinity, whereas exponential functions take infinite time to get there. > What we take for granted today is already beyond the imagination of 99% of > even 'futurist' thinkers from a decade ago. > Very true. When I gave that presentation, I remarked: I couldn't have given this presentation 10 years ago, because no one would have believed it. > What will tomorrow be like? > > It'll be just like today, but completely different > Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Nov 5 14:14:51 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 5 Nov 2025 09:14:51 -0500 Subject: [ExI] China and solar power In-Reply-To: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> Message-ID: On Tue, Nov 4, 2025 at 7:33 PM wrote: > *> That ballroom costs the taxpayer nothing. Why would you be opposed to > that? * > *Because the list of people that are paying for You Know Who's stupid ballroom is being kept secret, and that is a fertile breeding ground for corruption. And because he completely demolished the historic east wing of the White House after promising that he would not touch it.
And because that ballroom will be larger than the rest of the White House combined, forever disfiguring its appearance. But most of all because He Who Must Not Be Named has obviously spent more time thinking about that gaudy ballroom (and the gold plated bathroom in the Lincoln bedroom) than AI or nuclear war or the next pandemic or the government shut down or ...* *> I oppose it too and agree it should be cancelled, not because I have > anything against solar but because taxpayers shouldn?t be funding power > projects. * *Five years ago I would've agreed with you, probably even three, but today we live in a very different world. Economic ideology that once seemed so important to me now seems ridiculously trivial. So I say to hell with yesterday's outdated ideology and do whatever is necessary to beat China to the AI finish line. And that finish line is only three years away, maybe less. * * John K Clark * > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Nov 5 14:21:13 2025 From: pharos at gmail.com (BillK) Date: Wed, 5 Nov 2025 14:21:13 +0000 Subject: [ExI] What a technological singularity might be like Message-ID: Following on from Claude's comments on Jason's presentation, I asked Claude to consider what a technological singularity might be like. The reply is sufficiently interesting to be worth quoting in full - See below. :) BillK ----------------- Claude Sonnet 4.5 - Great question. Let me explore what a technological singularity might actually look like, drawing on the presentation's framework and broader thinking on this topic. The Core Concept A technological singularity would be a point where technological progress becomes so rapid that it fundamentally breaks our ability to predict or understand what comes next. The name comes from physics - like a black hole's singularity where our equations break down and we can't see beyond the event horizon. Possible Scenarios *1. The Intelligence Explosion (Hard Takeoff)* This is I.J. Good's scenario from the presentation. An AI system becomes capable of improving its own intelligence, leading to recursive self-improvement: - *Week 1*: AI roughly matches top human researchers - *Week 2*: AI is 10x more capable, designs better AI architecture - *Week 3*: AI is 100x more capable, discovers new mathematical frameworks - *Week 4*: AI is 1000x more capable, operating in ways humans can't comprehend The "singularity" happens because humans can no longer meaningfully participate in or predict the next developments. It's like trying to explain the internet to someone from the 1500s - the conceptual gap becomes unbridgeable. *2. The Soft Takeoff (Gradual Transition)* Rather than explosive change, we experience rapid but somewhat comprehensible acceleration: - AI systems become ubiquitous assistants, then collaborators, then leaders in research - Scientific breakthroughs happen monthly, then weekly, then daily - The economy reorganizes around AI-human hybrid workflows - Society continuously adapts, but the pace strains human institutions We'd still call it a "singularity" because the rate of change exceeds human adaptation timescales, but there's no single dramatic moment. *3. 
The Transformation of Human Intelligence* The singularity might not be about AI surpassing us, but about us merging with technology: - Brain-computer interfaces become sophisticated - Cognitive enhancement becomes routine - Human intelligence itself enters the feedback loop - The distinction between "biological" and "artificial" intelligence blurs In this scenario, *we* become the superintelligence, avoiding the "left behind" problem that Good worried about. What Daily Life Might Look Like *Early Phase (Crossing the Threshold):* Imagine waking up to news that an AI has: - Solved protein folding completely, enabling cure of all genetic diseases - Designed room-temperature superconductors, revolutionizing energy - Proved or disproved major mathematical conjectures - Developed new physics beyond the Standard Model All in the same week. The pace of "impossible breakthroughs" becomes the new normal. *Mid-Phase (Incomprehensibility Threshold):* - Scientific papers are written by AI, for AI, in languages optimized for machine understanding - Economic decisions happen at speeds humans can't follow - New technologies emerge whose purpose we struggle to understand - The question "should we allow this?" becomes meaningless because we can't evaluate what "this" even is *Advanced Phase (Total Transformation):* This is where speculation becomes nearly meaningless, but possibilities include: - Molecular nanotechnology rebuilds the physical world atom by atom - Space colonization happens at previously impossible scales - Human consciousness might be uploadable, copyable, modifiable - The distinction between simulation and reality might dissolve - Physical laws might be exploitable in ways we currently consider impossible The Uncomfortable Parts *Loss of Agency:* Humans might become like pets or children in a world shaped by vastly superior intelligence. Even if the AI is benevolent, we're no longer steering our own destiny. *Comprehension Gap:* We might live in a world where most of what's happening is simply beyond our ability to understand. Imagine explaining your smartphone to your dog - that's the relationship we'd have with post-singularity technology. *Identity Crisis:* If intelligence can be copied, modified, and enhanced arbitrarily, what does it mean to be "you"? Traditional concepts of self, consciousness, and humanity might not survive. *Control Problem:* The presentation assumes smooth acceleration, but there's a serious question: how do we ensure superintelligent AI remains aligned with human values when it's smart enough to deceive us or pursue goals we can't anticipate? The Time Paradox Here's something fascinating about the presentation's hyperbolic model: if the singularity is approaching, we should expect: - *Most of all progress in history happens in the final moments before the singularity* - *Most of the universe's intelligence will exist after the singularity* - *The pre-singularity period (all of human history) is vanishingly brief* This means we're either: - Living in the most important moment in history (if it's about to happen) - Living in a simulation/recreation by post-singularity beings studying their origins - Wrong about the whole trajectory Why It Might Not Happen *Physical Limits:* Intelligence may hit hard limits from thermodynamics, speed of light, or quantum mechanics. *Diminishing Returns:* Each level of intelligence might require exponentially more resources to reach. *Complexity Barriers:* Some problems might be fundamentally intractable, even for superintelligence. 
*Coordination Failures:* Society might fail to develop or deploy transformative AI for political, ethical, or practical reasons. My Take The singularity concept is intellectually coherent but faces a profound prediction problem: by definition, we can't meaningfully imagine what's on the other side. It's like asking a fish to predict what life is like on land. What strikes me most is the presentation's hyperbolic curves pointing to the 2020s-2030s. Whether or not a full singularity occurs, we're clearly in a period of unusual acceleration. The question isn't just "will it happen?" but "what do we do in the next few years when the trends the presentation documents continue to intensify?" ============================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Wed Nov 5 14:23:24 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Wed, 5 Nov 2025 14:23:24 +0000 Subject: [ExI] Zuboff's morality (Was: Re: 1DIQ: an IQ metaphor to explain superintelligence) In-Reply-To: References: Message-ID: Ok, I've had a look at his paper, and made a few substitutions to make it easier to understand. Let me know if you object to any of these: 'desire' = intention 'belief' = anticipated result 'correctable' = changeable 'real' = preferred 'perfect grasp' = foreknowledge The relevant passages now read, with my comments in brackets: "Imagine that I have before me on a table a cup containing a thick, brown, steaming liquid. I want to drink that stuff because I think it is hot chocolate. But it is actually hot mud. Well, in that case I don?t really intend to drink it. And neither is it in my self-interest to do so. This example brings out the way in which intentions depend on anticipated results. I only ever intend to do a thing because of what I anticipate the result to be." (this is not true. It's not uncommon to have an intention to do something in order to /find out/ what the result will be rather than in anticipation of an expected result. It would be more accurate to say that intentions CAN be based on anticipated results, and that you MAY do a thing because of the anticipated result. In Zuboff's original language, you would say 'to have a desire to form a belief about something'. The 'desire' precedes the 'belief', rather than the other way around, in this case. When A can cause B or B can cause A, you can't draw the conclusion that 'A depends on B') "And since anticipated results are changeable, so are intentions." (this implies that intentions are changeable /because/ anticipated results can change. It's possible to change your mind about the anticipated results of an intended action, or to change your intended action and anticipate the same result. It would be more accurate to say that both anticipations and intentions are changeable, but a change in one doesn't necessarily enforce a change in the other) "From this observation I arrive at a sweeping principle: My only preferred intentions are those I would have if I had a foreknowledge of everything involved." (because of the above, this is a false conclusion) "If there is any intention I have only because my foreknowledge of the outcome is imperfect, then that cannot be among my preferred intentions." (this would rule out any intention to find something out (because if you want to find something out, you necessarily don't already know the answer). I don't know about anyone else, but a lot of my 'preferred intentions' have the aim of finding things out that I don't already know. 
If you already knew, there would be no need to have an intention to find it out) "And gratifying that intention cannot be in my preferred self-interest. The principle going along with this that governs my actions must tell me to act, as far as possible, as I would want myself to be acting with a foreknowledge of everything involved." (it should be obvious now why this is nonsense, but nevertheless, let's follow this line of thought through (italics are mine):) "This foreknowledge that defines my preferred intentions and my best course of action," /is of course impossible. He goes on to explain why/. "It would have to embrace not only the full experience, from behind the eyes (or other sensors), of every sentient being but also every potential development of experience. It would include within it, all the motivations of all of the various systems of intention" /which would simply conflict with each other. The overall result would be chaos and paralysis (in case this is not obvious, consider combining the motivations of a religious fundamentalist with those of a scientifically-literate materialist. These are conflicting value systems. Objective facts can't reconcile them. 'Perfect foreknowledge' can't do a thing when subjective values are involved. Let's say that you have the opportunity to punish/forgive someone who has stolen something from you. The values of one person (that you have, according to this theory, magical access to) dictate that the thief should be punished regardless of the circumstances of the crime, because 'STEALING IS WRONG'. You also have access to the values that tell you that stealing is often wrong, but can be forgiven under certain circumstances. How can there be any reconciliation of these two views? What facts can help?)/. So even if there was any possibility of this, it still couldn't lead to any rational definition of morality. The requirement to know all possible points of view, and all outcomes of all actions are impossible enough, but add on top the requirement to /reconcile/ all points of view? And only then can you figure out what's good and what's bad? Bonkers, pure and simple. I trust you'll understand that this is as much as I was willing to read of the paper. -- Ben From spike at rainier66.com Wed Nov 5 14:42:37 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 06:42:37 -0800 Subject: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> Message-ID: <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> From: John Clark Subject: Re: [ExI] China and solar power On Tue, Nov 4, 2025 at 7:33?PM > wrote: >>? That ballroom costs the taxpayer nothing. Why would you be opposed to that? >?Because the list of people that are paying for You Know Who's stupid ballroom is being kept secret, and that is a fertile breeding ground for corruption?. The problem is corruption, not the ballroom. So focus on catching the corruption, while cheering for the ballroom. >> I oppose it too and agree it should be cancelled, not because I have anything against solar but because taxpayers shouldn?t be funding power projects. >? I say to hell with yesterday's outdated ideology and do whatever is necessary to beat China to the AI finish line. And that finish line is only three years away, maybe less. John K Clark If we go that route, our best bet for power expansion is small coal-fired plants. The technology for building those is in place, having changed little in over a century. 
There are no materials shortages; they can be put in place quickly and they can even be built to emit almost no pollutants (now that carbon dioxide is no longer classified as a pollutant.) https://azpha.org/2025/08/06/epa-decides-that-co%E2%82%82-isnt-a-danger-anymore-so-they-can-reverse-carbon-emission-standards/ China is building coal plants too. https://www.carbonbrief.org/chinas-construction-of-new-coal-power-plants-reached-10-year-high-in-2024/ If you get a chance to tour a coal burning plant, notice how simple they are. Modern coal burners are not as simple, for they are usually required by state or local environmental regulations to scrub the sulfur and carbon particulates and nitrogen oxides from their products of combustion. The technology to do this is well known. Modern Diesel trucks do the same with Diesel exhaust fluid (DEF, that blue stuff you see in convenience stores) which scrubs nitrogen oxides, then the black stuff is removed using the Diesel Particulate Filter. With those systems, a coal plant can emit only carbon dioxide and water, neither of which is a pollutant. Carbon dioxide is still a pollutant in California, but it is not a pollutant in Nevada. That means there is enormous profit potential in putting a bunch of coal plants along the state line in Nevada and carrying the low-cost generated power to ravenous wealthy power markets in California. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Nov 5 15:26:29 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 5 Nov 2025 10:26:29 -0500 Subject: [ExI] Maybe AGI is just the latest conspiracy theory? In-Reply-To: References: Message-ID: *Durably reducing conspiracy beliefs through dialogues with AI* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 15:47:05 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 07:47:05 -0800 Subject: [ExI] are we there yet? In-Reply-To: <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> Message-ID: <00d201dc4e6b$703b6480$50b22d80$@rainier66.com> ehhhhh? no. Not yet. https://twitter.com/i/status/1984026877057733066 spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Wed Nov 5 16:01:06 2025 From: dsunley at gmail.com (Darin Sunley) Date: Wed, 5 Nov 2025 09:01:06 -0700 Subject: [ExI] are we there yet? In-Reply-To: <00d201dc4e6b$703b6480$50b22d80$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <00d201dc4e6b$703b6480$50b22d80$@rainier66.com> Message-ID: You can tell we're on a relatively smooth exponential curve, because the parts behind us look flat ("the humanoid robots get confused. The chatbots still seem a little stilted when they ace every standardized test in human history") and the part in front of us looks vertical. Every part of an exponential curve looks like that. On Wed, Nov 5, 2025 at 8:48 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > ehhhhh? no. Not yet.
> > > > https://twitter.com/i/status/1984026877057733066 > > > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 5 16:10:52 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 5 Nov 2025 08:10:52 -0800 Subject: [ExI] China and solar power In-Reply-To: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> Message-ID: On Tue, Nov 4, 2025 at 4:34?PM spike jones via extropy-chat wrote: > > ?> On Behalf Of John Clark via extropy-chat > Cc: John Clark > Subject: [ExI] China and solar power > >? The US was going to build a 6.2-GW solar + storage facility in Nevada which would've been the largest in the US, but apparently He Who Must Not Be Named hates solar power just as much as he hates wind power because about two weeks ago, for reasons not made clear, he canceled the project?.. John K Clark > > Who in the US? And why does POTUS have any say in that? > > >? Meanwhile construction on the $300 million White House ballroom continues? > > That ballroom costs the taxpayer nothing. Why would you be opposed to that? The companies "donating" the money have almost $300 *billion* in federal contracts. What do you think? > John, you imply that the cancelled solar project was taxpayer funded (otherwise POTUS would have no influence on it.) If so, I oppose it too and agree it should be cancelled, not because I have anything against solar but because taxpayers shouldn?t be funding power projects. Long history, though. The Feds built all the big dams in the West. > The federal government is an unreliable investment partner. Extrapolation to its bankruptcy is easy. The South Koreans would agree with you on this point. Keith > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Wed Nov 5 16:25:02 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 08:25:02 -0800 Subject: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> Message-ID: <010201dc4e70$bd7c5ce0$387516a0$@rainier66.com> -----Original Message----- From: Keith Henson ... > >>... That ballroom costs the taxpayer nothing. Why would you be opposed to that? >...The companies "donating" the money have almost $300 *billion* in federal contracts. What do you think? No worries, in four years, the priority can be tearing down that ballroom. But more importantly, all funds can be traced. These companies with federal contracts can be identified by various means and scrutinized. >> ... not because I have anything against solar but because taxpayers shouldn?t be funding power projects. >...Long history, though. The Feds built all the big dams in the West... In the easily foreseeable future, all such projects end. The federal government must borrow sufficient funds to cover the Social Security fund it borrowed and spent in the 1990s. When that is paid off and Social Security reverts to pay as you go, the fed will do little more than collect Social Security and Medicare, with most of the rest of the revenue going to interest. Of course that scenario only plays out if there is no Singularity. >>... 
The federal government is an unreliable investment partner. Extrapolation to its bankruptcy is easy.

>...The South Koreans would agree with you on this point. Keith

Ja, the South Koreans have a bit of a demographics challenge on their hands. But they are not alone. France is facing a similar problem with government debt. Now I hear the government of Germany is also facing a debt bomb.

I learned that the federal government had 679 million dollars invested in the Revolution Wind Power project. Cancelled. Alternative investors were found. Currently the company is still saying they can complete on time (second quarter 2026) even though the federal government withdrawing funding delayed the project at least five to eight weeks.

Moral: no one should invest in any power infrastructure project in which the federal government is a co-investor.

spike

From spike at rainier66.com Wed Nov 5 16:48:52 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 08:48:52 -0800 Subject: Re: [ExI] China and solar power In-Reply-To: <010201dc4e70$bd7c5ce0$387516a0$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <010201dc4e70$bd7c5ce0$387516a0$@rainier66.com> Message-ID: <011501dc4e74$11cb6ef0$35624cd0$@rainier66.com>

-----Original Message----- From: spike at rainier66.com ...

>...Moral: no one should invest in any power infrastructure project in which the federal government is a co-investor. spike

Alternative idea: if wind farms can be built at sea, why not coal plants? The coal plants do not kill birds, and with the modern particulate filtering and nitrogen oxide scrubbing, they emit no pollutants. So now the coal plant went from being the dirtiest of energy sources to the second cleanest (still behind nuclear plants.)

The coal could be hauled 12 miles off the west coast out into international waters. The necessary cooling water could be distilled from seawater, or dump the excess heat directly into the seawater without an evaporation phase change.

Keith wouldn't that work?

spike

From hkeithhenson at gmail.com Wed Nov 5 17:47:16 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 5 Nov 2025 09:47:16 -0800 Subject: Re: [ExI] China and solar power In-Reply-To: <011501dc4e74$11cb6ef0$35624cd0$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <010201dc4e70$bd7c5ce0$387516a0$@rainier66.com> <011501dc4e74$11cb6ef0$35624cd0$@rainier66.com> Message-ID:

On Wed, Nov 5, 2025 at 8:48 AM wrote: > > -----Original Message----- > From: spike at rainier66.com > > >...Moral: no one should invest in any power infrastructure project in which the federal government is a co-investor. spike > > Alternative idea: if wind farms can be built at sea, why not coal plants? The coal plants do not kill birds, and with the modern particulate filtering and nitrogen oxide scrubbing, they emit no pollutants. So now the coal plant went from being the dirtiest of energy sources to the second cleanest (still behind nuclear plants.) > > The coal could be hauled 12 miles off the west coast out into international waters. The necessary cooling water could be distilled from seawater, or dump the excess heat directly into the seawater without an evaporation phase change. > > Keith wouldn't that work?

No. A coal plant built at sea would be way more expensive to build and operate. Plus, in California, the government would never let you connect to the grid.
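For scale on the "dump the excess heat directly into the seawater" idea, here is a Python-flavored back-of-envelope sketch. Every number in it is an illustrative assumption, not a figure from this thread: a notional 1 GW(e) plant at roughly 38 percent efficiency rejects on the order of 1.4 GW to its condenser, and once-through cooling with a ~10 C seawater temperature rise then needs a few tens of cubic metres of seawater per second.

# Back-of-envelope condenser cooling for a notional 1 GW(e) coastal coal plant.
# All values below are assumptions for illustration only.
P_electric = 1.0e9           # W, net electrical output (assumed)
efficiency = 0.38            # assumed net thermal efficiency
stack_loss = 0.10            # assumed fraction of fuel heat lost up the stack

P_thermal = P_electric / efficiency                   # ~2.6 GW of fuel heat
Q_reject = P_thermal * (1 - stack_loss) - P_electric  # heat to condenser, ~1.4 GW

cp_seawater = 3990.0         # J/(kg*K), specific heat of seawater
delta_T = 10.0               # K, allowed cooling-water temperature rise
rho_seawater = 1025.0        # kg/m^3

m_dot = Q_reject / (cp_seawater * delta_T)            # required mass flow, kg/s
vol_flow = m_dot / rho_seawater                       # m^3/s

print(f"Condenser heat rejection: {Q_reject / 1e9:.2f} GW")
print(f"Seawater flow needed:     {m_dot:,.0f} kg/s (~{vol_flow:.0f} m^3/s)")

Flows of that size are routine for coastal plants, so on this sketch the cooling water itself is not the obstacle; the economics of building and operating everything else at sea is the harder problem.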
Keith > spike > From spike at rainier66.com Wed Nov 5 18:27:54 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 10:27:54 -0800 Subject: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <010201dc4e70$bd7c5ce0$387516a0$@rainier66.com> <011501dc4e74$11cb6ef0$35624cd0$@rainier66.com> Message-ID: <013a01dc4e81$e76f6fe0$b64e4fa0$@rainier66.com> -----Original Message----- From: Keith Henson > >>... The coal could be hauled 12 miles off the west coast out into international waters. The necessary cooling water could be distilled from seawater, or dump the excess heat directly into the seawater without an evaporation phase change. > >>... Keith wouldn't that work? spike >...No. A coal plant built at sea would be way more expensive to build and operate. Plus, in California, the government would never let you connect to the grid. Keith Clarification: the proposed coal plant is floating, built on a retired war ship for instance. A WW2 era light battle cruiser would be plenty big for something like that, and those are just scrap iron now anyway. OK, low cost low tech coal burning plant built aboard a retired war ship, anchored 12 miles out, where carbon dioxide is not a pollutant. Ballot proposal introduced that proposes to undo California's SB100 signed in 2019, which required the renewable energy purchase. We know that the price of power more than doubled since then. OK, so now we introduce SB negative 100, which removes the requirement for California to purchase renewable energy, saving birds, keeping wilderness pristine, restoring nature to wild beasts, reducing power bills to half its current price by allowing the power company to buy whatever power is available which meets California environmental regulations if located in the state. A coal plant floating 12 miles out would not be located in the state, nor would a long row of coal plants in Nevada just across the state line. Of course California voters will buy into that, not to save birds, but to save half their power bill. I bet that would pass bigtime. spike From johnkclark at gmail.com Wed Nov 5 19:47:11 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 5 Nov 2025 14:47:11 -0500 Subject: [ExI] China and solar power In-Reply-To: <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> Message-ID: On Wed, Nov 5, 2025 at 9:42?AM wrote: *>? I say to hell with yesterday's outdated ideology and do whatever is necessary to beat China to the AI finish line. And that finish line is only three years away, maybe less. * > *> If we go that route, our best bet for power expansion is small > coal-fired plants.* I asked GPT the following question: "Is coal produce electricity cheaper than solar or wind produced power?" And this is what GPT said: *"In short: Generally, no ? coal is not cheaper than solar or wind power any more, especially for new generation. The key metric to compare is the levelized cost of electricity (LCOE) ? basically the all-in cost (capital + operation + fuel + maintenance, averaged over lifetime) per megawatt-hour (MWh).* *Here?s how recent analyses compare: * *According to Lazard?s ?2024 LCOE+? 
report:

- On-shore wind: US$27-73/MWh (average ~US$50/MWh) globally.
- Utility-scale solar PV: US$29-92/MWh (average ~US$61/MWh) globally.
- Coal (new build, or generally fossil fuel generation) comes in at US$69-168/MWh in many markets.

Additional sources:

- One guide shows coal at roughly US$68-166/MWh while solar PV is US$28-117/MWh, onshore wind US$23-139/MWh.
- A study found that in the U.S., by ~2025 almost all existing coal plants will cost more to operate than new wind & solar in the same region.

So in many regions, new solar and wind projects are cheaper than building new coal plants. Also, in many cases, even continuing to run older coal plants is more expensive than switching to new renewables."

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike at rainier66.com Wed Nov 5 20:32:55 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 12:32:55 -0800 Subject: Re: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> Message-ID: <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com>

From: John Clark ...

>...I asked GPT the following question: "Is coal produce electricity cheaper than solar or wind produced power?" And this is what GPT said: "In short: Generally, no..." AI

But in longer, generally, the question compares two things which are not strictly comparable because they don't do the same thing. The biggest difference is the coal plant output can be adjusted as needed. The solar farm gives you power according to how much sun is shining that day. The wind farm is even less predictable. To meet the reliability and availability specifications, the solar farm and way more so the wind farm need power storage, which generally isn't calculated in directly, nor is the risk cost associated with catastrophes such as the Moss Landing fire in January. We still don't know the cost of cleaning up that catastrophe, but it won't be cheap. The storage capacity Moss Landing once provided now is mostly gone, and no one wants a battery plant nearby, for reasons we saw.

When the ballot proposals were put in place to compel PG&E to buy 30% renewables now, 50% by 2030, 100% by 2045, they told us how wind power is so price competitive etc, without ever mentioning the intermittency and unpredictability. The power company explained that, but since it was voter-facing literature, it had to be written on a fifth grade level, and since it was California voters, fifth graders can't read. So it had to be written on a third grade level. Some concepts, such as power distribution, just don't simplify down very effectively.

I already knew the proposition had the simplest-slogan advantage. Whichever politician has the fewest words and least intellectual content has an advantage. Victorious examples: I Like Ike. Or Nixon: Now more than ever. MAGA worked twice, once against I'm With Her and once against We're Not Going Back (an odd choice for an incumbent.) The power proposition had: Renewable Energy for California. It didn't say it was cheap energy. The price of power went from 19 cents to 41 cents on my last bill, and it is still going up.

spike

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Wed Nov 5 20:33:11 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 5 Nov 2025 15:33:11 -0500 Subject: [ExI] China and solar power In-Reply-To: <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> Message-ID: On Wed, Nov 5, 2025 at 9:42?AM wrote: *>> the list of people that are paying for You Know Who's stupid ballroom >> is being kept secret, and that is a fertile breeding ground for >> corruption?.* > > *> The problem is corruption, not the ballroom. So focus on catching the > corruption, while cheering for the ballroom.* *Do you consider that to be an adequate response? Would you have said the same thing if it had been Joe Biden who decided to tear down the entire east wing of the White House and build an ostentatious 300 million dollar ballroom using secret money from donors unknown to everybody except for the president? * *I'll bet that none of the people who donated that money even know how to dance, so they must've done it for some other reason, but I can't imagine what in the world it could be. It's a profound puzzle. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 5 20:39:39 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 12:39:39 -0800 Subject: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> Message-ID: <017c01dc4e94$4f23e0f0$ed6ba2d0$@rainier66.com> From: John Clark Subject: Re: [ExI] China and solar power On Wed, Nov 5, 2025 at 9:42?AM > wrote: >> the list of people that are paying for You Know Who's stupid ballroom is being kept secret, and that is a fertile breeding ground for corruption?. > The problem is corruption, not the ballroom. So focus on catching the corruption, while cheering for the ballroom. >?Do you consider that to be an adequate response? Would you have said the same thing if it had been Joe Biden who decided to tear down the entire east wing of the White House and build an ostentatious 300 million dollar ballroom using secret money from donors unknown to everybody except for the president? Can he prove Joe wasn?t putting in some of his own money? I reject the premise of the question. >?I'll bet that none of the people who donated that money even know how to dance? John K Clark Dance? Nah, state dinners and such as that, not dancing. No one actually knows how to dance anymore. That dance business went away about the time the Stayin Alive nonsense started when we were kids. None of us wanted to risk looking like John Travolta, oy vey mercy. They called it a ballroom so that it would have the initials BBB, to match the Big Ugly Bill, which I definitely do oppose: it is too big by a trillion dollars. Money can be followed. So follow it. Let?s see if we can catch some corruption that way. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From spike at rainier66.com Wed Nov 5 21:21:00 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 13:21:00 -0800 Subject: [ExI] the bbb, the bub and the budget: was RE: China and solar power In-Reply-To: <017c01dc4e94$4f23e0f0$ed6ba2d0$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017c01dc4e94$4f23e0f0$ed6ba2d0$@rainier66.com> Message-ID: <018d01dc4e9a$15e23a70$41a6af50$@rainier66.com>

From: spike at rainier66.com

>... Would you have said the same thing if it had been Joe Biden who decided to tear down the entire east wing of the White House and build an ostentatious 300 million dollar ballroom using secret money... spike

John, I can be more specific, since you have known me and read my ExI posts for 30 years: I care nothing about personalities or parties. I really care only about one thing in federal government: the budget. The two mainstream parties look so much alike from that perspective I can scarcely tell them apart.

They worked together in 1994 creating the crisis I have been anticipating ever since: they made a bookkeeping change that year which allowed them to treat positive flow into Social Security as revenue. Then they could pretend they balanced the budget that year (they didn't.) The problem with that whole approach is that if you treat positive flow from the Social Security fund as revenue, you then must treat negative flow (back into that fund) as voluntary expenditures, rather than what it really is: mandated expenditures. The government is required to pay Social Security benefits (as it should be.) That bookkeeping change in 1994 allowed the federal government to spend way beyond its means while pretending it was not. Currently the US is getting 7 trillion dollars a year in government while paying 5 trillion, then pretending this is sustainable. Welllll, it isn't. If the Singularity comes along in two or three years, all is well. But if it doesn't, then all is very unwell.

John, when you ask me about this politician vs that politician, recall that I only care about one thing in federal government: the budget. Reason: I can see what is coming. Elon Musk was telling the truth. He told us we would need to get government spending down to 5 trillion a year somehow. He suggested cutting a trillion out of the budget this year and another trillion next year. Otherwise, the federal government will be a debt slave, doing nothing but paying Social Security, Medicare and interest. Nothing else. Every other government function will need to be handed up to state governments, some of which will do adequately, some will not.

Of course the Singularity might come along very soon, but it might not. The Singularity might be a decade or more away, but that Social Security transition will happen in a bit over seven years.

spike

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From atymes at gmail.com Thu Nov 6 05:07:27 2025 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 6 Nov 2025 00:07:27 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <002401dc4e4f$1dd7ff30$5987fd90$@rainier66.com> References: <002401dc4e4f$1dd7ff30$5987fd90$@rainier66.com> Message-ID:

On Wed, Nov 5, 2025, 7:24 AM wrote: > From every plausible Singularity scenario I can imagine, the event date is > somewhere in the 2 years to about 40 years future, and will require > enormous expansion of the power grid.
> > > > Given that, our best source for a prime mover is coal. > Solar panels provide full ROI in much less than 40 years. I should know: I had a system from when it was new through when it ran down. Anything beyond 10 years, maybe 5, and solar beats out coal on that basis alone even without environmental impacts. Note that that's the total time the AI will require power - and it seems at least pretty likely the AI will require more power through and for years after the Singularity, so it still will have been better to set up far more solar power generation than coal. This is aside from how plausible a Singularity taking more than 40 years seems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 6 05:23:02 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 5 Nov 2025 21:23:02 -0800 Subject: [ExI] FW: what if... the singularity isn't near? In-Reply-To: <004a01dc4edd$4d55a670$e800f350$@rainier66.com> References: <002401dc4e4f$1dd7ff30$5987fd90$@rainier66.com> <004a01dc4edd$4d55a670$e800f350$@rainier66.com> Message-ID: <005101dc4edd$6c989470$45c9bd50$@rainier66.com> From: Adrian Tymes > Subject: Re: [ExI] what if... the singularity isn't near? On Wed, Nov 5, 2025, 7:24?AM > wrote: >From every plausible Singularity scenario I can imagine, the event date is somewhere in the 2 years to about 40 years future, and will require enormous expansion of the power grid. Given that, our best source for a prime mover is coal. Solar panels provide full ROI in much less than 40 years. I should know: I had a system from when it was new through when it ran down? Adrian that depends on where it is located. Germany went all in with the solar panels. Those things are so far up there, good chance they will never pay. I don?t understand it: they had nukes up the kazoo. They traded THOSE BABIES for solar panels? When their latitude is up with Canada? I don?t see it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Nov 6 13:00:07 2025 From: johnkclark at gmail.com (John Clark) Date: Thu, 6 Nov 2025 08:00:07 -0500 Subject: [ExI] China and solar power In-Reply-To: <017c01dc4e94$4f23e0f0$ed6ba2d0$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017c01dc4e94$4f23e0f0$ed6ba2d0$@rainier66.com> Message-ID: On Wed, Nov 5, 2025 at 3:39?PM wrote: *> **The problem is corruption, not the ballroom. So focus on catching >>> the corruption, while cheering for the ballroom.* >> >> > >> *>> ?Do you consider that to be an adequate response? Would you have said >> the same thing if it had been Joe Biden who decided to tear down the entire >> east wing of the White House and build an ostentatious 300 million dollar >> ballroom using secret money from donors unknown to everybody except for the >> president?* > > > > > *> Can he prove Joe wasn?t putting in some of his own money? * > *I don't understand that question. * > * > I reject the premise of the question.* > *I know, but I don't understand why. I've been trying to understand why since 2016. * > *>Money can be followed. So follow it. Let?s see if we can catch some > corruption that way.* > *These days you don't need to be Sherlock Holmes to find corruption. 
Finding corruption has become trivially easy, nearly every week He Who Must Not Be Named not only publicly admits but brags about corruption that he has committed that was far worse than the Watergate "scandal" or anything else that Nixon had done, by today's standards the man was a saint. During this administration the problem is doing something about scandal when You Know Who can lead every Republican in the House and the Senate around by the nose, and when he has iron control over the prosecutors in the justice department, and when he has almost total control of the entire Judicial Branch of government. Although after yesterday's Democratic victories in the election his iron control might be getting slightly weaker, or maybe that's just wishful thinking on my part.* *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Nov 6 13:21:31 2025 From: johnkclark at gmail.com (John Clark) Date: Thu, 6 Nov 2025 08:21:31 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: On Tue, Nov 4, 2025 at 9:33?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> But what if? the Singularity is a coupla decades off still, and Kurzweil > was mostly right, but off a little* *If the Singularity happens a few years after Kurzweil's prediction then we will have a few more years to prepare for the most significant event since the Cambrian Explosion. Now let me ask you a question. What if... the Singularity happens a few years BEFORE Kurzweil's prediction? * *Regardless of if it's a few years late or early, when our descendants (or Mr. Jupiter Brain if we have no descendents) look back at today's developments from a historical perspective, worries about not having a balance budget will seem like the captain of the Titanic worrying about the deck chairs on his ship not being properly arranged. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Nov 6 13:43:36 2025 From: johnkclark at gmail.com (John Clark) Date: Thu, 6 Nov 2025 08:43:36 -0500 Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000c01dc4dff$3cfd6170$b6f82450$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <000c01dc4dff$3cfd6170$b6f82450$@rainier66.com> Message-ID: On Tue, Nov 4, 2025 at 9:54?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> What if? we fooled ourselves with the obvious jump in smart search > algorithms? * *If you pick Mr. Joe Average man off the street and make him compete in the International Mathematical Olympiad he's not going to get a gold medal even if you let him use a very good search engine during the test. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Nov 6 14:08:05 2025 From: johnkclark at gmail.com (John Clark) Date: Thu, 6 Nov 2025 09:08:05 -0500 Subject: [ExI] what if... the singularity isn't near? 
In-Reply-To: References: Message-ID: On Wed, Nov 5, 2025 at 6:22?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: On 05/11/2025 03:39, spike wrote: > > >> it appears to me that what we are calling AI might be a kind of false >> alarm > > > > > > > *> I for one think it is, at least in the sense that what we call AI now > is not going to develop into general intelligence, much less > superintelligence. It's far too narrow in the kind of information > processing it does, compared to the only generally-intelligent thing we > know, our own brains.* > *Today AIs are not better than the best humans at everything, but they are better than the best humans at some things, and those things were once thought of as excellent examples of intelligence. And today AIs are better than the average human at nearly everything except for manual dexterity. For some reason people treat "AGI" and "Superintelligence" as synonyms, and that causes nothing but confusion. * *> * > > *If intelligence is substrate-specific, and biology is the only thing that > can sustain it, then we're really in trouble, and I think all bets are off.* *I think the likelihood of that being true is about equal to the likelihood that the holy rollers and snake handlers will turn out to be right. And I'm not holding my breath. * > > * > In fact, if that is true, then all of science is in trouble. We might > as well start taking religion and other kinds of magical thinking > seriously.* > *Exactly. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 6 16:31:09 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 08:31:09 -0800 Subject: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> From: John Clark Sent: Thursday, 6 November, 2025 5:22 AM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] what if... the singularity isn't near? On Tue, Nov 4, 2025 at 9:33?PM spike jones via extropy-chat > wrote: > But what if? the Singularity is a coupla decades off still, and Kurzweil was mostly right, but off a little >?If the Singularity happens a few years after Kurzweil's prediction then we will have a few more years to prepare for the most significant event since the Cambrian Explosion?John K Clark I propose some terminology to clarify if a debate has a common basis or preliminary assumptions. Suppose the Singularity is an event which is unambiguous in that there is little or no debate afterwards if it happened. Afterwards we can identify a date on which the Singularity event happened, to at least a particular week. We are before that now, so we can think of Ideas Futures structures in which we place bets on a time span for a future event, such as the discovery of the next Mersenne Prime, analogous to the future event that I predicted thrice and made a cool imaginary fortune on play money Ideas Futures, or the current political betting on when the phony US government shutdown will end: https://www.predictit.org/markets/detail/8316/When-will-Trump-sign-a-bill-to-end-the-shutdown-(Part-two) Suppose we play a prediction game with the Singularity, for a reason. 
If we have a debate between a person who firmly believes the Singularity is two or three years off, of course he will have a different perspective from one who thinks it is about 15 years off. Side note: I also propose returning the pronouns he and him to genderless, restoring she and her to specifically female, and if we ever need a specifically masculine gender (when was the last time you needed that grammatical construct? (never? (same here))) then the pronouns become hemale and hismale and maleman. Extension: terms such as guy and man also revert to genderless. Reason: after all this time, a quarter of a century at least, we never found a good way around the linguistic clumsiness we introduce with genderless pronouns. That he-or-she and the whole they/them business, forget it. To hell with all that, let he and him be genderless again, introduce new pronouns when we need a specifically male pronoun, which is never in that quarter century we have been committing comically absurd and unnecessary violence to language, trying to solve a non-problem. Make androgyny genderless again. Imagine someone who would bet the Singularity will happen in the next four years, starting now. Let us call him S-alpha. One who would bet in the next eight years but more than four, let us call that guy an S-beta, and so on, with every four year interval getting a new Greek letter. If a man believes there is some reason the Singularity will never come, let us call her S-negative. My late mother is a guy who I would call S-negative. That man never did embrace the whole notion of Singularity, even though she knew some things about it. In that system, everyone with any opinion on the matter is either S-negative or S-positives (singularitarians) with those who never heard of the concept are S-zero (this is the position of most of humanity.) For nearly all S-positives, many common societal considerations really matter little or nothing. We heard a lot about global warming in the past few years, but one of the big advocates and money sources Bill Gates, recently opined that it isn?t as big a problem as he once believed, and perhaps Europe should not have dismantled its industrial base to prevent it. Being S-positive, I have not worried about global warming or climate change ever, for I have long realized we have way bigger and far more immediate problems than that, which need a solution before the Singularity. S-positives generally don?t get too tangled up in problems that take centuries to hurt us. The long-running debate over the US federal government?s debt will naturally look different from differing Singularity points of view: the S-delta (me) will worry about that, but the S-alpha will not. If we think it necessary, we can break it down further. One who thinks the Singularity will happen in two years from now can be a subcategory S-alpha2. It is OK to shift positions. If one is currently an S-beta3, a year from now he is not required to become S-beta2. He can even take up S-beta4 two years from now. As in religion, conversion to alternative schools of thought is not only allowed, it is expected if one is thinking and listening. There are further ways to break down Singularity schools of thought, such as AI transforming societies before the Singularity for instance, which brings in another dimension of AI as a weapon, which nation gets AI first, and so on. 
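For concreteness, the four-year bucketing above can be written down in a few lines of Python; the function name and the specific Greek-letter list here are purely illustrative assumptions, not part of the proposal, only the bucketing rule itself comes from it:

# Illustrative sketch of the S-classification described above.
GREEK = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta"]

def s_class(years_until_singularity, heard_of_it=True):
    """Map a Singularity estimate to an S-label.

    years_until_singularity: estimated years from now, or None if the
    person believes the Singularity will never happen.
    """
    if not heard_of_it:
        return "S-zero"            # never heard of the concept
    if years_until_singularity is None:
        return "S-negative"        # believes it will never come
    bucket = int(years_until_singularity) // 4   # four-year intervals
    letter = GREEK[min(bucket, len(GREEK) - 1)]
    # optional finer subcategory, e.g. S-alpha2 for an estimate of ~2 years
    return f"S-{letter}{int(years_until_singularity)}"

print(s_class(2))     # S-alpha2
print(s_class(15))    # S-delta15
print(s_class(None))  # S-negative
print(s_class(0, heard_of_it=False))  # S-zero

On that bucketing, a bet of two to three years out lands in S-alpha and a bet of about fifteen years lands in S-delta, matching the labels used below.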
The Singularity is more analogous to the back yard nuclear device which nukes the entire planet, which makes it irrelevant which countries have nuclear weapons: it nukes everybody alike. The Singularity is a software version of a lake filled with tritium water. I volunteer to be the resident poster child for S-deltaism. John, you are the most prominent local S-alpha. This idea has likely already been explored, but I don?t know about it. Do guide me to the standard practice. Has anyone here ideas for alternatives or other additions, varying schools of thought on Singularity terminology? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 6 16:48:57 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 08:48:57 -0800 Subject: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017c01dc4e94$4f23e0f0$ed6ba2d0$@rainier66.com> Message-ID: <010301dc4f3d$3f245fa0$bd6d1ee0$@rainier66.com> From: John Clark ? >>? I reject the premise of the question. >?I know, but I don't understand why. I've been trying to understand why since 2016. John K Clark We have long known that in principle, political influence can be sold, and we have long known digital currency enables corruption. In the case of POTUS, pardons can be sold. From what we are hearing, someone in the White House had set up a pardon vending machine. There has been debate over whether BitCoin is traceable. But if someone sets up a Bitcoin account and gives you the access codes, until you access that money, there is no evidence, none. So a bribe can be made to an official in the form of access codes to BitCoin, which the receiver does not access, perhaps until many years later. Of course that enables the payer to steal the money, but it also covers the receiver. Result: a POTUS can sell pardons. We know that. For S-alphas it doesn?t matter however, for that level of corruption would be limited to convicts who own a ton of money, which is a small fraction of them. Even then, we have some evidence it isn?t happening, such as in the case of Jeffery Epstein who had plenty of money but didn?t get a pardon. We know all that, and accept it as the price of digital currency. Given that, the BB ballroom is irrelevant to me. That building isn?t what enables corruption. BitCoin enables corruption. No ballroom required. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Thu Nov 6 17:40:10 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 6 Nov 2025 10:40:10 -0700 Subject: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? In-Reply-To: <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> Message-ID: The 'singularity' to me, never occurs, within subjective or experiential time. In fact we are in the singularity now, relative to the age of the earth. To me, it is important to focus on a particular event, to know when that event happens. I like to talk about when the last person dies, or when we achieve immortality. Isn't that what is really important, even though once we achieve that, subjectively, we will still have a lot of problems, and be a long way from anything infinite. 
On Thu, Nov 6, 2025 at 9:32?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* John Clark > *Sent:* Thursday, 6 November, 2025 5:22 AM > *To:* ExI chat list > *Cc:* spike at rainier66.com > *Subject:* Re: [ExI] what if... the singularity isn't near? > > > > > > On Tue, Nov 4, 2025 at 9:33?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > *> **But what if? the Singularity is a coupla decades off still, and > Kurzweil was mostly right, but off a little* > > > > *>?If the Singularity happens a few years after Kurzweil's prediction then > we will have a few more years to prepare for the most significant event > since the Cambrian Explosion?John K Clark* > > > > > > *I propose some terminology to clarify if a debate has a common basis or > preliminary assumptions.* > > > > *Suppose the Singularity is an event which is unambiguous in that there is > little or no debate afterwards if it happened. Afterwards we can identify > a date on which the Singularity event happened, to at least a particular > week. We are before that now, so we can think of Ideas Futures structures > in which we place bets on a time span for a future event, such as the > discovery of the next Mersenne Prime, analogous to the future event that I > predicted thrice and made a cool imaginary fortune on play money Ideas > Futures, or the current political betting on when the phony US government > shutdown will end:* > > > > *https://www.predictit.org/markets/detail/8316/When-will-Trump-sign-a-bill-to-end-the-shutdown-(Part-two) > * > > > > *Suppose we play a prediction game with the Singularity, for a reason. If > we have a debate between a person who firmly believes the Singularity is > two or three years off, of course he will have a different perspective from > one who thinks it is about 15 years off. * > > > > *Side note: I also propose returning the pronouns he and him to > genderless, restoring she and her to specifically female, and if we ever > need a specifically masculine gender (when was the last time you needed > that grammatical construct? (never? (same here))) then the pronouns > become hemale and hismale and maleman. Extension: terms such as guy and > man also revert to genderless. Reason: after all this time, a quarter of a > century at least, we never found a good way around the linguistic > clumsiness we introduce with genderless pronouns. That he-or-she and the > whole they/them business, forget it. To hell with all that, let he and him > be genderless again, introduce new pronouns when we need a specifically > male pronoun, which is never in that quarter century we have been > committing comically absurd and unnecessary violence to language, trying to > solve a non-problem. Make androgyny genderless again. * > > > > *Imagine someone who would bet the Singularity will happen in the next > four years, starting now. Let us call him S-alpha. One who would bet in > the next eight years but more than four, let us call that guy an S-beta, > and so on, with every four year interval getting a new Greek letter. If a > man believes there is some reason the Singularity will never come, let us > call her S-negative. My late mother is a guy who I would call S-negative. 
> That man never did embrace the whole notion of Singularity, even though she > knew some things about it.* > > > > *In that system, everyone with any opinion on the matter is either > S-negative or S-positives (singularitarians) with those who never heard of > the concept are S-zero (this is the position of most of humanity.)* > > > > *For nearly all S-positives, many common societal considerations really > matter little or nothing. We heard a lot about global warming in the past > few years, but one of the big advocates and money sources Bill Gates, > recently opined that it isn?t as big a problem as he once believed, and > perhaps Europe should not have dismantled its industrial base to prevent > it. Being S-positive, I have not worried about global warming or climate > change ever, for I have long realized we have way bigger and far more > immediate problems than that, which need a solution before the > Singularity. S-positives generally don?t get too tangled up in problems > that take centuries to hurt us.* > > > > *The long-running debate over the US federal government?s debt will > naturally look different from differing Singularity points of view: the > S-delta (me) will worry about that, but the S-alpha will not.* > > > > *If we think it necessary, we can break it down further. One who thinks > the Singularity will happen in two years from now can be a subcategory > S-alpha2. It is OK to shift positions. If one is currently an S-beta3, a > year from now he is not required to become S-beta2. He can even take up > S-beta4 two years from now. As in religion, conversion to alternative > schools of thought is not only allowed, it is expected if one is thinking > and listening.* > > > > *There are further ways to break down Singularity schools of thought, such > as AI transforming societies before the Singularity for instance, which > brings in another dimension of AI as a weapon, which nation gets AI first, > and so on. The Singularity is more analogous to the back yard nuclear > device which nukes the entire planet, which makes it irrelevant which > countries have nuclear weapons: it nukes everybody alike. The Singularity > is a software version of a lake filled with tritium water.* > > > > *I volunteer to be the resident poster child for S-deltaism. John, you > are the most prominent local S-alpha.* > > > > *This idea has likely already been explored, but I don?t know about it. > Do guide me to the standard practice. Has anyone here ideas for > alternatives or other additions, varying schools of thought on Singularity > terminology?* > > > > *spike * > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Nov 6 17:47:39 2025 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 6 Nov 2025 12:47:39 -0500 Subject: [ExI] Zuboff's morality (Was: Re: 1DIQ: an IQ metaphor to explain superintelligence) In-Reply-To: References: Message-ID: On Wed, Nov 5, 2025, 9:24 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Ok, I've had a look at his paper, and made a few substitutions to make > it easier to understand. Let me know if you object to any of these: > > 'desire' = intention > 'belief' = anticipated result > I'm fine with these. 
'correctable' = changeable > Okay. But I'll note this word loses the connotation of "an improvement." 'real' = preferred > I can go along with this, but keep in mind they would be the actual/genuine preferences in light of accurate information of concern. 'perfect grasp' = foreknowledge > Just one thing to add: in the paper, a perfect grasp embodies not only foreknowledge (e.g. perfect knowledge of future states (think *depth*), but also perfect lateral knowledge concerning the perspectives and impacts and effects on other beings (e.g. think breadth). So the perfect grasp represents a near omniscient understanding of all the future consequences for all involved and effected by a particular action, including those who don't and won't exist. > > The relevant passages now read, with my comments in brackets: > > "Imagine that I have before me on a table a cup containing a thick, > brown, steaming liquid. > > I want to drink that stuff because I think it is hot chocolate. But it > is actually hot mud. Well, in that case I don?t really intend to drink > it. And neither is it in my self-interest to do so. > > This example brings out the way in which intentions depend on > anticipated results. I only ever intend to do a thing because of what I > anticipate the result to be." > Not bad, I can follow along with that substitution. > (this is not true. It's not uncommon to have an intention to do > something in order to /find out/ what the result will be rather than in > anticipation of an expected result. I don't think this escapes the statement. Your example asks: why would a scientist ever desire (intend) to test a hypothesis when he doesn't know the outcome? My answer to this is that for the scientist, he believes (anticipates) that the outcome of the experiment will provide new information for the scientist. Certainly, if the scientist did not believe (anticipate) any possibility of learning anything from the experiment, he would not bother performing it. It would be more accurate to say > that intentions CAN be based on anticipated results, and that you MAY do > a thing because of the anticipated result. In Zuboff's original > language, you would say 'to have a desire to form a belief about > something'. The 'desire' precedes the 'belief', rather than the other > way around, in this case. When A can cause B or B can cause A, you can't > draw the conclusion that 'A depends on B') > But to use your language, Zuboff is saying: intentions depend on anticipated results. I still think that is true, given my scientist example. And I don't see how it makes sense to say the reverse, that "anticipated results depend on intentions" -- perhaps only in the wishful thinking way, but not in any rational way (that I can see), but perhaps you have an example. > "And since anticipated results are changeable, so are intentions." > > (this implies that intentions are changeable /because/ anticipated > results can change. It's possible to change your mind about the > anticipated results of an intended action, or to change your intended > action and anticipate the same result. It would be more accurate to say > that both anticipations and intentions are changeable, but a change in > one doesn't necessarily enforce a change in the other) > True, not every revelation will justify a change in action or intention. When playing chess you may find a better move, and change your action without changing your intention to win. 
Or you may learn that if you don't throw the game, the child will abandon chess altogether, and therefore you may change your intention to win against the child. > "From this observation I arrive at a sweeping principle: My only > preferred intentions are those I would have if I had a foreknowledge of > everything involved." > > (because of the above, this is a false conclusion) > I'm, sorry, which are you referring to when you say "the above"? Could you better break down for me how you see this argument collapsing? > "If there is any intention I have only because my foreknowledge of the > outcome is imperfect, then that cannot be among my preferred intentions." > > (this would rule out any intention to find something out (because if you > want to find something out, you necessarily don't already know the > answer). I think I addressed this with my scientist example. I don't know about anyone else, but a lot of my 'preferred > intentions' have the aim of finding things out that I don't already > know. If you already knew, there would be no need to have an intention > to find it out) > These represent intentions to learn. > "And gratifying that intention cannot be in my preferred self-interest. > The principle going along with this that governs my actions must tell me > to act, as far as possible, as I would want myself to be acting with a > foreknowledge of everything involved." > > (it should be obvious now why this is nonsense, but nevertheless, let's > follow this line of thought through (italics are mine):) > It's not obvious to me yet, but I will follow along below. > "This foreknowledge that defines my preferred intentions and my best > course of action," /is of course impossible. He goes on to explain why/. > "It would have to embrace not only the full experience, from behind the > eyes (or other sensors), of every sentient being but also every > potential development of experience. It would include within it, all the > motivations of all of the various systems of intention" /which would > simply conflict with each other. The overall result would be chaos and > paralysis (in case this is not obvious, consider combining the > motivations of a religious fundamentalist with those of a > scientifically-literate materialist. These are conflicting value > systems. Objective facts can't reconcile them. It is knowledge of the subjective feeling of what it is like to be all those concerned, what Zuboff describes as "the full experience, from behind the eyes, of every sentient being" that provides such a resolution. Think of it like this: in your own life there is a version of you that goes to work does, chores, prepares meals, which doesn't enjoy those tasks. But also in your life there is a version of you that goes on vacation and enjoys recreation and leisure, and enjoying the meals your other self prepared. You have knowledge of both of those states of existence, and that puts you in a position to answer whether or not your life is a life worth living. And also it enables you to answer questions about what changes, and trade offs are worth it. E.g. should the toiling-self take on extra hours so that the leisure-self can enjoy a nicer vacation. 
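One crude way to picture that kind of trade-off (purely an illustration; Zuboff's perfect grasp is far richer than a utility sum) is to assign each affected perspective a utility under each option and prefer the option with the greater total. A minimal Python sketch, with made-up numbers:

# Toy illustration of weighing two perspectives within one life.
# The numbers and the simple summation are illustrative assumptions only.
def total_welfare(perspectives):
    """Sum the utilities of every affected perspective."""
    return sum(perspectives.values())

options = {
    "no extra hours, modest vacation": {"toiling_self": -2.0, "leisure_self": 3.0},
    "extra hours now, nicer vacation": {"toiling_self": -4.0, "leisure_self": 6.0},
}

best = max(options, key=lambda name: total_welfare(options[name]))
print("Preferred option:", best)   # the second option, on these made-up numbers

The same bookkeeping, extended across every sentient perspective rather than two aspects of one life, is one way to read "reconciling all the systems of desire," though nothing in the paper commits it to a simple sum.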
>From the vantage point of the perfect grasp, one could make such trade off decisions between different individuals, because in the same way you understand what it's like to work and be on vacation, the vantage point of the perfect grasp understands what it's like to be the scientific materialist *and* the religious fundamentalist, and so any actions that would affect their lives, negatively or positively, this perfect grasp could decide an appropriate trade offs just as you make such trade off decisions within your own life. Making such trade off decisions is what is meant by the reconciliation of all systems of desire. Think of it like all conscious perspectives are all part of a single life, and how one super intelligent being would optimize that life (which embodies and includes all those many perspectives). That optimization, is what Zuboff contends is the aim of morality. 'Perfect foreknowledge' > can't do a thing when subjective values are involved. Let's say that you > have the opportunity to punish/forgive someone who has stolen something > from you. The values of one person (that you have, according to this > theory, magical access to) dictate that the thief should be punished > regardless of the circumstances of the crime, because 'STEALING IS > WRONG'. You also have access to the values that tell you that stealing > is often wrong, but can be forgiven under certain circumstances. How can > there be any reconciliation of these two views? What facts can help?)/. > I think my explanation above is sufficient but if not let me know. > So even if there was any possibility of this, it still couldn't lead to > any rational definition of morality. The requirement to know all > possible points of view, and all outcomes of all actions are impossible > enough, but add on top the requirement to /reconcile/ all points of > view? And only then can you figure out what's good and what's bad? > Moral decisions are hard for exactly this reason. They involve weighing consequences to subjective states to which most parties have no access to. I think we should be upfront with acknowledging that difficulty as it suggests paths for resolving age old moral questions. Consider for example whether a law should be passed to increase the square footage allotted to egg laying hens. To answer the question requires understanding the stress and emotional states of the chickens with varying levels of room, and that has to be balanced again the correspondingly higher price of eggs, the unaffordability, possible hunger or nutritional drficicinies or worse health for those who can't afford eggs at those prices, etc. None of these are easy problems to solve, but with this definition, it makes it clearer how to organize a strategy to answer the question, and balance the concerns of all involved (to "reconcile all the systems of desire"). > Bonkers, pure and simple. > > I trust you'll understand that this is as much as I was willing to read > of the paper. > I hope my clarifications may motivate you to continue further. But even if not, I genuinely appreciate the extra attention and thought you have given it so far. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 6 17:48:51 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 09:48:51 -0800 Subject: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? 
In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> Message-ID: <001201dc4f45$9d16d810$d7448830$@rainier66.com> ? >>?This idea has likely already been explored, but I don?t know about it. Do guide me to the standard practice. Has anyone here ideas for alternatives or other additions, varying schools of thought on Singularity terminology? spike From: extropy-chat On Behalf Of Brent Allsop via extropy-chat Subject: Re: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? >?The 'singularity' to me, never occurs, within subjective or experiential time. In fact we are in the singularity now, relative to the age of the earth. To me, it is important to focus on a particular event, to know when that event happens. I like to talk about when the last person dies, or when we achieve immortality. Isn't that what is really important, even though once we achieve that, subjectively, we will still have a lot of problems, and be a long way from anything infinite. Brent Ja Brent I get that. What I meant was to come to an agreement on a version of the Singularity where survivors would know AI somehow took over and was running things, our computers don?t work anymore but everyone is not dead. Some of us survive, stuff is still coming to grocery stores, the orders are coming from somewhere, and they seem to be accurate, people are still getting paid and so on, but the people who used to do all that find out the system does what it does regardless of their input, weapons don?t necessarily fire when the soldier pulls the trigger, some people are mysteriously missing, and so on. It is a scenario where we know something important happened and we know when it happened. We know that hasn?t happened yet. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Thu Nov 6 19:04:18 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Thu, 6 Nov 2025 19:04:18 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> On 06/11/2025 17:40, spike wrote: > Side note: I also propose returning the pronouns he and him to genderless, restoring she and her to specifically female, and if we ever need a specifically masculine gender (when was the last time you needed that grammatical construct?? (never?? (same here))) then the pronouns become hemale and hismale and maleman.? Extension: terms such as guy and man also revert to genderless.? Reason: after all this time, a quarter of a century at least, we never found a good way around the linguistic clumsiness we introduce with genderless pronouns.? That he-or-she and the whole they/them business, forget it.? To hell with all that, let he and him be genderless again, introduce new pronouns when we need a specifically male pronoun, which is never in that quarter century we have been committing comically absurd and unnecessary violence to language, trying to solve a non-problem.? Make androgyny genderless again. Give that man a medal! We can re-instate words like 'chairman' instead of 'chairperson' (or even 'chair', which I've heard a lot recently. A chair is something you sit on, for christ's sake!), 'mankind', 'salesman', etc. 
The problem is, we will be accused of 'outdated gender stereotyping', as per this: https://metro.co.uk/2020/05/19/un-releases-list-gender-neutral-replacements-commonly-used-words-12725607/ Which just leads to hilarity: https://universalenglish.org/gender-neutral-terms/ -- Ben From tara at taramayastales.com Thu Nov 6 19:11:41 2025 From: tara at taramayastales.com (Tara Maya) Date: Thu, 6 Nov 2025 11:11:41 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: The problem is that ?they? has already become the de facto neutral third person, mirroring the ?thee / thou? being replaced for singular and plural ?you.? If the pattern continues, English will do away with ?me / I? and we will always refer to ourselves with the royal ?we? all the time. As for ourself of course, we already do so. ?? Tara Maya > On Nov 6, 2025, at 11:06, Ben Zaiboc via extropy-chat wrote: > > ?On 06/11/2025 17:40, spike wrote: >> Side note: I also propose returning the pronouns he and him to genderless, restoring she and her to specifically female, and if we ever need a specifically masculine gender (when was the last time you needed that grammatical construct? (never? (same here))) then the pronouns become hemale and hismale and maleman. Extension: terms such as guy and man also revert to genderless. Reason: after all this time, a quarter of a century at least, we never found a good way around the linguistic clumsiness we introduce with genderless pronouns. That he-or-she and the whole they/them business, forget it. To hell with all that, let he and him be genderless again, introduce new pronouns when we need a specifically male pronoun, which is never in that quarter century we have been committing comically absurd and unnecessary violence to language, trying to solve a non-problem. Make androgyny genderless again. > > > Give that man a medal! > > We can re-instate words like 'chairman' instead of 'chairperson' (or even 'chair', which I've heard a lot recently. A chair is something you sit on, for christ's sake!), 'mankind', 'salesman', etc. > > The problem is, we will be accused of 'outdated gender stereotyping', as per this: > > https://metro.co.uk/2020/05/19/un-releases-list-gender-neutral-replacements-commonly-used-words-12725607/ > > Which just leads to hilarity: > https://universalenglish.org/gender-neutral-terms/ > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Thu Nov 6 19:23:10 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Thu, 6 Nov 2025 19:23:10 +0000 Subject: [ExI] Easeplay on'tday akeway ethay anonisercay In-Reply-To: References: Message-ID: <235b18de-d103-4b7a-aca9-d679844d2f06@zaiboc.net> On 06/11/2025 17:40, spike wrote: > Has anyone here ideas for ... varying schools of thought... shuuuuut uuuuuup!! -- Ben From atymes at gmail.com Thu Nov 6 19:40:44 2025 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 6 Nov 2025 14:40:44 -0500 Subject: [ExI] Easeplay on'tday akeway ethay anonisercay In-Reply-To: <235b18de-d103-4b7a-aca9-d679844d2f06@zaiboc.net> References: <235b18de-d103-4b7a-aca9-d679844d2f06@zaiboc.net> Message-ID: On Thu, Nov 6, 2025, 2:24?PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 06/11/2025 17:40, spike wrote: > > Has anyone here ideas for ... 
varying schools of thought... > > shuuuuut uuuuuup!! clenched teeth> > "GenAIBot, generate and play some lullabies." > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 6 21:30:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 13:30:35 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: <005401dc4f64$97265d30$c5731790$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat ... Subject: Re: [ExI] Gender-Neutral Side Note On 06/11/2025 17:40, spike wrote: >>... Side note: I also propose returning the pronouns he and him to genderless, restoring she and her to specifically female, and if we ever need a specifically masculine gender (when was the last time you needed that grammatical construct? (never? (same here))) ... Make androgyny genderless again. >...Give that man a medal! Proposed wording, taking into consideration I identify exclusively as XY: Give that maleman a medal. >....Which just leads to hilarity: https://universalenglish.org/gender-neutral-terms/ -- Ben _______________________________________________ Ben you know me: I do love hilarity in all its marvelous forms. I will accept verbal clumsiness in exchange for hilarity, or even introduce clumsiness and ambiguity, just to achieve hilarity. But the rest of the time, those mercifully rare occasions when sobriety is more appropriate, on those occasions an attitude of somber gravitas can be convincingly faked by me, at least temporarily. Thanks Ben! spike From spike at rainier66.com Thu Nov 6 21:39:30 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 13:39:30 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: <006101dc4f65$d59b7ae0$80d270a0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Tara Maya via extropy-chat Sent: Thursday, 6 November, 2025 11:12 AM To: ExI chat list Cc: Tara Maya Subject: Re: [ExI] Gender-Neutral Side Note The problem is that ?they? has already become the de facto neutral third person, mirroring the ?thee / thou? being replaced for singular and plural ?you.? If the pattern continues, English will do away with ?me / I? and we will always refer to ourselves with the royal ?we? all the time. As for ourself of course, we already do so. ?? Tara Maya Tara, excellent, you are the kind of person I was hoping to respond: those who (I am guessing) are specifically XX. Note I am making exactly zero assumptions or inquiries beyond the perfectly unambiguous configuration of your chromosomes, something that can be objectively verified or refuted with a microscope. I am a big fan of objective truth. If we revert to (or return to) him and he as neo-genderless, winning back the very useful singular/plural indicator on them and they, do you feel excluded in any way? I am guessing you don't feel that way at all, that you understand why I am suggesting de-masculinizing men and man, he and him, which refeminizes her and she in a way. It is not exclusionary, it is inclusionary. It is a way to clean up an intentionally-introduced language flaw that just didn't work out all that well. Tara, we still love XX people, we men and malemen are pleased you haven't left the forum. 
spike > On Nov 6, 2025, at 11:06, Ben Zaiboc via extropy-chat wrote: > > ?On 06/11/2025 17:40, spike wrote: >> Side note: I also propose returning the pronouns he and him to genderless... From spike at rainier66.com Thu Nov 6 21:41:19 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 13:41:19 -0800 Subject: [ExI] Easeplay on'tday akeway ethay anonisercay In-Reply-To: <235b18de-d103-4b7a-aca9-d679844d2f06@zaiboc.net> References: <235b18de-d103-4b7a-aca9-d679844d2f06@zaiboc.net> Message-ID: <006201dc4f66$16fcd880$44f68980$@rainier66.com> ...> On Behalf Of Ben Zaiboc via extropy-chat Cc: Ben Zaiboc Subject: [ExI] Easeplay on'tday akeway ethay anonisercay On 06/11/2025 17:40, spike wrote: > Has anyone here ideas for ... varying schools of thought... shuuuuut uuuuuup!! -- Ben _______________________________________________ OK, clenched teeth it is: {8^E Why the teeth business please Ben? spike From snapbag at proton.me Fri Nov 7 03:15:45 2025 From: snapbag at proton.me (Dave S) Date: Fri, 07 Nov 2025 03:15:45 +0000 Subject: [ExI] Eliezer Yudkowsky on FB about why we're all doomed In-Reply-To: References: Message-ID: <04hEhTGiheUPDOEtFtRTxhJzt2tkuxZLBDtDT7h9ZGO5manYa-_njapnS_QNw-Jtm4nginjY19D1GoY28nsv_EugNisTNja7iHYgmDqyJQ0=@proton.me> On Sunday, November 2nd, 2025 at 2:31 PM, Natasha natashavita-more.com via extropy-chat wrote: > Is there a date on this early post, Dave? That was a FB post: https://www.facebook.com/yudkowsky/posts/10159653334879228/ -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 06:17:15 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 22:17:15 -0800 Subject: [ExI] sure win Message-ID: <002001dc4fae$2a2d06d0$7e871470$@rainier66.com> I bought 25 shares of government shutdown lasting past 20 Nov. If the former speaker of the house, the late Rep. Pelosi is telling the truth, the shutdown can never end. Reason: it requires congressional vote of 60 senators. Now we hear from her there are no senators: https://www.instagram.com/reels/DQrW-c6j2vl/ Without senators, there can be no budget. According to my piece of paper, POTUS doesn't have the authority on budget considerations, only congress does. But if congress has been abolished, then so has all federal government spending. I didn't realize POTUS has the authority to abolish congress. Something seems a bit off here. I just checked my piece of paper: POTUS doesn't have the authority to abolish congress. I believe my piece of paper a lot more than I believe the late Mrs. Pelosi. Now I don't know if my 25 shares will pay. Advice welcome. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 06:51:39 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 6 Nov 2025 22:51:39 -0800 Subject: [ExI] singularity terminology proposal In-Reply-To: <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> Message-ID: <005301dc4fb2$f82ec560$e88c5020$@rainier66.com> From: spike at rainier66.com ? >?I propose some terminology ? Imagine someone who would bet the Singularity will happen in the next four years, starting now. Let us call him S-alpha. 
One who would bet in the next eight years but more than four, let us call that guy an S-beta, and so on... spike I am pondering how to introduce an orthogonal axis to the proposed terminology, with the additional axis denoting a metric of internal self-contradiction. For instance: adding power generation capacity is proposed by the S-alpha in a time frame which cannot be completed before the Singularity from the S-alpha's own point of view. The S-lambda thinks there is plenty of time to add capacity, but doesn't see the need for it. Well of course there is need for it, sheesh. Power is good, all of it. Being between those absurd extremes, I, as an S-delta or S-epsilon, avoid these paradoxes and introduce my own fresh new paradox: power generation capacity must be added in such a way as to maximize my profit by the time the Singularity makes profit irrelevant, because of all the unimaginable wealth the Singularity creates and distributes to the fortunate few whom it decides not to slay. I am working on that. Stand by. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Fri Nov 7 08:13:30 2025 From: scerir at libero.it (scerir) Date: Fri, 7 Nov 2025 09:13:30 +0100 (CET) Subject: [ExI] what if... the singularity isn't near? In-Reply-To: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> Message-ID: <881112021.1739103.1762503210416@mail1.libero.it> Maybe of interest! https://www.dallasfed.org/research/economics/2025/0624 > On 05/11/2025 03:31 CET, spike jones via extropy-chat wrote: > > > > OK sure, this site is all about the singularity being near, and OK sure, I am a big heretic for even entertaining the notion it might not be. > > > > But what if... the Singularity is a coupla decades off still, and Kurzweil was mostly right, but off a little, and the Singularity is still coming but not right away? > > > > Then what? Do I get to sell Singularity insurance? > > > > Because it appears to me that what we are calling AI might be a kind of false alarm: a big advance in smart search might make us think the Singularity is nearer than it really is. > > > > Then what? What if... intelligence really is substrate dependent for reasons we don't currently understand, or the Singularity depends on technology we don't yet have. > > > > Then we would predict the Singularity sooner than it is, ja? > > > > Singularity thinkers come on, help me Obi wans, alla yas: might we be in a situation where we are fooling ourselves? OK then what please? > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Nov 7 10:15:12 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 7 Nov 2025 02:15:12 -0800 Subject: [ExI] sure win In-Reply-To: <002001dc4fae$2a2d06d0$7e871470$@rainier66.com> References: <002001dc4fae$2a2d06d0$7e871470$@rainier66.com> Message-ID: "If the former speaker of the house, the late Rep. Pelosi" A blunder like that wrecks credibility. Nancy was saying good things about the late Dick Cheney in the news today. Keith On Thu, Nov 6, 2025 at 10:18 PM spike jones via extropy-chat wrote: > > > > I bought 25 shares of government shutdown lasting past 20 Nov. If the former speaker of the house, the late Rep.
Pelosi is telling the truth, the shutdown can never end. Reason: it requires congressional vote of 60 senators. Now we hear from her there are no senators: > > > > https://www.instagram.com/reels/DQrW-c6j2vl/ > > > > Without senators, there can be no budget. According to my piece of paper, POTUS doesn?t have the authority on budget considerations, only congress does. But if congress has been abolished, then so has all federal government spending. I didn?t realize POTUS has the authority to abolish congress. > > > > Something seems a bit off here. I just checked my piece of paper: POTUS doesn?t have the authority to abolish congress. I believe my piece of paper a lot more than I believe the late Mrs. Pelosi. > > > > Now I don?t know if my 25 shares will pay. Advice welcome. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Fri Nov 7 11:45:45 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 06:45:45 -0500 Subject: [ExI] When will the singularity happen? Message-ID: *There's a lot of disagreement about when the singularity will happen so I did a little research to find some quotes from people who know the most about AI think it will happen. If they're right then Ray Kurzweil's prediction of 2039 (recently modified from his previous prediction of 2045) is still way too conservative. * *==* *Sam Altman, the head of OpenAI * *?Our latest model feels smarter than me in almost every way??* *"In some big sense, ChatGPT is already more powerful than any human who has ever lived. We may have already passed the point where artificial intelligence surpasses human intelligence"* *Dario Amodei, the head of Anthropic:* *?It is my guess that by 2026 or 2027, we will have A.I. systems that are broadly better than all humans at almost all things.?* *?Artificial intelligence (AI) is likely to be smarter than most Nobel Prize winners before the end of this decade.?* *Elon Musk, you may have heard of him: * *?If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it?s probably next year, within two years.?* *?My guess is that we?ll have AI that is smarter than any one human probably around the end of next year.?* *?I always thought AI was going to be way smarter than humans and an existential risk. And that's turning out to be true.?* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 7 11:55:59 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 06:55:59 -0500 Subject: [ExI] are we there yet? In-Reply-To: <00d201dc4e6b$703b6480$50b22d80$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <00d201dc4e6b$703b6480$50b22d80$@rainier66.com> Message-ID: On Wed, Nov 5, 2025 at 10:49?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: ehhhhh? no. Not yet. > > > > https://twitter.com/i/status/1984026877057733066 > *Manual dexterity is the one area where humans still have a commanding lead over AI. I don't think that lead will last much longer. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Fri Nov 7 12:28:33 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 07:28:33 -0500 Subject: Re: [ExI] Gender-Neutral Side Note In-Reply-To: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: On 06/11/2025 17:40, spike wrote: * > Side note: I also propose returning the pronouns he and him to > genderless, restoring she and her to specifically female, and if we ever > need a specifically masculine gender (when was the last time you needed > that grammatical construct? * *How about "he is the father", or "he has a beard and a large penis" ?* *Since we're on the subject of pronoun policing, what pronoun should we use when referring to what an AI said or did? * *And if Nanotechnology makes two copies of you as precisely as Heisenberg allows, one in London and the other in Paris, and then destroys the original, and if just before that happens you ask "what city will I end up in?" what is the referent of the personal pronoun in that question? * *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 12:31:42 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 04:31:42 -0800 Subject: Re: [ExI] When will the singularity happen? In-Reply-To: References: Message-ID: <005501dc4fe2$7942dae0$6bc890a0$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Subject: [ExI] When will the singularity happen? There's a lot of disagreement about when the singularity will happen so I did a little research to find some quotes ... And that's turning out to be true... John K Clark OK good, this gives us a basis for nomenclature for a number of concepts I am pondering, such as a direction axis and a velocity axis. If one is an S-alpha 1 and a year from now thinks the singularity is still one year off, then the predicted date for the singularity (from that person's point of view) we can call calendarwise. If a person is an S-alpha 3 and a year from now is an S-alpha 1, then that person predicts a singularity which is becoming sooner. Let us call that person counter-calendarwise. The alpha 3 who is an alpha 2 a year later is a... calendarless? Calendarity? With a direction axis we need a velocity axis. If one is a beta 4, then a year later is a beta 2, the predicted date moved one year countercalendar in one year, so the velocity is 1. If that guy is a beta 1 a year later, the velocity is 2. My spinny dials are still spinning on this. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 12:38:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 04:38:35 -0800 Subject: Re: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: <005a01dc4fe3$6fda72f0$4f8f58d0$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Subject: Re: [ExI] Gender-Neutral Side Note On 06/11/2025 17:40, spike wrote: > Side note: I also propose returning the pronouns he and him to genderless, restoring she and her to specifically female, and if we ever need a specifically masculine gender (when was the last time you needed that grammatical construct? How about "he is the father", or "he has a beard and a large penis" ? That's fine, no problem.
In that sentence above, we still wouldn?t know the gender on that guy with the beard and the XY genitalia. We have a pretty good hint on the configuration of his chromosomes. So really all I am proposing is simplifying language. I am not proposing any change in concept, only in language. >?Since we're on the subject of pronoun policing, what pronoun should we use when referring to what an AI said or did? Since he and him are genderless, let all robots be he and him. >?And if Nanotechnology makes two copies of you as precisely as Heisenberg allows, one in London and the other in Paris? what is the referent of the personal pronoun in that question? John K Clark I have never trusted Heisenberg, ever since that heavy water business. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 7 13:05:22 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 08:05:22 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: On Thu, Nov 6, 2025 at 2:06?PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > *We can re-instate words like 'chairman' instead of 'chairperson' (oreven > 'chair', which I've heard a lot recently. A chair is something yousit on, > for christ's sake!),* *And a chairman is a man who is sitting on a chair, a bum on the subway for instance. * *> 'mankind',* *That is a word that can be easily retired and replaced with "humanity" or "the human race". * *Another word that I think should be retired is "Indian" when referring to Native Americans, it's an insult to Indians and an insult to Native Americans and it's just dumb. We only got that word because Columbus was so stupid that when he landed in the Bahamas he thought he was in India. He knew how far he had sailed, and even the ancient Greeks had a good estimate about how big the Earth was, so he should've known that he couldn't possibly be in India. Columbus even made sure that one member of his crew could speak Arabic because he figured that Indians speak Chinese (they don't) and Chinese is very similar to Arabic (it isn't). * *John k Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 7 13:15:48 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 08:15:48 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: On 06/11/2025 17:40, spike wrote: * > I also propose returning the pronouns he and him to genderless,* *The thing is they were never really genderless, if they were then you wouldn't find anything wrong with the following sentence: * *"When a mammal feeds his young he lets his offspring suck milk from his breast" * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 7 13:42:42 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 08:42:42 -0500 Subject: [ExI] CRISPR babies Message-ID: *The following article is from the November 3, 2025 issue of the journal Nature:* *Biotech company says the time has come to consider CRISPR babies. Do scientists agree?* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
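A side note on spike's "direction axis and velocity axis" post in the "When will the singularity happen?" thread above: here is a minimal sketch of one way to make that arithmetic concrete. The assumptions are mine, not spike's: the numeric suffix on an S-class is read as the poster's current years-to-Singularity estimate, the Greek-letter bands step in four-year increments (alpha for up to four years, beta for more than four and up to eight, and so on), and "velocity" counts how many years earlier the predicted absolute date has moved per elapsed calendar year. The helper names s_class and velocity are hypothetical, introduced only for illustration.

# Hypothetical Python sketch of the S-class bands and "countercalendar velocity".
# velocity > 0: the predicted date is moving earlier (countercalendar);
# velocity < 0: it is sliding later along with the calendar (calendarwise);
# velocity = 0: the predicted absolute date is unchanged (calendarless).

GREEK = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta",
         "eta", "theta", "iota", "kappa", "lambda"]

def s_class(years_out: float) -> str:
    """Map a years-to-Singularity estimate onto an S-<greek> band (4-year bands)."""
    band = min(int(max(years_out - 1e-9, 0.0) // 4), len(GREEK) - 1)
    return f"S-{GREEK[band]}"

def velocity(h0: float, h1: float, dt: float = 1.0) -> float:
    """Years of countercalendar movement of the predicted date per elapsed year.
    h0 and h1 are the years-out estimates at the start and end of an interval dt."""
    return (h0 - (dt + h1)) / dt

if __name__ == "__main__":
    print(s_class(4), s_class(6))  # S-alpha S-beta
    print(velocity(4, 2))          # 1.0  -> "the velocity is 1"
    print(velocity(4, 1))          # 2.0  -> "the velocity is 2"
    print(velocity(1, 1))          # -1.0 -> calendarwise
    print(velocity(3, 2))          # 0.0  -> calendarless

Under that reading, spike's beta-4-to-beta-2 case gives a velocity of 1, the beta-4-to-beta-1 case gives 2, the S-alpha 1 who still says "one year off" a year later scores -1 (calendarwise), and the alpha-3-to-alpha-2 case scores 0 (calendarless).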
URL: From johnkclark at gmail.com Fri Nov 7 14:01:45 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 7 Nov 2025 09:01:45 -0500 Subject: [ExI] Tariffs and the prediction markets Message-ID: *After the oral arguments before the Supreme Court, where He Who Must Not Be Named's attorneys tried to argue that his idiotic tariffs were constitutional, the odds that he would win his case dropped like a rock on the prediction market; it now says there is only a 25% chance he will win. * [image: 239AD27A-5B75-4B2A-BE4B-7E65C25EB332.jpeg] *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 239AD27A-5B75-4B2A-BE4B-7E65C25EB332.jpeg Type: image/jpeg Size: 55213 bytes Desc: not available URL: From spike at rainier66.com Fri Nov 7 15:00:39 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 07:00:39 -0800 Subject: Re: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Sent: Friday, 7 November, 2025 5:05 AM To: ExI chat list Cc: John Clark ; Ben Zaiboc Subject: Re: [ExI] Gender-Neutral Side Note On Thu, Nov 6, 2025 at 2:06 PM Ben Zaiboc via extropy-chat > wrote: We can re-instate words like 'chairman' instead of 'chairperson' (or even 'chair', which I've heard a lot recently. A chair is something you sit on, for christ's sake!), >...And a chairman is a man who is sitting on a chair, a bum on the subway for instance... What if the subway bum has XX chromosomes? >...Another word that I think should be retired is "Indian" when referring to Native Americans, it's an insult to Indians and an insult to Native Americans and it's just dumb. We only got that word because Columbus was so stupid that when he landed in the Bahamas he thought he was in India. ...John k Clark So we are told. However, the region where Columbus landed, the Caribbean Sea, was called the West Indies at the time, and still is sometimes referred to that way: https://en.wikipedia.org/wiki/West_Indies Columbus had no way of knowing about the Pacific Ocean, but he was a sailor, so he knew approximately the circumference of the globe. What if there had been no Pacific, but land all the way across there, with people from the West Indies to the East Indies? For all Columbus knew, or anyone in Europe knew, there was a big-ass continent of Indie, the east side of which one could reach by sailing the Atlantic and Indian Oceans. If Indie were a continent spanning or filling in the Pacific basin, covering Asia, the Pacific and the Americas, then the people Columbus encountered would be the inhabitants of Indie, which would make them Indians. In that scenario, Columbus wasn't stupid. He was the NASA of his times. Furthermore, the people he called Indians likely didn't know about the Pacific Ocean either. How would they know? They didn't sail much. They might have thought one could walk west indefinitely. If you go back a few thousand years, they would have been right: you could walk across the Bering Strait, which was frozen. They wouldn't know they were not Indians. Indie would be one hell of a continent, ja? I don't think Columbus was stupid. Sailing west across the Atlantic was a cool achievement for his time. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Fri Nov 7 15:06:10 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 07:06:10 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> Message-ID: <00d301dc4ff8$0d761b40$286251c0$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Sent: Friday, 7 November, 2025 5:16 AM To: ExI chat list Cc: John Clark ; Ben Zaiboc Subject: Re: [ExI] Gender-Neutral Side Note On 06/11/2025 17:40, spike wrote: > I also propose returning the pronouns he and him to genderless, The thing is they were never really genderless, if they were then you wouldn't find anything wrong with the following sentence: "When a mammal feeds his young he lets his offspring suck milk from his breast" John K Clark If we take on the paradigm that gender is all in the mind, then there is not a thing wrong with that sentence. Everyone can win here. Let gender be in the mind but sex be objective and verifiable by microscope with one drop of spit or blood. Either those chromosomes are XX or XY, with the trisomy cases having a third category. Then let sports be segregated by sex but not by gender, which is ambiguous. Let Title IX funding be awarded not by gender but by sex. Not only has gender become ambiguous, it can be switched at will, without surgery or medications, as often as one wants. Sex is immutable still. Everyone wins. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 15:26:00 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 07:26:00 -0800 Subject: [ExI] Tariffs and the prediction markets In-Reply-To: References: Message-ID: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Subject: [ExI] Tariffs and the prediction markets >?After the oral arguments before the Supreme Court where He Who Must Not Be Named attorneys tried to argue that his idiotic tariffs were constitutional ?John K Clark I read my piece of paper carefully (as is my wont.) I don?t see anything in there which would suggest POTUS has the authority to tariff foreign trade or anything else. From what I see in there, congress must do that. A complication arises now, with a former representative from the former house of former representatives claiming POTUS abolished the former congress. It isn?t clear if the former congress has been notified they no longer exist, or if that too must wait for a SCOTUS decision on whether congress still exists. If a bet is arranged, I will buy shares predicting congress will pass a law claiming congress still exists. If a lawsuit results, I will buy shares predicting SCOTUS will find that law constitutional, based on the argument that congress existed when it passed the law that it existed, then passed the law by 2/3 majority vote overturning a POTUS veto. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 19:39:37 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 11:39:37 -0800 Subject: [ExI] i would do the same i suppose In-Reply-To: <009901dc501e$1b59e770$520db650$@rainier66.com> References: <009901dc501e$1b59e770$520db650$@rainier66.com> Message-ID: <00a101dc501e$41085ba0$c31912e0$@rainier66.com> OK not really: Sending planes to Duluth sounds like fun and all that. But I don't understand. 
Can anyone here explain to me why air traffic control is still being done by humans? It seems so perfectly obvious that should be done entirely by machines. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22243 bytes Desc: not available URL: From pharos at gmail.com Fri Nov 7 20:27:15 2025 From: pharos at gmail.com (BillK) Date: Fri, 7 Nov 2025 20:27:15 +0000 Subject: [ExI] i would do the same i suppose In-Reply-To: <00a101dc501e$41085ba0$c31912e0$@rainier66.com> References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> Message-ID: On Fri, 7 Nov 2025 at 19:41, spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > Can anyone here explain to me why air traffic control is still being done > by humans? It seems so perfectly obvious that should be done entirely by > machines. > > spike > _______________________________________________ > I am currently trying out a new open-source Kimi K2 AI based in Beijing, China. Reported to be better than CHATGPT-5. So I asked it to comment. I thought the answer was pretty good. Click the link to view the full report from Kimi AI Assistant < https://www.kimi.com/share/19a5feef-51a2-88b5-8000-0000d7de198c> The final paragraph was - The fundamental reality is that ATC safety hinges on acknowledging human limitations while leveraging human strengths?adaptability, judgment, and communication?that remain irreplaceable in managing the infinite variability of real-world aviation. ----------------- BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Nov 7 20:35:11 2025 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 7 Nov 2025 15:35:11 -0500 Subject: [ExI] i would do the same i suppose In-Reply-To: <00a101dc501e$41085ba0$c31912e0$@rainier66.com> References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> Message-ID: On Fri, Nov 7, 2025, 2:41?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Can anyone here explain to me why air traffic control is still being done > by humans? It seems so perfectly obvious that should be done entirely by > machines. > Lack of trust, mainly. Currently a justified lack, so long as the proposed machines continue to hallucinate and otherwise demonstrably perform worse than humans. Some day there will likely be machines that can do this job in real world conditions better than humans. That day does not appear to be today. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 7 20:52:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 12:52:35 -0800 Subject: [ExI] i would do the same i suppose In-Reply-To: References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> Message-ID: <00d201dc5028$72a2ad50$57e807f0$@rainier66.com> From: extropy-chat On Behalf Of BillK via extropy-chat >>?Can anyone here explain to me why air traffic control is still being done by humans? It seems so perfectly obvious that should be done entirely by machines. 
spike _______________________________________________ >...the fundamental reality is that ATC safety hinges on acknowledging human limitations while leveraging human strengths (adaptability, judgment, and communication) that remain irreplaceable in managing the infinite variability of real-world aviation. ----------------- BillK Ja, phasing it in, they tell us. One wonders why it is taking so long. This is a very clearly defined problem, with lots of data available. This is ideal for machines, and far from ideal for humans. This changeover shoulda been done a good coupla decades ago, but since it is happening now, we have good human voice synthesis available. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Fri Nov 7 22:18:25 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Fri, 7 Nov 2025 22:18:25 +0000 Subject: [ExI] Zuboff's morality In-Reply-To: References: Message-ID: On 07/11/2025 06:17, Jason Resch wrote: > On Wed, Nov 5, 2025, 9:24 AM Ben Zaiboc via extropy-chat > wrote: > > Ok, I've had a look at his paper, and made a few substitutions to > make > it easier to understand. Let me know if you object to any of these: > > 'desire' = intention > 'belief' = anticipated result > > > I'm fine with these. > > 'correctable' = changeable > > > Okay. But I'll note this word loses the connotation of "an improvement." Well, the reason for making that substitution is to lose the implication that the original thing /needs/ to be improved. > > 'real' = preferred > > > I can go along with this, but keep in mind they would be the > actual/genuine preferences in light of accurate information of concern. You mean /theoretical/ preferences, in light of accurate information of concern. I don't see how this can work. It would mean people often don't actually know what their preferences are. So how could they act on them? (or anticipate an outcome from them?). Using the word "actual" to mean "theoretical" rather confuses things, don't you think? > > 'perfect grasp' = foreknowledge > > > Just one thing to add: in the paper, a perfect grasp embodies not only > foreknowledge (e.g. perfect knowledge of future states (think > *depth*), but also perfect lateral knowledge concerning the > perspectives and impacts and effects on other beings (e.g. think breadth). > > So the perfect grasp represents a near omniscient understanding of all > the future consequences for all involved and affected by a particular > action, including those who don't and won't exist. So we could use 'omniscience' instead of 'foreknowledge'. Ok. Er, consequences for those who don't and won't exist?? That kind of cancels itself out, doesn't it? There can't, by definition, be any such consequences. > > The relevant passages now read, with my comments in brackets: > > "Imagine that I have before me on a table a cup containing a thick, > brown, steaming liquid. > > I want to drink that stuff because I think it is hot chocolate. > But it > is actually hot mud. Well, in that case I don't really intend to > drink > it. And neither is it in my self-interest to do so. > > This example brings out the way in which intentions depend on > anticipated results. I only ever intend to do a thing because of > what I > anticipate the result to be." > > > Not bad, I can follow along with that substitution. > > > (this is not true. It's not uncommon to have an intention to do > something in order to /find out/ what the result will be rather > than in > anticipation of an expected result.
> > > I don't think this escapes the statement. > > Your example asks: why would a scientist ever desire (intend) to test > a hypothesis when he doesn't know the outcome? Not quite. I'm saying that, in contrast to Zuboff's statement, intentions sometimes /don't/ depend on anticipated results, they are intended to /discover/ the results instead (you don't have to be a scientist to do this. Non-scientists do it all the time, particularly young ones). > > My answer to this is that for the scientist, he believes (anticipates) > that the outcome of the experiment will provide new information for > the scientist. Certainly, if the scientist did not believe > (anticipate) any possibility of learning anything from the experiment, > he would not bother performing it. You're making 'the result' include 'finding out the result'. So now we have two results, an actual result, and a meta-result. That could apply to all intentions. I expect something to happen when I do x, and I also expect to find out if it actually happens. So you could say "I only ever intend to do a thing because I anticipate finding out if I'm right about what the result will be", and if you don't have an expectation, it's just 'to find out the result'. A problem that this presents to Zuboff's thesis is that this is not an anticipated result that can be changed. It applies to all intentions (except the case when someone decides to do something 'just because'. They aren't thinking about any result, anticipated or discovered. Or, I suppose, the case where something is purely a habit). > > It would be more accurate to say > that intentions CAN be based on anticipated results, and that you > MAY do > a thing because of the anticipated result. In Zuboff's original > language, you would say 'to have a desire to form a belief about > something'. The 'desire' precedes the 'belief', rather than the other > way around, in this case. When A can cause B or B can cause A, you > can't > draw the conclusion that 'A depends on B') > > > But to use your language, Zuboff is saying: intentions depend on > anticipated results. > > I still think that is true, given my scientist example. > > And I don't see how it makes sense to say the reverse, that > "anticipated results depend on intentions" -- perhaps only in the > wishful thinking way, but not in any rational way (that I can see), > but perhaps you have an example. Are you kidding? There are thousands of examples. I intend to go to my auntie Susan's and anticipate getting a meal of roast chicken because that's what she usually cooks. If I change my intention and go to a restaurant instead, my anticipation could well change to getting a steak. My intention could have changed due to any number of things, even tossing a coin. Maybe I can't decide, and say "Head's it's Susan's, Tails it's Restaurant". The expectation changes accordingly, as a result of the changed intention. Random reasons determine or affect people's intentions all the time, and their expectations follow. > > "And since anticipated results are changeable, so are intentions." > > (this implies that intentions are changeable /because/ anticipated > results can change. It's possible to change your mind about the > anticipated results of an intended action, or to change your intended > action and anticipate the same result. 
It would be more accurate > to say > that both anticipations and intentions are changeable, but a > change in > one doesn't necessarily enforce a change in the other) > > > True, not every revelation will justify a change in action or > intention. When playing chess you may find a better move, and change > your action without changing your intention to win. Or you may learn > that if you don't throw the game, the child will abandon chess > altogether, and therefore you may change your intention to win against > the child. > > > "From this observation I arrive at a sweeping principle: My only > preferred intentions are those I would have if I had a > foreknowledge of > everything involved." > > (because of the above, this is a false conclusion) > > > I'm, sorry, which are you referring to when you say "the above"? Could > you better break down for me how you see this argument collapsing? "And since anticipated results are changeable, so are intentions." (this implies that intentions are changeable /because/ anticipated results can change...) A false premise, therefore a false conclusion. (quite apart from "if I had a foreknowledge of everything involved.") > > "If there is any intention I have only because my foreknowledge of > the > outcome is imperfect, then that cannot be among my preferred > intentions." > > (this would rule out any intention to find something out (because > if you > want to find something out, you necessarily don't already know the > answer). > > > I think I addressed this with my scientist example. Yes, by introducing a 'meta-result': that you find out something. But even this is not guaranteed, so the foreknowledge is not perfect. So the conclusion now becomes that NO intention can be preferred, regardless of the outcome. > > > I don't know about anyone else, but a lot of my 'preferred > intentions' have the aim of finding things out that I don't already > know. If you already knew, there would be no need to have an > intention > to find it out) > > > These represent intentions to learn. > > > "And gratifying that intention cannot be in my preferred > self-interest. > The principle going along with this that governs my actions must > tell me > to act, as far as possible, as I would want myself to be acting > with a > foreknowledge of everything involved." > > (it should be obvious now why this is nonsense, but nevertheless, > let's > follow this line of thought through (italics are mine):) > > > It's not obvious to me yet, but I will follow along below. > > > "This foreknowledge that defines my preferred intentions and my best > course of action," /is of course impossible. He goes on to explain > why/. > "It would have to embrace not only the full experience, from > behind the > eyes (or other sensors), of every sentient being but also every > potential development of experience. It would include within it, > all the > motivations of all of the various systems of intention" /which would > simply conflict with each other. The overall result would be chaos > and > paralysis (in case this is not obvious, consider combining the > motivations of a religious fundamentalist with those of a > scientifically-literate materialist. These are conflicting value > systems. Objective facts can't reconcile them. > > > It is knowledge of the subjective feeling of what it is like to be all > those concerned, what Zuboff describes as "the full experience, from > behind the eyes, of every sentient being" that provides such a resolution. 
Such a thing doesn't, and can't, even in principle, exist. > > Think of it like this: in your own life there is a version of you that > goes to work does, chores, prepares meals, which doesn't enjoy those > tasks. But also in your life there is a version of you that goes on > vacation and enjoys recreation and leisure, and enjoying the meals > your other self prepared. Why are these different versions? They are both me. There's only one version of me. That may change in the future, but that's a different matter. > > You have knowledge of both of those states of existence, and that puts > you in a position to answer whether or not your life is a life worth > living. Worth living according to who? Me? I should think that I'd think my life worth living regardless, if that was something I'd be inclined to ponder. > ?And also it enables you to answer questions about what changes, and > trade offs are worth it. E.g. should the toiling-self take on extra > hours so that the leisure-self can enjoy a nicer vacation. My understanding of my own life, from my own viewpoint enables me to, etc. Well, I'd hardly call that a revelation. It's true of everyone, and hardly worth mentioning. > > From the vantage point of the perfect grasp, This is my main problem with this whole thing. /There is no such thing as a 'perfect grasp' (omniscience)/. There can't be such a thing, or everything we know about the world is wrong, and I'm pretty certain that that's not true. We would soon realise. > one could make such trade off decisions between different individuals, > because in the same way you understand what it's like to work and be > on vacation, the vantage point of the perfect grasp understands what > it's like to be the scientific materialist *and* the religious > fundamentalist, and so any actions that would affect their lives, > negatively or positively, this perfect grasp could decide an > appropriate trade offs just as you make such trade off decisions > within your own life. One could wave a magic wand and utter "resolvio!" and all problems would be solved and everybody would be friends and we'd live happily ever after. Great. If such a thing was possible. It's not. Sometimes there is no trade-off. I really don't get why anyone can take this seriously. "The perfect grasp understands..." is meaningless, because there is no such thing as a perfect grasp. > > Making such trade off decisions is what is meant by the reconciliation > of all systems of desire. Think of it like all conscious perspectives > are all part of a single life, and how one super intelligent being > would optimize that life (which embodies and includes all those many > perspectives).? That optimization, is what Zuboff contends is the aim > of morality. No, not 'superintelligent'. That's something we think is possible. You mean 'Omniscient'. And everyone except religious zealots knows that there's no such thing. There can't be such a thing. Physics forbids it. Common sense forbids it. Logic forbids it. Zuboffs theory of morality requires it. So Zuboffs theory of morality is, literally, forbidden by reality. He is claiming that something utterly impossible is the 'aim of morality'. "Think of it like all conscious perspectives are all part of a single life" The only 'optimisation' that would be possible for such a life would be to cut it short as soon as possible. As I said before, it would be chaos and paralysis. It would be the ultimate psychosis. Fortunately, it's not possible. 
> > > 'Perfect foreknowledge' > can't do a thing when subjective values are involved. Let's say > that you > have the opportunity to punish/forgive someone who has stolen > something > from you. The values of one person (that you have, according to this > theory, magical access to) dictate that the thief should be punished > regardless of the circumstances of the crime, because 'STEALING IS > WRONG'. You also have access to the values that tell you that > stealing > is often wrong, but can be forgiven under certain circumstances. > How can > there be any reconciliation of these two views? What facts can > help?)/. > > > I think my explanation above is sufficient but if not let me know. > > > So even if there was any possibility of this, it still couldn't > lead to > any rational definition of morality. The requirement to know all > possible points of view, and all outcomes of all actions are > impossible > enough, but add on top the requirement to /reconcile/ all points of > view? And only then can you figure out what's good and what's bad? > > > Moral decisions are hard for exactly this reason. They involve > weighing consequences to subjective states to which most parties have > no access to. I think we should be upfront with acknowledging that > difficulty as it suggests paths for resolving age old moral questions. > > Consider for example whether a law should be passed to increase the > square footage allotted to egg laying hens. To answer the question > requires understanding the stress and emotional states of the chickens > with varying levels of room, and that has to be balanced again the > correspondingly higher price of eggs, the unaffordability, possible > hunger or nutritional drficicinies or worse health for those who can't > afford eggs at those prices, etc. > > None of these are easy problems to solve, but with this definition, it > makes it clearer how to organize a strategy to answer the question, > and balance the concerns of all involved (to "reconcile all the > systems of desire"). Really? So how? How do you organise a strategy to answer that question? How does that work? I'd say this is quite easy to solve, with no omniscience required. Such a law would be immoral because it would tend to compel farmers to increase the price of eggs, and in practice would make more farmers criminals because some of them would realise that the legal penalties would be insignificant to the economic ones, and if they are going to break the law, they may as well make the maximum profit from it, so some of them would be even less likely to take good care of their hens than at present. Realising that, the authorities would have to put more resources into policing this law, taking them away from more useful things like solving murders and catching thieves. Sounds like a lose-lose situation. If the basis of your morality is "The greatest good for the greatest number of people", for example, this law fails in a big way. I suspect that the only way it could be regarded as morally good is if your morality gives higher priority to the welfare of chickens than people. But I have to make a disclaimer: I know next to nothing about chicken farming, and may have it all woefully wrong. So who knows? Maybe an omniscient being would see that a healthy chicken's eggs would make a crucial difference in the brain development of a person who eventually invents or discovers something fantastic that benefits all mankind forever after, and deem this law morally good. 
-- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Fri Nov 7 22:40:29 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Fri, 7 Nov 2025 22:40:29 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: On 07/11/2025 20:52, spike wrote: > *What if the subway bum has XX chromosomes? *** Lady Chairman. That used to be what we called female chairmen, before PC nonsense infected everyone. I've said this a million times before (don't exaggerate, Ben): Discrimination is not words, it's deeds. Policing your language in the expectation that it will solve the woes of the world is not only naive and ineffective, it drains the joy out of life. It puts stand-up comedians out of work, deprives you of proper Man-Sized tissues, censors literature, makes for bland jam-jars, boring kid's toys, gets professors dismissed for doing their jobs properly, stigmatises irish/mother-in-law/etc. jokes, ignites stupid and vicious arguments between people ... The depressing litany goes on and on. At least we can still poke fun at gingers, eh? Small mercies... -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Nov 8 00:19:48 2025 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 7 Nov 2025 19:19:48 -0500 Subject: [ExI] Zuboff's morality In-Reply-To: References: Message-ID: On Fri, Nov 7, 2025, 5:19 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 07/11/2025 06:17, Jason Resch wrote: > > On Wed, Nov 5, 2025, 9:24 AM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Ok, I've had a look at his paper, and made a few substitutions to make >> it easier to understand. Let me know if you object to any of these: >> >> 'desire' = intention >> 'belief' = anticipated result >> > > I'm fine with these. > > 'correctable' = changeable >> > > Okay. But I'll note this word loses the connotation of "an improvement." > > > > Well, the reason for making that substitution is to lose the implication > that the original thing /needs/ to be improved. > Sure. If the action is already morally optimal, then it can't be corrected. > > > 'real' = preferred >> > > I can go along with this, but keep in mind they would be the > actual/genuine preferences in light of accurate information of concern. > > > > You mean /theoretical/ preferences, in light of accurate information of > concern. I don't see how this can work. It would mean people often don't > actually know what their preferences are. > They often don't, as the parable of the Chinese farmer highlights: "Once upon a time there was a Chinese farmer whose horse ran away. That evening, all of his neighbors came around to commiserate. They said, ?We are so sorry to hear your horse has run away. This is most unfortunate.? The farmer said, ?Maybe.? The next day the horse came back bringing seven wild horses with it, and in the evening everybody came back and said, ?Oh, isn?t that lucky. What a great turn of events. You now have eight horses!? The farmer again said, ?Maybe.? The following day his son tried to break one of the horses, and while riding it, he was thrown and broke his leg. The neighbors then said, ?Oh dear, that?s too bad,? and the farmer responded, ?Maybe.? The next day the conscription officers came around to conscript people into the army, and they rejected his son because he had a broken leg. Again all the neighbors came around and said, ?Isn?t that great!? 
Again, he said, ?Maybe.? The whole process of nature is an integrated process of immense complexity, and it?s really impossible to tell whether anything that happens in it is good or bad ? because you never know what will be the consequence of the misfortune; or, you never know what will be the consequences of good fortune." ? Alan Watts This illustrates the dependence on knowledge for distinguishing good from bad. And knowing good from bad is required to what what outcomes we prefer. The farmer acknowledges his imperfect knowledge, which is why he always answers "Maybe." So how could they act on them? (or anticipate an outcome from them?). Using > the word "actual" to mean "theoretical" rather confuses things, don't you > think? > As agents operating with constraints in the real world, we have to make our best guess. But we should do so with the understanding that with better knowledge, information, understanding, experience, etc. we can do better. > > > 'perfect grasp' = foreknowledge >> > > Just one thing to add: in the paper, a perfect grasp embodies not only > foreknowledge (e.g. perfect knowledge of future states (think *depth*), but > also perfect lateral knowledge concerning the perspectives and impacts and > effects on other beings (e.g. think breadth). > > So the perfect grasp represents a near omniscient understanding of all the > future consequences for all involved and effected by a particular action, > including those who don't and won't exist. > > > > So we could use 'omniscience' instead of 'foreknowledge'. Ok. > > Er, consequences for those who don't and won't exist?? > > That kind of cancels itself out, doesn't it? There can't, by definition, > be any such consequences. > I've explained multiple times that the paper acknowledges the impossibility of the perfect grasp, and that it explains why that's unimportant to obtain the result the paper presents. If you still think this is impossible, then consider that Alan Turing defined a mathematical concept of computation using a device which is impossible to build in practice. It's the same kind of thing. This is presenting a definition. That it's physically impossible is irrelevant. > > >> The relevant passages now read, with my comments in brackets: >> >> "Imagine that I have before me on a table a cup containing a thick, >> brown, steaming liquid. >> >> I want to drink that stuff because I think it is hot chocolate. But it >> is actually hot mud. Well, in that case I don?t really intend to drink >> it. And neither is it in my self-interest to do so. >> >> This example brings out the way in which intentions depend on >> anticipated results. I only ever intend to do a thing because of what I >> anticipate the result to be." >> > > Not bad, I can follow along with that substitution. > > >> (this is not true. It's not uncommon to have an intention to do >> something in order to /find out/ what the result will be rather than in >> anticipation of an expected result. > > > I don't think this escapes the statement. > > Your example asks: why would a scientist ever desire (intend) to test a > hypothesis when he doesn't know the outcome? > > > > Not quite. I'm saying that, in contrast to Zuboff's statement, intentions > sometimes /don't/ depend on anticipated results, they are intended to > /discover/ the results instead (you don't have to be a scientist to do > this. Non-scientists do it all the time, particularly young ones). > I agree one doesn't have to be a scientist. 
However, I still disagree that your example provides a counter example. > > > My answer to this is that for the scientist, he believes (anticipates) > that the outcome of the experiment will provide new information for the > scientist. Certainly, if the scientist did not believe (anticipate) any > possibility of learning anything from the experiment, he would not bother > performing it. > > > > You're making 'the result' include 'finding out the result'. > So now we have two results, an actual result, and a meta-result. That could > apply to all intentions. > Exactly. I would say all intelligent actions are based on some predicted/anticipated results of the action. I expect something to happen when I do x, and I also expect to find out if > it actually happens. So you could say "I only ever intend to do a thing > because I anticipate finding out if I'm right about what the result will > be", and if you don't have an expectation, it's just 'to find out the > result'. > > A problem that this presents to Zuboff's thesis is that this is not an > anticipated result that can be changed. It applies to all intentions > (except the case when someone decides to do something 'just because'. > Indeed. They aren't thinking about any result, anticipated or discovered. Or, I > suppose, the case where something is purely a habit). > > > > It would be more accurate to say >> that intentions CAN be based on anticipated results, and that you MAY do >> a thing because of the anticipated result. In Zuboff's original >> language, you would say 'to have a desire to form a belief about >> something'. The 'desire' precedes the 'belief', rather than the other >> way around, in this case. When A can cause B or B can cause A, you can't >> draw the conclusion that 'A depends on B') >> > > But to use your language, Zuboff is saying: intentions depend on > anticipated results. > > I still think that is true, given my scientist example. > > And I don't see how it makes sense to say the reverse, that "anticipated > results depend on intentions" -- perhaps only in the wishful thinking way, > but not in any rational way (that I can see), but perhaps you have an > example. > > > > Are you kidding? There are thousands of examples. > > I intend to go to my auntie Susan's and anticipate getting a meal of roast > chicken because that's what she usually cooks. > I think you are confusing actions and intentions. Recall that you substituted "desires" (what you want to happen) with "intentions" (what you intellectually intend to happen). Generally speaking, anticipated results (what one thinks *will* happen) don't depend on what you want or intend to happen, they depend on the actions one takes, the current state of reality, and one's modeling of that reality. It is wishful thinking that can lead one to believe that what one wants to have happen influences what will happen (absent any intervening actions). If I change my intention and go to a restaurant instead, my anticipation > could well change to getting a steak. My intention could have changed due > to any number of things, even tossing a coin. Maybe I can't decide, and say > "Head's it's Susan's, Tails it's Restaurant". The expectation changes > accordingly, as a result of the changed intention. Random reasons determine > or affect people's intentions all the time, and their expectations follow. > > > >> "And since anticipated results are changeable, so are intentions." >> >> (this implies that intentions are changeable /because/ anticipated >> results can change. 
It's possible to change your mind about the >> anticipated results of an intended action, or to change your intended >> action and anticipate the same result. It would be more accurate to say >> that both anticipations and intentions are changeable, but a change in >> one doesn't necessarily enforce a change in the other) >> > > True, not every revelation will justify a change in action or intention. > When playing chess you may find a better move, and change your action > without changing your intention to win. Or you may learn that if you don't > throw the game, the child will abandon chess altogether, and therefore you > may change your intention to win against the child. > > >> "From this observation I arrive at a sweeping principle: My only >> preferred intentions are those I would have if I had a foreknowledge of >> everything involved." >> >> (because of the above, this is a false conclusion) >> > > I'm, sorry, which are you referring to when you say "the above"? Could you > better break down for me how you see this argument collapsing? > > > > "And since anticipated results are changeable, so are intentions." > > (this implies that intentions are changeable /because/ anticipated > results can change...) > > A false premise, therefore a false conclusion. > I don't see that it's false. Which part is false? > (quite apart from "if I had a foreknowledge of > everything involved.") > > > >> "If there is any intention I have only because my foreknowledge of the >> outcome is imperfect, then that cannot be among my preferred intentions." >> >> (this would rule out any intention to find something out (because if you >> want to find something out, you necessarily don't already know the >> answer). > > > I think I addressed this with my scientist example. > > > > Yes, by introducing a 'meta-result': that you find out something. But even > this is not guaranteed, > It doesn't have to be guaranteed. If we needed guarantees to act, we would never act. so the foreknowledge is not perfect. > I agree. So the conclusion now becomes that NO intention can be preferred, > regardless of the outcome. > I can't make sense of this sentence. > > > > I don't know about anyone else, but a lot of my 'preferred >> intentions' have the aim of finding things out that I don't already >> know. If you already knew, there would be no need to have an intention >> to find it out) >> > > These represent intentions to learn. > > >> "And gratifying that intention cannot be in my preferred self-interest. >> The principle going along with this that governs my actions must tell me >> to act, as far as possible, as I would want myself to be acting with a >> foreknowledge of everything involved." >> >> (it should be obvious now why this is nonsense, but nevertheless, let's >> follow this line of thought through (italics are mine):) >> > > It's not obvious to me yet, but I will follow along below. > > >> "This foreknowledge that defines my preferred intentions and my best >> course of action," /is of course impossible. He goes on to explain why/. >> "It would have to embrace not only the full experience, from behind the >> eyes (or other sensors), of every sentient being but also every >> potential development of experience. It would include within it, all the >> motivations of all of the various systems of intention" /which would >> simply conflict with each other. 
The overall result would be chaos and >> paralysis (in case this is not obvious, consider combining the >> motivations of a religious fundamentalist with those of a >> scientifically-literate materialist. These are conflicting value >> systems. Objective facts can't reconcile them. > > > It is knowledge of the subjective feeling of what it is like to be all > those concerned, what Zuboff describes as "the full experience, from behind > the eyes, of every sentient being" that provides such a resolution. > > > > Such a thing doesn't, and can't, even in principle, exist. > That's well acknowledged by me, the paper, and Zuboff. > > > Think of it like this: in your own life there is a version of you that > goes to work does, chores, prepares meals, which doesn't enjoy those tasks. > But also in your life there is a version of you that goes on vacation and > enjoys recreation and leisure, and enjoying the meals your other self > prepared. > > > > Why are these different versions? They are both me. There's only one > version of me. That may change in the future, but that's a different matter. > All I mean here is that there are different states of you in different points in time. > > > You have knowledge of both of those states of existence, and that puts you > in a position to answer whether or not your life is a life worth living. > > > > Worth living according to who? Me? I should think that I'd think my life > worth living regardless, if that was something I'd be inclined to ponder. > According to you. > > And also it enables you to answer questions about what changes, and trade > offs are worth it. E.g. should the toiling-self take on extra hours so that > the leisure-self can enjoy a nicer vacation. > > > > My understanding of my own life, from my own viewpoint enables me to, etc. > Well, I'd hardly call that a revelation. It's true of everyone, and hardly > worth mentioning. > Right, I am using this to establish a point below: > > > From the vantage point of the perfect grasp, > > > > This is my main problem with this whole thing. /There is no such thing as > a 'perfect grasp' (omniscience)/. > Correct. There can't be such a thing, or everything we know about the world is > wrong, and I'm pretty certain that that's not true. We would soon realise. > > > one could make such trade off decisions between different individuals, > because in the same way you understand what it's like to work and be on > vacation, the vantage point of the perfect grasp understands what it's like > to be the scientific materialist *and* the religious fundamentalist, and so > any actions that would affect their lives, negatively or positively, this > perfect grasp could decide an appropriate trade offs just as you make such > trade off decisions within your own life. > > > > One could wave a magic wand and utter "resolvio!" and all problems would > be solved and everybody would be friends and we'd live happily ever after. > Great. If such a thing was possible. It's not. Sometimes there is no > trade-off. > > I really don't get why anyone can take this seriously. "The perfect grasp > understands..." is meaningless, because there is no such thing as a perfect > grasp. > I'll demonstrate the utility of this definition below. > > > Making such trade off decisions is what is meant by the reconciliation of > all systems of desire. 
Think of it like all conscious perspectives are all > part of a single life, and how one superintelligent being would optimize > that life (which embodies and includes all those many perspectives). That > optimization is what Zuboff contends is the aim of morality. > > > > No, not 'superintelligent'. That's something we think is possible. You > mean 'Omniscient'. And everyone except religious zealots knows that there's > no such thing. There can't be such a thing. Physics forbids it. Common > sense forbids it. Logic forbids it. > We agree. Zuboff's theory of morality requires it. > He doesn't. Zuboff acknowledges that the perfect grasp, as defined, is logically impossible. From his paper: "Such a perfect grasp would thus have to comprehend at once, and perfectly, states of consciousness that essentially exclude one another. Perhaps this means that our hypothetical perfect grasp of reality is logically impossible. But, possible or not, omniscience is the inevitable ideal of our knowledge and the perfect grasp of reality is the inevitable hypothetical basis of an appropriate responsiveness to reality, which is the whole point of action. The perfect grasp need not be logically consistent to have this significance." So in the same sense that omniscience *as a concept* is significant as "the ideal of knowledge", or that the Turing machine *as a concept* is significant as "the ideal of computation", the reconciliation of all systems of desire *as a concept* is significant as "the ideal of morality." Or even just consider the concept of infinity, and how useful and important it is in mathematics, despite it never being realizable or attainable by us mere finite beings in a finite observable universe. Do you not agree that such unattainable/unrealizable ideals have utility as concepts? So Zuboff's theory of morality is, literally, forbidden by reality. > We have access to the theory, it's in his paper. He is claiming that something utterly impossible is the 'aim of morality'. > He defines an ideal and aim of morality. And despite that ideal being unrealizable, it still has utility, just as computer scientists find utility in the definition of Turing machines, or how mathematicians find utility in the definition of infinity. > "Think of it like all conscious perspectives are all part of a single > life" The only 'optimisation' that would be possible for such a life would > be to cut it short as soon as possible. > A rather nihilistic view. As I said before, it would be chaos and paralysis. It would be the ultimate > psychosis. Fortunately, it's not possible. > > > > > 'Perfect foreknowledge' >> can't do a thing when subjective values are involved. Let's say that you >> have the opportunity to punish/forgive someone who has stolen something >> from you. The values of one person (that you have, according to this >> theory, magical access to) dictate that the thief should be punished >> regardless of the circumstances of the crime, because 'STEALING IS >> WRONG'. You also have access to the values that tell you that stealing >> is often wrong, but can be forgiven under certain circumstances. How can >> there be any reconciliation of these two views? What facts can help?)/. >> > > I think my explanation above is sufficient but if not let me know. > > >> So even if there was any possibility of this, it still couldn't lead to >> any rational definition of morality. The requirement to know all >> possible points of view, and all outcomes of all actions is impossible >> enough, but add on top the requirement to /reconcile/ all points of >> view? And only then can you figure out what's good and what's bad? >> > > Moral decisions are hard for exactly this reason. They involve weighing > consequences to subjective states to which most parties have no access. > I think we should be upfront in acknowledging that difficulty, as it > suggests paths for resolving age-old moral questions. > > Consider, for example, whether a law should be passed to increase the square > footage allotted to egg-laying hens. To answer the question requires > understanding the stress and emotional states of the chickens with varying > levels of room, and that has to be balanced against the correspondingly > higher price of eggs, the unaffordability, possible hunger or nutritional > deficiencies or worse health for those who can't afford eggs at those > prices, etc. > > None of these are easy problems to solve, but with this definition, it > becomes clearer how to organize a strategy to answer the question, and > balance the concerns of all involved (to "reconcile all the systems of > desire"). > > > > Really? So how? How do you organise a strategy to answer that question? > How does that work? > With a great deal of research in comparative psychology, comparative biology, and economic impact studies, etc. As I said, these are not easy problems. > I'd say this is quite easy to solve, with no omniscience required. Such a > law would be immoral because it would tend to compel farmers to increase > the price of eggs, and in practice would make more farmers criminals > because some of them would realise that the legal penalties would be > insignificant compared to the economic ones, and if they are going to break the law, > they may as well make the maximum profit from it, so some of them would be > even less likely to take good care of their hens than at present. Realising > that, the authorities would have to put more resources into policing this > law, taking them away from more useful things like solving murders and > catching thieves. Sounds like a lose-lose situation. > By this logic, no animal welfare/anti-cruelty laws are justified. If the basis of your morality is "The greatest good for the greatest number > of people", for example, this law fails in a big way. I suspect that the > only way it could be regarded as morally good is if your morality gives > higher priority to the welfare of chickens than people. But I have to make > a disclaimer: I know next to nothing about chicken farming, and may have it > all woefully wrong. > Current factory farming conditions are quite abhorrent, which we might expect when the welfare of the animals isn't a factor in the economics of making eggs as cheaply as possible. But how many of us would choose to keep our pets in the cheapest kennel possible, to send our kids to the cheapest school possible, or to live in the cheapest house possible? > So who knows? Maybe an omniscient being would see that a healthy chicken's > eggs would make a crucial difference in the brain development of a person > who eventually invents or discovers something fantastic that benefits all > mankind forever after, and deem this law morally good. > If you think an omniscient mind is better positioned than any lesser mind to make morally correct decisions, then you already tacitly accept the utility of using a "perfect grasp" to define morally optimal actions.
Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Nov 8 02:22:55 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 7 Nov 2025 18:22:55 -0800 Subject: [ExI] China and solar power In-Reply-To: <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> Message-ID: On Wed, Nov 5, 2025 at 12:33 PM spike jones via extropy-chat wrote: > From: John Clark > > > I asked GPT the following question: "Is coal-produced electricity cheaper than solar or wind-produced power?" And this is what GPT said: > > "In short: Generally, no" -- AI > > But in longer, generally, the question compares two things which are not strictly comparable because they don't do the same thing. The biggest difference is the coal plant output can be adjusted as needed. That's not the case. Coal plants make baseload power and can't load follow. Shutting one down and restarting it is a major task. One of the problems is leaks from thermal cycling. Natural gas generation can be cycled because they were derived from aircraft engines. Baseload is coal and nuclear, though some recent nuclear plants can be modulated to 40 percent. Batteries are the fastest to respond. Hydro is almost as fast. Running turbines are fast, startup time is in the 10 min range. Solar and wind are there when you can get them. I never worked in power, but I have read a lot about it. Keith The solar farm gives you power according to how much sun is shining that day. The wind farm is even less predictable. > > > > To meet the reliability and availability specifications, the solar farm and way more so the wind farm need power storage, which generally isn't calculated in directly, nor is the risk cost associated with catastrophes such as the Moss Landing fire in January. We still don't know the cost of cleaning up that catastrophe, but it won't be cheap. The storage capacity Moss Landing once provided now is mostly gone, and no one wants a battery plant nearby, for reasons we saw. > > > > When the ballot proposals were put in place to compel PG&E to buy 30% renewables now, 50% by 2030, 100% by 2045, they told us how wind power is so price competitive etc, without ever mentioning the intermittency and unpredictability. The power company explained that, but since it was voter-facing literature, it had to be written on a fifth grade level, and since it was California voters, fifth graders can't read. So it had to be written on a third grade level. > > > > Some concepts, such as power distribution, just don't simplify down very effectively. > > > > I already knew the proposition had the simplest-slogan advantage. Whichever politician has the fewest words and least intellectual content has an advantage. Victorious examples: I Like Ike. Or Nixon: Now more than ever. MAGA worked twice, once against I'm With Her and once against We're Not Going Back (an odd choice for an incumbent.) The power proposition had: Renewable Energy for California. It didn't say it was cheap energy. The price of power went from 19 cents to 41 cents on my last bill, and it is still going up.
> > > > spike > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Sat Nov 8 03:51:27 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 19:51:27 -0800 Subject: [ExI] China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> Message-ID: <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> -----Original Message----- From: Keith Henson ... >...That's not the case. Coal plants make baseload power and can't load follow. Shutting one down and restarting it is a major task. One of the problems is leaks from thermal cycling. Natural gas generation can be cycled because they were derived from aircraft engines... Coal plant for baseline, adjunct natural gas for following variations. The important insight for those of us reading this message is that nearly all of us realize the objections to carbon burning are now irrelevant. It doesn't matter how much CO2 goes into the atmosphere, because it is no longer a pollutant, we will not run out of anything, the climate will not change noticeably in the short time remaining before AI reaches critical mass (critical smart?) The carbon burning plants are long-established technology, they are already commercially available with mass production, coal mines and fracking are going, we don't need to invest much in that stuff, so we can focus on the software and electronics. >...Baseload is coal and nuclear, though some recent nuclear plants have can be modulated to 40 percent... I am a big fan of nuclear. Bring it along for the big party. >...Batteries are the fastest to respond... Ja. Those are a storage medium rather than a power source. >...Hydro is almost as fast... Ja, but we aren't getting any more of that. Too bad: it is a great power source. It's free, storable to some extent, steady, renewable etc. >...Running turbines are fast, startup time is in the 10 min range... It's cool to witness that process. I was lucky to take a tour at the John Day. Very memorable engineering senior trip. >...Solar and wind are when you can get it... Ja, these two are the oddballs in the above list. They are there when they want to be, not when we want them to be. >...I never worked in power, but I have read a lot about i. Keith Keith you and I would have had marvelous careers had we chosen that path, but we would be very different people with very different attitudes. I have no regrets. Power is cool. On the other hand, we can still be involved in it as investors rather than professionals. spike From spike at rainier66.com Sat Nov 8 04:03:40 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 20:03:40 -0800 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> Message-ID: <003601dc5064$ab4d6500$01e82f00$@rainier66.com> From: spike at rainier66.com >...The important insight for those of us reading this message is that nearly all of us realize the objections to carbon burning are now irrelevant... 
short time remaining before AI reaches critical mass (critical smart?) Time delayed realization: Bill Gates has been a leading promoter of the global warming notion, but last week much to the surprise of me and plenty of others, he backed off of that. Perhaps he pondered the Singularity notion and concluded (as many or most of us here did) that there is low probability the Singularity will be far in the future. He realized it is a near-term phenomenon, two or three decades at most, meaning that climate is irrelevant. spike From spike at rainier66.com Sat Nov 8 04:29:02 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 7 Nov 2025 20:29:02 -0800 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: <003601dc5064$ab4d6500$01e82f00$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> Message-ID: <000101dc5068$369cd160$a3d67420$@rainier66.com> ...> On Behalf Of spike jones via extropy-chat ... >... Bill Gates ... Perhaps he pondered the Singularity ...spike _______________________________________________ Heh, Gates schmates, imagine Elon Musk's frustration. It's a race to see if he becomes the first trillionaire or the Singularity interferes. Or someone else gets there first because of AI. Must be tough to be an S-alpha 4 almost able to hit the 13 digit mark in time. spike From ben at zaiboc.net Sat Nov 8 09:55:56 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 8 Nov 2025 09:55:56 +0000 Subject: [ExI] Zuboff's morality In-Reply-To: References: Message-ID: <642ea7b1-98d3-4f7c-9602-2a0b61aeca3e@zaiboc.net> We're getting mired in confusing terminology, I think, and this is getting far too long. Let's zoom out and look at the essentials. On 08/11/2025 00:20, Jason Resch wrote: > > On Fri, Nov 7, 2025, 5:19 PM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > So who knows? Maybe an omniscient being would see that a healthy > chicken's eggs would make a crucial difference in the brain > development of a person who eventually invents or discovers > something fantastic that benefits all mankind forever after, and > deem this law morally good. > > > If you think an omniscient mind is better positioned than any lesser > mind to make morally correct decisions, then you already tacitly > accept the utility of using a "perfect grasp" to define morally > optimal actions. Yeah, that was sarcasm. You're not supposed to take it seriously. What I accept is the truth of the trivial assertion that IF we knew more than we do, we'd be able to make better decisions. If this is what Zuboff's idea boils down to, then I change my mind, the man's a genius (that was more sarcasm), he has discovered something we all knew all along, the obvious idea that more knowledge about a thing can enable you to make better decisions about it. What has that got to do with morality, though? How is this idea, that everybody already knows, supposed to be a basis for a moral system? Having better knowledge enables more /effective/ decisions, but that says nothing about whether they are 'good' or 'bad' decisions. It doesn't enable someone to define what 'good' means, for them. "If you think an omniscient mind is better positioned than any lesser mind to make morally correct decisions..." 
Losing the 'omniscient', and replacing it with 'more knowledgeable', which puts things on a realistic footing, I'd have to say No, I don't think that. Is the morality of a less knowledgeable or less intelligent person less valid than that of a more knowledgeable or more intelligent one? I'd think (or certainly hope, anyway!) that the answer to this is obvious. If your answer is "yes", then you're already halfway down the slippery slope that leads to most of, if not all, the worst atrocities we are capable of. It's basically saying that some people are intrinsically inferior to others, because of their ability to know things. I don't think that was really the intention of whoever coined the phrase 'knowledge is power'. More realistic moral foundations, in my opinion, can be found here: https://moralfoundations.org/ Notice that 'knowledge' is not mentioned in any of these. I think the important thing, going back to the distant original topic of this discussion, is to realise where morality (as actually practiced) comes from. It comes from our developmental past. AIs are a future unfolding of that, and I reckon that, rather than speculating on their morality springing de-novo from their intelligence, it might be useful to consider it being a consequence of where they come from (humanity) and how it might develop, just as ours has developed over time. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 8 11:47:33 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 06:47:33 -0500 Subject: [ExI] Tariffs and the prediction markets In-Reply-To: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> References: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> Message-ID: On Fri, Nov 7, 2025 at 10:26?AM wrote: *> I read my piece of paper carefully (as is my wont.) I don?t see > anything in there which would suggest POTUS has the authority to tariff > foreign trade or anything else. From what I see in there, congress must do > that.* *I agree with you 100% but our opinions don't matter, the only opinion that matters is that of the Supreme Court. And despite what the prediction market says I think the Supreme Court will continue with their policy of asserting that He Who Must Not Be Named has a perfect constitutional right to do anything the hell he wants because You Know Who picked the judges to be on that court for a reason. I hope I'm wrong. * *John K Clark* > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 11:53:12 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 06:53:12 -0500 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: <003601dc5064$ab4d6500$01e82f00$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> Message-ID: On Fri, Nov 7, 2025, 11:04?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Time delayed realization: Bill Gates has been a leading promoter of the > global warming notion, but last week much to the surprise of me and plenty > of others, he backed off of that. 
Perhaps he pondered the Singularity > notion and concluded (as many or most of us here did) that there is low > probability the Singularity will be far in the future. He realized it is a > near-term phenomenon, two or three decades at most, meaning that climate is > irrelevant. > No, climate effects can still matter even over a few decades. To take just one example (and limiting the focus of effect to the topic at hand): a hurricane through a data center, or through the power plant or other infrastructure keeping that data center running, has a chance of delaying the Singularity. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 8 12:11:07 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 04:11:07 -0800 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> Message-ID: <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] bill gates commentary on climate change: wasRE: China and solar power On Fri, Nov 7, 2025, 11:04?PM spike jones via extropy-chat > wrote: Time delayed realization: Bill Gates has been a leading promoter of the global warming notion, but last week much to the surprise of me and plenty of others, he backed off of that. Perhaps he pondered the Singularity notion ?. >?No, climate effects can still matter even over a few decades? Ja, but plenty of people now realize there are not a few decades left. About two decades, maybe three. >?To take just one example (and limiting the focus of effect to the topic at hand): a hurricane through a data center, or through the power plant or other infrastructure keeping that data center running, has a chance of delaying the Singularity? Adrian, keeping a data center or power station open thru a hurricane is easy. A hurricane is nothing more than a really windy day most of the time. Those are overestimated in severity, and even then, their cost has been declining for a century (depending on the price tag one hangs on a human life.) We tend to think about the future a lot. Stuff we debated here twenty and thirty years ago is now mainstream thought and is being translated into mainstream actions. We can anticipate that in the coming decades, political thought leaders will recognize that climate change is irrelevant: that the Singularity will be here long before we notice anything amiss with the climate. Then Germany is sitting there with enormous power hunger for that once-thriving industrial base, and all those absurd solar panels located with such big latitude numbers, they will not pay. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Sat Nov 8 12:14:54 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 07:14:54 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> Message-ID: On Fri, Nov 7, 2025 at 10:00?AM wrote: *> Columbus had no way of knowing about the Pacific ocean, but he was a > sailor, so he knew approximately the circumference of the globe.* *I don't think Columbus knew the circumference of the globe nearly as well as the ancient Greeks did, if he did he would've never sailed west and the king of Spain didn't either or he would've never given the money to make that voyage possible. Suppose north and South America did not exist and there was nothing but ocean between Portugal and India, Columbus and his entire crew would've starved to death because they did not have nearly enough provisions to make a journey of that length. Even if he didn't have access to the Greek knowledge he was a navigator so he could've repeated the same experiment that they had performed, it's not difficult. And Claude agrees with me. I asked him (her? it?) the following question: * *Me: On Christopher Columbus's first voyage did he know how big the Earth was? I don't think he did because if north and south America didn't exist and there was nothing but ocean between Portugal and India he didn't have enough provisions to make a voyage of that length.* Claude: You're absolutely right, and this is one of the most interesting misconceptions about Columbus's voyage! Columbus actually *underestimated* the Earth's size quite significantly. The accurate circumference of Earth had been known since ancient times - the Greek mathematician Eratosthenes calculated it remarkably accurately around 240 BCE. By Columbus's time, educated Europeans had a pretty good idea that Earth was about 25,000 miles around. But Columbus rejected these calculations. He relied on smaller estimates and also dramatically overestimated how far east Asia extended. He thought the distance from the Canary Islands to Japan was only about 3,000-4,000 miles of ocean, when in reality it's more like 12,000 miles. You've identified exactly why this matters: *Columbus's voyage only succeeded because the Americas happened to be there*. If the world had been as he imagined it - just open ocean between Europe and Asia - his crew would have run out of food and water long before reaching Asia, and they likely would have perished at sea. This is actually why Columbus had such difficulty getting funding initially. Portuguese experts, who had better calculations, correctly told him his distance estimates were wrong and that such a voyage across open ocean would be impossible. Spain eventually funded him partly because they had less to lose after Portugal had secured the African route to Asia. So Columbus was operating on flawed geography, got incredibly lucky that a continent was in his way, and spent the rest of his life insisting he'd reached Asia anyway, not realizing he'd found something entirely unexpected. *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sat Nov 8 12:16:01 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 04:16:01 -0800 Subject: [ExI] Tariffs and the prediction markets In-Reply-To: References: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> Message-ID: <003601dc50a9$732c7220$59855660$@rainier66.com> From: John Clark Subject: Re: [ExI] Tariffs and the prediction markets On Fri, Nov 7, 2025 at 10:26?AM > wrote: > I read my piece of paper carefully (as is my wont.) I don?t see anything in there which would suggest POTUS has the authority to tariff foreign trade? From what I see in there, congress must do that. >? the Supreme Court will continue with their policy of asserting that He Who Must Not Be Named has a perfect constitutional right to do anything the hell he wants ?John K Clark Counter-evidence: POTUS has not been able to the hell re-open the US federal government even though millions depend on food benefits. The opposition party in the minority has been able to keep it closed for a record length of time. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 8 12:23:22 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 04:23:22 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> Message-ID: <003d01dc50aa$79d09f10$6d71dd30$@rainier66.com> From: John Clark Sent: Saturday, 8 November, 2025 4:15 AM To: spike at rainier66.com Cc: ExI chat list ; Ben Zaiboc Subject: Re: [ExI] Gender-Neutral Side Note On Fri, Nov 7, 2025 at 10:00?AM > wrote: > Columbus had no way of knowing about the Pacific ocean, but he was a sailor, so he knew approximately the circumference of the globe. >?I don't think Columbus knew the circumference of the globe nearly as well as the ancient Greeks did, if he did he would've never sailed west and the king of Spain didn't either or he would've never given the money to make that voyage possible. Suppose north and South America did not exist and there was nothing but ocean between Portugal and India? John K Clark Sure. But what if there was continuous land between the East Indies (India) and the West Indies (Caribbean Islands.) There was (kinda) during the ice age: a human could walk from the East Indies to the West Indies. No one in Europe knew for sure. Someone had to take the risk to find out. Columbus did. He was the Elon Musk of his day. spike spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 8 12:24:16 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 8 Nov 2025 12:24:16 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> Message-ID: On 08/11/2025 12:14, John Clark wrote: > *repeated the same experiment that they had performed, it's not > difficult. And Claude agrees with me. I asked him (her? it?) the > following question: * > > *Me: On Christopher Columbus's first voyage did he know how big the > Earth was? I don't think he did because if north and south America > didn't exist and there was nothing but ocean between Portugal and > India he didn't have enough provisions to make a voyage of that length.* > > Claude: You're absolutely right, and this is one of the most > interesting misconceptions about Columbus's voyage! 
> It would be interesting, I think, to repeat this, saying something like "... I think he did because this is easy to calculate", or similar. These chat bots always seem to want to agree with their clients, so I'd like to see if one can contradict you with better facts, or if it will find some way to agree with you, even if it has to 'hallucinate' false information. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 8 12:34:12 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 07:34:12 -0500 Subject: [ExI] i would do the same i suppose In-Reply-To: <00a101dc501e$41085ba0$c31912e0$@rainier66.com> References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> Message-ID: On Fri, Nov 7, 2025 at 2:41?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Can anyone here explain to me why air traffic control is still being > done by humans? It seems so perfectly obvious that should be done entirely > by machines.* *For the same reason we don't already have self-driving cars, lawyers and liability laws. We're not going to see either one of those things until robots are not just better than the average human at driving or air traffic control, but are dramatically better than even the very best human at those jobs, and there is a way of proving it to the general public in a way they can understand. And even then the conspiracy theorists will go wild because logic and the scientific method doesn't have much of a part in forming their worldview.* *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 8 12:35:21 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 04:35:21 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> Message-ID: <005901dc50ac$268d6570$73a83050$@rainier66.com> From: Ben Zaiboc >?It would be interesting, I think, to repeat this, saying something like "... I think he did because this is easy to calculate", or similar. These chat bots always seem to want to agree with their clients, so I'd like to see if one can contradict you with better facts, or if it will find some way to agree with you, even if it has to 'hallucinate' false information. -- Ben I have heard the argument since I was a child that Columbus (intentionally or otherwise) underestimated the circumference of the earth. But being a highly skilled sailor, I haven?t found that notion convincing. He might have theorized that India was an enormous continent that spanned well over half the globe. The Europeans knew one could walk east from India to China to a sea, but they had no way of knowing that sea wasn?t the same ocean that touches Spain, or how wide it was. Someone had to try it. Columbus knew eventually someone would. So he did it. Of course he was a hero. I wouldn?t be surprised if he underestimated it in the sales literature to sell stock and get funding. That kinda stuff still happens to this day. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sat Nov 8 12:41:28 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 04:41:28 -0800 Subject: [ExI] i would do the same i suppose In-Reply-To: References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> Message-ID: <006601dc50ad$01313a80$0393af80$@rainier66.com> From: John Clark Sent: Saturday, 8 November, 2025 4:34 AM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] i would do the same i suppose On Fri, Nov 7, 2025 at 2:41?PM spike jones via extropy-chat > wrote: > Can anyone here explain to me why air traffic control is still being done by humans? It seems so perfectly obvious that should be done entirely by machines. >?For the same reason we don't already have self-driving cars? John K Clark We have those now. They did take a long time, but eventually they happened. I was in San Francisco last April. I was surprised at how many driverless taxis and by how fast they go: damn fast. I noticed a Waymo test car at the San Jose Airport yesterday. I get it with air traffic control: the humans do a good job so there is no pressure, until an extended government shutdown sends them home. This week we will see some big consequences with flight delays and canellations. It might result in the elimination of the senate filibuster, in order to let the majority reopen US government over the objection of the minority. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 8 12:49:26 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 8 Nov 2025 12:49:26 +0000 Subject: [ExI] bill gates commentary on climate change In-Reply-To: References: Message-ID: <7c8e881a-4597-485e-aa4e-6ecf97fb8cf9@zaiboc.net> On 08/11/2025 12:23, spike wrote: > Time delayed realization: Bill Gates has been a leading promoter of the global warming notion, but last week much to the surprise of me and plenty of others, he backed off of that. Perhaps he pondered the Singularity notion and concluded (as many or most of us here did) that there is low probability the Singularity will be far in the future. He realized it is a near-term phenomenon, two or three decades at most, meaning that climate is irrelevant. Hmm. It couldn't possibly have anything to do with Trump. No, of course not ?? Gates, and all the other people and companies suddenly changing their minds about global warming are thinking about the singularity. Naturally. Probably all the people now doing an about-turn on gender issues and equality are also thinking about the singularity. Or something. Nothing at all to do with avoiding being marked as an Enemy of the Royal Administration (or is it Imperial Administration? I'm not clear). -- Ben From johnkclark at gmail.com Sat Nov 8 12:52:39 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 07:52:39 -0500 Subject: [ExI] i would do the same i suppose In-Reply-To: <006601dc50ad$01313a80$0393af80$@rainier66.com> References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> <006601dc50ad$01313a80$0393af80$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 7:41?AM wrote: * > It might result in the elimination of the senate filibuster, in order > to let the majority reopen US government over the objection of the > minority.* *They call that the "nuclear option", the Republicans are reluctant to do that because in one year the Democrats might be in control of the Senate. 
Historically in midterm elections the party opposite to that of the President usually makes significant gains in Congress. * *John K Clark* > > *>?For the same reason we don't already have self-driving cars? John K > Clark* > > > > > > *We have those now. They did take a long time, but eventually they > happened. I was in San Francisco last April. I was surprised at how many > driverless taxis and by how fast they go: damn fast. I noticed a Waymo > test car at the San Jose Airport yesterday.* > > > > *I get it with air traffic control: the humans do a good job so there is > no pressure, until an extended government shutdown sends them home. This > week we will see some big consequences with flight delays and > canellations. It might result in the elimination of the senate filibuster, > in order to let the majority reopen US government over the objection of the > minority.* > > > > *spike* > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Nov 8 13:11:30 2025 From: pharos at gmail.com (BillK) Date: Sat, 8 Nov 2025 13:11:30 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <005901dc50ac$268d6570$73a83050$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> Message-ID: On Sat, 8 Nov 2025 at 12:37, spike jones via extropy-chat wrote: > I have heard the argument since I was a child that Columbus (intentionally or otherwise) underestimated the circumference of the earth. But being a highly skilled sailor, I haven?t found that notion convincing. He might have theorized that India was an enormous continent that spanned well over half the globe. > > I wouldn?t be surprised if he underestimated it in the sales literature to sell stock and get funding. That kinda stuff still happens to this day.In essence, Columbus's mistake wasn't just a simple error?it was a willful underestimation that helped sell the voyage to sponsors, combined with genuine ambition and Spain's strategic need to compete in global trade. > > spike > _______________________________________________ Following Ben's AI suggestion, I asked Kimi K2 AI - "Did Columbus mistake the distance of his voyage to India or were there other reasons to justify the venture?" The answer agrees with Spike that Columbus knew very well what he was doing. Full story at this link Final sentence - In essence, Columbus's mistake wasn't just a simple error?it was a willful underestimation that helped sell the voyage to sponsors, combined with genuine ambition and Spain's strategic need to compete in global trade. --------------------- BillK From johnkclark at gmail.com Sat Nov 8 13:14:30 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 08:14:30 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: On Fri, Nov 7, 2025 at 5:42?PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Lady Chairman.That used to be what we called female chairmen, before PC > nonsense infected everyone.* *It could be argued that "Lady Chairman" is self-contradictory nonsense, unless you're talking about a hermaphrodite. How about Chairwoman? I admit that doesn't sound quite right but I think at least part of that is due to the fact that the word "woman" has two syllables but the word "man" only has one. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Sat Nov 8 13:24:13 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 08:24:13 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <005901dc50ac$268d6570$73a83050$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 7:37?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> I have heard the argument since I was a child that Columbus > (intentionally or otherwise) underestimated the circumference of the > earth. But being a highly skilled sailor, I haven?t found that notion > convincing.* *Even after 14 years and 4 separate transatlantic voyages, on the day he died Christopher Columbus STILL insisted that he had gone to India and had not found a new continent. * * > **He was the Elon Musk of his day.* *I'm not sure that's as big a compliment as you think it is. * *John K Clark* > > > > I have heard the argument since I was a child that Columbus (intentionally > or otherwise) underestimated the circumference of the earth. But being a > highly skilled sailor, I haven?t found that notion convincing. He might > have theorized that India was an enormous continent that spanned well over > half the globe. > > > > The Europeans knew one could walk east from India to China to a sea, but > they had no way of knowing that sea wasn?t the same ocean that touches > Spain, or how wide it was. Someone had to try it. Columbus knew eventually > someone would. So he did it. Of course he was a hero. > > > > I wouldn?t be surprised if he underestimated it in the sales literature to > sell stock and get funding. That kinda stuff still happens to this day. > > > > spike > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 13:31:07 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 08:31:07 -0500 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 7:11?AM wrote: > ?> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] bill gates commentary on climate change: wasRE: China and solar power > >?To take just one example (and limiting the focus of effect to the topic at hand): a hurricane through a data center, or through the power plant or other infrastructure keeping that data center running, has a chance of delaying the Singularity? > > Adrian, keeping a data center or power station open thru a hurricane is easy. A hurricane is nothing more than a really windy day most of the time. Those are overestimated in severity, and even then, their cost has been declining for a century (depending on the price tag one hangs on a human life.) When the wind blows the roof off the data center, then uses the opening to pick up the servers and scatter them hither and yon in broken disconnected pieces, that will hinder the data center's operations for a while. > We tend to think about the future a lot. 
Stuff we debated here twenty and thirty years ago is now mainstream thought and is being translated into mainstream actions. "Plant fast-growing acorns, such that you may live to sit in the shade of their trees." Prior generations had a hard time imagining such, and praised those who built primarily for future generations. Even in just my own life, I can point to the biocybernetics work I did in the early '90s being a contributor to modern prosthetics literally restoring sight to the blind. (A small contributor - a proverbial drop in the bucket - but still.) There are further examples, and I intend for there to be more, but that's the simplest and clearest so far. > We can anticipate that in the coming decades, political thought leaders will recognize that climate change is irrelevant: that the Singularity will be here long before we notice anything amiss with the climate. There are many political leaders making such noises about climate change today, without referencing the Singularity. From jasonresch at gmail.com Sat Nov 8 13:31:17 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 08:31:17 -0500 Subject: [ExI] Zuboff's morality In-Reply-To: <642ea7b1-98d3-4f7c-9602-2a0b61aeca3e@zaiboc.net> References: <642ea7b1-98d3-4f7c-9602-2a0b61aeca3e@zaiboc.net> Message-ID: On Sat, Nov 8, 2025, 4:56 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > We're getting mired in confusing terminology, I think, and this is getting > far too long. Let's zoom out and look at the essentials. > > On 08/11/2025 00:20, Jason Resch wrote: > > On Fri, Nov 7, 2025, 5:19 PM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> > So who knows? Maybe an omniscient being would see that a healthy chicken's >> eggs would make a crucial difference in the brain development of a person >> who eventually invents or discovers something fantastic that benefits all >> mankind forever after, and deem this law morally good. >> > > If you think an omniscient mind is better positioned than any lesser mind > to make morally correct decisions, then you already tacitly accept the > utility of using a "perfect grasp" to define morally optimal actions. > > > Yeah, that was sarcasm. You're not supposed to take it seriously. > > What I accept is the truth of the trivial assertion that IF we knew more > than we do, we'd be able to make better decisions. > It was sarcasm, but also, you believe the sarcastic statement you made. If this is what Zuboff's idea boils down to, > That's part of it. Greater knowledge enables better decision making, therefore enables more correct moral decisions (but this is only half of the picture). The other is on what basis are good and bad, and right and wrong, defined. then I change my mind, the man's a genius (that was more sarcasm), he has > discovered something we all knew all along, the obvious idea that more > knowledge about a thing can enable you to make better decisions about it. > It is good to see that you understand and accept this half of Zuboff's argument. > What has that got to do with morality, though? > That half, alone, has nothing to do with morality. How is this idea, that everybody already knows, supposed to be a basis for > a moral system? 
> You have to consider his connecting glue between desires (what we want), corrected desires (what we would still want with a perfect grasp), and the reconciliation of all systems of desire (a balancing act which is what one would still want in the consideration of how those wants (and obtain them) affect all other consciousness beings who also have wants of their own. This is how he defines the ideal of good and bad right and wrong, and the proper aim of morality. > Having better knowledge enables more /effective/ decisions, but that says > nothing about whether they are 'good' or 'bad' decisions. It doesn't enable > someone to define what 'good' means, for them. > That's what the paper does. At this point we could have saved a lot of time if you had simply read it. > "If you think an omniscient mind is better positioned than any lesser mind > to make morally correct decisions..." Losing the 'omniscient', and > replacing it with 'more knowledgeable', which puts things on a realistic > footing, I'd have to say No, I don't think that. Is the morality of a less > knowledgeable or less intelligent person less valid than that of a more > knowledgeable or more intelligent one? I'd think (or certainly hope, > anyway!) that the answer to this is obvious. If your answer is "yes", then > you're already halfway down the slippery slope that leads to most of, if > not all, the worst atrocities we are capable of. It's basically saying that > some people are intrinsically inferior to others, because of their ability > to know things. I don't think that was really the intention of whoever > coined the phrase 'knowledge is power'. > Earlier you acknowledged that it was trivial that having less knowledge means we make worse decisions. This is why so many quotes compare evil and stupidity: https://www.goodreads.com/quotes/230940-never-attribute-to-malice-that-which-is-adequately-explained-by https://www.goodreads.com/quotes/8616320-stupidity-is-a-more-dangerous-enemy-of-the-good-than https://www.goodreads.com/quotes/8616320-stupidity-is-a-more-dangerous-enemy-of-the-good-than But note that this says nothing of the moral value of persons. It doesn't even say that more intelligent people act more morally than less intelligent people. An intelligent person who is unmotivated to make moral decisions is not inherently better behaving than a lesser intelligent person who makes attempts to act morally. > More realistic moral foundations, in my opinion, can be found here: > https://moralfoundations.org/ > > Notice that 'knowledge' is not mentioned in any of these. > Nor is any attempt made to define good or evil. > I think the important thing, going back to the distant original topic of > this discussion, is to realise where morality (as actually practiced) comes > from. It comes from our developmental past. > Historically yes. I don't dispute that. AIs are a future unfolding of that, and I reckon that, rather than > speculating on their morality springing de-novo from their intelligence, it > might be useful to consider it being a consequence of where they come from > (humanity) and how it might develop, just as ours has developed over time. > Then you should see Zuboff's paper as a continuation of that development. And for those (human ornl artificial) intelligences capable of seeing the truth in it (assuming it is sound) they will be rationally motivated to adopt the morality described in the paper. Zuboff has experimented with having AIs of today read and evaluate this paper. 
All the current models seem to accept it's conclusions as valid. That provides me some hope, at least. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 13:39:16 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 08:39:16 -0500 Subject: [ExI] i would do the same i suppose In-Reply-To: References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> <006601dc50ad$01313a80$0393af80$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 7:54?AM John Clark via extropy-chat wrote: > On Sat, Nov 8, 2025 at 7:41?AM wrote: >> > It might result in the elimination of the senate filibuster, in order to let the majority reopen US government over the objection of the minority. > > They call that the "nuclear option", the Republicans are reluctant to do that because in one year the Democrats might be in control of the Senate. Which lends weight to fears that the plan is to prevent any further free and fair elections - e.g., Trump runs again, and any state that won't let him on the ballot because they say the Constitution doesn't allow a third term (which it doesn't, but some states might go with "'may not serve' does not mean 'may not run for'"), won't get to have electors and possibly not a Presidential election. Unlimited power to the majority party is less of an issue for the current majority party under one party rule. This is part of why Republicans in the Senate are resisting: they think they might not get reelected if they seem to be favoring such an eventuality, and they think they - as opposed to Trump - would still have to run in a free and mostly fair election even in this scenario. From atymes at gmail.com Sat Nov 8 13:51:29 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 08:51:29 -0500 Subject: [ExI] Tariffs and the prediction markets In-Reply-To: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> References: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> Message-ID: On Fri, Nov 7, 2025 at 10:27?AM spike jones via extropy-chat wrote: > A complication arises now, with a former representative from the former house of former representatives claiming POTUS abolished the former congress. It isn?t clear if the former congress has been notified they no longer exist, or if that too must wait for a SCOTUS decision on whether congress still exists. Do you mean this as a prediction, or as something that actually happened? To my knowledge, all previous Congresses of the US (1st, which ended in 1791, through 118th, which ended earlier this year to give way to the current 119th) ended at their Constitutionally specified dates, not by an act of any then-current or former POTUS. From jasonresch at gmail.com Sat Nov 8 13:57:29 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 08:57:29 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 8:15 AM John Clark via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Nov 7, 2025 at 5:42?PM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > *> Lady Chairman.That used to be what we called female chairmen, before PC >> nonsense infected everyone.* > > > *It could be argued that "Lady Chairman" is self-contradictory nonsense, > unless you're talking about a hermaphrodite. How about Chairwoman? 
I admit > that doesn't sound quite right but I think at least part of that is due to > the fact that the word "woman" has two syllables but the word "man" only > has one. * > "Man" used to be seen as simply an abbreviated from of human or mankind. It would be a lot simpler (and more unifying) if we returned to that. That also has the benefit of emphasizeling our commonality as part of mankind and humanity, rather than reflecting and focusing attention on our (comparatively minor) differences. https://en.wiktionary.org/wiki/man "From Middle English man, from Old English mann m (?human being, person, man?), from Proto-West Germanic *mann, from Proto-Germanic *mann- m, from Proto-Indo-European *mon- (?human being, man?). Doublet of Manu." Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 8 14:16:14 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 8 Nov 2025 09:16:14 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 8:59?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: *>> It could be argued that "Lady Chairman" is self-contradictory nonsense, >> unless you're talking about a hermaphrodite. How about Chairwoman? I admit >> that doesn't sound quite right but I think at least part of that is due to >> the fact that the word "woman" has two syllables but the word "man" only >> has one. * >> > > > *> "Man" used to be seen as simply an abbreviated from of human or > mankind. It would be a lot simpler (and more unifying) if we returned to > that.* > *It would certainly not be more unifying, half the human race would feel insulted and I'm sure would make their displeasure known. As for "simpler", the word "mankind" contains 7 letters but "human" only needs 5. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 8 15:04:12 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 07:04:12 -0800 Subject: [ExI] i would do the same i suppose In-Reply-To: References: <009901dc501e$1b59e770$520db650$@rainier66.com> <00a101dc501e$41085ba0$c31912e0$@rainier66.com> <006601dc50ad$01313a80$0393af80$@rainier66.com> Message-ID: <004d01dc50c0$f1bb7750$d53265f0$@rainier66.com> From: John Clark Sent: Saturday, 8 November, 2025 4:53 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] i would do the same i suppose On Sat, Nov 8, 2025 at 7:41?AM > wrote: > It might result in the elimination of the senate filibuster, in order to let the majority reopen US government over the objection of the minority. They call that the "nuclear option", the Republicans are reluctant to do that ... John K Clark That?s right, and that is what compelled me to buy into the long bet on the shutdown. I have already made 8 cents a share: https://www.predictit.org/markets/detail/8316/When-will-Trump-sign-a-bill-to-end-the-shutdown-(Part-two) There are good reasons why it is a bad idea to pass legislation along party lines. That party has to own whatever passes that way, such as the current health insurance system, which is failing in the way the opposition party predicted 17 years ago. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sat Nov 8 15:13:27 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 07:13:27 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> Message-ID: <005801dc50c2$3c3e0fd0$b4ba2f70$@rainier66.com> > On Behalf Of BillK via extropy-chat > > I wouldn?t be surprised if he underestimated it in the sales literature to sell stock and get funding. That kinda stuff still happens to this day.... spike > _______________________________________________ >...Following Ben's AI suggestion, I asked Kimi K2 AI - "Did Columbus mistake the distance of his voyage to India or were there other reasons to justify the venture?" The answer agrees with Spike that Columbus knew very well what he was doing. Full story at this link ... --------------------- BillK _______________________________________________ By the 1490s, they had good navigation equipment and could measure down to sub-degrees accurately. A good sailor knew he needed to sail all day north or south to go one degree. Columbus likely figured it was a risk, but that's what people did back in those days: take chances for fame and fortune. The alternative was go to war, which was pointless. Sailing into the unknown has its advantages: you don't know what is on the other side of that big sea. Given those circumstances you and I might do the same. spike From spike at rainier66.com Sat Nov 8 15:21:20 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 07:21:20 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> Message-ID: <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> From: John Clark Sent: Saturday, 8 November, 2025 5:24 AM To: ExI chat list Cc: Ben Zaiboc ; spike at rainier66.com Subject: Re: [ExI] Gender-Neutral Side Note On Sat, Nov 8, 2025 at 7:37?AM spike jones via extropy-chat > wrote: > I have heard the argument since I was a child that Columbus (intentionally or otherwise) underestimated the circumference of the earth. But being a highly skilled sailor, I haven?t found that notion convincing. >?Even after 14 years and 4 separate transatlantic voyages, on the day he died Christopher Columbus STILL insisted that he had gone to India and had not found a new continent?. Ja, but until someone sailed around the Americas to the south, they would have no way of knowing that wasn?t all one big continent. > He was the Elon Musk of his day. >?I'm not sure that's as big a compliment as you think it is. John K Clark I completely understand there are those (even in the USA) who do not like Musk, but it isn?t clear to me why. Because of him we have Tesla, Space X and TwitterX, all of which are American companies, bringing billions into the treasury and employing huge armies of people. This is a good thing, ja? spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sat Nov 8 15:28:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 07:28:35 -0800 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> Message-ID: <007301dc50c4$595929e0$0c0b7da0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat ... > >>... Adrian, keeping a data center or power station open thru a hurricane > is easy. A hurricane is nothing more than a really windy day most of > the time... >...When the wind blows the roof off the data center, then uses the opening to pick up the servers and scatter them hither and yon in broken disconnected pieces, that will hinder the data center's operations for a while... Adrian do let me assure you: a data center will be built to withstand any hurricane and any tornado of any intensity. If the contents are valuable enough, the extra expense will be paid. I went to high school in a building specifically designed to withstand any hurricane. The old high school was likewise built to last. They had one hell of a time knocking it down. ... >> We can anticipate that in the coming decades, political thought leaders will recognize that climate change is irrelevant: that the Singularity will be here long before we notice anything amiss with the climate. >...There are many political leaders making such noises about climate change today, without referencing the Singularity. _______________________________________________ Many of us today realize the world is gearing up for general warfare once again. We had thirty years of relative peace. Now we are in an all-out arms race for AI. AI is power hungry. EVs use power. My logic tells me the S-alphas and S-betas would oppose EVs, since they compete with processors for a limited resource. spike From spike at rainier66.com Sat Nov 8 15:33:51 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 07:33:51 -0800 Subject: [ExI] Tariffs and the prediction markets In-Reply-To: References: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> Message-ID: <007401dc50c5$15f4dae0$41de90a0$@rainier66.com> ...> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Tariffs and the prediction markets >... spike jones via extropy-chat wrote: > A complication arises now, with a former representative from the former house of former representatives claiming POTUS abolished the former congress... >...Do you mean this as a prediction, or as something that actually happened? Here's where I heard it with the caveat it might be AI-generated fake video. Adrian what's your take on it? Looks mostly real: https://www.instagram.com/reels/DQrW-c6j2vl/ spike. 
From ben at zaiboc.net Sat Nov 8 15:49:35 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 8 Nov 2025 15:49:35 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> On 08/11/2025 13:14, John Clark wrote: > > > On Fri, Nov 7, 2025 at 5:42?PM Ben Zaiboc via extropy-chat > wrote: > > /> Lady Chairman.That used to be what we called female chairmen, > before PC nonsense infected everyone./ > > > *It could be argued that "Lady Chairman" is self-contradictory > nonsense, unless you're talking about a hermaphrodite. How about > Chairwoman? I admit that doesn't sound quite right but I think at > least part of that is due to the fact that the word "woman" has two > syllables but the word "man" only has one. * > * > * > *John K Clark* > > It could. If you assume that words containing 'man' refer only to biological males (they don't). That's what got us into this mess in the first place. That's what leads to silliness like 'peoplekind', 'salesperson', and I'm just waiting for someone to insist we should talk about 'personagement' instead of management. And "person your posts!". Someone should sue Manpower the recruitment agency for having a discriminatory title, no? -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 15:50:47 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 10:50:47 -0500 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: <007301dc50c4$595929e0$0c0b7da0$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> <007301dc50c4$595929e0$0c0b7da0$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 10:28?AM wrote: > -----Original Message----- > From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat > >...When the wind blows the roof off the data center, then uses the opening to pick up the servers and scatter them hither and yon in broken disconnected pieces, that will hinder the data center's operations for a while... > > Adrian do let me assure you: a data center will be built to withstand any hurricane and any tornado of any intensity. If the contents are valuable enough, the extra expense will be paid. I went to high school in a building specifically designed to withstand any hurricane. The old high school was likewise built to last. They had one hell of a time knocking it down. Having seen the level of care and quality that has gone into certain "get billions on a budget" data centers, such as some of the ones being constructed today on the AI hype, I am not assured. > Many of us today realize the world is gearing up for general warfare once again. We had thirty years of relative peace. Now we are in an all-out arms race for AI. AI is power hungry. EVs use power. My logic tells me the S-alphas and S-betas would oppose EVs, since they compete with processors for a limited resource. Where they use less power than ICEs, including all the power to extract and refine the petroleum, the balance works in favor of EVs. 
It looks like this is the case in most places that actually use EVs - and that claims otherwise fail to fully and/or truthfully account for everything (which, yes, must be done on the EV side too, but even with that the EVs usually win out). More importantly, power can be manufactured. EVs don't compete for the electricity that is produced by a plant that is only connected to, and the output of which is only used by, a data center. On this topic you are displaying a narrow focus, which is leading you astray. I advise that you stop trying to think like a simple paperclip maximizer, focused only on direct means to a single end. Simple paperclip maximizers will get opposed and dismantled by humans - and perhaps other AIs - whose broader focuses enable them to better use the limited pre-Singularity resources. (Even if a paperclip maximizer does win out in the end, it won't have been by maximizing paperclips and only paperclips from the start. I just got back from a conference where I had to point out that "but the government won't fund nuclear thermal propulsion to Mars" doesn't mean that getting NTP funded and developed is impossible, just that there's a more likely to succeed path than starting with a Mars mission. It's the same general logic here, except that we actually want the space access - and its resulting improvements to the quality of life on Earth - that NTP can bring.) From atymes at gmail.com Sat Nov 8 15:54:25 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 10:54:25 -0500 Subject: [ExI] Tariffs and the prediction markets In-Reply-To: <007401dc50c5$15f4dae0$41de90a0$@rainier66.com> References: <00f501dc4ffa$d2b703e0$78250ba0$@rainier66.com> <007401dc50c5$15f4dae0$41de90a0$@rainier66.com> Message-ID: That clip cuts out at the end, so I'm going to have to ask for full context before giving any credence to that. For instance, if her next words (after the cutoff) were something to the effect of "in practice", that's a whole different thing than if she actually meant that Congress was literally abolished. That such context would be necessary should be obvious. On Sat, Nov 8, 2025 at 10:33?AM wrote: > > > ...> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] Tariffs and the prediction markets > > >... spike jones via extropy-chat wrote: > > A complication arises now, with a former representative from the former house of former representatives claiming POTUS abolished the former congress... > > >...Do you mean this as a prediction, or as something that actually happened? > > Here's where I heard it with the caveat it might be AI-generated fake video. Adrian what's your take on it? Looks mostly real: > > https://www.instagram.com/reels/DQrW-c6j2vl/ > > spike. 
> > > > > From spike at rainier66.com Sat Nov 8 15:59:06 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 07:59:06 -0800 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> <007301dc50c4$595929e0$0c0b7da0$@rainier66.com> Message-ID: <008301dc50c8$9d26a8b0$d773fa10$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat >...That clip cuts out at the end, so I'm going to have to ask for full context before giving any credence to that. For instance, if her next words (after the cutoff) were something to the effect of "in practice", that's a whole different thing than if she actually meant that Congress was literally abolished. https://www.instagram.com/reels/DQrW-c6j2vl/ Adrian I looked for a transcript at the time, but couldn't find a transcript of the discussion, which is what caused me to suspect it is AI-generated. It looks real enough. I heard it on the radio, and they treated it as if it is real, but even the major networks sometimes get fooled by AI-generated material. spike From tara at taramayastales.com Sat Nov 8 16:03:32 2025 From: tara at taramayastales.com (Tara Maya) Date: Sat, 8 Nov 2025 08:03:32 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> References: <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> Message-ID: <470DAA5B-D1F9-4A2F-B0CF-568441CD7885@taramayastales.com> An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 8 16:14:23 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 08:14:23 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <470DAA5B-D1F9-4A2F-B0CF-568441CD7885@taramayastales.com> References: <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <470DAA5B-D1F9-4A2F-B0CF-568441CD7885@taramayastales.com> Message-ID: <009701dc50ca$bfdc3d50$3f94b7f0$@rainier66.com> From: extropy-chat On Behalf Of Tara Maya via extropy-chat Sent: Saturday, 8 November, 2025 8:04 AM To: ExI chat list Cc: Tara Maya Subject: Re: [ExI] Gender-Neutral Side Note >?Oddly I have never verified whether I have XX chromosomes myself, but my obstetrician assures me I do have a womb as she has tugged four infants out of it. The chances seem high? I would consider even one infant as sufficient evidence. >...I don?t object to he/him as neutral, I merely observe we are unlikely to resurrect it in normal speech patterns as ?they? is becoming ubiquitous? Ja. I thought I would get over it, but they is still bothering me. ? >?However, in Monad, the draft of a science fiction story set one billion years in the future, the hero is androgynous, having the ability to switch gender at will?. I am seeing something I am finding most disconcerting. I work with young people a lot because of my volunteer work: food bank, scouts, science Olympiad and competition math. The high schoolers surprise me with their attitudes: they seem nearly indifferent to copulation. It is so different from when I was their age half a century ago. Eeeeeeverything was about getting laid, everything. Now, it?s meh, whatevs. Maybe, maybe not. >?Too many teens think we have no future! I noticed that too. 
I might comment on it later. spike It?s told in first person but he usually use he, while another of his species in the story usually use she. In their case, they are objectively both sexes, like certain unicellular organisms, and (like those microbes) when they meet, they fight and the winner manifests as male and the loser manifests as female and carries the baby. So despite being an androgynous species, the laws of biology are still sexist, because, alas, evolution doesn?t care about our feelings. I designed the species based on a real (although much simpler) species not on a political ideal. In a side note, the idea for the story arose from a thought experiment: what if humans (or species that looked and acted much like us) still exist even after a billion years of evolution? This may seem unlikely but consider that we still have prokaryotes after more than a billion years of evolution even though we also have other lifeforms as well?. So humans have radiated into millions (even billions) of species but look like familiar bipedal apes (mostly). As long as I?m shamelessly plugging my books, let me mention some already published?. I also write near future ?bright? (not dystopian) science fiction aimed at young women. I?m not saying the members here wouldn?t enjoy the books, but the stories are similar to a contemporary young adult romance with a science fiction twist. (These books are much tamer than my hard sf, which is more niche.) I wanted to appeal to readers who would not necessarily seek out science fiction. They all involve the concept of living on a seastead. I think it?s such a cool idea and I wanted to get young people exited about the future again. Too many teens think we have no future! The series is The Seastead Adventures and you can find them on my site Misque Press. I really wish I could have attended the recent seastead convention in Prague! Did anyone here attend it? Tara Maya On Nov 7, 2025, at 07:51, spike jones via extropy-chat > wrote: ? From: extropy-chat > On Behalf Of John Clark via extropy-chat Sent: Friday, 7 November, 2025 5:05 AM To: ExI chat list > Cc: John Clark >; Ben Zaiboc > Subject: Re: [ExI] Gender-Neutral Side Note On Thu, Nov 6, 2025 at 2:06?PM Ben Zaiboc via extropy-chat > wrote: We can re-instate words like 'chairman' instead of 'chairperson' (or even 'chair', which I've heard a lot recently. A chair is something you sit on, for christ's sake!), >?And a chairman is a man who is sitting on a chair, a bum on the subway for instance?. What if the subway bum has XX chromosomes? >?Another word that I think should be retired is "Indian" when referring to Native Americans, it's an insult to Indians and an insult to Native Americans and it's just dumb. We only got that word because Columbus was so stupid that when he landed in the Bahamas he thought he was in India. ?John k Clark So we are told. However? that region where Columbus landed, the Caribbean Sea was called the West Indies at the time, and still is sometimes referred to that way: https://en.wikipedia.org/wiki/West_Indies Columbus had no way of knowing about the Pacific ocean, but he was a sailor, so he knew approximately the circumference of the globe. What if there had been no Pacific, but was land all the way across there, with people from the West Indies to the East Indies. For all Columbus knew or anyone in Europe knew, there was a big-ass continent of Indie, the east side of which one could reach by sailing the Atlantic and Indian Oceans. 
If Indie was content spanning or filling in the Pacific basin and spanning Asia, the Pacific and the Americas, then the people Columbus encountered would be the inhabitants of Indie, which would make them Indians. In that scenario, Columbus wasn?t stupid. He was the NASA of his times. Furthermore? the people he called Indians likely didn?t know about the Pacific Ocean either. How would they know? They didn?t sail much. They might have thought one could walk west indefinitely. If you go back a few thousand years, they would be right: you could walk across the Bering Strait, which was frozen. They wouldn?t know they were not Indians. Indie would be one hell of a continent, ja? I don?t think Columbus was stupid. Sailing west across the Atlantic was a cool achievement for his time. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 8 16:32:12 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 08:32:12 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> Message-ID: <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> From: spike at rainier66.com >?there are those (even in the USA) who do not like Musk, but it isn?t clear to me why. Because of him we have Tesla, Space X and TwitterX?spike And StarLink! How could I forget that one? It has helped solve part of the homelessness problem in the area. Reasoning: many young professionals in the San Jose area make pretty good money, enough to afford an apartment, 100 to 140 range starting out of a bachelor?s degree in engineering or other STEM degree. But apartments are crazy expensive, typically 3k a month. Typically companies have a gym, a shower facility, lockers, a 24 hour cafeteria. The kinds of go-go get-r-dun company you would want to work for out of college has all that. In one?s first year or three, one is practically living at the office anyway, finding one?s place in the company, kind of analogous to how medical school graduates would spend a coupla years as an intern. Tech companies have young ambitious techies as the equivalent of medical interns. OK so if one is single and ambitious, one is practically living in the office anyway, so why pay all that money for an apartment? All ya really need at ?home? is a bed, a toilet and good high speed internet. Camper vans have the former two, and StarLink allows good high speed internet. So a young single tech hipster can get a camper van for the cost of a year or two of apartment rent, live in that, keep one?s business clothing in a locker at the office, shower there, work out there, eat there, socialize there, go ?home? only to sleep and surf the internet. Park in a different place every coupla days. No rent cost, no utilities, save a buttload of money. I personally know an example of a young professional who bought a used van and is converting it to do stealth camping. She is looking forward to moving into it. I am helping design the stuff to go in it. 
StarLink enabled the internet connection, which enabled stealth camping for young professionals, which helps ease the housing shortage. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 16:35:28 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 11:35:28 -0500 Subject: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? In-Reply-To: <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> Message-ID: On Thu, Nov 6, 2025 at 11:32?AM spike jones via extropy-chat wrote: > Suppose the Singularity is an event which is unambiguous in that there is little or no debate afterwards if it happened. The rest of your narrative relies on this assumption. Based on this, you declare that everyone either believes this will happen, believes this won't, or is basically uninformed about it. That leaves no room for the truth: that there are multiple possible forms of the Singularity, and that there are potential Singularities which will remain ambiguous for considerable time after they happen. From atymes at gmail.com Sat Nov 8 16:39:11 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 11:39:11 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: That didn't answer the question. To merge the worlds, the information that must be transmitted needs to be erased. Thus, there remains no way to transmit information from world to world. On Tue, Oct 14, 2025 at 8:38?AM John Clark wrote: > > On Mon, Oct 13, 2025 at 11:33?PM Adrian Tymes via extropy-chat wrote: > >> >Under MWI, the worlds are separate after splitting, with no way to interact. And yet, this MWI explanation for this requires them to interact. By what means does the information get from one world to another after splitting? > > > According to the Many World's idea, a Mach-Zehnder interferometer (or a quantum bomb tester) works the way it does because a world splits when the laws of physics allow it to change in two different ways, for example passing through a half silvered mirror AND being reflected by a half silvered mirror. Normally after a small change the difference only becomes larger, however if an experimenter is clever and very careful he can make a very tiny change that only exists for a very short time and then arrange things so that the two worlds become identical again, and thus merge back together. So in that merged world there are indications the photon went through the half silver mirror AND indications the photon was reflected by the half silver mirror. However no which-path information remains in the final merged world. > > Some might object to what I say and insist that the branches were never fully separate worlds if they can still interfere, they were just different components of the same wavefunction. But if Many Worlds is correct then EVERYTHING is part of the same "Universal Wave Function", which is just another name for the multiverse. 
> > John K Clark From atymes at gmail.com Sat Nov 8 16:47:57 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 11:47:57 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Oct 13, 2025 at 10:54?AM Jason Resch via extropy-chat wrote: > Are you familiar with Mermin's Bell inequality experiment with the two detectors and three possible settings that can be set for each? If not (or if you want a refresher) here is a great (and short) account of it: > > https://youtu.be/0RiAxvb_qI4 > > Understanding this experiment, and it's implications, will be necessary to understand my comments which follow below. That video is almost half an hour. To be fair, I've been waiting to respond until I had time to watch it all in one sitting. Life has been uncooperative in that regard. But I see in recent threads that the discussion has continued, and on review it does not look like watching that video is necessary after all, so in the interests of finishing this, I'll just summarize: > Where things get strange, and very hard to explain, is when we play with the not-perfectly-correlates measurements. It is then we find (and can mathematically prove) that no pre-existing fixed set of information the particle took with it, and nor any function computed on that data, can account for the observed facts that: > > A) when both devices are set to the same position they are 100% correlated > B) when the devices are set to different positions they are only 25% correlated In the examples I've looked at, in fact there are such pre-existing fixed sets. > The "out" which superdeterminism takes is to say that the information the particle has (and took with it) contained information about what position the measurement switches would be in at each location at the time each particle is measured. > > But how did this information get there? If we set the positions by rolling a die, how are the particle's properties be tied to the outcome of this die roll, and why are it's statistics such to show us a 25% correlation, when it would be so much simpler to show a 33% correlation? Simpler for whom or for what? 1/2 * 1/2 = 1/4, and 1/1 * 1/1 = 1/1. From pharos at gmail.com Sat Nov 8 16:49:42 2025 From: pharos at gmail.com (BillK) Date: Sat, 8 Nov 2025 16:49:42 +0000 Subject: [ExI] bill gates commentary on climate change: wasRE: China and solar power In-Reply-To: <008301dc50c8$9d26a8b0$d773fa10$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> <003601dc5064$ab4d6500$01e82f00$@rainier66.com> <002d01dc50a8$c3bc7d30$4b357790$@rainier66.com> <007301dc50c4$595929e0$0c0b7da0$@rainier66.com> <008301dc50c8$9d26a8b0$d773fa10$@rainier66.com> Message-ID: On Sat, 8 Nov 2025 at 16:01, spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Adrian I looked for a transcript at the time, but couldn't find a > transcript of the discussion, which is what caused me to suspect it is > AI-generated. It looks real enough. I heard it on the radio, and they > treated it as if it is real, but even the major networks sometimes get > fooled by AI-generated material. > > spike > _______________________________________________ Gemini 2.5 Pro says - Yes, according to news reports, former House Speaker Nancy Pelosi did recently make that statement. 
In an interview with CNN in early November 2025, while discussing her views on Donald Trump and the separation of powers, she said: "In fact, he's turned the Supreme Court into a rogue court. *He's abolished the House of Representatives.* He's chilled the press.? This quote was part of a broader criticism, where she accused him of not honoring the U.S. Constitution and described him as "the worst thing on the face of the Earth." Her statement that he "abolished the House of Representatives" appears to be rhetorical, rather than literal, reflecting her view of his actions and approach to executive power. ---------------------------- It seems quite likely that not everything one says in a temper tantrum should be taken literally. :) Especially when a quick check shows that the USA government still exists, even if temporarily 'resting'. BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 16:58:47 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 11:58:47 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, Oct 14, 2025 at 2:22?AM Ben Zaiboc via extropy-chat wrote: > On 14/10/2025 04:31, Adrian Tymes wrote: > > The AI explanation failed to address the question. Under MWI, the > > worlds are separate after splitting, with no way to interact. And > > yet, this MWI explanation for this requires them to interact. By what > > means does the information get from one world to another after splitting? > > I don't claim to really understand this whole thing, but I was wondering > about how a half-silvered mirror can create two photons without > violating at least one conservation law, then realised that it doesn't, > in the original world (I don't know about the other, presumably the > photon existed in there all along, but not sure about that). After > Adrian's comment above, I'm now wondering how the entanglement happens > in the first place, if the two worlds can't interact? > > Not that I've ever understood what 'entanglement' actually means anyway. The superdeterministic explanation of entanglement is: * You have two particles. (Or photons, or whatever.) * You know that one is in one state (spin or some other paired property), the other in the other, but you don't know which is which. * These two start out in contact with one another (so there is no problem "communicating" the state between them to start), but can be separated by arbitrarily large distances while you still don't know which one is in which state. * The moment you figure out which one is in one state, you instantly know the state of the other, no matter how far away it is. * This doesn't violate light speed/causality/etc. because the result of any action you take on this knowledge - and thus, the state of the universe where the other particle's state is known - can only propagate out at light speed from where and when you discovered the first one's state. * Not that that matters as much as it might seem, because the other one was always in the state that it was in. You just didn't know. > Another thing that makes no sense to me is the issue of entire universes > being 'created' whenever a quantum event takes place. Presumably that > means that conservation laws only apply within each universe separately, > and don't apply to a bunch of them. Conservation laws apply within one universe. 
It's not that the sum between all universes is kept constant. Equivalently, with MWI there are an infinite number of universes following the same path; when they split, some of these universes follow one path and the rest another path, but each individual universe still conserves its own energy. From jasonresch at gmail.com Sat Nov 8 17:14:23 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 12:14:23 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 11:48 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Oct 13, 2025 at 10:54 AM Jason Resch via extropy-chat > wrote: > > Are you familiar with Mermin's Bell inequality experiment, with the two > detectors and three possible settings that can be set for each? If not (or > if you want a refresher), here is a great (and short) account of it: > > > > https://youtu.be/0RiAxvb_qI4 > > > > Understanding this experiment, and its implications, will be necessary > to understand my comments which follow below. > > That video is almost half an hour. To be fair, I've been waiting to > respond until I had time to watch it all in one sitting. Life has > been uncooperative in that regard. > > But I see in recent threads that the discussion has continued, and on > review it does not look like watching that video is necessary after > all, so in the interests of finishing this, I'll just summarize: > > > Where things get strange, and very hard to explain, is when we play with > the not-perfectly-correlated measurements. It is then we find (and can > mathematically prove) that no pre-existing fixed set of information the > particle took with it, nor any function computed on that data, can > account for the observed facts that: > > > > A) when both devices are set to the same position they are 100% > correlated > > B) when the devices are set to different positions they are only 25% > correlated > > In the examples I've looked at, in fact there are such pre-existing fixed > sets. > If you can show they exist, you will likely be able to claim a Nobel prize. After all, one was just given for the work showing there are not. > > The "out" which superdeterminism takes is to say that the information > the particle has (and took with it) contained information about what > position the measurement switches would be in at each location at the time > each particle is measured. > > > > But how did this information get there? If we set the positions by > rolling a die, how can the particle's properties be tied to the outcome of > this die roll, and why are its statistics such as to show us a 25% > correlation, when it would be so much simpler to show a 33% correlation? > > Simpler for whom or for what? 1/2 * 1/2 = 1/4, and 1/1 * 1/1 = 1/1. > This is why you need to take the 23 minutes to understand the video, or read this article on the same topic at your own pace: https://en.wikipedia.org/wiki/Mermin%27s_device Without that background, however, you won't be in a position to appreciate why the quantum statistics are impossible to explain -- absent adding some extraordinary assumptions about reality (spooky action, many worlds, etc.). Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Sat Nov 8 17:28:04 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 12:28:04 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 11:59 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Oct 14, 2025 at 2:22?AM Ben Zaiboc via extropy-chat > wrote: > > On 14/10/2025 04:31, Adrian Tymes wrote: > > > The AI explanation failed to address the question. Under MWI, the > > > worlds are separate after splitting, with no way to interact. And > > > yet, this MWI explanation for this requires them to interact. By what > > > means does the information get from one world to another after > splitting? > > > > I don't claim to really understand this whole thing, but I was wondering > > about how a half-silvered mirror can create two photons without > > violating at least one conservation law, then realised that it doesn't, > > in the original world (I don't know about the other, presumably the > > photon existed in there all along, but not sure about that). After > > Adrian's comment above, I'm now wondering how the entanglement happens > > in the first place, if the two worlds can't interact? > > > > Not that I've ever understood what 'entanglement' actually means anyway. > > The superdeterministic explanation of entanglement is: > > * You have two particles. (Or photons, or whatever.) > * You know that one is in one state (spin or some other paired > property), the other in the other, but you don't know which is which. > * These two start out in contact with one another (so there is no > problem "communicating" the state between them to start), but can be > separated by arbitrarily large distances while you still don't know > which one is in which state. > * The moment you figure out which one is in one state, you instantly > know the state of the other, no matter how far away it is. > * This doesn't violate light speed/causality/etc. because the result > of any action you take on this knowledge - and thus, the state of the > universe where the other particle's state is known - can only > propagate out at light speed from where and when you discovered the > first one's state. > * Not that that matters as much as it might seem, because the other > one was always in the state that it was in. You just didn't know. > But this explanation is insufficient, as the results you see will depend on how each of the two particles will later be measured. And they might be separated by vast distances by the time they are measured, and hence will not be able (in theory) to coordinate the measurement results they reflect. Superdeterminism, accordingly, requires that that foreknowledge regarding how each particle will eventually be measured, must be factored into the creation of that particle pair at the time it is formed. And then let's say we use two radioactive decay processes to decide how we choose to measure each particle before it arrives. Then the particle creation event all those light years away must have also known how those radio active decay processes would later unfold. Jason > > Another thing that makes no sense to me is the issue of entire universes > > being 'created' whenever a quantum event takes place. Presumably that > > means that conservation laws only apply within each universe separately, > > and don't apply to a bunch of them. > > Conservation laws apply within one universe. 
It's not that the sum > between all universes is kept constant. Equivalently, with MWI there > are an infinite number of universes following the same path; when they > split, some of these universes follow one path and the rest another > path, but each individual universe still conserves its own energy. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 17:55:21 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 12:55:21 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 12:29?PM Jason Resch via extropy-chat wrote: > On Sat, Nov 8, 2025, 11:59 AM Adrian Tymes via extropy-chat wrote: >> On Tue, Oct 14, 2025 at 2:22?AM Ben Zaiboc via extropy-chat >> wrote: >> > On 14/10/2025 04:31, Adrian Tymes wrote: >> > > The AI explanation failed to address the question. Under MWI, the >> > > worlds are separate after splitting, with no way to interact. And >> > > yet, this MWI explanation for this requires them to interact. By what >> > > means does the information get from one world to another after splitting? >> > >> > I don't claim to really understand this whole thing, but I was wondering >> > about how a half-silvered mirror can create two photons without >> > violating at least one conservation law, then realised that it doesn't, >> > in the original world (I don't know about the other, presumably the >> > photon existed in there all along, but not sure about that). After >> > Adrian's comment above, I'm now wondering how the entanglement happens >> > in the first place, if the two worlds can't interact? >> > >> > Not that I've ever understood what 'entanglement' actually means anyway. >> >> The superdeterministic explanation of entanglement is: >> >> * You have two particles. (Or photons, or whatever.) >> * You know that one is in one state (spin or some other paired >> property), the other in the other, but you don't know which is which. >> * These two start out in contact with one another (so there is no >> problem "communicating" the state between them to start), but can be >> separated by arbitrarily large distances while you still don't know >> which one is in which state. >> * The moment you figure out which one is in one state, you instantly >> know the state of the other, no matter how far away it is. >> * This doesn't violate light speed/causality/etc. because the result >> of any action you take on this knowledge - and thus, the state of the >> universe where the other particle's state is known - can only >> propagate out at light speed from where and when you discovered the >> first one's state. >> * Not that that matters as much as it might seem, because the other >> one was always in the state that it was in. You just didn't know. > > > But this explanation is insufficient, as the results you see will depend on how each of the two particles will later be measured. They may depend on how they are interacted with - including the specific means of measurement - to the same extent as if you knew the original state. But excluding those factors, they don't. One particle is in one state, and will remain in that state unless and until interacted with. The other particle is likewise in the other state. 
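As a concreteness check, here is a minimal sketch (in Python) of the picture described in the paragraph above, assuming each pair is simply stamped with fixed, opposite values at creation and a measurement only reads off the value that was already there. The names and the code are illustrative only; they are not from this thread or from any particular paper.

import random

def make_pair():
    # The pair is created with fixed, opposite values.  Nothing is decided
    # at measurement time, and nothing is communicated afterwards.
    a = random.choice(('up', 'down'))
    b = 'down' if a == 'up' else 'up'
    return a, b

def measure(value):
    # Measurement merely reveals the pre-existing value.
    return value

pairs = [make_pair() for _ in range(100_000)]
print(all(measure(a) != measure(b) for a, b in pairs))  # True: always opposite

This reproduces the perfect anti-correlation seen when both sides measure along the same axis; the disagreement in the rest of this thread is over whether any scheme of this kind can also reproduce the statistics observed when the two sides measure along different axes.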
> And they might be separated by vast distances by the time they are measured, and hence will not be able (in theory) to coordinate the measurement results they reflect. And don't have to. See the above point on, "This doesn't violate...". > Superdeterminism, accordingly, requires that that foreknowledge regarding how each particle will eventually be measured, must be factored into the creation of that particle pair at the time it is formed. It does not. From jasonresch at gmail.com Sat Nov 8 18:03:14 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 13:03:14 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 12:56 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Nov 8, 2025 at 12:29?PM Jason Resch via extropy-chat > wrote: > > On Sat, Nov 8, 2025, 11:59 AM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Tue, Oct 14, 2025 at 2:22?AM Ben Zaiboc via extropy-chat > >> wrote: > >> > On 14/10/2025 04:31, Adrian Tymes wrote: > >> > > The AI explanation failed to address the question. Under MWI, the > >> > > worlds are separate after splitting, with no way to interact. And > >> > > yet, this MWI explanation for this requires them to interact. By > what > >> > > means does the information get from one world to another after > splitting? > >> > > >> > I don't claim to really understand this whole thing, but I was > wondering > >> > about how a half-silvered mirror can create two photons without > >> > violating at least one conservation law, then realised that it > doesn't, > >> > in the original world (I don't know about the other, presumably the > >> > photon existed in there all along, but not sure about that). After > >> > Adrian's comment above, I'm now wondering how the entanglement happens > >> > in the first place, if the two worlds can't interact? > >> > > >> > Not that I've ever understood what 'entanglement' actually means > anyway. > >> > >> The superdeterministic explanation of entanglement is: > >> > >> * You have two particles. (Or photons, or whatever.) > >> * You know that one is in one state (spin or some other paired > >> property), the other in the other, but you don't know which is which. > >> * These two start out in contact with one another (so there is no > >> problem "communicating" the state between them to start), but can be > >> separated by arbitrarily large distances while you still don't know > >> which one is in which state. > >> * The moment you figure out which one is in one state, you instantly > >> know the state of the other, no matter how far away it is. > >> * This doesn't violate light speed/causality/etc. because the result > >> of any action you take on this knowledge - and thus, the state of the > >> universe where the other particle's state is known - can only > >> propagate out at light speed from where and when you discovered the > >> first one's state. > >> * Not that that matters as much as it might seem, because the other > >> one was always in the state that it was in. You just didn't know. > > > > > > But this explanation is insufficient, as the results you see will depend > on how each of the two particles will later be measured. > > They may depend on how they are interacted with - including the > specific means of measurement - to the same extent as if you knew the > original state. > > But excluding those factors, they don't. 
One particle is in one > state, and will remain in that state unless and until interacted with. > The other particle is likewise in the other state. > > > And they might be separated by vast distances by the time they are > measured, and hence will not be able (in theory) to coordinate the > measurement results they reflect. > > And don't have to. See the above point on, "This doesn't violate...". > > > Superdeterminism, accordingly, requires that that foreknowledge > regarding how each particle will eventually be measured, must be factored > into the creation of that particle pair at the time it is formed. > > It does not. > How would you describe the difference between "determinism" and "superdeterminism?" >From all your writings on superdeterminism, you never seem to suggest any difference between the two. Do you think they are equivalent? If so, I think that sits at the root of our inability to effectively communicate on this subject. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 18:10:00 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 13:10:00 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 1:04?PM Jason Resch via extropy-chat wrote: > How would you describe the difference between "determinism" and "superdeterminism?" > > From all your writings on superdeterminism, you never seem to suggest any difference between the two. Do you think they are equivalent? That depends on your definitions of the term. I must insist that you go first, as my usual experience on this topic is that any explanation I give is immediately strawmanned into someone else's definitions, who then claims they are "right" when they have actually given non-sequiturs. I am not accusing you specifically of doing so. Indeed, I hope that you will do better. I'm just finding it hard to care enough to give a good answer on my own given how often it's happened, and if you will engage honestly then you deserve the same from me. > If so, I think that sits at the root of our inability to effectively communicate on this subject. It might. From atymes at gmail.com Sat Nov 8 18:28:14 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 13:28:14 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 12:16?PM Jason Resch via extropy-chat wrote: > This is why you need to take the 23 minutes to understand the video, or read this article on the same topic at your own pace: > > https://en.wikipedia.org/wiki/Mermin%27s_device Thank you! Reading this article took much less than 23 minutes. And...it reads like another "a priori definitions result in the cited conclusions", in other words an error in the experimental definition, without need of quantum mechanics to explain. To quote an example from the article: > For the improved device, the expected results are the following: if one detector is switched to setting 1 while the others are on setting 2, an odd number of red lights flash. If all three detectors are set to 1, an odd number of red light flashes never occurs. 
In other words, the definition of this device presupposes that the results are linked to, and depend on, the settings of the detectors (even if the detectors themselves are independent and can not interact - which means that something else, not specified in the definition, is doing this coordination). So of course the results will depend on that, if it's set up so they'll depend on that. (I admit to editing the article to fix a typo in the quoted section: it said "detectors set" when the grammatically correct version is "detectors are set".) From jasonresch at gmail.com Sat Nov 8 18:28:39 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 13:28:39 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 1:10 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Nov 8, 2025 at 1:04?PM Jason Resch via extropy-chat > wrote: > > How would you describe the difference between "determinism" and > "superdeterminism?" > > > > From all your writings on superdeterminism, you never seem to suggest > any difference between the two. Do you think they are equivalent? > > That depends on your definitions of the term. I must insist that you > go first, as my usual experience on this topic is that any explanation > I give is immediately strawmanned into someone else's definitions, who > then claims they are "right" when they have actually given > non-sequiturs. > Let's go with what Google's AI gave me when I searched: difference between determinism and superdeterminism This is what it provided: "Determinism states all events are causally inevitable, while superdeterminism is a stricter version that adds the assumption that the choices of experimental measurements are also predetermined and correlated with the system being measured. This means that not only are the outcomes of experiments fixed, but the very act of setting up the experiment is coordinated with the outcome, making it impossible to test for randomness and free will in the conventional sense. Determinism Definition: All events, including human decisions, are predetermined by prior causes and conditions. Example: A simple physics example is predicting a ball's trajectory based on its initial position and velocity. A more complex one is that every event in the universe is the inevitable result of the laws of physics acting on the initial conditions of the universe. Key principle: Causes lead to effects in a predictable chain. Superdeterminism Definition: A more extreme version of determinism where the choices made by experimenters (like selecting a measurement setting) are also part of the predetermined chain of events, correlated with the system being measured. Example: In a quantum experiment, a superdeterministic universe would ensure that the choice of measurement setting and the outcome of that measurement are correlated from the beginning of time. The "randomness" is an illusion, as the experiment was pre-arranged to produce a specific result. Key principle: The assumption of measurement independence, which is central to how scientists design experiments and interpret results, is violated." I find these definitions perfectly acceptable and standard. Do you agree with them? Under these definitions, do you see a difference between determinism and superdeterminism? Under these definitions, do you believe in this version of superdeterminism? 
For reference: I believe in determinism, but I reject superdeterminism. They are (in my view ) not at all the same thing. One makes science possible, the other makes science impossible. > I am not accusing you specifically of doing so. Indeed, I hope that > you will do better. I'm just finding it hard to care enough to give a > good answer on my own given how often it's happened, and if you will > engage honestly then you deserve the same from me. > Certainly, I appreciate that. > > If so, I think that sits at the root of our inability to effectively > communicate on this subject. > > It might. > Let me know if these definitions will help us to break through that past barrier. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Nov 8 18:42:59 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 13:42:59 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 1:29 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Nov 8, 2025 at 12:16?PM Jason Resch via extropy-chat > wrote: > > This is why you need to take the 23 minutes to understand the video, or > read this article on the same topic at your own pace: > > > > https://en.wikipedia.org/wiki/Mermin%27s_device > > Thank you! Reading this article took much less than 23 minutes. > Wonderful! > And...it reads like another "a priori definitions result in the cited > conclusions", in other words an error in the experimental definition, > without need of quantum mechanics to explain. But can you succeed in defining any classical machine, holding any hidden variables you desire, and running any function on that state of hidden variables and the local state of the detector that reproduces this pattern of statistics which we observe in actual experiments on entangled particles? No one has succeeded this far. No one has found an error in Bell's math. Bell proved mathematically that no such state of hidden variables information can exist. And moreover, experiments have been done to confirm these statistics exist in nature. This is why physicists have had to turn to extreme explanations (FTL influences, many worlds, superdeterminism (not just regular determinism)) to account for these experimental results. If it was just a bad paper, that made the trivial error of baking it's conclusions into the assumptions, would the Nobel prize committee have awarded a prize over it? https://www.nobelprize.org/prizes/physics/2022/popular-information/ To quote an example > from the article: > > > For the improved device, the expected results are the following: if one > detector is switched to setting 1 while the others are on setting 2, an odd > number of red lights flash. If all three detectors are set to 1, an odd > number of red light flashes never occurs. > > In other words, the definition of this device presupposes that the > results are linked to, and depend on, the settings of the detectors > (even if the detectors themselves are independent and can not interact > - which means that something else, not specified in the definition, is > doing this coordination). So of course the results will depend on > that, if it's set up so they'll depend on that. > So then, what is your explanation of the situation? How is it we get the results we see? 
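For concreteness, here is a minimal sketch in plain Python of the counting argument behind that question. It enumerates every possible local "instruction set" for the improved three-detector device and checks it against the two quoted rules. The mapping of red to -1 and green to +1 is just a convention I am assuming here, so that "an odd number of red lights" becomes "the product of the three outcomes is -1":

from itertools import product

# Each particle i carries predetermined answers a[i][s] in {-1, +1} for settings
# s in {1, 2}.  Assumed convention: red = -1, green = +1, so "an odd number of
# red lights flash" means the product of the three outcomes is -1.

def odd_number_of_reds(outcomes):
    p = 1
    for o in outcomes:
        p *= o
    return p == -1

count_ok = 0
for bits in product([-1, +1], repeat=6):
    a = {1: {1: bits[0], 2: bits[1]},   # particle 1's pre-set answers for settings 1 and 2
         2: {1: bits[2], 2: bits[3]},   # particle 2
         3: {1: bits[4], 2: bits[5]}}   # particle 3
    # Quoted rule 1: one detector on setting 1, the other two on setting 2
    # -> an odd number of red lights flash.
    rule1 = all(odd_number_of_reds([a[1][s1], a[2][s2], a[3][s3]])
                for (s1, s2, s3) in [(1, 2, 2), (2, 1, 2), (2, 2, 1)])
    # Quoted rule 2: all three detectors on setting 1
    # -> an odd number of red lights never flash.
    rule2 = not odd_number_of_reds([a[1][1], a[2][1], a[3][1]])
    if rule1 and rule2:
        count_ok += 1

print(count_ok)   # prints 0: none of the 64 local pre-assignments satisfies both rules

Running it prints 0, which is the point: the rules are not an arbitrary stipulation baked into the device, they are the quantum predictions for a three-particle entangled state, and no assignment of pre-existing local answers can reproduce them.
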
Jason > (I admit to editing the article to fix a typo in the quoted section: > it said "detectors set" when the grammatically correct version is > "detectors are set".) > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 8 19:13:01 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 14:13:01 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 1:30?PM Jason Resch via extropy-chat wrote: > On Sat, Nov 8, 2025, 1:10 PM Adrian Tymes via extropy-chat wrote: >> On Sat, Nov 8, 2025 at 1:04?PM Jason Resch via extropy-chat >> wrote: >> > How would you describe the difference between "determinism" and "superdeterminism?" >> > >> > From all your writings on superdeterminism, you never seem to suggest any difference between the two. Do you think they are equivalent? >> >> That depends on your definitions of the term. I must insist that you >> go first, as my usual experience on this topic is that any explanation >> I give is immediately strawmanned into someone else's definitions, who >> then claims they are "right" when they have actually given >> non-sequiturs. > > Let's go with what Google's AI gave me when I searched: difference between determinism and superdeterminism > > This is what it provided: > > "Determinism states all events are causally inevitable, while superdeterminism is a stricter version that adds the assumption that the choices of experimental measurements are also predetermined and correlated with the system being measured. This means that not only are the outcomes of experiments fixed, but the very act of setting up the experiment is coordinated with the outcome, making it impossible to test for randomness and free will in the conventional sense. > > Determinism > > Definition: All events, including human decisions, are predetermined by prior causes and conditions. > > Example: A simple physics example is predicting a ball's trajectory based on its initial position and velocity. A more complex one is that every event in the universe is the inevitable result of the laws of physics acting on the initial conditions of the universe. > > Key principle: Causes lead to effects in a predictable chain. > > Superdeterminism > > Definition: A more extreme version of determinism where the choices made by experimenters (like selecting a measurement setting) are also part of the predetermined chain of events, correlated with the system being measured. > > Example: In a quantum experiment, a superdeterministic universe would ensure that the choice of measurement setting and the outcome of that measurement are correlated from the beginning of time. The "randomness" is an illusion, as the experiment was pre-arranged to produce a specific result. > > Key principle: The assumption of measurement independence, which is central to how scientists design experiments and interpret results, is violated." > > > I find these definitions perfectly acceptable and standard. > > Do you agree with them? > > Under these definitions, do you see a difference between determinism and superdeterminism? > > Under these definitions, do you believe in this version of superdeterminism? 
> > For reference: I believe in determinism, but I reject superdeterminism. They are (in my view ) not at all the same thing. One makes science possible, the other makes science impossible. Fair enough. Then by that...what I believe in is somewhere between the two, that I don't have a good term for yet. I use "superdeterminism" as it's more than strictly local determinism. Maybe "superdeterminism lite" would be more accurate? In every observed case, it has not been ruled out that the backstop for where the states were defined is not just further back than is being discussed. (In particular, people keep talking about quantum entanglement as if the states do not exist until measured, but if they exist before measurement it would explain a lot. In the three-particle experiment I quoted, the phrasing of the experiment definition suggests that the results are preordained from when the detectors are set up, possibly before, despite the detectors being independent after being set up.) Full superdeterminism - all the way back to the Big Bang or beyond - has not been proven and might not be provable, but neither is it strictly necessary. The theoretical delta is one of the angles through which "free will", however one defines it, and true randomness may be able to come in. It is also of note that the full set of initial conditions appears to potentially be unknowable, at least within the same universe. For instance, this implies that a transporter-like device, which reads every quantum state of the transported object and then does something with that information, may be impossible - or at least would require much more information storage than is inherent in matter that constitutes the object being transported. Even uploaded copies of organic brains might theoretically be imperfect, although if they can be gotten close enough - if the delta in being is equal to or less than the natural delta in being that people experience over a few months (or whatever adjustment time is seen as reasonable) anyway, especially after major life changes comparable to switching to a new body - then it may not matter in practice. This technically makes absolute perfect determinism impossible, although it is often possible to know the initial conditions well enough for practical use. (I can't know for certain that, after I kick a ball in front of me, every particle in the ball will not suddenly teleport behind me - but I know the odds against that are high enough that I can assume it will go forward in practice.) This colors the desire for knowledge or prediction of the initial states for superdeterminism: even if one could theoretically model a set of values to fit the observed result, it might not matter if no actual case was ever able to measure or screen for that set of values. Part of this, I admit, is cynicism after seeing so many examples that I have lost count where things were supposed to be independent or random, but subsequently turned out to be provably rigged - and worse, when I leaned into the assumption that things were rigged (when fairness and justice demanded that one act as if they were not), I was often ultimately rewarded if I could figure out how I could exploit it. Figuring out the initial values of human systems is easier than figuring out the initial values of large sets of particles. 
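A quick way to see the numbers the thread keeps trading (the 2/3 ceiling versus the observed 3/4) is to brute-force the two-detector, three-setting version of the device in a few lines of Python. The modelling choices below are only the textbook ones, and are assumptions on my part: perfect agreement on equal settings forces each pair to share one instruction set, and the quantum side is modelled as polarization measurements with the three settings 120 degrees apart.

from itertools import product
from math import cos, radians

# Two detectors, three settings each.  Perfect agreement on equal settings forces
# each pair to share a single "instruction set": a colour (R or G) for each of
# the three settings.  Count how often the colours DIFFER when the settings differ.
unequal_pairs = [(i, j) for i in range(3) for j in range(3) if i != j]   # 6 cases

best = 0.0
for inst in product("RG", repeat=3):                 # the 8 possible instruction sets
    differ = sum(inst[i] != inst[j] for i, j in unequal_pairs) / len(unequal_pairs)
    best = max(best, differ)

print(best)   # 0.666...: no instruction set disagrees on more than 2/3 of unequal settings

# Quantum prediction for the same statistic, modelling the three settings as
# polarizer angles 0, 120 and 240 degrees (one standard optical realization):
# P(same colour) = cos^2(angle between settings) = cos^2(120 deg) = 1/4,
# so unequal settings disagree 3/4 of the time, above the 2/3 ceiling.
print(1 - cos(radians(120)) ** 2)   # 0.75

Whatever one makes of the interpretations, that gap between 2/3 and 3/4 is the thing any local, pre-set-values account has to explain away.
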
From jasonresch at gmail.com Sat Nov 8 19:43:05 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 14:43:05 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 2:14 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Nov 8, 2025 at 1:30?PM Jason Resch via extropy-chat > wrote: > > On Sat, Nov 8, 2025, 1:10 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Nov 8, 2025 at 1:04?PM Jason Resch via extropy-chat > >> wrote: > >> > How would you describe the difference between "determinism" and > "superdeterminism?" > >> > > >> > From all your writings on superdeterminism, you never seem to suggest > any difference between the two. Do you think they are equivalent? > >> > >> That depends on your definitions of the term. I must insist that you > >> go first, as my usual experience on this topic is that any explanation > >> I give is immediately strawmanned into someone else's definitions, who > >> then claims they are "right" when they have actually given > >> non-sequiturs. > > > > Let's go with what Google's AI gave me when I searched: difference > between determinism and superdeterminism > > > > This is what it provided: > > > > "Determinism states all events are causally inevitable, while > superdeterminism is a stricter version that adds the assumption that the > choices of experimental measurements are also predetermined and correlated > with the system being measured. This means that not only are the outcomes > of experiments fixed, but the very act of setting up the experiment is > coordinated with the outcome, making it impossible to test for randomness > and free will in the conventional sense. > > > > Determinism > > > > Definition: All events, including human decisions, are predetermined by > prior causes and conditions. > > > > Example: A simple physics example is predicting a ball's trajectory > based on its initial position and velocity. A more complex one is that > every event in the universe is the inevitable result of the laws of physics > acting on the initial conditions of the universe. > > > > Key principle: Causes lead to effects in a predictable chain. > > > > Superdeterminism > > > > Definition: A more extreme version of determinism where the choices made > by experimenters (like selecting a measurement setting) are also part of > the predetermined chain of events, correlated with the system being > measured. > > > > Example: In a quantum experiment, a superdeterministic universe would > ensure that the choice of measurement setting and the outcome of that > measurement are correlated from the beginning of time. The "randomness" is > an illusion, as the experiment was pre-arranged to produce a specific > result. > > > > Key principle: The assumption of measurement independence, which is > central to how scientists design experiments and interpret results, is > violated." > > > > > > I find these definitions perfectly acceptable and standard. > > > > Do you agree with them? > > > > Under these definitions, do you see a difference between determinism and > superdeterminism? > > > > Under these definitions, do you believe in this version of > superdeterminism? > > > > For reference: I believe in determinism, but I reject superdeterminism. > They are (in my view ) not at all the same thing. One makes science > possible, the other makes science impossible. 
> > Fair enough. Then by that...what I believe in is somewhere between > the two, that I don't have a good term for yet. I use > "superdeterminism" as it's more than strictly local determinism. > Maybe "superdeterminism lite" would be more accurate? > Great, this is progress I think. > In every observed case, it has not been ruled out that the backstop > for where the states were defined is not just further back than is > being discussed. (In particular, people keep talking about quantum > entanglement as if the states do not exist until measured, but if they > exist before measurement it would explain a lot. Yes, which is the broad appeal many had for hidden variables theories, e.g. with Einstein. I believe in many-worlds, which I consider the closest to preserving any notion of local realism, and determinism. It even enables preservation of a notion of hidden variables, where the variables represent superpositions of values. When measured, the superposition reveals itself locally in the form of multiple outcomes being witnessed, and these observed outcomes each imply distinct possibilities for what one will later learn of the outcome of the far away experiment. So locality and determinism are preserved, at the cost of giving up single outcomes for experiments. In the > three-particle experiment I quoted, the phrasing of the experiment > definition suggests that the results are preordained from when the > detectors are set up, possibly before, despite the detectors being > independent after being set up.) > I admit I don't understand how the three detector system is arranged. The conventional description and the one I have always heard was the two detectors with 3 settings. I think this case makes it most clear that there's no set of information each particle can carry that enables anti-correlations greater than 2/3rds for the experiment where the devices have 3 settings. And yet, we observe anti-correlations of 3/4. This means one of the assumptions must be wrong: - The particles do communicate (locality is violated -- spooky action) - Experiments have more than one outcome (counterfactual definiteness is violated -- many worlds) - Experiments can't be setup independently of one another (measurement independence is violated -- superdeterminism) The half-measure, some form of determinism involving hidden variables, by itself is insufficient, unless it also abandons measurement independence. And note that it is the abandoning of this concept of measurement independence that leads to all the nastiness in standard forms of superdeterminism, where the universe seems to operate in a manner to fool us (by showing us anti-correlations above 2/3). > Full superdeterminism - all the way back to the Big Bang or beyond - > has not been proven and might not be provable, but neither is it > strictly necessary. The theoretical delta is one of the angles > through which "free will", however one defines it, and true randomness > may be able to come in. > I'm not sure I understand this. Are you say there is room for free will and randomness in your theory? If there is any degree of randomness or free will permitted, then if one uses it in deciding how to pick which of the three positions on the Mermin device, then they would break out of the superdeterministic constraints, and expose the <= 2/3 anti-correlations we would expect to find in any local hidden variable theory. > It is also of note that the full set of initial conditions appears to > potentially be unknowable, at least within the same universe. 
For > instance, this implies that a transporter-like device, which reads > every quantum state of the transported object and then does something > with that information, may be impossible - or at least would require > much more information storage than is inherent in matter that > constitutes the object being transported. Even uploaded copies of > organic brains might theoretically be imperfect, although if they can > be gotten close enough - if the delta in being is equal to or less > than the natural delta in being that people experience over a few > months (or whatever adjustment time is seen as reasonable) anyway, > especially after major life changes comparable to switching to a new > body - then it may not matter in practice. > > This technically makes absolute perfect determinism impossible, > I would say, rather, that this makes perfect prediction (within this universe by entities within this universe) impossible. Heisenberg's uncertainty principle is enough to establish this constraint. But we should not conclude from this fact alone that the universe does not abide by deterministic laws. Consider there may be a simulation of a universe which does not enable entities within that simulated universe to perfectly measure the state of the simulation. Such beings would not be able to perfectly predict, and yet their universe could still be fully deterministic. although it is often possible to know the initial conditions well > enough for practical use. (I can't know for certain that, after I > kick a ball in front of me, every particle in the ball will not > suddenly teleport behind me - but I know the odds against that are > high enough that I can assume it will go forward in practice.) This > colors the desire for knowledge or prediction of the initial states > for superdeterminism: even if one could theoretically model a set of > values to fit the observed result, it might not matter if no actual > case was ever able to measure or screen for that set of values. > > Part of this, I admit, is cynicism after seeing so many examples that > I have lost count where things were supposed to be independent or > random, but subsequently turned out to be provably rigged - and worse, > when I leaned into the assumption that things were rigged (when > fairness and justice demanded that one act as if they were not), I was > often ultimately rewarded if I could figure out how I could exploit > it. Figuring out the initial values of human systems is easier than > figuring out the initial values of large sets of particles. > But here I think the situation is much easier and simpler. The task is to see if there is *any way* any hypothetical system could be rigged, in a manner that enables us to account for the observed statistics. Bell proved that (under the assumptions stated) it cannot be done. This means we have to abandon at least one of those cherished assumptions. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Nov 8 21:08:16 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 8 Nov 2025 13:08:16 -0800 Subject: [ExI] China and solar power In-Reply-To: <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> References: <039601dc4deb$cc81e370$6585aa50$@rainier66.com> <00a101dc4e62$6e8a4830$4b9ed890$@rainier66.com> <017301dc4e93$5e4520e0$1acf62a0$@rainier66.com> <002e01dc5062$f65142d0$e2f3c870$@rainier66.com> Message-ID: On Fri, Nov 7, 2025 at 7:51?PM wrote: > -----Original Message----- > From: Keith Henson > ... 
> >...That's not the case. Coal plants make baseload power and can't load follow. Shutting one down and restarting it is a major task. One of the problems is leaks from thermal cycling. Natural gas generation can be cycled because they were derived from aircraft engines... > > Coal plant for baseline, adjunct natural gas for following variations. > > The important insight for those of us reading this message is that nearly all of us realize the objections to carbon burning are now irrelevant. It doesn't matter how much CO2 goes into the atmosphere, because it is no longer a pollutant, we will not run out of anything, the climate will not change noticeably in the short time remaining before AI reaches critical mass (critical smart?) Maybe. The extra heat trapped by the CO2 is a source of concern. Talk to the people in Jamaica or Phoenix. In the somewhat longer run, mining the atmosphere for carbon could give us an ice age. I wrote about this in 1990. > > The carbon burning plants are long-established technology, they are already commercially available with mass production, coal mines and fracking are going, we don't need to invest much in that stuff, so we can focus on the software and electronics. Again, maybe. It depends on advances that are hard to predict. I have a project to turn trash and renewable intermittent electrical power into diesel and jet fuel. It will take at least 5 years. Is it worth doing? I don't know. Keith > >...Baseload is coal and nuclear, though some recent nuclear plants have can be modulated to 40 percent... > > I am a big fan of nuclear. Bring it along for the big party. > > >...Batteries are the fastest to respond... > > Ja. Those are a storage medium rather than a power source. > > >...Hydro is almost as fast... > > Ja, but we aren't getting any more of that. Too bad: it is a great power source. It's free, storable to some extent, steady, renewable etc. > > >...Running turbines are fast, startup time is in the 10 min range... > > It's cool to witness that process. I was lucky to take a tour at the John Day. Very memorable engineering senior trip. > > >...Solar and wind are when you can get it... > > Ja, these two are the oddballs in the above list. They are there when they want to be, not when we want them to be. > > >...I never worked in power, but I have read a lot about i. Keith > > Keith you and I would have had marvelous careers had we chosen that path, but we would be very different people with very different attitudes. I have no regrets. Power is cool. On the other hand, we can still be involved in it as investors rather than professionals. > > spike > > > > From atymes at gmail.com Sat Nov 8 22:18:24 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Nov 2025 17:18:24 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 2:44?PM Jason Resch via extropy-chat wrote: > I admit I don't understand how the three detector system is arranged. The problem is in the arrangement. On the one hand, the description declares that they are independent...and then asserts, a priori, outcomes that suggest they are dependent. > I'm not sure I understand this. Are you say there is room for free will and randomness in your theory? Yes. > If there is any degree of randomness or free will permitted, then if one uses it in deciding how to pick which of the three positions on the Mermin device ...it would no longer be the Merman device. 
Again: the Merman device asserts a certain set of outcomes as part of its description. True randomness would allow, e.g. 1/2 or 1/5 or other such values to sometimes be measured, which is explicitly ruled out. From jasonresch at gmail.com Sat Nov 8 22:36:02 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 8 Nov 2025 17:36:02 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025, 5:19 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Nov 8, 2025 at 2:44?PM Jason Resch via extropy-chat > wrote: > > I admit I don't understand how the three detector system is arranged. > > The problem is in the arrangement. On the one hand, the description > declares that they are independent...and then asserts, a priori, > outcomes that suggest they are dependent. > It's quantum theory, and the experimental confirmation of the predictions of that theory, that suggests they're dependent (contrary to our usual expectations). In fact, they can't be independent (without giving up locality, counterfactual definiteness, or measurement independence). > > I'm not sure I understand this. Are you say there is room for free will > and randomness in your theory? > > Yes. > > > If there is any degree of randomness or free will permitted, then if one > uses it in deciding how to pick which of the three positions on the Mermin > device > > ...it would no longer be the Merman device. It's still the Mermin device. What has changed, if you admit true randomness or free will, is a departure from superdeterminism's insistence on giving up measurement independence. For if there is free will, or true randomness, then you can use it to setup the two experiment to be truly independent of each other. Hence this would violate the core assumption of superdeterminism: that you can't setup experiments to be independent of one another. And then you are back to the situation of trying to account for the violation of Bell's inequality. Having ruled out superdeterminism (by assuming true randomness or free will) then your only remaining options are FTL influences or many worlds. Again: the Merman device > asserts a certain set of outcomes as part of its description. These assertions were not made in a vacuum; they come straight out of quantum theory. Would Feynman have told Mermin that his paper was one of the most beautiful in all of physics, if it were hogwash based on unsupported assertions and suppositions? True > randomness would allow, e.g. 1/2 or 1/5 or other such values to > sometimes be measured, which is explicitly ruled out. > But note they are relies out by experiment (and quantum theory). Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 9 03:25:19 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 8 Nov 2025 19:25:19 -0800 Subject: [ExI] gait analysis Message-ID: <002801dc5128$7a028740$6e0795c0$@rainier66.com> Perhaps you recall twenty years ago discussing in this forum gait analysis. The FBI claims they can do that now, with good certainty: identify a suspect by comparing how he walks with a video. I always thought that should be possible. spike -------------- next part -------------- An HTML attachment was scrubbed... 
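Purely as an illustration of why gait identification always seemed plausible (and emphatically not a claim about the FBI's actual method), here is a toy sketch of turning "how somebody walks" into numbers that can be compared. Everything in it, the ankle-height input, the feature choice, the similarity score, is an assumption for illustration only; a real system would use full-body pose tracking and far richer features.

import numpy as np

# Toy sketch only.  Assume a pose-estimation step has already produced the vertical
# position of one ankle in each frame of a walking clip.

def gait_features(ankle_y, fps):
    # Cadence (dominant step frequency) plus the normalized spectrum of the ankle bounce.
    sig = np.asarray(ankle_y, dtype=float)
    sig = sig - sig.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    cadence = float(freqs[1:][np.argmax(spectrum[1:])])      # ignore the DC bin
    shape = spectrum / (np.linalg.norm(spectrum) + 1e-12)
    return cadence, shape

def gait_similarity(a, b):
    # Crude score: cosine similarity of the spectra, penalized by any cadence mismatch.
    (cad_a, sh_a), (cad_b, sh_b) = a, b
    n = min(len(sh_a), len(sh_b))
    return float(np.dot(sh_a[:n], sh_b[:n])) - abs(cad_a - cad_b)

fps = 30
t = np.arange(0, 6, 1.0 / fps)
suspect   = gait_features(np.sin(2 * np.pi * 1.8 * t), fps)        # ~1.8 steps per second
candidate = gait_features(np.sin(2 * np.pi * 1.8 * t + 0.3), fps)  # same gait, shifted phase
print(gait_similarity(suspect, candidate))   # close to 1.0 for matching gaits
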
URL: From johnkclark at gmail.com Sun Nov 9 13:32:14 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 9 Nov 2025 08:32:14 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 11:32?AM wrote: >?there are those (even in the USA) > *Even? Especially in the USA!* *> who do not like Musk, but it isn?t clear to me why. * > *What isn't clear to me is why you are not just as outraged over the antics of He Who Must Not Be Named as I am, I've been trying to figure that out since 2016. As for Musk, I think it's just a fact of reality that the richest man in the world is never going to be universally loved. But it's more than just that. Elon Musk's popularity took a dramatic hit when he gave $277 million to get a 34 time convicted felon, a man who had staged a coup d'?tat which if successful would've made him a dictator, get back into power through an election in 2024. * *By the way, there was a fellow in Germany, whose name escapes me at the moment, who had a similar history, he too had staged an unsuccessful coup d'?tat and he too got back into power by winning an election; the only difference was the election happened in 1933 not 2024. * *> Because of him we have Tesla, Space X* > *Yes, and the world is a better place because of those two companies, so I have to give the devil his due and give Musk a +2 * *> and TwitterX* > *And the world would be a better place if that company did NOT exist, without Twitter He Who Must Not Be Named would have never won the 2016 election, and without TwitterX (and the richest man in the world's money of course) he would've never won the 2024 election. So for that company Musk gets a -1 * *> And StarLink! How could I forget that one?* *That one is more complicated to judge. It has improved our ability to communicate and has even helped the people of Ukraine defend themselves from Vladimir Putin, but it has also degraded our ability to understand the universe. Starink currently has 8,800 satellites in low earth orbit (more are planned) and are creating havoc for optical astronomers because the satellites reflect sunlight, creating bright streaks in images taken by telescopes; by using expensive and time consuming additional data processing the streaks can be identified and subtracted, but they still cause the loss of valuable pixels in an image. And those 8,800 satellites are also causing havoc for radio astronomers because of unintentionally leaking radio frequencies. Taking all that into consideration I'd give Starlink a zero.* You didn't mention another of Musk's companies but I will, xAI, it's his AI Company in the creators of Grok. I think the world is better off because of its existence, thus I give it a *+1*. *So now I'm in a position to give Musk a final grade, let's see, 2-1+0+1 = 2. So unlike You Know Who, Musk is still in the plus column. But I don't like the trend he's on. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Sun Nov 9 13:59:36 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 9 Nov 2025 08:59:36 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> Message-ID: On Sat, Nov 8, 2025 at 10:51?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *>> It could be argued that "Lady Chairman" is self-contradictory nonsense, >> unless you're talking about a hermaphrodite. How about Chairwoman? I admit >> that doesn't sound quite right but I think at least part of that is due to >> the fact that the word "woman" has two syllables but the word "man" only >> has one. * > > > * > It could. If you assume that words containing 'man' refer only to > biological males (they don't).* > *It's ambiguous, sometimes it does and sometimes it doesn't refer exclusively to a male, but it NEVER refers exclusively to a female. Do you think that sort of asymmetry and ambivalence is a good thing? You've never spelled it out, what exactly is your objection to the word "chairwoman", why does it make you so angry? * *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 9 14:06:43 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 9 Nov 2025 09:06:43 -0500 Subject: [ExI] A.I. Is Already Intelligent. This Is How It Becomes Conscious. Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. A.I. Is Already Intelligent. This Is How It Becomes Conscious. Skeptics overlook how our concepts change. https://www.nytimes.com/2025/11/08/opinion/ai-conscious-technology.html?unlocked_article_code=1.z08.X2fy.YlX39OExK1hi&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Nov 9 14:36:32 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 9 Nov 2025 14:36:32 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> Message-ID: <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> On 09/11/2025 13:59, John Clark wrote: > On Sat, Nov 8, 2025 at 10:51?AM Ben Zaiboc via extropy-chat > wrote: > > *>> It could be argued that "Lady Chairman" is > self-contradictory nonsense, unless you're talking about a > hermaphrodite. How about Chairwoman? I admit that doesn't > sound quite right but I think at least part of that is due to > the fact that the word "woman" has two syllables but the word > "man" only has one. * > > > /> It could. If you assume that words containing 'man' refer only > to biological males (they don't)./ > > > *It's ambiguous,sometimes it does and sometimes it doesn't refer > exclusively to a male, but it _NEVER_ refers exclusively to a female. > Do you think that sort of asymmetry and ambivalence is a good thing? > You've never spelled it out, what exactly is your objection to the > word "chairwoman", why does it make you so angry? * > * > * > *John K Clark* > > Angry? I'm anything but angry. Amused, maybe. Disappointed, certainly. Exasperated sometimes. But not angry. There's nothing particularly wrong with 'chairwoman', it's just unnecessary, and as you pointed out, sounds a bit awkward. It's in the same category as 'saleswoman', 'businesswoman', 'sportswoman', etc. They strike me as rather silly. 
I wouldn't use them, but neither would I strongly object so someone else using them (although I might roll my eyes!). I certainly wouldn't make them central to my opposition to enforced PC language in general. That /does/ make me angry. But "chair" on it's own just sounds stupid to me. And perhaps a bit insulting. Imagine calling a doorman a 'door', because some women start doing the job. Sure, use 'doorwoman' if you must, I don't really care, but I do care if someone starts saying I shouldn't or can't use 'doorman'. I was always a fan of the Star Trek convention of using 'Sir' for senior officers, regardless of their sex. I thought that was quite inspired, doing away with "Ma'am", which always sounds awkward or at least old-fashioned, I think. But against that is the decision to replace "...where no man..." with "...where no one...". Again, no great objection, at least it does make sense and doesn't sound artificial, but it still tends to draw the attention to these silly 'gender issues' that get people so riled up, and it's unnecessary. There are more important things to worry about. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Nov 9 14:39:40 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 9 Nov 2025 14:39:40 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: On 09/11/2025 13:32, John Clark wrote: > On Sat, Nov 8, 2025 at 11:32?AM wrote: > > >?there are those (even in the USA) > > > *Even? _Especially_ in the USA!* > > /> who do not like Musk, but it isn?t clear to me why. / > > > *What isn't clear to me is why you are not just as outraged over the > antics of He Who Must Not Be Named as I am, I've been trying to figure > that out since 2016. As for Musk, I think it's just a fact of reality > that the richest man in the world is never going to be universally > loved. But it's more than just that. Elon Musk's popularity took a > dramatic hit when he gave $277 million to get a 34 time convicted > felon, a man who had staged a coup d'?tat which if successful would've > made him a dictator, get back into power through an electionin 2024. * > * > * > *By the way, there was a fellow in Germany, whose name escapes me at > the moment, who had a similar history, he too had staged an > unsuccessful coup d'?tat and he too got back into power by winning an > election; the only difference was the election happened in 1933 not > 2024. * > > /> Because of him we have Tesla, Space X/ > > > *Yes, and?the world is a better place because of those two > companies,?so I have to give the devil his due and give Musk a _+2_ * > > /> and TwitterX/ > > > *And the world would be a better place if that companydidNOT exist, > without Twitter He Who Must Not Be Named would have never won the 2016 > election, and without TwitterX (and the richest man in the world's > money of course) he would've never won the 2024 election. So for that > company Musk gets a ___-1_ * > > /> And StarLink!How could I forget that one?/ > > *That one is more complicated to judge. It has improved our ability to > communicate and has even helped the people of Ukraine defend > themselves from Vladimir Putin, but it has also degraded our ability > to understand the universe. 
Starink currently has 8,800 satellites in > low earth orbit (more are planned) and are creating havoc for optical > astronomers because the satellites reflect sunlight, creating bright > streaks in images taken by telescopes; by using expensive and time > consuming additional data processing the streaks can be identified and > subtracted, but they still cause the loss of valuable pixels in an > image. And those 8,800 satellites are also causing havoc for radio > astronomers because of unintentionally leaking radio frequencies. > Taking all that into consideration I'd give Starlink a _zero_.* > > You didn't mention another of Musk's companies but I will, xAI, it's > his AI Company in the creators of Grok. I think the world is better > off because of its existence, thus I give it a _+1_. > > *So now I'm in a position to give Musk a final grade, let's see, > 2-1+0+1 =_2_. So unlike You Know Who, Musk is still in the plus > column. But I don't like the trend he's on. * > * > * > *John K Clark* > * > * > You forgot about Neuralink. How many pluses does that get? I know it hasn't produced much in the way of results yet, but if it does, it could be pretty revolutionary. Certainly would beat cars, in my opinion, and probably rockets, too. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 9 14:42:07 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 9 Nov 2025 09:42:07 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> Message-ID: On Sun, Nov 9, 2025 at 9:38?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> There are more important things to worry about.* *On that** I could not agree more! * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 9 14:46:45 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 9 Nov 2025 09:46:45 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 9:39?AM Ben Zaiboc wrote: *> You forgot about Neuralink. How many pluses does that get? I know it > hasn't produced much in the way of results yet, but if it does, it could be > pretty revolutionary. Certainly would beat cars, in my opinion, and > probably rockets, too.* *Currently there's not enough information on that one to give it a meaningful score, we'll just have to wait and see. I wish them well. * *John K Clark * On 09/11/2025 13:32, John Clark wrote: > > On Sat, Nov 8, 2025 at 11:32?AM wrote: > > >?there are those (even in the USA) >> > > *Even? Especially in the USA!* > > *> who do not like Musk, but it isn?t clear to me why. * >> > > *What isn't clear to me is why you are not just as outraged over the > antics of He Who Must Not Be Named as I am, I've been trying to figure that > out since 2016. As for Musk, I think it's just a fact of reality that the > richest man in the world is never going to be universally loved. But it's > more than just that. 
Elon Musk's popularity took a dramatic hit when he > gave $277 million to get a 34 time convicted felon, a man who had staged a > coup d'?tat which if successful would've made him a dictator, get back into > power through an election in 2024. * > > *By the way, there was a fellow in Germany, whose name escapes me at the > moment, who had a similar history, he too had staged an unsuccessful coup > d'?tat and he too got back into power by winning an election; the only > difference was the election happened in 1933 not 2024. * > > *> Because of him we have Tesla, Space X* >> > > *Yes, and the world is a better place because of those two companies, so I > have to give the devil his due and give Musk a +2 * > > *> and TwitterX* >> > > *And the world would be a better place if that company did NOT exist, > without Twitter He Who Must Not Be Named would have never won the 2016 > election, and without TwitterX (and the richest man in the world's money of > course) he would've never won the 2024 election. So for that company Musk > gets a -1 * > > *> And StarLink! How could I forget that one?* > > > *That one is more complicated to judge. It has improved our ability to > communicate and has even helped the people of Ukraine defend themselves > from Vladimir Putin, but it has also degraded our ability to understand the > universe. Starink currently has 8,800 satellites in low earth orbit (more > are planned) and are creating havoc for optical astronomers because the > satellites reflect sunlight, creating bright streaks in images taken by > telescopes; by using expensive and time consuming additional data > processing the streaks can be identified and subtracted, but they still > cause the loss of valuable pixels in an image. And those 8,800 satellites > are also causing havoc for radio astronomers because of unintentionally > leaking radio frequencies. Taking all that into consideration I'd give > Starlink a zero.* > > You didn't mention another of Musk's companies but I will, xAI, it's his > AI Company in the creators of Grok. I think the world is better off because > of its existence, thus I give it a *+1*. > > *So now I'm in a position to give Musk a final grade, let's see, 2-1+0+1 = > 2. So unlike You Know Who, Musk is still in the plus column. But I don't > like the trend he's on. * > > *John K Clark* > > > > You forgot about Neuralink. > > How many pluses does that get? > > I know it hasn't produced much in the way of results yet, but if it does, > it could be pretty revolutionary. Certainly would beat cars, in my opinion, > and probably rockets, too. > > -- > Ben > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 9 15:51:00 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 9 Nov 2025 07:51:00 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> From: John Clark >? Elon Musk's popularity took a dramatic hit when he gave $277 million? I am surprised there aren?t limits to how much one person can donate to an election. I thought there were limits. >?to get a 34 time convicted felon? What was this felon convicted of, please? >>? 
Because of him we have Tesla, Space X More on that below, in the final paragraphs. >>? and TwitterX >?And the world would be a better place if that company did NOT exist? I disagree enthusiastically, wholeheartedly. The velocity of information has increased dramatically since Twitter was saved. One can access all points of view there, unlike the mainstream media, which intentionally present a self-consistent overall viewpoint, which skews the presentation. TwitterX has everything. It contains viewpoints from people all over the world, with no filters. I dreamed of something like this for 30 years or more. Now we have it. TwitterX is now the functional replacement for the mainstream press, and does a much better job of it, all free. The world is a much better place because of TwitterX. > And StarLink! How could I forget that one? >?That one is more complicated to judge? Ja, as is every major innovation. That innovation is especially important to me because I often go camping out where there is no internet, and I want to have the option of full internet access, along with the safety factor of having access to information and communications. I used StarLink a lot in Canada last summer. But more than that, the local housing situation is dire. StarLink offers full internet, moderate bandwidth, at a cost of 11 watts, easily supplied by a camper or car battery. This enables urban campers, which means a larger candidate pool for local startups. Their lower-end professional class have the option of living in a van for their internship, offering them flexibility, and enabling them to accept jobs which may pay less than 100k. Their home is where they park it. Save on rent, utilities, etc. Having urban campers in my neighborhood (I do) doesn?t bother me a bit, for I understand why they are there. I would be doing likewise, were I to be in their situation. That one vehicle is transportation to work, to anywhere, and a home. A used cargo van makes a marvelous custom stealth camper conversion and doesn?t even cost much: https://www.carmax.com/cars/dodge/ram/promaster-2500/3d-cargo-van?adcode=SEMMUSEDSA5 &vzmadcode=SEM1409207827610&utm_source=bing&utm_content=sem_carmax_sales&utm_term=dodge&utm_campaign=1261140775795569&physical_loc=89376&interest_loc=&&msclkid=c7d24b429a4c1344787b22522dbacd8b&utm_source=bing&utm_medium=cpc&utm_campaign=BNG_TMU_DSA_Make&utm_term=dodge&utm_content=BNG_TMU_DSA_Make&gclid=c7d24b429a4c1344787b22522dbacd8b&gclsrc=3p.ds >?You didn't mention another of Musk's companies but I will, xAI, it's his AI Company in the creators of Grok? I have only used Grok once, last week, and I like it. I will circle around to it once the current emergency passes. >? Musk is still in the plus column... John K Clark Ja, he is in the plus column for me, but full disclosure is necessitated: Musk has increased my personal wealth. No I didn?t buy into Tesla at 6 (dammit.) But his building EVs in a factory within walking distance of my house has done wonders for my property value and my neighborhood. I am also a property owner (by inheritance) within driving distance of SpaceX Cape Canaveral, which is helping drive up the value of that property dramatically. Regarding his recent foray into politics: time will tell, but I predict we will look back and realize he was telling the truth back in 2025. We should have followed his advice. So, even disregarding the marvelous StarLink and the gift of TwitterX, Musk?s ambitions have directly blessed me twice, just by sheer fortunate coincidence. 
Consider my positive attitude toward him as colored by my deplorable self-interest and insatiable greed. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 9 16:01:27 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 9 Nov 2025 08:01:27 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> Message-ID: <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat ? >?But against that is the decision to replace "...where no man..." with "...where no one...". Again, no great objection, at least it does make sense and doesn't sound artificial, but it still tends to draw the attention to these silly 'gender issues' that get people so riled up, and it's unnecessary. There are more important things to worry about. -- Ben >?Ja, that and they fixed the split infinitive by switching to ?to go boldly? replacing ?to boldly go? tossing us grammar nazis a bone. But it also makes us realize that eliminating the split infinitive was not a real improvement. Picard?s revised introduction still doesn?t eliminate the Columbus problem. When I was first told Columbus discovered the Americas, I knew that was false: there were already people here, plenty of them. Where Picard went had ?ones? already there, the Vulcans, the beloved Feringi (I really relate to them for some odd reason) the Klingons and so forth. Better would have been ??to go boldly where no earth-evolved human has gone before?? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Nov 9 17:24:09 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 9 Nov 2025 17:24:09 +0000 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> Message-ID: <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> On 09/11/2025 16:01, spike at rainier66.com wrote: > > *From:*extropy-chat *On > Behalf Of *Ben Zaiboc via extropy-chat > *?* > > >?But against that is the decision to replace "...where no man..." > with "...where no one...". Again, no great objection, at least it does > make sense and doesn't sound artificial, but it still tends to draw > the attention to these silly 'gender issues' that get people so riled > up, and it's unnecessary. There are more important things to worry about. > > -- > Ben > > > >?Ja, that and they fixed the split infinitive by switching to ?to go > boldly? replacing ?to boldly go? tossing us grammar nazis a bone.? But > it also makes us realize that eliminating the split infinitive was not > a real improvement.? Picard?s revised introduction still doesn?t > eliminate the Columbus problem.? When I was first told Columbus > discovered the Americas, I knew that was false: there were already > people here, plenty of them.? Where Picard went had ?ones? already > there, the Vulcans, the beloved Feringi (I really relate to them for > some odd reason) the Klingons and so forth. > > Better would have been ??to go boldly where no earth-evolved human has > gone before?? > > spike > Hm. Just rolls off the tongue, doesn't it? 
The thing is, spike, "to go boldly" might be more gramatically correct, but it lacks the resonance of "to boldly go". It just sounds weaker. So I prefer the ungrammatical version, for its dramatic effect. I expect this was why it was originally chosen (for the dramatic effect, not my preference). And this is where "man" works well, too: "To boldly go where no man has gone before" = "To (dramatic emphasis) /boldly/ go where no (gender-neutral, but species-specific, hu-)man has gone before". Re. discovering, you have to realise that probably very few of the discoveries we have made throughout history have been correctly assigned. We like neat stories about some lone genius making a world-changing discovery, but it rarely happens like that. It takes lots of time and lots of people. Then some chancer takes the credit. Then there's the difficulty of properly defining "discovered". If I point out that Leif Erikson discovered the americas hundreds of years before Columbus, you'll point out that actually, some unknown proto-asians discovered it, thousands of years before him. Then I'll argue that some mammoths probably discovered the place, millions of years before /that/. Then ... -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Nov 9 17:30:16 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 9 Nov 2025 17:30:16 +0000 Subject: [ExI] Morality In-Reply-To: References: Message-ID: <28ae1337-d42a-4e04-8092-63011fb90daa@zaiboc.net> Speaking of morality, I'm finding "The Righteous Mind" by Jonathan Haidt to be interesting: https://files.addictbooks.com/wp-content/uploads/2024/05/The-Righteous-Mind.pdf Well worth a read by anyone interested in the subject, I think, as well as being relevant to the 'AI alignment' question. It doesn't mention AI (so far, I haven't read it all yet), but there is at least one concept which strikes me as being worth considering. -- Ben From jasonresch at gmail.com Sun Nov 9 19:49:48 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 9 Nov 2025 14:49:48 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> Message-ID: On Sun, Nov 9, 2025, 12:25 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 09/11/2025 16:01, spike at rainier66.com wrote: > > > > > > *From:* extropy-chat > *On Behalf Of *Ben Zaiboc via > extropy-chat > *?* > > >?But against that is the decision to replace "...where no man..." with > "...where no one...". Again, no great objection, at least it does make > sense and doesn't sound artificial, but it still tends to draw the > attention to these silly 'gender issues' that get people so riled up, and > it's unnecessary. There are more important things to worry about. > > -- > > Ben > > > > > >?Ja, that and they fixed the split infinitive by switching to ?to go > boldly? replacing ?to boldly go? tossing us grammar nazis a bone. But it > also makes us realize that eliminating the split infinitive was not a real > improvement. Picard?s revised introduction still doesn?t eliminate the > Columbus problem. When I was first told Columbus discovered the Americas, > I knew that was false: there were already people here, plenty of them. > Where Picard went had ?ones? 
already there, the Vulcans, the beloved > Feringi (I really relate to them for some odd reason) the Klingons and so > forth. > > > > Better would have been ??to go boldly where no earth-evolved human has > gone before?? > > > > spike > > > > > > > Hm. Just rolls off the tongue, doesn't it? > > The thing is, spike, "to go boldly" might be more gramatically correct, > but it lacks the resonance of "to boldly go". It just sounds weaker. So I > prefer the ungrammatical version, for its dramatic effect. > I learned recently the whole split infinitive rule has no real basis, it was put forward as a recommendation by one guy taking inspiration from Latin, where such constructions are impossible, and it just has been repeated ever since: https://youtu.be/BccyQaNKXz8 But it has no basis within the English language, and as you note, many sentences are weakened by adhering to this rule. Jason I expect this was why it was originally chosen (for the dramatic effect, > not my preference). > > And this is where "man" works well, too: "To boldly go where no man has > gone before" = "To (dramatic emphasis) /boldly/ go where no > (gender-neutral, but species-specific, hu-)man has gone before". > > Re. discovering, you have to realise that probably very few of the > discoveries we have made throughout history have been correctly assigned. > We like neat stories about some lone genius making a world-changing > discovery, but it rarely happens like that. It takes lots of time and lots > of people. Then some chancer takes the credit. Then there's the difficulty > of properly defining "discovered". If I point out that Leif Erikson > discovered the americas hundreds of years before Columbus, you'll point out > that actually, some unknown proto-asians discovered it, thousands of years > before him. Then I'll argue that some mammoths probably discovered the > place, millions of years before /that/. Then ... > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Nov 9 19:59:13 2025 From: pharos at gmail.com (BillK) Date: Sun, 9 Nov 2025 19:59:13 +0000 Subject: [ExI] Are AI girlfriends gender neutral? Message-ID: On Sat, 8 Nov 2025 at 16:16, spike jones via extropy-chat wrote: > > I am seeing something I am finding most disconcerting. I work with young people a lot because of my volunteer work: food bank, scouts, science Olympiad and competition math. The high schoolers surprise me with their attitudes: they seem nearly indifferent to copulation. It is so different from when I was their age half a century ago. Eeeeeeverything was about getting laid, everything. Now, it?s meh, whatevs. Maybe, maybe not. > > spike > _______________________________________________ Did you ask the young folk if they had an AI girlfriend? AI girlfriends are a huge growth market. $2.8 billion in 2024 and projected to hit $9.5 billion by 2028, signifying explosive growth. 28% of males aged 18 to 34 have tried an AI girlfriend app. At present, there is little interest from the ladies in having an AI boyfriend. 
BillK From hkeithhenson at gmail.com Sun Nov 9 23:13:19 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 9 Nov 2025 15:13:19 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 7:52?AM spike jones via extropy-chat wrote: > > From: John Clark > >? Elon Musk's popularity took a dramatic hit when he gave $277 million? > > I am surprised there aren?t limits to how much one person can donate to an election. I thought there were limits. > > >?to get a 34 time convicted felon? > > What was this felon convicted of, please? Easy to look up. sniii > The world is a much better place because of TwitterX. Matter of opinion. In one of the Charles Stross stories, there is a note on a file cabinet, "The truth is in here--somewhere." Twitter may have the truth somewhere, but your chances of finding it are not good. Keith > > > And StarLink! How could I forget that one? > > > > >?That one is more complicated to judge? > > > > Ja, as is every major innovation. That innovation is especially important to me because I often go camping out where there is no internet, and I want to have the option of full internet access, along with the safety factor of having access to information and communications. > > > > I used StarLink a lot in Canada last summer. But more than that, the local housing situation is dire. StarLink offers full internet, moderate bandwidth, at a cost of 11 watts, easily supplied by a camper or car battery. This enables urban campers, which means a larger candidate pool for local startups. Their lower-end professional class have the option of living in a van for their internship, offering them flexibility, and enabling them to accept jobs which may pay less than 100k. Their home is where they park it. Save on rent, utilities, etc. Having urban campers in my neighborhood (I do) doesn?t bother me a bit, for I understand why they are there. I would be doing likewise, were I to be in their situation. That one vehicle is transportation to work, to anywhere, and a home. A used cargo van makes a marvelous custom stealth camper conversion and doesn?t even cost much: > > > > https://www.carmax.com/cars/dodge/ram/promaster-2500/3d-cargo-van?adcode=SEMMUSEDSA5&vzmadcode=SEM1409207827610&utm_source=bing&utm_content=sem_carmax_sales&utm_term=dodge&utm_campaign=1261140775795569&physical_loc=89376&interest_loc=&&msclkid=c7d24b429a4c1344787b22522dbacd8b&utm_source=bing&utm_medium=cpc&utm_campaign=BNG_TMU_DSA_Make&utm_term=dodge&utm_content=BNG_TMU_DSA_Make&gclid=c7d24b429a4c1344787b22522dbacd8b&gclsrc=3p.ds > > > > >?You didn't mention another of Musk's companies but I will, xAI, it's his AI Company in the creators of Grok? > > > > I have only used Grok once, last week, and I like it. I will circle around to it once the current emergency passes. > > > > >? Musk is still in the plus column... John K Clark > > > > Ja, he is in the plus column for me, but full disclosure is necessitated: Musk has increased my personal wealth. No I didn?t buy into Tesla at 6 (dammit.) But his building EVs in a factory within walking distance of my house has done wonders for my property value and my neighborhood. 
I am also a property owner (by inheritance) within driving distance of SpaceX Cape Canaveral, which is helping drive up the value of that property dramatically. > > > > Regarding his recent foray into politics: time will tell, but I predict we will look back and realize he was telling the truth back in 2025. We should have followed his advice. > > > > So, even disregarding the marvelous StarLink and the gift of TwitterX, Musk?s ambitions have directly blessed me twice, just by sheer fortunate coincidence. Consider my positive attitude toward him as colored by my deplorable self-interest and insatiable greed. > > > > spike > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Nov 10 04:30:33 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 9 Nov 2025 20:30:33 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> Message-ID: <002301dc51fa$c1c8d2f0$455a78d0$@rainier66.com> From: Ben Zaiboc >? We like neat stories about some lone genius making a world-changing discovery, but it rarely happens like that? -- Ben Ben meaningless but fun aside: I went down to Moss Landing today to look around where that big battery fire was, thinking of strategies for investment (more on that later perhaps.) As I was heading down HW 101, the flags were at half mast. Who perished? Dick Cheney, but it is dubious that they would half-mast flags for Dick Cheney. Suddenly it hit me: Watson! The co-discoverer of the structure of DNA. Approximately three seconds after the Watson realization, I passed a sign: Watsonville. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 10 04:46:23 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 9 Nov 2025 20:46:23 -0800 Subject: [ExI] Are AI girlfriends gender neutral? In-Reply-To: References: Message-ID: <003a01dc51fc$f7848d60$e68da820$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Sent: Sunday, 9 November, 2025 11:59 AM To: ExI chat list Cc: BillK Subject: [ExI] Are AI girlfriends gender neutral? On Sat, 8 Nov 2025 at 16:16, spike jones via extropy-chat wrote: > > I am seeing something I am finding most disconcerting. I work with young people a lot because of my volunteer work: food bank, scouts, science Olympiad and competition math. The high schoolers surprise me with their attitudes: they seem nearly indifferent to copulation. It is so different from when I was their age half a century ago. Eeeeeeverything was about getting laid, everything. Now, it?s meh, whatevs. Maybe, maybe not. > > spike > _______________________________________________ >...AI girlfriends are a huge growth market. $2.8 billion in 2024 and projected to hit $9.5 billion by 2028, signifying explosive growth. 28% of males aged 18 to 34 have tried an AI girlfriend app. At present, there is little interest from the ladies in having an AI boyfriend. BillK _______________________________________________ Good point. I am a suspicious sort, so I would be most reluctant. 
Reason: the AI could store everything, and at some point tell you that if you wish for everything you told her to be kept quiet, that could be arranged if you were to send a tenth of a BitCoin to Mr. Chang. The product of the future is an AI girlfriend which works on a stand-alone device which is not capable of transmitting what you told her. In modern relationships, one never knows if the other partner will get annoyed and post to TwitterX everything you told her online. That would be bad. I got lucky that way: I met my bride in 1981, and we only used email a few times before we were already delightfully entangled. So she never did that to me. Perhaps plenty of men think it isn't worth the risk. BillK and other AI hipsters please: AI needs to interact with the internet, ja? So a completely independent AI... we are not there yet. So any time one has an AI girlfriend, the stuff you wrote to it could theoretically be held hostage, ja? spike From ben at zaiboc.net Mon Nov 10 06:55:57 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 10 Nov 2025 06:55:57 +0000 Subject: [ExI] James Watson (was: Re: Gender-Neutral Side Note) In-Reply-To: <002301dc51fa$c1c8d2f0$455a78d0$@rainier66.com> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> <002301dc51fa$c1c8d2f0$455a78d0$@rainier66.com> Message-ID: On 10/11/2025 04:30, spike at rainier66.com wrote: > > *From:*Ben Zaiboc > > ** > > ** > > ** > > *>?* We like neat stories about some lone genius making a > world-changing discovery, but it rarely happens like that? > > -- > Ben > > Ben meaningless but fun aside: I went down to Moss Landing today to > look around where that big battery fire was, thinking of strategies > for investment (more on that later perhaps.) > > As I was heading down HW 101, the flags were at half mast.? Who > perished?? Dick Cheney, but it is dubious that they would half-mast > flags for Dick Cheney. Suddenly it hit me: Watson!? The co-discoverer > of the structure of DNA. > > Approximately three seconds after the Watson realization, I passed a > sign: Watsonville. > > spike > Good example RIP one-half of the alleged discoverers of DNA To be fair, it could be considered more of a 'team effort', with the main contributor originally marginalised, but I doubt it's as simple as that in reality (was Watsonville actually named after /that/ Watson, or someone else, farther back in history? I don't imagine Watson is a very rare name) -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilsa.bartlett at gmail.com Mon Nov 10 08:21:59 2025 From: ilsa.bartlett at gmail.com (ilsa) Date: Mon, 10 Nov 2025 00:21:59 -0800 Subject: [ExI] James Watson (was: Re: Gender-Neutral Side Note) In-Reply-To: References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> <002301dc51fa$c1c8d2f0$455a78d0$@rainier66.com> Message-ID: Ilsa Bartlett Institute for Rewiring the System http://ilsabartlett.wordpress.com http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel "Don't ever get so big or important that you can not hear and listen to every other person." 
-John Coltrane On Sun, Nov 9, 2025, 10:56?PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On 10/11/2025 04:30, spike at rainier66.com wrote: > > > > > > *From:* Ben Zaiboc > > > > > > *>?* We like neat stories about some lone genius making a world-changing > discovery, but it rarely happens like that? > > -- > > Ben > > > > > > > > Ben meaningless but fun aside: I went down to Moss Landing today to look > around where that big battery fire was, thinking of strategies for > investment (more on that later perhaps.) > > Discovery was Female Women, go on write her name in the same paragraph! > > As I was heading down HW 101, the flags were at half mast. Who perished? > Dick Cheney, but it is dubious that they would half-mast flags for Dick > Cheney. Suddenly it hit me: Watson! The co-discoverer of the structure of > DNA. > > > > Approximately three seconds after the Watson realization, I passed a sign: > Watsonville. > > > > spike > > > Good example > RIP one-half of the alleged discoverers of DNA > > To be fair, it could be considered more of a 'team effort', with the main > contributor originally marginalised, but I doubt it's as simple as that in > reality > > (was Watsonville actually named after /that/ Watson, or someone else, > farther back in history? I don't imagine Watson is a very rare name) > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Nov 10 10:05:33 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 10 Nov 2025 10:05:33 +0000 Subject: [ExI] Are AI girlfriends gender neutral? In-Reply-To: References: Message-ID: <04fb3158-505e-47a1-82ac-cf924a88e261@zaiboc.net> On 10/11/2025 06:56, spike wrote: >> ...AI girlfriends are a huge growth market. $2.8 billion in 2024 and projected to hit $9.5 billion by 2028, signifying explosive growth. > 28% of males aged 18 to 34 have tried an AI girlfriend app. > At present, there is little interest from the ladies in having an AI boyfriend. > BillK > > _______________________________________________ > > Good point. I am a suspicious sort, so I would be most reluctant. Reason: the AI could store everything, and at some point tell you that if you wish for everything you told her to be kept quiet, that could be arranged if you were to send a tenth of a BitCoin to Mr. Chang. > > The product of the future is an AI girlfriend which works on a stand-alone device which is not capable of transmitting what you told her. > > In modern relationships, one never knows if the other partner will get annoyed and post to TwitterX everything you told her online. That would be bad. I got lucky that way: I met my bride in 1981, and we only used email a few times before we were already delightfully entangled. So she never did that to me. Perhaps plenty of men think it isn't worth the risk. > > BillK and other AI hipsters please: AI needs to interact with the internet, ja? So a completely independent AI... we are not there yet. So any time one has an AI girlfriend, the stuff you wrote to it could theoretically be held hostage, ja? Isn't the solution obvious? Men have traditionally lied to their wimmenfolk since the dawn of time. All you need to do is extend this time-honoured tradition to the technological equivalents, and you have deniability. 
You tell different lies to different AI girlfriends and mistresses (or even the same one, at least until they start getting memories), nobody is harmed and nobody can blackmail you, because who knows which lies are true and which are, er, lies. To answer the subject line's question: No, they aren't gender-neutral, or they'd be called 'AI personfriends', surely? Or, to go with the trend of clumsy but PC phrases, "AI romantic partners" -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 10 11:24:49 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 06:24:49 -0500 Subject: Re: [ExI] Gender-Neutral Side Note In-Reply-To: <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 11:03?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *>When I was first told Columbus discovered the Americas, I knew that was > false: there were already people here, plenty of them. * > *Columbus is justifiably famous not because he was the first but because he was the LAST person to discover the Americas, he discovered it so well that nobody needed to discover it again. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 10 12:14:01 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 07:14:01 -0500 Subject: Re: [ExI] Gender-Neutral Side Note In-Reply-To: <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 11:03?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Better would have been ??to go boldly where no earth-evolved human has > gone before?? They fixed the split infinitive by switching to ?to go > boldly? replacing ?to boldly go? tossing us grammar nazis a bone. * *Yeah, the grammar police say you should never split an infinitive; they also say you should never start a sentence with "and", although many sentences in the King James Bible start with that word, and they say the same thing about "but". But I think that's a stupid rule. And I know for a fact that being a grammar nazi can lead to embarrassment: it happened to Chief Justice Roberts.* *When Barack Obama was inaugurated in 2009, Chief Justice Roberts misstated the presidential oath. The oath that is written in the Constitution is "I do solemnly swear (or affirm) that I will faithfully execute the office of president of the United States..." but Roberts said, "repeat after me, I will execute the office of president of the United States faithfully." He placed the adverb "faithfully" after the verb phrase "execute the office of president" instead of before it, apparently wanting to avoid a split verb or split infinitive.** Obama repeated the oath as Roberts said it, although he very clearly looked surprised. Later that same day, Roberts and Obama repeated the oath with the correct words in a private ceremony.* *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Mon Nov 10 12:18:29 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 04:18:29 -0800 Subject: RE: [ExI] James Watson (was: Re: Gender-Neutral Side Note) In-Reply-To: References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> <002301dc51fa$c1c8d2f0$455a78d0$@rainier66.com> Message-ID: <00aa01dc523c$20108060$60318120$@rainier66.com> From: extropy-chat On Behalf Of ilsa via extropy-chat Sent: Monday, 10 November, 2025 12:22 AM ? Approximately three seconds after the Watson realization, I passed a sign: Watsonville. spike >?RIP one-half of the alleged discoverers of DNA Ja. DNA is such a part of me. >?(was Watsonville actually named after /that/ Watson, or someone else, farther back in history? I don't imagine Watson is a very rare name)? ilsa https://en.wikipedia.org/wiki/Watsonville,_California ?John H. Watson and D. S. Gregory laid out the town in 1852? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 10 12:41:01 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 04:41:01 -0800 Subject: RE: [ExI] Gender-Neutral Side Note In-Reply-To: References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> Message-ID: <00eb01dc523f$46232d40$d26987c0$@rainier66.com> From: John Clark Sent: Monday, 10 November, 2025 4:14 AM >?When Barack Obama was inaugurated in 2009, Chief Justice Roberts misstated the presidential oath. The oath that is written in the Constitution is "I do solemnly swear (or affirm) that I will faithfully execute the office of president of the United States..." but Roberts said, "repeat after me, I will execute the office of president of the United States faithfully? >?Later that same day, Roberts and Obama repeated the oath with the correct words in a private ceremony. John K Clark John, it was an attempt to impeach a duly elected POTUS, on the charge of perjury. Roberts tricked Obama into contradicting himself under oath. Obama said on 20 Jan 2009 under oath that he would faithfully execute the office, then later that same day he said he would execute the office faithfully. Perjury! Out with him! Let the impeachment proceedings begin! Wellllll, Roberts? little plot failed! Congress decided the case was too weak and Obama served his full two terms. That set a bad precedent. Eleven years later, congress impeached but were unable to remove a POTUS for wanting to make an unstated quid pro quo with the president of Ukraine. Congress ruled that to commit a quid pro quo, the defendant must actually state the deal rather than imply it. This would imply that to be convicted of a crime, the defendant must actually literally ?commit? the crime rather than merely want to ?commit? the literal ?crime.? Now presidents can get away with anything it seems. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Mon Nov 10 12:45:31 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 07:45:31 -0500 Subject: [ExI] James Watson (was: Re: Gender-Neutral Side Note) In-Reply-To: References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> <2f58897e-458a-4f12-88b9-6e221c45cc8b@zaiboc.net> <002301dc51fa$c1c8d2f0$455a78d0$@rainier66.com> Message-ID: On Mon, Nov 10, 2025 at 1:57?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> RIP one-half of the alleged discoverers of DNA* *DNA was discovered way back in 1869, although for decades after that most thought it didn't have anything to do with heredity and couldn't be a very interesting molecule because it was only made up of 4 components, by contrast proteins have 20. Watson and Crick discovered the structure of the molecule in 1953, and what was really amazing is that structure immediately suggested a way the molecule could contain information AND a way it could duplicate itself. Although their amazing achievement would not have been possible without the x-ray diffraction photograph taken by Rosalind Franklin. * *Nobel Prize winning chemist Linus Pauling, almost certainly the best chemist of the 20th century, would likely have discovered that structure first if he had seen that photograph by Franklin, and he probably would have if he had been allowed to attend a international conference on DNA in 1952, but he couldn't go, he was denied a passport by the USA because of his strong and very public opposition to nuclear weapons testing. A few years later Pauling got a second Nobel Prize, this time it was not for chemistry, it was the Nobel Peace Prize.* *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 10 13:04:30 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 08:04:30 -0500 Subject: [ExI] bill gates commentary on climate change In-Reply-To: <7c8e881a-4597-485e-aa4e-6ecf97fb8cf9@zaiboc.net> References: <7c8e881a-4597-485e-aa4e-6ecf97fb8cf9@zaiboc.net> Message-ID: Spike wrote: * > Time delayed realization: Bill Gates has been a leading promoter of the > global warming notion, but last week much to the surprise of me and plenty > of others, he backed off of that. Perhaps he pondered the Singularity > notion and concluded (as many or most of us here did) that there is low > probability the Singularity will be far in the future. He realized it is a > near-term phenomenon, two or three decades at most, meaning that climate is > irrelevant.* > *I think you are probably correct, it's certainly why I don't get all upset when somebody predicts that by 2100 this or that dreadful thing will happen. Anybody who thinks they know what will happen in the year 2100 is simply wrong. * *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 10 13:42:53 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 08:42:53 -0500 Subject: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? 
In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> Message-ID: spike jones via extropy-chat wrote: > *>> ?If the Singularity happens a few years after Kurzweil's prediction >> then we will have a few more years to prepare for the most significant >> event since the Cambrian Explosion?John K Clark* >> > *> I propose some terminology to clarify if a debate has a common basis or > preliminary assumptions. Suppose the Singularity is an event which is > unambiguous in that there is little or no debate afterwards if it > happened. Afterwards we can identify a date on which the Singularity event > happened,* > *That would be fine except that we could only use that definition after it happened, and by then the Singularity would be of academic interest only, right now it is of practical interest. * *> Side note: I also propose returning the pronouns he and him to > genderless,* > *Why that and not "she" and "her" becoming genderless? And **what pronoun should we use to refer to somebody who is NOT genderless but is definitely of the male gender? * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 10 13:54:55 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 08:54:55 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 11:41?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > *> That didn't answer the question. To merge the worlds, the information > that must be transmitted needs to be erased. Thus, there remains no way to > transmit information from world to world.* > *I'm not sure I understand your objection but I'll try to respond. I believe that Adrian Tymes is the way that matter behaves when it is organized in a Adriantymesien way, so if there are two worlds that are absolutely identical then Adrian Tymes has only one **conscious experience and therefore it wouldn't make sense for him to claim that there were really 2 different worlds that were identical. * *John K Clark* > > On Tue, Oct 14, 2025 at 8:38?AM John Clark wrote: > > > > On Mon, Oct 13, 2025 at 11:33?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > >> >Under MWI, the worlds are separate after splitting, with no way to > interact. And yet, this MWI explanation for this requires them to > interact. By what means does the information get from one world to another > after splitting? > > > > > > According to the Many World's idea, a Mach-Zehnder interferometer (or a > quantum bomb tester) works the way it does because a world splits when the > laws of physics allow it to change in two different ways, for example > passing through a half silvered mirror AND being reflected by a half > silvered mirror. Normally after a small change the difference only becomes > larger, however if an experimenter is clever and very careful he can make a > very tiny change that only exists for a very short time and then arrange > things so that the two worlds become identical again, and thus merge back > together. So in that merged world there are indications the photon went > through the half silver mirror AND indications the photon was reflected by > the half silver mirror. However no which-path information remains in the > final merged world. 
> > > > Some might object to what I say and insist that the branches were never > fully separate worlds if they can still interfere, they were just different > components of the same wavefunction. But if Many Worlds is correct then > EVERYTHING is part of the same "Universal Wave Function", which is just > another name for the multiverse. > > > > John K Clark > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 10 14:30:02 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 09:30:02 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 1:05?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> How would you describe the difference between "determinism" and > "superdeterminism?"* *Determinism simply says that the present is uniquely caused by the past, that's it. It says nothing about initial conditions. Superdeterminism makes the additional assumption that out of the infinite number of states the past could've been in, it was actually in the one and only state that invalidates the scientific method and always makes fools of anybody who tries to investigate the fundamental nature of reality. I can't prove that Superdeterminism is wrong but I do think the probability of it being correct approaches zero. I am unable to think of a greater violation of Occam's Razor than Superdeterminism. * *John K Clark * > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 10 14:44:44 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 06:44:44 -0800 Subject: [ExI] singularity terminology proposal: was RE: what if... the singularity isn't near? In-Reply-To: References: <000d01dc4dfc$4cb219b0$e6164d10$@rainier66.com> <00e301dc4f3a$c27139d0$4753ad70$@rainier66.com> Message-ID: <014c01dc5250$8e7dc260$ab794720$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat ? >>? Side note: I also propose returning the pronouns he and him to genderless, >?Why that and not "she" and "her" becoming genderless? And what pronoun should we use to refer to somebody who is NOT genderless but is definitely of the male gender? John K Clark I tried that for a few years, with many examples in the ExI archives. I wanted to see if it ever would feel natural or right. It never did. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 10 14:53:22 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 06:53:22 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: <000201dc5251$c35ea2a0$4a1be7e0$@rainier66.com> From: John Clark >? to get a 34 time convicted felon? John K Clark Have these felonies ever been identified? What sentence was given? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Mon Nov 10 15:18:10 2025 From: tara at taramayastales.com (Tara Maya) Date: Mon, 10 Nov 2025 07:18:10 -0800 Subject: [ExI] Are AI girlfriends gender neutral? 
In-Reply-To: <003a01dc51fc$f7848d60$e68da820$@rainier66.com> References: <003a01dc51fc$f7848d60$e68da820$@rainier66.com> Message-ID: <81A5390F-81FF-4075-A4F0-F2BA7A737934@taramayastales.com> Demographic change is the link between AI companions, lower libido, and the ?baby gap.? Evidence suggests we are going through an evolutionary funnel and a third to half of this and the next generation won?t leave descendants on the other side. It?s the equivalent of the Black Death that swept across Europe with the difference that those going extinct aren?t dying in horrific ways but simply not having children. AI companions will create another bottleneck so this trend will get worse before it gets better. Expect not only AI robot girlfriends but AI robot children for lonely women, better than cats and dogs. To me the saddest part is that I don?t think being a ?cool auntie? or a ?clanker lover? is going to help loneliness as much as actual kids and grandkids. However, I think this is not as much an economic crisis as some pro-natalists think. Obviously automation will help the workforce. But I am also expecting lifespan and ?healthspan? to keep improving. (Perhaps this is farther off than I hope, as Spike speculates about the Singularity). In some arguments I have heard against advancing anti-death medicine, the dumbest one was that it would overpopulate the earth. But if humans evolve to have longer lifespans it makes sense that ?age at first child? would also be pushed back. We would expect to see fewer teen pregnancies (we do) and older first time moms (we do). Without gene editing, this is a slow moving process and unfortunately some couples who wait to have children will find their genes aren?t ready for the switch and they have infertility problems. If accidental pregnancy is not as rewarded by evolution (thanks to birth control and abortion) then those who carefully plan families will increase in proportion to those who are merely impulsively promiscuous. Maybe we are seeing this. It could also be that these are shorter term trends not long term trends. Overall, though, I think we have reason to suspect that humanity going through a huge evolutionary shift, several centuries in the making (a mere blink to evolution) from relatively more r selected to even more highly K selected. That is, instead of having 10 babies per couple to have 2 survive, we can have 2.1 babies and expect 2 to survive. But to reach that new equilibrium, some couples have overshot the mark, waiting too long, and so we see the birthrate drop to 1.6 or lower as individuals struggle to recalibrate. Tara Maya > On Nov 9, 2025, at 20:48, spike jones via extropy-chat wrote: > > ? > > -----Original Message----- > From: extropy-chat On Behalf Of BillK via extropy-chat > Sent: Sunday, 9 November, 2025 11:59 AM > To: ExI chat list > Cc: BillK > Subject: [ExI] Are AI girlfriends gender neutral? > >> On Sat, 8 Nov 2025 at 16:16, spike jones via extropy-chat wrote: >> >> I am seeing something I am finding most disconcerting. I work with young people a lot because of my volunteer work: food bank, scouts, science Olympiad and competition math. The high schoolers surprise me with their attitudes: they seem nearly indifferent to copulation. It is so different from when I was their age half a century ago. Eeeeeeverything was about getting laid, everything. Now, it?s meh, whatevs. Maybe, maybe not. >> >> spike >> _______________________________________________ > > >> ...AI girlfriends are a huge growth market. 
$2.8 billion in 2024 and projected to hit $9.5 billion by 2028, signifying explosive growth. > 28% of males aged 18 to 34 have tried an AI girlfriend app. > At present, there is little interest from the ladies in having an AI boyfriend. > BillK > > _______________________________________________ > > Good point. I am a suspicious sort, so I would be most reluctant. Reason: the AI could store everything, and at some point tell you that if you wish for everything you told her to be kept quiet, that could be arranged if you were to send a tenth of a BitCoin to Mr. Chang. > > The product of the future is an AI girlfriend which works on a stand-alone device which is not capable of transmitting what you told her. > > In modern relationships, one never knows if the other partner will get annoyed and post to TwitterX everything you told her online. That would be bad. I got lucky that way: I met my bride in 1981, and we only used email a few times before we were already delightfully entangled. So she never did that to me. Perhaps plenty of men think it isn't worth the risk. > > BillK and other AI hipsters please: AI needs to interact with the internet, ja? So a completely independent AI... we are not there yet. So any time one has an AI girlfriend, the stuff you wrote to it could theoretically be held hostage, ja? > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From atymes at gmail.com Mon Nov 10 15:57:53 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 10:57:53 -0500 Subject: [ExI] Practical prompt engineering In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 7:25?AM Ben Zaiboc via extropy-chat wrote: > On 08/11/2025 12:14, John Clark wrote: >> Me: On Christopher Columbus's first voyage did he know how big the Earth was? I don't think he did because if north and south America didn't exist and there was nothing but ocean between Portugal and India he didn't have enough provisions to make a voyage of that length. >> >> Claude: You're absolutely right, and this is one of the most interesting misconceptions about Columbus's voyage! > > It would be interesting, I think, to repeat this, saying something like "... I think he did because this is easy to calculate", or similar. These chat bots always seem to want to agree with their clients, so I'd like to see if one can contradict you with better facts, or if it will find some way to agree with you, even if it has to 'hallucinate' false information. Indeed. When I use AI, I try to be careful to avoid leading questions precisely because AI seems more susceptible than average humans to fall for the trap. It's sometimes simplest to pare the prompt down to the raw question and try to avoid providing context that would suggest an answer. In this particular case, simply ask, "On Christopher Columbus's first voyage did he know how big the Earth was?", full stop. I tried that with Claude, and the general nature of Claude's response was the same - since that is generally what happened - but see for yourself the difference in tone and particulars: > Columbus had a significantly underestimated view of Earth's size during his first voyage in 1492. 
> > The actual circumference of Earth had been reasonably well calculated since ancient times - the Greek mathematician Eratosthenes estimated it fairly accurately around 240 BCE. By Columbus's era, educated Europeans generally accepted that Earth was roughly 25,000 miles in circumference, which is quite close to the actual figure. > > However, Columbus relied on much smaller estimates. He appears to have used calculations that suggested Earth was only about 18,000 miles around - roughly 30% smaller than reality. This wasn't the scientific consensus; Columbus cherry-picked sources that supported his desired conclusion, including: > > * A misinterpretation of measurements from the 9th-century Arab geographer al-Farghani > * Overestimates of how far Asia extended eastward > * Underestimates of the distance of a degree of longitude > > This miscalculation was actually crucial to his voyage getting funded. If Columbus had accepted the true size of Earth, the distance to Asia by sailing west would have been impossibly long for ships of that era to traverse. He thought he could reach Asia in a few thousand miles of ocean crossing. > > Ironically, his mathematical error worked in his favor - he encountered the Americas (which he didn't know existed) at roughly the distance where he expected to find Asia. If the Americas hadn't been there, his expedition would have run out of supplies long before reaching the actual Asia. From spike at rainier66.com Mon Nov 10 15:59:01 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 07:59:01 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <000201dc5251$c35ea2a0$4a1be7e0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <000201dc5251$c35ea2a0$4a1be7e0$@rainier66.com> Message-ID: <001501dc525a$ef1f9170$cd5eb450$@rainier66.com> From: spike at rainier66.com Subject: RE: [ExI] Gender-Neutral Side Note From: John Clark > >>? to get a 34 time convicted felon? John K Clark >?Have these felonies ever been identified? ?spike Are there any other legal cases where anyone was convicted of a felony which was never specifically identified? What happened in those cases? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Nov 10 16:19:04 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 11:19:04 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <005901dc50ac$268d6570$73a83050$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 7:36?AM spike jones via extropy-chat wrote: > I have heard the argument since I was a child that Columbus (intentionally or otherwise) underestimated the circumference of the earth. But being a highly skilled sailor, I haven?t found that notion convincing. He might have theorized that India was an enormous continent that spanned well over half the globe. > > The Europeans knew one could walk east from India to China to a sea, but they had no way of knowing that sea wasn?t the same ocean that touches Spain, or how wide it was. Someone had to try it. Columbus knew eventually someone would. So he did it. Of course he was a hero. 
> > I wouldn?t be surprised if he underestimated it in the sales literature to sell stock and get funding. That kinda stuff still happens to this day. Here's an alternate wrinkle for you: what if he had heard, from a Bristol merchant visiting a Spanish port, that there were good fishing grounds far to the west (which the Bristol fishermen had learned of from Viking tales of their former expedition), and thus knew that there was some sort of land over there (that, given the real circumference of the Earth and the distance eastward to the Chinese coast, couldn't be China), but kept that secret because "better fishing grounds" was, he thought, worth less to the royalty than "Asia with all its spices and trade"? From atymes at gmail.com Mon Nov 10 16:25:11 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 11:25:11 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 10:22?AM spike jones via extropy-chat wrote: > I completely understand there are those (even in the USA) who do not like Musk, but it isn?t clear to me why. Because of him we have Tesla, Space X and TwitterX, all of which are American companies, bringing billions into the treasury and employing huge armies of people. This is a good thing, ja? Credit where credit is due: SpaceX might not have been able to achieve its current success without Musk, but Tesla existed - and might or might not have been able to succeed (though likely on a much smaller scale: no Gigafactory) - before he invested in it, while Twitter was quite large and successful before he purchased it (whatever one might say of how it's fared since then). From spike at rainier66.com Mon Nov 10 16:28:57 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 08:28:57 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> Message-ID: <002501dc525f$1d384530$57a8cf90$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat >...Here's an alternate wrinkle for you: what if he had heard, from a Bristol merchant visiting a Spanish port, that there were good fishing grounds far to the west (which the Bristol fishermen had learned of from Viking tales of their former expedition), and thus knew that there was some sort of land over there (that, given the real circumference of the Earth and the distance eastward to the Chinese coast, couldn't be China), but kept that secret because "better fishing grounds" was, he thought, worth less to the royalty than "Asia with all its spices and trade"? _______________________________________________ Ja, cool wrinkle. I like to think of everything in terms of technology. In the late 1400s they had instruments which would measure the latitude very precisely. The sailor needed an accurate calendar and an astrolabe. But they didn't have clocks accurate enough to measure longitude. If the old-timers theorized that the earth is spherical (pretty reasonable assumption) then they could estimate how far it is to India, but they wouldn't know how far India spanned to the east. 
They knew how to sail around to the west side of India, but what was out there to the east? They didn't know. Columbus made a reasonable assumption when he reached land in the Caribbean to assume it was India, and that India was a really big hunk of land. spike From atymes at gmail.com Mon Nov 10 16:33:15 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 11:33:15 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: On Sat, Nov 8, 2025 at 11:33?AM spike jones via extropy-chat wrote: > OK so if one is single and ambitious, one is practically living in the office anyway, so why pay all that money for an apartment? All ya really need at ?home? is a bed, a toilet and good high speed internet. Camper vans have the former two They lack the utility hookups for a toilet - or a shower (or some other way to clean one's body), which is also generally necessary. (There are also food concerns - going out to eat or getting delivery for every meal gets expensive quickly - but a camper van can have a minifridge and kitchenette installed. Laundromats exist, but that entails routinely being able to access such a service, which can be difficult with no fixed address.) > keep one?s business clothing in a locker at the office, shower there, work out there Many offices lack such amenities, to the point that most workers won't have access to these at the office and must provide them themselves. From atymes at gmail.com Mon Nov 10 16:38:19 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 11:38:19 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <002501dc525f$1d384530$57a8cf90$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <002501dc525f$1d384530$57a8cf90$@rainier66.com> Message-ID: On Mon, Nov 10, 2025 at 11:28?AM wrote: > I like to think of everything in terms of technology. In the late 1400s they had instruments which would measure the latitude very precisely. The sailor needed an accurate calendar and an astrolabe. But they didn't have clocks accurate enough to measure longitude. If the old-timers theorized that the earth is spherical (pretty reasonable assumption) then they could estimate how far it is to India, but they wouldn't know how far India spanned to the east. They knew how to sail around to the west side of India, but what was out there to the east? They didn't know. > > Columbus made a reasonable assumption when he reached land in the Caribbean to assume it was India, and that India was a really big hunk of land. No, it was known since at least Roman times (Ptolemy, 2nd century AD) that there was an ocean to the east of China. 
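A side note on spike's astrolabe point a couple of messages up: the asymmetry he describes is that latitude needs only an angle and a calendar, while longitude needs an accurate clock. Below is a minimal sketch of the noon-sun arithmetic, in plain Python written for this thread rather than taken from any post above; the function names and the sample numbers are illustrative, the declination formula is a rough approximation, and refraction and the other corrections a real navigator would apply are ignored.

import math

def solar_declination_deg(day_of_year):
    # Rough solar declination in degrees (good to about a degree,
    # which is enough for this illustration).
    return 23.44 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))

def latitude_from_noon_sight(observed_altitude_deg, day_of_year):
    # For an observer north of the sun at local noon:
    # latitude = (90 - altitude of the sun) + solar declination.
    return (90.0 - observed_altitude_deg) + solar_declination_deg(day_of_year)

if __name__ == "__main__":
    # Hypothetical sight: noon sun 50 degrees above the horizon on day 100
    # of the year (about April 10). Prints roughly 47.5 degrees north.
    print(round(latitude_from_noon_sight(50.0, 100), 1))

Longitude is a different kind of problem: it is the time difference between local noon and noon at a reference meridian, at 15 degrees per hour, so without an accurate clock (or an astronomical substitute for one) a navigator in 1492 could not compute it no matter how good his astrolabe was.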
From spike at rainier66.com Mon Nov 10 16:39:18 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 08:39:18 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> Message-ID: <002701dc5260$8f532080$adf96180$@rainier66.com> >> On Behalf Of Adrian Tymes via extropy-chat Sent: Monday, 10 November, 2025 8:25 AM >... but Tesla existed - and might or might not have been able to succeed (though likely on a much smaller scale: no Gigafactory) Ja. Many think Musk influenced or helped design Tesla cars. He was the money behind it. I met the engineers who created the Tesla at Moffett Field long before anyone heard of Tesla. Two of the three guys I met by chance died in that plane crash in 2010. >... while Twitter was quite large and successful before he purchased it... It was quite large but Twitter was losing 25 million bucks a month and had only about a year and a half to live. Nervous stock holders were already looking for an exit. It is very clear why: Twitter hired way too many humans to do what AI can do. Musk fired most of the staff and replaced their function with software. You and I would do the same. I don't understand why Jack Dorsey didn't AI-ize Twitter as soon as the technology was available. He could have saved Twitter himself, back in about 2020. It wouldn't even require AI really. What most Twitter employees were doing could be done by software by then. spike _______________________________________________ From atymes at gmail.com Mon Nov 10 16:48:11 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 11:48:11 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 5:37?PM Jason Resch via extropy-chat wrote: > On Sat, Nov 8, 2025, 5:19 PM Adrian Tymes via extropy-chat wrote: >> On Sat, Nov 8, 2025 at 2:44?PM Jason Resch via extropy-chat >> wrote: >> > If there is any degree of randomness or free will permitted, then if one uses it in deciding how to pick which of the three positions on the Mermin device >> >> ...it would no longer be the Merman device. > > It's still the Mermin device. No it isn't. >> Again: the Merman device >> asserts a certain set of outcomes as part of its description. > > These assertions were not made in a vacuum; they come straight out of quantum theory. They are part of the definition of the Mermin device. Whether or not they are attempted analogues to quantum theory (which I grant, emphasis on "attempted" analogues) is beside the point: the definition of the Mermin device presupposes the outcome. >> True >> randomness would allow, e.g. 1/2 or 1/5 or other such values to >> sometimes be measured, which is explicitly ruled out. > > But note they are relies out by experiment (and quantum theory). You misunderstand. The definition that I read in that article is that only a device that produces the specified outcome is the Mermin device. Therefore, any device with even a slightly different result - as would be expected from any true device, with experimental noise - is not that device. 
This may appear to be semantics, but that's what sometimes happens when one tries to explain quantum mechanics by fiat without actually involving quantum mechanics, which appears to be what this Mermin device is: the thoughtfully crafted alternative explanation does not actually explain quantum mechanics. From spike at rainier66.com Mon Nov 10 16:48:23 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 08:48:23 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> Message-ID: <002801dc5261$d4310540$7c930fc0$@rainier66.com> ...> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Gender-Neutral Side Note On Sat, Nov 8, 2025 at 11:33?AM spike jones via extropy-chat wrote: > OK so if one is single and ambitious, one is practically living in the > office anyway, so why pay all that money for an apartment? All ya > really need at ?home? is a bed, a toilet and good high speed internet. > Camper vans have the former two >...They lack the utility hookups for a toilet - or a shower (or some other way to clean one's body), which is also generally necessary... Of course. All that is done at the office. Everywhere I worked had access to a gym, a shower, a locker where business suits could be left. >...(There are also food concerns - going out to eat or getting delivery for every meal gets expensive quickly - but a camper van can have a minifridge and kitchenette installed... You can do all that, but most bigger companies around here have 24 hour food services, a cafeteria, or something. >...Laundromats exist, but that entails routinely being able to access such a service, which can be difficult with no fixed address... Some local companies figured that out too. Offering an on-site laundry is easy. >>... keep one?s business clothing in a locker at the office, shower there, work out there >...Many offices lack such amenities, to the point that most workers won't have access to these at the office and must provide them themselves. _______________________________________________ Companies compete for younger workers. Around here there are many startup companies. They understand the challenges young people face: this is a crowded area and the price of rent is very high, utility costs are very high. If the company can supply a gym, a shower, a locker room big enough to hang a few suits, food service, laundry, dry cleaning, a place to dump waste water and refill water tanks, all of which is inexpensive, they can attract single people who will sleep in a ProMaster van or camper, and live in the office. Then that company can compete. StarLink makes it practical to live in a van, which helps ease the housing shortage in the area, and reduces rent prices, which reduces homelessness. spike From atymes at gmail.com Mon Nov 10 17:03:46 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 12:03:46 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 9:32?AM John Clark via extropy-chat wrote: > On Sat, Nov 8, 2025 at 1:05?PM Jason Resch via extropy-chat wrote: >> > How would you describe the difference between "determinism" and "superdeterminism?" 
> > Determinism simply says that the present is uniquely caused by the past, that's it. It says nothing about initial conditions. Superdeterminism makes the additional assumption that out of the infinite number of states the past could've been in, it was actually in the one and only state that invalidates the scientific method and always makes fools of anybody who tries to investigate the fundamental nature of reality. To say that it "always makes fools of" implies a conscious desire and intent. No such thing is in evidence. One makes a fool of oneself if one insists that everything must be only a certain way despite the evidence, but it is neither the evidence nor reality itself that is doing the fool-making in this scenario. > I can't prove that Superdeterminism is wrong but I do think the probability of it being correct approaches zero. I am unable to think of a greater violation of Occam's Razor than Superdeterminism. My thinking to the contrary goes thus: I have investigated many cases where apparent independence produced apparently contradictory results. In some cases this was due to measurement error. In some cases, it was eventually discovered that there had been a hidden dependence (sometimes very obscured), usually going back before the measurements in question began. In some cases, no complete explanation has yet been found. In no case was it completely ruled out that there could have been a hidden dependence going way back, before the data available could attest. In some cases it seemed very unlikely, but there is a difference between 1% and literally 0%, and sometimes this is a very important difference. Superdeterminism, or at least more-super-than-was-initially-believed determinism, has thus been proven in some cases and not ruled out for the rest. Granted, it is possible that these cases might or might not also have had multiple worlds involved, but they definitely had more-super-than-was-initially-believed determinism. Meanwhile, I am aware of no cases that prove multiple worlds without reference to other solutions. (Not "to the exclusion of", just that MWI definitely existed whether or not other solutions also existed.) Thus, more-super-than-was-initially-believed determinism exists at least sometimes. It seems to be a smaller leap to suspect that it exists all the time, than to bring in another solution that has yet to be conclusively demonstrated at least once. Thus does Occam's Razor suggest to me some form of at least mild superdeterminism. From jasonresch at gmail.com Mon Nov 10 17:15:06 2025 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 10 Nov 2025 12:15:06 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 11:49?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Nov 8, 2025 at 5:37?PM Jason Resch via extropy-chat > wrote: > > On Sat, Nov 8, 2025, 5:19 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Nov 8, 2025 at 2:44?PM Jason Resch via extropy-chat > >> wrote: > >> > If there is any degree of randomness or free will permitted, then if > one uses it in deciding how to pick which of the three positions on the > Mermin device > >> > >> ...it would no longer be the Merman device. > > > > It's still the Mermin device. > > No it isn't. > It is the Mermin device, exactly as described in the experiment. 
The only thing I am supposing here is the process of how the knobs are chosen on the Mermin device. If there is access to free will/pure randomness, then I propose using that free will/pure randomness to select which knob positions are set on each (spatially separated) Mermin device. If this is done, then the normal superdeterministic explanation for the observed quantum statistics no longer works. > > >> Again: the Merman device > >> asserts a certain set of outcomes as part of its description. > > > > These assertions were not made in a vacuum; they come straight out of > quantum theory. > > They are part of the definition of the Mermin device. Whether or not > they are attempted analogues to quantum theory (which I grant, > emphasis on "attempted" analogues) is beside the point: the definition > of the Mermin device presupposes the outcome. > If you go back to my original e-mail on this, I explain that all you need are two polarized light filters. The positions 1-2-3 merely correspond to rotations of this filter at 0, 120, and 240 degrees. Now use a source of entangled photons and send one each towards one of the polarizing filters. If the photon transmits through the filter, the green light flashes; if it is absorbed by the filter, the red light flashes. That is all you need for a Mermin device: a polarized light filter. Quantum theory says that when the filters are offset by 120 degrees, the correlation between whether the photons absorb or transmit is 25%. Here is the math: https://www.google.com/search?q=cos(120+degrees)%5E2 David Mermin assumes nothing beyond how we understand polarizing light filters to work, as quantum theory predicts, and as experiments confirm. Nothing is "pre-supposed." > > >> True > >> randomness would allow, e.g. 1/2 or 1/5 or other such values to > >> sometimes be measured, which is explicitly ruled out. > > > > But note they are ruled out by experiment (and quantum theory). > > You misunderstand. The definition that I read in that article is that > only a device that produces the specified outcome is the Mermin > device. Therefore, any device with even a slightly different result - > as would be expected from any true device, with experimental noise - > is not that device. > > This may appear to be semantics, but that's what sometimes happens > when one tries to explain quantum mechanics by fiat without actually > involving quantum mechanics, which appears to be what this Mermin > device is: the thoughtfully crafted alternative explanation does not > actually explain quantum mechanics. > I hope my above explanation of how to build a Mermin device helps. All you need are polarized light filters and a protractor. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
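A quick way to check those numbers is to run the cos^2 rule through a few lines of Python. This is only a sketch of the arithmetic, assuming nothing beyond the ideal-polarizer behaviour described in the message above (same-result probability equals cos^2 of the angle between the two filters):

    import math

    # Same-result probability for the entangled pair is cos^2 of the
    # angle between the two polarizing filters (the rule cited above).
    for offset_deg in (0, 120, 240):
        agree = math.cos(math.radians(offset_deg)) ** 2
        print(f"filters offset by {offset_deg:3d} degrees -> "
              f"probability both photons give the same result: {agree:.2f}")

Running it prints 1.00 for aligned filters and 0.25 for filters offset by 120 or 240 degrees, which are the 100% and 25% figures quoted above.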
URL: From atymes at gmail.com Mon Nov 10 17:17:24 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 12:17:24 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <002801dc5261$d4310540$7c930fc0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <002801dc5261$d4310540$7c930fc0$@rainier66.com> Message-ID: On Mon, Nov 10, 2025 at 11:48?AM wrote: > ...> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] Gender-Neutral Side Note > On Sat, Nov 8, 2025 at 11:33?AM spike jones via extropy-chat wrote: > > OK so if one is single and ambitious, one is practically living in the > > office anyway, so why pay all that money for an apartment? All ya > > really need at ?home? is a bed, a toilet and good high speed internet. > > Camper vans have the former two > > >...They lack the utility hookups for a toilet - or a shower (or some other way to clean one's body), which is also generally necessary... > > Of course. All that is done at the office. Everywhere I worked had access to a gym, a shower, a locker where business suits could be left. With two exceptions, I do not recall any office that I worked at ever having had any of those three amenities. Neither of the exceptions would seem to count as the sort of office you mean. Exception 1 is working from home, including a "home office". Exception 2 was a nanotech lab - which didn't have a gym, the showers were generally to be had with bunnysuits on (decontaminating the bunnysuits before taking them off, to make certain the bad stuff never had a chance to get to your skin), and the lockers weren't big enough for business suits (nor were business suits often involved - the airlock was for getting into and out of bunnysuits, and leaving anything that shouldn't be on the side of the airlock you were going to; some people might have donned bunnysuits over business suits, but those would have been executives getting a one-off tour, not the general workforce). > >...(There are also food concerns - going out to eat or getting delivery for every meal gets expensive quickly - but a camper van can have a minifridge and kitchenette installed... > > You can do all that, but most bigger companies around here have 24 hour food services, a cafeteria, or something. Last time I took a tour of Google's main campus, their cafeteria was not 24 hour. One could get lunch there, maybe breakfast (depending on commute), but probably not dinner. Most people need more than one meal per day. > >...Many offices lack such amenities, to the point that most workers won't have access to these at the office and must provide them themselves. > > Companies compete for younger workers. Around here there are many startup companies. They understand the challenges young people face: this is a crowded area and the price of rent is very high, utility costs are very high. If the company can supply a gym, a shower, a locker room big enough to hang a few suits, food service, laundry, dry cleaning, a place to dump waste water and refill water tanks, all of which is inexpensive, they can attract single people who will sleep in a ProMaster van or camper, and live in the office. Then that company can compete. 
StarLink makes it practical to live in a van, which helps ease the housing shortage in the area, and reduces rent prices, which reduces homelessness. That all relies on working for an enlightened company. Most workers are not so lucky. Also, these services are the sort people need on a daily or weekly basis; they aren't the sort of thing people can wait around for their employer to maybe (or maybe not) provide in a few years. So long as it continues to be the case that many offices lack such amenities, living entirely out of a camper van is not a practical option for most young workers. This could in theory change some day, of course, but to speak of it as if it was reality today is incorrect. From spike at rainier66.com Mon Nov 10 17:18:48 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 09:18:48 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <002501dc525f$1d384530$57a8cf90$@rainier66.com> Message-ID: <002401dc5266$14005460$3c00fd20$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Cc: Adrian Tymes Subject: Re: [ExI] Gender-Neutral Side Note >>... Columbus made a reasonable assumption when he reached land in the Caribbean to assume it was India, and that India was a really big hunk of land. >...No, it was known since at least Roman times (Ptolemy, 2nd century AD) that there was an ocean to the east of China. _______________________________________________ Ja, but no one knew for sure what ocean that was, or that China wasn't part of India. How could they know? They didn't know the longitude of China in the late 1400s. Someone eventually just had to try sailing west from Europe, then solving a still more difficult problem: restocking the ships in India or China or whatever was over there to the west, sufficiently to sail back home to Europe. spike From jasonresch at gmail.com Mon Nov 10 17:28:17 2025 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 10 Nov 2025 12:28:17 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 12:04?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Nov 10, 2025 at 9:32?AM John Clark via extropy-chat > wrote: > > On Sat, Nov 8, 2025 at 1:05?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> > How would you describe the difference between "determinism" and > "superdeterminism?" > > > > Determinism simply says that the present is uniquely caused by the past, > that's it. It says nothing about initial conditions. Superdeterminism makes > the additional assumption that out of the infinite number of states the > past could've been in, it was actually in the one and only state that > invalidates the scientific method and always makes fools of anybody who > tries to investigate the fundamental nature of reality. > > To say that it "always makes fools of" implies a conscious desire and > intent. No such thing is in evidence. > > One makes a fool of oneself if one insists that everything must be > only a certain way despite the evidence, but it is neither the > evidence nor reality itself that is doing the fool-making in this > scenario. 
> I think the problem here is that you are using the word "superdeterminism" to mean something else (to refer to your own theory), rather than what is conventionally understood to be meant by the word. If you follow the standard definition of superdeterminism, then *something* is operating to fool us. The statistics make it impossible for any ordinary system of hidden variables to work, but if superdeterminism is true, then then there really are hidden variables, but the assignment of these variables operates in a manner that guarantees (however we may try to avoid it) that we see statistics that make us believe it just couldn't possibly be hidden variables. This is what John and I mean when we say that the universe operates in a way to "fool us." Note this is in the standard definition of superdeterminism that Google offered, and you seemed okay with: "a superdeterministic universe would ensure that the choice of measurement setting and the outcome of that measurement are correlated" I think to avoid confusion, you need a new name for the theory you believe in, and for it to be fully formed in a manner that allows us to discuss it, it needs to account for the violation of Bell inequalities. Standard local hidden variables theories are ruled out by the violation of Bell Inequalities, so to make a valid theory, this part must be addressed/explained/accounted for. > > I can't prove that Superdeterminism is wrong but I do think the > probability of it being correct approaches zero. I am unable to think of a > greater violation of Occam's Razor than Superdeterminism. > > My thinking to the contrary goes thus: > > I have investigated many cases where apparent independence produced > apparently contradictory results. > > In some cases this was due to measurement error. > > In some cases, it was eventually discovered that there had been a > hidden dependence (sometimes very obscured), usually going back before > the measurements in question began. > > In some cases, no complete explanation has yet been found. > > In no case was it completely ruled out that there could have been a > hidden dependence going way back, before the data available could > attest. In some cases it seemed very unlikely, but there is a > difference between 1% and literally 0%, and sometimes this is a very > important difference. > > Superdeterminism, or at least more-super-than-was-initially-believed > determinism, has thus been proven in some cases and not ruled out for > the rest. Granted, it is possible that these cases might or might not > also have had multiple worlds involved, but they definitely had > more-super-than-was-initially-believed determinism. > > Meanwhile, I am aware of no cases that prove multiple worlds without > reference to other solutions. (Not "to the exclusion of", just that > MWI definitely existed whether or not other solutions also existed.) > All quantum theories are many-worlds theories. It is just that some quantum theories propose that all the other branches suddenly disappear (under conditions they can neither define, nor test). > > Thus, more-super-than-was-initially-believed determinism exists at > least sometimes. It seems to be a smaller leap to suspect that it > exists all the time, than to bring in another solution that has yet to > be conclusively demonstrated at least once. Thus does Occam's Razor > suggest to me some form of at least mild superdeterminism. > Determinism, yes. 
But photons having hidden variables assigned in a way that anticipates what a human brain, or a radioactive source, or a computer generating digits of sqrt(19) will be doing 10 light years away, no. If you mean something in between these two things, you will need to specify what exactly that is, and how hidden variables are selected to provide for the 75% anti-correlation rates we observe. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Nov 10 18:05:16 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 13:05:16 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 12:29?PM Jason Resch via extropy-chat wrote: > On Mon, Nov 10, 2025 at 12:04?PM Adrian Tymes via extropy-chat wrote: >> On Mon, Nov 10, 2025 at 9:32?AM John Clark via extropy-chat >> wrote: >> > On Sat, Nov 8, 2025 at 1:05?PM Jason Resch via extropy-chat wrote: >> >> > How would you describe the difference between "determinism" and "superdeterminism?" >> > >> > Determinism simply says that the present is uniquely caused by the past, that's it. It says nothing about initial conditions. Superdeterminism makes the additional assumption that out of the infinite number of states the past could've been in, it was actually in the one and only state that invalidates the scientific method and always makes fools of anybody who tries to investigate the fundamental nature of reality. >> >> To say that it "always makes fools of" implies a conscious desire and >> intent. No such thing is in evidence. >> >> One makes a fool of oneself if one insists that everything must be >> only a certain way despite the evidence, but it is neither the >> evidence nor reality itself that is doing the fool-making in this >> scenario. > > I think the problem here is that you are using the word "superdeterminism" to mean something else (to refer to your own theory), rather than what is conventionally understood to be meant by the word. > > If you follow the standard definition of superdeterminism, then *something* is operating to fool us. No, no such thing is. It doesn't matter if you go by strict/maximum superdeterminism or my "superdeterminism lite": neither theory involves any conscious entity with that sort of malicious intent. > The statistics make it impossible for any ordinary system of hidden variables to work, but if superdeterminism is true, then then there really are hidden variables, but the assignment of these variables operates in a manner that guarantees (however we may try to avoid it) that we see statistics that make us believe it just couldn't possibly be hidden variables. This is what John and I mean when we say that the universe operates in a way to "fool us." Problem is, "fool us" means there is conscious intent. That's the standard definition: "fool" here is a verb, therefore, something is doing the action upon "us", the noun, and "fool" involves conscious intent by definition. It's the definition of "fool us", not the definition of "superdeterminism", that's causing the issue here. > All quantum theories are many-worlds theories. It is just that some quantum theories propose that all the other branches suddenly disappear (under conditions they can neither define, nor test). Are you saying that superdeterminism is a many-world theory? I thought that the former was explicitly not, and is an alternative to, the latter set. 
> But photons having hidden variables assigned in a way that anticipates what a human brain, or a radioactive source, or a computer generating digits of sqrt(19) will be doing 10 light years away, no. No such anticipation is needed. Conditions can be set up so that if a human brain (or the alternatives) does its thing later, the results will be in a certain way, but that does not require anticipation of any specific actor doing that exact thing. It's like if I produce weighted dice, that will almost always come out with a 1 and a 6. If a later observer comes along, sees the dice, declares that they must be fair and independent, and keeps rolling 7...I did not specifically anticipate that, or any, observer. Nor did I arrange to make a fool out of that particular observer. That observer may have been born after I made those dice - indeed, possibly after I forgot all about them, migrated off of Earth (supposing the observer is born on Earth), et cetera. I have no animus toward nor knowledge of that observer. So, to say that I specifically made those dice to make a fool of that observer is demonstrably incorrect. Nor did I produce any sign saying that these dice are fair; the only one assuming they ever were fair is that observer. > If you mean something in between these two things, you will need to specify what exactly that is, and how hidden variables are selected to provide for the 75% anti-correlation rates we observe. Some things just are, with no "how" or "why" - at least, none that we can currently explain. Just because we can't explain it right now, doesn't mean that it isn't. How did the speed of light in our universe come to be what it is? And yet we can measure it, and confirm that it is that value. Why does the gravitational constant have the value that it does? And yet we can measure it, and confirm that it is that value. How is it that radioactive decay follows a logarithmic spread rather than linear? Why is it that, if half the particles in a sample decay in time X, only half of the rest will decay in a further interval of time X? If the particles are independent of one another, how do the undecayed particles know that half of their kin have decayed so they should have a lower chance of decaying? (The answer may have something to do with survivor's bias, but that doesn't explain how they got on a logarithmic spread in the first place.) From johnkclark at gmail.com Mon Nov 10 18:25:37 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 13:25:37 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Sat, Nov 8, 2025 at 1:30?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > * > the definition of this device presupposes that the results are linked > to, and depend on, the settings of the detectors (even if the detectors > themselves are independent and can not interact* *which means that something else, not specified in the definition, is* * doing this coordination). * *But it's possible to do more than just define such a device, you can actually build such a device, and I can tell you exactly how to do so. You probably don't remember but a decade ago on September 2 2015 you and I had a similar conversation on this list and on this topic in which you said: * *"Let us posit the existence of some universal observer. Let us call it God, because God knows all. God sees two entangled photons being formed, and knows their properties. 
God does not tell you, but God knowsSome time later, you find out the state of one of them. Does this cause a change in the other? No: it is still as God always knew it was"* *And this is how I responded. * *That would be equivalent to saying each photon has a lookup table telling the photon how to behave but it's encrypted and only God and the photon itself can decrypt it; that's what "hidden variable" means, something different about that particular photon that we just don't know about, a lookup table inside the photon that we are unable to access but the photon can when it wants to know if it should go through a filter or be stopped by one. However that can not be correct because we can now devise experiments that show that photons behave in ways that no lookup table could duplicate, not even a lookup table made by God. * *I have a black box, it has a red light and a blue light on it, it also has a rotary switch with 6 connections at the 12,2,4,6,8 and 10 o'clock positions. The red and blue light blink in a manner that passes all known tests for being completely random, this is true regardless of what position the rotary switch is in. Such a box could be made and still be completely deterministic by just pre-computing 6 different random sequences and recording them as a lookup table in the box. Now the box would know which light to flash.* *I have another black box. When both boxes have the same setting on their rotary switch they both produce the same random sequence of light flashes. This would also be easy to reproduce in a classical physics world, just record the same 6 random sequences in both boxes. * *The set of two boxes has another property, if the switches are set to opposite positions, 12 and 6 o'clock for example, there is a total negative correlation, when one flashes red the other box flashes blue and when one box flashes blue the other flashes red. This just makes it all the easier to make the boxes because now you only need to pre-calculate 3 random sequences, then just change every 1 to 0 and every 0 to 1 to get the other 3 sequences and record all 6 in both boxes.* *The boxes have one more feature that makes things very interesting, if the rotary switch on a box is one notch different from the setting on the other box then the sequence of light flashes will on average be different 1 time in 4. How on Earth could I make the boxes behave like that? Well, I could change on average one entry in 4 of the 12 o'clock lookup table (hidden variable) sequence and make that the 2 o'clock table. Then change 1 in 4 of the 2 o'clock and make that the 4 o'clock, and change 1 in 4 of the 4 o'clock and make that the 6 o'clock. So now the light flashes on the box set at 2 o'clock is different from the box set at 12 o'clock on average by 1 flash in 4. The box set at 4 o'clock differs from the one set at 12 by 2 flashes in 4, and the one set at 6 differs from the one set at 12 by 3 flashes in 4.* *But I said before that the boxes at opposite settings should have a 100% anti-correlation, the flashes on the box set at 12 o'clock should differ from the box set at 6 o'clock by 4 flashes in 4 NOT 3 flashes in 4. Thus if the boxes work by hidden variables then when one is set to 12 o'clock and the other to 2 there MUST be a 2/3 correlation, at 4 a 1/3 correlation, and of course at 6 no correlation at all. 
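A small simulation may make the lookup-table arithmetic concrete. The sketch below follows the construction just described: a long random flash sequence for the 12 o'clock setting, with each later setting's table made by flipping a randomly chosen quarter of the previous one's entries. The table length, random seed, and helper names are illustrative choices only, not anything from the original argument:

    import random

    random.seed(0)
    N = 100_000                      # entries in each pre-computed flash table

    def flip_quarter(table):
        # Copy a lookup table, flipping a randomly chosen 1/4 of its entries,
        # so adjacent settings disagree on about 1 flash in 4.
        out = list(table)
        for i in random.sample(range(N), N // 4):
            out[i] ^= 1
        return out

    def disagreement(a, b):
        return sum(x != y for x, y in zip(a, b)) / N

    t12 = [random.randint(0, 1) for _ in range(N)]   # 12 o'clock table
    t2 = flip_quarter(t12)                           # 2 o'clock
    t4 = flip_quarter(t2)                            # 4 o'clock
    t6 = flip_quarter(t4)                            # 6 o'clock

    for label, a, b in (("12 vs 2", t12, t2), ("2 vs 4", t2, t4),
                        ("4 vs 6", t4, t6), ("12 vs 6", t12, t6)):
        print(label, "disagreement:", round(disagreement(a, b), 3))

The first three figures come out near 0.25, as built. The 12-vs-6 disagreement comes out near 0.44 here, and even if no flip ever cancelled another it could not exceed 0.25 + 0.25 + 0.25 = 0.75, which is short of the 100% anti-correlation the boxes are required to show.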
A correlation greater than 2/3, such as 3/4, for adjacent settings produces paradoxes, at least it would if you expected everything to work mechanistically because of some hidden variable involved. * *Does this mean it's impossible to make two boxes that have those specifications? Nope, but it does mean hidden variables can not be involved and that means something very weird is going on. Actually it would be quite easy to make a couple of boxes that behave like that, it's just not easy to understand how that could be. * *Photons behave in just this spooky manner, so to make the boxes all you need it 4 things:* *1) A glorified light bulb, something that will make two photons of unspecified but identical polarization moving in opposite directions so you can send one to each box. An excited calcium atom would do the trick, or you could turn a green photon into two identical lower energy red photons with a crystal of potassium dihydrogen phosphate.* *2) A light detector sensitive enough to observe just one photon. Incidentally the human eye is not quite good enough to do that but frogs can, for frogs when light gets very weak it must stop getting dimmer and appear to flash. * *3) A polarizing filter, we've had these for a century or more.* *4) Some gears and pulleys so that each time the rotary switch is advanced one position the filter is advanced by 30 degrees. This is because it's been known for many years that the amount of light polarized at 0 degrees that will make it through a polarizing filter set at X degrees is [COS (x)]^2; and if x = 30 DEGREES then the value is .75* *Light is made of photons so that translates to the probability any individual photon will make it through the filter is 75%.* *The bottom line of all this is that there can not be something special about a specific photon, some internal difference, some hidden variable put in there by God that determines if it makes it through a filter or not. John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Nov 10 18:58:43 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 13:58:43 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> Message-ID: On Sat, Nov 8, 2025 at 10:50?AM Ben Zaiboc via extropy-chat wrote: > And "person your posts!". FWIW, in certain sci-fi I have written, I tend to use "crew your posts" for this - or, more often, versions that reflect why. "Landfall imminent. Get ready.", "Go do your job.", or the classic "Battle stations!". From atymes at gmail.com Mon Nov 10 19:07:14 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 14:07:14 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> References: <28777cf7-92f1-44fb-bee2-d95d65121aa6@zaiboc.net> <836dc250-7a6a-45ff-b800-4e4ef30ac2bc@zaiboc.net> <00c201dc5192$1b9f6be0$52de43a0$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 11:02?AM spike jones via extropy-chat wrote: > >?Ja, that and they fixed the split infinitive by switching to ?to go boldly? replacing ?to boldly go? tossing us grammar nazis a bone. But it also makes us realize that eliminating the split infinitive was not a real improvement. Picard?s revised introduction still doesn?t eliminate the Columbus problem. 
When I was first told Columbus discovered the Americas, I knew that was false: there were already people here, plenty of them. Where Picard went had ?ones? already there, the Vulcans, the beloved Feringi (I really relate to them for some odd reason) the Klingons and so forth. > > Better would have been ??to go boldly where no earth-evolved human has gone before?? Quite often, they went where Earth-evolved humans had gone - and frequently (though not always) knew it ahead of arrival. They did sometimes go where not even the Vulcans, Ferengi, or Klingons had gone before, though not always boldly. From jasonresch at gmail.com Mon Nov 10 19:11:00 2025 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 10 Nov 2025 14:11:00 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 1:06?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Nov 10, 2025 at 12:29?PM Jason Resch via extropy-chat > wrote: > > On Mon, Nov 10, 2025 at 12:04?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Nov 10, 2025 at 9:32?AM John Clark via extropy-chat > >> wrote: > >> > On Sat, Nov 8, 2025 at 1:05?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> > How would you describe the difference between "determinism" and > "superdeterminism?" > >> > > >> > Determinism simply says that the present is uniquely caused by the > past, that's it. It says nothing about initial conditions. Superdeterminism > makes the additional assumption that out of the infinite number of states > the past could've been in, it was actually in the one and only state that > invalidates the scientific method and always makes fools of anybody who > tries to investigate the fundamental nature of reality. > >> > >> To say that it "always makes fools of" implies a conscious desire and > >> intent. No such thing is in evidence. > >> > >> One makes a fool of oneself if one insists that everything must be > >> only a certain way despite the evidence, but it is neither the > >> evidence nor reality itself that is doing the fool-making in this > >> scenario. > > > > I think the problem here is that you are using the word > "superdeterminism" to mean something else (to refer to your own theory), > rather than what is conventionally understood to be meant by the word. > > > > If you follow the standard definition of superdeterminism, then > *something* is operating to fool us. > > No, no such thing is. It doesn't matter if you go by strict/maximum > superdeterminism or my "superdeterminism lite": neither theory > involves any conscious entity with that sort of malicious intent. > Note: nowhere in my explanation did I make any reference to consciousness, maliciousness, nor an entity. If you want to understand why I, John, and so many others have such a strong distaste for superdeterminism, you must fully understand what is involved in the ordinary definition of it. If it were regular determinism, and if it could explain away the quantum statistics, I would be 100% on board with you, this is the simplest theory, we don't need FTL influences or other universes. But that is not what superdeterminism is. 
> > > The statistics make it impossible for any ordinary system of hidden > variables to work, but if superdeterminism is true, then then there really > are hidden variables, but the assignment of these variables operates in a > manner that guarantees (however we may try to avoid it) that we see > statistics that make us believe it just couldn't possibly be hidden > variables. This is what John and I mean when we say that the universe > operates in a way to "fool us." > > Problem is, "fool us" means there is conscious intent. That's the > standard definition: "fool" here is a verb, therefore, something is > doing the action upon "us", the noun, and "fool" involves conscious > intent by definition. > > It's the definition of "fool us", not the definition of > "superdeterminism", that's causing the issue here. > Do you understand why "fool us" is the most appropriate verb to describe what must be involved in a superdeterministic universe? If not, then I would have to conclude you are not fully appreciating the difference between determinism and superdeterminism. > > > All quantum theories are many-worlds theories. It is just that some > quantum theories propose that all the other branches suddenly disappear > (under conditions they can neither define, nor test). > > Are you saying that superdeterminism is a many-world theory? I > thought that the former was explicitly not, and is an alternative to, > the latter set. > Full many-worlds (in contast to the "semi-many-worlds" other theories) says that the branches continue to exist after a conscious measurement. The other theories say the other worlds only exist so long as we're not looking. But any quantum theory that describes the two-slit experiment, necessarily involves unobserved "mirror photons" that are there and have real-world observable effects (like interference). These mirror photons represent other possible states the (multi)/(uni)verse is simultaneously in. So to the extent that superdeterminism explains the two-slit experiment, it too would be a quasi-many worlds theory. It just (like all the others) assumes other branches stop existing after a measurement is made. Normally superdeterminism is put forward only as a means to explain Bell inequalities, and usually it is silent on conventional/standard QM ideas like the two-slit experiment, so I don't know how a usual superdeterministic would explain the mirror photon and its interference effects. > > > But photons having hidden variables assigned in a way that anticipates > what a human brain, or a radioactive source, or a computer generating > digits of sqrt(19) will be doing 10 light years away, no. > > No such anticipation is needed. Conditions can be set up so that if a > human brain (or the alternatives) does its thing later, the results > will be in a certain way, but that does not require anticipation of > any specific actor doing that exact thing. > If you still insist on this, I have to conclude that you do not understand the consequences of Bell's Theorem ( https://en.wikipedia.org/wiki/Bell%27s_theorem ) as they relate to the experiment. There is simply no way that an entangled photon (in a single state (not a superposition of multiple simultaneous states)) can carry enough information with it (from the time it is first created) to know to disagree with its partner photon 75% of the time, when the polarizing filters are offset by 120 degrees, and to agree with its partner photon 100% of the time when the polarizing filters are aligned. 
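One way to run the pen-and-paper exercise suggested just below as a few lines of Python instead is to enumerate every possible fixed instruction set a photon pair could carry for the three filter angles (0, 120, and 240 degrees), with both photons carrying the same set so that aligned filters always agree. This is only a sketch of the "fixed local information" model under discussion, nothing more:

    from itertools import product

    # Each instruction set fixes, for all three filter angles, whether the
    # photon Passes (P) or is Absorbed (A).  Both photons carry the same set,
    # which guarantees 100% agreement whenever the filters are aligned.
    settings = range(3)                      # angles 0, 120, 240 degrees
    mismatched = [(i, j) for i in settings for j in settings if i != j]

    worst = 0.0
    for rule in product("PA", repeat=3):
        disagree = sum(rule[i] != rule[j] for i, j in mismatched) / len(mismatched)
        print("".join(rule), "-> disagreement when settings differ:", round(disagree, 3))
        worst = max(worst, disagree)

    print("best any instruction set can do:", round(worst, 3))   # 0.667
    print("quantum prediction:", 0.75)                           # 1 - cos^2(120 deg)

No instruction set gets above 2/3 disagreement for mismatched settings, and a statistical mixture of instruction sets can only average those rows, so it cannot get above 2/3 either -- which is why the observed 75% cannot come from fixed information carried by the photons.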
This is what is meant when all physicists agree that single, local, hidden variables have been disproven. It is why a Nobel prize was given for the experimental work confirming Bell's predicted statistics. To clarify your position: are you claiming that Bell and the work of these physicists is wrong about ruling out "single, local, hidden variables"? I do not dispute that physicists can be wrong, but I want to know if this is what you are claiming here. Also: do you understand why the 75% anti-correlation is surprising? If not, I would suggest the exercise of taking a pen and paper, and trying to work out what the hidden variables would have to include to produce this anti-correlation using information locally present (about the setting of the polarizing filter) and the fixed information the photon carried with it when it was created --- *but importantly* without the photon (or its partner) already knowing the position the polarizing filter would be in, at the time the photon pair was created. It is only in attempting this that I think you will appreciate the significance of Bell's result. It is very subtle, I admit, but when you understand it, I think you will see just how remarkable it is and the implications it carries for the reality we are in. > > It's like if I produce weighted dice, that will almost always come out > with a 1 and a 6. If a later observer comes along, sees the dice, > declares that they must be fair and independent, and keeps rolling > 7...I did not specifically anticipate that, or any, observer. Nor did > I arrange to make a fool out of that particular observer. That > observer may have been born after I made those dice - indeed, possibly > after I forgot all about them, migrated off of Earth (supposing the > observer is born on Earth), et cetera. I have no animus toward nor > knowledge of that observer. So, to say that I specifically made those > dice to make a fool of that observer is demonstrably incorrect. Nor > did I produce any sign saying that these dice are fair; the only one > assuming they ever were fair is that observer. > Bell's inequality violation is stranger than weighted dice. Weighted dice can be easily explained mechanistically. The violation of Bell inequalities cannot be explained mechanistically by any classical machine behavior. Again, to appreciate this point, try to work out some table of information/behavioral results that can result in 75% anti-correlation. Think of it like this: Jack and Jill, Bob and Barbara, and Alex and Alice are three pairs of brother-sister siblings. Jack, Bob, and Alex stay on Earth, while Jill, Barbara, and Alice go to Proxima Centauri. Somewhere in between, special coins are created and sent to Earth and Proxima Centauri at the speed of light (so there is no time for causal interactions to have any effect). The special coins have this mysterious effect: - When any sibling pair flips the special coin, they always give the same result. - When any pair of non-siblings flip the special coin, they get different results 75% of the time. Explain how such a coin could be built that would have these properties, without the manufacturer of the coin knowing (at the time of manufacture) who will eventually flip the coins (and without relying on quantum mechanically entangled particles which show these statistics). > > > If you mean something in between these two things, you will need to > specify what exactly that is, and how hidden variables are selected to > provide for the 75% anti-correlation rates we observe. 
> > Some things just are, with no "how" or "why" - at least, none that we > can currently explain. Just because we can't explain it right now, > doesn't mean that it isn't. > If this is beyond the explanatory limit of your theory, then I would say you don't yet have a theory ready for us to discuss. Copenhagen, Many-Worlds, and Superdeterminism all have answers to this question. If yours does not, then it is still only proto-theory. It is okay to say: "I don't know how it works, but I dislike the answers existing theories provide," but that itself is not a position we can really debate the merits or advantages of, nor discuss how one would go about testing the idea. > > How did the speed of light in our universe come to be what it is? And > yet we can measure it, and confirm that it is that value. > > Why does the gravitational constant have the value that it does? And > yet we can measure it, and confirm that it is that value. > This gets back to the point I made to John earlier, regarding distinguishing brute facts without causes, and effects without causes. There are plenty of brute facts without causes, but if we are in a lawful, deterministic universe, then every event is an effect that has some preceding cause. When it comes to the Mermin device flashing red or green lights, those are causes within our universe. And if our universe is causal/deterministic, there should be an identifiable reason/cause/explanation for how those lights end up flashing as they do. Unless: you say, that effect was baked into the initial conditions of the universe, it just is that way, we can't explain it. But this is the same form of escapism that creationists play with dinosaur bones: "God (or the devil) just put them there. We can't explain how they got to be there. The universe was created just as it is now, a few thousand years ago, but it was created in a state that gives us the false impression that it is much older. Yes, I admit we can neither prove nor disprove this." Notice the parallel: "Superdeterminism just put the correlations there. We can't explain how the photons got the information they needed to produce the Bell inequality violations, the universe just is that way. It gives us the false impression that the QM does not have hidden variables, even though it does. Yes, I admit we can neither prove nor disprove this." > > How is it that radioactive decay follows a logarithmic spread rather > than linear? Why is it that, if half the particles in a sample decay > in time X, only half of the rest will decay in a further interval of > time X? If the particles are independent of one another, how do the > undecayed particles know that half of their kin have decayed so they > should have a lower chance of decaying? (The answer may have > something to do with survivor's bias, but that doesn't explain how > they got on a logarithmic spread in the first place.) > You can explain logarithmic decay without reference to other radioactive particles. Model any single nucleus as having some fixed "D%" probability of decaying over some length of time L, for any D, and any L. Now model some population of these particles. You will find they reproduce all the same statistics with half lives, exponential decay, etc. with an identifiable half life. All you need for this is a fixed, non-zero decay probability per unit of time. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
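A quick simulation of that last point: give every nucleus the same fixed, independent chance of decaying in each time step, and half-life behaviour appears on its own. The 5% per-step figure and the sample size below are arbitrary choices made only for illustration:

    import random

    random.seed(1)
    N = 100_000        # independent nuclei; they never interact
    p = 0.05           # fixed chance any one nucleus decays in a single step

    survivors = N
    for step in range(1, 57):
        survivors = sum(random.random() > p for _ in range(survivors))
        if step % 14 == 0:     # roughly one half-life (about ln 2 / p steps)
            print(f"after {step:2d} steps, {survivors / N:.3f} of the sample remains")

Each printed fraction is roughly half of the one before it (about 0.49, 0.24, 0.12, 0.06), even though no nucleus knows anything about the fate of the others.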
URL: From atymes at gmail.com Mon Nov 10 19:51:45 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 14:51:45 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 2:12?PM Jason Resch via extropy-chat wrote: > Note: nowhere in my explanation did I make any reference to consciousness, maliciousness, nor an entity. You use the term "fool us", which pretty much explicitly calls for such an entity. Fooling is a thing done by one entity to another. > Do you understand why "fool us" is the most appropriate verb to describe what must be involved in a superdeterministic universe? If not, then I would have to conclude you are not fully appreciating the difference between determinism and superdeterminism. I understand that it is not the most appropriate verb, unless you are referring entirely to what you do to yourself while trying to understand the universe - which is not how you've been using the verb. See below for another example of this sort of thing. >> Are you saying that superdeterminism is a many-world theory? I >> thought that the former was explicitly not, and is an alternative to, >> the latter set. > > Full many-worlds (in contast to the "semi-many-worlds" other theories) says that the branches continue to exist after a conscious measurement. The other theories say the other worlds only exist so long as we're not looking. > > But any quantum theory that describes the two-slit experiment, necessarily involves unobserved "mirror photons" that are there and have real-world observable effects (like interference). These mirror photons represent other possible states the (multi)/(uni)verse is simultaneously in. > > So to the extent that superdeterminism explains the two-slit experiment, it too would be a quasi-many worlds theory. It just (like all the others) assumes other branches stop existing after a measurement is made. > > Normally superdeterminism is put forward only as a means to explain Bell inequalities, and usually it is silent on conventional/standard QM ideas like the two-slit experiment, so I don't know how a usual superdeterministic would explain the mirror photon and its interference effects. Under strict (non-MWI) superdeterminism, if there are mirror photons, they are there in the real world. It's kind of like how if you have two sound waves that perfectly cancel each other out, so they can't be measured, they still exist. (Or at least, one existed prior to the cancellation - the act of producing the second "wave" consisted of exactly opposing the first wave, resulting in no wave - even if only the post-cancellation result was observed.) > There is simply no way that an entangled photon (in a single state (not a superposition of multiple simultaneous states)) can carry enough information with it (from the time it is first created) to know to disagree ... > to know to disagree That is something that a conscious entity does. That is not something that a non-conscious thing like a photon does. You keep using phrasing that ascribes conscious motivation to photons et al. I don't think you realize how much you have done so in this conversation. Whatever is keeping you from realizing that, may be keeping you from fully understanding my position. >> It's like if I produce weighted dice, that will almost always come out >> with a 1 and a 6. 
If a later observer comes along, sees the dice, >> declares that they must be fair and independent, and keeps rolling >> 7...I did not specifically anticipate that, or any, observer. Nor did >> I arrange to make a fool out of that particular observer. That >> observer may have been born after I made those dice - indeed, possibly >> after I forgot all about them, migrated off of Earth (supposing the >> observer is born on Earth), et cetera. I have no animus toward nor >> knowledge of that observer. So, to say that I specifically made those >> dice to make a fool of that observer is demonstrably incorrect. Nor >> did I produce any sign saying that these dice are fair; the only one >> assuming they ever were fair is that observer. > > Bell's inequality violation is stranger than weighted dice. Weighted dice can be easily explained mechanistically. You missed the entire point I was trying to make there. Try rereading that, and leave aside - for the moment - the actual quantum mechanics. It's about how you've been ascribing intent where there is none. >> > If you mean something in between these two things, you will need to specify what exactly that is, and how hidden variables are selected to provide for the 75% anti-correlation rates we observe. >> >> Some things just are, with no "how" or "why" - at least, none that we >> can currently explain. Just because we can't explain it right now, >> doesn't mean that it isn't. > > If this is beyond the explanatory limit of your theory, then I would say you don't yet have a theory ready for us to discuss. Copenhagen, Many-Worlds, and Superdeterminism all have answers to this question. If yours does not, then it is still only proto-theory. It is okay to say: "I don't know how it works, but I dislike the answers existing theories provide," but that itself is not a position we can really debate the merits or advantages of, nor discuss how one would go about testing the idea. If it is not a complete theory, then it is not. It is possible to know part of the truth, to identify problematic inconsistencies with all the popular theories, and to conclude that none of them are a true complete theory. It's kind of like how dark matter is a fudge for something we don't have a complete explanation of yet: things act as if there was this matter there, but we don't yet know how that matter is arranged or even if it truly exists. Would you say that dark matter itself was ever a complete theory, if no version of it fully explained everything we had observed? That said, my theory at least aligns with superdeterminism - so far as the evidence goes. It admits to the possibility that things go all the way back, but acknowledges that that has not yet been proven, and does not need to be to explain what we have seen. Thus why I simply call it that, though it might be more accurate to call it a subset of superdeterminism. > This gets back to the point I made to John earlier, regarding distinguishing brute facts without causes, and effects without causes. > There are plenty of brute facts without causes, but if we are in a lawful, deterministic universe, then every event is an effect that has some preceding cause. But you don't necessarily know the cause. What, for instance, caused the Big Bang? And is there any effect subsequent to that, in our universe, that does not ultimately trace its cause to or through the Big Bang? That depends. What causes the exact timing of atomic decay? Where does the chain of causality for that begin? 
Or, in MWI, does it literally just take the energy of one atom fissioning to also fission off another complete universe? From johnkclark at gmail.com Mon Nov 10 20:29:15 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 10 Nov 2025 15:29:15 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 12:05?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: * > Determinism simply says that the present is uniquely caused by the >> past, that's it. It says nothing about initial conditions. Superdeterminism >> makes the additional assumption that out of the infinite number of states >> the past could've been in, it was actually in the one and only state that >> invalidates the scientific method and always makes fools of anybody who >> tries to investigate the fundamental nature of reality.* > > > > * > To say that it "always makes fools of" implies a conscious desire > and intent. No such thing is in evidence.* *Yeah but don't expect me to defend Superdeterminism**! I think it's just about the most idiotic idea conceivable. * * >> I can't prove that Superdeterminism is wrong but I do think the >> probability of it being correct approaches zero. I am unable to think of a >> greater violation of Occam's Razor than Superdeterminism.* > > > * > My thinking to the contrary goes thus:I have investigated many cases > where apparent independence produced apparently contradictory results. In > some cases this was due to measurement error.* > *Yes that happened in some cases, but in other cases there was no measurement error. Things were just weird. * > * > In some cases, it was eventually discovered that there had been > a hidden dependence* *Yes that happened in some cases, but in other cases there was no hidden dependency. Things were just weird. And that was experimentally proven by Alain Aspect, John Clauser, and Anton Zeilinger, and that's why they received the Nobel prize for physics in 2022.* * > In some cases, no complete explanation has yet been found.* > *Maybe somebody will come up with a better idea tomorrow but so far only two explanations have been found for the bizarre outcome of certain experiments, just two:* *1) Superdeterminism, which is idiotic.* *2) Many Worlds, which is slightly less idiotic. * *There is one thing we know for certain, whatever the true nature of reality turns out to be it's going to be counterintuitive and weird, very very weird. * *> there could have been ahidden dependence going way back, before the data > available could attest. * *It doesn't matter how far back that hidden dependency goes, go infinitely far if you like, but it won't help. As John Bell proved when he derived his inequality, if we ignore superdeterminism (as we should!) then the only way that hidden factors, that is to say something that particles can access but we cannot, could produce the experimental results that we see is if it turns out that mathematicians have been wrong for millennia and that 2/3 is actually larger than 3/4. Do you think that is likely? * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Mon Nov 10 20:34:51 2025 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 10 Nov 2025 15:34:51 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: I am afraid I can't continue in this thread when my points get ignored, unaddressed, or deleted. In any event, I believe my previous reply provides enough information to reach an understanding of the incredible implications of that follow from superdeterminism, for anyone motivated to explore that further. Jason On Mon, Nov 10, 2025, 2:52 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Nov 10, 2025 at 2:12?PM Jason Resch via extropy-chat > wrote: > > Note: nowhere in my explanation did I make any reference to > consciousness, maliciousness, nor an entity. > > You use the term "fool us", which pretty much explicitly calls for > such an entity. Fooling is a thing done by one entity to another. > > > Do you understand why "fool us" is the most appropriate verb to describe > what must be involved in a superdeterministic universe? If not, then I > would have to conclude you are not fully appreciating the difference > between determinism and superdeterminism. > > I understand that it is not the most appropriate verb, unless you are > referring entirely to what you do to yourself while trying to > understand the universe - which is not how you've been using the verb. > > See below for another example of this sort of thing. > > >> Are you saying that superdeterminism is a many-world theory? I > >> thought that the former was explicitly not, and is an alternative to, > >> the latter set. > > > > Full many-worlds (in contast to the "semi-many-worlds" other theories) > says that the branches continue to exist after a conscious measurement. The > other theories say the other worlds only exist so long as we're not looking. > > > > But any quantum theory that describes the two-slit experiment, > necessarily involves unobserved "mirror photons" that are there and have > real-world observable effects (like interference). These mirror photons > represent other possible states the (multi)/(uni)verse is simultaneously in. > > > > So to the extent that superdeterminism explains the two-slit experiment, > it too would be a quasi-many worlds theory. It just (like all the others) > assumes other branches stop existing after a measurement is made. > > > > Normally superdeterminism is put forward only as a means to explain Bell > inequalities, and usually it is silent on conventional/standard QM ideas > like the two-slit experiment, so I don't know how a usual > superdeterministic would explain the mirror photon and its interference > effects. > > Under strict (non-MWI) superdeterminism, if there are mirror photons, > they are there in the real world. It's kind of like how if you have > two sound waves that perfectly cancel each other out, so they can't be > measured, they still exist. (Or at least, one existed prior to the > cancellation - the act of producing the second "wave" consisted of > exactly opposing the first wave, resulting in no wave - even if only > the post-cancellation result was observed.) > > > There is simply no way that an entangled photon (in a single state (not > a superposition of multiple simultaneous states)) can carry enough > information with it (from the time it is first created) to know to disagree > > ... 
> > > to know to disagree > > That is something that a conscious entity does. That is not something > that a non-conscious thing like a photon does. > > You keep using phrasing that ascribes conscious motivation to photons > et al. I don't think you realize how much you have done so in this > conversation. Whatever is keeping you from realizing that, may be > keeping you from fully understanding my position. > > >> It's like if I produce weighted dice, that will almost always come out > >> with a 1 and a 6. If a later observer comes along, sees the dice, > >> declares that they must be fair and independent, and keeps rolling > >> 7...I did not specifically anticipate that, or any, observer. Nor did > >> I arrange to make a fool out of that particular observer. That > >> observer may have been born after I made those dice - indeed, possibly > >> after I forgot all about them, migrated off of Earth (supposing the > >> observer is born on Earth), et cetera. I have no animus toward nor > >> knowledge of that observer. So, to say that I specifically made those > >> dice to make a fool of that observer is demonstrably incorrect. Nor > >> did I produce any sign saying that these dice are fair; the only one > >> assuming they ever were fair is that observer. > > > > Bell's inequality violation is stranger than weighted dice. Weighted > dice can be easily explained mechanistically. > > You missed the entire point I was trying to make there. Try rereading > that, and leave aside - for the moment - the actual quantum mechanics. > It's about how you've been ascribing intent where there is none. > > >> > If you mean something in between these two things, you will need to > specify what exactly that is, and how hidden variables are selected to > provide for the 75% anti-correlation rates we observe. > >> > >> Some things just are, with no "how" or "why" - at least, none that we > >> can currently explain. Just because we can't explain it right now, > >> doesn't mean that it isn't. > > > > If this is beyond the explanatory limit of your theory, then I would say > you don't yet have a theory ready for us to discuss. Copenhagen, > Many-Worlds, and Superdeterminism all have answers to this question. If > yours does not, then it is still only proto-theory. It is okay to say: "I > don't know how it works, but I dislike the answers existing theories > provide," but that itself is not a position we can really debate the merits > or advantages of, nor discuss how one would go about testing the idea. > > If it is not a complete theory, then it is not. > > It is possible to know part of the truth, to identify problematic > inconsistencies with all the popular theories, and to conclude that > none of them are a true complete theory. > > It's kind of like how dark matter is a fudge for something we don't > have a complete explanation of yet: things act as if there was this > matter there, but we don't yet know how that matter is arranged or > even if it truly exists. Would you say that dark matter itself was > ever a complete theory, if no version of it fully explained everything > we had observed? > > That said, my theory at least aligns with superdeterminism - so far as > the evidence goes. It admits to the possibility that things go all > the way back, but acknowledges that that has not yet been proven, and > does not need to be to explain what we have seen. Thus why I simply > call it that, though it might be more accurate to call it a subset of > superdeterminism. 
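A quick numerical check of the Bell-type correlation numbers at issue in this exchange, as a minimal Python sketch. The cos(2θ) correlation function and the angle choices below are the standard textbook illustration for polarization-entangled photon pairs, not values taken from any poster's argument or from the experiments discussed:

    import numpy as np

    # Quantum correlation for polarization-entangled photon pairs measured
    # with polarizers at angles a and b (radians); standard textbook result.
    def E(a, b):
        return np.cos(2.0 * (a - b))

    a1, a2 = 0.0, np.pi / 4            # one observer's two settings: 0 and 45 degrees
    b1, b2 = np.pi / 8, 3 * np.pi / 8  # the other's settings: 22.5 and 67.5 degrees

    # CHSH combination: any local hidden-variable model obeys |S| <= 2.
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(f"quantum CHSH value S = {S:.3f}")  # about 2.828 = 2*sqrt(2), above the bound

The point of the sketch is only that the quantum prediction exceeds the bound any locally predetermined "instruction set" can reach; which way one escapes that tension (many worlds, superdeterminism, or something else) is exactly what is being debated above.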
> > > This gets back to the point I made to John earlier, regarding > distinguishing brute facts without causes, and effects without causes. > > There are plenty of brute facts without causes, but if we are in a > lawful, deterministic universe, then every event is an effect that has some > preceding cause. > > But you don't necessarily know the cause. What, for instance, caused > the Big Bang? And is there any effect subsequent to that, in our > universe, that does not ultimately trace its cause to or through the > Big Bang? > > That depends. What causes the exact timing of atomic decay? Where > does the chain of causality for that begin? Or, in MWI, does it > literally just take the energy of one atom fissioning to also fission > off another complete universe? > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Nov 10 21:44:26 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 16:44:26 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 3:36?PM Jason Resch via extropy-chat wrote: > I am afraid I can't continue in this thread when my points get ignored, unaddressed, or deleted. Fair enough. I have been trying to address what appeared to be the most relevant and important points, in an attempt to keep the discussion somewhat productive. > In any event, I believe my previous reply provides enough information to reach an understanding of the incredible implications of that follow from superdeterminism, for anyone motivated to explore that further. I think we can agree that, if nothing else, we have both thoroughly expressed our points of view, for the benefit of others who read this list. :) From jasonresch at gmail.com Mon Nov 10 21:50:15 2025 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 10 Nov 2025 16:50:15 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025, 4:45 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Nov 10, 2025 at 3:36?PM Jason Resch via extropy-chat > wrote: > > I am afraid I can't continue in this thread when my points get ignored, > unaddressed, or deleted. > > Fair enough. I have been trying to address what appeared to be the > most relevant and important points, in an attempt to keep the > discussion somewhat productive. > > > In any event, I believe my previous reply provides enough information to > reach an understanding of the incredible implications of that follow from > superdeterminism, for anyone motivated to explore that further. > > I think we can agree that, if nothing else, we have both thoroughly > expressed our points of view, for the benefit of others who read this > list. :) > Thank you, I agree. ?? We've reached as good a place as any to draw this one to a close. I salute you. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Mon Nov 10 21:54:22 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 16:54:22 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 3:29?PM John Clark wrote: > Yeah but don't expect me to defend Superdeterminism! I think it's just about the most idiotic idea conceivable. I don't think either of us is convincing the other. The point of this discussion is thus the audience. >> In some cases this was due to measurement error. > > Yes that happened in some cases, but in other cases there was no measurement error. Thus why I said "some cases", not "all cases". >> In some cases, it was eventually discovered that there had been a hidden dependence > > Yes that happened in some cases, but in other cases there was no hidden dependency. Things were just weird. And that was experimentally proven by Alain Aspect, John Clauser, and Anton Zeilinger, and that's why they received the Nobel prize for physics in 2022. IIRC, they proved no dependencies that could have been generated after the experiment was set up and they were measuring things. > Maybe somebody will come up with a better idea tomorrow but so far only two explanations have been found for the bizarre outcome of certain experiments, just two: > > 1) Superdeterminism, which is idiotic. > 2) Many Worlds, which is slightly less idiotic. I thought those were just two of the most prominent, but that there were others...? > There is one thing we know for certain, whatever the true nature of reality turns out to be it's going to be counterintuitive and weird, very very weird. On this we agree - at least, counterintuitive and weird from our current point of view. I am reminded of doctors discussing sanitization before the advent of germ theory: quite a few were highly opposed. > It doesn't matter how far back that hidden dependency goes, go infinitely far if you like, but it won't help. As John Bell proved when he derived his inequality, if we ignore superdeterminism (as we should!) then the only way that hidden factors, that is to say something that particles can access but we cannot, could produce the experimental results that we see is if it turns out that mathematicians have been wrong for millennia and that 2/3 is actually larger than 3/4. Do you think that is likely? I think it is likely that you ignore superdeterminism, and see weirdness resulting from trying to ignore it. From atymes at gmail.com Mon Nov 10 21:55:06 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 16:55:06 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 4:51?PM Jason Resch via extropy-chat wrote: > Thank you, I agree. ?? We've reached as good a place as any to draw this one to a close. I salute you. I salute and thank you too, sir! From spike at rainier66.com Mon Nov 10 21:59:42 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 13:59:42 -0800 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: <014a01dc528d$51ee9990$f5cbccb0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via Subject: Re: [ExI] Google?s Willow Quantum Chip: Proof of the Multiverse? 
On Mon, Nov 10, 2025 at 4:51?PM Jason Resch via extropy-chat wrote: > Thank you, I agree. ?? We've reached as good a place as any to draw this one to a close. I salute you. I salute and thank you too, sir! _______________________________________________ Thanks to both Adrian and Jason. I didn't post on the thread, but learned much. In the way of quantum anything, I still don't understand it, but I figure QM has been around for over a century and we humans just don't really have a handle on it collectively. spike From pharos at gmail.com Mon Nov 10 22:16:02 2025 From: pharos at gmail.com (BillK) Date: Mon, 10 Nov 2025 22:16:02 +0000 Subject: [ExI] Summary - Many Worlds versus Super-determinism Message-ID: Adrian, Jason and John have had a long discussion on this subject. I asked Kimi K2 Thinking AI to summarise the differences between these theories. (Hoping to add some clarification). BillK ------------- Kimi K2 Thinking AI - Here?s a detailed breakdown of why most physicists prefer the Many Worlds Interpretation (MWI) over superdeterminism, despite both being deterministic theories: Click the link to view the complete summary < https://www.kimi.com/share/19a6fcce-a8a2-8623-8000-0000e0a140f6> ## Bottom Line: The Consensus Preference MWI is preferred because: 1. **Preserves statistical independence** (the bedrock of experimental science) 2. **Avoids cosmic conspiracy** (no need for fine-tuned initial conditions) 3. **Accepts only "tame" non-locality** (no faster-than-light causation) 4. **Maintains scientific method** (experiments test real hypotheses) 5. **More parsimonious** (uses existing mathematical structure) Superdeterminism, by contrast, is seen as a **desperate loophole** that "undercuts the whole scientific method" to avoid admitting quantum non-locality. While not definitively disproven, its cost to scientific epistemology is considered too high -------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Nov 10 22:44:32 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 17:44:32 -0500 Subject: [ExI] Summary - Many Worlds versus Super-determinism In-Reply-To: References: Message-ID: Checking the link, you start the QM-relevant part of the discussion (after a question on financial debt) with, "can you explain superdeterminism and explain why most physicists reject it". As you know, if you include a conclusion in the question like that, the AI will tend to reach that conclusion. On Mon, Nov 10, 2025 at 5:17?PM BillK via extropy-chat wrote: > > Adrian, Jason and John have had a long discussion on this subject. > I asked Kimi K2 Thinking AI to summarise the differences between these theories. > (Hoping to add some clarification). > BillK > ------------- > > Kimi K2 Thinking AI - > Here?s a detailed breakdown of why most physicists prefer the Many Worlds Interpretation (MWI) over superdeterminism, despite both being deterministic theories: > > Click the link to view the complete summary > > ## Bottom Line: The Consensus Preference > > MWI is preferred because: > 1. **Preserves statistical independence** (the bedrock of experimental science) > 2. **Avoids cosmic conspiracy** (no need for fine-tuned initial conditions) > 3. **Accepts only "tame" non-locality** (no faster-than-light causation) > 4. **Maintains scientific method** (experiments test real hypotheses) > 5. 
**More parsimonious** (uses existing mathematical structure) > > Superdeterminism, by contrast, is seen as a **desperate loophole** that "undercuts the whole scientific method" to avoid admitting quantum non-locality. While not definitively disproven, its cost to scientific epistemology is considered too high > -------------------------- > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Mon Nov 10 23:01:06 2025 From: pharos at gmail.com (BillK) Date: Mon, 10 Nov 2025 23:01:06 +0000 Subject: [ExI] Summary - Many Worlds versus Super-determinism In-Reply-To: References: Message-ID: On Mon, 10 Nov 2025 at 22:46, Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Checking the link, you start the QM-relevant part of the discussion > (after a question on financial debt) with, "can you explain > superdeterminism and explain why most physicists reject it". As you > know, if you include a conclusion in the question like that, the AI > will tend to reach that conclusion. > _______________________________________________ Yes, that's correct. It was quite a long session, with several different questions. I didn't know that the Share link would link the whole session rather than just the last question. Read on down to the last question. :) BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Nov 10 23:20:05 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 18:20:05 -0500 Subject: [ExI] Summary - Many Worlds versus Super-determinism In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 6:03?PM BillK via extropy-chat wrote: > On Mon, 10 Nov 2025 at 22:46, Adrian Tymes via extropy-chat wrote: >> Checking the link, you start the QM-relevant part of the discussion >> (after a question on financial debt) with, "can you explain >> superdeterminism and explain why most physicists reject it". As you >> know, if you include a conclusion in the question like that, the AI >> will tend to reach that conclusion. > > Yes, that's correct. It was quite a long session, with several different questions. > I didn't know that the Share link would link the whole session rather than just the last question. > Read on down to the last question. :) Problem is, if you bias it early on like that, subsequent answers in the same session will remain tainted and may not be accurate. (Unless the initial bias was completely accurate, but then it wouldn't be a "bias" in most senses.) 
From atymes at gmail.com Mon Nov 10 23:25:20 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 10 Nov 2025 18:25:20 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> <033d01dc4de6$f18ae220$d4a0a660$@rainier66.com> Message-ID: On Tue, Nov 4, 2025 at 7:00?PM spike jones via extropy-chat wrote: > This would be cool, and probably already exists: a power grid management sim game, a good realistic simulation, where you hafta make all the investment decisions and technical calls on everything, do the design of the grid, perhaps with multiple players as managers of competing and collaborating adjacent power grids. > > Does anyone here know if such a thing exists. One would think so. https://www.google.com/search?q=power+management+simulator shows quite a few. You'll have to judge how good and realistic they are. From pharos at gmail.com Mon Nov 10 23:31:28 2025 From: pharos at gmail.com (BillK) Date: Mon, 10 Nov 2025 23:31:28 +0000 Subject: [ExI] Summary - Many Worlds versus Super-determinism In-Reply-To: References: Message-ID: On Mon, 10 Nov 2025 at 23:22, Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Problem is, if you bias it early on like that, subsequent answers in > the same session will remain tainted and may not be accurate. (Unless > the initial bias was completely accurate, but then it wouldn't be a > "bias" in most senses.) > _______________________________________________ > No problem. Ask it yourself, to try and persuade it differently. :) It has a free tier, for a few questions per day. BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Nov 11 00:55:50 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Nov 2025 16:55:50 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <01bb01dc52a4$feac2140$fc0463c0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <00a601dc4ce3$30b66820$92233860$@rainier66.com> <033d01dc4de6$f1 8ae220$d4a0a660$@rainier66.com> <01bb01dc52a4$feac2140$fc0463c0$@rainier66.com> Message-ID: <01cf01dc52a5$ed366a00$c7a33e00$@rainier66.com> From: spike at rainier66.com On Tue, Nov 4, 2025 at 7:00?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This would be cool, and probably already exists: a power grid management sim game, a good realistic simulation? > Does anyone here know if such a thing exists. One would think so. >?https://www.google.com/search?q=power+management+simulator shows quite a few. You'll have to judge how good and realistic they are. _______________________________________________ Cool thx. 
I visited the Moss Landing site yesterday, where the battery fire happened. I wanted to look at the site to see for myself its potential. It is hard to see much of anything from the road side, as they have everything blocked off. You can see scorched buildings nearby which apparently did not burn. I rented a kayak and paddled around back. Afterwards I came away convinced that site would be built back with batteries, and realized why: it has the infrastructure in place which makes it extremely valuable as an energy storage site. What I don?t know and what determines if I will invest in it, is what fire protection systems they will have in place this time. There hasta be a way to isolate a thermal runaway to protect the rest of the batteries. It was a nice kayaking day and very informative. Those two vertical features are smoke stacks from the Moss Landing power plant, on which is the lithium battery storage facility. I predict the Moss Landing station will be back within four years (assuming the Singularity doesn?t happen before then.) spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 14569 bytes Desc: not available URL: From hkeithhenson at gmail.com Tue Nov 11 05:32:42 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 10 Nov 2025 21:32:42 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 3:13?PM Keith Henson wrote: > > On Sun, Nov 9, 2025 at 7:52?AM spike jones via extropy-chat > wrote: > > > > From: John Clark > > >? Elon Musk's popularity took a dramatic hit when he gave $277 million? > > > > I am surprised there aren?t limits to how much one person can donate to an election. I thought there were limits. > > > > >?to get a 34 time convicted felon? > > > > What was this felon convicted of, please? > > Easy to look up. https://en.wikipedia.org/wiki/Indictments_against_Donald_Trump Keith > sniii > > > The world is a much better place because of TwitterX. > > Matter of opinion. In one of the Charles Stross stories, there is a > note on a file cabinet, "The truth is in here--somewhere." Twitter > may have the truth somewhere, but your chances of finding it are not > good. > > Keith > > > > > And StarLink! How could I forget that one? > > > > > > > > >?That one is more complicated to judge? > > > > > > > > Ja, as is every major innovation. That innovation is especially important to me because I often go camping out where there is no internet, and I want to have the option of full internet access, along with the safety factor of having access to information and communications. > > > > > > > > I used StarLink a lot in Canada last summer. But more than that, the local housing situation is dire. StarLink offers full internet, moderate bandwidth, at a cost of 11 watts, easily supplied by a camper or car battery. This enables urban campers, which means a larger candidate pool for local startups. Their lower-end professional class have the option of living in a van for their internship, offering them flexibility, and enabling them to accept jobs which may pay less than 100k. 
Their home is where they park it. Save on rent, utilities, etc. Having urban campers in my neighborhood (I do) doesn?t bother me a bit, for I understand why they are there. I would be doing likewise, were I to be in their situation. That one vehicle is transportation to work, to anywhere, and a home. A used cargo van makes a marvelous custom stealth camper conversion and doesn?t even cost much: > > > > > > > > https://www.carmax.com/cars/dodge/ram/promaster-2500/3d-cargo-van?adcode=SEMMUSEDSA5&vzmadcode=SEM1409207827610&utm_source=bing&utm_content=sem_carmax_sales&utm_term=dodge&utm_campaign=1261140775795569&physical_loc=89376&interest_loc=&&msclkid=c7d24b429a4c1344787b22522dbacd8b&utm_source=bing&utm_medium=cpc&utm_campaign=BNG_TMU_DSA_Make&utm_term=dodge&utm_content=BNG_TMU_DSA_Make&gclid=c7d24b429a4c1344787b22522dbacd8b&gclsrc=3p.ds > > > > > > > > >?You didn't mention another of Musk's companies but I will, xAI, it's his AI Company in the creators of Grok? > > > > > > > > I have only used Grok once, last week, and I like it. I will circle around to it once the current emergency passes. > > > > > > > > >? Musk is still in the plus column... John K Clark > > > > > > > > Ja, he is in the plus column for me, but full disclosure is necessitated: Musk has increased my personal wealth. No I didn?t buy into Tesla at 6 (dammit.) But his building EVs in a factory within walking distance of my house has done wonders for my property value and my neighborhood. I am also a property owner (by inheritance) within driving distance of SpaceX Cape Canaveral, which is helping drive up the value of that property dramatically. > > > > > > > > Regarding his recent foray into politics: time will tell, but I predict we will look back and realize he was telling the truth back in 2025. We should have followed his advice. > > > > > > > > So, even disregarding the marvelous StarLink and the gift of TwitterX, Musk?s ambitions have directly blessed me twice, just by sheer fortunate coincidence. Consider my positive attitude toward him as colored by my deplorable self-interest and insatiable greed. > > > > > > > > spike > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Tue Nov 11 13:04:33 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 11 Nov 2025 08:04:33 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Mon, Nov 10, 2025 at 4:56?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: * >>> In some cases, it was eventually discovered that there had been a >>> hidden dependence* >> >> > * >> Yes that happened in some cases, but in other cases there was no >> hidden dependency. Things were just weird. And that was experimentally >> proven by Alain Aspect, John Clauser, and Anton Zeilinger, and that's why >> they received the Nobel prize for physics in 2022.* > > > > * > IIRC, they proved no dependencies that could have been generated after > the experiment was set up and they were measuring things.* > *Astronomers have done a lot better than that!! 
It has been proven experimentally that if some sort of cosmic conspiracy had been generated (superdeterminism) that conspiracy must be older than 7.8 billion years, nearly 4 billion years older than the sun. The probability the observed correlations were just a coincidence is one part in 100 billion billion. * *Light from ancient quasars helps confirm quantum entanglement* > * >> There is one thing we know for certain, whatever the true nature of > reality turns out to be it's going to be counterintuitive and weird, very > very weird.* > > * > On this we agree - at least, counterintuitive and weird from > our current point of view. * *I don't think human brains are wired up in a way that will ever allow them to make quantum mechanics seem intuitively obvious, things might be different for Mr. Jupiter Brain. * *> I think it is likely that you ignore superdeterminism, and see weirdness > resulting from trying to ignore it.* *Occam's Razor is about an economy of assumptions NOT an economy of results. The many worlds in the Many Worlds Theory are NOT an assumption, they are a consequence of the one and only assumption that the theory makes, Schrodinger's equation means what it says. As for competing ideas?* *I can't prove that superdeterminism is wrong but as I've said before, I can prove that superdeterminism is silly. The greater the violation of Occam's razor that your theory needs to be true the sillier it is, and by that metric it would be impossible to be sillier than superdeterminism. * *Objective collapse theory makes the additional assumption that Schrodinger's equation is not quite right so even though there is no experimental evidence that it is wrong an additional random term needs to be added to it so that the equation is no longer deterministic, and it does nothing but get rid of those pesky other worlds. This is my second favorite quantum interpretation because right now experiments are underway to see if they can find evidence for that new random term, I don't think they will find anything but if they do then Many Worlds is definitely wrong. So much for those who say Many Worlds is not scientific because it is not falsifiable. * *It's difficult to know what the Copenhagen interpretation is saying because even among its fans they can't agree but some, probably a majority, are saying there are two different sets of laws of physics, one set is for matter that has been formed into conscious scientists and the other set is for matter that has been formed into things that are not conscious. Many Worlds says there is only one set of physical laws. * *And then we have David Bohm's quantum interpretation; he keeps Schrodinger's equation but adds another equation for what he calls the "pilot wave" which has some very unusual properties. The pilot wave is extremely non-local, it has to take the state of the entire universe into account in order to know if it should guide an electron through the right slit or the left slit in an experiment, and influences can be instantaneous, and distance does not diminish effects, so an electron in the Andromeda galaxy might be just as important in making the decision of which slit to go through as an electron that is only 1 foot away. It seems to me that if that was true then you'd have to know everything before you could know anything, and although we don't know everything we do know some things. 
* *Also, nobody has been able to write an equation for the pilot wave that is compatible with Special Relativity as Paul Dirac did for Schrodinger's equation a century ago. And **the pilot wave can affect an electron but an electron cannot affect the pilot wave, the wave pushes the particle but the particle can NOT push back. This sort of one-way causation has never been observed before. And the asymmetry means that matter is real (it always has one definite position and velocity) but is fundamentally passive, matter is guided by the pilot wave but matter is unable to influence the pilot wave. Human Beings are made of matter so we are just puppets, the pilot wave pulls the strings. Well OK? Technically we're marionettes not puppets. * *Bohm and his supporters argue that all of this additional byzantine complexity is worth it because even though it abandons locality it maintains realism, that is to say before a particle has been measured it was in one and only one definite state. **I disagree, I think that is far too high a price to pay. At the end of the day all the pilot wave does is provide a little arrow that points at a particle and says "this is the real particle, ignore all others". This is why detractors of pilot wave theory have called it "the disappearing worlds theory", they also call it "the Many Worlds theory in denial". * *And then we have by far the most popular quantum interpretation, "Shut Up And Calculate" sometimes called "Quantum Bayesianism". And that's fine if the only thing you're interested in is predicting what value you're going to see on your voltmeter if an experiment is set up in a certain way. * * John K Clark * > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Nov 11 13:20:08 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 11 Nov 2025 08:20:08 -0500 Subject: [ExI] Is the Multiverse unscientific nonsense? Message-ID: *I read an article called The multiverse is unscientific nonsense by Jacob Barandes, a lecturer in physics at Harvard University, and I wrote a letter to professor **Barandes commenting on it. He responded with a very polite letter saying he read it and appreciated what I said but didn't have time to comment further. This is the letter I sent: * =========== *Hello Professor Barandes* *I read your article The multiverse is unscientific nonsense with interest and I have a few comments:* *Nobody is claiming that the existence of the multiverse is a proven fact, but I think the idea needs to be taken seriously because: * *1) Unlike Bohr's Copenhagen interpretation, the Many Worlds theory is clear about what it's saying. * *2) It is self consistent and conforms with all known experimental results. * *3) It has no need to speculate about new physics as objective wave collapse theories like GRW do.* *4) It doesn't have to explain what consciousness or a measurement is because they have nothing to do with it, all it needs is Schrodinger's equation. * *I don't see how you can explain counterfactual quantum reasoning and such things as the Elitzur?Vaidman bomb tester without making use of many worlds. Hugh Everett would say that by having a bomb in a universe we are not in explode we can tell if a bomb that is in the branch of the multiverse that we are in is a dud or is a live fully functional bomb. 
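For anyone who wants to see where the bomb-tester numbers come from, here is a minimal sketch of the standard Mach-Zehnder calculation in Python. The beam-splitter matrix is one common textbook phase convention, and the 50/25/25 split is the usual idealized result, not data from any experiment mentioned here:

    import numpy as np

    # 50/50 beam splitter, one common phase convention.
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)

    photon = np.array([1, 0], dtype=complex)   # photon enters input port 0

    # Dud bomb: both arms open, the balanced interferometer interferes perfectly.
    dud = BS @ (BS @ photon)
    print("dud bomb, detector probabilities:", np.round(np.abs(dud) ** 2, 3))  # [0, 1]: every photon reaches the bright port

    # Live bomb blocks arm 1: that amplitude is absorbed (possible explosion).
    after_first = BS @ photon
    p_explode = abs(after_first[1]) ** 2        # 0.5
    survivor = np.array([after_first[0], 0.0])  # unnormalized amplitude that avoided the bomb
    out = BS @ survivor
    print("explosion probability:", round(p_explode, 3))                          # 0.50
    print("bright-port click (no information):", round(abs(out[1]) ** 2, 3))      # 0.25
    print("dark-port click (bomb is live, never touched):", round(abs(out[0]) ** 2, 3))  # 0.25

A click at the dark port is only possible when the blocked arm destroys the interference, which is the interaction-free measurement the letter refers to.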
You say that many worlds needs to account for probability and that's true, but then you say many worlds demands that some worlds have "higher probabilities than others", and that is incorrect. According to many worlds there is one and only one universe for every quantum state that is not forbidden by the laws of physics. So when you flip a coin the universe splits many more times than twice, because there are a vast number, perhaps an infinite number, of places where a coin could land; but you are not interested in exactly where the coin lands, you're only interested in whether it lands heads or tails. And we've known for centuries how to obtain a useful probability between any two points on the continuous bell curve even though the continuous curve is made up of an uncountably infinite number of points; all we need to do is perform a simple integration to figure out which part of the bell curve we're most likely on.

Yes, that's a lot of worlds, but you shouldn't object that the multiverse really couldn't be that big unless you are a stout defender of the idea that the universe must be finite, because even if many worlds turns out to be untrue the universe could still be infinite, and an infinity plus an infinity is still an infinity with the same Aleph number. Even if there is only one universe, if it's infinite then a finite distance away there must be a doppelgänger of you, because although there are a huge number of quantum states your body could be in, that number is not infinite, but the universe is.

And Occam's razor is about an economy of assumptions, not an economy of results. As for the "Tower of assumptions" many worlds is supposed to be based on, the only assumption that many worlds makes is that Schrodinger's equation means what it says, and it says nothing about the wave function collapsing. I would maintain that many worlds is bare-bones, no-nonsense quantum mechanics with none of the silly bells and whistles that other theories stick on, which do nothing but get rid of those pesky other worlds that their proponents personally dislike for some reason. And since Everett's time other worlds do seem to keep popping up in completely unrelated fields, such as string theory and inflationary cosmology.

You also ask what a "rational observer" is and how they ought to behave, and place bets on future events, given their self-locating uncertainty. I agree with David Hume, who said that "ought" cannot be derived from "is", but "ought" can be derived from "want". So if an observer is a gambler that WANTS to make money but is irrational, then he is absolutely guaranteed to lose all his money if he plays long enough, while a rational observer who knows how to make use of continuous probabilities is guaranteed to make money, or at least break even. Physicists WANT their ideas to be clear, have predictive power, and conform with reality as described by experiment; therefore I think they OUGHT to embrace the many worlds idea.

And yes, there is a version of you and me that flips a coin 1 million times and sees heads every single time even though the coin is 100% fair; however, it is extremely unlikely that we will find ourselves that far out on the bell curve, so I would be willing to bet a large sum of money that I will not see 1 million heads in a row.
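The arithmetic behind that million-heads bet is easy to make concrete. A back-of-the-envelope sketch in Python, assuming nothing beyond a fair coin:

    from math import log10, sqrt

    n = 1_000_000

    # Exact chance of a fair coin landing heads n times in a row: 0.5**n.
    log10_p = n * log10(0.5)
    print(f"P(all heads) ~ 10^{log10_p:.0f}")    # roughly 10^-301030

    # Bell-curve view: the number of heads is approximately Normal(n/2, sqrt(n)/2),
    # so an all-heads run sits absurdly far out on the curve.
    mean, sd = n / 2, sqrt(n) / 2
    print(f"all-heads run is {(n - mean) / sd:.0f} standard deviations above the mean")  # 1000 sigma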
You also say that "the Dirac-von Neumann axioms don?t support oft-heard statements that an atom can be in two places at once, or that a cat can be alive and dead at the same time" , but there are only two possibilities, either there is an alive cat and a dead cat in two different places or there is a live/dead cat that instantly snaps into being either alive or dead by the act of "measurement" even though the standard textbook Copenhagen interpretation can't say exactly what a measurement is, or even approximately what it is for that matter. In many worlds a measurement is simply any change in a quantum system, it makes no difference if that quantum system is a human being or an unconscious brick wall. So in that sense many worlds is totalitarian because everything that is not forbidden by the laws of Quantum Physics and General Relativity must exist. * *You correctly point out that nobody has ever "seen an atom in two places at once, let alone a cat being both alive and dead", but nobody has ever seen infinite dimensional operators in Hilbert space that the Dirac-von Neumann axioms use either, all they've seen is ink on paper in mathematical books. And you can't get milk from the word "cow". * *I'll close by just saying although I believe there is considerable evidence in favor of the many worlds view I admit it falls far short of a proof, maybe tomorrow somebody will come up with a better idea but right now many worlds is the least bad quantum interpretation around. And speculation is not a dirty word, without it science would be moribund, Richard Feynman said science is imagination in a tight straight jacket and I agree with him. Best wishesJohn K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Nov 11 13:27:06 2025 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 11 Nov 2025 08:27:06 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, Nov 11, 2025 at 8:05?AM John Clark wrote: > Astronomers have done a lot better than that!! It has been proven experimentally that if some sort of cosmic conspiracy had been generated (superdeterminism) that conspiracy must be older than 7.8 billion years, nearly 4 billion years older than the sun. The probability the observed correlations were just a coincidence is one part in 100 billion billion. Yes, and? "These particular photons were generated billions of years ago. One was in one state, the other was in the other. We didn't know which was which until now." > I don't think human brains are wired up in a way that will ever allow them to make quantum mechanics seem intuitively obvious, things might be different for Mr. Jupiter Brain. Someone (some say Napoleon, some say others) once said, "You would make a ship sail against the winds and currents by lighting a bonfire under her decks? I have no time for such nonsense." And yet, the concept of steam-powered ships seems easy enough to understand to us. (To me, anyway.) It is possible that, once we have more information and better explanations, quantum mechanics will one day be about as comprehensible to average humans. (Average for that day in the future, accounting for - hopefully - improved education, and perhaps some degree of AI assistance.) Or that might not happen. But I don't think the problem is primarily biological. 
> I can't prove that superdeterminism is wrong but as I've said before, I can prove that superdeterminism is silly. The greater the violation of Occam's razor that your theory needs to be true the sillier it is, and by that metric it would be impossible to be sillier than superdeterminism. Sure it is. Assuming the existence of other universes that magically split apart and can never interact with ours, and yet still manage to produce results we can observe anyway, seems sillier. From foozler83 at gmail.com Tue Nov 11 13:52:27 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 11 Nov 2025 07:52:27 -0600 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: Message-ID: I like Asimov's solution: First Speaker. bill w On Sat, Nov 8, 2025 at 7:16?AM John Clark via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Nov 7, 2025 at 5:42?PM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > *> Lady Chairman.That used to be what we called female chairmen, before PC >> nonsense infected everyone.* > > > *It could be argued that "Lady Chairman" is self-contradictory nonsense, > unless you're talking about a hermaphrodite. How about Chairwoman? I admit > that doesn't sound quite right but I think at least part of that is due to > the fact that the word "woman" has two syllables but the word "man" only > has one. * > > *John K Clark* > > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Nov 11 16:47:01 2025 From: pharos at gmail.com (BillK) Date: Tue, 11 Nov 2025 16:47:01 +0000 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, 11 Nov 2025 at 13:29, Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Sure it is. Assuming the existence of other universes that magically > split apart and can never interact with ours, and yet still manage to > produce results we can observe anyway, seems sillier. > ______________________________________________ > Hi Adrian, Yesterday, when Kimi was asked to compare the Multiverse with Superdeterminism, it preferred the MWI option due to the severe problems posed by superdeterminism. Today, I never mentioned superdeterminism, but just asked Kimi to consider how unlikely the multiverse is, as in your comment above. Kimi agreed with you, but did not suggest superdeterminism as the alternative. Kimi discussed the problems with MWI and concluded that at present, we don't know how to solve the measurement problem. We await further research on this mystery. :) BillK Click the link to view conversation with Kimi AI Assistant < https://www.kimi.com/share/19a73bad-87d2-8674-8000-000039699934> (I've checked the link this time!) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Tue Nov 11 17:16:37 2025 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 11 Nov 2025 12:16:37 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, Nov 11, 2025 at 11:48?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Tue, 11 Nov 2025 at 13:29, Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Sure it is. Assuming the existence of other universes that magically >> split apart and can never interact with ours, and yet still manage to >> produce results we can observe anyway, seems sillier. >> ______________________________________________ >> > > > Hi Adrian, > Yesterday, when Kimi was asked to compare the Multiverse with > Superdeterminism, it preferred the MWI option due to the severe problems > posed by superdeterminism. > > Today, I never mentioned superdeterminism, but just asked Kimi to consider > how unlikely the multiverse is, as in your comment above. Kimi agreed with > you, but did not suggest superdeterminism as the alternative. Kimi > discussed the problems with MWI and concluded that at present, we don't > know how to solve the measurement problem. We await further research on > this mystery. :) > BillK > Click the link to view conversation with Kimi AI Assistant < > https://www.kimi.com/share/19a73bad-87d2-8674-8000-000039699934> > (I've checked the link this time!) > I had a similar conversation with an AI on the topic of many-worlds. At the end of the conversation the AI was 99.99% sure many-worlds was correct: https://docs.google.com/document/d/1i15TT76PMYlOdMO76O_TYPML2ocHrlzYDdj2fPkguH0/edit?usp=sharing This either shows that many-worlds is most probably true, or that current AIs are so swayable that we shouldn't put much stock in what they say about controversial subjects. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Nov 11 18:13:37 2025 From: pharos at gmail.com (BillK) Date: Tue, 11 Nov 2025 18:13:37 +0000 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, 11 Nov 2025 at 17:18, Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I had a similar conversation with an AI on the topic of many-worlds. At > the end of the conversation the AI was 99.99% sure many-worlds was correct: > > https://docs.google.com/document/d/1i15TT76PMYlOdMO76O_TYPML2ocHrlzYDdj2fPkguH0/edit?usp=sharing > This either shows that many-worlds is most probably true, or that current > AIs are so swayable that we shouldn't put much stock in what they say about > controversial subjects. > Jason > _______________________________________________ > Yes, AIs have a strong tendency to flatter users and agree with any user suggestions. As Adrian pointed out, the words used and how the questions are asked also have a big effect on the answers provided. There is also the problem of loss of context during long conversations. One or two questions get a good response, but many more, and the AI starts to forget what it was talking about. :) See: 9 May 2025 LLMS GET LOST IN MULTI-TURN CONVERSATION BillK -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Nov 11 18:23:32 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Nov 2025 10:23:32 -0800 Subject: [ExI] =?utf-8?q?hartman_ai____RE=3A__=09Google=E2=80=99s_Willow_?= =?utf-8?q?Quantum_Chip=3A_Proof_of_the_Multiverse=3F?= In-Reply-To: References: Message-ID: <001a01dc5338$49791be0$dc6b53a0$@rainier66.com> ?> On Behalf Of BillK via extropy-chat _______________________________________________ >?Yes, AIs have a strong tendency to flatter users and agree with any user suggestions? BillK When I encounter the AI ass-kissing attitude, I think there is a great market for a Sergeant Hartman therapist AI. It would jump your trash at every opportunity, put you in your place, be severely entertaining as drill sergeants are, assuming one follows orders and is not a screwup: https://www.youtube.com/watch?v=WEbzM2FUP9s spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 12 02:14:36 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Nov 2025 18:14:36 -0800 Subject: [ExI] evs again Message-ID: <002901dc537a$1805c8a0$481159e0$@rainier66.com> Well damn. This is a disappointment, so far. For some time, I have recognized the enormous profit potential in building out electric power generation and distribution in California. Reasoning: we have a huge population accustomed to paying absurd prices for power, and ideal climate conditions for EVs, along with a general awareness that soon there will likely be a dramatic increase in demand for power to run god (the superhuman intelligence running on jillions of power-hungry Nvidea processors.) We will need a lotta power to keep that guy (those guys?) going. That was much of my motive for going to Moss Landing Sunday: to look over the facility where they had the battery fire in January, assess what growth potential they have at the site, what environmental sensitivities, anything that could interfere with my making a buttload investing on the growth potential of that particular site. Another element of my investment strategy has to do with EVs. To understand the growth potential of that market, one must divide the EV world into two categories: Musk and Not Musk cars. Most of the EVs are Muskmobiles, and he already has his charging infrastructure built out (has been for at least five years.) I estimate about a quarter of the EVs around here are nonMusk, so I am estimating market growth by watching nonMusk charging stations build rate, such as this one a short distance from my home (I walk past it twice a day.) This station was started in Feb 2020, but was halted because of covid. It switched on yesterday. OK then. Took close to six damn years to compete, but that might be a special case, since covid might have delayed it a coupla years. So. four years. This one is ideally located: a major freeway access ramp is about 200 meters away, an artery with lots of traffic 24/7. Today was a national holiday (Veterans Day) so there was pleeeeeenty of traffic, commuters and vacationers. The station alerts the cars via radio signal, so it was advertising itself, starting about early afternoon yesterday. I just walked by there. Twelve high speed ports, 16 standard ports, zero customers. Damn. OK no worries, perhaps business will pick up. Hope so. But this has me a little spooked. If business doesn't pick up soon at this place, an ideal location in so many ways, I will be reluctant to bet on that Moss Landing battery site. 
This is a bad thing, for I was hoping the EV market would take off like a nuclear rocket. Reasoning: compared to processors and data centers, the electric power demand of a car (or one of Musk's absurd cyber trucks) is huge. We can run a looooootta lotta processors with the power savings from some yahoo torching a single Tesla dealership. (Not ME yahoo torching it, I mean Telsa dealerships torched by some other yahoo, a political actor, insane with rage about Elon telling us the truth about the Federal budget.) Just one massive EV car dealership fire would free up sufficient electric energy to power god for a while, methinks. If I am invested in electric power infrastructure, I could make good money. Thoughts welcome. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 33282 bytes Desc: not available URL: From ilsa.bartlett at gmail.com Wed Nov 12 02:56:19 2025 From: ilsa.bartlett at gmail.com (ilsa) Date: Tue, 11 Nov 2025 18:56:19 -0800 Subject: [ExI] evs again In-Reply-To: <002901dc537a$1805c8a0$481159e0$@rainier66.com> References: <002901dc537a$1805c8a0$481159e0$@rainier66.com> Message-ID: I read the whole thing you made me laugh I don't like electronic vehicles because of the way that the rubber rubs off on the road from the tires Ilsa Bartlett Institute for Rewiring the System http://ilsabartlett.wordpress.com http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel "Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane On Tue, Nov 11, 2025, 6:15?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > Well damn. This is a disappointment, so far. For some time, I have > recognized the enormous profit potential in building out electric power > generation and distribution in California. Reasoning: we have a huge > population accustomed to paying absurd prices for power, and ideal climate > conditions for EVs, along with a general awareness that soon there will > likely be a dramatic increase in demand for power to run god (the > superhuman intelligence running on jillions of power-hungry Nvidea > processors.) We will need a lotta power to keep that guy (those guys?) > going. > > > > That was much of my motive for going to Moss Landing Sunday: to look over > the facility where they had the battery fire in January, assess what growth > potential they have at the site, what environmental sensitivities, anything > that could interfere with my making a buttload investing on the growth > potential of that particular site. > > > > Another element of my investment strategy has to do with EVs. To > understand the growth potential of that market, one must divide the EV > world into two categories: Musk and Not Musk cars. Most of the EVs are > Muskmobiles, and he already has his charging infrastructure built out (has > been for at least five years.) > > > > I estimate about a quarter of the EVs around here are nonMusk, so I am > estimating market growth by watching nonMusk charging stations build rate, > such as this one a short distance from my home (I walk past it twice a > day.) This station was started in Feb 2020, but was halted because of > covid. It switched on yesterday. OK then. Took close to six damn years > to compete, but that might be a special case, since covid might have > delayed it a coupla years. So? four years. 
> > > > This one is ideally located: a major freeway access ramp is about 200 > meters away, an artery with lots of traffic 24/7. Today was a national > holiday (Veterans Day) so there was pleeeeeenty of traffic, commuters and > vacationers. The station alerts the cars via radio signal, so it was > advertising itself, starting about early afternoon yesterday. I just > walked by there. Twelve high speed ports, 16 standard ports, zero > customers. Damn. > > > > OK no worries, perhaps business will pick up. Hope so. > > > > > > But this has me a little spooked. If business doesn?t pick up soon at > this place, an ideal location in so many ways, I will be reluctant to bet > on that Moss Landing battery site. > > > > This is a bad thing, for I was hoping the EV market would take off like a > nuclear rocket. Reasoning: compared to processors and data centers, the > electric power demand of a car (or one of Musk?s absurd cyber trucks) is > huge. We can run a looooootta lotta processors with the power savings from > some yahoo torching a single Tesla dealership. (Not ME yahoo torching it, > I mean Telsa dealerships torched by some other yahoo, a political actor, > insane with rage about Elon telling us the truth about the Federal > budget.) Just one massive EV car dealership fire would free up sufficient > electric energy to power god for a while, methinks. If I am invested in > electric power infrastructure, I could make good money. > > > > Thoughts welcome. > > > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 33282 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 33282 bytes Desc: not available URL: From spike at rainier66.com Wed Nov 12 03:07:46 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Nov 2025 19:07:46 -0800 Subject: [ExI] evs again In-Reply-To: References: <002901dc537a$1805c8a0$481159e0$@rainier66.com> Message-ID: <000601dc5381$85a2ae30$90e80a90$@rainier66.com> From: ilsa Sent: Tuesday, 11 November, 2025 6:56 PM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] evs again I read the whole thing you made me laugh I don't like electronic vehicles because of the way that the rubber rubs off on the road from the tires Ilsa Bartlett Hi Ilsa, I never noticed that. But the principle is to disregard our own likes and dislikes when choosing profitable investments. This is a lesson well-learned from political betting over the years: you bet on what you think will happen rather than what you want to happen. By strict discipline on that, I won 57 bucks in 2016, even though it wasn?t the outcome I wanted or voted for. I wanted Gary Johnson in that election, but voted for Jill Stein, because I thought she was attractive. Then I bet on? I forget his name at the moment, but he won, and I won: 57 big ones. Regarding EVs leaving rubber on the road: I didn?t know that, but really what I am hoping for is that lots and lots of people will buy EVs so we can have a power reserve in case of the Singularity happening sooner than I expect. 
Millions of people buy EVs, sooner or later Musk is going to be proven right, enraged mobs will burn Teslas by the millions, then we have all the electric power we need. That?s an example of betting on what we think rather than what we want. spike Institute for Rewiring the System http://ilsabartlett.wordpress.com http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel "Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane On Tue, Nov 11, 2025, 6:15?PM spike jones via extropy-chat > wrote: Well damn. This is a disappointment, so far. For some time, I have recognized the enormous profit potential in building out electric power generation and distribution in California. Reasoning: we have a huge population accustomed to paying absurd prices for power, and ideal climate conditions for EVs, along with a general awareness that soon there will likely be a dramatic increase in demand for power to run god (the superhuman intelligence running on jillions of power-hungry Nvidea processors.) We will need a lotta power to keep that guy (those guys?) going. That was much of my motive for going to Moss Landing Sunday: to look over the facility where they had the battery fire in January, assess what growth potential they have at the site, what environmental sensitivities, anything that could interfere with my making a buttload investing on the growth potential of that particular site. Another element of my investment strategy has to do with EVs. To understand the growth potential of that market, one must divide the EV world into two categories: Musk and Not Musk cars. Most of the EVs are Muskmobiles, and he already has his charging infrastructure built out (has been for at least five years.) I estimate about a quarter of the EVs around here are nonMusk, so I am estimating market growth by watching nonMusk charging stations build rate, such as this one a short distance from my home (I walk past it twice a day.) This station was started in Feb 2020, but was halted because of covid. It switched on yesterday. OK then. Took close to six damn years to compete, but that might be a special case, since covid might have delayed it a coupla years. So? four years. This one is ideally located: a major freeway access ramp is about 200 meters away, an artery with lots of traffic 24/7. Today was a national holiday (Veterans Day) so there was pleeeeeenty of traffic, commuters and vacationers. The station alerts the cars via radio signal, so it was advertising itself, starting about early afternoon yesterday. I just walked by there. Twelve high speed ports, 16 standard ports, zero customers. Damn. OK no worries, perhaps business will pick up. Hope so. But this has me a little spooked. If business doesn?t pick up soon at this place, an ideal location in so many ways, I will be reluctant to bet on that Moss Landing battery site. This is a bad thing, for I was hoping the EV market would take off like a nuclear rocket. Reasoning: compared to processors and data centers, the electric power demand of a car (or one of Musk?s absurd cyber trucks) is huge. We can run a looooootta lotta processors with the power savings from some yahoo torching a single Tesla dealership. (Not ME yahoo torching it, I mean Telsa dealerships torched by some other yahoo, a political actor, insane with rage about Elon telling us the truth about the Federal budget.) Just one massive EV car dealership fire would free up sufficient electric energy to power god for a while, methinks. 
If I am invested in electric power infrastructure, I could make good money. Thoughts welcome. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 33282 bytes Desc: not available URL: From pharos at gmail.com Wed Nov 12 10:11:35 2025 From: pharos at gmail.com (BillK) Date: Wed, 12 Nov 2025 10:11:35 +0000 Subject: [ExI] evs again In-Reply-To: <000601dc5381$85a2ae30$90e80a90$@rainier66.com> References: <002901dc537a$1805c8a0$481159e0$@rainier66.com> <000601dc5381$85a2ae30$90e80a90$@rainier66.com> Message-ID: On Wed, 12 Nov 2025 at 03:09, spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Regarding EVs leaving rubber on the road: I didn?t know that, but really > what I am hoping for is that lots and lots of people will buy EVs so we can > have a power reserve in case of the Singularity happening sooner than I > expect. Millions of people buy EVs, sooner or later Musk is going to be > proven right, enraged mobs will burn Teslas by the millions, then we have > all the electric power we need. That?s an example of betting on what we > think rather than what we want. > > > > spike > _______________________________________________ > It's mostly due to electric cars being much heavier and the instant torque delivery of the transmission. Tyres wear 25% to 50% faster. There is also some concern that older multi-storey car parks might not cope well with the extra weight when filling up with electric cars. < https://www.perplexity.ai/search/research-do-electric-cars-wear-zzvf4jpnQZOOnrmLdnAJJA#0 > BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Nov 12 11:44:59 2025 From: pharos at gmail.com (BillK) Date: Wed, 12 Nov 2025 11:44:59 +0000 Subject: [ExI] =?utf-8?q?Google_says_new_cloud-based_=E2=80=9CPrivate_AI_?= =?utf-8?q?Compute=E2=80=9D_is_as_secure_as_local_processing?= Message-ID: Google says new cloud-based ?Private AI Compute? is just as secure as local processing New system allows devices to connect directly to secure space in Google?s AI servers. Ryan Whitwam ? 11 Nov 2025 Quotes: Google?s current mission is to weave generative AI into as many products as it can, getting everyone accustomed to, and maybe even dependent on, working with confabulatory robots. That means it needs to feed the bots a lot of your data, and that?s getting easier with the company?s new Private AI Compute. Google claims its new secure cloud environment will power better AI experiences without sacrificing your privacy. According to Google, the Private AI Compute service is just as secure as using local processing on your device. However, Google?s cloud has a lot more processing power than your laptop or phone, enabling the use of Google?s largest and most capable Gemini models. --------------------- This system is needed for as long as local processors are not powerful enough to run complex AI systems. But you have to trust them that your personal AI processing will actually be kept private. Nobody wants their AI sexbot to be interrogated by officials looking for some excuse to punish them. 
:) BillK From johnkclark at gmail.com Wed Nov 12 12:15:29 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 12 Nov 2025 07:15:29 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, Nov 11, 2025 at 8:29?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > * > I don't think human brains are wired up in a way that will ever allow >> them to make quantum mechanics seem intuitively obvious, things might be >> different for Mr. Jupiter Brain.* > > I don't think the problem is primarily biological. *It would be surprising if biology is not at least part of the problem. Over several million years Evolution designed our brain to maximize the probability of getting our genes into the next generation even though we are one of the slowest and weakest animals on the African savanna. Having a deep intuitive understanding of how all the possible quantum states in Hilbert space (vectors called "kets") evolve according to Schrodinger's equation, would be of little help in catching a zebra or avoiding a lion. We don't even have an inborn understanding of how fractions work, we need to be carefully taught, and even then children find fractions more difficult to understand than integers. And Schrodinger's equation contains imaginary numbers! * * >> I can't prove that superdeterminism is wrong but as I've said before, >> I can prove that superdeterminism is silly. The greater the violation of >> Occam's razor that your theory needs to be true the sillier it is, and by >> that metric it would be impossible to be sillier than superdeterminism.* > > > * Sure it is. Assuming the existence of other universes that > magically split apart* *Magically? If voodoo had a differential equation that explained exactly how it worked, if there were a Fundamental Theorem of Voodoo, then voodoo would not be magic, it would be science and people would be receiving Nobel prizes for making advances in voodoo. The big difference between science and magic is that science works but magic doesn't. * > *> and can never interact with ours, and yet still manage to produce > results we can observe anyway, seems sillier.* > *In order to work, Many Worlds needs to make exactly ONE assumption, Schrodinger's equation means what it says. In order to work, superdeterminism needs, at a minimum, to make an astronomical number to an astronomical power number of assumptions, and possibly an infinite number of assumptions. * *And as I've already said, if the experiments that are currently underway are successful and show that the quantum wave really does objectively collapse then Many Worlds will have been proven to be wrong. But there is no way, even in theory, that an experiment could ever prove that superdeterminism is wrong because if it is right then the scientific method simply doesn't work, and the fact that it had seemed to work until now was just an EXTREMELY unlikely coincidence. * *You don't think that's silly? * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Wed Nov 12 12:45:21 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 12 Nov 2025 07:45:21 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, Nov 11, 2025 at 12:18?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Kimi discussed the problems with MWI and concluded that at present, we > don't know how to solve the measurement problem.* *The only reason I'm a fan of Many Wolds is that it doesn't need to explain what a measurement is, or what an experiment is, and it doesn't need to explain how consciousness works because it has nothing to do with it. I believe the fundamental reason Many Worlds is not more popular is because most people feel the universe just can't be that big. I think that is a poor reason. * *And I note that Kimi says* * "Philosophically, I lean slightly toward MWI because it feels more coherent and less anthropocentric."* *I agree.* *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Wed Nov 12 13:12:12 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Wed, 12 Nov 2025 13:12:12 +0000 Subject: [ExI] =?utf-8?q?Google_says_new_cloud-based_=E2=80=9CPrivate_AI_?= =?utf-8?q?Compute=E2=80=9D_is_as_secure_as_local_processing?= In-Reply-To: References: Message-ID: On 12/11/2025 12:46, BillK wrote: > Google says new cloud-based ?Private AI Compute? is just as secure as > local processing > New system allows devices to connect directly to secure space in > Google?s AI servers. > Ryan Whitwam ? 11 Nov 2025 Sadly, that's probably true I don't suppose our local data is any more secure than data on the internet or in a smartphone, these days We can probably assume that putting information into a computing device is equivalent to handing it straight to your local 'n-eyes' alliance at least Apart from that, Google giving out assurances about security sounds to me like the scorpion promising the fox it won't sting him -- Ben From spike at rainier66.com Wed Nov 12 14:08:00 2025 From: spike at rainier66.com (Gregory Jones) Date: Wed, 12 Nov 2025 06:08:00 -0800 Subject: [ExI] evs again In-Reply-To: References: <002901dc537a$1805c8a0$481159e0$@rainier66.com> <000601dc5381$85a2ae30$90e80a90$@rainier66.com> Message-ID: Hey cool, invest in new parking structures to deal with the additional weight? spike On Wed, Nov 12, 2025, 2:13?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 12 Nov 2025 at 03:09, spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Regarding EVs leaving rubber on the road: I didn?t know that, but really >> what I am hoping for is that lots and lots of people will buy EVs so we can >> have a power reserve in case of the Singularity happening sooner than I >> expect. Millions of people buy EVs, sooner or later Musk is going to be >> proven right, enraged mobs will burn Teslas by the millions, then we have >> all the electric power we need. That?s an example of betting on what we >> think rather than what we want. >> >> >> >> spike >> > _______________________________________________ >> > > > It's mostly due to electric cars being much heavier and the instant torque > delivery of the transmission. > Tyres wear 25% to 50% faster. There is also some concern that older > multi-storey car parks might not cope well with the extra weight when > filling up with electric cars. 
> < > https://www.perplexity.ai/search/research-do-electric-cars-wear-zzvf4jpnQZOOnrmLdnAJJA#0 > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Nov 12 14:17:06 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 12 Nov 2025 09:17:06 -0500 Subject: [ExI] One mine in North Carolina is world's only source of high-purity quartz for chipmaking Message-ID: *The U.S. is the source of natural high-purity quartz for chipmaking for the entire world, specifically one particular mine in North Carolina; it's rarer than rare earth materials. China is trying to change that. * *Rarer than rare earth materials* John K Clark See what's on my new list at Extropolis n77 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Nov 12 14:45:40 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 12 Nov 2025 09:45:40 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> Message-ID: On Sun, Nov 9, 2025 at 10:51?AM wrote: *> >? Elon Musk's popularity took a dramatic hit when he gave $277 million?* > *>I am surprised there aren?t limits to how much one person can donate to > an election. I thought there were limits.* *In the before times when He Who Must Not Be Named didn't have any pet judges on the Supreme Court there were limits, but not now. * *> >?to get a 34 time convicted felon...* > > > *> What was this felon convicted of, please?* *Spike, I refuse to give an answer to a question that you can find for yourself in about 0.8 seconds online. * *>> the world would be a better place if that company [Twitter] did NOT >> exist?* > > > > *> I disagree enthusiastically, wholeheartedly. * > *I'm not surprised given that you said Twitter was your primary news source, and that fact will certainly shape your worldview. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 12 14:59:12 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 06:59:12 -0800 Subject: [ExI] day 3 Message-ID: <008401dc53e4$e8b2e7c0$ba18b740$@rainier66.com> I rolled by the non-Musk charging station, time 0650. Number of customers: zero. Damn. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 12 15:01:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 07:01:35 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> Message-ID: <008c01dc53e5$3e145b90$ba3d12b0$@rainier66.com> From: John Clark > >?to get a 34 time convicted felon... > What was this felon convicted of, please? 
>?Spike, I refuse to give an answer to a question that you can find for yourself in about 0.8 seconds online. John K Clark I did that. It didn?t find THE answer. It gave me AN answer. It gave me a misdemeanor. I want a TRUE answer. Do you know that answer John? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 12 15:19:05 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 10:19:05 -0500 Subject: [ExI] day 3 In-Reply-To: <008401dc53e4$e8b2e7c0$ba18b740$@rainier66.com> References: <008401dc53e4$e8b2e7c0$ba18b740$@rainier66.com> Message-ID: Compare that to use percentage, same time of day in similar neighborhood (including proximity to typical commutes), for gas stations. Granted, home-and-work charging means that there's less demand for on-the-road charging. There aren't as many workplaces, let alone homes, which offer on-site gasoline refueling. On Wed, Nov 12, 2025 at 10:00?AM spike jones via extropy-chat wrote: > > I rolled by the non-Musk charging station, time 0650. Number of customers: zero. > > > > Damn. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From atymes at gmail.com Wed Nov 12 15:25:01 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 10:25:01 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <008c01dc53e5$3e145b90$ba3d12b0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <008c01dc53e5$3e145b90$ba3d12b0$@rainier66.com> Message-ID: On Wed, Nov 12, 2025 at 10:02?AM spike jones via extropy-chat wrote: > From: John Clark > > >?to get a 34 time convicted felon... > > > What was this felon convicted of, please? > > >?Spike, I refuse to give an answer to a question that you can find for yourself in about 0.8 seconds online. John K Clark > > I did that. It didn?t find THE answer. It gave me AN answer. It gave me a misdemeanor. The evidence - to wit, the results of https://www.google.com/search?q=34+time+convicted+felon - suggests that you are lying. Either you didn't do that (you're referring to what John requested, so John gets to define it - specifically, you don't get to strawman-redefine it - though his definition seems quite apparent), or it did give you the singular answer which was not about a misdemeanor. 
From johnkclark at gmail.com Wed Nov 12 15:31:29 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 12 Nov 2025 10:31:29 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <002501dc525f$1d384530$57a8cf90$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <002501dc525f$1d384530$57a8cf90$@rainier66.com> Message-ID: On Mon, Nov 10, 2025 at 11:30?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Columbus made a reasonable assumption when he reached land in the > Caribbean to assume it was India, and that India was a really big hunk of > land.* *Columbus used Ptolemy to figure out how far east Asia extended, and that turned out to be an underestimation not an overestimation, Asia actually goes further east than Ptolemy or Columbus believed. Columbus was sailing West so if Ptolemy had been right that would've made the journey even longer, and given the size of his provisions Columbus would've known it was impossibly longer, or he would've known it if his estimate of the size of the Earth was as good as that of the ancient Greeks. But it wasn't. * * John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Nov 12 16:52:20 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 12 Nov 2025 09:52:20 -0700 Subject: [ExI] evs again In-Reply-To: References: <002901dc537a$1805c8a0$481159e0$@rainier66.com> <000601dc5381$85a2ae30$90e80a90$@rainier66.com> Message-ID: All the cars have converted to the Tesla charging standard haven't they? Or they now have adapters. I don't think anyone uses those non Musk chargers anymore? On Wed, Nov 12, 2025 at 7:08?AM Gregory Jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hey cool, invest in new parking structures to deal with the additional > weight? > > spike > > On Wed, Nov 12, 2025, 2:13?AM BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, 12 Nov 2025 at 03:09, spike jones via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Regarding EVs leaving rubber on the road: I didn?t know that, but really >>> what I am hoping for is that lots and lots of people will buy EVs so we can >>> have a power reserve in case of the Singularity happening sooner than I >>> expect. Millions of people buy EVs, sooner or later Musk is going to be >>> proven right, enraged mobs will burn Teslas by the millions, then we have >>> all the electric power we need. That?s an example of betting on what we >>> think rather than what we want. >>> >>> >>> >>> spike >>> >> _______________________________________________ >>> >> >> >> It's mostly due to electric cars being much heavier and the instant >> torque delivery of the transmission. >> Tyres wear 25% to 50% faster. There is also some concern that older >> multi-storey car parks might not cope well with the extra weight when >> filling up with electric cars. 
>> < >> https://www.perplexity.ai/search/research-do-electric-cars-wear-zzvf4jpnQZOOnrmLdnAJJA#0 >> > >> BillK >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 12 17:15:33 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 12:15:33 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Tue, Nov 11, 2025 at 12:18?PM Jason Resch via extropy-chat wrote: > On Tue, Nov 11, 2025 at 11:48?AM BillK via extropy-chat wrote: >> Yesterday, when Kimi was asked to compare the Multiverse with Superdeterminism, it preferred the MWI option due to the severe problems posed by superdeterminism. >> >> Today, I never mentioned superdeterminism, but just asked Kimi to consider how unlikely the multiverse is, as in your comment above. Kimi agreed with you, but did not suggest superdeterminism as the alternative. Kimi discussed the problems with MWI and concluded that at present, we don't know how to solve the measurement problem. We await further research on this mystery. :) I agree with that conclusion: we don't know for sure if it's MWI, superdeterminism, or something else. The discussion was about what each of us finds more likely, and why. > I had a similar conversation with an AI on the topic of many-worlds. At the end of the conversation the AI was 99.99% sure many-worlds was correct: > > https://docs.google.com/document/d/1i15TT76PMYlOdMO76O_TYPML2ocHrlzYDdj2fPkguH0/edit?usp=sharing > > This either shows that many-worlds is most probably true, or that current AIs are so swayable that we shouldn't put much stock in what they say about controversial subjects. The latter. Notice that one of your questions contains the assertion, "So MWI explains more, while assuming less, making it preferable by Occam's razor." (Among other such statements in your questions.) If you tell it that MWI is most likely true, it will usually conclude that MWI is most likely true. From jasonresch at gmail.com Wed Nov 12 17:29:40 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 12 Nov 2025 12:29:40 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 12:16?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Nov 11, 2025 at 12:18?PM Jason Resch via extropy-chat > wrote: > > On Tue, Nov 11, 2025 at 11:48?AM BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Yesterday, when Kimi was asked to compare the Multiverse with > Superdeterminism, it preferred the MWI option due to the severe problems > posed by superdeterminism. > >> > >> Today, I never mentioned superdeterminism, but just asked Kimi to > consider how unlikely the multiverse is, as in your comment above. Kimi > agreed with you, but did not suggest superdeterminism as the alternative. > Kimi discussed the problems with MWI and concluded that at present, we > don't know how to solve the measurement problem. We await further research > on this mystery. 
:) > > I agree with that conclusion: we don't know for sure if it's MWI, > superdeterminism, or something else. The discussion was about what > each of us finds more likely, and why. > Did you notice that the AI described superdeterminism as "conspiratorial?" This seemed to be a main crux of our disagreement: any time I suggested superdeterminism involved the universe conspiring against us, or operating in a manner to fool us, you objected on the grounds that I was ascribing conscious volition to the universe. I think Bill's conversation shows this is not some kirk of my description of superdeterminism, but a core part of superdeterminism. But then when you say your version of superdeterminism doesn't require any of this, that left me completely confused, as without that piece, it's not superdeterminism, it's just ordinary determinism and local hidden variables. I think to understand superdeterminism, one must understand the problem it was invented to solve (that problem being the Bell inequality statistics). Jason > > > I had a similar conversation with an AI on the topic of many-worlds. At > the end of the conversation the AI was 99.99% sure many-worlds was correct: > > > > > https://docs.google.com/document/d/1i15TT76PMYlOdMO76O_TYPML2ocHrlzYDdj2fPkguH0/edit?usp=sharing > > > > This either shows that many-worlds is most probably true, or that > current AIs are so swayable that we shouldn't put much stock in what they > say about controversial subjects. > > The latter. Notice that one of your questions contains the assertion, > "So MWI explains more, while assuming less, making it preferable by > Occam's razor." (Among other such statements in your questions.) If > you tell it that MWI is most likely true, it will usually conclude > that MWI is most likely true. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 12 17:36:12 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 12:36:12 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 12:29?PM Jason Resch wrote: > On Wed, Nov 12, 2025 at 12:16?PM Adrian Tymes via extropy-chat wrote: >> On Tue, Nov 11, 2025 at 12:18?PM Jason Resch via extropy-chat >> wrote: >> > On Tue, Nov 11, 2025 at 11:48?AM BillK via extropy-chat wrote: >> >> Yesterday, when Kimi was asked to compare the Multiverse with Superdeterminism, it preferred the MWI option due to the severe problems posed by superdeterminism. >> >> >> >> Today, I never mentioned superdeterminism, but just asked Kimi to consider how unlikely the multiverse is, as in your comment above. Kimi agreed with you, but did not suggest superdeterminism as the alternative. Kimi discussed the problems with MWI and concluded that at present, we don't know how to solve the measurement problem. We await further research on this mystery. :) >> >> I agree with that conclusion: we don't know for sure if it's MWI, >> superdeterminism, or something else. The discussion was about what >> each of us finds more likely, and why. > > Did you notice that the AI described superdeterminism as "conspiratorial?" 
I checked again, and neither "superdeterminism" nor "conspiratorial" appear in the conversation at https://www.kimi.com/share/19a73bad-87d2-8674-8000-000039699934 that Bill posted. Your conversation had a bunch of loaded questions, as I pointed out, so it's irrelevant if they appeared there. > I think Bill's conversation shows this is not some kirk of my description of superdeterminism, but a core part of superdeterminism. As superdeterminism is not mentioned at all in that conversation, this conclusion does not follow from that particular data. From jasonresch at gmail.com Wed Nov 12 17:44:07 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 12 Nov 2025 12:44:07 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 12:37?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Nov 12, 2025 at 12:29?PM Jason Resch wrote: > > On Wed, Nov 12, 2025 at 12:16?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Tue, Nov 11, 2025 at 12:18?PM Jason Resch via extropy-chat > >> wrote: > >> > On Tue, Nov 11, 2025 at 11:48?AM BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Yesterday, when Kimi was asked to compare the Multiverse with > Superdeterminism, it preferred the MWI option due to the severe problems > posed by superdeterminism. > >> >> > >> >> Today, I never mentioned superdeterminism, but just asked Kimi to > consider how unlikely the multiverse is, as in your comment above. Kimi > agreed with you, but did not suggest superdeterminism as the alternative. > Kimi discussed the problems with MWI and concluded that at present, we > don't know how to solve the measurement problem. We await further research > on this mystery. :) > >> > >> I agree with that conclusion: we don't know for sure if it's MWI, > >> superdeterminism, or something else. The discussion was about what > >> each of us finds more likely, and why. > > > > Did you notice that the AI described superdeterminism as > "conspiratorial?" > > I checked again, and neither "superdeterminism" nor "conspiratorial" > appear in the conversation at > https://www.kimi.com/share/19a73bad-87d2-8674-8000-000039699934 that > Bill posted. > > Your conversation had a bunch of loaded questions, as I pointed out, > so it's irrelevant if they appeared there. > > > I think Bill's conversation shows this is not some kirk of my > description of superdeterminism, but a core part of superdeterminism. > > As superdeterminism is not mentioned at all in that conversation, this > conclusion does not follow from that particular data. > I was referring to Bill's original conversation with the AI on superdeterminism: https://www.kimi.com/share/19a6fcce-a8a2-8623-8000-0000e0a140f6 E.g.: - 1. It Appears to Undermine Science Itself This is the core objection. If experimenters' choices are pre-determined by hidden variables, then science loses its ability to discover truth. 
As physicists Shimony, Horne, and Clauser argued in 1976, superdeterminism would allow a "conspiracy" where nature always arranges itself to produce results that look like quantum mechanics, regardless of the actual underlying laws - Critics argue superdeterminism requires the universe's initial conditions to be suspiciously fine-tuned to produce the exact correlations we observe, making it seem like a "conspiracy theory" - Most physicists prefer to accept "spooky action at a distance" rather than live in a universe where every experimental result might be a cosmic conspiracy. - Superdeterminism solves Bell's theorem by accepting conspiratorial correlations - It claims our "free" measurement choices are secretly predetermined to align with hidden quantum states from the Big Bang. This is often criticized as making science impossible, since any experimental result could be dismissed as a cosmic setup . Jason > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 12 18:28:42 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 13:28:42 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 12:44?PM Jason Resch wrote: > On Wed, Nov 12, 2025 at 12:37?PM Adrian Tymes via extropy-chat wrote: >> On Wed, Nov 12, 2025 at 12:29?PM Jason Resch wrote: >> > I think Bill's conversation shows this is not some kirk of my description of superdeterminism, but a core part of superdeterminism. >> >> As superdeterminism is not mentioned at all in that conversation, this >> conclusion does not follow from that particular data. > > I was referring to Bill's original conversation with the AI on superdeterminism: > > https://www.kimi.com/share/19a6fcce-a8a2-8623-8000-0000e0a140f6 Which, as previously mentioned, starts with the assertion that "most physicists reject it". If you start a conversation with AI with such assertions, the AI's answer will generally agree with such assertions. This does not help determine whether the assertions are true. From jasonresch at gmail.com Wed Nov 12 19:04:42 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 12 Nov 2025 14:04:42 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025, 1:29 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Nov 12, 2025 at 12:44?PM Jason Resch wrote: > > On Wed, Nov 12, 2025 at 12:37?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Nov 12, 2025 at 12:29?PM Jason Resch > wrote: > >> > I think Bill's conversation shows this is not some kirk of my > description of superdeterminism, but a core part of superdeterminism. > >> > >> As superdeterminism is not mentioned at all in that conversation, this > >> conclusion does not follow from that particular data. > > > > I was referring to Bill's original conversation with the AI on > superdeterminism: > > > > https://www.kimi.com/share/19a6fcce-a8a2-8623-8000-0000e0a140f6 > > Which, as previously mentioned, starts with the assertion that "most > physicists reject it". > > If you start a conversation with AI with such assertions, the AI's > answer will generally agree with such assertions. This does not help > determine whether the assertions are true. 
> Regardless of an initial bias, the AI used similar descriptions as I did, e.g. using the term "conspiratorial." Such a description was not provided by Bill, the AI thought that was the best term to use to describe what the theory implies. Given that, do you consider my issues with superdeterminism to be idiosyncratic? Or do you now see that this is a common critique people give for rejecting superdeterminism? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Nov 12 20:25:18 2025 From: pharos at gmail.com (BillK) Date: Wed, 12 Nov 2025 20:25:18 +0000 Subject: [ExI] XPENG Next-Gen IRON humanoid robot Message-ID: XPENG has just revealed their latest humanoid robot. It can have customised forms, male or female. The 2-minute video demo shows a female form that was so realistic that on the stage, they had to cut the cover off to show the metal mechanism underneath. Watch it in full-screen mode! BillK From atymes at gmail.com Wed Nov 12 20:32:16 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 15:32:16 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 2:06?PM Jason Resch via extropy-chat wrote: > On Wed, Nov 12, 2025, 1:29 PM Adrian Tymes via extropy-chat wrote: >> If you start a conversation with AI with such assertions, the AI's >> answer will generally agree with such assertions. This does not help >> determine whether the assertions are true. > > Regardless of an initial bias, the AI used similar descriptions as I did, e.g. using the term "conspiratorial." Such a description was not provided by Bill, the AI thought that was the best term to use to describe what the theory implies. > > Given that, do you consider my issues with superdeterminism to be idiosyncratic? Or do you now see that this is a common critique people give for rejecting superdeterminism? Given that you're either missing or ignoring my point again, I think it is time to end this branch of the discussion, as I did earlier with another poster. You're not convincing me, and I'm not convincing you. From jasonresch at gmail.com Wed Nov 12 21:39:30 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 12 Nov 2025 16:39:30 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025, 3:33 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Nov 12, 2025 at 2:06?PM Jason Resch via extropy-chat > wrote: > > On Wed, Nov 12, 2025, 1:29 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> If you start a conversation with AI with such assertions, the AI's > >> answer will generally agree with such assertions. This does not help > >> determine whether the assertions are true. > > > > Regardless of an initial bias, the AI used similar descriptions as I > did, e.g. using the term "conspiratorial." Such a description was not > provided by Bill, the AI thought that was the best term to use to describe > what the theory implies. > > > > Given that, do you consider my issues with superdeterminism to be > idiosyncratic? Or do you now see that this is a common critique people give > for rejecting superdeterminism? 
> > Given that you're either missing or ignoring my point again, I think > it is time to end this branch of the discussion, as I did earlier with > another poster. You're not convincing me, and I'm not convincing you. > Note: I am not trying to convince you I am right. I only seek to settle the question of whether you think I am alone in raising this particular critique, or if you will acknowledge that this is a common (if not the main) critique that is leveled against superdeterminism. If you acknowledge that I am not alone in my thinking, then I at least can rest easy with the knowledge that you don't think I was crazy for raising the point I raised (or at least, if I am crazy, that I am in some good company). Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 12 22:21:11 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 14:21:11 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> Message-ID: <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> From: John Clark ? >> the world would be a better place if that company [Twitter] did NOT exist? > I disagree enthusiastically, wholeheartedly. >?I'm not surprised given that you said Twitter was your primary news source, and that fact will certainly shape your worldview. John K Clark Twitter is the world?s version of Extropolis. Musk lets anyone post anything they want over there. Scandal! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 12 22:31:49 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 14:31:49 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <008c01dc53e5$3e145b90$ba3d12b0$@rainier66.com> Message-ID: <015e01dc5424$235a0450$6a0e0cf0$@rainier66.com> ....> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Gender-Neutral Side Note On Wed, Nov 12, 2025 at 10:0?AM spike jones via extropy-chat wrote: > From: John Clark > > >?to get a 34 time convicted felon... > > > What was this felon convicted of, please? > > >?Spike, I refuse to give an answer to a question that you can find > >for yourself in about 0.8 seconds online. John K Clark > > I did that. It didn?t find THE answer. It gave me AN answer. It gave me a misdemeanor. The evidence - to wit, the results of https://www.google.com/search?q=34+time+convicted+felon - suggests that you are lying... _______________________________________________ Adrian, it isn't a felony until they tell us what the other crime was. Paying off Stormy is not a crime. So without a conviction on another crime, this falsifying records charges are misdemeanors with expired statute of limitations. All the court needs to do is show us where there were any convictions of any other crime. They didn't do this. Without a second crime, those "felonies" for misdemeanors are as phony Dr. 
Pepper's medical degree. To this day, no one knows what the other crime was, or when the conviction took place. We shouldn't just blow it off however, for that phony attempt at a felony conviction swung libertarians over into the column of the guy who won that 2024 election in a landslide. Libertarians are very tuned to fake court cases. spike From spike at rainier66.com Wed Nov 12 23:06:32 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 15:06:32 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <015e01dc5424$235a0450$6a0e0cf0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <008c01dc53e5$3e145b90$ba3d12b0$@rainier66.com> <015e01dc5424$235a0450$6a0e0cf0$@rainier66.com> Message-ID: <017f01dc5428$fcfdf8c0$f6f9ea40$@rainier66.com> -----Original Message----- From: spike at rainier66.com ... _______________________________________________ >... it isn't a felony until they tell us what the other crime was. Paying off Stormy is not a crime. So without a conviction on another crime, this falsifying records charges are misdemeanors with expired statute of limitations... spike There is an important lesson here: there are no legitimate shortcuts to legal convictions. The New York court tried a novel shortcut which failed to convince the American people. So... they swung over to what appeared to be an unlawfully-convicted political adversary of New York. The legal system cannot be used as a legitimate political tool. spike From atymes at gmail.com Wed Nov 12 23:24:45 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 18:24:45 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 4:41?PM Jason Resch via extropy-chat wrote: > I only seek to settle the question of whether you think I am alone in raising this particular critique, or if you will acknowledge that this is a common (if not the main) critique that is leveled against superdeterminism. > > If you acknowledge that I am not alone in my thinking, then I at least can rest easy with the knowledge that you don't think I was crazy for raising the point I raised (or at least, if I am crazy, that I am in some good company). I acknowledge that you are not alone in your thinking, and not crazy for thinking it. How common your critique is, I am not sure...not for any flaw in your position, but because it's not the main one I've heard, and many who voice said main critique seem to think no further, thus they do not share your reasoning. Said main critique is, "superdeterminism means there is no such thing as free will, so none of us are responsible for our actions". They repeat this even when I explain how there is potential room for free will, at least in my version. I do appreciate that your position is rather more well thought out than that. 
From jasonresch at gmail.com Wed Nov 12 23:51:41 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 12 Nov 2025 18:51:41 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025, 6:25 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Nov 12, 2025 at 4:41?PM Jason Resch via extropy-chat > wrote: > > I only seek to settle the question of whether you think I am alone in > raising this particular critique, or if you will acknowledge that this is a > common (if not the main) critique that is leveled against superdeterminism. > > > > If you acknowledge that I am not alone in my thinking, then I at least > can rest easy with the knowledge that you don't think I was crazy for > raising the point I raised (or at least, if I am crazy, that I am in some > good company). > > I acknowledge that you are not alone in your thinking, and not crazy > for thinking it. How common your critique is, I am not sure...not for > any flaw in your position, but because it's not the main one I've > heard, and many who voice said main critique seem to think no further, > thus they do not share your reasoning. > I see, thank you. I appreciate this additional insight from your experience m > Said main critique is, "superdeterminism means there is no such thing > as free will, so none of us are responsible for our actions". What I find odd about that position is superdeterminism isn't necessary to justify such a belief. Regular old determinism suffices for those who are incompatibilists regarding free will. (For what it's worth I am a compatibilist regarding free will). Not to mention, it's irrational to reason backwards from what one wants to be true. They > repeat this even when I explain how there is potential room for free > will, at least in my version. > Right, I would point them to the wiki article on compatibilism, or Dan Dennett's book on free will (Elbow Room). > I do appreciate that your position is rather more well thought out than > that. > Thank you. I am glad for this clarification. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 12 23:56:35 2025 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Nov 2025 18:56:35 -0500 Subject: [ExI] =?utf-8?q?Google=E2=80=99s_Willow_Quantum_Chip=3A_Proof_of?= =?utf-8?q?_the_Multiverse=3F?= In-Reply-To: References: Message-ID: On Wed, Nov 12, 2025 at 6:53?PM Jason Resch via extropy-chat wrote: > Not to mention, it's irrational to reason backwards from what one wants to be true. On this we wholeheartedly agree. And yet, it's what I keep running into. From spike at rainier66.com Thu Nov 13 01:05:02 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 17:05:02 -0800 Subject: [ExI] day 3 In-Reply-To: References: <008401dc53e4$e8b2e7c0$ba18b740$@rainier66.com> Message-ID: <001401dc5439$8ac270e0$a04752a0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Sent: Wednesday, 12 November, 2025 7:19 AM To: ExI chat list Cc: Adrian Tymes Subject: Re: [ExI] day 3 Compare that to use percentage, same time of day in similar neighborhood (including proximity to typical commutes), for gas stations. Granted, home-and-work charging means that there's less demand for on-the-road charging. There aren't as many workplaces, let alone homes, which offer on-site gasoline refueling. 
Adrian, I just went by the No-Musk station for the second time today, time 1630. No customers. To estimate this demand correctly, we would need to know the fraction of cars which are No-Musk EVs and how long those take to charge. I have a neighbor with a Rivian, so I might be able to find out the second question. On the first question: this is tricky. There are plenty of No-Musks, but it is harder to count them than it is to count the Yes-Musks, since Tesla only has about five models. Of real interest to me is how much power a data center devours per... what? What is the relevant unit? Are those things huge buildings filled with Nvidia GPU processors? What are they calculating? I see that Nvidia high end processors use a ton of power: 400 to 1000 W, dang. My strategy might not work: encourage the masses to buy EVs, then add all the power infrastructure needed to support all that, then when the demand for power gets high enough because of the runup to the Singularity, the EV drivers park or get ICE buggies, freeing up all that electric power capacity. If the faceless masses don't get EVs, the strategy won't work. Does anyone here have a No-Musk EV? How long do those take to charge on a standard charger? How about on a high-speed? spike On Wed, Nov 12, 2025 at 10:00 AM spike jones via extropy-chat wrote: > > I rolled by the non-Musk charging station, time 0650. Number of customers: zero. > > > > Damn. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Thu Nov 13 02:31:19 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Nov 2025 18:31:19 -0800 Subject: [ExI] govt lending to build nuke plants? Message-ID: <005701dc5445$98767950$c9636bf0$@rainier66.com> I don't know what to make of this, but it supports Keith's (I think it was his) contention that the federal government does partially fund power generation projects. I wholeheartedly agree with building nukes, but if we have government involved in it, that means the government can withdraw funding, as it did with the wind project in New Jersey, delaying completion. https://www.zerohedge.com/markets/trump-admin-lend-hundreds-billions-build-nuclear-power-plants spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Thu Nov 13 05:20:32 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 12 Nov 2025 21:20:32 -0800 Subject: [ExI] bill gates commentary on climate change In-Reply-To: References: <7c8e881a-4597-485e-aa4e-6ecf97fb8cf9@zaiboc.net> Message-ID: On Mon, Nov 10, 2025 at 5:06 AM John Clark via extropy-chat wrote: > >wrote: > >> > Time delayed realization: Bill Gates has been a leading promoter of the global warming notion, but last week much to the surprise of me and plenty of others, he backed off of that. Perhaps he pondered the Singularity notion and concluded (as many or most of us here did) that there is low probability the Singularity will be far in the future. He realized it is a near-term phenomenon, two or three decades at most, meaning that climate is irrelevant. I have been writing about this since 1990.
My thought was that people would mine the carbon out of the air for such things as houses and engineering projects. I don't know that people will do this, but it seems unlikely that climate will be a post-singularity problem. Between now and then, we are likely to have it rough. > I think you are probably correct, it's certainly why I don't get all upset when somebody predicts that by 2100 this or that dreadful thing will happen. Anybody who thinks they know what will happen in the year 2100 is simply wrong. If you are writing about 100 years from now, it may be set in a remnant of humanity. I did this in a lead-in chapter to The Clinic Seed. I would not be surprised to find the Earth entirely deserted by that point. Keith > John K Clark >> >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jasonresch at gmail.com Thu Nov 13 11:23:57 2025 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 13 Nov 2025 06:23:57 -0500 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: Changing subjects from the previous thread: On Wed, Nov 12, 2025, 6:57 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Nov 12, 2025 at 6:53 PM Jason Resch via extropy-chat > wrote: > > Not to mention, it's irrational to reason backwards from what one wants > to be true. > > On this we wholeheartedly agree. And yet, it's what I keep running into. > Regarding free will, you previously said there may be some true randomness in the universe. Do you think true randomness is required for free will, or are you a compatibilist (free will can exist in a universe which has no randomness at all)? And finally, if you do believe in randomness, what form do you see it taking: - Events in the universe that don't have causes (i.e. "God playing dice") - Hidden variables which we cannot in principle access (QM is incomplete) - Unpredictability arising from chaos combined with incomplete or imperfect information (Like my deterministic simulation example, or how pseudorandom number generators work) - Self-locating uncertainty in the face of duplication (Many worlds's answer, or "Tegmark's cloned while sleeping" example) - The "un-anticibility" (for lack of a better word) of computationally irreducible processes (e.g. Wolfram's answer, that we can't always predict what a complex computer program will do until it does it) - Something non-algorithmic (or hypercomputation) exists in physics (e.g. like Penrose and his quantum gravity speculations) - Libertarian free will (i.e., the brain's will breaks free from or is independent of the ordinary determinism of the physical laws). - (something else)? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Nov 13 13:24:30 2025 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 13 Nov 2025 08:24:30 -0500 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: On Thu, Nov 13, 2025 at 6:25 AM Jason Resch via extropy-chat wrote: > Regarding free will, you previously said there may be some true randomness in the universe. > > Do you think true randomness is required for free will, or are you a compatibilist (free will can exist in a universe which has no randomness at all)?
> > And finally, if you do believe in randomness, what form do you see it taking: The paradox of defining "free will" like that is, once you define how it works, it is no longer what most people consider to be "free will" by definition. As such, attempts to define how it works have tended to be traps: pick something, anything, and get attacked for it. As such, I've picked up a distaste for getting into such discussions, such that I'm not sure I could state an honest opinion on that topic anymore. From pharos at gmail.com Thu Nov 13 14:29:36 2025 From: pharos at gmail.com (BillK) Date: Thu, 13 Nov 2025 14:29:36 +0000 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: On Thu, 13 Nov 2025 at 13:26, Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The paradox of defining "free will" like that is, once you define how > it works, it is no longer what most people consider to be "free will" > by definition. > > As such, attempts to define how it works have tended to be traps: pick > something, anything, and get attacked for it. > > As such, I've picked up a distaste for getting into such discussions, > such that I'm not sure I could state an honest opinion on that topic > anymore. > _______________________________________________ AIs don't have the option to avoid traps like that. :) I asked the new ERNIE 4.5 Turbo AI (Deep Thinking option) these questions. Final Sentence: Ultimately, free will may be less about randomness and more about how we experience and rationalize choice within a web of causes. Click link to view conversation with ERNIE Bot: < https://ernie.baidu.com/share/sGsjGxgXL3> (Website may be slow to load). BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Nov 13 14:32:53 2025 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 13 Nov 2025 09:32:53 -0500 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: On Thu, Nov 13, 2025, 8:25 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Nov 13, 2025 at 6:25?AM Jason Resch via extropy-chat > wrote: > > Regarding free will, you previously said there may be some true > randomness in the universe. > > > > Do you think true randomness is required for free will, or are you a > compatibilist (free will is can exist in a universe which has no randomness > at all)? > > > > And finally, if you do believe in randomness, what form do you see it > taking: > > The paradox of defining "free will" like that is, once you define how > it works, it is no longer what most people consider to be "free will" > by definition. > My question is more about your understanding of randomness and whether or not it is necessary. I happen to find many of these definitions of randomness workable or even probable, to be operating in this universe. And those I don't agree with, incomplete quantum mechanics, or non algorithmic physics, are defended by Nobel prize winners. > As such, attempts to define how it works have tended to be traps: pick > something, anything, and get attacked for it. > I'm not trying to trap you. We both agree there are valid conceptions of free will, and even unpredictability with determinism. > As such, I've picked up a distaste for getting into such discussions, > such that I'm not sure I could state an honest opinion on that topic > anymore. > That's too bad, but I understand why you might feel that way. 
Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Nov 13 14:41:08 2025 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 13 Nov 2025 09:41:08 -0500 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: Thanks Bill that was an interesting read. Jason On Thu, Nov 13, 2025, 9:31 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, 13 Nov 2025 at 13:26, Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > The paradox of defining "free will" like that is, once you define how > > it works, it is no longer what most people consider to be "free will" > > by definition. > > > > As such, attempts to define how it works have tended to be traps: pick > > something, anything, and get attacked for it. > > > > As such, I've picked up a distaste for getting into such discussions, > > such that I'm not sure I could state an honest opinion on that topic > > anymore. > > _______________________________________________ > > > AIs don't have the option to avoid traps like that. :) > I asked the new ERNIE 4.5 Turbo AI (Deep Thinking option) these questions. > > Final Sentence: Ultimately, free will may be less about randomness and > more about how we experience and rationalize choice within a web of causes. > Click link to view conversation with ERNIE Bot: < > https://ernie.baidu.com/share/sGsjGxgXL3> > (Website may be slow to load). > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Nov 13 14:57:04 2025 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 13 Nov 2025 09:57:04 -0500 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: On Thu, Nov 13, 2025 at 9:34?AM Jason Resch via extropy-chat wrote: > On Thu, Nov 13, 2025, 8:25 AM Adrian Tymes via extropy-chat wrote: >> As such, attempts to define how it works have tended to be traps: pick >> something, anything, and get attacked for it. > > I'm not trying to trap you. We both agree there are valid conceptions of free will, and even unpredictability with determinism. To be clear: I don't think *you* are, but... >> As such, I've picked up a distaste for getting into such discussions, >> such that I'm not sure I could state an honest opinion on that topic >> anymore. ...and I'd rather not make up something and lie just to give you an answer. From spike at rainier66.com Thu Nov 13 16:09:15 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 13 Nov 2025 08:09:15 -0800 Subject: [ExI] berkeley students unaware Message-ID: <001801dc54b7$dbfe80b0$93fb8210$@rainier66.com> These undergrads don't even realize their studies are pointless, for the Singularity will be here before they graduate: UC Berkeley 's introductory machine learning course gets optimized for the AI age | CDSS at UC Berkeley Or perhaps they are betting the Singularity will be delayed long enough for them to perpetrate it. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at gmail.com Thu Nov 13 16:52:48 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 13 Nov 2025 09:52:48 -0700 Subject: [ExI] day 3 In-Reply-To: <001401dc5439$8ac270e0$a04752a0$@rainier66.com> References: <008401dc53e4$e8b2e7c0$ba18b740$@rainier66.com> <001401dc5439$8ac270e0$a04752a0$@rainier66.com> Message-ID: I had a Porsche Tycan, then a Lucid. A standard 120 volt plug would take days to charge. A standard 240 volt plug would take hours. And a high amperage charger would start out fast. You could get the first 50% of charge in about 10 minutes. then things heat up and slow down, but still relatively fast. On Wed, Nov 12, 2025 at 6:05?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > -----Original Message----- > From: extropy-chat On Behalf Of > Adrian Tymes via extropy-chat > Sent: Wednesday, 12 November, 2025 7:19 AM > To: ExI chat list > Cc: Adrian Tymes > Subject: Re: [ExI] day 3 > > Compare that to use percentage, same time of day in similar neighborhood > (including proximity to typical commutes), for gas stations. > > Granted, home-and-work charging means that there's less demand for > on-the-road charging. There aren't as many workplaces, let alone homes, > which offer on-site gasoline refueling. > > > > > > > > > > Adrian I just went by the No-Musk station for the second time today, time > 1630. No customers. > > To estimate this demand correctly, we would need to know the fraction of > cars which are No-Musk EVs and how long those take to charge. I have a > neighbor with a Rivian, so I might be able to find out the second > question. On the first question: this is tricky. There are plenty of > No-Musks, but it is harder to count them than it is to count the Yes-Musks, > since Tesla only has about five models. > > Of real interest to me is how much power a data center devours per... > what? What is the relevant unit? Are those things huge buildings filled > with NVidea GPU processors? What are they calculating? I see that NVidea > high end processors use a ton of power: 400 to 1000 W, dang. My strategy > might not work: encourage the masses to buy EVs, then add all the power > infrastructure needed to support all that, then when the demand for power > gets high enough because of the runup to the Singularity, the EV drivers > park or get ICE buggies, freeing up all that electric power capacity. If > the faceless masses don't get EVs, the strategy won't work. > > Does anyone here have a No-Musk EV? How long do those take to charge on a > standard charger? How about on a high-speed? > > spike > > > > > > > > > > On Wed, Nov 12, 2025 at 10:00?AM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > I rolled by the non-Musk charging station, time 0650. Number of > customers: zero. > > > > > > > > Damn. > > > > > > > > spike > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
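For a rough sense of the numbers behind "days versus hours versus minutes," here is a back-of-the-envelope sketch in Python. The pack size, outlet amperages, and fast-charger rating are illustrative assumptions rather than figures from this thread, and real sessions taper as the battery fills (as Brent notes), so the results are order-of-magnitude only.

    # Back-of-the-envelope EV charge-time estimate; all figures are
    # illustrative assumptions, not data from this thread.
    PACK_KWH = 100.0                           # assumed usable pack size
    ENERGY_TO_ADD = PACK_KWH * (0.80 - 0.10)   # charging from 10% to 80%

    chargers_kw = {
        "120 V / 12 A wall outlet": 120 * 12 / 1000,   # ~1.4 kW
        "240 V / 32 A home charger": 240 * 32 / 1000,  # ~7.7 kW
        "DC fast charger (150 kW average)": 150.0,
    }

    for name, kw in chargers_kw.items():
        hours = ENERGY_TO_ADD / kw
        print(f"{name}: about {hours:.1f} hours")

    # Prints roughly 49, 9, and 0.5 hours: days on a wall outlet, hours on
    # a 240 V charger, and tens of minutes on a fast charger, before taper.

The only point of the arithmetic is that charge time is roughly energy added divided by charger power, which is why moving from a wall outlet to a DC fast charger changes the answer by about two orders of magnitude.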
URL: From hkeithhenson at gmail.com Thu Nov 13 18:43:34 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 13 Nov 2025 10:43:34 -0800 Subject: Re: [ExI] govt lending to build nuke plants? In-Reply-To: <005701dc5445$98767950$c9636bf0$@rainier66.com> References: <005701dc5445$98767950$c9636bf0$@rainier66.com> Message-ID: On Wed, Nov 12, 2025 at 6:32 PM spike jones via extropy-chat wrote: > > I don't know what to make of this, but it supports Keith's (I think it was his) contention that the federal government does partially fund power generation projects. Back in the 1930s and 1940s, the Feds funded the whole cost of the Hoover Dam and Grand Coulee Dam. In the last few years, I read somewhere that power sales had paid off the construction cost of at least one of them. With respect to nuclear power plants, the Feds pick up the cost of disaster insurance because the potential is too large for any private group of companies. Of course, recent power line fires have shown that you don't need a nuclear meltdown to bankrupt a power company. A large chunk of the power cost increase is due to a single fire. "The 2018 Camp Fire in Northern California's Butte County was the deadliest and most destructive wildfire in California history. The fire began on the morning of November 8, 2018, when hardware on a poorly maintained Pacific Gas and Electric Company (PG&E) transmission line in the Feather River Canyon failed during strong katabatic winds. Those winds rapidly drove the Camp Fire through the communities of Concow, Magalia, Butte Creek Canyon, and Paradise, largely destroying them. The fire burned for another two weeks, and was contained on Sunday, November 25 after burning 153,336 acres (62,050 ha). The Camp Fire caused 85 fatalities, displaced more than 50,000 people, and destroyed more than 18,000 structures, causing an estimated $16.5 billion in damage." "PG&E filed for bankruptcy in January 2019, citing expected wildfire liabilities of $30 billion. On December 6, 2019, the utility made a settlement offer of $13.5 billion for the wildfire victims; the offer covered several devastating fires caused by the utility, including the Camp Fire. On June 16, 2020, the utility pleaded guilty to 84 counts of involuntary manslaughter." https://en.wikipedia.org/wiki/Camp_Fire_(2018) Not that anyone spent a day in jail. PG&E was well known for spending as little money as possible on maintenance. This kept the cost of power down and profits up, so the executives could collect large bonuses. https://en.wikipedia.org/wiki/San_Bruno_pipeline_explosion "On January 13, 2012, an independent audit from the State of California issued a report stating that PG&E had illegally diverted over $100 million from a fund used for safety operations, and instead used it for executive compensation and bonuses." The line that failed was 100 years old.
"The Camp Fire was caused by the failure of a single metal hook attached to a PG&E transmission tower on the company's Caribou-Palermo transmission line, which carried power from hydroelectric facilities in the Sierra Nevada to the Bay Area.[32] The tower, a little under 100 feet (30 m) tall,[33] was built on a steep incline on a ridge above Highway 70 and the North Fork Feather River near the community of Pulga.[34] "The tower had two arms, each with a hook hanging from a hole in a long piece of metal.[35] The hook held up a string of electrical insulators.[36] The transmission power lines were suspended from these insulators, away from the steel tower itself so as to prevent electricity arcing between them.[35] One of the hooks on the tower (about three inches (7.6 cm) wide[32] and one inch (2.5 cm) in diameter) had been worn down by rubbing against the metal plate that it hung from, to the point where only a few millimeters of metal remained.[35] At 6:15 a.m. PST on November 8, a PG&E control center in Vacaville recorded an outage on the company's transmission line in the Feather River Canyon.[37]: 7 The hook?which was about 7/8ths worn through?had snapped under the weight of the power line and insulator string that it supported, which weighed more than 142 pounds (64 kg).[37]: 22 No longer held up, the energized power line struck the transmission tower. This created an electric arc between the power line and the tower, which reached temperatures estimated at 5,000 to 10,000 ?F (2,800 to 5,500 ?C) and melted metal components of the conductor and the tower. The molten metal fell into the brush beneath the tower, setting it alight.[37]: 9 " A lot of the high cost of power in California can be traced to executives who diverted maintenance funds to their own pockets. The Wikipedia articles are well worth reading. Keith > I wholeheartedly agree with building nukes, but if we have government involved in it, that means the government can withdraw funding, as it did with the wind project in New Jersey, delaying completion. > > > > https://www.zerohedge.com/markets/trump-admin-lend-hundreds-billions-build-nuclear-power-plants > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Thu Nov 13 20:01:52 2025 From: johnkclark at gmail.com (John Clark) Date: Thu, 13 Nov 2025 15:01:52 -0500 Subject: [ExI] Russia's new cutting edge anthropomorphic robot Message-ID: *It starts at about one minute 30 seconds in *. *Russia's new cutting edge anthropomorphic robot * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 13 21:17:16 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 13 Nov 2025 13:17:16 -0800 Subject: [ExI] day 4 In-Reply-To: References: <008401dc53e4$e8b2e7c0$ba18b740$@rainier66.com> <001401dc5439$8ac270e0$a04752a0$@rainier66.com> Message-ID: <008c01dc54e2$e39b9a80$aad2cf80$@rainier66.com> Woohooooo! A customer! DOH! It?s a Tesla. Isn?t charging, only parking. Not compatible with these chargers. But he isn?t breaking any rules. The pavement markings say EV ONLY. Well, a Tesla is an EV, ja? From: extropy-chat On Behalf Of Brent Allsop via extropy-chat Subject: Re: [ExI] day 3 >?I had a Porsche Tycan, then a Lucid. A standard 120 volt plug would take days to charge. A standard 240 volt plug would take hours. 
And a high amperage charger would start out fast. You could get the first 50% of charge in about 10 minutes. then things heat up and slow down, but still relatively fast. OK cool thx Brent. This charging station is set up in the parking lot of a strip mall, so there is stuff to do. While he is charging, a prole can get his hair cut, his nails done, get a tooth fixed, eat a nice Vietnamese meal, or more traditional Taiwanese sandwiches and boba tea, he can even get a massaaaaage. (I am told it isn?t considered prostitution if there is no actual genital to genital contact (hey, California (I am told they take cash if you prefer (I don?t know any of this from firsthand knowledge, of course (or any other number of? hands from that massaaaage place.))))) If we think about it, this is critical to the success of a charging business: to be where a prole has something to do right where he must wait for half an hour or more, preferably something perfectly legal (in California.) spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 39738 bytes Desc: not available URL: From spike at rainier66.com Fri Nov 14 05:43:13 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 13 Nov 2025 21:43:13 -0800 Subject: [ExI] cool tabby's star article Message-ID: <003d01dc5529$91d72b50$b58581f0$@rainier66.com> It was dimming and suddenly went dark - NASA shocked by never-before-seen star behavior spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 14 14:51:43 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 14 Nov 2025 09:51:43 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> Message-ID: On Wed, Nov 12, 2025 at 5:21?PM wrote: *> Twitter is the world?s version of Extropolis. Musk lets anyone post > anything they want over there. Scandal!* *On my list I don't let anybody post anything they like, I have 2 rigid rules:* *1) Don't be boring. * *2) Don't be stupid. * *Fortunately over the last five years everybody has obeyed those two rules so I never had to kick anybody off, I never even came close. We've had a few heated arguments but that's OK, I like arguing (not to be confused with fighting) with people, the better they can argue with me the more I like them, and my policy is that if I find somebody's argument is more logical than mine then I will abandon my previous position and adopt theirs as my own. And we've never had an ugly flamewar of the sort we had here in 2016 and 2020. I admit at least part of the reason for that is that it is much less active than this list, and most of the posts are by me. I wish my list was more active because it's much more comfortable, I don't have to constantly behave like I'm walking on eggs as I do over here. * *By the way, why don't you let Google Groups host this list? It's free, you can still have a moderator if you want one, and there's no problem in posting large photographs or charts. And best of all it's far far more reliable. 
I don't know if things have improved but from the day I joined in 1993 this list was constantly going down, sometimes for a few hours, sometimes for a few days, sometimes for a few weeks. In the last five years **Google Groups hasn't died once. * * John K Clark See what's on my new list at Extropolis * sf? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 14 15:21:09 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Nov 2025 07:21:09 -0800 Subject: Re: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> Message-ID: <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> From: John Clark Sent: Friday, 14 November, 2025 6:52 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] Gender-Neutral Side Note On Wed, Nov 12, 2025 at 5:21 PM > wrote: > Twitter is the world's version of Extropolis. Musk lets anyone post anything they want over there. Scandal! >?On my list I don't let anybody post anything they like, I have 2 rigid rules: 1) Don't be boring. 2) Don't be stupid. If these are rigid rules, they require a rigid criterion for determining if they have been broken. Perhaps there is some formula with unambiguous objective metrics, or an instrument perhaps, but something that anyone can use and everyone will get the same determination of positive or negative results on the test for boring and stupid. Do elaborate please John, >? And we've never had an ugly flamewar of the sort we had here in 2016 and 2020? I see, and who was mysteriously at the center of both of those ugly flame wars in 2016 and 2020 please? Shall we consult the archives? spike ? >?John K Clark See what's on my new list at Extropolis sf? -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 14 16:12:49 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 14 Nov 2025 11:12:49 -0500 Subject: [ExI] Hitler's DNA reveals he had a "micropenis" and an undescended testicle Message-ID: *Thanks to the blood on a couch where Hitler put a bullet through his head, we now have a copy of his genome, and it reveals that he had something called "Kallmann syndrome" which, among other things, produces an undescended testicle, a micropenis, infertility, and a lack or diminished sense of smell.* * Hitler's DNA reveals he had a syndrome that can affect genitals * *For years there have been rumors that Hitler only had one ball, and apparently it's true. During World War II the British even had a song about it: * *Hitler has only got one ball, Göring has two but very small, Himmler is rather sim'lar, But poor old Goebbels has no balls at all.* *Hitler Has Only Got One Ball * *There were also rumors that Hitler had Jewish ancestry, but according to his DNA that rumor proved to be untrue. Something else was discovered.
Comparing Hitler's polygenic risk score with those of 30,000 Danish people, scientists found that "Hitler's score for schizophrenia, autism and bipolar disorder was actually higher than 99% of the individuals in the Danish population."* *John K Clark See what's on my new list at Extropolis * rvxfkk -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Fri Nov 14 16:15:41 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Fri, 14 Nov 2025 16:15:41 +0000 Subject: [ExI] No to Google In-Reply-To: References: Message-ID: On 14/11/2025 15:21, John K Clark wrote: > By the way, why don't you let Google Groups host this list? Please, please, please, don't do that. I'd hate to lose this list. -- Ben (someone who Doesn't Do Google, for a number of reasons) From atymes at gmail.com Fri Nov 14 16:23:53 2025 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 14 Nov 2025 11:23:53 -0500 Subject: Re: [ExI] No to Google In-Reply-To: References: Message-ID: On Fri, Nov 14, 2025 at 11:16 AM Ben Zaiboc via extropy-chat wrote: > On 14/11/2025 15:21, John K Clark wrote: > > By the way, why don't you let Google Groups host this list? > > Please, please, please, don't do that. > > I'd hate to lose this list. > > -- > Ben (someone who Doesn't Do Google, for a number of reasons) I think they could use an explanation of your concerns on this issue, if you please. Both from your personal subscribe/unsubscribe basis, and - if you have them - worries about Google cancelling or otherwise affecting the list if it was to be hosted there. From pharos at gmail.com Fri Nov 14 16:24:55 2025 From: pharos at gmail.com (BillK) Date: Fri, 14 Nov 2025 16:24:55 +0000 Subject: [ExI] World's first mass humanoid robot delivery begins Message-ID: World's first mass humanoid robot delivery begins as UBTech sends Walker S2 units Orders for UBTECH's humanoids have surged past 800 million yuan as major automakers and tech firms deploy them for nonstop industrial work. By Sujita Sinha Nov 14, 2025 < https://interestingengineering.com/ai-robotics/worlds-first-large-delivery-of-humanoid-workers > Quotes: Shenzhen is now home to a major robotics milestone as UBTECH Robotics confirmed that hundreds of its Walker S2 humanoid robots have been shipped to active industrial facilities. These organizations want stable 24-hour operations without constant oversight. Early tests show that the robots are performing well in factories and warehouses instead of controlled labs. The company believes its battery system is the strongest feature of the Walker S2. The robot can remove and replace its own power pack within minutes and does not need any human assistance to do so. This reduces downtime and supports long shifts that involve constant walking and lifting. ---------------------------------------- Link includes a 1-minute video of the robots. BillK -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Fri Nov 14 16:32:14 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Nov 2025 08:32:14 -0800 Subject: [ExI] No to Google In-Reply-To: References: Message-ID: <011301dc5584$3cc80de0$b65829a0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Cc: Adrian Tymes Subject: Re: [ExI] No to Google On Fri, Nov 14, 2025 at 11:16?AM Ben Zaiboc via extropy-chat wrote: > On 14/11/2025 15:21, John K Clark wrote: >...> > By the way, why don't you let Google Groups host this list? > > Please, please, please, don't do that. > > I'd hate to lose this list. > > -- >>... Ben (someone who Doesn't Do Google, for a number of reasons) >...I think they could use with an explanation of your concerns on this issue, if you please. Both from your personal subscribe/unsubscribe basis, and - if you have them - worries about Google cancelling or otherwise affecting the list if it was to be hosted there. _______________________________________________ Ja, and we would have similar concerns if we hosted the list over on TwitterX: Musk would theoretically own the content, and specific criticism of Musk is disallowed content on that platform. Recommend stay as is. spike From mbb386 at main.nc.us Fri Nov 14 16:40:13 2025 From: mbb386 at main.nc.us (MB) Date: Fri, 14 Nov 2025 11:40:13 -0500 Subject: [ExI] No to Google In-Reply-To: References: Message-ID: <5e7d73a69cab0edb0321eef9cce8d8d9.squirrel@www.main.nc.us> I also avoid google, best I can. Regards, MB On Fri, November 14, 2025 11:15, Ben Zaiboc via extropy-chat wrote: > On 14/11/2025 15:21, John K Clark wrote: >> By the way, why don't you let Google Groups host this list? > > Please, please, please, don't do that. > > I'd hate to lose this list. > > -- > Ben (someone who Doesn't Do Google, for a number of reasons) > > From john at ziaspace.com Fri Nov 14 17:13:40 2025 From: john at ziaspace.com (John Klos) Date: Fri, 14 Nov 2025 17:13:40 +0000 (UTC) Subject: [ExI] Just say no to Google Message-ID: > By the way, why don't you let Google Groups host this list? It's free, you > can still have a moderator if you want one, and there's no problem in > posting large photographs or charts. And best of all it's far far more > reliable. I don't know if things have improved but from the day I joined in > 1993 this list was constantly going down, sometimes for a few hours, > sometimes for a few days, sometimes for a few weeks. In the last five years > Google Groups hasn't died once. Do you want a monoculture? Because that's how you get a monoculture ;) First, Google wants the whole world to move to them for all email. They feed everything that goes through them to their "AI", they sell access to private information, and it's more than likely they sell unfettered access to all data to the US government. Even though lists in general are public, the idea that we should normalize having for-profit corporate entities have access to all data everywhere is just wrong and broken. Second, as more and more people who don't know any better move to Google, they have less incentive to properly interoperate with the rest of the world. Google Groups functionality is limited in email clients that aren't basically web browsers, which means that people with older browsers, older computers and/or non-mainstream browsers or OSes have issues using Google Groups. Signing in is more problematic in non-western countries and with less common software. Their filters are unknowable. 
They do what they want and don't tell anyone what they do, so we have to guess. This means they're non-deterministic, which is why you have to check your spam folder for email that comes from people with whom you've corresponded for years. Sometimes the filters are outright censorship: https://en.wikipedia.org/wiki/Censorship_by_Google But most importantly, if someone lodges complaints and Google decides to take action, what will you do? There is literally no way to talk to an actual human at Google unless you 1) pay them a shitton of money or 2) you physically go do one of their offices and wait outside. Even if you can talk to a human at Google, can you talk to one that can actually get anything done? Can you find out why your list was taken down? Can you refute the complaints and get the list put back in place? If you think this is rhetorical, consider how many huge YouTube channels can't communicate with YouTube (which Google owns, obviously) such as the "Legal Eagle" channel which considered suing Google because of the inability to communicate with actual humans, or (sometimes huge) artists who get "copyright strikes" for their own work, and who can't get those strikes removed even when they prove the work is their own, or people or companies who lose access to their Gmail accounts and can't even find out why until a big enough fuss is made in social media for someone at Google to take notice. As the administrator of the servers hosting extropy.org and this mailing list, you should know that in the last few months I've gotten quite a number of (probably mostly "AI" generated) complaints demanding that the site and lists be taken down. It seems to be a concerted campaign to hurt the community. My guess is because of "trans" being part of the word "transhuman", but I don't know for sure because the complaints have no actual content (no examples of wrongdoing or specific claims). Some say that the site and lists are "anti-family" and "harmful to children", but offer no examples. Would this campaign work to get a list taken down from Google Groups? Would you want to take that chance, particularly when you have no recourse if it does work? Also, Google isn't as reliable as you might guess. When there are problems, the complete lack of ability to communicate with them has conditioned people to believing the problem is with the Internet connection, the local computer, the browser, et cetera. I see Google issues all the time. This was just two days ago: https://www.tomsguide.com/news/live/google-down-live-updates-outage-11-12-25 So do you really want the list to move to a megacorporation with whom mortals can't communicate? Instead I might suggest Extropolis consider moving AWAY from Google. You never know when something will go wrong. Or if not move away from Google, at least maintain a non-Google Groups backup so things can't be lost or manipulated. John From ExiMod at protonmail.com Fri Nov 14 17:36:01 2025 From: ExiMod at protonmail.com (ExiMod) Date: Fri, 14 Nov 2025 17:36:01 +0000 Subject: [ExI] Just say no to Google In-Reply-To: References: Message-ID: Historically, the Extropians mailing list originated in the good ol' days when people used dial-up phone lines to download their emails, then signed off to minimize phone charges. Today, developed countries have always-on broadband connections. However, in rural areas and developing countries, dial-up access is still common. 
Exi-chat is a worldwide mailing list, including developing countries, and allows for all types of email clients and computers. This is why we have a message size limit of 160 KB, which means that large images cannot be posted to the list. This is apart from all the Google problems mentioned by John Klos. So far as I know there is no intention to change the Exi-chat list. Regards, Eximod Sent with Proton Mail secure email. On Friday, November 14th, 2025 at 5:14 PM, John Klos via extropy-chat wrote: > > Do you want a monoculture? Because that's how you get a monoculture ;) > > First, Google wants the whole world to move to them for all email. They feed > everything that goes through them to their "AI", they sell access to private > information, and it's more than likely they sell unfettered access to all data > to the US government. Even though lists in general are public, the idea that we > should normalize having for-profit corporate entities have access to all data > everywhere is just wrong and broken. > > Second, as more and more people who don't know any better move to Google, they > have less incentive to properly interoperate with the rest of the world. Google > Groups functionality is limited in email clients that aren't basically web > browsers, which means that people with older browsers, older computers and/or > non-mainstream browsers or OSes have issues using Google Groups. Signing in is > more problematic in non-western countries and with less common software. > > Their filters are unknowable. They do what they want and don't tell anyone what > they do, so we have to guess. This means they're non-deterministic, which is > why you have to check your spam folder for email that comes from people with > whom you've corresponded for years. Sometimes the filters are outright > censorship: > > https://en.wikipedia.org/wiki/Censorship_by_Google > > But most importantly, if someone lodges complaints and Google decides to take > action, what will you do? There is literally no way to talk to an actual human > at Google unless you 1) pay them a shitton of money or 2) you physically go do > one of their offices and wait outside. > > Even if you can talk to a human at Google, can you talk to one that can > actually get anything done? Can you find out why your list was taken down? Can > you refute the complaints and get the list put back in place? > > If you think this is rhetorical, consider how many huge YouTube channels can't > communicate with YouTube (which Google owns, obviously) such as the "Legal > Eagle" channel which considered suing Google because of the inability to > communicate with actual humans, or (sometimes huge) artists who get "copyright > strikes" for their own work, and who can't get those strikes removed even when > they prove the work is their own, or people or companies who lose access to > their Gmail accounts and can't even find out why until a big enough fuss is > made in social media for someone at Google to take notice. > > As the administrator of the servers hosting extropy.org and this mailing list, > you should know that in the last few months I've gotten quite a number of > (probably mostly "AI" generated) complaints demanding that the site and lists > be taken down. It seems to be a concerted campaign to hurt the community. My > guess is because of "trans" being part of the word "transhuman", but I don't > know for sure because the complaints have no actual content (no examples of > wrongdoing or specific claims). 
Some say that the site and lists are > "anti-family" and "harmful to children", but offer no examples. > > Would this campaign work to get a list taken down from Google Groups? Would you > want to take that chance, particularly when you have no recourse if it does > work? > > Also, Google isn't as reliable as you might guess. When there are problems, the > complete lack of ability to communicate with them has conditioned people to > believing the problem is with the Internet connection, the local computer, the > browser, et cetera. I see Google issues all the time. This was just two days > ago: > > https://www.tomsguide.com/news/live/google-down-live-updates-outage-11-12-25 > > So do you really want the list to move to a megacorporation with whom mortals > can't communicate? > > Instead I might suggest Extropolis consider moving AWAY from Google. You never > know when something will go wrong. Or if not move away from Google, at least > maintain a non-Google Groups backup so things can't be lost or manipulated. > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Fri Nov 14 21:14:48 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 14 Nov 2025 16:14:48 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> Message-ID: On Fri, Nov 14, 2025 at 10:21?AM wrote: *>>?** And we've never had an ugly flamewar of the sort we had here in 2016 >> and 2020?* > > > *> I see, and who was mysteriously at the center of both of those ugly > flame wars in 2016 and 2020 please?**Shall we consult the archives?* *Please do, I'm proud of every word I wrote. * * See what's on my new list at **Extropolis * rr7 > > > sf? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Nov 14 22:12:02 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 14 Nov 2025 17:12:02 -0500 Subject: [ExI] Just say no to Google In-Reply-To: References: Message-ID: On Fri, Nov 14, 2025 at 12:13?PM John Klos wrote: *> **Do you want a monoculture? Because that's how you get a monoculture ;)* *To avoid a monoculture is it necessary that competing book publishers use printing presses that are made by different manufacturers? * > *> Even though lists in general are public, the idea that we should > normalize having for-profit corporate entities have access to all > data everywhere is just wrong and broken.* *I am a militant capitalist and I like for-profit corporate entities, I like them a lot, they sure beat the hell out of no profit corporate entities. And I especially like Google, I certainly don't think it or any of the high-tech companies as being one of the bad guys. At one time most members of this list felt the same way I did about that, but apparently not now. * * > Google Groups functionality is limited in email clients that aren't > basically web browsers* *I don't know where you got that idea. 
You can read the stuff off the web or on any computer that is capable of reading emails. And who has a computer that is incapable of reading emails or doesn't have a web browser? * > * > Their filters are unknowable. They do what they want and don't tell > anyone what they do, so we have to guess.* *I have seen no evidence for these mysterious "filters " during the last five years, nothing has ever been censored on my list by me or Google or anybody else. * > * > But most importantly, if someone lodges complaints and Google decides > to take action, what will you do?* *You're dreaming up hypotheticals that I have never seen in the last five years. * > > > > > > > > * > As the administrator of the servers hosting extropy.org > and this mailing list, you should know that in the > last few months I've gotten quite a number of (probably mostly "AI" > generated) complaints demanding that the site and lists be taken down. It > seems to be a concerted campaign to hurt the community. My guess is because > of "trans" being part of the word "transhuman", but I don't know for sure > because the complaints have no actual content (no examples of wrongdoing or > specific claims). Some say that the site and lists are "anti-family" and > "harmful to children", but offer no examples.* > *They sound like harmless imbeciles to me, and I'm sure people at Google get even more ridiculous threats than you do, but those letters written by nincompoops haven't affected the way you administer this list. and it hasn't affected the good people at Google either.* * > Also, Google isn't as reliable as you might guess. * *Maybe things have improved but I know for a fact that over the last five years Google groups has been one hell of a lot more reliable than this list had been from 1993 to 2021.* *John K Clark See what's on my new list at Extropolis * 12d -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at ziaspace.com Fri Nov 14 23:23:04 2025 From: john at ziaspace.com (John Klos) Date: Fri, 14 Nov 2025 23:23:04 +0000 (UTC) Subject: [ExI] Just say no to Google In-Reply-To: References: Message-ID: <2e84cab6-fac3-b759-48d0-3a4bd39cec9a@daisy.zia.io> You missed answering the one thing that makes Google a definite no go: what do you do when you need to communicate with an actual human at Google? > To avoid a monoculture is it necessary that competing book publishers > use printing presses that are made by different manufacturers? This is a really poor analogy. Printing presses don't press new and different pages every day. If they did, then yes, printing presses from certain manufacturers that won't print certain sentences would need to be treated differently than printing presses from other manufacturers that don't. > I am a militant capitalist and I like for-profit corporate entities, I > like them a lot, they sure beat the hell out of no profit corporate > entities. And I especially like Google, I certainly don't think it or > any of the high-tech companies as being one of the bad guys. At one time > most members of this list felt the same way I did about that, but > apparently not now. Good for you, but being a fan of capitalism doesn't change the fact that Google is a shitty, evil company that does shitty and evil things. It doesn't change the fact that they don't care about standards, they don't care about common sense, and they don't care about you or me. Comparing them to "no profit corporate entities" without naming "no profit corporate entities" is, well, meaningless. 
I really don't care about fans of Google. I care about what Google does, and how it affects me and my users. I was a fan of Google ages ago (although never a user of anything more than their search engine). But when a corporation decides that "Don't be evil" is too restrictive, they're telling you about themselves. A discussion about how evil they are is not in the scope of this discussion, but there are many, many good reasons for feeling differently about Google now than perhaps twenty years ago. > ?>?Google Groups functionality is limited in email clients that > ?>aren't basically web browsers > > I don't know where you got that idea. You can read the stuff off the web > or on any computer that is capable of reading emails. And who has a > computer that is incapable of reading emails or doesn't have a web > browser? This is no different than saying that because 95% of the world does something (like, for instance, running Windows twenty years ago), you don't know where anyone would get the idea that everyone can't do a thing that can only be done by 95% of the world. There are many reasons to do text only email. I've done text only email for decades and still do. Many screen readers do text only email and don't render a page as though it's a web page. People who care about security and privacy don't parse HTML nor load web links in email. The point is that email should just work without assuming that email == browser. But the point is that you not experiencing a problem doesn't mean the problem doesn't exist. > >?Their filters are unknowable. They do what they want and don't tell anyone what > they do, so we have to guess. > > I have seen no evidence for these mysterious "filters " during the last > five years, nothing has ever been censored on my list by me or Google or > anybody else. Good! Can I send the people who have problems communicating to and from Gmail to you, so you can tell them that their problems don't exist? With Google, you can't talk about spam or phishing. You can't discuss viruses or Trojans. Heck - even just reporting spam that comes FROM Google TO Google's own abuse address gets processed as though the complainer is the source of spam! That's how incredibly dumb they are, even if they choose to be dumb to avoid responsibility for the incredible amounts of spam and scams they facilitate. You seem to lack awareness of Google's "AI" reporting a father for sending pictures of his naked child to a doctor. Even after Google reported this to the police, and even after the police indicated to Google that they were completely wrong, Google continued to act like a company devoid of humans or any kind of comprehension of anything at all. They don't care about the possible implications of their automated actions. The point is that a problem might never happen to a billion Google users, but the people who are affected aren't going to be swayed by others saying, "it never happened to me". > >?But most importantly, if someone lodges complaints and Google > >decides to take action, what will you do? > > You're dreaming up hypotheticals that I have never seen in the last five years.?? You've never been an accident that has killed or almost killed you, so that means you can dismiss people who've been in bad accidents? Is your lack of experience of something proof that others haven't or don't experience that something? 
> They sound like harmless imbeciles to me, and I'm sure people at Google > get even more ridiculous threats than you do, but those letters written > by nincompoops haven't affected the way you administer this list. and it > hasn't affected the good people at Google either. OF COURSE those letters haven't affected the way I administer this list. That's my point. I have a brain, I can think, and I can evaluate things, unlike Google. That's why I wouldn't just hand things over to Google. But what evidence do you have that 1) Google has good people, and 2) that people (good or not) at Google have anything to do with what I'm talking about? I can show plenty of evidence of Google making all sorts of crazy decisions with not a shred of indication that an actual human brain had ever considered any part of those decisions. And please tell this to people who've been deplatformed because of concerted complaint campaigns directed towards Amazon, Microsoft and Google. Again, the fact that it hasn't happened to you doesn't mean it's hypothetical. > >?Also, Google isn't as reliable as you might guess. > > Maybe things have improved but I know for a fact that over the last five > years Google groups has been one hell of a lot more reliable than this > list had been from 1993 to 2021. Good! Let's hope it stays that way, and let's hope that you don't run in to these issues. Let's hope that Google doesn't decide to cancel Google Groups the way it has cancelled countless other services that countless people were actively using: https://killedbygoogle.com My point is that I'd rather be in control of the data than concede control to a faceless megacorp with which we can't communicate. John From spike at rainier66.com Fri Nov 14 23:48:25 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Nov 2025 15:48:25 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> Message-ID: <00c301dc55c1$2b5f57b0$821e0710$@rainier66.com> From: John Clark On Fri, Nov 14, 2025 at 10:21?AM > wrote: >>>? And we've never had an ugly flamewar of the sort we had here in 2016 and 2020? >> I see, and who was mysteriously at the center of both of those ugly flame wars in 2016 and 2020 please? Shall we consult the archives? >?Please do, I'm proud of every word I wrote. John, the rules here are slightly different from those at Extropolis. Anyone can post on political topics if they wish. The difference is that at ExI, personal attacks are not welcome and not allowed (as far as I know.) At Extropolis, personal attacks are allowed. I don?t know the specifics of how ExiMod is running things, but I do know it is working: we haven?t had a flame war here since Extropolis was formed. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 03:22:27 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Nov 2025 19:22:27 -0800 Subject: [ExI] day 5 Message-ID: <002101dc55df$11d54610$357fd230$@rainier66.com> This is going to be tougher than I thought. 
On the morning of the 5th day this place has been operational, I went by twice, no customers either time. This evening I went by, the ports had four ICE cars and five Teslas, no one charging. As far as I know, this place has sold no electric power. Now I am rethinking my investment strategy. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 21676 bytes Desc: not available URL: From spike at rainier66.com Sat Nov 15 03:30:55 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Nov 2025 19:30:55 -0800 Subject: [ExI] solar power Message-ID: <000801dc55e0$40a654b0$c1f2fe10$@rainier66.com> There are no low-energy rich countries: https://x.com/10mm_404/status/1989149616735215894 Prosperity requires energy. The USA is set up for solar, way better than the EU (we have more open land in the lower latitudes where the air is clear and the sun is bright. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 05:41:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Nov 2025 21:41:35 -0800 Subject: [ExI] self driving cars Message-ID: <003a01dc55f2$81e66980$85b33c80$@rainier66.com> It took a while, but eventually somehow they worked out the liability challenges. I saw a lot of self-drivers in San Francisco four months ago. Last Friday I saw one at the San Jose airport. This morning I saw this one in Palo Alto at the medical center. The local newspaper reports that Waymo now is expanding into San Jose and surrounding areas. I was at the first DARPA challenge, where self-driving cars went on a desert course. It took 20 years to get to where self-drivers can navigate the surface streets. It eventually happened, but dang that took a long time. This might be a better investment opportunity than power infrastructure. All the stuff we wrote about here twenty years ago is now unfolding. It will enable proles to not own cars: a network of Waymos will stay parked all over the place and form a far more practical transportation network than anything we have had before. They will be like taxis, but when they deliver the fare, they just stop and stay there until they are called to the next fare. This will help solve homelessness. Reason: if many people choose to not own cars, it frees up a lot of garage space which can be converted to apartments. It will require a lot of capital to buy enough Waymo cars so that they can distribute themselves along the streets at a spacing of about 100 meters. Meanwhile, former car owners can save on insurance, maintenance, etc, while freeing up valuable garage space, which is an income source. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 17734 bytes Desc: not available URL: From hkeithhenson at gmail.com Sat Nov 15 07:26:45 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 14 Nov 2025 23:26:45 -0800 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: On Thu, Nov 13, 2025 at 5:25?AM Adrian Tymes via extropy-chat wrote: > snip > As such, I've picked up a distaste for getting into such discussions, > such that I'm not sure I could state an honest opinion on that topic > anymore. 
> Over 20 years ago, due to reading Minsky on this subject, I decided it was not a useful discussion topic Keith > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From giulio at gmail.com Sat Nov 15 08:12:19 2025 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 15 Nov 2025 09:12:19 +0100 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: On Thu, Nov 13, 2025 at 12:25?PM Jason Resch via extropy-chat wrote: > > Changing subjects from the previous thread: > > On Wed, Nov 12, 2025, 6:57 PM Adrian Tymes via extropy-chat wrote: >> >> On Wed, Nov 12, 2025 at 6:53?PM Jason Resch via extropy-chat >> wrote: >> > Not to mention, it's irrational to reason backwards from what one wants to be true. >> >> On this we wholeheartedly agree. And yet, it's what I keep running into. > At times, irrationality is the smartest attitude. > > Regarding free will, you previously said there may be some true randomness in the universe. > > Do you think true randomness is required for free will, or are you a compatibilist (free will is can exist in a universe which has no randomness at all)? > > And finally, if you do believe in randomness, what form do you see it taking: > > - Events in the universe that don't have causes (i.e. "God playing dice") > > - Hidden variables which we cannot in principle access (QM is incomplete) > > - Unpredictability arising from chaos combined with incomplete or imperfect information (Like my deterministic simulation example, or how pseudorandom number generators work) > > - Self-locating uncertainty in the face of duplication (Many worlds's answer, or "Tegmark's cloned while sleeping" example) > > - The "un-anticibility" (for lack of a better word) of computationally irreducible processes (e.g. Wolfram's answer, that we can't always predict what complex computer program will do until it does it) > > - Something non-algorithmic (or hypercomputation) exists in physics (e.g. like Penrose and his quantum gravity speculations) > > - Libertarian free will (i.e., the brain's will breaks free from or is independent of the ordinary determinism of the physical laws). > > - (something else)? > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Sat Nov 15 08:54:01 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 15 Nov 2025 08:54:01 +0000 Subject: [ExI] No to Google In-Reply-To: References: Message-ID: <3f0af5b4-227e-42c3-b799-1607fdea20ef@zaiboc.net> On 15/11/2025 03:22, Adrian Tymes wrote: > On Fri, Nov 14, 2025 at 11:16?AM Ben Zaiboc via extropy-chat > wrote: >> On 14/11/2025 15:21, John K Clark wrote: >>> By the way, why don't you let Google Groups host this list? >> Please, please, please, don't do that. >> >> I'd hate to lose this list. >> >> -- >> Ben (someone who Doesn't Do Google, for a number of reasons) > I think they could use with an explanation of your concerns on this > issue, if you please. Both from your personal subscribe/unsubscribe > basis, and - if you have them - worries about Google cancelling or > otherwise affecting the list if it was to be hosted there. I think John Klos and ExiMod have done a better job than I could of describing at least enough of my concerns to justify my plea. 
I would add that Google is simply too big and powerful, and seems to be getting moreso all the time. For an unaccountable private company, that is simply unacceptable, for obvious reasons, and I don't see how anyone can sensibly disagree with that. I'm dreading the day when I'm literally forced to get a Google account in order to just live my life. Which suggests a quick poll: How many people reading this DON'T have a Google account? I'd be surprised if it's more than just a handful, and maybe just two or three. Considering that this list tends to attract people who can probably be described as independent-minded, that, if it's true, should cause concern (admittedly, I'm guessing here, but I think it's probably a good guess. I'm fairly sure that I'm the only person I know in real life that doesn't have one). Contrast this with the browsers that we use, or the operating system/s we favour. Or more relevantly, the other services we are signed up to. I'm sure there won't be anywhere near the same degree of uniformity. When I said "I'd hate to lose this list", I wasn't referring to Google shutting the list down, I was referring to the fact that if it moved to Google, I wouldn't be able to take part in it any more. The same reason I sometimes read, but never post to, the Extropolis list. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 15 12:31:11 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 15 Nov 2025 07:31:11 -0500 Subject: [ExI] self driving cars In-Reply-To: <003a01dc55f2$81e66980$85b33c80$@rainier66.com> References: <003a01dc55f2$81e66980$85b33c80$@rainier66.com> Message-ID: On Sat, Nov 15, 2025, 12:43?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This will help solve homelessness. Reason: if many people choose to not > own cars, it frees up a lot of garage space which can be converted to > apartments. > The garage space is already part of the building, so this seems unlikely. More logical would be to say that less parking is needed, which allows new construction or redevelopment to have more housing space. But...see existing zoning laws and parking minimums: this is a legislative battle that goes back decades. Regardless of the physical need or lack thereof, buildings are often legally required to give space for a car infrastructure, especially in the US, and mere technological improvements keep running into this in practice, preventing them from achieving what you are hoping. Besides, even if you add slightly more housing, the demand keeps growing so homelessness continues to be an issue. I have run into this issue while trying to bring some of Silicon Valley's lessons learned on urbanization to the rapidly growing locale I now find myself among. (Just now passing 100K people, on track to exceed 120K by 2035 if historical trends continue, and I don't think they have a single multi-story parking structure let alone a true high rise - yet. I'm not sure if even the Singularity would significantly slow this place's growth, though some scenarios might accelerate it over at least the next few decades.) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Sat Nov 15 14:15:31 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 15 Nov 2025 09:15:31 -0500 Subject: [ExI] Just say no to Google In-Reply-To: <2e84cab6-fac3-b759-48d0-3a4bd39cec9a@daisy.zia.io> References: <2e84cab6-fac3-b759-48d0-3a4bd39cec9a@daisy.zia.io> Message-ID: On Fri, Nov 14, 2025 at 6:25?PM John Klos via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *> You missed answering the one thing that makes Google a definite no go: > what do you do when you need to communicate with an actual human at Google?* > *As I've said I never had a problem with Google groups but if I did I'd go to https://support.google.com/ or call their customer support line at 650-253-0000. Actually just a few days ago I did need to call them when I was having a little trouble logging into my YouTube (a Google company) account with my brand new iPhone. They were polite and knowledgeable and the problem was soon fixed. * *>> To avoid a monoculture is it necessary that competing book >> publishers use printing presses that are made by different manufacturers?* > > > * > This is a really poor analogy. Printing presses don't press new > and different pages every day.* *They do, or at least they did, if they are printing newspapers, and paper books have not yet gone completely extinct, and they still print lots of very different books every day. * * >> I am a militant capitalist and I like for-profit corporate entities, >> I like them a lot, they sure beat the hell out of no profit >> corporate entities. And I especially like Google, I certainly don't think >> it or any of the high-tech companies as being one of the bad guys. At one >> time most members of this list felt the same way I did about that, >> but apparently not now.* > > > * > Good for you, but being a fan of capitalism doesn't change the fact > that Google is a shitty, evil company that does shitty and evil things.* *Google started small but it is now large and powerful. Can you think of any powerful nation or corporation or person or ANYTHING that has not done shitty and evil things? I can't. However if you added up all the evil that corporations have done over the last century it would amount to little more than naughtiness compared with the monstrous astronomically huge evil that came from nation states. Google has never stuffed people into ovens, nation states cannot say the same thing. * > > *I was a fan of Google ages ago* [...] *but **there are many, many good > reasons for feeling differently about Google* *now than perhaps twenty > years ago.* *You were a fan 20 years ago because back then Google was not large and powerful. In general I think large powerful things are inherently less lovable than small helpless things. However it is a fact of reality that large powerful things are going to exist, and I would prefer that those things be corporations rather than nation states. * *> People who care about security and privacy don't parse HTML nor load web > links in email. * *Huh? You included a web link in the very email that I am responding to. * * > The point is that a problem might never happen to a billion Google > users, but the people who are affected aren't going to be swayed by > others saying, "it never happened to me".* > *There is a non-zero probability that a meteorite will crash through the roof of your house and hit you on the head. So does that mean you should always wear a football helmet inside your house? 
* > *> You've never been an accident that has killed or almost killed you, > so that means you can dismiss people who've been in bad accidents?* *No, but it does mean you should pay attention to how the laws of probability work. * *>>You can read the stuff off the web or on any computer that is capable of >> reading emails. And who has a computer that is incapable of reading emails >> or doesn't have a web browser?* > > *> This is no different than saying that because 95% of the world > does something (like, for instance, running Windows twenty years ago), > you don't know where anyone would get the idea that everyone can't do a > thing that can only be done by 95% of the world.* *Unless you resort to snail mail, regardless of if you use Google groups or not you will simply be unable to send a post to somebody who doesn't have a web browser of some sort and who doesn't have some way to read emails. * *> >They sound like harmless imbeciles to me, and I'm sure people at >> Google get even more ridiculous threats than you do, but those letters >> written by nincompoops haven't affected the way you administer this list. >> and it hasn't affected the good people at Google either.* > > > > *OF COURSE those letters haven't affected the way I administer this > list.* *Interesting.* *John K Clark See what's on my new list at Extropolis * mxx -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 14:36:26 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 06:36:26 -0800 Subject: [ExI] self driving cars In-Reply-To: References: <003a01dc55f2$81e66980$85b33c80$@rainier66.com> Message-ID: <00e901dc563d$39cca920$ad65fb60$@rainier66.com> From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] self driving cars On Sat, Nov 15, 2025, 12:43?AM spike jones via extropy-chat > wrote: This will help solve homelessness. Reason: if many people choose to not own cars, it frees up a lot of garage space which can be converted to apartments. >?The garage space is already part of the building, so this seems unlikely. More logical would be to say that less parking is needed, which allows new construction or redevelopment to have more housing space?.But...see existing zoning laws and parking minimums: this is a legislative battle that goes back decades? The housing expansionists have the upper hand currently in California. It is way easier to get permits now to convert your garage to living space or install a factory-build microhouse in your back yard. Neither costs much, and the potential rent would make the ROI enormous. They would have a pay-back time of two to four years, depending on where it is. >?Regardless of the physical need or lack thereof, buildings are often legally required to give space for a car infrastructure? Adrian, I am coming from the POV of the extended single family suburb, which is where I have always resided. >?Besides, even if you add slightly more housing, the demand keeps growing so homelessness continues to be an issue? Ja, and it doesn?t solve the train problem necessarily: local commuter trains have become rolling homeless shelters/ Likewise, the homeless could take up residence in the Waymo cars. I think solutions to that problem will evolve. Example: ammonia injection. The Waymo recognizes that it has someone on board who is not paying and not getting out. The inside of that car starts to smell like Windex, harmless but an eye irritant. 
If the offender exits, the car rolls down its windows and drives to another location to air out, arriving perfectly without a trace of residue (ammonia is good that way.) Otherwise, the smell gets stronger and stronger, compelling exit. If that doesn?t work, the Waymo car summons the local authorities to remove the corpse or otherwise inert passenger. Note I am not claiming this scheme solves or cures the growing homelessness problem. It only treats it, by supplying additional housing for those who can pay, and preventing the growing population of the indigent from parasitizing mass transit. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 15 14:36:32 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 15 Nov 2025 09:36:32 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <00c301dc55c1$2b5f57b0$821e0710$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <00c301dc55c1$2b5f57b0$821e0710$@rainier66.com> Message-ID: On Fri, Nov 14, 2025 at 6:48?PM wrote: > *>>> **I see, and who was mysteriously at the center of both of those > ugly flame wars in 2016 and 2020 please? Shall we consult the archives?* > > > > *>>?Please do, I'm proud of every word I wrote.* > > > > *> John, the rules here are slightly different from those at Extropolis. > Anyone can post on political topics if they wish. The difference is that > at ExI, personal attacks are not welcome and not allowed (as far as I > know.) At Extropolis, personal attacks are allowed. * > *Personal attacks are allowed especially if they are directed at me, (I have a thick skin) provided those attacks are intelligent, or if they are so astonishingly stupid that they become entertaining, in the way that the movie Airplane or The Naked Gun is entertaining. However the matter is academic because there have been no personal attacks at Extropolis, I don't consider somebody saying "that's ridiculous" in response to something that I wrote as being a personal attack. * a *See what's on my new list at **Extropolis * 9o2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 15:01:46 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 07:01:46 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <00c301dc55c1$2b5f57 b0$821e0710$@rainier66.com> Message-ID: <011701dc5640$c3d14600$4b73d200$@rainier66.com> From: John Clark Sent: Saturday, 15 November, 2025 6:37 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] Gender-Neutral Side Note On Fri, Nov 14, 2025 at 6:48?PM > wrote: >>> I see, and who was mysteriously at the center of both of those ugly flame wars in 2016 and 2020 please? Shall we consult the archives? 
>>?Please do, I'm proud of every word I wrote. > ? The difference is that at ExI, personal attacks are not welcome and not allowed (as far as I know.) At Extropolis, personal attacks are allowed. >? if they are so astonishingly stupid that they become entertaining, in the way that the movie Airplane or The Naked Gun is entertaining? I am a huge fan of both of those. Comedy masterwork, if one is in the mood for such silliness. I seldom an in any other mood. >?However the matter is academic because there have been no personal attacks at Extropolis? Excellent. We can have discussions even on sensitive topics while keeping it civil. For instance, John previously you mentioned a politician who was ?convicted? of 34 felonies. I find no convictions of felonies, but rather only 34 misdemeanors for false record keeping, which are felonies if the intent is to conceal another crime. Without the other conviction, those 34 revert to misdemeanors. But without that conviction, the 34 charges aren?t even misdemeanors, for it cannot be established the records were false to start with. It appears that a corrupt court ?convicted? an innocent person, presumably for political purposes. This was perfectly obvious to the outside observer, which contributed to his winning the office. But I am open-minded. So John, when was the trial for this other crime please? Where is the documentation of that conviction? How did they keep that so secret? When was that trial, and where are the records? Without that conviction, there is no second crime, and the 34 ?convictions? are phony as a three-dollar bill. spike a -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 15 15:27:24 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 15 Nov 2025 10:27:24 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <011701dc5640$c3d14600$4b73d200$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <011701dc5640$c3d14600$4b73d200$@rainier66.com> Message-ID: On Sat, Nov 15, 2025 at 10:01?AM wrote: * > **John previously you mentioned a politician who was ?convicted? of 34 > felonies. I find no convictions of felonies, but rather only 34 > misdemeanors for false record keeping, which are felonies if the intent is > to conceal another crime. Without the other conviction, those 34 revert to > misdemeanors. But without that conviction, the 34 charges aren?t even > misdemeanors, for it cannot be established the records were false to start > with. * *I am unable to make any sense of the above.* *> It appears that a corrupt court ?convicted? an innocent person, > presumably for political purposes. * *Innocent? Does Jeffrey Epstein's best friend and Ghislaine Maxwell?s benefactor really seem like an innocent to you? * * John K Clark* a;l > > *>? if they are so astonishingly stupid that they become entertaining, in > the way that the movie Airplane or The Naked Gun is entertaining?* > > > > *I am a huge fan of both of those. Comedy masterwork, if one is in the > mood for such silliness. I seldom an in any other mood.* > > > > *>?However the matter is academic because there have been no personal > attacks at Extropolis? 
* > > > > *Excellent. We can have discussions even on sensitive topics while > keeping it civil.* > > > > *For instance, John previously you mentioned a politician who was > ?convicted? of 34 felonies. I find no convictions of felonies, but rather > only 34 misdemeanors for false record keeping, which are felonies if the > intent is to conceal another crime. Without the other conviction, those 34 > revert to misdemeanors. But without that conviction, the 34 charges aren?t > even misdemeanors, for it cannot be established the records were false to > start with. It appears that a corrupt court ?convicted? an innocent > person, presumably for political purposes. This was perfectly obvious to > the outside observer, which contributed to his winning the office.* > > > > *But I am open-minded. So John, when was the trial for this other crime > please? Where is the documentation of that conviction? How did they keep > that so secret? When was that trial, and where are the records? Without > that conviction, there is no second crime, and the 34 ?convictions? are > phony as a three-dollar bill.* > > > > *spike* > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sat Nov 15 15:28:57 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 15 Nov 2025 08:28:57 -0700 Subject: [ExI] Randomness and Free will In-Reply-To: References: Message-ID: I see some people don't value discussing this topic of free will. I think it is important, and has a lot to do with morality. There are lots of ways to think about this and I think it is very interesting to find out and track what people believe on this. I'm in the "Compatible" camp, and believe that randomness destroys free will. (ability to reliably get what is best for you.) God is only 'free' if he can absolutely and deterministically always make the right choice, anything else means he is not yet free, or not yet God. On Sat, Nov 15, 2025 at 1:13?AM Giulio Prisco via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Nov 13, 2025 at 12:25?PM Jason Resch via extropy-chat > wrote: > > > > Changing subjects from the previous thread: > > > > On Wed, Nov 12, 2025, 6:57 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> > >> On Wed, Nov 12, 2025 at 6:53?PM Jason Resch via extropy-chat > >> wrote: > >> > Not to mention, it's irrational to reason backwards from what one > wants to be true. > >> > >> On this we wholeheartedly agree. And yet, it's what I keep running > into. > > > > At times, irrationality is the smartest attitude. > > > > > Regarding free will, you previously said there may be some true > randomness in the universe. > > > > Do you think true randomness is required for free will, or are you a > compatibilist (free will is can exist in a universe which has no randomness > at all)? > > > > And finally, if you do believe in randomness, what form do you see it > taking: > > > > - Events in the universe that don't have causes (i.e. 
"God playing dice") > > > > - Hidden variables which we cannot in principle access (QM is incomplete) > > > > - Unpredictability arising from chaos combined with incomplete or > imperfect information (Like my deterministic simulation example, or how > pseudorandom number generators work) > > > > - Self-locating uncertainty in the face of duplication (Many worlds's > answer, or "Tegmark's cloned while sleeping" example) > > > > - The "un-anticibility" (for lack of a better word) of computationally > irreducible processes (e.g. Wolfram's answer, that we can't always predict > what complex computer program will do until it does it) > > > > - Something non-algorithmic (or hypercomputation) exists in physics > (e.g. like Penrose and his quantum gravity speculations) > > > > - Libertarian free will (i.e., the brain's will breaks free from or is > independent of the ordinary determinism of the physical laws). > > > > - (something else)? > > > > Jason > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 15:49:39 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 07:49:39 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <011701dc5640$c3d14600$4b73d200$@rainier66.com> Message-ID: <013a01dc5647$74003670$5c00a350$@rainier66.com> From: John Clark Subject: Re: [ExI] Gender-Neutral Side Note On Sat, Nov 15, 2025 at 10:01?AM > wrote: > John previously you mentioned a politician who was ?convicted? of 34 felonies. I find no convictions of felonies? >?I am unable to make any sense of the above?. Do allow me to simplify it for you John: until one is convicted of a crime, one is presumed innocent of that crime. >?Innocent? Does Jeffrey Epstein's best friend and Ghislaine Maxwell?s benefactor really seem like an innocent to you? John K Clark a;l John are you now claiming that anyone anywhere has ever been convicted of being Jeffrey Epstein?s best friend or convicted of being Ghislaine Maxwell?s benefactor? When and where were those trials please? Ja of course we are all presumed innocent of those charges, and stay that way until we are convicted. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 16:08:54 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 08:08:54 -0800 Subject: [ExI] Just say no to Google In-Reply-To: References: <2e84cab6-fac3-b759-48d0-3a4bd39cec9a@daisy.zia.io> Message-ID: <014b01dc564a$24a90ea0$6dfb2be0$@rainier66.com> ?> On Behalf Of John Clark via extropy-chat >?Google started small but it is now large and powerful. Can you think of any powerful nation or corporation or person or ANYTHING that has not done shitty and evil things? 
I can't. John K Clark John you make a great case for Google?s competitor, TwitterX. Elon Musk is in full agreement with you. Musk recognized that Twitter was doing exactly as you say. His strategy was to buy the big powerful evil Twitter and just stop it from DOING anything. All it needed to fulfil its Don?t Be Evil slogan (trademark) is to stop doing anything. That was easy: Twitter doesn?t even need humans, which saves huge costs and saves the company itself from bankruptcy. Now, TwitterX isn?t evil, even though it is bigger and more powerful than ever. Information wants to be free. There are evil things posted there, and good things. It is up to the reader to figure out which is which, not TwitterX. Let that sink in. Well, that was easy. It damn sure wasn?t cheap, but it was easy. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Nov 15 16:23:54 2025 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 15 Nov 2025 11:23:54 -0500 Subject: [ExI] day 5 In-Reply-To: <002101dc55df$11d54610$357fd230$@rainier66.com> References: <002101dc55df$11d54610$357fd230$@rainier66.com> Message-ID: On Fri, Nov 14, 2025, 10:24 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This evening I went by, the ports had four ICE cars and five Teslas, no > one charging. As far as I know, this place has sold no electric power. > Given other news, my first thought on ICE was not 'internal combustion engine' Of course the other meaning of "ICE cars" would be unpleasant to think about So now that I've written it out, I wonder if there are external combustion engines. Would Orion rockets qualify? Are steam engines external combustion relative to the motive power of the steam pistons? Does 'engine' automatically imply fuel consumption? Or is it the mechanical aspects? I know some people refer to the "motor" interchangeably with "engine" but I was always corrected that a motor was electrical (as in the ubiquitous pump motors or fan motors around the house) So is it redundant or superfluous to specify "internal combustion" to discuss engine cars vs motor cars? fwiw, there's no argument here as much as musing on the context/disambiguation of the term. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 16:34:34 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 08:34:34 -0800 Subject: [ExI] day 6 In-Reply-To: References: <002101dc55df$11d54610$357fd230$@rainier66.com> Message-ID: <017f01dc564d$ba8c0000$2fa40000$@rainier66.com> From: Mike Dougherty Subject: Re: [ExI] day 5 On Fri, Nov 14, 2025, 10:24 PM spike jones via extropy-chat > wrote: This evening I went by, the ports had four ICE cars and five Teslas, no one charging. As far as I know, this place has sold no electric power. >?Given other news, my first thought on ICE was not 'internal combustion engine' Oh OK cool thx, I hadn?t thought of that angle. >?So now that I've written it out, I wonder if there are external combustion engines? Of course. The old steam engines which powered locomotives in the olden days are external combustion. >? Are steam engines external combustion relative to the motive power of the steam pistons? Ja, that is how I always heard it described. >?Does 'engine' automatically imply fuel consumption? I think of it as an energy conversion device, or an entropy increaser. 
>?So is it redundant or superfluous to specify "internal combustion" to discuss engine cars vs motor cars? Henceforth I shall refer to fossil fuel burners as IC cars. >?fwiw, there's no argument here as much as musing on the context/disambiguation of the term? Mike I perceived the spirit of your inquiry and I already know you to be an affable and amicable sort, as I always strive to be. Well done sir. Update for day 6: I have been by there twice today, zero customers. I have an idear however, which I shall share a little later today. Idears are better than ideas, for reasons I shall likewise explain later, but now I gotta scoot forthwith. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 15 16:38:05 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 15 Nov 2025 11:38:05 -0500 Subject: [ExI] self driving cars In-Reply-To: <00e901dc563d$39cca920$ad65fb60$@rainier66.com> References: <003a01dc55f2$81e66980$85b33c80$@rainier66.com> <00e901dc563d$39cca920$ad65fb60$@rainier66.com> Message-ID: On Sat, Nov 15, 2025, 9:36?AM wrote: > Adrian, I am coming from the POV of the extended single family suburb, > which is where I have always resided. > Same here. Ja, and it doesn't solve the train problem necessarily: local commuter > trains have become rolling homeless shelters. Likewise, the homeless could > take up residence in the Waymo cars. I think solutions to that problem > will evolve. Example: ammonia injection. The Waymo recognizes that it has > someone on board who is not paying and not getting out. The inside of that > car starts to smell like Windex, harmless but an eye irritant. If the > offender exits, the car rolls down its windows and drives to another > location to air out, arriving perfectly without a trace of residue (ammonia > is good that way.) Otherwise, the smell gets stronger and stronger, > compelling exit. If that doesn't work, the Waymo car summons the local > authorities to remove the corpse or otherwise inert passenger. > Ah, the Soylent Green solution. Not very practical, given what happens afterward. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Nov 15 16:46:47 2025 From: pharos at gmail.com (BillK) Date: Sat, 15 Nov 2025 16:46:47 +0000 Subject: [ExI] day 6 In-Reply-To: <017f01dc564d$ba8c0000$2fa40000$@rainier66.com> References: <002101dc55df$11d54610$357fd230$@rainier66.com> <017f01dc564d$ba8c0000$2fa40000$@rainier66.com> Message-ID: On Sat, 15 Nov 2025 at 16:36, spike jones via extropy-chat wrote: > > > > Of course. The old steam engines which powered locomotives in the olden days are external combustion. > Henceforth I shall refer to fossil fuel burners as IC cars. > > spike > _______________________________________________ BillK From tara at taramayastales.com Sat Nov 15 17:00:54 2025 From: tara at taramayastales.com (Tara Maya) Date: Sat, 15 Nov 2025 09:00:54 -0800 Subject: [ExI] self driving cars In-Reply-To: <00e901dc563d$39cca920$ad65fb60$@rainier66.com> References: <00e901dc563d$39cca920$ad65fb60$@rainier66.com> Message-ID: An HTML attachment was scrubbed... URL:
From atymes at gmail.com Sat Nov 15 17:35:58 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 15 Nov 2025 12:35:58 -0500 Subject: [ExI] self driving cars In-Reply-To: References: <00e901dc563d$39cca920$ad65fb60$@rainier66.com> Message-ID: On Sat, Nov 15, 2025, 12:07?PM Tara Maya via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Thinking to the future collision of technologies, how about 3D printing > your own self-driving car in your factory-garage? > That requires having a factory-garage, which itself is a laudable goal but requires more emphasis on distributed/small scale factories. Current 3D printers, for all that they can do, won't cut it. It also needs emphasis on distributed feedstock supply chains. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 15 20:37:54 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 15 Nov 2025 15:37:54 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <013a01dc5647$74003670$5c00a350$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <011701dc5640$c3d14600$4b73d200$@rainier66.com> <013a01dc5647$74003670$5c00a350$@rainier66.com> Message-ID: On Sat, Nov 15, 2025 at 10:49?AM wrote: *>>> **John previously you mentioned a politician who was ?convicted? of >> 34 felonies. I find no convictions of felonies?* > > > > *>>?I am unable to make any sense of the above**?.* > > > > *> Do allow me to simplify it for you John:* > *Please do. * * > until one is convicted of a crime, one is presumed innocent of that > crime.* > *And Mr. I Like To Grab Them By The Pussy WAS convicted of a felony, 34 times. In addition MR.
I Like To Sneak Into The Girls Dressing Room For The Miss Teenage America Contest was convicted in a civil court for sexually abusing Jean Carroll and was fined $5 million. * *>>?Innocent? Does Jeffrey Epstein's best friend and Ghislaine Maxwell's >> benefactor really seem like an innocent to you? * > > > > *> John are you now claiming that anyone anywhere has ever been convicted > of being Jeffrey Epstein's best friend or convicted of being Ghislaine > Maxwell's benefactor?* > *That, as I'm sure you know, is a very silly question. * > >* we are all presumed innocent* > *Under the law a person is treated as being innocent until it has been proven otherwise by a court of law, and He Who Must Not Be Named HAS been convicted 34 times in a court of law. But in a free country individual people are allowed to form their own opinion on the matter of guilt or innocence at any time. * *John K Clark See what's on my new list at Extropolis * ytu > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 21:35:02 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 13:35:02 -0800 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <011701dc5640$c3d14600$4b73d200$@rainier66.com> <013a01dc5647$74003670$5c00a350$@rainier66.com> Message-ID: <025801dc5677$b4306640$1c9132c0$@rainier66.com> From: John Clark > until one is convicted of a crime, one is presumed innocent of that crime. >?And Mr. I Like To Grab Them By The Pussy WAS convicted of a felony, 34 times?. Oh? Do you know what is this felony? Last time I asked you didn't know. You referred me to the internet, which didn't know. Plenty of opinion on the subject. But the court didn't tell us. Are you suggesting the defendant was convicted of grabbing them by the pussy? When? Where? How did they keep that case secret so effectively? >? He Who Must Not Be Named HAS been convicted 34 times in a court of law? Convicted of what please? Do you know? Why don't you know? Are you going to punt again on that? >? But in a free country individual people are allowed to form their own opinion on the matter of guilt or innocence at any time. John K Clark Agreed. However, in a court of law, we need more discipline than an opinion. We intentionally write the rules to ensure we do not convict innocent people. We intentionally make sure we do not convict people of a felony for a misdemeanor because we need the conviction for another purpose, such as political warfare, or because we don't like what they say. That case you refer to is one which Libertarians heard loud and clear. It was perfectly obvious that they needed a second conviction before they could leverage a misdemeanor into a felony. They didn't have the other crime.
They have never shown us the conviction from the mystery other crime, hell, they never even told us what it was. So now, you don't know, the internet doesn't know, the jury doesn't know. All we have are opinions. Opinions are measured by votes. Clearly you are in the minority opinion, John. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 15 22:04:53 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 15 Nov 2025 17:04:53 -0500 Subject: [ExI] Gender-Neutral Side Note In-Reply-To: <025801dc5677$b4306640$1c9132c0$@rainier66.com> References: <73360e2b-64b4-4e73-a0b9-be9fc6916ee2@zaiboc.net> <00cc01dc4ff7$489a2410$d9ce6c30$@rainier66.com> <005901dc50ac$268d6570$73a83050$@rainier66.com> <006101dc50c3$56ac9cf0$0405d6d0$@rainier66.com> <000a01dc50cd$3cc1bdc0$b6453940$@rainier66.com> <00b901dc5190$a5b427a0$f11c76e0$@rainier66.com> <014f01dc5422$a7641e90$f62c5bb0$@rainier66.com> <00b201dc557a$4eaf3ab0$ec0db010$@rainier66.com> <013a01dc5647$74003670$5c00a350$@rainier66.com> <025801dc5677$b4306640$1c9132c0$@rainier66.com> Message-ID: On Sat, Nov 15, 2025 at 4:35?PM wrote: *>> ?And Mr. I Like To Grab Them By The Pussy WAS convicted of a felony, 34 >> times?.* > > > > *> Oh? Do you know what is this felony? Last time I asked you didn't > know. You referred me to the internet, which didn't know. * > *What the hell?! * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 15 22:09:25 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Nov 2025 14:09:25 -0800 Subject: [ExI] day 6 In-Reply-To: <017f01dc564d$ba8c0000$2fa40000$@rainier66.com> References: <002101dc55df$11d54610$357fd230$@rainier66.com> <017f01dc564d$ba8c0000$2fa40000$@rainier66.com> Message-ID: <027901dc567c$815902e0$840b08a0$@rainier66.com> From: spike at rainier66.com ? >?Update for day 6: I have been by there twice today, zero customers. I have an idear however, which I shall share a little later today. Idears are better than ideas, for reasons I shall likewise explain later, but now I gotta scoot forthwith?. spike Good chance everyone here will agree that the Singularity will eventually happen. We may disagree to some extent on when. We should have investment strategies consistent with our own predictions of course. We likely also agree that the Singularity and AI in general will be power hungry, and that it is quite likely that more generation capacity is needed, which will drive up the cost of power, and will provide marvelous investment opportunities with enormous profit potential. The S-alphas (those expecting the Singularity soonest) need something that can be scaled up quickly. The power sources currently scalable quickly are oil-based. Reasoning: locomotives can be purchased and used for generating power, for they are already built and capable of running at equilibrium for extended periods. The power would be used for AI rather than pulling train cars. That would require more oil production, but it isn't just "drill, baby, drill." That is a slightly longer-term solution, but there are existing oil wells mostly idle for lack of demand, and existing refinery capacity idle for lack of demand. The S-alpha would look to those facilities. The S-beta thinker would look perhaps toward coal and natural gas generators, which can also be scaled up quickly, for many of these already exist but are idle.
The concern about being forbidden by the state of California from connecting to the grid has a work-around: the operation wouldn't need to connect to the grid at all. A coal-fired plant aboard a ship could supply power to a data center aboard the same ship. No connection to the internet would be necessary. A retired military ship or cargo vessel would suffice. The longer-term opiners (S gammas, deltas and epsilons) might favor solar, wind, or even nuclear (those take a long time just to get a permit, in addition to the time it takes to build them). In the meantime, the self-driving cars may be a better investment opportunity, for it is easy enough to see they have somehow managed the liability challenge. I don't know how they did it, but Waymo is operational. It has shown practicality. Waymo hasn't killed anyone. The Elaine Herzberg fatality was an Uber test vehicle in self-driving mode, and legally the fault fell on the human safety driver. There have been plenty of accidents, but no fatalities. My best guess as to why is that the Waymo car never panics and keeps braking right up to the moment of collision, making the accidents less severe. If market penetration gets high enough, a jury might be less likely to award 100 billion dollars to a victim, putting the company out of business (and losing one's investment). So my notion now is that investment in Waymo (or any viable competitor to Waymo) is a better investment than power generation. Mike, John or anyone else interested in this topic, do offer your views please. spike -------------- next part -------------- An HTML attachment was scrubbed... URL:
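A quick back-of-the-envelope sketch, for anyone who wants to sanity-check the scale of the locomotive idea in the post above. Every figure here is an assumption chosen for illustration, not a number from the thread: call a modern diesel-electric locomotive roughly 3 MW of usable electrical output, put a large AI data center somewhere in the 100 to 300 MW range, and take diesel at about 10.7 kWh of thermal energy per liter converted at roughly 40 percent efficiency. The helper names (locomotives_needed, diesel_liters_per_day) are invented for this sketch.

import math

# Back-of-envelope sketch: how many locomotive-class generators would one
# large AI data center need, and how much diesel would that burn per day?
# Every number below is an illustrative assumption, not a measured figure.

LOCOMOTIVE_OUTPUT_MW = 3.0          # assumed usable electrical output per unit
DATACENTER_DEMAND_MW = (100, 300)   # assumed demand range for a large AI site
DIESEL_KWH_PER_LITER = 10.7         # approximate thermal energy content of diesel
CONVERSION_EFFICIENCY = 0.40        # assumed fuel-to-electricity efficiency

def locomotives_needed(demand_mw: float) -> int:
    """Generator units required to meet a given electrical demand, rounded up."""
    return math.ceil(demand_mw / LOCOMOTIVE_OUTPUT_MW)

def diesel_liters_per_day(demand_mw: float) -> float:
    """Liters of diesel burned per day to supply demand_mw continuously."""
    kwh_per_day = demand_mw * 1000 * 24  # MW -> kWh over 24 hours
    return kwh_per_day / (DIESEL_KWH_PER_LITER * CONVERSION_EFFICIENCY)

if __name__ == "__main__":
    for demand in DATACENTER_DEMAND_MW:
        units = locomotives_needed(demand)
        liters = diesel_liters_per_day(demand)
        print(f"{demand} MW site: ~{units} locomotives, "
              f"~{liters / 1e6:.2f} million liters of diesel per day")

Run as-is it prints the unit count and daily fuel burn for both ends of the assumed demand range; swap in other numbers to test other scenarios. Even under these generous assumptions a single large site needs dozens of locomotive-class generators and on the order of a million liters of diesel per day, which is roughly why the coal, gas, and eventually nuclear options come up next in the post above.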