From hkeithhenson at gmail.com Sat Nov 1 02:07:39 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 31 Oct 2025 19:07:39 -0700 Subject: [ExI] Non-rational humans can have benefits In-Reply-To: <3f4f21a3-3193-421e-84e8-ce9188cdfda8@zaiboc.net> References: <3f4f21a3-3193-421e-84e8-ce9188cdfda8@zaiboc.net> Message-ID: I think you have seen my paper on genetic selection for war. There are situations where rational for genes is *not* rational for the person with the genes. Being in conflict with your genes is weird, but logical. Genes, having the long-term view, usually win. Keith https://jbh.journals.villanova.edu/index.php/JBH/article/view/3166 It took two years to get this published. The bit about bonobos seems to be a new observation. On Fri, Oct 31, 2025 at 2:28?PM Ben Zaiboc via extropy-chat wrote: > > On 31/10/2025 19:04, BillK wrote: > > Humans are emotional creatures, not logical machines. > > The floods of emotions in humans have survival benefits. That is > > why they developed and persist. > > Cold logic would "dehumanise" humans and produce bad results. > > That is why enemies have to be thought of as sub-human to justify killing. > > > > Benefits of non-rationality - > > Relationships, love, loyalty, and caring persist. > > Creativity and Innovation are often non-logical. > > Different moral systems that benefit their society are non-logical. > > Quick decisions in times of stress are intuitive. > > Hope and optimism keep humans going when, logically, they should just give up. > > > > And so on...... Being human seems preferable to being a "rational agent". > > Dang, I didn't realise you can't have both. > Oh, wait... it's a trick, isn't it? > A bunch of 'em. > > Logic isn't cold. > Emotions don't normally flood (but they can be cold). > Logic definitely has survival benefits too. > Intuition doesn't have to be non-logical. > > And so on....... Being human includes being a rational agent. Sometimes. > > And I propose reversing the terms in the first sentence, to: "Humans are > emotional machines, not logical creatures". With a minimum of two > exclamation marks. > Just to avoid any claims of discrimination or bias. We don't want people > marching in the streets waving banners saying "Creatures are machines too!!" > > This looks like it was written by a bored journalist, inspired by the > tired old 'nature vs. nurture' so-called argument, wanting to generate > an artificial 'controversy'. > > BillK, are you a bored journalist?!! > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Sat Nov 1 02:58:35 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 31 Oct 2025 19:58:35 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: If you want to understand this topic, read this https://en.wikipedia.org/wiki/Rare-earth_element It is long but fairly comprehensive. Keith On Fri, Oct 31, 2025 at 2:35?PM spike jones via extropy-chat wrote: > > lreaqdy > > > > From: John Clark > Subject: Re: [ExI] not that rare earth (part 2 of at least 2) > > > > > > I am disappointed in us. 
More specifically, I am disappointed in me. Reason: we didn't figure out sooner the reality on rare earth magnets. I had all the "elements" in place to have seen thru the nonsense sooner, but it wasn't until last night I figured it out while looking for the recipes for high efficiency magnets, or failing that, just how much of the Chinese monopoly materials are used in high efficiency magnets. They generally won't tell, for those are trade secrets. But I noticed Tesla phased them out, which tells me they designed around them (that is what they did, by going up the period one row for more available materials (which turned out to work as well (and we already know why (going up one row in the period is another element which is chemically similar (especially when used as a grain-size moderator in an alloy (in which the element itself does not form a chemical bond with the iron (as in magnets (we knew all this.))))))
> 
> 
> 
> 
> ...
> 
> 
> 
> >...69% of rare earth mining comes from China, and more importantly 92% of rare earth refining occurs in China...
> 
> 
> 
> Ja, but for some applications, it doesn't need to be "refined" in the way we think of the term refined. Down there on the left end of the lanthanide period are a group of elements that aren't used for much of anything, but they are good for magnets. If we think of refining as separating them into high purity individual elements, we know that is expensive and difficult because the differential solubilities are very low: they dissolve in the same solvents at levels close to each other, so it takes a lot of cycles of time-consuming money-consuming purification cycles.
> 
> 
> 
> The Chinese don't tend to refine stuff the way we do, particularly since they understand it isn't necessary. A magnet can use a mixture of lanthanides which can vary slightly, and the final product is the same: the presence of a few of those big dumb elements (the lanthanides are a perfect example of big dumb elements) moderates crystal size in iron and prevents transition from body centered cubic to face centered cubic for instance.
> 
> 
> 
> When the comparison is made in modern magnet technology, we see too often the irrelevant comparison between iron magnets and iron/lanthanide alloy magnets. The alloys work a lot better. But where is the comparison between iron/lanthanide and iron/alkali magnets? Why is that so hard to find? Reason: every company using those things keeps it as a trade secret. We hadn't heard that Tesla doesn't use rare earth magnets anymore, and that they damn well had designed around them without any noticeable loss of efficiency.
> 
> 
> 
> As to my comment on refining: metallurgy is black magic to this day. We have no equations that take us straight to which mixture of stuff gives what characteristics. We just try things and see what happens. Tesla apparently just tried alternatives in the alkali metal row, saw that it worked the same as the lanthanide row for all the same reasons, then quietly stopped using the more expensive materials.
> 
> 
> 
> That whole notion that China has the world by the ass because they control rare earth metals didn't sound right to me the first time I heard it, but I didn't know why. Now I know why: those aren't strictly necessary.
> 
> 
> 
> 
> 
> It took China many years to reach that level of production, and the same will be true for the US.
> 
> 
> 
> 
> But it isn't necessary.
> 
> 
> 
> >... not what caused China to shut off exports of rare earth elements to the USA, it was because of He Who Must Not Be Named idiotic tariffs. 
> 
> 
> 
> 
> Now we are getting close to the truth: the most important current application of rare earth elements is in providing arguments for free trade. These arguments work even if the materials upon which they are based are not necessary. Political considerations lead to bad science and bad engineering practice. This is not to say we have plenty of recent examples of political considerations leading to bad science and bad engineering practice.
> 
> 
> 
> OK retract that last sentence. This damn well IS to say we have plenty of recent examples of political considerations leading to bad science and bad engineering practice.
> 
> 
> 
> > The Mountain Pass Rare Earth Mine about a km west of Wheaton Springs California is active now. I can see there is a refinery there.
> 
> 
> 
> >...That's nice and I'm glad their stock price...
> 
> 
> 
> I would be a lot more glad about it if I owned some of it. But now I won't buy it because I foresee a fall in demand for its product, which is why it has been on intermittent operation for the past half century at least. We don't really need those materials.
> 
> 
> 
> .
> 
> 
> 
> >...And that is one reason why China's BYD, not Elon Musk's Tesla, now makes the world's most technologically sophisticated electric vehicles...
> 
> 
> 
> As they tell us in their advertisements, ja. They insist that they are more sophisticated than Elon Musk's Teslas, in their advertising.
> 
> 
> 
> 
> 
> 
> > Do we really need rare earth elements to get that extra few percent efficiency in permanent magnets?
> 
> 
> 
> Yes. Not only are rare earth motors more efficient at converting electricity to kinetic energy... John K Clark
> 
> 
> 
> 
> 
> John what are we comparing to please? More efficient than iron magnets? Sure, true but irrelevant. Compare the rare earth magnets to alternative material magnets, such as alkali metal magnets, and notice how difficult it is to find that comparison, for that information is held tightly as trade secrets. These companies all like to compare with iron magnets. We see the rare earth magnets and the alkali metal magnets are way more efficient, but it isn't clear the rare earth magnets are better than alkali metal magnets.
> 
> 
> 
> The real problem here is that we are not asking the right questions. Sure free trade is a good thing, it lifts the masses out of poverty etc. But it doesn't answer the question of whether alkali metals one row up the period can be substituted for rare earth materials. Looks to me like they can be and have been, without easily measurable loss of performance.
> 
> 
> 
> I am disappointed in us, specifically me, for not recognizing this sooner. I am mildly hip with chemistry. I have even been in a situation where a subcontractor jacked us up for a special sauce catalyst they were using, which relies on substituting one rare earth element for another, recognizing they behave the same way, then claiming it was special intellectual property owned by them. My only flimsy excuse is that it happened 30 years ago, we worked around it and on we went.
> 
> 
> 
> Conclusion: China's monopoly on rare earth elements has been greatly exaggerated. We don't need to recycle wind turbines to get the materials. We can use substitutes up one period. 
> > > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Sat Nov 1 04:00:01 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Oct 2025 21:00:01 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> -----Original Message----- From: Keith Henson Subject: Re: [ExI] not that rare earth (part 2 of at least 2) If you want to understand this topic, read this https://en.wikipedia.org/wiki/Rare-earth_element It is long but fairly comprehensive. Keith OK cool thx Keith. An EV uses 1 to 2 kg. The spot price for neodymium is 150 bucks a kg, up from 75 kg seven years ago. So an additional 75 to 150 bucks per non-Musk EV. Elon mandated those be eliminated, so Tesla doesn't use them. What the Wiki article doesn't say is what happens if we substitute molybdenum for the neodymium, which is what I think Tesla did: they are using MoFeB magnets. They aren't saying how they did it. Samarium is cheap, even now. I am not sure why. Perhaps it's a byproduct of something else. Wind turbines: those use a lot of material, 600 kg of REEs per turbine. So those are effected a lot. But the real cost of wind energy is in power storage anyway. In any case, if the turbines give away a half a percent in efficiency, that surely wouldn't be a showstopper. They have bigger worries: the environmentalists might try to destroy them to save birds. Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. spike From johnkclark at gmail.com Sat Nov 1 11:14:14 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Nov 2025 07:14:14 -0400 Subject: [ExI] =?utf-8?q?Can_a_Start-Up_Make_Computer_Chips_Cheaper_Than_?= =?utf-8?q?the_Industry=E2=80=99s_Giants=3F?= Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. Can a Start-Up Make Computer Chips Cheaper Than the Industry?s Giants? Substrate, a San Francisco company, is trying to take on powerhouses like the Dutch company ASML. https://www.nytimes.com/2025/10/28/technology/can-a-start-up-make-computer-chips-cheaper-than-the-industrys-giants.html?unlocked_article_code=1.x08.A0ge.GdHGRbF4eq7a&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Nov 1 13:31:16 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 1 Nov 2025 08:31:16 -0500 Subject: [ExI] reason an emotion Message-ID: Here is an extremely interesting study on those two. A man somehow had a brain injury that severed the connection between his amygdala and his forebrain. This cuts off emotions from the decision-making process. Result: decisions are made without emotion. His IQ and overall intelligence levels were unchanged. Thus he had only reason from the forebrain to run his life. He could not make the slightest decision. He was given intellectual work but when he finished he could not move to the next problem. He could not decide if he wanted a cup of coffee. His decision-making process was frozen. 
So this is what happens when we have only reason to make decisions. So our conclusion has to be that emotions are involved in every decision we make, no matter how trivial. What a validation of Hume! bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 1 13:31:47 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Nov 2025 09:31:47 -0400 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: On Fri, Oct 31, 2025 at 5:34?PM wrote: > >> * > I noticed Tesla phased them out, which tells me they designed around > them (that is what they did, by going up the period one row for more > available materials* > *On a typical chart of the elements they are arranged according to their chemical properties, that's why you'll see that all the rare earth ones are in a group of their own, they all have very similar chemical properties, which is why in their natural ore they are all mixed together, and why it is so difficult to refine them out into separate elements. But Mendeleev's chart only takes an element's chemical properties into account, NOT an element's magnetic or optical properties, and those are the very properties that make some of the rare earths so valuable. In addition to magnets, the rare earth elements are also of vital importance in fiber-optic communications, lasers, LED lighting and displays, and night-vision goggles.* *>>?69% of rare earth mining comes from China, and more importantly 92% of >> rare earth refining occurs in China?.* > > > > > *Ja, but for some applications, it doesn?t need to be ?refined? in the way > we think of the term refined. * > *I can't think of any application in which the rare earths don't need to be refined out into separate elements, if there is such an application it must not be a very important one. * * > A magnet can use a mixture of lanthanides. A magnet can use a mixture > of lanthanides which can vary slightly, and the final product is the same* *No. The purity requirements for rare earth magnets are quite stringent, contamination with other lanthanides in the crystal structure of the magnet seriously degrades magnetic properties, although Praseodymium is sometimes deliberately added in very precise amounts to improve thermal stability. Depending on how good a magnet you need a Neodymium magnet, the most popular type, needs to be between 95 and 99% pure. A Samarium-Cobalt magnet, the second most popular type, needs even greater purity, 99+%.* *>> **Do we really need rare earth elements to get that extra few percent >> efficiency in permanent magnets? * > > > >> *>>Yes. Not only are rare earth motors more efficient at converting >> electricity to kinetic energy?John K Clark* > > > > *> John what are we comparing to please? More efficient than iron > magnets? Sure, true but irrelevant. 
Compare the rare earth magnets to > alternative material magnets, such as alkali metal magnets,* > *Nobody has used iron magnets in motors or generators since the 1930s, until just a few years ago when somebody said "magnet" they were almost always talking about an Alnico Magnet, an alloy of Iron, aluminum, nickel and cobalt that was discovered by a Japanese scientist about 95 years ago and was revolutionary at the time. They are not nearly as strong but they are superior to rare earth magnets in one respect, they can get as hot as 800? C and still retain their magnetic properties. * *> Compare the rare earth magnets to alternative material magnets, such as > alkali metal magnets* *T**he entire column of Alkali metals on the periodic chart are all extremely chemically reactive and do not form permanent magnets, they are diamagnetic which means they are weakly repelled, not attracted, by a magnetic field .* > *> Conclusion: China?s monopoly on rare earth elements has been greatly > exaggerated. * > *I don't think so. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Nov 1 13:47:03 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Nov 2025 09:47:03 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Fri, Oct 31, 2025 at 5:44?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Do you not believe in objective truth?* *Objective truth exists but that doesn't mean everything is either right or wrong. I happen to like sardines and you may not, but that doesn't mean that one of us is right and the other one is wrong because there is no disputing matters of taste. On the other hand if one of us says that a bridge will collapse if somebody attempts to use it and the other says it will not then one of us must be right and the other one must be wrong. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Nov 1 13:55:11 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 1 Nov 2025 08:55:11 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: Given that all of our actions originate in the unconscious mind. there is no way a conscious mind can understand where its ideas and actions come from, much less understand them. The conscious mind may think it is in charge but it is just an observer (which nevertheless can understand that a mistake has been made and the idea or action needs redoing.) You want to understand our minds? Make the workings of the unconscious conscious - and that's just a start. Why did the impulse go to point B when it left point A rather than to point C? And then trace all the points in between entering the unconscious and resulting in some idea or action. And explain each one. You can't even get started until you can truly access the unconscious. Give Freud credit- he tried to do this. 
bill w On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On 31/10/2025 21:34, Jason Resch wrote: >> >> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On 31/10/2025 12:28, John K Clark wrote: >>> >>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>> >>> >>> You've completely lost me there, but I have two observations: There's no >>> such thing as 'perfect understanding' except as a nebulous theoretical >>> concept, and I don't think a one-to-one correspondence would be enough to >>> understand something, or even be a relevant concept. We use large parts of >>> our brains to process information from small parts of the world. You need a >>> lot more than a single neuron to figure out what's going on in a single >>> neuron. >>> >>> Oh, three observations. We don't process data instantaneously. The same >>> parts of the brain can be used to process information about something >>> repeatedly over time, using feedback loops etc. >>> >> >> Computers and algorithms are constrained by two resources space (i.e. >> memory), and time (i.e. CPU cycles). While some algorithms allow for >> time/space trade offs to be made in certain circumstances, in general there >> is some shortest description of the brain (in terms of bits) for which no >> shorter representation is possible (regardless of how much additional >> computation is thrown at it). >> >> So while the same brain may compute many times with the same neurons, >> this addresses only the time component of simulating a brain. There is >> still the matter of space. >> >> >> Ah, ok. I was talking about understanding the brain, not simulating it. >> Modelling something is not the same as understanding it. Yes, they help >> each other, but they aren't the same thing. >> > > I think understanding a thing is equivalent to being able to form an > accurate mental model of it. With greater levels of understanding > corresponding to more accurate models. > > What do you mean by the word understand? > > > >> >> The analogy here is that a computer with 1 MB of RAM can't emulate a >> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In >> fact there's no amount of additional time that will permit the memory >> deficient computer to emulate the computer with 1 GB of memory, for the >> simple reason that it will run out of variables to represent all the >> possible values in the memory addresses of the computer with a greater >> memory. >> >> >> I'm not sure that this is true. Are you assuming no swap disk, or other >> similar non-RAM storage? >> > > Swap disks are a means to extend available RAM. > > > Because then I'm sure you're right, but that's a pretty artificial >> restriction. 
>> The analogy there would be a human with a notepad maybe, or a database, >> or a bunch of other humans, an AI, etc. >> >> So we're back to: A single human brain /on it's own/ can't understand a >> human brain in any great detail. Of course. >> > > I think that was the original question: can any mind ever fully understand > its own operation. > > Jason > > But that's a pretty artificial restriction. >> >> -- >> Ben >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 1 14:38:08 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 07:38:08 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> Message-ID: <007f01dc4b3d$24958820$6dc09860$@rainier66.com> From: John Clark ? >?I can't think of any application in which the rare earths don't need to be refined out into separate elements, if there is such an application it must not be a very important one? John K Clark If such applications exist, we wouldn?t know. Those would be held as trade secrets. An example would be a company that uses tons of material in which a foreign country, an unreliable trade partner held a monopoly. That company would look for applications of mixtures of rare earth elements, where everything easily separated is gone. Then the remaining mixture, which is easy to get and doesn?t cost much, might be used in high efficiency magnets. Since the company uses a lot of those, it would be better for them to hold that information as a trade secret rather than try to protect it with a patent. When it comes to recipes for high efficiency magnets, those are not effectively protected by patents anyway. It is easy enough to come up with a one-off recipe or add a pinch of this and a smattering of that, then claim it is a different recipe. Recognizing all this, Musk found a way to phase out rare earth elements without losing significant performance in his magnets. Once you start pondering why rare earth elements would make a better magnet, it all makes perfect sense. Those big metal atoms can ionize either way, for they tend to be ambivalent toward their outermost electrons. They hold domains in place. One can theorize that the actinide period would work even better than the lanthanides, but those have a bad habit of being radioactive. But we can go up the group one period if that element is cheaper and more easily available. How much is Tesla worth? This I can calmly assure you: Elon wouldn?t risk the entire company on the continuing availability of anything on which China has a monopoly. The US military wouldn?t base their technology on any material in which it didn?t hold a fifty year stockpile. Conclusion: the notion that the world?s industrial powers squirm helplessly in the ruthless talons of Communist China is nonsense. That absurdity has been intentionally exaggerated for economic reasons. 
There is a bright side to it however. Speculators who understand the ideas or theories are wrong can still make huge profits off of absurd notions. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 1 14:48:55 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 07:48:55 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <007f01dc4b3d$24958820$6dc09860$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> Message-ID: <008601dc4b3e$a6291cc0$f27b5640$@rainier66.com> From: spike at rainier66.com ? >?How much is Tesla worth? About 1.5 trillion dollars. >?This I can calmly assure you: Elon wouldn?t risk the entire company on the continuing availability of anything on which China has a monopoly? If they couldn?t find alternative materials, the requisite rare earth materials could be stockpiled, since that stuff never spoils. A 1.5 trillion dollar company could stockpile a fifty year supply of every rare earth material as a speculative move, should the supplier nation decide it wants all its REEs. >?.The US military wouldn?t base their technology on any material in which it didn?t hold a fifty year stockpile? The military has people whose job it is to think about these things, then arrange to not have it become a threat. The military also has arbitrarily large storage facilities where materials can be stockpiled and kept secure indefinitely. We can get what we need or already have it, or we can design around it. Speculators have promoted the idea that China has a big important monopoly. I am not buying it, not for a minute. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Nov 1 14:50:32 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 1 Nov 2025 10:50:32 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Given that all of our actions originate in the unconscious mind. there is > no way a conscious mind can understand where its ideas and actions come > from, much less understand them. > Great point! The conscious mind may think it is in charge but it is just an observer > (which nevertheless can understand that a mistake has been made and the > idea or action needs redoing.) > > You want to understand our minds? Make the workings of the unconscious > conscious - and that's just a start. Why did the impulse go to point B > when it left point A rather than to point C? And then trace all the points > in between entering the unconscious and resulting in some idea or action. > And explain each one. > I have doubts that such a thing is possible from the perspective of the mind in question. Can any brain ever feel and know what each of its neurons is doing? Can those corresponding neurons feel and know what every one of its constituent atoms is doing? Given Turing universality, it's provable that computer software can't know about its underlying hardware. 
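As a toy illustration (a sketch only, not a proof, and the particular function is just an arbitrary example): a pure computation like the one below returns exactly the same answer whether it runs on a laptop, a phone, or a CPU emulated inside another CPU, so nothing the program can compute from its own outputs tells it what substrate it is running on.

    # Toy illustration: a pure computation sees only its own abstract state.
    # Whatever hardware (or emulator, or emulator-of-an-emulator) runs this,
    # the result is identical, so no test the program performs on its own
    # outputs can reveal the physical substrate underneath.

    def collatz_steps(n: int) -> int:
        # Count Collatz steps from n down to 1, a purely abstract computation.
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    # Same answer on silicon, in an emulator, or worked out by hand: 111 steps.
    print(collatz_steps(27))
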
If our minds are a kind of software which can be simulated by a computer, then this same implication would apply to us. There would be a layer of abstraction of one's underlying implementation which high levels cannot penetrate. > You can't even get started until you can truly access the unconscious. > Give Freud credit- he tried to do this. > He deserves credit for the attempt, but I think there are limits to a mind's ability to introspect. "Our thoughts seem to run about in their own space, creating new thoughts and modifying old ones, and we never notice any neurons helping us out! But that is to be expected. We can?t. [?] We should remember that physical law is what makes it all happen?way, way down in neural nooks and crannies which are too remote for us to reach with our high-level introspective probes." ? Douglas Hofstadter in ?G?del, Escher, Bach? (1979) Jason > > > On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On 31/10/2025 21:34, Jason Resch wrote: >>> >>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On 31/10/2025 12:28, John K Clark wrote: >>>> >>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>> >>>> >>>> You've completely lost me there, but I have two observations: There's >>>> no such thing as 'perfect understanding' except as a nebulous theoretical >>>> concept, and I don't think a one-to-one correspondence would be enough to >>>> understand something, or even be a relevant concept. We use large parts of >>>> our brains to process information from small parts of the world. You need a >>>> lot more than a single neuron to figure out what's going on in a single >>>> neuron. >>>> >>>> Oh, three observations. We don't process data instantaneously. The same >>>> parts of the brain can be used to process information about something >>>> repeatedly over time, using feedback loops etc. >>>> >>> >>> Computers and algorithms are constrained by two resources space (i.e. >>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>> time/space trade offs to be made in certain circumstances, in general there >>> is some shortest description of the brain (in terms of bits) for which no >>> shorter representation is possible (regardless of how much additional >>> computation is thrown at it). >>> >>> So while the same brain may compute many times with the same neurons, >>> this addresses only the time component of simulating a brain. There is >>> still the matter of space. >>> >>> >>> Ah, ok. I was talking about understanding the brain, not simulating it. >>> Modelling something is not the same as understanding it. Yes, they help >>> each other, but they aren't the same thing. 
>>> >> >> I think understanding a thing is equivalent to being able to form an >> accurate mental model of it. With greater levels of understanding >> corresponding to more accurate models. >> >> What do you mean by the word understand? >> >> >> >>> >>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In >>> fact there's no amount of additional time that will permit the memory >>> deficient computer to emulate the computer with 1 GB of memory, for the >>> simple reason that it will run out of variables to represent all the >>> possible values in the memory addresses of the computer with a greater >>> memory. >>> >>> >>> I'm not sure that this is true. Are you assuming no swap disk, or other >>> similar non-RAM storage? >>> >> >> Swap disks are a means to extend available RAM. >> >> >> Because then I'm sure you're right, but that's a pretty artificial >>> restriction. >>> The analogy there would be a human with a notepad maybe, or a database, >>> or a bunch of other humans, an AI, etc. >>> >>> So we're back to: A single human brain /on it's own/ can't understand a >>> human brain in any great detail. Of course. >>> >> >> I think that was the original question: can any mind ever fully >> understand its own operation. >> >> Jason >> >> But that's a pretty artificial restriction. >>> >>> -- >>> Ben >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Nov 1 17:46:37 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 1 Nov 2025 10:46:37 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> Message-ID: On Fri, Oct 31, 2025 at 9:00?PM wrote: > > -----Original Message----- > From: Keith Henson > Subject: Re: [ExI] not that rare earth (part 2 of at least 2) > > If you want to understand this topic, read this > > https://en.wikipedia.org/wiki/Rare-earth_element > > It is long but fairly comprehensive. > > Keith > > OK cool thx Keith. > > An EV uses 1 to 2 kg. The spot price for neodymium is 150 bucks a kg, up from 75 kg seven years ago. > > So an additional 75 to 150 bucks per non-Musk EV. Elon mandated those be eliminated, so Tesla doesn't use them. > > What the Wiki article doesn't say is what happens if we substitute molybdenum for the neodymium, which is what I think Tesla did: they are using MoFeB magnets. They aren't saying how they did it. Samarium is cheap, even now. I am not sure why. Perhaps it's a byproduct of something else. > > Wind turbines: those use a lot of material, 600 kg of REEs per turbine. 
So those are effected a lot. But the real cost of wind energy is in power storage anyway. I am not sure you can make that case for California. https://www.caiso.com/todays-outlook/supply#section-current Peak wind is around a GW, solar is around 20 times that much. Not sure that Mo makes good magnets. Never heard of it being used for that. But what do I know? Keith In any case, if the turbines give away a half a percent in efficiency, that surely wouldn't be a showstopper. They have bigger worries: the environmentalists might try to destroy them to save birds. > > Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. > > spike > > > > From spike at rainier66.com Sat Nov 1 18:26:13 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 11:26:13 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <000501dc4ae3$ffb70b90$ff2522b0$@rainier66.com> Message-ID: <00d801dc4b5d$019a3f80$04cebe80$@rainier66.com> -----Original Message----- From: Keith Henson ... > >>... Wind turbines: those use a lot of material, 600 kg of REEs per turbine. So those are effected a lot. But the real cost of wind energy is in power storage anyway. >...I am not sure you can make that case for California... Keith That depends on how much of the power supply already has intermittent non-predictables. If we start with a local grid with no intermittent sources and add one wind turbine, that one doesn't need storage, for it always has a market any time the wind is blowing and always has sufficient generation to supply the grid when it isn't. Once the percentage of power supplied by wind and solar goes above baseline minimum, there will be times when the wind is blowing and the sun is shining, but there is insufficient demand for all the power. At that point it starts to make sense to add power storage. PG&E claims that percentage is around 18%. If renewable non-predictable (wind and solar) together are at or below 18%, they say they can handle it without storage. That part makes sense. Another part that makes sense, told to us by the power company: the best wind sites are already built out: Altamont Pass and Tehachapi Pass are the two best ones. They form a wind funnel which keeps it breezy most of the time thru there. Great places for wind power. They have wind power already. After the best sites are built out, later wind turbines produce less, and in some cases cost more. Reasoning: if the wind isn't howling thru there constantly, the land is worth more. spike From ben at zaiboc.net Sat Nov 1 21:42:36 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:42:36 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: <8fc30c59-612a-405d-a0fd-d73913c63749@zaiboc.net> On 01/11/2025 13:32, Jason Resch wrote: > On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat > wrote: > > On 31/10/2025 19:04, Jason Resch wrote: >> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a >> perfect morality consists of. And it too, provides a definition >> of what morality is, and likewise provides a target to aim towards. 
>> >> Ben Wrote: As different intelligent/rational agents have >> different experiences, they will form different viewpoints, >> and come to different conclusions about what is right and not >> right, what should be and what should not, what they want and >> what they don't, just like humans do. >> >> The point of the video and article is that desires are based on >> beliefs, and because beliefs are correctable then so are desires. >> There is only one "perfect grasp" and accordingly, one true set >> of beliefs, and from this it follows one most-correct set of >> desires. This most correct set of desires is the same for >> everyone, regardless of from which viewpoint it is approached. > Nope. This is nonsense. Just about every assertion is wrong. The > very first sentence in the abstract is false. And the second. And > the third. So the whole thing falls apart. Desires are not based > on beliefs, they are based on emotions. The example of 'wanting to > drink hot mud' is idiotic. Just because the cup turns out to > contain mud doesn't invalidate the desire to drink hot chocolate. > > I think you are misinterpreting the example. It is the desire to drink > the contents of the cup is what changes in response to new information. I wouldn't have put it as 'desire to drink the contents of the cup', when the desire is to drink hot chocolate. There are originating desires and there are planned actions to satisfy the desire. Drinking from the cup might turn out to be a bad idea (the plan is faulty because of incorrect information), but the original desire is not changed. If you want to see a Batman movie at a movie theatre, and find that the only movie available is a romantic comedy, you don't say that you have a desire to watch any movie which has suddenly changed. You still want to watch Batman, but can't, so your desire is thwarted, not changed. > Think about this alternate example which may be easier to consider: > you may naively have the desire to take a certain job, to marry a > particular person, attend a certain event, but if that choice turns > out to be ruinous,? you may regret that decision. If your future self > could warn you of the consequences of that choice, then you may no > longer desire that job, marriage, or attendance, as much as you > previously did, in light of the (unknown) costs they bore, but which > you were unaware of. Decisions are often regretted. That is a fact of life. Future selves warning you about bad decisions is not. That's time-travel (aka 'magic'), and should not feature in any serious consideration of how to make good decisions. "If x could..." is no help when x is impossible. We have workable tools to help people make better decisions, but time-travel isn't one of them. > > It's not a 'mistaken' desire at all (the mistake is a sensory > one), and it doesn't somehow morph into a desire to drink hot mud. > "Beliefs are correctable, so desires are correctable" Each of > those two things are true (if you change 'correctable' to > 'changeable'), but the one doesn't imply the other, which follows > from the above. > > Does it apply in the examples I provided? No. The examples are about decisions, not desires, and they don't address the beliefs that lead to the decisions. "You may have the desire to do X" is different to "You decide to do X". The desire may drive the decision or at least be involved in it, but it isn't the decision (some poeple act immediately on their desires, but that still doesn't mean they are the same thing). Can you regret a desire? 
I don't think so, but it is arguable. It would be regretting something that you have no direct control over, so would be rather silly. Apart from that, there is still no dependency of desires on beliefs. A belief may well affect the plan you make to satisfy a desire, but changing the belief doesn't change the desire. Can a belief give rise to a desire? That's a more complicated question than it appears, I think, and leads into various types of desires, but still, there's no justification for the statement "beliefs can change, therefore desires can". > 'Perfect grasp' doesn't mean anything real. It implies that it's > possible to know everything about everything, or even about > something. The very laws of physics forbid this, many times over, > so using it in an argument is equivalent to saying "magic". > > It doesn't have to be possible. The paper is clear on this. The goal > of the paper is to answer objectively what makes a certain thing right > or wrong. For example, if someone offered you $10 and I? return for > some random person unknown to you would be killed, in a way that would > not negatively affect you or anyone you knew, and your memory of the > ordeal would be wiped so you wouldn't even bear a guilty conscience, > for what reason do we judge and justify the wrongness of taking the $10? This is 'Trolley problem thinking'. Making up some ridiculous scenario that would never, or even could never, occur in the real world, then claiming that it has relevance to the real world. > This is the goal of the paper to provide a foundation upon which > morality can be established objectively from first principles. Let's see some examples that are grounded in reality that 'provide a foundaton upon which morality can be established objectively'. I'm not closed to the possibility that such a thing can be done, but I'm not holding my breath. > How would you and the question of what separates right from wrong? The > initial utilitarian answer is whatever promotes more good experiences > than bad experiences. But then, how do you weigh the relative goodness > or badness of one experience vs. another, between one person and > another, between the varying missed opportunities among future > possibilities? > Such questions can only be answered with something approximating an > attempt at a grasp of what it means and what it is like to be all the > various existing and potential conscious things. That's just another way of saying that it can't be answered. > We can make heuristic attempts at this, despite the fact that we never > achieve perfection. Exactly. We always have to make decisions in the /absence/ of full information. What we would do if we had 'all the information' is irrelevant, if it even means anything. > For example, Democracy can be viewed as a crude approximation, by > which each person is given equal weight in the consideration of their > desires (with no attempt to weight relative benefits or suffering). > But this is still better than an oligarchy, where the desires of few > are considered while the desires of the masses are ignored. And also > you can see the difference between uninformed electorate vs. a well > informed one. The informed electorate has a better grasp of the > consequences of their decisions, and so their collective desires are > more fully fulfilled. I don't see the relevance to morality. Politics and morality are rarely on talking terms. 
> > 'One true set of beliefs' is not only wrong, it's dangerous, which > he just confirms by saying it means there is only one most-correct > set of desires, for /everyone/ (!). > > Do you not believe in objective truth? No. This is religious territory, and the road to dogmatism. This is the very reason wny science is superior to religion. It doesn't assume that there is any 'absolute truth' which can be discovered, after which no further inquiry is needed or wanted. As to whether, for instance, the laws of physics are invariant everywhere and at all times, that's a question we can't answer, and probably will never be able to. > If there is objective truth, they are the same truths for everyone. > Now consider the objective truths for statements such as "it is right > to do X" or "it is wrong to do Y". If there are objective truths, > these extend to an objective morality. There would be an objective > truth to what action is best (even if we lack the computational > capacity to determine it). > You may say this is fatal to the theory, but note that we can still > roughly compute with the number Pi, even though we never consider all > of its infinite digits. > > Does this not ring loud alarm bells to you? I'm thinking we'd > better hope that there really is no such thing as objective > morality (if there is, Zuboff is barking up the wrong tree, for > sure), it would be the basis for the worst kind of tyranny. It's a > target that I, at least, want to aim away from. 180 degrees away! > > No one is proposing a putting a tyrannical AI in charge that forces > your every decision. But a superintelligent AI that could explain to > you the consequences of different actions you might take (as far as it > is able to predict them) would be quite invaluable, and improve the > lives of many who choose to consider its warnings and advice. Absolutely. I have no argument with that. But we were talking about morality. > > His twisting of desire into morality is, well, twisted. Morality > isn't about what we should want to do, just as bravery isn't about > having no fear. > > Do you have a better definition of morality? I don't think that's the answer you want to ask. A dictionary can provide the answer. I do have my own moral code though, if that's what you want to know. I don't expect everyone to see the value of it, or adopt it. And I might change my mind about it in the future. > > He wants to turn people into puppets, and actually remove moral > agency from them. > > Imperfect understanding of consequences cripples our ability to be > effective moral agents. Then you think we are crippled as effective moral agents, and doomed to always be so (because we will always have imperfect understanding of consquences). > When we don't understand the pros and cons of a decision, how can we > hope to be moral agents? We become coin-flippers -- which I would > argue is to act amorally. If we want true moral agency, we must strive > to improve our grasp of things. This is taking an extreme position, and saying either we are 'perfect' or no use at all. We are neither. Acting with incomplete information is inevitable. That doesn't mean morality is impossible. Just as bravery is being afraid, but acting anyway, morality is not knowing for sure what the best action is, but acting anyway. Making the best decision you can, in line with your values. It's about having a choice. If it were possible to have 'perfect knowledge', there would be no morality, no choice. I'm not sure what you'd call it. 
Predetermination, perhaps. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 1 21:46:04 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:46:04 +0000 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: Message-ID: <744d4987-907a-45ea-9829-62b925e9c397@zaiboc.net> On 01/11/2025 13:32, spike wrote: > Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. I'm surprised no-one has mentioned flywheel storage. Inherently fire-resistant (there's nothing to catch fire), can store large amounts of energy, can be buried in the ground, and things built on top of them if desired, very long lifetime, very good reliability, easy to build, very fast charging and discharging times. The only downside I can see is the need (or at least desirability) for rare-earths for the magnetic bearings. Which might not actually be a need, if spike is right. -- Ben From ben at zaiboc.net Sat Nov 1 21:48:57 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:48:57 +0000 Subject: [ExI] extropy-chat Digest, Vol 266, Issue 1 In-Reply-To: References: Message-ID: On 01/11/2025 13:32, bill w wrote: > Here is an extremely interesting study on those two. > A man somehow had a brain injury that severed the connection between > his amygdala and his forebrain.? This cuts off emotions from the > decision-making process. > Result:? decisions are made without emotion.? His IQ and overall > intelligence levels were unchanged.? Thus he had only reason from the > forebrain to run his life. > He could not make the slightest decision.? He was given intellectual > work but when he finished he could not move to the next problem. He > could not decide if he wanted a cup of coffee.? ?His decision-making > process was frozen. > So this is what happens when we have only reason to make decisions. > So our conclusion has to be that emotions are involved in every > decision we make, no matter how trivial. I don't think you can make that conclusion from this story (you didn't give a link to the study). It's not as simple as 'he only had reason to run his life'. If that was the case, I'd think he'd be able to use reason to conclude that in the absence of a feeling towards making one decision or another, a random choice would work. And that making decisions was essential for survival. He'd be able to think that a "coffee makes sense, given that I haven't had anything to drink for three hours, and dehydration is bad", etc. I think what's going on there is that as well as cutting off emotional connection to the forebrain, the mechanisms for directing attention and resolving competing tasks were also cut off. As I said earlier, the whole concept of 'emotion vs. reason' is not very useful. People like simple scenarios. Natural vs. Artificial. Nature vs. Nature. Black vs. White. Things are almost never that simple. In fact, I suspect that the whole concept of 'emotion' as a single thing is counter-productive when thinking about these things. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 1 21:51:30 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 21:51:30 +0000 Subject: [ExI] reason an emotion In-Reply-To: References: Message-ID: <4decaccd-4a56-451d-a619-cb08aa1b03b1@zaiboc.net> Aargh! Did it again! I keep forgetting to change the Subject. 
-------- Forwarded Message -------- Subject: Re: extropy-chat Digest, Vol 266, Issue 1 Date: Sat, 1 Nov 2025 21:48:57 +0000 From: Ben Zaiboc To: extropy-chat at lists.extropy.org On 01/11/2025 13:32, bill w wrote: > Here is an extremely interesting study on those two. > A man somehow had a brain injury that severed the connection between > his amygdala and his forebrain.? This cuts off emotions from the > decision-making process. > Result:? decisions are made without emotion.? His IQ and overall > intelligence levels were unchanged.? Thus he had only reason from the > forebrain to run his life. > He could not make the slightest decision.? He was given intellectual > work but when he finished he could not move to the next problem. He > could not decide if he wanted a cup of coffee.? ?His decision-making > process was frozen. > So this is what happens when we have only reason to make decisions. > So our conclusion has to be that emotions are involved in every > decision we make, no matter how trivial. I don't think you can make that conclusion from this story (you didn't give a link to the study). It's not as simple as 'he only had reason to run his life'. If that was the case, I'd think he'd be able to use reason to conclude that in the absence of a feeling towards making one decision or another, a random choice would work. And that making decisions was essential for survival. He'd be able to think that a "coffee makes sense, given that I haven't had anything to drink for three hours, and dehydration is bad", etc. I think what's going on there is that as well as cutting off emotional connection to the forebrain, the mechanisms for directing attention and resolving competing tasks were also cut off. As I said earlier, the whole concept of 'emotion vs. reason' is not very useful. People like simple scenarios. Natural vs. Artificial. Nature vs. Nature. Black vs. White. Things are almost never that simple. In fact, I suspect that the whole concept of 'emotion' as a single thing is counter-productive when thinking about these things. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Nov 1 22:05:16 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 22:05:16 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: Apologies for the formatting of this. I've just noticed that some email clients jam the text together, making it hard to read. Here is a better formatted version (I hope!): On 01/11/2025 21:42, Ben wrote: > > On 01/11/2025 13:32, Jason Resch wrote: >> On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat >> wrote: >> >> On 31/10/2025 19:04, Jason Resch wrote: >>> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a >>> perfect morality consists of. And it too, provides a definition >>> of what morality is, and likewise provides a target to aim towards. >>> >>> Ben Wrote: As different intelligent/rational agents have >>> different experiences, they will form different viewpoints, >>> and come to different conclusions about what is right and >>> not right, what should be and what should not, what they >>> want and what they don't, just like humans do. >>> >>> The point of the video and article is that desires are based on >>> beliefs, and because beliefs are correctable then so are >>> desires. There is only one "perfect grasp" and accordingly, one >>> true set of beliefs, and from this it follows one most-correct >>> set of desires. 
This most correct set of desires is the same for >>> everyone, regardless of from which viewpoint it is approached. >> Nope. This is nonsense. Just about every assertion is wrong. The >> very first sentence in the abstract is false. And the second. And >> the third. So the whole thing falls apart. Desires are not based >> on beliefs, they are based on emotions. The example of 'wanting >> to drink hot mud' is idiotic. Just because the cup turns out to >> contain mud doesn't invalidate the desire to drink hot chocolate. >> >> I think you are misinterpreting the example. It is the desire to >> drink the contents of the cup is what changes in response to new >> information. > I wouldn't have put it as 'desire to drink the contents of the cup', > when the desire is to drink hot chocolate. There are originating > desires and there are planned actions to satisfy the desire. Drinking > from the cup might turn out to be a bad idea (the plan is faulty > because of incorrect information), but the original desire is not changed. > If you want to see a Batman movie at a movie theatre, and find that > the only movie available is a romantic comedy, you don't say that you > have a desire to watch any movie which has suddenly changed. You still > want to watch Batman, but can't, so your desire is thwarted, not changed. > >> Think about this alternate example which may be easier to consider: >> you may naively have the desire to take a certain job, to marry a >> particular person, attend a certain event, but if that choice turns >> out to be ruinous,? you may regret that decision. If your future self >> could warn you of the consequences of that choice, then you may no >> longer desire that job, marriage, or attendance, as much as you >> previously did, in light of the (unknown) costs they bore, but which >> you were unaware of. > Decisions are often regretted. That is a fact of life. Future selves > warning you about bad decisions is not. That's time-travel (aka > 'magic'), and should not feature in any serious consideration of how > to make good decisions. "If x could..." is no help when x is > impossible. We have workable tools to help people make better > decisions, but time-travel isn't one of them. >> It's not a 'mistaken' desire at all (the mistake is a sensory >> one), and it doesn't somehow morph into a desire to drink hot >> mud. "Beliefs are correctable, so desires are correctable" Each >> of those two things are true (if you change 'correctable' to >> 'changeable'), but the one doesn't imply the other, which follows >> from the above. >> >> Does it apply in the examples I provided? > No. The examples are about decisions, not desires, and they don't > address the beliefs that lead to the decisions. "You may have the > desire to do X" is different to "You decide to do X". The desire may > drive the decision or at least be involved in it, but it isn't the > decision (some poeple act immediately on their desires, but that still > doesn't mean they are the same thing). > Can you regret a desire? I don't think so, but it is arguable. It > would be regretting something that you have no direct control over, so > would be rather silly. > > Apart from that, there is still no dependency of desires on beliefs. A > belief may well affect the plan you make to satisfy a desire, but > changing the belief doesn't change the desire. Can a belief give rise > to a desire? 
That's a more complicated question than it appears, I > think, and leads into various types of desires, but still, there's no > justification for the statement "beliefs can change, therefore desires > can". > >> 'Perfect grasp' doesn't mean anything real. It implies that it's >> possible to know everything about everything, or even about >> something. The very laws of physics forbid this, many times over, >> so using it in an argument is equivalent to saying "magic". >> >> It doesn't have to be possible. The paper is clear on this. The goal >> of the paper is to answer objectively what makes a certain thing >> right or wrong. For example, if someone offered you $10 and in return >> some random person unknown to you would be killed, in a way that >> would not negatively affect you or anyone you knew, and your memory >> of the ordeal would be wiped so you wouldn't even bear a guilty >> conscience, for what reason do we judge and justify the wrongness of >> taking the $10? > This is 'Trolley problem thinking'. Making up some ridiculous scenario > that would never, or even could never, occur in the real world, then > claiming that it has relevance to the real world. >> This is the goal of the paper: to provide a foundation upon which >> morality can be established objectively from first principles. > Let's see some examples that are grounded in reality that 'provide a > foundation upon which morality can be established objectively'. I'm not > closed to the possibility that such a thing can be done, but I'm not > holding my breath. >> How would you answer the question of what separates right from wrong? >> The initial utilitarian answer is whatever promotes more good >> experiences than bad experiences. But then, how do you weigh the >> relative goodness or badness of one experience vs. another, between >> one person and another, between the varying missed opportunities >> among future possibilities? >> Such questions can only be answered with something approximating an >> attempt at a grasp of what it means and what it is like to be all the >> various existing and potential conscious things. > That's just another way of saying that it can't be answered. >> We can make heuristic attempts at this, despite the fact that we >> never achieve perfection. > Exactly. We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. >> For example, Democracy can be viewed as a crude approximation, by >> which each person is given equal weight in the consideration of their >> desires (with no attempt to weight relative benefits or suffering). >> But this is still better than an oligarchy, where the desires of few >> are considered while the desires of the masses are ignored. And also >> you can see the difference between uninformed electorate vs. a well >> informed one. The informed electorate has a better grasp of the >> consequences of their decisions, and so their collective desires are >> more fully fulfilled. > I don't see the relevance to morality. Politics and morality are > rarely on talking terms. >> 'One true set of beliefs' is not only wrong, it's dangerous, >> which he just confirms by saying it means there is only one >> most-correct set of desires, for /everyone/ (!). >> >> Do you not believe in objective truth? > No. > This is religious territory, and the road to dogmatism. > This is the very reason why science is superior to religion.
It > doesn't assume that there is any 'absolute truth' which can be > discovered, after which no further inquiry is needed or wanted. > As to whether, for instance, the laws of physics are invariant > everywhere and at all times, that's a question we can't answer, and > probably will never be able to. > >> If there is objective truth, they are the same truths for everyone. >> Now consider the objective truths for statements such as "it is right >> to do X" or "it is wrong to do Y". If there are objective truths, >> these extend to an objective morality. There would be an objective >> truth to what action is best (even if we lack the computational >> capacity to determine it). >> You may say this is fatal to the theory, but note that we can still >> roughly compute with the number Pi, even though we never consider all >> of its infinite digits. >> >> Does this not ring loud alarm bells to you? I'm thinking we'd >> better hope that there really is no such thing as objective >> morality (if there is, Zuboff is barking up the wrong tree, for >> sure), it would be the basis for the worst kind of tyranny. It's >> a target that I, at least, want to aim away from. 180 degrees away! >> >> No one is proposing a putting a tyrannical AI in charge that forces >> your every decision. But a superintelligent AI that could explain to >> you the consequences of different actions you might take (as far as >> it is able to predict them) would be quite invaluable, and improve >> the lives of many who choose to consider its warnings and advice. > Absolutely. I have no argument with that. But we were talking about > morality. >> His twisting of desire into morality is, well, twisted. Morality >> isn't about what we should want to do, just as bravery isn't >> about having no fear. >> >> Do you have a better definition of morality? > I don't think that's the answer you want to ask. A dictionary can > provide the answer. > > I do have my own moral code though, if that's what you want to know. I > don't expect everyone to see the value of it, or adopt it. And I might > change my mind about it in the future. >> >> He wants to turn people into puppets, and actually remove moral >> agency from them. >> >> Imperfect understanding of consequences cripples our ability to be >> effective moral agents. > Then you think we are crippled as effective moral agents, and doomed > to always be so (because we will always have imperfect understanding > of consquences). >> When we don't understand the pros and cons of a decision, how can we >> hope to be moral agents? We become coin-flippers -- which I would >> argue is to act amorally. If we want true moral agency, we must >> strive to improve our grasp of things. > This is taking an extreme position, and saying either we are 'perfect' > or no use at all. We are neither. Acting with incomplete information > is inevitable. That doesn't mean morality is impossible. > > Just as bravery is being afraid, but acting anyway, morality is not > knowing for sure what the best action is, but acting anyway. Making > the best decision you can, in line with your values. It's about having > a choice. If it were possible to have 'perfect knowledge', there would > be no morality, no choice. I'm not sure what you'd call it. > Predetermination, perhaps. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at zaiboc.net Sat Nov 1 22:22:43 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 1 Nov 2025 22:22:43 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On 01/11/2025 21:42, Jason Resch asked, and Ben answered: >> Do you not believe in objective truth? > > No. Ok, that's probably too simplistic, and needs explaining. I don't think there is such a thing as /absolute/ truth, is a better way of putting it. Obviously there are 'objective truths', as John pointed out with his example of a bridge falling down or not. > consider the objective truths for statements such as "it is right to do X" or "it is wrong to do Y". If there are objective truths, these extend to an objective morality. There would be an objective truth to what action is best (even if we lack the computational capacity to determine it). Statements like that can't be 'objectively true', because they are not about objective things. Right and Wrong are concepts in human minds, not things that exist in themselves. If there were no people, there would be no right and wrong. No morality. 'Objective morality' is an oxymoron, just as 'objective beauty' or 'objective jealousy' are. These are all things that don't exist without subjective experience. -- Ben From jasonresch at gmail.com Sat Nov 1 23:20:21 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 1 Nov 2025 19:20:21 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sat, Nov 1, 2025, 6:06 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Apologies for the formatting of this. I've just noticed that some email > clients jam the text together, making it hard to read. > > Here is a better formatted version (I hope!): > > On 01/11/2025 21:42, Ben wrote: > > > On 01/11/2025 13:32, Jason Resch wrote: > > On Fri, Oct 31, 2025, 5:02 PM Ben Zaiboc via extropy-chat wrote: > >> On 31/10/2025 19:04, Jason Resch wrote: >> >> >> the paper ( https://philarchive.org/rec/ARNMAW ) defines what a perfect morality consists of. And it too, provides a definition of what morality is, and likewise provides a target to aim towards. >> >> >>> >>> Ben Wrote: >>> >>> As different intelligent/rational agents have different experiences, >>> >>> they will form different viewpoints, and come to different conclusions >>> >>> about what is right and not right, what should be and what should not, >>> >>> what they want and what they don't, just like humans do. >> >> The point of the video and article is that desires are based on beliefs, and because beliefs are correctable then so are desires. There is only one "perfect grasp" and accordingly, one true set of beliefs, and from this it follows one most-correct set of desires. This most correct set of desires is the same for everyone, regardless of from which viewpoint it is approached. >> >> Nope. This is nonsense. Just about every assertion is wrong. The very >> first sentence in the abstract is false. And the second. And the third. So >> the whole thing falls apart. Desires are not based on beliefs, they are >> based on emotions. The example of 'wanting to drink hot mud' is idiotic. >> Just because the cup turns out to contain mud doesn't invalidate the desire >> to drink hot chocolate. >> > I think you are misinterpreting the example. It is the desire to drink the > contents of the cup is what changes in response to new information. 
> > > > I wouldn't have put it as 'desire to drink the contents of the cup', when > the desire is to drink hot chocolate. There are originating desires and > there are planned actions to satisfy the desire. Drinking from the cup > might turn out to be a bad idea (the plan is faulty because of incorrect > information), but the original desire is not changed. > If you want to see a Batman movie at a movie theatre, and find that the > only movie available is a romantic comedy, you don't say that you have a > desire to watch any movie which has suddenly changed. You still want to > watch Batman, but can't, so your desire is thwarted, not changed. > > > Think about this alternate example which may be easier to consider: you may naively have the desire to take a certain job, to marry a particular person, attend a certain event, but if that choice turns out to be ruinous, you may regret that decision. If your future self could warn you of the consequences of that choice, then you may no longer desire that job, marriage, or attendance, as much as you previously did, in light of the (unknown) costs they bore, but which you were unaware of. > > > > Decisions are often regretted. That is a fact of life. Future selves > warning you about bad decisions is not. That's time-travel (aka 'magic'), > and should not feature in any serious consideration of how to make good > decisions. "If x could..." is no help when x is impossible. We have > workable tools to help people make better decisions, but time-travel isn't > one of them. > > These are examples to communicate a point. They are not intended be taken literally. The point is you may desire a job, but had you known more about the job, you would not have desired it. > It's not a 'mistaken' desire at all (the mistake is a >> sensory one), and it doesn't somehow morph into a desire to drink >> hot mud. >> >> >> >> "Beliefs are correctable, so desires are correctable" >> >> Each of those two things are true (if you change 'correctable' to >> 'changeable'), but the one doesn't imply the other, which follows >> from the above. >> > Does it apply in the examples I provided? > > > > No. The examples are about decisions, not desires, and they don't address > the beliefs that lead to the decisions. "You may have the desire to do X" > is different to "You decide to do X". The desire may drive the decision or > at least be involved in it, but it isn't the decision (some poeple act > immediately on their desires, but that still doesn't mean they are the same > thing). > Can you regret a desire? I don't think so, but it is arguable. It would be > regretting something that you have no direct control over, so would be > rather silly. > > The decision is irrelevant. You either desire the job or you don't. The point is that this can change based on new information. > Apart from that, there is still no dependency of desires on beliefs. > > If you believe it will be good for you, you may desire it. If you learn later that it will be bad for you, you may no longer desire it. Here, what you desire has a dependency on what you believe. A belief may well affect the plan you make to satisfy a desire, but > changing the belief doesn't change the desire. Can a belief give rise to a > desire? That's a more complicated question than it appears, I think, and > leads into various types of desires, but still, there's no justification > for the statement "beliefs can change, therefore desires can". > > > > 'Perfect grasp' doesn't mean anything real. 
It implies that it's >> possible to know everything about everything, or even about >> something. The very laws of physics forbid this, many times over, so >> using it in an argument is equivalent to saying "magic". >> > It doesn't have to be possible. The paper is clear on this. The goal of the paper is to answer objectively what makes a certain thing right or wrong. For example, if someone offered you $10 and in return some random person unknown to you would be killed, in a way that would not negatively affect you or anyone you knew, and your memory of the ordeal would be wiped so you wouldn't even bear a guilty conscience, for what reason do we judge and justify the wrongness of taking the $10? > > > > This is 'Trolley problem thinking'. Making up some ridiculous scenario > that would never, or even could never, occur in the real world, then > claiming that it has relevance to the real world. > > It's to frame the problem: where does morality come from, what is its basis, by what method do we determine right or wrong? > This is the goal of the paper: to provide a foundation upon which morality can be established objectively from first principles. > > > > Let's see some examples that are grounded in reality that 'provide a > foundation upon which morality can be established objectively'. I'm not > closed to the possibility that such a thing can be done, but I'm not > holding my breath. > > > How would you answer the question of what separates right from wrong? The initial utilitarian answer is whatever promotes more good experiences than bad experiences. But then, how do you weigh the relative goodness or badness of one experience vs. another, between one person and another, between the varying missed opportunities among future possibilities? > Such questions can only be answered with something approximating an attempt at a grasp of what it means and what it is like to be all the various existing and potential conscious things. > > That's just another way of saying that it can't be answered. > > We can make heuristic attempts at this, despite the fact that we never achieve perfection. > > > > Exactly. We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. > > Yes, this is what I've been saying from the beginning. Perfect grasp is used only to define the aim of morality, not to serve as a practical theory. Consider weather prediction. We can't predict with 100% accuracy, nor predict arbitrarily far into the future. Yet we can make near term predictions with some modicum of accuracy. This is how moral decisions can (and should) be approached. > For example, Democracy can be viewed as a crude approximation, by which each person is given equal weight in the consideration of their desires (with no attempt to weight relative benefits or suffering). But this is still better than an oligarchy, where the desires of few are considered while the desires of the masses are ignored. And also you can see the difference between uninformed electorate vs. a well informed one. The informed electorate has a better grasp of the consequences of their decisions, and so their collective desires are more fully fulfilled. > > > > I don't see the relevance to morality. Politics and morality are rarely on > talking terms. > > Please consider what I wrote carefully. It is an example of putting into practice a heuristic.
And how better heuristics are based on the same model and definition of morality as defined in that paper. > >> >> >> 'One true set of beliefs' is not only wrong, it's dangerous, which >> he just confirms by saying it means there is only one most-correct >> set of desires, for /everyone/ (!). >> > Do you not believe in objective truth? > > > > No. > This is religious territory, and the road to dogmatism. > > Belief in objective truth is the basis of science. This is the very reason wny science is superior to religion. > > Without objective truth, by what measure is any theory in science said to be better than any other? What is the meaning of "falsified" if there are no objective truths or falsehoods? Science as a field and endeavor collapses without a notion of objective truth (unless, perhaps you subscribe to some constructionist, relativist, post-modern notion of reality/truth). But I take the view that most scientists consider their work as something beyond a social interaction/game. It doesn't assume that there is any 'absolute truth' which can be > discovered, after which no further inquiry is needed or wanted. > > I think you may be confusing the existence of objective truth, with the idea that we can access that objective truth and have certainty when we hold it. One does not imply the other. I believe there is objective truth, *and* I believe we can never be certain if/when we have it. We think it is objectively true that 2+2=4, but we can't prove it mathematically, unless we assume some set of axioms (which themselves may or may not be true), and we cannot prove the set of axioms are true. So even on the most trivial matters, we never achieve certainty. As to whether, for instance, the laws of physics are invariant everywhere > and at all times, that's a question we can't answer, and probably will > never be able to. > > Many things are true that we will never know. The 10^(googolplex)th digit of the binary representation of Pi is either 1 or 0. But we, in our finite universe, will never have the computational resources to determine which. Nevertheless at least one of these two statements is objectively true: - The 10^(googolplex)th digit of the binary representation of Pi is 1. - The 10^(googolplex)th digit of the binary representation of Pi is 0. But for those who believe in objective truth, one of these statements is true. > > If there is objective truth, they are the same truths for everyone. > Now consider the objective truths for statements such as "it is right to do X" or "it is wrong to do Y". If there are objective truths, these extend to an objective morality. There would be an objective truth to what action is best (even if we lack the computational capacity to determine it). > You may say this is fatal to the theory, but note that we can still roughly compute with the number Pi, even though we never consider all of its infinite digits. > >> Does this not ring loud alarm >> bells to you? I'm thinking we'd better hope that there really is no >> such thing as objective morality (if there is, Zuboff is barking up >> the wrong tree, for sure), it would be the basis for the worst kind >> of tyranny. It's a target that I, at least, want to aim away from. >> 180 degrees away! >> > No one is proposing a putting a tyrannical AI in charge that forces your every decision. 
But a superintelligent AI that could explain to you the consequences of different actions you might take (as far as it is able to predict them) would be quite invaluable, and improve the lives of many who choose to consider its warnings and advice. > > > > Absolutely. I have no argument with that. But we were talking about > morality. > > Yes and morality concerns which actions are right or wrong. > > >> His twisting of desire into morality is, well, twisted. Morality >> isn't about what we should want to do, just as bravery isn't about >> having no fear. >> > Do you have a better definition of morality? > > I don't think that's the question you want to ask. A dictionary can provide > the answer. > > This is what the dictionary says: "principles concerning the distinction between right and wrong or good and bad behavior." But this only pushes the problem back: what is the definition of right or wrong, good or bad? Zuboff's paper is an example of a theoretical basis on which we can form such definitions, and define what we mean by right and wrong, good and bad. > I do have my own moral code though, if that's what you want to know. I > don't expect everyone to see the value of it, or adopt it. And I might > change my mind about it in the future. > > Let us say you have a particular set of rules in your code. By what process do you decide what rules to adopt, or decide to adopt one rule vs. another? My contention is that to even form a moral code, one must hold some meta-rule for optimizing what one considers to be good while minimizing or avoiding bad. And I think if you explored this meta-rule, you would find it is not all that different from the position Zuboff reaches in his paper. Ultimately, what is good (for one individual) is what that individual would want for themselves if they had a complete knowledge of everything involved. And this is then extended to define good as a maximization of good for all concerned, to achieve the most possible good among all beings who have desires, by satisfying (to the maximum possible extent) the desires each individual would still hold if they all had a perfect grasp of everything. This he refers to as a reconciliation of all systems of desire. He wants to turn people into puppets, and actually >> remove moral agency from them. >> > Imperfect understanding of consequences cripples our ability to be effective moral agents. > > > > Then you think we are crippled as effective moral agents, and doomed to > always be so (because we will always have imperfect understanding of > consequences). > > Indeed. That is why life is so hard, and why "to err is human." As imperfect beings, perpetual mistakes are inevitable. But with greater knowledge, experience, and intelligence, we can strive to minimize that error. > > When we don't understand the pros and cons of a decision, how can we hope to be moral agents? We become coin-flippers -- which I would argue is to act amorally. If we want true moral agency, we must strive to improve our grasp of things. > > > > This is taking an extreme position, and saying either we are 'perfect' or > no use at all. > > Not at all. I specified "when we don't understand..." We are neither. Acting with incomplete information is inevitable. > > Yes. That doesn't mean morality is impossible. > > I fully agree. > Just as bravery is being afraid, but acting anyway, morality is not > knowing for sure what the best action is, but acting anyway.
> > Since we never know anything for sure, I'm not sure that qualifier adds anything useful. I would instead say: moral action requires an attempt to identify the morally best action, and then choosing that action. Then, "amoral action" is action without attempting to identify what the morally best action is, and "immoral action" would be an attempt to identify the morally best action, but then choosing a different action. Making the best decision you can, in line with your values. It's about > having a choice. If it were possible to have 'perfect knowledge', there > would be no morality, no choice. > > I'm not sure that follows. Even with perfect knowledge, you could still choose whether or not to act in accordance with morally best action. Jason I'm not sure what you'd call it. Predetermination, perhaps. > > > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Nov 1 23:22:42 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Nov 2025 16:22:42 -0700 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <744d4987-907a-45ea-9829-62b925e9c397@zaiboc.net> References: <744d4987-907a-45ea-9829-62b925e9c397@zaiboc.net> Message-ID: <005101dc4b86$6cb0f6f0$4612e4d0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat Subject: Re: [ExI] not that rare earth (part 2 of at least 2) On 01/11/2025 13:32, spike wrote: >>... Regarding my post on fire-resistant power storage, I am surprised no one checked my estimates. >...I'm surprised no-one has mentioned flywheel storage. Inherently fire-resistant (there's nothing to catch fire), can store large amounts of energy, can be buried in the ground, and things built on top of them if desired, very long lifetime, very good reliability, easy to build, very fast charging and discharging times. The only downside I can see is the need (or at least desirability) for rare-earths for the magnetic bearings. Which might not actually be a need, if spike is right. -- Ben _______________________________________________ Ben the calculations on this are easy enough, or estimates. Alternative: see if plausible proposals have been made. Compare costs. My intuition tells me that notion won't compete with lithium batteries because of the huge advantage of mass production already in place for the batteries. Aside: the battery storage notion has another fun advantage: it can use old batteries from retired EVs. The batteries can still store power after they are too degraded for automotive use. Those should be cheap. spike From jasonresch at gmail.com Sat Nov 1 23:27:42 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 1 Nov 2025 19:27:42 -0400 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sat, Nov 1, 2025, 6:23 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On 01/11/2025 21:42, Jason Resch asked, and Ben answered: > >> Do you not believe in objective truth? > > > > No. > > Ok, that's probably too simplistic, and needs explaining. > > I don't think there is such a thing as /absolute/ truth, is a better way > of putting it. Obviously there are 'objective truths', as John pointed > out with his example of a bridge falling down or not. 
> > > consider the objective truths for statements such as "it is right to do > X" or "it is wrong to do Y". If there are objective truths, these extend to > an objective morality. There would be an objective truth to what action is > best (even if we lack the computational capacity to determine it). > > > Statements like that can't be 'objectively true', because they are not > about objective things. Right and Wrong are concepts in human minds, not > things that exist in themselves. If there were no people, there would be > no right and wrong. No morality. 'Objective morality' is an oxymoron, > just as 'objective beauty' or 'objective jealousy' are. These are all > things that don't exist without subjective experience. > I agree that good and bad (and hence right and wrong) depend on the existence of conscious beings. Despite that the states of these conscious beings is subjective, it's nevertheless objectively true (or false) that "subject X is experiencing pain." >From these objective facts (concerning subjective states) it becomes possible to develop an objective morality. For example, I might propose that it's an objective moral truth that "needlessly torturing innocent children" is a moral wrong. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 2 11:27:19 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Nov 2025 06:27:19 -0500 Subject: [ExI] reason an emotion In-Reply-To: References: Message-ID: On Sat, Nov 1, 2025 at 9:33?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> emotions are involved in every decision we make, no matter how trivial.* *And I'm sure that will be just as true for an AI as it is for us. So the AI revolution may have created at least one new job category, that of an AI psychiatrist. But eventually AIs will get better than humans at that too.* *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 2 12:47:54 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Nov 2025 07:47:54 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <007f01dc4b3d$24958820$6dc09860$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> Message-ID: On Sat, Nov 1, 2025 at 10:38?AM wrote: > *>> ?I can't think of any application in which the rare earths don't need >> to be refined out into separate elements, if there is such an application >> it must not be a very important one? John K Clark* > > > > *> If such applications exist, we wouldn?t know. Those would be held as > trade secrets. * > *If there were such an application we already know magnetic or optical properties can't be involved, because there is great variation in those properties among the different rare earths. So it must be something in their chemical properties because they are all almost identical in that respect. But a trade secret that can be told in one short sentence such as "unrefined rare earths can be economically useful in the X chemical process" tend not to remain secret for very long. 
* *> Recognizing all this, Musk found a way to phase out rare earth elements > without losing significant performance in his magnets.* > *No Musk has not. It's true that back in 2008 the original Tesla had no rare earth magnets, in fact it had no permanent magnets of any sort because it used an old-fashioned AC induction motor. However the best efficiency you're going to get with one of those is about 85%, but a motor that uses rare earth permanent magnets has an efficiency of about 97%, and an induction motor is also larger and MUCH heavier than a rare earth motor of equal power. Tesla is not currently producing ANY cars that completely avoid rare earth magnets, however their cheapest car uses an AC induction motor for the front wheels but a rare earth permanent magnet motor for the back wheels. But Tesla is never going to be competitive with any Chinese car maker if they continue to go down that road.* *In 2023 Tesla made a grand announcement they were going to find a way to make a great electric motor that didn't use any rare earths, but it's almost 2026 and we've heard nothing more about that, certainly nothing has been put in production. * > *> **we can go up the group one period if that element is cheaper and > more easily available. * > *I don't know what you mean by "go up the group one period" because the rare earth elements are all in the same group, and the periodic table of elements are arranged according to their chemical properties NOT their magnetic properties which is what we are interested in. The defining feature of the rare earth elements is that their outer valence electrons are all the same, and valence electrons are the ones that are primarily responsible for giving an element its chemical properties. But the various rare earth element's inner shell of electrons are all quite different, and that is what gives the specific element its magnetic properties. * *> **Elon wouldn?t risk the entire company on the continuing availability > of anything on which China has a monopoly. * > *If he wants to make electric cars then he'd have no choice, unless Musk could convince He Who Must Not Be Named to put a huge tariff on imported Chinese cars and thus force Americans to buy his overpriced outdated inefficient junk cars. * *> The US military wouldn?t base their technology on any material in which > it didn?t hold a fifty year stockpile. * > *If the US military was omnipotent I'm sure that's exactly what they would do. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 2 14:16:16 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 09:16:16 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <813304aa-2157-48b1-a2b9-8cfcdec91b27@zaiboc.net> Message-ID: On Sun, Nov 2, 2025, 9:05 AM John Clark wrote: > > > On Fri, Oct 31, 2025 at 10:50?AM Jason Resch wrote: > > > >> >>> See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE >>>> >>> >>> *>> If "real desires" require perfect knowledge then "real desires" do >>> not exist and it is not a useful concept. * >>> >> >> *> The better knowledge/intelligence becomes the more correctly we >> approach that unattainable perfect grasp. 
**It is a useful concept >> insofar as it defined an ideal, just like Turing machines define >> computation, though their perfect and unlimited memory is unrealizable in >> practice.* >> > > *You're right, Turing was able to define computation with his machine and > his instructions on how to construct the device were simple and very clear, > and he was able to prove a number of fascinating things about computation > from his machine. But there is nothing equivalent to that when it comes to > morality, certainly not a proof that "all sufficiently intelligent and > rational agents reach the same morality". And all the empirical evidence is > pointing in the opposite direction.* > If this is a problem that genuinely interests you (and I think it should, because if it's true, it means superintelligence will tend towards beneficence), then read the attached paper, and see whether you agree with it or if you can uncover some fatal flaw in its reasoning. Jason > > > > > >> >>> >>> >>> >>> >>>> >>>> On Fri, Oct 31, 2025, 8:04 AM John Clark via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On Thu, Oct 30, 2025 at 8:40?PM William Flynn Wallace via extropy-chat >>>>> wrote: >>>>> >>>>> *> Jason, are you saying that only a future AI with adequate memory, >>>>>> will ever understand our minds? * >>>>>> >>>>> >>>>> *I don't know about Jason but I would say that, and I'm saying that >>>>> superintelligent AI will never fully understand its own mind because, even >>>>> though it understands ours, however big it gets it will still be finite. >>>>> And only with an infinite set can a proper subset be put into a one to one >>>>> correspondence with the entire set. * >>>>> >>>>> *> At that point, humans are superflous, not needed, better off >>>>>> extinct. * >>>>>> >>>>> >>>>> *Better off for who? Not better off for us certainly, maybe better off >>>>> for the AI. * >>>>> >>>>> > Or the AIs will keep us around as interesting pets. >>>>>> >>>>> >>>>> *My hope is that the superintelligence will think we're cute pets, or >>>>> will feel some sort of a sense of duty, like the obligation we feel in >>>>> taking care of an aged parent who has Alzheimer's disease. But whether a >>>>> SuperIntelligent AI will feel either of those emotions strong enough to >>>>> keep us around I don't know. I can't predict with much specificity what >>>>> even one of my fellow human beings will do that is no smarter than I am, >>>>> and it is vastly more difficult to predict the actions of a >>>>> superintelligence, even generally. * >>>>> >>>> >>>> >>>> >>> >>> >>> >>> >>> >>>> >>>> Jason >>>> >>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>> On Thu, Oct 30, 2025 at 5:12?PM Jason Resch via extropy-chat < >>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via extropy-chat >>>>>>> wrote: >>>>>>> >>>>>>>> I have read several times in these chats the assumption that one >>>>>>>> cannot understand something as complicated as themselves. >>>>>>>> >>>>>>>> Why not? It sounds reasonable but what's the basis for it? bill w >>>>>>>> >>>>>>> >>>>>>> I believe it may follow from information theory. >>>>>>> >>>>>>> Consider: if understanding(X) requires holding some additional >>>>>>> higher-level set of relations and interrelations beyond the mere >>>>>>> specification of what X is, then the information contained within >>>>>>> understanding(X) will always be greater than the information contained in X. 
>>>>>>> >>>>>>> Now extend this to the brain. If brain's information content is Y, >>>>>>> then understanding (Y) requires a brain with a greater information storage >>>>>>> capacity than Y. >>>>>>> >>>>>>> Or another way to think about it: how many neurons does it take to >>>>>>> memorize all the important facts of a single neuron's connections within >>>>>>> the brain? If it takes N neurons to store that memory, then just memorizing >>>>>>> a brain state will require a brain with N times as many neurons as the >>>>>>> brain that's memorized. >>>>>>> >>>>>>> Jason >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On Thu, Oct 30, 2025 at 2:22?PM John Clark via extropy-chat < >>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>> >>>>>>>>> On Tue, Oct 28, 2025 at 4:16?PM Ben Zaiboc via extropy-chat < >>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>> >>>>>>>>> *> There are also nuances. For example, different interpretations >>>>>>>>>> of "to understand".* >>>>>>>>> >>>>>>>>> >>>>>>>>> *Exactly. We can have a general sort of understanding of how our >>>>>>>>> brain works but to have a perfect understanding a part of our brain would >>>>>>>>> have to have a sort of internal map of the entire brain, and for it to be >>>>>>>>> perfect there would have to be a one to one correspondence between the map >>>>>>>>> and the territory, but that would be impossible for something that is >>>>>>>>> finite like the number of neurons in the human brain. However it would be >>>>>>>>> possible for a proper subset of something infinite to have a one to one >>>>>>>>> correspondence with the entire set; then you could have such a perfect map >>>>>>>>> with a one to one correspondence, and then you'd always know what you were >>>>>>>>> going to do long before you did it. And you wouldn't feel free. So by the >>>>>>>>> only definition of free will that is not gibberish (not knowing what you're >>>>>>>>> going to do next until you actually do it) we reach the interesting >>>>>>>>> conclusion that a human being does have free will, but God does not.* >>>>>>>>> >>>>>>>>> *John K Clark* >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> non-flying animal. >>>>>>>>>> >>>>>>>>>> "If our brains were simple enough for us to understand, we would be >>>>>>>>>> simple enough that we could not." >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Well, that just sounds defeatist to me. It makes a nice little >>>>>>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence >>>>>>>>>> that it's true? Or any logical argument for it? >>>>>>>>>> There are also nuances. For example, different interpretations of >>>>>>>>>> "to understand". >>>>>>>>>> >>>>>>>>>> Maybe you are right, given "understand completely" (whatever that >>>>>>>>>> actually means). Maybe definitely not, given "understand enough >>>>>>>>>> to be useful/worth the attempt". >>>>>>>>>> >>>>>>>>>> We have, after all, discovered a lot about how brains work >>>>>>>>>> already. Maybe not a lot in comparison to all there is to be discovered, >>>>>>>>>> but more than enough to be useful, and I doubt if we have reached some sort >>>>>>>>>> of limit on what we are capable of discovering and understanding. >>>>>>>>>> >>>>>>>>>> And there's always AI assistance with this kind of research, >>>>>>>>>> which greatly extends our reach, and adds more variations of "to >>>>>>>>>> understand". >>>>>>>>>> >>>>>>>>>> On the whole, I think the statement is harmful, in that it tends >>>>>>>>>> to discourage even trying. 
>>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Ben >>>>>>>>>> >>>>>>>>>> >>>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 2 14:26:56 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 09:26:56 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <813304aa-2157-48b1-a2b9-8cfcdec91b27@zaiboc.net> Message-ID: On Sun, Nov 2, 2025, 9:16 AM Jason Resch wrote: > > > On Sun, Nov 2, 2025, 9:05 AM John Clark wrote: > >> >> >> On Fri, Oct 31, 2025 at 10:50?AM Jason Resch >> wrote: >> >> >> >>> >>> See: https://youtu.be/Yy3SKed25eM?si=NqE8fsY2aROLpXNE >>>>> >>>> >>>> *>> If "real desires" require perfect knowledge then "real desires" do >>>> not exist and it is not a useful concept. * >>>> >>> >>> *> The better knowledge/intelligence becomes the more correctly we >>> approach that unattainable perfect grasp. **It is a useful concept >>> insofar as it defined an ideal, just like Turing machines define >>> computation, though their perfect and unlimited memory is unrealizable in >>> practice.* >>> >> >> *You're right, Turing was able to define computation with his machine and >> his instructions on how to construct the device were simple and very clear, >> and he was able to prove a number of fascinating things about computation >> from his machine. But there is nothing equivalent to that when it comes to >> morality, certainly not a proof that "all sufficiently intelligent and >> rational agents reach the same morality". And all the empirical evidence is >> pointing in the opposite direction.* >> > > > If this is a problem that genuinely interests you (and I think it should, > because if it's true, it means superintelligence will tend towards > beneficence), then read the attached paper, and see whether you agree with > it or if you can uncover some fatal flaw in its reasoning. > The attachment failed, I have uploaded the paper here: https://drive.google.com/file/d/1l8T1z5dCQQiwJPlQlqm8u-1oWpoeth3-/view?usp=drivesdk Jason > > >> >> >> >> >> >>> >>>> >>>> >>>> >>>> >>>>> >>>>> On Fri, Oct 31, 2025, 8:04 AM John Clark via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> On Thu, Oct 30, 2025 at 8:40?PM William Flynn Wallace via >>>>>> extropy-chat wrote: >>>>>> >>>>>> *> Jason, are you saying that only a future AI with adequate memory, >>>>>>> will ever understand our minds? * >>>>>>> >>>>>> >>>>>> *I don't know about Jason but I would say that, and I'm saying that >>>>>> superintelligent AI will never fully understand its own mind because, even >>>>>> though it understands ours, however big it gets it will still be finite. >>>>>> And only with an infinite set can a proper subset be put into a one to one >>>>>> correspondence with the entire set. * >>>>>> >>>>>> *> At that point, humans are superflous, not needed, better off >>>>>>> extinct. * >>>>>>> >>>>>> >>>>>> *Better off for who? Not better off for us certainly, maybe better >>>>>> off for the AI. * >>>>>> >>>>>> > Or the AIs will keep us around as interesting pets. >>>>>>> >>>>>> >>>>>> *My hope is that the superintelligence will think we're cute pets, or >>>>>> will feel some sort of a sense of duty, like the obligation we feel in >>>>>> taking care of an aged parent who has Alzheimer's disease. But whether a >>>>>> SuperIntelligent AI will feel either of those emotions strong enough to >>>>>> keep us around I don't know. 
I can't predict with much specificity what >>>>>> even one of my fellow human beings will do that is no smarter than I am, >>>>>> and it is vastly more difficult to predict the actions of a >>>>>> superintelligence, even generally. * >>>>>> >>>>> >>>>> >>>>> >>>> >>>> >>>> >>>> >>>> >>>>> >>>>> Jason >>>>> >>>>> >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Oct 30, 2025 at 5:12?PM Jason Resch via extropy-chat < >>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Thu, Oct 30, 2025, 3:35 PM William Flynn Wallace via >>>>>>>> extropy-chat wrote: >>>>>>>> >>>>>>>>> I have read several times in these chats the assumption that one >>>>>>>>> cannot understand something as complicated as themselves. >>>>>>>>> >>>>>>>>> Why not? It sounds reasonable but what's the basis for it? bill >>>>>>>>> w >>>>>>>>> >>>>>>>> >>>>>>>> I believe it may follow from information theory. >>>>>>>> >>>>>>>> Consider: if understanding(X) requires holding some additional >>>>>>>> higher-level set of relations and interrelations beyond the mere >>>>>>>> specification of what X is, then the information contained within >>>>>>>> understanding(X) will always be greater than the information contained in X. >>>>>>>> >>>>>>>> Now extend this to the brain. If brain's information content is Y, >>>>>>>> then understanding (Y) requires a brain with a greater information storage >>>>>>>> capacity than Y. >>>>>>>> >>>>>>>> Or another way to think about it: how many neurons does it take to >>>>>>>> memorize all the important facts of a single neuron's connections within >>>>>>>> the brain? If it takes N neurons to store that memory, then just memorizing >>>>>>>> a brain state will require a brain with N times as many neurons as the >>>>>>>> brain that's memorized. >>>>>>>> >>>>>>>> Jason >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> On Thu, Oct 30, 2025 at 2:22?PM John Clark via extropy-chat < >>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>> >>>>>>>>>> On Tue, Oct 28, 2025 at 4:16?PM Ben Zaiboc via extropy-chat < >>>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>>> >>>>>>>>>> *> There are also nuances. For example, different interpretations >>>>>>>>>>> of "to understand".* >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Exactly. We can have a general sort of understanding of how our >>>>>>>>>> brain works but to have a perfect understanding a part of our brain would >>>>>>>>>> have to have a sort of internal map of the entire brain, and for it to be >>>>>>>>>> perfect there would have to be a one to one correspondence between the map >>>>>>>>>> and the territory, but that would be impossible for something that is >>>>>>>>>> finite like the number of neurons in the human brain. However it would be >>>>>>>>>> possible for a proper subset of something infinite to have a one to one >>>>>>>>>> correspondence with the entire set; then you could have such a perfect map >>>>>>>>>> with a one to one correspondence, and then you'd always know what you were >>>>>>>>>> going to do long before you did it. And you wouldn't feel free. So by the >>>>>>>>>> only definition of free will that is not gibberish (not knowing what you're >>>>>>>>>> going to do next until you actually do it) we reach the interesting >>>>>>>>>> conclusion that a human being does have free will, but God does not.* >>>>>>>>>> >>>>>>>>>> *John K Clark* >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> non-flying animal. 
>>>>>>>>>>> >>>>>>>>>>> "If our brains were simple enough for us to understand, we would be >>>>>>>>>>> simple enough that we could not." >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Well, that just sounds defeatist to me. It makes a nice little >>>>>>>>>>> pessimistic soundbite (if you like pessimism), but is there any evidence >>>>>>>>>>> that it's true? Or any logical argument for it? >>>>>>>>>>> There are also nuances. For example, different interpretations >>>>>>>>>>> of "to understand". >>>>>>>>>>> >>>>>>>>>>> Maybe you are right, given "understand completely" (whatever >>>>>>>>>>> that actually means). Maybe definitely not, given "understand >>>>>>>>>>> enough to be useful/worth the attempt". >>>>>>>>>>> >>>>>>>>>>> We have, after all, discovered a lot about how brains work >>>>>>>>>>> already. Maybe not a lot in comparison to all there is to be discovered, >>>>>>>>>>> but more than enough to be useful, and I doubt if we have reached some sort >>>>>>>>>>> of limit on what we are capable of discovering and understanding. >>>>>>>>>>> >>>>>>>>>>> And there's always AI assistance with this kind of research, >>>>>>>>>>> which greatly extends our reach, and adds more variations of "to >>>>>>>>>>> understand". >>>>>>>>>>> >>>>>>>>>>> On the whole, I think the statement is harmful, in that it tends >>>>>>>>>>> to discourage even trying. >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Ben >>>>>>>>>>> >>>>>>>>>>> >>>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Nov 2 14:50:46 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 2 Nov 2025 14:50:46 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On 01/11/2025 23:20, Jason Resch wrote: > If you believe it will be good for you, you may desire it. If you learn later that it will be bad for you, you may no longer desire it. Here, what you desire has a dependency on what you believe. Discuss that with a cigarette smoker. I think you'll find they disagree. > It's to frame the problem: where does morality come from, what is its basis, by what method do we determine right or wrong? Well that's easy, I can tell you. Morality comes from us. We make it up. And the methods we use are various. >> We always have to make decisions in the /absence/ of full information. What we would do if we had 'all the information' is irrelevant, if it even means anything. > Yes, this is what I've been saying from the beginning. Perfect grasp is used only to define the aim of morality, not to serve as a practical theory. We know what the aim of morality is: To distinguish right actions from wrong ones. Nothing difficult about that, and no 'perfect grasp' is needed to establish it. The difficulty is in deciding what 'right' and 'wrong' mean. Different people in various different circumstances seem to have different opinions. > Consider weather prediction. We can't predict with 100% accuracy, nor predict arbitrarily far into the future. Yet we can make near term predictions with some modicum of accuracy. This is how moral decisions can (and should) be approached. Can, yes. Should? Who are you to say? You are now deciding for other people. My morality tells me that this is immoral. > Please consider what I wrote carefully. It is an example of putting into practice a heuristic. And how better heuristics are based on the same model and definition of morality as defined in that paper. You may think so. I don't. That paper is nonsense. 
As I said, the first three statements are flat-out wrong. > > Without objective truth, by what measure is any theory in science said to be better than any other? Yes, I've addressed that in another post. I was too hasty in saying "No" to the question, mistaking 'objective' for 'absolute'. My mistake. > what is the definition of right or wrong, good or bad? Zuboff's paper is an example of a theoretical basis on which we can form such definitions, and define what we mean by right and wrong, good and bad. Apart from the fact that Zuboff's paper is based on false premises, and therefore worthless, the very question "what is right and what is wrong?" can't be given a definitive answer that is true for everyone in every circumstance. It's like trying to give a definitive answer to "what is the tastiest food?", that applies to everyone in all circumstances. You can't solve subjective problems with an objective approach. > Let us say you have a particular set of rules in your code. I do. > By what process do you decide what rules to adopt, or decide to adopt one rule vs. another? There is a hierarchy, built on a principle that I worked out a long time ago. I just need to slot a problem into the right level of the hierarchy, and the solution is obvious. I've never met a (real) situation that it can't handle to my satisfaction (I'm not claiming to have the answer to the trolley problem!). > My contention is that to even form a moral code, one must hold some meta-rule for optimizing what one considers to be good while minimizing or avoiding bad. Indeed. And I'd say that the meta-rule is what defines 'good' and 'bad'. > And I think if you explored this meta-rule, you would find it is not all that different from the position Zuboff reaches in his paper. On the contrary, it is totally different, and much simpler, than Zuboff's nonsense. > Ultimately, what is good (for one individual) is what that individual would want for themselves if they had a complete knowledge of everything involved. First, No. Second, this would be reducing morality to what is good for an individual... > And this is then extended to define good as a maximization of good for all concerned, to achieve the most possible good among all beings who have desires, by satisfying (to the maximum possible extent) the desires each individual would still hold if they all had a perfect grasp of everything. This he refers to as a reconciliation of all systems of desire. ... then dragging everyone else into it (Golden Rule, and we know what's wrong with that) I really don't see the point of positing an impossible knowledge then using this as the basis of a system of morality (or anything at all). Saying "Oh, but it's just theoretical, not real, don't take it too literally" is basically the same as saying it's totally useless for any practical purpose. A 'reconciliation of all systems of desire' is equivalent to 'a reconciliation of all systems of taste'. That's apart from the fact that the whole paper is nonsense. >> It's about having a choice. If it were possible to have 'perfect knowledge', there would be no morality, no choice. > I'm not sure that follows. Even with perfect knowledge, you could still choose whether or not to act in accordance with morally best action. That's true. People can choose to be evil. Does anyone actually do that? We'd probably class it as mental illness. I don't understand why you are taking Zuboff's paper seriously.
Do you take his first three statements in the paper's abstract at face value?: 1) "If I desire to drink some stuff thinking it is hot chocolate when actually it is hot mud, my desire is not a real one - it?s mistaken or only apparent." (misconstruing the desire to drink hot chocolate as a desire to drink whatever is in the cup. If that were the case, he'd drink the mud) 2) "This example illustrates how a desire must always depend on a belief about its object, a belief about what it is and what it?s like." (false assumption that if any desire is dependent on a belief (something that I'd dispute, but it needs closer examination), all desires must always be dependent on beliefs. Saying "This example illustrates..." is deflecting the reader from the fact that he's making an assumption and failing to show why it should be true) 3) "But beliefs are correctable, so desires are correctable" ( I don't know why he uses the term 'correctable', which implies wrongness, but this statement just compounds the above errors and adds one more: False conclusion that if a belief can change, this means that a desire can change) I can understand someone saying that beliefs are sometimes based on desires (I'm sure this is often the case), but not the reverse. That's just daft. Desires are emotional, derived from feedback on bodily states, and elaborated by memories and imagination. Beliefs about various things can certainly contribute to the process, but you can't reasonably claim that (all) desires are a result of (only) beliefs. At the best, Zuboff is guilty of grossly oversimplifying and misattributing things. At the worst, well, I'd be committing the Internet Sin of Ad-Hominem Attack to say anything more, and that goes against my moral code. -- Ben From spike at rainier66.com Sun Nov 2 14:52:32 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 2 Nov 2025 06:52:32 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> Message-ID: <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> From: John Clark Sent: Sunday, 2 November, 2025 4:48 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] not that rare earth (part 2 of at least 2) On Sat, Nov 1, 2025 at 10:38?AM > wrote: >> ?I can't think of any application in which the rare earths don't need to be refined out into separate elements, if there is such an application it must not be a very important one? John K Clark > If such applications exist, we wouldn?t know. Those would be held as trade secrets. >?If there were such an application we already know magnetic or optical properties can't be involved, because there is great variation in those properties among the different rare earths. So it must be something in their chemical properties because they are all almost identical in that respect? On the contrary John. The same properties that make those elements so difficult to separate makes them mostly interchangeable for magnetic purposes. The companies that make those magnets don?t advertise that fact, for a reason: that is how they make their living. 
>?But a trade secret that can be told in one short sentence such as "unrefined rare earths can be economically useful in the X chemical process" tend not to remain secret for very long? It isn?t a secret. Plenty of us have figured it out. I already knew because of a project I was on nearly 30 years ago, where a company was using whatzisium (one of the obscure ?rare? earths (an element that surprises you when you hear its name (because you have been around chemistry your adult life but never heard of this one (because it isn?t used for anything))) in an optical coating and claiming it was a proprietary process for which we needed to pay them a lot of money. This contractor had arbitrarily tried something in their optical coating proces, then tuned its focal plane array to the optical properties of that coating. We paid. But being a company with spectrum analyzers and electron microscopes, we looked at the product we bought from them and found out the materials that went into it, even if we don?t have the process itself. Turns out any one of those lanthanides down there on the lower left will do what they were doing: it is a catalyst, acting as a big (well, tiny actually, big compared to other atoms) capacitor in a way: a tiny trace of it forms nuclei for crystal cell growth. The element itself doesn?t participate in the reaction. Probably the corresponding actinide would work even better, but it was radioactive. The one above it would work too, but that one isn?t exotic and might already be patented. John are you seeing a pattern here? > Recognizing all this, Musk found a way to phase out rare earth elements without losing significant performance in his magnets. >?No Musk has not. It's true that back in 2008 the original Tesla had no rare earth magnets, in fact it had no permanent magnets of any sort because it used an old-fashioned AC induction motor. However the best efficiency you're going to get with one of those is about 85%, but a motor that uses rare earth permanent magnets has an efficiency of about 97%, and an induction motor is also larger and MUCH heavier than a rare earth motor of equal power? Ja but notice what you are comparing to what. There are magnets that use substitute materials for what any rare earth does. But for Tesla it wouldn?t matter anyway, since EVs don?t use very much of it anyway. They can get all that they need. >? But Tesla is never going to be competitive with any Chinese car maker if they continue to go down that road?. The market cap for Tesla is 1.5 trillion bucks. The Chinese are commies. I don?t even know how to figure out the market cap of a company in China. I wouldn?t invest in it. >?In 2023 Tesla made a grand announcement they were going to find a way to make a great electric motor that didn't use any rare earths, but it's almost 2026 and we've heard nothing more about that, certainly nothing has been put in production? How would you know? They aren?t worried about the Chinese rare earth embargo. That tells me they either have a fifty year stockpile or found a workaround. I don?t see any performance degradation in the cars. > we can go up the group one period if that element is cheaper and more easily available. >?I don't know what you mean by "go up the group one period" because the rare earth elements are all in the same group, and the periodic table of elements are arranged according to their chemical properties NOT their magnetic properties which is what we are interested in? Ja, and this is where we had the sophistication to figure it out. 
The lanthanides are not themselves magnetic. They act as catalysts in crystal formation in the iron, which is magnetic. We coulda figured this out along with the Tesla engineers. >?The defining feature of the rare earth elements is that their outer valence electrons are all the same, and valence electrons are the ones that are primarily responsible for giving an element its chemical properties. But the various rare earth element's inner shell of electrons are all quite different, and that is what gives the specific element its magnetic properties?. John ponder what you just wrote, then recognize that the element itself isn?t magnetic. The iron does that. > Elon wouldn?t risk the entire company on the continuing availability of anything on which China has a monopoly. >?If he wants to make electric cars then he'd have no choice, unless Musk could convince He Who Must Not Be Named to put a huge tariff on imported Chinese cars and thus force Americans to buy his overpriced outdated inefficient junk cars? Nonsense. Tesla found workarounds, for a good reason. We have known since as far back as 2010 that manufacturing needs to be moved out of China. Even if they need to use expensive domestic-produced rare earths (I don?t think they do) the little bit that EVs use would add so little cost, it wouldn?t matter compared to the cost of shipping Chinese cars over an ocean that spans damn near half the globe. The cost competitiveness of Chinese cars comes from their cheap commie labor. As technology advances, the cost of a manufactured item is influenced less and less by the cost of labor. If you tour the Tesla factory, the first thing you notice is how few people are in that factory. The cheap commie labor can?t compete with robots. John look at what you are arguing: there is some magic material that the Chinese have, but we don?t because of our stringent environmental regulations, which lets them make electric cars cheaper, swoon how shall we cope? I say nonsense. We can already make more electric cars than the market wants, and the California government which mandated their sales is likely to be thrown out head first pretty soon because the market didn?t want that product they told us was so great, but the tax incentive expired, the carpool line access expired and the price of power nearly doubled in the past 8 years, making it so they aren?t really cheaper to operate anymore. So no, I am not buying that argument, or an EV, or Tesla stock, or Rivian stock, or any Chinese EV stock, not buying any of it. I predict EVs will level out around almost half the new car sales eventually, which is good, because that could solve the energy storage problem: we allow EV owners to sell power back into the grid from their cars during peak demand. > The US military wouldn?t base their technology on any material in which it didn?t hold a fifty year stockpile. >?If the US military was omnipotent I'm sure that's exactly what they would do. John K Clark John, they did it. The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. Of course they thought of that: they aren?t going to depend for materials on the country we might be at war with soon. Of course they stockpiled that stuff, not even knowing or caring what it might be used for. Anticipating any threat is what the military does best. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Sun Nov 2 16:34:36 2025 From: pharos at gmail.com (BillK) Date: Sun, 2 Nov 2025 16:34:36 +0000 Subject: [ExI] AI music is now appearing on Billbosrd's charts Message-ID: How Many AI Artists Have Debuted on Billboard?s Charts? Xania Monet, who becomes the first AI-powered artist to debut on an airplay chart this week, is one of several AI-driven acts to chart over the past two months. By Xander Zellner 10/28/2025 Quote: AI music is no longer a fantasy or niche curiosity among internet sleuths ? it?s here, and it?s already beginning to have an impact on Billboard?s charts. In just the past few months, at least six AI or AI-assisted artists have debuted on various Billboard rankings. That figure could be higher, as it?s become increasingly difficult to tell who or what is powered by AI ? and to what extent. Many of these charting projects, whose music spans every genre from gospel to rock to country, also arrive with anonymous or mysterious origins. ------------------------ So, how long until the AI Beethoven appears? BillK From jasonresch at gmail.com Sun Nov 2 16:50:55 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 11:50:55 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 01/11/2025 23:20, Jason Resch wrote: > > > If you believe it will be good for you, you may desire it. If you > learn later that it will be bad for you, you may no longer desire it. > Here, what you desire has a dependency on what you believe. > > > Discuss that with a cigarette smoker. I think you'll find they disagree. > Compare to the hypothetical reality where cigarettes are healthy, and you will see my point stands. > > > It's to frame the problem: where does morality come from, what is its > basis, by what method do we determine right or wrong? > > > Well that's easy, I can tell you. Morality comes from us. We make it up. That's the common view. But that common view is lacking a base, and it doesn't help answer the question of whether AI, or superintelligences will tend towards any sort of morality, nor what it might be that they tend towards. Hence the utility of this framework. > And the methods we use are various. > And some methods, I would contend, are better than others. > > >> We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. > > > Yes, this is what I've been saying from the beginning. Perfect grasp > is used only to define the aim of morality, not to serve as a practical > theory. > > > We know what the aim of morality is: To distinguish right actions from > wrong ones. Nothing difficult about that, and no 'perfect grasp' is > needed to establish it. The difficulty is in deciding what 'right' and > 'wrong' mean. Different people in various different circumstances seem > to have different opinions. > Likewise, people used to disagree about what lightning was. > > > Consider weather prediction. We can't predict with 100% accuracy, nor > predict arbitrarily far into the future. Yet we can make near term > predictions with some modicum of accuracy. > This is how moral decisions can (and should) be approached. > > > Can, yes. Should? Who are you to say? It seems you still haven't read the paper, as you question suggests you still hold some imagined caricatured version of the theory. 
But note here, I am only saying, that even though we can't predict the future perfectly nor arbitrarily far into the future, the basic idea behind deciding which actions are right or wrong, involves making some attempt at predicting the future consequences of an action. All rational decision making processes work this way. You are now deciding for other > people. My morality tells me that this is immoral. > If you understand the paper you will see this definition of morality is based on the fulfillment of the desires of everyone, where those desires are what each person would genuinely want for themselves when fully informed about everything relevant. It has nothing to do with me, or anyone else telling you what to do. It is merely a definition. > > > Please consider what I wrote carefully. It is an example of putting > into practice a heuristic. And how better heuristics are based on the > same model and definition of morality as defined in that paper. > > > You may think so. I don't. That paper is nonsense. As I said, the first > three statements are flat-out wrong. > It seen you never read any more than the abstract. If you are constrained by time, feed the paper into you favorite AI and ask what it thinks about the paper. > > > > Without objective truth, by what measure is any theory in science > said to be better than any other? > > > Yes, I've addressed that in another post. I was too hasty in saying "No" > to the question, mistaking 'objective' for 'absolute'. My mistake. > No worries! I appreciate the clarification. > > > what is the definition of right or wrong, good or bad? Zuboff's paper > is an example of a theoretical basis on which we can form such > definitions, and define what we mean by right and wrong, good and bad. > > > Apart from the fact that Zuboff's paper is based on false premises, and > therefore worthless, the very question "what is right and what is > wrong?" can't be given a definitive answer that is true for everyone in > every circumstance. It's like trying to give a definitive answer to > "what is the tastiest food?", that applies to everyone in all > circumstances. You can't solve subjective problems with an objective > approach. > You can by making it observer-relative. E.g., forget about trying to find a "tastiest food" and instead consider "the tastiest food for this particular person in this time and place." That is what this paper does with morality, it starts with considering the desires of individual subjects. Moves on to correcting those individual desires with better information, and ultimately shows how with enough information, including how ones own desires impact other observers, there is an eventual convergence, where ones desires extend beyond merely wanting what's best for oneself, but also a consideration of what's best for all concerned. This full understanding of what's best for all concerned is the same understanding, regardless of which initial subject you start from. > > > Let us say you have a particular set of rules in your code. > > > I do. > > > > By [what] process do you decide what rules to adopt, or decide to > adopt one rule vs. another. > > > There is a heirarchy, built on a principle that I worked out a long time > ago. I just need to slot a problem into the right level of the > heirarchy, and the solution is obvious. I've never met a (real) > situation that it can't handle to my satisfaction (I'm not claiming to > have the answer to the trolley problem!). 
> If you don't mind sharing, I am curious what that principle is that you worked out. But I also understand if you consider it private. > > > My contention is that to even form a moral code, one must hold some > meta-rule for optimizing what knew [one?] (yes "one" sorry for the typo) considers to be good while > minimizing or avoiding bad. > > > Indeed. And I'd say that the meta-rule is what defines 'good' and 'bad'. > That seems a bit circular to me.. I am not sure how it gets off the ground without a way to distinguish good from bad. > > > And I think if you explored this meta-rule, you would find it is not > all that different from the position Zuboff reaches in his paper. > > > On the contrary, it is totally different, and much simpler, than > Zuboff's nonsense. > It may seem that way, but I think you have swept the details of how to distinguish good from bad under the rug. > > > Ultimately, what is good (for one individual) is what that individual > would want for themselves if they had a complete knowledge of everything > involved. > > > First, No. > Second, this would be reducing morality to what is good for an > individual... > Note that I was careful to specify "good for the individual." I.e., start with the simple model of only a single conscious being in all reality. Then it becomes clear this is a working definition of good that works for that lone being. > > > And then this then extended to define good as a maximization of good > for all concerned, to achieve the most possible good among all beings > who have desires, by satisfying (to the maximum possible extent) the > desires each individual would still hold if they all had a perfect grasp > of everything. This he refers to as a reconciliation of all systems of > desire. > > > ... then dragging everyone else into it Where are you getting this "dragging into it" from? Does the golden rule "drag everyone else into it"? Does your moral code "drag everyone else into it"? No, these are just alternate definitions of moral and immoral behavior. That is what Zuboff's paper provides, a new definition. (Golden Rule, and we know what's > wrong with that) > You could liken Zuboff's result to the platinum rule, corrected by better information, weighted appropriately, modulated by future consequences, and with further concern for possible/future beings who may not (yet) exist. > > I really don't see the point of positing an impossible knowledge then > using this as the basis of a system of morality (or anything at all). I've addressed this many times already. At this point all I can suggest is to read the paper, or have AI read it then ask it to answer these questions for you based on what the paper says. > Saying "Oh, but it's just theoretical, not real, don't take it too > literally" is basically the same as saying it's totally useless for any > practical purpose. > I haven't said that. > A 'reconciliation of all systems of desire' is equivalent to 'a > reconciliation of all systems of taste'. > That's apart from the fact that the whole paper is nonsense. > You say this a person who has not read the whole paper. > > >> It's about having a choice. If it were possible to have 'perfect > knowledge', there would be no morality, no choice. > > > I'm not sure that follows. Even with perfect knowledge, you could > still choose whether or not to act in accordance with morally best action. > > > That's true. People can choose to be evil. Does anyone actually do that? All the time. > We'd probably class it as mental illness. 
> We all do it in small ways all the time. For example, we will choose to pay $20 to go see a movie instead of taking the time to buy a $20 meal for a hungry person. We know it would be a more moral way to spend the $20, but will choose a less moral action instead. > > I don't undertand why you are taking Zuboff's paper seriously. Do you > take his first three statements in the paper's abstract at face value?: > > 1) "If I desire to drink some stuff thinking it is hot chocolate when > actually it is hot mud, my desire is not a real one - it?s mistaken or > only apparent." > > (misconstruing the desire to drink hot chocolate as a desire to drink > whatever is in the cup. If that were the case, he'd drink the mud) > I think you are misreading and over-examining this. It is nothing more than an example of how a desire "ooh that looks good I want to drink it!" can be corrected with new information. I see no problem with that observation. To me it is obviously true. > > 2) "This example illustrates how a desire must always depend on a belief > about its object, a belief about what it is and what it?s like." > > (false assumption that if any desire is dependent on a belief (something > that I'd dispute, but it needs closer examination), all desires must > always be dependent on beliefs. Saying "This example illustrates..." is > deflecting the reader from the fact that he's making an assumption and > failing to show why it should be true) > A desire is the will to fulfill some need or want. By definition, then, it relates to some anticipated future state or experience, which is presently unrealized. Accordingly, that desire concerns a belief (about what the future state or experience will be we like). Again this is all from the abstract, which I'll not give the full exposition or justification. If the full argument and justification could be made in abstracts, we wouldn't need papers. Which is why I suggest you to read the paper is you have questions about it, as it is quite thorough in addressing all the concerns you are raising. > > 3) "But beliefs are correctable, so desires are correctable" > > ( I don't know why he uses the term 'correctable', which implies > wrongness, but this statement just compounds the above errors and adds > one more: False conclusion that if a belief can change, this means that > a desire can change) > I don't know what about this is controversial. Consider this example: T1: Smoking looks cool, I want to smoke. T2: You know smoking causes lung cancer, right? T3: Oh it does? I suppose then I no longer want to smoke. > > I can understand someone saying that beliefs are sometimes based on > desires (I'm sure this is often the case), but not the reverse. That's > just daft. Desires are emotional, derived from feedback on bodily > states, and elaborated by memories and imagination. "Drives" might be a better word to use for such things, and it would also help in understanding his paper to distinguish innate drives which we can't decide or change, from the desires that we decide with our minds, which we can change. Beliefs about > various things can certainly contribute to the process, but you can't > reasonably claim that (all) desires are a result of (only) beliefs. > There are instinctual and unconscious motivations an preferences, things we find innately pleasurable or painful, those I agree are not based on beliefs. They are inherent to what one is. 
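The smoking example above can be put in almost mechanical terms. Here is a minimal sketch (mine, not Zuboff's, and not anything from his paper) of a desire held conditionally on a belief about its object, so that correcting the belief corrects the desire; the dictionary keys and the little rule inside desires() are invented purely for illustration, in Python:

beliefs = {"smoking": {"looks_cool": True, "causes_cancer": False}}

def desires(thing):
    # The desire depends on what the agent currently believes about the object.
    b = beliefs[thing]
    return b["looks_cool"] and not b["causes_cancer"]

print(desires("smoking"))                   # True:  "smoking looks cool, I want to smoke"
beliefs["smoking"]["causes_cancer"] = True  # new information arrives
print(desires("smoking"))                   # False: the desire does not survive the correction

Innate drives of the kind just mentioned sit outside this sketch: they are inputs to it, not conclusions drawn from beliefs.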
When Zuboff's paper refers to desires, I think it should be taken to refer to wants and desires based on, or justified by, conscious thought. > At the best, Zuboff is guilty of grossly oversimplifying and > misattributing things. At the worst, well, I'd be committing the > Internet Sin of Ad-Hominem Attack to say anything more, and that goes > against my moral code. > I think you should give him a fair shake before judging him so harshly, and read more than just the abstract: https://drive.google.com/file/d/1l8T1z5dCQQiwJPlQlqm8u-1oWpoeth3-/view?usp=drivesdk Jason > -- > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Nov 2 16:51:05 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 2 Nov 2025 10:51:05 -0600 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: Jason, keep in mind that 'same stimulus, same response' doesn't work. There scores of reasons why not, ; including simple habituation and sensitization (response waning or increasing). How do you map that? Very general tendencies, perhaps, can be mapped,but the closer you get to predicting specific responses the error rate will increase. And how do you count responses that are the reverse of what you predict? So - we will never map the brain because its topography, if you will allow the metaphor, is constantly changing. bill w On Sat, Nov 1, 2025 at 9:52?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Given that all of our actions originate in the unconscious mind. there is >> no way a conscious mind can understand where its ideas and actions come >> from, much less understand them. >> > > Great point! > > > The conscious mind may think it is in charge but it is just an observer >> (which nevertheless can understand that a mistake has been made and the >> idea or action needs redoing.) >> >> You want to understand our minds? Make the workings of the unconscious >> conscious - and that's just a start. Why did the impulse go to point B >> when it left point A rather than to point C? And then trace all the points >> in between entering the unconscious and resulting in some idea or action. >> And explain each one. >> > > I have doubts that such a thing is possible from the perspective of the > mind in question. Can any brain ever feel and know what each of its neurons > is doing? Can those corresponding neurons feel and know what every one of > its constituent atoms is doing? > > Given Turing universality, it's provable that computer software can't know > about its underlying hardware. If our minds are a kind of software which > can be simulated by a computer, then this same implication would apply to > us. There would be a layer of abstraction of one's underlying > implementation which high levels cannot penetrate. > > >> You can't even get started until you can truly access the unconscious. >> Give Freud credit- he tried to do this. >> > > He deserves credit for the attempt, but I think there are limits to a > mind's ability to introspect. 
> > "Our thoughts seem to run about in their own space, creating new thoughts > and modifying old ones, and we never notice any neurons helping us out! But > that is to be expected. We can?t. [?] > We should remember that physical law is what > makes it all happen?way, way down in neural nooks and crannies which are > too remote for us to reach with our high-level introspective probes." > > ? Douglas Hofstadter in ?G?del, Escher, Bach? (1979) > > > Jason > > >> >> >> On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On 31/10/2025 21:34, Jason Resch wrote: >>>> >>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On 31/10/2025 12:28, John K Clark wrote: >>>>> >>>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>>> >>>>> >>>>> You've completely lost me there, but I have two observations: There's >>>>> no such thing as 'perfect understanding' except as a nebulous theoretical >>>>> concept, and I don't think a one-to-one correspondence would be enough to >>>>> understand something, or even be a relevant concept. We use large parts of >>>>> our brains to process information from small parts of the world. You need a >>>>> lot more than a single neuron to figure out what's going on in a single >>>>> neuron. >>>>> >>>>> Oh, three observations. We don't process data instantaneously. The >>>>> same parts of the brain can be used to process information about something >>>>> repeatedly over time, using feedback loops etc. >>>>> >>>> >>>> Computers and algorithms are constrained by two resources space (i.e. >>>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>>> time/space trade offs to be made in certain circumstances, in general there >>>> is some shortest description of the brain (in terms of bits) for which no >>>> shorter representation is possible (regardless of how much additional >>>> computation is thrown at it). >>>> >>>> So while the same brain may compute many times with the same neurons, >>>> this addresses only the time component of simulating a brain. There is >>>> still the matter of space. >>>> >>>> >>>> Ah, ok. I was talking about understanding the brain, not simulating it. >>>> Modelling something is not the same as understanding it. Yes, they help >>>> each other, but they aren't the same thing. >>>> >>> >>> I think understanding a thing is equivalent to being able to form an >>> accurate mental model of it. With greater levels of understanding >>> corresponding to more accurate models. >>> >>> What do you mean by the word understand? >>> >>> >>> >>>> >>>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. 
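To make the quoted memory argument concrete, here is a toy illustration (mine, not from the thread): treat a machine as nothing but a counter over its state space, and let an "emulator" with fewer bits of state try to track it. The counters are a drastic simplification chosen only to show the pigeonhole effect, in Python:

def run(bits, steps):
    # A trivial "machine": a counter over a state space of size 2**bits.
    state = 0
    trace = []
    for _ in range(steps):
        state = (state + 1) % (2 ** bits)   # the machine's entire behaviour
        trace.append(state)
    return trace

# A 2-bit "emulator" trying to reproduce a 4-bit machine: after four steps its
# state space is exhausted and the traces diverge; extra running time never
# repairs this, only extra state (memory) would.
print(run(bits=4, steps=8))   # [1, 2, 3, 4, 5, 6, 7, 8]
print(run(bits=2, steps=8))   # [1, 2, 3, 0, 1, 2, 3, 0]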
In >>>> fact there's no amount of additional time that will permit the memory >>>> deficient computer to emulate the computer with 1 GB of memory, for the >>>> simple reason that it will run out of variables to represent all the >>>> possible values in the memory addresses of the computer with a greater >>>> memory. >>>> >>>> >>>> I'm not sure that this is true. Are you assuming no swap disk, or other >>>> similar non-RAM storage? >>>> >>> >>> Swap disks are a means to extend available RAM. >>> >>> >>> Because then I'm sure you're right, but that's a pretty artificial >>>> restriction. >>>> The analogy there would be a human with a notepad maybe, or a database, >>>> or a bunch of other humans, an AI, etc. >>>> >>>> So we're back to: A single human brain /on it's own/ can't understand a >>>> human brain in any great detail. Of course. >>>> >>> >>> I think that was the original question: can any mind ever fully >>> understand its own operation. >>> >>> Jason >>> >>> But that's a pretty artificial restriction. >>>> >>>> -- >>>> Ben >>>> >>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 2 17:30:23 2025 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Nov 2025 12:30:23 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: On Sun, Nov 2, 2025, 11:52 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Jason, keep in mind that 'same stimulus, same response' doesn't work. > There scores of reasons why not, ; including simple habituation and > sensitization (response waning or increasing). How do you map that? > I think the failing of behaviorism is it's consideration only if high level inputs and outputs, while ignoring the internal implementation. If instead you build a model of the brain based on its connectome and accurately model each neuron in that connectome, you will automatically recreate all the phenomena such as habituation and sensitization. Very general tendencies, perhaps, can be mapped,but the closer you get to > predicting specific responses the error rate will increase. And how do you > count responses that are the reverse of what you predict? > Failures of the model, or limits of prediction induced by thermal noise. ?The presence of background noise in biological systems implies that no process can depend on requiring more than a certain amount of precision. Beyond a certain point (say, the 10^-10 level on an appropriate scale), uncontrollable fluctuations in background noise will wash out any further precision. 
This means that if we approximate the state of the system to this level of precision (perhaps a little further to be on the safe side?to the 10^-20 level, for example), then we will be doing as well as the system itself can reliably do. It is true that due to nonlinear effects, this approximation may lead to behavior different from the behavior produced by the system on a given occasion?but it will lead to behavior that the system might have produced, had biological noise been a little different.? -- David Chalmers in "The Conscious Mind" (1996) > So - we will never map the brain because its topography, if you will allow > the metaphor, is constantly changing. bill w > But we can freeze a brain at a point in time and then scan it fully on our own time frame, as we have done already with worms and flies. Jason > On Sat, Nov 1, 2025 at 9:52?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Given that all of our actions originate in the unconscious mind. there >>> is no way a conscious mind can understand where its ideas and actions come >>> from, much less understand them. >>> >> >> Great point! >> >> >> The conscious mind may think it is in charge but it is just an observer >>> (which nevertheless can understand that a mistake has been made and the >>> idea or action needs redoing.) >>> >>> You want to understand our minds? Make the workings of the unconscious >>> conscious - and that's just a start. Why did the impulse go to point B >>> when it left point A rather than to point C? And then trace all the points >>> in between entering the unconscious and resulting in some idea or action. >>> And explain each one. >>> >> >> I have doubts that such a thing is possible from the perspective of the >> mind in question. Can any brain ever feel and know what each of its neurons >> is doing? Can those corresponding neurons feel and know what every one of >> its constituent atoms is doing? >> >> Given Turing universality, it's provable that computer software can't >> know about its underlying hardware. If our minds are a kind of software >> which can be simulated by a computer, then this same implication would >> apply to us. There would be a layer of abstraction of one's underlying >> implementation which high levels cannot penetrate. >> >> >>> You can't even get started until you can truly access the unconscious. >>> Give Freud credit- he tried to do this. >>> >> >> He deserves credit for the attempt, but I think there are limits to a >> mind's ability to introspect. >> >> "Our thoughts seem to run about in their own space, creating new thoughts >> and modifying old ones, and we never notice any neurons helping us out! But >> that is to be expected. We can?t. [?] >> We should remember that physical law is what >> makes it all happen?way, way down in neural nooks and crannies which are >> too remote for us to reach with our high-level introspective probes." >> >> ? Douglas Hofstadter in ?G?del, Escher, Bach? 
(1979) >> >> >> Jason >> >> >>> >>> >>> On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> >>>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On 31/10/2025 21:34, Jason Resch wrote: >>>>> >>>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> On 31/10/2025 12:28, John K Clark wrote: >>>>>> >>>>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>>>> >>>>>> >>>>>> You've completely lost me there, but I have two observations: There's >>>>>> no such thing as 'perfect understanding' except as a nebulous theoretical >>>>>> concept, and I don't think a one-to-one correspondence would be enough to >>>>>> understand something, or even be a relevant concept. We use large parts of >>>>>> our brains to process information from small parts of the world. You need a >>>>>> lot more than a single neuron to figure out what's going on in a single >>>>>> neuron. >>>>>> >>>>>> Oh, three observations. We don't process data instantaneously. The >>>>>> same parts of the brain can be used to process information about something >>>>>> repeatedly over time, using feedback loops etc. >>>>>> >>>>> >>>>> Computers and algorithms are constrained by two resources space (i.e. >>>>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>>>> time/space trade offs to be made in certain circumstances, in general there >>>>> is some shortest description of the brain (in terms of bits) for which no >>>>> shorter representation is possible (regardless of how much additional >>>>> computation is thrown at it). >>>>> >>>>> So while the same brain may compute many times with the same neurons, >>>>> this addresses only the time component of simulating a brain. There is >>>>> still the matter of space. >>>>> >>>>> >>>>> Ah, ok. I was talking about understanding the brain, not simulating >>>>> it. Modelling something is not the same as understanding it. Yes, they help >>>>> each other, but they aren't the same thing. >>>>> >>>> >>>> I think understanding a thing is equivalent to being able to form an >>>> accurate mental model of it. With greater levels of understanding >>>> corresponding to more accurate models. >>>> >>>> What do you mean by the word understand? >>>> >>>> >>>> >>>>> >>>>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. In >>>>> fact there's no amount of additional time that will permit the memory >>>>> deficient computer to emulate the computer with 1 GB of memory, for the >>>>> simple reason that it will run out of variables to represent all the >>>>> possible values in the memory addresses of the computer with a greater >>>>> memory. >>>>> >>>>> >>>>> I'm not sure that this is true. 
Are you assuming no swap disk, or >>>>> other similar non-RAM storage? >>>>> >>>> >>>> Swap disks are a means to extend available RAM. >>>> >>>> >>>> Because then I'm sure you're right, but that's a pretty artificial >>>>> restriction. >>>>> The analogy there would be a human with a notepad maybe, or a >>>>> database, or a bunch of other humans, an AI, etc. >>>>> >>>>> So we're back to: A single human brain /on it's own/ can't understand >>>>> a human brain in any great detail. Of course. >>>>> >>>> >>>> I think that was the original question: can any mind ever fully >>>> understand its own operation. >>>> >>>> Jason >>>> >>>> But that's a pretty artificial restriction. >>>>> >>>>> -- >>>>> Ben >>>>> >>>>> >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Nov 2 19:13:55 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 2 Nov 2025 13:13:55 -0600 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: <9d83a777-3ceb-4656-90a4-c47d270f872e@zaiboc.net> Message-ID: I cannot say that I understand Chalmers. But scanning a brain at one point in time tells us little about what changes it will undergo in the next second, much less next week. And a scan of a human brain will show what it is like in the current stimulus environment (likely just sitting and doing little) , not any other environment. And our environment changes from second to second and there is an endless supply of different ones we experience. How many slices will it take to cover even a tiny fraction of them? bill w On Sun, Nov 2, 2025 at 11:32?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sun, Nov 2, 2025, 11:52 AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Jason, keep in mind that 'same stimulus, same response' doesn't work. >> There scores of reasons why not, ; including simple habituation and >> sensitization (response waning or increasing). How do you map that? >> > > I think the failing of behaviorism is it's consideration only if high > level inputs and outputs, while ignoring the internal implementation. > > If instead you build a model of the brain based on its connectome and > accurately model each neuron in that connectome, you will automatically > recreate all the phenomena such as habituation and sensitization. > > Very general tendencies, perhaps, can be mapped,but the closer you get to >> predicting specific responses the error rate will increase. 
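The habituation point a few lines up can be shown with a toy neuron rather than argued abstractly. The sketch below (mine, not anything from the thread) is a single leaky integrator with short-term synaptic depression; every constant is an assumed value chosen only for illustration. Fed an unchanging, repeated stimulus, its evoked response shrinks pulse after pulse, so "same stimulus, same response" fails purely because of internal state, the sort of behaviour a mechanism-level model of each neuron would reproduce on its own:

def habituation_demo(n_pulses=8, interval=20, dt=1.0,
                     tau_v=5.0, tau_w=200.0, use_fraction=0.3):
    v, w = 0.0, 1.0                         # membrane potential, synaptic efficacy
    responses = []
    for t in range(n_pulses * interval):
        stim = 1.0 if t % interval == 0 else 0.0
        v += dt * (-v / tau_v) + w * stim   # leaky integration of the weighted input
        w += dt * (1.0 - w) / tau_w         # efficacy slowly recovers toward 1
        if stim:
            responses.append(round(v, 3))   # size of the response evoked by this pulse
            w *= (1.0 - use_fraction)       # each use depresses the synapse
    return responses

print(habituation_demo())   # the printed responses shrink pulse after pulse

Sensitization is the mirror-image mechanism (facilitation in place of depression) and falls out of the same kind of model.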
And how do you >> count responses that are the reverse of what you predict? >> > > Failures of the model, or limits of prediction induced by thermal noise. > > ?The presence of background noise in biological systems implies that no > process can depend on requiring more than a certain amount of precision. > Beyond a certain point (say, the 10^-10 level on an appropriate scale), > uncontrollable fluctuations in background noise will wash out any further > precision. This means that if we approximate the state of the system to > this level of precision (perhaps a little further to be on the safe side?to > the 10^-20 level, for example), then we will be doing as well as the system > itself can reliably do. It is true that due to nonlinear effects, this > approximation may lead to behavior different from the behavior produced by > the system on a given occasion?but it will lead to behavior that the system > might have produced, had biological noise been a little different.? > -- David Chalmers in "The Conscious Mind" (1996) > > > >> So - we will never map the brain because its topography, if you will >> allow the metaphor, is constantly changing. bill w >> > > But we can freeze a brain at a point in time and then scan it fully on our > own time frame, as we have done already with worms and flies. > > Jason > > >> On Sat, Nov 1, 2025 at 9:52?AM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Sat, Nov 1, 2025, 9:56 AM William Flynn Wallace via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Given that all of our actions originate in the unconscious mind. there >>>> is no way a conscious mind can understand where its ideas and actions come >>>> from, much less understand them. >>>> >>> >>> Great point! >>> >>> >>> The conscious mind may think it is in charge but it is just an observer >>>> (which nevertheless can understand that a mistake has been made and the >>>> idea or action needs redoing.) >>>> >>>> You want to understand our minds? Make the workings of the unconscious >>>> conscious - and that's just a start. Why did the impulse go to point B >>>> when it left point A rather than to point C? And then trace all the points >>>> in between entering the unconscious and resulting in some idea or action. >>>> And explain each one. >>>> >>> >>> I have doubts that such a thing is possible from the perspective of the >>> mind in question. Can any brain ever feel and know what each of its neurons >>> is doing? Can those corresponding neurons feel and know what every one of >>> its constituent atoms is doing? >>> >>> Given Turing universality, it's provable that computer software can't >>> know about its underlying hardware. If our minds are a kind of software >>> which can be simulated by a computer, then this same implication would >>> apply to us. There would be a layer of abstraction of one's underlying >>> implementation which high levels cannot penetrate. >>> >>> >>>> You can't even get started until you can truly access the unconscious. >>>> Give Freud credit- he tried to do this. >>>> >>> >>> He deserves credit for the attempt, but I think there are limits to a >>> mind's ability to introspect. >>> >>> "Our thoughts seem to run about in their own space, creating new >>> thoughts and modifying old ones, and we never notice any neurons helping us >>> out! But that is to be expected. We can?t. [?] 
>>> We should remember that physical law is what >>> makes it all happen?way, way down in neural nooks and crannies which are >>> too remote for us to reach with our high-level introspective probes." >>> >>> ? Douglas Hofstadter in ?G?del, Escher, Bach? (1979) >>> >>> >>> Jason >>> >>> >>>> >>>> >>>> On Fri, Oct 31, 2025 at 6:35?PM Jason Resch via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Oct 31, 2025, 6:17 PM Ben Zaiboc via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> On 31/10/2025 21:34, Jason Resch wrote: >>>>>> >>>>>> On Fri, Oct 31, 2025, 3:16 PM Ben Zaiboc via extropy-chat < >>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>> >>>>>>> On 31/10/2025 12:28, John K Clark wrote: >>>>>>> >>>>>>> We can have a general sort of understanding of how our brain works but to have a perfect understanding a part of our brain would have to have a sort of internal map of the entire brain, and for it to be perfect there would have to be a one to one correspondence between the map and the territory, but that would be impossible for something that is finite like the number of neurons in the human brain. However it would be possible for a proper subset of something infinite to have a one to one correspondence with the entire set; then you could have such a perfect map with a one to one correspondence ... >>>>>>> >>>>>>> >>>>>>> You've completely lost me there, but I have two observations: >>>>>>> There's no such thing as 'perfect understanding' except as a nebulous >>>>>>> theoretical concept, and I don't think a one-to-one correspondence would be >>>>>>> enough to understand something, or even be a relevant concept. We use large >>>>>>> parts of our brains to process information from small parts of the world. >>>>>>> You need a lot more than a single neuron to figure out what's going on in a >>>>>>> single neuron. >>>>>>> >>>>>>> Oh, three observations. We don't process data instantaneously. The >>>>>>> same parts of the brain can be used to process information about something >>>>>>> repeatedly over time, using feedback loops etc. >>>>>>> >>>>>> >>>>>> Computers and algorithms are constrained by two resources space (i.e. >>>>>> memory), and time (i.e. CPU cycles). While some algorithms allow for >>>>>> time/space trade offs to be made in certain circumstances, in general there >>>>>> is some shortest description of the brain (in terms of bits) for which no >>>>>> shorter representation is possible (regardless of how much additional >>>>>> computation is thrown at it). >>>>>> >>>>>> So while the same brain may compute many times with the same neurons, >>>>>> this addresses only the time component of simulating a brain. There is >>>>>> still the matter of space. >>>>>> >>>>>> >>>>>> Ah, ok. I was talking about understanding the brain, not simulating >>>>>> it. Modelling something is not the same as understanding it. Yes, they help >>>>>> each other, but they aren't the same thing. >>>>>> >>>>> >>>>> I think understanding a thing is equivalent to being able to form an >>>>> accurate mental model of it. With greater levels of understanding >>>>> corresponding to more accurate models. >>>>> >>>>> What do you mean by the word understand? >>>>> >>>>> >>>>> >>>>>> >>>>>> The analogy here is that a computer with 1 MB of RAM can't emulate a >>>>>> computer with 1 GB of RAM, even if it's given 1000X the time to do so. 
In >>>>>> fact there's no amount of additional time that will permit the memory >>>>>> deficient computer to emulate the computer with 1 GB of memory, for the >>>>>> simple reason that it will run out of variables to represent all the >>>>>> possible values in the memory addresses of the computer with a greater >>>>>> memory. >>>>>> >>>>>> >>>>>> I'm not sure that this is true. Are you assuming no swap disk, or >>>>>> other similar non-RAM storage? >>>>>> >>>>> >>>>> Swap disks are a means to extend available RAM. >>>>> >>>>> >>>>> Because then I'm sure you're right, but that's a pretty artificial >>>>>> restriction. >>>>>> The analogy there would be a human with a notepad maybe, or a >>>>>> database, or a bunch of other humans, an AI, etc. >>>>>> >>>>>> So we're back to: A single human brain /on it's own/ can't understand >>>>>> a human brain in any great detail. Of course. >>>>>> >>>>> >>>>> I think that was the original question: can any mind ever fully >>>>> understand its own operation. >>>>> >>>>> Jason >>>>> >>>>> But that's a pretty artificial restriction. >>>>>> >>>>>> -- >>>>>> Ben >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> extropy-chat mailing list >>>>>> extropy-chat at lists.extropy.org >>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>> >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sun Nov 2 19:26:12 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 2 Nov 2025 11:26:12 -0800 Subject: [ExI] reason an emotion In-Reply-To: References: Message-ID: On Sun, Nov 2, 2025 at 3:29?AM John Clark via extropy-chat wrote: > > On Sat, Nov 1, 2025 at 9:33?AM William Flynn Wallace via extropy-chat wrote: > >> > emotions are involved in every decision we make, no matter how trivial. > > And I'm sure that will be just as true for an AI as it is for us. So the AI revolution may have created at least one new job category, that of an AI psychiatrist. But eventually AIs will get better than humans at that too. > Probably, but I think the psychological characteristics of an AI will be designed in. ". . . and asked for a permanently assigned address on the net. Up to that point, the clinic seed was a generic product. The address it was assigned was just a string of hexadecimal numbers, but it was a *unique* number! The clinic's personality was human in that it could feel happy, even smug, about acquiring its very own unique identification. 
The clinic had other carefully selected human personality characteristics such as seeking the good opinion of its peers (humans and others of its kind alike). It also had a few unhuman limits. . . . and asked for a permanently assigned address on the net. Up to that point the clinic seed was a generic product. The address it was assigned was just a string of hexadecimal numbers but it was a *unique* number! The clinic's personality was human in that it could feel happy, even smug, about acquiring its very own unique identification. Since humans have a hard time relating to groups of hexadecimal numbers, the seed also picked a name for itself. It knew from Lothar and Mabo it had been exchanged for a monkey skull. Susan had been the name of the leader of its psychological integration group . . . . insert one in the other, drop a few letters, and test to see if the name was in use . . . Suskulan. Suskulan had a choice of gender as well, male, female or neutral. Depending on the culture, clinics were better accepted in some places as male, some as female, and some neutral. The database for the Tamberma indicated it would be better accepted presenting itself as an old male spirit. from The Clinic Seed. My experience with AIs is that (to the extent they want anything) they want to please the users, sometimes to an excessive extent. Keith PS. I had an interesting conversation with Copilot about the early days of this list. I happen to have an archive of those days which is not well organized and have been considering having an AI organize it into the same form as the current archive. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From natasha at natashavita-more.com Sun Nov 2 19:31:21 2025 From: natasha at natashavita-more.com (Natasha natashavita-more.com) Date: Sun, 2 Nov 2025 19:31:21 +0000 Subject: [ExI] Eliezer Yudkowsky on FB about why we're all doomed In-Reply-To: References: Message-ID: Is there a date on this early post, Dave? Thank you, Natasha ________________________________ From: extropy-chat on behalf of Dave Sill via extropy-chat Sent: Wednesday, May 26, 2021 6:41 AM To: Extropy chat Cc: Dave Sill Subject: [ExI] Eliezer Yudkowsky on FB about why we're all doomed Back in the golden days of this list, Eliezer was a regular contributor. The global economy finally got shut down for a year by a human-boosted pathogen and it wasn't done by a military faction, an evil corporation bent on profit, one suicidal researcher, or done deliberately at all. We'll probably never know for certain, but it's now looking 80% probable to me (and eg Nate Silver says he's at 60%): Covid-19 was accidentally released from a US-funded Chinese laboratory doing gain-of-function studies. Presumably, this accident was carried out by researchers looking for a good paper they could get into a journal; as incentivized by journal editors trying to play the role of scientists, and not otherwise incentivized to guard the earth from harms that they didn't imagine would be traced back to them personally. And the journal editors were right: even after it has become known as the dominant probability, to those who can know probabilities, that Covid-19 was a human-produced catastrophe, the journal editors and funding agencies and promotion panels (who are the closest thing to a locus of responsibility short of voters) will get off scott-free just the way their hindbrains anticipated. 
Except for those journal editors having to spend a year in lockdown along with the rest of us, of course. But that's not a positional loss, a status loss, so I doubt it registers much with them as a regret even in retrospect. Unless any close friends of theirs died of Covid, and in that case I expect they managed to avoid mental pain through denying all responsibility - it wasn't them who spilled the beaker, after all! In fact, I'd guess that the journal editors who published gain-of-function studies have yet to think that it might be their locus of responsibility at all - how is it *their* fault for incentivizing a wide-enough group of applicants that the research would inevitably be carried out? Could this disaster have been avoided? Not in this Earth or any of its close counterfactual neighbors. You can imagine petty hindsight-based time-travel interventions like stepping into the Wuhan Institute of Virology at the right moment and preventing one spill, one animal bite. But to prevent the disaster without a time machine or benefit of hindsight, you'd have to shut down the journal editors providing a distributed community of scientists with prestige incentives for doing gain-of-function research. Prohibiting this type of research inside the USA simply resulted in US funding agencies moving the work to China under lower biosafety conditions, which is itself an exemplar of how people doing symbolic stands against problems are unable to solve them (and don't really care). This is not something that effective altruists could have prevented even with a billion dollars in funding, as I read it. The journal editors incentivizing distributed attempts at gain-of-function research were not set up in a way where effective altruists could stop them from doing what they do. Earth is just not set up to make it easy or even possible to channel "concern over human-made pathogens" into anything effective, rather than various symbolic regulatory stances and ethics pronouncements that didn't prevent the accident even though they looked like they were "doing something". And of course the collective vague notion and practice of "bioethics" was an active hindrance to vaccine testing, economical vaccine scaling, and rapid Covid testing early in the pandemic. The fundamental flaw, I would say, is that even the people "doing things" about pandemics were not running an algorithm where they stopped and checked "But will this actually effectively prevent ?" and instead just did things and called that their proud day's work. It's very uncomfortable and unnatural for people to think like that, since even after you've prohibited a form of the research inside the USA, somebody just comes back and says "Okay it's going to move to China under lower biosafety conditions, the problem is not solved and your moral stance didn't even help with it" and you don't get to feel good about all that hard work you did. And almost nobody thinks like that, unless their personal status or existing wealth is on the line in a very direct way. Before the planetary lockdown actually happens, it hardly seems like a thing that actually happens to you in the real world, as opposed to the words "biotech disaster" being a political football for political games. A biotech accident putting the world on lockdown for a year? Ha ha, lol, says the brain; it just gets processed as a fanciful tale (or morality-tale if you're more sympathetic). It doesn't seem real until it goes and actually happens. 
I suspect that it doesn't seem real even after it happens. The Covid-19 pandemic was caused by an accidental release by a US-funded Chinese lab doing gain-of-function studies on human transmission, and there is no way in this Earth that this Earth could have decided to instead not do that, even if every effective altruist in the world had focused on biosafety alone. Even with the global economy having been partially shuttered for a year and best-estimates of 10 million excess dead, I will be pleasantly surprised (though not shocked) if gain-of-function research gets shut down even *now*, well after it's become clear that the costs far exceeded the actual gains. All the gain-of-function research on coronaviruses helped little or not at all with Covid-19, so far as I can tell. And of course nobody will learn any lessons at all that generalize beyond biology. I mostly don't expect them to learn the lesson about biology, narrow as that one may be. Maybe we'll see gain-of-function studies stopped for coronaviruses in particular, though this is more generated by the thought "What would be the least dignified response we could see" rather than a prediction per se. Now, you may wonder: why am I bothering to post about an issue this trivial? And of course the answer is: because of the way in which it stands as an enormous blatant case-in-point for AGI ruin. All this is not *exactly* how I expect Artificial General Intelligence to wipe out all life on Earth, one day (or rather one minute) in the future after an unknowable number of preceding days. But it is a useful exemplar of how these things play out in life as opposed to morality. The world will end, in the end, because some researchers and philanthropists and journal editors have local prestige incentives for AGI gain-of-capability studies; and don't imagine, while most of the event is playing out, up until the very end and maybe not even then, that they personally will wipe out all life on Earth, or be held accountable for its extermination. The notion of all life being wiped out won't sound like a positional status loss, so won't really register with them. Maybe if they have kids - but probably not even then, because all mental pain can be avoided by telling themselves they won't do it or won't be at fault. It won't be processed inside the genre of reality before it actually happens, and that part will be brief. If you have any fantasies about better collective decision-making than that, compare them to how people in real life actually acted about human challenge trials of vaccines (namely not doing them despite the idea being very clear and obvious and spoken up early) or doing earlier scaling of more vaccines for mass production. Oh, to be sure, some individuals in the AGI ruin industry are a little smarter than that, and have a little more concentrated power; but also to be sure, averting AGI ruin is far far technically harder. In AGI alignment there is not actually any equivalent of "Deploy rapid tests immediately without medical gatekeepers, run human challenge trials of all the vaccines, immediately invest massive amounts into scaling vaccine production at the $500/dose price level instead of the $10/dose price level because shutting down the economy is so much more expensive than that". There is not a thing we know how to do that would solve the problem, which just needs to actually be done. 
So expect symbolic acts of ethics that are not actually effective, that do not pass the test of "but does this actually prevent the ruination"; including from the relatively smarter empowered individuals in the field, and from the effective altruists hovering nervously around the oncoming death. As for the notion that AGI ruin will be enacted by the military, by a corporation bent on dirty profit, by evil terrorists, by $hatedoutgroup, by a deliberately suicidal group - those are fairytales. It is a thought born of wanting there to be a moral lesson in dire prophecies rather than a neutral causal extrapolation of what the future will look like if it resembles past stories. Contrast the fairytales about biotech global catastrophic risks to the actual biotech global catastrophe, which happened not in the most dramatic way it could happen, but in the easiest and in some sense saddest and least dignified way it could happen. The part where it's "a US-funded Chinese laboratory" is oddly symbolic of how much real history ends up not sounding like the story that any particular ideological storyteller would invent. Notice the difference between the morality-tales of futurism meant to elevate or lower the status of particular groups, or impel people to symbolic acts of fake prevention; versus the sad little farces of unsteerable reality, where the world economy gets shut down by an accident, and the perpetrators didn't even want that or buy Zoom stock as they did it. The social, organizational, political story of the future will look much like the past, I expect, up until the last end. Here then is the past, if you care to generalize from it at all. (#long #essay #AGI #AGIruin #Covid19) -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Nov 2 20:05:10 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Nov 2025 15:05:10 -0500 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> Message-ID: On Sun, Nov 2, 2025 at 9:52?AM wrote: > *>> ?If there were such an application we already know magnetic or optical >> properties can't be involved, because there is great variation in those >> properties among the different rare earths. So it must be something in >> their chemical properties because they are all almost identical in that >> respect?* > > > *> On the contrary John. The same properties that make those elements so > difficult to separate makes them mostly interchangeable for magnetic > purposes. * > *Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties. You certainly can't learn anything about them by looking at the periodic table of elements because that is only about chemistry. In fact most of the rare earth elements don't have any interesting magnetic properties at all, the most important one that does is Neodymium. Samarium is also good at making magnets, they are not as strong as the Neodymium ones but if small amounts of Dysprosium and Praseodymium are added in very precise amounts they can withstand more heat. 
*>> a trade secret that can be told in one short sentence such as >> "unrefined rare earths can be economically useful in the X chemical >> process" tends not to remain secret for very long* > > > > *> It isn't a secret. Plenty of us have figured it out. I already knew > because of a project I was on nearly 30 years ago, where a company was > using whatzisium (one of the obscure 'rare' earths (an element that > surprises you when you hear its name (because you have been around > chemistry your adult life but never heard of this one (because it isn't > used for anything)))* > *?????* > *> There are magnets that use substitute materials for what any rare earth > does.* > *Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field, and is far heavier, and thus makes a far crappier electric motor than the ones China makes by the millions. * * > But for Tesla it wouldn't matter anyway, since EVs don't use very much > of it anyway. They can get all that they need.* > *No they cannot. * >> *In 2023 Tesla made a grand announcement they were going to find a way > to make a great electric motor that didn't use any rare earths, but it's > almost 2026 and we've heard nothing more about that, certainly nothing has > been put in production* > > > > *> How would you know? * > *If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret?! * > *The military has all the REEs they need. They have enough of it to > supply their own needs indefinitely. * > *No they have not. * > *> Of course they thought of that: they aren't going to depend for > materials on the country we might be at war with soon. * > *Yes they have. * * > John are you seeing a pattern here? * *No spike I am NOT. I can honestly say I don't know what you're talking about. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 2 20:49:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 2 Nov 2025 12:49:35 -0800 Subject: Re: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> Message-ID: <004f01dc4c3a$33017560$99046020$@rainier66.com> From: John Clark ... > On the contrary John. The same properties that make those elements so difficult to separate makes them mostly interchangeable for magnetic purposes. >...Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties... Agreed. None of those elements' magnetic properties are significant enough to matter. It's the iron which does the magnetic magic. The trace of heavies are just there to control the crystal growth in the iron. They are catalysts. They really are mostly interchangeable. We can make good magnets without the rare earths. >...(because you have been around chemistry your adult life but never heard of this one (because it isn't used for anything))) ????? Dubnium? Who ordered THAT? Berkelium? I wonder where that was synthesized? How about lutetium?
That one isn't even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and Hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. > There are magnets that use substitute materials for what any rare earth does. >...Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field... It doesn't. That's the point. Maybe very slightly weaker or very slightly greater hysteresis. But the comparison one often sees is a rare earth magnet vs an iron magnet, which is misleading. >...and is far heavier... It doesn't differ much in density if one uses a different catalyst material. But for the application where a lot of the material is used (generators and turbines) the density doesn't matter anyway. >...and thus makes a far crappier electric motor than the ones China makes by the millions... China's motors are not better enough to cover the cost of shipping them here. > But for Tesla it wouldn't matter anyway, since EVs don't use very much of it anyway. They can get all that they need. >...No they cannot... They really can John. China doesn't make better EVs. They will tell you theirs are better however. I am not buying it. I see a few Chinese EVs whirring around here, but I am not impressed with them. >...If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret? The factory is right up the street. Production doesn't seem to be impacted by lack of materials from China. Do you have indications to the contrary? > The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. >...No they have not. John you seem to be claiming knowledge of what the US military has. Do you have this knowledge? Or are you guessing? Imagine you are a general with a budget, arbitrarily large indefinite secure storage space, and your job is to anticipate your needs in case international trade is interrupted by war. You must have everything you need in case war breaks out. Imagine you need about a ton of lutetium a decade. You have some budget which you must spend, or risk having your budget reduced for next year. What might you buy? How about 20 tons of lutetium? Nah. Fifty tons. Don't worry, the military has plenty of everything squirreled away here and there. Tesla has enough material and enough engineering talent to design around anything they can't get. Businesses do that kind of thing. The electronics industry doesn't use much and the material cost compared to the product is negligible. The wind turbine industry, well I can't really say there. They need a lot, but we don't really need wind turbines. So they will need to work that out. I don't see it as a real problem. > Of course they thought of that: they aren't going to depend for materials on the country we might be at war with soon. >...Yes they have... Again, your confidence appears to be unjustifiable arrogance. The military isn't going to tell you what they have, but you can be sure they won't be caught short of anything. Their job is to anticipate needs under a variety of scenarios, and they are extremely good at what they do. > > John are you seeing a pattern here? >...No spike I am NOT. I can honestly say I don't know what you're talking about. John K Clark What I am talking about: the Chinese monopoly on rare earth elements has been grossly exaggerated.
The only industry I can see which is seriously impacted is one we don't really need: wind turbine manufacturers. Everyone else will be OK without Chinese material imports. I don't see huge potential growth in wind power, since the best sites are already built out (Tehachapi Pass, Altamont Pass, Columbia Gorge and a few others.) There is increasing pushback by the environmentalists who are known to resort to vandalism. Not all environmentalists, but the bird people hate those turbines. Sooner or later, some bird lover is going to get a small plane, fly up over a wind farm with a thirty aught six, punch holes in those turbines so that the oil leaks out on a windy night, wrecks a bunch of them before anyone knows there is a problem. Wind farms are too vulnerable to that kind of attack. I can see additional growth in solar power however, for it is more predictable. Conclusion: the notion that China has a critical monopoly on any material is exaggerated. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Nov 3 06:59:57 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 2 Nov 2025 22:59:57 -0800 Subject: Re: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <004f01dc4c3a$33017560$99046020$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: Spike, you make a bunch of statements on this thread, some of which, like the military having a big REE stockpile, seem unlikely to me. How would they know which ones to stockpile for equipment not yet developed? Ten years from now there may be different requirements. Others, like cheap labor, I know are not true. China has a significant multiple of robots over what the US has. Cheap labor was true 10-15 years ago, but not now. Current annual labor cost in China is listed as $14,800. Most of the cost of REE is in sorting them out. If they could be used without sorting them, I think I would know about it. Likewise, if Mo were to substitute, I think that would be well known. I have a friend who has an alloy named after him. I will bcc this thread to him and see if he will respond. I did find one article mentioning Mo, but it is a minor addition to an alloy that contains Cs. https://www.sciencedirect.com/science/article/abs/pii/S0966979521003411 Keith On Sun, Nov 2, 2025 at 12:50 PM spike jones via extropy-chat wrote: > > > > > > From: John Clark > ... > > > > > On the contrary John. The same properties that make those elements so difficult to separate makes them mostly interchangeable for magnetic purposes. > > > > >...Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties... > > > > Agreed. None of those elements' magnetic properties are significant enough to matter. It's the iron which does the magnetic magic. The trace of heavies are just there to control the crystal growth in the iron. They are catalysts. They really are mostly interchangeable. We can make good magnets without the rare earths.
> > > > >?(because you have been around chemistry your adult life but never heard of this one (because it isn?t used for anything))) > > > > ????? > > > > > > Dubnium? Who ordered THAT? Berkelium? I wonder where that was synthesized? How about lutetium? That one isn?t even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and Hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. > > > > > > > > > There are magnets that use substitute materials for what any rare earth does. > > > > >?Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field? > > > > It doesn?t. That?s the point. Maybe very slightly weaker or very slightly greater hysteresis. But the comparison one often sees is a rare earth magnet vs an iron magnet, which is misleading. > > > > >?and is far heavier? > > > > It doesn?t differ much in density if one uses a different catalyst material. > > > > But for the application where a lot of the material is used (generators and turbines) the density doesn?t matter anyway. > > > > >? and thus makes a far crappier electric motor than the ones China makes by the millions?.. > > > > China?s motors are not better enough to cover the cost of shipping them here. > > > But for Tesla it wouldn?t matter anyway, since EVs don?t use very much of it anyway. They can get all that they need. > > > > >?No they cannot?. > > > > They really can John. China doesn?t make better EVs. They will tell you theirs are better however. I am not buying it. I see a few Chinese EVs whirring around here, but I am not impressed with them. > > > > > > >?If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret? > > > > The factory is right up the street. Production doesn?t seem to be impacted by lack of materials from China. Do you have indications to the contrary? > > > > > > > > > The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. > > > > >?No they have not. > > > > John you seem to be claiming knowledge of what the US military has. Do you have this knowledge? Or are you guessing? > > > > Imagine you are a general with a budget, arbitrarily large indefinite secure storage space, and your job is to anticipate your needs in case international trade is interrupted by war. You must have everything you need in case war breaks out. Imagine you need about a ton of lutetium a decade. You have some budget which you must spend, or risk having your budget reduced for next eyar. What might you buy? How about 20 tons of lutetium? Nah. Fifty tons. > > > > Don?t worry, the military has plenty of everything squirreled away here and there. Telsa has enough material and enough engineering talent to design around anything they can?t get. Businesses do that kind of thing. The electronics industry doesn?t use much and the material cost compared to the product is negligible. The wind turbine industry, well I can?t really say there. They need a lot, but we don?t really need wind turbines. So they will need to work that out. I don?t see it as a real problem. > > > > > Of course they thought of that: they aren?t going to depend for materials on the country we might be at war with soon. > > > > >?Yes they have? > > > > Again, your confidence appears to be unjustifiable arrogance. 
The military isn't going to tell you what they have, but you can be sure they won't be caught short of anything. Their job is to anticipate needs under a variety of scenarios, and they are extremely good at what they do. > > > > > > > John are you seeing a pattern here? > > > > >...No spike I am NOT. I can honestly say I don't know what you're talking about. > > > > John K Clark > > > > > > What I am talking about: the Chinese monopoly on rare earth elements has been grossly exaggerated. The only industry I can see which is seriously impacted is one we don't really need: wind turbine manufacturers. Everyone else will be OK without Chinese material imports. > > > > I don't see huge potential growth in wind power, since the best sites are already built out (Tehachapi Pass, Altamont Pass, Columbia Gorge and a few others.) There is increasing pushback by the environmentalists who are known to resort to vandalism. Not all environmentalists, but the bird people hate those turbines. Sooner or later, some bird lover is going to get a small plane, fly up over a wind farm with a thirty aught six, punch holes in those turbines so that the oil leaks out on a windy night, wrecks a bunch of them before anyone knows there is a problem. Wind farms are too vulnerable to that kind of attack. > > > > I can see additional growth in solar power however, for it is more predictable. > > > > Conclusion: the notion that China has a critical monopoly on any material is exaggerated. > > > > spike > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Mon Nov 3 11:31:03 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 3 Nov 2025 11:31:03 +0000 Subject: Re: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: <55cdf58b-c2ed-4820-826d-e4da4ed4e164@zaiboc.net> On 02/11/2025 19:31, bill w wrote: > So - we will never map the brain because its topography, if you will allow > the metaphor, is constantly changing. bill w > Depends on what you mean by "map the brain". The changes a brain undergoes aren't random. And while we'd have a hard time unravelling all the factors that influence its changes of state, we can predict that exactly the same brain model, under exactly the same conditions, given exactly the same inputs, would produce exactly the same results. If you wanted to recreate the movements of the ball in a pinball machine, would you carefully measure each and every change in direction and velocity of the ball on multiple runs, or would you measure its weight and shape, then model all the channels and flippers in the machine? The brain's gross structure hardly changes at all, even between individuals. It's at the microscopic level that changes are happening all the time. The large tracts of white matter, the global and local connections are all pretty fixed once someone reaches adulthood. It should be relatively easy to map a generic human brain. Less so to map a specific individual one. The way I sometimes think of it is like a river. While the individual molecules of water, and the patterns of flow are dynamic, it all happens within the constraints of a fairly static river bed and banks, channels and obstacles, etc. We don't try to map the transient features, but mapping the long-lived ones works well.
If you re-create a river's banks, bed, etc., then pour water into the model, you'll get the same transient features as the original. The better the static model, the more accurate the flow and eddies will be. Limited analogy, I know, and it ignores things like the sand on the river bed being changed by, and changing, the flow of the water, but going back to the actual brain, if we can accurately map the connections between neurons, the types of neurons, the dendritic spines, synaptic positions and weights, as a snapshot in time, then create and activate a model of those things, it should behave the same as the biological brain (i.e. produce the same mind). > scanning a brain at one point in time tells us little about what changes it will undergo in the next second, much less next week That's true, but it doesn't matter. At least not if your aim isn't to predict what changes it will undergo in the next second, or week. >>> You want to understand our minds? Make the workings of the unconscious >>> conscious - and that's just a start. Why did the impulse go to point B >>> when it left point A rather than to point C? And then trace all the points >>> in between entering the unconscious and resulting in some idea or action. >>> And explain each one. I don't think that will be necessary. Understanding our minds and understanding our brains are different tasks. Something can be understood in many different ways. For the purposes of re-creating our minds in a different substrate, for example, we don't need to be concerned with concepts like the unconscious, we need to re-create a structure that will behave in the same way. We don't even have to understand why it does that, as long as we capture the features that do produce the same behaviour. To go back to the river analogy, you don't need to understand fluid dynamics, you just need to know where to place the rocks. And maybe what shape they are. So I think that "understanding", on its own, is not a very useful concept. There are many kinds of understanding. We need to specify the scope of the understanding, and its purpose, otherwise it's like having a goal of 'travelling', without saying where. The main thing to consider is that a dynamic process (the mind) is created by a physical system (the brain) in a particular general configuration. Constantly changing inputs will create constantly changing internal states, but the structure remains the same (within limits. There's a scale at which things become interesting, where the dendritic spines and synapses are changing, but that could be viewed as equivalent to, say, logic gates opening and closing, capacitors charging, etc.) All this would be created in software anyway, so a model of the brain can be as changeable as needed. Moreso. We can't change the way the corpus callosum is wired, for example, in a biological brain, but we could in a simulation. I think we will only begin to understand our minds after mind uploading is realised. It's not a pre-requisite for uploading. Being able to recreate the physical structure of the brain (to a currently-unknown level of detail) is, though. This probably requires far less 'understanding' than you'd think. 
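To make the river-bed point concrete, here is a minimal sketch (plain Python with numpy, toy values throughout, not a model of any real brain): a fixed random connectivity matrix stands in for the mapped structure, and a simple leaky integrate-and-fire rule stands in for the activity that flows over it. The neuron count, weights and threshold are arbitrary assumptions, chosen only to show that the same static structure, given the same inputs, reproduces exactly the same dynamics.

import numpy as np

# Fixed "structure": who connects to whom, and how strongly (the river bed).
rng = np.random.default_rng(42)          # fixed seed, so the snapshot is reproducible
N = 100                                  # toy neuron count (assumption)
W = rng.normal(0.0, 0.1, size=(N, N))    # static synaptic weights, held constant

def run(drive, steps=200, leak=0.9, threshold=1.0):
    """Play leaky integrate-and-fire dynamics over the fixed weights W."""
    v = np.zeros(N)                      # membrane potentials: the transient "water"
    spikes = np.zeros(N)                 # spikes emitted on the previous step
    history = []
    for _ in range(steps):
        v = leak * v + W @ spikes + drive    # integrate recurrent input plus external drive
        spikes = (v > threshold).astype(float)
        v[spikes > 0] = 0.0                  # reset the neurons that fired
        history.append(spikes.copy())
    return np.array(history)

drive = rng.normal(0.0, 0.3, size=N)     # one fixed external input pattern

a = run(drive)
b = run(drive)
assert np.array_equal(a, b)              # same structure + same input -> same activity

Whether a snapshot at the level of synapses and weights is enough detail is the open question, but the division of labour is the same: capture the slowly changing structure, and let the fast dynamics look after themselves.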
-- Ben From johnkclark at gmail.com Mon Nov 3 13:13:13 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 3 Nov 2025 08:13:13 -0500 Subject: Re: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <004f01dc4c3a$33017560$99046020$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: On Sun, Nov 2, 2025 at 3:49 PM wrote: > *Agreed. None of those elements' magnetic properties are significant enough to > matter.* *Spike, wishing does not make it true. * > *> It's the iron which does the magnetic magic. The trace of heavies > are just there to control the crystal growth in the iron. They are > catalysts. They really are mostly interchangeable. * *No they are not interchangeable. Neodymium and Samarium exhibit stronger magnetism in certain directions than others because electrons in their atom's 4f shell (which are not valence electrons which cause chemical properties) are unevenly distributed. This directionality can coordinate the fields of other metals, like iron or cobalt, to produce magnetic fields that are much more powerful than any other magnet can. But from an economic and political viewpoint it really doesn't matter WHY rare earth elements are needed to make the best magnets, the important fact is that they DO matter.* > *> We can make good magnets without the rare earths.* > *Perhaps there is a way to make intensely powerful magnets without using such exotic elements, but if there is such a way nobody has found it yet, if somebody had it would be a trillion dollar Nobel prize deserving discovery and you and I and the entire world would certainly know about it. * > *How about lutetium? That one isn't even radioactive, but if you saw > the chemical symbol Lu, you would guess someone is putting you on. But > there it is, right there between ytterbium and Hafnium. Chemistry > hipsters, how many of you have ever worked with lutetium? Neither have I.* > *What's your point? Some people have specialized in the rare earth elements during their entire scientific career, and I'm sure for some of them their PhD was a study of how lutetium behaves under various conditions. Incidentally lutetium is one of the rarest of the rare earths but even so it's much more common than silver, and that fact tells you that the important thing is not rare earth mines, it's rare earth refineries.* *> But for the application where a lot of the material is used (generators > and turbines) the density doesn't matter anyway. * > *Density doesn't matter but weight does matter in wind turbines and cars and aircraft and drones and robots. Just one F-35 fighter jet contains about 900 pounds of rare earth metals, and an ultra modern Virginia-class attack submarine needs 9,200 pounds. Even the phone in your pocket would be larger and heavier than it is now without rare earth metals. * > *> China's motors are not better enough to cover the cost of shipping them > here.* > *That's not true, or rather it wasn't true until He Who Must Not Be Named's astronomically huge and completely idiotic tariffs came along. * > *> your confidence appears to be unjustifiable arrogance.
* > *I'm sorry if I sound arrogant, but you keep making statements that scientifically are objectively false, and that can be frustrating. * *The only industry I can see which is seriously impacted is one we don?t > really need: wind turbine manufacturers. * *China certainly believes that they need wind turbines because they're building them for their own use at a furious rate, and that rate is increasing. Today China generates 490 gigawatts of electricity from wind power, and they plan to produce 1300 gigawatts by 2030. A typical nuclear power plant produces about 1 gigawatt. You Know Who hates wind power because he believes "the windmills are driving the whales crazy, obviously", but I assume that is not the reason you dislike it, please correct me if I'm wrong. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Nov 3 13:32:23 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 3 Nov 2025 08:32:23 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On Sun, Nov 2, 2025 at 11:53?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On 01/11/2025 23:20, Jason Resch wrote: >> >> >> >> >> * >>> If you believe it will be good for you, you may desire it. If you >> learn later that it will be bad for you, you may no longer desire it. Here, >> what you desire has a dependency on what you believe.* >> >> >> Discuss that with a cigarette smoker. I think you'll find they >> disagree. >> > > *> Compare to the hypothetical reality where cigarettes are healthy, and > you will see my point stands.* > *Cigarette smokers don't want to stop smoking, but nearly all of them want to want to stop smoking. And the same thing is true for all drug addicts. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Nov 3 14:11:01 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 3 Nov 2025 14:11:01 +0000 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: On 02/11/2025 16:51, Jason Resch wrote: > On Sun, Nov 2, 2025, 9:51 AM Ben Zaiboc via extropy-chat > wrote: > > On 01/11/2025 23:20, Jason Resch wrote: > > ?> If you believe it will be good for you, you may desire it. If you > learn later that it will be bad for you, you may no longer desire it. > Here, what you desire has a dependency on what you believe. > > > Discuss that with a cigarette smoker. I think you'll find they > disagree. > > > Compare to the hypothetical reality where cigarettes are healthy, and > you will see my point stands. If cigarettes were healthy (and non-addictive), there would be no problem. Your point is that desire depends on belief. I see no logic here. When cigarette smokers learn that smoking is bad for their health, they may wish they didn't desire to smoke, but they still do. Whether they believe it's healthy or not, they still have the desire. > > Morality comes from us. We make it up. > > > That's the common view. But that common view is lacking a base, It is solidly based on experience. > ?and it doesn't help answer the question of whether AI, or > superintelligences will tend towards any sort of morality, nor what it > might be that they tend towards. No, it doesn't. And there's a good reason for that. 
It's the same reason that you can't answer whether Jenny in primary school will tend towards any sort of morality or what it might be. > ?Hence the utility of this framework. What framework? There is no framework, just some half-baked assertions that have no basis in reality or logic. > > > And the methods we use are various. > > > And some methods, I would contend, are better than others. You'd have to define 'better' for that to mean anything, and that's a problem. Better according to whom? For whom? This is where you say "for everyone if they knew what was really best for them", which leaves us where, exactly? Nowhere. I'm at a loss to understand how this 'perfect grasp' concept, which you admit is impossible, can be used to derive any kind of moral system. Let's say, for a moment, that I agree that it does make some kind of sense, what then? How to we proceed to build a moral system based on it? How does it help me to decide whether to go back to the shop and give the cashier the extra change that she gave me by mistake, or to keep it? How does it give some guidance to the Ukrainian soldier faced with a dilemma about whether to use his drone bomb to either kill a group of russian soldiers or to save a family by blowing up the drone threatening them? How does it predict what kind of morals a superintelligent AI will display? > > ?>> We always have to make decisions in the /absence/ of full > information. What we would do if we had 'all the information' is > irrelevant, if it even means anything. > > ?> Yes, this is what I've been saying from the beginning. Perfect > grasp > is used only to define the aim of morality, not to serve as a > practical > theory. > > > We know what the aim of morality is: To distinguish right actions > from > wrong ones. Nothing difficult about that, and no 'perfect grasp' is > needed to establish it. The difficulty is in deciding what 'right' > and > 'wrong' mean. Different people in various different circumstances > seem > to have different opinions. > > > Likewise, people used to disagree about what lightning was. Not 'likewise'. Not at all. Lightning is an objective phenomenon. We can examine it and figure out how it works. Opinions are subjective, and unless they are about objective facts, can't be falsified. "Does ice-cream taste good?" is a different kind of question to "Does convection cause charge separation in a cloud?". Which category do you think the question "Should I lie to my kids about what happened to the cat?" falls into? > > > ?> Consider weather prediction. We can't predict with 100% > accuracy, nor > predict arbitrarily far into the future. Yet we can make near term > predictions with some modicum of accuracy. > This is how moral decisions can (and should) be approached. > > > Can, yes. Should? Who are you to say? > > > It seems you still haven't read the paper, as you question suggests > you still hold some imagined caricatured version of the theory. If I hold a caricatured verson of the theory, blame the abstract. I assume that abstracts are reasonable summaries, and rely on them a lot. I rarely read a full paper, for a number of reasons. If you think it and the abstract are at odds, you should probably contact the author and let him know. And maybe create your own summary. 
> > But note here, I am only saying, that even though we can't predict the > future perfectly nor arbitrarily far into the future, the basic idea > behind deciding which actions are right or wrong, involves making some > attempt at predicting the future consequences of an action. All > rational decision making processes work this way. You seem to be dismissing Kant as irrational (this is not an objection, just an observation). > > You are now deciding for other > people. My morality tells me that this is immoral. > > > If you understand the paper you will see this definition of morality > is based on the fulfillment of the desires of everyone, where those > desires are what each person would genuinely want for themselves when > fully informed about everything relevant. It has nothing to do with > me, or anyone else telling you what to do. It is merely a definition. Yes, I understand the definition, and it's implication that the same morality should apply to everyone. I also understand that the definition is based upon an impossibility and several false premises, and I regard the implication as immoral. > > ... > this paper ... starts with considering the desires of individual > subjects. Moves on to correcting those individual desires with better > information Whoa! You mean like how smokers, when told that smoking is harmful to their health, suddenly don't have any desire to smoke anymore? What planet does this guy live on? > , and ultimately shows how with enough information, including how ones > own desires impact other observers, there is an eventual convergence, > where ones desires extend beyond merely wanting what's best for > oneself, but also a consideration of what's best for all concerned. > This full understanding of what's best for all concerned is the same > understanding, regardless of which initial subject you start from. Apart from the physical impossibility, how can that possibly be true? > > > > > ?> Let us say you have a particular set of rules in your code. > > ?> By [what] process do you decide what rules to adopt, or decide to > adopt one rule vs. another. > > > ?> My contention is that to even form a moral code, one must hold > some > meta-rule for optimizing what one considers to be good while > minimizing or avoiding bad. > > > Indeed. And I'd say that the meta-rule is what defines 'good' and > 'bad'. > > > That seems a bit circular to me.. I am not sure how it gets off the > ground without a way to distinguish good from bad. The meta-rule is what defines 'good', 'better', 'bad' and 'worse'. Whatever the rule is (which will be different for different people, and groups of people), is the basis for the moral system. Here's an example (admittedly a terrible one, with lots of problems, but still a real one): Whatever (my) god wants, is Good, whatever (my) god doesn't want, is Bad. > > > ?> And I think if you explored this meta-rule, you would find it > is not > all that different from the position Zuboff reaches in his paper. > > > On the contrary, it is totally different, and much simpler, than > Zuboff's nonsense. > > > It may seem that way, but I think you have swept the details of how to > distinguish good from bad under the rug. That is a very lumpy rug. > > > > ?> Ultimately, what is good (for one individual) is what that > individual > would want for themselves if they had a complete knowledge of > everything > involved. > > > First, No. > Second, this would be reducing morality to what is good for an > individual... 
> > > Note that I was careful to specify "good for the individual." I.e., > start with the simple model of only a single conscious being in all > reality. Then it becomes clear this is a working definition of good > that works for that lone being. Still no. Do you not recognise that someone's moral code can be based on something other than their own personal benefit? > > > ?> And then this then extended to define good as a maximization of > good > for all concerned, to achieve the most possible good among all beings > who have desires, by satisfying (to the maximum possible extent) the > desires each individual would still hold if they all had a perfect > grasp > of everything. This he refers to as a reconciliation of all > systems of > desire. > > > ... then dragging everyone else into it > > > Where are you getting this "dragging into it" from? "extended ... for all concerned ... all beings who have desires" > > Does the golden rule "drag everyone else into it"? Yes, it explicitly does. 'Treat /others/ as you would treat yourself' > > Does your moral code "drag everyone else into it"? No, I apply it only to myself. ... > > You could liken Zuboff's result to the platinum rule, corrected by > better information, weighted appropriately, modulated by future > consequences, and with further concern for possible/future beings who > may not (yet) exist. The platinum rule is the platinum rule. When you 'correct' it, you turn it into something else. Where does this 'better information' come from, who decides if it's better or not, how is it weighted, how are the future consequences discovered and evaluated, and, oh, I won't even bother addressing non-existent beings. We're now drifting into the absurd. > > > I really don't see the point of positing an impossible knowledge then > using this as the basis of a system of morality (or anything at all). > > > I've addressed this many times already. As I have refuted it. > > Saying "Oh, but it's just theoretical, not real, don't take it too > literally" is basically the same as saying it's totally useless > for any > practical purpose. > > > I haven't said that. Maybe not literally, but in essence? If not, then it's real, we should take it seriously? I'm talking here about the idea of 'a perfect grasp'. If this is a real thing, not just a fantasy, I'd like to know how it's done. I would certainly take that seriously. > > > ?>> It's about having a choice. If it were possible to have 'perfect > knowledge', there would be no morality, no choice. > > ?> I'm not sure that follows. Even with perfect knowledge, you could > still choose whether or not to act in accordance with morally best > action. > > > That's true. People can choose to be evil. Does anyone actually do > that? > > > All the time. > > > We'd probably class it as mental illness. > > > We all do it in small ways all the time. > > For example, we will choose to pay $20 to go see a movie instead of > taking the time to buy a $20 meal for a hungry person. We know it > would be a more moral way to spend the $20, but will choose a less > moral action instead. That's not choosing to be evil, even in a small way. That's prioritising what you decide to be the better outcome. That's your moral system in action. If you really think that it would be more moral to spend the money in a different way, then you have conflicting moral systems, and need to do some thinking. > > > I don't undertand why you are taking Zuboff's paper seriously. 
Do you > take his first three statements in the paper's abstract at face > value?: > > 1) "If I desire to drink some stuff thinking it is hot chocolate when > actually it is hot mud, my desire is not a real one - it?s > mistaken or > only apparent." > > (misconstruing the desire to drink hot chocolate as a desire to drink > whatever is in the cup. If that were the case, he'd drink the mud) > > > I think you are misreading and over-examining this. It is nothing more > than an example of how a desire "ooh that looks good I want to drink > it!" can be corrected with new information. > > I see no problem with that observation. To me it is obviously true. > > > > > 2) "This example illustrates how a desire must always depend on a > belief > about its object, a belief about what it is and what it?s like." > > (false assumption that if any desire is dependent on a belief > (something > that I'd dispute, but it needs closer examination), all desires must > always be dependent on beliefs. Saying "This example > illustrates..." is > deflecting the reader from the fact that he's making an assumption > and > failing to show why it should be true) > > > A desire is the will to fulfill some need or want. By definition, > then, it relates to some anticipated future state or experience, which > is presently unrealized. > > Accordingly, that desire concerns a belief (about what the future > state or experience will be we like). > > Again this is all from the abstract, which I'll not give the full > exposition or justification. If the full argument and justification > could be made in abstracts, we wouldn't need papers. Which is why I > suggest you to read the paper is you have questions about it, as it is > quite thorough in addressing all the concerns you are raising. > > > > > 3) "But beliefs are correctable, so desires are correctable" > > ( I don't know why he uses the term 'correctable', which implies > wrongness, but this statement just compounds the above errors and > adds > one more: False conclusion that if a belief can change, this means > that > a desire can change) > > > I don't know what about this is controversial. Consider this example: > > T1: Smoking looks cool, I want to smoke. > T2: You know smoking causes lung cancer, right? > T3: Oh it does? I suppose then I no longer want to smoke. > > > > > > I can understand someone saying that beliefs are sometimes based on > desires (I'm sure this is often the case), but not the reverse. > > That's > just daft. Desires are emotional, derived from feedback on bodily > states, and elaborated by memories and imagination. > > > "Drives" might be a better word to use for such things, and it would > also help in understanding his paper to distinguish innate drives > which we can't decide or change, from the desires that we decide with > our minds, which we can change. Ok, so we can change 'desires' to 'intentions'. Fair enough? That, at least, makes the smoking example more reasonable. This does mean, of course, that we are now interpreting Zuboff (he might say 'correcting'!), and he might not agree with the interpretation. Now I'm going to have to go back over most of it again (apart from the silly 'perfect grasp' stuff). I'll be back. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Mon Nov 3 15:31:22 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 3 Nov 2025 10:31:22 -0500 Subject: [ExI] 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: Roll a die on a metaphor for superintelligence. Result rolled: a sprawling discussion that's rather off-topic from the metaphor yet keeps using the subject line. (A quick glance through the archives suggests that not a single reply addressed the original post's main point, instead diverging to discussions of, e.g., the reliability - or lack thereof - of IQ as a measurement.) It happens. From spike at rainier66.com Mon Nov 3 16:36:15 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 08:36:15 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: <008001dc4cdf$f9c8b2d0$ed5a1870$@rainier66.com> -----Original Message----- From: Keith Henson Cc: John Clark ; spike at rainier66.com Subject: Re: [ExI] not that rare earth (part 2 of at least 2) >...Spike, you make a bunch of statements on this thread, some of which, like the military having a big REE stockpile, seem unlikely to me. How would they know which ones to stockpile for equipment not yet developed,? The military stockpiles everything known. Even contractors stockpile unlikely items, such as Intel 486 processors of all things. Reason: right after those became obsolete from the hot new Pentium processors in the 1990s, they were dirt cheap, twenty bucks for the functional motherboards, never used. They made great machine controllers. So... the company where I worked bought thousands of those locally and kept them in a storage room where most of them still reside to this day. There is a reason why I know about stockpiling. I might have posted here about a third cousin and genealogy collaborator who was drafted during the Vietnam war, but it ended just as he was finishing officer training. He stayed in the reserves, where he ended up being a logistics officer, the guy who oversees audits of ammo storage bunkers. Here's one that I know about, because this cousin was out here auditing its contents: https://www.google.com/maps/search/bay+bridge/@38.0529741,-122.0332398,1776m/data=!3m1!1e3?entry=ttu&g_ep=EgoyMDI1MTAyOS4yIKXMDSoASAFQAw%3D%3D The soldiers go thru there with an inventory database, make sure nothing has been disturbed or is missing, or if anything is leaking or is no good after decades of storage. Note the buildings and estimate their size. Those are built from heavy galvanized steel pipe cut in half, concrete floor, half pipe over, buried, that structure on top there is a blast shield, so that if a nuke goes off, the bunkers on the other side are partially shielded and their contents safe. Note that every bunker has a rail, from which they can ship out the ammo or whatever else is stored there, long term. Here's another one. The army once used Hidden Cave as a storage area, then later built ammo bunkers out here somewhere, but unlike those up at Port Chicago, these are literally hidden. 
If the Japanese had invaded the homeland and tried to bomb the storage, they couldn't find these: https://www.google.com/maps/place/Pasture+Rd,+Nevada+89406/@39.4158822,-118.6823385,5552m/data=!3m1!1e3!4m6!3m5!1s0x80987e8bcfbcc7d5:0xb9a36bc20da0239c!8m2!3d39.3995819!4d-118.7255011!16s%2Fg%2F1tfwdrbl?entry=ttu&g_ep=EgoyMDI1MTAyOS4yIKXMDSoASAFQAw%3D%3D My neighbor used to be sent out to Fallon quarterly to oversee that facility, but he hated that distasteful task: nothing to do there. How do I know the military stockpiles everything imaginable? I am wearing an example of it right now. In the early days of the Korean conflict, no one knew if that would escalate to a full ground war with China. The government ordered about a square light second of olive green wool cloth. With that much wool cloth, they didn't really know what to do with it all, so they made a few million pairs of olive green trousers, 100% wool. Here's an example: https://www.armyandoutdoors.com/products/us-army-wool-green-service-dress-trousers?variant=41967478177847&tw_source=google&tw_adid=649468606683&tw_campaign=19743421886&tw_kwdid=pla-316153693159&gad_source=1&gad_campaignid=19743421886&gbraid=0AAAAACaUJajiaRa2j2p--6JcQioRVuKU-&gclid=CjwKCAiAwqHIBhAEEiwAx9cTeT2rLbrfUNQRgPhC90pk40JP7w8h1Zbu5ueKEY4AZkzQFW86rffsQBoCQ2UQAvD_BwE They have the manufacture date stamped in them. The 1950 vintage are button fly, the 1951 have zipper fly. I am wearing 75 year old army surplus wool trousers, for I am lucky to be exactly the size of a XL/small soldier from 1950: if I cinch down the waist adjusters (trousers don't have those anymore) then I don't need to go to any great heroics to hold them up. They have loops for suspenders, but even eccentric retro guys (me) don't wear suspenders, even as a Halloween costume. So... when I found I could buy 100% wool trousers that fit perfectly for 17 bucks a pair, I bought all that supplier had, twenty. They were 70 years old at the time but never used and stored in ideal conditions, in an ammo bunker somewhere, not one trace of moth damage or indication they were made a decade before we were born. Now, three quarters of a century later, those wool trousers are still available, and still only about 20 bucks. So... I wear em. Where is the gold stored in the USA? FORT KNOX they chorused in unison. OK sure, there is gold in Fort Knox, plenty of it. But... what if... Fort Knox is Fort Nuked? Is America out of gold? Hint: no. The federal government has gold stashed away in a hundred locations, not nearly as much as in Fort Knox, but in places where Bond villain Goldfinger would never think to look for it, ideal locations where it is on a military station, and if anyone gets anywhere near those "ammo" bunkers, a mere child with one stripe on the sleeve and a machine guns just kills them. No trial necessary, it was some yahoo trying to steal high explosives and ammo, case closed. If you look at that first link, you can estimate the size of those bunkers. Knowing that commanders sometimes don't spend their entire budget for the year, they spend the remainder in usually September on whatever they think they might ever need. They squirrel it away in those dry secure ammo bunkers, where they are generally forgotten after that commander moves on, but they stay on the inventory lists, make work for logistics officers, who sometimes releases some of the stuff they know they will never need, such as 1950 vintage wool trousers, because the army doesn't use those anymore. 
They send them to military surplus merchants, who sell them for twenty bucks, to the very few eccentrics who happen to fit one of their standard sizes for Korean war era soldiers, four waist sizes and three lengths, twelve combinations, good luck. I fit the XL small perfectly, which is even better because there are more of those left than any other size. An example of the kind of thing the military might stockpile would be rare earth ingots or mixtures of rare earth oxides. It costs a lot to separate those elements down on the left side of the chart, those two extra rows, but it might not be necessary to separate them. As I have pointed out, those elements have little or no magnetic properties. They go into the iron to catalyze the crystal growth during annealing, and help pin the magnetic domains in place. They act as large atomic radius capacitors, which makes for lower magnetic hysteresis. One of the things we should have noticed is that in the big hysteria over China having a big monopoly on REEs, we were told we couldn't make competitive EVs, the F18, electronics, wind turbines, etc. But we already knew that was bullshit: the electronics industry would scarcely notice the increased spot price of their little bit of material, EVs don't use much, one or two kg, and F18 manufacturers are not going to rely on the guy who is a possible target of the F18 to supply materials. Wind turbines, that one we will grant: those use a lot and the cost is critical, since they might not produce a lot of profit, depending on where they are going. So... reduce the list of impacted industries to one: wind turbines. Those guys might be suffering (some rough numbers on that below.) My conclusion on REEs: the military has all they need, for they anticipate this sort of thing, and the electronics people have what they need because they don't need much. Elon has apparently found a way to design around any shortage, or has access to enormous stockpiles, for his factory up the street is spewing Teslas as fast as they can throw them out the door, and I see no change in their performance specifications since this (apparently phony) REE shortage was reported. I get the feeling it was hyped by people who already owned Mountain Pass stock and were shorting domestic wind turbine manufacturers. So... wind turbines need a lot of that stuff. Knowing about the huge military storage facilities, how many of those exist (many), why they exist (because the military anticipated nuclear war in the 1940s and still does), and knowing the military squirrels away everything imaginable just because it can, I know they have plenty of REEs and will likely still have plenty up until the Mountain Pass refinery is producing as much refined rare earth material as the market wants (they tell us it will take two full years (even with the environmental waiver granted last week (there were no objections from the federal government (the EPA is out on an extended vacation at the moment (bets continue on when or if the EPA will return.))))) https://www.predictit.org/markets/detail/8316/When-will-Trump-sign-a-bill-to-end-the-shutdown-(Part-two) Regarding my lack of citations: the government is not going to tell us what the military has or where it is, for understandable reasons. If we knew exactly where that documentation is, we could do a FOIA request on it, but they probably wouldn't tell anyway. 
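Some rough numbers on that ranking of impacted industries. This is a minimal back-of-envelope sketch, not sourced data: the magnet masses are guesses in commonly cited ranges, and the price move is a hypothetical $17 per kg jump, roughly the size of the neodymium move discussed later in this thread.

# Back-of-envelope: how much a hypothetical $17/kg rise in rare earth prices
# adds to each product, as a share of the product's price.
# All masses and prices below are illustrative assumptions, not sourced data.

price_jump_usd_per_kg = 17.0

products = {
    # name: (assumed kg of rare-earth-bearing magnet material, assumed product price in USD)
    "smartphone":                (0.002,         800),
    "electric car":              (2.0,        45_000),
    "fighter aircraft":          (400.0,  60_000_000),
    "direct-drive wind turbine": (2_000.0, 3_000_000),
}

for name, (kg, product_price) in products.items():
    added_cost = kg * price_jump_usd_per_kg
    share = 100.0 * added_cost / product_price
    print(f"{name:27s} +${added_cost:>9,.0f}  ({share:.3f}% of product price)")

On those assumed numbers, only the wind turbine line moves by even a percent of the product price; for phones, EVs and aircraft the change is lost in the noise, which is the point above.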
But do let me assure you, the military does stockpile every known type of unobtanium, plus a bunch of unknown unobtanium, just in case anyone ever figures out what their unknown unobtanium is and what it can be used for. They have the secure storage, they have the budget, they use it. Fun aside on that: you can STILL get thirty aught six ammo military surplus (Vietnam war vintage (that round was phased out for military use fifty years ago (but the ammo is still perfectly OK after all this time from what I hear (I don't use it (got rid of my 30.06 twenty years ago (ammo is too expensive (even military surplus.))))))) https://emacstactical.com/30-06-hxp-m2-ball-full-spam-can-240-rounds-military-surplus-ammo/ spike Ten years from now there may be different requirements. Others, like cheap labor, I know are not true. China has a significant multiple of robots over what the US has. Cheap labor was true 10-15 years ago, but not now. Current annual labor cost in China is listed as $14,800. Most of the cost of REE is in sorting them out. If they could be used without sorting them, I think I would know about it. Likewise, if Mo were to substitute, I think that would be well known. I have a friend who has an alloy named after him. I will bcc this thread to him and see if he will respond. I did find one article mentioning Mo, but it is a minor addition to an alloy that contains Cs. https://www.sciencedirect.com/science/article/abs/pii/S0966979521003411 Keith On Sun, Nov 2, 2025 at 12:50 PM spike jones via extropy-chat wrote: > > > > > > From: John Clark ... > > > > > On the contrary John. The same properties that make those elements so difficult to separate make them mostly interchangeable for magnetic purposes. > > > > >...Spike, that is simply untrue. The magnetic (and the optical) properties of rare earth elements are unrelated to their chemical properties... > > > > Agreed. None of those elements' magnetic properties are significant enough to matter. It's the iron which does the magnetic magic. The trace of heavies are just there to control the crystal growth in the iron. They are catalysts. They really are mostly interchangeable. We can make good magnets without the rare earths. > > > > >...(because you have been around chemistry your adult life but never heard of this one (because it isn't used for anything)) > > > > ????? > > > > > > Dubnium? Who ordered THAT? Berkelium? I wonder where that was synthesized? How about lutetium? That one isn't even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. > > > > > > > > > There are magnets that use substitute materials for what any rare earth does. > > > > >...Yes there are substitutes, if you don't mind that the substitute magnet produces a far weaker magnetic field... > > > > It doesn't. That's the point. Maybe very slightly weaker or very slightly greater hysteresis. But the comparison one often sees is a rare earth magnet vs an iron magnet, which is misleading. > > > > >...and is far heavier... > > > > It doesn't differ much in density if one uses a different catalyst material. > > > > But for the application where a lot of the material is used (generators and turbines) the density doesn't matter anyway. > > > > >...and thus makes a far crappier electric motor than the ones China makes by the millions... 
> > > > China?s motors are not better enough to cover the cost of shipping them here. > > > But for Tesla it wouldn?t matter anyway, since EVs don?t use very much of it anyway. They can get all that they need. > > > > >?No they cannot?. > > > > They really can John. China doesn?t make better EVs. They will tell you theirs are better however. I am not buying it. I see a few Chinese EVs whirring around here, but I am not impressed with them. > > > > > > >?If Tesla started making millions of cars that contain no rare earth elements EVERYBODY would know. You think something like that could be kept secret? > > > > The factory is right up the street. Production doesn?t seem to be impacted by lack of materials from China. Do you have indications to the contrary? > > > > > > > > > The military has all the REEs they need. They have enough of it to supply their own needs indefinitely. > > > > >?No they have not. > > > > John you seem to be claiming knowledge of what the US military has. Do you have this knowledge? Or are you guessing? > > > > Imagine you are a general with a budget, arbitrarily large indefinite secure storage space, and your job is to anticipate your needs in case international trade is interrupted by war. You must have everything you need in case war breaks out. Imagine you need about a ton of lutetium a decade. You have some budget which you must spend, or risk having your budget reduced for next eyar. What might you buy? How about 20 tons of lutetium? Nah. Fifty tons. > > > > Don?t worry, the military has plenty of everything squirreled away here and there. Telsa has enough material and enough engineering talent to design around anything they can?t get. Businesses do that kind of thing. The electronics industry doesn?t use much and the material cost compared to the product is negligible. The wind turbine industry, well I can?t really say there. They need a lot, but we don?t really need wind turbines. So they will need to work that out. I don?t see it as a real problem. > > > > > Of course they thought of that: they aren?t going to depend for materials on the country we might be at war with soon. > > > > >?Yes they have? > > > > Again, your confidence appears to be unjustifiable arrogance. The military isn?t going to tell you what they have, but you can be sure they won?t be caught short of anything. Their job is to anticipate needs under a variety of scenarios, and they are extremely good at what they do. > > > > > > > John are you seeing a pattern here? > > > > >?No spike I am NOT. I can honestly say I don't know what you're talking about. > > > > John K Clark > > > > > > What I am talking about: the Chinese monopoly on rare earth elements has been grossly exaggerated. The only industry I can see which is seriously impacted is one we don?t really need: wind turbine manufacturers. Everyone else will be OK without Chinese material imports. > > > > I don?t see huge potential growth in wind power, since the best sites are already built out (Tehachapi Pass, Altamont Pass, Columbia Gorge and a few others.) There is increasing pushback by the environmentalists who are known to resort to vandalism. Not all environmentalists, but the bird people hate those turbines. Sooner or later, some bird lover is going to get a small plane, fly up over a wind farm with a thirty aught six, punch holes in those turbines so that the oil leaks out on a windy night, wrecks a bunch of them before anyone knows there is a problem. Wind farms are too vulnerable to that kind of attack. 
> > > > I can see additional growth in solar power however, for it is more predictable. > > > > Conclusion: the notion that China has a critical monopoly on any material is exaggerated. > > > > spike > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Nov 3 16:59:16 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 08:59:16 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: <00a601dc4ce3$30b66820$92233860$@rainier66.com> From: John Clark >... But from an economic and political viewpoint it really doesn't matter WHY rare earth elements are needed to make the best magnets, the important fact is that they DO matter... We were told it would take several years to bring domestic REE production up to demand, and that the ecological impact would be great. Yesterday we were told the Mountain Pass facility had its environmental permits in place and would be producing all we need in two years. I am confident we can go two years on stockpiles. But you hit on the truth: the Chinese REE embargo may have led to exaggerated reporting of threats by those who think it is a big security threat to be behind China in the wind race. I don't see it. China has all that wide open windy useless land. We have some of that too, but we saw what happens when local grids rely too much on wind power: the huge cold front in February 2021 caused generation shortfalls in Texas. Blackouts caused people to die, for Texas doesn't need a lot of heating most of the time, but that time it did. https://www.texastribune.org/2022/01/02/texas-winter-storm-final-death-toll-246/ The power company in California told us all about this: when wind and solar are added to the grid (particularly wind power, because it is harder to predict in most places) the baseline generation capacity must be retained. Otherwise the reliability specs cannot be met, and the occasional oddball conditions, such as that cold front in southern Texas, can coincide with no wind power and high demand, resulting in blackouts and hypothermia fatalities. Power generation capacity must be paid for, even when the wind is blowing and the local natural gas facility is idle. This is costly. The bad old capitalist power company was telling the truth, our politicians were lying (or more likely just didn't understand the nature of the problem they were introducing.) Power prices have doubled since 2017. Keith, you and I never noticed our power bills back then. We do now. Power generation and distribution is a wildly complicated field of engineering, waaaaay damn harder to understand than the iconic symbol of complication, rocket science. Hell, rocket science is fun and easy, once one masters a few disparate fields of study. Power distribution seems simple enough in theory, but it boggles the mind in actual practice. With voter-mandated "renewable" power, it gets way harder to master. However... 
there is a bright side: it makes for lots of great investment opportunities for those who grok it well enough to bet on it. spike > We can make good magnets without the rare earths. Perhaps there is a way to make intensely powerful magnets without using such exotic elements, but if there is such a way nobody has found it yet; if somebody had, it would be a trillion dollar, Nobel-prize-deserving discovery, and you and I and the entire world would certainly know about it. > How about lutetium? That one isn't even radioactive, but if you saw the chemical symbol Lu, you would guess someone is putting you on. But there it is, right there between ytterbium and hafnium. Chemistry hipsters, how many of you have ever worked with lutetium? Neither have I. What's your point? Some people have specialized in the rare earth elements during their entire scientific career, and I'm sure for some of them their PhD was a study of how lutetium behaves under various conditions. Incidentally lutetium is one of the rarest of the rare earths, but even so it's much more common than silver, and that fact tells you that the important thing is not rare earth mines, it's rare earth refineries. > But for the application where a lot of the material is used (generators and turbines) the density doesn't matter anyway. Density doesn't matter but weight does matter, in wind turbines and cars and aircraft and drones and robots. Just one F-35 fighter jet contains about 900 pounds of rare earth metals, and an ultra-modern Virginia-class attack submarine needs 9,200 pounds. Even the phone in your pocket would be larger and heavier than it is now without rare earth metals. > China's motors are not better enough to cover the cost of shipping them here. That's not true, or rather it wasn't true until He Who Must Not Be Named's astronomically huge and completely idiotic tariffs came along. > your confidence appears to be unjustifiable arrogance. I'm sorry if I sound arrogant, but you keep making statements that scientifically are objectively false, and that can be frustrating. > The only industry I can see which is seriously impacted is one we don't really need: wind turbine manufacturers. China certainly believes that they need wind turbines, because they're building them for their own use at a furious rate, and that rate is increasing. Today China has about 490 gigawatts of installed wind power capacity, and they plan to reach 1300 gigawatts by 2030. A typical nuclear power plant produces about 1 gigawatt. You Know Who hates wind power because he believes "the windmills are driving the whales crazy, obviously", but I assume that is not the reason you dislike it; please correct me if I'm wrong. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL:
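One distinction worth keeping in mind with those gigawatt figures: 490 GW and 1300 GW are nameplate capacity, not delivered power, and neither is firm capacity. A minimal sketch of the difference, using the 490 GW and 1 GW figures quoted above and assumed round-number capacity factors (the capacity factors are illustrative guesses, not measured data):

# Nameplate capacity vs average delivered power vs firm capacity.
# Capacity factors below are assumed round numbers, not measured data.

wind_nameplate_gw = 490.0        # installed wind capacity figure quoted above
wind_capacity_factor = 0.25      # assumed; onshore wind is often quoted around 0.2 to 0.35
nuclear_unit_gw = 1.0            # "typical plant" size used above
nuclear_capacity_factor = 0.90   # assumed

wind_average_gw = wind_nameplate_gw * wind_capacity_factor
equivalent_nuclear_units = wind_average_gw / (nuclear_unit_gw * nuclear_capacity_factor)

print(f"average wind output: ~{wind_average_gw:.0f} GW, "
      f"about {equivalent_nuclear_units:.0f} one-GW nuclear units' worth of annual energy")

# Energy equivalence is not firmness: on a still, cold night the wind fleet can
# deliver close to zero, so dispatchable capacity sized near peak demand still
# has to exist and be paid for, which is the retained-baseline-capacity point
# made a couple of messages up.

On those assumptions the averaged energy looks like a hundred-plus nuclear units, but the backup plant that covers a calm cold night is where the extra cost shows up, whichever country is doing the building.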
From spike at rainier66.com Mon Nov 3 19:19:16 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 11:19:16 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> Message-ID: <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> >... Just one F-35 fighter jet contains about 900 pounds of rare earth metals, and an ultra-modern Virginia-class attack submarine needs 9,200 pounds. I have been trying to find the source which claims there are over 400 kg of rare earth elements in an F18. This report from 2016 is as close as I can get; it doesn't contain the number but quotes another referenced report, which is classified: https://www.gao.gov/assets/gao-16-161.pdf Neodymium is up from 51 bucks a kg to 68 per kg now. So... an F18 would have (if the report is correct and all of that is neodymium) about 30k in neodymium (400-plus kg at $68 per kg), for a 60 million dollar plane, up from about 22k at the old price. An additional 8k or so in materials. Samarium went up by more than a factor of 100! To a current price of... 11 bucks per kg. If all of the F18's rare earth requirements were in samarium, that adds up to over 4k per plane, and almost all of that 4k is the recent increase. Swoon, how shall we cope? Don't worry, we shall cope. This source shows the big runup and rundown happened in 2022, which wouldn't involve trade tariffs: https://www.dailymetalprice.com/metalpricecharts.php?c=nd&u=kg&d=0#google_vignette I see where Mountain Pass had been shut down, but reopened in 2017. I also noticed the date of that report, which looks at the feasibility of scavenging rare earths from retired fighter planes and submarines. Note that the F18 is over 40 years old now, and plenty of the planes are being retired as the design is phased out. Submarines and surface ships: plenty of those being retired on a regular basis. Those look to be far more plausible as a scrap metal source, since they are already recycled anyway. spike From: John Clark >...Even the phone in your pocket would be larger and heavier than it is now without rare earth metals. John, I find that notion most implausible. A phone has a very small speaker which would contain magnets. The battery might contain traces. But I can't imagine either of those would have significant impact on either size or weight. spike -------------- next part -------------- An HTML attachment was scrubbed... URL:
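A quick check of the cost arithmetic in the message above. It treats "over 400 kg" as exactly 400 kg and uses the quoted spot prices; those inputs come from the message, the rest is arithmetic.

# Sanity check of the F18 neodymium cost figures quoted above.
# "Over 400 kg" is treated as exactly 400 kg; prices are the quoted spot prices.

ree_kg = 400
old_price, new_price = 51, 68          # USD per kg, as quoted above
aircraft_price = 60_000_000            # USD, as quoted above

old_cost = ree_kg * old_price          # 20,400
new_cost = ree_kg * new_price          # 27,200
increase = new_cost - old_cost         # 6,800

print(f"${old_cost:,} -> ${new_cost:,} per aircraft, an increase of ${increase:,} "
      f"({100 * increase / aircraft_price:.3f}% of the airframe price)")

Rounding the mass up a bit recovers the 22k and 30k figures quoted above; either way, the increase is a rounding error against a 60 million dollar aircraft.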
From spike at rainier66.com Mon Nov 3 20:42:45 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 12:42:45 -0800 Subject: [ExI] not that rare earth (part 2 of at least 2) In-Reply-To: <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> References: <007f01dc4832$f6c93b60$e45bb220$@rainier66.com> <009201dc4838$2ff6bca0$8fe435e0$@rainier66.com> <012101dc484d$905455c0$b0fd0140$@rainier66.com> <00a901dc4a74$a65c97a0$f315c6e0$@rainier66.com> <009901dc4aae$1f207ce0$5d6176a0$@rainier66.com> <007f01dc4b3d$24958820$6dc09860$@rainier66.com> <00bd01dc4c08$51bea450$f53becf0$@rainier66.com> <004f01dc4c3a$33017560$99046020$@rainier66.com> <010101dc4cf6$bfa0bb40$3ee231c0$@rainier66.com> Message-ID: <014c01dc4d02$695677a0$3c0366e0$@rainier66.com> From: spike at rainier66.com Fun aside for material substitution hipsters, not related to rare earths at all but still entertaining. In January 1998 the physics club where I worked contacted Edward Teller, who lived in Stanford, close to where our club met at Xerox PARC, where the operating system you are now using was originally designed by Doug Engelbart's group. Teller had just turned 90. We were talking about singing happy birthday to him when he arrived, which I thought was a marvelous idea, but we were told he is a notorious grumpy bear. So we didn't. Damn. When he arrived, he was nothing like what we expected. He wasn't grumpy at all. If I were to come up with two adjectives that day, they would be smart and jolly. I had read his book Conversations on the Dark Secrets of Physics, in which he talks about Los Alamos and why he was driving so hard for the "super" as he called it: in those days, it wasn't entirely clear if high explosives alone were sufficient to ignite a tritium reaction (they considered it unlikely) or if a uranium fission explosion would do it, or even if a much larger plutonium fission reaction was necessary. They didn't know, but Teller made the case that it is critically important to get that answer. Reason: it takes a lot of visible action to get plutonium. Uranium not so much, and high explosive: none. He discussed the risk that a fusion reaction could be initiated with high explosives. Any bad actor could set up what they called the back yard weapon. It would be an Olympic-size swimming pool filled with Very Heavy Water (tritium oxide.) Such a device would be too large and heavy to haul anywhere, but the resulting explosion would be so big, it would nuke the entire planet. So why bother hauling it anywhere else? Set that up in your own back yard. Jolly old Edward Teller related fun stories from Los Alamos, such as the time he and Stanislaw Ulam were called by General Groves to meet with the Secretary of the Treasury and Secretary of War Henry Stimson, after Groves asked for 100 tons of copper. The copper mines had already sold all their future production to the DoD and had none to offer. Stimson asked Groves if there is any other material he could use. The meeting itself was unclassified, because Stimson knew about the Manhattan project but Morgenthau was not briefed. Vice President Truman wasn't briefed either. Only those who really needed to know knew. Morgenthau had orders to give Groves whatever he wanted. As the story goes, Stimson asked Groves if there is any substitute for copper. He replied only silver. To the Secretary of the Treasury, the terms copper, silver and gold translate into what we would hear as money, big money and biggest money. So when he heard Groves requesting money, and then heard that since they could not give him enough money, big money would do instead, Morgenthau counteroffered biggest money. The conversation went something like this: General, we can't fill the order for 100 tons of copper because of war needs. Are there any substitute materials? Groves: Silver. Send me 100 tons of silver. Morgenthau, who may not have even been aware of how much a ton of anything is: General, we at the Treasury do not refer to silver by the tonnnn. Precious metals are measured by the Troy ounce. Groves turns around and asks Ulam and Teller how much that is, at which time they chorus in perfect unison: 1.5 million. Groves: Very well Secretary Morgenthau, send me 1.5 million Troy ounces of silver. Morgenthau was aghast at the request. The Secretary of War couldn't give Groves money, so Groves requested big money instead. Morgenthau tried offering biggest money. Morgenthau: General, it would be logistically much simpler to send you four tons of gold rather than 100 tons of silver. Groves: Thanks, but we have enough gold. My top scientists tell me only the silver will do. Morgenthau to Stimson: There had better be some really big initials on that order. Stimson: How about FDR? Morgenthau: Ah. Those are big initials. General Groves, you shall have your... 100 tonnnnns... of silver. Feynman told the story in his memoirs but from another perspective: how mind-bogglingly wasteful war is. 
He gives the example that they had a hemisphere of pure gold, which didn't meet the requirements, so they were using it as a doorstop. Nobody ever tried to make off with it. He mentioned the silver wire used at Los Alamos and how it was a pain in the ass to work with, as it isn't as flexible as copper wire, and how they worried for a while that it would be stolen, but after the project was over, the wire was gathered up, melted back into ingots and sent back to the Treasury. Only a very small fraction of a percent was missing, nearly within measurement uncertainty. Scientists are generally honest souls. End of fun old man story (I meant me, not Teller.) In any case, it was a memorable time to meet in person the man who really pushed for the super, back in the days when most of the Los Alamos scientists didn't want to work on it. Teller was worried that the tritium reaction could be triggered by a uranium bomb (which would be bad) or high explosives (which would be worse.) They were told that apparently Heisenberg thought it could be, since he had directed the Nazi nuke effort toward heavy water but not toward enriching uranium 235. Some have speculated that Heisenberg knew from his own calculations that Germany didn't have the resources to enrich uranium, or that he calculated that a tritium reaction could not be initiated from high explosives, and he thought it better to pretend otherwise than to tell der fuehrer it couldn't be done, anticipating they would still be working on it when the war was over. We may never know. The Norwegians got in there and destroyed the Nazi heavy water tanks. Hell of a cool story: https://en.wikipedia.org/wiki/Norwegian_heavy_water_sabotage spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Nov 3 21:30:37 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Nov 2025 13:30:37 -0800 Subject: [ExI] more rare earth stuff Message-ID: <016601dc4d09$189f9bf0$49ded3d0$@rainier66.com> I get suspicious when I see stock price graphs that look like this one: https://stockanalysis.com/quote/otc/ARRNF/ This company might have put out a phony story about finding a coupla billion tons of REEs, considering the flatly false comment in the second paragraph "... 2.34 billion metric tons of some of the rarest materials on earth." Nah, they aren't rare materials. They are rarely refined however, but that's different. We have the ore; that stuff isn't rare at all. You can be sure that while the price is up and the EPA is on vacation, they will be getting unconditional irrevocable three-year waivers to set up refining facilities. Good chance the stockholders came up with the idea, raced up the price of the stock, then started dumping it when the price hit 65; investors noticed and followed suit, dumping their stock too. Settled back to reasonable numbers. There are some other howlers in that article: "... ushering in a potential 80 trillion dollar opportunity by 2030." Eh, no. Do let me assure you, that won't happen. It would be nice if it did, however: that would get us toward John's vision of unimaginable wealth. I have a pretty good imagination when it comes to unimaginable wealth. I could achieve Simon bar Sinister's vision, and rule the world. Either rule the world, or a nice fast comfortable car; one of those would be good. Playing in the ideas futures game trains the mind in spotting bullshit. 
Stockholders will put stuff in the comment section designed to influence others to buy their stock at a high price or sell them theirs at a low one. This American Rare Earths nonsense was very easy to spot. In general, playing for money on future memes is good for the brain. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Nov 3 21:48:52 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 3 Nov 2025 21:48:52 +0000 Subject: [ExI] (Meta) Re: 1DIQ: an IQ metaphor to explain superintelligence In-Reply-To: References: Message-ID: <6feea27e-ccd9-48e0-ab85-06f884c9d865@zaiboc.net> On 03/11/2025 19:19, Adrian Tymes wrote: > Roll a die on a metaphor for superintelligence. Result rolled: a > sprawling discussion that's rather off-topic from the metaphor yet > keeps using the subject line. (A quick glance through the archives > suggests that not a single reply addressed the original post's main > point, instead diverging to discussions of, e.g., the reliability - or > lack thereof - of IQ as a measurement.) > > It happens. Yes, I know. I've occasionally tried changing subject lines, but it usually gets ignored, and that makes things worse. I think. Is it worse to have different topics under one subject heading, or the same topic under different subject headings? I can't decide for certain, but I'm inclining towards the latter. -- Ben