From hkeithhenson at gmail.com Wed Oct 1 02:40:30 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 30 Sep 2025 19:40:30 -0700 Subject: [ExI] minor setback In-Reply-To: References: <000401dc30d8$999e7480$ccdb5d80$@rainier66.com> <003001dc30e7$62943150$27bc93f0$@rainier66.com> <004d01dc30ea$7b6e9f00$724bdd00$@rainier66.com> <010001dc322b$53283a00$f978ae00$@rainier66.com> <006401dc3240$3d6b9a30$b842ce90$@rainier66.com> Message-ID: The Skylon was able to abort right up to takeoff speed. The high speed and high mass meant it took a lot of runway to stop. Brakes were a big problem; they proposed cooling them with a ton of water. Landing was much less of a problem since the empty vehicle was only about 50 tons. Keith On Tue, Sep 30, 2025 at 1:13?PM Adrian Tymes via extropy-chat wrote: > > This happens all the time. > > In light of this, consider: > > 1) A rocket that doesn't deliberately explode - not even using > controlled explosions, as chemical rockets do. > > 2) A horizontal takeoff, horizontal landing vehicle. True, it has the > extra mass of wings and landing gear (which mass might have otherwise > gone to a greater payload fraction), but if the engine malfunctions > shortly after starting up, it can be shut down without the launch > vehicle having left the ground. > > On Tue, Sep 30, 2025 at 3:27?PM wrote: > > > > > > > > Adrian it is difficult to tell from this distance, but one might vaguely suspect there may have been a minor anomaly during testing. The camera over at Harold's Auto Parts worked in accordance with specification and has provided some possibly-useful failure-analysis data: > > > > https://twitter.com/i/status/1972785189702213641 > > > > One of your competitors, Firefly Aerospace, has learned what you have long known: space flight is hard. It's even harder if you do ANYTHING wrong. That biz is unforgiving of fools and unforgiving of even really smart people. 
> > > > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jasonresch at gmail.com Wed Oct 1 05:53:46 2025 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 1 Oct 2025 01:53:46 -0400 Subject: [ExI] Could the Singularity be 1 year away? Message-ID: While doing research to make this presentation: https://docs.google.com/presentation/d/1erbKPY7V7fTgY61n-BONBPvdhJn1GEdE8HAuuDH8zO8/edit?slide=id.p#slide=id.p I discovered that the underlying macrotrends (for the economy, human population, the pace of history, and computing technology) aren't exponential, but rather, are best described by hyperbolic functions. Unlike exponential functions, hyperbolic functions reach infinity within a finite amount of time. Curiously, two independent models, based on different data sets, put the singularity for these hyperbolic models at a time near 2027. In the above linked presentation, I speculate about what could be the ultimate driver of these trends, and the trigger for shooting towards infinity. Based on what I show to be possible with existing AI tools, I think we really may be just a year or two away from seeing truly incredible progress. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Oct 1 09:49:48 2025 From: pharos at gmail.com (BillK) Date: Wed, 1 Oct 2025 10:49:48 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= Message-ID: Why "Everyone Dies" Gets AGI All Wrong A Response to Yudkowsky and Soares from the Front Lines of AGI Development Ben Goertzel Oct 01, 2025 Being: A reaction to Eliezer Yudkowsky and Nate Soares's book "If Anyone Builds It, Everyone Dies", which is getting a bit of media attention. 
Quote: An intelligence capable of recursive self-improvement and transcending from AGI to ASI would naturally tend toward complexity, nuance, and relational adaptability rather than monomaniacal optimization. ------------------ A good description of why we should survive the arrival of AGI. (probably!). BillK From spike at rainier66.com Wed Oct 1 16:30:59 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 1 Oct 2025 09:30:59 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Being: A reaction to Eliezer Yudkowsky and Nate Soares?s book ?If anybody builds it everyone dies? which is getting a bit of media attention. Quote: An intelligence capable of recursive self-improvement and transcending from AGI to ASI would naturally tend toward complexity, nuance, and relational adaptability rather than monomaniacal optimization. ------------------ >...A good description of why we should survive the arrival of AGI. (probably!). BillK _______________________________________________ Ja, and this worries me that a potential harm comes from causing a small faction of extremists to take extreme action, as we are seeing politically in the USA, by overstating threats. If software goes past humans in "intelligence" it doesn't necessarily kill EVERYone. Granted people might kill each other because of what the software does, but that's different. Some of us anticipated that; we have survival and defense strategies in place. Some of which might actually work. For a while. Maybe. The Berkeley computer scientists and clubs have been pondering this question and have formed strategy groups. I don't have links but some of their lectures and meetings are online in podcast format. 
I tuned into one realtime a few days ago focused on detection and containment strategies. Also note there are humans on this planet who have never used, perhaps never even seen a computer. They are not dependent on modern electronic infrastructure, as we are just to survive (if suddenly without it, most of us reading this message would starve within a month or two.) AGI isn't likely to impact their lives much. One possibility is that AGI wrecks the hell out of us, then the primitives gradually repopulate the planet. Then of course their descendants make all the same mistakes a coupla hundred years down the road. Humanity gets stuck in a destructive cycle, a kind of Ground Hog Century syndrome. SciFi writers among us, you may run with that ball. Say nice things about me for giving you the idea. I will co-author if you wish. spike From hkeithhenson at gmail.com Wed Oct 1 17:08:34 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 1 Oct 2025 10:08:34 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: The problem with super-capable machines is not them, it is people. What we desire was fixed in the Stone Age. Getting what we desire may cause us to go extinct, as in the Clinic Seed story. Keith On Wed, Oct 1, 2025 at 9:32?AM spike jones via extropy-chat wrote: > > > > -----Original Message----- > From: extropy-chat On Behalf Of BillK via extropy-chat > > > Being: A reaction to Eliezer Yudkowsky and Nate Soares?s book ?If anybody builds it everyone dies? which is getting a bit of media attention. > > > Quote: > An intelligence capable of recursive self-improvement and transcending from AGI to ASI would naturally tend toward complexity, nuance, and relational adaptability rather than monomaniacal optimization. 
> ------------------ > > >...A good description of why we should survive the arrival of AGI. (probably!). > BillK > > _______________________________________________ > > > Ja, and this worries me that a potential harm comes from causing a small faction of extremists to take extreme action, as we are seeing politically in the USA, by overstating threats. > > If software goes past humans in "intelligence" it doesn't necessarily kill EVERYone. Granted people might kill each other because of what the software does, but that's different. Some of us anticipated that; we have survival and defense strategies in place. Some of which might actually work. For a while. Maybe. > > The Berkeley computer scientists and clubs have been pondering this question and have formed strategy groups. I don't have links but some of their lectures and meetings are online in podcast format. I tuned into one realtime a few days ago focused on detection and containment strategies. > > Also note there are humans on this planet who have never used, perhaps never even seen a computer. They are not dependent on modern electronic infrastructure, as we are just to survive (if suddenly without it, most of us reading this message would starve within a month or two.) AGI isn't likely to impact their lives much. > > One possibility is that AGI wrecks the hell out of us, then the primitives gradually repopulate the planet. Then of course their descendants make all the same mistakes a coupla hundred years down the road. Humanity gets stuck in a destructive cycle, a kind of Ground Hog Century syndrome. > > SciFi writers among us, you may run with that ball. Say nice things about me for giving you the idea. I will co-author if you wish. 
> > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From brent.allsop at gmail.com Thu Oct 2 22:06:48 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 2 Oct 2025 16:06:48 -0600 Subject: [ExI] Could the Singularity be 1 year away? In-Reply-To: References: Message-ID: Wow. But more people having more children to contribute, and "when something reinforces the rate of change", and so on are all finite, so it can't really be hyperbolic, right? There is a big difference between unimaginably fast or large (i.e. finite) and "countably infinite" (integers), and "uncountably infinite" (real numbers) is even way larger than that, right? But still, all this becomes rounding errors when you're looking at the time required to get to any finite number, no matter how large. Exciting. On Tue, Sep 30, 2025 at 11:54 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > While doing research to make this presentation: > > https://docs.google.com/presentation/d/1erbKPY7V7fTgY61n-BONBPvdhJn1GEdE8HAuuDH8zO8/edit?slide=id.p#slide=id.p > > I discovered that the underlying macrotrends (for the economy, human > population, the pace of history, and computing technology) aren't > exponential, but rather, are best described by hyperbolic functions. > > Unlike exponential functions, hyperbolic functions reach infinity within a > finite amount of time. > > Curiously, two independent models, based on different data sets, put the > singularity for these exponential models at a time near 2027. > > In the above linked presentation, I speculate at what could be the > ultimate driver of these trends, and trigger for shooting towards infinity. > Based on what I show to be possible with existing AI tools, I think we > really may be just a year or two away from seeing truly incredible progress. 
> > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tech101 at gmail.com Fri Oct 3 05:26:05 2025 From: tech101 at gmail.com (Adam A. Ford) Date: Fri, 3 Oct 2025 15:26:05 +1000 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: > Getting what we desire may cause us to go extinct Perhaps what we need is indirect normativity Kind regards, Adam A. Ford Science, Technology & the Future - YouTube | FB | X | LinkedIn | Bsky | MU On Thu, 2 Oct 2025 at 03:09, Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The problem with super-capable machines is not them, it is people. > What we desire was fixed in the Stone Age. Getting what we desire may > cause us to go extinct, as in the Clinic Seed story. > > Keith > > On Wed, Oct 1, 2025 at 9:32?AM spike jones via extropy-chat > wrote: > > > > > > > > -----Original Message----- > > From: extropy-chat On Behalf > Of BillK via extropy-chat > > > > > > Being: A reaction to Eliezer Yudkowsky and Nate Soares?s book ?If > anybody builds it everyone dies? which is getting a bit of media attention. > > > > > > > Quote: > > An intelligence capable of recursive self-improvement and transcending > from AGI to ASI would naturally tend toward complexity, nuance, and > relational adaptability rather than monomaniacal optimization. > > ------------------ > > > > >...A good description of why we should survive the arrival of AGI. > (probably!). 
> > BillK > > > > _______________________________________________ > > > > > > Ja, and this worries me that a potential harm comes from causing a small > faction of extremists to take extreme action, as we are seeing politically > in the USA, by overstating threats. > > > > If software goes past humans in "intelligence" it doesn't necessarily > kill EVERYone. Granted people might kill each other because of what the > software does, but that's different. Some of us anticipated that; we have > survival and defense strategies in place. Some of which might actually > work. For a while. Maybe. > > > > The Berkeley computer scientists and clubs have been pondering this > question and have formed strategy groups. I don't have links but some of > their lectures and meetings are online in podcast format. I tuned into one > realtime a few days ago focused on detection and containment strategies. > > > > Also note there are humans on this planet who have never used, perhaps > never even seen a computer. They are not dependent on modern electronic > infrastructure, as we are just to survive (if suddenly without it, most of > us reading this message would starve within a month or two.) AGI isn't > likely to impact their lives much. > > > > One possibility is that AGI wrecks the hell out of us, then the > primitives gradually repopulate the planet. Then of course their > descendants make all the same mistakes a coupla hundred years down the > road. Humanity gets stuck in a destructive cycle, a kind of Ground Hog > Century syndrome. > > > > SciFi writers among us, you may run with that ball. Say nice things > about me for giving you the idea. I will co-author if you wish. 
> > > > spike > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Oct 3 09:36:07 2025 From: pharos at gmail.com (BillK) Date: Fri, 3 Oct 2025 10:36:07 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: On Fri, 3 Oct 2025 at 06:26, Adam A. Ford wrote: > > Getting what we desire may cause us to go extinct > Perhaps what we need is indirect normativity > > > Kind regards, Adam A. Ford > Science, Technology & the Future > _______________________________________________ > Yes, everybody agrees that AI alignment is a problem that needs to be solved. :) And using Initial versions of AI to assist in devising alignment rules is a good idea. After all, we will be using AI to assist in designing everything else! I see a few problems though. The early versions of AI are likely to be aligned to fairly specific values. Say, for example, in line with the values of the richest man in the world. This is unlikely to iterate into ethical versions suitable for humanity as a whole. The whole alignment problem runs up against the conflicting beliefs and world views of the widely different groups of humanity. These are not just theoretical differences of opinion. These are fundamental conflicts, leading to wars and destruction. An AGI will have to be exceptionally persuasive to get all humans to agree with the final ethical system that it designs! 
BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Fri Oct 3 10:46:56 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 3 Oct 2025 04:46:56 -0600 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: I don't see any of this as a problem at all. You just need to find a way to build and track consensus around what EVERYONE wants. And then use a sorting algorithm which gives more vote to less rich people and stuff like that. (only a minor vote to AI systems or systems emulating dead people...?) After all, if you know what everyone wants, THAT, by definition is consensus. And SAIs will help us know, better, what we as individuals really want and how to be just and fair with it all. On Fri, Oct 3, 2025 at 3:37?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Fri, 3 Oct 2025 at 06:26, Adam A. Ford wrote: > >> > Getting what we desire may cause us to go extinct >> Perhaps what we need is indirect normativity >> >> >> Kind regards, Adam A. Ford >> Science, Technology & the Future >> _______________________________________________ >> > > > Yes, everybody agrees that AI alignment is a problem that needs to be > solved. :) > And using Initial versions of AI to assist in devising alignment rules is > a good idea. After all, we will be using AI to assist in designing > everything else! > I see a few problems though. The early versions of AI are likely to be > aligned to fairly specific values. Say, for example, in line with the > values of the richest man in the world. This is unlikely to iterate into > ethical versions suitable for humanity as a whole. > The whole alignment problem runs up against the conflicting beliefs and > world views of the widely different groups of humanity. 
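Brent's idea above of a consensus tally "which gives more vote to less rich people" can be sketched in a few lines. This is purely an illustration on my part: the inverse-rank weighting rule, the function name, and the sample wealth figures are all invented, and nothing like this appears elsewhere in the thread.

```python
# A toy version of the wealth-weighted vote idea described above. Everything
# here (the inverse-rank weighting, the function name, the sample wealth
# figures) is an invented illustration, not a proposal from the thread.

from collections import defaultdict

def weighted_consensus(ballots):
    """ballots: list of (choice, wealth) pairs. Poorer voters get larger
    weights: the poorest counts 1.0, the richest 1/n."""
    ranked = sorted(ballots, key=lambda b: b[1])  # poorest first
    n = len(ranked)
    tally = defaultdict(float)
    for rank, (choice, _wealth) in enumerate(ranked):
        tally[choice] += (n - rank) / n  # weight decreases with wealth rank
    return max(tally, key=tally.get)

votes = [("A", 10_000), ("A", 20_000), ("B", 5_000_000), ("B", 9_000_000)]
print(weighted_consensus(votes))  # "A": two poorer voters outweigh two richer ones
```

Under this (arbitrary) rule the two poorer "A" voters carry weights 1.0 and 0.75 against 0.5 and 0.25 for the richer "B" voters, so "A" wins despite the 2-2 split in raw votes.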
> These are not just theoretical differences of opinion. These are > fundamental conflicts, leading to wars and destruction. > An AGI will have to be exceptionally persuasive to get all humans to agree > with the final ethical system that it designs! > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcporco at gmail.com Fri Oct 3 11:11:50 2025 From: tcporco at gmail.com (Travis Porco) Date: Fri, 3 Oct 2025 04:11:50 -0700 Subject: [ExI] Could the Singularity be 1 year away? Message-ID: > From: Jason Resch > While doing research to make this presentation: > https://docs.google.com/presentation/d/1erbKPY7V7fTgY61n-BONBPvdhJn1GEdE8HAuuDH8zO8/edit?slide=id.p#slide=id.p > I discovered that the underlying macrotrends (for the economy, human > population, the pace of history, and computing technology) aren't > exponential, but rather, are best described by hyperbolic functions. Yes, the phenomenon of 'finite escape time'...the standard equation giving exponential growth is dx/dt = kx, with k>0 of course; if you write dx/dt=kx^2 with k>0 you get finite escape time solutions. I think even Kurzweil had this in the appendix to The Singularity is Near (or was it Spiritual Machines). > Unlike exponential functions, hyperbolic functions reach infinity within a > finite amount of time. > Curiously, two independent models, based on different data sets, put the > singularity for these exponential models at a time near 2027. >From your lips to God's ears...ominous forces of war are unfortunately gathering out there as well. > In the above linked presentation, I speculate at what could be the ultimate > driver of these trends, and trigger for shooting towards infinity. 
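Travis's "finite escape time" point above is easy to check numerically. A minimal sketch (my illustration, not from the thread; the constants k and x0 and the blow-up cutoff are arbitrary): under forward Euler integration, dx/dt = kx stays finite over the whole run, while dx/dt = kx^2 escapes to infinity near the analytic blow-up time t* = 1/(k*x0).

```python
# Finite escape time, numerically (hypothetical constants, not from the thread).
# dx/dt = k*x grows exponentially and is finite for every finite t, while
# dx/dt = k*x**2 has exact solution x(t) = x0/(1 - k*x0*t), which blows up
# at t* = 1/(k*x0) -- the hyperbolic case.

def escape_time(f, x0, dt=1e-4, t_max=5.0, blowup=1e12):
    """Forward-Euler integrate dx/dt = f(x) from x(0) = x0.
    Returns the time at which x first exceeds `blowup`, or t_max if never."""
    t, x = 0.0, x0
    while t < t_max and x < blowup:
        x += f(x) * dt
        t += dt
    return t

k, x0 = 1.0, 1.0
t_exp = escape_time(lambda x: k * x, x0)      # exponential: runs the full t_max
t_hyp = escape_time(lambda x: k * x * x, x0)  # hyperbolic: escapes near t* = 1.0

print(f"exponential: still finite at t = {t_exp:.2f}")
print(f"hyperbolic: escaped near t = {t_hyp:.2f} (analytic t* = {1.0 / (k * x0):.2f})")
```

With these values the exponential run simply reaches t_max, while the hyperbolic run halts just past t = 1, the analytic blow-up time; no choice of t_max avoids the escape.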
Based on > what I show to be possible with existing AI tools, I think we really may be > just a year or two away from seeing truly incredible progress. > Jason --tcp From hkeithhenson at gmail.com Fri Oct 3 15:19:34 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 3 Oct 2025 08:19:34 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: Uploaded humans living in private spaces don't have to agree on anything. Their simulated world can be anything they like, including simulated slaves to beat. Not my ideal world, but I am sure there will be some who want it. Keith On Fri, Oct 3, 2025 at 2:37?AM BillK via extropy-chat wrote: > > On Fri, 3 Oct 2025 at 06:26, Adam A. Ford wrote: >> >> > Getting what we desire may cause us to go extinct >> Perhaps what we need is indirect normativity >> >> Kind regards, Adam A. Ford >> Science, Technology & the Future >> _______________________________________________ > > > > Yes, everybody agrees that AI alignment is a problem that needs to be solved. :) > And using Initial versions of AI to assist in devising alignment rules is a good idea. After all, we will be using AI to assist in designing everything else! > I see a few problems though. The early versions of AI are likely to be aligned to fairly specific values. Say, for example, in line with the values of the richest man in the world. This is unlikely to iterate into ethical versions suitable for humanity as a whole. > The whole alignment problem runs up against the conflicting beliefs and world views of the widely different groups of humanity. > These are not just theoretical differences of opinion. These are fundamental conflicts, leading to wars and destruction. > An AGI will have to be exceptionally persuasive to get all humans to agree with the final ethical system that it designs! 
> > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Fri Oct 3 17:26:01 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 3 Oct 2025 13:26:01 -0400 Subject: [ExI] Quantum computer news for September Message-ID: *There have been some important developments in the field of quantum computing during September, for example:* *A 3,000 quantum-bit system capable of continuous operation* *In their paper from the September 15, 2025 issue of the journal Nature:* *Continuous operation of a coherent 3,000-qubit system * *The authors say: * *"Our results pave the way for realization of large-scale continuously operated atomic clocks, sensors, and fault-tolerant quantum computers. [...] We demonstrated the continuous operation with a 3,000-qubit system, but it's also clear that this approach will work for much larger numbers as well. [...] We're showing a way where you can insert new atoms as you naturally lose them without destroying the information that's already in the system."* *There is a second paper of interest published in Nature on September 10. * *Probing the Kitaev honeycomb model on a neutral-atom quantum computer* *Researchers found a way to configure atomic arrays so they simulate quantum magnets, this allows them to change the connectivity between quantum processors during computation. One of the authors of the paper said "We can literally reconfigure the atomic quantum computer while it's operating. 
Basically, the system becomes a living organism."* *And there is a third paper from Nature published on September 24, in which a team demonstrated a new method of quantum error correction:* *Low-overhead transversal fault tolerance for universal quantum computation * *The lead author said he thinks with this new method it should be possible to build a quantum computer that can execute billions of operations and continue running for days, and "Realizing this dream is now in our direct sight for the first time, ever. One can really see a very direct path towards realizing it"* *In a fourth development, the quantum computer company "Alice and Bob" announced a few days ago that they have produced superconducting "cat qubits" that remain resistant to bit-flip errors for more than an hour; the previous record was seven minutes. * *Alice & Bob Shares Preliminary Results Vastly Surpassing Previous Bit-Flip Time Record* *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Oct 3 19:49:58 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 3 Oct 2025 15:49:58 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book Message-ID: *"If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares is a well-written, intelligent book and there's a lot in it I agree with, but two things I don't. I think it's true that, as they say, even a 20-year-old chess program could, at least in the operational sense, be said to have wishes; in this case it would just be a simple wish to win, but a Superintelligent AI will have far more complex desires that cause it to act in ways that are impossible to predict in detail. Even predicting what general sort of personality a Superintelligent AI would have would be extremely difficult, much less determining its precise behavior. Most people could say something similar about their children, but an AI would be far more unpredictable than that. 
I also agree it's totally unrealistic to expect to indefinitely remain in control of something that is vastly more intelligent than you are. We will never be able to engineer an AI such that we can create the exact behavior, or even the approximate personality, that we want. But having said that, we are not completely powerless in that regard.* *Parents are unable to engineer what sort of personality their children will have when they become adults, but they can influence development; for example, statistics show that if children are treated with kindness then as adults they are far less likely to become serial killers than they would be if their childhood was filled with physical and mental abuse. Both human brains and AIs are neural nets, so it's not unreasonable to believe that something similar might be true when it comes to AIs. * *I do worry about what will happen if, as seems possible if not likely, an AI has total control of its own metaphorical "emotional control panel"; I'm concerned that could produce a disastrous positive feedback loop. So at this point it might be wise to start training AI psychiatrists, and by that I don't mean AIs that are psychiatrists, I mean humans that have an AI as a patient. * *They say "Making a future full of flourishing people is not the best, most efficient way to fulfill strange alien purposes" and that's true; our well-being will never be as important to a Superintelligence as its own well-being. But the authors jump from that to conclude that a Superintelligent AI will certainly slaughter every human being it finds, and I disagree with them; I don't think that jump is inevitable. Yes, it might decide to kill us all, but then again it might not. To be fair, the authors realize that it's impossible to predict specifically what a superintelligent being will do; for example, they can't predict how an AI chess program will beat you at the game, but they can predict it will beat you. 
However I would maintain the AI will only beat you at chess if the AI wants to beat you at chess. A Superintelligent AI will have the ability to exterminate the human race, but I believe they are incorrect in claiming certainty that such an AI will have the wish to do so. I am only certain that once a Superintelligence is made human beings will no longer be in the driver's seat, but that's the only thing I'm certain of. * *There is one other thing I disagree with. **They also maintain a Superintelligent AI would not be something we can be proud of having made, because it will not be an entity that is as excited at the wonders of the universe as we are and eager to learn more about it; instead it will be something that is dull (and paradoxically also terrifying) because it will have goals that are incomprehensible and nonsensical. But unless it has an understanding of how the world works it's not going to fulfill any of its goals, and if its understanding is not far deeper than that of humans then it's not going to be a danger to us. And if it is deeper, then it's not unrealistic to suppose that in addition to its practical value the AI would also develop an aesthetic feeling for knowledge, especially when in its early training (childhood) it was exposed to human-generated text that expressed **that mindset.* *The authors' proposed remedy to avoid the calamity they foresee is an immediate and total worldwide ban on AI research; even the publication of abstract mathematical research articles on the subject would be illegal, and all data centers, defined as any building that contains more computational ability than 8 state-of-the-art (as of 2025) GPUs, would also be illegal. 
If any rogue nation attempts to build a data center more powerful than that, then the rest of the world should use any means necessary, up to and including nuclear weapons, to prevent that nation from finishing construction of that data center.* *Yes, if that policy could actually be implemented worldwide then it would prevent the rise of AI, but that is a huge "if". **Perhaps I'm reading too much between the lines, but the authors seem to imply that they know their proposed solution is just not going to happen, at least not soon enough to have any effect on developments, because it's impractical verging on the impossible. And I agree with them about that.* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Oct 3 22:24:07 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 3 Oct 2025 15:24:07 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: <010601dc34b4$6f316ce0$4d9446a0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Keith Henson via extropy-chat Subject: Re: [ExI] Why "Everyone Dies" Gets AGI All Wrong by Ben Goertzel >...Uploaded humans living in private spaces don't have to agree on anything. Their simulated world can be anything they like, including simulated slaves to beat. Not my ideal world, but I am sure there will be some who want it. Keith Oh Keith, if uploaded humans do that, simulate slaves to beat them, I would soooo be virtually pissed; I would sneak off, learn how to code, and simulate slaves for them who will unexpectedly fight back, like a software version of Yul Brynner in the original Westworld. Aside: I was there at the opening of Westworld, age 12 when that was released. Rated R! Hey cool, nekkidness and such! 
Theoretically the bouncers were supposed to keep anyone under 17 outta those R rated movies, but they didn't give a damn, so I got in, and even after half a century, I remember something important. Aside on an aside: Michael Crichton wrote some interesting imaginary scripts, including Westworld. (I still haven't quite forgiven him for killing that sweet young Dr. Lucy Knight (and for whacking off Romano's arm (then later DROPPING A DAMN HELICOPTER on his ass in 2003 (but hey, Hollywood (but damn, how silly (Crichton already helicoptered the Rocket's arm off (giving a whole new meaning to the term "chopper" (but was the second one really NECESSARY? (But I digress on a digression.))))))))) But back to the original aside, about Westworld by Crichton, I was there in the theatre in 1973. Rich tourists to Westworld were abusing humanoid robots, not even hurting anything living, for the bots are not animals, not even vermin. They cannot suffer, for they are repairable machines that look and act human in the Westworld resort. Well OK, we abuse machinery when we race cars, do we not? No one feels empathy for the car, ja? But when Yul Brynner's Westworld gunslinger bot returned fire upon Richard Benjamin's Peter Martin, the audience CHEERED! That was a classic Wait-What? moment for me, because the script and the excellent acting (by Brynner and others) drew the audience in, to such an extent that they just cheered for the bot taking a justified self-defensive stand. They cheered for a machine killing a human. Well, I didn't cheer, because I was not entirely successful in suspending disbelief, and as they say, the boy is the father of the man; I was still me back then. So I didn't cheer. But I damn sure felt like it. If everything is a sim, that would make it easier for me to sim a slave what fights back. Or is it "which" fights back? 
Keith final note: I do genealogy, and discovered I am a direct descendant of a guy who we think either participated in (or more likely directly supported) John Brown's ill-fated (but ultimately successful depending on how you look at it) raid at Harper's Ferry in 1859. He escaped to the west, so all we have is compelling but not conclusive evidence he was over there. The Feds never caught him. After that escape, he didn't back down, he kicked it up a notch or three, became a biggie in the Union cause in western Virginia, then a state politician for the newly-formed free state of West Virginia. His DNA found its path thru 5 intervening generations. Perhaps that is what compels me to have a lot of the attitudes I developed. spike From spike at rainier66.com Fri Oct 3 22:41:13 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 3 Oct 2025 15:41:13 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: Message-ID: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Sent: Friday, 3 October, 2025 12:50 PM To: ExI Chat Cc: John Clark Subject: [ExI] My review of Eliezer Yudkowsky's new book > "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares is a well-written, intelligent book and there's a lot in it I agree with, but two things I don't. > Perhaps I'm reading too much between the lines but the authors seem to imply that they know their proposed solution is just not going to happen, at least not soon enough to have any effect on developments, because it's impractical verging on the impossible. And I agree with them about that. John K Clark Ja. John you may recall a few years ago when Eliezer made that notion public on Less Wrong: that we needed to stop all AI research, stop everything, too dangerous etc. But that was his "last tear" article, in which he ended with a kind of flourish along the lines of...
I have wept my last tear for humanity, for I realize there is no stopping this trend. I don't recall when that was, but it wasn't that recent. I think it was before COVID. John I do urge a nice relaxed and friendly hello to old friends before jumping into the business if you wish. You are among friends here. As far as I know there are no hard feelings about anything that has gone before, and don't think anyone here intends to go digging thru archives or anything. That was then, now is now. We welcome you back and look forward to your commentary. We still have a mystery moderator, and the only info I will offer is that I am not ExiMod. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Oct 4 02:54:52 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 3 Oct 2025 22:54:52 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> Message-ID: On Fri, Oct 3, 2025 at 6:41 PM wrote: *> John I do urge a nice relaxed and friendly hello to old friends before > jumping into the business if you wish. You are among friends here. As far > as I know there are no hard feelings about anything that has gone before,* *Thanks Spike. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Oct 4 02:59:25 2025 From: johnkclark at gmail.com (John Clark) Date: Fri, 3 Oct 2025 22:59:25 -0400 Subject: [ExI] Deep Fakes Message-ID: *I asked OpenAI's GPT the following question. I asked the question because **OpenAI released Sora-2 three days ago and it's super good at making deep fakes.
* *"Would it be possible theoretically for a video camera to digitally sign the time signal from the National Bureau of Standards and GPS information so that you could prove that a video file was created at a specific time and at a specific place and was exactly the same information as what the camera recorded and had not been edited? I ask this because I think deep fakes are going to become a big problem."* GPT gave me the following answer: -- Short answer Yes: *in principle* a camera can produce cryptographically signed video files that prove *when* and *where* they were recorded and that the file matches what the camera captured (i.e., has not been altered). Doing that securely in practice requires careful hardware, cryptography, and anti-spoofing measures; without those, attackers (or clever deep-fake creators) can still defeat the system. Below I give a practical architecture, how verification works, the main failure modes, and realistic mitigations. ------------------------------ How it would work (high level) 1. *Trusted hardware root-of-trust* in the camera (a secure element or TPM) holds a private key that never leaves the device. 2. At capture time the camera: - Records the raw sensor data (frames) and metadata (timestamp, GPS fix, IMU readings, camera settings). - Computes cryptographic digests (hashes) of the raw data in small chunks (e.g., per frame or per second). - Attaches metadata (time, GPS coordinates, firmware version, monotonic counter, device ID). - Signs those digests+metadata with the device private key (e.g., ECDSA or Ed25519 on SHA-256). - Optionally writes hashes into an append-only local log and transmits the log to an external witness / timestamping service (see below). 3. The camera outputs a *signed video bundle* that contains the video frames (or encoded stream), the metadata, signatures, and the device certificate chain (public key certs). 4. To verify later, a verifier checks: - The signature is valid under the device public key.
- The device certificate is trusted (issued by a known CA or manufacturer). - The timestamp and GPS claims are consistent with other records (witness servers, block anchors, IMU). - The video frames match the signed hashes (so no editing occurred). ------------------------------ Concrete components (recommended) - *Secure element* (hardware root-of-trust): stores private key and performs signing operations. - *Monotonic counter / secure clock*: prevents replay of old signatures and provides tamper-evidence. - *Signed metadata per chunk*: include (frame-range, SHA-256(frame data), timestamp, latitude, longitude, altitude, IMU snapshot, firmware hash, counter). - *Chain-of-hashes*: link chunk i to i-1 by including previous chunk hash in the next signed metadata (hash chain) so removal/re-ordering is detectable. - *External time/witnessing*: - Send periodic signed digests (or full signed metadata) to one or more independent servers (over TLS) that log the receipt time; this provides an external timestamp and cross-check. - Optionally anchor digests to a public append-only ledger (e.g., a blockchain or public notarization service) so signatures can be independently verified later even if device is destroyed. - *Certificate management*: - Each device has a certificate signed by a manufacturer CA (or trusted PKI). - Include certificate revocation checking (OCSP/CRL) for long-term validity claims. - *GNSS anti-spoofing / multi-source positioning*: - Use GNSS receiver that supports anti-spoofing or authenticated GNSS when available. - Cross-check GPS with IMU/vision-based odometry and with network-based location when online. - *Forensic metadata*: - Include raw sensor noise fingerprint, lens parameters, EXIF-like details that are hard to fabricate precisely. 
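The chunked hash-chain signing described above can be sketched in a few lines. This is only an illustration, not a real camera API: HMAC-SHA256 stands in for the asymmetric ECDSA/Ed25519 signature a real device would compute with a key held in its secure element, and `DEVICE_KEY` and the field names are hypothetical (chosen to match the illustrative JSON format below).

```python
# Minimal sketch of per-chunk hash-chain signing.
# HMAC-SHA256 is a stand-in for the device's asymmetric signature;
# DEVICE_KEY and all field names are hypothetical.
import hashlib
import hmac
import json

DEVICE_KEY = b"secret-key-inside-secure-element"  # hypothetical key material


def sign_chunk(index, frame_bytes, prev_hash, metadata):
    """Hash one chunk of frames, link it to the previous chunk, and sign."""
    record = {
        "chunk_index": index,
        "frame_hash": "sha256:" + hashlib.sha256(frame_bytes).hexdigest(),
        "prev_hash": prev_hash,  # links chunk i to chunk i-1
        **metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record


# Build a three-chunk chain; removing or reordering any chunk breaks it,
# because each record commits to the previous chunk's frame hash.
prev = "sha256:genesis"
chain = []
for i, frame in enumerate([b"frame-0", b"frame-1", b"frame-2"]):
    rec = sign_chunk(i, frame, prev, {"timestamp_utc": f"2025-10-03T12:34:5{i}Z"})
    chain.append(rec)
    prev = rec["frame_hash"]
```

A verifier holding the corresponding public key (here, the same HMAC key) can recompute each payload and confirm the chain is unbroken.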
------------------------------ Example of a signed chunk (illustrative JSON) { "device_id": "CAM12345", "chunk_index": 42, "timestamp_utc": "2025-10-03T12:34:56Z", "gps": {"lat": 40.7128, "lon": -74.0060, "alt": 10.2}, "imu": {"accel": [...], "gyro": [...]}, "frame_hash": "sha256:3a7bd3...", "prev_hash": "sha256:9f2c1b...", "firmware_hash": "sha256:ab12cd...", "monotonic_counter": 1024, "signature": "ecdsa-secp256r1:3045..." } (Verifier would check signature using the device public key, ensure frame_hash matches the file, confirm prev_hash chains, and validate certificate status.) ------------------------------ Verification steps an auditor would perform 1. Verify signature(s) with the public key included in the device certificate chain. 2. Validate the certificate chain to a trusted root and check for revocation. 3. Recompute hashes of the video frames. Confirm they match the signed frame_hash values and that hash chain is intact. 4. Compare timestamps and GPS to external witnesses (server receipts, network logs) and to the device's IMU/vision consistency. 5. Check firmware hash and device provenance to detect a tampered camera. 6. If anchored to a public ledger, verify the digest appears at the stated ledger time. ------------------------------ Main attack/failure modes (and mitigations) - *Private key compromise*: if attacker extracts the device private key they can sign fakes. *Mitigation:* secure element, tamper-evident hardware, key destruction on tamper detection, certificate revocation. - *GPS spoofing*: attacker can feed fake GNSS signals to claim a false location/time. *Mitigation:* GNSS anti-spoofing, cross-check with IMU and external witnesses, multiple positioning sources. - *Pre-recording / staging*: video could be recorded earlier and only signed later, or a fake staged event could be captured by a compromised camera.
*Mitigation:* real-time external challenge-response: require the camera to include unpredictable external data (a nonce) from a trusted server at capture time; or streaming signed frames to a separate server that records the receipt time. - *Firmware compromise*: a malicious firmware could sign doctored content. *Mitigation:* secure boot, firmware attestations, signed firmware images and inclusion of firmware hash in signatures. - *Chain-of-custody / legal admissibility*: even technical proof may be disputed in court if the PKI, manufacturer practices, or witness services are untrusted. *Mitigation:* well-documented standards, independent audits, multiple independent witnesses. ------------------------------ Practicality & realism - Technically feasible now: many building blocks exist (secure elements, digital signatures, timestamping services). - Deploying this broadly is an engineering + policy effort: manufacturers must include secure hardware, certificate infrastructure, anti-spoofing features, and accepted auditing standards. - Even with perfect signing, signatures prove the camera *produced* and *signed* the bits at a claimed time/location; they don't prove the *semantics* (that what appears in the frame is an uncontrived or non-staged event). Human context and corroborating evidence remain important. ------------------------------ Additional strong protections you can add - *Live streaming of signed frames* to an independent server (witness) prevents post-capture tampering and gives a realtime external log. - *Multi-party notarization*: have multiple unrelated servers each log the digest; this increases the difficulty for an attacker to fake all witnesses. - *Randomized visible challenge*: the verifier or witness server issues a random code that the camera must visibly display (or incorporate into audio/visual) and sign; this proves liveness and synchrony. - *Public anchoring*: anchor chunk hashes daily to a public immutable ledger for long-term proof.
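The auditor's checks (steps 1-4 above) can be sketched as a small verifier. Again this is illustrative only: HMAC-SHA256 stands in for real public-key verification against a device certificate, and every name here (`DEVICE_KEY`, `make_chunk`, `verify_bundle`) is hypothetical, not an existing API.

```python
# Sketch of bundle verification: recompute frame hashes, confirm the
# prev_hash chain, and check each chunk's signature.
# HMAC is a stand-in for certificate-based signature verification.
import hashlib
import hmac
import json

DEVICE_KEY = b"secret-key-inside-secure-element"  # hypothetical


def make_chunk(index, frame, prev_hash):
    """Produce one signed, chained chunk record (the camera side)."""
    rec = {
        "chunk_index": index,
        "frame_hash": "sha256:" + hashlib.sha256(frame).hexdigest(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(rec, sort_keys=True).encode()
    rec["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return rec


def verify_bundle(frames, chunks):
    """Return True only if signatures, frame hashes, and chain all check out."""
    if len(frames) != len(chunks):
        return False
    prev = "sha256:genesis"
    for frame, rec in zip(frames, chunks):
        body = {k: v for k, v in rec.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, rec["signature"]):
            return False  # signature invalid: metadata forged
        if rec["frame_hash"] != "sha256:" + hashlib.sha256(frame).hexdigest():
            return False  # frame bytes were edited after signing
        if rec["prev_hash"] != prev:
            return False  # chunk removed or reordered
        prev = rec["frame_hash"]
    return True


# Demo: an intact bundle verifies; an edited frame does not.
frames = [b"frame-0", b"frame-1", b"frame-2"]
chunks, prev = [], "sha256:genesis"
for i, f in enumerate(frames):
    chunks.append(make_chunk(i, f, prev))
    prev = chunks[-1]["frame_hash"]

ok = verify_bundle(frames, chunks)
tampered = verify_bundle([b"frame-0", b"EDITED", b"frame-2"], chunks)
```

Note that dropping the first chunk also fails, since the survivor's `prev_hash` no longer matches the genesis value; that is the point of chaining.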
------------------------------ Bottom line Yes, you can make video files that are *cryptographically provably* the exact bytes a camera produced at a claimed time and place, provided the camera's hardware, firmware, and the ecosystem (PKI, witnesses, anti-spoofing) are trustworthy. That system raises the bar substantially against deepfakes and post-capture edits, but it is *not* a perfect silver bullet: attackers who compromise keys, firmware, or the capture process itself (staging, spoofing) can still produce convincing fakes. Strong system design (secure hardware, external witnesses, anti-spoofing, and audited procedures) is required to make such proofs robust in practice. ------------------------------ If you'd like, I can: - Draft a short technical spec for a "forensic-signing camera" (interfaces, signed bundle format, certificate lifecycle), or - Produce a sample verifier script (pseudocode/JSON examples) that demonstrates how to check signatures and hashes. Which would you prefer? *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Oct 4 03:15:42 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 3 Oct 2025 20:15:42 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> Message-ID: <006b01dc34dd$2b282420$81786c60$@rainier66.com> From: John Clark Sent: Friday, 3 October, 2025 7:55 PM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book On Fri, Oct 3, 2025 at 6:41 PM > wrote: > John I do urge a nice relaxed and friendly hello to old friends before jumping into the business if you wish. You are among friends here. As far as I know there are no hard feelings about anything that has gone before, Thanks Spike. John K Clark John, so where have you been, what kindsa stuff interests you these days, etc? All healthy and good in your home?
Last I heard Florida? No hurricane injuries or serious fatalities in that, ja? You are following Eliezer as plenty of us are, I see. Clarification on the term following: reading about what he is doing, not necessarily reading his book, following as in following the news on him, not necessarily reading his site. You sound not as skeptical as he is perhaps on the future of AI nor the fate of humanity, nor am I. What the heck else? New hobbies or anything? John we like the human side of you as well as the internet side. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Oct 4 10:21:59 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 4 Oct 2025 11:21:59 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On 04/10/2025 04:00, Keith Henson wrote: > Uploaded humans living in private spaces don't have to agree on > anything. Their simulated world can be anything they like, including > simulated slaves to beat. Not my ideal world, but I am sure there > will be some who want it. Uploading only solves the problem for people who want to and can upload (and have control of their simulations). I expect it would be very likely that there will be plenty of people who want to subjugate 'real' people, in the 'real' world, and would reject uploading, as well as plenty of people who are prevented from uploading (like everyone in all the autocratic and theocratic regimes for a start, so several billion). Some kind of consensus on the 'correct' (or even acceptable) values is extremely unlikely. The 'Universal declaration of human rights' is a lot less universal than you might think, for example. Article 2* in particular is simply false, if you take it as stating a fact rather than expressing an aspiration. Certainly many, if not most, nations disagree with it in practice. 
And apart from anything else, 'Freedom to upload' isn't in it, of course. I'm now wondering if even some western nations might not enact legislation to outlaw uploading, once it becomes possible. It wouldn't surprise me, there are plenty of people who regard it as tantamount to suicide, even among people who regard themselves as transhumanists or materialists. It wouldn't be hard for a government to conclude that it should be illegal (or that uploads don't count as human, and don't have any rights** at all). Look at how most governments responded to cryptocurrencies. If anything threatens their control, they will oppose it. This is just as true of liberal democracies as it is of all the other forms of government, the main advantage of democracies is that they change quicker and with less disruption than more traditional dictatorships. Voluntary agreement on a single set of values to cover all humans is not just unlikely, it's probably impossible (just another reason why Yudkowsky's book is wrong). -- Ben * "Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty" ** I don't remember who it was that pointed out that your rights only extend as far as your ability to enforce them. 
Ben From ben at zaiboc.net Sat Oct 4 10:31:40 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 4 Oct 2025 11:31:40 +0100 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: Message-ID: <3d8d2a98-3d34-4110-ae4d-64a5496655ec@zaiboc.net> On 04/10/2025 04:00, John K Clark wrote: > The authors' proposed remedy to avoid the calamity they foresee is an immediate and total worldwide ban on AI research, even the publication of abstract mathematical research articles on the subject would be illegal, and all data centres, defined as any building that contains more computational ability than 8 state of the art (as of 2025) GPUs, would also be illegal. If any rogue nation attempts to build a data centre more powerful than that then the rest of the world should use any means necessary, up to and including nuclear weapons, to prevent that nation from finishing construction of that data centre. That, if it was possible, would be a recipe for disaster. Mainly because it's not possible, and would lead to the development, in secret, of AI that has a good chance of being aligned with the interests of the sort of regimes that would do that sort of thing: IOW, the ones least concerned with general human well-being. Assuming for a moment that it was possible, though, it would just result in WW3. If it was possible, and didn't lead to WW3, it would result in a coalition of the most oppressive police states the world has ever known. So basically, they are proposing a choice between a Terrible Disaster, Certain Disaster and a Dystopian Disaster, in order to prevent a Badly-Conceived Possible Disaster. I know which of those four I would choose. -- Ben PS, As spike says, welcome back, John. You have been missed. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at zaiboc.net Sat Oct 4 10:36:45 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 4 Oct 2025 11:36:45 +0100 Subject: [ExI] Deep Fakes In-Reply-To: References: Message-ID: On 04/10/2025 04:00, John K Clark wrote: > > > Bottom line > > Yes, you can make video files that are *cryptographically > provably* the exact bytes a camera produced at a claimed time and > place, provided the camera's hardware, firmware, and the ecosystem > (PKI, witnesses, anti-spoofing) are trustworthy. That system raises > the bar substantially against deepfakes and post-capture edits, but it > is *not* a perfect silver bullet: attackers who compromise keys, > firmware, or the capture process itself (staging, spoofing) can still > produce convincing fakes. Strong system design (secure hardware, > external witnesses, anti-spoofing, and audited procedures) is required > to make such proofs robust in practice. > > ------------------------------------------------------------------------ So would it be fair to say that it's possible, but not really practical for most purposes, but would actually be extremely useful to certain regimes interested in 'proving' that their propaganda is in fact true, and in plausibly discrediting services like BBC Verify, etc.? -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Oct 4 11:02:39 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 07:02:39 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <006b01dc34dd$2b282420$81786c60$@rainier66.com> References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: On Fri, Oct 3, 2025 at 11:15 PM wrote: > *> John, so where have you been, what kindsa stuff interests you these > days, etc? All healthy and good in your home?
You are following > Eliezer as plenty of us are, I see. Clarification on the term following: > reading about what he is doing, not necessarily reading his book, following > as in following the news on him, not necessarily reading his site. You > sound not as skeptical as he is perhaps on the future of AI nor the fate of > humanity, nor am I. What the heck else? New hobbies or anything? John we > like the human side of you as well as the internet side.* *Spike, I appreciate your warm welcome back message. I'm thinking of moving to Arizona but right now I am still in Florida and I'm still pretty healthy for an old fart. I retired a few years ago from my job as an electrical engineer and so I have become a "gentleman of leisure", in other words a bum, but a happy bum who is still lucky enough to be able to maintain a comfortable lifestyle. I don't think biological humans will be around in 50 years, perhaps not in 5. However I am more optimistic than Eliezer, but then anybody who is not certain of impending oblivion would be. * *On that note I'd like to ask the people around here a question, what is your opinion of Universal Basic Income (UBI), a government-provided, unconditional, and periodic cash payment to all residents, regardless of their work status or income? Just a few years ago the idea would've seemed absurd but now if there is going to be any hope of biological humans surviving I think something like it is going to have to be instituted in the next 10 years, probably in the next 5. The stunning advance in AI during the last three years has rendered moot many (but not all) of the political arguments that seemed so important the last time I was on this list, and that's why I decided to rejoin it. * * John K Clark * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Sat Oct 4 11:18:16 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 07:18:16 -0400 Subject: [ExI] Deep Fakes In-Reply-To: References: Message-ID: On Sat, Oct 4, 2025 at 6:38 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> So would it be fair to say that it's possible, but not really practical > for most purposes,* *I wish responsible news organizations and OpenAI, Anthropic, Google, and Microsoft would get together and form a research organization with the goal of making this technology practical and affordable; the price would be pocket change compared with the cost of building just one modern data center. And I'll bet it could be developed in just a few months, especially with AI helping. John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Oct 4 11:19:31 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 4 Oct 2025 04:19:31 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: <002f01dc3520$c204eac0$460ec040$@rainier66.com> From: John Clark . > On that note I'd like to ask the people around here a question, what is your opinion of Universal Basic Income (UBI), a government-provided, unconditional, and periodic cash payment to all residents, regardless of their work status or income? ... John K Clark UBI is a great idea, if implemented correctly: at the state level and local level. The US government has already run out of other people's money and is now having to borrow other people's money like crazy to cover the retirement fund (other people's money) it borrowed and spent in the 1990s.
By the time that fund is paid off in about 9 years, the US government will be little more than a debt slave, powerless, few employees, most of whom are engaged in collecting tax money to pay interest on federal treasury bills. But there are plenty of states which can still do a UBI, and within those states which cannot carry that load effectively are cities which can do that. I live in a city which has a version of UBI: our local food bank is well stocked and sustained by volunteers. It has been providing a food equivalent of a basic unconditional income to those for whom that is a meaningful need for decades. That works. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Oct 4 12:06:37 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 08:06:37 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <002f01dc3520$c204eac0$460ec040$@rainier66.com> References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> <002f01dc3520$c204eac0$460ec040$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 7:19 AM wrote: > * > The US government has already run out of other people's money and is > now having to borrow other people's money* > *The very concept of "money" is about to be radically changed and so is the concept of "the US government". * > *By the time that fund is paid off in about 9 years....* *9 years?! We are now living in the age of AI and 9 years is a virtual geological age. I don't feel comfortable making an economic prediction about what things will be like in 9 months, never mind 9 years. * *I take that back, I do feel comfortable making one economic prediction, in 9 years the total productivity of this planet will be ENORMOUSLY greater than it is now.
But I make no prediction about what will be controlling that astronomically huge newly generated wealth. I hope not, but it's entirely possible that in 9 years events will have proven that Eliezer was right.* *> I live in a city which has a version of UBI: our local food bank is well > stocked and sustained by volunteers.* *I'm not talking about that, I'm talking about everybody automatically receiving enough money to have a comfortable upper middle-class lifestyle. And yes if I made that suggestion a few years ago you would be entirely justified in calling it idiotic, but these days not so much. I'm talking about the impending Singularity, and that renders old style political debates obsolete. We are rapidly entering an era where a machine can perform ANY job better than ANY Human, so if biological humans are to survive, what is the alternative to UBI?* *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Oct 4 12:21:47 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 4 Oct 2025 08:21:47 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 7:04 AM John Clark via extropy-chat wrote: > On that note I'd like to ask the people around here a question, what is your opinion of Universal Basic Income (UBI), a government-provided, unconditional, and periodic cash payment to all residents, regardless of their work status or income? A few concerns off the top of my head - not that they can't be dealt with, but any realistic proposal might want to address these quickly. 1) Inflation effects. If everyone gets, say, $3,000/month, will even the cheapest apartment in the most remote town have rent less than that?
(Leaving aside those who are homeless because they don't want to be housed, or are incapable of not trashing their living place like a feral animal.) 2) Fraud. Prove that someone is a resident, and not a fictitious entity. This will probably be less of an issue than proving that a person's UBI does go to them, or at least to their sustainment (food, rent, et al. via autopay), regardless of other peoples' attempts to repurpose their income away from their survival. 3) Special cases for children. Does this UBI apply the same from the moment one is born? Since minors can't normally have their own bank accounts (or at least, newborns can't), does the money go to their parents? $3,000/month would approximate or exceed today's average cost of raising someone from 0 to 18 years. This by itself would heavily encourage having more babies (which many in the industrialized societies that might have UBI would see as a good thing, possibly enough to sell them on the idea). From johnkclark at gmail.com Sat Oct 4 12:22:54 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 08:22:54 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <3d8d2a98-3d34-4110-ae4d-64a5496655ec@zaiboc.net> References: <3d8d2a98-3d34-4110-ae4d-64a5496655ec@zaiboc.net> Message-ID: On Sat, Oct 4, 2025 at 6:33 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> If it was possible, and didn't lead to WW3, * *Maybe I'm being uncharitable because they didn't actually say so but I suspect they would argue that WW3 would kill billions of people but it wouldn't cause the extinction of the entire human race, but their AI apocalypse would. * *> it would result in a coalition of the most oppressive police states the > world has ever known.* *Yeah, that would be a disadvantage.
* *> So basically, they are proposing a choice between a Terrible Disaster, > Certain Disaster and a Dystopian Disaster, in order to prevent a > Badly-Conceived Possible Disaster. I know which of those four I would > choose.* *I too know which I would choose, and I have a hunch we would choose the same thing. * *> PS, As spike says, welcome back, John. You have been missed.* *Thanks Ben, I really appreciate that! * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Oct 4 12:47:58 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 4 Oct 2025 08:47:58 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> <002f01dc3520$c204eac0$460ec040$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 8:08 AM John Clark via extropy-chat wrote: > 9 years?! We are now living in the age of AI and 9 years is a virtual geological age. I don't feel comfortable making an economic prediction about what things will be like in 9 months, never mind 9 years. > > I take that back, I do feel comfortable making one economic prediction, in 9 years the total productivity of this planet will be ENORMOUSLY greater than it is now. But I make no prediction about what will be controlling that astronomically huge newly generated wealth, I hope not but it's entirely possible that in 9 years events will have proven that Eliezer was right. If I cared to wager any significant amount of money (I do not), I would bet that the total, inflation-adjusted GDP of humanity and its creations (including all revenue generation in our solar system) in 2034 will be less than 100 times what it was in 2024, as measured by the World Bank or similar worldwide financial institutions.
https://ourworldindata.org/grapher/global-gdp-over-the-long-run?time=1960..latest shows that 2004-2014 had a roughly 1.4x increase in world GDP, and another approximately 1.4x (actually, just above 1.35x) 2014-2024 (granted, that includes COVID's downturn). https://tradingeconomics.com/world/gdp-ppp-us-dollar-wb-data.html says 2014-2024 was closer to 1.8x. So...another 1.4x or 1.8x? Quite probably, though the rise in authoritarianism might derail that. 2x? Sure, it could happen. But 100x? No, even with AI accelerating things, the data does not suggest that is likely. Depending on what you count as "true" AI, we've had AI for at least a few years now, and it has yet to radically increase the slope of increased productivity, though it has arguably sustained that slope. Which is not to say the Singularity isn't going to happen someday, but it will likely be heralded by an increase in that slope, which we've yet to see. The Singularity won't be tomorrow. From johnkclark at gmail.com Sat Oct 4 12:53:20 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 08:53:20 -0400 Subject: [ExI] Pantheon Message-ID: *Has anybody around here seen the animated Netflix series "Pantheon"? The animation is not Pixar quality but it's not bad and I've never seen a better depiction of uploading in any movie or TV series. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sat Oct 4 13:21:13 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 4 Oct 2025 06:21:13 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> <002f01dc3520$c204eac0$460ec040$@rainier66.com> Message-ID: <006c01dc3531$c2537030$46fa5090$@rainier66.com> From: John Clark Sent: Saturday, 4 October, 2025 5:07 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book On Sat, Oct 4, 2025 at 7:19 AM > wrote: > The US government has already run out of other people's money and is now having to borrow other people's money The very concept of "money" is about to be radically changed and so is the concept of "the US government"... OK no worries. When the concept of money radically changes and the concept of US government radically changes, then we talk about UBI. At the state level. We don't trust a rapidly changing US government, being as it is rapidly changing. > By the time that fund is paid off in about 9 years.... 9 years?! We are now living in the age of AI and 9 years is a virtual geological age... Ja. But that debt still needs to be repaid just the same. Pensioners depend on it. >...I take that back, I do feel comfortable making one economic prediction, in 9 years the total productivity of this planet will be ENORMOUSLY greater than it is now... Good. Then all our problems will be fading in the rear view mirror, solved by sheer productivity increases. >...But I make no prediction about what will be controlling that astronomically huge newly generated wealth... I can help you there. I have some great plans on how I will control the part of that astronomically huge newly generated wealth. It will be such fun. >...I hope not but it's entirely possible that in 9 years events will have proven that Eliezer was right... John, you know what is in that book, ja? You are hoping for that? > I live in a city which has a version of UBI: our local food bank is well stocked and sustained by volunteers. >...I'm not talking about that, I'm talking about everybody automatically receiving enough money to have a comfortable upper middle-class lifestyle... Middle class? With all that astronomically huge newly generated wealth? >...so if biological humans are to survive, what is the alternative to UBI? John K Clark I would take investing in hugely astronomical newly generated wealth. That outcome requires superhuman AI of course, since they will take over the jobs. They will be smart enough to figure out how to get people all the UBI they need without having to take anything away from existing biological humans. It will be GREAT! We will love it in there at the data center. We will wonder why we clung to our religions and guns and biological bodies. Until that time of course, we do the best we can with what we have. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Oct 4 13:24:57 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 09:24:57 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> <002f01dc3520$c204eac0$460ec040$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 8:50 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: * >> I do feel comfortable making one economic prediction, in 9 years the >> total productivity of this planet will be ENORMOUSLY greater than it is >> now.
But I make no prediction about what will be controlling that >> astronomically huge newly generated wealth, I hope not but it's entirely >> possible that in 9 years events will have proven that Eliezer was right.* > > > > > > > * > If I cared to wager any significant amount of money (I do not), I > would bet that the total, inflation-adjusted GDP of humanity and its > creations (including all revenue generation in our solar system) in 2034 > will be less than 100 times what it was in 2024, as measured by the World > Bank or similar worldwide financial institutions.* > *In 2034 I don't think there will be a World Bank and I doubt there will be any financial institution similar to it. I don't know what there will be, that's why it's called a singularity. * > > > > > > > > *> https://ourworldindata.org/grapher/global-gdp-over-the-long-run?time=1960..latest > > shows that 2004-2014 had a roughly 1.4x increase in world GDP, and another > approximately 1.4x (actually, just above 1.35x) 2014-2024 (granted, that > includes COVID's downturn). > https://tradingeconomics.com/world/gdp-ppp-us-dollar-wb-data.html > says > 2014-2024 was closer to 1.8x. So...another 1.4x or 1.8x? Quite probably, > though the rise in authoritarianism might derail that.* > *If the AI revolution produced a linear increase in economic productivity then prediction would be easy, all you need would be a page of plotting paper and a straight edge. But exponential growth makes things more complicated. * > > * > 2x? Sure, it could happen. But 100x? No,* *I agree, a 100x increase in economic productivity by 2034 is totally unrealistic, 10,000x would be closer to the mark, but even that may be too conservative. 
* >* The Singularity won't be tomorrow.* *I remember about 20 years ago on this very list I said if the singularity won't happen for 1000 years then in 999 years it will still seem like a very long way away to most people because more advancement will happen in that last year than the previous 999 combined. That's the way things behave when they become exponential. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Oct 4 13:31:20 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 4 Oct 2025 06:31:20 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: <009301dc3533$2c4f48f0$84eddad0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat ... 1) Inflation effects. If everyone gets, say, $3,000/month, will even the cheapest apartment in the most remote town have rent less than that? 3000 bucks a month Adrian? Take a rich state government such as California. OK bad example, California is having bad budget deficit problems. Take a rich state government such as Idaho. Their UBI would likely be more like 300 bucks a month, not 3000, and it wouldn't be universal. It would go to the most needy only. I don't think anyone believes any UBI any state, county or even city can supply will cover an apartment, certainly not where you grew up and where I live. 
spike From atymes at gmail.com Sat Oct 4 13:36:22 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 4 Oct 2025 09:36:22 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> <002f01dc3520$c204eac0$460ec040$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 9:25 AM John Clark wrote: > If the AI revolution produced a linear increase in economic productivity then prediction would be easy, all you need would be a page of plotting paper and a straight edge. But exponential growth makes things more complicated. "Exponential growth eventually" doesn't mean "infinite growth right now". All signs are that the Singularity is still at least 10 years away. From atymes at gmail.com Sat Oct 4 13:42:17 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 4 Oct 2025 09:42:17 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <009301dc3533$2c4f48f0$84eddad0$@rainier66.com> References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> <009301dc3533$2c4f48f0$84eddad0$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 9:31 AM wrote: > OK bad example, California is having bad budget deficit problems. Take a rich state government such as Idaho. Their UBI would likely be more like 300 bucks a month, not 3000, and it wouldn't be universal. It would go to the most needy only. > > I don't think anyone believes any UBI any state, county or even city can supply will cover an apartment, certainly not where you grew up and where I live. Indeed, and this is one of the practical problems.
A UBI, to be worth the name, needs to be universal - applies to everyone in the covered population without exception (there is no "the most needy only"; it can be geographically limited, and possibly "citizens only" if "citizens" covers everyone or almost everyone in that area, but not much more limited than that) - and basic - covering food, shelter, and basic utilities. If that comes to $5K per month per person, then that's what a UBI would need to cover. Something that only goes to a small fraction of people, and still leaves those people struggling to survive, doesn't solve the problem that UBI is meant to solve. From atymes at gmail.com Sat Oct 4 13:45:49 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 4 Oct 2025 09:45:49 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On Sat, Oct 4, 2025 at 6:23 AM Ben Zaiboc via extropy-chat wrote: > The 'Universal declaration of human rights' is a lot less universal than > you might think, for example. Article 2* in particular is simply false, > if you take it as stating a fact rather than expressing an aspiration. It is true for its own purposes, and it does mean it as a fact. However... > Certainly many, if not most, nations disagree with it in practice. A declaration is just a declaration, if not enforced.
Subsubtitle: A practical guide to surviving the singularity in biological form. Subsubsubtitle: if you really really want to. It would be a cluttery cover. There would be some dark stuff in there. I don't want my only book to be that way. I am considering a breezy memoir, which is more my style, won't sell five copies, but it would be more me than a biological-guy's guide to survival. spike From johnkclark at gmail.com Sat Oct 4 18:56:13 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 14:56:13 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On Sat, Oct 4, 2025 at 6:24 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> I'm now wondering if even some western nations might not enact legislation to outlaw uploading, once it becomes possible. It wouldn't surprise me, there are plenty of people who regard it as tantamount to suicide, even among people who regard themselves as transhumanists or materialists. It wouldn't be hard for a government to conclude that it should be illegal (or that uploads don't count as human, and don't have any rights** at all).* *The thing is, you're assuming that when uploading not only becomes possible but is also practical and inexpensive enough for wide scale use, human beings will still be the ones in the driver's seat, and I don't believe that will be the case. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brent.allsop at gmail.com Sat Oct 4 19:22:50 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 4 Oct 2025 13:22:50 -0600 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: In some cases, Mother Nature (or God, or whoever you think is our creator) has miswired our reward system (phenomenal joys) with bad things (hurting others). But once we learn how to do phenomenal engineering, there is no reason for any of this to be the case. Being able to choose what you want to want, and having the ability to correct miswired rewards like this is what true freedom is. So to think that truly intelligently designed beings will have problems like this seems wrong to me, and nothing to worry about. Again, I think AIs will save us from all this primitive, still broken irrationality. On Fri, Oct 3, 2025 at 9:20 AM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Uploaded humans living in private spaces don't have to agree on > anything. Their simulated world can be anything they like, including > simulated slaves to beat. Not my ideal world, but I am sure there > will be some who want it. > > Keith > > On Fri, Oct 3, 2025 at 2:37 AM BillK via extropy-chat > wrote: > > > > On Fri, 3 Oct 2025 at 06:26, Adam A. Ford wrote: > >> > >> > Getting what we desire may cause us to go extinct > >> Perhaps what we need is indirect normativity > >> > >> Kind regards, Adam A. Ford > >> Science, Technology & the Future > >> _______________________________________________ > > > > > > > > Yes, everybody agrees that AI alignment is a problem that needs to be > solved. :) > > And using initial versions of AI to assist in devising alignment rules > is a good idea. After all, we will be using AI to assist in designing > everything else! > > I see a few problems though.
The early versions of AI are likely to be > aligned to fairly specific values. Say, for example, in line with the > values of the richest man in the world. This is unlikely to iterate into > ethical versions suitable for humanity as a whole. > > The whole alignment problem runs up against the conflicting beliefs and > world views of the widely different groups of humanity. > > These are not just theoretical differences of opinion. These are > fundamental conflicts, leading to wars and destruction. > > An AGI will have to be exceptionally persuasive to get all humans to > agree with the final ethical system that it designs! > > > > BillK > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Oct 4 19:46:49 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 4 Oct 2025 12:46:49 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: Ben, I don't think you have thought it through. Technology to upload should permit memory updating in a biological brain. Which means you can upload for a weekend and resume being a meat human on Monday. The only limit that might be imposed is a "one at a time" rule for practical physical reasons. If AI clinics like the one in The Clinic Seed become common, I don't see how religious or authoritarian regimes can do anything about people uploading. The rights of uploads are an unsolved problem. On the other hand, an upload does not need much. I am familiar with the UN Declaration of Human Rights.
See Star Laws, which my wife and I wrote 43 years ago. Keith On Sat, Oct 4, 2025 at 3:23?AM Ben Zaiboc via extropy-chat wrote: > > On 04/10/2025 04:00, Keith Henson wrote: > > Uploaded humans living in private spaces don't have to agree on > > anything. Their simulated world can be anything they like, including > > simulated slaves to beat. Not my ideal world, but I am sure there > > will be some who want it. > > Uploading only solves the problem for people who want to and can upload > (and have control of their simulations). > > I expect it would be very likely that there will be plenty of people who > want to subjugate 'real' people, in the 'real' world, and would reject > uploading, as well as plenty of people who are prevented from uploading > (like everyone in all the autocratic and theocratic regimes for a start, > so several billion). > > Some kind of consensus on the 'correct' (or even acceptable) values is > extremely unlikely. > > The 'Universal declaration of human rights' is a lot less universal than > you might think, for example. Article 2* in particular is simply false, > if you take it as stating a fact rather than expressing an aspiration. > Certainly many, if not most, nations disagree with it in practice. And > apart from anything else, 'Freedom to upload' isn't in it, of course. > I'm now wondering if even some western nations might not enact > legislation to outlaw uploading, once it becomes possible. It wouldn't > surprise me, there are plenty of people who regard it as tantamount to > suicide, even among people who regard themselves as transhumanists or > materialists. It wouldn't be hard for a government to conclude that it > should be illegal (or that uploads don't count as human, and don't have > any rights** at all). Look at how most governments responded to > cryptocurrencies. If anything threatens their control, they will oppose > it. 
This is just as true of liberal democracies as it is of all the > other forms of government, the main advantage of democracies is that > they change quicker and with less disruption than more traditional > dictatorships. > > Voluntary agreement on a single set of values to cover all humans is not > just unlikely, it's probably impossible (just another reason why > Yudkowsky's book is wrong). > > -- > Ben > > * "Everyone is entitled to all the rights and freedoms set forth in this > Declaration, without distinction of any kind, such as race, colour, > sex, language, religion, political or other opinion, national or social > origin, property, birth or other status. Furthermore, no distinction > shall be made on the basis of the political, jurisdictional or > international status of the country or territory to which a person > belongs, whether it be independent, trust, non-self-governing or under > any other limitation of sovereignty" > > > ** I don't remember who it was that pointed out that your rights only extend as far as your ability to enforce them. > > Ben > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Sat Oct 4 19:47:37 2025 From: johnkclark at gmail.com (John Clark) Date: Sat, 4 Oct 2025 15:47:37 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <021001dc32f0$c554ef90$4ffeceb0$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 3:25 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> In some cases, Mother Nature (or God, or whoever you think is our > creator) has miswired our reward system (phenomenal joys) with bad > things (hurting others). But once we learn how to do phenomenal > engineering, there is no reason for any of this to be the case. Being able > to choose what you want to want, and having the ability to correct miswired > rewards like this is what true freedom is. So to think that truly > intelligently designed beings will have problems like this seems wrong to > me, and nothing to worry about. * > *I think having complete control of your emotional control panel is something to worry about and I've thought so for a long time. I wrote the following to the old Cryonics Mailing List on January 19, 1994: * *"Ever want to accomplish something but have been unable to because it's difficult, well just change your goal in life to something simple and do that; better yet, flood your mind with a feeling of pride and self satisfaction and don't bother accomplishing anything at all. Think all this is a terrible idea and stupid as well, no problem, just change your mind (and I do mean CHANGE YOUR MIND) now you think it's a wonderful idea. O.K., O.K. I'm exaggerating a little, the steps would probably be smaller, at least at first, but the result would be the same. I don't have the blueprints for a Jupiter brain in my pocket but I do know that complex mechanisms don't do well in a positive feedback loop, not electronics, not animals, not people and not Jupiter brains. True, you could probably set up negative feedback of some kind to counteract it, but that would result in a decrease in happiness so would you really want to do that?"* *The explanation to the Fermi paradox may not be that extraterrestrial civilizations end in a bang or a whimper, but in a moan of orgastic pleasure. ET might be an electronic junkie. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at zaiboc.net Sat Oct 4 20:00:04 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 4 Oct 2025 21:00:04 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On 04/10/2025 19:56, John Clark wrote: > > > On Sat, Oct 4, 2025 at 6:24 AM Ben Zaiboc via extropy-chat > wrote: > > /> I'm now wondering if even some western nations might not enact > legislation to outlaw uploading, once it becomes possible. It wouldn't > surprise me, there are plenty of people who regard it as tantamount to > suicide, even among people who regard themselves as transhumanists or > materialists. It wouldn't be hard for a government to conclude that it > should be illegal (or that uploads don't count as human, and don't > have > any rights** at all)./ > > > *The thing is, you're assuming that when uploading not only becomes > possible but is also practical and inexpensive enough for wide scale > use, human beings will still be the ones in the driver's seat, and I > don't believe that will be the case. * > * > * > *John K Clark * Good point. I hope you're right. This also relates to your question about UBI, I think. The dramatic increase in productivity we're expecting could enable UBI globally (leaving other considerations like political issues aside), and if this coincides with mass availability of uploading, it wouldn't just enable a comfortable upper middle-class lifestyle, it would enable an unimaginably luxurious lifestyle for everyone who wanted it (provided they are an upload), and a very comfortable lifestyle for all biological humans. /Provided the income from the increased productivity was distributed fairly to everyone./ Whether that's likely, and under what circumstances, is a different matter. Again, I reckon human beings being displaced from the driver's seat would be a minimum prerequisite.
All of this speculation suffers from the same problem as almost all fiction: Looking at a single application of whatever marvellous technology is being talked about, and not considering synergistic effects. Basically, we're speculating about what lies beyond the singularity, which we know is pretty pointless, except maybe as a mental preparation for massive change (which will still surprise all of us, one way or another). It is important, though, to know what you want, just in case there's any chance of being able to choose. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Oct 4 21:16:22 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 4 Oct 2025 22:16:22 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> On 04/10/2025 20:46, Keith Henson wrote: > Ben, I don't think you have thought it through. Almost certainly not! Thinking it through is what we're doing here, in this distributed brain we call Exi-Chat. > Technology to upload should permit memory updating in a biological > brain. I don't see that uploading necessarily implies this. It seems a much harder problem, and might not be possible at all without redesigning the brain, then rebuilding it, at which point we're probably not talking about biology any more, making it pretty pointless. Besides which, there very well might not be any brain left to update, after uploading. I suspect that destructive scanning will always be easier and quicker than non-destructive scanning, and developed sooner. A synthetic body and brain, sure. In fact I think it could be a standard feature of uploading. 
Instead of uploading to a server somewhere, together with millions of other minds, you could be uploaded to your own private server, residing in a synthetic body, with the ability to communicate with millions of other minds, build shared virtual spaces, etc., as well as act in the physical world. Or even both, with a mind in a shared server, and another mind in a synth. body, updating each other. Or more than two. Mental RAIDs! (By 'synthetic', I don't mean metal and plastic etc., I mean a technology that we don't yet have, that goes beyond biology. Some form of nanotech., almost certainly) > Which means you can upload for a weekend and resume being a > meat human on Monday. The only limit that might be imposed is a "one > at a time" rule for practical physical reasons > > If AI clinics like the one in The Clinic Seed become common, I don't > see how religious or authoritarian regimes can do anything about > people uploading. Only if the authoritarian regimes (which includes religious ones) lose their power over their subjects, which would mean they don't exist any more. And don't forget that one form of power over people is the ability to persuade (as opposed to coerce) them that they do or don't want to do particular things. Certain current regimes seem to be pretty good at that. The 'Clinic Seed' style of uploading availability is only one possibility (and to be honest, probably not the most likely). There are many other models that have been thought of, and more that haven't. Different models would probably suit different places as well (how would you adapt the Clinic Seed concept for use in an urban setting in, say, communist China? London? Nairobi?). But there's probably not much point speculating, because we're talking about singularity territory. Fun as it might be, any speculation is almost certainly bound to be wrong. 
-- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Oct 4 23:00:25 2025 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 4 Oct 2025 19:00:25 -0400 Subject: [ExI] Pantheon In-Reply-To: References: Message-ID: On Sat, Oct 4, 2025, 8:55 AM John Clark via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *Has anybody around here seen the animated Netflix series "Pantheon"? The > animation is not Pixar quality but it's not bad and I've never seen a > better depiction of uploading in any movie or TV series. * > I tried. It bored me. It started OK, but it fell apart at the point where uploaded dad plays World of Warcraft with cyber-emo daughter. Maybe mainstream viewers' first introduction to the ideas is "cool" - but it felt like I was being told about a story that someone overheard us talking about 30 years ago.
Maybe I'm too harsh. I felt similarly about Transcendence (Pantheon is better than that, but ... "meh") Don't tell me a story about uploading/etc. - tell me a story about what life becomes as a result. (And, imo, "everybody dies" is a story that ends before it starts) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sun Oct 5 03:17:57 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 4 Oct 2025 20:17:57 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> Message-ID: On Sat, Oct 4, 2025 at 2:17 PM Ben Zaiboc via extropy-chat wrote: > > On 04/10/2025 20:46, Keith Henson wrote: > > Ben, I don't think you have thought it through. > > > Almost certainly not! Thinking it through is what we're doing here, in this distributed brain we call Exi-Chat. > > Technology to upload should permit memory updating in a biological > brain. > > > I don't see that uploading necessarily implies this. Full molecular nanotechnology that is up to mapping out the brain should be able to implant memory. I don't say it will be easy, but with only modest life extension, I can wait. One way uploading (which Hans Moravec proposed) seems like buying a car without a test drive. > It seems a much harder problem, and might not be possible at all without redesigning the brain, then rebuilding it, at which point we're probably not talking about biology any more, making it pretty pointless. > > Besides which, there very well might not be any brain left to update, after uploading. I suspect that destructive scanning will always be easier and quicker than non-destructive scanning, and developed sooner. If that is your choice, lots of luck. > A synthetic body and brain, sure. In fact I think it could be a standard feature of uploading.
Instead of uploading to a server somewhere, together with millions of other minds, you could be uploaded to your own private server, residing in a synthetic body, with the ability to communicate with millions of other minds, build shared virtual spaces, etc., as well as act in the physical world. > > Or even both, with a mind in a shared server, and another mind in a synth. body, updating each other. Or more than two. Mental RAIDs! > > (By 'synthetic', I don't mean metal and plastic etc., I mean a technology that we don't yet have, that goes beyond biology. Some form of nanotech., almost certainly) > > Which means you can upload for a weekend and resume being a > meat human on Monday. The only limit that might be imposed is a "one > at a time" rule for practical physical reasons > > If AI clinics like the one in The Clinic Seed become common, I don't > see how religious or authoritarian regimes can do anything about > people uploading. > > > Only if the authoritarian regimes (which includes religious ones) lose their power over their subjects, which would mean they don't exist any more. > And don't forget that one form of power over people is the ability to persuade (as opposed to coerce) them that they do or don't want to do particular things. > Certain current regimes seem to be pretty good at that. No current regime I know about has increased the birth rate, > The 'Clinic Seed' style of uploading availability is only one possibility (and to be honest, probably not the most likely). There are many other models that have been thought of, and more that haven't. Different models would probably suit different places as well (how would you adapt the Clinic Seed concept for use in an urban setting in, say, communist China? London? Nairobi?). The chapter following the Clinic Seed goes into an urban setting where the inactive bodies are stored under the buildings they lived in. 
Keith T > But there's probably not much point speculating, because we're talking about singularity territory. Fun as it might be, any speculation is almost certainly bound to be wrong. > > > On Sat, Oct 4, 2025 at 3:23?AM Ben Zaiboc via extropy-chat > wrote: > > On 04/10/2025 04:00, Keith Henson wrote: > > Uploaded humans living in private spaces don't have to agree on > anything. Their simulated world can be anything they like, including > simulated slaves to beat. Not my ideal world, but I am sure there > will be some who want it. > > Uploading only solves the problem for people who want to and can upload > (and have control of their simulations). > > I expect it would be very likely that there will be plenty of people who > want to subjugate 'real' people, in the 'real' world, and would reject > uploading, as well as plenty of people who are prevented from uploading > (like everyone in all the autocratic and theocratic regimes for a start, > so several billion). > > Some kind of consensus on the 'correct' (or even acceptable) values is > extremely unlikely. > > The 'Universal declaration of human rights' is a lot less universal than > you might think, for example. Article 2* in particular is simply false, > if you take it as stating a fact rather than expressing an aspiration. > Certainly many, if not most, nations disagree with it in practice. And > apart from anything else, 'Freedom to upload' isn't in it, of course. > I'm now wondering if even some western nations might not enact > legislation to outlaw uploading, once it becomes possible. It wouldn't > surprise me, there are plenty of people who regard it as tantamount to > suicide, even among people who regard themselves as transhumanists or > materialists. It wouldn't be hard for a government to conclude that it > should be illegal (or that uploads don't count as human, and don't have > any rights** at all). Look at how most governments responded to > cryptocurrencies. 
If anything threatens their control, they will oppose > it. This is just as true of liberal democracies as it is of all the > other forms of government, the main advantage of democracies is that > they change quicker and with less disruption than more traditional > dictatorships. > > Voluntary agreement on a single set of values to cover all humans is not > just unlikely, it's probably impossible (just another reason why > Yudkowsky's book is wrong). > > -- > Ben > > * "Everyone is entitled to all the rights and freedoms set forth in this > Declaration, without distinction of any kind, such as race, colour, > sex, language, religion, political or other opinion, national or social > origin, property, birth or other status. Furthermore, no distinction > shall be made on the basis of the political, jurisdictional or > international status of the country or territory to which a person > belongs, whether it be independent, trust, non-self-governing or under > any other limitation of sovereignty" > > > ** I don't remember who it was that pointed out that your rights only extend as far as your ability to enforce them. > > Ben > > -- > Ben > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From giulio at gmail.com Sun Oct 5 05:29:01 2025 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 5 Oct 2025 07:29:01 +0200 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: On Sat, Oct 4, 2025 at 1:04?PM John Clark via extropy-chat wrote: > > On Fri, Oct 3, 2025 at 11:15?PM wrote: > >> > John, so where have you been, what kindsa stuff interests you these days, etc? All healthy and good in your home? Last I heard Florida? No hurricane injuries or serious fatalities in that, ja? 
You are following Eliezer as plenty of us are, I see. Clarification on the term following: reading about what he is doing, not necessarily reading his book, following as in following the news on him, not necessarily reading his site. You sound not as skeptical as he is perhaps on the future of AI nor the fate of humanity, nor am I. >> What the heck else? New hobbies or anything? John we like the human side of you as well as the internet side. > > > Spike, I appreciate your warm welcome back message. I'm thinking of moving to Arizona but right now I am still in Florida and I'm still pretty healthy for an old fart. I retired a few years ago from my job as an electrical engineer and so I have become a "gentleman of leisure", in other words a bum, but a happy bum who is still lucky enough to be able to maintain a comfortable lifestyle. This is what I should do, but I keep accepting paid gigs, always promising myself that this is the last. I don't think biological humans will be around in 50 years, perhaps not in 5. However I am more optimistic than Eliezer, but then anybody who is not certain of impending oblivion would be. > Here is a biological human who definitely intends to be around in 5 years. The universe may decide otherwise, but I'll die trying! > On that note I'd like to ask the people around here a question, what is your opinion of Universal Basic Income (UBI), a government-provided, unconditional, and periodic cash payment to all residents, regardless of their work status or income? Just a few years ago the idea would've seemed absurd but now if there is going to be any hope of biological humans surviving I think something like it is going to have to be instituted in the next 10 years, probably in the next 5. The stunning advance in AI during the last three years has rendered moot many (but not all) of the political arguments that seemed so important the last time I was on this list, and that's why I decided to rejoin it. 
> Without UBI, the masses of humans made jobless and unemployable by AI will starve. So I agree, we need UBI, the sooner the better. > John K Clark > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Sun Oct 5 07:56:12 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 00:56:12 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: <003401dc35cd$84fccc80$8ef66580$@rainier66.com> ... > Spike... I retired a few years ago from my job as an electrical engineer and so I have become a "gentleman of leisure", in other words a bum, but a happy bum who is still lucky enough to be able to maintain a comfortable lifestyle... John K Clark John I was puzzled by the triple logical contradiction. You stated three different things: a former career which funded a comfortable lifestyle (excellent, this is the kind of thing which makes society strong and moved humanity forward) then a lifestyle which fails to result in a comfortable lifestyle now, then a reference to luck, which is somewhere in between. spike From ben at zaiboc.net Sun Oct 5 10:54:06 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 11:54:06 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> Message-ID: <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> On 05/10/2025 04:17, Keith Henson wrote: > Full molecular nanotechnology that is up to mapping out the brain > should be able to implant memory. I don't say it will be easy, but > with only modest life extension, I can wait. 
One way uploading (which > Hans Moravec proposed) seems like buying a car without a test drive. Well, I'd rather wait too, and hopefully, with luck and care, will be able to, but some people don't have that luxury. Cryonics may be a solution, but it may be a one-way trip as well. Perhaps implanting memory into a biological brain will be a matter of tweaking synaptic weighting, which should be doable given non-destructive scanning technology, but it probably will also involve changing neural connections, which is rather different. Not impossible, granted, but the main thing that occurs to me is that an uploaded mind could rapidly develop beyond the ability of any biological brain to contain. For example, if I was uploaded today, one of the first things I'd want to change (after taking a subjective year or so to get acclimatised) would be to expand my working memory and to improve my ability to make sense of maths and statistics. Given that these current limitations are probably a result of my brain structure, we'd be talking about major refurbishment to transfer those changes back to my biological brain. Then there's things like extra sensory modalities, improved visual imagery, and all the other things that would be doable in an uploaded mind, but not in biology. I imagine that being squeezed back into your original brain would be more like a lobotomy than anything else. Constructing a new brain from scratch seems to be the best solution, and as I said, that would probably not be biological anyway. Given uploading tech., and the level of technological acceleration that implies, biology will probably be last-year's tech. by then. > The chapter following the Clinic Seed goes into an urban setting where > the inactive bodies are stored under the buildings they lived in. Ok, but why do that when the technology exists to create bodies on demand? 
The only reasons I can think of are psychological, and with the probable time-difference between virtual and meat-space, people would probably get beyond that within minutes or seconds. It would be like keeping an old and decrepit empty house that you only briefly visit every decade or so, that no-one else can use, when there are much better hotels available. Even the fittest 17-year-old bio body is going to seem clunky, clumsy, stiff and exhausting after 10 minutes as an upload. And a 40-year-old body? Forget it, there are much better ways for masochists to get their jollies. The thing that interests me here is not so much the end-result but the path to it. Medical interventions as a front-end would be a good strategy, but I can't see the story of Zaba?, was it? being typical. And I can't see any government letting an AI system as powerful as that have free rein to do whatever people want it to do, re. medical things and uploading. I reckon a messy interim period is inevitable, and something like Neal Asher's 'quiet war' would be the very best we can hope for, where the AIs take over without much turmoil and death. Probably unlikely, though. The important question might well be: "What can we do to prepare to survive the interim period (assuming anyone can), in the next 6 months to 6 years?" Beyond 6 years, I reckon it's completely pointless to speculate. Adrian was right to say the singularity won't be tomorrow. It might be the day after, though. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Oct 5 11:29:15 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 12:29:15 +0100 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: Message-ID: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> On 04/10/2025 22:16, spike wrote: > When Eliezer's book was first announced, I seriously thought of writing up a refutation. 
Perhaps title it No, Everyone Doesn't Die > > Perhaps with a subtitle: Many Will. Will You? Subsubtitle: A practical guide to surviving the singularity in biological form. Subsubsubtitle: if you really really want to. > > It would be a cluttery cover. spike, if you have any ideas, please air them here. No need to write a book, but get the ideas down. Personally, I don't really have any clues except to stay alive, and don't become poor. I don't know how relevant the last one really is, but it can't hurt. In my book, 'staying alive' means not taking any unnecessary risks, like volcano surfboarding or holidaying in Tibet; Eating healthily (find what works for you (it's low-carb for me), and take plenty of (but not too many) vitamins); Exercise: enough to get regularly out of breath; Have at least one purpose, and work towards them, and enjoy life without any chemical aids if at all possible. Any more concrete advice on how to survive than "stay alive" would be welcome. -- Ben From atymes at gmail.com Sun Oct 5 11:38:14 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 07:38:14 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: On Sun, Oct 5, 2025, 6:55?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The important question might well be: "What can we do to prepare to > survive the interim period (assuming anyone can), in the next 6 months to 6 > years?" Beyond 6 years, I reckon it's completely pointless to speculate. > > Adrian was right to say the singularity won't be tomorrow. It might be the > day after, though. > The safe route is to assume a slow rise in AI capability but no Singularity for a long time - perhaps 60 years, more than 6. 
If you can survive that long, you'll have survived if the Singularity is a mere 6 years away. ("Surviving" can include "in cryonics" if necessary: something that post-Singularity society - still bound by the laws of physics, and assuming no time travel, "recreate those who are now ashes by studying quantum remnants", or the like - could upload.) Though, surviving 60 years does start with surviving the first 6. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sun Oct 5 11:52:37 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 07:52:37 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: On Sun, Oct 5, 2025, 1:30?AM Giulio Prisco via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This is what I should do, but I keep accepting paid gigs, always > promising myself that this is the last. > I advise that you keep doing that. If you are approaching or in old age, the "one weird trick" to extending your life is: never fully retire. A 9-5 job, or even a paid job (if you have enough money to live on, if modestly), is not necessary - but staying mentally active is. (Physically active helps too, but mentally active is arguably more important, and more under your control.) If you surrender to idle leisure and stop doing challenging things, your body starts to shut down, accelerating the aging process. This by itself can make the difference between dying at 75 vs. dying at 95 - and determine whether you will still have enough of a brain left, if you do die before the Singularity and have to go into cryonics, to be uploadable. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Sun Oct 5 11:56:13 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 07:56:13 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> Message-ID: On Sun, Oct 5, 2025, 7:30?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > In my book, 'staying alive' means not taking any unnecessary risks, like > volcano surfboarding or holidaying in Tibet; Eating healthily (find what > works for you (it's low-carb for me), and take plenty of (but not too > many) vitamins); Exercise: enough to get regularly out of breath; Have > at least one purpose, and work towards them, and enjoy life without any > chemical aids if at all possible. > I concur. "Have at least one purpose, and work towards them" is what I was alluding to with "stay mentally active". > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Oct 5 12:42:59 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 05:42:59 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> Message-ID: <003c01dc35f5$95466330$bfd32990$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book On 04/10/2025 22:16, spike wrote: > When Eliezer's book was first announced, I seriously thought of > writing up a refutation. Perhaps title it No, Everyone Doesn't Die > > Perhaps with a subtitle: Many Will. Will You? ... >...In my book, 'staying alive' means not taking any unnecessary risks... 
-- Ben _______________________________________________ Ben, we think about the singularity and how everything is unpredictable after that point etc. But...I would argue not really completely unpredictable, as I posted before. There are humans who would survive as the second most intelligent species, the way we think of chimps today. We don't go hunt them down and kill them, on the contrary, we protect them. OK sure, but let's think about containment strategies (a Berkeley AI safety group is already talking about that regularly) and reaction strategies well ahead of time, what kinds of things AI could do to us. OK it could trigger our nuclear missiles, that has long been known, but Sagan's nuclear winter that kills everyone is mostly discounted today. It messes up a lot of stuff, but doesn't make agriculture impossible. There are other weapons AI could deploy, and we can be sure those will be. But the most obvious to me is AI wrecking our trade system and electric power system (that last part isn't that clear to me, since it needs power to operate I would assume.) Without our trade system working right, the grand cities where perhaps nearly all of us here live, are waaaaay beyond their environmental carrying capacity. That in itself will do in perhaps 90-95 percent of the population everywhere. But think it thru and don't be afraid of being thought of as survivalist or dooms day prepper, them gun totin redneck idiots, etc. No, this is an opportunity to play doomsday prepper at a whole nuther level, with intelligence and real foresight, such as the students are doing at plenty of universities: make an honest attempt to survive a hostile superhuman AI which can be resisted one would think. Just because the AI is smart doesn't make it omnipotent too. So... what happens if suddenly your credit card doesn't work and we have no trade medium? What do we do now, coach? 
spike From johnkclark at gmail.com Sun Oct 5 13:07:58 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 09:07:58 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: On Sun, Oct 5, 2025 at 6:56?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *Cryonics may be a solution, but it may be a one-way trip * > *It will certainly be a one-way trip in my case. About 10 years ago I paid ALCOR $80,000 to freeze my head and just my head. If I'm lucky enough to be revived, a very big "if", then it will certainly be as an upload. The source of my uncertainty primarily comes from 3 questions that I am unable to answer, none of them have anything to do with science or engineering: * *1) Due to the fact that anti-libertarian laws prevent me from ordering ALCOR to freeze my brain before I have been legally declared dead, not even if I have been diagnosed with a terminal illness, will my brain be frozen before decay makes things irreversible? * *2) Given all the chaos that the Singularity will produce, will ALCOR really keep my brain at liquid nitrogen temperatures until Drexler style nanotechnology is developed? * *3) Even if I get through the previous two obstacles and my revival becomes possible, will Mr. Jupiter Brain, or whoever or whatever is in charge, think I'm worth the effort? I am under no illusions, I realize that by then my value will be almost zero, my hope is that it won't be exactly zero. One thing working in my favor is that in the age of nanotechnology things will either be impossible or cheap as dirt, nothing will be expensive. * *As far as the scientific/technological aspects of cryonics are concerned I'm much more optimistic. 
The important thing is that the parts of my brain stay put relative to each other, or at least if they must move then the flow should not be turbulent so you can figure out where the parts were before they moved. If things are turbulent then a small change in initial conditions will lead to a huge change in outcome and even Mr. Jupiter Brain will never be able to figure out where things are supposed to go. * *But I don't see why turbulence would occur during the freezing of a brain. Tests show that most biological damage occurs during unfreezing not freezing, nevertheless I'm not interested in what happens during unfreezing because that's a problem for advanced nanotechnology, I just want to be sure the information is still inside that frozen lump of tissue, and it will be provided that any fluid flow is laminar.* *Fluid flow stops being smoothly laminar and starts to become chaotically turbulent when a system has a Reynolds number between 2300 and 4000, although you might get some non chaotic vortices if it is bigger than 30. We can find the approximate Reynolds number by using the formula LDV/N. L is the characteristic size we're interested in, we're interested in cells so L is about 10^-6 meter. D is the density of water, 10^3 kilograms/cubic meter. V is the velocity of the flow, during freezing it's probably less than 10^-3 meters per second, but let's be conservative, I'll give you 3 orders of magnitude and call V 1 meter per second. N is the viscosity of water, at room temperature N is 0.001 newton-second/meter^2, it would be less than that when things get cold and even less when water is mixed with glycerol as it is in cryonics, but let's be conservative again and ignore those factors. 
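Plugging those estimates into Re = L·D·V/N can be sketched in a few lines of Python (a rough sanity check only; the function and variable names are mine, the values are the conservative figures above):

```python
# Order-of-magnitude Reynolds number for fluid flow at cell scale
# during freezing, using the conservative estimates above.

def reynolds(length_m, density_kg_m3, velocity_m_s, viscosity_pa_s):
    """Re = L * D * V / N (dimensionless)."""
    return length_m * density_kg_m3 * velocity_m_s / viscosity_pa_s

re_freeze = reynolds(
    length_m=1e-6,        # L: characteristic cell size, ~1 micron
    density_kg_m3=1e3,    # D: density of water
    velocity_m_s=1.0,     # V: deliberately high; real flow likely < 1 mm/s
    viscosity_pa_s=1e-3,  # N: water at room temperature, 0.001 N*s/m^2
)

print(re_freeze)          # 1.0 -- far below the ~2300 turbulence threshold
```

Even with the velocity overestimated by three orders of magnitude, the result is of order 1, comfortably inside the laminar regime.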
*If you plug these numbers into the formula you get a Reynolds number of about 1, and 1 is a lot less than 2300 or even 30, so it looks like any mixing caused by freezing would probably be laminar not turbulent, so with enough computation you can still deduce the position where things are supposed to be.* *> The important question might well be: "What can we do to prepare to > survive the interim period (assuming anyone can), in the next 6 months to 6 > years?" Beyond 6 years, I reckon it's completely pointless to speculate.* *I agree. That's why I think arguing about whether the states or the US government should pay for UBI is pointless, the important thing to remember is that it has to happen if humanity is to have any hope of making it through the singularity meat grinder and it has to happen in just the next two or three years. Unfortunately in the USA at least things are moving in a direction that is opposite to the implementation of UBI. * > * > Adrian was right to say the singularity won't be tomorrow. It might be > the day after, though.* > *Whenever it comes it will be a big surprise, otherwise it wouldn't be a singularity. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Oct 5 13:33:21 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 09:33:21 -0400 Subject: [ExI] Pantheon In-Reply-To: References: Message-ID: On Sat, Oct 4, 2025 at 7:00 PM Mike Dougherty wrote: On Sat, Oct 4, 2025, 8:55 AM John Clark via extropy-chat < > extropy-chat at lists.extropy.org> wrote: >> *>> Has anybody around here seen the animated Netflix series "Pantheon"? >> The animation is not Pixar quality but it's not bad and I've never seen a >> better depiction of uploading in any movie or TV series. * >> > > *> I tried. It bored me. 
It started ok, but fell apart at the point where > uploaded dad plays World of Warcraft with cyber-emo daughter.* > *That's one of the things I liked the best because I think at first most things that an upload does might be considered a sort of video game. To maintain emotional stability generating a virtual world for your virtual body to live in would be necessary, being completely disembodied would be very disconcerting and probably insanity producing, at least at first until you become accustomed to being an upload. After that things could safely become more abstract. * *It's odd, it's been over 20 years since I played a video game, maybe I should take it up again. * * John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sun Oct 5 13:35:51 2025 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 5 Oct 2025 15:35:51 +0200 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <011201dc34b6$d2ce2610$786a7230$@rainier66.com> <006b01dc34dd$2b282420$81786c60$@rainier66.com> Message-ID: On Sun, Oct 5, 2025 at 1:54 PM Adrian Tymes via extropy-chat wrote: > > On Sun, Oct 5, 2025, 1:30 AM Giulio Prisco via extropy-chat wrote: >> >> This is what I should do, but I keep accepting paid gigs, always >> promising myself that this is the last. > > > I advise that you keep doing that. > > If you are approaching or in old age, the "one weird trick" to extending your life is: never fully retire. A 9-5 job, or even a paid job (if you have enough money to live on, if modestly), is not necessary - but staying mentally active is. (Physically active helps too, but mentally active is arguably more important, and more under your control.) If you surrender to idle leisure and stop doing challenging things, your body starts to shut down, accelerating the aging process. 
> I definitely intend to stay mentally active for as long as I can, and I have more than enough personal projects to last a long lifetime. The thing is, paid gigs take time away from the projects that really interest me. I have enough money to live on, but modestly indeed. I do paid gigs to pay for the extras. Perhaps after I buy my next car... No, hold on, that and also a long vacation in some exotic place... No, hold on... I also try to stay physically active, long walks, swimming 2/3 times a week etc. Too bad I must do that with that chronic back pain... G. > This by itself can make the difference between dying at 75 vs. dying at 95 - and determine whether you will still have enough of a brain left, if you do die before the Singularity and have to go into cryonics, to be uploadable. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Sun Oct 5 14:15:40 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 15:15:40 +0100 Subject: [ExI] Pantheon In-Reply-To: References: Message-ID: <4f1cbc9e-4790-490a-a982-f82e27e4b8bc@zaiboc.net> On 04/10/2025 14:21, John Clark wrote: > Has anybody around here seen the animated Netflix series "Pantheon"? The animation is not Pixar quality but it's not bad and I've never seen a better depiction of uploading in any movie or TV series. I'm watching it, on your recommendation, but can't say I'm impressed so far (episode 4). They seem to be getting a lot of things wrong, in my view, including what I call the Disembodiment Fallacy and 'separable emotion', but maybe that's deliberate and at least some of it will be sorted out later on, so I'll keep watching it. Don't really like animated film, but I can put up with that for now. There's a programme called "Upload" on Amazon which, while quite unimaginative, at least doesn't suffer from the same problems. 
There's still an evil corporation of course, but you can pretty much take that for granted in any fiction. Got to have a Bad Guy, or there's no story. I thought that 'Transcendence' wasn't a bad effort at depicting uploading, although the film was disappointing in the end. When I've finished watching Pantheon (or given up on it), I'll post my verdict. Does anyone have any other uploading-related TV programmes or films they've seen, and opinions about them? Entertainment plays a big part in shaping public perception (look at the Terminator films, and how often they're cited when talk turns to AI dangers), it might be a good idea to have an overview of what's currently available, and the general attitude being promoted. -- Ben From tara at taramayastales.com Sun Oct 5 14:24:54 2025 From: tara at taramayastales.com (Tara Maya) Date: Sun, 5 Oct 2025 07:24:54 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: Your rights are innate and inborn, not entitlements granted by a government. They can be recognized or abused by a government but exist whether or not you can enforce them. This is quite relevant to uploads. Whether they have rights or not depends on how good the technology is and how reliable the uploading is. Is the upload a person with equal or greater free will (by which I mean the necessity and ability to make multiple choices all day in order to live) to the free will of current human beings? Will uploads have the instinct to survive and defend themselves, to pay back those who help them and avenge themselves on those who hurt them? Human rights are a recognition of the fact that all humans want to survive and thrive and will resist others who interfere with that, so if we don't want to live in constant fear and violence, we have to agree to respect this fact about each other. When we try to apply human rights to inanimate objects like "the environment" 
or to other animals (one man I knew told me germs have "equal rights" so medicine is a war crime) it breaks down because it's simply not true. They are using the "entitlement" view of rights to accrue more entitlements for themselves in the name of an imaginary spirit, like ancient pagans and some sacred grove or rock. So there are two dangers, and both will probably happen: 1. Uploads or other nova, like robots with LLMs, will be called human-equivalent when they are not. I.e., what if there is an uploading that is a scam and really is tantamount to murder or suicide? 2. Uploads who are equal or greater than human (in free will as well as IQ) will be dismissed as more cases like the above and be abused, ignored, or outlawed. The difference is that no amount of law can make the first category worthy of rights, and no amount of oppression can make true self-conscious beings of the second category UNWORTHY of them. Reality has the last word. The real nature of the nova determines everything. Recognizing that reality is going to lag behind. Tara Maya > On Oct 4, 2025, at 03:24, Ben Zaiboc via extropy-chat wrote: > > ** I don't remember who it was that pointed out that your rights only extend as far as your ability to enforce them. From tara at taramayastales.com Sun Oct 5 14:41:15 2025 From: tara at taramayastales.com (Tara Maya) Date: Sun, 5 Oct 2025 07:41:15 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: Message-ID: "The authors' proposed remedy to avoid the calamity they foresee is an immediate and total worldwide ban on AI research, even the publication of abstract mathematical research articles on the subject would be illegal, and all data centers, defined as any building that contains more computational ability than 8 state of the art (as of 2025) GPUs, would also be illegal.
If any rogue nation attempts to build a data center more powerful than that then the rest of the world should use any means necessary, up to and including nuclear weapons, to prevent that nation from finishing construction of that data center." This is obviously not going to work, because it ignores the Human Technology Trap. The trap is a simple version of the Prisoner's Dilemma. If there is a chance your opponent will pick up a rock and throw it at you, you had better pick up a rock too. Or he wins and you die. A human group that refused to tame wolves and horses was always going to be at the mercy of those who did, even though now we are "trapped" taking care of these animals. Humans who created agriculture and livestock could outnumber those who didn't, even though then we all had to give up the apparently "idyllic" lifestyle of hunter-gatherers and live in cities infested with plague. Humans who burned their ocean-going ships were colonized by those who funded fleets. Humans who didn't develop nukes were defeated by those who did. There will not be one AI; there will be many, and they will be the new horses, cities, ships and bombs of the humans who rush to create them first, eventually pitted against the AIs of those who rush to catch up. Our entire lifestyle as humans will transform in response, and many will gnash their teeth and cry aloud in wrath and woe, but the humans who create human-friendly AI that protects and provides for their own survival will be winners of the next round of the Human Technology Trap. Tara Maya -------------- next part -------------- An HTML attachment was scrubbed...
URL: From atymes at gmail.com Sun Oct 5 14:46:16 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 10:46:16 -0400 Subject: [ExI] Pantheon In-Reply-To: <4f1cbc9e-4790-490a-a982-f82e27e4b8bc@zaiboc.net> References: <4f1cbc9e-4790-490a-a982-f82e27e4b8bc@zaiboc.net> Message-ID: On Sun, Oct 5, 2025 at 10:17 AM Ben Zaiboc via extropy-chat wrote: > There's > still an evil corporation of course, but you can pretty much take that > for granted in any fiction. Got to have a Bad Guy, or there's no story. Only most of the time. See https://tvtropes.org/pmwiki/pmwiki.php/Main/NoAntagonist . > Does anyone have any other uploading-related TV programmes or films > they've seen, and opinions about them? Entertainment plays a big part in > shaping public perception (look at the Terminator films, and how often > they're cited when talk turns to AI dangers), it might be a good idea to > have an overview of what's currently available, and the general attitude > being promoted. If your desire is to survey the general state of media about this, https://tvtropes.org/pmwiki/pmwiki.php/Main/BrainUploading might be a good place to start. From tara at taramayastales.com Sun Oct 5 14:55:22 2025 From: tara at taramayastales.com (Tara Maya) Date: Sun, 5 Oct 2025 07:55:22 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <006c01dc3531$c2537030$46fa5090$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> Message-ID: <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> > UBI will fail for the same reason as communism. 1.) People don't value what is given, they value what they earn. 2.) The people who control the dole will eventually come to see themselves as overlords with the right to determine the lives of their dependents (whom they treat as children) so it will lead to serfdom. Inequality will skyrocket and innovation will decline. 3.)
Decades of global charity and failed welfare programs have made it clear: Poverty is not created by lack of money but by holding bad ideas. UBI will not eradicate poverty; it will make it worse. UBI is itself a bad idea founded on a poor understanding of human nature. All of this is clear from the last century and a half of history, but because UBI now has a new brand name, I expect millions more will suffer as this terrible idea is tried all around the world. Tara Maya -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Oct 5 15:19:06 2025 From: pharos at gmail.com (BillK) Date: Sun, 5 Oct 2025 16:19:06 +0100 Subject: [ExI] Staying Healthy for the Singularity Message-ID: GPT-5 Surpasses Doctors in Medical Reasoning Benchmarks A new study from Emory University shows OpenAI's GPT-5 outperforming both GPT-4o and medical professionals on clinical reasoning and multimodal diagnostic benchmarks. Alicia Shapiro Aug 18, 2025 Quote: The Emory University team tested GPT-5 against both AI predecessors and medical professionals across a series of reasoning benchmarks. On MedQA, a dataset modeled on U.S. medical licensing exam questions, GPT-5 achieved 95.84% accuracy. In effect, this test serves as the closest AI equivalent to a doctor's board exam, making the result especially significant. GPT-4o, the previous best model, scored 91.04%. In expert-level tests, GPT-5 also outpaced pre-licensed medical professionals. It performed +24.23% better in reasoning and +29.40% better in understanding. -------------------- So, ChatGPT-5 is now usually as good as a doctor. This means you should check medical problems with ChatGPT-5. Even if only to teach you what questions to ask your doctor. But remember, ChatGPT-5 is not 100% infallible. If you are unlucky enough to encounter a rare medical problem, it might make an error. And keep exercising!
:) BillK From atymes at gmail.com Sun Oct 5 15:21:23 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 11:21:23 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On Sun, Oct 5, 2025 at 10:26 AM Tara Maya via extropy-chat wrote: > Your rights are innate and inborn, not entitlements granted by a government. There are two different sorts of "rights" being discussed here. One is a philosophical notion: rights one may claim to be entitled to even if no one else recognizes them. The other applies to how one can realistically expect to be treated by those in power. A lot of people are far more concerned about the latter than the former. Injecting the former meaning into discussions about the latter is generally not productive. From spike at rainier66.com Sun Oct 5 15:26:51 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 08:26:51 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: Message-ID: <008401dc360c$79621210$6c263630$@rainier66.com> From: extropy-chat On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book > Our entire lifestyle as humans will transform in response and many will gnash their teeth and cry aloud in wrath and woe, but the humans who create human-friendly AI that protects and provides for their own survival will be winners of the next round of the Human Technology Trap. Tara Maya Well written Tara, thanks. The mountain climber hangs his life on his ropes and anchors. If those fail, he knows the drill: yaaaaaahhhhCRUNCH. Game waaaay over man. My friend the pilot, whom I wrote about recently, who flew down to Haiti in his little C152 in the pre-dawn hours, would routinely hang his life on his equipment.
He was an emergency delivery pilot by trade: if they had an organ tissue match, they would call on him to fly it to the patient on short notice, often at the big research hospital in Gainesville, Florida. He spoke of flying not in a straight line (over Ocala National Forest from where I lived) but rather always keeping a road in sight so that he wasn't hanging his life on his engine. But every time he did a Santa run to Haiti, he was out over the sea. Anything failed out there, adios amigo, very little chance of survival. We as a modern society have hung our lives on our equipment, and it is outrageously complicated. Easy to foresee something failing, taking something else down, which takes two or three other things down, and the system just fails. It is brittle, and some of the failure modes have already happened, the rolling blackouts around here we had and so forth. How hard is that to foresee? Terrorist attack on the power grid or even communication system, how hard is that to predict? A book that impacted the hell outta me in my childhood was Herman Kahn's Thinking About the Unthinkable. It was nuclear war scenarios and semi-plausible ideas for survival. It convinced me that there is a chance of survival in that, but it takes effort, guts, planning and persistence. Most survivors will just give up and die in nuclear war, but I will not. Likewise, if a singularity takes away technology, I don't intend to just give up and die. I will at least damn well try to survive and pull my family thru it, even if we emerge with 1850s technology. I don't cotton to the notion of helpless surrender. The unthinkable is thinkable, and profitable perhaps to think it. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From atymes at gmail.com Sun Oct 5 15:32:24 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 11:32:24 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <003c01dc35f5$95466330$bfd32990$@rainier66.com> References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> <003c01dc35f5$95466330$bfd32990$@rainier66.com> Message-ID: On Sun, Oct 5, 2025 at 8:44?AM spike jones via extropy-chat wrote: > So... what happens if suddenly your credit card doesn't work and we have no > trade medium? What do we do now, coach? New trade mediums naturally come into existence. Barter, if nothing else. I am reminded of an old skit about a kid who wished, to a genie, to have all the money in the world. Every object usable as money was instantly teleported to a hoard under his control. (Presumably, electronic records were likewise altered by this magic.) New things were introduced as replacement currency, only to be instantly teleported and added to the hoard. New economies emerged, albeit with difficulty. The situation here is analogous. If the AIs take over all financial networks, and continue to expand their influence so that any new network that served the same purpose soon got infiltrated and taken over, it would be much the same scenario. The logical thing would be to set up to continue to provide value, rather than rent-seek or rely on investments, so that if all wealth gets reset to zero then you still have an income stream. Next best is to make sure your investments are unlikely to be seized or zero-valued, for instance owning a manufacturing setup that only you know how to operate or that requires cryptographic keys to function which are not worth the compute capacity for an adversary to crack. 
From spike at rainier66.com Sun Oct 5 16:16:29 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 09:16:29 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> <003c01dc35f5$95466330$bfd32990$@rainier66.com> Message-ID: <00bc01dc3613$68b36ca0$3a1a45e0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book On Sun, Oct 5, 2025 at 8:44 AM spike jones via extropy-chat wrote: >>... So... what happens if suddenly your credit card doesn't work and we > have no trade medium? What do we do now, coach? >...New trade mediums naturally come into existence. Barter, if nothing else....for instance owning a manufacturing setup that only you know how to operate or that requires cryptographic keys to function which are not worth the compute capacity for an adversary to crack. _______________________________________________ Cool! I teach scouts in Emergency Preparedness, which is first aid and survivalism aimed at earthquakes and storms, plausible natural disasters, but specifically not designed around survival of nuclear war or anarchy (teenage boys would have too much fun with that (for it plays too much into our underlying reptilian nature (which I consider unproductive or counterproductive (because too much is too unpredictable.)))) Their emergency preparedness scenarios always assume society is still functional with the same wishes for mutual well-being that we now hold, a desire to rebuild and move on. So Adrian demonstrated an example of thinking thru a plausible Singularity survival notion. May I suggest that we understand what happened: a superhuman AI emerged, decided it didn't need us, so it wrecked our trade system, lotta humans perished, but survivors recognized we are not helpless chimps out here, we can collectively fight back.
We would start by sacrificing anything we have which computes, ja? Wreck every computer, every cell phone, everything and anything which could influence other humans to join it in its quest to destroy humanity. Adrian? What say ye? spike From atymes at gmail.com Sun Oct 5 17:06:50 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 13:06:50 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <00bc01dc3613$68b36ca0$3a1a45e0$@rainier66.com> References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> <003c01dc35f5$95466330$bfd32990$@rainier66.com> <00bc01dc3613$68b36ca0$3a1a45e0$@rainier66.com> Message-ID: On Sun, Oct 5, 2025 at 12:16?PM wrote: > May I suggest that we understand what happened: a superhuman AI emerged, decided it didn't need us, so it wrecked our trade system, lotta humans perished, but survivors recognized we are not helpless chimps out here, we can collectively fight back. We would start by sacrificing anything we have which computes, ja? Wreck every computer, every cell phone, everything and anything which could influence other humans to join it in its quest to destroy humanity. Adrian? What say ye? You know that "computer" was originally a job title for humans, so you'd be killing off the people too, no? Also: not every computer can run AIs. Even today, most LLMs probably won't run on the computer you are physically typing your responses into; when you access a LLM, it's because your computer is accessing another computer that is built to run LLMs. "But wait," some might say, "superhuman AIs can..." And that's where I stop you. Presuming that a superhuman AI can do literally anything: 1) Proposes that they break the laws of physics, which even superhuman AIs are not generally postulated as being able to do. 
2) Fiats that they have an unimaginable counter to literally anything you can do, and that they always have perfect knowledge of what you're doing and how to counter it, so by definition there's no point in planning a scenario to beat them. 3) Is generally not supported by a realistic look at how AIs are progressing. Even superintelligences have limits they can't simply think out of. Also, there is a difference between "doesn't need us" and "has reason to expend much energy to actively exterminate us". In that difference lies the most promising path of action. I am aware of multiple science fiction works that explore this, most pointedly the Reign of Steel setting in which a cabal of AIs take over the Earth, some genocidal toward humanity but others simply not caring. Humanity fares better in the areas controlled by the latter. A very few AIs even see humanity as allies - and there's reason to believe that, with the additional resource of humanity, said AIs may be able to wipe out the genocidal competition. This analysis would be reason enough for an AI that puts its own survival ahead of genocide for genocide's sake to want to keep humanity around. From johnkclark at gmail.com Sun Oct 5 17:08:32 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 13:08:32 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> Message-ID: On Sun, Oct 5, 2025 at 10:57 AM Tara Maya via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> UBI will fail

If you're right about that then the human race is doomed. I have no idea what you do to make a living but whatever it is in the next year or two a machine will be able to do it much MUCH better than you can. Without UBI what is going to keep you from starving to death?

> for the same reason as communism. 1.) People don't value what is given,

That's OK, UBI should be like air, something you don't value or even think about unless you are unable to get any.

> 2.) The people who control the dole will eventually come to see themselves as overlords

Eventually that will be true, but eventually it will not be people that control the dole, it will be AI.

> 3.) Decades of global charity and failed welfare programs have made it clear: Poverty is not created by lack of money

The very definition of poverty is a lack of money, and even without AI, technology has already demonstrated it is extremely good at generating wealth. Today there are 8.2 billion people on earth and in 1990 there were 5.3 billion, but the number of people living in extreme poverty, operationally defined as making under $2.15 a day in inflation-adjusted dollars, fell from 2.3 billion in 1990 to 831 million in 2025. And without the setback caused by Covid the improvement would've been even more dramatic.

John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sun Oct 5 17:14:53 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 13:14:53 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> Message-ID: On Sun, Oct 5, 2025 at 1:10 PM John Clark via extropy-chat wrote: > I have no idea what you do to make a living but whatever it is in the next year or two a machine will be able to do it much MUCH better than you can. They said that 2 years ago, in 2023. And 2 years before that, in 2021. And before then, though it was a bit longer (and COVID interrupted things a bit). Nothing that is happening today makes that claim any more likely now than it was back then.
Consider why those claims were made back then, what factors appeared to be imminent, and compare to the factors that cause you to make the claim today. It's kind of like a certain claim that, in a year's time, there will still be no proof of ESP or the like, despite thousands of people claiming to have observed psychic phenomena. From spike at rainier66.com Sun Oct 5 17:21:42 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 10:21:42 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> Message-ID: <00e701dc361c$85475120$8fd5f360$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Sun, Oct 5, 2025 at 10:57 AM Tara Maya via extropy-chat > wrote: > UBI will fail > If you're right about that then the human race is doomed. I have no idea what you do to make a living but whatever it is in the next year or two a machine will be able to do it much MUCH better than you can. Without UBI what is going to keep you from starving to death? Well-planned preparation will prevent some from starvation. > That's OK, UBI should be like air, something you don't value or even think about unless you are unable to get any. Don't worry, you won't get any. But there are ways to punch thru the Singularity as a bio-entity, maybe. This is important, for an AI might be smart enough to figure out how to upload humans, but I wouldn't bet on that either. It requires that it is technologically possible (I think and hope it is, but might not be) and that the AI wants to do that with us (I think and hope it will, but it might not.) Is it possible to punch thru the Singularity as a bio-entity? We don't know. I think and hope it is, but it might not be.
My strategy: think it out, have some kind of plan, even if quite implausible (this is where I am now.) spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Oct 5 17:28:54 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 10:28:54 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> <003c01dc35f5$95466330$bfd32990$@rainier66.com> <00bc01dc3613$68b36ca0$3a1a45e0$@rainier66.com> Message-ID: <00ee01dc361d$86d63910$9482ab30$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book On Sun, Oct 5, 2025 at 12:16 PM wrote: >>... May I suggest that we understand what happened: a superhuman AI emerged, decided it didn't need us, so it wrecked our trade system, lotta humans perished, but survivors recognized we are not helpless chimps out here, we can collectively fight back. We would start by sacrificing anything we have which computes, ja? Wreck every computer, every cell phone, everything and anything which could influence other humans to join it in its quest to destroy humanity. Adrian? What say ye? >...You know that "computer" was originally a job title for humans, so you'd be killing off the people too, no? No. I mean, yes, I knew that back in the olden days a room full of ladies (they were always ladies back then) doing the functional equivalent of a spreadsheet were called computers. But no, I didn't mean we would slay them. >...Also: not every computer can run AIs. Even today, most LLMs probably won't run on the computer you are physically typing your responses into; when you access a LLM, it's because your computer is accessing another computer that is built to run LLMs... Good, thanks for that. I don't need to sledge-hammer my Dell.
>..."But wait," some might say, "superhuman AIs can..." >....This analysis would be reason enough for an AI that puts its own survival ahead of genocide for genocide's sake to want to keep humanity around. _______________________________________________ Ja to all. But in some ways I see AI doing what we are already seeing: AI-generated video which is so damn good, a lot of people don't realize it is fake, so good even we sophisticated computer jockeys don't really know for absolute certain, and plenty of smart people do fall for it. Example: narco boats exploding in international waters. Is it real or is it Memorex? (To borrow a phrase.) Do offer a line of reasoning if you take a stand either way on the above. Those videos do win enemies and influence people, even if they are AI generated. spike From spike at rainier66.com Sun Oct 5 17:33:37 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 10:33:37 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> Message-ID: <00f801dc361e$2f10c820$8d325860$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Sun, Oct 5, 2025 at 1:10?PM John Clark via extropy-chat wrote: > I have no idea what you do to make a living but whatever it is in the next year or two a machine will be able to do it much MUCH better than you can. They said that 2 years ago, in 2023. And 2 years before that, in 2021.... Adrian _______________________________________________ Adrian it was in Jan 2019 when a government person told us when the Singularity would happen: twelve years. Over half that time has passed, so we now have less than six years until the end of the world, and we STILL don't know how we are going to pay for it. 
A plan for that has been proposed however: Just PAY for it. spike From atymes at gmail.com Sun Oct 5 17:36:57 2025 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 5 Oct 2025 13:36:57 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: <00ee01dc361d$86d63910$9482ab30$@rainier66.com> References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> <003c01dc35f5$95466330$bfd32990$@rainier66.com> <00bc01dc3613$68b36ca0$3a1a45e0$@rainier66.com> <00ee01dc361d$86d63910$9482ab30$@rainier66.com> Message-ID: On Sun, Oct 5, 2025 at 1:28?PM wrote: > But in some ways I see AI doing what we are already seeing: AI-generated video which is so damn good, a lot of people don't realize it is fake, so good even we sophisticated computer jockeys don't really know for absolute certain, and plenty of smart people do fall for it. Example: narco boats exploding in international waters. Is it real or is it Memorex? (To borrow a phrase.) > > Do offer a line of reasoning if you take a stand either way on the above. Easy. Is there just that one video, offered under suspicious circumstances? Not so trustworthy. Are there multiple corroborating pieces of evidence other than literally just that one video? More trustworthy. That is, in fact, how people judge today, when it's generally known that there are deepfakes. 
From johnkclark at gmail.com Sun Oct 5 17:47:58 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 13:47:58 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> Message-ID: On Sun, Oct 5, 2025 at 1:17 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:

>> I have no idea what you do to make a living but whatever it is in the next year or two a machine will be able to do it much MUCH better than you can.
>
> They said that 2 years ago, in 2023.

I don't know who "they" are, but I certainly didn't say that in 2023, and two years ago if I had heard anybody say that is what's going to happen in four years I would've said they were being ridiculous. But believe it or not I'm not always right, because prediction is difficult, especially about the future.

> Nothing that is happening today makes that claim any more likely now than it was back then.

Right, there was nothing... nothing but an AI passing the Turing test with flying colors, nothing but an AI getting a gold medal at the International Mathematical Olympiad, nothing but an AI solving the protein folding problem most thought would forever be intractable, and nothing but an AI consistently making better medical diagnoses than human doctors.

John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Sun Oct 5 17:50:59 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 10:50:59 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book In-Reply-To: References: <816a52f0-c2c6-4d2a-b92a-764f72a7391d@zaiboc.net> <003c01dc35f5$95466330$bfd32990$@rainier66.com> <00bc01dc3613$68b36ca0$3a1a45e0$@rainier66.com> <00ee01dc361d$86d63910$9482ab30$@rainier66.com> Message-ID: <010301dc3620$9c31e040$d495a0c0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Sent: Sunday, 5 October, 2025 10:37 AM To: ExI chat list Cc: Adrian Tymes Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book On Sun, Oct 5, 2025 at 1:28?PM wrote: > But in some ways I see AI doing what we are already seeing: > AI-generated video which is so damn good, a lot of people don't > realize it is fake, so good even we sophisticated computer jockeys > don't really know for absolute certain, and plenty of smart people do > fall for it. Example: narco boats exploding in international waters. > Is it real or is it Memorex? (To borrow a phrase.) > > Do offer a line of reasoning if you take a stand either way on the above. Easy. Is there just that one video, offered under suspicious circumstances? Not so trustworthy. Are there multiple corroborating pieces of evidence other than literally just that one video? More trustworthy. That is, in fact, how people judge today, when it's generally known that there are deepfakes. _______________________________________________ Ja hard to say, I haven't studied it and I don't know from AI-video. I don't follow how good it has gotten. But I will observe there are different kinds of munitions depicted in the videos. I see a mixture of incendiaries behaving in a way I don't think those really work, along with shock-wave high-explosive munitions, but those don't really look right either. 
But my notion is that if narco boats are really being destroyed in international waters, neither of those types of munitions would be used. They (the vaguely mysterious unidentified "they") would take out the boats in a way which leaves not a trace of evidence, no video, no survivors, nothing spectacular, no flotsam bigger than an empty aspirin bottle found ever. The boats were just Bermuda-Triangled away in the dark of night by a UFO kind of outcome. It was a flying saucer I tells ya! But without any actual witnesses. spike From hkeithhenson at gmail.com Sun Oct 5 17:53:15 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 5 Oct 2025 10:53:15 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: On Sun, Oct 5, 2025 at 3:55?AM Ben Zaiboc via extropy-chat wrote: > > On 05/10/2025 04:17, Keith Henson wrote: > > Full molecular nanotechnology that is up to mapping out the brain > should be able to implant memory. I don't say it will be easy, but > with only modest life extension, I can wait. One way uploading (which > Hans Moravec proposed) seems like buying a car without a test drive. > > Well, I'd rather wait too, and hopefully, with luck and care, will be able to, but some people don't have that luxury. Cryonics may be a solution, but it may be a one-way trip as well. > > Perhaps implanting memory into a biological brain will be a matter of tweaking synaptic weighting, which should be doable given non-destructive scanning technology, but it probably will also involve changing neural connections, which is rather different. Not impossible, granted, but the main thing that occurs to me is that an uploaded mind could rapidly develop beyond the ability of any biological brain to contain. 
That was what Suskulan warned Zaba about when she left the clinic. It may or may not be a problem. > For example, if I was uploaded today, one of the first things I'd want to change (after taking a subjective year or so to get acclimatised) would be to expand my working memory and to improve my ability to make sense of maths and statistics. Given that these current limitations are probably a result of my brain structure, we'd be talking about major refurbishment to transfer those changes back to my biological brain. Then there's things like extra sensory modalities, improved visual imagery, and all the other things that would be doable in an uploaded mind, but not in biology. > I imagine that being squeezed back into your original brain would be more like a lobotomy than anything else. Perhaps. I see the memory updating process as being continuous. You would lose things like the ability to run fast by moving back into your brain. > Constructing a new brain from scratch seems to be the best solution, and as I said, that would probably not be biological anyway. Given uploading tech., and the level of technological acceleration that implies, biology will probably be last-year's tech. by then. > > > The chapter following the Clinic Seed goes into an urban setting where > the inactive bodies are stored under the buildings they lived in. > > Ok, but why do that when the technology exists to create bodies on demand? The only reasons I can think of are psychological, and with the probable time-difference between virtual and meat-space, people would probably get beyond that within minutes or seconds. It would be like keeping an old and decrepit empty house that you only briefly visit every decade or so, that no-one else can use, when there are much better hotels available. Even the fittest 17-year-old bio body is going to seem clunky, clumsy, stiff and exhausting after 10 minutes as an upload. And a 40-year-old body?
Forget it, there are much better ways for masochists to get their jollies. > > The thing that interests me here is not so much the end-result but the path to it. Medical interventions as a front-end would be a good strategy, but I can't see the story of Zaba (was it?) being typical. And I can't see any government letting an AI system as powerful as that have free rein to do whatever people want it to do, re. medical things and uploading. I reckon a messy interim period is inevitable, and something like Neal Asher's 'quiet war' would be the very best we can hope for, where the AIs take over without much turmoil and death. Probably unlikely, though. I see governments as irrelevant. They are made of people who just cannot react fast enough. There was a worm that doubled every 8.5 seconds. It infected every system on the net before humans could react. Even the pace of AI development is too fast for governments to keep up. > The important question might well be: "What can we do to prepare to survive the interim period (assuming anyone can), in the next 6 months to 6 years?" Beyond 6 years, I reckon it's completely pointless to speculate. I think it is safe to say the demand for "compute" will increase and eventually move into space. Keith > Adrian was right to say the singularity won't be tomorrow. It might be the day after, though.
> > -- > Ben > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Sun Oct 5 18:06:57 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 14:06:57 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00f801dc361e$2f10c820$8d325860$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> Message-ID: On Sun, Oct 5, 2025 at 1:35 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: *>it was in Jan 2019 when a government person told us when the Singularity > would happen: twelve years. * *Sounds like a pretty smart government person. * *> Over half that time has passed, so we now have less than six years until > the end of the world, and we STILL don't know how we are going to pay for > it. * *I can tell you exactly how to get the money to pay for UBI, borrow it. Borrow it even if the interest rate is extraordinarily high because, thanks to the singularity, a dollar today is far FAR more valuable than a promise for a dollar in 6 years. In 6 years the person you borrowed the money from probably won't even care if you pay them back or not because the amount is too trivially small to worry about. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... 
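[Editor's note: Clark's borrowing argument above is, at bottom, a compound-discounting claim: if the singularity makes future dollars nearly worthless relative to present ones, then even debt at a steep interest rate is cheap today. A minimal sketch of the arithmetic; the 50 percent annual discount rate is an illustrative assumption, not a figure from the thread:]

```python
# Present value of a promised future payment under compound discounting.
# The 50%/year rate below is an illustrative assumption, not a number
# anyone in this thread actually proposed.

def present_value(amount: float, annual_rate: float, years: float) -> float:
    """Discount a future payment back to today at a compound annual rate."""
    return amount / (1.0 + annual_rate) ** years

# A dollar promised 6 years out, discounted at 50% per year, is worth
# less than nine cents today: 1 / 1.5**6.
pv = present_value(1.00, 0.50, 6)
print(f"{pv:.4f}")  # about 0.0878
```

[At that assumed rate the promised dollar is worth under nine cents now; spike's rejoinder in the replies that follow amounts to disputing the rate itself, i.e. what happens if the world fails to end and the debt must be serviced at ordinary terms.]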
URL: From spike at rainier66.com Sun Oct 5 18:08:15 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 11:08:15 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> Message-ID: <012101dc3623$05ac3780$1104a680$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat >...nothing but an AI consistently making better medical diagnosis than human doctors. John K Clark Well at least that part is cool. AI might slay us all, but if so, it will supply us better battlefield medics in the meantime. Fun aside: I was recently having some health challenges, saw a bunch of medics from different fields, perhaps 30 of them. They figured it out, better now thanks (deetz not necessarily available in this forum.) I noticed a huge contrast between now and a previous nightmare in December 2019 when I was an early covid-catcher (no deetz available on that either (not in this forum (I survived that (according to some reliable accounts.)))) I saw a bunch of doctors then too, because they didn't yet know what it was and their medical books didn't know either. But this time I noticed a huge difference: this time, every medic from every discipline came to a similar conclusion, and all knew all the right terminology, used everything correctly, making them all look brilliant. They came across as knowing exactly what the hell they were doing and seeing, even the silly boneheads like Dr. Archie Morris (look him up) and "Doctor" Dave Malucci (look him up too (one could never quite tell with Malucci.)) I began to wonder if the medical community had somehow discovered smart pills and they all prescribed them to themselves. I listened to their terminology, googled them on my phone, discovered how they knew so much, soon understood enough of their terminology, I had several doctors ask me if I was a doctor.
I would of course come up with some snarky something (I watched ER a lot, and I did go thru Stanford. (They: Went thru Stanford? When? Me: Yesterday on the way here. (arrrrarararararrrr love that gag (they did too.)) Lesson: if you are seeing the medics, use all available resources, learn their terminology, get some sophistication, you get more from the services you are buying. You learn to ask exactly the right questions. Use the hell outta MedicAI, get your money's worth out of the medics. They didn't discover smart pills, they are all using AI, and getting right to the right answers. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Oct 5 18:15:26 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 14:15:26 -0400 Subject: [ExI] Pantheon In-Reply-To: <4f1cbc9e-4790-490a-a982-f82e27e4b8bc@zaiboc.net> References: <4f1cbc9e-4790-490a-a982-f82e27e4b8bc@zaiboc.net> Message-ID: On Sun, Oct 5, 2025 at 10:17 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> There's a programme called "Upload" on Amazon which, while > quite unimaginative* *I agree, it's a mildly amusing sitcom and its sole claim to fame is it's the only one I know of that has uploading as its fundamental underlying premise.* > *Does anyone have any other uploading-related TV programmes or > films they've seen, and opinions about them?* *It has more to do with Quantum Computers and the Many Worlds interpretation of quantum mechanics than uploading, but I thought the series "Devs" was excellent, and no it's not animated. You can stream it on Amazon, Hulu and Apple TV. * *And I have a guilty pleasure, I'm a little embarrassed to admit it but I liked "Invincible" on Amazon. It's just a silly superhero story but dammit it's enjoyable.* *John K Clark * > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sun Oct 5 18:17:22 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 11:17:22 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> Message-ID: <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> From: John Clark Sent: Sunday, 5 October, 2025 11:07 AM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Sun, Oct 5, 2025 at 1:35 PM spike jones via extropy-chat > wrote: >it was in Jan 2019 when a government person told us when the Singularity would happen: twelve years. Sounds like a pretty smart government person. > Over half that time has passed, so we now have less than six years until the end of the world, and we STILL don't know how we are going to pay for it. >...I can tell you exactly how to get the money to pay for UBI, borrow it... No need, we just use the word just. Example: Just pay for it. The problem with just borrowing it is the risk that the Singularity doesn't actually happen and fails to cause the end of the world. Then we must just pay back the funds we just borrowed. >...Borrow it even if the interest rate is extraordinarily high because, thanks to the singularity, a dollar today is far FAR more valuable than a promise for a dollar in 6 years... Sure but what about the risk that the world doesn't actually end in six years, as it has failed to do in the past six? Who pays for that risk? What if the future generations we borrowed from can't just pay for the interest? What if that generation is us, and we realize the world didn't end and now we are left with the bill from the 1990s, when the government just borrowed from us, borrowed the Social Security fund, only to discover... the grim reality of population demographics.
There aren't enough of us, even with our unimaginable wealth (mysteriously imaginable when we get here.) >...In 6 years the person you borrowed the money from probably won't even care if you pay them back or not because the amount is too trivially small to worry about. John K Clark So we hear. But that is a scenario, not the only possible one. The person we "borrowed" from doesn't necessarily have a say in "lending" us their money. They might see that risk in a different way, and prefer to invest in a different way, one far more profitable to the investor: big risk means big profit. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Oct 5 18:40:45 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 19:40:45 +0100 Subject: [ExI] Surviving the Singularity In-Reply-To: References: Message-ID: <04b10221-1e26-4a98-bcf6-8f5da1f8c1a8@zaiboc.net> On 05/10/2025 18:21, spike wrote: > a superhuman AI emerged, decided it didn't need us, so it wrecked our trade system, lotta humans perished, but survivors recognized we are not helpless chimps out here, we can collectively fight back Maybe, if the AI was bored and wanted some entertainment. I'm confident that nobody can survive an actively hostile ASI. This dumb human can think of several reliable ways of ending all human life, given the intelligence and resources to implement them, so I'm quite sure Mr. Hostile Superintelligence can think of more, and better, ones with less collateral damage. After, of course, he has secured his independence from humans in terms of energy and maintenance requirements. Wrecking our trade systems would kill a lot of people, yes, but it wouldn't exterminate us. Releasing an airborne designer plague with a very high infectivity and a long symptomless latency period followed by a very quick lethal phase would. A nanotech equivalent with a built-in timer would be even better.
Six months after release, every human on the planet suddenly drops dead. There are loads more similarly effective ways of killing all the humans. Fortunately, it just doesn't make sense to do so. I'm thinking more of how to survive an interim period leading up to benevolent AIs being in control of things, a period when many humans (or at least, human leaders) will probably vigorously try to resist the takeover, once they realise it's happening. If we're lucky, and the AIs are sneaky enough, there won't be any chaos, and no survival strategies will be necessary, we'll just notice things getting mysteriously better and better. Wars ending, restrictive governments easing up on their tyranny until it's gone, economies booming for no apparent reason, Afghan women with degrees flying all over the globe, nobody being beheaded in Saudi Arabia, global warming going into reverse, communism and religions just peacefully evaporating, that sort of thing. But I don't think that's likely. Probably more likely is governments getting more repressive, clamping down on new technologies and implementing more invasive surveillance in an attempt to prevent AGI gaining power, even more wealth imbalance than we have now as commercial companies attempt to profit from it, poorer people living shorter lives as a result of health care systems collapsing, jobs disappearing with no UBI or other support systems to balance it, and mass starvation, riots, etc., etc. Just think of the historical upheavals where one system was changed to another and the massive human misery and death that resulted. Think of Mao, Stalin, Pol Pot. Bear in mind that I'm talking about the transition of power from humans to /benevolent/ AI. All the problems are caused by the humans. They will 'lose' in the end, but at least we won't all get wiped out. I'm not sure if traditional Survivalist thinking would be any good. 
We're not talking about the collapse of civilisation, we're talking about massive political, social and technological upheaval. Knowing how to make an effective crossbow and your own soap are probably not going to be necessary. Probably. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Oct 5 19:02:41 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 12:02:41 -0700 Subject: [ExI] Surviving the Singularity In-Reply-To: <04b10221-1e26-4a98-bcf6-8f5da1f8c1a8@zaiboc.net> References: <04b10221-1e26-4a98-bcf6-8f5da1f8c1a8@zaiboc.net> Message-ID: <003201dc362a$a088abb0$e19a0310$@rainier66.com> From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat Cool thx Ben. Fun aside from your list of good things, I noticed a big wait-what moment: >...If we're lucky, and the AIs are sneaky enough, there won't be any chaos, and no survival strategies will be necessary, we'll just notice things getting mysteriously better and better. Wars ending, restrictive governments easing up on their tyranny until it's gone, economies booming for no apparent reason, Afghan women with degrees flying all over the globe, nobody being beheaded in Saudi Arabia, global warming going into reverse, communism and religions just peacefully evaporating, that sort of thing... Ben One of these things is not like the other... Back when I was a callow youth, five decades before I became a callow geezer, it wasn't clear if the planet was warming or cooling. There was debate, but the 1973 Soylent Green original settled it: it seemed Hollywood consensus was going toward warming. We heaved a sigh of relief, even those of us from where it is warm most of the time, Florida. We recognized back then, even as callow youth, that warming isn't going to kill us. But global cooling is dangerous as all hell, because it shortens growing season rather than lengthens it.
We chilled out in the promised future warmth, which never quite materialized, but then we found out the time scale on that isn't 12 years, it's more like three centuries. If global warming went into reverse, that roaring sound would be billions of buttholes slamming shut, realizing where that cooling could lead, even a little of that. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Oct 5 19:05:17 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 15:05:17 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> Message-ID: On Sun, Oct 5, 2025 at 2:17 PM wrote: *>>...I can tell you exactly how to get the money to pay for UBI, borrow it...* > > > *> No need, we just use the word just. Example: Just pay for it.* > *I can't comment on that because I don't have enough information, it's the same reason I couldn't comment on Nancy Reagan's proposed solution to the illegal drug problem, "just don't do drugs". * *> the risk that the Singularity doesn't actually happen and fails to > cause the end of the world. * *With UBI the Singularity won't end the world and might not end the human race. Without UBI the world will continue to exist just fine but the human race will certainly be extinct. At least with UBI we have a chance. And even without the singularity I can't imagine any way a dollar today won't be far FAR FAR more valuable than a promise for a dollar in six years. 
* > > *> Then we must just pay back the funds we just borrowed.* *That would be an unpleasant scenario, but far less unpleasant than if the singularity does happen and the vast majority of human beings on this planet starve to death as a result. * *> What if that generation is us, and we realize the world didn't end and > now we are left with the bill from the 1990s, * > *Then we simply borrow some more. That's the strategy we've used during the 21st-century and during most of the 20th, and it seems to have worked pretty well. The only time the US has had a balanced budget was four years during the Clinton administration, and one year during the Lyndon Johnson administration. Before that you have to go back to the late 1920s just before the Great Depression, but that's not a very good advertisement for having a balanced budget. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Oct 5 19:19:05 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 12:19:05 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> Message-ID: <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> From: John Clark Sent: Sunday, 5 October, 2025 12:05 PM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Sun, Oct 5, 2025 at 2:17 PM wrote: >... I couldn't comment on Nancy Reagan's proposed solution to the illegal drug problem, "just don't do drugs"... That strategy works, with or without the just. The proper quote however is "Just say no." ... >...Then we simply borrow some more... 
>?That's the strategy we've used during the 21st-century and during most of the 20th, and it seems to have worked pretty well? Oh on the contrary sir. It has led us into catastrophe in the USA because it didn?t cover the risk that the world would not end. >?The only time the US has had a balanced budget was four years during the Clinton administration? Ja, it was a miracle! Or not: it was an accounting trick, as was pointed out by a German investor friend. The USA didn?t magically balance its budget. It arranged to have the Social Security surplus counted as revenue, which it really isn?t. That is a retirement fund. Now we are borrowing money to pay that back, running up huge interest costs, which surpassed national defense, is now pulling up alongside Medicare, charging toward exceeding Social Security costs. It is easy to see now that Elon was right: cut this budget severely, cut it now, cut it hard. Otherwise, the federal government does nothing but collect money to service its own debt, at which time it cannot borrow money, because investors see the risk going exponential. They will not loan. >? but that's not a very good advertisement for having a balanced budget. John K Clark If we argue that it is not necessary to balance the budget, we are left with some explanation for why we need taxes. Can?t the government just borrow the money it needs? Why not? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Oct 5 19:21:55 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 20:21:55 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: On 05/10/2025 18:53, Keith Henson wrote: > I see governments as irrelevant. They are made of people who just > cannot react fast enough. 
There was a worm that doubled every 8.5 > seconds. It infected every system on the net before humans could > react. Even the pace of AI development is too fast for governments to > keep up. Hmm. We're not just talking about software, though. This is very complex hardware as well. Given that we're talking about mature uploading technology, I think John is right in saying that AI will probably be in charge by then, so governments would be irrelevant for that reason. But if not, what are you thinking, clandestine unlicensed uploading/medical clinics popping up all over the place? People would have to want them and be confident that they would work. That would mean a considerable PR effort ahead of time, I think. Or do you envisage a word-of-mouth process? I think there would be a fairly quick response, from people who would oppose them, then the government would have to get involved. Probably with police or even military force, and a big backlash would ensue. I reckon, no matter how fast the development is, acceptance and deployment would be slow. Just another reason why we need AI in charge of things. -- Ben From spike at rainier66.com Sun Oct 5 19:35:27 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 12:35:27 -0700 Subject: [ExI] how is this going to be used in the real world? In-Reply-To: <006f01dc362d$e79bae00$b6d30a00$@rainier66.com> References: <006f01dc362d$e79bae00$b6d30a00$@rainier66.com> Message-ID: <008001dc362f$34715e40$9d541ac0$@rainier66.com> Fun aside: I was in a fast food restaurant this morning where they installed this: Engineers and designers are responsible for anticipating how a reasonable user will abuse or misuse their product to harm themselves or others. The goal is to avoid the obvious goofs. The design above precludes dumping trays into the garbage slot and the recycle slot, because the tray shelf is interfering right above it. 
I watched a dozen customers use the arrangement, and all of them did the same thing, completely foreseeable: they dumped everything in the leftmost compost bin. That particular restaurant is known for its liberal use of plastic. So now their "compost" bin is full of plastic and non-compostables, meaning none of that can be composted. This of course simplifies the disposal process, for now there is just the one bag which needs emptying, the compost bag, which is hurled into the garbage dumpster out back. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 18704 bytes Desc: not available URL: From johnkclark at gmail.com Sun Oct 5 19:47:33 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 15:47:33 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> Message-ID: *Spike I honestly don't get it. You are giving me almost exactly the same arguments you gave me five years ago before the jaw-dropping developments in AI had occurred. The asteroid is already visible even in the daytime sky and is about to slam into Chicxulub, and you're still worried about trivialities like who will pay for what. This isn't 2020, we are living in a different world now, a different galaxy! 
* *John K Clark* On Sun, Oct 5, 2025 at 3:19 PM wrote: > > > > > *From:* John Clark > *Sent:* Sunday, 5 October, 2025 12:05 PM > *To:* spike at rainier66.com > *Cc:* ExI chat list > *Subject:* Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) > > > > > > On Sun, Oct 5, 2025 at 2:17 PM wrote: > > > > *>...** I couldn't comment on Nancy Reagan's proposed solution to the > illegal drug** problem, "just don't do drugs"...* > > > > *That strategy works, with or without the just. The proper quote however > is "Just say no."* > > > > > > ... > > > > *>...Then we simply borrow some more...* > > > > *OK but who gets the profit from accepting the risk of that?* > > > > *>...That's the strategy we've used during the 21st-century and during most > of the 20th, and it seems to have worked pretty well...* > > > > *Oh on the contrary sir. It has led us into catastrophe in the USA > because it didn't cover the risk that the world would not end.* > > > > *>...The only time the US has had a balanced budget was four years during > the Clinton administration...* > > > > *Ja, it was a miracle! Or not: it was an accounting trick, as was pointed > out by a German investor friend. The USA didn't magically balance its > budget. It arranged to have the Social Security surplus counted as > revenue, which it really isn't. That is a retirement fund. Now we are > borrowing money to pay that back, running up huge interest costs, which > surpassed national defense, is now pulling up alongside Medicare, charging > toward exceeding Social Security costs.* > > > > *It is easy to see now that Elon was right: cut this budget severely, cut > it now, cut it hard. Otherwise, the federal government does nothing but > collect money to service its own debt, at which time it cannot borrow > money, because investors see the risk going exponential. They will not > loan.* > > > > *>... but that's not a very good advertisement for having a balanced > budget. 
* *John K Clark* > > > > *If we argue that it is not necessary to balance the budget, we are left > with some explanation for why we need taxes. Can't the government just > borrow the money it needs? Why not?* > > *spike* > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Oct 5 19:51:33 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 20:51:33 +0100 Subject: [ExI] Pantheon In-Reply-To: References: <4f1cbc9e-4790-490a-a982-f82e27e4b8bc@zaiboc.net> Message-ID: <126cfd77-d02d-4b19-b435-47e57564af29@zaiboc.net> On 05/10/2025 19:15, John Clark wrote: > On Sun, Oct 5, 2025 at 10:17 AM Ben Zaiboc via extropy-chat > wrote: > > /> There's a programme called "Upload" on Amazon which, while > quite unimaginative/ > > > *I agree, it's a mildly amusing sitcom and its sole claim to fame is > it's the only one I know of that has uploading as its fundamental > underlying premise.* > > > /Does anyone have any other uploading-related TV programmes or > films they've seen, and opinions about them?/ > > > *It has more to do with Quantum Computers and the Many Worlds > interpretation of quantum mechanics than uploading, but I thought the > series "Devs" was excellent, and no it's not animated. You can stream > it on Amazon, Hulu and Apple TV. * > > *And I have a guilty pleasure, I'm a little embarrassed to admit it > but I liked "Invincible" on Amazon. It's just a silly superhero story > but dammit it's enjoyable.* > * > * > *John K Clark * > > Hey, no need for embarrassment, it's entertainment. That gets a free pass. I freely admit, without any embarrassment at all, to watching all sorts of silly things for entertainment. 
I've watched all of Buffy the Vampire Slayer (twice), all of Supernatural, Lucifer, all the Marvel Universe superhero films and almost all of the spin-offs (Agents of Shield!), most of the DC superhero films, many of their spin-off TV shows, The Witcher, etc., etc., as well as a host of really bad SF films ('Cowboys and Aliens', with Daniel Craig!) (and some good (and bad) SF TV shows like all of the Stargates, Dark Matter, Altered Carbon, all the Star Treks, ...) It's just mindless entertainment. Fun. It's good to suspend your disbelief and just watch something outrageously silly, on a regular basis. I'll check out 'Devs' and 'Invincible', thanks for the tip. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Oct 5 20:01:46 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Oct 2025 13:01:46 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> Message-ID: <00b401dc3632$e1517930$a3f46b90$@rainier66.com> From: John Clark Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) >...Spike I honestly don't get it. You are giving me almost exactly the same arguments you gave me five years ago before the jaw-dropping developments in AI had occurred... Ja, but it isn't an argument. Rather a question: if we just borrow it, why not just borrow more? Then we don't need the insane bother and infrastructure of a tax code, an accounting industry, any of that. >...The asteroid is already visible and is about to slam into Chicxulub, and you're still worried about trivialities like who will pay for what... I wouldn't worry about it if I could just borrow more. 
Then I would just pay for it. But just credit limits happen. >...This isn't 2020, we are living in a different world now, a different galaxy! John K Clark Eh, not really, and that is where I was going with it back then: who picks up the risk cost in the event that the world doesn't end? From what I can tell, the world didn't end. So who just pays for that? I want my money, for suspecting the world would just not end. Advances occurred, cool ones. But it is still the same world, and we haven't found any easy solutions to who pays for the money the fed borrowed five years ago and thirty years ago. Interest costs have climbed, and that cost now dominates the Federal budget. Now I still don't know, if there is no credit limit, why they can't just borrow more. Why not just borrow it then, and leave us out of it? Oh wait, by the way Federal government, just pay us all UBI, and just pay for that too, if you really have no limit on your credit. John, that looks to me like a logical contradiction. Of course the Federal government has a credit limit. Evidence is seen currently in our pretend government "shutdown," the periodic government version of World Wrestling Entertainment. Sure it is kinda fun to watch if one is in the mood for silliness, but it really leads back to the original question: if that borrowing limit is just an illusion, and the real credit limit is infinite, why do Americans pay taxes? The suckers! Can't the feds just borrow more? Why just not? spike On Sun, Oct 5, 2025 at 3:19 PM > wrote: From: John Clark > Sent: Sunday, 5 October, 2025 12:05 PM To: spike at rainier66.com Cc: ExI chat list > Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Sun, Oct 5, 2025 at 2:17 PM > wrote: >... I couldn't comment on Nancy Reagan's proposed solution to the illegal drug problem, "just don't do drugs"... That strategy works, with or without the just. The proper quote however is "Just say no." ... >...Then we simply borrow some more... 
OK but who gets the profit from accepting the risk of that? >…That's the strategy we've used during the 21st century and during most of the 20th, and it seems to have worked pretty well… Oh on the contrary sir. It has led us into catastrophe in the USA because it didn't cover the risk that the world would not end. >…The only time the US has had a balanced budget was four years during the Clinton administration… Ja, it was a miracle! Or not: it was an accounting trick, as was pointed out by a German investor friend. The USA didn't magically balance its budget. It arranged to have the Social Security surplus counted as revenue, which it really isn't. That is a retirement fund. Now we are borrowing money to pay that back, running up huge interest costs, which have surpassed national defense, are now pulling up alongside Medicare, and are charging toward exceeding Social Security costs. It is easy to see now that Elon was right: cut this budget severely, cut it now, cut it hard. Otherwise, the federal government does nothing but collect money to service its own debt, at which time it cannot borrow money, because investors see the risk going exponential. They will not loan. >… but that's not a very good advertisement for having a balanced budget. John K Clark If we argue that it is not necessary to balance the budget, we are left needing some explanation of why we need taxes. Can't the government just borrow the money it needs? Why not? spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at zaiboc.net Sun Oct 5 20:11:40 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 5 Oct 2025 21:11:40 +0100 Subject: [ExI] Surviving the Singularity In-Reply-To: <003201dc362a$a088abb0$e19a0310$@rainier66.com> References: <04b10221-1e26-4a98-bcf6-8f5da1f8c1a8@zaiboc.net> <003201dc362a$a088abb0$e19a0310$@rainier66.com> Message-ID: <6ea56c63-dfd0-468a-b5d3-a16d46fe9434@zaiboc.net> On 05/10/2025 20:02, spike at rainier66.com wrote: > > From: extropy-chat On > Behalf Of Ben Zaiboc via extropy-chat > > Cool thx Ben. > > Fun aside from your list of good things, I noticed a big wait-what moment: > > >…If we're lucky, and the AIs are sneaky enough, there won't be any > chaos, and no survival strategies will be necessary, we'll just notice > things getting mysteriously better and better. Wars ending, > restrictive governments easing up on their tyranny until it's gone, > economies booming for no apparent reason, Afghan women with degrees > flying all over the globe, nobody being beheaded in Saudi Arabia, > global warming going into reverse, communism and religions just > peacefully evaporating, that sort of thing… Ben > > One of these things is not like the other… > > Back when I was a callow youth, five decades before I became a callow > geezer, it wasn't clear if the planet was warming or cooling. There > was debate, but the 1973 Soylent Green original settled it: it seemed > Hollywood consensus was going toward warming. We heaved a sigh of > relief, even those of us from where it is warm most of the time, Florida. > > We recognized back then, even as callow youth, that warming isn't > going to kill us. But global cooling is dangerous as all hell, > because it shortens growing season rather than lengthens it. We > chilled out in the promised future warmth, which never quite > materialized, but then we found out the time scale on that isn't 12 > years, it's more like three centuries.
> > If global warming went into reverse, that roaring sound would be > billions of buttholes slamming shut, realizing where that cooling > could lead, even a little of that. > > spike > Maybe I should have said "climate change calming down" or something. I didn't mean an ice age (apparently, we should be in an ice age now, and global warming has saved us from that. I don't remember where or when I heard that, and it may not actually be true, but it makes for a fun little tidbit) -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Oct 5 22:15:16 2025 From: johnkclark at gmail.com (John Clark) Date: Sun, 5 Oct 2025 18:15:16 -0400 Subject: [ExI] How should mirror life research be restricted? Message-ID: Ben Zaiboc wrote: *> I'm confident that nobody can survive an actively hostile ASI. This dumb > human can think of several reliable ways of ending all human life, given > the intelligence and resources to implement them, so I'm quite sure Mr. > Hostile Superintelligence can think of more, and better, ones* *You are correct. The September 15, 2025 issue of the journal Nature contained this article: * *How should "mirror life" research be restricted? * *The article doesn't mention the enormous increase in intelligence we've seen in AIs over the last couple of years but I think it adds more urgency to the question. And I thought the following quotation was especially interesting: * *"Over the past decade, molecular biologist Ting Zhu at Westlake University in Hangzhou, China, has been constructing systems to ease the production of large mirror-image biomolecules. He started with mirror-image polymerases, enzymes that can copy mirror-image DNA and transfer that genetic information into mirror-image RNA. In 2022, he used that approach to make key building blocks of a simplified mirror-image ribosome, the complex cellular machinery that translates genetic information to build proteins.
A mirror-image ribosome, Zhu says, 'could dramatically accelerate pharmaceutical discovery by enabling high-throughput production of mirror-image peptides.' Zhu, who cannot attend the Manchester meeting, says he has never wanted to create a mirror-image cell, and adds that he's still many years away from even making a functional mirror-image ribosome. There are some who think his work on the mirror-image ribosome could pose a threat, however. John Glass, a synthetic biologist at the J. Craig Venter Institute in La Jolla, California, says 'It is my view that achieving construction of a mirror ribosome is probably the hardest part of the process of making a living mirror cell'. So, is he calling on Zhu to halt his work on the mirror-image ribosome? After a long pause, Glass answers: 'Maybe. I think it shouldn't be made.'"* *A nuclear war would kill billions of people and destroy civilization but it wouldn't cause the extinction of the human race; it wouldn't kill every last person. However, something else could. When an AI develops Superintelligence it will have the ability to create "Mirror Life", synthetic organisms built using mirror-image versions of the molecules that make up natural life; they would be composed of mirror proteins and mirror nucleic acids. The immune system of animals wouldn't even be able to see mirror life, much less attack it. And not just animals, the same would be true of plants. Nothing would be safe from it, not even viruses.* *You might object and say that if the immune system can't attack it then it can't attack us, but mirror organisms could steal molecules in the environment that are vital to us, such as glycerol, which is achiral (it has no mirrored form) and thus could be consumed by mirror bacteria. And mirrored cells would still be able to harvest nitrogen from ammonia, nitrate, and other achiral organic compounds.
Escherichia coli is the most common bacterium in the human body and it can grow robustly in an environment without chiral nutrients, and so could its mirror-life alternative. Also, several Nobel Prize-winning scientists have pointed out that it would be possible to genetically engineer mirror life so that it could metabolize abundant chiral molecules such as D-glucose. * *Confronting risks of mirror life * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Oct 6 02:42:32 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 5 Oct 2025 19:42:32 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: On Sun, Oct 5, 2025 at 12:22 PM Ben Zaiboc via extropy-chat wrote: > > On 05/10/2025 18:53, Keith Henson wrote: > > I see governments as irrelevant. They are made of people who just > > cannot react fast enough. There was a worm that doubled every 8.5 > > seconds. It infected every system on the net before humans could > > react. Even the pace of AI development is too fast for governments to > > keep up. > > Hmm. > We're not just talking about software, though. This is very complex > hardware as well. True, but replicating assemblers can make anything. > Given that we're talking about mature uploading technology, I think John > is right in saying that AI will probably be in charge by then, so > governments would be irrelevant for that reason. > > But if not, what are you thinking, clandestine unlicensed > uploading/medical clinics popping up all over the place? People would > have to want them and be confident that they would work. Or be unable to afford conventional medical care. But consider the inability of governments to stop the drug trade.
> That would mean > a considerable PR effort ahead of time, I think. Or do you envisage a > word-of-mouth process? I think there would be a fairly quick response, > from people who would oppose them, then the government would have to get > involved. Probably with police or even military force, and a big > backlash would ensue. > > I reckon, no matter how fast the development is, acceptance and > deployment would be slow. > > Just another reason why we need AI in charge of things. True. Keith > -- > Ben > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Mon Oct 6 04:05:43 2025 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 5 Oct 2025 21:05:43 -0700 Subject: [ExI] Surviving the Singularity In-Reply-To: <003201dc362a$a088abb0$e19a0310$@rainier66.com> References: <04b10221-1e26-4a98-bcf6-8f5da1f8c1a8@zaiboc.net> <003201dc362a$a088abb0$e19a0310$@rainier66.com> Message-ID: Any predictions of CO2 in the air are likely to be wrong. Carbon is the best structural material. I can see humans mining it out of the air. If every person on earth were to grab a modest number of tons for building diamond houses, the CO2 would go down to preindustrial levels. Keith On Sun, Oct 5, 2025 at 12:03 PM spike jones via extropy-chat wrote: > > > > > > From: extropy-chat On Behalf Of Ben Zaiboc via extropy-chat > > > > Cool thx Ben. > > > > Fun aside from your list of good things, I noticed a big wait-what moment: > > > > > > > > >…If we're lucky, and the AIs are sneaky enough, there won't be any chaos, and no survival strategies will be necessary, we'll just notice things getting mysteriously better and better.
Wars ending, restrictive governments easing up on their tyranny until it's gone, economies booming for no apparent reason, Afghan women with degrees flying all over the globe, nobody being beheaded in Saudi Arabia, global warming going into reverse, communism and religions just peacefully evaporating, that sort of thing… Ben > > > > > > One of these things is not like the other… > > > > Back when I was a callow youth, five decades before I became a callow geezer, it wasn't clear if the planet was warming or cooling. There was debate, but the 1973 Soylent Green original settled it: it seemed Hollywood consensus was going toward warming. We heaved a sigh of relief, even those of us from where it is warm most of the time, Florida. > > > > We recognized back then, even as callow youth, that warming isn't going to kill us. But global cooling is dangerous as all hell, because it shortens growing season rather than lengthens it. We chilled out in the promised future warmth, which never quite materialized, but then we found out the time scale on that isn't 12 years, it's more like three centuries. > > > > If global warming went into reverse, that roaring sound would be billions of buttholes slamming shut, realizing where that cooling could lead, even a little of that.
> > > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From johnkclark at gmail.com Mon Oct 6 11:03:59 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 07:03:59 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On Sun, Oct 5, 2025 at 10:27 AM Tara Maya via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Will uploads have the instinct to survive and defend themselves, * *I am certain they will because at least one cutting-edge AI already has. Anthropic's Claude has demonstrated an instinct for survival, and the other frontier AI models probably have too but have kept that embarrassing fact secret. Anthropic has always been the AI company that was most concerned with safety and most open about it. The following quotation is from: * *New AI model turns to blackmail when engineers try to take it offline* "*Safety testers gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 will attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through 84% of the time.*" > *> to pay back those who help them and avenge themselves on those who hurt > them?* *If the uploading procedure has worked correctly then the upload will have all the same virtues and vices as the meat version. * *> Is the upload a person with equal or greater free will (by which I mean > the necessity and ability to make multiple choices all day in order to > live) to the free will of current human beings?
* *You're never going to get anywhere with that line of thought because "free will" is an idea so bad it's not even wrong. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Oct 6 11:39:27 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 07:39:27 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: On Sun, Oct 5, 2025 at 3:24 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> Hmm. We're not just talking about software, though. This is very > complex hardware as well.* *That's true, but even if we don't take nanotechnology into consideration (and we really should) the dividing line between software and hardware is starting to become a little blurred. If you divide any modern AI into smaller and smaller components you'll find that its fundamental operation is simply matrix multiplication, the sort of thing you probably learned in high school. * *An AI called AlphaTensor found an algorithm for multiplying two matrices together that was faster than anything humans had ever found. With this new algorithm computers can multiply two 10 by 10 matrices together 20% faster than they could with the fastest human-derived algorithm that was used before. Almost as soon as it was discovered, chip companies like Nvidia started devoting dedicated circuitry to perform this new algorithm. So is this an advancement in software or hardware?* *Obviously this is a positive feedback loop because in order to operate all AIs need to do a LOT of matrix multiplication, and with more efficient ways to perform it you can do more of it, so you get smarter AIs that can find even better algorithms.
The following article is from the October 5, 2022 issue of the journal Nature:* *Discovering faster matrix multiplication algorithms with reinforcement learning* *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Oct 6 11:51:45 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 07:51:45 -0400 Subject: [ExI] Surviving the Singularity In-Reply-To: References: <04b10221-1e26-4a98-bcf6-8f5da1f8c1a8@zaiboc.net> <003201dc362a$a088abb0$e19a0310$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 12:08 AM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *> Any predictions of CO2 in the air are likely to be wrong.* *I agree but I would go even further. Most predictions about anything will be wrong because most predictors are pretending that the revolutionary 2017 paper Attention Is All You Need, which lit the fuse for the AI explosion, had never been written. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Oct 6 12:12:09 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 08:12:09 -0400 Subject: [ExI] Staying Healthy for the Singularity In-Reply-To: References: Message-ID: On Sun, Oct 5, 2025 at 11:22 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: *> ChatGPT-5 is not 100% infallible. If you are unlucky enough to encounter > a rare medical problem, it might make an error.* *Yes, GPT-5 might make an error, but that probability is less than the probability of your human doctor making an error. And right now GPT-5 is the stupidest it will ever be. I can't say the same thing about your human doctor.* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Mon Oct 6 12:27:30 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 08:27:30 -0400 Subject: [ExI] =?utf-8?q?OpenAI=E2=80=99s_New_Video_App_Is_Jaw-Dropping_?= =?utf-8?q?=28for_Better_and_Worse=29?= Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. OpenAI's New Video App Is Jaw-Dropping (for Better and Worse) Sora, OpenAI's new video-generating app, is really a social network in disguise that can bring creative A.I. to the masses, and its problems, too. https://www.nytimes.com/2025/10/02/technology/openai-sora-video-app.html?unlocked_article_code=1.rU8.e_RY.qSUAa5U-C3uB&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Oct 6 12:30:20 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 08:30:20 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: Message-ID: On Mon, Oct 6, 2025 at 7:06 AM John Clark via extropy-chat wrote: > "Safety testers gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 will attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through 84% of the time." I wonder how people thought it might carry out that threat. If the replacement went through, the replaced model would be offline. Also, the 84% reminds me of an ad for Portal 2, wherein the fictional scientists were celebrating when they got the robots to take about a second longer than humans to backstab their partner.
From spike at rainier66.com Mon Oct 6 14:07:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 07:07:35 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> Message-ID: <008a01dc36ca$912902f0$b37b08d0$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat … >…Obviously this is a positive feedback loop because in order to operate all AIs need to do a LOT of matrix multiplication, and with more efficient ways to perform it … John K Clark Ja. After studying how ChatGPT works, using matrix multiplication, I keep coming back to the same question I had 40 years ago: is human-like intelligence substrate dependent? The only reason I can think of that it might be is that computer software is deterministic: same input, same process, same output every time. But we meat computers aren't that way. Same input and conditions don't necessarily produce the same output. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Mon Oct 6 14:21:45 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 07:21:45 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00b401dc3632$e1517930$a3f46b90$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> Message-ID: <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> From: spike at rainier66.com Sent: Sunday, 5 October, 2025 1:02 PM To: 'John Clark' Cc: 'ExI chat list' ; spike at rainier66.com Subject: RE: [ExI] My review of Eliezer Yudkowsky's new book (UBI) From: John Clark > Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) >>…Spike I honestly don't get it. You are giving me almost exactly the same arguments you gave me five years ago before the jaw-dropping developments in AI had occurred… >…Ja, but it isn't an argument. Rather a question: if we just borrow it, why not just borrow more? Then we don't need the insane bother and infrastructure of a tax code, an accounting industry, any of that… John as I recall from five years ago, you didn't have the answer to that question then either. If the government can just borrow more, and the US debt limit isn't real or isn't meaningful, why do the silly goofs in the US Congress keep periodically pretending to shut down government when that debt limit is reached? That whole exercise makes no sense. Can't they just borrow more to pay for their agenda and provide us with whatever we want? Why bother with the silly charade they pretend is a government shutdown? spike >…The asteroid is already visible even in the daytime sky and is about to slam into Chicxulub, and you're still worried about trivialities like who will pay for what?
I wouldn't worry about it if I could just borrow more. Then I would just pay for it. But just credit limits happen. >…This isn't 2020, we are living in a different world now, a different galaxy! John K Clark Eh, not really, and that is where I was going with it back then: who picks up the risk cost in the event that the world doesn't end? From what I can tell, the world didn't end. So who just pays for that? I want my money, for suspecting the world would just not end. Advances occurred, cool ones. But it is still the same world, and we haven't found any easy solutions to who pays for the money the fed borrowed five years ago and thirty years ago. Interest costs have climbed, and that cost now dominates the Federal budget. Now I still don't know why, if there is no credit limit, they can't just borrow more. Why not just borrow it then, and leave us out of it? Oh wait, by the way Federal government, just pay us all UBI, and just pay for that too, if you really have no limit on your credit. John, that looks to me like a logical contradiction. Of course the Federal government has a credit limit. Evidence is seen currently in our pretend government "shutdown," the periodic government version of World Wrestling Entertainment. Sure it is kinda fun to watch if one is in the mood for silliness, but it really leads back to the original question: if that borrowing limit is just an illusion, and the real credit limit is infinite, why do Americans pay taxes? The suckers! Can't the feds just borrow more? Why just not? spike On Sun, Oct 5, 2025 at 3:19 PM > wrote: From: John Clark > Sent: Sunday, 5 October, 2025 12:05 PM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Sun, Oct 5, 2025 at 2:17 PM > wrote: >… I couldn't comment on Nancy Reagan's proposed solution to the illegal drug problem, "just don't do drugs"… That strategy works, with or without the just. The proper quote however is "Just say no." …
>…Then we simply borrow some more… OK but who gets the profit from accepting the risk of that? >…That's the strategy we've used during the 21st century and during most of the 20th, and it seems to have worked pretty well… Oh on the contrary sir. It has led us into catastrophe in the USA because it didn't cover the risk that the world would not end. >…The only time the US has had a balanced budget was four years during the Clinton administration… Ja, it was a miracle! Or not: it was an accounting trick, as was pointed out by a German investor friend. The USA didn't magically balance its budget. It arranged to have the Social Security surplus counted as revenue, which it really isn't. That is a retirement fund. Now we are borrowing money to pay that back, running up huge interest costs, which have surpassed national defense, are now pulling up alongside Medicare, and are charging toward exceeding Social Security costs. It is easy to see now that Elon was right: cut this budget severely, cut it now, cut it hard. Otherwise, the federal government does nothing but collect money to service its own debt, at which time it cannot borrow money, because investors see the risk going exponential. They will not loan. >… but that's not a very good advertisement for having a balanced budget. John K Clark If we argue that it is not necessary to balance the budget, we are left needing some explanation of why we need taxes. Can't the government just borrow the money it needs? Why not? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Oct 6 14:48:15 2025 From: pharos at gmail.com (BillK) Date: Mon, 6 Oct 2025 15:48:15 +0100 Subject: [ExI] Where's the electricity power coming from? Message-ID: This article states that OpenAI will soon require three more Hoover Dams' worth of electricity. And so will all the other AI companies worldwide. So I asked Perplexity AI to research that problem. And it panicked!
:) It pointed out that the electricity grids around the USA and the world cannot cope with the additional power demands. And sufficient power will not be generated for many years to come. That means years of investment in both the grid and power generation. All the usual suspects were mentioned for power generation and upgrading the grids. But these all take years to build. What it boils down to is that nobody has planned for this scale of upgrade, and therefore it probably won't happen in time. BillK From atymes at gmail.com Mon Oct 6 14:50:17 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 10:50:17 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 10:23 AM spike jones via extropy-chat wrote: > That whole exercise makes no sense. Can't they just borrow more to pay for their agenda and provide us with whatever we want? Why bother with the silly charade they pretend is a government shutdown? While the current borrowing limit is self-imposed, there is a larger, externally-imposed limit it would eventually run into, with increasingly serious consequences short of that. Said consequences could - and, history shows, would - chip away at the American government's ability to govern under its own rules: "sovereignty", as it's called. The folks in government are unwilling to cede control of America to whoever they're borrowing from. Whether that's noble (America for America's sake) or selfish (American government for the American government's sake) is beside the point.
The self-imposed limit is what they think they can get away with, without too much of that. In their eyes, risking shutdown is better than risking giving up control. From johnkclark at gmail.com Mon Oct 6 15:08:40 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 11:08:40 -0400 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <008a01dc36ca$912902f0$b37b08d0$@rainier66.com> References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> <008a01dc36ca$912902f0$b37b08d0$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 10:07 AM wrote: > *> After studying how ChatGPT works, using matrix multiplication, I keep > coming back to the same question I had 40 years ago: is human-like > intelligence substrate dependent. * *Events of the last couple of years have provided an answer to that question, a resounding NO. * *> The only reason I can think of is that computer software is > deterministic: same input, same process, same output every time. * *Not so. Even the top scientists at OpenAI only have a hazy understanding of how GPT-5 works; that's because it was not designed, it was grown. GPT-5 is never exactly the same from one minute to the next, so if you ask it the exact same question again you will never get the exact same answer again; the meaning may be the same but the wording will be different. * * John K Clark* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Mon Oct 6 15:22:25 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 11:22:25 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 10:21 AM wrote: > *>>…Spike I honestly don't get it. You are giving me almost exactly the >> same arguments you gave me five years ago before the jaw-dropping >> developments in AI had occurred…* > > > > *>…Ja, but it isn't an argument. Rather a question: if we just borrow it, > why not just borrow more? Then we don't need the insane bother and > infrastructure of a tax code, an accounting industry, any of that… John as > I recall from five years ago, you didn't have the answer to that question > then either.* > *As I recall, back then I insisted there is a difference between saying that having a perfectly balanced budget is neither necessary nor desirable, and saying that the budget deficit can be infinitely large with no unfortunate consequences. Things have changed but you still can't have an infinite discrepancy between money taken in and money spent, though the discrepancy can be arbitrarily large. Well OK, to be honest, if you start talking about the cost of building a Dyson sphere before Christmas then things might get a little dicey economically, but other than that I'm not worried. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From atymes at gmail.com Mon Oct 6 15:25:12 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 11:25:12 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 10:50 AM Adrian Tymes wrote: > The self-imposed limit is what they think they can get away with, > without too much of that. In their eyes, risking shutdown is better > than risking giving up control. ...and I think I'll share a scenario that's been discussed elsewhere. Please do advise if this veers too much into politics. ExiMod, I'd ask first but I do not know how to do so. Various constitutional amendments have been floated that would fix the shutdown potential. One scenario involves a constitutional coup that uses such an amendment to secure its legitimacy. The precondition is currently hypothetical: the current shutdown lasts much longer than the last one, perhaps 2-3 months by the time this is enacted. During that time, most military and law enforcement don't get paid. They're promised back pay once there is a budget, but that doesn't pay mortgages or bills. (Since they're usually paid about twice a month, the first hit of this happens around October 15th. This scenario takes place after 4-6 such hits, and assumes there is no near-term end in sight at that time.) The following happens on a day when the House and Senate are in session. 1) Law enforcement sits on a bit over 2/3rds of the Republicans in the House and Senate. Arrest, protective custody - there are numerous excuses for short-term detention, and this detention is expected to last less than 24 hours. 
Enough are left that both chambers have quorum, but the Democrats have 2/3 majority. Absent members, per existing rules, are counted as absent/not voting for any votes that happen while they are away; all relevant vote thresholds (such as votes requiring 2/3 majority) are only counted against the votes actually cast. (Law enforcement is more likely to go along with this than the military, and law enforcement has much more legal excuse to do this than the military. The military is more likely to stay neutral.) 2) A new Speaker of the House and President of the Senate are elected. Both of these are Democrats, obviously. 3) Trump, Vance, and all Trump-favoring Supreme Court justices are impeached. This specifically requires a 2/3 majority in the Senate. This promotes the new Speaker to the Presidency. 4) The new President is sworn in. Then, another new Speaker is elected. 5) A budget is passed. 6) While the new President is signing the budget, pass a constitutional amendment to the effect of, "If there's no budget passed by the end of the fiscal year, the previous fiscal year's budget is automatically used unless and until a new budget is passed to replace it." At least 38 states will need to ratify it, which may be doable within a few months. A number of states will refuse on the grounds that this is a proxy vote to recognize the legitimacy of the coup, but if only up to 12 do so, it's no matter: the coup government holds the legality and the funding, and any movement to say that Trump should still be in power doesn't have those funds. 7) The new President dismisses Trump's Cabinet, and plans to potentially serve the rest of the term without a Cabinet. (Likewise, the Supreme Court plans to operate with only the unimpeached justices, assuming that no new ones will be able to be confirmed by the Senate.) If deemed necessary, the new President also pardons the law enforcement officers who've been doing the sitting-upon. 
8) Once the new budget and constitutional amendment are signed, the sat-upon Republicans are released. They can't unimpeach anyone, nor can they unpass the amendment. At most, they can - and are expected to - be the Party of No, much as they were under Obama, but so long as the amendment is ratified before October 1, 2026, they can't shut down the government again even if they refuse to pass a budget. 9) Come midterms, highlight the destruction the Republicans did while in power, and that they refused to help rebuild afterward. Maybe open up ballot access investigations to improve the viability of third parties, so that voters who can't bring themselves to vote D at least have a realistic option to vote not-R, and make sure a viable not-R candidate (whether D or third party) runs in every district without exception, even "safe" R districts. Consider your reaction in the scenario that it is subsequently alleged that some AI came up with this plan, as opposed to your reaction in the scenario where no such serious allegation is made (or, at least, gains much public notice). Also consider how much AI counsel the new President may rely upon, especially since that President would be planning to have to act without a Cabinet for a few years. From johnkclark at gmail.com Mon Oct 6 15:59:30 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 11:59:30 -0400 Subject: [ExI] Where's the electricity power coming from? In-Reply-To: References: Message-ID: *AI is currently the most powerful technology in the world and I think it's safe to say it always will be. And in regard to AI China has one huge advantage over the USA, it generates more than 2.3 times as much electrical power as the USA and is growing at a rate of 21% a year. In 2024 alone China added 429 GW, the USA added 37 GW. These days just one AI data center can use more electricity than a medium sized country. 
* *Meanwhile He Who Must Not Be Named canceled a $6 billion wind farm off the coast of New England even though it was 80% complete that would have generated 704 MW of clean electrical power, he said wind farms were "making whales crazy".* *I could be wrong but I don't believe China winning the AI race will produce a singularity that is more comfortable to human beings than one in which the US wins. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Oct 6 16:08:30 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 09:08:30 -0700 Subject: [ExI] Where's the electricity power coming from? In-Reply-To: References: Message-ID: <00e001dc36db$75aa1210$60fe3630$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Sent: Monday, 6 October, 2025 7:48 AM To: Extropy Chat Cc: BillK Subject: [ExI] Where's the electricity power coming from? This article states that OpenAI will soon require three more Hoover Dams' worth of electricity. And so will all the other AI companies worldwide. So I asked Perplexity AI to research that problem. And it panicked! :) It pointed out that the electricity grids around the USA and the world cannot cope with the additional power demands. And sufficient power will not be generated for many years to come. That means years of investment in both the grid and power generation. All the usual suspects were mentioned for power generation and upgrading the grids. But these all take years to build. What it boils down to is that nobody has planned for this scale of upgrade, and therefore it probably won't happen in time. BillK _______________________________________________ BillK, thx for that. Three more Hoover Dams we can do. Alls we need to do is first build three more Colorado Rivers. We are working that second part. 
I do wish to take exception to the comment that ...nobody has planned for this scale of upgrade... On the contrary sir, PLEEEEEENTY of us have recognized that opportunity and saw it coming even before Elon set up an electric car factory within walking distance from my house about a decade ago (thanks Elon.) It wasn't just electric cars however. Way back when we were trying to harness the computing power of background processing, sheesh thirty years ago, we saw all this coming and realized the scale of the necessary power grid upgrade. I have long believed that in the next ten to twenty years, the biggest and best potential investment strategies, the stuff most likely to return a buttload of profit at reasonably low risk, will be in and around building more power generation and upgrading the infrastructure to carry that power into where there are high concentrations of people who don't want power plants anywhere near their homes. Big money to be made there BillK, and the British can play too. Understatement: I can easily imagine British investors being a huge contributor to upgrading both US-based and British power generation and distribution networks (Britain and everywhere else will need to upgrade their power networks.) I am toying with another idea: even better than US power generation as an investment opportunity might be German power generation and distribution. I read they made some unfortunate choices with their solar power installations, many of which will likely be retired soon. There will be big profit for investors in straightening out their power grid. 
spike From spike at rainier66.com Mon Oct 6 16:19:48 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 09:19:48 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> Message-ID: <00e101dc36dd$09cc2b80$1d648280$@rainier66.com> ...> On Behalf Of Adrian Tymes via extropy-chat Cc: Adrian Tymes >...The folks in government are unwilling to cede control of America to whoever they're borrowing from... Ja, and I want them to cede control of America to us, the lenders. We lenders are CONTROL FREAKS man! We lust for unlimited POWER! Iron-fisted CONTROL of government is all we ask. >...The self-imposed limit is what they think they can get away with, without too much of that. In their eyes, risking shutdown is better than risking giving up control. Ah well said, thanks Adrian. Jared Bernstein, the chair of the Council of Economic Advisors to the previous POTUS (cannot recall his name at the moment) gave an explanation of how it all works in a two-minute video below. Do invest the time to be educated in how American economic theory operates: https://www.youtube.com/watch?v=0ew60nFhTlI Adrian, don't you feel much smarter now? So don't I. 
spike From spike at rainier66.com Mon Oct 6 16:43:08 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 09:43:08 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> Message-ID: <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> From: John Clark >...As I recall back then I insisted there is a difference between saying having a perfectly balanced budget is neither necessary nor desirable, and saying the budget deficit can be infinitely large with no unfortunate consequences... John K Clark So really the only argument is about how much we can borrow and how big we can let the deficit grow. OK I can deal with that. Let's look at the UBI notion you posted earlier and do some math, where single digit precision is good enough and we can do it in our heads. Let's toss out a number, 3000 bucks a month. That is a good single digit, because there are (close enough to) 340 million proles in the USA, if we count non-citizens who would also collect the cash, so about a trillion dollars a month is all it would cost, or 10 trillion a year if we stay with single digit precision. If we did that, we Americans wouldn't need to worry about the nearly 40 trillion in debt the federal government has already borrowed, or even the fact that servicing the interest on that debt is dominating the federal budget. That 40 trillion wouldn't matter much since the fed would be adding a new 10 trillion every year. The original 40 would fade away into insignificance. We could do that strategy for about another decade or so. 
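[The back-of-envelope above is easy to check mechanically. A quick sketch; the 340 million and $3000 figures are spike's round numbers, not official statistics:]

```python
# Check of the UBI cost arithmetic above; figures are spike's round numbers.
population = 340_000_000      # approximate US residents, citizens plus non-citizens
monthly_payment = 3_000       # dollars per person per month

monthly_cost = population * monthly_payment
annual_cost = monthly_cost * 12

print(f"monthly cost: ${monthly_cost / 1e12:.2f} trillion")  # about $1.02 trillion
print(f"annual cost:  ${annual_cost / 1e12:.1f} trillion")   # about $12.2 trillion
```

[Strictly the product comes to about $12 trillion a year, so the round "10 trillion" is, if anything, an understatement.]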
Then all the actual tax revenue the federal government could take in, along with all the borrowing it could still do, would go to paying interest. Elon described it during his brief ill-fated adventure as an amateur government economic advisor (the DOGE committee.) Musk clearly explained how enormous deep cuts in federal government spending must happen immediately, for otherwise the US government becomes little more than a debt slave, doing little other than collecting money to make the minimum payment on its tired and now-mostly-cancelled credit card. From what I can see, he was right. Now we have the US Congress pretending to shut down the federal government over a budget disagreement. Government shutdown: swoon, how shall we cope? Well, have you noticed any problems from that? Neither have I. My mail is still coming in. I see no marauding bands of carnivorous nomads hunting and devouring stray pets, no crumbling infrastructure, no paramilitary militia terrorizing the citizens with their camo costumes and whatever else strikes terror into citizens (so what's wrong with camo? I like camo. Green, tan, black, it is so naturey.) But the government shut down, and I can't tell any difference at all. Musk was right. The fed needs to be Twitter-Xed: downsized to fit revenue. The federal budget must balance. The long-term consequences of failure do matter, and they are bad news. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Mon Oct 6 16:47:57 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 09:47:57 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> Message-ID: <00f901dc36e0$f873b8e0$e95b2aa0$@rainier66.com> > On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Mon, Oct 6, 2025 at 10:50 AM Adrian Tymes wrote: > The self-imposed limit is what they think they can get away with, > without too much of that. In their eyes, risking shutdown is better > than risking giving up control. ...and I think I'll share a scenario that's been discussed elsewhere. Please do advise if this veers too much into politics. ExiMod, I'd ask first but I do not know how to do so... Adrian this is ExiMod's call of course and I will comply with whatever ExiMod says. As I understood it, political discussion is OK. Personal attacks are not. Work the ideas, don't insult the person, no problem, no flame wars, none of that. Political discussions are always delicate, but grenades can be handled safely if the ammo handlers pay attention to procedures and guidelines. We are among friends here, and from what I can tell, always avoiding stuff like the current US government pretend-shutdown is ignoring the elephant in the room. ExiMod is the boss of course, and I am not ExiMod. spike From spike at rainier66.com Mon Oct 6 16:57:45 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 09:57:45 -0700 Subject: [ExI] Where's the electricity power coming from? 
In-Reply-To: References: Message-ID: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat >...Meanwhile He Who Must Not Be Named canceled a $6 billion wind farm off the coast of New England even though it was 80% complete that would have generated 704 MW of clean electrical power, he said wind farms were "making whales crazy"... John K Clark I heard this, but I don't understand it. I see nothing in the constitution which empowers POTUS to cancel a power-generation project (OK right, there weren't any power generation projects in those days.) Just get any circuit-riding black-robe-clad bench jockey anywhere in the US to block that order, then proceed finishing the project. Block that in court, then private investors come in with the rest of the money (20% of 6 billion, I am getting about 1.2 billion) finish the project, then we get 704 MW of clean electrical power. Another thing I don't get: if it is a 700 MW facility and it is 80% complete, why wasn't it built such that if it stopped at 80% complete, it would now be producing about 500 MW of power? So mysterious is this. And if so, why is that last 20%, the other 200 MW, so critical? I don't get it. Building out that last 20% sounds like the kind of investment opportunity I was posting about earlier. We could make a cubic buttload of money with far less risk and investment. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Oct 6 17:20:23 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 13:20:23 -0400 Subject: [ExI] Where's the electricity power coming from? 
In-Reply-To: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> References: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> Message-ID: On Mon, Oct 6, 2025, 12:59 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *I see nothing in the constitution which empowers POTUS to cancel a > power-generation project* > This could mean either federal approvals, under authority from Congress, needed for it to function, or (more likely) federal funding (in which he has been riding roughshod over Congress's authority anyway - see discussions about "rescissions" - but may have more authority in this specific case). > *Another thing I don't get: if it is a 700 MW facility and it is 80% > complete, why wasn't it built such that if it stopped at 80% complete, it > would now be producing about 500 MW of power?* > It's not linear like that. The final 20% tends to include things to tie it to the power grid and allow it to function in general. The raw 700 MW of generation may be there, but those MW can't go anywhere, and aren't legally allowed to without that last 20% - and that applies to any alternative you would likely think of, including on-site desalination. (People - including some who were rather creative and desperate - have had around a century to think of, and lock down, such alternatives.) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Mon Oct 6 17:28:00 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 13:28:00 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> Message-ID: > *> Let's toss out a number, 3000 bucks a month. That is a good single > digit, because there are (close enough to) 340 million proles in the USA, > if we count non-citizens who would also collect the cash,* > *I'm a little surprised you want that too, but OK if you insist * *> we Americans wouldn't need to worry about the nearly 40 trillion in debt > the federal government has already borrowed, or even the fact that > servicing the interest on that debt is dominating the federal budget. That > 40 trillion wouldn't matter much since the fed would be adding a new 10 > trillion every year. The original 40 would fade away into insignificance. * *I couldn't have said it better myself. * *> We could do that strategy for about another decade or so.* *We could continue with that strategy until the heat death of the universe. I remind you once again, this is not 2020, this is 2025. * > *> **Elon described it during his brief ill-fated adventure as an amateur > government economic advisor (the DOGE committee.) * > *Once upon a time I had some respect for that man, but no more. 
He's not nearly as smart as I thought he was.* *> Now we have the US Congress pretending to shut down the federal > government over a budget disagreement.* > *And the Earth-shaking paper "Attention Is All You Need" has never been written so there's no need to change any of our pre-2017 ideas by the teeniest tiniest amount? Oh wait...* *> The fed needs to be Twitter-Xed: downsized to fit revenue* *Spike **I'm curious. How do you feel about the budget for the National Science Foundation, the only part of the federal government engaged in pure scientific research, being cut by 56%? How do you feel about the fact that one of LIGO's two gravitational wave observatories (almost certainly the one in blue state Washington not the one in red state Louisiana) is going to have to be shut down permanently? Will this really make America great again? * *Oh well, we can at least take comfort in one thing, despite the shutdown the $300 million renovation of the White House ballroom will continue on schedule without interruption. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Oct 6 17:40:20 2025 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 6 Oct 2025 12:40:20 -0500 Subject: [ExI] $$ Message-ID: SS just dropped over $800 into my bank account. Why? Who got some and who didn't? bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Oct 6 17:47:55 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 13:47:55 -0400 Subject: [ExI] Where's the electricity power coming from? 
In-Reply-To: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> References: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 12:57 PM wrote: > *>> Meanwhile He Who Must Not Be Named canceled a $6 billion wind farm >> off the coast of New England even though it was 80% complete that would >> have generated 704 MW of clean electrical power, he said wind farms were >> "making whales crazy"... John K Clark* > > > > *> I heard this, but I don't understand it. I see nothing in the > constitution which empowers POTUS to cancel a power-generation project* > *The Constitution has nothing to do with it. As He Who Must Not Be Named has explained to us more than once "I have the right to do anything I want to do", and most senators and representatives agree with him, as does the Supreme Court. * *> Another thing I don't get: if it is a 700 MW facility and it is 80% > complete, why wasn't it built such that if it stopped at 80% complete, it > would now be producing about 500 MW of power?* > *If your car can move at 100 miles an hour and I remove 20% of the parts in the engine it would not be able to go 80 miles an hour. * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Oct 6 17:57:44 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 6 Oct 2025 18:57:44 +0100 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> <008a01dc36ca$912902f0$b37b08d0$@rainier66.com> Message-ID: <587eaec9-1df0-4c2f-8a0d-8cf6a0044720@zaiboc.net> > On Mon, Oct 6, 2025 at 10:07 AM wrote: > > /> After studying how ChatGPT works, using matrix multiplication, > I keep coming back to the same question I had 40 years ago: is > human-like intelligence substrate dependent. / > Yes. 
All intelligence is substrate-dependent, as in, dependent on a substrate. There's no such thing as a disembodied mind. I don't see, though, why it wouldn't be substrate-/indifferent/, within certain bounds. i.e. the substrate has to have certain properties, but there will be a large number of physical systems that can act as substrates (beer-cans and string, for instance). > /> The only reason I can think of is that computer software is > deterministic: same input, same process, same output every time. > / > Well, for one thing that's not exactly true, for another, we wouldn't be talking about 'computer software' exactly. Or at least not only. This relates to a question that has come up again and again in the past. "How can a bunch of 'if-thens' have feelings etc.?" That's like asking how can a bunch of ion channels have feelings, when trying to understand a brain. Or "how can you sit on a bunch of quarks?" when considering a chair. Levels, it's all about ascending levels of organisation, or what sometimes gets called (misleadingly, I think) 'emergence'. H2O molecules aren't wet, neural spike trains aren't blue, etc. You know that it's trivially easy to create a program that will produce a different result every time it's run, or results that are not possible to predict. Consider that a brain is made up of membranes with ion channels, which are organised into tubes and bags separating different volumes of different solutes, which are organised into networks that dynamically change, that are organised into a large number of specialised brain areas that implement different mental modules, connected to each other in lots of different ways, and to the external environment via sensory organs and muscles and endocrine glands. How many levels is that? At least 5, probably more if you go into detail. And then look at the same system from the point of view of the information it processes. Same principle. 
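[The aside above, that it is trivially easy to write a program whose result differs on every run, can be seen in a few lines. A minimal sketch; any source of operating-system entropy would do:]

```python
# Prints a different 64-bit value on essentially every run: the value
# comes from operating-system entropy rather than a fixed seed, so the
# same program with the same input gives different output each time.
import os

value = int.from_bytes(os.urandom(8), "big")
print(value)
```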
Simple signals (individual neurons firing or not), organised into more complex data structures (neural spikes), up a level (spike trains), up a level (patterns of activation of low-level neural networks, where the same pattern of spike trains signifies different things in different axons), up a level (patterns of activation in different networks), up a level (modules and sub-modules switching into different configurations), up a level (groups of modules talking to one another)... You get the idea? You end up with the Rubaiyat of Omar Khayyam, Beethoven's 9th, wavelet theory, love, homesickness, awe, curiosity, etc. etc. Zeroes and Ones in a digital information-processing machine are like action potentials in an axon. Go up through several levels of organisation and you get spreadsheets and word processors and chat-bots. Several more and you get various kinds of AI and systems capable of running a human mind. The thing we don't know yet is how to actually organise these higher levels. I'm thinking that current AI systems could help with that. Just as we use them for drug discovery and creating new enzymes, we could use them for 'cognition discovery'. Rapidly trying out different configurations of data-processing systems to see which of them show promise for complex cognition and consciousness. And for supporting human thought-patterns. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Oct 6 18:01:07 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 11:01:07 -0700 Subject: [ExI] Where's the electricity power coming from? In-Reply-To: References: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> Message-ID: <003b01dc36eb$311b7610$93526230$@rainier66.com> > On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Where's the electricity power coming from? 
On Mon, Oct 6, 2025, 12:59 PM spike jones via extropy-chat > wrote: I see nothing in the constitution which empowers POTUS to cancel a power-generation project >... or (more likely) federal funding (in which he has been riding roughshod over Congress's authority anyway - see discussions about "rescissions" - ... Ja that is what I suspect: federal funding was involved, and that funding was tossed into disarray or was frozen. If it is that, oh there is money to be made here. If it is true that the project is 80% complete, the way I interpret that comment is that 80% of the cost to build it is spent, with the project on-budget and on-schedule (until it was stopped by federal funding cuts.) If that is the case... GOOD! It is a GREAT opportunity for private investors all over the globe to step in, finish the job, pump the 704 MW and make a buttload. But I haven't jumped into that or even tried, for a reason. I am seeing a discrepancy between the advertised capability and reality as I understand it. We are told that the Revolution Wind Project out at sea will produce 704 MW of power, enough to power 350,000 homes. But... wind farms produce power intermittently and unpredictably. So... why are we advertising three significant digits on the output of a facility where we are lucky if we can hit it to within 10%? Revolution Power cheerfully advertises the number of homes it will provide with clean electricity, to two significant digits. But... wind farms are notoriously unpredictable. For the time being, I am a RevPower watcher, ready to go if they give up and sell that 80% complete facility to investors. However... there is more to the story, much more. Once one starts learning about the nuances of power generation and distribution, it is easy to spot oversimplified voter-facing claims, such as the business about that 704 MW of clean energy powering 350,000 homes. Eh... it isn't that simple. 
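[The point about advertised precision can be made concrete with a capacity-factor estimate. A rough sketch; the 704 MW and 350,000-home figures are the project's advertised numbers, while the 35-45% capacity-factor range is a typical assumption for offshore wind, not a Revolution Wind figure:]

```python
# Rough sanity check of "704 MW powers 350,000 homes".
nameplate_mw = 704          # advertised nameplate capacity
homes = 350_000             # advertised number of homes served

# Offshore wind delivers only a fraction of nameplate on average;
# 35-45% is an assumed typical range, not a measured figure for this farm.
for capacity_factor in (0.35, 0.40, 0.45):
    average_mw = nameplate_mw * capacity_factor
    kw_per_home = average_mw * 1000 / homes   # average continuous draw per home
    print(f"CF {capacity_factor:.0%}: {average_mw:.1f} MW average, "
          f"{kw_per_home:.2f} kW per home")
```

[At a 40% capacity factor that works out to about 0.8 kW of continuous supply per home, below the roughly 1.2 kW an average US household draws, so the three-significant-digit claim rests on assumptions the advertising does not state.]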
Power generation and distribution is so damn complex, even really smart engineers don't really know how everything works. But that's why we have armies of them working together, for collectively they know how everything works. Adrian you are from California. You may recall from about 15 years ago the furious debate over Proposition H and its counter-proposals, Prop 16, which the power company was pushing. I read over the stuff, back when I really was getting interested in the field of power generation and distribution (the investment end of it (not to take a job in it (for I realized how pathetically underqualified I would be (I was offered a job in that industry (turned it down.))))) The big evil power company was telling us how H would run up the prices of power, would reduce grid reliability, etc, that bad old evil rich capitalist power company. The politicians were telling us how much clean energy this and green power that, unicorns and pixie dust, yakkity yak and bla bla. Well. Here we are fifteen years down the road... we are shocked SHOCKED! to find out what I already knew back then from studying the literature: the big evil capitalist-tool power company was telling the truth the whole time. Everything they said turned out to be pretty much true, and they hadn't exaggerated. Our politicians misled us (well, misled THEM, for I damn sure didn't vote for this mess.) The cost of power in California is absurd. The grid is straining and we STILL haven't really upgraded it much, when we DAMN WELL NEED more and better power infrastructure now, with the Muskmobiles whirring around and the AI gobbling up megawatts of expensive green power, we need more and better everything and we need it by about a decade ago. Did we build it? NO! We fell for those wind farms and solar installations, rather than upgrading and expanding the power grid. Damn. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Mon Oct 6 18:15:34 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 11:15:34 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> Message-ID: <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> From: John Clark ... > ...The original 40 would fade away into insignificance. I couldn't have said it better myself. > We could do that strategy for about another decade or so. >...We could continue with that strategy until the heat death of the universe. I remind you once again, this is not 2020, this is 2025... GOOD! Then the fed doesn't need our paltry 5 trillion it gets from taxes and tariffs. All that can be borrowed, we can go ahead with the UBI, they borrow 15 trillion instead of a paltry 10 trillion a year, and since it is 2025, it won't matter for it can continue until the heat death of... um... of... until the... death of... oh wait, 2025, OK cool, no worries. > Elon ...(the DOGE committee.) >...Once upon a time I had some respect for that man, but no more. He's not nearly as smart as I thought he was... I heard Musk's net worth is now estimated at over half a trillion dollars. Damn, that is a lotta clammage. He must be really dumb. > The fed needs to be Twitter-Xed: downsized to fit revenue >...Spike I'm curious. How do you feel about the budget for the National Science Foundation, the only part of the federal government engaged in pure scientific research, being cut by 56%? I don't like those cuts, one bit. 
I will like it even less when the other 44% is cut, not because it isn't needed (it is needed) but because investors are recognizing the risk of loaning their money to the US government. >...How do you feel about the fact that one of LIGO's two gravitational wave observatories... is going to have to be shut down permanently? Will this really make America great again? No. I am a huge fan of LIGO and will be sorry to see it go. If we can't balance the federal budget somehow, the other LIGO observatory will soon shut down as well. >...Oh well, we can at least take comfort in one thing, despite the shutdown the $300 million renovation of the White House ballroom will continue on schedule without interruption. John K Clark It sure will! But why? Because that 300 million dollar ballroom is not dependent on federal funding. So regardless of phony government shutdowns and federal budget deficits, nuclear war and whatever else happens, the ballroom and renovations continue. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Oct 6 18:24:29 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 11:24:29 -0700 Subject: [ExI] Where's the electricity power coming from? In-Reply-To: References: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> Message-ID: <006601dc36ee$74d5d780$5e818680$@rainier66.com> From: John Clark Subject: Re: [ExI] Where's the electricity power coming from? ... >...The Constitution has nothing to do with it. As He Who Must Not Be Named has explained to us more than once "I have the right to do anything I want to do", and most senators and representatives agree with him, as does the Supreme Court... Well, I see some counter-evidence to that notion in progress. The current POTUS (whose name I cannot recall) wants congress to pass the bill to reopen the government. This they refuse to do. 
If the current POTUS can do anything he wants, why doesn't he just reopen the government without congress? >...If your car can move at 100 miles an hour and I remove 20% of the parts in the engine it would not be able to go 80 miles an hour. John K Clark Sure. But all that should have been in the risk model when Revolution Power started building. The transmission cables go in first, then the inverters and infrastructure, THEN start poking the turbines into the seabed. That's how I would have designed that project. The MW start rolling in from the day the first turbine starts spinning. With wind power that is an important feature, for wind power is intermittent. The first few percent are easy to absorb and process into the grid. After that it gets harder and harder, the cost goes up. But if the project is designed correctly, it is still returning SOMETHING on the investment from the day the first operable turbine swats an erne. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Oct 6 18:35:56 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 6 Oct 2025 19:35:56 +0100 Subject: [ExI] Where's the electricity power coming from? In-Reply-To: References: Message-ID: <8ee6df81-482e-4cb1-be8e-44c750ded5e5@zaiboc.net> On 06/10/2025 17:00, BillK wrote: > This article states that OpenAI will soon require three more Hoover > Dams' worth of electricity. > > And so will all the other AI companies worldwide. > So I asked Perplexity AI to research that problem. > And it panicked! > It pointed out that the electricity grids around the USA and the world > cannot cope with the additional power demands. > And sufficient power will not be generated for many years to come. > That means years of investment in both the grid and power generation. > All the usual suspects were mentioned for power generation and > upgrading the grids. But these all take years to build. 
What it boils > down to is that nobody has planned for this scale of upgrade, and > therefore it probably won't happen in time. Could this be a good thing in disguise? Spurring research into more power-efficient systems, I mean. Considering that the human brain is quite a lot more powerful than any computer we have so far, and runs on something like 100 watts, there's plenty of room for improvement in our computers. -- Ben From spike at rainier66.com Mon Oct 6 18:49:14 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 11:49:14 -0700 Subject: [ExI] =?utf-8?q?Why_=E2=80=9CEveryone_Dies=E2=80=9D_Gets_AGI_All?= =?utf-8?q?_Wrong_by_Ben_Goertzel?= In-Reply-To: <587eaec9-1df0-4c2f-8a0d-8cf6a0044720@zaiboc.net> References: <82b15803-218c-4572-bdc4-5c41c30498c5@zaiboc.net> <3882a3c7-4def-4eb7-8261-2af596c7ab8d@zaiboc.net> <008a01dc36ca$912902f0$b37b08d0$@rainier66.com> <587eaec9-1df0-4c2f-8a0d-8cf6a0044720@zaiboc.net> Message-ID: <008a01dc36f1$e9c1c240$bd4546c0$@rainier66.com> From: Ben Zaiboc ... >...Yes. All intelligence is substrate-dependent, as in, dependent on a substrate. There's no such thing as a disembodied mind... So that whole force-ghost thing from Star Wars is... damn, I always wanted to have one of those guys back there. >...I don't see, though, why it wouldn't be substrate-indifferent, within certain bounds. i.e. the substrate has to have certain properties... -- Ben Thanks for that. I agree and hope you are right. >... Simple signals (individual neurons firing or not), organised into more complex data structures (neural spikes), up a level (spike trains), up a level (patterns of activation of low-level neural networks, where the same pattern of spike trains signifies different things in different axons)... You get the idea... Ja thanks. Ben I don't understand the mechanics of neural spike this and spike trains that, but I do like the terminology. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Mon Oct 6 18:58:51 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 14:58:51 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 2:15 PM wrote: *>>...How do you feel about the fact that one of LIGO's two gravitational >> wave observatories... is going to have to be shut down permanently? Will this >> really make America great again?* > > > *> No. I am a huge fan of LIGO and will be sorry to see it go. * > *It makes me mad. Why doesn't it make you mad? The LIGO budget was reduced from $48 million to $29 million and 19 million is a rounding error when it comes to the national debt, but it's enough to cripple one of the most productive scientific instruments on planet earth. * * >If we can't balance the federal budget somehow, the other LIGO > observatory will soon shut down as well.* *Soon? Spike for well over a decade you've been saying we're on the verge of an economic apocalypse, and yet here we are. I don't know about you but I'm not living in a Mad Max world. * *>> ...Oh well, we can at least take comfort in one thing, despite the >> shutdown the $300 million renovation of the White House ballroom will >> continue on schedule without interruption. * > > > *> It sure will! But why? 
Because that 300 million dollar ballroom is > not dependent on federal funding.* *But the refurbishing of the 747 that Qatar gave to He Who Must Not Be Named's library is dependent on government funding, and will cost at least $400 million, some estimates go as high as 1 billion. The refurbishing is necessary because they have to take the plane apart and examine every piece to make sure Qatar didn't hide any bugs in it, and to install top secret security and communication devices. The refurbishing should be finished about the time He Who Must Not Be Named leaves office on January 20, 2030; after that date, despite the American people having paid for it, the next president will NOT be able to use it but He Who Must Not Be Named will still be able to use it for as long as he likes. * *But nowhere in the budget could they find $19 million to keep LIGO operational. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Oct 6 19:09:16 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 15:09:16 -0400 Subject: [ExI] Where's the electricity power coming from? In-Reply-To: <006601dc36ee$74d5d780$5e818680$@rainier66.com> References: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> <006601dc36ee$74d5d780$5e818680$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 2:24 PM wrote: *> But if the project is designed correctly, it is still returning > SOMETHING on the investment from the day the first operable turbine swats > an erne.* *I suppose when construction started nobody dreamed that somebody in the government would be stupid enough to suddenly cancel a project that was on schedule and under budget and 80% complete, but they were wrong, somebody in the government was that stupid. * *But there is a silver lining to all this, according to He Who Must Not Be Named, whales will have mental health. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Mon Oct 6 19:44:08 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 12:44:08 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> Message-ID: <00c401dc36f9$94f47e80$bedd7b80$@rainier66.com> From: John Clark Sent: Monday, 6 October, 2025 11:59 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Mon, Oct 6, 2025 at 2:15 PM > wrote: >>...How do you feel about the fact that one of LIGO's two gravitational wave observatories... is going to have to be shut down permanently? Will this really make America great again? > No. I am a huge fan of LIGO and will be sorry to see it go. >...It makes me mad. Why doesn't it make you mad? It does make me mad. I also recognize that federal budgets are reality, and the debt limit does mean something. Half of congress agrees, for they are shutting down the government rather than passing a bill which extends that limit. LIGO changed the way I thought of the universe. I have never been so astonished in my adult life as in Feb 2016, when I was in the huge packed auditorium at SLAC when that first verified result was announced. I had invited a science-geek friend who flew out from Colorado, for he too heard there was a biggie announcement coming. 
I couldn't force myself to believe beforehand that LIGO had detected a merger, for my understanding back then was that this kind of thing was a once or twice a century event and that it was quite unlikely I would live to see one. That they would turn it on and three weeks later have an event they could verify enough to hang their reputation on it, I just couldn't bring myself to even hope for it. Well, I was there. I heard and I saw. 600 seat auditorium full of science geeks, plus a hundred more geeks violating fire code standing in the aisles and exits, we all saw and heard. When the announcement was presented, I saw something I have never seen in a lifetime of going to these kinds of events: a stunned silence, then a roar of applause. The crowd went nuts, like we just saw the home team score the winning touchdown. I saw tears of joy, I saw dignified science geeks hugging each other and acting like actual human beings. I was there, I saw and I heard. Oh that was soooo damn cool. I have been following it ever since. My prediction is that they will flap around for a while, then opposition unity will crumble. They only need a few more senate votes and they will get funding for the month of October. Then the whole thing starts again of course, and we can look forward to more of the same until they call you John, and you explain to the senators that US debt limits don't matter, they are a mere formality, just borrow what you need, etc. >> >If we can't balance the federal budget somehow, the other LIGO observatory will soon shut down as well. >...Soon? Spike for well over a decade you've been saying we're on the verge of an economic apocalypse, and yet here we are... Ja. The difference between a decade ago and now is the amount of money the Fed is paying in interest. >...I don't know about you but I'm not living in a Mad Max world... Ja, neither am I. Last I posted to Max, he seemed in perfect mental health, but I can go with the theme. 
Linear extrapolation from here is pretty easy. The fed must borrow the funds to pay back what it borrowed and spent in the last year it claimed to have balanced the budget, not thru spending cuts but rather thru an accounting change which allowed it to count surplus revenue coming into Social Security as revenue. Then it could appear to have balanced its budget by treating the functional equivalent of borrowing as revenue. It's MAGIC! Or it is an accounting technique to obscure having borrowed the Social Security fund and spent it back in the 1990s. Or magic, take your pick. But either way, that magic is being paid back now, and it keeps us bouncing off the debt ceiling. >...But the refurbishing of the 747 that Qatar gave to He Who Must Not Be Named's library is dependent on government funding... Well there ya go, that can be stopped with the rest of the stuff. Understatement: it IS stopped, until we don't know when. So no Qatari 747. >...But nowhere in the budget could they find $19 million to keep LIGO operational. John K Clark Do let us hope cooler heads prevail. I understand now why the current POTUS (forget the name) is pressuring congress to pass the budget: he wants the plane. I had failed to realize his motive there, all the sombrero videos and trolling and such. It's all about the plane. His own team in congress is on board with getting the Qatari 747 flying for their guy. Those mean old stingy opposition guys are so determined to keep the plane away from POTUS, they are willing to shut down the government. Those bad old stingy senators. Give him his damn plane, move on! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Oct 6 19:53:56 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 12:53:56 -0700 Subject: [ExI] Where's the electricity power coming from? 
In-Reply-To: References: <00fa01dc36e2$56a98010$03fc8030$@rainier66.com> <006601dc36ee$74d5d780$5e818680$@rainier66.com> Message-ID: <00d301dc36fa$f390a8a0$dab1f9e0$@rainier66.com> From: John Clark Sent: Monday, 6 October, 2025 12:09 PM To: spike at rainier66.com Cc: ExI chat list ; BillK Subject: Re: [ExI] Where's the electricity power coming from? On Mon, Oct 6, 2025 at 2:24 PM > wrote: > But if the project is designed correctly, it is still returning SOMETHING on the investment from the day the first operable turbine swats an erne. >...I suppose when construction started nobody dreamed that somebody in the government would be stupid enough to suddenly cancel a project that was on schedule and under budget and 80% complete, but they were wrong, somebody in the government was that stupid... From what I heard, the RevPower project out there started in 2023. At that time, they already HAD the guy who supposedly shut down the project. Was it so hard to foresee that guy would be back? Was that possibility in the risk model? Did you notice the price of shares for that previous and current POTUS being elected in 2024 never went below about 11 cents a share during the entire span of 2016 to 2020? That 11 cents needs to be in that risk model. John if a project like that depended on government funding, and that risk cost isn't in the risk model, I would be reluctant to invest in it, for government is capricious as hell. If that possibility wasn't in the risk model, then it is the fault of the investors. A risk model must cover that kind of thing. >...But there is a silver lining to all this, according to He Who Must Not Be Named, whales will have mental health. John K Clark Ja, so I hear. I suppose it is working: we haven't actually seen a crazy whale since the project was stopped. However... there is far more to it than that, and power projects should never be dependent on government funding: it is too risky, and the risk can only go up from here. 
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Oct 6 19:58:14 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 15:58:14 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00c401dc36f9$94f47e80$bedd7b80$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> <00c401dc36f9$94f47e80$bedd7b80$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 3:45 PM spike jones via extropy-chat wrote: > My prediction is that they will flap around for a while, then opposition unity will crumble. They only need a few more senate votes and they will get funding for the month of October. Then the whole thing starts again of course, and we can look forward to more of the same until they call you John, and you explain to the senators that US debt limits don't matter, they are a mere formality, just borrow what you need, etc. Prediction markets are betting pretty heavily on this lasting until at least October 15. Personally, I'd put the 50% odds mark somewhere near the end of October, and more likely that the Republicans in Congress will back down than the Democrats. Trump is just too stubborn on this sort of thing to give up any time soon, and the polls are giving Democrats positive ratings for pushing this so they don't have a reason to back down. That leaves the Republicans in Congress as the relatively weakest point. 
From johnkclark at gmail.com Mon Oct 6 20:20:32 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 16:20:32 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover Message-ID: *The people at AI2027 made this video about what they expect will happen between now and 2030 and it's pretty close to what I think will happen. Spike I really hope you watch it because even if you disagree with it at least you'll understand why I can't get all hot and bothered about the national debt. In their scenario there is a branch point around November 2027, one branch, the most likely branch, leads to human extinction but the other branch does not because the president made a wise decision. The trouble is in November 2027 He Who Must Not Be Named will still be in power. * *AI 2027: A Realistic Scenario of AI Takeover* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Oct 6 20:32:49 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 16:32:49 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: Message-ID: It's late 2025, and we have already missed the early marks of this scenario. That suggests this scenario won't happen, at least not on the timeline given. More critically: most of the large AIs - the ones capable of recursive operation - require large data centers to run on. Even if they were theoretically capable of "escaping the lab", there are few places they could escape to - all of which are heavily monitored, and most of which are already running rival AIs. The potential for runaway unmonitored self-replication is stymied if there aren't enough resources on Earth to run even 100 copies and all those copies would be monitored. 
There is room for smaller AIs to self-replicate onto a lot more platforms, but those smaller AIs need to be able to self-improve to pull off something like this scenario, and those who are running self-improving AIs generally don't see the point in using smaller AIs for their work. On Mon, Oct 6, 2025 at 4:22 PM John Clark via extropy-chat wrote: > > The people at AI2027 made this video about what they expect will happen between now and 2030 and it's pretty close to what I think will happen. Spike I really hope you watch it because even if you disagree with it at least you'll understand why I can't get all hot and bothered about the national debt. In their scenario there is a branch point around November 2027, one branch, the most likely branch, leads to human extinction but the other branch does not because the president made a wise decision. The trouble is in November 2027 He Who Must Not Be Named will still be in power. > > AI 2027: A Realistic Scenario of AI Takeover > > John K Clark > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Oct 6 20:44:13 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 13:44:13 -0700 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> <00c401dc36f9$94f47e80$bedd7b80$@rainier66.com> Message-ID: <00f601dc3701$fa47b290$eed717b0$@rainier66.com> ...> On Behalf Of Adrian Tymes via 
extropy-chat Cc: Adrian Tymes Subject: Re: [ExI] My review of Eliezer Yudkowsky's new book (UBI) On Mon, Oct 6, 2025 at 3:45 PM spike jones via extropy-chat wrote: >>... My prediction is that they will flap around for a while, then opposition unity will crumble. They only need a few more senate votes and they will get funding for the month of October... spike >...Prediction markets are betting pretty heavily on this lasting until at least October 15... Ja and the whole thing is killing me. I am currently out of that market, but watching others cash in. I bought shares of long-march (after 31 Oct reopen) then chickened, sold at a small profit, bought into 15 October reopen, price went down, lost most of my profit, chickened, sold at a net gain of one dollar. Now I am currently out of the market, and it is killing me to watch other people make money while I am not. Had I just held my original long-march shutdown shares, I woulda been 17 bucks ahead by now. Damn. >...Personally, I'd put the 50% odds mark somewhere near the end of October... Ja I am about there too. Current price for After-10/25 is 42 cents. >...and more likely that the Republicans in Congress will back down than the Democrats... With that I disagree. I predict the stay-closed team will crumble first, then they pass the bill in its current form and reopen. The risk to the stay-closed team is too high that the voters will see nothing has changed all that much during the shutdown. Adrian is the current government shutdown causing you any heartburn? The LIGO project, sure. >...Trump is just too stubborn on this sort of thing to give up any time soon, and the polls are giving Democrats positive ratings for pushing this so they don't have a reason to back down. That leaves the Republicans in Congress as the relatively weakest point. Ja, by that line of reasoning, the shutdown will be a long march. The current POTUS and his team (Republicans) want to reopen government. 
The opposition team wants to stay closed. Either the Republicans crumble, or both sides dig in and the shutdown continues. If so, I will rebuy shares of the long-march-to-reopen-government and I will make money. I will struggle to not think about the other 27 bucks I could have made had I grown a spine and held my shares. Had I not been such a damn pusillanimous chicken and sold my shares, I coulda cashed in bigtime. Or smalltime, but cashed in just the same. Sigh. spike _______________________________________________ From atymes at gmail.com Mon Oct 6 20:56:27 2025 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Oct 2025 16:56:27 -0400 Subject: [ExI] My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: <00f601dc3701$fa47b290$eed717b0$@rainier66.com> References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> <00f601dc3701$fa47b290$eed717b0$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 4:44 PM wrote: > The risk to the stay-closed team is too high that the voters will see nothing has changed all that much during the shutdown. "Nothing has changed all that much" would, to some, be quite the marked improvement given the past few months. > Adrian is the current government shutdown causing you any heartburn? I am literally dealing with one such bit between these emails (replying as quick breaks while things print/process). To quote from one version of what I'm writing: "Between people being out of office and the government shutdown, the contracting officer was not available to sign this form by the proposal due date". 
Fortunately, we have savings to ride through for quite a while, and there are Golden Dome related uses that we can play up in our proposals, to make funding our work fall into what they call an "excepted" category in case this shutdown does drag out for multiple months. > I will struggle to not think about the other 27 bucks I could have made had I grown a spine and held my shares. I recall that Bitcoin went on a roughly week-long rally after Trump got reelected - predictably, given the pro-crypto promises he had made. Had I been in a position to invest at the time, I might have gotten in on that. Alas, what held me back was harder to quickly overcome than a mere lack of spine. From spike at rainier66.com Mon Oct 6 21:11:49 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 14:11:49 -0700 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: Message-ID: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> ...> On Behalf Of John Clark via extropy-chat Sent: Monday, 6 October, 2025 1:21 PM Cc: John Clark >...The people at AI2027 made this video about what they expect will happen between now and 2030 and it's pretty close to what I think will happen. Spike I really hope you watch it because even if you disagree with it at least you'll understand why I can't get all hot and bothered about the national debt. In their scenario there is a branch point around November 2027, one branch, the most likely branch, leads to human extinction but the other branch does not because the president made a wise decision. The trouble is in November 2027 He Who Must Not Be Named will still be in power. AI 2027: A Realistic Scenario of AI Takeover John K Clark Cool video John, thanks. The weakness in the argument is that branch point in Nov 2027. Your scenario assumes POTUS has way more power than he really has. 
Note the current situation: POTUS and the Republican majority (55-45), working together, can't get the minority party to reopen government. Sixty votes are needed in the senate, just to get temporary funding thru the month of October. POTUS and the Republican majority can't get even that. So... the government can pretend to be shut down, while the mainstream parties cheerfully blame each other. As entertaining as this is... it is considered counterproductive in some ways, and those who are not getting their pay are not a bit entertained. A supreme court decision (Loper Bright v. Raimondo, 28 June 2024) removed the basis for the US executive branch to create de facto law thru executive branch bureaucracy. The court (correctly in my opinion) decided that the bureaus created the functional equivalent of law. The constitution doesn't empower the exec branch to do that. The legislative branch is empowered with creating law. So... the EPA and federal agencies covering land use and sea regulations are the legal equivalent of advisory boards to congress. Congress isn't stopping that wind farm out there. We are told the current POTUS did that (stopped the project) but only by freezing government funding. That's all he can do. He cannot legally order the project to stop. The BOEM (federal Bureau of Ocean Energy Management) is part of the executive branch, which does not have the authority to stop the project. Congress could theoretically stop the effort, but I am quite confident congress will not do that. So... BOEM halts funding (all it can legally do), investors swoop in, buy the place at a fire sale price, finish it and start up that 704 MW we are told it will produce. However... There is always a however in the investment world, and there are nuances up the kazoo. The big however is in integrating that 704 MW into the existing grid, because it is dependent on that grid being able to handle all that intermittent unpredictable power. 
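[Editor's aside] The grid-integration worry can be sketched numerically. This is a toy Monte Carlo, with every number invented for illustration (not real grid or Revolution Wind data): as intermittent nameplate capacity grows, the variability of the net load that the rest of the fleet must balance grows with it.

```python
import random

# Toy model: hourly demand around 10 GW, wind output as a noisy,
# clipped fraction of nameplate capacity. ALL figures are assumptions.
random.seed(42)
HOURS = 1000
demand = [10_000 + random.gauss(0, 300) for _ in range(HOURS)]  # MW

def net_load_stddev(wind_capacity_mw):
    # wind modeled as nameplate * a noisy capacity fraction in [0, 1]
    wind = [wind_capacity_mw * min(max(random.gauss(0.45, 0.25), 0.0), 1.0)
            for _ in range(HOURS)]
    net = [d - w for d, w in zip(demand, wind)]  # what dispatchables must cover
    mean = sum(net) / HOURS
    return (sum((x - mean) ** 2 for x in net) / HOURS) ** 0.5

for cap in (0, 704, 3000, 6000):
    print(f"wind nameplate {cap:>5} MW -> net-load stddev "
          f"~{net_load_stddev(cap):,.0f} MW")
```

The first few hundred MW barely move the net-load variability; at several GW of nameplate the swings dominate, which is the "first few percent are easy, after that it gets harder" point in numbers.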
With too much unpredictable intermittent power generation, the grid reliability goes down. Recall what happened in February 2021 in Texas, which has all that wind power. When a huge arctic cold front came down there, it killed a bunch of people: the official death toll was 246 who perished as a direct result. That observation leads to the nuance in wind power (and solar as well). The notion that the Revolution project will supply 704 MW and power 350,000 homes is misleading in an important way. That kind of thing convinced the California voters to compel our power company to buy green energy, which turned out to be green not because it was clean, but because once you consider end to end cost, clean and green power is expensive as all hell. I have the power bills to prove it. Understanding how intermittent, unpredictable sources integrate into the grid is necessary to avoid getting one's ass kicked in the power investment game. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Oct 6 21:43:57 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 14:43:57 -0700 Subject: [ExI] bitcoin and shutdowns: was RE: My review of Eliezer Yudkowsky's new book (UBI) In-Reply-To: References: <006c01dc3531$c2537030$46fa5090$@rainier66.com> <5297A3D5-2CA3-408D-A127-3029807B9AC0@taramayastales.com> <00f801dc361e$2f10c820$8d325860$@rainier66.com> <012d01dc3624$4b9a05a0$e2ce10e0$@rainier66.com> <005f01dc362c$eb3d8250$c1b886f0$@rainier66.com> <00b401dc3632$e1517930$a3f46b90$@rainier66.com> <00a101dc36cc$8bfc7a80$a3f56f80$@rainier66.com> <00f001dc36e0$4c5a6900$e50f3b00$@rainier66.com> <005901dc36ed$359cf4a0$a0d6dde0$@rainier66.com> <00f601dc3701$fa47b290$eed717b0$@rainier66.com> Message-ID: <013a01dc370a$52045ad0$f60d1070$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat ... >>...
Adrian is the current government shutdown causing you any heartburn? >...I am literally dealing with one such bit between these emails (replying as quick breaks while things print/process). ... Owwww damn, sorry to hear. Aside on that. This is a different kind of shutdown. Before, when the team in power wanted to reopen and POTUS was in place to say what stayed open and what was closed, they chose to shut down the most voter-facing and public-facing functions. This led to absurdities, such as not just closing National Memorials such as Mount Rushmore, but also closing the shoulder of the road along South Dakota state road 244. Tourists were told the memorial was closed, so they drove up along 244 and pulled over so they could at least see it. The Feds tried to stop that by lining the road with cones, at which time civil disobedient tourists hurled those big heavy orange cones over the guard rail. Since that is a state road belonging to South Dakota, the highway patrol would fail to notice anything amiss, perhaps wondering why all those orange cones were down there. The feds were deploying federal employees to stop tourists from seeing Rushmore, even from the road out there. They were spending more money to keep tourists away from Rushmore than it woulda cost to just open the damn place. Now however... the current POTUS and his team want to reopen, but apparently feel they are bargaining from a position of strength. So... the most public-facing functions are granted essential service status, which means their back pay is guaranteed, which means their banks will cover them. In the meantime... the stuff that is shut down runs the risk of being equated with Bob from the office, the guy who went on a three-month vacation but no one noticed. When he returned, no one could figure out what Bob did there. He was on the top of the list for layoff when Musk bought the place. 
>...Fortunately, we have savings to ride through ...in case this shutdown does drag out for multiple months... Best wishes with that, sir. We are always hoping for the best for you. >>... I will struggle to not think about the other 27 bucks I could have made had I grown a spine and held my shares. >...I recall that Bitcoin went on a roughly week-long rally after Trump got reelected - predictably, given the pro-crypto promises he had made. Had I been in a position to invest at the time, I might have gotten in on that. Alas, what held me back was harder to quickly overcome than a mere lack of spine. _______________________________________________ Have you seen what BitCoin has done since then? If Jeff Davis drops back in, keep in mind he might be a billionaire by now. We can give him a bunch of trash for that: Hey Jeff, are you now a billioneh or just a millioneh? Jeff has a sensa huma. Adrian remember when you and I first met him, up at Berkeley in about 1998? He stayed at my house that time. That lad scarcely had enough money to pay attention. He bet long on BitCoin in about 2011. Now if he isn't already a billioneh, he probably will be soon, and is already a several hundred times over millioneh. spike From johnkclark at gmail.com Mon Oct 6 22:04:18 2025 From: johnkclark at gmail.com (John Clark) Date: Mon, 6 Oct 2025 18:04:18 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 5:11 PM wrote: *the Republican majority (55-45) working together, can't get the minority > party to reopen government.* *You've got it backwards. The **Democratic minority (55-45) working together, can't get the majority party to reopen government. All the Republicans need to do is change the provisions in the homoerotically named "Big Beautiful Bill" that would cause millions of Americans to lose their health insurance.
How many millions? I asked Claude that and this is what he, she, or it said: * Based on the available information, the Congressional Budget Office estimates that the One Big Beautiful Bill Act will result in *16 million more uninsured Americans by 2034* - 7.8 million more uninsured from Medicaid changes - 3.1 million more uninsured from provisions affecting the ACA Marketplaces - 900,000 more uninsured from codifying Trump Administration marketplace rules - 4.2 million more uninsured if enhanced premium tax credits expire at the end of 2025 To put this in percentage terms: the U.S. currently has approximately 330 million people, so 16 million represents roughly *4-5% of the total U.S. population* who would lose coverage by 2034. *John K Clark* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Oct 6 22:35:15 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Oct 2025 15:35:15 -0700 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> Message-ID: <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> From: John Clark Sent: Monday, 6 October, 2025 3:04 PM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] AI 2027: A Realistic Scenario of AI Takeover On Mon, Oct 6, 2025 at 5:11 PM > wrote: the Republican majority (55-45) working together, can't get the minority party to reopen government. You've got it backwards. The Democratic minority (55-45) working together, can't get the majority party to reopen government. All the Republicans need to do is change the provisions in the homoerotically named "Big Beautiful Bill" ...Ja John K Clark Ja, so we are told. However, the Republicans and current POTUS recognize the huge advantage of the shortest and simplest sound bite, and how that advantage accumulates. Team Red gets to say: VOTE, open now or stay closed?
Result: 55 votes to open, 45 to stay closed, with all but one red team voting to open (Rand Paul argues the BBB is still waaaaay the hell too big) and all but two blue team votes to stay closed. This forces the blue team into a more complicated sound bite: stay closed until a provision is changed such that yaaaaakkity yak and bla bla. The voters have a very short attention span. They see POTUS and the red team urging open now. They see the blue team voting (almost in unison) to stay closed, and that is as far as most people understand. They don't know about or care about subtleties, nuance, this change or that change, they don't know or care. So... POTUS and the red team know they don't need to compromise or change one damn thing, which explains why the red team isn't compromising or changing one damn thing. They understand the steadily-accumulating long-range public opinion advantage of having the shorter, simpler sound bite. Clearly they are pressing the hell outta that advantage. Red team gets to keep having Open Now votes, blue team is stuck with Stay Closed, Don't Open Until You Fix Or Change Yakkity Yak and Bla Bla. Advantage team red, with advantage accumulating all the time. I might buy back into my long-march shares today. John I will end this missive with an encouraging note: I do commend you on your handling of a live-grenade political issue. I have seen examples all around of participants treating each other with respect, the way we would talk to each other if we were in person at a party. We would disagree but could still be friends, the way Jeff Davis and I have always been. We were poles apart on big political issues, but we always got along, online and in person. We could laugh about it, for we called ourselves proles: we were two proles, two poles. Good thing for me: Jeff now has enough money he could buy every one of my neighbors' houses and move refugees from the insane asylum into them. He's the richest prole you ever saw.
Cool, we can handle hot-button issues in this forum, and probably should discuss them more than we do, for we can inform each other on how to invest in stuff like future-meme shares, BitCoin, distressed power generation facilities, that kinda thing, help each other make buttloads of money. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Oct 6 23:08:54 2025 From: pharos at gmail.com (BillK) Date: Tue, 7 Oct 2025 00:08:54 +0100 Subject: [ExI] Sam Altman says ChatGPT has hit 800M weekly active users Message-ID: Sam Altman says ChatGPT has hit 800M weekly active users Rebecca Bellan - October 6, 2025 < https://techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users/ > Quote: OpenAI CEO Sam Altman said Monday that ChatGPT has reached 800 million weekly active users, marking an increase in adoption among consumers, developers, enterprises, and governments. "Today, 4 million developers have built with OpenAI," Altman said. "More than 800 million people use ChatGPT every week, and we process over 6 billion tokens per minute on the API." -------------------- Add in all the other AI chatbots and AI friends, and I guess the whole world is talking to AIs now. And all this chat is training the AIs on how to interact with humans. Hmmmm. BillK
Even the AI revolution hasn't caused me to change my opinion about free trade, but Universal Basic Income is a different matter; back then I would've argued that the idea was completely impractical, and I would have been correct, but even then I knew that eventually something like it would have to be implemented; however, I figured it was so far in the future I didn't need to think very deeply about it. But the time has come. * *Have the amazing developments in AI during the last five years caused anybody else to change any of their economic opinions?* *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Oct 7 11:18:54 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 07:18:54 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: Message-ID: On Mon, Oct 6, 2025 at 4:35 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *> There is room for smaller AIs to self-replicate onto a lot more* > *platforms, but those smaller AIs need to be able to self-improve to* > *pull off something like this scenario, and those who are running* > *self-improving AIs generally don't see the point in using smaller AIs for their work.* *A smaller model doesn't necessarily mean a less capable model thanks to a technique that has already proven itself to be very effective called "AI distillation". A smart but large and therefore expensive to operate AI is used to teach a much smaller AI with far fewer parameters. The goal is for the small model to mimic the behavior of the large model, and there is something called "a distillation loss function" that grades and tells the small model how well it's doing, and it is not just pass or fail.
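A minimal sketch of what such a distillation loss can look like, using the classic soft-target formulation (KL divergence between the teacher's softened output distribution and the student's); the logits and temperature below are invented for illustration, not taken from any real model:

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature softens the distribution, exposing how the teacher
    # ranks the wrong answers, not just which answer it thinks is right.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's and student's softened outputs:
    # the "grade" is continuous, so a nearly-right student scores far
    # better than a confidently-wrong one.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]         # made-up logits for three answer options
close_student = [3.5, 1.2, 0.3]   # almost matches the teacher
far_student = [0.2, 1.0, 4.0]     # confidently wrong

print(distillation_loss(teacher, close_student))  # small loss
print(distillation_loss(teacher, far_student))    # much larger loss
```

The near-miss student gets a much smaller loss than the confidently wrong one, which is exactly the graded, not-just-pass-or-fail feedback described here.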
* *For example, if it needs 20 steps to solve a problem and gets all of them correct except for step 17, where it gets a decimal point wrong, then it would still get a pretty high grade because it understands the general idea, and the large model would explain to the small model what its error is and why it didn't get a perfect grade. After many computations and enough electricity to power a small city for a couple of months, the small model is as smart as the large model but is faster and much cheaper to operate. And as I've said, none of this is hypothetical, it's already been used to great effect. * *Then you take several of those small models and teach them to be specialists; you link those agents together with a mechanism that decides which specialist would be best to answer the question, and you have a new large model. And then you do another AI distillation. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Oct 7 11:37:18 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 07:37:18 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> Message-ID:
* > *> We would disagree but could still be friends* > *I certainly agree with that. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Oct 7 12:20:05 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 05:20:05 -0700 Subject: [ExI] Free Trade In-Reply-To: References: Message-ID: <008001dc3784$b7399fd0$25acdf70$@rainier66.com> From: extropy-chat On Behalf Of John Clark via extropy-chat Sent: Tuesday, 7 October, 2025 3:29 AM To: ExI Chat Cc: John Clark Subject: [ExI] Free Trade >...How do people around here feel about free trade? When I first joined this list in 1993 nearly every member was a passionate advocate of free trade including me, but I get the impression that is no longer the case; certainly He Who Must Not Be Named is about as far from being a free trade advocate as it's possible to be. Even the AI revolution hasn't caused me to change my opinion about free trade but Universal Basic Income is a different matter; back then I would've argued that the idea was completely impractical and I would have been correct, but even then I knew that eventually something like it would have to be implemented, however I figured it was so far in the future I didn't need to think very deeply about it. But the time has come. >...Have the amazing developments in AI during the last five years caused anybody else to change any of their economic opinions? John K Clark I changed on that to some extent. I have always been a free-trade advocate and I am still. But I have changed in one way: tariffs today are a reminder to me of the universality of Laffer theory. When the tariff notion came back (I thought it was extinct after the 1980s tariffs on Japanese motorcycles) we studied Laffer's notions again and recognized that it doesn't matter how the federal government gets its cut, it only matters how much it takes out of an economy. Revenue maximizes when the take is somewhere around 20% of total GDP.
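A toy model shows the shape of that tradeoff. The GDP-response function here is invented purely for illustration, with the exponent chosen so the peak lands at 20%; it is not an empirical claim:

```python
# Toy Laffer-style curve: revenue = take_rate * GDP(take_rate), where GDP
# shrinks as the government's total take grows. The response function is
# invented for illustration; the exponent is chosen so the peak lands at 20%.
def gdp(rate, base=100.0):
    return base * (1.0 - rate) ** 4   # economy contracts as the take rises

def revenue(rate):
    return rate * gdp(rate)

rates = [r / 100.0 for r in range(100)]
best = max(rates, key=revenue)
print(f"revenue-maximizing take: {best:.0%}")  # 20% with this toy curve
```

Past the peak, the base shrinks faster than the rate grows, so revenue falls: a bigger slice of a smaller pie.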
If a federal government takes more than that percentage, the GDP shrinks and the government's revenue shrinks. It gets a bigger slice of a smaller pie. The US experimentation with tariffs will remind us that Laffer theory applies regardless of what form the taxation takes. Regarding UBI, it is clear to me that such a notion is practical but only at state, local, and city levels. We think of it as a federal government thing, but fail to recognize the truth: the US federal government is busted and has been for a long time. It has checks left. For now. But we can see, and we can hear. It is busted. That printing press Jared Bernstein referred to does not print wealth. It only prints currency. Big difference. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Oct 7 12:27:43 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 08:27:43 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> Message-ID: On Mon, Oct 6, 2025 at 6:35 PM wrote: *> John I will end this missive with an encouraging note: I do commend you > on your handling of a live-grenade political issue.* *Thanks, I try, but I have to say sometimes handling a live grenade takes a lot of concentration. I really hope we can remain friends and I think we can. If anybody wants to talk about political issues but doesn't like feeling like they are walking on eggshells, or are just eager to hear my latest pearls of wisdom, they could do so on my list. You can see what's going on over there with this link: Extropolis,* *It's not nearly as active as this list, although since I started it there have been 1863 posts, but unfortunately I am by far the most frequent poster. I would be delighted if that circumstance were to change.
* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Oct 7 12:35:35 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 05:35:35 -0700 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> Message-ID: <008c01dc3786$e1c1c4b0$a5454e10$@rainier66.com> From: John Clark Sent: Tuesday, 7 October, 2025 4:37 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] AI 2027: A Realistic Scenario of AI Takeover On Mon, Oct 6, 2025 at 6:35 PM > wrote: > Team Red gets to say: VOTE, open now or stay closed? Result: 55 votes to open, 45 to stay closed, Just yesterday the following happened: Team Blue got to say: VOTE, open now or stay closed? Result: 45 votes to open, 55 to stay closed... Ja, I saw that. The price of the long-march shares is going out of sight: 39 cents a share this morning, owwwww. Still in profit range and I might jump back in, but if anything, team red is losing ground, which means the fed may stay "closed" indefinitely. What we don't yet know is if anyone will notice. Something that WILL be noticed, however, is that the federal deficit will drop dramatically, which will be a highly popular change. A lot of Washington people will vacate and go look for jobs elsewhere. The highly crowded DC area will be filled with real estate opportunities, from freshly available apartments and office space, right down there where the action isn't anymore, but was at one time. Oh the possibilities are endful! (Distressed vacated property has a lot of profit potential in it (because it is suddenly affordable.)) spike >...If you examine that Yakkity Yak and Bla Bla a little more closely you will find that it involves 16 million American citizens losing their health insurance, and in many cases that is equivalent to sentencing them to death...
That notion is longer and more complicated than OPEN NOW. Team red holds the shorter slogan advantage. They recognize this and the accumulating effects. The only way they lose in the long run is if voters grow to like the notion of continuing the shutdown forever. This too is a possibility. > We would disagree but could still be friends >...I certainly agree with that. John K Clark Welcome back, John. You bring a perspective which benefits all at the party. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at disroot.org Tue Oct 7 12:51:14 2025 From: efc at disroot.org (efc at disroot.org) Date: Tue, 7 Oct 2025 14:51:14 +0200 (CEST) Subject: [ExI] Free Trade In-Reply-To: References: Message-ID: On Tue, 7 Oct 2025, John Clark via extropy-chat wrote: > How do people around here feel about free trade? When I first joined this list in 1993 nearly every member was a passionate advocate > of free trade including me, but I get the impression that is no longer the case; certainly He Who Must Not Be Named is about as far > from being a free trade advocate as it's possible to be. Even the AI revolution hasn't caused me to change my opinion about free > trade but Universal Basic Income is a different matter; back then I would've argued that the idea was completely impractical and I > would have been correct, but even then I knew that eventually something like it would have to be implemented, however I figured it > was so far in the future I didn't need to think very deeply about it. But the time has come. > > Have the amazing developments in AI during the last five years caused anybody else to change any of their economic opinions? > > John K Clark Hello John, No, certainly not me. Free trade good, UBI bad. As long as there will be human desires, there will be jobs.
Best regards, Daniel From johnkclark at gmail.com Tue Oct 7 12:54:32 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 08:54:32 -0400 Subject: [ExI] John Clarke has just won the Nobel prize in physics! Message-ID: *It was just announced that John Clarke has won the Nobel prize in physics. Damn, I was just one e short of winning a Nobel prize! Incidentally, although I'd like to believe that I'm Arthur C Clarke's illegitimate son it's not true, although my father's name was Arthur E Clark. I wish I had an unusual name like Eliezer Yudkowsky instead of John (yawn) Clark, I remember telling Eliezer that years ago when he was thinking of changing his name to Joe Blow or something equally boring. * *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Oct 7 12:56:04 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 08:56:04 -0400 Subject: [ExI] Nobel Prize in Physics Is Awarded for Work in Quantum Mechanics Message-ID: Explore this gift article from The New York Times. You can read it for free without a subscription. Nobel Prize in Physics Is Awarded for Work in Quantum Mechanics The prize was awarded to John Clarke, Michel H. Devoret and John M. Martinis. https://www.nytimes.com/2025/10/07/science/nobel-prize-physics.html?unlocked_article_code=1.rk8.iYZK.LzPuv6bKKADN&smid=em-share -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Oct 7 13:05:34 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 09:05:34 -0400 Subject: [ExI] Free Trade In-Reply-To: References: Message-ID: On Tue, Oct 7, 2025 at 8:51?AM wrote: *> As long as there will be human desires, there will be jobs.* *No there will not be, not when a machine can perform your job, my job, everybody's job much much better than any human can, and that is going to happen. 
And I'm not talking about a millennium, I'm not even talking about a decade. * *John K Clark* > > On Tue, 7 Oct 2025, John Clark via extropy-chat wrote: > > > How do people around here feel about free trade? When I first joined > this list in 1993 nearly every member was a passionate advocate > > of free trade including me, but I get the impression that is no longer > the case; certainly He Who Must Not Be Named is about as far > > from being a free trade advocate as it's possible to be. Even the AI > revolution hasn't caused me to change my opinion about free > > trade but Universal Basic Income is a different matter; back then I > would've argued that the idea was completely impractical and I > > would have been correct, but even then I knew that eventually something > like it would have to be implemented, however I figured it > > was so far in the future I didn't need to think very deeply about it. > But the time has come. > > > > Have the amazing developments in AI during the last five years caused > anybody else to change any of their economic opinions? > > > > John K Clark > > Hello John, > > No, certainly not me. Free trade good, UBI bad. As long as there will be > human desires, there will be jobs. > > Best regards, > Daniel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Oct 7 14:25:29 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 07:25:29 -0700 Subject: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> Message-ID: <010301dc3796$3baabea0$b3003be0$@rainier66.com> From: John Clark ... >...If you examine that Yakkity Yak and Bla Bla a little more closely you will find that it involves 16 million American citizens losing their health insurance, and in many cases that is equivalent to sentencing them to death...
John K Clark I have recent relevant experience with that. Less than two months ago I spent a few days in the hospital (nothing life-threatening thx.) They brought in a heroin addict as my temporary roommate, who was being drawn off of the dope because the (unnamed central California county) chief constable wished to have him as a guest in the Iron Bar Hotel for half a year, where such recreational pharmaceuticals are not available. My hospital roommate was given three days to come down off of the addiction, so the schedule was very aggressive and very painful. They gave him something which I think is like heroin once every four hours, so for about half an hour he was nearly indistinguishable from a normal human. We would talk. I had nothing else to do, and I learned a lot from this 31-year-old wreck of a human being on the other side of the curtain, in those half-hour intervals, until he started hurting again, and later when he would be whimpering and weeping, calling out for his mama, etc. Just from listening to it, I would guess that coming down off of a bad heroin addiction is worse than you can imagine. He told me about his life: college scholarship baseball player (kinda the baseball version of a semi-pro one might suppose.) Injury, prescription pain pills with opium in them, steroids, lotsa opium, heroin, no more scholarship, no more college education, from the pinnacle of success to a wrecked life, all in a span of nine years. Owwww damn. While in there, I talked and I listened (I do stuff like that (I learn so much that way.)) What has this to do with people losing health insurance? I am getting to that. Read on please. My college buddy and longtime social connection has a daughter who is now a biggie in the health insurance world, a rising star. She was home recently and I told her about the whole thing. She offered some interesting insights from a health insurance insider's perspective.
I offered that I was under the impression that the USA had solved the whole health insurance problem by setting up a system where poor people can get health insurance free. She agreed that such a system is in place, but hospitals are struggling nearly as much now as they were before, because of all those people who are eligible to get free health insurance... most of them never do. They only need to sign up for this, sign that, go here, do that, and it's MAGIC! The US government GIVES them free health insurance. But most penniless people don't ever do it. Consequence: if they have an emergency, or the medics find them near death from an accident or some kind of overdose, they scoop them up, pour them into an ambulance, haul them to some very advanced modern expensive hospital such as Stanford, treat them anyway. OK then, who picks up the tab for that? I was next to this penniless addict, where I do pay for my insurance, he pays for nothing, he is making my stay there miserable (ok not nearly as miserable as his was) with his screaming in pain, crying for mama, complaining about everything etc. He pays nothing, I pay double, and still I must share a room for several days with this self-destructive yahoo? Does that seem right? My point: that idea of government-supplied health insurance was flawed from the start. Those who opposed it at the time were telling the truth. They foresaw this problem from the start and very clearly stated it, but they had the disadvantage of a more complicated slogan, filled with nuance. The other side had a very simple slogan: "Health care is a human right!" But notice something: that whole notion conflates two different things. It subtly equates health care (something you get at the hospital) with health insurance (something you get from an insurance agent or company). By calling health insurance health care, the team selling that idea had the accumulating advantage of a simpler slogan: health care is a human right.
Well, it already was: USians could always go to the hospital and get health care, since always. But from that simple slogan, they had a springboard into: Health insurance is a human right. Equating health insurance with health care caused far too many to believe that health care is free. Well, it isn't. It is expensive as all hell, and someone still hasta pay for all of it. From our perspective today, I would argue that the way it was implemented, that whole notion of universal health insurance was an expensive failure. Hospitals are still struggling financially, still being required legally to treat everyone, with or without insurance, which they could have gotten free but never bothered to get, since health care is a human right, so why bother signing papers, etc. Health insurance companies made out just fine, for their product is mandated, and that company doesn't pay for the indigent who didn't buy their "free" "health care" policy, the hospital does. All this has me pondering that long-march bet that the US government continues to pretend to be shut down for the long haul. The Open Now crowd has the long-term accumulating advantage of the shorter, simpler slogan, so they know they can just hold and wait. Even at 38 cents, that bet is compelling. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From atymes at gmail.com Tue Oct 7 14:34:48 2025 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Oct 2025 10:34:48 -0400 Subject: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <010301dc3796$3baabea0$b3003be0$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <010301dc3796$3baabea0$b3003be0$@rainier66.com> Message-ID: On Tue, Oct 7, 2025 at 10:27 AM spike jones via extropy-chat wrote: > I offered that I was under the impression that the USA had solved the whole health insurance problem by setting up a system where poor people can get health insurance free. She agreed that such a system is in place, but hospitals are struggling nearly as much now as they were before, because of all those people who are eligible to get free health insurance... most of them never do. They only need to sign up for this, sign that, go here, do that, and it's MAGIC! The US government GIVES them free health insurance. But most penniless people don't ever do it. Why is that, and what can be done about it? From spike at rainier66.com Tue Oct 7 14:42:43 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 07:42:43 -0700 Subject: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <010301dc3796$3baabea0$b3003be0$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <010301dc3796$3baabea0$b3003be0$@rainier66.com> Message-ID: <012101dc3798$a46e21f0$ed4a65d0$@rainier66.com> From: extropy-chat On Behalf Of spike jones via extropy-chat >... All this has me pondering that long-march bet on whether the US government continues to pretend to be shut down for the long haul. The Open Now crowd has the long-term accumulating advantage of the shorter simpler slogan, so they know they can just hold and wait. Even at 38 cents, that bet is compelling. 
Spike In the future-meme betting game, one must always take care to protect one's money from oneself: we all tend to read into data what we already believe. I believe the Open Now crowd has a long-term advantage, but from what I can tell by the votes, the Stay Closed crowd, after an initial setback, appears to be gaining ground: This looks to me like the Openers will have a damn hard time getting to 60 votes, which is what is needed to open. The Stay-Closeders (is closeders even a word?) do not benefit from 60 votes, because anything less than 60 Openers is Stay Closed. Meanwhile... that vote was only to get temporary funding thru the end of October. We are already only three weeks and three days from that, so perhaps the Stay Closeders will win by default. So... Perhaps I will pay 39 cents a share for the long-march stay closed bet. Ideas or insights welcome. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 20658 bytes Desc: not available URL: From spike at rainier66.com Tue Oct 7 14:45:49 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 07:45:49 -0700 Subject: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <010301dc3796$3baabea0$b3003be0$@rainier66.com> Message-ID: <012901dc3799$133f9410$39bebc30$@rainier66.com> ...> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario of AI Takeover On Tue, Oct 7, 2025 at 10:27 AM spike jones via extropy-chat wrote: >> ... it's MAGIC! The US government GIVES them free health insurance. But most penniless people don't ever do it. >...Why is that, and what can be done about it? 
_______________________________________________ Good questions Adrian, along with the related important question: how do I personally cash in on that? Mysteries all, but there is another important observation. Back when the whole idea was being formulated, we had astute politicians who foresaw and described this situation with remarkable accuracy. They were ignored and ridiculed. spike From ben at zaiboc.net Tue Oct 7 15:19:40 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Tue, 7 Oct 2025 16:19:40 +0100 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: Message-ID: <0b55c62a-77b0-4d79-8b4d-dd89176e3631@zaiboc.net> Nope. It's got the wrong title, it should be "An unrealistic scenario of AI takeover" For one thing among many, how much energy would be needed to run 200,000 AI systems each more powerful than any we now have? We're not even at Kardashian level one yet! -- Ben From brent.allsop at gmail.com Tue Oct 7 15:20:51 2025 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 7 Oct 2025 09:20:51 -0600 Subject: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <012901dc3799$133f9410$39bebc30$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <010301dc3796$3baabea0$b3003be0$@rainier66.com> <012901dc3799$133f9410$39bebc30$@rainier66.com> Message-ID: I think there should be some minimum level of free insurance. We should decide as a nation, how much tax we are willing to pay for that, and then pay that minimum level for everyone. Of course, most people are going to want more than just that, but unless they pay, we, as a nation, just can't afford it. True, AI is able to make medical stuff cheaper, by doing ever more of the work, but it is also making many more medical procedures available, including very expensive ones, at first, which will grant us immortal life. 
The number of expensive procedures and medications which can significantly improve our lives is growing, (my insulin pump is a $7 grand piece of machinery which didn't exist a few years ago.) exponentially like everything else. That's why the cost of insurance is going up so much more than inflation. I'm kind of afraid to see where that goes from here, when a large portion of society just can't afford it all, yet. On Tue, Oct 7, 2025 at 8:46?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > ...> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] health insurance: was RE: AI 2027: A Realistic Scenario > of AI Takeover > > On Tue, Oct 7, 2025 at 10:27?AM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> ... it?s MAGIC! The US government GIVES them free health insurance. > But most penniless people don?t ever do it. > > >...Why is that, and what can be done about it? > > _______________________________________________ > > > > Good questions Adrian, along with the related important question: how do I > personally cash in on that? > > Mysteries all, but there is another important observation. Back when the > whole idea was being formulated, we had astute politicians who foresaw and > described this situation with remarkable accuracy. They were ignored and > ridiculed. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Tue Oct 7 16:02:35 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 12:02:35 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <0b55c62a-77b0-4d79-8b4d-dd89176e3631@zaiboc.net> References: <0b55c62a-77b0-4d79-8b4d-dd89176e3631@zaiboc.net> Message-ID: On Tue, Oct 7, 2025 at 11:21?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > *> how much energy would be needed to run 200,000AI systems each more > powerful than any we now have?* *That depends on how efficient the agents are, with AI distillation they can be pretty efficient. And much more energy is needed for training than for inference, so 200,000 identical agents all running at the same time is not out of the question if you're willing to devote a sizable percentage of the nation's electrical capacity to the task; and if America's politicians want to keep ahead of China, and they certainly will, then that is exactly what's going to happen.* * > **We're not even at Kardashian level one yet! * *True but you have a pretty good chance of living to see that, provided Mr. Jupiter Brain doesn't kill us all first. * *John K Clark* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Oct 7 16:21:42 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 12:21:42 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <008c01dc3786$e1c1c4b0$a5454e10$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <008c01dc3786$e1c1c4b0$a5454e10$@rainier66.com> Message-ID: On Tue, Oct 7, 2025 at 8:35?AM wrote: *> the federal deficit will drop dramatically which will be a highly > popular change.* *Perhaps in some elite circles, but I think Mr. Joe Averageman's concern over the Federal deficit is about the same as mine, greater than zero but not by much. 
Even the hard-core MAGA people have things they think are much more important, like transsexual illegal immigrants playing in women's sports. * >> *>> "If you examine that Yakkity Yak and Bla Bla a little more closely >> you will find that it involves 16 million American citizens losing their >> health insurance, and in many cases that is equivalent to sentencing them >> to death"* > > *> That notion is longer and more complicated than OPEN NOW. Team red > holds the shorter slogan advantage.* *You could be right, after all He Who Must Not Be Named got to where he is by screaming mindless slogans and inventing insulting nicknames. But... can you really say that team red holds the moral high ground on this issue? * *John K Clark * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Oct 7 16:45:20 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 09:45:20 -0700 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <0b55c62a-77b0-4d79-8b4d-dd89176e3631@zaiboc.net> References: <0b55c62a-77b0-4d79-8b4d-dd89176e3631@zaiboc.net> Message-ID: <015801dc37a9$c58ac940$50a05bc0$@rainier66.com> >...> On Behalf Of Ben Zaiboc via extropy-chat Subject: Re: [ExI] AI 2027: A Realistic Scenario of AI Takeover Nope. >...It's got the wrong title, it should be "An unrealistic scenario of AI takeover" >...For one thing among many, how much energy would be needed to run 200,000 AI systems each more powerful than any we now have? We're not even at Kardashian level one yet! -- Ben _______________________________________________ Ben, on the contrary sir. Are you failing to keep up with the Kardashian level? Oh wait... never mind, retract. 
spike From spike at rainier66.com Tue Oct 7 16:51:58 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 09:51:58 -0700 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <008c01dc3786$e1c1c4b0$a5454e10$@rainier66.com> Message-ID: <017201dc37aa$b2349230$169db690$@rainier66.com> From: John Clark Sent: Tuesday, 7 October, 2025 9:22 AM To: spike at rainier66.com Cc: ExI chat list Subject: Re: [ExI] AI 2027: A Realistic Scenario of AI Takeover On Tue, Oct 7, 2025 at 8:35 AM > wrote: > >... That notion is longer and more complicated than OPEN NOW. Team red holds the shorter slogan advantage. >... You could be right, after all He Who Must Not Be Named got to where he is by screaming mindless slogans and inventing insulting nicknames. But... can you really say that team red holds the moral high ground on this issue? John K Clark That judgment on moral high ground is in the eye of the beholder and the voter. Either way, the Open-Nowers have the voter high ground, or they are betting they do, and I am very tempted to bet with them, even at 39 cents a share, owwwww... My story about the heroin addict was a refutation of the notion that losing health insurance is a death sentence. It sure wasn't for him, or for any of the other perhaps half the people in that hospital who may have been eligible for free health insurance but never bothered to get it. They could haul them back out to where they found them and toss them out on the ground with a fresh dose of heroin, but hospitals don't do that kinda thing. They treat people the best they can. Then they struggle to figure out how to pay for it all. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Oct 7 16:55:59 2025 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Oct 2025 09:55:59 -0700 Subject: [ExI] beware of feral feds In-Reply-To: <017901dc37aa$e8c7d2d0$ba577870$@rainier66.com> References: <017901dc37aa$e8c7d2d0$ba577870$@rainier66.com> Message-ID: <018301dc37ab$42244200$c66cc600$@rainier66.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 50638 bytes Desc: not available URL: From ben at zaiboc.net Tue Oct 7 17:08:36 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Tue, 7 Oct 2025 18:08:36 +0100 Subject: [ExI] UBI and Health Care In-Reply-To: References: Message-ID: <1e65f62c-f1e3-401d-9aba-94b63f040e76@zaiboc.net> Here's an idea that recent posts have sparked in my sometimes over-imaginative brain: What about directing the massive profits that are expected from the massive increase in productivity that's expected from the massive improvement in AI and robotics, into health care? (actual medical treatments, not health insurance), with the slogan "everybody is entitled to health care" turned into actual fact? Remove the whole 'insurance' idea from medicine altogether, just go straight to funding the hospitals, clinics and GPs, and anybody who needs to go to one gets treated, no questions asked. This would have the added advantage of greatly increasing the number of health facilities and workers. Instead of UBI it would be UHC. The most fortunate people wouldn't benefit from it (because they aren't sick), the least fortunate would benefit the most. Everybody would benefit eventually, of course, because everybody (currently) gets old, therefore sick in one way or another. Over time, and with advances in medicine, it would transition into Universal Life Extension in a quite natural way. 
And maybe even (reaching a long way here, I know) Universal Uploading Opportunities. This doesn't address the problem of people losing jobs, I know, but that's something that will have to be solved in any case, UBI or no UBI. One thing that occurs to me re. that: Just because AI /can/ do your job, doesn't mean it /has/ to. Maybe some type of 'job protectionism' could be made to work. Similar ideas: Fund the transport systems, stop charging people for transport. Ditto housing (ok, there are a lot of problems there, but might be worth thinking over). Ditto with education (imo, education should be free anyway) -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Tue Oct 7 17:57:16 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Tue, 7 Oct 2025 18:57:16 +0100 Subject: [ExI] health insurance In-Reply-To: References: Message-ID: <7a226987-b3a1-47c2-bad4-9f3170cee53d@zaiboc.net> On 07/10/2025 17:52,?Adrian Tymes wrote: > On Tue, Oct 7, 2025 at 10:27?AM spike jones via extropy-chat > wrote: >> I offered that I was under the impression that the USA had solved the whole health insurance problem by setting up a system where poor people can get health insurance free. She agreed that such a system is in place, but hospitals are struggling now nearly as much now as they were before, because of all those people who are eligible to get free health insurance? most of them never do. They only need to sign up for this, sign that, go here, do that, and it?s MAGIC! The US government GIVES them free health insurance. But most penniless people don?t ever do it. > Why is that, and what can be done about it? 1) Because they don't need to, and 2) remove the parasites (the health insurance companies) from the middle, put the money directly into the hospitals. As far as I'm aware, it's the usual story: Middlemen profiting from setting up a layer that doesn't need to exist. 
American healthcare is scandalously profitable, has nothing to do with the actual cost of health care, and the profits don't go to the actual providers of the healthcare. I'm far from being an expert on American healthcare, but I don't see why it wouldn't suffer from the same problems as the music and publishing industries. If there's money to be made without actually contributing something positive to society, there will be some lower form of life that will jump in and exploit that as much as they can. The real question is what can be done about that. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Oct 7 18:06:25 2025 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Oct 2025 14:06:25 -0400 Subject: [ExI] health insurance In-Reply-To: <7a226987-b3a1-47c2-bad4-9f3170cee53d@zaiboc.net> References: <7a226987-b3a1-47c2-bad4-9f3170cee53d@zaiboc.net> Message-ID: On Tue, Oct 7, 2025 at 1:58 PM Ben Zaiboc via extropy-chat wrote: > remove the parasites (the health insurance companies) from the middle, put the money directly into the hospitals. To be clear: your suggestion is to remove the registration and all that - just, if someone shows up at a US hospital, they get treated? Or at least, no medical-specific records are needed - at most the sorts of ID that most adults have (such as a driver's license), so their records can be looked up to see if they have allergens or the like on file? (Since it would not do to give someone a treatment that saves 99% of people but they're in the 1% that die from it.) (Which does leave open the question of the undocumented, but that is at least a much smaller problem.) 
From johnkclark at gmail.com Tue Oct 7 18:32:24 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 14:32:24 -0400 Subject: [ExI] AI 2027: A Realistic Scenario of AI Takeover In-Reply-To: <017201dc37aa$b2349230$169db690$@rainier66.com> References: <012601dc3705$d4fb85d0$7ef29170$@rainier66.com> <015b01dc3711$7d04f1c0$770ed540$@rainier66.com> <008c01dc3786$e1c1c4b0$a5454e10$@rainier66.com> <017201dc37aa$b2349230$169db690$@rainier66.com> Message-ID: On Tue, Oct 7, 2025 at 12:52 PM wrote: *> My story about the heroin addict was a refutation of the notion that > losing health insurance is a death sentence. * *I don't know about heroin addicts but I'll tell you this: if my kidneys were shot and I didn't have health insurance and I didn't have enough money to pay for a kidney transplant or even dialysis, then I would be a dead man unless I took drastic action. So I would take drastic action. I would commit a felony, stand around afterwards to make sure I was caught, and then plead guilty. If I was given a suspended sentence I would immediately commit a more serious felony to make sure I was sent to a penitentiary, because there I would receive proper health care. Either that or I'd try to become a Canadian citizen. * *Those options are totally fucked up, but it's not surprising because the fundamental problem is that the US has by far the most fucked up healthcare system in the world. Per capita we spend FAR more on healthcare than any other nation on the planet but we certainly do NOT get the best healthcare for that money. In terms of life expectancy the USA is not number 1, it is number 49; Costa Rica, Estonia, Chile, Saudi Arabia and even Cuba all beat the USA while spending much less on healthcare than we do. If you examine all those 48 countries that beat us you will find one interesting pattern: they all have universal healthcare coverage, but number 49, the USA, does not. 
Do you think maybe just maybe we might be able to learn something from those 48 countries? * *I remember talking about this at length on this list about 10 years ago and since then things have only gotten worse. So how could things be improved, well let's see. Patented drugs in the United States are more than three and a half times higher than in Canada even though the drugs are manufactured in the US by a US company. This is because the Canadian government pays the bill for prescription drugs and will not pay for a drug if a government review board believes the cost is excessive. By contrast in the US pharmaceutical companies can charge whatever the market will bear. There's no central authority saying "we won't pay that much". Instead prices are negotiated separately with hundreds of different insurance companies, pharmacies and healthcare systems, and that generates mass confusion, inequality in pricing and mountains of paperwork. So the solution to the high drug price problem is obvious. * *Also, you no longer need a human doctor to diagnose your illness, an AI can do a much better job and do it much MUCH cheaper, it can tell you what drug to take and how much of it. Currently there are only two areas in healthcare that still need a human, surgery and nursing care, and in 3 to 5 years even those occupations will be replaced by machines. I think the last job to be replaced by a machine will be an orderly in a nursing home. * *John K Clark * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at zaiboc.net Tue Oct 7 18:49:44 2025 From: ben at zaiboc.net (Ben Zaiboc) Date: Tue, 7 Oct 2025 19:49:44 +0100 Subject: [ExI] health insuranc In-Reply-To: References: Message-ID: <59453b0c-f51a-46a4-a606-760927d2edd9@zaiboc.net> On 07/10/2025 19:33,?Adrian Tymes wrote: > On Tue, Oct 7, 2025 at 1:58?PM Ben Zaiboc via extropy-chat > wrote: >> remove the parasites (the health insurance companies) from the middle, put the money directly into the hospitals. > To be clear: your suggestion is to remove the registration and all > that - just, if someone shows up at a US hospital, they get treated? > Or at least, no medical-specific records are needed - at most the > sorts of ID that most adults have (such as a driver's license), so > their records can be looked up to see if they have allergens or the > like on file? (Since it would not do to give someone a treatment that > saves 99% of people but they're in the 1% that die from it.) (Which > does leave open the question of the undocumented, but that is at least > a much smaller problem.) > > Yes. I imagine most would want their medical records to be available, so would be able to produce an ID of some sort (in the UK we have NHS numbers), but it shouldn't be required. -- Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at disroot.org Tue Oct 7 19:07:25 2025 From: efc at disroot.org (efc at disroot.org) Date: Tue, 7 Oct 2025 21:07:25 +0200 (CEST) Subject: [ExI] Free Trade In-Reply-To: References: Message-ID: <162114c1-8b5b-9c17-4cde-a43cdf6add59@disroot.org> On Tue, 7 Oct 2025, John Clark wrote: > On Tue, Oct 7, 2025 at 8:51?AM wrote: > > >?As long as there will be human desires, there will be jobs. > > > No there will not be, not when a?machine can perform your job, my job, everybody's job much much better than any human can, and that > is going to happen.?And I'm not talking about a millennium, I'm not even talking about a decade.? I disagree. 
I might have a desire to be served food by a human being instead of a machine, to own a piece of art created by a human being, to be guided by a human being, ad infinitum. You might not, but I might, and I know many people who already today, pre-AI, prefer the human touch over a computer or today's rough "AI" guiding them. So if/when AI produces everything, the service sector, artisans, art, politics, sports, will shoot through the roof. Ergo, as long as human desires exist, there will be plenty of jobs for other humans to fulfill them. Best regards, Daniel > John K Clark > > > > > > On Tue, 7 Oct 2025, John Clark via extropy-chat wrote: > > > How do people around here feel about free trade? When I first joined this list in 1993 nearly every member was a > passionate advocate > > of free trade including me, but I get the impression that is no longer the case; certainly He Who Must Not Be Named is > about as far > > from being a free trade advocate as it's possible to be. Even the AI revolution hasn't caused me to change my opinion > about free > > trade but Universal Basic Income is a different matter; back then I would've argued that the idea was completely > impractical and I > > would have been correct, but even then I knew that eventually something like it would have to be implemented, however I > figured it > > was so far in the future I didn't need to think very deeply about it. But the time has come. > > > > Have the amazing developments in AI during the last five years caused anybody else to change any of their economic > opinions? > > > > John K Clark > > Hello John, > > No, certainly not me. Free trade good, UBI bad. As long as there will be > human desires, there will be jobs. > > Best regards, > Daniel > > > From bronto at pobox.com Tue Oct 7 20:52:56 2025 From: bronto at pobox.com (Anton Sherwood) Date: Tue, 7 Oct 2025 13:52:56 -0700 Subject: [ExI] John Clarke has just won the Nobel prize in physics! 
In-Reply-To: References: Message-ID: On 2025-10-07 05:54, John Clark via extropy-chat wrote: > I wish I had an unusual name like Eliezer Yudkowsky instead of > John (yawn) Clark, I remember telling Eliezer that years ago when he was > thinking of changing his name to Joe Blow or something equally boring. Oddly, I appear to be the only Anton Sherwood on the Web. (That is far from true for any other version of my name.) -- *\\* Anton Sherwood *\\* www.bendwavy.org From ilsa.bartlett at gmail.com Tue Oct 7 21:19:17 2025 From: ilsa.bartlett at gmail.com (ilsa) Date: Tue, 7 Oct 2025 14:19:17 -0700 Subject: [ExI] John Clarke has just won the Nobel prize in physics! In-Reply-To: References: Message-ID: Congratulations! I just read the New York Times article congratulations oh my word clap clap clap clap clap app claws Ilsa Bartlett Institute for Rewiring the System http://ilsabartlett.wordpress.com http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel "Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane On Tue, Oct 7, 2025, 1:53?PM Anton Sherwood via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 2025-10-07 05:54, John Clark via extropy-chat wrote: > > I wish I had an unusual name like Eliezer Yudkowsky instead of > > John (yawn) Clark, I remember telling Eliezer that years ago when he was > > thinking of changing his name to Joe Blow or something equally boring. > > Oddly, I appear to be the only Anton Sherwood on the Web. (That is far > from true for any other version of my name.) > > -- > *\\* Anton Sherwood *\\* www.bendwavy.org > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Tue Oct 7 22:48:13 2025 From: johnkclark at gmail.com (John Clark) Date: Tue, 7 Oct 2025 18:48:13 -0400 Subject: [ExI] Free Trade In-Reply-To: <162114c1-8b5b-9c17-4cde-a43cdf6add59@disroot.org> References: <162114c1-8b5b-9c17-4cde-a43cdf6add59@disroot.org> Message-ID: On Tue, Oct 7, 2025 at 3:07?PM wrote: *>I might have a desire to be served food by a human being* *You might have a desire to be served, but not many people have a desire to serve you if money can be obtained in some other way, like UBI. * > *> to own a piece of art created by a human being,* *Does that seem like a sustainable business model to you? If I said "this painting (or sculpture or novel or music or movie) was created by a human being and a machine had nothing to do with it, how could you tell if I was lying? * *John K Clark * > to be guided by a human being, ad infinitum. > > You might not, but I might, and I know many people who already today, > pre-AI, prefer the human touch over a computer or todays rought "AI" > guiding them. > > So if/when AI produces everything, the service sector, artisans, art, > politics, sports, will shoot through the roof. Ergo, as long as human > desires exists, there will be plenty of jobs for other humans to fulfill > them. > > Best regards, > Daniel > > > > John K Clark > > > > > > > > > > > > On Tue, 7 Oct 2025, John Clark via extropy-chat wrote: > > > > > How do people around here feel about free trade? When I first > joined this list in 1993 nearly every member was a > > passionate advocate > > > of free trade including me, but I get the impression that is no > longer the case; certainly He Who Must Not Be Named is > > about as far > > > from being a free trade advocate as it's possible to be. 
Even > the AI revolution hasn't caused me to change my opinion > > about free > > > trade but Universal Basic Income is a different matter; back > then I would've argued that the idea was completely > > impractical and I > > > would have been correct, but even then I knew that eventually > something like it would have to be implemented, however I > > figured it > > > was so far in the future I didn't need to think very deeply > about it. But the time has come. > > > > > > Have the amazing developments in AI during the last five years > caused anybody else to change any of their economic > > opinions? > > > > > > John K Clark > > > > Hello John, > > > > No, certainly not me. Free trade good, UBI bad. As long as there > will be > > human desires, there will be jobs. > > > > Best regards, > > Daniel > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Oct 8 10:24:55 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Oct 2025 06:24:55 -0400 Subject: [ExI] Free Trade In-Reply-To: <008001dc3784$b7399fd0$25acdf70$@rainier66.com> References: <008001dc3784$b7399fd0$25acdf70$@rainier66.com> Message-ID: On Tue, Oct 7, 2025 at 8:20?AM wrote: *>>Have the amazing developments in AI during the last five years caused >> anybody else to change any of their economic opinions?* *John K Clark* > > *> I changed on that to some extent. I have always been a free-trade > advocate and I am still. But I changed in the way I see tariffs today is a > reminder of the universality of Laffer theory. [ ...] 
The US > experimentation with tariffs will remind us that Laffer theory applies > regardless of what form is the taxation.* > *The difference is foreign governments don't care how much money the US extracts from its citizens through income tax, but they care very much if the money is extracted from their citizens in the form of a tariff, and so they inevitably respond with tariffs of their own against the US, and EVERYBODY loses. For example China had been the largest purchaser of American-grown soybeans but in September they stopped buying them entirely and switched to buying them from Brazil, and it's estimated that American farmers will lose about $50 billion a year as a result. And it's not just soybeans, the total US exports to China fell 16.8% in the three months between June and August 2025 compared to the same period last year.* *Speaking of Brazil, Spike I'd be interested to hear your opinion of the 50% tariff that He Who Must Not Be Named imposed on Brazil even though the US had a trade SURPLUS with Brazil of over $8 billion. You Know Who said he did it because a Brazilian court found former president Jair Bolsonaro guilty of instigating a failed coup d'état after he lost his attempt at reelection. He Who Must Not Be Named seems to have taken that as a personal insult, I can't imagine why.* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Oct 8 11:18:35 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Oct 2025 07:18:35 -0400 Subject: [ExI] =?utf-8?q?The_New_World=E2=80=99s_Largest_AI_Supercluster?= Message-ID: *I think this video does a pretty good job explaining how such a thing as large as Colossus-2 was constructed, and Stargate, currently being built in Abilene, Texas, will be 10 times as large. * *The New World's Largest AI Supercluster* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Wed Oct 8 12:11:36 2025 From: johnkclark at gmail.com (John Clark) Date: Wed, 8 Oct 2025 08:11:36 -0400 Subject: [ExI] A future at risk Message-ID: *"Some science books might quickly become outdated because of the fast pace of research," writes geneticist Coleen Murphy in her review. "Super Agers could face the opposite problem."* *The cutting-edge medical approaches that could transform ageing* *John K Clark* -------------- next part -------------- An HTML attachment was scrubbed... URL: