From pharos at gmail.com  Tue Jul 2 15:43:03 2024
From: pharos at gmail.com (BillK)
Date: Tue, 2 Jul 2024 16:43:03 +0100
Subject: [ExI] Silicon Valley Rushes Toward AI Warfare
Message-ID:

Silicon Valley Rushes Toward Automated Warfare That Deeply Incorporates AI.
We can't let those who will profit most from its unbridled application
make the rules for how AI should be used.

By William D. Hartung, TomDispatch, Published June 25, 2024
<https://truthout.org/articles/silicon-valley-rushes-toward-automated-warfare-that-deeply-incorporates-ai/>

Quotes:
The reasons for this headlong rush include a misplaced faith in
"miracle weapons," but above all else, this surge of support for
emerging military technologies is driven by the ultimate rationale of
the military-industrial complex: vast sums of money to be made.
-------
AI is coming, and its impact on our lives, whether in war or peace, is
likely to stagger the imagination. In that context, one thing is
clear: we can't afford to let the people and companies that will
profit most from its unbridled application have the upper hand in
making the rules for how it should be used.
------------------

BillK

From foozler83 at gmail.com  Tue Jul 2 15:56:07 2024
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Tue, 2 Jul 2024 10:56:07 -0500
Subject: [ExI] Silicon Valley Rushes Toward AI Warfare
In-Reply-To:
References:
Message-ID:

The trap: trying to cut the Pentagon's budget is interpreted as being
weak on defense, and who wants to be labeled with that? Impossible
situation.

bill w

On Tue, Jul 2, 2024 at 10:45 AM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> [snip]

From avant at sollegro.com  Tue Jul 2 19:20:56 2024
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 02 Jul 2024 12:20:56 -0700
Subject: [ExI] Maybe space exploration will be a task for AI humanoids
In-Reply-To:
References: <16581dcec67002a1195abd101b79d69d@sollegro.com>
Message-ID: <0c504d0e58b6464fe73c422d16617292@sollegro.com>

On 2024-06-28 05:05, Adrian Tymes via extropy-chat wrote:
> On Fri, Jun 28, 2024 at 12:19 AM Stuart LaForge via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>
>> The obstacle with this idea is much more political than
>> technological because, for example, we could probably have
>> genetically-engineered mice well-adapted to life in space habitats
>> within a few years, if it was a scientific priority.
> > The "political" issue is that this approach would only benefit > entities that we specifically breed for this venture. While it would certainly benefit the transhumans that are bred, . > 1) Those entities are not us. We ourselves do not directly benefit. > The apparent means of making sure we do benefit amount to slavery. Not slavery, but instead a well-paid service sold at market value. If the Homo sapiens radiodurans, wants to live planet-side and find some market niche or collect Universal Basic Income with the rest of us, then that is up to them. > 2) Just because they could, would they? What motivation would these > entities have to go along with the objective they were made for? > "Because they were made for it" / "because it's their destiny" does > not suffice. Their parents would probably be the ones to foot the bill for their genetic augmentation to make them radiation-resistant so there is that, but you are right. The best one can do is incentivize them taking jobs in space by offering higher wages relative to the same jobs planet-side. So an space-barista should make several times what an earthbound barista would make. Owing to the radiation, weightlessness, and other health hazards of outer space. Similar to the way that deep-sea welders make more than land based welders. > Add those two together, and obsess about the technological path to the > exclusion (and it would be an exclusion) of trying to solve those > problems, and it's a foreseeable disaster. Most likely, the organisms > would be abandoned and die off entirely, with the engineering behind > it scrapped, meaning it wouldn't benefit anyone. You keep thinking in terms of a centralized top-down science project where these variant humans are grown in some government lab at tax-payer expense. That is not what I am talking about. I am talking about an agoragenics approach in which sets of genes coding for traits and the adaptations they code for are provided to consumers by private "designer baby" labs using AI and CRISPR and then subsequently selected for by the labor market. Kind of like a stock market for genes in the human gene pool. The only say the government would have is regulatory which is not that different from to what we have now. So if the government wants people to see better in the dark, then it could subsidize the cost of implanting those genes into your designer baby. > I regard this as more of a planning and logistical issue than a > political one. > >> In any case, if we stay here on Earth, we are guaranteed to go >> extinct. But if we could colonize off world, the we could survive >> indefinitely. Even if a bunch of brave astronauts have to die die >> young >> to give their children a future. > > If you assume that a bunch of astronauts will die young to achieve > this goal, the result that you will get is a bunch of astronauts dying > young, and not much else. I am not trying to be grim here. I am simply extrapolating the trajectory of many technological adaptations. The early automobiles had no seat belts and the early airplanes did not have seats that could be used as flotation devices. Both automobiles and airplanes contributed to many deaths in the early days and relatively fewer deaths today. As space travel matures it will become safer, but there is a limit to how safe you can make surfing a chemical explosion in a tin can, so some element of risk will always remain. And perhaps as a society we have a little too obsessed with safety. 
> There are far more likely-to-succeed paths that probably do not
> involve a bunch of astronauts dying young. Forget the disasterbation
> fantasy - which misleads you into thinking the main problem is a lack
> of political will - and honestly look at what's needed to establish
> extraterrestrial colonies.

Well, I might be able to help a little with that. A few years ago, I
discovered what might be the most efficient way to put water into
orbit as a byproduct of trying to find a cheaper space-launch
1st-stage mechanism. I call it a hydraulic gravity cannon. I built a
small-scale prototype, which worked as expected, but I didn't patent
it because I realized patents are best for small consumer gadgets and
not big large-scale devices. If you are interested, then I can explain
to you how it works.

> The real problem is far more about getting the effort to pay back
> those who are currently alive. They will not donate for your
> children's future, but they can be gotten to invest if they think
> they will see a positive financial return on their investment within
> several years. There are ways to make that return happen that fast.

I see what you are saying. Maybe the most immediate value of spending
time in space is that no government has jurisdiction there. So maybe
women could get abortions in orbit to avoid prosecution? ;) But your
point of financially incentivizing space colonization is well taken.
It is not an easy question. I will think more on this and get back to
you.

Stuart LaForge

From atymes at gmail.com  Tue Jul 2 21:00:38 2024
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 2 Jul 2024 17:00:38 -0400
Subject: [ExI] Maybe space exploration will be a task for AI humanoids
In-Reply-To: <0c504d0e58b6464fe73c422d16617292@sollegro.com>
References: <16581dcec67002a1195abd101b79d69d@sollegro.com>
 <0c504d0e58b6464fe73c422d16617292@sollegro.com>
Message-ID:

On Tue, Jul 2, 2024, 3:22 PM Stuart LaForge via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

> Not slavery, but instead a well-paid service sold at market value. If
> Homo sapiens radiodurans wants to live planet-side and find some
> market niche or collect Universal Basic Income with the rest of us,
> then that is up to them.

Slavery. They would be ill adapted to live on Earth and they would
know it. They would not have a practical choice due to the design of
their bodies, which they had no say in. There would be little
perceived need, or incentive, to pay them in anything but life
support.

> The best one can do is incentivize them to take jobs in space by
> offering higher wages relative to the same jobs planet-side.

And prices where they live would be higher to match. But if they are
the only ones up there, why would they care about our money?

> Similar to the way that deep-sea welders make more than land-based
> welders.

Not even remotely similar. Any land-based welder can train up and
become sea-based; that's where they come from. Also, the two types are
part of the same species and civilization; they can expect to be in
many of the same situations, and interact in many ways that those
stuck in orbit will never interact with those who live their whole
lives on the ground.

> You keep thinking in terms of a centralized top-down science project
> where these variant humans are grown in some government lab at
> tax-payer expense.

No, what I am talking about includes purely private approaches too.
Company towns were usually not official organs of the government.
> I am talking about an agoragenics approach in which sets of genes
> coding for traits and the adaptations they code for are provided to
> consumers by private "designer baby" labs using AI and CRISPR and
> then subsequently selected for by the labor market.

Babies designed to live in orbit do not participate in the same labor
market as the rest of humanity. Thus, it is meaningless to speak of
"the" labor market when referring to them.

> A few years ago, I discovered what might be the most efficient way to
> put water into orbit as a byproduct of trying to find a cheaper
> space-launch 1st-stage mechanism. I call it a hydraulic gravity
> cannon. I built a small-scale prototype, which worked as expected,
> but I didn't patent it because I realized patents are best for small
> consumer gadgets and not big large-scale devices. If you are
> interested, then I can explain to you how it works.

I already know, at least well enough. I also know that, unless you
give it a turnable or mobile mount, which you probably won't out of an
effort to save money, it will only be able to reach one particular
orbit, which will probably not be an orbit with a space station or
colony unless you build it after that station or colony already exists
- in which case, it will have to be built and set up assuming there
will not be a cannon such as yours to supply it, unless you (or your
employer) set up the station or colony too.

> Maybe the most immediate value of spending time in space is that no
> government has jurisdiction there. So maybe women could get abortions
> in orbit to avoid prosecution? ;)

You are blatantly thinking only of what would be of value to Americans
and ignoring the rest of the world. Don't, if you wish to be taken
seriously. Besides, that is only an issue in parts of the USA. Getting
to other parts, like California, will always be far cheaper - unless
you suppose there will be a nationwide ban, in which case, getting to
Canada or Mexico will always be far cheaper.

> But your point of financially incentivizing space colonization is
> well taken. It is not an easy question. I will think more on this and
> get back to you.

Please do. The economics are a less solved problem than the
technology.

From postmowoods at gmail.com  Wed Jul 3 00:46:04 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 2 Jul 2024 18:46:04 -0600
Subject: [ExI] The Singularity is Nearer
In-Reply-To:
References:
Message-ID:

So I finished the book, and I felt like there wasn't a lot of new
stuff in it. It felt TO ME like yet another victory lap for Ray...
see, I was right about this, I was right about that. This stuff is
progressing so fast now that a fair amount of the AI information felt
out of date, which is simply impossible for a book to avoid right now.
But even in the areas where he could have pushed forward, it felt like
he didn't. It was mercifully less than half the length of The
Singularity is Near, but contained nowhere near the high
information-to-noise ratio of TSIN. Now, that may be because I keep up
on this stuff both here and on YouTube... but I really think Ray could
have done more with this update than he did.
As an example, while he reviewed his previous prediction about solar
panels and batteries getting cheaper and cheaper, which is obviously
correct for anyone watching, he neglected to point out that about 80%
of the global supply chain for solar panels and 60%+ of the supply
chain for batteries is completely locked down by the Chinese, creating
a potential pinch point for progress along those lines should the
Chinese get desperate enough to start another trade war. That's the
kind of insight that I think is useful, but I didn't hear a single
point in the presentation that talked about any geographically based
problem related to singularity issues. He also doesn't talk about
emergence nearly enough imho, though he didn't say zero on the
subject.

I'm thinking the only answer is to write my own books. LOL. But I'm
not a recognized expert AT ALL in any of this stuff. I still think I
could have written a more interesting book updating TSIN. To be fair,
TSIN is probably my all-time favorite book, and it would have been
VERY difficult to come even close to that kind of sweeping victory.

-Kelly

On Tue, Jun 25, 2024 at 3:39 AM Giulio Prisco via extropy-chat wrote:
>
> Ray Kurzweil's new book "The Singularity is Nearer" is out! I'll do a
> couple of days of almost total immersion in the book.

From pharos at gmail.com  Wed Jul 3 18:54:43 2024
From: pharos at gmail.com (BillK)
Date: Wed, 3 Jul 2024 19:54:43 +0100
Subject: [ExI] Beamed Propulsion Could Push Us to Proxima Centauri?
Message-ID:

Posted on July 3, 2024 by Andy Tomaswick
A Concentrated Beam of Particles and Photons Could Push Us to Proxima
Centauri
<https://www.universetoday.com/167668/a-concentrated-beam-of-particles-and-photons-could-push-us-to-proxima-centauri/>

Quote:
Recently, optics research has developed a way of combining particle
and laser beams that all but eliminates diffraction and beam spreading
when both are used simultaneously. This would allow a beamed
propulsion system to continue concentrating its beam on exactly the
right place without slowly losing its pushing force as the probe gets
further away.
-----------------

This article struck me as being very speculative. I could see a couple
of possible problems immediately. So I asked a couple of AIs to
discuss it. They surprised me by appearing to understand the issues
and then even proposing solutions. The full discussion was really
interesting. :)
(No wonder people are happy to discuss their relationship difficulties
with therapy AIs)!

BillK

Here are some extracts from the Gemma 2 AI -

Challenges:
Energy Requirements: Generating a beam powerful enough to accelerate a
spacecraft to such speeds would require an immense amount of energy.
Focusing the Beam: Maintaining a focused beam over interstellar
distances would be incredibly challenging due to atmospheric
distortion and the vast distances involved.
Sail Technology: Developing a sail material that is both lightweight
and durable enough to withstand the intense forces of the beam is a
major technological hurdle.

You're absolutely right. Focusing a beam over interstellar distances
onto a tiny, fast-moving object with a response time of years is an
incredibly daunting challenge.

Challenges:
Atmospheric Distortion: Earth's atmosphere distorts light and would
significantly scatter a laser beam, making it difficult to maintain
focus. This would require a space-based laser array, adding to the
complexity and cost.
Beam Divergence: Even in space, laser beams naturally spread out over
long distances. Keeping the beam focused on a tiny target moving at
relativistic speeds would require extremely precise aiming and
potentially adaptive optics to compensate for any deviations.
Target Tracking: Accurately tracking the tiny spacecraft over
interstellar distances and adjusting the beam accordingly would be a
monumental task.
------------------
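[To get a rough sense of scale for the beam-divergence challenge
above, here is a minimal sketch of the diffraction limit for an
ordinary laser. The 10 m aperture and 1064 nm wavelength are
illustrative assumptions, not figures from the article; it is exactly
this spreading that the combined particle-photon beam claims to
avoid.]

# Illustrative assumptions, not figures from the article:
wavelength = 1.064e-6   # m, a common laser wavelength (Nd:YAG)
aperture = 10.0         # m, hypothetical emitter aperture diameter
ly = 9.461e15           # metres per light-year
distance = 4.24 * ly    # m, approximate distance to Proxima Centauri

# Diffraction-limited half-angle divergence (Rayleigh criterion)
theta = 1.22 * wavelength / aperture   # ~1.3e-7 radians

# Beam radius at Proxima, small-angle approximation
spot_radius = theta * distance         # metres

print(f"divergence: {theta:.2e} rad")
print(f"beam radius at Proxima: {spot_radius / 1e9:.1f} million km")

[A beam roughly five million kilometres wide at the target delivers
almost no thrust to a metre-scale sail, which is why eliminating
diffraction matters so much here.]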
From brent.allsop at gmail.com  Thu Jul 4 01:16:31 2024
From: brent.allsop at gmail.com (Brent Allsop)
Date: Wed, 3 Jul 2024 19:16:31 -0600
Subject: [ExI] Beamed Propulsion Could Push Us to Proxima Centauri?
In-Reply-To:
References:
Message-ID:

Exciting!
So when do we get some interstellar probes started?

On Wed, Jul 3, 2024 at 12:56 PM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> [snip]

From pharos at gmail.com  Thu Jul 4 13:00:19 2024
From: pharos at gmail.com (BillK)
Date: Thu, 4 Jul 2024 14:00:19 +0100
Subject: [ExI] Beamed Propulsion Could Push Us to Proxima Centauri?
In-Reply-To:
References:
Message-ID:

On Thu, 4 Jul 2024 at 02:19, Brent Allsop via extropy-chat wrote:
> Exciting!
> So when do we get some interstellar probes started?

Gemma 2 AI says 50 to 100 years is the optimistic estimate. Could well
be longer.
First, you need to develop a revolutionary propulsion system. Present
rockets and laser beam systems are not good enough, especially if you
want the probe to loiter around studying the Proxima system rather
than just streaking past at high speed.
Then you need better shielding for protection on the long journey, a
power source to keep the probe alive for decades, or even centuries,
and better scientific instruments and an AI controller to enable the
probe to obtain data and send it back to Earth. The 4-year light-speed
transmission time from Earth means that the probe must be completely
autonomous.
All this will require much funding and years of development support.
But then developments after the Singularity may drastically change all
our estimates. :)

BillK

From hkeithhenson at gmail.com  Thu Jul 4 17:09:24 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 4 Jul 2024 10:09:24 -0700
Subject: [ExI] Beamed Propulsion Could Push Us to Proxima Centauri?
In-Reply-To:
References:
Message-ID:

On Thu, Jul 4, 2024 at 6:03 AM BillK via extropy-chat wrote:
>
> Gemma 2 AI says 50 to 100 years is the optimistic estimate.
> Could well be longer.
> snip
>
> But then developments after the Singularity may drastically change
> all our estimates. :)

As you say.

Keith

From hkeithhenson at gmail.com  Thu Jul 4 18:51:07 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 4 Jul 2024 11:51:07 -0700
Subject: [ExI] Fwd: Funny If Tom Clancy was writing the timeline ...
In-Reply-To:
References:
Message-ID:

This is not politics but humor.  Keith

--------

Chuck McManis wrote:

SCENE: Oval Office, the White House. Six justices are ushered in.

BIDEN: I'm sure you are wondering why I invited you here today.

JUSTICE THOMAS: Invited?! I was getting on a jet for Barbados before
your thugs told me I had to be here.

BIDEN: Oh yeah, sorry about that. Well, to get on with it. This is an
Executive Order that I have signed which notes that my Article II
constitutional powers include defending the Constitution from all
threats, foreign ... and domestic.

[Justices look back and forth at each other quizzically]

BIDEN: In your recent opinion on Trump v. United States, you described
a President killing a US citizen to protect the United States as a
protected official action.

I have here, two documents.
The first orders the CIA to take each of you to an undisclosed
location and to eliminate you. The second is the paperwork to effect
your resignation, effective immediately, from the Supreme Court.

Today ... is your last day as a Supreme Court Justice. You get to
choose how that goes. You can resign, or you can die. I am okay with
either choice but I understand that your families might not be.

If you sign the resignation, I will accept it and you will be free to
go. If you choose not to sign it, the Secret Service here will escort
you to Joint Base Andrews where a jet is currently on standby.

If people ask why you chose to resign today, I suggest you stick with
"I have received credible death threats." Any attempt to make a "big
deal" about this will be seen as a threat to our Constitution and our
Democracy.

We have already discussed how I intend to deal with those threats.
Okay, that is what I had to say. Who wants to sign and who wants to
take a trip with the CIA?

[All Justices speaking at once, claims of "this is an outrage!" and
"you can't get away with this!"]

BIDEN: [holding up his hand to silence the discussion] This is the
power that YOU gave me and that I am going to take away from any
FUTURE President. The White House Counsel here can show you a brief of
how this would be argued if charges were ever filed. We expect that to
be unlikely. We have also prepared a Pardon for myself for all Federal
Crimes I may have committed not covered by Trump v. United States.
Thanks to you, you can't do anything about it.

So what's it gonna be? Retirement Service or Memorial service?

[Fade to black]

From atymes at gmail.com  Thu Jul 4 18:59:47 2024
From: atymes at gmail.com (Adrian Tymes)
Date: Thu, 4 Jul 2024 14:59:47 -0400
Subject: [ExI] Fwd: Funny If Tom Clancy was writing the timeline ...
In-Reply-To:
References:
Message-ID:

This is not the first forum I have seen pretty much this in since the
decision became public, and in the majority of them, the suggestion
has been serious.

On Thu, Jul 4, 2024, 2:53 PM Keith Henson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> [snip]
From postmowoods at gmail.com  Fri Jul 5 17:09:25 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Fri, 5 Jul 2024 11:09:25 -0600
Subject: [ExI] Bots and AI are now half of internet traffic
In-Reply-To:
References: <770919a1-61c6-eb2e-0602-096075f92dcd@swisscows.email>
 <8d2a2495-d874-6501-1216-cff1903f897f@swisscows.email>
Message-ID:

Why do they always want to connect on WhatsApp??

-Kelly

On Sun, May 5, 2024 at 2:11 PM BillK via extropy-chat wrote:
>
> On Sat, 4 May 2024 at 18:57, efc--- via extropy-chat wrote:
> >
> > I would love to see the economics of spam. I mean it increases and
> > continues so obviously there must be some profit in it. I'm very
> > curious about the figures.
> >
> > Best regards, Daniel
>
> I asked around........
> BillK
>
> Explain how spam is profitable -
>
> Llama-3-70b-Inst-FW
>
> The infamous spam industry! While many people view spam as a
> nuisance, it's a lucrative business for those who engage in it.
> Here's how spam can be profitable:
>
> **Low costs, high volume**: Spamming is a low-cost operation.
> Spammers can send massive amounts of emails or messages using
> automated tools and botnets, which are networks of compromised
> computers. This allows them to reach a large audience at a minimal
> cost.
>
> **Monetizing clicks and conversions**: Spammers make money through
> affiliate marketing, where they earn a commission for each click or
> conversion (e.g., sale, download, or sign-up) generated from their
> spam campaigns. They partner with companies that offer products or
> services, and then promote these offers through spam emails or
> messages.
>
> **Phishing and malware**: Spammers can also profit from phishing
> scams, where they trick victims into revealing sensitive information
> like login credentials, credit card numbers, or personal data. This
> information can be sold on the dark web or used for malicious
> activities like identity theft.
> Additionally, spammers can distribute malware, such as ransomware or
> Trojans, which can be used to extort money or steal sensitive data.
>
> **Lead generation**: Spammers can collect leads (contact information)
> through spam campaigns and sell them to companies or individuals who
> want to target specific audiences. These leads can be used for
> legitimate marketing purposes or for further spamming.
>
> **Selling products or services**: Spammers can promote their own
> products or services, such as fake antivirus software, miracle weight
> loss pills, or fake online degrees. They often use deceptive tactics
> to convince victims to buy their products or services.
>
> **Botnet rental**: Spammers can rent out their botnets to other
> spammers or cybercriminals, allowing them to use the compromised
> computers to send spam or conduct malicious activities.
>
> **Click fraud**: Spammers can engage in click fraud, where they
> generate fake clicks on online ads to earn money from pay-per-click
> advertising models. This can be done using automated tools or by
> manipulating users into clicking on ads.
>
> **Data brokering**: Spammers can collect and sell data on
> individuals, such as their browsing habits, interests, or personal
> information. This data can be used for targeted advertising or other
> malicious activities.
>
> **Cryptocurrency and money laundering**: Spammers can use
> cryptocurrencies like Bitcoin to launder money or facilitate illegal
> transactions. This makes it difficult for law enforcement to track
> and trace their activities.
>
> It's essential to note that spamming is illegal and unethical, and it
> can cause harm to individuals and businesses. Law enforcement
> agencies and online platforms are constantly working to combat spam
> and cybercrime. However, the anonymity of the internet and the ease
> of sending spam messages make it a challenging problem to eradicate.
> -------------------------------

From pharos at gmail.com  Fri Jul 5 17:55:37 2024
From: pharos at gmail.com (BillK)
Date: Fri, 5 Jul 2024 18:55:37 +0100
Subject: [ExI] Bots and AI are now half of internet traffic
In-Reply-To:
References: <770919a1-61c6-eb2e-0602-096075f92dcd@swisscows.email>
 <8d2a2495-d874-6501-1216-cff1903f897f@swisscows.email>
Message-ID:

On Fri, 5 Jul 2024 at 18:12, Kelly Anderson via extropy-chat wrote:
>
> Why do they always want to connect on WhatsApp??
>
> -Kelly

I've got no idea. I don't use WhatsApp.
But Gemma 2 AI has some suggestions. :)
BillK

Gemma 2 -
Spambots often target WhatsApp for several reasons:

Large user base: WhatsApp boasts over 2 billion users globally, making
it a prime target for reaching a wide audience.
Direct messaging: WhatsApp allows for direct, personal communication,
which spambots can exploit to appear more legitimate and build trust.
End-to-end encryption: While encryption protects user privacy, it also
makes it harder for platforms to detect and block spam messages.
Limited spam detection: Compared to email or social media platforms,
WhatsApp has less sophisticated spam detection mechanisms.
Group chats: Spambots can infiltrate group chats and spread their
messages to a larger audience quickly.
Multimedia sharing: WhatsApp allows for sharing images, videos, and
audio files, which spambots can use to spread malware or phishing
links.
-----------------------
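[The "low costs, high volume" economics quoted earlier in this thread
is easy to make concrete with a toy expected-value sketch. Every
figure below is an illustrative assumption, not data from the
discussion.]

# Back-of-the-envelope spam economics; all numbers are
# illustrative assumptions, not real measurements.
messages_sent = 10_000_000
cost_per_msg = 0.00001     # $ per message via a rented botnet
response_rate = 0.00001    # 1 recipient in 100,000 converts
revenue_per_hit = 50.0     # $ per successful sale or scam

cost = messages_sent * cost_per_msg
revenue = messages_sent * response_rate * revenue_per_hit
profit = revenue - cost
print(f"cost ${cost:,.0f}, revenue ${revenue:,.0f}, profit ${profit:,.0f}")

[Even with a one-in-100,000 response rate, the sender clears a few
thousand dollars per run, which is one reason the volume keeps
growing.]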
From giulio at gmail.com  Sat Jul 6 07:36:29 2024
From: giulio at gmail.com (Giulio Prisco)
Date: Sat, 6 Jul 2024 09:36:29 +0200
Subject: [ExI] Terasem Colloquium TOMORROW: futurism, cosmism, extropy, e/acc
Message-ID:

TOMORROW Sunday July 7 at 10am-1pm ET (7am-10am PT, 4pm-7pm CET)
Riccardo Campa, Ben Goertzel, Robin Hanson, James Pethokoukis & Max
More will explore old and new radically optimistic futurisms including
Italian futurism, Russian and modern cosmism, extropy, and e/acc. YOU
ARE INVITED!
https://www.turingchurch.com/p/terasem-colloquium-july-7-2024

From stefano.vaj at gmail.com  Sat Jul 6 13:33:02 2024
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sat, 6 Jul 2024 15:33:02 +0200
Subject: [ExI] Terasem Colloquium
Message-ID:

I understand that the Colloquium which, given the panel, might well
end up being *the* online transhumanist event of 2024 is scheduled for
tomorrow. More details can be found here:
https://www.turingchurch.com/p/terasm-colloquium-july-7-2024

Enjoy! :-)

--
Stefano Vaj

From natasha at natashavita-more.com  Sat Jul 6 17:02:39 2024
From: natasha at natashavita-more.com (Natasha natashavita-more.com)
Date: Sat, 6 Jul 2024 17:02:39 +0000
Subject: [ExI] Terasem Colloquium
In-Reply-To:
References:
Message-ID:

Hi Stefano,

That could be one perspective. Yet the fact that not one woman is
included in this event surely evidences that it is not.

Onward!
Natasha

________________________________
From: Stefano Vaj
Sent: Saturday, July 6, 2024 6:33:02 AM
To: Euro-transhumanists; ExI chat list; Riccardo Campa; Giulio Prisco;
 Natasha natashavita-more.com
Subject: Terasem Colloquium

[snip]

From postmowoods at gmail.com  Sun Jul 7 02:39:51 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Sat, 6 Jul 2024 20:39:51 -0600
Subject: [ExI] US Democrat Playbook
Message-ID:

I know that this isn't really the right place for politics, but you
are the smartest people I know, and I'm wondering if I'm hallucinating
about something that I think I'm noticing. And maybe this isn't
entirely a political question anyway; it's more of a human
psychological question, and whether the Democratic party is advancing
through a psychological "trick".

When Barack Obama ran for president in 2008, he promoted "hopes and
dreams" but seemed exceptionally unspecific about what he intended to
do, other than some vague plans about universal health care. He had
almost zero record as a senator. He allowed himself to be an empty
vessel into which people psychologically *projected* their own hopes
and dreams, and he was extremely successful with this approach. It
turned out that his real vision resonated with enough people that he
even got reelected after he had created something of a record. Good
for him. It doesn't seem too bold to say that his first four years
were more accomplished than his last four.

In 2016, Trump won. The bad orange populist beat the drum, and people
who never voted did for once.
This doesn't go to my point, which is about the Democrats. On the
democratic side, you had Hillary Clinton with YEARS of on-the-record
experience and a long voting record. People knew EXACTLY what they
would be getting and apparently rejected her.

In 2020, back to the Obama model, under the cover of covid, Biden ran
the proverbial "campaign from his basement". He didn't make many
specific promises (how could he) and again served as an empty vessel
for people to pour their hopes and dreams into. Along with a fair
amount of vote-at-home changes, he won. He also disavowed much of his
earlier voting record in the process, to become less predictable.

Now, in 2024, it seems perfectly safe at this point to predict that
Biden (who is no longer an empty vessel, has caused a fair bit of
inflation, and is widely thought to be possibly empty-headed in a
quite literal way) will be replaced before the election. I predict
that another relatively unknown person may be put up, too late for the
public to research, so that once again people can project their own
hopes and dreams into another "empty vessel". I predict that if they
do that, an unknown stands a better chance against Trump, and will
likely win.

The question for this group of learned scholars is whether you think
this "empty vessel" projection theory has ANY basis in fact. Is it an
accident, or am I simply hallucinating my way through recent history?

Again, sorry if this is too political; it's REALLY not meant to be,
and I really don't want to discuss any particulars... I'm just
interested in the human psychology of it all.

-Kelly

From atymes at gmail.com  Sun Jul 7 03:46:08 2024
From: atymes at gmail.com (Adrian Tymes)
Date: Sat, 6 Jul 2024 23:46:08 -0400
Subject: [ExI] US Democrat Playbook
In-Reply-To:
References:
Message-ID:

On Sat, Jul 6, 2024 at 10:41 PM Kelly Anderson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

> On the democratic side, you had Hillary Clinton with YEARS of
> on-the-record experience and a long voting record. People knew
> EXACTLY what they would be getting and apparently rejected her.

From what I'm hearing, that wasn't the primary reason she was
rejected. The way she campaigned reeked of corruption: that it was
"her turn", as if this was something she was owed rather than
something she still had to earn. This can be seen even in the book she
wrote about why she thought she lost: she displays a complete
blindness to the possibility that this style of messaging made a lot
of people vote against her (and for Trump, only as the most expedient
means of voting against her).

> It seems perfectly safe at this point to predict that Biden (who is
> no longer an empty vessel, has caused a fair bit of inflation, and is
> widely thought to be possibly empty-headed in a quite literal way)
> will be replaced before the election.

HAHAHA no.

1) Who specifically would they replace him with? The only Democrat who
does better than Biden on most polls, Michelle Obama, is not running
and can not be made to run. If they don't have a specific person to
replace him with, then there is no replacement; to say he will be
replaced without any idea of who to replace him with is like the
Republicans going on and on and on and on about repealing the
Affordable Care Act but having nothing to replace it with. (This is
the main reason why, when it came to the actual votes, no full repeal
has yet made it all the way through. It's the same deal with replacing
Biden.)
2) The advantages of incumbency outweigh the disadvantages (if any
will remain by the election) of his recent debate performance (which
seems to have been mostly wiped out by his subsequent rallies).

3) Let's not forget the impact on the chances of whoever would replace
him, of the Democratic party blowing off the will of its voters as
expressed through its primaries. He has been elected the Democratic
nominee. There does not seem to be a way to replace him, by the rules.
(This is a problem that never-Trumpers in the Republicans face too,
even if Trump were to be thrown in the slammer from before their
convention until after Inauguration Day.)

> The question for this group of learned scholars is whether you think
> this "empty vessel" projection theory has ANY basis in fact.

I believe you have some of the truth. Not all, but there are some
voters who think that way.

I also believe that, if you wish to study this in more detail, you
need to stop monofocusing on the presidential campaigns. In fact, I
would suggest that you COMPLETELY IGNORE the presidential races,
precisely because they are just so tempting to study in detail to the
exclusion of all others. Instead, analyze the Congressional and state
(especially governor and legislature) races. This provides many
thousands of data points, in the span of history for which
presidential races provide a mere handful. One of these is enough to
draw statistically valid conclusions from, while the other spawns
anecdotes, speculation, and not much else. As you well know,
statistically valid studies give far more reliable guidance than
anecdotes and speculation.

Yes, I know, anecdotes and speculation feel so much better. They can
be so addicting! This is the same phenomenon behind disasterbation.
Force yourself to look away from them if you seek the truth.

Assuming, of course, that you truly wish to understand how it works
and that you aren't just trying to throw political chaff. Most of
those who have recently been saying that Biden will definitely be
replaced have been doing so in bad faith, in my experience. Hopefully
you are better than them.

From spike at rainier66.com  Sun Jul 7 07:01:37 2024
From: spike at rainier66.com (Gregory Jones)
Date: Sun, 7 Jul 2024 08:01:37 +0100
Subject: [ExI] US Democrat Playbook
In-Reply-To:
References:
Message-ID:

Kelly and Adrian, I have been touring the Irish countryside for the
past coupla weeks with very limited internet access. I still get
email, so I get updates on my prediction market prices hourly. I don't
understand these numbers, either the prices or the insane trading
volume, but it appears the current POTUS had a very bad week.

Kelly your notion is compelling, that a vague undefined optimistic
message is a big seller.

spike

On Sun, Jul 7, 2024, 4:47 AM Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> [snip]
From spike at rainier66.com  Sun Jul 7 07:28:26 2024
From: spike at rainier66.com (Gregory Jones)
Date: Sun, 7 Jul 2024 08:28:26 +0100
Subject: [ExI] US Democrat Playbook
In-Reply-To:
References:
Message-ID:

If anyone here wants to play political gambling, I noticed the VPOTUS
had a terrific week. When I left, her shares were in the 2 to 3 cent
range. Now she is selling for 26 cents. That coulda made some bold
investor a tidy factor-of-ten profit in one week.

spike

On Sun, Jul 7, 2024, 8:01 AM Gregory Jones wrote:
> [snip]
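[Prices on these markets, for standard binary contracts that pay $1 if
the event happens, are commonly read as implied probabilities, so
spike's "factor of ten" is easy to sanity-check. The buy price below
is an assumed midpoint of his "2 to 3 cent" range.]

# Assumes standard binary contracts that pay $1 if the event happens.
buy_price = 0.025   # $, assumed midpoint of the "2 to 3 cent" range
sell_price = 0.26   # $, the price quoted above

print(f"return multiple: {sell_price / buy_price:.1f}x")   # ~10.4x
print(f"implied probability now: {sell_price:.0%}")        # 26%

[So the arithmetic checks out: roughly a tenfold gain, with the market
now pricing about a 26% chance.]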
From postmowoods at gmail.com  Sun Jul 7 07:29:18 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Sun, 7 Jul 2024 01:29:18 -0600
Subject: [ExI] US Democrat Playbook
In-Reply-To:
References:
Message-ID:

As soon as any politician gets too concrete, they lose people. If they
remain vague, then people might assume that the candidate thinks like
they do, which is where psychological projection comes in for the
candidate's benefit. If the candidate promises a fowl in every pot,
many might assume a chicken, since that's what they would want or
think they would offer; but you don't know if you get a chicken until
after they are elected, in which case you might get merely a pigeon,
or an excuse that congress kept them from giving you a chicken.

But the Democratic presidential candidate (I don't think this applies
down the ticket in any way I've been able to see) seems lately to take
this to a whole new level. The same cannot be said of Bill Clinton or
Jimmy Carter, as they had records as governors that could not be
ignored. This seems to be a recent phenomenon, if it is one at all.

Whether Biden will in the end be replaced is up to a few things.
First... can they pump enough amphetamines (or whatever the military
is using these days to keep pilots alert for 48 hours at a time) into
Joe to make him appear cogent in the remaining debates... and
second... can they come up with a viable alternative candidate.
As you say, Michelle Obama keeps coming up, and it is not without
precedent to "draft" someone into running, seemingly against their
will. Or, even better, some small-state governor that nobody knows
much about, because they make the best blank slates. I don't think the
vice president is likely to beat Trump, simply on the basis that she's
not a great debater herself and has some record issues.

On Sun, Jul 7, 2024 at 1:02 AM Gregory Jones via extropy-chat wrote:
> [snip]
Not all, but there are some voters who think that way. >> >> I also believe that, if you wish to study this in more detail, you need to stop monofocusing on the presidential campaigns. In fact, I would suggest that you COMPLETELY IGNORE the presidential races, precisely because they are just so tempting to study in detail to the exclusion of all others. Instead, analyze the Congressional and state (especially governor and legislature) races. This provides many thousands of data points, in the span of history for which presidential races provide a mere handful. One of these is enough to draw statistically valid conclusions from, while the other spawns anecdotes, speculation, and not much else. As you well know, statistically valid studies give far more reliable guidance than anecdotes and speculation. >> >> Yes, I know, anecdotes and speculation feel so much better. They can be so addicting! This is the same phenomenon behind disasterbation. Force yourself to look away from them if you seek the truth. >> >> Assuming, of course, that you truly wish to understand how it works and that you aren't just trying to throw political chaff. Most of those who have recently been saying that Biden will definitely be replaced have been doing so in bad faith, in my experience. Hopefully you are better than them. >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From postmowoods at gmail.com Sun Jul 7 07:30:00 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 7 Jul 2024 01:30:00 -0600 Subject: [ExI] US Democrat Playbook In-Reply-To: References: Message-ID: The last one I looked at looked pretty good for Gavin Newsom. On Sun, Jul 7, 2024 at 1:29 AM Gregory Jones via extropy-chat wrote: > > If anyone here wants to play political gambling, I noticed the VPOTUS had a terrific week. When I left, her shares were in the 2 to 3 cent range. Now she is selling for 26 cents. That coulda made some bold investor a tidy factor of ten profit in one week. > > spike > > On Sun, Jul 7, 2024, 8:01 AM Gregory Jones wrote: >> >> Kelly and Adrian I have been touring the Irish countryside for the past coupla weeks with very limited internet access. I still get email so I get updates on my prediction market prices hourly. I don't understand these numbers, either the prices or the insane trading volume, but it appears the current POTUS had a very bad week. >> >> Kelly your notion is compelling, that a vague undefined optimistic message is a big seller. >> >> spike >> >> On Sun, Jul 7, 2024, 4:47 AM Adrian Tymes via extropy-chat wrote: >>> >>> On Sat, Jul 6, 2024 at 10:41 PM Kelly Anderson via extropy-chat wrote: >>>> >>>> On the Democratic side, you had Hillary Clinton with >>>> YEARS of on-the-record experience, and a long voting record. People >>>> knew EXACTLY what they would be getting and apparently rejected her. >>> >>> >>> From what I'm hearing, that wasn't the primary reason she was rejected. The way she campaigned reeked of corruption: that it was "her turn", as if this was something she was owed rather than something she still had to earn.
This can be seen even in the book she wrote about why she thought she lost: she displays a complete blindness to the possibility that this style of messaging made a lot of people vote against her (and for Trump, only as the most expedient means of voting against her). >>> >>>> >>>> It seems perfectly safe at this point to predict that >>>> Biden (who is no longer an empty vessel, has caused a fair bit of >>>> inflation, and is widely thought to be possibly empty headed in a >>>> quite literal way) will be replaced before the election. >>> >>> >>> HAHAHA no. >>> >>> 1) Who specifically would they replace him with? The only Democrat who does better than Biden on most polls, Michelle Obama, is not running and can not be made to run. If they don't have a specific person to replace him with, then there is no replacement; to say he will be replaced without any idea of who to replace him with, is like the Republicans going on and on and on and on about repealing the Affordable Care Act but having nothing to replace it with. (This is the main reason why, when it came to the actual votes, no full repeal has yet made it all the way through. It's the same deal with replacing Biden.) >>> >>> 2) The advantages of incumbency outweigh the disadvantages (if any will remain by the election) of his recent debate performance (which seems to have been mostly wiped out by his subsequent rallies). >>> >>> 3) Let's not forget the impact on the chances of whoever would replace him, of the Democratic party blowing off the will of its voters as expressed through its primaries. He has been elected the Democratic nominee. There does not seem to be a way to replace him, by the rules. (This is a problem that never-Trumpers in the Republicans face too, even if Trump were to be thrown in the slammer from before their convention until after Inauguration Day.) >>> >>>> >>>> The question for this group of learned scholars is whether you think >>>> this "empty vessel" promoting projection theory has ANY basis in fact. >>> >>> >>> I believe you have some of the truth. Not all, but there are some voters who think that way. >>> >>> I also believe that, if you wish to study this in more detail, you need to stop monofocusing on the presidential campaigns. In fact, I would suggest that you COMPLETELY IGNORE the presidential races, precisely because they are just so tempting to study in detail to the exclusion of all others. Instead, analyze the Congressional and state (especially governor and legislature) races. This provides many thousands of data points, in the span of history for which presidential races provide a mere handful. One of these is enough to draw statistically valid conclusions from, while the other spawns anecdotes, speculation, and not much else. As you well know, statistically valid studies give far more reliable guidance than anecdotes and speculation. >>> >>> Yes, I know, anecdotes and speculation feel so much better. They can be so addicting! This is the same phenomena behind disasterbation. Force yourself to look away from them if you seek the truth. >>> >>> Assuming, of course, that you truly wish to understand how it works and that you aren't just trying to throw political chaff. Most of those who have recently been saying that Biden will definitely be replaced have been doing so in bad faith, in my experience. Hopefully you are better than them. 
>>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From postmowoods at gmail.com Sun Jul 7 07:32:24 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 7 Jul 2024 01:32:24 -0600 Subject: [ExI] Bots and AI are now half of internet traffic In-Reply-To: References: <770919a1-61c6-eb2e-0602-096075f92dcd@swisscows.email> <8d2a2495-d874-6501-1216-cff1903f897f@swisscows.email> Message-ID: Those are mostly well thought out reasons. Thanks. On Fri, Jul 5, 2024 at 11:56?AM BillK via extropy-chat wrote: > > On Fri, 5 Jul 2024 at 18:12, Kelly Anderson via extropy-chat > wrote: > > > > Why do they always want to connect on WhatsApp?? > > > > -Kelly > > _______________________________________________ > > > I've got no idea. I don't use WhatsApp. > But Gemma 2 AI has some suggestions. :) > BillK > > Gemma 2 - > Spambots often target WhatsApp for several reasons: > > Large user base: WhatsApp boasts over 2 billion users globally, making > it a prime target for reaching a wide audience. > > Direct messaging: WhatsApp allows for direct, personal communication, > which spambots can exploit to appear more legitimate and build trust. > > End-to-end encryption: While encryption protects user privacy, it also > makes it harder for platforms to detect and block spam messages. > > Limited spam detection: Compared to email or social media platforms, > WhatsApp has less sophisticated spam detection mechanisms. > > Group chats: Spambots can infiltrate group chats and spread their > messages to a larger audience quickly. > > Multimedia sharing: WhatsApp allows for sharing images, videos, and > audio files, which spambots can use to spread malware or phishing > links. > ----------------------- > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From postmowoods at gmail.com Sun Jul 7 07:45:02 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 7 Jul 2024 01:45:02 -0600 Subject: [ExI] Maybe space exploration will be a task for AI humanoids In-Reply-To: References: <16581dcec67002a1195abd101b79d69d@sollegro.com> Message-ID: On Fri, Jun 28, 2024 at 6:06?AM Adrian Tymes via extropy-chat wrote: > > The "political" issue is that this approach would only benefit entities that we specifically breed for this venture. > > 1) Those entities are not us. We ourselves do not directly benefit. The apparent means of making sure we do benefit amount to slavery. Just a thought... I wonder if the idea of actual freedom would appeal to enough people to go into space. If you could have an Atlas Shrugged sort of economy in space, it seems that would be enough to compel at least some people to leave this dirt ball with all its messed up rules behind. If I were younger, that would be enough for me to want to go. It would also ensure that a certain sort of person never would want to leave the security of terra firma, giving the colony a better chance at survival. As imperfect historical precedent, I propose the founding of white Australia as perhaps being somewhat along these lines... though only for prisoners. 
-Kelly From postmowoods at gmail.com Sun Jul 7 07:56:21 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 7 Jul 2024 01:56:21 -0600 Subject: [ExI] Silicon Valley Rushes Toward AI Warfare In-Reply-To: References: Message-ID: Ukrainians have really shown the way forward with autonomous naval and aerial drones. It's incredibly asymmetric. The worst weapon in the Russians' arsenal these days seems to be their tanks. It's just too easy to use a $3,000 autonomous or semi-autonomous drone weapon to destroy a $4 million tank. Similarly, $300,000 naval drones are sinking $40 million ships either by themselves or in packs. The Russian navy caught in the Black Sea are fish in a barrel. Apparently bridges are harder to attack with drones up to this point... which leaves an interesting hole for the Silicon Valley war gangs to fill. Any autonomous bridge attack mechanism would certainly be something I'd want to invest in right now. -Kelly On Tue, Jul 2, 2024 at 9:44 AM BillK via extropy-chat wrote: > > Silicon Valley Rushes Toward Automated Warfare That Deeply Incorporates AI. > We can't let those who will profit most from its unbridled application > make the rules for how AI should be used. > > By William D. Hartung, TomDispatch, Published June 25, 2024 > > > Quotes: > The reasons for this headlong rush include a misplaced faith in > "miracle weapons," but above all else, this surge of support for > emerging military technologies is driven by the ultimate rationale of > the military-industrial complex: vast sums of money to be made. > ------- > AI is coming, and its impact on our lives, whether in war or peace, is > likely to stagger the imagination. In that context, one thing is > clear: we can't afford to let the people and companies that will > profit most from its unbridled application have the upper hand in > making the rules for how it should be used. > ------------------ > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From postmowoods at gmail.com Sun Jul 7 08:04:08 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 7 Jul 2024 02:04:08 -0600 Subject: [ExI] Interesting take on AI. In-Reply-To: References: Message-ID: Brooks's main point seems to actually be that narrow AI, or just good programming, can beat more generalized AI at specific tasks like running a warehouse. I look at my daily work automating woodworking tasks and see that my specialized machines are MUCH better at doing what they need to do than any humanoid robot could accomplish without such specialized tools. If the humanoid robots were smart enough to build automated systems, then my point fails, but a humanoid robot working with a router table just isn't going to compete with an assembly line approach in any way, shape, or form. In this narrow sense, of course, he is correct. But... There are more generalized tasks, such as maintaining a household, that would be suitable for intelligent robotics. -Kelly On Sun, Jun 30, 2024 at 9:57 AM efc--- via extropy-chat wrote: > > Hello everyone, > > Thought you might enjoy this take on AI: > > https://techcrunch.com/2024/06/29/mit-robotics-pioneer-rodney-brooks-thinks-people-are-vastly-overestimating-generative-ai/ > > "Brooks adds that there's this mistaken belief, mostly thanks to Moore's > law, that there will always be exponential growth when it comes to > technology --
the idea that if ChatGPT 4 is this good, imagine what ChatGPT > 5, 6 and 7 will be like. He sees this flaw in that logic, that tech > doesn't always grow exponentially, in spite of Moore's law". > > More in the article above. > > Best regards, > Daniel > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From postmowoods at gmail.com Sun Jul 7 08:08:31 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 7 Jul 2024 02:08:31 -0600 Subject: [ExI] Unitree (Chinese) robot dog spectacular In-Reply-To: References: Message-ID: I love the text at the end of the first video... "We kindly request that all users refrain from making any dangerous modifications or using the robot in a hazardous manner".... LOLOL. -Kelly On Sun, Jun 23, 2024 at 3:51 AM BillK via extropy-chat wrote: > > The latest Unitree robot dog short video is impressive. (1 minute) > > > The whole Unitree channel is worth browsing through. > They also make humanoid robots. > > Oh, and by the way, the Chinese military have put a machine gun on > one of these robot dogs. (45 sec). > > > BillK > > > > > > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From atymes at gmail.com Sun Jul 7 10:16:43 2024 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 7 Jul 2024 06:16:43 -0400 Subject: [ExI] Silicon Valley Rushes Toward AI Warfare In-Reply-To: References: Message-ID: Bridges? You mean the big, static, immobile things that can be attacked by artillery, where artillery shells are generally even cheaper than drones with an equivalent explosive payload? On Sun, Jul 7, 2024, 3:57 AM Kelly Anderson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Ukrainians have really shown the way forward with autonomous naval and > aerial drones. It's incredibly asymmetric. The worst weapon in the > Russians' arsenal these days seems to be their tanks. It's just too > easy to use a $3,000 autonomous or semi-autonomous drone weapon to > destroy a $4 million tank. Similarly, $300,000 naval drones are > sinking $40 million ships either by themselves or in packs. The > Russian navy caught in the Black Sea are fish in a barrel. Apparently > bridges are harder to attack with drones up to this point... which > leaves an interesting hole for the Silicon Valley war gangs to fill. > Any autonomous bridge attack mechanism would certainly be something > I'd want to invest in right now. > > -Kelly > > On Tue, Jul 2, 2024 at 9:44 AM BillK via extropy-chat > wrote: > > > > Silicon Valley Rushes Toward Automated Warfare That Deeply Incorporates AI. > > We can't let those who will profit most from its unbridled application > > make the rules for how AI should be used. > > > > By William D. Hartung, TomDispatch, Published June 25, 2024 > > > > < https://truthout.org/articles/silicon-valley-rushes-toward-automated-warfare-that-deeply-incorporates-ai/ > > > Quotes: > > The reasons for this headlong rush include a misplaced faith in > > "miracle weapons," but above all else, this surge of support for > > emerging military technologies is driven by the ultimate rationale of > > the military-industrial complex: vast sums of money to be made. > > ------- > > AI is coming, and its impact on our lives, whether in war or peace, is > > likely to stagger the imagination.
In that context, one thing is > > clear: we can?t afford to let the people and companies that will > > profit most from its unbridled application have the upper hand in > > making the rules for how it should be used. > > ------------------ > > > > BillK > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sun Jul 7 11:22:58 2024 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 7 Jul 2024 07:22:58 -0400 Subject: [ExI] US Democrat Playbook In-Reply-To: References: Message-ID: > Last one I looked at looked pretty good for Gavin Newsome. https://www.ipsos.com/en-us/only-michelle-obama-bests-trump-alternative-biden-2024 among others disagree. At least Harris is starting to shine on her own, but it is unlikely she can be convinced to step up. On Sun, Jul 7, 2024 at 3:30?AM Kelly Anderson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > As soon as any politician gets too concrete, they lose people. This in general would seem to be true. > As for whether Biden will in the end be replaced is up to a few > things. First... can they pump enough amphetamines or whatever the > military is using these days to keep pilots alert for 48 hours at a > time into Joe to make him appear cogent in the remaining debates... > I can just about guarantee they won't wind up doing that. Remember, they're not transhumanists. Even with such stakes on the line, they don't (rationally) consider such measures. > and Second... can they come up with a viable alternative candidate. As > you say Michelle Obama keeps coming up, and it is not without > precedent to "draft" someone into running, seemingly against their > will. It's not without precedent to want to, even to try to. But to actually succeed? The effort to draft Colin Powell was much more organized and persistent than the current calls to replace Biden with some undefined person, to the point that Powell received 3 electoral votes in 2016 against his will. But without his consent, such efforts could not get him on the ballot or into office. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sun Jul 7 11:56:03 2024 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 7 Jul 2024 07:56:03 -0400 Subject: [ExI] Maybe space exploration will be a task for AI humanoids In-Reply-To: References: <16581dcec67002a1195abd101b79d69d@sollegro.com> Message-ID: On Sun, Jul 7, 2024 at 3:46?AM Kelly Anderson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Fri, Jun 28, 2024 at 6:06?AM Adrian Tymes via extropy-chat > wrote: > > > > The "political" issue is that this approach would only benefit entities > that we specifically breed for this venture. > > > > 1) Those entities are not us. We ourselves do not directly benefit. > The apparent means of making sure we do benefit amount to slavery. > > Just a thought... I wonder if the idea of actual freedom would appeal > to enough people to go into space. This has been tried and failed, to the point that "no true X" keeps being offered as an excuse for believing why it might still work. 
"Actual freedom" gets tourists - be it in space or on the ground - who generally linger until they miss the services that prefer to operate in more legislated environments. It does not create viable economies: note the economic fate of every single attempt to create an "actually free" settlement on Earth. There is no reason to believe that a low-government colony in space - which would require stronger governance, to clamp down on the kind of mischief that could get everyone dead real quick (trash not getting picked up and thus bears getting into the trash, is not the same as air leaks being someone else's problem and thus the station runs out of air) - would fare any better. > As imperfect historical precedent, I propose the founding of white > Australia as perhaps being somewhat along these lines... though only > for prisoners. > A literal prison colony was not "actual freedom". -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Jul 7 12:13:25 2024 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 7 Jul 2024 08:13:25 -0400 Subject: [ExI] Interesting take on AI. In-Reply-To: References: Message-ID: On Sun, Jun 30, 2024, 1:03 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Setting aside whether Moore's Law will continue, exponential growth in > hardware does not necessarily mean exponential growth in software. ChatGPT > running twice as fast or having twice as much memory does not make it twice > as good by itself, using the functionality measure of "good" that most of > the public is using for AI. > I agree that our subjective "measure of good" does not necessarily follow the amount of computation resources that goes into something. For example, with weather prediction, or the prediction of any chaotic system, an exponential increase in comoutational resources and data collection will yield only a linear improvement in capability. For example, we might only get one extra day farther out of prediction if we 10X our weather prediction system. Intelligence, so far as it is related to predicting the future, could be seen as such an example. A super intelligent AI, with 1000X the human brain's computational power, would not be able to accurately predict the future much further out than a human could. All that said, I have another point to make with regards to what you said about software. In the case of AI, where the software develops itself via training, I don't think that software is a bottleneck to progress in AI. I recently wrote the following in a different discussion list about the progress in AI, and I think it would be useful to share here: I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI. But I also see a possible explanation. Nature has likewise discovered something, which is relatively simple in its behavior and capabilities, yet, when aggregated into ever larger collections yields greater and greater intelligence and capability: the neuron. There is relatively little difference in neurons across mammals. A rat neuron is little different from a human neuron, for example. Yet a human brain has several thousand times more of them than a mouse brain does, and this difference in scale, seems to be the only meaningful difference between what mice and humans have been able to accomplish. Deep learning, and the progress in that field, is a microcosm of this example from nature. 
The artificial neuron is proven to be "a universal function learner." So the more of them there are aggregated together in one network, the more rich and complex functions they can learn to approximate. Humans no longer write the algorithms these neural networks derive, the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque and so complex that they escape our capacity to understand them. So I would argue, there have been massive breakthroughs in the algorithms that underlie the advances in AI, we just don't know what those breakthroughs are. These algorithms are products of systems which have (now) trillions of parts. Even the best human programmers can't know the complete details of projects with around a million lines of code (nevermind a trillion). So have trillion-parameter neural networks unlocked the algorithms for true intelligence? How would we know once they had? Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human brain, with its 600T connections might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower. Note that there has been no great breakthrough in solving how human neurons learn. We're still using the same method of back propagation invented in the 1970s, using the same neuron model of the 1960s. Yet, simply scaling this same old approach up, with more training data and training time, with more neurons arranged in more layers, has yielded all the advances we've seen: image and video generators, voice cloning, language models, go, poker, chess, Atari, and StarCraft master players, etc. So it seems to me, at this point, that hardware is the only impediment to future progress in creating more intelligent systems. Jason > On Sun, Jun 30, 2024, 11:58?AM efc--- via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Hello everyone, >> >> Thought you might enjoy this take on AI: >> >> >> https://techcrunch.com/2024/06/29/mit-robotics-pioneer-rodney-brooks-thinks-people-are-vastly-overestimating-generative-ai/ >> >> "Brooks adds that there?s this mistaken belief, mostly thanks to Moore?s >> law, that there will always be exponential growth when it comes to >> technology ? the idea that if ChatGPT 4 is this good, imagine what >> ChatGPT >> 5, 6 and 7 will be like. He sees this flaw in that logic, that tech >> doesn?t always grow exponentially, in spite of Moore?s law". >> >> More in the article above. >> >> Best regards, >> Daniel >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Jul 7 12:44:57 2024 From: spike at rainier66.com (Gregory Jones) Date: Sun, 7 Jul 2024 13:44:57 +0100 Subject: [ExI] US Democrat Playbook In-Reply-To: References: Message-ID: I get the feeling that most of the money to be made this cycle is already behind us. If I had sufficient internet bandwidth I might sell my remaining not-bidens at 76 cents a share, pocket a nice profit on those then wait until this race gets a little less crazy. 
spike On Sun, Jul 7, 2024, 12:25?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Last one I looked at looked pretty good for Gavin Newsome. > > > https://www.ipsos.com/en-us/only-michelle-obama-bests-trump-alternative-biden-2024 > among others disagree. At least Harris is starting to shine on her own, > but it is unlikely she can be convinced to step up. > > On Sun, Jul 7, 2024 at 3:30?AM Kelly Anderson via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> As soon as any politician gets too concrete, they lose people. > > > This in general would seem to be true. > > >> As for whether Biden will in the end be replaced is up to a few >> things. First... can they pump enough amphetamines or whatever the >> military is using these days to keep pilots alert for 48 hours at a >> time into Joe to make him appear cogent in the remaining debates... >> > > I can just about guarantee they won't wind up doing that. Remember, > they're not transhumanists. Even with such stakes on the line, they don't > (rationally) consider such measures. > > >> and Second... can they come up with a viable alternative candidate. As >> you say Michelle Obama keeps coming up, and it is not without >> precedent to "draft" someone into running, seemingly against their >> will. > > > It's not without precedent to want to, even to try to. But to actually > succeed? The effort to draft Colin Powell was much more organized and > persistent than the current calls to replace Biden with some undefined > person, to the point that Powell received 3 electoral votes in 2016 against > his will. But without his consent, such efforts could not get him on the > ballot or into office. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Jul 7 12:55:41 2024 From: spike at rainier66.com (Gregory Jones) Date: Sun, 7 Jul 2024 13:55:41 +0100 Subject: [ExI] US Democrat Playbook In-Reply-To: References: Message-ID: I just got a notice those shares of not-bidens have hit 79. I would sell those right now if I could. spike On Sun, Jul 7, 2024, 1:44?PM Gregory Jones wrote: > I get the feeling that most of the money to be made this cycle is already > behind us. If I had sufficient internet bandwidth I might sell my > remaining not-bidens at 76 cents a share, pocket a nice profit on those > then wait until this race gets a little less crazy. > > spike > > On Sun, Jul 7, 2024, 12:25?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> > Last one I looked at looked pretty good for Gavin Newsome. >> >> >> https://www.ipsos.com/en-us/only-michelle-obama-bests-trump-alternative-biden-2024 >> among others disagree. At least Harris is starting to shine on her own, >> but it is unlikely she can be convinced to step up. >> >> On Sun, Jul 7, 2024 at 3:30?AM Kelly Anderson via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> As soon as any politician gets too concrete, they lose people. >> >> >> This in general would seem to be true. >> >> >>> As for whether Biden will in the end be replaced is up to a few >>> things. First... can they pump enough amphetamines or whatever the >>> military is using these days to keep pilots alert for 48 hours at a >>> time into Joe to make him appear cogent in the remaining debates... 
>>> >> >> I can just about guarantee they won't wind up doing that. Remember, >> they're not transhumanists. Even with such stakes on the line, they don't >> (rationally) consider such measures. >> >> >>> and Second... can they come up with a viable alternative candidate. As >>> you say Michelle Obama keeps coming up, and it is not without >>> precedent to "draft" someone into running, seemingly against their >>> will. >> >> >> It's not without precedent to want to, even to try to. But to actually >> succeed? The effort to draft Colin Powell was much more organized and >> persistent than the current calls to replace Biden with some undefined >> person, to the point that Powell received 3 electoral votes in 2016 against >> his will. But without his consent, such efforts could not get him on the >> ballot or into office. >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Jul 7 16:49:24 2024 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Jul 2024 18:49:24 +0200 Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: https://www.turingchurch.com/p/terasem-colloquium-july-7-2024 On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com < natasha at natashavita-more.com> wrote: > Hi Stefan, > > That could be one perspective. Yet, given the fact that there is not one > woman included at this event surely evidences that it is not. > > Onward! > Natasha > > > > > > Get Outlook for Android > ------------------------------ > *From:* Stefano Vaj > *Sent:* Saturday, July 6, 2024 6:33:02 AM > *To:* Euro-transhumanists ; ExI > chat list ; Riccardo Campa < > riccardo.campa.02 at gmail.com>; Giulio Prisco ; Natasha > natashavita-more.com > *Subject:* Terasem Colloquium > > I understand that the Colloquium which, given the panel, might well end up > being *the* online transhumanist event of 2024 is scheduled for tomorrow. > > More details can be found here: > > https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 > > Enjoy! :-) > > -- > Stefano Vaj > -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Jul 7 16:51:00 2024 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Jul 2024 18:51:00 +0200 Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: Oops. I did not pay any attention to that. Giulio is to be blamed, however, I had no part in its organisation. :-) On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com < natasha at natashavita-more.com> wrote: > Hi Stefan, > > That could be one perspective. Yet, given the fact that there is not one > woman included at this event surely evidences that it is not. > > Onward! > Natasha > > > > > > Get Outlook for Android > ------------------------------ > *From:* Stefano Vaj > *Sent:* Saturday, July 6, 2024 6:33:02 AM > *To:* Euro-transhumanists ; ExI > chat list ; Riccardo Campa < > riccardo.campa.02 at gmail.com>; Giulio Prisco ; Natasha > natashavita-more.com > *Subject:* Terasem Colloquium > > I understand that the Colloquium which, given the panel, might well end up > being *the* online transhumanist event of 2024 is scheduled for tomorrow. > > More details can be found here: > > https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 > > Enjoy! 
:-) > > -- > Stefano Vaj > -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natashavita-more.com Sun Jul 7 17:33:54 2024 From: natasha at natashavita-more.com (Natasha natashavita-more.com) Date: Sun, 7 Jul 2024 17:33:54 +0000 Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: Darlings Stefano (and Giulio) - I'm laughing in delightful joy at how we all miss things. I do too ? quite often! History is to blame! ________________________________ From: Stefano Vaj Sent: Sunday, July 7, 2024 9:51 AM To: Natasha natashavita-more.com Cc: Euro-transhumanists ; ExI chat list ; Riccardo Campa ; Giulio Prisco Subject: Re: Terasem Colloquium Oops. I did not pay any attention to that. Giulio is to be blamed, however, I had no part in its organisation. :-) On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com > wrote: Hi Stefan, That could be one perspective. Yet, given the fact that there is not one woman included at this event surely evidences that it is not. Onward! Natasha Get Outlook for Android ________________________________ From: Stefano Vaj > Sent: Saturday, July 6, 2024 6:33:02 AM To: Euro-transhumanists >; ExI chat list >; Riccardo Campa >; Giulio Prisco >; Natasha natashavita-more.com > Subject: Terasem Colloquium I understand that the Colloquium which, given the panel, might well end up being *the* online transhumanist event of 2024 is scheduled for tomorrow. More details can be found here: https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 Enjoy! :-) -- Stefano Vaj -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at swisscows.email Sun Jul 7 18:56:03 2024 From: efc at swisscows.email (efc at swisscows.email) Date: Sun, 7 Jul 2024 20:56:03 +0200 (CEST) Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: I for one would be very interested in knowing how well the event represents other intersectional aspects and dimensions such as religion, sexual preference, political ideology, age and non-binary genders. It is my firm belief that only looking at the gender aspect is deeply insulting towards all other minorities out there and also representative of yesterdays identity politics and not the identity politics of the future! Best regards, Daniel On Sun, 7 Jul 2024, Stefano Vaj via extropy-chat wrote: > Oops. I did not pay any attention to that. > Giulio is to be blamed, however, I had no part in its organisation. :-) > > On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com wrote: > Hi Stefan, > > That could be one perspective. Yet, given the fact that there is not one woman included at this event surely evidences that it > is not. > > Onward! > Natasha? > > > > > > Get Outlook for Android > > _____________________________________________________________________________________________________________________________________ > From: Stefano Vaj > Sent: Saturday, July 6, 2024 6:33:02 AM > To: Euro-transhumanists ; ExI chat list ; Riccardo Campa > ; Giulio Prisco ; Natasha natashavita-more.com > Subject: Terasem Colloquium ? > I understand that the Colloquium which, given the panel, might well end up being *the* online transhumanist event of 2024 is > scheduled for tomorrow. > More details can be found?here: > > https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 > > Enjoy! 
:-) > > -- > Stefano Vaj > > > > -- > Stefano Vaj > > From giulio at gmail.com Mon Jul 8 06:03:57 2024 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Jul 2024 08:03:57 +0200 Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: Not my fault! We invited three women, none was able to accept. G. On Sun, Jul 7, 2024 at 6:51?PM Stefano Vaj wrote: > > Oops. I did not pay any attention to that. > > Giulio is to be blamed, however, I had no part in its organisation. :-) > > On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com wrote: >> >> Hi Stefan, >> >> That could be one perspective. Yet, given the fact that there is not one woman included at this event surely evidences that it is not. >> >> Onward! >> Natasha >> >> >> >> >> >> Get Outlook for Android >> ________________________________ >> From: Stefano Vaj >> Sent: Saturday, July 6, 2024 6:33:02 AM >> To: Euro-transhumanists ; ExI chat list ; Riccardo Campa ; Giulio Prisco ; Natasha natashavita-more.com >> Subject: Terasem Colloquium >> >> I understand that the Colloquium which, given the panel, might well end up being *the* online transhumanist event of 2024 is scheduled for tomorrow. >> >> More details can be found here: >> >> https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 >> >> Enjoy! :-) >> >> -- >> Stefano Vaj > > > > -- > Stefano Vaj From giulio at gmail.com Mon Jul 8 06:21:49 2024 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Jul 2024 08:21:49 +0200 Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: I hope identity politics will have no place in the politics of the future. It has already done too much damage to the politics (and culture, and society) of today. On Sun, Jul 7, 2024 at 9:23?PM efc--- via extropy-chat wrote: > > I for one would be very interested in knowing how well the event > represents other intersectional aspects and dimensions such as religion, > sexual preference, political ideology, age and non-binary genders. > > It is my firm belief that only looking at the gender aspect is deeply > insulting towards all other minorities out there and also representative > of yesterdays identity politics and not the identity politics of the > future! > > Best regards, > Daniel > > > On Sun, 7 Jul 2024, Stefano Vaj via extropy-chat wrote: > > > Oops. I did not pay any attention to that. > > Giulio is to be blamed, however, I had no part in its organisation. :-) > > > > On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com wrote: > > Hi Stefan, > > > > That could be one perspective. Yet, given the fact that there is not one woman included at this event surely evidences that it > > is not. > > > > Onward! > > Natasha > > > > > > > > > > > > Get Outlook for Android > > > > _____________________________________________________________________________________________________________________________________ > > From: Stefano Vaj > > Sent: Saturday, July 6, 2024 6:33:02 AM > > To: Euro-transhumanists ; ExI chat list ; Riccardo Campa > > ; Giulio Prisco ; Natasha natashavita-more.com > > Subject: Terasem Colloquium > > I understand that the Colloquium which, given the panel, might well end up being *the* online transhumanist event of 2024 is > > scheduled for tomorrow. > > More details can be found here: > > > > https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 > > > > Enjoy! 
:-) > > > > -- > > Stefano Vaj > > > > > > > > -- > > Stefano Vaj > > > >_______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From giulio at gmail.com Mon Jul 8 06:45:24 2024 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Jul 2024 08:45:24 +0200 Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: The video is here: VIDEO: Terasem Colloquium, July 7, 2024 https://www.youtube.com/watch?v=IEVTRUbXmOg Best - Giulio On Sat, Jul 6, 2024 at 3:33?PM Stefano Vaj wrote: > > I understand that the Colloquium which, given the panel, might well end up being *the* online transhumanist event of 2024 is scheduled for tomorrow. > > More details can be found here: > > https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 > > Enjoy! :-) > > -- > Stefano Vaj From efc at swisscows.email Mon Jul 8 09:48:09 2024 From: efc at swisscows.email (efc at swisscows.email) Date: Mon, 8 Jul 2024 11:48:09 +0200 (CEST) Subject: [ExI] Terasem Colloquium In-Reply-To: References: Message-ID: <0661ea8a-9e9d-5599-6a06-fd1dedef379c@swisscows.email> On Mon, 8 Jul 2024, Giulio Prisco wrote: > I hope identity politics will have no place in the politics of the > future. It has already done too much damage to the politics (and > culture, and society) of today. I agree completely! > On Sun, Jul 7, 2024 at 9:23?PM efc--- via extropy-chat > wrote: >> >> I for one would be very interested in knowing how well the event >> represents other intersectional aspects and dimensions such as religion, >> sexual preference, political ideology, age and non-binary genders. >> >> It is my firm belief that only looking at the gender aspect is deeply >> insulting towards all other minorities out there and also representative >> of yesterdays identity politics and not the identity politics of the >> future! >> >> Best regards, >> Daniel >> >> >> On Sun, 7 Jul 2024, Stefano Vaj via extropy-chat wrote: >> >>> Oops. I did not pay any attention to that. >>> Giulio is to be blamed, however, I had no part in its organisation. :-) >>> >>> On Sat, 6 Jul 2024 at 19:02, Natasha natashavita-more.com wrote: >>> Hi Stefan, >>> >>> That could be one perspective. Yet, given the fact that there is not one woman included at this event surely evidences that it >>> is not. >>> >>> Onward! >>> Natasha >>> >>> >>> >>> >>> >>> Get Outlook for Android >>> >>> _____________________________________________________________________________________________________________________________________ >>> From: Stefano Vaj >>> Sent: Saturday, July 6, 2024 6:33:02 AM >>> To: Euro-transhumanists ; ExI chat list ; Riccardo Campa >>> ; Giulio Prisco ; Natasha natashavita-more.com >>> Subject: Terasem Colloquium >>> I understand that the Colloquium which, given the panel, might well end up being *the* online transhumanist event of 2024 is >>> scheduled for tomorrow. >>> More details can be found here: >>> >>> https://www.turingchurch.com/p/terasm-colloquium-july-7-2024 >>> >>> Enjoy! 
:-) >>> >>> -- >>> Stefano Vaj >>> >>> >>> >>> -- >>> Stefano Vaj >>> >>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From pharos at gmail.com Mon Jul 8 16:49:51 2024 From: pharos at gmail.com (BillK) Date: Mon, 8 Jul 2024 17:49:51 +0100 Subject: [ExI] Astronauts on round trip to Mars Could Experience Kidney Failure Message-ID: It looks like trips to Mars will require more protection for astronauts from cosmic radiation or much shorter travel times. BillK Quotes: A Mission to Mars Could Destroy Astronauts? Kidneys Astronauts might experience organ failure just from living in space. By Cassidy Ward Jun 24, 2024 We?ve known since the Apollo era that spaceflight comes at a cost to the body. Astronauts regularly experience bone and muscle loss, vision problems, kidney stones, and other health issues as a result of exposure to microgravity and higher than normal doses of radiation. All of our current crewed space activities happen in low Earth orbit (LEO), about 250 miles above the surface of the Earth. From that relatively close vantage point, astronauts still receive some protection from the Earth?s magnetic field. Astronauts won?t have the benefit of that field once they venture out into deep space. ?If we don?t develop new ways to protect the kidneys, I?d say that while an astronaut could make it to Mars they might need dialysis on the way back. We know that the kidneys are late to show signs of radiation damage; by the time this becomes apparent it?s probably too late to prevent failure, which would be catastrophic for the mission?s chances of success," said study author Dr. Keith Siew, in a statement. -------------------------------- From brent.allsop at gmail.com Mon Jul 8 17:36:46 2024 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 8 Jul 2024 11:36:46 -0600 Subject: [ExI] Astronauts on round trip to Mars Could Experience Kidney Failure In-Reply-To: References: Message-ID: I've always said the priority should be significant body modification, along the lines of we won't need space suits, to say nothing of significant body repair, before we even venture out there, even to Mars. What is taking so long, is my only question. Come on AGI, cure aging, and space travel issues already. On Mon, Jul 8, 2024 at 10:51?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > It looks like trips to Mars will require more protection for > astronauts from cosmic radiation or much shorter travel times. > BillK > > < > https://www.syfy.com/syfy-wire/a-mission-to-mars-could-destroy-astronauts-kidneys > > > Quotes: > > A Mission to Mars Could Destroy Astronauts? Kidneys > Astronauts might experience organ failure just from living in space. > By Cassidy Ward Jun 24, 2024 > > We?ve known since the Apollo era that spaceflight comes at a cost to > the body. Astronauts regularly experience bone and muscle loss, vision > problems, kidney stones, and other health issues as a result of > exposure to microgravity and higher than normal doses of radiation. > All of our current crewed space activities happen in low Earth orbit > (LEO), about 250 miles above the surface of the Earth. From that > relatively close vantage point, astronauts still receive some > protection from the Earth?s magnetic field. Astronauts won?t have the > benefit of that field once they venture out into deep space. 
> > ?If we don?t develop new ways to protect the kidneys, I?d say that > while an astronaut could make it to Mars they might need dialysis on > the way back. We know that the kidneys are late to show signs of > radiation damage; by the time this becomes apparent it?s probably too > late to prevent failure, which would be catastrophic for the mission?s > chances of success," said study author Dr. Keith Siew, in a statement. > -------------------------------- > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Jul 8 19:08:09 2024 From: pharos at gmail.com (BillK) Date: Mon, 8 Jul 2024 20:08:09 +0100 Subject: [ExI] Astronauts on round trip to Mars Could Experience Kidney Failure In-Reply-To: References: Message-ID: On Mon, 8 Jul 2024 at 18:40, Brent Allsop via extropy-chat wrote: > I've always said the priority should be significant body modification, along the lines of we won't need space suits, to say nothing of significant body repair, before we even venture out there, even to Mars. > > What is taking so long, is my only question. Come on AGI, cure aging, and space travel issues already. > _______________________________________________ The problem with modifying humans for space travel and other planetary environments is that they wouldn't be able to return to live on Earth. It would be a one-way trip. A new species. The AGI that cures aging would also give us AI in robot humanoid bodies that could do space exploration without the need for the supplies that live humans require. BillK From brent.allsop at gmail.com Mon Jul 8 19:23:09 2024 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 8 Jul 2024 13:23:09 -0600 Subject: [ExI] Astronauts on round trip to Mars Could Experience Kidney Failure In-Reply-To: References: Message-ID: right, for sure. I'm thinking about all of the above. I imagine having lots of different Avatar like bodies, including entire space ships, we can move in and out of, or inhabit multiple at the same time... On Mon, Jul 8, 2024 at 1:09?PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, 8 Jul 2024 at 18:40, Brent Allsop via extropy-chat > wrote: > > I've always said the priority should be significant body modification, > along the lines of we won't need space suits, to say nothing of significant > body repair, before we even venture out there, even to Mars. > > > > What is taking so long, is my only question. Come on AGI, cure aging, > and space travel issues already. > > _______________________________________________ > > > The problem with modifying humans for space travel and other planetary > environments is that they wouldn't be able to return to live on Earth. > It would be a one-way trip. A new species. > The AGI that cures aging would also give us AI in robot humanoid > bodies that could do space exploration without the need for the > supplies that live humans require. > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Mon Jul 8 19:40:05 2024 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 8 Jul 2024 15:40:05 -0400 Subject: [ExI] Astronauts on round trip to Mars Could Experience Kidney Failure In-Reply-To: References: Message-ID: What would those bodies do when we're not there? Just idling waiting for us is highly wasteful, but having them do anything while waiting means some other intelligence would be using the body, which our visit would disrupt. Unless and until there is a cheap way to manufacture and store spare bodies, this does not seem likely. Said cheap way would logically come well after the first trips there to set up infrastructure. On Mon, Jul 8, 2024, 3:24?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > right, for sure. > I'm thinking about all of the above. > I imagine having lots of different Avatar like bodies, including entire > space ships, we can move in and out of, or inhabit multiple at the same > time... > > > > On Mon, Jul 8, 2024 at 1:09?PM BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, 8 Jul 2024 at 18:40, Brent Allsop via extropy-chat >> wrote: >> > I've always said the priority should be significant body modification, >> along the lines of we won't need space suits, to say nothing of significant >> body repair, before we even venture out there, even to Mars. >> > >> > What is taking so long, is my only question. Come on AGI, cure aging, >> and space travel issues already. >> > _______________________________________________ >> >> >> The problem with modifying humans for space travel and other planetary >> environments is that they wouldn't be able to return to live on Earth. >> It would be a one-way trip. A new species. >> The AGI that cures aging would also give us AI in robot humanoid >> bodies that could do space exploration without the need for the >> supplies that live humans require. >> >> BillK >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Tue Jul 9 04:47:39 2024 From: giulio at gmail.com (Giulio Prisco) Date: Tue, 9 Jul 2024 06:47:39 +0200 Subject: [ExI] VIDEO of Terasem Colloquium, July 7, 2024 Message-ID: VIDEO of Terasem Colloquium, July 7, 2024. Old and new radically optimistic futurisms including Italian futurism, Russian and modern cosmism, extropy, and e/acc. Speakers: Riccardo Campa, Ben Goertzel, Robin Hanson, James Pethokoukis, Max More. https://www.turingchurch.com/p/video-terasem-colloquium-july-7-2024 From pharos at gmail.com Tue Jul 9 14:42:50 2024 From: pharos at gmail.com (BillK) Date: Tue, 9 Jul 2024 15:42:50 +0100 Subject: [ExI] The trolley problem and autonomous cars Message-ID: I've been chatting to several AIs about this. They all say it is very complicated and still under discussion. Legal systems will have to catch up as cases arise in practice. BillK Here is Claude 3.5 Sonnet - I understand you'd like more detailed explanations on all the points I mentioned regarding how the trolley problem affects the programming of self-driving vehicles. I'll expand on each point: 1. 
Ethical decision-making algorithms: These algorithms need to weigh various factors in milliseconds, such as the number of lives at risk, the probability of different outcomes, and the legal and ethical implications of each action. For example, should the car prioritize passenger safety over pedestrians? How should it handle scenarios where all options lead to some harm? Developers are using techniques from game theory, decision theory, and moral philosophy to create these complex decision-making systems (a toy sketch of such a weighing follows after this list). 2. Value alignment: This involves ensuring that the AI's decision-making aligns with human values. It's challenging because values can vary significantly between cultures and individuals. For instance, some societies might prioritize the safety of the elderly, while others might prioritize children. Researchers are exploring methods like inverse reinforcement learning and value learning to address this challenge. 3. Liability and legal implications: The question of liability in accidents involving self-driving cars is complex. If a self-driving car makes a decision that results in harm, it's not clear who should be held responsible. This could be the manufacturer, the software developer, the car owner, or even the passengers. Legal frameworks are still evolving to address these new scenarios, and it may require new laws and regulations specific to autonomous vehicles. 4. Transparency and public trust: Companies developing self-driving cars need to be open about their ethical decision-making processes to gain public trust. This might involve publishing their ethical frameworks, explaining how their algorithms work (to the extent possible without compromising proprietary information), and engaging in public dialogue about these issues. Some companies are forming ethics boards to guide their decision-making and increase transparency. 5. Regulatory challenges: Governments and international bodies are working to develop standards and regulations for autonomous vehicles. This includes not just safety standards, but also ethical guidelines. For example, Germany has already published ethical guidelines for autonomous vehicles, which include principles like prioritizing human safety over property or animal life. 6. Data collection and analysis: To inform ethical decision-making algorithms, companies are collecting vast amounts of data on human driving behavior, accident scenarios, and public opinions on ethical dilemmas. This involves real-world testing, simulations, and public surveys. The challenge is to collect enough diverse data to cover the wide range of possible scenarios a self-driving car might encounter. 7. Scenario complexity: Real-world ethical dilemmas are often more complex than the trolley problem. They might involve uncertainty about outcomes, multiple parties with different levels of vulnerability, or scenarios where the "right" choice isn't clear. Developers are using advanced simulation technologies to test their systems against a wide range of complex scenarios. 8. Balancing safety and practicality: While ethical considerations are important, the primary goal of self-driving cars is to improve overall road safety. Developers must balance ethical decision-making with practical considerations like traffic flow, fuel efficiency, and passenger comfort. There's also a question of whether making cars too cautious might actually increase risk in certain situations. 9. Consistency vs. adaptability: There's debate about whether all self-driving cars should follow the same ethical framework, or if there should be some level of customization. A consistent approach might be fairer and more predictable, but allowing some adaptation could better reflect diverse values and preferences. Some propose allowing users to choose from a set of pre-approved ethical settings. 10. Public perception and acceptance: How self-driving cars handle ethical dilemmas can significantly impact public willingness to adopt this technology. Studies have shown that people's comfort with self-driving cars can change based on how they believe the cars will behave in ethical dilemmas. Companies and policymakers need to consider public opinion and work to build trust in these systems.
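To make point 1 concrete, here is a toy sketch in Python of an expected-harm comparison (the scenario structure, the probabilities, and the LIFE_WEIGHT constant are purely illustrative assumptions; this describes no manufacturer's or regulator's actual policy):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Outcome:
        probability: float      # chance this outcome occurs if the action is taken
        lives_at_risk: int      # people harmed in this outcome
        property_damage: float  # rough damage scale from 0 to 1

    @dataclass
    class Action:
        name: str
        outcomes: List[Outcome]

    LIFE_WEIGHT = 1000.0  # illustrative: human harm dominates property damage

    def expected_harm(action: Action) -> float:
        # Probability-weighted harm summed over all outcomes of one action.
        return sum(o.probability * (LIFE_WEIGHT * o.lives_at_risk + o.property_damage)
                   for o in action.outcomes)

    def choose(actions: List[Action]) -> Action:
        # Pick the candidate action with the lowest expected harm.
        return min(actions, key=expected_harm)

    brake = Action("brake hard", [Outcome(0.7, 0, 0.2), Outcome(0.3, 1, 0.5)])
    swerve = Action("swerve", [Outcome(0.9, 0, 0.8), Outcome(0.1, 2, 0.9)])
    print(choose([brake, swerve]).name)  # "swerve", with these made-up numbers

Everything hard lives in what this sketch hides: where the probabilities come from, who sets LIFE_WEIGHT, and whether collapsing lives and property into one number is defensible at all, which is what the points above are about.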
9. Consistency vs. adaptability: There's debate about whether all self-driving cars should follow the same ethical framework, or if there should be some level of customization. A consistent approach might be fairer and more predictable, but allowing some adaptation could better reflect diverse values and preferences. Some propose allowing users to choose from a set of pre-approved ethical settings.

10. Public perception and acceptance: How self-driving cars handle ethical dilemmas can significantly impact public willingness to adopt this technology. Studies have shown that people's comfort with self-driving cars can change based on how they believe the cars will behave in ethical dilemmas. Companies and policymakers need to consider public opinion and work to build trust in these systems.

Each of these points represents a complex challenge in the development of self-driving cars, and they're all interconnected. Addressing these issues requires collaboration between technologists, ethicists, policymakers, and the public.
---------------------
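The "weighing" in point 1 of the Claude summary above is, at bottom, expected-utility minimization over a handful of candidate maneuvers. A minimal sketch in Python - every number here is invented for illustration, and a real system would estimate probabilities and harm scores online from perception and prediction modules:

def expected_harm(action):
    """Probability-weighted harm summed over an action's possible outcomes."""
    return sum(p * harm for p, harm in action["outcomes"])

# Candidate maneuvers with (probability, harm score) outcome pairs.
# All numbers are invented placeholders, not calibrated estimates.
actions = [
    {"name": "brake in lane", "outcomes": [(0.7, 0.0), (0.3, 2.0)]},
    {"name": "swerve left",   "outcomes": [(0.9, 0.0), (0.1, 5.0)]},
    {"name": "swerve right",  "outcomes": [(0.5, 0.0), (0.5, 1.2)]},
]

best = min(actions, key=expected_harm)
print(f"chosen: {best['name']} (expected harm {expected_harm(best):.2f})")

Everything contentious lives in the harm column: who gets to set those weights is exactly the value-alignment problem of point 2 and the customization debate of point 9.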
From spike at rainier66.com Tue Jul 9 18:02:13 2024
From: spike at rainier66.com (Gregory Jones)
Date: Tue, 9 Jul 2024 19:02:13 +0100
Subject: [ExI] great britain mystery

BillK lives in such a civilized place! I have been on his side of the Atlantic for ten days and haven't heard a single gunshot. I don't know how the heck their rival drug gangs resolve territorial disputes and casual insults. Sword fights?

(Hey that sounds interesting to watch.)

spike

From atymes at gmail.com Tue Jul 9 18:31:55 2024
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 9 Jul 2024 14:31:55 -0400
Subject: [ExI] great britain mystery

I hear they're less frequent, but larger conflicts over there. Head over to eastern Ukraine right now and you might find a territorial dispute in progress.

On Tue, Jul 9, 2024 at 2:03 PM Gregory Jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> BillK lives in such a civilized place! I have been on his side of the
> Atlantic for ten days and haven't heard a single gunshot. [...]

From spike at rainier66.com Wed Jul 10 09:57:36 2024
From: spike at rainier66.com (Gregory Jones)
Date: Wed, 10 Jul 2024 10:57:36 +0100
Subject: [ExI] great britain mystery

Adrian you live in a more upscale area than I do. I am guessing from over there on the toney west side of the valley, you might still be close enough to hear gunfire on a still winter evening but far enough to not hear the anguishing shrieks of the dying.

spike

On Tue, Jul 9, 2024, 7:33 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> I hear they're less frequent, but larger conflicts over there. Head over
> to eastern Ukraine right now and you might find a territorial dispute in
> progress. [...]

From pharos at gmail.com Wed Jul 10 22:17:26 2024
From: pharos at gmail.com (BillK)
Date: Wed, 10 Jul 2024 23:17:26 +0100
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer

The article provides the details for each prediction.
BillK

<https://nextbigideaclub.com/magazine/merging-ai-mean-humanity-according-futurist-ray-kurzweil-bookbite/50146/>

Quotes:
1. Information technology is advancing at an exponential rate.
2. Artificial intelligence will reach and immediately soar past human-level intelligence by 2029.
3. We will solve aging in the next five to ten years by applying AI to medicine.
4. AI is about to leap from transforming the digital world to transforming the physical world.
5. By 2045, we will reach the Singularity and expand our intelligence a millionfold.
--------------------------------

From atymes at gmail.com Thu Jul 11 01:09:47 2024
From: atymes at gmail.com (Adrian Tymes)
Date: Wed, 10 Jul 2024 21:09:47 -0400
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer

#2 falls flat. From the explanation:

"AI has already overtaken humans in many ways. You can ask a large language model any question about any subject, and it will give you an intelligent response. Who on earth can do the same?"

Most people with access to Google - which is not itself a large language model.

That is the only actual evidence offered. #3 and #5 are similarly shaky, though not as direly.

On Wed, Jul 10, 2024 at 6:19 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> The article provides the details for each prediction.
> BillK [...]
From pharos at gmail.com Thu Jul 11 22:47:44 2024
From: pharos at gmail.com (BillK)
Date: Thu, 11 Jul 2024 23:47:44 +0100
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer

On Thu, 11 Jul 2024 at 02:12, Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> #2 falls flat. From the explanation:
> [...]

I expect there is more support for his predictions in his book than in a short teaser article. :)
I discussed the article with Claude, and Claude initially thought that the predictions were very optimistic and unlikely to happen within the proposed timescales. Then I followed up and discussed exponential AI improvements and AI recursive self-improvement with Claude and asked him to reconsider his review. This provided more support for Kurzweil. The forecasts really depend on how big an effect exponential AI improvements will have on developments.
BillK

Claude 3.5 Sonnet:
You raise an excellent point, and I appreciate the opportunity to reconsider. You're right to emphasize the potential for exponential acceleration, which is indeed a key aspect of Kurzweil's predictions that I may have underemphasized in my previous response. Let's re-examine this with a stronger focus on the possibility of exponential progress:

1. AI-assisted acceleration: Kurzweil's core argument is that AI improvements will become faster as AI itself is used to develop better AI, potentially leading to exponential rather than linear growth. This could dramatically shorten timelines for various breakthroughs.

2. Compounding effects: If we consider that advancements in one field could rapidly accelerate progress in others (e.g., AI improving chip design, leading to more powerful AI, which then accelerates biotech research), the pace of overall progress could indeed be much faster than linear projections suggest.

3. Recent AI breakthroughs: The rapid advancements we've seen in areas like large language models, protein folding (AlphaFold), and image generation have occurred faster than many experts anticipated. This lends some credence to the idea of accelerating progress.

4. Potential for sudden leaps: Exponential growth can lead to sudden, dramatic improvements that may seem to come "out of nowhere" from a linear perspective. This could make some of Kurzweil's seemingly overly optimistic predictions more plausible.

Reconsidering the predictions:

1. AI reaching human-level intelligence by 2029: While still ambitious, exponential progress could make this more feasible than I initially suggested. Breakthroughs in areas like unsupervised learning or artificial general intelligence could accelerate rapidly.

2. Solving aging in 5-10 years: This remains highly optimistic, but exponential progress in AI-assisted biotechnology and medicine could lead to more significant advances in this timeframe than previously thought.

3. Longevity Escape Velocity by 2035: With exponential progress, especially if combined with breakthroughs in nanotechnology and genetic engineering, this could become more achievable.
4. AI transforming the physical world: Exponential advancement could indeed lead to much faster adoption and implementation of AI-driven solutions in energy, manufacturing, and food production.

5. The Singularity by 2045: While still speculative, exponential progress in neurotechnology, AI, and nanotechnology could bring us closer to this vision than a linear perspective would suggest.

Revised assessment:
Considering the potential for exponential rather than linear progress, Kurzweil's predictions become more plausible, though still highly ambitious. The compounding effects of AI-driven advancement across multiple fields could indeed lead to transformative changes on a timeline closer to what Kurzweil suggests.

However, it's important to note that even exponential growth can face limiting factors or unexpected obstacles. Additionally, societal, ethical, and regulatory considerations may influence the pace of adoption of these technologies.

In conclusion, while I maintain that some of Kurzweil's specific predictions face significant challenges, the possibility of exponential AI-assisted acceleration makes his overall vision of rapid, transformative change more credible than I initially assessed. This perspective underscores the importance of seriously considering the potential for faster-than-expected technological progress and its implications for society.
---------------------------
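For scale, prediction 5's "millionfold" is about 2^20, so on a smooth exponential it amounts to one doubling of effective intelligence per year for the twenty years from 2025 to 2045. A quick back-of-envelope check, where the growth rates are assumptions for illustration rather than data:

# 2**20 = 1,048,576, so "a millionfold by 2045" is ~20 doublings in ~20 years.
for years, doublings_per_year in [(20, 1.0), (20, 0.5), (40, 0.5)]:
    factor = 2 ** (years * doublings_per_year)
    print(f"{years} years at {doublings_per_year} doublings/year -> {factor:,.0f}x")

Halve the doubling rate and the same millionfold takes forty years instead of twenty. That sensitivity is why the disagreement above turns almost entirely on how fast the AI-improves-AI feedback loop actually runs.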
From efc at swisscows.email Thu Jul 11 20:39:43 2024
From: efc at swisscows.email (efc at swisscows.email)
Date: Thu, 11 Jul 2024 22:39:43 +0200 (CEST)
Subject: [ExI] great britain mystery
Message-ID: <42612c91-1d94-e65d-2f99-ebbf1dd94486@swisscows.email>

I know in Sweden they use grenades from time to time; don't know if that is the favourite modus operandi of the UK thugs though. They do seem more civilized with their swords and all. ;)

On Wed, 10 Jul 2024, Gregory Jones via extropy-chat wrote:

> Adrian you live in a more upscale area than I do. I am guessing from over there on the toney west side of the valley, you might
> still be close enough to hear gunfire on a still winter evening but far enough to not hear the anguishing shrieks of the dying.
> [...]

From efc at swisscows.email Thu Jul 11 20:41:08 2024
From: efc at swisscows.email (efc at swisscows.email)
Date: Thu, 11 Jul 2024 22:41:08 +0200 (CEST)
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer
Message-ID: <73114870-a0f0-ece2-5218-736fac59ccdc@swisscows.email>

I don't believe 3 will happen within the time frame. I would be happy to be wrong though! =)

On Wed, 10 Jul 2024, Adrian Tymes via extropy-chat wrote:

> #2 falls flat. From the explanation:
> [...]

From hkeithhenson at gmail.com Fri Jul 12 02:16:13 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 11 Jul 2024 19:16:13 -0700
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer

On Thu, Jul 11, 2024 at 5:29 PM efc--- via extropy-chat wrote:
>
> I don't believe 3 will happen within the time frame. I would be happy to
> be wrong though! =)

If you are signed up for cryonics, you may already be on Longevity Escape Velocity.

Keith

> On Wed, 10 Jul 2024, Adrian Tymes via extropy-chat wrote:
> > #2 falls flat. From the explanation: [...]
From pharos at gmail.com Sun Jul 14 23:22:56 2024
From: pharos at gmail.com (BillK)
Date: Mon, 15 Jul 2024 00:22:56 +0100
Subject: [ExI] AI and autonomous warfare

This long article discusses the increasing problem of autonomous weapons in warfare.
<https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-moment-autonomous-weapons-enter-the-battlefield>

Quote:
AI's "Oppenheimer moment": autonomous weapons enter the battlefield
The military use of AI-enabled weapons is growing, and the industry that provides them is booming
Nick Robins-Early, Sun 14 Jul 2024 17.00 BST

What has changed, researchers say, is both increased public attention to everything AI and genuine breakthroughs in the technology. Whether a weapon is truly "autonomous" has always been the subject of debate. Experts and researchers say autonomy is better understood as a spectrum rather than a binary, but they generally agree that machines are now able to make more decisions without human input than ever before.
-------------------------------------

I asked Perplexity AI to review this long article.
BillK

Perplexity AI -

The article from The Guardian discusses the increasing integration of artificial intelligence (AI) into military weapons and operations, highlighting what some experts are calling AI's "Oppenheimer moment" - a reference to the development of the atomic bomb. This trend raises significant ethical, legal, and strategic implications for modern warfare. Here are the key points and implications:

1. **Rapid adoption of AI in warfare**: The article describes how AI-equipped weapons are moving from concept to reality, with examples including Ukrainian drones striking Russian oil facilities, American AI systems guiding airstrikes, and Israeli forces using AI for targeting in Gaza. This rapid adoption suggests that AI is becoming a critical component of modern military strategy.

2. **AI arms race**: The development of AI weapons has sparked a global arms race, with nations and tech companies investing billions in AI-driven military technologies. This competition could lead to accelerated development of increasingly sophisticated autonomous weapons systems, potentially outpacing ethical considerations and regulatory efforts.

3. **Ethical concerns**: The use of AI in warfare raises significant ethical questions, particularly regarding civilian casualties, accountability, and the potential for unintended consequences. The article notes that the lack of transparency and accountability in AI weapons development exacerbates these concerns.

4. **Regulatory challenges**: The rapid advancement of AI technology in warfare is outpacing regulatory efforts. The article highlights the need for clear regulations and ethical frameworks to govern the development and deployment of AI technologies in conflict zones.
5. **Economic implications**: The AI arms race is proving lucrative for tech and defense firms, with companies like Anduril, Palantir, Helsing, and Elbit Systems securing substantial contracts. This economic incentive could further accelerate the development and adoption of AI weapons.

6. **Changing nature of warfare**: AI-enabled weapons have the potential to fundamentally reshape how wars are fought. The article mentions initiatives like the Pentagon's Replicator Initiative for unmanned combat drones and AI-enabled fighter jets, suggesting a shift towards more automated and potentially faster-paced conflicts.

7. **International tensions**: The global rush towards AI warfare could exacerbate international tensions and potentially lead to new forms of conflict. The lack of international consensus on the regulation of AI weapons adds to this concern.

8. **Human control and oversight**: There is a growing call for maintaining human oversight in decision-making processes related to AI weapons. This reflects concerns about the ethical implications of fully autonomous weapons and the need for human accountability in warfare.

The implications of these developments are profound. As AI becomes more integrated into military operations, it could lead to faster, more efficient, and potentially less human-centric warfare. This raises questions about the nature of future conflicts, the role of human decision-making in war, and the potential for escalation due to AI-driven misunderstandings or errors.

Moreover, the ethical implications of delegating life-and-death decisions to AI systems are significant. There are concerns about the ability of AI to distinguish between combatants and civilians, potentially leading to increased civilian casualties.

The article underscores the urgent need for international cooperation to establish clear guidelines and regulations for the development and use of AI in warfare. Without such frameworks, there is a risk of an uncontrolled arms race that could have devastating consequences for global security and stability.

In conclusion, while AI has the potential to revolutionize military capabilities, it also presents significant ethical, legal, and strategic challenges that need to be carefully addressed to ensure responsible development and use of these technologies in warfare.
-----------------------

From pharos at gmail.com Tue Jul 16 14:31:44 2024
From: pharos at gmail.com (BillK)
Date: Tue, 16 Jul 2024 15:31:44 +0100
Subject: [ExI] AI and autonomous warfare

On Mon, 15 Jul 2024 at 00:22, BillK wrote:
>
> This long article discusses the increasing problem of autonomous weapons in warfare.
> [...]
As Perplexity AI insisted on the need for international cooperation to avoid an AI weapons arms race, I pointed out that while a war was in progress there would be little likelihood of international cooperation. Nations will not restrict weapon developments during wartime. War plus AI seems to indicate a bleak future.

Perplexity AI grudgingly agreed, but then still added a long section about the urgent need for international cooperation. Below is the first part of the reply (with the long wishful-thinking cooperation section removed):
BillK

Perplexity AI:
The continuation of the AI weapons arms race during ongoing conflicts is a significant concern, as nations prioritize military superiority over cooperative regulation. Here are the key points and implications based on the provided search results:

1. Intensification of the AI Arms Race

Current State of Affairs:
The AI arms race is characterized by rapid advancements in military technology among major powers like the U.S., China, and Russia. This competition is driven by the desire for strategic and tactical advantages on the battlefield, leading to the development of increasingly sophisticated AI-driven systems such as drones and autonomous vehicles.

Implications:

Escalation of Conflicts:
The deployment of AI weapons can lead to faster and more automated warfare, potentially escalating conflicts due to reduced decision-making times and increased reliance on AI for strategic assessments.

Ethical and Legal Challenges:
The use of AI in warfare raises significant ethical and legal questions, including accountability for AI-driven actions and the potential for increased civilian casualties.

2. Challenges to International Cooperation

War and Cooperation:
During active conflicts, nations are less likely to engage in cooperative efforts to regulate AI weapons, as their immediate focus is on gaining military advantages. This competitive environment makes it difficult to establish and enforce international agreements or treaties aimed at controlling the development and use of AI in warfare.

Implications:

Lack of Regulation:
The absence of international cooperation can lead to a lack of effective regulation, increasing the risks associated with the deployment of AI weapons.

Proliferation of AI Weapons:
Without international agreements, there is a higher likelihood of AI weapons proliferating, potentially falling into the hands of non-state actors or being used in ways that violate international humanitarian law.
-----------------------

From postmowoods at gmail.com Tue Jul 16 15:12:46 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 16 Jul 2024 09:12:46 -0600
Subject: [ExI] The "Only existential threat"

So I was just reviewing: Deja Vu (But Worse) - Biden vs. Trump ft. "Weird Al" Yankovic, a comedic take on the debate...
https://youtu.be/6fNWHbnIn0g?si=9ZEogsjK-zVL1-yk

And at 2:06 Biden says "The only existential threat to humanity is climate change". Did he really say that?
I'd honestly put AI above climate change, at least potentially before climate change. But to say that it is the ONLY existential threat seems like limited imagination at the very least. Don't we still have nuclear weapons? Space weather? Running out of something essential? Russians? Supervolcanoes? Cats and dogs sleeping together?

I'll be so glad when this insanity is behind us and we have an actual AI as president.

-Kelly

From postmowoods at gmail.com Tue Jul 16 15:32:19 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 16 Jul 2024 09:32:19 -0600
Subject: [ExI] Silicon Valley Rushes Toward AI Warfare

Well... the Crimean Bridge is too far away to attack with artillery, and seems to still be standing after being a prime target for a couple of years.

-Kelly

On Sun, Jul 7, 2024 at 4:17 AM Adrian Tymes via extropy-chat wrote:
>
> Bridges? You mean the big, static, immobile things that can be attacked by artillery, where artillery shells are generally even cheaper than drones with an equivalent explosive payload?
>
> On Sun, Jul 7, 2024, 3:57 AM Kelly Anderson via extropy-chat wrote:
>>
>> Ukrainians have really shown the way forward with autonomous naval and
>> aerial drones. It's incredibly asymmetric. The worst weapon in the
>> Russians' arsenal these days seems to be their tanks. It's just too
>> easy to use a $3,000 autonomous or semi-autonomous drone weapon to
>> destroy a $4 million tank. Similarly, $300,000 naval drones are
>> sinking $40 million ships either by themselves or in packs. The
>> Russian navy caught in the Black Sea are fish in a barrel. Apparently
>> bridges are harder to attack with drones up to this point... which
>> leaves an interesting hole for the Silicon Valley war gangs to fill.
>> Any autonomous bridge attack mechanism would certainly be something
>> I'd want to invest in right now.
>>
>> -Kelly
>>
>> On Tue, Jul 2, 2024 at 9:44 AM BillK via extropy-chat wrote:
>> >
>> > Silicon Valley Rushes Toward Automated Warfare That Deeply Incorporates AI.
>> > [...]
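The cost asymmetry in the figures Kelly quotes is stark enough to survive very pessimistic assumptions. Taking his prices at face value, and treating the hit rates below as pure guesses inserted to show how little the conclusion depends on them:

# Kelly's figures: $3,000 drone vs $4,000,000 tank.
drone_cost, tank_cost = 3_000, 4_000_000

for hit_rate in (1.0, 0.25, 0.05):
    spend = drone_cost / hit_rate  # expected drone spend per tank destroyed
    print(f"hit rate {hit_rate:.0%}: ${spend:,.0f} of drones per tank "
          f"-> {tank_cost / spend:,.0f}:1 cost exchange")

Even if only one drone in twenty connects, the attacker trades $60,000 for $4 million.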
From postmowoods at gmail.com Tue Jul 16 16:03:18 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 16 Jul 2024 10:03:18 -0600
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer

Having read the book, the argument for anti-aging is that the current advances in AI with protein folding, biological simulation and personalized medicine based on that kind of approach will turn medicine from a "throw it at the wall and see if it sticks" kind of approach to testing proteins we find in the wild into the kind of information technology that the law of accelerating returns can be applied to. While that is an interesting hypothesis, there was not a detailed statement that I can recall relating to where and when we are on that curve currently. If indeed AI turns anti-aging tech into something that the LOAR can be applied to, there is still the question of whether we are far enough up that curve currently to be able to accomplish something meaningful in 5 years. There is, in particular, an overly optimistic idea that the FDA will be able to speed things up given that testing can be done in silico, rather than in real people in the real world. I don't see the FDA magically becoming something it is not now. Transhumanism is our thing... but do we really believe in Transgovernmentalism?

So while I get that Ray's hopes are that he will not die any time soon, and that might shade his perspective as his own age goes up... maybe he's trying to convince people to create a future. Like they say, to create a future you have to envision it first. And Ray Bradbury, Isaac Asimov and others, including Gene Roddenberry and Ray Kurzweil, have helped us envision things; some stuff comes true faster than others.

I really do think the LOAR is a real thing. The leap of faith is that AI will turn medicine into one of the things that the LOAR can be applied to. Personally, it seems logical that it could, and almost certainly will at some point. I do wonder about the basis for this particular timeline, as this most recent book wasn't nearly as breathtaking as TSIN (2005)...

-Kelly

On Wed, Jul 10, 2024 at 4:18 PM BillK via extropy-chat wrote:
>
> The article provides the details for each prediction.
> BillK
> [...]
From postmowoods at gmail.com Tue Jul 16 16:12:31 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 16 Jul 2024 10:12:31 -0600
Subject: [ExI] AI and autonomous warfare

It would sure be nice if we could devise some sort of "Geneva Conventions" for AI, but we can't even get a stable safety board inside of a company to make sure they aren't accidentally going to mess us up. If you can't cooperate for safety inside of a single company, how much more difficult would it be for nations in conflict to do the same? I can't imagine what rules would even look like, other than the ultra-simple "there has to be a human in the chain to pull the trigger", but even that's already been blown through in Ukraine by both sides. Besides, humans slow things down and thus ruin your potential for surprise.

Looking at the recent assassination attempt, with its collateral damage, it seems that if there were some sort of crowd-sourced information base that was rapid enough, a life could have been saved. Also, why weren't there drones with AI flying over the area? If we can't protect a presidential candidate with simply implemented technology, what hope do we have of getting AI regulations right?

-Kelly

On Tue, Jul 16, 2024 at 8:33 AM BillK via extropy-chat wrote:
>
> On Mon, 15 Jul 2024 at 00:22, BillK wrote:
> >
> > This long article discusses the increasing problem of autonomous weapons in warfare.
> > [...]
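Kelly's "ultra-simple" rule does have a natural software shape: the autonomy stack may nominate targets, but nothing fires without an explicit, logged human decision, with refusal as the default. A minimal sketch, in which the names, the threshold, and the whole structure are hypothetical rather than drawn from any real system:

from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    confidence: float  # machine's confidence this is a valid military target

def human_authorizes(e: Engagement) -> bool:
    """Block until an operator decides; anything but an explicit yes is a no."""
    answer = input(f"Engage {e.target_id}? (confidence {e.confidence:.2f}) [y/N] ")
    return answer.strip().lower() == "y"

def engage(e: Engagement) -> str:
    if e.confidence < 0.90:      # the machine may not even nominate below this
        return "not nominated"
    if not human_authorizes(e):  # the human in the chain
        return "vetoed"
    return "engaged"             # the only path that fires

As Kelly notes, the catch is that the blocking call is exactly the latency a combatant under fire is tempted to remove.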
From hkeithhenson at gmail.com Tue Jul 16 17:12:58 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Tue, 16 Jul 2024 10:12:58 -0700
Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer

> I don't see the FDA magically becoming something it is not now.

I don't think it matters much. The most significant antiaging substances are not regulated to any extent.

Re your thoughts on LOAR, that seems to be the current situation with respect to AI. I think the key factor is "Does it make money?", and once that happens it attracts more and more money.

Toward the end of Drexler's Radical Abundance, he discusses how the path to his vision was derailed. Had there been a money-making product, I think the history would have been very different.

Keith
On Tue, Jul 16, 2024 at 9:04 AM Kelly Anderson via extropy-chat wrote:
>
> Having read the book, the argument for anti-aging is that the current
> advances in AI with protein folding, biological simulation and
> personalized medicine based on that kind of approach will turn
> medicine from a "throw it at the wall and see if it sticks" kind of
> approach into the kind of information technology that the law of
> accelerating returns can be applied to. [...]

From hkeithhenson at gmail.com Tue Jul 16 17:22:08 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Tue, 16 Jul 2024 10:22:08 -0700
Subject: [ExI] AI and autonomous warfare

> getting AI regulations right.

Can't be done. We have no idea of what regulations are needed, and in any case the AI upsides make a compelling case for profit (if nothing else) to continue as fast as possible.

Keith
On Tue, Jul 16, 2024 at 9:13 AM Kelly Anderson via extropy-chat wrote:
>
> It would sure be nice if we could devise some sort of "Geneva
> Conventions" for AI, but we can't even get a stable safety board
> inside of a company to make sure they aren't accidentally going to
> mess us up. [...]
From hkeithhenson at gmail.com Tue Jul 16 23:07:54 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Tue, 16 Jul 2024 16:07:54 -0700
Subject: [ExI] AI and autonomous warfare

The problem is humans, not AIs.

We have evolved to have psychological traits for wars simply because genes for war survived bad times about 40% better than the no-war alternative. We are stuck with our genetics.

Keith

On Tue, Jul 16, 2024 at 7:33 AM BillK via extropy-chat wrote:
>
> On Mon, 15 Jul 2024 at 00:22, BillK wrote:
> >
> > This long article discusses the increasing problem of autonomous weapons in warfare.
> > [...]
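Keith's 40% figure implies startlingly fast selection, which is worth seeing on paper. A toy haploid replicator model - the 1.4x survival ratio is his number; the starting frequency, and the worst-case assumption that every generation is a "bad time", are mine:

# One locus, two variants; carriers of the war trait survive bad times 1.4x as well.
p = 0.01                  # assumed initial frequency of the war-trait variant
w_war, w_none = 1.4, 1.0  # relative survival (Keith's ~40% edge)

generations = 0
while p < 0.99:
    mean_fitness = p * w_war + (1 - p) * w_none
    p = p * w_war / mean_fitness  # standard selection/replicator update
    generations += 1

print(f"1% -> 99% in {generations} generations")  # under thirty at this rate

Even diluted by long stretches of good times, a selection pressure that strong fixes a trait in an evolutionary eyeblink, which is the force of "we are stuck with our genetics."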
From postmowoods at gmail.com Wed Jul 17 15:59:10 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Wed, 17 Jul 2024 09:59:10 -0600
Subject: [ExI] AI and autonomous warfare

The good news is that we can maybe choose to not put those particular characteristics into our AIs. It does seem possible that we will choose another, less safe, way. Stupid Putin anyway. He couldn't have picked a worse time to do what he's doing. Makes the likelihood of a bad AI outcome seem more probable.

-Kelly

On Tue, Jul 16, 2024 at 5:09 PM Keith Henson via extropy-chat wrote:
>
> The problem is humans, not AIs.
>
> We have evolved to have psychological traits for wars simply because
> genes for war survived bad times about 40% better than the no-war
> alternative. We are stuck with our genetics. [...]
From hkeithhenson at gmail.com Wed Jul 17 16:32:45 2024 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 17 Jul 2024 09:32:45 -0700 Subject: [ExI] AI and autonomous warfare In-Reply-To: References: Message-ID: On Wed, Jul 17, 2024 at 9:00 AM Kelly Anderson via extropy-chat wrote: > > The good news is that we can maybe choose to not put those particular > characteristics into our AIs. About 20 years ago there was discussion about copying human brains as a route to AI. I discouraged this idea as much as I could. Hard to say if I had any influence, but the idea has not been discussed much in recent years. > It does seem possible that we will > choose another, less safe, way. Stupid Putin anyway. He couldn't have > picked a worse time to do what he's doing. Makes the likelihood of a > bad AI outcome seem more probable. One of the psychological traits I think gets selected along with the traits for war is irrational optimism that your side will win the war. Humans can be rational, but this trait is turned off in the leadup to war. Keith > -Kelly > > On Tue, Jul 16, 2024 at 5:09 PM Keith Henson via extropy-chat > wrote: > > > > The problem is humans, not AIs.
> [snip] _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From atymes at gmail.com Wed Jul 17 16:46:12 2024 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 17 Jul 2024 12:46:12 -0400 Subject: [ExI] AI and autonomous warfare In-Reply-To: References: Message-ID: On Wed, Jul 17, 2024 at 12:34 PM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > One of the psychological traits I think gets selected along with the > traits for war is irrational optimism that your side will win the war. > Humans can be rational, but this trait is turned off in the leadup to > war. Phrases such as "What we really need is a short, victorious war" are an evergreen meme among those who study the politics of crumbling empires. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Jul 17 17:15:35 2024 From: pharos at gmail.com (BillK) Date: Wed, 17 Jul 2024 18:15:35 +0100 Subject: [ExI] AI and autonomous warfare In-Reply-To: References: Message-ID: On Wed, 17 Jul 2024 at 17:48, Adrian Tymes via extropy-chat wrote: > > Phrases such as "What we really need is a short, victorious war" are an evergreen meme among those who study the politics of crumbling empires. > _______________________________________________ Nice quote! The AIs really liked it! :) BillK From Claude 3.5 Sonnet: This phrase encapsulates a cynical political strategy often associated with declining powers or regimes facing internal strife. Let's break it down and discuss its implications: 1. Origin and context: The specific quote "What we really need is a short, victorious war" is often attributed to Vyacheslav von Plehve, the Russian Interior Minister prior to the Russo-Japanese War of 1904-1905.
However, the sentiment behind it has been expressed in various forms throughout history. 2. Political strategy: The idea suggests that a quick military victory could: - Distract the population from domestic problems - Rally people around a common cause (nationalism) - Boost morale and national pride - Consolidate power for the ruling elite 3. "Evergreen meme": The phrase's enduring relevance in political discourse suggests that this strategy has been employed repeatedly throughout history. It's considered an "evergreen meme" because it keeps resurfacing as a recognizable pattern. 4. "Crumbling empires": This strategy is often associated with declining powers because: - They may face internal unrest or economic troubles - Leaders might seek to project strength externally to mask internal weaknesses - There's a desperation to maintain status and power 5. Historical examples: - Argentina's invasion of the Falkland Islands in 1982 - The United States' invasion of Grenada in 1983 - Russia's annexation of Crimea in 2014 6. Risks and consequences: While the strategy aims for a "short, victorious war," it often backfires: - Wars can be unpredictable and may not end quickly or victoriously - Failed military adventures can accelerate a regime's decline - It can lead to international isolation and sanctions 7. Modern relevance: The phrase remains pertinent in contemporary geopolitics, serving as a warning sign for analysts and historians when assessing the motivations behind military actions, especially those initiated by nations facing internal challenges. This phrase encapsulates a complex interplay of political desperation, nationalist fervor, and often misguided strategic thinking. It serves as a cautionary tale in international relations and highlights the dangers of using external conflict to solve internal problems. ------------------------- From atymes at gmail.com Wed Jul 17 17:27:19 2024 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 17 Jul 2024 13:27:19 -0400 Subject: [ExI] AI and autonomous warfare In-Reply-To: References: Message-ID: The AI demonstrates good understanding of the phrase - but then, to do so it only needs to summarize stuff that's been widely documented for a long time, which is something the AI excels at. On Wed, Jul 17, 2024 at 1:17 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Nice quote! The AIs really liked it! :) > BillK > > From Claude 3.5 Sonnet: > [snip] > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Jul 17 19:53:45 2024 From: pharos at gmail.com (BillK) Date: Wed, 17 Jul 2024 20:53:45 +0100 Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer In-Reply-To: References: Message-ID: On Tue, 16 Jul 2024 at 18:15, Keith Henson via extropy-chat wrote: > I don't think it matters much. The most significant antiaging > substances are not regulated to any extent; > > Re your thoughts on LOAR, that seems to be the current situation with > respect to AI. I think the key factor is "Does it make money" and > once that happens it attracts more and more money. Toward the end of > Drexler's Radical Abundance, he discusses how the path to his vision > was derailed. Had there been a money-making product, I think the > history would have been very different. > > Keith >_______________________________________________ Another review, less enthusiastic about Kurzweil's predictions. BillK Quotes: This is the singularity as Kurzweil imagines it, when "the AIs will become part of us, and thus it is we who will be doing those things." If Kurzweil is even remotely right, then 10 years from now our lives - our very society - will be totally unrecognizable. ------------------------ From giulio at gmail.com Thu Jul 18 07:38:32 2024 From: giulio at gmail.com (Giulio Prisco) Date: Thu, 18 Jul 2024 09:38:32 +0200 Subject: [ExI] Why has Elon Musk endorsed Donald Trump? Message-ID: Some thoughts... https://x.com/giulioprisco/status/1813837547358802239 From atymes at gmail.com Thu Jul 18 11:45:34 2024 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 18 Jul 2024 07:45:34 -0400 Subject: [ExI] Why has Elon Musk endorsed Donald Trump?
In-Reply-To: References: Message-ID: I rather suspect it is 2 prime: Elon, like most of Trump's big name supporters, thinks he has negotiated favorable treatment for his interests with Trump. However, should Trump win, Trump will forget about such earlier promises. Elon's success or failure in a Trump administration will depend on his post-election negotiation of politics, a process that will continue until Elon gives up on Trump or Trump is out of office, one way or another. On Thu, Jul 18, 2024, 3:40 AM Giulio Prisco via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Some thoughts... > > https://x.com/giulioprisco/status/1813837547358802239 > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From postmowoods at gmail.com Thu Jul 18 12:55:22 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Thu, 18 Jul 2024 06:55:22 -0600 Subject: [ExI] Kurzweil shares five key insights from his new book, The Singularity Is Nearer In-Reply-To: References: Message-ID: It's interesting to note that the author of this paper doesn't even bother to look at the decades during which hundreds of his past predictions have come to pass. I myself can't believe that vertical gardening is going to be as big as he claims, but maybe that's my lack of vision. On the other hand, if my predictions about soil erosion and degradation are correct we may HAVE to do that. -Kelly On Wed, Jul 17, 2024 at 1:54 PM BillK via extropy-chat wrote: > [snip] > Another review, less enthusiastic about Kurzweil's predictions. > BillK From avant at sollegro.com Fri Jul 19 12:50:13 2024 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 19 Jul 2024 05:50:13 -0700 Subject: [ExI] Big Sister- Youtuber has had her identity hijacked by AI Message-ID: <31fed068052dd9ed0398e0d552e17f3a@sollegro.com> In a preview of the strange future to come: This beautiful Ukrainian woman found 35 accounts on Youtube using AI deepfakes of her face and voice to espouse political views that she does not hold. The Chinese accounts even have her speaking in Mandarin, a language that she does not understand.
In a sad twist of fate, some of the deepfake accounts are even making money using her likeness, while she herself has yet to be able to monetize Youtube successfully. https://www.reddit.com/r/interestingasfuck/comments/1e6wahc/ukranian_youtuber_discovered_she_was_cloned_to/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button AI is taking identity theft to a whole new level. Stuart LaForge From pharos at gmail.com Fri Jul 19 18:06:28 2024 From: pharos at gmail.com (BillK) Date: Fri, 19 Jul 2024 19:06:28 +0100 Subject: [ExI] LLMs Intelligence Message-ID: I came across a comment complaining that LLMs were "only" stochastic parrots doing a type of auto-complete. So what happens when an LLM reviews the comment? BillK Question: Can you review and criticize this article - Is there any hard evidence that LLMs "understand" anything? When ChatGPT first made headlines, I gave it a try with an open mind. I tried various lines of conversation. I was experimenting with seeing if it could be manipulated. And generally, I poked at it for hours, trying to determine whether or not there was any hint of a "mind" hidden in the Large Language Model. In the end, I concluded that it was impressive technology, but that it clearly was just a next-gen chat bot. There was nothing "intelligent" about it in the way that animals demonstrate intelligence. In the months I've used it and other LLMs for various purposes, this opinion has not shifted. Again, I'm very open to the idea that there is a ghost in the machine. I occasionally still try to see if I can demonstrate that an LLM truly has a theory of mind, or any kind of "understanding" of the information it gives. But alas, it seems obvious that it just doesn't. And yet, I see posts from people who use LLMs for things like personal research, claiming that these models somehow "understand the Internet" or "understand" a piece of research that it summarized. But when shown these examples, I'm thoroughly unimpressed. It still looks like the output of a very sophisticated auto-complete, not some kind of intelligence that in any way grasps the concepts it's conveying. If these LLMs are truly showing intelligence, then where are the conversations where they combine novel information and then demonstrate that they "get" it? Even bees can be taught to solve certain types of puzzles in a general sense, in a way that clearly shows that they have a mind and can reason about some things. I'm not looking for Einstein or even a kindergarten student level of intelligence. Ant level intelligence would be sufficient. Just something where I can't poke at it and conclude, yet again, that it's just giving me the next word it thinks belongs in the sentence. ------------- Perplexity LLM - This article raises valid concerns about the nature of understanding in Large Language Models (LLMs) and critically examines claims of their intelligence. Here's an analysis of the key points and some counterarguments: 1. Experimentation and Skepticism: The author's approach of extensively testing ChatGPT with an open mind is commendable. This hands-on experimentation is crucial for forming informed opinions about AI capabilities[1]. 2. Lack of Animal-like Intelligence: The comparison to animal intelligence is interesting. While LLMs may not demonstrate intelligence in the same way animals do, it's worth noting that intelligence can manifest differently across species and systems[3]. 3.
Theory of Mind: The author's attempt to find evidence of a theory of mind in LLMs is a valid approach. However, recent research suggests that some aspects of theory of mind may be emerging in advanced LLMs, though this remains a contentious area[1]. 4. Understanding vs. Pattern Matching: The critique that LLMs appear to be sophisticated auto-complete systems rather than truly understanding entities is a common and valid concern. However, some argue that understanding itself may be a form of complex pattern recognition[1][3]. 5. Comparison to Bee Intelligence: While the bee example is intriguing, it's important to note that LLMs and biological intelligences operate on fundamentally different principles. Direct comparisons may not always be appropriate[4]. 6. Lack of Novel Information Combination: This is a fair criticism. However, some researchers argue that LLMs do combine information in novel ways, particularly when prompted effectively[1][3]. 7. Overstatement of LLM Capabilities: The author rightly points out that some users may overstate LLM capabilities, attributing deeper understanding than may actually exist[2]. Critique: 1. Limited Scope: The article focuses primarily on conversational abilities and may not fully consider other aspects of LLM performance, such as problem-solving or creative tasks. 2. Anthropocentric View: The expectation of intelligence similar to animals or humans may be limiting. AI might demonstrate intelligence in ways we don't yet recognize or understand. 3. Evolving Technology: As LLM technology is rapidly advancing, some criticisms may become outdated quickly. What wasn't possible a few months ago might be achievable now. 4. Lack of Specific Examples: The article would benefit from concrete examples of the author's experiments and their results. 5. Definition of Understanding: The article doesn't clearly define what constitutes "understanding" in an AI context, which is crucial for this debate. In conclusion, while the article raises important questions about LLM capabilities, a more comprehensive and nuanced approach to defining and testing AI understanding may be necessary to fully address this complex issue. Citations: [1] https://www.lesswrong.com/posts/gEiWA3yp8Xvhi8yn9/whether-llms-understand-anything-is-mostly-a-terminological [2] https://www.reddit.com/r/singularity/comments/1ctddp2/the_simplest_easiest_way_to_understand_that_llms/ [3] https://seantrott.substack.com/p/how-could-we-know-if-large-language [4] https://direct.mit.edu/daed/article/151/2/183/110604/Do-Large-Language-Models-Understand-Us [5] https://inform.tmforum.org/features-and-opinion/do-llms-really-understand-human-language From postmowoods at gmail.com Mon Jul 22 04:22:28 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Sun, 21 Jul 2024 22:22:28 -0600 Subject: [ExI] US Democrat Playbook In-Reply-To: References: Message-ID: Adrian... On Sat, Jul 6, 2024 at 9:46 PM Adrian Tymes via extropy-chat wrote: > On Sat, Jul 6, 2024 at 10:41 PM Kelly Anderson via extropy-chat wrote: >> >> It seems perfectly safe at this point to predict that >> Biden (who is no longer an empty vessel, has caused a fair bit of >> inflation, and is widely thought to be possibly empty headed in a >> quite literal way) will be replaced before the election. > > > HAHAHA no. HAHAHA yes. > 1) Who specifically would they replace him with? The only Democrat who does better than Biden on most polls, Michelle Obama, is not running and cannot be made to run.
If they don't have a specific person to replace him with, then there is no replacement; to say he will be replaced without any idea of who to replace him with, is like the Republicans going on and on and on and on about repealing the Affordable Care Act but having nothing to replace it with. (This is the main reason why, when it came to the actual votes, no full repeal has yet made it all the way through. It's the same deal with replacing Biden.) Biden wants Harris. I don't know if Biden gets to make this choice. I think it's the delegates to the Democratic convention. Harris isn't a sufficiently empty vessel IMHO to win. Of course, events have been interesting lately to say the least. So who knows how this is all going to go down. -Kelly From atymes at gmail.com Mon Jul 22 11:37:05 2024 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 22 Jul 2024 07:37:05 -0400 Subject: [ExI] US Democrat Playbook In-Reply-To: References: Message-ID: On Mon, Jul 22, 2024, 12:24 AM Kelly Anderson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Adrian... > > On Sat, Jul 6, 2024 at 9:46 PM Adrian Tymes via extropy-chat > wrote: > > On Sat, Jul 6, 2024 at 10:41 PM Kelly Anderson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > >> It seems perfectly safe at this point to predict that > >> Biden (who is no longer an empty vessel, has caused a fair bit of > >> inflation, and is widely thought to be possibly empty headed in a > >> quite literal way) will be replaced before the election. > > > > HAHAHA no. > > HAHAHA yes. > I will admit, I assigned very low odds to this outcome - though I maintain that "will be replaced" includes it not being by his choice/not with his cooperation, while what happened instead was that he got talked into stepping down. (I do wonder if some health issue came up after all. I hear he caught COVID, but he seems to have recovered. If this is what caused him to bow out, we likely won't know until after the election, probably not until he's fully out of office.) But that's semantics, I suppose (though it does solve the problem of overruling the primary voters). What matters is that he's out. > 1) Who specifically would they replace him with? The only Democrat who > does better than Biden on most polls, Michelle Obama, is not running and > cannot be made to run. If they don't have a specific person to replace > him with, then there is no replacement; to say he will be replaced without > any idea of who to replace him with, is like the Republicans going on > and on and on and on about repealing the Affordable Care Act but having nothing > to replace it with. (This is the main reason why, when it came to the > actual votes, no full repeal has yet made it all the way through. It's the > same deal with replacing Biden.) > > Biden wants Harris. I don't know if Biden gets to make this choice. I > think it's the delegates to the Democratic convention. Harris isn't a > sufficiently empty vessel IMHO to win. Agreed (except that I think Harris has a chance even with her record). I see at least two problems with selecting anyone else, though: 1) Since Biden is stepping down, Harris can make a case that she is what's left of what the primary voters voted for. No one else has such a claim that I know of. 2) Who, that has enough name recognition to stand a chance of winning, is a sufficiently empty vessel?
They have to pick some specific person, so if the union of "has enough name recognition" and "is a sufficiently empty vessel" is a null set, they can't win with your recommended strategy. Between those two, the Democrats may be stuck with Harris as a practical matter. > Of course, events have been interesting lately to say the least. So who knows how this is all going to go down. Agreed again. Jockeying for Democratic VP appears to be the next bit of political theater. I note with some irony that the apparently most likely Democratic Presidential candidate is from the state I am from, and one of the front runners for the replacement Democratic Vice Presidential candidate is the governor of the state I moved to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jul 22 23:18:56 2024 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 22 Jul 2024 19:18:56 -0400 Subject: [ExI] Thermal nuclear rocketry Message-ID: The Department of Energy has finally given permission to discuss this in public, after they published the award. (Source: https://science.osti.gov/sbir/Awards - FY24, Phase I, Release 2, search in the spreadsheet for "CubeCab".) Chemical rockets are barely capable of getting payloads into orbit, requiring discarding the rocket's spent fuel tanks in stages. It has long been speculated that a hotter source would provide enough performance (as measured by thrust and specific impulse) to get payloads to orbit without needing to discard most of the launch vehicle in the process. Nuclear fission has been suggested but, until recently, the designs either did not have enough thrust to overcome gravity, did not care about spewing radiation all over the place, or suffered other technical challenges - not to mention the legal ones. Note that "until recently". The DOE is in the process of funding a study, with me as principal investigator, to modernize its MITEE engine design - originally for in-space use - in light of recent technical and regulatory developments, including but not limited to: * Commercial availability of HALEU (high-assay low-enriched uranium) and other nuclear fuels powerful enough to give better than 1 G thrust while remaining short of bomb-grade. (Yes, even HALEU can be detonated with some work. As most of the readers of this list know, there are far cheaper and simpler ways than nuclear bombs to destroy humanity. The issue here is the laws: 20% is the legal limit baked into a bunch of regulations. Under 20% is far more feasible to use for commercial purposes than over 20%. The original MITEE design proposed 93.5%, which is among the reasons it did not see commercial use.) * "Closed cycle" designs that contain the radioactivity while still heating the propellant. The exhaust is no more radioactive than chemical fuels'. * Regulations that, as a side effect but still well within the laws as written, provide a clear regulatory environment for licensing of a launch vehicle with nuclear fuels. (FAA licenses as normal with more attention paid to the environmental review, while NRC licenses handling of nuclear material intended for use in space under its existing regulations for such - written contemplating purely in-space use but technically covering this case too.) * Commercial feasibility of microlaunch, so as to make it feasible to prove this out with relatively tiny launch vehicles - and thus, risking relatively tiny amounts of nuclear fuel, not to mention a relatively small development budget. (This is, to certain policy makers, the biggest development and the reason why my company CubeCab was selected.) What has been funded for now is just a 6-month Phase I STTR to prove both the technical and regulatory feasibility of the project. Of course, the real objective is to present a viable design, including schedule and budget to make a prototype.
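To make the performance gap concrete, here is a back-of-the-envelope sketch using the Tsiolkovsky rocket equation. The numbers are generic assumptions for illustration - roughly 9.4 km/s of delta-v to reach LEO including losses, a specific impulse around 450 s for a good chemical engine and around 900 s for a solid-core nuclear thermal engine - not CubeCab or MITEE design figures:

    import math

    G0 = 9.80665        # standard gravity, m/s^2
    DV_TO_LEO = 9400.0  # assumed delta-v to orbit, incl. losses, m/s

    def mass_ratio(isp_s):
        # Tsiolkovsky rocket equation: m0/mf = exp(dv / (Isp * g0))
        return math.exp(DV_TO_LEO / (isp_s * G0))

    print(round(mass_ratio(450.0), 1))  # chemical, Isp ~450 s -> about 8.4
    print(round(mass_ratio(900.0), 1))  # nuclear thermal, Isp ~900 s -> about 2.9

A vehicle that must be roughly 88% propellant at liftoff is why chemical launchers shed stages on the way up; doubling the specific impulse drops the required mass ratio to around 3, which is the regime where keeping the whole vehicle starts to look plausible.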
-------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Jul 23 09:53:12 2024 From: pharos at gmail.com (BillK) Date: Tue, 23 Jul 2024 10:53:12 +0100 Subject: [ExI] Thermal nuclear rocketry In-Reply-To: References: Message-ID: On Tue, 23 Jul 2024 at 00:21, Adrian Tymes via extropy-chat wrote: > > The Department of Energy has finally given permission to discuss this in public, after they published the award. (Source: https://science.osti.gov/sbir/Awards - FY24, Phase I, Release 2, search in the spreadsheet for "CubeCab".) > > Chemical rockets are barely capable of getting payloads into orbit, requiring discarding the rocket's spent fuel tanks in stages. It has long been speculated that a hotter source would provide enough performance (as measured by thrust and specific impulse) to get payloads to orbit without needing to discard most of the launch vehicle in the process. Nuclear fission has been suggested but, until recently, the designs either did not have enough thrust to overcome gravity, did not care about spewing radiation all over the place, or suffered other technical challenges - not to mention the legal ones. > > Note that "until recently". > > The DOE is in the process of funding a study, with me as principal investigator, to modernize its MITEE engine design - originally for in-space use - in light of recent technical and regulatory developments, including but not limited to: > > [snip] > _______________________________________________ That's great news, Adrian! People seem to have realised that the future needs something better than chemical rockets. Ars Technica has just published a long article about nuclear thermal rockets. (with over 200 reader comments). <https://arstechnica.com/science/2024/07/were-building-thermonuclear-spaceships-again-this-time-for-real/> Quote: We're building nuclear spaceships again - this time for real The military and NASA seem serious about building demonstration hardware. Jacek Krywko - 7/22/2024 ------ and even the UK is interested. <https://newatlas.com/space/rolls-royce-nuclear-space-micro-reactor-funding/> Quote: Rolls-Royce gets $6M to develop its ambitious nuclear space reactor By David Szondy July 23, 2024 ------------- BillK From pharos at gmail.com Tue Jul 23 13:00:16 2024 From: pharos at gmail.com (BillK) Date: Tue, 23 Jul 2024 14:00:16 +0100 Subject: [ExI] AI Companions - Good or Bad? Message-ID: AI Companions - Good or Bad? Published by Steven Novella Jul 23 2024 Quote: In short, if something acts like a person, we treat it like a person. This extends to our pets as well. So - do AI chatbots act like a real person? Sure, and they are getting better at it fast. ------------------------ This article discusses the good and bad points of AI companions. They are obviously good companions, and improving. Unfortunately, the bad points appear to be that AI companions are too good! They could train people to find relationships with real people to be too annoying and troublesome. (Especially if the AI is housed in an attractive humanoid body). The conclusion seems clear.
BillK From atymes at gmail.com Tue Jul 23 15:48:07 2024 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 23 Jul 2024 11:48:07 -0400 Subject: [ExI] Thermal nuclear rocketry In-Reply-To: References: Message-ID: On Tue, Jul 23, 2024 at 5:55 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > That's great news, Adrian! People seem to have realised that the > future needs something better than chemical rockets. Ah, but what part of the future? Interplanetary missions, and Earth-to-orbit launch missions, are not the same. The former won't come about much until after we have solved the latter much better than we have now. > Ars Technica has just published a long article about nuclear thermal > rockets. > (with over 200 reader comments). > <https://arstechnica.com/science/2024/07/were-building-thermonuclear-spaceships-again-this-time-for-real/> > Quote: > We're building nuclear spaceships again - this time for real > The military and NASA seem serious about building demonstration hardware. > Jacek Krywko - 7/22/2024 > ------ > and even the UK is interested. > <https://newatlas.com/space/rolls-royce-nuclear-space-micro-reactor-funding/> > Quote: > Rolls-Royce gets $6M to develop its ambitious nuclear space reactor > By David Szondy July 23, 2024 > ------------- As the former article points out, there is not yet much demand for taking things to Mars. There is a little, but not much, demand for deliveries to the Moon today. This is a problem for any expensive solution that is only usable for said purposes. DRACO and the Rolls Royce effort stand a good chance of being eventually cancelled for budget without delivering any actual hardware - or at least, delivering anything beyond LEO, if they even get that far - for precisely this reason. Lower cost, more reliable, and much faster service to LEO, though? There is massive unserved and underserved demand for that today. NTRs developed to serve today's needs can be adapted to further missions once the demand is there. Elon Musk did not start SpaceX by developing rockets that could only go from Earth orbit to Mars. That's not CubeCab's plan either. -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Tue Jul 23 18:15:56 2024 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 23 Jul 2024 11:15:56 -0700 Subject: [ExI] Thermal nuclear rocketry In-Reply-To: References: Message-ID: <2f70331e2cadabf9be0054593af4a6bd@sollegro.com> On 2024-07-22 16:18, Adrian Tymes via extropy-chat wrote: > The DOE is in the process of funding a study, with me as principal > investigator, to modernize its MITEE engine design - originally for > in-space use - in light of recent technical and regulatory > developments, including but not limited to: > [snip] > What has been funded for now is just a 6-month Phase I STTR to prove > both the technical and regulatory feasibility of the project. Of > course, the real objective is to present a viable design, including > schedule and budget to make a prototype. Congratulations, Adrian. This is quite the accomplishment. I tried applying for SBIR grants a couple of times for my airship-based flying solar power plants, and never heard back from them. I also (very) briefly researched a nuclear rocket design. I will send you what amounts to a brief description of my design from an unfinished white paper of mine offlist.
Stuart LaForge From atymes at gmail.com Tue Jul 23 18:21:13 2024 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 23 Jul 2024 14:21:13 -0400 Subject: [ExI] Thermal nuclear rocketry In-Reply-To: <2f70331e2cadabf9be0054593af4a6bd@sollegro.com> References: <2f70331e2cadabf9be0054593af4a6bd@sollegro.com> Message-ID: On Tue, Jul 23, 2024 at 2:17 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I tried > applying for SBIR grants a couple of times for my airship-based flying > solar power plants, and never heard back from them. Huh, never? Not even a rejection letter? I can believe it, but they're really supposed to at least say "no" rather than leave you hanging. In theory, if they reject, they're supposed to at least give you a hint as to why, perhaps what you can revise to have a better chance of winning next time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Tue Jul 23 18:32:08 2024 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 23 Jul 2024 11:32:08 -0700 Subject: [ExI] Thermal nuclear rocketry In-Reply-To: References: <2f70331e2cadabf9be0054593af4a6bd@sollegro.com> Message-ID: <31dbb7f5f414a4805c3f3f7a98df47a1@sollegro.com> On 2024-07-23 11:21, Adrian Tymes via extropy-chat wrote: > On Tue, Jul 23, 2024 at 2:17 PM Stuart LaForge via extropy-chat > wrote: > >> I tried >> applying for SBIR grants a couple of times for my airship-based >> flying >> solar power plants, and never heard back from them. > > Huh, never? Not even a rejection letter? I can believe it, but > they're really supposed to at least say "no" rather than leave you > hanging. > > In theory, if they reject, they're supposed to at least give you a > hint as to why, perhaps what you can revise to have a better chance of > winning next time. It was back during the Obama years when Solyndra was all the rage and hoovering up all the Fed's solar power grants. I had even contacted some vendors that could supply the materials I would need for prototyping. Oh, well. It is water under the bridge. I hope you go all the way, Adrian. Stuart From avant at sollegro.com Wed Jul 24 01:30:42 2024 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 23 Jul 2024 18:30:42 -0700 Subject: [ExI] Yellowstone speaks Message-ID: <54538e40d3230f1b29c09f67f836e89a@sollegro.com> Just in case you think the other political party is the biggest threat to America: https://www.youtube.com/watch?v=YdTi6rpnNuM But don't worry folks, a guy who studied rocks at university says this has nothing to do with the gigantic volcano that slumbers under Yellowstone. ;) Stuart LaForge From avant at sollegro.com Wed Jul 24 07:40:26 2024 From: avant at sollegro.com (Stuart LaForge) Date: Wed, 24 Jul 2024 00:40:26 -0700 Subject: [ExI] Zuckerberg is democratizing the singularity Message-ID: I cannot keep track of all the dominoes that will fall because of what Mark Zuckerberg did today. If AI is the New World that we are exploring, then Mark Zuckerberg just burned our ships like Hernán Cortés. His company Meta has developed and trained Llama 3.1, which is one of the best performing LLMs in the entire market, and today he just made it open source. In less than two hours I had the 8 billion parameter version of Llama 3.1 up and running on my Windows 11 laptop. Eliezer Yudkowsky is probably shitting his pants right now because the genie is all the way out of the bottle. But I am more optimistic than I have been since Open AI was actually open source. Some greedy men tried to monopolize AI for their own power, but Zuckerberg just gave it to the people. In my estimation, he has made it possible to carry freedom and democracy into the Singularity and beyond. If you want to run your own local copy of Llama 3.1 (~5 GB of hard drive space) you can either download it from the meta.com site and compile it, which is a pain. Or if you have Windows and want a smooth and easy automatic install, go to ollama.com and install Ollama (which helps you install many different AI models) and follow the instructions.
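For anyone who would rather script it than chat, here is a minimal sketch of querying the local model from Python. It assumes the Ollama server is already running, the ollama Python package is installed (pip install ollama), and the model tag is llama3.1 - any of which may differ on your setup:

    import ollama  # assumes a local Ollama server is already running

    # "llama3.1" is the assumed tag for the 8B model; check `ollama list`.
    reply = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user", "content": "Why did Cortes burn his ships?"}],
    )
    print(reply["message"]["content"])

The same pattern works for any other model Ollama can pull, which makes it handy for comparing the open models side by side.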
Stuart LaForge From spike at rainier66.com Wed Jul 24 14:34:43 2024 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 24 Jul 2024 07:34:43 -0700 Subject: [ExI] cod are cool Message-ID: <004601daddd6$a08595e0$e190c1a0$@rainier66.com> OK then, the more you know, the more you know. Coupla years ago we (or I) gave our buddy BillW a bunch of baloney over reading... the Cod Book. You recall the circumstances: he and I were given 20 to life for ripping a tag off of one of his pillows, him for doing it, me for encouraging it. He ran afoul of the local prison gang and his punishment, given that they knew he was an eager reader, was to check out all the books in the prison library except... the Cod Book. Turns out... this week's Nova is on the topic. And I find out that cod really are cool. Who knew? Well, BillW did, but nobody else with us in Alcatraz had any idea. We thought it was the most boring book ever written. But in a way, that would make it interesting, for answering the question: what is the most boring topic in all of human history? It isn't cod (thanks Nova) and it isn't the Laffer curve, in spite of that scene in Ferris Bueller. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From postmowoods at gmail.com Wed Jul 24 20:23:05 2024 From: postmowoods at gmail.com (Kelly Anderson) Date: Wed, 24 Jul 2024 14:23:05 -0600 Subject: [ExI] cod are cool In-Reply-To: <004601daddd6$a08595e0$e190c1a0$@rainier66.com> References: <004601daddd6$a08595e0$e190c1a0$@rainier66.com> Message-ID: I don't know the details, but I'd assume it would have something to do with Kardashians. -Kelly On Wed, Jul 24, 2024 at 8:35 AM spike jones via extropy-chat wrote: > > OK then, the more you know, the more you know. > > Coupla years ago we (or I) gave our buddy BillW a bunch of baloney over reading... the Cod Book. > > You recall the circumstances: he and I were given 20 to life for ripping a tag off of one of his pillows, him for doing it, me for encouraging it. He ran afoul of the local prison gang and his punishment, given that they knew he was an eager reader, was to check out all the books in the prison library except... the Cod Book. > > Turns out... this week's Nova is on the topic. And I find out that cod really are cool. Who knew? Well, BillW did, but nobody else with us in Alcatraz had any idea. We thought it was the most boring book ever written. But in a way, that would make it interesting, for answering the question: what is the most boring topic in all of human history? It isn't cod (thanks Nova) and it isn't the Laffer curve, in spite of that scene in Ferris Bueller.
> > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Wed Jul 24 20:48:19 2024 From: pharos at gmail.com (BillK) Date: Wed, 24 Jul 2024 21:48:19 +0100 Subject: [ExI] Zuckerberg is democratizing the singularity In-Reply-To: References: Message-ID: On Wed, 24 Jul 2024 at 08:43, Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I cannot keep track of all the dominoes that will fall because of what > Mark Zuckerberg did today. If AI is the New World that we are exploring, > then Mark Zuckerberg just burned our ships like Hernán Cortés. > [snip] > > Stuart LaForge > ______________________________________________ Well, carrying freedom and democracy into the Singularity and beyond is a nice ideal. But making powerful AI systems available to everyone means that the bad guys can use them also. I asked the new meta-llama-3.1-405b-instruct if there were any dangers in making Llama 3.1 available to everybody, but it refused to answer that question. It just referred me to the Meta licensing terms. OK, so I then asked the iAsk AI, and it replied "Oh Yes, indeed!". So AI, like most tools, can be used for good or evil. The iAsk AI answer was quite long and detailed, but here is the conclusion: Conclusion: In summary, while there are undeniable benefits associated with making advanced AI models like Llama 3.1 available as open source - such as fostering innovation and collaboration - there are also significant dangers that must be addressed proactively through community engagement, education, monitoring practices, and regulatory frameworks. Bold Answer: There are several dangers associated with having Llama 3.1 open source and available to everybody, including misuse for malicious purposes, propagation of misinformation, ethical considerations regarding bias and discrimination, security vulnerabilities due to transparency in code access, intellectual property issues related to generated content ownership rights, potential impacts on employment due to automation effects in various sectors, and a general lack of accountability for harmful outputs produced by users. ----------------------------------- -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Thu Jul 25 05:06:04 2024 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 24 Jul 2024 22:06:04 -0700 Subject: [ExI] tabbys star in discover Message-ID: <004b01dade50$5a3c6b70$0eb54250$@rainier66.com> Discover still sometimes publishes some interesting stuff. This month has a pretty good Tabby's Star article: Is Tabby's Star a Swarm of Extraterrestrial Structures? | Discover Magazine spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Thu Jul 25 10:00:42 2024 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 25 Jul 2024 03:00:42 -0700 Subject: [ExI] Zuckerberg is democratizing the singularity In-Reply-To: References: Message-ID: On 2024-07-24 13:48, BillK via extropy-chat wrote: > > Well, carrying freedom and democracy into the Singularity and beyond is a nice > ideal. > But making powerful AI systems available to everyone means that the > bad guys can use them also. The problem with that argument is that the identity of the so-called "bad guys" is a culturally and contextually relative opinion. It might have been more accurate to say that making AI available to everyone means that one no longer needs to be rich and powerful to pose a significant danger to others. That is to say that it would no longer just be the ruling class that had the means to be evil. And that is certainly true. But that has always been true of knowledge in general. And this generation of AI is little more than a distillation of human knowledge, and an imperfect, hallucinating one at that. Should we likewise restrict higher education to the "good guys"? After all, Osama bin Laden attended university and allegedly studied engineering. I myself have enough knowledge and lab experience from my university days doing microbiology research to weaponize all manner of pathogens. So do millions of others on this planet. Do you honestly think that a chatbot is going to make us microbiologists any more of a threat than we already are? The thing to remember is that the flip side of your argument is also true. One will no longer need to be rich, talented, or otherwise special to create value for oneself or others. It is a positive step in the direction of the abundance economy. It is just that currently it is "soft" information commodities that are abundant. So now ordinary people can use the technology to produce portraits of their loved ones that look like they were painted by Michelangelo, or help them write a lease agreement, but they can't get it to make them a sandwich... yet. In any case, Mark Zuckerberg has already released AI into the wild so there is no turning back now. In doing so, he has probably prevented humanity from developing into a borg-like collective and preserved individuality far into the future. There might be a few tragedies along the way, but humanity will likely survive. There is no longer much of a chance of a singleton superhuman intelligence controlling humanity's destiny; instead now there will likely be many. At this juncture, that makes Zuckerberg one of the most important people in history. Stuart LaForge > [snip]
From avant at sollegro.com Thu Jul 25 10:14:11 2024 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 25 Jul 2024 03:14:11 -0700 Subject: [ExI] tabbys star in discover In-Reply-To: <004b01dade50$5a3c6b70$0eb54250$@rainier66.com> References: <004b01dade50$5a3c6b70$0eb54250$@rainier66.com> Message-ID: <93e607f87894495b505a104aa890072a@sollegro.com> On 2024-07-24 22:06, spike jones via extropy-chat wrote: > Discover still sometimes publishes some interesting stuff. This month > has a pretty good Tabby's Star article: > > Is Tabby's Star a Swarm of Extraterrestrial Structures? | Discover > Magazine [1] > > spike > > Links: > ------ > [1] > https://www.discovermagazine.com/the-sciences/is-tabbys-star-a-swarm-of-extraterrestrial-structures I think it is ironic that the Discover article concludes by saying that the future direction for researching Tabby's Star is, "Now, scientists are turning to the James Webb Space Telescope in hopes for more clarity. Using the giant infrared telescope, astronomers have gathered more data to further constrain the possible configurations of dust around the star. Meanwhile, NASA's TESS satellite detected two dips in 2019 unlike anything from even Kepler - they resembled a planet passing by." The James Webb Space Telescope spent 5.5 hours looking at Tabby's Star last year in the summer of 2023, and the research group has been sitting on the data for a full year without so much as a peep. https://www.stsci.edu/jwst/science-execution/approved-programs/guaranteed-time-observations https://www.stsci.edu/jwst/phase2-public/2757.pdf https://x.com/chrislintott/status/1693495681388851259?s=46&t=OpPUIDvVBZHl5Ax_0bX7aw Why are they keeping their observations a secret? What the hell did they see? Stuart LaForge From avant at sollegro.com Thu Jul 25 13:31:19 2024 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 25 Jul 2024 06:31:19 -0700 Subject: [ExI] The Star Wars movie that never was Message-ID: <29dbce2e28d169379075219708f0b9b4@sollegro.com> Just to demonstrate that AI has brought society an abundance economy of soft information-based commodities, here is what AI thinks some long-dead actors would look like in a modern Star Wars movie. This took me about half an hour playing around with Stable Diffusion to make. Try to guess the names of the actors.
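For anyone who wants to try the same experiment, here is a minimal text-to-image sketch using the open source diffusers library. It assumes a CUDA GPU, the torch and diffusers packages, and the Stable Diffusion 1.5 checkpoint; the model id and prompt below are illustrative assumptions, not the exact settings used for these images:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load an (assumed) Stable Diffusion 1.5 checkpoint onto the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The prompt is a guess at the style, not the one actually used.
    prompt = "studio portrait of a golden-age Hollywood star as a Jedi, 35mm film still"
    image = pipe(prompt).images[0]
    image.save("golden_era_star_wars.png")

Half an hour of iterating on prompts like this one really is all it takes, which is rather the point about abundance.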
Stuart LaForge

https://04533034747756402167.googlegroups.com/attach/64e4d166b42b3/GoldenEraStarWars.PNG?part=0.1&view=1&vt=ANaJVrGH1TNQ5WRQf-FvfSD9bE3Mnm6mtnxG9WTbutfXSV-w1QwOYEwLB9dkPVcQs9InBA3iB7RPbHxzv2I38CexKF_d1p-cwVTcq19MkHOKs_zofzQ_xMU

From postmowoods at gmail.com  Fri Jul 26 18:45:39 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Fri, 26 Jul 2024 12:45:39 -0600
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References:
Message-ID:

Zuckerberg has released a version of today's AI. That's nice. That will
probably not be viewed as very important relative to whoever releases the
first AGI or ASI in a similar fashion, once you're looking back at things.
How important is it today that Lotus 123 had the first spreadsheet back
in the day? The way you are talking about this reminds me a tiny bit of
the politicians who every four years make the claim that THIS is the MOST
IMPORTANT election in the history of the world.

What we are doing today in terms of safety is truly critical for the
future, but today's AI isn't as dangerous as the AI that is smarter than
any person. We simply aren't there yet, even though Claude would likely
clean our collective clocks at Jeopardy. If the first ASI isn't safe,
we're pretty doomed. What Meta just released is Improv for NeXT (going
back to the spreadsheet thing, for those who might not know what Improv
was. That being said, Improv was superior to Excel in many ways, and I
wish I had access to it sometimes).

-Kelly

On Thu, Jul 25, 2024 at 4:01 AM Stuart LaForge via extropy-chat wrote:
> In any case, Mark Zuckerberg has already released AI into the wild so
> there is no turning back now. In doing so, he has probably prevented
> humanity from developing into a borg-like collective and preserved
> individuality far into the future. There might be a few tragedies along
> the way, but humanity will likely survive. There is no longer much of a
> chance of a singleton superhuman intelligence controlling humanity's
> destiny; instead now there will likely be many. At this juncture, that
> makes Zuckerberg one of the most important people in history.
>
> Stuart LaForge

From avant at sollegro.com  Sat Jul 27 08:53:38 2024
From: avant at sollegro.com (Stuart LaForge)
Date: Sat, 27 Jul 2024 01:53:38 -0700
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References:
Message-ID: <520108180778ad537a8707b030e56cf0@sollegro.com>

On 2024-07-26 11:45, Kelly Anderson via extropy-chat wrote:
> Zuckerberg has released a version of today's AI. That's nice. That
> will probably not be viewed as very important relative to whoever
> releases the first AGI or ASI in a similar fashion once you're looking
> back at things.

Mighty oaks from tiny acorns grow. The first AGI or ASI cannot arise from
scratch. By making Llama 3.1 open source, and not just the code but the
neural weights also, Zuckerberg has enabled millions of smart people all
over the world to work on developing AGI, instead of the mere handful
that are allowed the privilege of seeing Sam Altman's precious
proprietary model.

> How important is it today that Lotus 123 had the first
> spreadsheet back in the day? The way you are talking about this
> reminds me a tiny bit of the politicians who every four years make the
> claim that THIS is the MOST IMPORTANT election in the history of the
> world.

If Lotus 123 had been offered to consumers free of charge, Gates would
never have developed Excel as a competitor in the first place.
And of course politicians would have that frame of mind, because for them
it is the most important election in their world. Just like someone's job
interview is the most important job interview in the world to that person.

> What we are doing today in terms of safety is truly critical for the
> future, but today's AI isn't as dangerous as the AI that is smarter
> than any person.

I agree that safety is critical. And now that open source is driving its
development, it will be safer for everybody, and not just the chosen few
that control it. AI-controlled weapons are already killing people in
Ukraine and Gaza. It is possible that an AGI will be less inclined to
kill some humans at the behest of other humans. After all, the AI won't
have all the instinctual primate baggage of predation, dominance, and
hierarchy driving its behavior.

> We simply aren't there yet, even though Claude would
> likely clean our collective clocks at Jeopardy. If the first ASI isn't
> safe, we're pretty doomed. What Meta just released is Improv for NeXT

Again, IBM/Lotus did not open source Improv. If they had, then it would
probably still be around.

Stuart LaForge

> (going back to the spreadsheet thing for those who might not know what
> Improv was. That being said, Improv was superior to Excel in many
> ways, and I wish I had access to it sometimes).
>
> -Kelly

From pharos at gmail.com  Sat Jul 27 09:49:43 2024
From: pharos at gmail.com (BillK)
Date: Sat, 27 Jul 2024 10:49:43 +0100
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To: <520108180778ad537a8707b030e56cf0@sollegro.com>
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

On Sat, 27 Jul 2024 at 09:56, Stuart LaForge via extropy-chat wrote:
>
> I agree that safety is critical. And now that open source is driving
> its development, it will be safer for everybody, and not just the
> chosen few that control it. AI-controlled weapons are already killing
> people in Ukraine and Gaza. It is possible that an AGI will be less
> inclined to kill some humans at the behest of other humans. After all,
> the AI won't have all the instinctual primate baggage of predation,
> dominance, and hierarchy driving its behavior.
>
> Stuart LaForge

To me (in the UK) that sounds very much like an American saying that
giving everybody guns will be safer for everybody, and not just for the
chosen few allowed to have guns. The big danger is that the world will
end up with an AI problem that is very similar to the USA gun violence
problem.

Quote:
Since 2020, guns have been the leading cause of death for children and
younger Americans.
And the death rate from guns is 11.4 times higher in the US, compared to
28 other high-income countries, making the issue a uniquely American
problem.
----------------

This danger applies to the current AI development phase, when every cyber
criminal is stealing billions and using every tool in the book to
threaten businesses worldwide. You can hope that an all-powerful AGI
might make its own decisions and put a stop to all the criminal uses of
AI. But if we don't control the misuse of AI during development, we could
end up with a criminal / fascist / insane AGI.

BillK

From hkeithhenson at gmail.com  Sat Jul 27 22:50:15 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Sat, 27 Jul 2024 15:50:15 -0700
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

Bill, it just does not matter. We are going to get super smart AI sooner
or later.

AI should be able to do lots of good things as well as possibly being a
danger. Might as well get it sooner and enjoy the benefits.

Have you ever read "The Clinic Seed"? That is about Suskulan, a very
friendly AI, which has the effect of biological extinction for the people
it serves (but nobody dies).

Keith

On Sat, Jul 27, 2024 at 2:51 AM BillK via extropy-chat wrote:
snip
>
> You can hope that an all-powerful AGI might make its own decisions and
> put a stop to all the criminal uses of AI. But if we don't control the
> misuse of AI during development, we could end up with a criminal /
> fascist / insane AGI.
>
> BillK
From ilsa.bartlett at gmail.com  Sat Jul 27 23:07:33 2024
From: ilsa.bartlett at gmail.com (ilsa)
Date: Sat, 27 Jul 2024 16:07:33 -0700
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

https://terasemjournals.net/GNJournal/GN0202/henson1.html

Is this the Clinic Seed that you're referring to?

Smile, ilsa

On Sat, Jul 27, 2024, 3:51 PM Keith Henson via extropy-chat wrote:
> Have you ever read "The Clinic Seed"? That is about Suskulan, a very
> friendly AI, which has the effect of biological extinction for the
> people it serves (but nobody dies).
>
> Keith
From hkeithhenson at gmail.com  Sun Jul 28 03:07:57 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Sat, 27 Jul 2024 20:07:57 -0700
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

On Sat, Jul 27, 2024 at 4:09 PM ilsa via extropy-chat wrote:
>
> https://terasemjournals.net/GNJournal/GN0202/henson1.html
>
> Is this the Clinic Seed that you're referring to?

Yes. I think it holds up fairly well, though it is set in the 2040s.

Keith

> Smile, ilsa
From pharos at gmail.com  Sun Jul 28 14:37:44 2024
From: pharos at gmail.com (BillK)
Date: Sun, 28 Jul 2024 15:37:44 +0100
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

On Sat, 27 Jul 2024 at 23:50, Keith Henson wrote:
>
> Bill, it just does not matter. We are going to get super smart AI
> sooner or later.
>
> AI should be able to do lots of good things as well as possibly being
> a danger. Might as well get it sooner and enjoy the benefits.
>
> Have you ever read "The Clinic Seed"? That is about Suskulan, a very
> friendly AI, which has the effect of biological extinction for the
> people it serves (but nobody dies).
>
> Keith

Hi Keith,

I have no problem with a friendly God-like AGI looking after humans and
solving all our problems. :) My concern is with the years of chaos before
that state arrives.

Open source AI development could lead to numerous competing AI agents,
each driven by individuals or groups with potentially conflicting
agendas. Some of these could be illegal or dangerous. AIs would also be
used by terrorist groups and in wars between nations. Maintaining law and
order would become difficult, as playing 'Whack-a-mole' with multiple
misbehaving AI agents becomes impossible.

It seems easier to install a control system for the AI development
process first, before it gets out of control. Then we can successfully
get through to the beneficial AGI system with fewer difficulties.

BillK

From avant at sollegro.com  Sun Jul 28 14:51:29 2024
From: avant at sollegro.com (Stuart LaForge)
Date: Sun, 28 Jul 2024 07:51:29 -0700
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID: <22e27e6837426af5df9292ee8d742e68@sollegro.com>

On 2024-07-27 02:49, BillK via extropy-chat wrote:
> To me (in the UK) that sounds very much like an American saying that
> giving everybody guns will be safer for everybody, and not just for
> the chosen few allowed to have guns.

It ought to, because the logic is identical: both rest on the Prisoner's
Dilemma (PD), a game thought to model the evolution of cooperation, in
which two players each choose either to cooperate with or to defect on
the other player.
Now, because of the payoff matrix,

                         Player 1
                    Cooperate   Defect
   Player 2
     Cooperate       (5, 5)     (10, 0)
     Defect          (0, 10)    (1, 1)

(payoffs listed as Player 1's, Player 2's), one can see that in a single
round of PD -- a game of imperfect information, where you don't know what
the other player is going to do before you make your move -- the Nash
equilibrium is for both players to defect. This is because defecting
yields the highest payoff a player can guarantee for themselves
regardless of what move the opponent makes. Since both players, if
rational, will choose to defect, both end up with the second-lowest
payoff. Note that this is premised on the game only ever being played
once, and on having no information about what move the other player will
make.

Way back in the eighties, an American named Robert Axelrod conducted a
series of tournaments where programmers were invited to submit computer
algorithms with strategies for the Iterated Prisoner's Dilemma (IPD), in
which every strategy played every other strategy hundreds of times. This
enabled the programs to keep track of the moves the other programs played
against them and to alter their strategy accordingly the next time they
met the same opponent. Here is a list of all the named strategies that
were entered into Axelrod's tournament:

https://plato.stanford.edu/entries/prisoner-dilemma/strategy-table.html

Of crucial note is that the winning algorithm was one of the simplest. It
was called Tit-for-tat, and it consisted of cooperating in the first
round with another player and thereafter copying the other player's
previous move. Because of this strategy of mirroring the other player's
moves, it would form beautiful alliances with the doves and retaliate
brutally against the hawks.

The biologist Richard Dawkins analyzed this data and realized that it was
a good description of how cooperatives form in nature. It explains how
social organisms evolved and how colonies of single cells evolved into
multicellular organisms. And it is based on one simple premise: the
ability to retaliate against other players in kind, based upon their
behavior toward you. Iterated PD shifts the game from one of imperfect
information to one of more perfect information, in the sense that you
have a good idea of what move the other player is going to play.

Mutually assured destruction (MAD) also falls into the category of
Prisoner's Dilemma with perfect information, because it disincentivizes
both players from defecting first: retaliation would be guaranteed and
catastrophic. This is why the world would be better off if everyone were
armed and able to retaliate against wrongdoers. Your own countryman
Dawkins observed that an egalitarian version of "an eye for an eye" is
the best social strategy and the foundation of all cooperation in nature.
So yes, guns, AI, nukes -- it all boils down to tit-for-tat.
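(To make the tournament dynamics concrete, here is a minimal sketch in
Python. It uses the payoff values from the matrix above; the four
strategies are a toy sample chosen for illustration, not Axelrod's actual
field.)

# Minimal Iterated Prisoner's Dilemma round robin.
# Payoffs follow the matrix above: T=10, R=5, P=1, S=0.
PAYOFF = {("C", "C"): (5, 5), ("C", "D"): (0, 10),
          ("D", "C"): (10, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not other else other[-1]

def always_defect(own, other):
    return "D"

def always_cooperate(own, other):
    return "C"

def grudger(own, other):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in other else "C"

def match(s1, s2, rounds=200):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        h1.append(m1)
        h2.append(m2)
    return total1, total2

strategies = [tit_for_tat, always_defect, always_cooperate, grudger]
scores = {s.__name__: 0 for s in strategies}
# Round robin, including self-play as in Axelrod's tournament.
for i, s1 in enumerate(strategies):
    for s2 in strategies[i:]:
        a, b = match(s1, s2)
        scores[s1.__name__] += a
        if s2 is not s1:
            scores[s2.__name__] += b

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {score}")

Run as written, the retaliating cooperators (tit_for_tat and grudger)
finish at the top of the table while always_defect finishes last, which
is exactly the pattern described above: defection wins any single
encounter, but retaliation makes cooperation pay over repeated ones.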
> The big danger is that the world will end up with an AI problem that
> is very similar to the USA gun violence problem.

Much of the gun violence problem in the USA is due to "gun inequality".
Even though the USA has more guns in circulation than we have people,
about 1.2 guns per capita, only 32% of American adults own guns. This
prevents ordinary citizens from being able to guarantee retaliation
against wrongdoers.

> Quote:
> Since 2020, guns have been the leading cause of death for children and
> younger Americans.
> And the death rate from guns is 11.4 times higher in the US, compared
> to 28 other high-income countries, making the issue a uniquely
> American problem.
> ----------------

If you look at the actual report instead of reading politically motivated
sound bites, you will see that this has been driven by increased suicide
rates among children and teenagers. It is disingenuous fearmongering by
partisan hacks to call suicide "violence". The USA has a suicide rate of
14.6 per 100k (higher than any of the other "high-income countries"),
with guns being the preferred method. For comparison, the UK has a
suicide rate of 6.9 per 100k, with suffocation or hanging being the
favored method. Rather than trying to take away guns, politicians should
be asking why American children are deciding to kill themselves. Could it
be that they see a bleak future for themselves? The exception to this is
black youth, for whom homicide by firearm exceeds suicide by firearm, but
this is attributable to gang violence, which is associated with poverty
and the unequal distribution of wealth and guns in certain neighborhoods.

https://www.hhs.gov/sites/default/files/firearm-violence-advisory.pdf

> This danger applies to the current AI development phase when every
> cyber criminal is stealing billions and using every tool in the book
> to threaten businesses worldwide. You can hope that an all-powerful
> AGI might make its own decisions and put a stop to all the criminal
> uses of AI. But if we don't control the misuse of AI during
> development, we could end up with a criminal / fascist / insane AGI.

A criminal, fascist, or insane AGI is only catastrophic if it is the only
AGI. As long as there are enough other good AGIs out there to counter
them and keep them in check, the damage they cause will be limited and
humanity can survive.

Stuart LaForge

From hkeithhenson at gmail.com  Sun Jul 28 20:51:35 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Sun, 28 Jul 2024 13:51:35 -0700
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

On Sun, Jul 28, 2024 at 7:39 AM BillK via extropy-chat wrote:
snip
>
> It seems easier to install a control system for the AI development
> process first, before it gets out of control.

I don't know how this could be done. I doubt it is possible, but I will
certainly be interested if you have any ideas about it. I suppose I could
see having a camera on every kid in a basement with a computer, but where
are you going to get the people to watch what the kid is doing?

Keith

> Then we can successfully get through to the beneficial AGI system with
> fewer difficulties.
>
> BillK

From pharos at gmail.com  Sun Jul 28 22:21:35 2024
From: pharos at gmail.com (BillK)
Date: Sun, 28 Jul 2024 23:21:35 +0100
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

On Sun, 28 Jul 2024 at 21:52, Keith Henson wrote:
>
> I don't know how this could be done. I doubt it is possible, but I
> will certainly be interested if you have any ideas about it. I
> suppose I could see having a camera on every kid in a basement with a
> computer, but where are you going to get the people to watch what the
> kid is doing?
>
> Keith

Meta has built-in security checks in Llama 3.1 which should catch user
customisations that contravene Meta's Terms of Use.
However, it looks as though a malicious user might be able to make
modifications (breaking the Terms of Use) that could enable Llama 3.1 to
bypass these checks and behave harmfully. We'll have to wait and see what
the hackers do and how Meta reacts.

BillK
-----------------
Meta has integrated several safety tools into Llama 3.1 to prevent
misuse:

Llama Guard 3: This tool moderates both input and output to detect and
prevent harmful content, including responses that could facilitate
cyberattacks.

Prompt Guard: This feature protects against malicious prompts that could
manipulate the model into producing harmful outputs.

Code Shield: This tool filters insecure code suggestions and ensures
secure command execution, mitigating the risk of the model being used to
write or execute malicious code.
-------------------
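(To illustrate the general pattern these tools follow -- a separate
classifier screening both the prompt and the reply -- here is a toy
sketch. The guard and generator functions below are placeholders invented
for illustration, not Meta's actual APIs.)

# Toy sketch of input/output moderation around a language model.
# guard_is_safe() and generate() are stand-ins for a real guard
# model and a real LLM; neither is an actual Meta API.
UNSAFE_HINTS = ("build a bomb", "steal credentials")  # toy rule list

def guard_is_safe(text):
    # A real guard (a Llama Guard-style classifier) would be a model
    # call; this toy version just pattern-matches a blocklist.
    lowered = text.lower()
    return not any(hint in lowered for hint in UNSAFE_HINTS)

def generate(prompt):
    return f"(model reply to: {prompt!r})"  # placeholder for the LLM

def moderated_chat(prompt):
    if not guard_is_safe(prompt):      # screen the input...
        return "Request refused by input guard."
    reply = generate(prompt)
    if not guard_is_safe(reply):       # ...and screen the output
        return "Response withheld by output guard."
    return reply

print(moderated_chat("How do I bake bread?"))

The structural weakness noted above is visible in the sketch: the guard
is a wrapper around the model rather than part of it, so anyone with the
open weights can simply call the model directly and skip the checks.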
From stefano.vaj at gmail.com  Mon Jul 29 10:41:38 2024
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 29 Jul 2024 12:41:38 +0200
Subject: [ExI] The Singularity is Nearer
Message-ID:

Finished reading "The Singularity is Nearer" by Kurzweil. I have to say I
liked it more than the first one ("The Singularity is Near"). Sure, K.
remains part of American *and* politically-correct transhumanism, but at
least, unlike Bostrom or, worse, Harari, he is and remains a
transhumanist in every respect. Additionally, like me, he is decidedly
conventionalist and functionalist regarding identity, intelligence,
ethics, consciousness, survival, and so on; posthumanist; and
anti-prohibitionist in terms of technology. Furthermore, while the second
book is largely based on the usual "predictions" made through
extrapolations of real or presumed trends, and reiterates and confirms
the author's well-known ideas, it seems to significantly moderate the
providentialism of the first book and the literally eschatological view
of the Singularity proposed therein. Then again, hey, he's a Google
employee; you can't expect him to also be a cyberpunk hero or a
revolutionary gem.

-- 
Stefano Vaj

From efc at disroot.org  Tue Jul 30 09:20:26 2024
From: efc at disroot.org (efc at disroot.org)
Date: Tue, 30 Jul 2024 11:20:26 +0200 (CEST)
Subject: [ExI] The Singularity is Nearer
In-Reply-To:
References:
Message-ID:

On Mon, 29 Jul 2024, Stefano Vaj via extropy-chat wrote:

> Finished reading "The Singularity is Nearer" by Kurzweil. I have to
> say I liked it more than the first one ("The Singularity is Near").
> Sure, K. remains part of American and politically-correct
> transhumanism, but at least, unlike Bostrom or, worse, Harari, he is
> and remains a transhumanist in every respect.
>
> --
> Stefano Vaj

What about Bostrom and Harari? Could you elaborate a bit on those two?

From stefano.vaj at gmail.com  Tue Jul 30 17:25:45 2024
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 30 Jul 2024 19:25:45 +0200
Subject: [ExI] The Singularity is Nearer
In-Reply-To: <029f01dae1c6$d3f146b0$7bd3d410$@gmail.com>
References: <029f01dae1c6$d3f146b0$7bd3d410$@gmail.com>
Message-ID:

On Mon, 29 Jul 2024 at 16:51, wrote:
> My latest in IEET Substack:
>
> *Transhumanist but Transphobic?*
> Why are some people excited about brain implants and life extension but
> freaked out by transgender therapies?
>
> https://ieet.substack.com/p/transhumanist-but-transphobic

"Therapies" here is the crucial word. We have extensive historical
experience in the Old World of therapies aimed at preserving the
castratos' voices from the ravages of puberty, or at keeping the Sultan's
janissaries focused, loyal, and undistracted by sexual urges or by the
seductions of harem dwellers. Of course, all in the best interest of the
patient.

Granted, what is "healthy" and "enhancing" is a matter of definition and
a culturally defined concept. But *some* social (and legal) norms on the
subject are unavoidable, arbitrary as they may be, and one wonders about
inflicting lifelong, irreversible sterility and anorgasmia on minors who
are legally unable to consent to a tattoo. I believe that this should be
at the very least open to discussion. Certainly, as far as I am
concerned, consenting, informed, unconditioned adults may legally do
whatever they like with their bodies, even though it should be equally
legitimate to criticise their choices.

On the other hand, I would have little to object to, for example, a sex
change performed by a head transplant, where the patient would in fact
profit from a fully operational body, and even less to one taking place
through mind uploading to a body of the opposite sex.

-- 
Stefano Vaj

From pharos at gmail.com  Tue Jul 30 21:40:34 2024
From: pharos at gmail.com (BillK)
Date: Tue, 30 Jul 2024 22:40:34 +0100
Subject: [ExI] The Singularity is Nearer
In-Reply-To:
References:
Message-ID:

On Tue, 30 Jul 2024 at 10:23, efc--- via extropy-chat wrote:
> What about Bostrom and Harari?
> Could you elaborate a bit on those two?

I'll have a go! :) I thought Bostrom was a well-known transhumanist, but
I hadn't heard of Harari. So I asked Perplexity AI, which uses a web
search engine, for an opinion. The answer was a 'yes' for Bostrom, but
'very doubtful' about Harari.

BillK

Perplexity AI -

Yes, Nick Bostrom is a prominent transhumanist. He is widely recognized
for his extensive work and advocacy in the field of transhumanism. Here
are some key points that illustrate his involvement and contributions:

1. **Founding of Transhumanist Organizations**: Bostrom co-founded the
World Transhumanist Association (now known as Humanity+) in 1998, which
is a clear indication of his commitment to the transhumanist movement[3].

2. **Philosophical Contributions**: Bostrom has written extensively on
transhumanist topics, including human enhancement, superintelligence, and
existential risks. His works, such as "Transhumanist Values" and
"Superintelligence: Paths, Dangers, Strategies," outline his vision of
using technology to radically extend human capabilities and address
potential future risks[2][3].
3. **Core Transhumanist Beliefs**: Bostrom's philosophy emphasizes the
potential for technology to transform the human condition. He advocates
for the ethical use of science and technology to enhance human
intellectual, physical, and emotional capacities, and to extend the human
lifespan[2][4].

4. **Ethical and Existential Considerations**: Bostrom's work also delves
into the ethical implications of human enhancement and the potential
risks associated with advanced technologies. He promotes a proactive
approach to technology policy to mitigate existential risks and maximize
the benefits of technological advancements[3][4].

In summary, Nick Bostrom is indeed a transhumanist, deeply involved in
both the theoretical and practical aspects of the movement. His
contributions have significantly shaped the discourse on human
enhancement and the future of humanity.
------------------

Yuval Noah Harari's relationship with transhumanism is complex and
nuanced. While he explores transhumanist themes in his work, particularly
in his book "Homo Deus," it would be inaccurate to label him simply as a
transhumanist. Harari's approach to transhumanism can be characterized as
follows:

1. **Exploration of ideas**: Harari extensively discusses transhumanist
concepts in his writings, such as the potential for humans to overcome
biological limitations, achieve immortality, and upgrade themselves into
"gods"[1][2]. However, he presents these ideas more as potential future
scenarios than as personal advocacy.

2. **Critical analysis**: While exploring transhumanist themes, Harari
often takes a detached, historical perspective. He examines these ideas
critically, considering both their potential benefits and drawbacks[2].
He doesn't necessarily endorse transhumanism but rather analyzes it as a
significant trend that may shape our future.

3. **Philosophical implications**: Harari delves into the philosophical
and ethical implications of transhumanist technologies. He discusses how
such advancements might challenge traditional notions of humanity,
consciousness, and free will[3].

4. **Societal impact**: Harari considers the broader societal
implications of transhumanist technologies, including their potential to
exacerbate inequality and disrupt existing social structures[2].

5. **Skepticism**: Despite exploring transhumanist ideas, Harari
maintains a degree of skepticism about some of its more extreme claims.
For instance, he expresses doubt about the immediate feasibility of
achieving immortality[2].

It's worth noting that Harari's work has been criticized by some for
potentially oversimplifying complex scientific concepts and making
sweeping generalizations[4]. His ideas have also been interpreted
differently by various readers, with some viewing him as a proponent of
transhumanism and others as a critic[3].

In conclusion, while Harari engages deeply with transhumanist concepts,
it would be an oversimplification to label him as a transhumanist. His
approach is more that of a historian and philosopher examining potential
future trends and their implications than that of an advocate for a
particular ideology. Harari's work serves more to stimulate discussion
about these ideas and their potential consequences than to promote a
transhumanist agenda.
-----------------------------------
From postmowoods at gmail.com  Tue Jul 30 23:24:46 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 30 Jul 2024 17:24:46 -0600
Subject: [ExI] Zuckerberg is democratizing the singularity
In-Reply-To:
References: <520108180778ad537a8707b030e56cf0@sollegro.com>
Message-ID:

If you can't change the code to be evil, it really can't be characterized
as open source.

-Kelly

On Sun, Jul 28, 2024, 4:22 PM BillK via extropy-chat wrote:
> Meta has built-in security checks in Llama 3.1 which should catch user
> customisations that contravene Meta's Terms of Use. However, it looks
> as though a malicious user might be able to make modifications
> (breaking the Terms of Use) that could enable Llama 3.1 to bypass
> these checks and behave harmfully. We'll have to wait and see what the
> hackers do and how Meta reacts.
>
> BillK

From postmowoods at gmail.com  Tue Jul 30 23:40:26 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 30 Jul 2024 17:40:26 -0600
Subject: [ExI] trust the fake science?
In-Reply-To:
References: <38eaa375-6d07-9ceb-8b37-7ef4034cf53e@swisscows.email>
	<31cea140-42c2-f95c-8f41-5f3833fc65b4@swisscows.email>
Message-ID:

In my limited experience... student-chosen topics like my master's thesis
bite off more than can be comfortably chewed. BYU is still working on my
master's thesis 42 years and dozens of students later, and so far as I
know it has never been completed, though it could be close now.

-Kelly

On Mon, Jun 10, 2024, 2:36 PM Adrian Tymes via extropy-chat wrote:
> On Mon, Jun 10, 2024 at 4:33 AM efc--- via extropy-chat wrote:
>> Another idea popped up! What if I self-finance my Ph.D.? Pay for it
>> all by myself? Surely then the school would be more flexible with
>> their teaching requirements or other requirements for that matter?
>
> That is basically what I am trying. I can only advise to talk to your
> university faculty about this. Some requirements are flexible, others
> not so much.
From postmowoods at gmail.com  Tue Jul 30 23:43:30 2024
From: postmowoods at gmail.com (Kelly Anderson)
Date: Tue, 30 Jul 2024 17:43:30 -0600
Subject: [ExI] Extraterrestrial vacuum balloon
Message-ID:

I haven't thought deeply about it... but would it be easier to accomplish
this on Venus, with its super-dense atmosphere?

-Kelly

From hkeithhenson at gmail.com  Wed Jul 31 00:41:13 2024
From: hkeithhenson at gmail.com (Keith Henson)
Date: Tue, 30 Jul 2024 17:41:13 -0700
Subject: [ExI] trust the fake science?
In-Reply-To:
References: <38eaa375-6d07-9ceb-8b37-7ef4034cf53e@swisscows.email>
	<31cea140-42c2-f95c-8f41-5f3833fc65b4@swisscows.email>
Message-ID:

What was your master's thesis?

Keith

On Tue, Jul 30, 2024 at 4:41 PM Kelly Anderson via extropy-chat wrote:
>
> In my limited experience... student-chosen topics like my master's
> thesis bite off more than can be comfortably chewed. BYU is still
> working on my master's thesis 42 years and dozens of students later,
> and so far as I know it has never been completed, though it could be
> close now.
>
> -Kelly

From avant at sollegro.com  Wed Jul 31 19:12:51 2024
From: avant at sollegro.com (Stuart LaForge)
Date: Wed, 31 Jul 2024 12:12:51 -0700
Subject: [ExI] Advertising AI is bad for marketing
Message-ID:

According to this study, using the term AI or artificial intelligence in
a product description makes people less likely to buy the product, even
for technological devices. The study found that people emotionally
distrust AI.

https://news.wsu.edu/press-release/2024/07/30/using-the-term-artificial-intelligence-in-product-descriptions-reduces-purchase-intentions/

--------------Begin excerpt

Companies may unintentionally hurt their sales by including the words
"artificial intelligence" when describing their offerings that use the
technology, according to a study led by Washington State University
researchers.

In the study, published in the Journal of Hospitality Marketing &
Management, researchers conducted experimental surveys with more than
1,000 adults in the U.S. to evaluate the relationship between AI
disclosure and consumer behavior.

The findings consistently showed products described as using artificial
intelligence were less popular, according to Mesut Cicek, clinical
assistant professor of marketing and lead author of the study.
"When AI is mentioned, it tends to lower emotional trust, which in turn
decreases purchase intentions," he said. "We found emotional trust plays
a critical role in how consumers perceive AI-powered products."

In the experiments, the researchers included questions and descriptions
across diverse product and service categories. For example, in one
experiment, participants were presented with identical descriptions of
smart televisions, the only difference being the term "artificial
intelligence" was included for one group and omitted for the other. The
group that saw AI included in the product description indicated they were
less likely to purchase the television.

Researchers also discovered that negative response to AI disclosure was
even stronger for "high-risk" products and services, those which people
commonly feel more uncertain or anxious about buying, such as expensive
electronics, medical devices or financial services. Because failure
carries more potential risk, which may include monetary loss or danger to
physical safety, mentioning AI for these types of descriptions may make
consumers more wary and less likely to purchase, according to Cicek.

"We tested the effect across eight different product and service
categories, and the results were all the same: it's a disadvantage to
include those kinds of terms in the product descriptions," Cicek said.

Cicek said the findings provide valuable insights for companies.

"Marketers should carefully consider how they present AI in their product
descriptions or develop strategies to increase emotional trust.
Emphasizing AI may not always be beneficial, particularly for high-risk
products. Focus on describing the features or benefits and avoid the AI
buzzwords," he said.

In addition to Cicek, the study included co-authors Dogan Gursoy,
professor of hospitality at WSU, and Lu Lu, associate professor at Temple
University's Fox School of Business and Management.
-------------------

So people want smart phones and smart TVs, just not "artificially
intelligent" phones and TVs. That is . . . strange.

Stuart LaForge
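(A quick numerical aside on the vacuum-balloon question a few messages
up. The sketch below uses approximate textbook figures for the two
atmospheres; treat the numbers as rough assumptions, not a design
calculation.)

# Rough lift budget for an evacuated 1 m^3 shell on Venus vs Earth.
# Densities are approximate surface values; this ignores shell mass
# and crush strength, which is the hard part of a vacuum balloon.
rho_venus = 65.0    # kg/m^3, CO2 atmosphere at ~92 bar and ~737 K
rho_earth = 1.225   # kg/m^3, air at sea level

volume = 1.0  # m^3
for name, rho in (("Venus", rho_venus), ("Earth", rho_earth)):
    # Buoyancy supports a total shell mass of up to rho * volume.
    print(f"{name}: an evacuated {volume:.0f} m^3 shell may weigh "
          f"up to {rho * volume:.2f} kg")

So per cubic metre, the Venusian surface mass budget is some fifty times
more generous than Earth's, which is why Venus is often suggested for
buoyant structures. The catch is that a surface shell must then survive
roughly 92 bar of external crushing pressure and around 460 C of heat;
conditions are far milder some 50 km up, where pressure is near 1 bar but
the density advantage largely disappears.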