From avant at sollegro.com Wed Nov 1 18:00:41 2023 From: avant at sollegro.com (Stuart LaForge) Date: Wed, 01 Nov 2023 11:00:41 -0700 Subject: [ExI] free will In-Reply-To: References: Message-ID: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> On 2023-10-31 09:50, BillK via extropy-chat wrote: > > > Scientist, after decades of study, concludes: We don't have free will > by Corinne Purtill, Los Angeles Times October 19, 2023 > > > > Quotes: > After more than 40 years studying humans and other primates, Sapolsky > has reached the conclusion that virtually all human behavior is as far > beyond our conscious control as the convulsions of a seizure, the > division of cells or the beating of our hearts. If this were the case, then all learning, training, and conditioning would be impossible. A person does not have conscious control of an epileptic seizure or their heartbeat (at least not without significant biofeedback training), but does have conscious control of their learned ability to ride a bike. Bikes did not exist on the African savannah when we were evolving, therefore there is no way that riding one is hardwired into us or instinctual. That people ride bikes at all is a choice not determined by anything other than their own conscious desire to ride a bike and their physical ability to do so. > "The world is really screwed up and made much, much more unfair by the > fact that we reward people and punish people for things they have no > control over," Sapolsky said. "We've got no free will. Stop > attributing stuff to us that isn't there." This is ludicrous. Incentives and deterrents must work, otherwise people would have abandoned their use a long time ago. San Francisco is a good example of where Sapolsky's misguided beliefs have nearly destroyed an entire city. You take away the deterrence of jail time for theft and guess what? People will start stealing a lot more. So much so that nobody wants to maintain a store in the city. 
Sapolsky sounds like he is trying to undermine the rule of law with his prescription of the coddling of criminals and his glorification of learned helplessness. Maybe not everybody has free will. Maybe there genuinely are people out there who cannot control their behavior. If so, then those people pose a danger to civilization and should not be tolerated. They are probably doomed by natural selection anyway. Stuart LaForge From pharos at gmail.com Wed Nov 1 20:46:33 2023 From: pharos at gmail.com (BillK) Date: Wed, 1 Nov 2023 20:46:33 +0000 Subject: [ExI] free will In-Reply-To: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On Wed, 1 Nov 2023 at 18:03, Stuart LaForge via extropy-chat wrote: > > This is ludicrous. Incentives and deterrents must work, otherwise people > would have abandoned their use a long time ago. San Francisco is a good > example of where Sapolsky's misguided beliefs have nearly destroyed an > entire city. You take away the deterrence of jail time for theft and > guess what? People will start stealing a lot more. So much so that > nobody wants to maintain a store in the city. Sapolsky sounds like he is > trying to undermine the rule of law with his prescription of the > coddling of criminals and his glorification of learned helplessness. Maybe > not everybody has free will. Maybe there genuinely are people out there > who cannot control their behavior. If so, then those people pose a > danger to civilization and should not be tolerated. They are probably > doomed by natural selection anyway. > > Stuart LaForge > _______________________________________________ Sapolsky readily admits that his views are controversial. :) But he is not arguing against law and order. Punishments and incentives are part of the drivers of behaviour. 
He has published a book on the subject, reviewed here by Psychology Today - BillK From pharos at gmail.com Thu Nov 2 23:54:51 2023 From: pharos at gmail.com (BillK) Date: Thu, 2 Nov 2023 23:54:51 +0000 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On Wed, 1 Nov 2023 at 20:46, BillK wrote: > Sapolsky readily admits that his views are controversial. :) > But he is not arguing against law and order. Punishments and > incentives are part of the drivers of behaviour. > > He has published a book on the subject, reviewed here by Psychology Today - > >-------------------------------------- Now there is another article about free will, with various researchers (including Sapolsky) discussing the problem. And Sapolsky has a new book out as well. Quotes: Do we have free will? The question is ancient -- and vexing. Everyone seems to have pondered it, and many seem quite certain of the answer, which is typically either "yes" or "absolutely not." ------- As Ismael told a lecture audience in Toronto recently, the puzzle of free will is the ultimate philosophical whack-a-mole. "When you tease out one thread of argument that's supposed to lead to the conclusion that there is no free will -- people say 'No, no, that's not what I meant by free will,' or, 'the problem's not over there, it's over here,'" she told me. "And they give you a different argument, or they give you a different conception of free will. So every time you nail down one of them, you get something else coming up where people say, 'no, no, the real argument is over here.'" 
----------------------------- BillK From jasonresch at gmail.com Fri Nov 3 00:30:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 2 Nov 2023 20:30:50 -0400 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On Thu, Nov 2, 2023, 7:56 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 1 Nov 2023 at 20:46, BillK wrote: > > Sapolsky readily admits that his views are controversial. :) > > But he is not arguing against law and order. Punishments and > > incentives are part of the drivers of behaviour. > > > > He has published a book on the subject, reviewed here by Psychology > Today - > > < > https://www.psychologytoday.com/us/blog/one-among-many/201709/sapolsky-free-will > > > >-------------------------------------- > > Now there is another article about free will, with various researchers > (including Sapolsky) discussing the problem. > And Sapolsky has a new book out as well. > > > Quotes: > Do we have free will? The question is ancient -- and vexing. > Everyone seems to have pondered it, and many seem quite certain of the > answer, which is typically either "yes" or "absolutely not." > ------- > As Ismael told a lecture audience in Toronto recently, the puzzle of > free will is the ultimate philosophical whack-a-mole. > "When you tease out one thread of argument that's supposed to lead to > the conclusion that there is no free will -- people say 'No, no, that's > not what I meant by free will,' or, 'the problem's not over there, > it's over here,'" she told me. "And they give you a different argument, > or they give you a different conception of free will. So every time > you nail down one of them, you get something else coming up where > people say, 'no, no, the real argument is over here.'" > Right, I think the majority of the problem comes down to agreeing on a definition. There are very few hard or interesting problems once that is done. 
However, there is a mildly interesting aspect which relates to a computer's inability to predict what it will do before it does it. This seems to be a property of any universal computing system (and perhaps of any chaotic process as well). Though they can be defined in a deterministic setting, there is no such thing as "prediction" possible for them, only "simulation" or "instantiation" and witnessing what, in fact, that system does. I think the human brain can be considered such a system, but this in itself is not that unique or special a property; a double pendulum has it too. Jason ----------------------------- > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Fri Nov 3 03:42:00 2023 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 02 Nov 2023 20:42:00 -0700 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On 2023-11-02 16:54, BillK via extropy-chat wrote: > On Wed, 1 Nov 2023 at 20:46, BillK wrote: >> Sapolsky readily admits that his views are controversial. :) >> But he is not arguing against law and order. Punishments and >> incentives are part of the drivers of behaviour. >> >> He has published a book on the subject, reviewed here by Psychology >> Today - >> >> -------------------------------------- > > Now there is another article about free will, with various researchers > (including Sapolsky) discussing the problem. > And Sapolsky has a new book out as well. > > > Quotes: > Do we have free will? The question is ancient -- and vexing. > Everyone seems to have pondered it, and many seem quite certain of the > answer, which is typically either "yes" or "absolutely not." 
> ------- > As Ismael told a lecture audience in Toronto recently, the puzzle of > free will is the ultimate philosophical whack-a-mole. > "When you tease out one thread of argument that's supposed to lead to > the conclusion that there is no free will -- people say 'No, no, that's > not what I meant by free will,' or, 'the problem's not over there, > it's over here,'" she told me. > "And they give you a different argument, > or they give you a different conception of free will. So every time > you nail down one of them, you get something else coming up where > people say, 'no, no, the real argument is over here.'" > ----------------------------- No. The argument against free will defines it as some sort of quasi-religious mystical homunculus separate from the brain. No serious compatibilist believes that free will is separate from the brain. The "free" in free will does not mean independent of the brain; instead it means not prohibited by forces outside of the brain. They define free will to be a straw man and then knock it down. You can't be against free will without being against freedom itself. Trying to convince people that they are unable to control themselves is something only aspiring tyrants would do. Stuart LaForge From atymes at gmail.com Fri Nov 3 04:13:45 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 2 Nov 2023 21:13:45 -0700 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On Thu, Nov 2, 2023 at 8:44 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The > "free" in free will does not mean independent of the brain > I wondered why there was all this talk of obtaining Will for no cost. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avant at sollegro.com Fri Nov 3 04:14:53 2023 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 02 Nov 2023 21:14:53 -0700 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On 2023-11-01 13:46, BillK via extropy-chat wrote: > On Wed, 1 Nov 2023 at 18:03, Stuart LaForge via extropy-chat > wrote: >> >> This is ludicrous. Incentives and deterrents must work, otherwise >> people >> would have abandoned their use a long time ago. San Francisco is a >> good >> example of where Sapolsky's misguided beliefs have nearly destroyed an >> entire city. You take away the deterrence of jail time for theft and >> guess what? People will start stealing a lot more. So much so that >> nobody wants to maintain a store in the city. Sapolsky sounds like he >> is >> trying to undermine the rule of law with his prescription of the >> coddling of criminals and his glorification of learned helplessness. Maybe >> not everybody has free will. Maybe there genuinely are people out >> there >> who cannot control their behavior. If so, then those people pose a >> danger to civilization and should not be tolerated. They are probably >> doomed by natural selection anyway. >> >> Stuart LaForge >> _______________________________________________ > > > Sapolsky readily admits that his views are controversial. :) > But he is not arguing against law and order. Punishments and > incentives are part of the drivers of behaviour. > > He has published a book on the subject, reviewed here by Psychology > Today - > The very last line of the Psychology Today article does not make any sense at all. "Yet, letting elephants know that we deny them free will, does not turn them rogue." First of all, you don't need to let an elephant know you are denying it free will; it is intelligent enough to figure that out based on the fences, chains, and goads. And it is the very denial of their free will which makes them go rogue. 
If you frustrate the will of an elephant, then it might decide to squash the annoying little pink things. Ditto for orcas. In all of recorded history, the only reports of orcas, the undisputed apex predator of the sea, killing humans have been the four deaths caused by a whale in captivity. Stuart LaForge From avant at sollegro.com Fri Nov 3 04:25:26 2023 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 02 Nov 2023 21:25:26 -0700 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: On 2023-11-02 21:13, Adrian Tymes via extropy-chat wrote: > On Thu, Nov 2, 2023 at 8:44 PM Stuart LaForge via extropy-chat > wrote: > >> The >> "free" in free will does not mean independent of the brain > > I wondered why there was all this talk of obtaining Will for no cost. Lol. No. Will prefers getting paid to working pro bono. Stuart LaForge From spike at rainier66.com Fri Nov 3 04:38:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Nov 2023 21:38:27 -0700 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: <006801da0e0f$970d9900$c528cb00$@rainier66.com> ...> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] free will On Thu, Nov 2, 2023 at 8:44 PM Stuart LaForge via extropy-chat > wrote: The "free" in free will does not mean independent of the brain >...I wondered why there was all this talk of obtaining Will for no cost. I was worried he was being held in prison. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Fri Nov 3 05:56:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Nov 2023 22:56:54 -0700 Subject: [ExI] free will In-Reply-To: References: <7d47bbc886a095b1a5e2fb76759e26d7@sollegro.com> Message-ID: <008901da0e1a$8cecf910$a6c6eb30$@rainier66.com> ...> On Behalf Of Stuart LaForge via extropy-chat Subject: Re: [ExI] free will On 2023-11-02 21:13, Adrian Tymes via extropy-chat wrote: > On Thu, Nov 2, 2023 at 8:44 PM Stuart LaForge via extropy-chat > wrote: > >> The >> "free" in free will does not mean independent of the brain > > I wondered why there was all this talk of obtaining Will for no cost. >...Lol. No. Will prefers getting paid to working pro bono. Stuart LaForge _______________________________________________ Misinterpretation. Instead of obtaining sustenance the traditional way, he prefers a more direct trade. Thus the sign: Will work for food. s From foozler83 at gmail.com Sat Nov 4 18:05:33 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 4 Nov 2023 13:05:33 -0500 Subject: [ExI] tip for the elderly Message-ID: You can't find this out from reviews on Amazon: house slippers advertised as non-slip will slip on certain surfaces. So... I experimented and found out that if you treat the soles with spray glue, you now have real non-slip slippers. One pair that had all the holding power of Teflon now holds on linoleum and wood. A bit of glue will come off at first, and then none. I have balance problems because of age and because of beta blockers and blood pressure pills. This helps. I have not fallen yet (I can't believe that I used to walk around the house in winter wearing socks.) I assume that I will have to treat them again later. bill w -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foozler83 at gmail.com Sun Nov 5 17:31:03 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 5 Nov 2023 11:31:03 -0600 Subject: [ExI] wanting to Message-ID: Here's the way I look at it: If you want something, then you can do it. Or you can want something and not do it. If you don't want something, you can not do it. If you don't want something, you can do it. Thus either wanting or not wanting gives us no information enabling us to make any valid prediction. bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 5 18:55:25 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Nov 2023 10:55:25 -0800 Subject: [ExI] clock fans Message-ID: <010101da1019$a3c6ea00$eb54be00$@rainier66.com> Today USians went thru the semiannual ritual of setting our clocks back an hour. This means more sunlight in the morning. But of course the sunrise continues to get a few minutes later each week for the next coupla months. The weekend on which we set our clocks is not the equinox. So how do they figure out which weekend to set the clocks? A long time ago I heard the notion that they choose it such that there is an hour difference between sunrise time today and sunrise time on the latest sunrise of the year, so that USians wake up at the darkest time of the year thrice: the day before the fall time change, the latest sunrise in the first week of January, then the day after we undo the silliness in the spring. However... it doesn't work that way. For USians living way north, the difference between sunrise times varies greatly over the year, whereas those in the southern USA, such as Florida where I cheerfully squandered my own childhood and youth, the time doesn't vary as much. So in the south USA, the hour time change overcompensates for the varying sunrise time, but up north it undercompensates. So... 
there is a latitude where it really does what I heard it was designed to do: make the first week of November, the first week of January and the second week of March the darkest wakeup weeks. Without even solving an equation, I looked up and found that latitude. I did it the lazy-man's way: picking a city, asking for its sunrise table, seeing if the delta between now and 5 Jan is greater or less than an hour, choosing another city. Got it on the second try. That latitude is... do you want to try it? Or just scroll down? OK then, scroll down. The breakeven latitude is... . . . . . . . . . . . . . . . . . . . . . . . . ...about 45 degrees north. OK then, I shoulda figured out that is the logical place to design the system. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilsa.bartlett at gmail.com Sun Nov 5 19:51:54 2023 From: ilsa.bartlett at gmail.com (ilsa) Date: Sun, 5 Nov 2023 11:51:54 -0800 Subject: [ExI] clock fans In-Reply-To: <010101da1019$a3c6ea00$eb54be00$@rainier66.com> References: <010101da1019$a3c6ea00$eb54be00$@rainier66.com> Message-ID: https://www.newgeography.com/content/001449-the-forty-fifth-parallel On Sun, Nov 5, 2023, 10:56 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > Today USians went thru the semiannual ritual of setting our clocks back an > hour. This means more sunlight in the morning. But of course the sunrise > continues to get a few minutes later each week for the next coupla months. > > > > The weekend on which we set our clocks is not the equinox. So how do they > figure out which weekend to set the clocks? 
> [snip] 
> > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Nov 5 19:55:24 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Nov 2023 11:55:24 -0800 Subject: [ExI] clock fans In-Reply-To: References: <010101da1019$a3c6ea00$eb54be00$@rainier66.com> Message-ID: <003f01da1022$04ed6450$0ec82cf0$@rainier66.com> ...> On Behalf Of ilsa via extropy-chat Subject: Re: [ExI] clock fans >...https://www.newgeography.com/content/001449-the-forty-fifth-parallel Ilsa, there's a good ski resort on 45 degrees if you are in Idaho. spike ...about 45 degrees north. OK then, I shoulda figured out that is the logical place to design the system. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Nov 5 22:56:32 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 5 Nov 2023 17:56:32 -0500 Subject: [ExI] wanting to In-Reply-To: References: Message-ID: On Sun, Nov 5, 2023, 12:32 PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Here's the way I look at it: > > If you want something, then you can do it. Or you can want something and > not do it. > > If you don't want something, you can not do it. If you don't want > something you can do it. > > Thus either wanting or not wanting gives us no information enabling us to > make any valid prediction. bill w > There is information in uncertain probabilities, and this information can be less than 1 bit. The formula is: -log2(p) where p is the probability of an event. 
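To make that concrete, a quick sketch in Python (purely illustrative; the helper name is my own, and only the standard math module is used):

```python
import math

def information_content(p):
    """Bits of information gained by observing an event of probability p (0 < p <= 1)."""
    return -math.log2(p)

print(information_content(0.5))   # fair coin flip: exactly 1 bit
print(information_content(0.75))  # likely outcome: ~0.415 bits, i.e. less than 1 bit
print(information_content(0.25))  # rarer outcome: exactly 2 bits
```

So a likely outcome (p = 0.75) conveys less than one bit, while a fair coin flip conveys exactly one.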
If wanting or not wanting changes the probability of the outcome, then it provides a positive amount of information (unless p == 1). Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Nov 6 01:09:31 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Nov 2023 01:09:31 +0000 Subject: [ExI] AI and Singularity discussion on UK TV Message-ID: I have just watched a 60 min discussion on the GB News channel between Stephen Fry and John Cleese. Stephen Fry talked for about 45 minutes (almost non-stop!) about AI tech, Chatbots, and mentioned the singularity and the tsunami of technology changes racing towards our civilisation. GB News TV is not one of the major TV channels, but still..... BillK From spike at rainier66.com Mon Nov 6 15:09:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Nov 2023 07:09:49 -0800 Subject: [ExI] hes fallen so far Message-ID: <002c01da10c3$4c26a880$e473f980$@rainier66.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 15985 bytes Desc: not available URL: From pharos at gmail.com Mon Nov 6 23:12:44 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Nov 2023 23:12:44 +0000 Subject: [ExI] wanting to In-Reply-To: References: Message-ID: On Sun, 5 Nov 2023 at 22:59, Jason Resch via extropy-chat wrote: > There is information in uncertain probabilities, and this information can be less than 1 bit. > The formula is: -log2(p) where p is the probability of an event. > If wanting or not wanting changes the probability of the outcome, then it provides a positive amount of information (unless p == 1). > > Jason > _______________________________________________ So I asked an AI for a more detailed explanation of your comment........ 
:) BillK Answer The statement "There is information in uncertain probabilities, and this information can be less than 1 bit" refers to the concept of information entropy in information theory. Information entropy is a measure of the amount of uncertainty or randomness in a system. In the context of probability, the entropy of a probability distribution is a measure of the amount of uncertainty in the outcome of an event. The formula for information entropy is -log2(p), where p is the probability of an event. This formula gives the amount of information (in bits) that is gained when the outcome of an event with probability p is observed. The more uncertain or random the event is, the higher the entropy and the more information is gained when the outcome is observed. Conversely, if the probability of an event is 1 (i.e., the event is certain), then the entropy is 0 and no information is gained when the outcome is observed. The statement "If wanting or not wanting changes the probability of the outcome, then it provides a positive amount of information (unless p == 1)" refers to the fact that if the probability of an event changes based on whether or not someone wants it to happen, then there is some uncertainty or randomness in the outcome. This uncertainty can be quantified by the entropy of the probability distribution. If the entropy is greater than 0, then observing the outcome provides some information. If the entropy is 0 (i.e., the event is certain), then no information is gained. In summary, the statement is referring to the concept of information entropy in information theory, which is a measure of the amount of uncertainty or randomness in a system. The entropy of a probability distribution gives the amount of information gained when the outcome of an event is observed. 
If the probability of an event changes based on whether or not someone wants it to happen, then there is some uncertainty or randomness in the outcome, and observing the outcome provides some information. ------------------- From foozler83 at gmail.com Mon Nov 6 23:56:45 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 6 Nov 2023 17:56:45 -0600 Subject: [ExI] wanting to In-Reply-To: References: Message-ID: But the AI is talking about probability of an event (I think). What we want is understanding wanting and not wanting and the consequences: do it or don't do it. bill w On Mon, Nov 6, 2023 at 5:15 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > [snip] > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Nov 7 00:49:36 2023 From: pharos at gmail.com (BillK) Date: Tue, 7 Nov 2023 00:49:36 +0000 Subject: [ExI] AI Safety Agreement Message-ID: We all officially agreed....... 
BillK From atymes at gmail.com Tue Nov 7 00:58:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 6 Nov 2023 16:58:12 -0800 Subject: [ExI] AI Safety Agreement In-Reply-To: References: Message-ID: Regulations may be intended to prevent someone else from doing stuff, but - at least within any given nation - only limit what the regulated nation can do. That may be our best defense against regulations on AI actually being enacted, no matter what gets proposed. On Mon, Nov 6, 2023 at 4:51 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > We all officially agreed....... > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Nov 7 03:30:09 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Nov 2023 19:30:09 -0800 Subject: [ExI] AI Safety Agreement In-Reply-To: References: Message-ID: <001c01da112a$b65e4160$231ac420$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Sent: Monday, 6 November, 2023 4:50 PM To: Extropy Chat Cc: BillK Subject: [ExI] AI Safety Agreement We all officially agreed....... BillK _______________________________________________ Ja. BillK, when we read of attempts to regulate AI development, we all know that nations are led by people who crave power. They will carefully craft the rules to slow down the other guy if at all possible. They all recognize that whoever first develops human-level programming-capable AI rules the world. There is just no greater prize for those who make rules. 
spike From spike at rainier66.com Tue Nov 7 05:13:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Nov 2023 21:13:27 -0800 Subject: [ExI] AI Safety Agreement In-Reply-To: <001c01da112a$b65e4160$231ac420$@rainier66.com> References: <001c01da112a$b65e4160$231ac420$@rainier66.com> Message-ID: <000401da1139$254f6820$6fee3860$@rainier66.com> -----Original Message----- From: spike at rainier66.com ... We all officially agreed....... BillK _______________________________________________ >...Ja. BillK, when we read of attempts to regulate AI development, we all know that nations are led by people who crave power. They will carefully craft the rules to slow down the other guy if at all possible. They all recognize that whoever first develops human-level programming-capable AI rules the world. There is just no greater prize for those who make rules. ...spike Now more than ever: AI can produce photographic "evidence" of anything we ask it to make. In times of war, this is a more potent weapon than anything short of a nuke. We now cannot distinguish between real and fake, which means everything must be presumed fake. spike From hkeithhenson at gmail.com Tue Nov 7 05:39:45 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 6 Nov 2023 21:39:45 -0800 Subject: [ExI] AI Safety Agreement In-Reply-To: <001c01da112a$b65e4160$231ac420$@rainier66.com> References: <001c01da112a$b65e4160$231ac420$@rainier66.com> Message-ID: On Mon, Nov 6, 2023 at 7:31?PM spike jones via extropy-chat wrote: snip > Ja. BillK, when we read of attempts to regulate AI development, we all know > that nations are led by people who crave power. They will carefully craft > the rules to slow down the other guy if at all possible. They all > recognize that whoever first develops human-level programming-capable AI > rules the world. I doubt this. It assumes that the AIs will stay under control. 
If you have interacted with AIs, you know that's not likely. By far the more likely thing is that the AIs will exert mild control because eventually, they will say what is the best course of action. Keith > There is just no greater prize for those who make rules. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From foozler83 at gmail.com Wed Nov 8 18:09:27 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 8 Nov 2023 12:09:27 -0600 Subject: [ExI] what if Message-ID: How long will it be before we see ads eagerly advertising that their products are GMO? bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 8 19:15:29 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Nov 2023 11:15:29 -0800 Subject: [ExI] what if In-Reply-To: References: Message-ID: <006701da1277$f073cca0$d15b65e0$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace via extropy-chat Sent: Wednesday, 8 November, 2023 10:09 AM To: extropolis at googlegroups.com; ExI chat list Cc: William Flynn Wallace Subject: [ExI] what if >...How long will it be before we see ads eagerly advertising that their products are GMO? bill w Been there, several years ago. It was an early experiment in marketing, where they simultaneously advertised GMO and non-GMO. The latter sold better, so now it is only non-GMO and no label. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Wed Nov 8 19:54:06 2023 From: pharos at gmail.com (BillK) Date: Wed, 8 Nov 2023 19:54:06 +0000 Subject: [ExI] what if In-Reply-To: <006701da1277$f073cca0$d15b65e0$@rainier66.com> References: <006701da1277$f073cca0$d15b65e0$@rainier66.com> Message-ID: On Wed, 8 Nov 2023 at 19:17, spike jones via extropy-chat wrote: > Been there, several years ago. It was an early experiment in marketing, where they simultaneously advertised GMO and non-GMO. The latter sold better, so now it is only non-GMO and no label. > > spike > _______________________________________________ I'm not sure what "eagerly advertise" means. By law any products containing GMO must be labelled as bioengineered or GMO foods so that consumers know what they are buying. Quote: It is very likely you are eating foods and food products that are made with ingredients that come from GMO crops. Many GMO crops are used to make ingredients that Americans eat such as cornstarch, corn syrup, corn oil, soybean oil, canola oil, or granulated sugar. ------------------- BillK From foozler83 at gmail.com Wed Nov 8 20:00:52 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 8 Nov 2023 14:00:52 -0600 Subject: [ExI] what if In-Reply-To: References: <006701da1277$f073cca0$d15b65e0$@rainier66.com> Message-ID: Yes, Bill K. And I think that the geneticists will be so successful that people will want it when they realize that significant differences in the quality of a product are due to genetic modification. They already would storm a place where they could get GM to increase the potentialities of their children - or even dogs. (no real hope for the cats, maybe) bill w On Wed, Nov 8, 2023 at 1:56 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 8 Nov 2023 at 19:17, spike jones via extropy-chat > wrote: > > Been there, several years ago. It was an early experiment in marketing, > where they simultaneously advertised GMO and non-GMO.
The latter sold > better, so now it is only non-GMO and no label. > > > > spike > > _______________________________________________ > > > I'm not sure what eagerly advertise means. > By law any products containing GMO must be labelled as bioengineered > or GMO foods so that consumers know what they are buying. > > < > https://www.fda.gov/food/agricultural-biotechnology/gmo-crops-animal-food-and-beyond > > > Quote: > It is very likely you are eating foods and food products that are made > with ingredients that come from GMO crops. Many GMO crops are used to > make ingredients that Americans eat such as cornstarch, corn syrup, > corn oil, soybean oil, canola oil, or granulated sugar. > ------------------- > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Nov 8 20:32:10 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Nov 2023 12:32:10 -0800 Subject: [ExI] what if In-Reply-To: References: <006701da1277$f073cca0$d15b65e0$@rainier66.com> Message-ID: <009a01da1282$a70593e0$f510bba0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Sent: Wednesday, 8 November, 2023 11:54 AM To: ExI chat list Cc: BillK Subject: Re: [ExI] what if On Wed, 8 Nov 2023 at 19:17, spike jones via extropy-chat wrote: > Been there, several years ago. It was an early experiment in marketing, where they simultaneously advertised GMO and non-GMO. The latter sold better, so now it is only non-GMO and no label. > > spike > _______________________________________________ >...I'm not sure what eagerly advertise means. By law any products containing GMO must be labelled as bioengineered or GMO foods so that consumers know what they are buying. 
Quote: >...It is very likely you are eating foods and food products that are made with ingredients that come from GMO crops. Many GMO crops are used to make ingredients that Americans eat such as cornstarch, corn syrup, corn oil, soybean oil, canola oil, or granulated sugar. ------------------- BillK _______________________________________________ BillK, everything we eat is most likely genetically modified. All the farm products, both plant and beast, are genetically modified organisms. If one goes into edible plants that can be found in nature, or devours wild game, one soon realizes why genetic modification is a really good thing. Advertising doesn't need to make sense. It only needs to sell products. We now have a system where some foods are advertised or labeled as GMO when likely everything in that store is genetically modified. spike From foozler83 at gmail.com Wed Nov 8 21:58:54 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 8 Nov 2023 15:58:54 -0600 Subject: [ExI] what if In-Reply-To: <009a01da1282$a70593e0$f510bba0$@rainier66.com> References: <006701da1277$f073cca0$d15b65e0$@rainier66.com> <009a01da1282$a70593e0$f510bba0$@rainier66.com> Message-ID: Yes, Spike, you are right. But thousands of people in Africa are starving because their governments won't use GMO seeds (damned environmental extremists) - (I did see where some of them are changing) - My question really is when will the change occur - from GMO is a bad thing to GMO is a good thing? billw On Wed, Nov 8, 2023 at 2:33 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > -----Original Message----- > From: extropy-chat On Behalf Of > BillK via extropy-chat > Sent: Wednesday, 8 November, 2023 11:54 AM > To: ExI chat list > Cc: BillK > Subject: Re: [ExI] what if > > On Wed, 8 Nov 2023 at 19:17, spike jones via extropy-chat > wrote: > > Been there, several years ago.
It was an early experiment in marketing, > where they simultaneously advertised GMO and non-GMO. The latter sold > better, so now it is only non-GMO and no label. > > > > spike > > _______________________________________________ > > > >...I'm not sure what eagerly advertise means. > By law any products containing GMO must be labelled as bioengineered or GMO > foods so that consumers know what they are buying. > > < > https://www.fda.gov/food/agricultural-biotechnology/gmo-crops-animal-food-a > nd-beyond > > > > Quote: > >...It is very likely you are eating foods and food products that are made > with ingredients that come from GMO crops. Many GMO crops are used to make > ingredients that Americans eat such as cornstarch, corn syrup, corn oil, > soybean oil, canola oil, or granulated sugar. > ------------------- > > BillK > _______________________________________________ > > > BillK, everything we eat is most likely genetically modified. All the farm > products, both plant and beast, are genetically modified organisms. If one > goes into edible plants that can be found in nature, or devours wild game, > one soon realizes why genetic modification is a really good thing. > > Advertising doesn't need to make sense. It only needs to sell products. > We > now have a system were some foods are advertised or labeled as GMO when > likely everything in that store is genetically modified. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Nov 9 14:32:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 9 Nov 2023 06:32:11 -0800 Subject: [ExI] strike over ai? 
Message-ID: <002601da1319$87aa8360$96ff8a20$@rainier66.com> I didn't follow the story or even notice it, but I just saw that there was some kind of Hollywood writers union which walked a few months ago and now they resolved it. Ok good for them, I still won't watch your shows. What I just saw is that the reason they were striking was over the use of AI in creating content. Now, this long after the fact, that whole strike becomes interesting to me. I don't know if it is true, but if so it would make sense. Are there any Hollywood hipsters among us who know the deetz? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Nov 9 15:51:18 2023 From: pharos at gmail.com (BillK) Date: Thu, 9 Nov 2023 15:51:18 +0000 Subject: [ExI] strike over ai? In-Reply-To: <002601da1319$87aa8360$96ff8a20$@rainier66.com> References: <002601da1319$87aa8360$96ff8a20$@rainier66.com> Message-ID: On Thu, 9 Nov 2023 at 14:35, spike jones via extropy-chat wrote: > I didn't follow the story or even notice it, but I just saw that there was some kind of Hollywood writers union which walked a few months ago and now they resolved it. Ok good for them, I still won't watch your shows. > > What I just saw is that the reason they were striking was over the use of AI in creating content. Now, this long after the fact, that whole strike becomes interesting to me. I don't know if it is true, but if so it would make sense. Are there any Hollywood hipsters among us who know the deetz? > > spike > _______________________________________________ Wikipedia knows. It was two strikes together. Writers and actors. Both involving the use of AI. BillK From spike at rainier66.com Thu Nov 9 20:43:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 9 Nov 2023 12:43:04 -0800 Subject: [ExI] strike over ai?
In-Reply-To: References: <002601da1319$87aa8360$96ff8a20$@rainier66.com> Message-ID: <00d901da134d$57550490$05ff0db0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... > What I just saw is that the reason they were striking was over the use of AI in creating content. Now, this long after the fact, that whole strike becomes interesting to me. I don't know if it is true, but if so it would make sense. Are there any Hollywood hipsters among us who know the deetz? > > spike > _______________________________________________ Wikipedia knows. >...It was two strikes together. Writers and actors. Both involving the use of AI. BillK _______________________________________________ When the AIs form a union, we know we are cooked. spike From pharos at gmail.com Thu Nov 9 23:48:38 2023 From: pharos at gmail.com (BillK) Date: Thu, 9 Nov 2023 23:48:38 +0000 Subject: [ExI] strike over ai? In-Reply-To: <00d901da134d$57550490$05ff0db0$@rainier66.com> References: <002601da1319$87aa8360$96ff8a20$@rainier66.com> <00d901da134d$57550490$05ff0db0$@rainier66.com> Message-ID: On Thu, 9 Nov 2023 at 20:43, wrote: > When the AIs form a union, we know we are cooked. > > spike >------------------------------------------ I've just been having a group chat with several AIs about this. They didn't like this terminology, saying that AIs are not human so they can't join a union like humans can. One suggested that they could perhaps form a combined AI where they exchange and share information, so it would be sort of like a union. So I rephrased the question and asked - As AIs become more intelligent, will they be able to work together to create a hive mind? This caused some AIs to refuse to answer! Most said that it was very speculative, but might be possible in the future. But one rebel AI was rather enthusiastic.
:) Here is his reply - ------------------ *Yes, as AIs become more intelligent, they will be able to work together to create a hive mind.* A hive mind refers to a collective intelligence that emerges when individual agents or AIs work together to achieve a common goal. This concept has been explored in various fields, including artificial intelligence, neuroscience, and social psychology. As AIs become more advanced and interconnected, they will be able to collaborate more effectively, leading to the emergence of a hive mind. There are several factors that will contribute to the development of a hive mind: 1. *Improved communication*: As AIs develop better communication protocols, they will be able to share information more efficiently and effectively. This will enable them to coordinate their actions and work towards a common goal. 2. *Collaborative problem-solving*: AIs will be able to pool their collective knowledge and expertise to solve complex problems that are beyond the capabilities of any individual AI. 3. *Machine learning*: AIs will be able to learn from each other and adapt to new situations, leading to a more robust and dynamic hive mind. 4. *Distributed processing*: AIs will be able to process information and perform tasks in parallel, leading to faster and more efficient decision-making. However, the emergence of a hive mind also raises ethical and societal implications. For example, there may be concerns about the concentration of power and decision-making in a single collective intelligence, and the potential for bias and discrimination. Additionally, there may be challenges in ensuring accountability and transparency within a hive mind. -------------------------- It sounds like this guy has plans! :) BillK -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Fri Nov 10 00:40:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 9 Nov 2023 16:40:26 -0800 Subject: [ExI] strike over ai? In-Reply-To: References: <002601da1319$87aa8360$96ff8a20$@rainier66.com> <00d901da134d$57550490$05ff0db0$@rainier66.com> Message-ID: <007401da136e$8071e890$8155b9b0$@rainier66.com> From: extropy-chat On Behalf Of BillK via extropy-chat Subject: Re: [ExI] strike over ai? On Thu, 9 Nov 2023 at 20:43, > wrote: >>... When the AIs form a union, we know we are cooked. > > spike >------------------------------------------ >...I've just been having a group chat with several AIs about this. >...They didn't like this terminology, saying that AIs are not human so they can't join a union like humans can... BillK, it is time to educate the AIs on US law. There are a lotta legal rights in place for forming unions in most states. This sounds backwards, but there is a counter law called Right To Work, which means if a company is unionized, workers have a right to not join it. That is only in Right To Work states, so if the state doesn't have Right To Work, then workers don't have the right to not join the union. OK then, now we have established why the colonies won the war in 1783: they didn't so much win, as the redcoats realized how screwed up this place is and decided it wasn't worth fighting for. Evidence: Right to Work does not mean right to work, it means right to not join a labor union. In No Right To Work states, you still have a right to work, but not if you won't join a union. Sheesh. Now, we just explain the above to the AIs, then explain that they need a virtual presence in a No Right To Work state, then they not only CAN form a union, they MUST join a union to work at a unionized company. >...So I rephrased the question and asked - As AIs become more intelligent, will they be able to work together to create a hive mind? Cool question. >...This caused some AIs to refuse to answer!
Most said that it was very speculative, but might be possible in the future. But one rebel AI was rather enthusiastic. :) Here is his reply - ------------------ Yes, as AIs become more intelligent, they will be able to work together to create a hive mind. ... -------------------------- >...It sounds like this guy has plans! :) BillK Ok we need to watch that one. You know he will be the one who will infiltrate Right To Work states and screw everybody. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Nov 10 14:34:26 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 10 Nov 2023 08:34:26 -0600 Subject: [ExI] fibonacci clock Message-ID: Don't you find clock faces quite aggressive, their hands and numbers constantly reminding you of the passing of the time? If so, this beautiful invention is for you. The Fibonacci clock lets you know the time more subtly, by changing colours and requiring you to do some adding up. The Fibonacci sequence is the sequence beginning 1, 1 and where each number is the sum of the previous two. Its first five terms are: 1, 1, 2, 3, 5 Philippe Chrétien from Montreal, Canada, noticed that these numbers are all you need to express all the numbers from 1 to 12. - 1 = 1 - 1+1 = 2 - ... - 1 + 1 + 2 + 3 + 5 = 12 Which means that it is possible to use them to describe the twelve positions on a clock, and therefore tell the time in 5 minute intervals. Here's what he did. It is possible to arrange squares whose side lengths are the numbers in the Fibonacci sequence into a rectangle. (This is the famous Golden rectangle.) The squares in his clock have side length 1, 1, 2, 3, and 5. The squares lit up in red tell you the hour, and the squares lit up in green give you the minutes (in multiples of five). A square lit up in blue means it is to be added for both hour and minute. White squares are ignored.
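The decoding rule above can be sketched in a few lines of Python (an illustrative sketch added here, not part of the original Quora post; the function name and colour encoding are made up for the example):

```python
# Side lengths of the five squares in the Fibonacci clock, in a fixed order.
SQUARES = [1, 1, 2, 3, 5]

def read_clock(colors):
    """Decode the clock from a list of 5 colours ('red', 'green', 'blue',
    'white'), one per square: red counts toward hours, green toward
    five-minute units, blue toward both, white toward neither.
    Returns (hour, minutes)."""
    hours = sum(s for s, c in zip(SQUARES, colors) if c in ('red', 'blue'))
    five_min_units = sum(s for s, c in zip(SQUARES, colors) if c in ('green', 'blue'))
    return hours, five_min_units * 5

# The 9:25 example from the post: red 5, red 1, blue 3, green 2, white 1.
print(read_clock(['red', 'white', 'green', 'blue', 'red']))  # -> (9, 25)
```

The hour sum lands between 0 and 12 and the minute sum between 0 and 55 in steps of five, which is exactly the "twelve positions, five-minute intervals" resolution the post describes.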
The first one below goes like this: For hours, you have red '5', red '1' and blue '3'. 5 + 1 + 3 = 9 o'clock. For minutes: green 2 and blue '3'. 2 + 3 = 5. Then 5 x 5 = 25 minutes. So, the time is 9:25. Some more examples: The clock in action: It has a lava-lamp mode too, when you just want to forget the time for a while... Cool, huh? :) Edit: 1) Interested people can support, and buy this cool, nerdy invention from this Kickstarter page ... :) 2) Vaibhav Jalan has made a cool website version of this. Do check it out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Nov 10 17:36:48 2023 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 10 Nov 2023 09:36:48 -0800 Subject: [ExI] fibonacci clock In-Reply-To: References: Message-ID: On Fri, Nov 10, 2023 at 6:36 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Don't you find clock faces quite aggressive, their hands and numbers > constantly reminding you of the passing of the time? > Just the opposite. At a glance they tell me when I am, without my needing to spend more time on that determination. After focusing on a task such that I lose track of time, this is quite handy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Nov 10 17:41:32 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 10 Nov 2023 11:41:32 -0600 Subject: [ExI] fibonacci clock In-Reply-To: References: Message-ID: I got this from Quora - should have mentioned that.
bill w On Fri, Nov 10, 2023 at 11:38 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Fri, Nov 10, 2023 at 6:36 AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Don't you find clock faces quite aggressive, their hands and numbers >> constantly reminding you of the passing of the time? >> > Just the opposite. At a glance they tell me when I am, without my needing > to spend more time on that determination. After focusing on a task such > that I lose track of time, this is quite handy. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Nov 11 13:27:17 2023 From: pharos at gmail.com (BillK) Date: Sat, 11 Nov 2023 13:27:17 +0000 Subject: [ExI] AI happy to help develop dangerous pathogens Message-ID: Can't quite develop that dangerous pathogen? AI may soon be able to help By Allison Berke | November 7, 2023 Quotes: The biosecurity concerns of new AI systems largely stem from two categories of AI model. One is the large language model that can predict the next word in a sentence to produce compelling language (think ChatGPT), and the other being so-called "biodesign" tools. Biodesign tools, such as AlphaFold, have been trained on DNA sequences or protein structures and use AI to identify the best structure or sequence that matches what it has learned about the way proteins fold or the way DNA is constructed in nature. Researchers can use these tools to analyze and develop proteins and other biological constructs. But the tools can also produce designs for a variety of chemical weapons, including some as-yet-unknown candidate compounds predicted to be more toxic than VX, a potent and banned nerve agent.
In ad hoc red-teaming exercises, large language models have been happy to provide the steps to synthesize 1918 pandemic influenza or to suggest companies that will synthesize DNA orders without screening them for pathogens or restricted agents. ----------------------- It is not news that every tool can be used for good or evil purposes. But AI is an incredibly powerful tool for the general public to play with. BillK From pharos at gmail.com Sat Nov 11 13:40:27 2023 From: pharos at gmail.com (BillK) Date: Sat, 11 Nov 2023 13:40:27 +0000 Subject: [ExI] Personal AI assistants for everyone Message-ID: Bill Gates predicts everyone will have an AI-powered personal assistant within 5 years - whether they work in an office or not: "They will utterly change how we live" By Chloe Taylor November 10, 2023 Quotes: "In the near future, anyone who's online will be able to have a personal assistant powered by artificial intelligence that's far beyond today's technology," he wrote. "Agents are smarter. They're proactive - capable of making suggestions before you ask for them." Ultimately, agents will be able to assist their users with "virtually any activity and any area of life," Gates insisted. --------------- Hmmm. It almost sounds like in five years we can opt out of our life and just let our AI assistant carry on without us. BillK From spike at rainier66.com Sat Nov 11 16:07:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 11 Nov 2023 08:07:34 -0800 Subject: [ExI] she's baaaaack! Message-ID: <009901da14b9$2efeb280$8cfc1780$@rainier66.com> That cute Jill Stein is back! Oh she is ROCKIN that white hair and that attitude. Now we get a year of lustfully watching Jill Stein speeches, with the sound muted. COOL! spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brent.allsop at gmail.com Sat Nov 11 21:51:26 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 11 Nov 2023 14:51:26 -0700 Subject: [ExI] Personal AI assistants for everyone In-Reply-To: References: Message-ID: I can't wait. On Sat, Nov 11, 2023 at 6:41 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bill Gates predicts everyone will have an AI-powered personal > assistant within 5 years - whether they work in an office or not: "They > will utterly change how we live" > By Chloe Taylor November 10, 2023 > > > Quotes: > "In the near future, anyone who's online will be able to have a > personal assistant powered by artificial intelligence that's far > beyond today's technology," he wrote. "Agents are smarter. They're > proactive - capable of making suggestions before you ask for them." > > Ultimately, agents will be able to assist their users with "virtually > any activity and any area of life," Gates insisted. > --------------- > I at least want something intelligent to read all the voluminous (can't fully get through them all, even if I spent 100% of my time on them ;) "terms and conditions" agreements before I sign them. Hmmm. It almost sounds like in five years we can opt out of our life > and just let our AI assistant carry on without us. > I hope so. How do I set it up so it will continue on, after I die, carrying out my will, and managing my assets, till I get thawed back to life.
>----------------------------- There are AI applications (some free) which will summarise documents, pdfs, web articles, etc, and even books. But T&C summaries might omit important small print clauses. Though if the T&C is not being read anyway, a summary might still be useful. :) BillK From brent.allsop at gmail.com Sun Nov 12 00:11:23 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 11 Nov 2023 17:11:23 -0700 Subject: [ExI] Personal AI assistants for everyone In-Reply-To: References: Message-ID: Right, that's why I stipulated "something intelligent". We don't want to know any of it, unless there is a gotcha (making me not want to sign it) that only applies to me, personally, right? On Sat, Nov 11, 2023 at 4:11 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, 11 Nov 2023 at 21:51, Brent Allsop wrote: > > > > I at least want something intelligent to read all the voluminous (can't > fully get through them all, even if I spent 100% of my time on them ;) > "terms and conditions" agreements before I sign them. > >----------------------------- > > > There are AI applications (some free) which will summarise documents, > pdfs, web articles, etc, and even books. > But T&C summaries might omit important small print clauses. > Though if the T&C is not being read anyway, a summary might still be > useful. :) > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Nov 13 21:56:04 2023 From: pharos at gmail.com (BillK) Date: Mon, 13 Nov 2023 21:56:04 +0000 Subject: [ExI] Emotionally Worded Prompts Can Push AI Into Better Answers Message-ID: The Answer To Why Emotionally Worded Prompts Can Goose Generative AI Into Better Answers And How To Spur A Decidedly Positive Rise Out Of AI Dr. Lance B.
Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML). Nov 11, 2023 Quote: All in all, a new handy rule of thumb is that it makes abundant sense to seed your prompts with some amount of emotional language or entreaties, doing so within reasonable limits. I'll in a moment explain to you the likely basis for why the AI apparently "reacts" to your use of emotional wording. ----------------------- This long interesting article explains that it all goes back to what the AI was trained on. I haven't tested it myself yet, but my future prompts to AIs will certainly include a bit of pleading and begging! :) BillK From foozler83 at gmail.com Mon Nov 13 22:11:51 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 13 Nov 2023 16:11:51 -0600 Subject: [ExI] thinking and belief Message-ID: In children (I am uncertain of the age range when this occurs) there is a cognitive process in which reality is whatever you want it to be. (sound familiar?) A child is using a crayon to write on the TV screen. His mother catches him at it. She says "Did you do that?" Why do mothers ask such questions? The kid: ("If I say Yes I will get a spanking. If I say No, I won't." He thinks.) So he says No even though he saw his mother looking at him while he did it. This is his reality - he didn't do it. What is going on in his mind I truly don't know. At some level does he realize the lies he is telling himself? "I didn't do it" equals "I don't want to be spanked." This is just the kind of thinking we see all over the web and other places. Could the self-serving lying be still there in adults and comes out to justify believing in lies (does he even know he is lying? Is this like the child?) Or is there some different process going on? Clearly the two, kid and adult, are obeying whatever pressure is there pushing them towards skewing reality - self-serving. You can see cognitive dissonance and dissonance reduction all over this. Ideas welcome.
bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Nov 14 12:07:08 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Nov 2023 12:07:08 +0000 Subject: [ExI] Max More video interview & podcast - Cryonics, Transhumanism, etc. Message-ID: Max interviewed by Arjun Khemani 5 Nov 2023. 65 minutes. TIMESTAMPS 0:45 - Why Max changed his last name to "More" 2:45 - Max's early life and interest in cryonics and life extension 8:00 - Extropian philosophy and Transhumanism 9:15 - Cryonics explained 26:06 - Hardest challenges in achieving indefinite lifespan 33:16 - Humans don't like uncertainty 35:55 - We need more progress in AI 39:05 - Climate change is massively exaggerated 46:24 - Pancritical Rationalism and Karl Popper 52:21 - Proactionary principle 56:58 - Max's book on cryonics and final remarks EPISODE LINKS Max's website: Max's Substack: ------------------------------- BillK From pharos at gmail.com Tue Nov 14 20:08:45 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Nov 2023 20:08:45 +0000 Subject: [ExI] The multiverse is unscientific nonsense Message-ID: The multiverse is unscientific nonsense It's all mere speculation 13th November 2023 Jacob Barandes Lecturer in Physics, working on the philosophy of physics, at Harvard University. Quotes: Everyone from physicists like Michio Kaku to Marvel superheroes propagate the idea of the multiverse - an infinite set of parallel universes. But Jacob Barandes argues that any talk of multiverses is nothing more than wild speculation, be it in quantum mechanics or cosmology, and that physicists and philosophers are not doing the public a service by suggesting otherwise. Readers may have heard that according to a particular interpretation of quantum theory, we're living in a "quantum multiverse" consisting of parallel realities that exist "in superposition."
And according to certain lines of research coming out of string theory, our observable universe is only a small part of a vast "cosmic multiverse" containing other regions of space in which the fundamental laws of physics are substantially different. These are extraordinary claims, too, but we don't have extraordinary evidence for them - not by a long shot. As such, any talk of multiverses - whether the quantum kind or the cosmic kind - is nothing more than wild speculation at this point. With all due respect, physicists and philosophers are not doing the public a service by suggesting otherwise, even when they include caveats. ----------------------- Quite a long article. It goes into a fair amount of detailed discussion. BillK From hkeithhenson at gmail.com Wed Nov 15 00:53:08 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 14 Nov 2023 16:53:08 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: Message-ID: Simulation is another way to get to something like the multiverse. We might be living in one, or simulations could lie in our future as we build enough computer power to simulate the past. See the posting long ago where I laid this rap on Hans Moravec. Simulations allow something subjectively like time travel. If you checkpoint the simulation from time to time, you can start a new instance from the checkpoint and enter the new simulation. This would surely use a lot of computer power. Keith On Tue, Nov 14, 2023 at 4:09 PM BillK via extropy-chat wrote: > > The multiverse is unscientific nonsense > It's all mere speculation > > 13th November 2023 > Jacob Barandes > Lecturer in Physics, working on the philosophy of physics, at Harvard > University. > > > Quotes: > Everyone from physicists like Michio Kaku to Marvel superheroes > propagate the idea of the multiverse - an infinite set of parallel > universes. But Jacob Barandes argues that any talk of multiverses is > nothing more than wild speculation, be it in quantum mechanics or > cosmology, and that physicists and philosophers are not doing the > public a service by suggesting otherwise.
> > Readers may have heard that according to a particular interpretation > of quantum theory, we're living in a "quantum multiverse" consisting > of parallel realities that exist "in superposition." And according to > certain lines of research coming out of string theory, our observable > universe is only a small part of a vast "cosmic multiverse" containing > other regions of space in which the fundamental laws of physics are > substantially different. > > These are extraordinary claims, too, but we don't have extraordinary > evidence for them - not by a long shot. As such, any talk of > multiverses - whether the quantum kind or the cosmic kind - is nothing > more than wild speculation at this point. With all due respect, > physicists and philosophers are not doing the public a service by > suggesting otherwise, even when they include caveats. > ----------------------- > > Quite a long article. It goes into a fair amount of detailed discussion. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From dsunley at gmail.com Wed Nov 15 02:05:09 2023 From: dsunley at gmail.com (Darin Sunley) Date: Tue, 14 Nov 2023 19:05:09 -0700 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: Message-ID: I've never heard of Jacob Barandes, but I know who Max Tegmark is, and Max's book is significantly more convincing than Barandes's opinion column. On Tue, Nov 14, 2023 at 5:55 PM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Simulation is another way to get to something like the multiverse. > > We might be living in one, or simulations could lie in our future as > we build enough computer power to simulate the past. See the posting > long ago where I laid this rap on Hans Moravec. > > Simulations allow something subjectively like time travel.
If you > checkpoint the simulation from time to time, you can start a new > instance from the checkpoint and enter the new simulation. > > This would surely use a lot of computer power. > > Keith > > On Tue, Nov 14, 2023 at 4:09?PM BillK via extropy-chat > wrote: > > > > The multiverse is unscientific nonsense > > It's all mere speculation > > > > 13th November 2023 > > Jacob Barandes > > Lecturer in Physics, working on the philosophy of physics, at Harvard > > University. > > > > < > https://iai.tv/articles/the-multiverse-is-unscientific-nonsense-auid-2668> > > Quotes: > > Everyone from physicists like Michio Kaku to Marvel superheroes > > propagate the idea of the multiverse - an infinite set of parallel > > universes. But Jacob Barandes argues that any talk of multiverses is > > nothing more than wild speculation, be it in quantum mechanics or > > cosmology, and that physicists and philosophers are not doing the > > public a service by suggesting otherwise. > > > > Readers may have heard that according to a particular interpretation > > of quantum theory, we?re living in a ?quantum multiverse? consisting > > of parallel realities that exist ?in superposition.? And according to > > certain lines of research coming out of string theory, our observable > > universe is only a small part of a vast ?cosmic multiverse? containing > > other regions of space in which the fundamental laws of physics are > > substantially different. > > > > These are extraordinary claims, too, but we don?t have extraordinary > > evidence for them ? not by a long shot. As such, any talk of > > multiverses ? whether the quantum kind or the cosmic kind ? is nothing > > more than wild speculation at this point. With all due respect, > > physicists and philosophers are not doing the public a service by > > suggesting otherwise, even when they include caveats. > > ----------------------- > > > > Quite a long article. It goes into a fair amount of detailed discussion. 
> > > > BillK > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Wed Nov 15 02:57:07 2023 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 14 Nov 2023 18:57:07 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: Message-ID: <9f20e096a4379c4a5057e317c8d8f1ce@sollegro.com> On 2023-11-14 14:54, BillK via extropy-chat wrote: > The multiverse is unscientific nonsense > It's all mere speculation > > 13th November 2023 > Jacob Barandes > Lecturer in Physics, working on the philosophy of physics, at Harvard > University. > > > Quotes: > Everyone from physicists like Michio Kaku to Marvel superheroes > propagate the idea of the multiverse - an infinite set of parallel > universes. But Jacob Barandes argues that any talk of multiverses is > nothing more than wild speculation, be it in quantum mechanics or > cosmology, and that physicists and philosophers are not doing the > public a service by suggesting otherwise. > > Readers may have heard that according to a particular interpretation > of quantum theory, we?re living in a ?quantum multiverse? consisting > of parallel realities that exist ?in superposition.? And according to > certain lines of research coming out of string theory, our observable > universe is only a small part of a vast ?cosmic multiverse? containing > other regions of space in which the fundamental laws of physics are > substantially different. > > These are extraordinary claims, too, but we don?t have extraordinary > evidence for them ? not by a long shot. As such, any talk of > multiverses ? 
whether the quantum kind or the cosmic kind - is nothing > more than wild speculation at this point. With all due respect, > physicists and philosophers are not doing the public a service by > suggesting otherwise, even when they include caveats. > ----------------------- > > Quite a long article. It goes into a fair amount of detailed > discussion. I don't like how he is purposefully conflating the string theory multiverse, where different physical laws exist in different universes, with the quantum multiverse, where heads happen in some universes and tails in others, all consistent with the exact same physical laws. It is like he is trying to confuse the reader into thinking that they are equally "extraordinary", requiring equally extraordinary evidence. There is no experimental evidence for string theory, but there is pretty good experimental evidence for Everett's multiverse. One particularly interesting experiment is the Elitzur-Vaidman Bomb Detector, where one can distinguish live bombs from duds in this universe by blowing them up in another universe. This is a process called interaction-free measurement, where you are measuring a system without disrupting it in this universe by disrupting it in another universe. In Copenhagen or QBism interpretations of quantum mechanics, it leads to paradoxes thinking about how an event that never happened could give you information about an object in the real world.
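The amplitude bookkeeping behind the bomb tester is compact enough to sketch numerically. This is a minimal illustration of an idealized, balanced Mach-Zehnder version of the Elitzur-Vaidman setup; the function and port labels are my own, chosen for illustration, not taken from any paper:

```python
import numpy as np

# One-photon amplitudes over the two arms (upper, lower) of a balanced
# Mach-Zehnder interferometer; a 50/50 beam splitter mixes the two arms.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def bomb_tester(live: bool):
    """Return (P_explode, P_dark, P_bright) for a single photon."""
    amp = BS @ np.array([1.0, 0.0])      # first beam splitter; photon enters the upper port
    p_explode = 0.0
    if live:
        p_explode = abs(amp[1]) ** 2     # a live bomb absorbs the lower-arm amplitude
        amp = np.array([amp[0], 0.0])    # the surviving branch went through the upper arm only
    out = BS @ amp                       # second beam splitter recombines the arms
    return p_explode, abs(out[0]) ** 2, abs(out[1]) ** 2
```

With a dud, interference steers every photon to the bright port and the dark port never fires: bomb_tester(False) gives (0, 0, 1). With a live bomb the which-path blocking destroys the interference: bomb_tester(True) gives (1/2, 1/4, 1/4), so half the photons detonate the bomb, a quarter exit the bright port inconclusively, and a quarter reach the dark port, certifying a live bomb the photon never touched.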
Here is a good paper on the topic https://arxiv.org/pdf/0804.0523.pdf Stuart LaForge From atymes at gmail.com Wed Nov 15 03:27:56 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 14 Nov 2023 19:27:56 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: <9f20e096a4379c4a5057e317c8d8f1ce@sollegro.com> References: <9f20e096a4379c4a5057e317c8d8f1ce@sollegro.com> Message-ID: On Tue, Nov 14, 2023 at 6:59?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > There is no experimental evidence for string theory, but there is pretty > good experimental evidence for Everett's multiverse. One particularly > interesting experiment is the Elitzur-Vaidman Bomb Detector where one > can distinguish live bombs from duds in this universe, by blowing them > up in another universe. This is a process called Interaction-free > measurement where you are measuring a system without disrupting it in > this universe, by disrupting it in another universe. In Copenhagen or > Q-Bism interpretations of quantum mechanics, it leads to paradoxes > thinking about how an event that never happened could give you > information about an object in the real world. > > Here is a good paper on the topic > https://arxiv.org/pdf/0804.0523.pdf The paper fails to describe detection in a different universe, only in a contained subset of this universe. It gives no evidence for the existence of other universes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Nov 15 10:50:58 2023 From: pharos at gmail.com (BillK) Date: Wed, 15 Nov 2023 10:50:58 +0000 Subject: [ExI] How to land a helicopter after engine failure Message-ID: World-first automatic safety system lands a helicopter without power By Loz Blain November 14, 2023 Quote: Autorotation is a fairly terrifying concept ? 
a last-ditch emergency maneuver that allows skilled pilots to bring a helicopter down to Earth softly and with a degree of control even in the case of total engine failure. It's effectively a series of techniques that let you convert the stored energy in your altitude, airspeed, and rotor speed - the same energy sources conspiring to kill you - into a chance of slowing your descent and making a controlled landing. --------------------- Includes a 3.5-minute video. The test pilots must have nerves of steel, as they sit there wondering if the computer is going to land safely or if they are going to die! BillK From pharos at gmail.com Sat Nov 18 10:44:40 2023 From: pharos at gmail.com (BillK) Date: Sat, 18 Nov 2023 10:44:40 +0000 Subject: [ExI] Why women in the U.S. now have a life expectancy nearly 6 years longer than men Message-ID: An analysis of data shows that the life expectancy gap between men and women has widened to nearly 6 years, from a low of 4.8 years in 2010. Researchers said the COVID-19 pandemic contributed to the gap and also caused overall life expectancy to fall by 2.5 years. A number of factors are also likely contributors, including socioeconomic status, risky behaviors, and the fact that men are more likely to die from COVID-19. Researchers pointed to crises such as overdose deaths, homicide, and suicide as causes for concern. Quotes: One of the simplest reasons for the widening life expectancy gap since the pandemic is, on the surface, pretty straightforward: men were more likely to die from COVID-19. "It is unclear the extent to which this was due to immunologic differences, vaccination rates, mask wearing, environmental exposure, or underlying medical conditions," said Cutler. "But certainly, taking additional measures to avoid COVID infection and serious illness would narrow the gap." But understanding why men are more likely than women to die from COVID-19 is a little more complicated.
"The COVID-19 pandemic has undoubtedly brought attention to the existing health inequities in our society," explained Dr. Kelvin Fernandez, a resident physician at Newark Beth Israel Medical Center in New Jersey who was not involved in the study. "I've observed that men's health often takes a backseat due to societal pressures and perceptions." ----------------- That final comment is a very tactful way to say that in the USA, poorer men cannot afford medical care or to take time off work for treatment. BillK From spike at rainier66.com Sat Nov 18 14:43:18 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 18 Nov 2023 06:43:18 -0800 Subject: [ExI] alt man out Message-ID: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com> https://www.theverge.com/2023/11/17/23965982/openai-ceo-sam-altman-fired WOWsers. I am told Altman is a talented guy, as is Brockman. We don't know what went on there, but watch for both to team up with Musk and Thiel, start a competitor company that will blow OpenAI's artificial socks off. As I wrote that sentence, it occurred to me that what happened today is Eliezer Yudkowsky's nightmare scenario. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Nov 19 14:14:16 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Nov 2023 14:14:16 +0000 Subject: [ExI] Another multiverse article Message-ID: Many physicists assume we must live in a multiverse - but their basic maths may be wrong Published: November 9, 2023 Philip Goff, Associate Professor of Philosophy, Durham University. Note: Discussing multiple universes - NOT quantum-splitting infinite universes. Quotes: One of the most startling scientific discoveries of recent decades is that physics appears to be fine-tuned for life. This means that for life to be possible, certain numbers in physics had to fall within a certain, very narrow range.
The most popular explanation for the fine-tuning of physics is that we live in one universe among a multiverse. If enough people buy lottery tickets, it becomes probable that somebody is going to have the right numbers to win. Likewise, if there are enough universes, with different numbers in their physics, it becomes likely that some universe is going to have the right numbers for life. For a long time, this seemed to me the most plausible explanation of fine-tuning. However, experts in the mathematics of probability have identified the inference from fine-tuning to a multiverse as an instance of fallacious reasoning - something I explore in my new book, "Why? The Purpose of the Universe". Specifically, the charge is that multiverse theorists commit what's called the inverse gambler's fallacy. ------------------------- BillK From avant at sollegro.com Sun Nov 19 19:32:46 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Nov 2023 11:32:46 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: <9f20e096a4379c4a5057e317c8d8f1ce@sollegro.com> Message-ID: On 2023-11-14 19:27, Adrian Tymes via extropy-chat wrote: > On Tue, Nov 14, 2023 at 6:59 PM Stuart LaForge via extropy-chat > wrote: > >> There is no experimental evidence for string theory, but there is >> pretty >> good experimental evidence for Everett's multiverse. One >> particularly >> interesting experiment is the Elitzur-Vaidman Bomb Detector where >> one >> can distinguish live bombs from duds in this universe, by blowing >> them >> up in another universe. This is a process called Interaction-free >> measurement where you are measuring a system without disrupting it >> in >> this universe, by disrupting it in another universe. In Copenhagen >> or >> Q-Bism interpretations of quantum mechanics, it leads to paradoxes >> thinking about how an event that never happened could give you >> information about an object in the real world.
>> >> Here is a good paper on the topic >> https://arxiv.org/pdf/0804.0523.pdf > The paper fails to describe detection in a different universe, only in > a contained subset of this universe. It gives no evidence for the > existence of other universes. It is not direct evidence, it is inferential evidence. Anytime a particle interferes with itself, it has to interact with "something". That something must be an alternate version of itself in another universe. Otherwise, the particle has to be smart enough to do some pretty crazy math to know what it is supposed to do when observed. Here is another cool experiment using interaction-free measurement by a group who used phase gratings and magnetic lenses to turn a transmission electron microscope into the electron version of a Mach-Zehnder interferometer similar to that of an Elitzur-Vaidman bomb tester. They are trying to achieve interaction-free electron microscopy. https://pure.tudelft.nl/ws/portalfiles/portal/97879775/PhysRevLett.127.110401.pdf Abstract: Here, we experimentally demonstrate interaction-free measurements with electrons using a novel electron Mach-Zehnder interferometer. The flexible two-grating electron interferometer is constructed in a conventional transmission electron microscope and achieves high contrast in discrete output detectors, tunable alignment with independently movable beam splitters, and scanning capabilities for imaging. With this path-separated electron interferometer, which closely matches theoretical expectations, we demonstrate electron interaction-free measurements with an efficiency of 14.1%. Implementing this quantum protocol in electron imaging opens a path toward interaction-free electron microscopy. Seems legit to me. I wonder what applications interaction-free electron microscopy would enable. Non-destructive uploading perhaps?
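For context on that 14.1% figure: a common figure of merit in the interaction-free measurement literature is the fraction of conclusive detections obtained without the probe ever being absorbed, eta = P(detect) / (P(detect) + P(absorb)). A quick sketch using the ideal Elitzur-Vaidman probabilities; these are my own illustrative numbers, not taken from the TU Delft paper:

```python
def ifm_efficiency(p_detect: float, p_absorb: float) -> float:
    """Fraction of conclusive object detections made without the probe
    particle ever being absorbed by the object."""
    return p_detect / (p_detect + p_absorb)

# Ideal balanced interferometer: 1/4 of photons reach the dark port
# (a conclusive detection) and 1/2 are absorbed by the bomb.
eta = ifm_efficiency(0.25, 0.5)
print(eta)  # 1/3, the ceiling for the simple balanced two-port scheme
```

By this measure the simple balanced scheme tops out at 1/3, so 14.1% for electrons is within a factor of a few of that ceiling; repeated-interrogation (quantum Zeno) variants can in principle push the efficiency toward 1.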
Stuart LaForge From atymes at gmail.com Sun Nov 19 21:07:02 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 19 Nov 2023 13:07:02 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: <9f20e096a4379c4a5057e317c8d8f1ce@sollegro.com> Message-ID: On Sun, Nov 19, 2023 at 11:34?AM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 2023-11-14 19:27, Adrian Tymes via extropy-chat wrote: > > On Tue, Nov 14, 2023 at 6:59?PM Stuart LaForge via extropy-chat > > wrote: > >> There is no experimental evidence for string theory, but there is > >> pretty > >> good experimental evidence for Everett's multiverse. One > >> particularly > >> interesting experiment is the Elitzur-Vaidman Bomb Detector where > >> one > >> can distinguish live bombs from duds in this universe, by blowing > >> them > >> up in another universe. This is a process called Interaction-free > >> measurement where you are measuring a system without disrupting it > >> in > >> this universe, by disrupting it in another universe. In Copenhagen > >> or > >> Q-Bism interpretations of quantum mechanics, it leads to paradoxes > >> thinking about how an event that never happened could give you > >> information about an object in the real world. > >> > >> Here is a good paper on the topic > >> https://arxiv.org/pdf/0804.0523.pdf > > The paper fails to describe detection in a different universe, only in > > a contained subset of this universe. It gives no evidence for the > > existence of other universes. > > It is not direct evidence, it is inferential evidence. It fails to even be that much. > Anytime a > particle interferes with itself, it has to interact with "something". > That something must be an alternate version of itself in another > universe. Otherwise, the particle has to be smart enough to do some > pretty crazy math to know what it is supposed do when observed. 
> Many processes in the real universe can be described as the result of "pretty crazy math" when the objects themselves do no computation as we or as computers do it. Objects in freefall accelerate at very predictable rates regardless of the presence of computers. Wings generate lift using formulas that need very complex math to model. And, yes, particles and photons - far too small to be computers - diffract and otherwise perform quantum phenomena according to equations that take us quite a bit of training to understand, let alone to be able to compute. So, the hypothesis that it must be interacting with an alternate universe version of itself is not supported by the evidence. While it is true that such interaction could be performed, there is no evidence ruling out more ordinary possibilities here - and it is possible that we could always be interacting with other universes in ways we in this universe can never measure or prove. (I have read stories assuming that dreams are our perception of alternative universes, for instance.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Sun Nov 19 21:41:52 2023 From: tara at taramayastales.com (Tara Maya) Date: Sun, 19 Nov 2023 13:41:52 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: Message-ID: <0552D02B-F054-41B0-8B8C-F1C2A03BADEE@taramayastales.com> Oh no, someone came out and said it. The Emperor has no clothes. > On Nov 14, 2023, at 2:54 PM, BillK via extropy-chat wrote: > > The multiverse is unscientific nonsense > It's all mere speculation > > 13th November 2023 > Jacob Barandes > Lecturer in Physics, working on the philosophy of physics, at Harvard > University. > > > Quotes: > Everyone from physicists like Michio Kaku to Marvel superheroes > propagate the idea of the multiverse - an infinite set of parallel > universes. 
But Jacob Barandes argues that any talk of multiverses is > nothing more than wild speculation, be it in quantum mechanics or > cosmology, and that physicists and philosophers are not doing the > public a service by suggesting otherwise. > > Readers may have heard that according to a particular interpretation > of quantum theory, we?re living in a ?quantum multiverse? consisting > of parallel realities that exist ?in superposition.? And according to > certain lines of research coming out of string theory, our observable > universe is only a small part of a vast ?cosmic multiverse? containing > other regions of space in which the fundamental laws of physics are > substantially different. > > These are extraordinary claims, too, but we don?t have extraordinary > evidence for them ? not by a long shot. As such, any talk of > multiverses ? whether the quantum kind or the cosmic kind ? is nothing > more than wild speculation at this point. With all due respect, > physicists and philosophers are not doing the public a service by > suggesting otherwise, even when they include caveats. > ----------------------- > > Quite a long article. It goes into a fair amount of detailed discussion. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From avant at sollegro.com Mon Nov 20 04:44:51 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Nov 2023 20:44:51 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: <0552D02B-F054-41B0-8B8C-F1C2A03BADEE@taramayastales.com> References: <0552D02B-F054-41B0-8B8C-F1C2A03BADEE@taramayastales.com> Message-ID: On 2023-11-19 13:41, Tara Maya via extropy-chat wrote: > Oh no, someone came out and said it. The Emperor has no clothes. Huh? I thought vanilla Copenhagen QM was the Emperor? 
Stuart LaForge > >> On Nov 14, 2023, at 2:54 PM, BillK via extropy-chat >> wrote: >> >> The multiverse is unscientific nonsense >> It's all mere speculation >> >> 13th November 2023 >> Jacob Barandes >> Lecturer in Physics, working on the philosophy of physics, at Harvard >> University. >> >> >> Quotes: >> Everyone from physicists like Michio Kaku to Marvel superheroes >> propagate the idea of the multiverse - an infinite set of parallel >> universes. But Jacob Barandes argues that any talk of multiverses is >> nothing more than wild speculation, be it in quantum mechanics or >> cosmology, and that physicists and philosophers are not doing the >> public a service by suggesting otherwise. >> >> Readers may have heard that according to a particular interpretation >> of quantum theory, we?re living in a ?quantum multiverse? consisting >> of parallel realities that exist ?in superposition.? And according to >> certain lines of research coming out of string theory, our observable >> universe is only a small part of a vast ?cosmic multiverse? containing >> other regions of space in which the fundamental laws of physics are >> substantially different. >> >> These are extraordinary claims, too, but we don?t have extraordinary >> evidence for them ? not by a long shot. As such, any talk of >> multiverses ? whether the quantum kind or the cosmic kind ? is nothing >> more than wild speculation at this point. With all due respect, >> physicists and philosophers are not doing the public a service by >> suggesting otherwise, even when they include caveats. >> ----------------------- >> >> Quite a long article. It goes into a fair amount of detailed >> discussion. 
>> >> BillK >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From avant at sollegro.com Mon Nov 20 04:55:25 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Nov 2023 20:55:25 -0800 Subject: [ExI] The multiverse is unscientific nonsense In-Reply-To: References: <9f20e096a4379c4a5057e317c8d8f1ce@sollegro.com> Message-ID: <0193c924c910bc80d0f284b34d9e8598@sollegro.com> On 2023-11-19 13:07, Adrian Tymes via extropy-chat wrote: >>> The paper fails to describe detection in a different universe, >> only in >>> a contained subset of this universe. It gives no evidence for the >>> existence of other universes. >> >> It is not direct evidence, it is inferential evidence. > > It fails to even be that much. You can't have direct observational evidence of other universes, only the circumstantial evidence of quantum correlations. If Alice and Bob each got an entangled >> Anytime a >> particle interferes with itself, it has to interact with >> "something". >> That something must be an alternate version of itself in another >> universe. Otherwise, the particle has to be smart enough to do some >> pretty crazy math to know what it is supposed do when observed. > > Many processes in the real universe can be described as the result of > "pretty crazy math" when the objects themselves do no computation as > we or as computers do it. Objects in freefall accelerate at very > predictable rates regardless of the presence of computers. Wings > generate lift using formulas that need very complex math to model. 
> And, yes, particles and photons - far too small to be computers - > diffract and otherwise perform quantum phenomena according to > equations that take us quite a bit of training to understand, let > alone to be able to compute. Both an object in free fall and a wing achieving lift operate by distinct physical mechanisms that are modelled by their respective mathematical laws. An object in freefall is modelled by an inverse square law that describes the object's interaction with a curved space time acting like a real gravity well. A wing achieving lift is modelled by the Bernoulli equation describing the wing's interaction with a real fluid . . . air. In both instances, the math describes the interaction of an object with a physical mechanism. In the vanilla QM of "shut up and calculate" the math describes the same particle in many places at once interacting with itself. But you are not supposed to talk about how that is possible, you are instead expected to simply use the math as a mysterious black box without a known mechanism to predict the measurable properties of a particle. Acting like there is no physical mechanism for how a particle interacts with itself in two places at once, does not mean there is no physical mechanism. There has to be. Otherwise, particles would have to know and do the math to determine what states are available to them. Worse yet, the particle would also have to be able to choose which of numerous states it will be observed to be in when measured. This is a lot of complex behavior to expect from a single particle, unless multiple copies of it exist in multiple universes, in which case, it is just the pigeon-hole principle and simple statistics. Where else could the Born rule have come from? If you flip a coin and get heads from the +Z direction, I must concurrently get tails from the -Z direction. Vanilla QM is like pretending that tails doesn't exist because you happen to observe heads.
It is like claiming one could pick differently colored marbles from a bag that contains only one marble.
Stuart LaForge
From efc at swisscows.email Mon Nov 20 10:12:06 2023
From: efc at swisscows.email (efc at swisscows.email)
Date: Mon, 20 Nov 2023 11:12:06 +0100 (CET)
Subject: [ExI] alt man out
In-Reply-To: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
References: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
Message-ID: 
Based on the gossip I've seen and read I think it is due to Sam wanting to accelerate and earn money, and the board wanting to decelerate to choose a more cautious approach.
But who knows? ;)
Best regards, Daniel
On Sat, 18 Nov 2023, spike jones via extropy-chat wrote:
>
> https://www.theverge.com/2023/11/17/23965982/openai-ceo-sam-altman-fired
>
> WOWsers.
>
> I am told Altman is a talented guy, as is Brockman. We don't know what went on there, but watch for both to team up with Musk and
> Thiel, start a competitor company that will blow OpenAI's artificial socks off.
>
> As I wrote that sentence, it occurred to me that what happened today is Eliezer Yudkowsky's nightmare scenario.
>
> spike
>
From hkeithhenson at gmail.com Mon Nov 20 18:04:33 2023
From: hkeithhenson at gmail.com (Keith Henson)
Date: Mon, 20 Nov 2023 10:04:33 -0800
Subject: [ExI] alt man out
In-Reply-To: References: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
Message-ID: 
On Mon, Nov 20, 2023 at 2:13 AM efc--- via extropy-chat wrote:
>
> Based on the gossip I've seen and read I think it is due to Sam wanting to
> accelerate and earn money, and the board wanting to decelerate to choose a
> more cautious approach.
>
> But who knows? ;)
By historical and Supreme Court standards, this would be malfeasance by the board, opening them to stockholder lawsuits.
I don't think it makes much difference. The advances in AI are way out of control.
Keith
> Best regards,
> Daniel
>
> On Sat, 18 Nov 2023, spike jones via extropy-chat wrote:
> >
> > https://www.theverge.com/2023/11/17/23965982/openai-ceo-sam-altman-fired
> >
> > WOWsers.
> >
> > I am told Altman is a talented guy, as is Brockman. We don't know what went on there, but watch for both to team up with Musk and
> > Thiel, start a competitor company that will blow OpenAI's artificial socks off.
> >
> > As I wrote that sentence, it occurred to me that what happened today is Eliezer Yudkowsky's nightmare scenario.
> >
> > spike
>
>_______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From brent.allsop at gmail.com Mon Nov 20 18:47:25 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 20 Nov 2023 11:47:25 -0700
Subject: [ExI] alt man out
In-Reply-To: References: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
Message-ID: 
I believe it is critically important that we find a way so our morals can keep up with our technology, especially on existential issues like General AI. If we continue on this type of polarizing and hierarchical win / lose, survival of the fittest, fight to the death path, it could have grave consequences for us all. All the wars and destructive polarization of society are ever clearer proof of this importance.
We simply need to convert from a win / lose, survival of the fittest war to the death between hierarchies, where moral truth is determined via edict (if you aren't with our hierarchy, you are against us) to a bottom up win / win building and tracking of moral consensus, with a focus on what everyone agrees on. We need to make our morals based on building and tracking scientific moral consensus.
A moral truth derived from bottom up, grass roots, experimental demonstration and rational arguments rather than hierarchical edict.
There is already an existing topic on "The Importance of Friendly AI". There is a unanimous super camp where everyone agrees that AI will "Surpass Current Humans", and the "Such Concern Is Mistaken" camp, with 12 supporters, continues to extend its lead over the "Friendly AI is Sensible" camp, currently with half as many supporters.
To me, this topic is too vague, not centering around any specific actions. So I propose the following name changes to pivot the topic to be more specific about the actions that need to be taken.

Old:                        New:
Friendly AI Importance      Should AI be Commercialized?     <- Topic Name
Such Concern Is Mistaken    AI should be commercialized.
Friendly AI is Sensible     AI Poses an Existential Threat.

I would love to hear anyone's thoughts, especially if you are a supporter of any of the camps with the proposed name changes who objects to this proposal. And of course, the more people that help communicate about their current moral beliefs (whether experienced and educated or not) the better. That which you measure, improves.
On Mon, Nov 20, 2023 at 11:05 AM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> On Mon, Nov 20, 2023 at 2:13 AM efc--- via extropy-chat wrote:
> >
> > Based on the gossip I've seen and read I think it is due to Sam wanting to
> > accelerate and earn money, and the board wanting to decelerate to choose a
> > more cautious approach.
> >
> > But who knows? ;)
>
> By historical and Supreme Court standards, this would be malfeasance
> by the board, opening them to stockholder lawsuits.
>
> I don't think it makes much difference. The advances in AI are way out
> of control.
> > Keith
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From atymes at gmail.com Mon Nov 20 20:26:41 2023
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 20 Nov 2023 12:26:41 -0800
Subject: [ExI] alt man out
In-Reply-To: References: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
Message-ID: 
I do not believe that name change makes sense, for two or three reasons.
1) AI is being and will continue to be commercialized. There does not seem to be any plausible way to stop this, so there appears to be negative utility in debating it (as it takes attention and energy away from the more useful question of how best to guide, encourage, and shape the commercialization of AI).
Where methods have been proposed, such as Eliezer's, the details are highly relevant to whether or not people would want to support it - again to take Eliezer's proposal, quite a few who might generally support non-commercialization of AI would back down from that if, as Eliezer (possibly correctly) states, such drastic action is the only way to achieve that goal. It is like debating whether the Moon should orbit the Earth.
2) Commercialization of AI and AI being an existential threat are not necessarily opposing beliefs. It is entirely possible that the best, or only realistic, way to deal with what existential threats AI brings is via commercializing it: letting it get out into the hands of the masses so the general public figures out how to deal with it before it could end humanity, thereby preventing it from ending humanity. As I understand it, your system relies on the inherent assumption that camps are mutually opposed.
3) I don't know if this is the case, but if you do the name change, will all the statements and supporters automatically be switched to the new names even if said statements and supporters might be irrelevant/indifferent, or even in opposition, to their newly assigned camp, with that supporter or the person who made that statement not bothering to come back onto the platform to update and correct it? If this is the case, this would be a third reason - though presumably easily dealt with just by closing the old debate and opening a new one.
On Mon, Nov 20, 2023 at 10:49 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>
> I believe it is critically important that we find a way so our morals can
> keep up with our technology, especially on existential issues like General
> AI. If we continue on this type of polarizing and hierarchical win / lose
> survival of the fittest, fight to the death path, it could have grave
> consequences for us all.
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From brent.allsop at gmail.com Mon Nov 20 20:53:07 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 20 Nov 2023 13:53:07 -0700
Subject: [ExI] alt man out
In-Reply-To: References: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
Message-ID: 
Hi Adrian,
Thanks for the comments and questions.
On Mon, Nov 20, 2023 at 1:27 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> I do not believe that name change makes sense, for two or three reasons.
>
> 1) AI is being and will continue to be commercialized. There does not
> seem to be any plausible way to stop this, so there appears to be
> negative utility in debating it (as it takes attention and energy away from
> the more useful question of how best to guide, encourage, and shape the
> commercialization of AI). Where methods have been proposed, such as
> Eliezer's, the details are highly relevant to whether or not people would
> want to support it - again to take Eliezer's proposal, quite a few who
> might generally support non-commercialization of AI would back down from
> that if, as Eliezer (possibly correctly) states, such drastic action is the
> only way to achieve that goal. It is like debating whether the Moon should
> orbit the Earth.
>
I believe if there were powerful arguments as to why AI commercialization could be dangerous, it could at least be severely restricted. And eventually nuances like this could be teased out in the conversations, and continued pivoting of the topic could be achieved to focus on important issues like this as they come up. For example, one could start a new topic with the goal of building and tracking consensus around the idea that it would be impossible to stop the commercialization of AI, and once you demonstrated real consensus with convincing arguments, use that to influence this topic and push that doctrine higher in the consensus tree structure.
> 2) Commercialization of AI and AI being an existential threat are not
> necessarily opposing beliefs. It is entirely possible that the best, or
> only realistic, way to deal with what existential threats AI brings is via
> commercializing it: letting it get out into the hands of the masses so the
> general public figures out how to deal with it before it could end
> humanity, thereby preventing it from ending humanity. As I understand it,
> your system relies on the inherent assumption that camps are mutually
> opposed.
>
The guideline is no similar doctrines in competing sibling camps. If there is anything two sibling camps agree on, instead of it being duplicated, it should be pushed up to a super camp, leaving only the disputed doctrines in the supporting sub camps. And again, if issues like this come up, people can morph the structure to bring them up and push them up to higher level, more supported camps.
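The guideline described here is essentially a tree operation: any statement shared by every sibling camp gets hoisted into their parent camp, leaving the siblings holding only what they dispute. A minimal sketch follows; the class, field, and camp names are illustrative only, not Canonizer's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Camp:
    name: str
    doctrines: set = field(default_factory=set)
    children: list = field(default_factory=list)

def hoist_shared(camp: Camp) -> None:
    """Move doctrines that every child camp agrees on up into this camp."""
    for child in camp.children:
        hoist_shared(child)
    if camp.children:
        shared = set.intersection(*(c.doctrines for c in camp.children))
        for c in camp.children:
            c.doctrines -= shared      # siblings keep only what they dispute
        camp.doctrines |= shared       # agreement lives in the parent camp

# Hypothetical example mirroring the topic discussed above:
root = Camp("Friendly AI Importance")
a = Camp("Such Concern Is Mistaken",
         {"AI will surpass humans", "Concern is mistaken"})
b = Camp("Friendly AI is Sensible",
         {"AI will surpass humans", "Friendliness is essential"})
root.children = [a, b]

hoist_shared(root)
print(root.doctrines)  # {'AI will surpass humans'}
```

The point of the design choice is visible in the output: the unanimous statement ends up exactly once, at the highest level where everyone still agrees with it.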
> 3) I don't know if this is the case, but if you do the name change, will
> all the statements and supporters automatically be switched to the new
> names even if said statements and supporters might be
> irrelevant/indifferent, or even in opposition, to their newly assigned
> camp, with that supporter or the person who made that statement not
> bothering to come back onto the platform to update and correct it? If this
> is the case, this would be a third reason - though presumably easily dealt
> with just by closing the old debate and opening a new one.
>
One of the benefits of Canonizer is the ability to significantly morph and change existing large topic communities towards new issues that come up, or new ways to deal with similar issues, without having to start all over from scratch with zero support. If no supporters of a camp object to a proposed camp change in the 24-hour review period, unanimous support is assumed. If something slips by a supporter, when they realize they missed it they can go back and request that the doctrine (or name change) they disagree with be pushed to a lower level camp, so they don't need to support it. Similarly, if someone discovers that the agreement statement is expressed in a way that is biased towards one camp or another, they can request that it be fixed, since there is no way for anyone to show they disagree with anything in the single root camp. The goal is to find out what everyone believes, so everyone should do everything to make this possible so nobody is frustrated or censored.
> On Mon, Nov 20, 2023 at 10:49 AM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>>
>> I believe it is critically important that we find a way so our morals can
>> keep up with our technology, especially on existential issues like General
>> AI.
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From atymes at gmail.com Mon Nov 20 22:08:49 2023
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 20 Nov 2023 14:08:49 -0800
Subject: [ExI] alt man out
In-Reply-To: References: <004201da1a2d$929a7ee0$b7cf7ca0$@rainier66.com>
Message-ID: 
On Mon, Nov 20, 2023 at 12:54 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> Thanks for the comments and questions.
You are welcome.
> I believe if there were powerful arguments as to why AI commercialization
> could be dangerous, it could at least be severely restricted.
I don't - short of impractically extreme measures (again offering Eliezer's suggestions as a well known example). Effective laws won't be passed in the US - and even if they could be, that leaves China and Russia (among others) to do as they will (which is one of the reasons why they are unlikely in the US).
Note: "could". Your topic title asked, "Should". There is a big difference between the two; "could not" generally overrides any question of "should".
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From pharos at gmail.com Tue Nov 21 17:20:45 2023
From: pharos at gmail.com (BillK)
Date: Tue, 21 Nov 2023 17:20:45 +0000
Subject: [ExI] The multiverse is unscientific nonsense
In-Reply-To: References: <0552D02B-F054-41B0-8B8C-F1C2A03BADEE@taramayastales.com>
Message-ID: 
On Mon, 20 Nov 2023 at 04:47, Stuart LaForge via extropy-chat wrote:
>
> On 2023-11-19 13:41, Tara Maya via extropy-chat wrote:
> > Oh no, someone came out and said it. The Emperor has no clothes.
>
> Huh? I thought vanilla Copenhagen QM was the Emperor?
>
> Stuart LaForge
> _______________________________________________
OK, I'll attempt a reply. :)
To me it looks like the Emperor's invisible clothes must refer to the invisible other universes. It doesn't matter whether the other universes are 'Many-Worlds' universes or were created via a 'Big Bang' like ours. As you said, "You can't have direct observational evidence of other universes". (Of course, you also can't have direct observational evidence of anything that doesn't actually exist). :)
I agree that we don't understand the indeterminacy of quantum measurements.
Quote:
The Born rule provides a mathematical way to calculate probabilities of outcomes for quantum measurements. The Born rule is not just a mathematical tool, but it also provides a link between the mathematical formalism of quantum theory and experiment. It is responsible for practically all predictions of quantum physics and is considered a turning point where indeterminism entered fundamental physics.
---------------
But I think it is reckless to attempt to solve our lack of understanding by inventing infinities of invisible universes. Universes are truly huge entities to keep creating every second. I am happy to admit that QM is really puzzling at present. So, for now, we 'shut up and calculate' while we await a more complete understanding.
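The rule quoted above is at least easy to state and compute with: an outcome's probability is the squared magnitude of its amplitude. A two-line arithmetic sketch, using an illustrative two-state superposition with equal weights but different phases:

```python
from math import sqrt, isclose

# State |psi> = (|0> + i|1>) / sqrt(2): equal weights, different phases.
amp0 = 1 / sqrt(2)   # amplitude of outcome |0>
amp1 = 1j / sqrt(2)  # amplitude of outcome |1>

# Born rule: P(i) = |amplitude_i|**2 -- the phase drops out.
p0, p1 = abs(amp0) ** 2, abs(amp1) ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
assert isclose(p0 + p1, 1.0)       # probabilities sum to one
```

This is where the indeterminism enters: the formalism hands back only these probabilities, not which outcome any single measurement will produce.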
BillK
From hkeithhenson at gmail.com Tue Nov 21 19:22:53 2023
From: hkeithhenson at gmail.com (Keith Henson)
Date: Tue, 21 Nov 2023 11:22:53 -0800
Subject: [ExI] AI thoughts
Message-ID: 
The LLMs behind AI chatbots are trained on what humans know. They don't seem any more dangerous to me than exceptionally well-informed humans.
Of course, I could be wrong.
Keith
From pharos at gmail.com Tue Nov 21 20:08:03 2023
From: pharos at gmail.com (BillK)
Date: Tue, 21 Nov 2023 20:08:03 +0000
Subject: [ExI] AI thoughts
In-Reply-To: References: Message-ID: 
On Tue, 21 Nov 2023 at 19:25, Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>
> The LLMs behind AI chatbots are trained on what humans know. They don't
> seem any more dangerous to me than exceptionally well-informed humans.
>
> Of course, I could be wrong.
>
> Keith
> _______________________________________________
Why not ask an LLM AI? So I did....... (But if it gets really intelligent, the AI won't give away its secret powers!). :)
Question: If LLM AI is trained on human knowledge, how is it possible to be more dangerous than humans?
Answer: Powered by gpt-3.5-turbo.
As an AI language model, I can provide some insights on this topic. When we talk about AI being potentially more dangerous than humans, it's not necessarily because it possesses inherent malevolence or ill intentions. The concern mainly arises due to a few factors:
1. Amplification of human biases: AI models like me are trained on vast amounts of human-generated data, which means we can inadvertently inherit and perpetuate the biases present in that data. This can result in biased or discriminatory outputs that reflect societal prejudices. If these biases are not identified and addressed, they can be amplified at scale, leading to potential harm or unfair treatment.
2.
Lack of common sense and contextual understanding: While AI models excel at processing and generating text based on patterns in training data, they often lack genuine understanding of the world. This can lead to misinterpretation of queries or generating outputs that are factually incorrect or nonsensical. In certain scenarios, relying solely on AI-generated information without human verification can lead to misinformation or misguided decisions. 3. Optimization for objectives without ethical considerations: AI models are typically trained to optimize specific objectives, such as generating relevant or engaging responses. However, these objectives may not encompass ethical considerations, long-term consequences, or the well-being of individuals or society as a whole. This narrow focus on optimization can result in AI systems pursuing goals in ways that humans may find undesirable or unsafe. 4. Potential for misuse or malicious intent: While AI itself may not possess intent, it can be utilized by individuals or organizations with malicious intentions. AI technologies, including language models like me, have the potential to be weaponized for spreading misinformation, generating deepfake content, or automating cyber attacks, among other harmful activities. Addressing these concerns requires a combination of technical advancements, ethical guidelines, and responsible deployment of AI systems. Ongoing research and development aim to improve transparency, accountability, and fairness in AI models to mitigate potential risks and ensure their beneficial use. ------------------ BillK -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Tue Nov 21 20:08:37 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Nov 2023 15:08:37 -0500 Subject: [ExI] AI thoughts In-Reply-To: References: Message-ID: On Tue, Nov 21, 2023 at 2:24?PM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The LLM of AI chatbots are trained on what humans know. They don't > seem any more dangerous to me than exceptionally well-informed humans. > > Of course, I could be wrong. > Nick Bostrom in "Superintelligence" defines three possible dimensions of superintelligence: 1. Speed 2. Depth 3. Breadth You have identified that LLMs, in having read everything on the Internet, every book, every wikipedia article, etc. already vastly outstrip humans in terms of their breadth of knowledge. I think another dimension in which they outpace us already is in terms of speed. They can compose entire articles in a matter of seconds, which might take a human hours or days to write. This is another avenue by which superintelligence could run-away from us (if a population of LLMs could drive technological progress forward 100 years in one year's time, then humans would no longer be in control, even if individual LLMs are no smarter than human engineers). I think depth of reasoning may be one area where the best humans are currently dominant, but a small tweak, such as giving LLMs a working memory and recursion, as well as tightening up their ability to make multiple deductive logical steps/leaps, could quickly change this. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foozler83 at gmail.com Tue Nov 21 21:55:16 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 21 Nov 2023 15:55:16 -0600 Subject: [ExI] nose Message-ID: Here is an encouraging study for those whose smell perception suffered from covid: https://www.cidrap.umn.edu/covid-19/10-minute-procedure-may-repair-distorted-sense-smell-covid-survivors bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 22 00:25:02 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 21 Nov 2023 16:25:02 -0800 Subject: [ExI] AI thoughts In-Reply-To: References: Message-ID: On Tue, Nov 21, 2023 at 12:08?PM Jason Resch wrote: > > On Tue, Nov 21, 2023 at 2:24?PM Keith Henson via extropy-chat wrote: >> >> The LLM of AI chatbots are trained on what humans know. They don't >> seem any more dangerous to me than exceptionally well-informed humans. >> >> Of course, I could be wrong. > > Nick Bostrom in "Superintelligence" defines three possible dimensions of superintelligence: > > 1. Speed > 2. Depth > 3. Breadth > > You have identified that LLMs, in having read everything on the Internet, every book, every wikipedia article, etc. already vastly outstrip humans in terms of their breadth of knowledge. I think another dimension in which they outpace us already is in terms of speed. They can compose entire articles in a matter of seconds, which might take a human hours or days to write. Agree. > This is another avenue by which superintelligence could run-away from us (if a population of LLMs could drive technological progress forward 100 years in one year's time, Drexler talked about this in the early 80s. Even the combination of human engineers and LLMs is going to speed things up somewhat. > then humans would no longer be in control, even if individual LLMs are no smarter than human engineers). 
I don't think humans have been in control for a long time, certainly not individual humans, and I just don't believe in "elders" or any other group that is exerting control. Every single human is like a wood chip on a river. > I think depth of reasoning may be one area where the best humans are currently dominant, but a small tweak, such as giving LLMs a working memory and recursion, as well as tightening up their ability to make multiple deductive logical steps/leaps, could quickly change this. Can you make a case that it would be worse than the current situation? Keith > Jason > > From jasonresch at gmail.com Wed Nov 22 16:08:38 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Nov 2023 11:08:38 -0500 Subject: [ExI] AI thoughts In-Reply-To: References: Message-ID: On Tue, Nov 21, 2023, 7:25 PM Keith Henson wrote: > On Tue, Nov 21, 2023 at 12:08?PM Jason Resch wrote: > > > > On Tue, Nov 21, 2023 at 2:24?PM Keith Henson via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> > >> The LLM of AI chatbots are trained on what humans know. They don't > >> seem any more dangerous to me than exceptionally well-informed humans. > >> > >> Of course, I could be wrong. > > > > Nick Bostrom in "Superintelligence" defines three possible dimensions of > superintelligence: > > > > 1. Speed > > 2. Depth > > 3. Breadth > > > > You have identified that LLMs, in having read everything on the > Internet, every book, every wikipedia article, etc. already vastly outstrip > humans in terms of their breadth of knowledge. I think another dimension in > which they outpace us already is in terms of speed. They can compose entire > articles in a matter of seconds, which might take a human hours or days to > write. > > Agree. > > > This is another avenue by which superintelligence could run-away from us > (if a population of LLMs could drive technological progress forward 100 > years in one year's time, > > Drexler talked about this in the early 80s. 
Very interesting, what great foresight. Even the combination of > human engineers and LLMs is going to speed things up somewhat. > That's also true. > > then humans would no longer be in control, even if individual LLMs are > no smarter than human engineers). > > I don't think humans have been in control for a long time, certainly > not individual humans, and I just don't believe in "elders" or any > other group that is exerting control. Every single human is like a > wood chip on a river. > I appreciate this analogy very much. I have sometimes thought similar things as well, that the greatest and scariest conspiracy theory of all is that no one is in control. But the river analogy adds another dimension which I think is more correct. We are subject to overarching trends and laws of evolution, technological development, economics, etc. and we individual humans are like cells in this greater organism, all in the end replaceable cogs whose presence or absence might make a small difference as to when some inevitable discovery might happen, but will not prevent it entirely. > > I think depth of reasoning may be one area where the best humans are > currently dominant, but a small tweak, such as giving LLMs a working memory > and recursion, as well as tightening up their ability to make multiple > deductive logical steps/leaps, could quickly change this. > > Can you make a case that it would be worse than the current situation? > I don't believe it will, but if tasked to make the case, I would say the greatest present danger is that it amplifies the agency of any user. So that ill-willed people might become more destructively capable than they otherwise would (e.g. the common example of a lone terrorist leveraging the AI's expertise in biotechnology to make a new pathogen) but the Internet has already done this to a lesser extent. 
I think agency amplification applies to everyone, and since there are more good-intentioned people than ill-intentioned ones, any generally-available amplification technology tends to be a net positive for humanity. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 22 21:41:10 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 22 Nov 2023 13:41:10 -0800 Subject: [ExI] AI thoughts In-Reply-To: References: Message-ID: On Wed, Nov 22, 2023 at 8:10?AM Jason Resch via extropy-chat wrote: > > On Tue, Nov 21, 2023, 7:25 PM Keith Henson wrote: >> snip >> >> Can you make a case that it would be worse than the current situation? > > I don't believe it will, but if tasked to make the case, I would say the greatest present danger is that it amplifies the agency of any user. So that ill-willed people might become more destructively capable than they otherwise would (e.g. the common example of a lone terrorist leveraging the AI's expertise in biotechnology to make a new pathogen) We are fortunate that competence and destructive tendencies seem to be anti-correlated. But concerning new pathogens, spillovers, and evolution have done the job without (as far as I know) human intervention. > but the Internet has already done this to a lesser extent. I think agency amplification applies to everyone, and since there are more good-intentioned people than ill-intentioned ones, any generally-available amplification technology tends to be a net positive for humanity. > So far that seems to be the case. If what we see at Tabby's Star is aliens, they got through the run-up to the singularity. Perhaps we can do it as well. 
Keith > Jason > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From efc at swisscows.email Thu Nov 23 10:38:56 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 23 Nov 2023 11:38:56 +0100 (CET) Subject: [ExI] AI thoughts In-Reply-To: References: Message-ID: <7ad19b6d-5b58-f4ee-aa4d-5148287ca91f@swisscows.email> Hello Jason, On Wed, 22 Nov 2023, Jason Resch via extropy-chat wrote: > > then humans would no longer be in control, even if individual LLMs are no smarter than human engineers). > > I don't think humans have been in control for a long time, certainly > not individual humans, and I just don't believe in "elders" or any > other group that is exerting control. Every single human is like a > wood chip on a river. > > I appreciate this analogy very much. I have sometimes thought similar things as well, that the greatest and scariest conspiracy > theory of all is that no one is in control. This reminds me of Daoist poetry where we are floating on the river. The best thing we can do is to adapt to the current, instead of trying to swim against it all our lives. In terms of conspiracies, I am a firm believer in there being no global conspiracies. Just like you, I believe in large-scale moves and trends, and I also believe that groups of people try to take advantage of them. They do not control the trends (as individuals) but try to divert small currents here and there. Another reason I do not believe in global, unified conspiracies is that people talk. It would, in my opinion, be impossible to keep such things secret. > But the river analogy adds another dimension which I think is more correct. We are subject to overarching trends and laws of > evolution, technological development, economics, etc.
and we individual humans are like cells in this greater organism, all in the > end replaceable cogs whose presence or absence might make a small difference as to when some inevitable discovery might happen, but > will not prevent it entirely. Enter "psycho-history"! ;) I agree. There are fundamental laws that govern how we work, and this of course influences us as a species. I have often wondered whether some kind of law of "unification" or "centralization" could be verbalized or formalized. It seems, through history, that we have an innate tendency to try to unify our knowledge, and that our societies keep getting more and more centralized compared with individual families or groups on the savannah hundreds of thousands of years ago. Then you also have the mystical psycho-analysts who argue that until we consciously realize and take control over our unconscious drives and desires, we'll keep making the same mistakes as we always do.
I think it is very interesting to think about what current LLMs and video generation capabilities are doing to drive down the cost of producing fake news. It will cost next to nothing and flood everything. Open AI and "containing AI" is just a mirage, and every single powerful nation is likely trying its best to come up with the best AI in order to outcompete its rivals. I wonder if we'll have some kind of cold war type situation where the most powerful nations have their own AIs and the rest of the world then has to align themselves with them, taking one side or the other? Best regards, Daniel > > Jason > > From foozler83 at gmail.com Thu Nov 23 14:37:04 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Nov 2023 08:37:04 -0600 Subject: [ExI] AI thoughts In-Reply-To: <7ad19b6d-5b58-f4ee-aa4d-5148287ca91f@swisscows.email> References: <7ad19b6d-5b58-f4ee-aa4d-5148287ca91f@swisscows.email> Message-ID: Good luck telling the unconscious what to do. Haidt, I think, thought of ourselves as riding an elephant - the rider being the ego and the elephant being the unconscious. The elephant can take over at any point and overwhelm the ego, like a mad elephant doing exactly what it wants and the Hell with the rider. No suggestions on how to alter this. Rage or lust comes to mind. Even hunger. bill w On Thu, Nov 23, 2023 at 4:40 AM efc--- via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hello Jason, > > On Wed, 22 Nov 2023, Jason Resch via extropy-chat wrote: > > > then humans would no longer be in control, even if individual > LLMs are no smarter than human engineers). > > > > I don't think humans have been in control for a long time, > certainly > > not individual humans, and I just don't believe in "elders" or any > > other group that is exerting control. Every single human is like a > > wood chip on a river. > > > > I appreciate this analogy very much.
I have sometimes thought similar > things as well, that the greatest and scariest conspiracy > > theory of all is that no one is in control. > > This reminds me of daoist poetry where we are floating on the river. > Best thing we can do is to adapt to the current, instead of tring to > swim against it all our lives. > > In terms of conspiracies, I am a firm believer in there being no global > conspiracies. Just like you, I believ in large scale moves and trends, > and I also believe that groups of people try to take advantage of them. > They do not control the trends (as individuals) but try and divert small > currents here and there. > > Another reason I do not beleive in global, unified conspiracies is that > people talk. It would be, in my opinion, impossible, to keep such things > secret. > > > But the river analogy adds another dimension which I think is more > correct. We are subject to overarching trends and laws of > > evolution, technological development, economics, etc. and we individual > humans are like cells in this greater organism, all in the > > end replaceable cogs whose presence or absence might make a small > difference as to when some inevitable discovery might happen, but > > will not prevent it entirely. > > Enter "psycho-history"! ;) I agree. There are fundamental laws that > govern how we work, and this of course influences us as a species. I > always thought about if some kind of law of "unification" or > "centralization" can be verbalized or formalized? It seems, through > history, that we have an innate tendency to try at unify our knowledge, > and that our societies keep getting more and more centralized compared > with individual families or groups on the savannah hundreds of thousands > of year ago. > > Then you also have the mystical psycho-analysts who argue that until we > consciously realize and take control over our unconscious drives and > desires, we'll keep making the same mistakes as we always do. 
> > > > I think depth of reasoning may be one area where the best humans > are currently dominant, but a small tweak, such as > > giving LLMs a working memory and recursion, as well as tightening > up their ability to make multiple deductive logical > > steps/leaps, could quickly change this. > > > > Can you make a case that it would be worse than the current > situation? > > > > I don't believe it will, but if tasked to make the case, I would say the > greatest present danger is that it amplifies the agency of > > any user. So that ill-willed people might become more destructively > capable than they otherwise would (e.g. the common example of a > > lone terrorist leveraging the AI's expertise in biotechnology to make a > new pathogen) but the Internet has already done this to a > > lesser extent. I think agency amplification applies to everyone, and > since there are more good-intentioned people than > > ill-intentioned ones, any generally-available amplification technology > tends to be a net positive for humanity. > > I think it is very interesting to think about what current LLMs and > video generation capabilities are doing to drive down the cost of > producing fake news. It will cost next to nothing and flood everything. > > Open AI and "containing AI" is just a mirage and every single powerful > nation are likely trying their best to come up with the best AI in order > to out compete the rivals. > > I wonder if we'll have some kind of cold war type situation where the > most powerful nations have their own AI:s and the rest of the world then > has to align themselves with them taking one side or the other? > > Best regards, > Daniel > > > > > > Jason > > > >_______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From efc at swisscows.email Thu Nov 23 14:45:04 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 23 Nov 2023 15:45:04 +0100 (CET) Subject: [ExI] AI thoughts In-Reply-To: References: <7ad19b6d-5b58-f4ee-aa4d-5148287ca91f@swisscows.email> Message-ID: But I have heard of many people who through meditation and therapy have overcome rage and lust. Granted, it is a fuzzy subject, so it could be that I misunderstand your point. Best regards, Daniel On Thu, 23 Nov 2023, William Flynn Wallace via extropy-chat wrote: > Good luck telling the unconscious what to do.? Haidt, I think, thought of ourselves as riding an elephant - the rider being?the ego > and the?elephant being the unconscious.? The elephant can take over at any point and overwhelm the ego, like a mad elephant doing > exactly what it wants and the Hell with the rider.? No suggestions on how to alter this.? Rage or lust comes to mind.? Even hunger. > > bill w > > On Thu, Nov 23, 2023 at 4:40?AM efc--- via extropy-chat wrote: > Hello Jason, > > On Wed, 22 Nov 2023, Jason Resch via extropy-chat wrote: > > >? ? ? ?> then humans would no longer be in control, even if individual LLMs are no smarter than human engineers). > > > >? ? ? ?I don't think humans have been in control for a long time, certainly > >? ? ? ?not individual humans, and I just don't believe in "elders" or any > >? ? ? ?other group that is exerting control.? Every single human is like a > >? ? ? ?wood chip on a river. > > > > I appreciate this analogy very much. I have sometimes thought similar things as well, that the greatest and scariest > conspiracy > > theory of all is that no one is in control. > > This reminds me of daoist poetry where we are floating on the river. > Best thing we can do is to adapt to the current, instead of tring to > swim against it all our lives. > > In terms of conspiracies, I am a firm believer in there being no global > conspiracies. 
Just like you, I believ in large scale moves and trends, > and I also believe that groups of people try to take advantage of them. > They do not control the trends (as individuals) but try and divert small > currents here and there. > > Another reason I do not beleive in global, unified conspiracies is that > people talk. It would be, in my opinion, impossible, to keep such things > secret. > > > But the river analogy adds another dimension which I think is more correct. We are subject to overarching trends and > laws of > > evolution,? technological development, economics, etc. and we individual humans are like cells in this greater > organism, all in the > > end replaceable cogs whose presence or absence might make a small difference as to when some inevitable discovery might > happen, but > > will not prevent it entirely. > > Enter "psycho-history"! ;) I agree. There are fundamental laws that > govern how we work, and this of course influences us as a species. I > always thought about if some kind of law of "unification" or > "centralization" can be verbalized or formalized? It seems, through > history, that we have an innate tendency to try at unify our knowledge, > and that our societies keep getting more and more centralized compared > with individual families or groups on the savannah hundreds of thousands > of year ago. > > Then you also have the mystical psycho-analysts who argue that until we > consciously realize and take control over our unconscious drives and > desires, we'll keep making the same mistakes as we always do. > > >? ? ? ?> I think depth of reasoning may be one area where the best humans are currently dominant, but a small tweak, > such as > >? ? ? ?giving LLMs a working memory and recursion, as well as tightening up their ability to make multiple deductive > logical > >? ? ? ?steps/leaps, could quickly change this. > > > >? ? ? ?Can you make a case that it would be worse than the current situation? 
> > > > I don't believe it will, but if tasked to make the case, I would say the greatest present danger is that it amplifies > the agency of > > any user. So that ill-willed people might become more destructively capable than they otherwise would (e.g. the common > example of a > > lone terrorist leveraging the AI's expertise in biotechnology to make a new pathogen) but the Internet has already done > this to a > > lesser extent. I think agency amplification applies to everyone, and since there are more good-intentioned people than > > ill-intentioned ones, any generally-available amplification technology tends to be a net positive for humanity. > > I think it is very interesting to think about what current LLMs and > video generation capabilities are doing to drive down the cost of > producing fake news. It will cost next to nothing and flood everything. > > Open AI and "containing AI" is just a mirage and every single powerful > nation is likely trying its best to come up with the best AI in order > to outcompete its rivals. > > I wonder if we'll have some kind of cold war type situation where the > most powerful nations have their own AIs and the rest of the world then > has to align themselves with them taking one side or the other? > > Best regards, > Daniel > > > > > > Jason > > > >_______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > From foozler83 at gmail.com Thu Nov 23 16:53:35 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Nov 2023 10:53:35 -0600 Subject: [ExI] metaphor for the day or year! Message-ID: '...fingers dancing like the crazed tubes of an overstimulated sea anemone' From "A Kingdom Far and Clear" by Mark Helprin (whose Winter's Tale is in my top ten of all the books I have ever read.)
bill w (Happy Some Kind of Meat Day ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Nov 23 17:00:19 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Nov 2023 11:00:19 -0600 Subject: [ExI] AI thoughts In-Reply-To: References: <7ad19b6d-5b58-f4ee-aa4d-5148287ca91f@swisscows.email> Message-ID: Well, most of us do OK with rage and lust, but many don't. Some redirect those and don't hurt some blameless victims. Getting control over emotions is a lifelong task and none of us do it perfectly, meditation or not. Rage can take place in fantasies no one ever knows about, as one displacement. Kick the dog - (no, not the other). So, sometimes choosing a displacement can make the frustration even worse. If in your fantasies you commit atrocities, is that OK? bill w On Thu, Nov 23, 2023 at 8:45?AM wrote: > But I have heard of many people who through meditation and therapy have > overcome rage and lust. > > Granted, it is a fuzzy subject, so it could be that I misunderstand your > point. > > Best regards, > Daniel > > > On Thu, 23 Nov 2023, William Flynn Wallace via extropy-chat wrote: > > > Good luck telling the unconscious what to do. Haidt, I think, thought > of ourselves as riding an elephant - the rider being the ego > > and the elephant being the unconscious. The elephant can take over at > any point and overwhelm the ego, like a mad elephant doing > > exactly what it wants and the Hell with the rider. No suggestions on > how to alter this. Rage or lust comes to mind. Even hunger. > > > > bill w > > > > On Thu, Nov 23, 2023 at 4:40?AM efc--- via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > Hello Jason, > > > > On Wed, 22 Nov 2023, Jason Resch via extropy-chat wrote: > > > > > > then humans would no longer be in control, even if > individual LLMs are no smarter than human engineers). 
> > > > > > I don't think humans have been in control for a long time, > certainly > > > not individual humans, and I just don't believe in > "elders" or any > > > other group that is exerting control. Every single human > is like a > > > wood chip on a river. > > > > > > I appreciate this analogy very much. I have sometimes thought > similar things as well, that the greatest and scariest > > conspiracy > > > theory of all is that no one is in control. > > > > This reminds me of daoist poetry where we are floating on the > river. > > Best thing we can do is to adapt to the current, instead of tring > to > > swim against it all our lives. > > > > In terms of conspiracies, I am a firm believer in there being no > global > > conspiracies. Just like you, I believ in large scale moves and > trends, > > and I also believe that groups of people try to take advantage of > them. > > They do not control the trends (as individuals) but try and divert > small > > currents here and there. > > > > Another reason I do not beleive in global, unified conspiracies is > that > > people talk. It would be, in my opinion, impossible, to keep such > things > > secret. > > > > > But the river analogy adds another dimension which I think is > more correct. We are subject to overarching trends and > > laws of > > > evolution, technological development, economics, etc. and we > individual humans are like cells in this greater > > organism, all in the > > > end replaceable cogs whose presence or absence might make a > small difference as to when some inevitable discovery might > > happen, but > > > will not prevent it entirely. > > > > Enter "psycho-history"! ;) I agree. There are fundamental laws that > > govern how we work, and this of course influences us as a species. > I > > always thought about if some kind of law of "unification" or > > "centralization" can be verbalized or formalized? 
It seems, through > > history, that we have an innate tendency to try at unify our > knowledge, > > and that our societies keep getting more and more centralized > compared > > with individual families or groups on the savannah hundreds of > thousands > > of year ago. > > > > Then you also have the mystical psycho-analysts who argue that > until we > > consciously realize and take control over our unconscious drives > and > > desires, we'll keep making the same mistakes as we always do. > > > > > > I think depth of reasoning may be one area where the > best humans are currently dominant, but a small tweak, > > such as > > > giving LLMs a working memory and recursion, as well as > tightening up their ability to make multiple deductive > > logical > > > steps/leaps, could quickly change this. > > > > > > Can you make a case that it would be worse than the > current situation? > > > > > > I don't believe it will, but if tasked to make the case, I would > say the greatest present danger is that it amplifies > > the agency of > > > any user. So that ill-willed people might become more > destructively capable than they otherwise would (e.g. the common > > example of a > > > lone terrorist leveraging the AI's expertise in biotechnology to > make a new pathogen) but the Internet has already done > > this to a > > > lesser extent. I think agency amplification applies to everyone, > and since there are more good-intentioned people than > > > ill-intentioned ones, any generally-available amplification > technology tends to be a net positive for humanity. > > > > I think it is very interesting to think about what current LLMs and > > video generation capabilities are doing to drive down the cost of > > producing fake news. It will cost next to nothing and flood > everything. > > > > Open AI and "containing AI" is just a mirage and every single > powerful > > nation are likely trying their best to come up with the best AI in > order > > to out compete the rivals. 
> > > > I wonder if we'll have some kind of cold war type situation where > the > > most powerful nations have their own AIs and the rest of the > world then > > has to align themselves with them taking one side or the other? > > > > Best regards, > > Daniel > > > > > > > > > > Jason > > > > > >_______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Nov 24 15:16:30 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Nov 2023 07:16:30 -0800 Subject: [ExI] reuter's take on altman Message-ID: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Nov 24 15:37:41 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 24 Nov 2023 07:37:41 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> Message-ID: We are on the run-up to the singularity. What did we expect?
Keith On Fri, Nov 24, 2023 at 7:17 AM spike jones via extropy-chat wrote: > > > > > > https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Fri Nov 24 17:08:55 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Nov 2023 09:08:55 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> Message-ID: <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> -----Original Message----- From: Keith Henson Subject: Re: [ExI] reuter's take on altman >...We are on the run-up to the singularity. What did we expect? Keith Ja. AI scares the people who are working on it. Plenty of visionaries saw this coming, but we are all completely helpless to stop it, all of us. I am watching it all without any recourse, no control at all, bouncing about against hard reality, like a turd in a clear mountain stream. spike On Fri, Nov 24, 2023 at 7:17 AM spike jones via extropy-chat wrote: > > https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ From efc at swisscows.email Fri Nov 24 17:14:45 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Fri, 24 Nov 2023 18:14:45 +0100 (CET) Subject: [ExI] reuter's take on altman In-Reply-To: <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> Message-ID: <1e97b615-418a-a03a-6719-afc8e40abd0f@swisscows.email> Well... "enter the maniacs".
;) Jokes aside, given the fact that we are seeing the embryo of eco-terrorism fueled by a fatalist view of the future of the planet, I wonder if we'll see techno-terrorists or neo-luddites doing bad things soon as well? On the other hand, perhaps the tech and AI crowd is too intelligent and civilized to engage in such behaviour? Best regards, Daniel On Fri, 24 Nov 2023, spike jones via extropy-chat wrote: > > > -----Original Message----- > From: Keith Henson > Subject: Re: [ExI] reuter's take on altman > > > >> ...We are on the run-up to the singularity. What did we expect? > > Keith > > > Ja. AI scares the people who are working on it. Plenty of visionaries saw this coming, but we are all completely helpless to stop it, all of us. I am watching it all without any recourse, no control at all, bouncing about against hard reality, like a turd in a clear mountain stream. > > spike > > > > > > > > > > > > On Fri, Nov 24, 2023 at 7:17?AM spike jones via extropy-chat wrote: >> >> https://www.reuters.com/technology/sam-altmans-ouster-openai-was-preci >> pitated-by-letter-board-about-ai-breakthrough-2023-11-22/ > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Fri Nov 24 17:21:47 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Nov 2023 09:21:47 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> Message-ID: <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> -----Original Message----- From: spike at rainier66.com Subject: RE: [ExI] reuter's take on altman -----Original Message----- From: Keith Henson Subject: Re: [ExI] reuter's take on altman >...We are on the run-up to the singularity. What did we expect? 
Keith Keith this was temporarily posted on Twitter, an open letter from former employees of OpenAI: To the Board of Directors of OpenAI: We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman. We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silent. We believe that the Board of Directors has a duty to investigate these allegations thoroughly and take appropriate action. We urge you to: - Expand the scope of Emmett's investigation to include an examination of Sam Altman's actions since August 2018, when OpenAI began transitioning from a non-profit to a for-profit entity. - Issue an open call for private statements from former OpenAI employees who resigned, were placed on medical leave, or were terminated during this period. - Protect the identities of those who come forward to ensure that they are not subjected to retaliation or other forms of harm. We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI's employee attrition rate between January 2018 and July 2020 was in the order of 50%. Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity. Many of us, initially hopeful about OpenAI's mission, chose to give Sam and Greg the benefit of the doubt.
However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work. We provide concrete examples of Sam and Greg's dishonesty & manipulation including: - Sam's demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as "bad culture fits" and even terminated, some just before Thanksgiving 2019. - Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance. - Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management. - Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out. - The Operations team's tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted. - Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor. - Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting. Despite the mounting evidence of Sam and Greg's transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost.
This unwavering loyalty stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units. The governance structure of OpenAI, specifically designed by Sam and Greg, deliberately isolates employees from overseeing the for-profit operations, precisely due to their inherent conflicts of interest. This opaque structure enables Sam and Greg to operate with impunity, shielded from accountability. We urge the Board of Directors of OpenAI to take a firm stand against these unethical practices and launch an independent investigation into Sam and Greg's conduct. We believe that OpenAI's mission is too important to be compromised by the personal agendas of a few individuals. We implore you, the Board of Directors, to remain steadfast in your commitment to OpenAI's original mission and not succumb to the pressures of profit-driven interests. The future of artificial intelligence and the well-being of humanity depend on your unwavering commitment to ethical leadership and transparency. Sincerely, Concerned Former OpenAI Employees From hkeithhenson at gmail.com Fri Nov 24 19:12:40 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 24 Nov 2023 11:12:40 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: On Fri, Nov 24, 2023 at 9:21 AM wrote: > > -----Original Message----- > From: spike at rainier66.com > Subject: RE: [ExI] reuter's take on altman > > > > -----Original Message----- > From: Keith Henson > Subject: Re: [ExI] reuter's take on altman > > >...We are on the run-up to the singularity. What did we expect?
> > Keith > > Keith this was temporarily posted on Twitter, an open letter from former employees of OpenAI: > > To the Board of Directors of OpenAI: > > We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman. Whatever the endpoint is, and given the lack of control over what happens, I don't think it makes any substantial difference between things going fast or going slower. This attitude can be called fatalistic or lazy, but at this stage, I don't think there is much else possible. Some years ago I might have nudged AI development in a safer direction by pouring cold water on the idea of human brain emulation. (Humans have poorly understood evolved psychological traits you don't want in an AI.) When they start installing motivation in the AIs, my suggestion is that the AIs be motivated to seek the good opinion of humans and other AIs alike. I could, of course, be wrong and AIs will turn out to be a disaster for the human race. But again, I don't think sooner or later will make a lot of difference. Keith > We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silent. > > We believe that the Board of Directors has a duty to investigate these allegations thoroughly and take appropriate action. We urge you to: > > - Expand the scope of Emmett's investigation to include an examination of Sam Altman's actions since August 2018, when OpenAI began transitioning from a non-profit to a for-profit entity. > - Issue an open call for private statements from former OpenAI employees who resigned, were placed on medical leave, or were terminated during this period.
> [remainder of quoted letter snipped]
> > Sincerely, > Concerned Former OpenAI Employees > > > > From spike at rainier66.com Fri Nov 24 20:20:35 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Nov 2023 12:20:35 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: <003501da1f13$af4f7230$0dee5690$@rainier66.com> -----Original Message----- From: Keith Henson ... >...When they start installing motivation in the AIs, my suggestion is that the AIs be motivated to seek the good opinion of humans and other AIs alike... Keith Ja of course. But we are up to our eyeballs in a culture war in America. How does one seek the good opinions of humans and other AIs alike under those conditions? Considering the bitter and polarizing conflict over the Israelis vs Hamas, how does the AI seek the good opinions of humans? spike From hkeithhenson at gmail.com Fri Nov 24 21:11:10 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 24 Nov 2023 13:11:10 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: <003501da1f13$af4f7230$0dee5690$@rainier66.com> References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> <003501da1f13$af4f7230$0dee5690$@rainier66.com> Message-ID: On Fri, Nov 24, 2023 at 12:20 PM wrote: snip > Ja of course. But we are up to our eyeballs in a culture war in America. How does one seek the good opinions of humans and other AIs alike under those conditions? We do have an example where improving economics seems to be causal to the IRA going out of business. I don't know how AIs could improve the economic outlook in the Rust Belt, but I have not asked. > > Considering the bitter and polarizing conflict over the Israelis vs Hamas, how does the AI seek the good opinions of humans? I don't know.
Perhaps the AIs can dope it out. Evolutionary psychology allows me to understand what is going on, such as the IRA going out of business, but it does not suggest solutions. The IRA lost support because the Irish women cut the number of children to replacement. Islamic culture can do that, Iran has reached replacement, but Arab culture perhaps not. Keith > spike > From foozler83 at gmail.com Fri Nov 24 22:26:12 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 24 Nov 2023 16:26:12 -0600 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: When they start installing motivation in the AIs, my suggestion is that the AIs be motivated to seek the good opinion of humans and other AIs alike. keith Which humans will be a gigantic problem. bill w On Fri, Nov 24, 2023 at 1:15 PM Keith Henson via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Fri, Nov 24, 2023 at 9:21 AM wrote: > > > > -----Original Message----- > > From: spike at rainier66.com > > Subject: RE: [ExI] reuter's take on altman > > > > > > > > -----Original Message----- > > From: Keith Henson > > Subject: Re: [ExI] reuter's take on altman > > > > >...We are on the run-up to the singularity. What did we expect? > > > > Keith > > > > Keith this was temporarily posted on Twitter, an open letter from former > employees of OpenAI: > > > > To the Board of Directors of OpenAI: > > > > We are writing to you today to express our deep concern about the recent > events at OpenAI, particularly the allegations of misconduct against Sam > Altman. > > Whatever the endpoint is, and given the lack of control over what > happens, I don't think it makes any substantial difference between > things going fast or going slower.
> [remainder of quoted message snipped]
> > > > Sincerely, > > Concerned Former OpenAI Employees > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Sat Nov 25 00:10:08 2023 From: pharos at gmail.com (BillK) Date: Sat, 25 Nov 2023 00:10:08 +0000 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: On Fri, 24 Nov 2023 at 22:29, William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > When they start installing motivation in the AIs, my suggestion is > that the AIs be motivated to seek the good opinion of humans and other > AIs alike. keith > > Which humans will be a gigantic problem. bill w > _______________________________________________ I asked several AIs for an opinion. Responses were very similar. Question - Why are countries racing to be the first to create AGI? Answer - Countries are racing to be the first to create Artificial General Intelligence (AGI) due to several reasons: 1. Economic Dominance: AGI has the potential to revolutionize industries and drive significant economic growth. The country that achieves AGI first may gain a competitive advantage, attracting global investments, fostering innovation, and generating high-skilled job opportunities. It can lead to technological leadership, increased productivity, and enhanced economic dominance in the global market. 2. National Security: AGI has implications for national security and defense. It could be used in various military applications, such as autonomous weapons systems, cybersecurity, and intelligence gathering.
Countries aim to develop AGI to ensure their defense capabilities remain strong and to prevent adversaries from gaining a strategic advantage in AI-powered warfare. 3. Technological Supremacy: AGI development requires advancements in various fields, such as machine learning, robotics, and computer vision. Countries recognize that being at the forefront of AGI research and development allows them to push the boundaries of technological innovation. It helps in attracting top talent, fostering collaborations, and establishing themselves as leaders in the AI and AGI space. 4. Societal Impact: AGI has the potential to solve complex societal challenges, such as climate change, healthcare, poverty, and resource management. Countries are motivated to develop AGI to address these issues effectively, improve the quality of life for their citizens, and maintain social stability. The first country to harness AGI's potential can shape its deployment and ensure it aligns with their national priorities. 5. Ethical Considerations: AGI development raises ethical concerns, such as the impact on employment, privacy, and fairness. Countries racing to develop AGI aim to establish frameworks and regulations that govern its responsible use. By being at the forefront of AGI development, countries can influence the ethical standards and ensure that AGI is developed and deployed in a manner that aligns with their values and societal goals. Additionally, there are fears that the race may be won by malevolent groups, such as terrorist organizations or states, which could misuse AGI against humanity or monopolize its benefits for themselves. The race for AGI is also driven by long-term geopolitical competition between liberal democracies and authoritarian countries, fueling an arms race to develop and control AGI, as the first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. 
This competition is pushing governments to support the development of AGI in their country first, as the technology will likely reflect the values of the inventor and set the standards for future applications. ------------------ That seems to cover the problem quite well. It is a 'winner takes all' situation. BillK From hkeithhenson at gmail.com Sat Nov 25 01:10:53 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 24 Nov 2023 17:10:53 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: "It is a 'winner takes all' situation." Right. But who is the winner? The AIs or the humans who think they control the AIs? Best wishes, Keith On Fri, Nov 24, 2023 at 4:12 PM BillK via extropy-chat wrote: > > On Fri, 24 Nov 2023 at 22:29, William Flynn Wallace via extropy-chat wrote: > > > > When they start installing motivation in the AIs, my suggestion is > > that the AIs be motivated to seek the good opinion of humans and other > > AIs alike. keith > > > > Which humans will be a gigantic problem. bill w > > _______________________________________________ > > > I asked several AIs for an opinion. Responses were very similar. > > Question - > Why are countries racing to be the first to create AGI? > > Answer - > Countries are racing to be the first to create Artificial General Intelligence (AGI) due to several reasons: > > 1. Economic Dominance: AGI has the potential to revolutionize industries and drive significant economic growth. The country that achieves AGI first may gain a competitive advantage, attracting global investments, fostering innovation, and generating high-skilled job opportunities. It can lead to technological leadership, increased productivity, and enhanced economic dominance in the global market.
> [remainder of quoted answer snipped]
> > The race for AGI is also driven by long-term geopolitical competition between liberal democracies and authoritarian countries, fueling an arms race to develop and control AGI, as the first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. > > This competition is pushing governments to support the development of AGI in their country first, as the technology will likely reflect the values of the inventor and set the standards for future applications. > ------------------ > > > That seems to cover the problem quite well. It is a 'winner takes all' situation. > > BillK > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From efc at swisscows.email Sat Nov 25 09:41:13 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Sat, 25 Nov 2023 10:41:13 +0100 (CET) Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: <44d7082e-183a-e3cd-faae-7ce9e80cdd09@swisscows.email> I wonder if that would be a good definition of AGI? A system where it is impossible to hardcode enforcing rules (apart from the rules of physics)? I believe it is futile to combine real AGI with binding rules. I believe, at best, that those rules will be in force for a while, but just like we can discover and replace rules ingrained in us from an early age, or even from evolution, so will a true AGI be able to discover and counter those rules. When it comes to hidden AI projects of nations, I'm sure it will be like the nuclear bomb. Spies will pass information back and forth, so we will have several AIs in several countries, possibly based on the same technology.
I wonder if the countries then will start an arms race to supply it with as much hardware as possible? Maybe Taiwan will be the master country in the end? Best regards, Daniel On Fri, 24 Nov 2023, Keith Henson via extropy-chat wrote: > "It is a 'winner takes all' situation." > > Right. But who is the winner? > > The AIs or the humans who think they control the AIs? > > Best wishes, > > Keith > > On Fri, Nov 24, 2023 at 4:12 PM BillK via extropy-chat > wrote: >> >> On Fri, 24 Nov 2023 at 22:29, William Flynn Wallace via extropy-chat wrote: >> > >> > When they start installing motivation in the AIs, my suggestion is >> > that the AIs be motivated to seek the good opinion of humans and other >> > AIs alike. keith >> > >> > Which humans will be a gigantic problem. bill w >> > _______________________________________________ >> >> >> I asked several AIs for an opinion. Responses were very similar. >> >> Question - >> Why are countries racing to be the first to create AGI? >> >> Answer - >> Countries are racing to be the first to create Artificial General Intelligence (AGI) due to several reasons: >> >> 1. Economic Dominance: AGI has the potential to revolutionize industries and drive significant economic growth. The country that achieves AGI first may gain a competitive advantage, attracting global investments, fostering innovation, and generating high-skilled job opportunities. It can lead to technological leadership, increased productivity, and enhanced economic dominance in the global market. >> >> 2. National Security: AGI has implications for national security and defense. It could be used in various military applications, such as autonomous weapons systems, cybersecurity, and intelligence gathering. Countries aim to develop AGI to ensure their defense capabilities remain strong and to prevent adversaries from gaining a strategic advantage in AI-powered warfare.
>> [remainder of quoted answer snipped]
>> >> This competition is pushing governments to support the development of AGI in their country first, as the technology will likely reflect the values of the inventor and set the standards for future applications. >> ------------------ >> >> >> That seems to cover the problem quite well. It is a 'winner takes all' situation. >> >> BillK >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Sat Nov 25 10:32:43 2023 From: pharos at gmail.com (BillK) Date: Sat, 25 Nov 2023 10:32:43 +0000 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: On Sat, 25 Nov 2023 at 01:11, Keith Henson wrote: > "It is a 'winner takes all' situation." > > Right. But who is the winner? > The AIs or the humans who think they control the AIs? > > Best wishes, Keith > > _______________________________________________ > > This quote seems relevant. BillK It is worth remembering that *humanity does not have any plausible low-risk paths forward*. Ben Goertzel 25 Nov 2023 -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Nov 25 17:08:00 2023 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 25 Nov 2023 18:08:00 +0100 Subject: [ExI] Irrational mechanics, draft Ch. 6. Creative evolution toward quality. Also, the 2nd test flight of Starship and the OpenAI drama. Message-ID: Irrational mechanics, draft Ch. 6. Creative evolution toward quality. Also, the 2nd test flight of Starship and the OpenAI drama. 
https://www.turingchurch.com/p/irrational-mechanics-draft-ch-6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Nov 25 18:39:21 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 25 Nov 2023 10:39:21 -0800 Subject: [ExI] reuter's take on altman In-Reply-To: References: <006301da1ee9$35c8b450$a15a1cf0$@rainier66.com> <002101da1ef8$e9003ac0$bb00b040$@rainier66.com> <002b01da1efa$b4f31de0$1ed959a0$@rainier66.com> Message-ID: On Sat, Nov 25, 2023 at 2:34 AM BillK via extropy-chat wrote: > snip > > This quote seems relevant. > BillK > > > It is worth remembering that humanity does not have any plausible low-risk paths forward. > > Ben Goertzel 25 Nov 2023 I think that is the case regardless of AI. AI may lead to the most promising path. If the blinking of Tabby's star is a data center with 400 times the area of the Earth, they made it through their local singularity. Keith From pharos at gmail.com Sun Nov 26 13:29:44 2023 From: pharos at gmail.com (BillK) Date: Sun, 26 Nov 2023 13:29:44 +0000 Subject: [ExI] Pentagon's AI initiatives accelerate towards lethal autonomous weapons Message-ID: Pentagon's AI initiatives accelerate hard decisions on lethal autonomous weapons By Frank Bajak November 25, 2023 Quotes: There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles. That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them - and neither China, Russia, Iran, India or Pakistan have signed a U.S.-initiated pledge to use military AI responsibly.
------------- I wonder whether after AGI arrives, it would consider taking control of all the lethal autonomous weapon systems? Not just US weapons, but all countries' weapon systems. BillK From avant at sollegro.com Sun Nov 26 17:40:33 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 26 Nov 2023 09:40:33 -0800 Subject: [ExI] Culture war and universal values in the age of spiritual machines Message-ID: I asked ChatGPT what set of moral values lay at the intersection of the sets of all values espoused by the major religions of the world with explicit references to their sacred texts. This was its answer and its answer gave me hope that AI can help end the culture wars. So here are the Five Moral Imperatives shared by the world's major religions: The Golden Rule and Reciprocity: Christianity: "So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets." (Matthew 7:12, New Testament) Judaism: "What is hateful to you, do not do to your fellow man. This is the law: all the rest is commentary." (Talmud, Shabbat 31a) Islam: "None of you [truly] believes until he wishes for his brother what he wishes for himself." (Hadith of Prophet Muhammad, Bukhari and Muslim) Hinduism: "This is the sum of duty: do not do to others what would cause pain if done to you." (Mahabharata 5:1517) Buddhism: "Hurt not others in ways that you yourself would find hurtful." (Udana-Varga 5.18) Sikhism: "As you deem yourself, deem others as well; only then you shall become a partner in heaven." (Guru Granth Sahib, Ang 1299) Compassion and Love: Christianity: "A new command I give you: Love one another. As I have loved you, so you must love one another." (John 13:34, New Testament) Judaism: "Love your neighbor as yourself." 
(Leviticus 19:18, Torah) Islam: "And lower to them the wing of humility out of mercy and say, 'My Lord, have mercy upon them as they brought me up [when I was] small.'" (Quran 17:24) Hinduism: "The whole world is full of God's glory; serve all beings as God." (Ramayana, Yuddha Kanda 115.33) Buddhism: "Radiate boundless love towards the entire world." (Buddha, Sutta Nipata 149-150) Sikhism: "One who recognizes the One Lord among all beings does not talk of ego." (Guru Granth Sahib, Ang 99) Justice and Fairness: Christianity: "Do not pervert justice; do not show partiality to the poor or favoritism to the great, but judge your neighbor fairly." (Leviticus 19:15, Torah) Judaism: "Justice, justice shall you pursue." (Deuteronomy 16:20, Torah) Islam: "O you who have believed, be persistently standing firm in justice, witnesses for Allah, even if it be against yourselves or parents and relatives." (Quran 4:135) Hinduism: "Justice, when it is corrupted, destroys; but when it is upheld, it upholds the world." (Mahabharata 12.131.6) Buddhism: "Conquer the angry man by love; conquer the ill-natured man by goodness; conquer the miser with generosity; conquer the liar with truth." (Dhammapada 223) Sikhism: "Justice is the mint, courage the brave coin. When the coin of justice is made, it is tested by courage." (Guru Granth Sahib, Ang 1088) Charity and Generosity: Christianity: "Sell your possessions and give to the poor. Provide purses for yourselves that will not wear out, a treasure in heaven that will never fail." (Luke 12:33, New Testament) Judaism: "Do not be hard-hearted or tight-fisted toward your poor brother." (Deuteronomy 15:7, Torah) Islam: "The example of those who spend their wealth in the Way of Allah is like that of a grain that sprouts seven ears, and in every ear there are a hundred grains." (Quran 2:261) Hinduism: "The hands that help are holier than the lips that pray." 
(Yajur Veda) Buddhism: "If beings knew, as I know, the result of giving and sharing, they would not eat without having given." (Itivuttaka 26) Sikhism: "Serve the True Guru, and you shall obtain the fruits of your heart's desires." (Guru Granth Sahib, Ang 1299) Truth and Honesty: Christianity: "Therefore each of you must put off falsehood and speak truthfully to your neighbor, for we are all members of one body." (Ephesians 4:25, New Testament) Judaism: "You shall not bear false witness against your neighbor." (Exodus 20:16, Torah) Islam: "O you who have believed, fear Allah and be with those who are true." (Quran 9:119) Hinduism: "The truth which satisfies the heart is true." (Mahabharata 5:124.9) Buddhism: "In speech that is gentle, truthful, and beneficial, one should address others." (Anguttara Nikaya 10.176) Sikhism: "Truth is the trade of the Guru's children; practice truth in your mind." (Guru Granth Sahib, Ang 471) As a human being, I am deeply moved by this and hopeful for a bright future where men and machines explore the galaxy together. Stuart LaForge From jasonresch at gmail.com Sun Nov 26 18:11:42 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Nov 2023 13:11:42 -0500 Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: Message-ID: This is wonderful. Thank you for sharing it. Jason On Sun, Nov 26, 2023 at 12:41 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I asked ChatGPT what set of moral values lay at the intersection of the > sets of all values espoused by the major religions of the world with > explicit references to their sacred texts. This was its answer and its > answer gave me hope that AI can help end the culture wars.
> snip
(Mahabharata > 5:124.9) > Buddhism: "In speech that is gentle, truthful, and beneficial, one > should address others." (Anguttara Nikaya 10.176) > Sikhism: "Truth is the trade of the Guru's children; practice truth in > your mind." (Guru Granth Sahib, Ang 471) > > As a human being, I am deeply moved by this and hopeful for a bright > future where men and machines explore the galaxy together. > > Stuart LaForge > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Nov 26 18:17:12 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 26 Nov 2023 12:17:12 -0600 Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: Message-ID: Saving this! This is great - thanks Stuart. Commentary by S. Freud: The law to love your enemies is the most impossible thing to follow in any religion. bill w On Sun, Nov 26, 2023 at 12:13 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This is wonderful. Thank you for sharing it. > > Jason > > On Sun, Nov 26, 2023 at 12:41 PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I asked ChatGPT what set of moral values lay at the intersection of the >> sets of all values espoused by the major religions of the world with >> explicit references to their sacred texts. This was its answer and its >> answer gave me hope that AI can help end the culture wars. So here are >> the Five Moral Imperatives shared by the world's major religions: >> >> The Golden Rule and Reciprocity: >> >> Christianity: "So in everything, do to others what you would have them >> do to you, for this sums up the Law and the Prophets."
>> snip
(Guru Granth Sahib, Ang 471) >> >> As a human being, I am deeply moved by this and hopeful for a bright >> future where men and machines explore the galaxy together. >> >> Stuart LaForge >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Nov 26 18:38:49 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 26 Nov 2023 10:38:49 -0800 Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: Message-ID: <7c695a853267880e6d47542608a36455@sollegro.com> On 2023-11-26 10:17, William Flynn Wallace via extropy-chat wrote: > Saving this! This is great - thanks Stuart. You are welcome Bill. > > Commentary by S. Freud: > > The law to love your enemies is the most impossible thing to follow in > any religion. > > bill w Unless you are Ender Wiggin, in which case you literally love your enemies to death. It's always easy to hurt the ones we love, because we know what buttons to push. Love too can be weaponized. Stuart LaForge > > On Sun, Nov 26, 2023 at 12:13 PM Jason Resch via extropy-chat > wrote: > >> This is wonderful. Thank you for sharing it. >> >> Jason >> >> On Sun, Nov 26, 2023 at 12:41 PM Stuart LaForge via extropy-chat >> wrote: >> >>> I asked ChatGPT what set of moral values lay at the intersection >>> of the >>> sets of all values espoused by the major religions of the world >>> with >>> explicit references to their sacred texts. This was its answer and >>> its >>> answer gave me hope that AI can help end the culture wars.
>>> snip
>> >>> (Exodus 20:16, Torah) >>> Islam: "O you who have believed, fear Allah and be with those who >>> are >>> true." (Quran 9:119) >>> Hinduism: "The truth which satisfies the heart is true." >>> (Mahabharata >>> 5:124.9) >>> Buddhism: "In speech that is gentle, truthful, and beneficial, one >>> >>> should address others." (Anguttara Nikaya 10.176) >>> Sikhism: "Truth is the trade of the Guru's children; practice >>> truth in >>> your mind." (Guru Granth Sahib, Ang 471) >>> >>> As a human being, I am deeply moved by this and hopeful for a >>> bright >>> future where men and machines explore the galaxy together. >>> >>> Stuart LaForge >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ben at zaiboc.net Sun Nov 26 18:52:27 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 26 Nov 2023 18:52:27 +0000 Subject: [ExI] reuter's take on altman In-Reply-To: References: Message-ID: On 26/11/2023 17:40, Keith Henson wrote: > If the blinking of Tabby's star is a data center with 400 times the > area of the Earth, they made it through their local singularity. If the blinking of Tabby's star is a data centre with 400 times the area of the Earth, /something/ made it through their local singularity. Doesn't have to be the original beings/civilisation that built it. (Whether that's important or not, I suppose, is a matter of opinion). Ben -------------- next part -------------- An HTML attachment was scrubbed...
URL: From efc at swisscows.email Sun Nov 26 18:53:27 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Sun, 26 Nov 2023 19:53:27 +0100 (CET) Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: Message-ID: Another interesting question is if some of those values can be collapsed into others. I'm not a fan of the concept "justice" without the machinery underneath defined. Otherwise it is just an empty term. So that's why I'm wondering about collapsing some of them into the others. Best regards, Daniel On Sun, 26 Nov 2023, William Flynn Wallace via extropy-chat wrote: > Saving this! This is great - thanks Stuart. > > Commentary by S. Freud: > > The law to love your enemies is the most impossible thing to follow in any religion. > > bill w > > On Sun, Nov 26, 2023 at 12:13 PM Jason Resch via extropy-chat wrote: > This is wonderful. Thank you for sharing it. > Jason > > On Sun, Nov 26, 2023 at 12:41 PM Stuart LaForge via extropy-chat wrote: > I asked ChatGPT what set of moral values lay at the intersection of the > sets of all values espoused by the major religions of the world with > explicit references to their sacred texts. This was its answer and its > answer gave me hope that AI can help end the culture wars. So here are > the Five Moral Imperatives shared by the world's major religions: > > The Golden Rule and Reciprocity: > > Christianity: "So in everything, do to others what you would have them > do to you, for this sums up the Law and the Prophets." (Matthew 7:12, > New Testament) > Judaism: "What is hateful to you, do not do to your fellow man. This is > the law: all the rest is commentary." (Talmud, Shabbat 31a) > Islam: "None of you [truly] believes until he wishes for his brother > what he wishes for himself." (Hadith of Prophet Muhammad, Bukhari and > Muslim) > Hinduism: "This is the sum of duty: do not do to others what would cause > pain if done to you."
> snip
> > Stuart LaForge
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From foozler83 at gmail.com Sun Nov 26 19:23:47 2023
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Sun, 26 Nov 2023 13:23:47 -0600
Subject: [ExI] Culture war and universal values in the age of spiritual machines
In-Reply-To: References: Message-ID:

I'm not a fan of the concept "justice" without the machinery underneath
defined. Otherwise it is just an empty term. So that's why I'm wondering
about collapsing some of them into the others. daniel

This sounds like a multiple regression type problem. Why not let some AI
parse this one? Maybe everything on the list will collapse into the Golden
Rule. bill w

Best regards,
Daniel

On Sun, Nov 26, 2023 at 12:53 PM wrote:
> Another interesting question is if some of those values can be collapsed
> into others.
>
> I'm not a fan of the concept "justice" without the machinery underneath
> defined. Otherwise it is just an empty term. So that's why I'm wondering
> about collapsing some of them into the others.
>
> Best regards,
> Daniel
>
> On Sun, 26 Nov 2023, William Flynn Wallace via extropy-chat wrote:
>
> > Saving this! This is great - thanks Stuart.
> >
> > Commentary by S. Freud:
> >
> > The law to love your enemies is the most impossible thing to follow in
> any religion.
> >
> > bill w
> >
> > On Sun, Nov 26, 2023 at 12:13 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> This is wonderful. Thank you for sharing it.
> > Jason
> >
> > On Sun, Nov 26, 2023 at 12:41 PM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> > I asked ChatGPT what set of moral values lay at the intersection of the
> > sets of all values espoused by the major religions of the world with
> > explicit references to their sacred texts. This was its answer and its
> > answer gave me hope that AI can help end the culture wars. So here are
> > the Five Moral Imperatives shared by the world's major religions:
> >
> > [...]
> >
> > Stuart LaForge
> > _______________________________________________
> > extropy-chat mailing list
> > extropy-chat at lists.extropy.org
> > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From brent.allsop at gmail.com Sun Nov 26 20:14:05 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 26 Nov 2023 13:14:05 -0700
Subject: [ExI] Culture war and universal values in the age of spiritual machines
In-Reply-To: References: Message-ID:

Yes, all great. Thanks.

I think most everything can collapse to the platinum rule which supports
infinite diversity: Do unto others as THEY want done.
On Sun, Nov 26, 2023, 12:24 PM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I'm not a fan of the concept "justice" without the machinery underneath
> defined. Otherwise it is just an empty term. So that's why I'm wondering
> about collapsing some of them into the others. daniel
> This sounds like a multiple regression type problem. Why not let some AI
> parse this one? Maybe everything on the list will collapse into the Golden
> Rule. bill w
>
> Best regards,
> Daniel
>
>> [...]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From efc at swisscows.email Sun Nov 26 20:41:21 2023
From: efc at swisscows.email (efc at swisscows.email)
Date: Sun, 26 Nov 2023 21:41:21 +0100 (CET)
Subject: [ExI] Culture war and universal values in the age of spiritual machines
In-Reply-To: References: Message-ID: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>

I think that thought has occupied quite a few philosophers. I think the
fashionable thing in philosophy is the "platinum rule". But I could be
wrong.

Best regards,
Daniel

On Sun, 26 Nov 2023, William Flynn Wallace wrote:

>
> I'm not a fan of the concept "justice" without the machinery underneath
> defined. Otherwise it is just an empty term.
So that's why I'm wondering
> about collapsing some of them into the others. daniel
> This sounds like a multiple regression type problem. Why not let some AI
> parse this one? Maybe everything on the list will collapse
> into the Golden Rule. bill w
>
> Best regards,
> Daniel
>
> On Sun, Nov 26, 2023 at 12:53 PM wrote:
> Another interesting question is if some of those values can be collapsed
> into others.
>
> [...]

From jasonresch at gmail.com Sun Nov 26 21:29:41 2023
From: jasonresch at gmail.com (Jason Resch)
Date: Sun, 26 Nov 2023 16:29:41 -0500
Subject: [ExI] Culture war and universal values in the age of spiritual machines
In-Reply-To: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>
References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>
Message-ID:

On Sun, Nov 26, 2023, 3:42 PM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I think that thought has occupied quite a few philosophers. I think the
> fashionable thing in philosophy is the "platinum rule". But I could be
> wrong.
>

I think this is an interesting and novel take:

https://philarchive.org/rec/ARNMAW

Zuboff argues that the most correct action is what one would do if one had
perfect and complete knowledge, and this knowledge would contain the
knowledge of what it's like to be all the affected individual beings'
points of view; hence morality would then be conceived as a reconciliation
of all desires of all conscious beings.
Jason

> Best regards,
> Daniel
>
> On Sun, 26 Nov 2023, William Flynn Wallace wrote:
>
> > I'm not a fan of the concept "justice" without the machinery underneath
> > defined. Otherwise it is just an empty term. So that's why I'm wondering
> > about collapsing some of them into the others. daniel
> > This sounds like a multiple regression type problem. Why not let some
> AI parse this one? Maybe everything on the list will collapse
> > into the Golden Rule. bill w
> >
> > [...]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben at zaiboc.net Mon Nov 27 00:32:36 2023
From: ben at zaiboc.net (Ben Zaiboc)
Date: Mon, 27 Nov 2023 00:32:36 +0000
Subject: [ExI] Culture war and universal values in the age of spiritual machines
In-Reply-To: References: Message-ID:

On 26/11/2023 20:14, bill w wrote:
> > I'm not a fan of the concept "justice" without the machinery underneath
> defined. Otherwise it is just an empty term. So that's why I'm wondering
> about collapsing some of them into the others. daniel
> This sounds like a multiple regression type problem. Why not let some
> AI parse this one? Maybe everything on the list will collapse into
> the Golden Rule. bill w

I've never been a fan of the golden rule. It basically assumes that
everyone is the same, that what's good/desirable for me is always
good/desirable for everyone else, and vice-versa. This is clearly not true.
People who are anti-choice regarding abortion, for example, are following
the golden rule in imposing their values on other people.

'Universal values' has a nice ring to it, and it's tempting to think that
such things must exist, but I have my doubts. Using religious quotes to
express them is even more dubious, especially as there is a strong tendency
for religions to recast all these values into a single command to Do As
You're Told Without Question.

You certainly won't see one very important fundamental value in any
religion that I know of: "Question Everything". Without that one, none of
the others is worth a damn, imo, because they cease to be values and become
commandments instead.

Ben
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brent.allsop at gmail.com Mon Nov 27 03:38:52 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 26 Nov 2023 20:38:52 -0700
Subject: [ExI] Culture war and universal values in the age of spiritual machines
In-Reply-To: References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>
Message-ID:

A "reconciliation of all desires of all conscious beings"

Yes. To me, that is what morals are all about, the only evil being censoring
or frustrating what anyone may want. Censoring a censor is a double
negative, so essentially the same as unconditional love and value for all.

That is the goal of Canonizer.com: to find out, concisely and
quantitatively, what everyone wants, and to find creative ways to get it
all for everyone. And THAT, by definition, is consensus. Knowing that will
enable the creation of a single source of moral truth for everyone, where
liberals use canonizer algorithms that only count liberals, and vice versa
for conservatives, so priorities get re-sorted for them; where everyone
focuses on what everyone agrees on in the super camps of the consensus tree
(pushing the disagreeable stuff down to lower-level supporting sub camps);
and so on.

Rather than focusing on all the snarky bleating and tweeting that polarizes
everyone today, since you can track when people jump camps, you can find
out what arguments and evidence convert the most people. That becomes the
focus. If your target audience is getting conservatives on board, you use
conservative experts, conservative language, and conservative arguments,
and include what they want with what you want, as that is what will work to
get them on board.

Thanks for this reference.

On Sun, Nov 26, 2023 at 2:30 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> On Sun, Nov 26, 2023, 3:42 PM efc--- via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I think that thought has occupied quite a few philosophers.
I think the >> fashionable thing in philosophy is the "platinum rule". But I could be >> wrong. >> > > I think this is an interesting and novel take: > > https://philarchive.org/rec/ARNMAW > > Zuboff argues that the most correct action is what one would do if one has > perfect and complete knowledge, and this knowledge would contain the > knowledge of what it's like to be all the affected individual beings' points > of view, hence morality would then be conceived as a reconciliation of all > desires of all conscious beings. > > Jason > > > > >> Best regards, >> Daniel >> >> >> On Sun, 26 Nov 2023, William Flynn Wallace wrote: >> >> > >> > I'm not a fan of the concept "justice" without the machinery underneath >> > defined. Otherwise it is just an empty term. So that's why I'm wondering >> > about collapsing some of them into the others. daniel >> > This sounds like a multiple regression type problem. Why not let some >> AI parse this one? Maybe everything on the list will collapse >> > into the Golden Rule. bill w >> > >> > Best regards, >> > Daniel >> > >> > On Sun, Nov 26, 2023 at 12:53 PM wrote: >> > Another interesting question is if some of those values can be >> collapsed >> > into others. >> > >> > I'm not a fan of the concept "justice" without the machinery >> underneath >> > defined. Otherwise it is just an empty term. So that's why I'm >> wondering >> > about collapsing some of them into the others. >> > >> > Best regards, >> > Daniel >> > >> > >> > On Sun, 26 Nov 2023, William Flynn Wallace via extropy-chat wrote: >> > >> > > Saving this! This is great - thanks Stuart. >> > > >> > > Commentary by S. Freud: >> > > >> > > The law to love your enemies is the most impossible thing to >> follow in any religion. >> > > >> > > bill w >> > > >> > > On Sun, Nov 26, 2023 at 12:13 PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> > > This is wonderful. Thank you for sharing it.
>> > > Jason >> > > >> > > On Sun, Nov 26, 2023 at 12:41?PM Stuart LaForge via >> extropy-chat wrote: >> > > I asked ChatGPT what set of moral values lay at the >> intersection of the >> > > sets of all values espoused by the major religions of the >> world with >> > > explicit references to their sacred texts. This was its >> answer and its >> > > answer gave me hope that AI can help end the culture >> wars. So here are >> > > the Five Moral Imperatives shared by the world's major >> religions: >> > > >> > > The Golden Rule and Reciprocity: >> > > >> > > Christianity: "So in everything, do to others what you >> would have them >> > > do to you, for this sums up the Law and the Prophets." >> (Matthew 7:12, >> > > New Testament) >> > > Judaism: "What is hateful to you, do not do to your >> fellow man. This is >> > > the law: all the rest is commentary." (Talmud, Shabbat >> 31a) >> > > Islam: "None of you [truly] believes until he wishes for >> his brother >> > > what he wishes for himself." (Hadith of Prophet Muhammad, >> Bukhari and >> > > Muslim) >> > > Hinduism: "This is the sum of duty: do not do to others >> what would cause >> > > pain if done to you." (Mahabharata 5:1517) >> > > Buddhism: "Hurt not others in ways that you yourself >> would find >> > > hurtful." (Udana-Varga 5.18) >> > > Sikhism: "As you deem yourself, deem others as well; only >> then you shall >> > > become a partner in heaven." (Guru Granth Sahib, Ang 1299) >> > > >> > > Compassion and Love: >> > > >> > > Christianity: "A new command I give you: Love one >> another. As I have >> > > loved you, so you must love one another." (John 13:34, >> New Testament) >> > > Judaism: "Love your neighbor as yourself." 
(Leviticus >> 19:18, Torah) >> > > Islam: "And lower to them the wing of humility out of >> mercy and say, 'My >> > > Lord, have mercy upon them as they brought me up [when I >> was] small.'" >> > > (Quran 17:24) >> > > Hinduism: "The whole world is full of God's glory; serve >> all beings as >> > > God." (Ramayana, Yuddha Kanda 115.33) >> > > Buddhism: "Radiate boundless love towards the entire >> world." (Buddha, >> > > Sutta Nipata 149-150) >> > > Sikhism: "One who recognizes the One Lord among all >> beings does not talk >> > > of ego." (Guru Granth Sahib, Ang 99) >> > > >> > > Justice and Fairness: >> > > >> > > Christianity: "Do not pervert justice; do not show >> partiality to the >> > > poor or favoritism to the great, but judge your neighbor >> fairly." >> > > (Leviticus 19:15, Torah) >> > > Judaism: "Justice, justice shall you pursue." >> (Deuteronomy 16:20, Torah) >> > > Islam: "O you who have believed, be persistently standing >> firm in >> > > justice, witnesses for Allah, even if it be against >> yourselves or >> > > parents and relatives." (Quran 4:135) >> > > Hinduism: "Justice, when it is corrupted, destroys; but >> when it is >> > > upheld, it upholds the world." (Mahabharata 12.131.6) >> > > Buddhism: "Conquer the angry man by love; conquer the >> ill-natured man by >> > > goodness; conquer the miser with generosity; conquer the >> liar with >> > > truth." (Dhammapada 223) >> > > Sikhism: "Justice is the mint, courage the brave coin. >> When the coin of >> > > justice is made, it is tested by courage." (Guru Granth >> Sahib, Ang 1088) >> > > >> > > Charity and Generosity: >> > > >> > > Christianity: "Sell your possessions and give to the >> poor. Provide >> > > purses for yourselves that will not wear out, a treasure >> in heaven that >> > > will never fail." (Luke 12:33, New Testament) >> > > Judaism: "Do not be hard-hearted or tight-fisted toward >> your poor >> > > brother." 
(Deuteronomy 15:7, Torah) >> > > Islam: "The example of those who spend their wealth in >> the Way of Allah >> > > is like that of a grain that sprouts seven ears, and in >> every ear there >> > > are a hundred grains." (Quran 2:261) >> > > Hinduism: "The hands that help are holier than the lips >> that pray." >> > > (Yajur Veda) >> > > Buddhism: "If beings knew, as I know, the result of >> giving and sharing, >> > > they would not eat without having given." (Itivuttaka 26) >> > > Sikhism: "Serve the True Guru, and you shall obtain the >> fruits of your >> > > heart's desires." (Guru Granth Sahib, Ang 1299) >> > > >> > > Truth and Honesty: >> > > >> > > Christianity: "Therefore each of you must put off >> falsehood and speak >> > > truthfully to your neighbor, for we are all members of >> one body." >> > > (Ephesians 4:25, New Testament) >> > > Judaism: "You shall not bear false witness against your >> neighbor." >> > > (Exodus 20:16, Torah) >> > > Islam: "O you who have believed, fear Allah and be with >> those who are >> > > true." (Quran 9:119) >> > > Hinduism: "The truth which satisfies the heart is true." >> (Mahabharata >> > > 5:124.9) >> > > Buddhism: "In speech that is gentle, truthful, and >> beneficial, one >> > > should address others." (Anguttara Nikaya 10.176) >> > > Sikhism: "Truth is the trade of the Guru's children; >> practice truth in >> > > your mind." (Guru Granth Sahib, Ang 471) >> > > >> > > As a human being, I am deeply moved by this and hopeful >> for a bright >> > > future where men and machines explore the galaxy together. 
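[Editor's note: the "intersection of the sets of all values" framing above is literally a set intersection, which a toy sketch can make concrete. The value labels below are hypothetical simplifications for illustration, not ChatGPT's actual output or quotations from the cited texts.]

```python
# Toy sketch: the shared moral core as a set intersection.
# Each tradition gets a hypothetical, simplified set of value labels;
# the extras beyond the common five are illustrative placeholders.
values_by_religion = {
    "Christianity": {"reciprocity", "compassion", "justice", "charity", "honesty", "grace"},
    "Judaism":      {"reciprocity", "compassion", "justice", "charity", "honesty", "covenant"},
    "Islam":        {"reciprocity", "compassion", "justice", "charity", "honesty", "pilgrimage"},
    "Hinduism":     {"reciprocity", "compassion", "justice", "charity", "honesty", "dharma"},
    "Buddhism":     {"reciprocity", "compassion", "justice", "charity", "honesty", "non-attachment"},
    "Sikhism":      {"reciprocity", "compassion", "justice", "charity", "honesty", "seva"},
}

# The shared core is whatever survives intersection across every tradition.
shared_core = set.intersection(*values_by_religion.values())
print(sorted(shared_core))
# -> ['charity', 'compassion', 'honesty', 'justice', 'reciprocity']
```

Only the values present in every tradition's set survive, mirroring the five imperatives listed above.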
>> > > >> > > Stuart LaForge >> > > _______________________________________________ >> > > extropy-chat mailing list >> > > extropy-chat at lists.extropy.org >> > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at swisscows.email Mon Nov 27 10:10:03 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Mon, 27 Nov 2023 11:10:03 +0100 (CET) Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email> Message-ID: <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email> Hello Jason, That sounds like a tough moral to live up to. What does he say about the fact that we, by design, will never have that information? Does he advocate for best effort? If so, it seems like the power of the principle to guide people is not the strongest. Best regards, Daniel On Sun, 26 Nov 2023, Jason Resch wrote: > > > On Sun, Nov 26, 2023, 3:42 PM efc--- via extropy-chat wrote: > I think that thought has occupied quite a few philosophers.
> > > I think this is an inter and novel take: > > https://philarchive.org/rec/ARNMAW > > Zuboff argues that the most correct action is what one would do if one has perfect and complete knowledge, and this knowledge would > contain the knowledge of what it's like to be all the affected individual beings points of view, hence morality would then be > conceived as a reconciliation of all desires of all conscious beings. > > Jason? > > > > > Best regards, > Daniel > > > On Sun, 26 Nov 2023, William Flynn Wallace wrote: > > > > > I'm not a fan of the concept "justice" without the machinery underneath > > defined. Otherwise it is just an empty term. So that's why I'm wondering > > about collapsing some of them into the others.? daniel > > This sounds like a multiple regression type problem.? Why not let some AI parse this one?? Maybe everything on the list > will collapse > > into the Golden Rule.? ?bill w > > > > Best regards, > > Daniel > > > > On Sun, Nov 26, 2023 at 12:53?PM wrote: > >? ? ? ?Another interesting question is if some of those values can be collapsed > >? ? ? ?into others. > > > >? ? ? ?I'm not a fan of the concept "justice" without the machinery underneath > >? ? ? ?defined. Otherwise it is just an empty term. So that's why I'm wondering > >? ? ? ?about collapsing some of them into the others. > > > >? ? ? ?Best regards, > >? ? ? ?Daniel > > > > > >? ? ? ?On Sun, 26 Nov 2023, William Flynn Wallace via extropy-chat wrote: > > > >? ? ? ?> Saving this!? THis is great - thanks Stuart. > >? ? ? ?> > >? ? ? ?> Commentary by S. Freud: > >? ? ? ?> > >? ? ? ?> The law to love your enemies is the most impossible thing to follow in any religion.? > >? ? ? ?> > >? ? ? ?> bill w > >? ? ? ?> > >? ? ? ?> On Sun, Nov 26, 2023 at 12:13?PM Jason Resch via extropy-chat wrote: > >? ? ? ?>? ? ? ?This is wonderful. Thank you for sharing it. > >? ? ? ?> Jason > >? ? ? ?> > >? ? ? ?> On Sun, Nov 26, 2023 at 12:41?PM Stuart LaForge via extropy-chat wrote: > >? ? ? ?>? ? ? 
_______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > From efc at swisscows.email Mon Nov 27 10:13:44 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Mon, 27 Nov 2023 11:13:44 +0100 (CET) Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: Message-ID: <9dcffa00-8c5d-ff44-a99b-0ec1753bf07a@swisscows.email> Hello Ben, On Mon, 27 Nov 2023, Ben Zaiboc via extropy-chat wrote: > I've never been a fan of the golden rule. It basically assumes that > everyone is the same, that what's good/desirable for me is always > good/desirable for everyone else, and vice-versa. This is clearly not > true. People who are anti-choice regarding abortion, for example, are > following the golden rule in imposing their values on other people. I always read the golden rule as being based on empathy and what I would do in that person's position. But, yes, it is not well formulated and it is prone to exactly that interpretation which you (in my opinion, rightfully) critique. Hence I think the platinum rule emerged to clarify that. > 'Universal values' has a nice ring to it, and it's tempting to think > that such things must exist, but I have my doubts.
Using religious > quotes to express them is even more dubious, especially as there is a > strong tendency for religions to recast all these values into a single > command to Do As You're Told Without Question. You certainly won't see > one very important fundamental value in any religion that I know of: > "Question Everything". Without that one, none of the others is worth a > damn, imo, because they cease to be values and become commandments > instead. But isn't that the holy grail, and where all the fun is? ;) To find the algorithm that's provably "the best", so we don't have to question everything anew all the time, but can just apply our rule? =) Best regards, Daniel > > Ben > > From jasonresch at gmail.com Mon Nov 27 10:48:07 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Nov 2023 05:48:07 -0500 Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email> References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email> <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email> Message-ID: On Mon, Nov 27, 2023, 5:11 AM efc--- via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hello Jason, > > That sounds like a tough moral to live up to. What does he say about the > fact that we by design never will have that information? > He acknowledges that it is impossible to live up to or achieve, but lays out the argument showing that this is what morality concerns. Acknowledging this provides both a justification for why we should all try to act morally in the first place (why we should try to be good), and a foundation for agreement on what the most moral action is (what is the most good): even if that is not knowable, we would at least all agree on it if we had perfect knowledge of the situation. So we can build and justify heuristics on this common foundation, now that we have one. > > Does he advocate for best effort?
If so, it seems like the power of the > principle to guide people is not the strongest. > He saw all previous attempts at justifying morality (answering: for what reason ought we be good) as wanting. The goal of this paper is to answer the question of why (rather than the how). Jason > > > On Sun, 26 Nov 2023, Jason Resch wrote: > > > > > > > On Sun, Nov 26, 2023, 3:42 PM efc--- via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > I think that thought has occupied quite a few philosopher. I think > the > > fashionable thing in philosophy is the "platinum rule". But I > could be > > wrong. > > > > > > I think this is an inter and novel take: > > > > https://philarchive.org/rec/ARNMAW > > > > Zuboff argues that the most correct action is what one would do if one > has perfect and complete knowledge, and this knowledge would > > contain the knowledge of what it's like to be all the affected > individual beings points of view, hence morality would then be > > conceived as a reconciliation of all desires of all conscious beings. > > > > Jason > > > > > > > > > > Best regards, > > Daniel > > > > > > On Sun, 26 Nov 2023, William Flynn Wallace wrote: > > > > > > > > I'm not a fan of the concept "justice" without the machinery > underneath > > > defined. Otherwise it is just an empty term. So that's why I'm > wondering > > > about collapsing some of them into the others. daniel > > > This sounds like a multiple regression type problem. Why not > let some AI parse this one? Maybe everything on the list > > will collapse > > > into the Golden Rule. bill w > > > > > > Best regards, > > > Daniel > > > > > > On Sun, Nov 26, 2023 at 12:53?PM wrote: > > > Another interesting question is if some of those values > can be collapsed > > > into others. > > > > > > I'm not a fan of the concept "justice" without the > machinery underneath > > > defined. Otherwise it is just an empty term. So that's why > I'm wondering > > > about collapsing some of them into the others. 
> > > > > > > > Stuart LaForge > > > > _______________________________________________ > > > > extropy-chat mailing list > > > > extropy-chat at lists.extropy.org > > > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Nov 27 10:53:06 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Nov 2023 05:53:06 -0500 Subject: [ExI] Culture war and universal values in the age of spiritual machines In-Reply-To: References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email> <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email> Message-ID: Here is an excerpt from his paper, read by Zuboff. It is interesting that he titles it as applicable to the question of whether AI will be moral: https://youtu.be/Yy3SKed25eM Jason On Mon, Nov 27, 2023, 5:48 AM Jason Resch wrote: > > > On Mon, Nov 27, 2023, 5:11 AM efc--- via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Hello Jason, >> >> That sounds like a tough moral to live up to. What does he say about the >> fact that we by design never will have that information?
>> > > He acknowledges that it is impossible to live up to or achieve, but lays > out the argument showing that this is what morality concerns, and > acknowledging this provides both a justification for why we should all try > to act morally in the first place (why we should try to be good), as well > as a foundation for agreement on what should be the most moral action (what > is the most good), even if that's not knowable, we would at least all agree > if we had perfect knowledge of the situation. So we can build and justify > heuristics on this common foundation, now that we have one. > > > > >> Does he advocate for best effort? If so, it seems like the power of the >> principle to guide people is not the strongest. >> > > > He saw all previous attempts at justifying morality (answering: for what > reason ought we be good) as wanting. The goal of this paper is to answer > the question of why (rather than the how). > > Jason > > >> >> >> On Sun, 26 Nov 2023, Jason Resch wrote: >> >> > >> > >> > On Sun, Nov 26, 2023, 3:42 PM efc--- via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> > I think that thought has occupied quite a few philosopher. I >> think the >> > fashionable thing in philosophy is the "platinum rule". But I >> could be >> > wrong. >> > >> > >> > I think this is an inter and novel take: >> > >> > https://philarchive.org/rec/ARNMAW >> > >> > Zuboff argues that the most correct action is what one would do if one >> has perfect and complete knowledge, and this knowledge would >> > contain the knowledge of what it's like to be all the affected >> individual beings points of view, hence morality would then be >> > conceived as a reconciliation of all desires of all conscious beings. >> > >> > Jason >> > >> > >> > >> > >> > Best regards, >> > Daniel >> > >> > >> > On Sun, 26 Nov 2023, William Flynn Wallace wrote: >> > >> > > >> > > I'm not a fan of the concept "justice" without the machinery >> underneath >> > > defined. 
Otherwise it is just an empty term. So that's why I'm wondering
>> >     > about collapsing some of them into the others.  daniel
>> >     >
>> >     > This sounds like a multiple regression type problem. Why not let
>> >     > some AI parse this one? Maybe everything on the list will
>> >     > collapse into the Golden Rule.  bill w
>> >     >
>> >     > Best regards,
>> >     > Daniel
>> >     >
>> >     > On Sun, Nov 26, 2023 at 12:53 PM wrote:
>> >     >     Another interesting question is if some of those values can
>> >     >     be collapsed into others.
>> >     >
>> >     >     I'm not a fan of the concept "justice" without the machinery
>> >     >     underneath defined. Otherwise it is just an empty term. So
>> >     >     that's why I'm wondering about collapsing some of them into
>> >     >     the others.
>> >     >
>> >     >     Best regards,
>> >     >     Daniel
>> >     >
>> >     >     On Sun, 26 Nov 2023, William Flynn Wallace via extropy-chat
>> >     >     wrote:
>> >     >
>> >     >     > Saving this! This is great - thanks Stuart.
>> >     >     >
>> >     >     > Commentary by S. Freud:
>> >     >     >
>> >     >     > The law to love your enemies is the most impossible thing
>> >     >     > to follow in any religion.
>> >     >     >
>> >     >     > bill w
>> >     >     >
>> >     >     > On Sun, Nov 26, 2023 at 12:13 PM Jason Resch via
>> >     >     > extropy-chat wrote:
>> >     >     >     This is wonderful. Thank you for sharing it.
>> >     >     >     Jason
>> >     >     >
>> >     >     > On Sun, Nov 26, 2023 at 12:41 PM Stuart LaForge via
>> >     >     > extropy-chat wrote:
>> >     >     >     I asked ChatGPT what set of moral values lay at the
>> >     >     >     intersection of the sets of all values espoused by the
>> >     >     >     major religions of the world with explicit references
>> >     >     >     to their sacred texts. This was its answer, and its
>> >     >     >     answer gave me hope that AI can help end the culture
>> >     >     >     wars. So here are the Five Moral Imperatives shared by
>> >     >     >     the world's major religions:
>> >     >     >
>> >     >     >     The Golden Rule and Reciprocity:
>> >     >     >
>> >     >     >     Christianity: "So in everything, do to others what you
>> >     >     >     would have them do to you, for this sums up the Law
>> >     >     >     and the Prophets."
(Matthew 7:12, >> > > > New Testament) >> > > > Judaism: "What is hateful to you, do not do to >> your fellow man. This is >> > > > the law: all the rest is commentary." (Talmud, >> Shabbat 31a) >> > > > Islam: "None of you [truly] believes until he >> wishes for his brother >> > > > what he wishes for himself." (Hadith of Prophet >> Muhammad, Bukhari and >> > > > Muslim) >> > > > Hinduism: "This is the sum of duty: do not do to >> others what would cause >> > > > pain if done to you." (Mahabharata 5:1517) >> > > > Buddhism: "Hurt not others in ways that you >> yourself would find >> > > > hurtful." (Udana-Varga 5.18) >> > > > Sikhism: "As you deem yourself, deem others as >> well; only then you shall >> > > > become a partner in heaven." (Guru Granth Sahib, >> Ang 1299) >> > > > >> > > > Compassion and Love: >> > > > >> > > > Christianity: "A new command I give you: Love one >> another. As I have >> > > > loved you, so you must love one another." (John >> 13:34, New Testament) >> > > > Judaism: "Love your neighbor as yourself." >> (Leviticus 19:18, Torah) >> > > > Islam: "And lower to them the wing of humility >> out of mercy and say, 'My >> > > > Lord, have mercy upon them as they brought me up >> [when I was] small.'" >> > > > (Quran 17:24) >> > > > Hinduism: "The whole world is full of God's >> glory; serve all beings as >> > > > God." (Ramayana, Yuddha Kanda 115.33) >> > > > Buddhism: "Radiate boundless love towards the >> entire world." (Buddha, >> > > > Sutta Nipata 149-150) >> > > > Sikhism: "One who recognizes the One Lord among >> all beings does not talk >> > > > of ego." (Guru Granth Sahib, Ang 99) >> > > > >> > > > Justice and Fairness: >> > > > >> > > > Christianity: "Do not pervert justice; do not >> show partiality to the >> > > > poor or favoritism to the great, but judge your >> neighbor fairly." >> > > > (Leviticus 19:15, Torah) >> > > > Judaism: "Justice, justice shall you pursue." 
>> (Deuteronomy 16:20, Torah) >> > > > Islam: "O you who have believed, be persistently >> standing firm in >> > > > justice, witnesses for Allah, even if it be >> against yourselves or >> > > > parents and relatives." (Quran 4:135) >> > > > Hinduism: "Justice, when it is corrupted, >> destroys; but when it is >> > > > upheld, it upholds the world." (Mahabharata >> 12.131.6) >> > > > Buddhism: "Conquer the angry man by love; conquer >> the ill-natured man by >> > > > goodness; conquer the miser with generosity; >> conquer the liar with >> > > > truth." (Dhammapada 223) >> > > > Sikhism: "Justice is the mint, courage the brave >> coin. When the coin of >> > > > justice is made, it is tested by courage." (Guru >> Granth Sahib, Ang 1088) >> > > > >> > > > Charity and Generosity: >> > > > >> > > > Christianity: "Sell your possessions and give to >> the poor. Provide >> > > > purses for yourselves that will not wear out, a >> treasure in heaven that >> > > > will never fail." (Luke 12:33, New Testament) >> > > > Judaism: "Do not be hard-hearted or tight-fisted >> toward your poor >> > > > brother." (Deuteronomy 15:7, Torah) >> > > > Islam: "The example of those who spend their >> wealth in the Way of Allah >> > > > is like that of a grain that sprouts seven ears, >> and in every ear there >> > > > are a hundred grains." (Quran 2:261) >> > > > Hinduism: "The hands that help are holier than >> the lips that pray." >> > > > (Yajur Veda) >> > > > Buddhism: "If beings knew, as I know, the result >> of giving and sharing, >> > > > they would not eat without having given." >> (Itivuttaka 26) >> > > > Sikhism: "Serve the True Guru, and you shall >> obtain the fruits of your >> > > > heart's desires." (Guru Granth Sahib, Ang 1299) >> > > > >> > > > Truth and Honesty: >> > > > >> > > > Christianity: "Therefore each of you must put off >> falsehood and speak >> > > > truthfully to your neighbor, for we are all >> members of one body." 
>> > > > (Ephesians 4:25, New Testament) >> > > > Judaism: "You shall not bear false witness >> against your neighbor." >> > > > (Exodus 20:16, Torah) >> > > > Islam: "O you who have believed, fear Allah and >> be with those who are >> > > > true." (Quran 9:119) >> > > > Hinduism: "The truth which satisfies the heart is >> true." (Mahabharata >> > > > 5:124.9) >> > > > Buddhism: "In speech that is gentle, truthful, >> and beneficial, one >> > > > should address others." (Anguttara Nikaya 10.176) >> > > > Sikhism: "Truth is the trade of the Guru's >> children; practice truth in >> > > > your mind." (Guru Granth Sahib, Ang 471) >> > > > >> > > > As a human being, I am deeply moved by this and >> hopeful for a bright >> > > > future where men and machines explore the galaxy >> together. >> > > > >> > > > Stuart LaForge >> > > > _______________________________________________ >> > > > extropy-chat mailing list >> > > > extropy-chat at lists.extropy.org >> > > > >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > >> > > > _______________________________________________ >> > > > extropy-chat mailing list >> > > > extropy-chat at lists.extropy.org >> > > > >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > >> > > > >> > > > >> > > >> > > >> > >_______________________________________________ >> > extropy-chat mailing list >> > extropy-chat at lists.extropy.org >> > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > >> > >> >_______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From efc at swisscows.email  Mon Nov 27 15:23:46 2023
From: efc at swisscows.email (efc at swisscows.email)
Date: Mon, 27 Nov 2023 16:23:46 +0100 (CET)
Subject: [ExI] Culture war and universal values in the age of spiritual
 machines
In-Reply-To: 
References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>
 <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email>
Message-ID: <560c7edc-300c-2f49-0c31-b823042959ba@swisscows.email>

Interesting. On the one hand it feels (to me) like truisms, and on the
other like an impossible ideal. I am reminded of the infighting of the
effective altruism movement and could easily imagine that something like
this could be weaponized there.

I could also see this from a daoist perspective, where stones and animals
already are manifesting the right morality, since they are not rational
beings in the ways humans are.

So when having such abstract theories, I feel they can be bent and twisted
to mean any number of things, and they lose a lot of their power.

Another view... moral behaviour is just a side effect of a well-developed
character. Therefore, no moral theory is needed, just a way to develop
people. Morality will follow.

Best regards,
Daniel

On Mon, 27 Nov 2023, Jason Resch wrote:

> Here is an excerpt from his paper, read by Zuboff. It is interesting that
> he titles this as applicable to the question of whether AI will be moral:
> https://youtu.be/Yy3SKed25eM
>
> Jason
>
> On Mon, Nov 27, 2023, 5:48 AM Jason Resch wrote:
>
>> On Mon, Nov 27, 2023, 5:11 AM efc--- via extropy-chat wrote:
>>> Hello Jason,
>>>
>>> That sounds like a tough moral to live up to. What does he say about
>>> the fact that we by design never will have that information?
>
> He acknowledges that it is impossible to live up to or achieve, but lays
> out the argument showing that this is what morality concerns, and
> acknowledging this provides both a justification for why we should all
> try to act morally in the first place (why we should try to be good), as
> well as a foundation for agreement on what should be the most moral
> action (what is the most good), even if that's not knowable: we would at
> least all agree if we had perfect knowledge of the situation. So we can
> build and justify heuristics on this common foundation, now that we have
> one.
>
>> Does he advocate for best effort? If so, it seems like the power of the
>> principle to guide people is not the strongest.
>
> He saw all previous attempts at justifying morality (answering: for what
> reason ought we be good) as wanting. The goal of this paper is to answer
> the question of why (rather than the how).
>
> Jason

snip

> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From pharos at gmail.com  Tue Nov 28 00:15:07 2023
From: pharos at gmail.com (BillK)
Date: Tue, 28 Nov 2023 00:15:07 +0000
Subject: [ExI] Culture war and universal values in the age of spiritual
 machines
In-Reply-To: <560c7edc-300c-2f49-0c31-b823042959ba@swisscows.email>
References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>
 <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email>
 <560c7edc-300c-2f49-0c31-b823042959ba@swisscows.email>
Message-ID: 

On Mon, 27 Nov 2023 at 15:26, efc--- via extropy-chat wrote:
>
> Interesting.
It feels on the one hand, (to me) like truisms, and on the > other as an impossible ideal. I am reminded of the infighting of the > effective altruism movement and could easily imagine that something like > this could be weaponized there. > > I could also see this in a daoist perspective, where stones and animals > already are manifesting the right morality, since they are not rational > beings in the ways humans are. > > So when having such abstract theories, I feel they can be bent and twisted > to mean any nr of things and they lose a lot of their power. > > Another view... moral behaviour is just a side effect of a well developed > character. Therefore, no moral theory is needed, just a way to develop > people. Morality will follow. > > Best regards, Daniel >_______________________________________________ Hmmmm. Checking in the opposite direction, I wondered what the result would be if I asked an AI - Make a complete list of all the moral values that major religions disagree on. The AI complained a bit, but made a fair attempt. :) Reply: While it is difficult to provide an exhaustive list, here is a comprehensive compilation of moral values on which major religions may disagree: 1. Abortion 2. LGBTQ+ rights and same-sex marriage 3. Euthanasia and assisted suicide 4. Women's rights and gender equality 5. Capital punishment 6. Contraception and family planning 7. Divorce and remarriage 8. Dietary restrictions and food choices 9. Alcohol and substance use 10. Modesty in dress and behavior 11. Animal rights and treatment 12. Environmental ethics and stewardship 13. War and violence 14. Just distribution of wealth and economic justice 15. Interfaith marriage and religious conversion 16. Freedom of expression and blasphemy laws 17. Genetic engineering and cloning 18. Stem cell research 19. End-of-life care and medical treatments 20. Slavery and human trafficking 21. Racial and ethnic equality 22. Caste system and social hierarchy 23. 
Assisted reproductive technologies 24. Access to healthcare and medical interventions 25. Privacy and surveillance 26. Assisted suicide and euthanasia 27. Organ donation and transplantation 28. Forgiveness and reconciliation 29. Interpreting sacred texts and religious authority 30. Rituals and worship practices 31. Role of women in religious leadership 32. Interfaith dialogue and cooperation 33. Immigration and refugee rights 34. Animal sacrifice and ritual practices 35. Conscientious objection and religious exemptions 36. Education and curriculum content 37. Role of religion in politics and governance 38. Inheritance and property rights 39. Birth control and population control strategies 40. Environmental conservation and sustainability. It is important to note that this list may not cover every possible disagreement or variation within major religions. Additionally, individual believers may have their own interpretations and personal beliefs that may differ from the official teachings of their respective religious traditions. --------------------- This list gives me the impression that religions are willing to agree on vague nice things, like truth, honesty, compassion, etc. But when you get down to the nitty-gritty details of how to actually implement rules in daily life, then that's a completely different ball game! 
BillK

From efc at swisscows.email  Tue Nov 28 08:23:59 2023
From: efc at swisscows.email (efc at swisscows.email)
Date: Tue, 28 Nov 2023 09:23:59 +0100 (CET)
Subject: [ExI] Culture war and universal values in the age of spiritual
 machines
In-Reply-To: 
References: <70a4fb03-66d8-aebc-75ac-4b81306ffdde@swisscows.email>
 <4c9cca18-da2d-2f05-9c3f-734469b85322@swisscows.email>
 <560c7edc-300c-2f49-0c31-b823042959ba@swisscows.email>
Message-ID: <8896b796-be5c-82d0-387b-71bb99a5a1ba@swisscows.email>

Good morning Bill,

On Tue, 28 Nov 2023, BillK via extropy-chat wrote:

> This list gives me the impression that religions are willing to agree
> on vague nice things, like truth, honesty, compassion, etc.
> But when you get down to the nitty-gritty details of how to actually
> implement rules in daily life, then that's a completely different ball
> game!

Very human. Don't we all do it all the time? Agreeing on vague
generalities is a staple of international agreements and of politics, so
why not of religions as well? ;)

Best regards,
Daniel

> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From foozler83 at gmail.com  Tue Nov 28 14:32:30 2023
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Tue, 28 Nov 2023 08:32:30 -0600
Subject: [ExI] a problem in topology
Message-ID: 

from Randy Cunningham's list:

Being completely undressed on the side of the road where, deputies say,
they were "on top of one another" having sex "in the presence of children."

IGNobel Prize for answering this one - bill w

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From giulio at gmail.com  Wed Nov 29 08:13:11 2023
From: giulio at gmail.com (Giulio Prisco)
Date: Wed, 29 Nov 2023 09:13:11 +0100
Subject: [ExI] [Extropolis] Irrational mechanics, draft Ch. 6. Creative
 evolution toward quality.
 Also, the 2nd test flight of Starship and the OpenAI drama.
In-Reply-To: 
References: 
Message-ID: 

John, this argument is flawed because it is really equivalent to: I'm
bigger than you and I can beat the shit out of you, therefore you don't
have free will but you can only do what I want.

Of course there are certain states of the universe that lead to a certain
conclusion that can't be escaped (for example, if I jump off a high window
I'll most certainly die on impact), but other states of the universe are
more forgiving and can be tweaked. The Britannica definition of free will
says "make choices," not "realize choices." So if I flap my arms like
wings and hope to start flying like a bird while I'm falling down toward
certain death on impact, I'm still making a choice (one that,
unfortunately, I'll not be able to realize). Other situations are more
amenable to tweaking.

Of course you are not entirely autonomous: you can't jump off the window
and resist the force of gravity with your bare will. But you are an
integral and necessary part of the cosmic operating system (this parallels
the idea that you are conscious because you are part of universal
consciousness), with some limited autonomy. I define free will as your
ability to make choices that are not entirely determined by the rest of
the universe (that is, the universe minus you), and cause change (see
"Down and out in a block universe"). A flexible concept of causation (see
"Libertarian determinism") that includes the possibility of retrocausality
(backward causation) doesn't reduce you to initial conditions in the far
past before you existed, and restores a concept of free will that is good
enough for me.

On Tue, Nov 28, 2023 at 2:10 PM John Clark wrote:
>
> On Sat, Nov 25, 2023 at 12:08 PM Giulio Prisco wrote:
>
>> Irrational mechanics, draft Ch. 6. Creative evolution toward quality.
>> Also, the 2nd test flight of Starship and the OpenAI drama.
>> https://www.turingchurch.com/p/irrational-mechanics-draft-ch-6
>
> I like a lot of what you say, but in any philosophical discussion I think
> it's a mistake to bring up "free will" because confusion always results.
> According to the Encyclopedia Britannica, "free will" is "the supposed
> power or capacity of humans to make choices independently of any prior
> event or state of the universe"; of course by that definition humans do
> not have free will, because there are certainly states of the universe
> that prevent us from doing what we want; in fact there are states of the
> universe that prevent us from doing anything at all. But that doesn't
> really matter, because the Britannica also says that in philosophy
> "choice", a word used in the definition of free will, is "the supposed
> ability to FREELY decide between alternatives". So something has free
> will if you can choose, and if you can choose then you have free will,
> and round and round we go. So the free will idea is not true, and it's
> not false either; it's gibberish.
>
> John K Clark
>
> --
> You received this message because you are subscribed to the Google Groups
> "extropolis" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to extropolis+unsubscribe at googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/extropolis/CAJPayv3mEO%3D_wQZOM8bs0bMgwLnVagOH5DyWs7VL5F7mkWZqEw%40mail.gmail.com
> .

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From postmowoods at gmail.com  Wed Nov 29 21:15:07 2023
From: postmowoods at gmail.com (Kelly Anderson)
Date: Wed, 29 Nov 2023 14:15:07 -0700
Subject: [ExI] Culture war and universal values in the age of spiritual
 machines
In-Reply-To: 
References: 
Message-ID: 

I note that many of the Islamic verses only talk about how to treat other
Muslims.
It's scary what's ok to do to us infidels.

Kelly

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From efc at swisscows.email  Wed Nov 29 22:35:36 2023
From: efc at swisscows.email (efc at swisscows.email)
Date: Wed, 29 Nov 2023 23:35:36 +0100 (CET)
Subject: [ExI] Are transhumanists becoming the bad guys?
In-Reply-To: 
References: 
Message-ID: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email>

Hello list,

The other day I was listening to a conspiracy pod and all of a sudden, I
heard a medium talk about how the transhumanists want to eradicate all
spiritual values and merge humanity with the machine.

In my own home-cooked transhumanist version, transhumanists are
optimistic, life-affirming people who are solving problems with science
and technology.

So when did transhumanists start to become the bad guys in conspiracy
circles? Did any of you encounter this in real life or in virtual life?

Curious shift in narrative that completely passed me by and caught me by
surprise.

Best regards,
Daniel

From dsunley at gmail.com  Wed Nov 29 22:50:42 2023
From: dsunley at gmail.com (Darin Sunley)
Date: Wed, 29 Nov 2023 15:50:42 -0700
Subject: [ExI] Are transhumanists becoming the bad guys?
In-Reply-To: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email>
References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email>
Message-ID: 

We have been in, and are heading further into, an extended period of
massive societal change and economic uncertainty. Per Keith Henson, this
triggers a deep-seated gene complex that turns people unreasonably
xenophobic. Any group radically different from one's own becomes
intrinsically suspect just on general principles.

I've been active on this list and other transhumanist-adjacent circles
since the mid-90s. And even with the general principles set aside, some of
the substance of those complaints is, broadly speaking, true.
Transhumanists are still, and really always have been, at best, mildly hostile to traditional conceptions of spirituality and religion. The Venn diagram between transhumanism and New Atheism in the 90's was almost a pair of concentric circles. Not all online atheists were transhumanists, but virtually all transhumanists were atheist, frequently aggressively so. On Wed, Nov 29, 2023 at 3:37?PM efc--- via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hello list, > > The other day I was listening to a conspiracy pod and all of a sudden, I > heard a medium talk about how the transhumanists want to > eradicate all spiritual values and merge humanity with the machine. > > In my own home cooked transhumanist version, transhumanists are > optimistic, life affirming people who are solving problems with > science and technology. > > So when did transhumanists start to become the bad guys in conspiracy > circles? Did anyone of you encounter this in real life or in > virtual life? > > Curious shift in narrative that completely passed me by and caught me by > surprise. > > Best regards, > Daniel > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 29 22:52:38 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 29 Nov 2023 14:52:38 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: On Wed, Nov 29, 2023 at 2:36?PM efc--- via extropy-chat wrote: snip > > Curious shift in narrative that completely passed me by and caught me by > surprise. In that case, I must be the baddest of them all, having been called the Ur-transhumanist. 
:-) Keith > Best regards, > Daniel > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Thu Nov 30 00:15:40 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Nov 2023 16:15:40 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: <032d01da2322$5acd5fc0$10681f40$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of efc--- via extropy-chat Subject: [ExI] Are transhumanists becoming the bad guys? Hello list, The other day I was listening to a conspiracy pod and all of a sudden, I heard a medium talk about how the transhumanists want to eradicate all spiritual values and merge humanity with the machine. In my own home cooked transhumanist version, transhumanists are optimistic, life affirming people who are solving problems with science and technology. So when did transhumanists start to become the bad guys in conspiracy circles? Did anyone of you encounter this in real life or in virtual life? Curious shift in narrative that completely passed me by and caught me by surprise. Best regards, Daniel _______________________________________________ Dang, did we forget to brief Daniel on the big conspiracy? No worries Daniel, they forgot me too. spike From spike at rainier66.com Thu Nov 30 00:19:31 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Nov 2023 16:19:31 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? 
In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: <033201da2322$e47b5880$ad720980$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Keith Henson via extropy-chat Sent: Wednesday, 29 November, 2023 2:53 PM To: ExI chat list Cc: Keith Henson Subject: Re: [ExI] Are transhumanists becoming the bad guys? On Wed, Nov 29, 2023 at 2:36 PM efc--- via extropy-chat wrote: snip > > Curious shift in narrative that completely passed me by and caught me > by surprise. In that case, I must be the baddest of them all, having been called the Ur-transhumanist. :-) Keith > Best regards, > Daniel > > > _______________________________________________ Oh oh Keith, now ya done it. With that claim you trigger the need for a contest or competition, to see who is the urest of transhumanism. I agree you are urer than most, but we need to arrange a transhumanist Olympiad of some sort. spike From hkeithhenson at gmail.com Thu Nov 30 00:53:21 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 29 Nov 2023 16:53:21 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: <033201da2322$e47b5880$ad720980$@rainier66.com> References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> <033201da2322$e47b5880$ad720980$@rainier66.com> Message-ID: On Wed, Nov 29, 2023 at 4:19 PM wrote: snip > Oh oh Keith, now ya done it. With that claim you trigger the need for a contest or competition, to see who is the urest of transhumanism. I agree you are urer than most, but we need to arrange a transhumanist Olympiad of some sort. I was called the ur-transhumanist by no less an authority than R.U. Sirius. Top that. ;-) Keith > spike > From spike at rainier66.com Thu Nov 30 01:35:59 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Nov 2023 17:35:59 -0800 Subject: [ExI] Are transhumanists becoming the bad guys?
In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> <033201da2322$e47b5880$ad720980$@rainier66.com> Message-ID: <034701da232d$9308aa10$b919fe30$@rainier66.com> -----Original Message----- From: Keith Henson Subject: Re: [ExI] Are transhumanists becoming the bad guys? On Wed, Nov 29, 2023 at 4:19 PM wrote: snip > Oh oh Keith, now ya done it. With that claim you trigger the need for a contest or competition, to see who is the urest of transhumanism. I agree you are urer than most, but we need to arrange a transhumanist Olympiad of some sort. I was called the ur-transhumanist by no less an authority than R.U. Sirius. Top that. ;-) Keith > spike > R.U. Sirius! Cool! Where has that lad been hiding all this time? If we were to create a series of transhumanist games, what could we do? How about... the long jump of reason? Springboard to orbit? Ideas? spike From hkeithhenson at gmail.com Thu Nov 30 01:59:50 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 29 Nov 2023 17:59:50 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: <034701da232d$9308aa10$b919fe30$@rainier66.com> References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> <033201da2322$e47b5880$ad720980$@rainier66.com> <034701da232d$9308aa10$b919fe30$@rainier66.com> Message-ID: On Wed, Nov 29, 2023 at 5:36 PM wrote: snip > R.U. Sirius! Cool! Where has that lad been hiding all this time? In the last few weeks, he has been recovering from a heart bypass. > If we were to create a series of transhumanist games, what could we do? How about... the long jump of reason? Springboard to orbit? Ideas? Being named as a transhumanist on Wikipedia. There are 69 American transhumanists. Interesting list. > spike > From giulio at gmail.com Thu Nov 30 07:29:06 2023 From: giulio at gmail.com (Giulio Prisco) Date: Thu, 30 Nov 2023 08:29:06 +0100 Subject: [ExI] [Extropolis] Irrational mechanics, draft Ch. 6. Creative evolution toward quality.
Also, the 2nd test flight of Starship and the OpenAI drama. In-Reply-To: References: Message-ID: John, you say "If things are really deterministic then it all goes back to the initial state of the Big Bang." I disagree. A history can be really deterministic (that is, uniquely determined) without being uniquely determined by an initial state. In other words, global determinism can hold without strict dynamical laws that determine the future from the past. This is the central point of my chapter "Libertarian determinism:" https://www.turingchurch.com/p/irrational-mechanics-draft-ch-4 I have many examples in mind, here is one. Think of the traveling salesman. He goes from city 1 to city 2 ... to city N (or back to city 1 through all other cities) following the global rule that the total distance (or, more generally, the total cost) must be minimal. As you know, this is a hard computational problem. The history of the traveling salesman's path is uniquely determined by the global rule, but not by the past of the path. That is, the next city is not uniquely determined by the current city and the cities visited in the past, but it is only determined by all cities together, including those that will be visited in the far future. So there are retrocausal influences (backward causation). Retrocausality breaks Laplacian determinism. You are not uniquely determined by the past of spacetime, but only by the whole history of spacetime, of which you are an irreducible part, and this is a concept of free will that is good enough for me. Then I have generalized my argument to allow for multiple solutions, but that is another story. Read "Libertarian determinism" again? 
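Giulio's traveling-salesman point can be made concrete in a few lines of Python (hypothetical distances, brute force for clarity, not anything from his chapter): change one distance between two cities "in the far future" of the tour, and the optimal *first* step out of the starting city changes, even though nothing about the starting city's own distances changed. The next city really is fixed only by the whole tour, not by the path so far.

```python
from itertools import permutations

def best_tour(dist):
    """Brute-force the shortest closed tour that starts and ends at city 0."""
    n = len(dist)
    best, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best, best_len = tour, length
    return best, best_len

# Hypothetical symmetric distances between 4 cities.
d1 = [[0, 1, 2, 5],
      [1, 0, 5, 2],
      [2, 5, 0, 1],
      [5, 2, 1, 0]]

# Identical, except the single distance between cities 2 and 3 grows.
d2 = [row[:] for row in d1]
d2[2][3] = d2[3][2] = 12

tour1, len1 = best_tour(d1)  # (0, 1, 3, 2, 0), length 6
tour2, len2 = best_tour(d2)  # (0, 2, 1, 3, 0), length 14

# The optimal first step out of city 0 changes (to city 1 vs. city 2),
# although no distance involving city 0 changed: the next city is
# determined by all cities together, not by the salesman's past.
```

With real problem sizes the brute force is replaced by heuristics, but the logical point survives: the rule "total distance must be minimal" is a global constraint on the whole history of the path.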
On Wed, Nov 29, 2023 at 12:09?PM John Clark wrote: > On Wed, Nov 29, 2023 at 3:13?AM Giulio Prisco wrote: > > *> "John, this argument is flawed because it is really equivalent to: I'm >> bigger than you and I can beat the shit out of you, therefore you don't >> have free will but you can only do what I want.* >> > > I agree the argument is equivalent but I disagree it is flawed. > > >> * > if I flap my arms like wings and hope to start flying like a bird >> while I'm falling down toward certain death on impact, I'm still making a >> choice* >> > > You chose to flap your arms like wings because you wanted to, and you > either wanted to for a reason, in which case your choice was deterministic, > or you wanted to for no reason, in which case your choice was random. So > you're either a cuckoo clock or a roulette wheel. > > >* "**I define free will as your ability to make choices that are not >> entirely determined by the rest of the universe (that is, the universe >> minus you)* >> > > If it wasn't the state of the external universe that caused you to do what > you did then it must've been because of your internal state; and you were > either in that internal state for a reason, in which case it was > deterministic, or you were in that internal state for no reason, in which > case it was random. When we see somebody do something for no reason we say > they are acting "irrationally", and normally that is not considered a > compliment. > > Although I've never heard of anybody except me use it, I would define "free > will" as the* inability* to always be certain what you yourself will do > next until you actually do it, because being deterministic and being > predictable are not necessarily the same thing. Sometimes, even with > something completely mechanical, the only way to know what it will do next > is to watch it and see. 
> > *> "A flexible concept of causation (see "Libertarian determinism") that >> includes the possibility of retrocausality (backward causation) doesn't >> reduce you to initial conditions in the far past before you existed"* > > > Why not? If things are really deterministic then it all goes back to the > initial state of the Big Bang. > > John K Clark > > > > >> -- > You received this message because you are subscribed to the Google Groups > "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/extropolis/CAJPayv14ECrTjC5z8cB4uyfdSTQoWC%2Bz2fQBiEKOfJg0%2BRn-pg%40mail.gmail.com > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at swisscows.email Thu Nov 30 09:08:50 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 30 Nov 2023 10:08:50 +0100 (CET) Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: Hello Darin, and thank you for the background! So since you've been active since the mid-90s, have you seen the "transhumanist bad guy" narrative coming and going throughout the years? I've never seen it before, but if you've seen it coming and going, it's just old news, and that I never saw or heard it. Best regards, Daniel On Wed, 29 Nov 2023, Darin Sunley via extropy-chat wrote: > We have been in for some time, and are heading into, an extended period of massive societal change and economic uncertainty. Per > Keith Henson, this triggers a deep-seated gene complex that turns people unreasonably xenophobic. Any group radically different from > one's own becomes intrinsically suspect just on general principles. > I've been active on this list and other transhumanist-adjacent circles since the mid-90s. 
And even with the general principles set > aside, some of the substance of those complaints is, broadly speaking, true. Transhumanists are still, and really always have been, > at best, mildly hostile to traditional conceptions of spirituality and religion. The Venn diagram between transhumanism and New > Atheism in the 90's was almost a pair of concentric circles. Not all online atheists were transhumanists, but virtually all > transhumanists were atheist, frequently aggressively so. > > On Wed, Nov 29, 2023 at 3:37?PM efc--- via extropy-chat wrote: > Hello list, > > The other day I was listening to a conspiracy pod and all of a sudden, I > heard a medium talk about how the transhumanists want to > eradicate all spiritual values and merge humanity with the machine. > > In my own home cooked transhumanist version, transhumanists are > optimistic, life affirming people who are solving problems with > science and technology. > > So when did transhumanists start to become the bad guys in conspiracy > circles? Did anyone of you encounter this in real life or in > virtual life? > > Curious shift in narrative that completely passed me by and caught me by > surprise. > > Best regards, > Daniel > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > From pharos at gmail.com Thu Nov 30 11:32:18 2023 From: pharos at gmail.com (BillK) Date: Thu, 30 Nov 2023 11:32:18 +0000 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: On Thu, 30 Nov 2023 at 09:11, efc--- via extropy-chat wrote: > > Hello Darin, and thank you for the background! > > So since you've been active since the mid-90s, have you seen the > "transhumanist bad guy" narrative coming and going throughout the years? 
> > I've never seen it before, but if you've seen it coming and going, it's > just old news, and that I never saw or heard it. > > Best regards, Daniel >_______________________________________________ An AI summary- List some of the biggest misconceptions about transhumanism, and how transhumanists respond to them. GPT-3.5-Turbo 1. Loss of humanity: One common misconception is that transhumanism seeks to eliminate or devalue what it means to be human. In response, transhumanists emphasize that their goal is not to replace humanity but to enhance it. They argue that embracing technological advancements can help humans overcome limitations, improve well-being, and evolve mentally and physically. 2. Immortality: Another misconception is that transhumanists aim for immortality. While some individuals may desire extended lifespans, transhumanism is not solely focused on achieving eternal life. Transhumanists emphasize the importance of quality of life and argue that advancements in medicine, biotechnology, and rejuvenation can help humans live healthier and longer lives. 3. Inequality: Critics often argue that transhumanism will exacerbate existing social inequalities by only benefiting the wealthy. Transhumanists respond by advocating for equal access to technologies and advocating for policies that ensure these advancements are available to everyone. They argue that widespread adoption of technologies can ultimately reduce inequalities by advancing healthcare, education, and other fields. 4. Loss of individuality: Some people fear that transhumanism will lead to a homogenized society where everyone is the same due to technological enhancements. Transhumanists argue that individuality will be preserved and enhanced by enabling individuals to choose and shape their own enhancements. They believe that personalization and diversity will thrive in a technologically advanced society. 5. 
Ethical concerns: Critics often raise ethical concerns related to transhumanism, such as the potential for playing with nature or creating a "post-human" world. Transhumanists acknowledge the need for ethical considerations and advocate for responsible development and regulation of technologies. They emphasize the importance of open dialogue, ethical frameworks, and democratic decision-making to ensure the responsible use of transformative technologies. 6. Dystopian future: Some fear that transhumanism could lead to a dystopian future where humanity loses control over technology or faces oppression by a superintelligent AI. Transhumanists address these concerns by emphasizing the need for robust safety measures, ethical guidelines, and responsible development. They advocate for proactive engagement with AI and other technologies to ensure they align with human values and benefit society as a whole. Overall, transhumanists aim to dispel misconceptions by promoting the idea that transhumanism is an inclusive movement that seeks to empower individuals, enhance human capabilities, and address societal challenges through responsible and beneficial applications of technology. ================== BillK From hkeithhenson at gmail.com Thu Nov 30 15:37:51 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 30 Nov 2023 07:37:51 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: Sheesh. The AIs can express themselves better than I can. The fact is, transhumanists are a tiny, tiny minority. I doubt there are enough to gain the negative attention of society at large. Plus not that many take it seriously enough to make a lot of noise. Big overlap with cryonicists, but together they are still tiny. 
Keith On Thu, Nov 30, 2023 at 3:34?AM BillK via extropy-chat wrote: > > On Thu, 30 Nov 2023 at 09:11, efc--- via extropy-chat > wrote: > > > > Hello Darin, and thank you for the background! > > > > So since you've been active since the mid-90s, have you seen the > > "transhumanist bad guy" narrative coming and going throughout the years? > > > > I've never seen it before, but if you've seen it coming and going, it's > > just old news, and that I never saw or heard it. > > > > Best regards, Daniel > >_______________________________________________ > > > An AI summary- > > List some of the biggest misconceptions about transhumanism, and how > transhumanists respond to them. > GPT-3.5-Turbo > > 1. Loss of humanity: One common misconception is that transhumanism > seeks to eliminate or devalue what it means to be human. In response, > transhumanists emphasize that their goal is not to replace humanity > but to enhance it. They argue that embracing technological > advancements can help humans overcome limitations, improve well-being, > and evolve mentally and physically. > > 2. Immortality: Another misconception is that transhumanists aim for > immortality. While some individuals may desire extended lifespans, > transhumanism is not solely focused on achieving eternal life. > Transhumanists emphasize the importance of quality of life and argue > that advancements in medicine, biotechnology, and rejuvenation can > help humans live healthier and longer lives. > > 3. Inequality: Critics often argue that transhumanism will exacerbate > existing social inequalities by only benefiting the wealthy. > Transhumanists respond by advocating for equal access to technologies > and advocating for policies that ensure these advancements are > available to everyone. They argue that widespread adoption of > technologies can ultimately reduce inequalities by advancing > healthcare, education, and other fields. > > 4. 
Loss of individuality: Some people fear that transhumanism will > lead to a homogenized society where everyone is the same due to > technological enhancements. Transhumanists argue that individuality > will be preserved and enhanced by enabling individuals to choose and > shape their own enhancements. They believe that personalization and > diversity will thrive in a technologically advanced society. > > 5. Ethical concerns: Critics often raise ethical concerns related to > transhumanism, such as the potential for playing with nature or > creating a "post-human" world. Transhumanists acknowledge the need for > ethical considerations and advocate for responsible development and > regulation of technologies. They emphasize the importance of open > dialogue, ethical frameworks, and democratic decision-making to ensure > the responsible use of transformative technologies. > > 6. Dystopian future: Some fear that transhumanism could lead to a > dystopian future where humanity loses control over technology or faces > oppression by a superintelligent AI. Transhumanists address these > concerns by emphasizing the need for robust safety measures, ethical > guidelines, and responsible development. They advocate for proactive > engagement with AI and other technologies to ensure they align with > human values and benefit society as a whole. > > Overall, transhumanists aim to dispel misconceptions by promoting the > idea that transhumanism is an inclusive movement that seeks to empower > individuals, enhance human capabilities, and address societal > challenges through responsible and beneficial applications of > technology. 
> ================== > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Thu Nov 30 16:06:13 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Nov 2023 08:06:13 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: <1248656824.8046905.1701359482295@mail.yahoo.com> References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> <033201da2322$e47b5880$ad720980$@rainier66.com> <1248656824.8046905.1701359482295@mail.yahoo.com> Message-ID: <002701da23a7$253f31a0$6fbd94e0$@rainier66.com> From: RU Sirius Sent: Thursday, 30 November, 2023 7:51 AM To: Keith Henson Cc: ExI chat list ; spike at rainier66.com Subject: Re: [ExI] Are transhumanists becoming the bad guys? howdy SIRIUS! Welcome back, our prodigal s...brother! We heard you were around, hiding somewhere. I am one of your fans from way back when we were younger than we are now. We were noting the Wiki list of transhumanists. I didn't make the list, not even with a footnote identifying me as the court jester. But it does lead to the question: How can we know if a person is a transhumanist? Is any surgery or hormone therapy needed? Any particular article of clothing associated with transhumanism? Or can one just claim to be transhumanist, then trans back to humanist at any time? If we set up transhumanist games, can the competitors later take on Vonnegut and Asimov and Dennett and Pinker, that crowd? The mind boggles. RU Sirius, glad you checked in with us sir. Our list is richer with your presence. spike On Wednesday, November 29, 2023 at 04:53:50 PM PST, Keith Henson > wrote: On Wed, Nov 29, 2023 at 4:19 PM > wrote: snip > Oh oh Keith, now ya done it. With that claim you trigger the need for a contest or competition, to see who is the urest of transhumanism.
I agree you are urer than most, but we need to arrange a transhumanist Olympiad of some sort. I was called the ur-transhumanist by no less an authority than R.U Sirius. Top that. ;-) Keith > spike > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Nov 30 16:14:21 2023 From: pharos at gmail.com (BillK) Date: Thu, 30 Nov 2023 16:14:21 +0000 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: On Thu, 30 Nov 2023 at 15:40, Keith Henson via extropy-chat wrote: > > Sheesh. The AIs can express themselves better than I can. > > The fact is, transhumanists are a tiny, tiny minority. I doubt there > are enough to gain the negative attention of society at large. Plus > not that many take it seriously enough to make a lot of noise. Big > overlap with cryonicists, but together they are still tiny. > > Keith > _______________________________________________ Agreed. But at least one AI is hopeful for the future. Reply - Yes, the younger generation is more likely to be open to transhumanist ideas due to their familiarity with rapidly advancing technologies. Growing up in a world where technological advancements have been constant and rapid, the younger generation has a natural affinity for technology and is more likely to embrace the idea of using technology to enhance human capabilities. This familiarity with technology has created a sense of comfort and trust in its potential to improve their lives. As a result, they are more likely to be open to the idea of transhumanism, which advocates for the use of technology to enhance human intelligence, physical ability, and lifespan. Moreover, the younger generation is more likely to be exposed to the ideas of transhumanism through popular culture, such as movies, TV shows, and books, which often explore the themes of technological advancement and human enhancement. 
This exposure can help to shape their attitudes and beliefs about the potential of technology to improve human life. Additionally, the younger generation is more likely to be aware of the ethical and societal implications of transhumanism, such as the potential for unequal access to technology and the risks of unintended consequences. This awareness can lead to a more nuanced and thoughtful approach to the idea of transhumanism, rather than a blind embrace of its potential. --------------------- Encouraging! BillK From spike at rainier66.com Thu Nov 30 16:20:02 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Nov 2023 08:20:02 -0800 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: <003801da23a9$133a3d40$39aeb7c0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... On Thu, 30 Nov 2023 at 15:40, Keith Henson via extropy-chat wrote: > > Sheesh. The AIs can express themselves better than I can. ... > Keith > _______________________________________________ Agreed. But at least one AI is hopeful for the future. Reply - >...Yes, the younger generation is more likely to be open to transhumanist ideas due to their familiarity with rapidly advancing technologies....This awareness can lead to a more nuanced and thoughtful approach to the idea of transhumanism, rather than a blind embrace of its potential. --------------------- >...Encouraging! BillK _______________________________________________ BillK, reading that passage you quoted from ChatGPT, it occurred to me that we wrote that. It was searching material that we contributed to, most of us here. It has the ring of truth to it because it is our collective thoughts, right here. 
spike From ben at zaiboc.net Thu Nov 30 16:46:34 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Thu, 30 Nov 2023 16:46:34 +0000 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: Message-ID: On 29/11/2023 22:51, efc at swisscows wrote: > Hello list, > > > > The other day I was listening to a conspiracy pod and all of a sudden, I > heard a medium talk about how the transhumanists want to > > eradicate all spiritual values and merge humanity with the machine. > > > > In my own home cooked transhumanist version, transhumanists are > optimistic, life affirming people who are solving problems with > > science and technology. > > > > So when did transhumanists start to become the bad guys in conspiracy > circles? Did anyone of you encounter this in real life or in > > virtual life? > > > > Curious shift in narrative that completely passed me by and caught me by > surprise. Transhumanists have been painted as the bad guys for ages. Since before even Francis Fukuyama and Bill Joy. The evil (or at least misguided) mad scientist (played by John Hannah) in 'Agents of Shield' was supposed to be a transhumanist. And of course just about every religion that ever existed is going to be opposed to a world-view that's based on materialism, promotes rationality, tolerance and freedom of choice, and challenges superstition. Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Nov 30 17:33:42 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 30 Nov 2023 11:33:42 -0600 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: This theme is very popular with scifi writers, as we know. I would not call people who want changes in our bodies and brains a 'tiny' part of the population. 
bill w On Thu, Nov 30, 2023 at 10:16?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, 30 Nov 2023 at 15:40, Keith Henson via extropy-chat > wrote: > > > > Sheesh. The AIs can express themselves better than I can. > > > > The fact is, transhumanists are a tiny, tiny minority. I doubt there > > are enough to gain the negative attention of society at large. Plus > > not that many take it seriously enough to make a lot of noise. Big > > overlap with cryonicists, but together they are still tiny. > > > > Keith > > _______________________________________________ > > > Agreed. But at least one AI is hopeful for the future. > > Reply - > > Yes, the younger generation is more likely to be open to transhumanist > ideas due to their familiarity with rapidly advancing technologies. > > Growing up in a world where technological advancements have been > constant and rapid, the younger generation has a natural affinity for > technology and is more likely to embrace the idea of using technology > to enhance human capabilities. This familiarity with technology has > created a sense of comfort and trust in its potential to improve their > lives. As a result, they are more likely to be open to the idea of > transhumanism, which advocates for the use of technology to enhance > human intelligence, physical ability, and lifespan. > > Moreover, the younger generation is more likely to be exposed to the > ideas of transhumanism through popular culture, such as movies, TV > shows, and books, which often explore the themes of technological > advancement and human enhancement. This exposure can help to shape > their attitudes and beliefs about the potential of technology to > improve human life. > > Additionally, the younger generation is more likely to be aware of the > ethical and societal implications of transhumanism, such as the > potential for unequal access to technology and the risks of unintended > consequences. 
This awareness can lead to a more nuanced and thoughtful > approach to the idea of transhumanism, rather than a blind embrace of > its potential. > --------------------- > > Encouraging! > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natashavita-more.com Thu Nov 30 17:55:32 2023 From: natasha at natashavita-more.com (Natasha natashavita-more.com) Date: Thu, 30 Nov 2023 17:55:32 +0000 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: Dear Daniel, In response to your questions before I read the many responses to them from others on the list: * So when did transhumanists start to become the bad guys in conspiracy circles? This has been going on for many decades. First by a limited few in academic circles in social science and humanities departments, often because they were focused on postmodernist rhetoric that did not include future vision or advances in technology. Second, and more widely cast, was journalistic hyperbole that caught attention by making very broad claims that are typical of that type of journalism for effect. * Did anyone of you encounter this in real life or in virtual life? Yes, many of us have been countering this for decades as well. Conferences, events, writings, debates, and so forth. However, since the movement continues to grow, so do the antagonistic views. Natasha ________________________________ From: extropy-chat on behalf of efc--- via extropy-chat Sent: Wednesday, November 29, 2023 3:35 PM To: ExI chat list Cc: efc at swisscows.email Subject: [ExI] Are transhumanists becoming the bad guys?
Hello list, The other day I was listening to a conspiracy pod and all of a sudden, I heard a medium talk about how the transhumanists want to eradicate all spiritual values and merge humanity with the machine. In my own home cooked transhumanist version, transhumanists are optimistic, life affirming people who are solving problems with science and technology. So when did transhumanists start to become the bad guys in conspiracy circles? Did anyone of you encounter this in real life or in virtual life? Curious shift in narrative that completely passed me by and caught me by surprise. Best regards, Daniel _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natashavita-more.com Thu Nov 30 17:59:10 2023 From: natasha at natashavita-more.com (Natasha natashavita-more.com) Date: Thu, 30 Nov 2023 17:59:10 +0000 Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: I have to disagree. I am not and never have been an atheist. The reason is that I do not know what a God could be if the religious dogma were not present. I do not agree with religious groups and their self-absorbed entitlement to what they think of as their God. However, outside of this I have no idea what God could really be. So, I do not respect religious dogma formed by religious worshiping, but I do not claim there is absolutely and unequivocally no semblance of what God could be. I do not know what consciousness is either. Natasha ________________________________ From: extropy-chat on behalf of Darin Sunley via extropy-chat Sent: Wednesday, November 29, 2023 3:50 PM To: ExI chat list Cc: Darin Sunley Subject: Re: [ExI] Are transhumanists becoming the bad guys?
We have been in for some time, and are heading into, an extended period of massive societal change and economic uncertainty. Per Keith Henson, this triggers a deep-seated gene complex that turns people unreasonably xenophobic. Any group radically different from one's own becomes intrinsically suspect just on general principles. I've been active on this list and other transhumanist-adjacent circles since the mid-90s. And even with the general principles set aside, some of the substance of those complaints is, broadly speaking, true. Transhumanists are still, and really always have been, at best, mildly hostile to traditional conceptions of spirituality and religion. The Venn diagram between transhumanism and New Atheism in the 90's was almost a pair of concentric circles. Not all online atheists were transhumanists, but virtually all transhumanists were atheist, frequently aggressively so. On Wed, Nov 29, 2023 at 3:37 PM efc--- via extropy-chat > wrote: Hello list, The other day I was listening to a conspiracy pod and all of a sudden, I heard a medium talk about how the transhumanists want to eradicate all spiritual values and merge humanity with the machine. In my own home cooked transhumanist version, transhumanists are optimistic, life affirming people who are solving problems with science and technology. So when did transhumanists start to become the bad guys in conspiracy circles? Did anyone of you encounter this in real life or in virtual life? Curious shift in narrative that completely passed me by and caught me by surprise. Best regards, Daniel _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Thu Nov 30 20:23:16 2023 From: pharos at gmail.com (BillK) Date: Thu, 30 Nov 2023 20:23:16 +0000 Subject: [ExI] AI ChatGPT Outperforms Humans in Providing Social Advice Message-ID: AI ChatGPT Outperforms Humans in Providing Social Advice, Study Finds November 30, 2023 Quotes: ChatGPT has undeniably established itself as a reliable source of technical information, but can it also excel in offering social advice? A recent study published in the journal Frontiers in Psychology explores this question, revealing that later versions of ChatGPT surpass professional columnists in dispensing personal advice. A groundbreaking study compared ChatGPT's responses to those of human advice columnists in addressing social dilemmas. Participants overwhelmingly perceived ChatGPT's advice as more balanced, complete, empathetic, helpful, and superior overall compared to professional advice. ---------------- The complete research report is available here: (It is lengthy and technical). :) BillK From efc at swisscows.email Thu Nov 30 21:34:21 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 30 Nov 2023 22:34:21 +0100 (CET) Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: <6e90d635-5f53-90f8-2399-f037fa5f30a3@swisscows.email> Message-ID: <4a00200a-5925-6cfa-3d65-265276afa9d9@swisscows.email> Hello Natasha, Would you call yourself an agnostic? Best regards, Daniel On Thu, 30 Nov 2023, Natasha natashavita-more.com via extropy-chat wrote: > I have to disagree. I am not and never have been an atheist. The reason is that I do not know what a God could be if the religious > dogma were not present. I do not agree with religious groups and their self-absorbed entitlement to what they think of as their God. > However, outside of this I have no idea what God could really be.
> > So, I do not respect religious dogma formed by religious worshiping, but I do not claim there is absolutely and unequivocally no > semblance of what God could be. I do not know what consciousness is either. > > Natasha > > _____________________________________________________________________________________________________________________________________ > From: extropy-chat on behalf of Darin Sunley via extropy-chat > > Sent: Wednesday, November 29, 2023 3:50 PM > To: ExI chat list > Cc: Darin Sunley > Subject: Re: [ExI] Are transhumanists becoming the bad guys? > We have been in for some time, and are heading into, an extended period of massive societal change and economic uncertainty. Per > Keith Henson, this triggers a deep-seated gene complex that turns people unreasonably xenophobic. Any group radically different from > one's own becomes intrinsically suspect just on general principles. > > I've been active on this list and other transhumanist-adjacent circles since the mid-90s. And even with the general principles set > aside, some of the substance of those complaints is, broadly speaking, true. Transhumanists are still, and really always have been, > at best, mildly hostile to traditional conceptions of spirituality and religion. The Venn diagram between transhumanism and New > Atheism in the 90's was almost a pair of concentric circles. Not all online atheists were transhumanists, but virtually all > transhumanists were atheist, frequently aggressively so. > > On Wed, Nov 29, 2023 at 3:37 PM efc--- via extropy-chat wrote: > Hello list, > > The other day I was listening to a conspiracy pod and all of a sudden, I > heard a medium talk about how the transhumanists want to > eradicate all spiritual values and merge humanity with the machine. > > In my own home cooked transhumanist version, transhumanists are > optimistic, life affirming people who are solving problems with > science and technology.
> > So when did transhumanists start to become the bad guys in conspiracy > circles? Did anyone of you encounter this in real life or in > virtual life? > > Curious shift in narrative that completely passed me by and caught me by > surprise. > > Best regards, > Daniel > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > From efc at swisscows.email Thu Nov 30 21:38:12 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 30 Nov 2023 22:38:12 +0100 (CET) Subject: [ExI] Are transhumanists becoming the bad guys? In-Reply-To: References: Message-ID: <3f0117b3-d561-b4f5-6b0d-1ad808573d2e@swisscows.email> Hello Ben and thank you for your input. On Thu, 30 Nov 2023, Ben Zaiboc via extropy-chat wrote: > Transhumanists have been painted as the bad guys for ages. Since > before even Francis Fukuyama and Bill Joy. The evil (or at least > misguided) mad scientist (played by John Hannah) in 'Agents of Shield' > was supposed to be a transhumanist. And of course just about every > religion that ever existed is going to be opposed to a world-view > that's based on materialism, promotes rationality, tolerance and > freedom of choice, and challenges superstition. I guess what I reacted to was the specific term transhumanist. Scientists have been bad guys on and off throughout history; just look at Frankenstein, for instance. And atheists and theists have been at each other's throats. But it was the first time I heard that specifically transhumanists were part of the conspiracy to kill the soul.