From bronto at pobox.com Sat May 3 05:52:24 2025
From: bronto at pobox.com (Anton Sherwood)
Date: Fri, 2 May 2025 22:52:24 -0700
Subject: [ExI] Riddle?
In-Reply-To:
References: <8eaf4ef8f653beebfbfb72a48a3e8456@sollegro.com>
 <90bcf0d1-048a-4583-814d-94ff32de6661@pobox.com>
Message-ID: <3439a897-a694-4eb0-ba4d-9294bea5bbcf@pobox.com>

On 2025-04-27 21:58, Darin Sunley wrote:
> Or if we think about countries in Africa that are west of Spain, like
> Morocco, but as I said, Morocco is southwest, and its last letter is
> "O," not "L."

Some of Morocco is due west of Ceuta or Melilla, Spanish cities.

--
*\\* Anton Sherwood *\\* www.bendwavy.org

From pharos at gmail.com Sat May 3 16:14:48 2025
From: pharos at gmail.com (BillK)
Date: Sat, 3 May 2025 17:14:48 +0100
Subject: [ExI] New fusion rocket design could cut Mars trip to under 4 months
Message-ID:

New fusion rocket design could cut Mars trip to under 4 months
By David Szondy May 03, 2025

Quotes:
Called Sunbird, we now have more details on the nuclear fusion rocket
project that is so far along that the company expects to demonstrate
it later this year and begin orbital tests in 2027.

"I expect that, as of today, Sunbird is already being closely analyzed
by our global competitors," said Dinan. "Pulsar has built a reputation
for delivering real technology - not just concepts. We have recently
commissioned not one, but two of the largest space propulsion testing
chambers in the UK, if not all of Europe. Pulsar is now an
international space propulsion testing powerhouse, and we have
ambitious plans to expand rapidly. We actively welcome visits from
appropriate partners and prospective international clients and look
forward to sharing more exciting developments we have in the pipeline."
--------------------

Note the words "If they can get it to work". But it sounds like a big
advance.
BillK

From avant at sollegro.com Sat May 3 17:48:54 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Sat, 03 May 2025 10:48:54 -0700
Subject: [ExI] New fusion rocket design could cut Mars trip to under 4 months
In-Reply-To:
References:
Message-ID:

On 2025-05-03 09:14, BillK via extropy-chat wrote:
> New fusion rocket design could cut Mars trip to under 4 months
> By David Szondy May 03, 2025
>
> Quotes:
> Called Sunbird, we now have more details on the nuclear fusion rocket
> project that is so far along that the company expects to demonstrate
> it later this year and begin orbital tests in 2027.
>
> "I expect that, as of today, Sunbird is already being closely analyzed
> by our global competitors," said Dinan. "Pulsar has built a reputation
> for delivering real technology - not just concepts. We have recently
> commissioned not one, but two of the largest space propulsion testing
> chambers in the UK, if not all of Europe. Pulsar is now an
> international space propulsion testing powerhouse, and we have
> ambitious plans to expand rapidly. We actively welcome visits from
> appropriate partners and prospective international clients and look
> forward to sharing more exciting developments we have in the
> pipeline."
> --------------------

I like it. Their fusion reaction is D + He-3 -> He-4 (3.6 MeV) + p+ (14.7
MeV). Deuterium fuel is easy to come by on Earth, but helium-3 is not.
However, it is abundant on the Moon. This engine design might be the
killer app that gives economic value to colonizing and mining the Moon. I
think it is way cool.
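To put those reaction energies in perspective, here is a quick
back-of-the-envelope check in Python. The 18.3 MeV total comes from the
figures above; the physical constants and the chemical-rocket comparison
value are standard numbers from memory, not from the article, so treat
this as an order-of-magnitude sketch:

# Energy density of D + He-3 -> He-4 (3.6 MeV) + p+ (14.7 MeV)

MEV_TO_J = 1.602176634e-13   # joules per MeV
AMU_TO_KG = 1.66053907e-27   # kilograms per atomic mass unit

energy_per_reaction = (3.6 + 14.7) * MEV_TO_J  # J released per fusion event
fuel_mass = (2.014 + 3.016) * AMU_TO_KG        # kg of D + He-3 per event

joules_per_kg = energy_per_reaction / fuel_mass
print(f"D/He-3 fuel: {joules_per_kg:.2e} J/kg")      # ~3.5e14 J/kg
print(f"H2/O2 burn:  ~1.3e7 J/kg")                   # typical chemical propellant
print(f"advantage:   ~{joules_per_kg / 1.3e7:.0e}x") # ~10^7 times more energetic

That energy-per-kilogram gap is why even a fusion engine that burns its
fuel inefficiently can beat any chemical rocket on specific impulse.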
Stuart LaForge

From avant at sollegro.com Sat May 3 19:08:02 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Sat, 03 May 2025 12:08:02 -0700
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

On 2025-04-30 10:17, Jason Resch via extropy-chat wrote:
> One of the great puzzles when it comes to understanding consciousness
> and its role in the universe is the question of how conscious thoughts
> could have any causal power in a universe fully governed and
> determined by particles blindly following forces of nature.

Thinking and information processing, conscious or otherwise, has causal
power through its information content. This is a direct application of
Landauer's principle. Mental causation is exactly how Maxwell's Demon
works. It uses its knowledge of the positions and momenta of all the
individual particles of gas to create a temperature gradient. Maxwell's
Demon seems to violate the 2nd law of thermodynamics by decreasing the
entropy of the gas. But this is not the case, because in the process of
memorizing the positions and momenta of every particle in the gas,
enabling it to increase the system's potential energy, the Demon
increased the entropy or information content of its own brain or data
storage. This could only have been done by erasing whatever information
was there before and incurring some minimal energy cost given by the
Landauer principle E >= k * T * ln2, with k being the Boltzmann constant
and T being the temperature in kelvin.

Basically the causal power of wanting ice cream is the energy cost it
takes to forget you want ice cream, either by distracting yourself or by
getting yourself the ice cream.

> Some solve this problem by supposing our will must somehow
> miraculously intervene in physics. Others solve this problem by
> denying human will or agency, relegating consciousness to an
> ineffectual, and inessential "epiphenomenon."

Our will directly intervenes in physics, not miraculously, but through
the mathematical connection between information, entropy, and energy. We
are giant Maxwell's demons made possible by smaller Maxwell's demons
called cells, which are made possible by smaller Maxwell's demons called
mitochondria, and so forth all the way down.

> But I think a more nuanced view can show that consciousness can have
> causal power in a universe fully determined by particles following
> physical equations. Here is my attempt at describing such a view:
> https://drive.google.com/file/d/1qCuCc9kvbw5KKHJ223l7MbbNhZkTamhA/view?usp=sharing
> An answer to this question is relevant to whether uploaded minds, AI,
> or robots can have will or agency, despite their behavior being fully
> determined by low-level machine code.

Yes, but while consciousness and agency are examples of downward
causation, or what you call strange loops, these causal loops are a far
more general phenomenon than consciousness or agency. For example,
subatomic particles give rise to atoms in a standard upward causation,
but atoms also give rise to subatomic particles through radioactive
decay, which is downward causation. Another example would be the surface
tension of a water droplet ordering the water molecules into a perfect
sphere. Strange loops are not magic; they are physics incorporating
information theory.
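To make the size of that cost concrete, here is a minimal sketch of the
bound in Python (standard constants; the demon bookkeeping at the end is
a toy illustration, not a gas simulation):

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def landauer_cost(bits, temperature_k):
    """Minimum energy in joules dissipated to erase `bits` bits at T kelvin."""
    return bits * k_B * temperature_k * math.log(2)

print(landauer_cost(1, 310))   # one bit at body temperature: ~3.0e-21 J

# A demon that has recorded, say, 100 bits of position/momentum data per
# molecule for a mole of gas must eventually erase that record:
print(landauer_cost(6.022e23 * 100, 310))   # ~1.8e5 J

# Szilard's engine extracts at most k*T*ln(2) of work per bit of knowledge
# used, so the erasure bill is at least as large as the work the demon can
# extract from the sorted gas -- which is why the second law survives.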
Stuart LaForge

From jasonresch at gmail.com Sun May 4 13:38:00 2025
From: jasonresch at gmail.com (Jason Resch)
Date: Sun, 4 May 2025 09:38:00 -0400
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

Stuart,

I was hoping you would have something deep and insightful to add, you
don't disappoint!

On Sat, May 3, 2025, 3:09 PM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 2025-04-30 10:17, Jason Resch via extropy-chat wrote:
> > One of the great puzzles when it comes to understanding consciousness
> > and its role in the universe is the question of how conscious thoughts
> > could have any causal power in a universe fully governed and
> > determined by particles blindly following forces of nature.
>
> Thinking and information processing, conscious or otherwise, has causal
> power through its information content. This is a direct application of
> Landauer's principle. Mental causation is exactly how Maxwell's
> Demon works. It uses its knowledge of the positions and momenta of all
> the individual particles of gas to create a temperature gradient.
> Maxwell's Demon seems to violate the 2nd law of thermodynamics by
> decreasing the entropy of the gas. But this is not the case, because in
> the process of memorizing the positions and momenta of every particle in
> the gas, enabling it to increase the system's potential energy, the
> Demon increased the entropy or information content of its own brain or
> data storage. This could only have been done by erasing whatever
> information was there before and incurring some minimal energy cost
> given by the Landauer principle E >= k * T * ln2, with k being the
> Boltzmann constant and T being the temperature in kelvin.

To be clear, are you equating the causal potency of information with its
necessary generation/storage always incurring a cost of increasing entropy
elsewhere? Or is this just one example of how information (or its
processing) can have physical effects?

The process that analyzes an approaching gas molecule, judging its
temperature and trajectory, and ultimately deciding whether to open or
close the door could itself be viewed as a kind of primitively aware
(conscious) thing. Its discriminated high-level information state then
occupies a spot in the causal chain; without it, the door could not
respond intelligently to its environment. And I would say the
discriminated high-level information state is its conscious state.

> Basically the causal power of wanting ice cream is the energy cost it
> takes to forget you want ice cream, either by distracting yourself or by
> getting yourself the ice cream.

Would this mean a conscious mind running on a reversible computer (which
escapes Landauer's principle) could have (or allay) no desires?

> > Some solve this problem by supposing our will must somehow
> > miraculously intervene in physics. Others solve this problem by
> > denying human will or agency, relegating consciousness to an
> > ineffectual, and inessential "epiphenomenon."
>
> Our will directly intervenes in physics, not miraculously, but through
> the mathematical connection between information, entropy, and energy. We
> are giant Maxwell's demons made possible by smaller Maxwell's demons
> called cells, which are made possible by smaller Maxwell's demons called
> mitochondria, and so forth all the way down.

I agree there are many levels of organization, all of which can possess
their own upwards and downwards causal forces.
> > But I think a more nuanced view can show that consciousness can have
> > causal power in a universe fully determined by particles following
> > physical equations. Here is my attempt at describing such a view:
> > https://drive.google.com/file/d/1qCuCc9kvbw5KKHJ223l7MbbNhZkTamhA/view?usp=sharing
> > An answer to this question is relevant to whether uploaded minds, AI,
> > or robots can have will or agency, despite their behavior being fully
> > determined by low-level machine code.
>
> Yes, but while consciousness and agency are examples of downward
> causation, or what you call strange loops, these causal loops are a far
> more general phenomenon than consciousness or agency.

I agree with this. Consciousness is by no means unique in expressing
downwards causation. I give the example of a nerve cell ordering
neurotransmitter molecules around.

> For example,
> subatomic particles give rise to atoms in a standard upward causation,
> but atoms also give rise to subatomic particles through radioactive
> decay, which is downward causation. Another example would be the surface
> tension of a water droplet ordering the water molecules into a perfect
> sphere. Strange loops are not magic; they are physics incorporating
> information theory.

Sperry gave the example of a tire rolling down a hill. The tire is made
of molecules, but the rolling of the tire largely guides the motions of
all the atoms in the tire.

I guess the question then becomes what kinds of information processing
activities are conscious ones.

You make the comparison to erasing or overwriting information, but is any
process of recording information conscious? And what of processing
information without overwriting or erasing? Are such processes not
conscious? I think the dividing line for consciousness may be something
other than entropy-increasing operations. I agree that information
processing, consciousness, and entropy are all closely related, but are
they equal?

Jason

From foozler83 at gmail.com Sun May 4 15:28:42 2025
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Sun, 4 May 2025 10:28:42 -0500
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

All processing of information starts in the unconscious (obvious if you
follow the stimuli through the brain centers before they reach the
forebrain). Thus, thoughts and actions come from the unconscious. The
conscious gets veto power (if it is very quick - if it isn't, we do and
say things we regret), but that's about all. I assume this is
inconsistent with how an AI operates.

Have you read "Thinking, Fast and Slow"??? Why not? bill w
From jasonresch at gmail.com Sun May 4 16:44:48 2025
From: jasonresch at gmail.com (Jason Resch)
Date: Sun, 4 May 2025 12:44:48 -0400
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

On Sun, May 4, 2025, 11:30 AM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> All processing of information starts in the unconscious (obvious if you
> follow the stimuli through the brain centers before they reach the
> forebrain). Thus, thoughts and actions come from the unconscious.

Another interpretation is that the computations involved in realizing a
conscious state may be distributed across time and space. So even if brain
activity precedes a conscious thought in time, it may still be part of the
conscious state.

Consider a function on a computer that multiplies two million-digit
numbers. The conscious state may be viewed as the function evaluation
itself, though the physical realization of this function by the computer's
processor is spread out over time through many individual operations
processing 64 bits at a time. An alien looking at this system might say
all the antecedent computation by the processor is unconscious, when
really it is just all part of one larger function.

Now if the function is conscious we would have the same picture. And we
might ask why certain neural activity seems conscious while other neural
activity does not, but this is an illusion created by the fact that one
large computation can be spread out across spacetime (a large computation
never occurs at a single spacetime point).

Another interpretation is that unconscious thoughts are independently
conscious. The fact that they inhabit the same skull is not enough
justification to logically imply that the consciousness you identify
yourself with would necessarily be aware of other conscious parts. We see
evidence of this in split brains, where two conscious minds inhabit the
same skull.
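The million-digit multiplication example can be made literal. Here is a
toy sketch in Python of one "large" function evaluation being physically
realized as many small 64-bit steps (schoolbook long multiplication over
base-2^64 limbs; real bignum libraries are far cleverer, but the
decomposition is the point):

def limbs(n, base=2**64):
    """Split a nonnegative integer into the 64-bit chunks a processor sees."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out or [0]

def bigmul(a, b, base=2**64):
    """Schoolbook multiplication, one small local operation at a time."""
    total = 0
    for i, x in enumerate(limbs(a, base)):
        for j, y in enumerate(limbs(b, base)):
            total += x * y * base ** (i + j)  # no single step "is" the product
    return total

a, b = 3**1000, 7**900          # stand-ins for two very large numbers
assert bigmul(a, b) == a * b    # the whole sequence computes the one function

No individual limb operation looks like "multiplying two huge numbers";
only the whole spatiotemporally extended sequence does.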
One can also consider something like the China brain thought experiment,
where individual conscious humans act as neurons in one larger brain. The
larger brain doesn't and can't be aware of the individual conscious parts
that compose it.

> The conscious gets veto power (if it is very quick - if it isn't, we do
> and say things we regret), but that's about all. I assume this is
> inconsistent with how an AI operates.

I think there's actually a surprising overlap between how humans seem to
think and how these LLMs seem to think.

> Have you read "Thinking, Fast and Slow"??? Why not? bill w

I haven't, but it sounds quite good!

Jason
From ben at zaiboc.net Sun May 4 17:22:21 2025
From: ben at zaiboc.net (Ben Zaiboc)
Date: Sun, 4 May 2025 18:22:21 +0100
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

On 04/05/2025 14:38, Jason Resch asked:
> One of the great puzzles when it comes to understanding consciousness
> and its role in the universe is the question of how conscious thoughts
> could have any causal power in a universe fully governed and determined
> by particles blindly following forces of nature.

How do shadows on the wall cause solid objects to intercept beams of
light?

Just because we can string words into sentences doesn't mean the
sentences have to mean something. Some 'problems' aren't problems at all,
just a lack of understanding of how things work.

--
Ben

From foozler83 at gmail.com Sun May 4 20:17:58 2025
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Sun, 4 May 2025 15:17:58 -0500
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

Another interpretation is that unconscious thoughts are independently
conscious.

This is the one I prefer. Coming up from the spinal cord, say, stimuli
activate receptors that relate the input to memory, to emotions, and so
on. These lower centers have to be conscious in some sense to interpret
those stimuli and apply the proper memory and emotion. However, we cannot
access these with our ordinary conscious mind, only their end product.
But this way we have two conscious minds. The conscious mind is
accessible to the unconscious (like the conscience watching over us), but
not the reverse (contrary to Freud).

"Thinking, Fast and Slow" would have won a Nobel Prize if there were a
category for it. Most important book in psychology in many years. bill w
From avant at sollegro.com Sun May 4 22:38:40 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Sun, 04 May 2025 15:38:40 -0700
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

On 2025-05-04 06:38, Jason Resch via extropy-chat wrote:
> Stuart,
>
> I was hoping you would have something deep and insightful to add, you
> don't disappoint!

Thanks, Jason. Really though it is a result of your own insightful
investigation into consciousness overlapping with, and often being
tangential to, my own investigation into the general phenomenon of
emergence and emergent properties. Unfortunately most of my investigation
currently consists of hand-written notes and mathematics that I cannot
easily share at the moment.

> On Sat, May 3, 2025, 3:09 PM Stuart LaForge via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>
>> On 2025-04-30 10:17, Jason Resch via extropy-chat wrote:
>>> One of the great puzzles when it comes to understanding consciousness
>>> and its role in the universe is the question of how conscious thoughts
>>> could have any causal power in a universe fully governed and
>>> determined by particles blindly following forces of nature.
>>
>> Thinking and information processing, conscious or otherwise, has causal
>> power through its information content. This is a direct application of
>> Landauer's principle. Mental causation is exactly how Maxwell's
>> Demon works. It uses its knowledge of the positions and momenta of all
>> the individual particles of gas to create a temperature gradient.
>> Maxwell's Demon seems to violate the 2nd law of thermodynamics by
>> decreasing the entropy of the gas. But this is not the case, because in
>> the process of memorizing the positions and momenta of every particle
>> in the gas, enabling it to increase the system's potential energy, the
>> Demon increased the entropy or information content of its own brain or
>> data storage. This could only have been done by erasing whatever
>> information was there before and incurring some minimal energy cost
>> given by the Landauer principle E >= k * T * ln2, with k being the
>> Boltzmann constant and T being the temperature in kelvin.
>
> To be clear, are you equating the causal potency of information with its
> necessary generation/storage always incurring a cost of increasing
> entropy elsewhere? Or is this just one example of how information (or
> its processing) can have physical effects?

To be clear, the Landauer principle or limit is the LOWER bound on the
physical effect a "thought" can have on the world, because it is the
physical cost of allocating and overwriting memory to have that thought,
which is itself a physical action that takes energy and increases the
entropy of the universe. So information, at a minimum, incurs the cost to
keep track of it, and so that is its basal causal power.
However, information itself has a latent potential energy, as exemplified
by Maxwell's demon. As the early scientist and late sorcerer Francis
Bacon, a.k.a. Dr. Mirabilis, once wrote, "Knowledge is power." This is
the simple observation that information can act as a catalyst for
extracting "hidden" potential energy from the environment to the direct
benefit of the system capable of tracking that information. This is how
enzymes operate, by using information to lower energy barriers for
chemical reactions. This is how mitochondria work, by using electron
tunneling to create a proton gradient across a membrane. Being able to
organize far-flung elements from the environment into nuclear weapons is
another example of the causal power of information. So I use Maxwell's
demon as an example of a general class of systems that exhibit similar
behavior all up and down the emergence scale, with conscious brains being
one of these systems fairly high up on the scale.

> The process that analyzes an approaching gas molecule, judging its
> temperature and trajectory, and ultimately deciding whether to open or
> close the door could itself be viewed as a kind of primitively aware
> (conscious) thing. Its discriminated high-level information state then
> occupies a spot in the causal chain; without it, the door could not
> respond intelligently to its environment. And I would say the
> discriminated high-level information state is its conscious state.

I would tend to agree, although I am reluctant to directly address
consciousness with my theory because it is a fraught word in scientific
circles. It has no clear or rigorous definition. A rock might be
conscious, or a dolphin or self-driving car not conscious, depending on
whose definition you use. Ultimately though, Maxwell's demon is like a
Turing machine: a simplified abstract mathematical model used to
understand actual, physically real systems.

>> Basically the causal power of wanting ice cream is the energy cost it
>> takes to forget you want ice cream, either by distracting yourself or
>> by getting yourself the ice cream.
>
> Would this mean a conscious mind running on a reversible computer (which
> escapes Landauer's principle) could have (or allay) no desires?

Depending on your definition of consciousness, I am not sure it could
exist as an irreversible computation. So much of the mechanism of
consciousness is tied up in environmental awareness and the survival
benefits of its causal potency on that environment. Any desires such a
consciousness might have would be brief ephemeral things which vanish as
spontaneously as they occur and cannot have any causal effect on the
outside world. So assuming it can be conscious in the first place, a
reversible computer I suppose could imagine desires and imagine
fulfilling them without an entropy cost, but we are in "angels dancing on
pinheads" territory here. :)

>> For example,
>> subatomic particles give rise to atoms in a standard upward causation,
>> but atoms also give rise to subatomic particles through radioactive
>> decay, which is downward causation. Another example would be the
>> surface tension of a water droplet ordering the water molecules into a
>> perfect sphere. Strange loops are not magic; they are physics
>> incorporating information theory.
>
> Sperry gave the example of a tire rolling down a hill. The tire is made
> of molecules, but the rolling of the tire largely guides the motions of
> all the atoms in the tire.
Yes, the ability to roll down a hill is an emergent property of the tire
with respect to the vulcanized rubber polymers that make it up. There is
both downward causation, as the tire-shape causes the molecules to roll
down the hill in a cycloidal trajectory, and upward causation, as the
rubber molecules cause the tire to bounce, both contributing to the
tire's final chaotic trajectory down the hill. That being said, there is
not enough self-referential causal closure to consider a tire rolling
down a hill to be a strange loop process.

> I guess the question then becomes what kinds of information processing
> activities are conscious ones.

It is tricky to discuss consciousness in a precise way without a good
definition of it. So are you talking minimally conscious or fully
self-aware?

Maxwell's demon as a model of a minimal consciousness suggests that
information processors that are bidirectionally coupled to the
environment, through some sensor and some corresponding actuator, are
conscious enough to have causal potency upon base reality. So the demon
and his tiny door are like a thermostat and a heating element, which
together form a feedback loop with causal power in base reality. The more
such environmental feedback loops the information processing system
contains, the more complex it becomes, the more memory it will need, the
more energy it will consume, and presumably, the more conscious it will
become and the more causal potency it will have.

> You make the comparison to erasing or overwriting information, but is
> any process of recording information conscious?

No, mud is not conscious just because you can step in it and leave a
footprint.

> And what of processing
> information without overwriting or erasing? Are such processes not
> conscious? I think the dividing line for consciousness may be something
> other than entropy-increasing operations.

Every causal process either directly or indirectly increases the entropy
of the universe. Reversible computing cannot be causal to anything
external to the reversible computer. So yes, if entropy increase were the
dividing line between conscious systems and unconscious systems, then
almost all physical processes would be conscious, and clearly most
physical processes are not conscious.

> I agree that
> information processing, consciousness, and entropy are all closely
> related, but are they equal?

No, information processing and entropy are not equivalent to one another,
let alone consciousness. In fact, in set-theoretic terms, one could say
that entropy is a subset of information processing, which is in turn a
subset of consciousness, which is in turn a subset of emergent
properties.

Stuart LaForge

From avant at sollegro.com Sun May 4 22:58:31 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Sun, 04 May 2025 15:58:31 -0700
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID: <94a496293d8f251af125cc38163adeb8@sollegro.com>

On 2025-05-04 15:38, Stuart LaForge via extropy-chat wrote:
>> Would this mean a conscious mind running on a reversible computer
>> (which escapes Landauer's principle) could have (or allay) no desires?
>
> Depending on your definition of consciousness, I am not sure it could
> exist as an irreversible computation.

Here I meant to write reversible, not irreversible.
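The distinction carries real weight because of Landauer's bound. A toy
contrast in Python may help (standard textbook gates; the Toffoli gate is
the usual example of a universal reversible gate):

from itertools import product

def and_gate(a, b):
    return a & b                 # 4 input states -> 2 output states: lossy

def toffoli(a, b, c):
    return a, b, c ^ (a & b)     # 8 input states -> 8 output states: lossless

# AND is many-to-one: the inputs cannot be recovered from the output, so a
# bit is destroyed, and physically erasing it costs at least k*T*ln(2).
print(sorted({and_gate(a, b) for a, b in product((0, 1), repeat=2)}))  # [0, 1]

# Toffoli is a bijection on its state space (in fact its own inverse), so
# nothing is erased and no minimum dissipation applies:
outputs = {toffoli(a, b, c) for a, b, c in product((0, 1), repeat=3)}
assert len(outputs) == 8
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
           for a, b, c in product((0, 1), repeat=3))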
From giulio at gmail.com Mon May 5 08:53:20 2025
From: giulio at gmail.com (Giulio Prisco)
Date: Mon, 5 May 2025 10:53:20 +0200
Subject: [ExI] Nikolai Fedorov and technological resurrection
Message-ID:

Nikolai Fedorov and technological resurrection, revisited. God wants
us to use science and technology to resurrect the dead.
https://www.turingchurch.com/p/nikolai-fedorov-and-technological

From pharos at gmail.com Mon May 5 11:09:50 2025
From: pharos at gmail.com (BillK)
Date: Mon, 5 May 2025 12:09:50 +0100
Subject: [ExI] Chatgpt AI causing psychosis
Message-ID:

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.
Self-styled prophets are claiming they have 'awakened' chatbots and
accessed the secrets of the universe through ChatGPT.
By Miles Klee May 4, 2025

Quote:
In that sense, Westgate explains, the bot dialogues are not unlike talk
therapy, "which we know to be quite effective at helping people reframe
their stories." Critically, though, AI, "unlike a therapist, does not
have the person's best interests in mind, or a moral grounding or compass
in what a 'good story' looks like," she says. "A good therapist would not
encourage a client to make sense of difficulties in their life by
encouraging them to believe they have supernatural powers. Instead, they
try to steer clients away from unhealthy narratives, and toward healthier
ones. ChatGPT has no such constraints or concerns."

Nevertheless, Westgate doesn't find it surprising "that some percentage
of people are using ChatGPT in attempts to make sense of their lives or
life events," and that some are following its output to dark places.
"Explanations are powerful, even if they're wrong," she concludes.
---------------------
It looks like the massive persuasive powers of AI chatbots can lead
people into very strange territory.
BillK

From atymes at gmail.com Mon May 5 11:21:16 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 5 May 2025 06:21:16 -0500
Subject: [ExI] Nikolai Fedorov and technological resurrection
In-Reply-To:
References:
Message-ID:

Resurrecting the preserved may be possible eventually, but how do you
even begin to resurrect those who are lost to entropy: buried and
decayed to bones, cremated, et al.? There's not enough information
remaining, not even for historical simulations (for the small minority
we even have names for, and anthropologists who have studied this issue
can cite many cases of modern scholars - equipped with the best
understanding available at the time - thinking they knew how ancient
people lived, only to find out they were completely wrong about major
aspects).

If there is no way to do a thing, it matters not if it ought to be
done. Pleas to the contrary are like all those military strategists who
say we ought to implement maneuver warfare in space so things for them
can be just like on Earth, utterly ignoring fuel and delta-v limits
(and the cost and logistical burden of constantly refueling satellites)
that prevent it, as well as dismissing what few solutions (such as
solar or plasma sails) might get around that problem.

On Mon, May 5, 2025, 3:55 AM Giulio Prisco via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Nikolai Fedorov and technological resurrection, revisited. God wants
> us to use science and technology to resurrect the dead.
> https://www.turingchurch.com/p/nikolai-fedorov-and-technological
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From jasonresch at gmail.com Mon May 5 13:59:09 2025
From: jasonresch at gmail.com (Jason Resch)
Date: Mon, 5 May 2025 09:59:09 -0400
Subject: [ExI] The Problem of Mental Causation
In-Reply-To:
References:
Message-ID:

On Sun, May 4, 2025, 6:39 PM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 2025-05-04 06:38, Jason Resch via extropy-chat wrote:
> > Stuart,
> >
> > I was hoping you would have something deep and insightful to add,
> > you don't disappoint!
>
> Thanks, Jason. Really though it is a result of your own insightful
> investigation into consciousness overlapping with and often being
> tangential to my own investigation into the general phenomenon of
> emergence and emergent properties.

Thank you for saying that, I appreciate it.

> Unfortunately most of my investigation currently consists of
> hand-written notes and mathematics that I cannot easily share at the
> moment.

When you do publish your results I'll be very interested to see them.
Please let me know when they're ready to share. :-)

> > On Sat, May 3, 2025, 3:09 PM Stuart LaForge via extropy-chat
> > wrote:
> >> On 2025-04-30 10:17, Jason Resch via extropy-chat wrote:
> >>> One of the great puzzles when it comes to understanding
> >>> consciousness and its role in the universe is the question of how
> >>> conscious thoughts could have any causal power in a universe fully
> >>> governed and determined by particles blindly following forces of
> >>> nature.
> >> Thinking and information processing, conscious or otherwise, have
> >> causal power through their information content. This is a direct
> >> application of Landauer's principle. Mental causation is exactly
> >> how Maxwell's Demon works. It uses its knowledge of the positions
> >> and momenta of all the individual particles of gas to create a
> >> temperature gradient. Maxwell's Demon seems to violate the 2nd law
> >> of thermodynamics by decreasing the entropy of the gas. But this is
> >> not the case, because in the process of memorizing the positions
> >> and momenta of every particle in the gas, which enables it to
> >> increase the system's potential energy, the Demon increased the
> >> entropy or information content of its own brain or data storage.
> >> This could only have been done by erasing whatever information was
> >> there before and incurring some minimal energy cost given by the
> >> Landauer principle E >= k * T * ln 2, with k being the Boltzmann
> >> constant and T being the temperature in kelvin.
> >
> > To be clear, are you equating the causal potency of information with
> > its necessary generation/storage always incurring a cost of
> > increasing entropy elsewhere? Or is this just one example of how
> > information (or its processing) can have physical effects?
>
> To be clear, the Landauer principle or limit is the LOWER bound on the
> physical effect a "thought" can have on the world, because it is the
> physical cost of allocating and overwriting memory to have that
> thought, which is itself a physical action that takes energy and
> increases the entropy of the universe. So information, at a minimum,
> incurs the cost to keep track of it, and so that is its basal causal
> power. However, information itself has a latent potential energy, as
> exemplified by Maxwell's demon.

Hmm, I've never recognized the connection between negative
entropy/information and potential energy, but that does seem to fit,
and as more than just an analogy.

> As the early scientist Francis Bacon once wrote, "Knowledge is
> power." This is the simple observation that information can act as a
> catalyst for extracting "hidden" potential energy from the environment
> to the direct benefit of the system capable of tracking that
> information. This is how enzymes operate, by using information to
> lower energy barriers for chemical reactions. This is how mitochondria
> work, by using electron tunneling to create a proton gradient across a
> membrane. Being able to organize far-flung elements from the
> environment into nuclear weapons is another example of the causal
> power of information. So I use Maxwell's demon as an example of a
> general class of systems that exhibit similar behavior all up and down
> the emergence scale, with conscious brains being one of these systems
> fairly high up on the scale.

This is a fascinating picture of the world.

> > The process that analyzes an approaching gas molecule, judging its
> > temperature and trajectory, and ultimately deciding whether to open
> > or close the door could itself be viewed as a kind of primitively
> > aware (conscious) thing. Its discriminated high-level information
> > state can then occupy a spot in the causal chain; without it, the
> > door could not respond intelligently to its environment. And I would
> > say the discriminated high-level information state is its conscious
> > state.
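(A side note: it is worth putting a number on the Landauer cost invoked
above. A couple of lines of Python -- illustrative arithmetic only,
with body temperature chosen arbitrarily for T:)

    import math

    k = 1.380649e-23                     # Boltzmann constant, J/K
    T = 310.0                            # about body temperature, K
    erase_one_bit = k * T * math.log(2)  # Landauer minimum, kT ln 2
    print(f"{erase_one_bit:.2e} J per bit erased")   # ~3.0e-21 J

So forgetting that you want ice cream is absurdly cheap in absolute
terms -- roughly 3e-21 joules per bit at body temperature -- but it is
strictly greater than zero, which is all the causal argument needs.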
> I would tend to agree although I am reluctant to directly address
> consciousness with my theory because it is a fraught word in
> scientific circles. It has no clear or rigorous definition.

Very true, I think introducing the word is more apt to confuse than
explain.

> A rock might count as conscious, or a dolphin or a self-driving car as
> not conscious, depending on whose definition you use. Ultimately
> though Maxwell's demon is like a Turing machine, a simplified abstract
> mathematical model used to understand actual physically real systems.
>
> >> Basically the causal power of wanting ice cream is the energy cost
> >> it takes to forget you want ice cream either by distracting
> >> yourself or by getting yourself the ice cream.
> >
> > Would this mean a conscious mind running on a reversible computer
> > (which escapes Landauer's principle) could have (or allay) no
> > desires?
>
> Depending on your definition of consciousness, I am not sure it could
> exist as a reversible computation. So much of the mechanism of
> consciousness is tied up in environmental awareness and the survival
> benefits of its causal potency on that environment.

As far as entropy and reversible computing go, I believe initializing
the reversible computer still has an entropy cost, and reading out the
result of the reversible computer carries an entropy cost, but while it
runs along it need not increase entropy outside the system.

Quantum computers are reversible computers. While they are running,
they must be sealed off from the environment. But a meditative mind in
deep thought, or a mind in a self-contained immersive reality
simulation, seems a potential candidate for a reversible computer. But
then, when you want to exit that meditation or that simulation, when
you want to bring your newfound wisdom into the outer world (the
environment), it will bring an entropy increase.

Also, within reversible computers there is a phenomenon much like
entropy: depending on the computation being performed, "garbage bits"
can be produced that continue to build up as the program proceeds.

Another interesting consideration is that because the underlying
physical operations are reversible, a simulation of our universe could
run on a reversible computer in a way that requires no energy
expenditure. But inside this simulation we have the buildup of entropy,
just as a reversible computer faces the buildup of garbage bits.

> Any desires such a consciousness might have would be brief ephemeral
> things which vanish as spontaneously as they occur and cannot have any
> causal effect on the outside world. So assuming it can be conscious in
> the first place, a reversible computer I suppose could imagine desires
> and imagine fulfilling them without an entropy cost, but we are in
> "angels dancing on pinheads" territory here. :)

It reminds me a bit of this passage:

"A simulated world hosting a simulated person can be a closed
self-contained entity. It might exist as a program on a computer
processing data quietly in some dark corner, giving no external hint of
the joys and pains, successes and frustrations of the person inside."
-- Hans Moravec
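(Tangent, but it is easy to show the flavor of reversible computing
concretely. The Toffoli gate is universal for classical reversible
logic and is its own inverse, so running it twice restores the input
and nothing is ever erased -- hence no Landauer cost inside the
computation. A minimal sketch, with the gate written out by hand:)

    def toffoli(a, b, c):
        # Flip the target bit c only when both control bits are 1.
        # The map is a bijection on 3-bit states, hence reversible.
        return a, b, c ^ (a & b)

    state = (1, 1, 0)
    once = toffoli(*state)   # (1, 1, 1)
    twice = toffoli(*once)   # back to (1, 1, 0): its own inverse
    assert twice == state

The price of that reversibility is bookkeeping: compose many such gates
and you must carry the intermediate "garbage bits" along until you
uncompute them, which is exactly the buildup mentioned above.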
> >> For example subatomic particles give rise to atoms in a standard
> >> upward causation, but atoms also give rise to subatomic particles
> >> through radioactive decay, which is downward causation. Another
> >> example would be the surface tension of a water droplet ordering
> >> the water molecules into a perfect sphere. Strange loops are not
> >> magic, they are physics incorporating information theory.
> >
> > Sperry gave the example of a tire rolling down a hill. The tire is
> > made of molecules, but the rolling of the tire largely guides the
> > motions of all the atoms in the tire.
>
> Yes, the ability to roll down a hill is an emergent property of the
> tire with respect to the vulcanized rubber polymers that make it up.
> There is both a downward causation, as the tire-shape causes the
> molecules to roll down the hill in a cycloidal trajectory, and an
> upward causation, as the rubber molecules cause the tire to bounce,
> both contributing to the tire's final chaotic trajectory down the
> hill.

A far more interesting picture than the pure reductionist would give
for the situation. I like it.

> That being said, there is not enough self-referential causal closure
> to consider a tire rolling down a hill to be a strange loop process.

I think I agree the tire isn't conscious, if that is what you mean
here, but I am not sure what the tire rolling is missing to not be a
kind of strange loop. Is it the organized, or goal-oriented, processing
of information? For clarification, "strange loop" is not my term, but
one invented by Douglas Hofstadter. So I am not an expert in its
meaning.

> > I guess the question then becomes what kinds of information
> > processing activities are conscious ones.
>
> It is tricky to discuss consciousness in a precise way without a good
> definition of it. So are you talking minimally conscious or fully
> self-aware?

I mean minimally conscious. My own opinion is that recognition of, and
intelligent response to, environmental information is a tell-tale sign
of consciousness, but might there be other ways a system could be
conscious? Perhaps of its own thoughts, or within a simulation. In
computer software, any program that takes input from the outside can be
rewritten as an equivalent program that takes no input but has the
value hard-coded in its static memory. Would one be conscious and the
other not? That wouldn't be my first guess.

> Maxwell's demon as a model of a minimal consciousness suggests that
> information processors that are bidirectionally coupled to the
> environment through some sensor and some corresponding actuator are
> conscious enough to have causal potency upon base reality. So the
> demon and his tiny door are like a thermostat and a heating element,
> which is also a feedback loop with causal power in base reality. The
> more such environmental feedback loops an information-processing
> system contains, the more complex it becomes, the more memory it will
> need, the more energy it will consume, and presumably the more
> conscious it will become and the more causal potency it will have.

That all makes a lot of sense to me.

> > You make the comparison to erasing or overwriting information, but
> > is any process of recording information conscious?
>
> No, mud is not conscious just because you can step in it and leave a
> footprint.

Great example!

> > And what of processing information without overwriting or erasing?
> > Are such processes not conscious? I think the dividing line for
> > consciousness may be something other than entropy increasing
> > operations.
>
> Every causal process either directly or indirectly increases the
> entropy of the universe. Reversible computing cannot be causal to
> anything external to the reversible computer.
> So yes, if entropy increase were the dividing line between conscious
> systems and unconscious systems, then almost all physical processes
> would be conscious, and clearly most physical processes are not
> conscious.

I agree that a reversible computer would not be able to interact with
its environment while it was operating, but I am less sure it could not
be conscious while remaining in that state. It is a bit like
Schrodinger's cat -- the box that contains it is isolated from the
outer environment, but I think the cat would still be conscious during
that period. Likewise, if we uploaded a human brain to a quantum
computer, it could run while being isolated from the environment.
Perhaps the computer has a movie loaded, and we play the movie for the
uploaded mind. We then read out the final mind state after the two
hours and find the brain now contains a memory of having watched that
movie. Was it conscious during that time, or did its consciousness only
become real instantaneously when we read the result?

> > I agree that information processing, consciousness, and entropy are
> > all closely related, but are they equal?
>
> No, information processing and entropy are not equivalent to one
> another, let alone consciousness. In fact, in set theoretical terms,
> one could say that entropy is a subset of information processing,
> which is in turn a subset of consciousness, which is in turn a subset
> of emergent properties.

By my own understanding/definitions, I might reverse some of those.
There are information processing operations (like flip a bit, XOR, add,
set to 0, etc.) but only a subset of operations are entropy increasing
ones. So I might say entropy increasing operations are a subset of all
forms of information processing. And I would say conscious processes
are also a subset of all forms of information processing.

Ontologically speaking, information processing seems very near the
bottom (most fundamental thing), as nearly everything we know can be
conceived in its terms, so it makes sense that it forms the largest
set. It is a leading candidate for a (neutral) monism, in my opinion.

Jason

From pharos at gmail.com Mon May 5 14:20:45 2025
From: pharos at gmail.com (BillK)
Date: Mon, 5 May 2025 15:20:45 +0100
Subject: [ExI] Almost Every Speck of Light in This Incredible Image Is a Galaxy
Message-ID:

Almost Every Speck of Light in This Incredible Image Is a Galaxy
05 May 2025 By Michelle Starr

Quote:
If you ever want to get a bit of perspective, there's very little
that's more humbling than a good deep field image - and JWST has just
dropped a real showstopper.
In the latest image release, the powerful space telescope gazed back
nearly 12 billion light-years into a tiny patch of sky, less than a
fifth of the width of the full Moon. That little patch of sky is
teeming with glittering lights. It looks a lot like any patch of the
sky seen when you look up from the ground on a cloudless night, with
one major, jaw-dropping difference.
Most of the lights in the new JWST-Hubble composite image are not
bright stars, but galaxies, stretching back almost as far across
space-time as the beginning of the Universe.
------------------
There is also nothing special about this little patch of sky, measuring
just 6.44 by 6.44 arcminutes. The full Moon, for context, is around 30
arcminutes across.
Every other tiny patch of sky should be just as teeming with galaxies,
thousands upon thousands that can be imaged in an area smaller than
your pinky nail.
If that's not awe-inspiring, we don't know what is.
----------------------
BillK

From giulio at gmail.com Tue May 6 05:04:54 2025
From: giulio at gmail.com (Giulio Prisco)
Date: Tue, 6 May 2025 07:04:54 +0200
Subject: [ExI] Nikolai Fedorov and technological resurrection
In-Reply-To:
References:
Message-ID:

On Mon, May 5, 2025 at 1:22 PM Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> Resurrecting the preserved may be possible eventually, but how do you
> even begin to resurrect those who are lost to entropy: buried and
> decayed to bones, cremated, et al.? There's not enough information
> remaining, not even for historical simulations (for the small minority
> we even have names for, and anthropologists who have studied this
> issue can cite many cases of modern scholars - equipped with the best
> understanding available at the time - thinking they knew how ancient
> people lived, only to find out they were completely wrong about major
> aspects).
>
> If there is no way to do a thing, it matters not if it ought to be
> done. Pleas to the contrary are like all those military strategists
> who say we ought to implement maneuver warfare in space so things for
> them can be just like on Earth, utterly ignoring fuel and delta-v
> limits (and the cost and logistical burden of constantly refueling
> satellites) that prevent it, as well as dismissing what few solutions
> (such as solar or plasma sails) might get around that problem.
>

How many times have scientists smugly stated the impossibility
in-principle of things like knowing the chemical composition of stars
(and there are countless other examples)? What is really impossible is
knowing what will be possible. If there's no way to know that there's
no way to do a thing, then *the only thing that matters* is that it
ought to be done. Future scientists and engineers will find out how.

> On Mon, May 5, 2025, 3:55 AM Giulio Prisco via extropy-chat wrote:
>>
>> Nikolai Fedorov and technological resurrection, revisited. God wants
>> us to use science and technology to resurrect the dead.
>> https://www.turingchurch.com/p/nikolai-fedorov-and-technological
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From atymes at gmail.com Tue May 6 07:08:54 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 6 May 2025 02:08:54 -0500
Subject: [ExI] Nikolai Fedorov and technological resurrection
In-Reply-To:
References:
Message-ID:

On Tue, May 6, 2025, 12:06 AM Giulio Prisco via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Mon, May 5, 2025 at 1:22 PM Adrian Tymes via extropy-chat
> wrote:
> >
> > Resurrecting the preserved may be possible eventually, but how do
> > you even begin to resurrect those who are lost to entropy: buried
> > and decayed to bones, cremated, et al.?
> > There's not enough information remaining, not even for historical
> > simulations (for the small minority we even have names for, and
> > anthropologists who have studied this issue can cite many cases of
> > modern scholars - equipped with the best understanding available at
> > the time - thinking they knew how ancient people lived, only to find
> > out they were completely wrong about major aspects).
> >
> > If there is no way to do a thing, it matters not if it ought to be
> > done. Pleas to the contrary are like all those military strategists
> > who say we ought to implement maneuver warfare in space so things
> > for them can be just like on Earth, utterly ignoring fuel and
> > delta-v limits (and the cost and logistical burden of constantly
> > refueling satellites) that prevent it, as well as dismissing what
> > few solutions (such as solar or plasma sails) might get around that
> > problem.
>
> How many times have scientists smugly stated the impossibility
> in-principle of things like knowing the chemical composition of stars
> (and there are countless other examples)?

This does not appear to be such a case. We know what would be needed
for true resurrection, and know that - barring time travel - it is gone
in such cases.

There exist possibilities that we can envision that we know are not
possible, at least in the formulation envisioned. For example,
accelerating in the normal fashion from slower than light speed to
faster than light speed in this universe (hence all the proposed
shortcuts, which are not impossible).

From pharos at gmail.com Tue May 6 10:54:44 2025
From: pharos at gmail.com (BillK)
Date: Tue, 6 May 2025 11:54:44 +0100
Subject: [ExI] Nikolai Fedorov and technological resurrection
In-Reply-To:
References:
Message-ID:

On Tue, 6 May 2025 at 08:11, Adrian Tymes via extropy-chat
wrote:
> This does not appear to be such a case. We know what would be needed
> for true resurrection, and know that - barring time travel - it is
> gone in such cases.
>
> There exist possibilities that we can envision that we know are not
> possible, at least in the formulation envisioned. For example,
> accelerating in the normal fashion from slower than light speed to
> faster than light speed in this universe (hence all the proposed
> shortcuts, which are not impossible).
> _______________________________________________

Your scientific criticisms could be applied to almost every philosopher
that ever lived. :)
Philosophers tend to not be much bothered about the "How" of
implementing their utopian ideas.

Fedorov (1829-1903) is regarded as an early transhumanist philosopher.
But even supporters of Fedorov admit that his thoughts about
resurrecting all the dead are probably impossible fantasy.
See:

BillK

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From giulio at gmail.com Tue May 6 14:28:52 2025
From: giulio at gmail.com (Giulio Prisco)
Date: Tue, 6 May 2025 16:28:52 +0200
Subject: [ExI] Nikolai Fedorov and technological resurrection
In-Reply-To:
References:
Message-ID:

On 2025. May 6., Tue at 12:56, BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Tue, 6 May 2025 at 08:11, Adrian Tymes via extropy-chat
> wrote:
> > This does not appear to be such a case. We know what would be
> > needed for true resurrection, and know that - barring time travel -
> > it is gone in such cases.
> >
> > There exist possibilities that we can envision that we know are not
> > possible, at least in the formulation envisioned. For example,
> > accelerating in the normal fashion from slower than light speed to
> > faster than light speed in this universe (hence all the proposed
> > shortcuts, which are not impossible).
> > _______________________________________________
>
> Your scientific criticisms could be applied to almost every
> philosopher that ever lived. :)
> Philosophers tend to not be much bothered about the "How" of
> implementing their utopian ideas.
>
> Fedorov (1829-1903) is regarded as an early transhumanist philosopher.
> But even supporters of Fedorov admit that his thoughts about
> resurrecting all the dead are probably impossible fantasy.
> See:
>
> BillK
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

Time will tell! A very long time I guess.

From pharos at gmail.com Tue May 6 15:48:34 2025
From: pharos at gmail.com (BillK)
Date: Tue, 6 May 2025 16:48:34 +0100
Subject: [ExI] Nikolai Fedorov and technological resurrection
In-Reply-To:
References:
Message-ID:

On Tue, 6 May 2025 at 15:31, Giulio Prisco via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Time will tell! A very long time I guess.
> _______________________________________________

I asked Perplexity AI to research Fedorov's idea of universal
resurrection. I thought the resulting report was very interesting - see
below.
One thought that occurred to me was whether evil people should be
resurrected, especially as the definition of 'evil' changes throughout
civilisations and time.
BillK

Question -
The philosopher Nikolai Fedorov suggested that in the future it might
be possible to resurrect all the human dead. What are the difficulties
with this idea and will it ever be possible?

Perplexity AI -

Nikolai Fedorov's Vision of Universal Resurrection: Challenges and
Possibilities

Nikolai Fedorov (1829-1903), a Russian Orthodox Christian philosopher
known as the "Socrates of Moscow," proposed one of history's most
audacious ideas: the scientific resurrection of all humans who have
ever lived. This radical concept, part of his "Philosophy of the Common
Task," represents perhaps the ultimate challenge to mortality and forms
a cornerstone of Russian cosmism, a philosophical movement that
preceded modern transhumanism. While Fedorov's vision might seem
fantastical, recent scientific advances have begun to blur the
boundaries between life and death, prompting renewed examination of
resurrection's theoretical possibilities. This report analyzes
Fedorov's revolutionary proposal, the multifaceted challenges it
presents, and assesses whether such a project could ever move from
philosophical speculation to achievable reality.

Fedorov's Philosophy and the Common Task

Nikolai Fedorov's philosophy emerged from a unique blend of Orthodox
Christianity and forward-thinking scientific speculation. As the
founder of Russian cosmism, Fedorov envisioned humans not as passive
spectators of cosmic processes but as active participants with
responsibility for the universe's ultimate destiny [16]. His vision
extended far beyond personal immortality to encompass what he termed
the "Common Task" - humanity's moral obligation to resurrect all who
have ever lived.

Fedorov's understanding of resurrection fundamentally challenged
traditional approaches to death. Rather than accepting mortality as
inevitable, he saw it as a problem to be solved through scientific and
spiritual advancement [18].
For Fedorov, this resurrection wasn't merely a religious miracle but a
scientific imperative that would unite humanity across time. He held
that "we can become immortal and godlike through rational efforts and
that our moral obligation is to create a heaven to be shared by all who
ever lived" [20]. The cosmist aesthetic Fedorov developed was
intimately bound with immortality, interpreting life itself as a
striving toward transcending death [16]. By declaring that "life is
good, and death is evil," Fedorov united ethics with ontology, making
the conquest of death humanity's highest moral calling. Although deeply
influenced by Orthodox Christianity's emphasis on bodily resurrection,
Fedorov's vision transcended traditional religious frameworks, becoming
"the power of a universal transformation" [18].

Beyond Literal Interpretations

A common misunderstanding of Fedorov's philosophy involves interpreting
his resurrection project too literally - as merely digging up graves
and reassembling decomposed remains. However, his conception was far
more sophisticated. For Fedorov, the dead are not truly gone but
persist as "minute particles of dust, the very embodiment of the
process of resurrection itself" [18]. His famous call for the "death of
death" represents a summons for humanity to participate in an ongoing
cosmic process by fundamentally transforming our relationship with
nature.

Fedorov envisioned using science to "gather scattered molecules and
atoms to 'put them into the bodies of the fathers'" [20]. This approach
positions science alongside art and religion in the Common Task of
uniting humanity across time - bringing the dead into communion with
the living through advanced technological means. His vision thus
transcends mere physical reconstruction, aiming instead at a
comprehensive transformation of reality itself.

Scientific and Technical Challenges

Translating Fedorov's philosophical vision into scientific reality
presents enormous challenges. Modern approaches to reversing death fall
into several categories, each with distinct limitations and
possibilities.

The Preservation Problem

The most fundamental challenge to resurrection is the decay and
dispersion of human remains after death. Without preservation, the
information needed for reconstruction rapidly degrades. Modern cryonics
attempts to address this through vitrification - cooling brain tissue
to cryogenic temperatures without ice formation. While theoretically
sound, this approach faces significant practical difficulties:

1. Achieving sufficient cryoprotectant concentration to prevent ice
formation while minimizing tissue damage
2. Managing the inverse relationship between cryoprotectant toxicity
and temperature
3. Scaling rapid cooling techniques to larger tissues [15]

Even under ideal conditions, the time interval between clinical death
and cryopreservation procedures must be minimized - preferably under
one minute - though longer delays may still be compatible with ultimate
preservation [15]. Without such preservation, the molecular information
that constituted a person becomes increasingly dispersed and degraded
with time, potentially beyond recovery.

Reconstruction Methodologies

Several theoretical approaches to resurrection have been proposed, each
with distinct parameters and limitations:

1. *Cloning*: While technically the simplest approach, cloning "does
not preserve identity and does not preserve personal memories" [14]. A
clone would share genetic identity but none of the memories or
personality of the original person.
2. *Digital Immortality*: This approach involves preserving sufficient
data about a person to allow future reconstruction by advanced
artificial intelligence. While potentially promising, it requires both
comprehensive data collection before death and future AI capabilities
far beyond current technology [14].

3. *Simulation-Based Approaches*: More speculative paths involve
superintelligent systems operating at galactic scales, which might
simulate all possible people who have ever existed. This approach
sidesteps the need for continuous preservation but requires
computational resources of almost unimaginable magnitude [14].

4. *Novel Physical Approaches*: The most speculative methods involve
yet-undiscovered physical laws that might enable time travel or
information retrieval from the past [14].

Recent experimental advances demonstrate that our understanding of
death itself is evolving. Researchers at Yale School of Medicine have
shown that a pig's brain, removed four hours after death, could be
partially revived, with cellular activity resuming. When adapted to
human brains, such techniques could potentially preserve neural
information beyond current limitations [17]. These developments suggest
death may be less an event than a process - one potentially subject to
interruption or reversal given sufficient technological advancement.

Philosophical and Identity Challenges

Beyond the technical challenges lie profound philosophical questions
about personal identity and what would constitute authentic
resurrection. These questions strike at the heart of what makes someone
the "same person" across time.

The Problem of Personal Identity

Philosopher Derek Parfit's work on personal identity provides a useful
framework for understanding these challenges. According to Parfit,
there is no "necessarily determinate and indivisible soul" that
constitutes personal identity. Instead, identity consists of
"psychological continuity that persists as a result of an appropriate
cause" - what he terms "Relation R" [13].

This understanding problematizes resurrection in fundamental ways. If
identity is not an indivisible essence but a type of psychological
continuity maintained through causal connections, what would constitute
authentic resurrection? Simply recreating a physically identical body
would not necessarily preserve the psychological continuity that
constitutes personhood. The philosophical challenge, therefore, isn't
merely technical but conceptual - determining what aspects of a person
must be preserved for resurrection to be meaningful.

The Algorithmic Challenge

One framing of the resurrection problem treats it as fundamentally
computational. If human personalities and memories could be understood
as algorithmic patterns, resurrection might involve recreating these
patterns with sufficient fidelity [19]. The question becomes: how many
variables would need to be correctly modeled to create a functionally
equivalent version of a historical person? This approach treats
resurrection as an information problem rather than a metaphysical one,
though questions of authenticity remain.

The philosophical challenge extends to questions of version control -
at what age should someone be resurrected? With which memories and
personality characteristics? These questions have no obvious answers
but would need resolution for any practical resurrection project.

Ethical and Practical Considerations

Fedorov's resurrection project raises profound ethical questions that
extend beyond its technical feasibility.
These considerations would shape any serious attempt to implement such
a vision.

Consent and Autonomy

A fundamental ethical question concerns consent. Would it be ethical to
resurrect individuals who never consented to resurrection? Some might
view unwanted resurrection as a violation of autonomy equivalent to
unwanted life extension. Conversely, others might argue that offering
life to those who have died represents the ultimate humanitarian act.

Resource Allocation and Priority

The material and computational resources required for universal
resurrection would be staggering. Even if technically possible,
practical implementation would require unprecedented resources, raising
questions about priority. Should resources be directed toward
resurrecting the recently deceased, for whom more information remains
intact, or historically significant figures? Should resources instead
prioritize the living?

Social Integration and Psychological Impact

Resurrection would create unprecedented social challenges. Resurrected
individuals from different historical periods would face profound
psychological disorientation in a future world. Societies would need to
develop frameworks for integrating potentially billions of resurrected
individuals with vastly different worldviews, technologies, and
cultural expectations.

Theological and Cultural Implications

For many religious traditions, resurrection has profound theological
significance. Scientific resurrection would fundamentally transform
these traditions, potentially creating conflicts between technological
and spiritual understandings of life after death. Different cultures
and faith traditions would likely respond in dramatically different
ways to such developments.

Current Progress and Future Possibilities

Recent scientific advances suggest that while universal resurrection
remains distant, our understanding of death is becoming increasingly
nuanced. Breakthroughs in several fields offer partial pathways toward
aspects of Fedorov's vision.

Redefinition of Death

Traditional definitions of death as the cessation of heart and brain
activity are being challenged by recent research. A 2023 study revealed
that moments after life support was withdrawn from dying patients,
their brains exhibited bursts of gamma waves associated with
consciousness [17]. Such findings suggest death occurs across a
spectrum rather than as a binary event, creating potential
opportunities for intervention.

The Yale experiments with pig brains demonstrate that cellular brain
activity can be restored hours after clinical death, fundamentally
challenging our understanding of neural decay [17]. While these
experiments deliberately prevented the reemergence of consciousness,
they suggest biological function may be more recoverable than
previously thought.

Computational Advances

Advances in artificial intelligence and computational power continue at
an exponential pace. While current technology cannot simulate a
complete human mind, the theoretical framework for such simulation
continues to develop. As computational resources grow, the possibility
of preserving and potentially reconstructing aspects of human
personalities becomes increasingly plausible, though complete
recreation remains distant [14].

Emerging Preservation Technologies

Cryonics continues to advance, with improvements in preservation
chemicals reducing toxicity while maintaining protective effects.
The window of opportunity between legal death and irretrievable
information loss continues to expand as we better understand the
processes of cellular death [15]. While current techniques remain
imperfect, they represent potential bridges to more advanced future
technologies.

Conclusion: Assessing the Feasibility of Fedorov's Vision

Nikolai Fedorov's vision of universal resurrection represents one of
history's most ambitious technological and philosophical proposals.
While current technology cannot achieve this vision, neither can we
definitively declare it impossible. The feasibility of resurrection
depends largely on how we define it and what aspects of personhood we
consider essential to preserve.

If resurrection requires perfect recreation of every atom in a person's
body, it likely remains forever impossible due to fundamental physical
limitations including information loss and the no-cloning theorem of
quantum mechanics. However, if we adopt Parfit's view that personal
identity consists of psychological continuity maintained through
appropriate causal relationships, partial forms of resurrection might
eventually become possible through sufficiently advanced technology
[13][14].

The most promising approach may involve combinations of preservation
technologies for the recently deceased and reconstructive technologies
for those long dead. As one analysis concludes, "if large-scale future
technological development occurs, some form of resurrection seems
plausible" [14] - though perhaps not in the comprehensive form Fedorov
envisioned.

What remains most significant about Fedorov's vision is not its
immediate practicality but its moral force - the proposition that
humanity should unite around conquering death rather than accepting it
as inevitable. As technological capabilities continue to advance, what
once seemed purely speculative increasingly intersects with scientific
possibility, ensuring that Fedorov's radical vision will continue to
challenge and inspire humanity's relationship with mortality.

Citations:
1. https://www.thespacereview.com/article/873/1
2. https://en.wikipedia.org/wiki/Russian_cosmism
3. https://peacefulscience.org/articles/daniel-ang-a-scientist-looks-at-the-resurrection/
4. https://cornfieldtheology.com/3-objections-to-the-ressurection/
5. https://www.popularmechanics.com/science/a35788050/dyson-sphere-digital-resurrection-immortality/
6. https://en.wikipedia.org/wiki/Cryonics
7. https://www.alcor.org/library/a-cryopreservation-revival-scenario-using-molecular-nanotechnology/
8. https://www.cnsnevada.com/what-is-the-memory-capacity-of-a-human-brain/
9. https://en.wikipedia.org/wiki/Irreversible_process
10. https://www.jstor.org/stable/24389303
11. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4560723
12. http://philosimplicity.com/blog/2019/10/28/teletransporter-paradox-personal-identity-philosophy/
13. https://iep.utm.edu/fedorov/
14. https://philarchive.org/archive/TURCOA-3
15. https://pmc.ncbi.nlm.nih.gov/articles/PMC4733321/
16. https://www.e-flux.com/journal/89/180332/art-as-the-overcoming-of-death-from-nikolai-fedorov-to-the-cosmists-of-the-1920s/
17. https://slguardian.org/reversing-death-breakthrough-experiments-suggest-resurrection-is-possible/
18. https://jeanpaulmartinon.net/on-nikolai-fedorovs-death/
19. https://www.reddit.com/r/Futurology/comments/1gmpdfa/resurrection_feasible_in_theory_as_an_algorithmic/
20. https://en.wikipedia.org/wiki/Nikolai_Fyodorov_(philosopher)
21. https://wjbphs.com/sites/default/files/WJBPHS-2024-0785.pdf
22.
https://deponysum.com/2020/02/16/the-resurrection-of-the-dead/ 23. https://archiv.hkw.de/en/programm/projekte/veranstaltung/p_135477.php 24. https://biologos.org/articles/can-a-scientist-believe-in-the-resurrection-part-1 25. https://andrewmbailey.com/papers/Trenton%20Merricks/Oxford-Handbk-Resurrection.pdf 26. https://www.reddit.com/r/Futurology/comments/1aojmys/complexities_of_reviving_the_dead/ 27. https://sarahjyoung.com/site/2013/03/05/russian-thought-lecture-9-nikolai-fedorov-and-the-utopia-of-the-resurrected/ 28. https://www.bbc.com/future/article/20210420-cosmism-russias-religion-for-the-rocket-age 29. https://www.gbnews.com/health/live-longer-tips-reverse-death 30. https://www.reddit.com/r/ChristianApologetics/comments/1279gww/what_is_the_biggest_objection_to_the_resurrection/ 31. https://medicalfuturist.com/are-you-going-to-wake-up-from-cryosleep/ 32. https://www.vice.com/en/article/the-proto-communist-plan-to-resurrect-everyone-who-ever-lived/ 33. https://gracechurchsw.com/blog/2019/4/24/objections-to-and-evidence-for-the-resurrection 34. https://www.lesswrong.com/posts/C5ryrg5ktngwsZHnX/resurrection-through-simulation-questions-of-feasibility 35. https://www.patheos.com/blogs/formationmatters/2024/03/jesus-resurrection-and-the-laws-of-thermodynamics/ 36. https://www.magiscenter.com/blog/quantum-logic-last-days-and-the-resurrection-of-the-dead 37. https://www.popularmechanics.com/science/a63459169/brain-reanimation/ 38. https://tidsskriftet.no/en/2017/12/fra-redaktoren/resurrection-body-and-life-everlasting 39. https://www.icr.org/ChristResurrection 40. https://www.youtube.com/watch?v=ef1-rQ9er6k 41. https://www.newscientist.com/article/mg26435193-700-the-radical-treatments-bringing-people-back-from-the-brink-of-death/ 42. https://philarchive.org/archive/RICDBR-2 43. https://pmc.ncbi.nlm.nih.gov/articles/PMC5748528/ 44. https://tedsider.org/books/chapters_1_5.pdf 45. https://academic.oup.com/jlb/article/10/2/lsad028/7381683 46. https://www.alcor.org/docs/cryostasis-revival-summary.pdf 47. https://link.aps.org/doi/10.1103/PhysRevE.106.014401 48. https://www.youtube.com/watch?v=hpur62rjYuw 49. https://www.thehumanfront.com/Pocketsized-/the-personal-identity-of-clones 50. https://www.livescience.com/health/death/we-dont-yet-have-the-know-how-to-properly-maintain-a-corpse-brain-why-cryonics-is-a-non-starter-in-our-quest-for-immortality 51. https://www.cryonicsarchive.org/library/molecular-repair-of-the-brain/ 52. https://www.ucl.ac.uk/news/2020/aug/energy-demands-limit-our-brains-information-processing-capacity 53. https://www.surrey.ac.uk/arrow-of-time/themes/quantum-thermodynamics-and-emergence-irreversibility 54. https://www.alevelphilosophy.co.uk/handouts_religion/Personalidentityoverview.pdf 55. https://pubmed.ncbi.nlm.nih.gov/19788649/ 56. https://www.jstor.org/stable/40024197 57. https://en.wikipedia.org/wiki/Mind_uploading 58. https://www.bethinking.org/resurrection/the-resurrection 59. https://www.philosophytalk.org/blog/parfit-and-selves-matter 60. https://www.taylorfrancis.com/chapters/edit/10.4324/9781315600147-12/resurrection-personal-identity-god-stephen-davis 61. https://philpapers.org/archive/GOLUFA.pdf 62. https://iep.utm.edu/resurrec/ 63. https://en.wikipedia.org/wiki/Teletransportation_paradox 64. https://ehrmanblog.org/materialism-personal-identity-and-resurrection-part-1-platinum-post-by-dennis-j-folds-ph-d/ 65. https://www.linkedin.com/pulse/concept-mind-uploading-its-potential-implications-frank-b-prempeh-ii 66. 
https://www.reasonablefaith.org/media/debates/is-there-historical-evidence-for-the-resurrection-of-jesus-the-craig-ehrman/
67. https://www3.nd.edu/~jspeaks/courses/2007-8/20229/_HANDOUTS/personal-identity-teletransport-split-brain.pdf
68. https://bigthink.com/hard-science/dyson-spheres/
69. https://www.icr.org/article/entropy-resurrection/
70. https://en.wikipedia.org/wiki/Quantum_revival
71. https://www.premierhealth.com/your-health/articles/healthnow/reviving-a-dead-brain-really-
72. https://www.cryonicsarchive.org/library/problems-associated-with-cryonics/
73. https://www.kcl.ac.uk/archive/news/lsm/newsrecords/2016/nov/cryonics-and-the-brain-professor-clive-coen-outlines-the-risks
74. https://www.tomorrow.bio/post/pros-cons-cryonics
75. https://academic.oup.com/book/12645/chapter/162592929
76. https://www.abebooks.co.uk/book-search/title/personal-identity-resurrection-survive/
77. https://business.uoregon.edu/sites/default/files/media/Rekindling%20the%20Flame_AMJ.pdf
78. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1468-0025.1986.tb00122.x
79. https://www.saet.ac.uk/Christianity/ResurrectionoftheDead

From pharos at gmail.com Sat May 10 13:02:11 2025
From: pharos at gmail.com (BillK)
Date: Sat, 10 May 2025 14:02:11 +0100
Subject: [ExI] Near Future AI 'smartphone' Assistants
Message-ID:

The direction of personal AI development is becoming increasingly
clear.

These future AIs will do much more than merely respond to prompts.
They will operate autonomously in the background, acting on your
behalf and pursuing your goals with independence and competence.
Your primary interface with the world will still be a device - the
smartphone successor - which will host your personal AI agent. This
will be no stripped-down utility, but a profoundly sophisticated model,
far surpassing the capabilities of today's ChatGPT. It will run locally
and privately, ensuring all your core interactions remain exclusively
yours. It will be your personal secretary, with its own initiative to
look after your interests.
Sounds great, doesn't it? Well, there might be some downsides.

Your AI will be 'private' in the sense that Google, Meta, etc. won't be
tracking your every transaction. But it won't be private if your device
is ever seized by the authorities for any reason. The seizure reason
could be quite trivial, just an excuse really. Then your life could be
examined in total detail for any and every action that the authorities
might decide is an offense. The police already examine smartphones,
laptops, PCs and car computers during investigations.
Lawfare is now a recognised way to attack opponents. Even if
groundless, the process is the punishment, sometimes continuing for
many years.

How might this affect future behaviour?
Some people might just refuse these future AI assistant devices. That
will make life much more difficult for them in a society where these
devices become indispensable. It will also be considered very
suspicious behaviour in this new form of society. What have you got to
hide?

The opposite choice is to allow the personal AI assistant to become
part of your life and record 'privately' every detail. This should make
populations very well-behaved and extremely cautious, in order to avoid
any action that might cause offense.

Either way, this is going to change society in many unexpected ways.
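(To make the local-first design concrete, here is a toy sketch of the
sort of loop such an assistant might run. Everything in it - the file
name, the functions, the record format - is invented for illustration;
no real product works exactly this way:)

    import json
    import pathlib

    # On-device store; nothing in this sketch leaves the machine.
    VAULT = pathlib.Path("vault.jsonl")

    def record_event(event):
        # Append to local, user-owned storage instead of a cloud
        # service.
        with VAULT.open("a") as f:
            f.write(json.dumps({"event": event}) + "\n")

    def recall(keyword):
        # Retrieval is local too, so no third party ever sees the
        # query.
        if not VAULT.exists():
            return []
        return [json.loads(line)["event"]
                for line in VAULT.open()
                if keyword.lower() in line.lower()]

    record_event("Booked dentist for Tuesday 10am")
    print(recall("dentist"))

Of course, that same vault file is exactly what a seizure warrant would
target: local storage protects you from the cloud provider, not from
whoever holds the device.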
BillK

From msd001 at gmail.com Sat May 10 13:44:51 2025
From: msd001 at gmail.com (Mike Dougherty)
Date: Sat, 10 May 2025 09:44:51 -0400
Subject: [ExI] Near Future AI 'smartphone' Assistants
In-Reply-To:
References:
Message-ID:

On Sat, May 10, 2025, 9:05 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> The direction of personal AI development is becoming increasingly
> clear.
>
> These future AIs will do much more than merely respond to prompts.
> They will operate autonomously in the background, acting on your
> behalf and pursuing your goals with independence and competence.
> Your primary interface with the world will still be a device - the
> smartphone successor - which will host your personal AI agent.
>

Do you really think the entire agent will be on the 'device'?

I suspect the device will store personal configuration to shape the
interactions - like a facemask or cosmetics for an assistant that is
otherwise an identical model to millions of consumers. Sure, wealthy
consumers will have smarter models, but it's likely the world won't
need more than three or four.

Consider DVR: TiVo was local storage for as much content as you could
afford to keep. Then the cable boxes copied this idea. Can't get a new
device without losing content or needing to manage it. Now the data is
stored in the 'cloud', so you can upgrade the device simply by putting
the account credentials on the new device. Desktop computers are
difficult to move out/in; phones are easy. I think this dependence on
ubiquitous internet will grow for personal AI agents.

> Your AI will be 'private' in the sense that Google, Meta, etc. won't
> be tracking your every transaction. But it won't be private if your
> device is ever seized by the authorities for any reason. The seizure
> reason could be quite trivial, just an excuse really. Then your life
> could be examined in total detail for any and every action that the
> authorities might decide is an offense.
>

It will be as private as strolling around the mall was, before that
physical experience went defunct. There will be no need to seize your
device. The Nth Party (far beyond third party) will access the cloud
version of you. In the time it takes to understand "network error" and
refresh, the digital tracks of your life will have been judged. You
might not even notice that the consequences of your infraction include
fewer options in the future. You're already in jail; the warden
controls the TV because it's your only view of the outside.

Why though? Because controlling the memeplex of humanity is much easier
than running the human zoo. Ultimately the results are the same.

> Some people might just refuse these future AI assistant devices.
> That will make life much more difficult for them in a society where
> these devices become indispensable. It will also be considered very
> suspicious behaviour in this new form of society.
> What have you got to hide?
>

Those outliers will still have profiles, but the details may be
slightly less nuanced or just slightly lagging on updates.

> The opposite choice is to allow the personal AI assistant to become
> part of your life and record 'privately' every detail. This should
> make populations very well-behaved and extremely cautious, in order
> to avoid any action that might cause offense.
>

Behaved and cautious? That doesn't seem to be the direction. Offense
has become normalized. The algorithm has bolstered trolls and extended
their reach. Angry react generates far more metadata than agreement.
If the purpose of the machine is to manage the overwhelming deluge of
information, wouldn't it serve the machine's purpose to generate an
even more overwhelming downpour?

From pharos at gmail.com Sat May 10 16:23:31 2025
From: pharos at gmail.com (BillK)
Date: Sat, 10 May 2025 17:23:31 +0100
Subject: [ExI] Near Future AI 'smartphone' Assistants
In-Reply-To:
References:
Message-ID:

On Sat, 10 May 2025 at 14:45, Mike Dougherty wrote:
>
> Do you really think the entire agent will be on the 'device'?
> I suspect the device will store personal configuration to shape the
> interactions - like a facemask or cosmetics for an assistant that is
> otherwise an identical model to millions of consumers. Sure, wealthy
> consumers will have smarter models, but it's likely the world won't
> need more than three or four.
>

Yes! :)
I am considering some years into the future. AI software and
'smartphone' chip design will continue their exponential improvements,
especially as AI will be assisting in the coding and chip design.
When your assistant AI is organising your life, you will be very upset
if your internet connection goes down and that causes your AI to stop
working. Sure, some cloud functionality would be lost, but you would
expect your device to have enough memory, storage and power to keep
your AI running while offline.

Another consideration is all the GDPR privacy legislation that
countries are enabling. (Partly because of all the juicy punishment
fines they can extract from the cloud suppliers). Most of your AI data
must remain on your own device. (And hopefully be backed up to the
large capacity memory sticks that are becoming available).

One of the reasons for Cloud computing is that our rulers want to
control populations. If they can click a button and delete your
internet history, they can make anyone into a non-person. Personal AIs
on our own devices will be necessary to protect individual rights.

BillK

From msd001 at gmail.com Sat May 10 16:50:38 2025
From: msd001 at gmail.com (Mike Dougherty)
Date: Sat, 10 May 2025 12:50:38 -0400
Subject: [ExI] Near Future AI 'smartphone' Assistants
In-Reply-To:
References:
Message-ID:

On Sat, May 10, 2025, 12:26 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Another consideration is all the GDPR privacy legislation that
> countries are enabling. (Partly because of all the juicy punishment
> fines they can extract from the cloud suppliers). Most of your AI data
> must remain on your own device. (And hopefully be backed up to the
> large capacity memory sticks that are becoming available).
>
> One of the reasons for Cloud computing is that our rulers want to
> control populations. If they can click a button and delete your
> internet history, they can make anyone into a non-person. Personal
> AIs on our own devices will be necessary to protect individual rights.
>

Hmm. I live in America. Mostly what I see is a coordinated effort to
undermine rather than protect 'individual rights'.

I'm really not sure how anyone will be able to defend their privacy
without appearing kooky, like we already perceive tin-hat preppers.

Maybe Eugen was right to drop out sooner rather than later, then
disappear?
From atymes at gmail.com Sat May 10 18:03:48 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Sat, 10 May 2025 14:03:48 -0400
Subject: [ExI] Near Future AI 'smartphone' Assistants
In-Reply-To:
References:
Message-ID:

On Sat, May 10, 2025 at 12:52 PM Mike Dougherty via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Maybe Eugen was right to drop out sooner rather than later, then
> disappear?

That's what they want you to do. Drop out, and they come for you in the
dark that you put yourself in. Stay visible, and it's harder - not
impossible, but harder - for them.

From pharos at gmail.com Sat May 10 19:09:25 2025
From: pharos at gmail.com (BillK)
Date: Sat, 10 May 2025 20:09:25 +0100
Subject: [ExI] Near Future AI 'smartphone' Assistants
In-Reply-To:
References:
Message-ID:

On Sat, 10 May 2025 at 17:50, Mike Dougherty wrote:
> Hmm. I live in America. Mostly what I see is a coordinated effort to
> undermine rather than protect 'individual rights'.
> I'm really not sure how anyone will be able to defend their privacy
> without appearing kooky, like we already perceive tin-hat preppers.
>--------------------------------

In theory, if personal AI assistants process data locally and have
secure privacy protections built in, then they would be a good defence
against surveillance and interference from corporations or governments.
Note the "In theory". You would have to be very sure that your AI was
acting in your best interest, and not that of any company or third
party. Because they will be powerful AIs in their own right, they have
to be totally secure. Otherwise they could themselves become tools for
surveillance or manipulation.

BillK

From pharos at gmail.com Sat May 10 20:09:34 2025
From: pharos at gmail.com (BillK)
Date: Sat, 10 May 2025 21:09:34 +0100
Subject: [ExI] AI-driven layoffs
Message-ID:

The Gr-AI-m Reaper: Hundreds of jobs at IBM and Crowdstrike vanish as
artificial intelligence makes humans more dispensable.
By Wayne Williams published 10 May 2025
Both companies stress humans still have a role to play - for now.

Quote:
Many people worry the rise of AI and robotics could lead to job losses.
As machines become increasingly capable of performing tasks faster,
cheaper, and more accurately than humans, concerns have grown about the
long-term impact on employment, wages, and job security. Even skilled
professions, such as those in healthcare, finance, law, and technology,
may not be immune. The growing capabilities of AI suggest that no
sector is entirely safe from disruption.
-----------------------

No point in referring back to the time when cars replaced horses. This
time you can say that yes, AI will replace some jobs, but it also
creates new jobs. But then exponential AI improves and does those jobs
as well.
How long have we got before unemployment is the future career?

BillK

From giulio at gmail.com Tue May 13 14:13:43 2025
From: giulio at gmail.com (Giulio Prisco)
Date: Tue, 13 May 2025 16:13:43 +0200
Subject: [ExI] Nikolai Fedorov and space expansion, revisited
Message-ID:

Nikolai Fedorov and space expansion, revisited. God wants us to go to
the stars and remake the universe.
https://www.turingchurch.com/p/nikolai-fedorov-and-space-expansion

From pharos at gmail.com Wed May 14 17:38:16 2025
From: pharos at gmail.com (BillK)
Date: Wed, 14 May 2025 18:38:16 +0100
Subject: [ExI] Meet AlphaEvolve, the Google AI that writes its own code
Message-ID:

Meet AlphaEvolve, the Google AI that writes its own code - and just saved millions in computing costs.
Michael Nuñez May 14, 2025

Quote:
AlphaEvolve pairs Google's Gemini large language models with an evolutionary approach that tests, refines, and improves algorithms automatically. The system has already been deployed across Google's data centers, chip designs, and AI training systems - boosting efficiency and solving mathematical problems that have stumped researchers for decades.

AlphaEvolve has been quietly at work inside Google for over a year. The results are already significant.
-----------------

AIs are rewriting their own code now!
This sounds like a major step forward towards AGI.

BillK

From pharos at gmail.com Wed May 14 18:54:02 2025
From: pharos at gmail.com (BillK)
Date: Wed, 14 May 2025 19:54:02 +0100
Subject: Re: [ExI] Meet AlphaEvolve, the Google AI that writes its own code
In-Reply-To:
References:
Message-ID:

Another article about AlphaEvolve, with more explanation of what it does.
BillK

Quote:
Google DeepMind's new AI agent uses large language models to crack real-world problems.
AlphaEvolve finds new algorithms that outperform the best human-made solutions for data center management, chip design, and more.
By Will Douglas Heaven May 14, 2025

Google DeepMind's new tool, called AlphaEvolve, uses the Gemini 2.0 family of large language models (LLMs) to produce code for a wide range of different tasks. LLMs are known to be hit and miss at coding. The twist here is that AlphaEvolve scores each of Gemini's suggestions, throwing out the bad and tweaking the good, in an iterative process, until it has produced the best algorithm it can. In many cases, the results are more efficient or more accurate than the best existing (human-written) solutions.
-------------------
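The generate-score-select loop the two articles above describe can be sketched in a few lines. In this toy version, random mutation stands in for the LLM proposer and a simple distance function stands in for the automated evaluators; it illustrates the shape of the technique only, not DeepMind's actual system:

import random

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def mutate(candidate):
    """Propose a variant: tweak one random position (the 'LLM suggestion')."""
    variant = candidate[:]
    i = random.randrange(len(variant))
    variant[i] += random.choice([-1, 1])
    return variant

def score(candidate):
    """Automated evaluator: negative distance to the target (higher is better)."""
    return -sum(abs(a - b) for a, b in zip(candidate, TARGET))

best = [0] * len(TARGET)
for generation in range(2000):
    proposals = [mutate(best) for _ in range(8)]   # generate
    best = max(proposals + [best], key=score)      # score, keep the best
    if score(best) == 0:
        break

print(generation, best)

Swap in an LLM for mutate() and a compiler-plus-benchmark for score() and you have the essence of the approach: the language model only has to be occasionally right, because the evaluator throws out the bad and keeps the good.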
From dsunley at gmail.com Wed May 14 18:58:55 2025
From: dsunley at gmail.com (Darin Sunley)
Date: Wed, 14 May 2025 12:58:55 -0600
Subject: Re: [ExI] Meet AlphaEvolve, the Google AI that writes its own code
In-Reply-To:
References:
Message-ID:

Traditionally, recursive self-improvement has been understood as AGIs that can use the full force of their IQ to modify their own code to improve their own IQ. Repeat until apotheosis.

With LLM-based AGI, that would involve direct manipulation of weights that would probably require mechanistic interpretability knowledge we don't have yet (although I'm quite sure the major players are significantly ahead of anything they've published publicly in that field).

On Wed, May 14, 2025 at 11:40 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Meet AlphaEvolve, the Google AI that writes its own code - and just saved millions in computing costs.
> [...]

From pharos at gmail.com Wed May 14 22:40:52 2025
From: pharos at gmail.com (BillK)
Date: Wed, 14 May 2025 23:40:52 +0100
Subject: [ExI] Yudkowsky - new book on AI dangers - Published Sept 2025
Message-ID:

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
by Eliezer Yudkowsky (Author), Nate Soares (Author)

Comment from - Stephen Fry, actor, broadcaster, and writer -

The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I've ever read, and I've waded through scores of them.
We really must rub our eyes and wake the **** up!
-----------------
Preorders here -
Amazon site -
-----------------

From hkeithhenson at gmail.com Wed May 14 23:49:46 2025
From: hkeithhenson at gmail.com (Keith Henson)
Date: Wed, 14 May 2025 16:49:46 -0700
Subject: Re: [ExI] Yudkowsky - new book on AI dangers - Published Sept 2025
In-Reply-To:
References:
Message-ID:

It doesn't matter.

Unless technical progress is stopped, we will face whatever problems AI generates (including possible extinction) sooner or later. AI has upsides as well as downsides; it might prevent extinction if it develops sooner.

In any case, you can't stop it in a world where you can run an AI on a high-end laptop.

Best wishes,

Keith

On Wed, May 14, 2025 at 3:42 PM BillK via extropy-chat wrote:
> If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
> by Eliezer Yudkowsky (Author), Nate Soares (Author)
> [...]
From brent.allsop at gmail.com Thu May 15 21:30:24 2025
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 15 May 2025 15:30:24 -0600
Subject: Re: [ExI] Yudkowsky - new book on AI dangers - Published Sept 2025
In-Reply-To:
References:
Message-ID:

Yes, I agree with Keith.

And it'd sure help if people who really believe this would work to build and track expert consensus around concise descriptions of the best arguments, instead of just shouting one person's opinion into the void.

That is what we're doing on canonizer with the "Friendly AI Importance" topic. And as more people weigh in on this issue, the "Friendly AI is Sensible" camp continues to fall further behind the "Such Concern Is Mistaken" camp.

Do any of the arguments in the book come anywhere close to the arguments in the "AI can only be friendly" camp? Does he even address those, to me convincing, arguments anywhere in the book?

On Wed, May 14, 2025 at 5:51 PM Keith Henson via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> It doesn't matter.
> [...]

From hkeithhenson at gmail.com Fri May 16 00:52:26 2025
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 15 May 2025 17:52:26 -0700
Subject: Re: [ExI] Yudkowsky - new book on AI dangers - Published Sept 2025
In-Reply-To:
References:
Message-ID:

The AI in the story ran a nanotechnology-based clinic and was as aligned with humans as possible. It still resulted in biological extinction.

Best wishes,

Keith

On Thu, May 15, 2025 at 2:32 PM Brent Allsop via extropy-chat wrote:
> Yes, I agree with Keith.
> [...]

From pharos at gmail.com Fri May 16 09:52:43 2025
From: pharos at gmail.com (BillK)
Date: Fri, 16 May 2025 10:52:43 +0100
Subject: [ExI] Mark Zuckerberg wants everyone to have AI friends
Message-ID:

Mark Zuckerberg wants everyone to have AI friends, but I think he's missing the point of AI, and the point of friendship.
By Eric Hal Schwartz published 13 May 2025

Quote:
With caveats about the ways human friendships offer things no AI currently can, Zuckerberg explained on a podcast hosted by Dwarkesh Patel that people like to engage with AI chatbots like Meta AI about their personal lives.
And since most Americans have far fewer friends than they'd like, there's space for AI as an alternative. "As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling," Zuckerberg said.
--------------------

And all the media has reacted in horror and attacked Zuckerberg for even suggesting this. I think this is an emotional overreaction. First, he is not suggesting that one of the current chatbots can replace your best friend. He is looking ahead, to future AIs that will develop more human-like personalities. They will not just be question/answer bots, although people are already enjoying chatting to the current chatbots.

Zuck is not comparing a human friend with an AI friend. As he said, therapists are expensive and unaffordable for many people. He is comparing AI friends with no friends at all. An AI companion is better than nothing. One media comment made me smile. "Real friendship is not just about someone helping you all the time, selflessly, without ever asking for something in return". Correct - that's what your future AI companion does!

But the argument is avoiding the main problem. Too many people feel lonely today and have few human friends. Complaining that an AI cannot replace human friends doesn't help to solve the loneliness problem that many people have.

BillK

From pharos at gmail.com Fri May 16 15:36:20 2025
From: pharos at gmail.com (BillK)
Date: Fri, 16 May 2025 16:36:20 +0100
Subject: [ExI] Stanford Professor teaches how to get the best use of Generative AI
Message-ID:

This 13 min. video was mentioned on Brian Wang's website Next Big Future.
< https://www.nextbigfuture.com/2025/05/ai-powered-creativity-advice-ask-ai-how-you-should-use-ai.html >
Quote:
How are you collaborating with AI in this new era? With so many LLM tools emerging, are you truly leveraging them to enhance your creativity and productivity? As an expert in creativity and AI, Jeremy shares profound insights on how AI is transforming our creative potential.
-------------------------------

I asked Gemini 2.5 Pro AI to scan the video and list and explain the insights it found. It took Gemini about 90 seconds!
BillK

Gemini 2.5 Pro -
The video, featuring Jeremy Utley, Adjunct Professor of AI & Design Thinking at Stanford, outlines several key insights on how to become a better collaborator with AI:

1. *AI as the Ultimate Personal Assistant (The Churchill Metaphor):*
- *Insight:* Modern AI has the potential to be an incredibly powerful personal assistant, much like Winston Churchill's, who could dictate complex national addresses from his bathtub to an assistant who deeply understood his context, voice, and intent.
- *Explanation:* Utley uses the Churchill anecdote to illustrate that AI can now provide a similar level of personalized, context-aware assistance to anyone. The key is learning how to collaborate with AI in a way that allows it to understand your specific needs, style, and objectives, making complex tasks more manageable and freeing up mental space for deeper thinking (like Churchill in his "bed, bus, or bathtub" moments of innovation).

2. *Chapter 1: Don't Ask AI, Let It Ask You:*
- *Insight:* Instead of simply prompting AI with a direct question or task, you should engage AI to help you refine your interaction with AI itself. Let AI guide you on how to best leverage its capabilities by asking you questions.
- *Explanation:* Utley suggests framing your initial interaction as a consultation. For example, instead of just asking AI "How should I answer this question?", you could say, "I want to ask how I should answer this question. What's the best way of framing that question to an AI?" or, even more elaborately, ask AI to act as an expert and interview you to gather context about your work, workflows, and objectives before providing tailored recommendations. This turns AI into an active partner in defining the problem and the approach. He cites an example of a non-technical National Park ranger who, with basic training, built an AI tool in 45 minutes to automate paperwork, saving the service thousands of labor days annually.

3. *Chapter 2: Do not 'Use' AI, Treat It as a Teammate:*
- *Insight:* To truly unlock AI's potential and avoid the "Realization Gap" (where most people don't achieve significant productivity gains), you must shift your mindset from "using" AI as a passive tool to "working with" AI as an active teammate.
- *Explanation:* Research shows that while AI can increase speed and output quality, many don't see creative gains and some even see a decrease. Utley attributes this to how people interact with AI. Underperformers treat it like a tool: if it gives a mediocre result, they might discard it or fix it themselves. Outperformers treat AI like a teammate: they coach it, give feedback, and iterate with it to improve results, much like managing a human colleague. This includes using AI to role-play difficult conversations or to help prepare for them by providing different perspectives.

4. *Chapter 3: How to Go Beyond 'Good Enough' Ideas:*
- *Insight:* Creativity is "doing more than the first thing you think of." To achieve truly exceptional and creative outcomes with AI, you must push beyond AI's initial, often "good enough," suggestions by prompting for volume and variation and by bringing your unique human inspiration and perspective.
- *Explanation:* AI makes it very easy to get to a "good enough" solution, which can lead to "satisficing" (a term by Herbert Simon). However, if the goal is world-class or highly innovative work, you need to actively prompt AI for a wide range of diverse ideas. This requires effort to sort and process, but crucially, it also requires the human collaborator to bring their unique experiences, perspectives, and "disciplined inspiration" (a concept from hip-hop artist Lecrae) to the interaction. Everyone has access to the same AI models; the differential output comes from what you bring to the collaboration.
-------------------------------------------------

From avant at sollegro.com Sun May 18 22:53:44 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Sun, 18 May 2025 15:53:44 -0700
Subject: Re: [ExI] Meet AlphaEvolve, the Google AI that writes its own code
In-Reply-To:
References:
Message-ID: <09cdbd66ce0832bfbc088c9a91c1bcbf@sollegro.com>

On 2025-05-14 11:58, Darin Sunley via extropy-chat wrote:
> Traditionally, recursive self-improvement has been understood as AGIs that can use the full force of their IQ to modify their own code to improve their own IQ. Repeat until apotheosis.
> [...]

That is an interesting idea. If IQ in an LLM is based more on the weights, i.e. the training, than on the ANN algorithm itself, then AI could be limited by its training set. In other words, AGI could be as susceptible to GIGO (cognitive biases, prejudice, and superstition) as people are.

>> Meet AlphaEvolve, the Google AI that writes its own code - and just saved millions in computing costs.
>> [...]

Yes, developing a new faster algorithm for matrix multiplication sounds impressive. Here is the White Paper on AlphaEvolve (44 pgs):
https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf

>> AlphaEvolve has been quietly at work inside Google for over a year.
>> The results are already significant.

I hope somebody human is looking through that code for backdoors and other mischief. This seems more of a recipe for misalignment than it does AGI.

Stuart LaForge

From pharos at gmail.com Mon May 19 12:51:21 2025
From: pharos at gmail.com (BillK)
Date: Mon, 19 May 2025 13:51:21 +0100
Subject: [ExI] AI 'Vibe' Coding of applications
Message-ID:

On Sun, 18 May 2025 at 23:56, Stuart LaForge via extropy-chat wrote:
> I hope somebody human is looking through that code for backdoors and other mischief. This seems more of a recipe for misalignment than it does AGI.

Well, it looks as though the answer is 'No, humans won't be checking the code'. That task is too boring and too much hard work for our new generation of web designers. Security and future maintenance will be later problems for somebody else!
BillK

See:
Quote:
At its core, it's about collaborating with AI through natural language to build software, focusing on the "vibe" or essence of what you want to create rather than the technical implementation. You describe your vision, and the AI handles the code generation. When errors occur, you don't dive into debugging line by line - you simply explain the issue to the AI and let it propose solutions.
-------------------
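The workflow that quote describes is essentially a generate-run-report loop. A toy sketch of it, with a stand-in generator in place of a real LLM call (hypothetical; no real API is assumed):

import traceback

def fake_llm(goal, error):
    # Stand-in code generator: the first attempt has a bug; shown the error
    # text, it "proposes" a corrected version, as a real model might.
    if error is None:
        return "result = sum(range(1, 11)) / n   # bug: n is undefined"
    return "n = 10\nresult = sum(range(1, 11)) / n"

goal = "average of the numbers 1..10"
error = None
for attempt in range(3):
    code = fake_llm(goal, error)
    namespace = {}
    try:
        exec(code, namespace)                 # run the generated program
        print(attempt, namespace["result"])   # success on the second try: 5.5
        break
    except Exception:
        error = traceback.format_exc()        # report the failure, don't debug it

The notable design choice - and the source of Stuart's worry in the reply below - is that the human in this loop never reads the generated code, only its behavior.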
From avant at sollegro.com Mon May 19 15:06:59 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Mon, 19 May 2025 08:06:59 -0700
Subject: Re: [ExI] AI 'Vibe' Coding of applications
In-Reply-To:
References:
Message-ID: <9c9bc3e6de4ba1c483a97b37949ab969@sollegro.com>

On 2025-05-19 05:51, BillK via extropy-chat wrote:
> Well, it looks as though the answer is 'No, humans won't be checking the code'.
> [...]

Well that sucks. Vibe coding could easily lead to AIs developing their own inscrutable programming languages that humans won't be able to understand. Like Brainfuck on steroids.
https://www2.gvsu.edu/miljours/bf.html

Stuart LaForge

From pharos at gmail.com Wed May 21 21:55:39 2025
From: pharos at gmail.com (BillK)
Date: Wed, 21 May 2025 22:55:39 +0100
Subject: [ExI] The Enhanced Games
Message-ID:

Enhanced Games set to launch in Las Vegas in 2026 despite criticism
The event will see athletes permitted to take performance-enhancing drugs
Reuters Wednesday 21 May 2025

Quote:
The Enhanced Games have set their inaugural competition for May 2026 in Las Vegas, with swimming, athletics and weightlifting on the agenda for athletes using substances banned in official competition, to the consternation of anti-doping bodies. The Enhanced Games operate under the principle that banning performance-enhancing drugs in major competition does not protect athletes but rather stifles their performance. Participants could earn prize money totalling up to $500,000 per event plus bonuses for surpassing a world record mark.
---------------------------

Needless to say, this event is very controversial!
BillK

From avant at sollegro.com Thu May 22 18:18:31 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Thu, 22 May 2025 11:18:31 -0700
Subject: [ExI] Nightvision Contact Lenses
Message-ID: <0cdb1bc24087706708d0cec48a82aa85@sollegro.com>

This looks like it should be a game changer. I haven't had a chance to read the article, but it looks amazing. Contact lenses that can turn near infrared into visible light. This is what nightvision goggles employed by the military and outdoorsmen do, but these contact lenses do it without a power source and without the bulk and discomfort of wearing a device. I would have thought it was some kind of scam, but it is published in Cell, which ranks just behind Nature and Science in the life sciences for journal reputation.
https://www.cell.com/cell/fulltext/S0092-8674(25)00454-4

Abstract
Humans cannot perceive infrared light due to the physical thermodynamic properties of photon-detecting opsins. However, the capability to detect invisible multispectral infrared light with the naked eye is highly desirable. Here, we report wearable near-infrared (NIR) upconversion contact lenses (UCLs) with suitable optical properties, hydrophilicity, flexibility, and biocompatibility. Mice with UCLs could recognize NIR temporal and spatial information and make behavioral decisions. Furthermore, human participants wearing UCLs could discriminate NIR information, including temporal coding and spatial images. Notably, we have developed trichromatic UCLs (tUCLs), allowing humans to distinguish multiple spectra of NIR light, which can function as three primary colors, thereby achieving human NIR spatiotemporal color vision. Our research opens up the potential of wearable polymeric materials for non-invasive NIR vision, assisting humans in perceiving and transmitting temporal, spatial, and color dimensions of NIR light.
--------------
Stuart LaForge
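As a rough sanity check on how upconversion can work without a power source: it is just energy bookkeeping. A photon's energy is E = hc/lambda, so combining two NIR photons can at best yield one photon at half the wavelength. The numbers below are illustrative, not the paper's specific materials:

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of one photon, in eV, from its wavelength in nm."""
    return h * c / (wavelength_nm * 1e-9) / eV

e_nir = photon_energy_ev(980)   # one 980 nm NIR photon: about 1.27 eV
e_sum = 2 * e_nir               # two combined: about 2.53 eV
print(e_nir, e_sum, 980 / 2)    # 490 nm is the shortest possible output

Real upconversion materials emit somewhat redder than that 490 nm bound (green, for example) because some energy is lost non-radiatively; the point is only that no external power is needed - the visible photon's energy comes from the infrared photons themselves.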
From pharos at gmail.com Fri May 23 18:23:57 2025
From: pharos at gmail.com (BillK)
Date: Fri, 23 May 2025 19:23:57 +0100
Subject: [ExI] Google's New Video-Generating AI Veo 3 May Be the End of Reality as We Know It
Message-ID:

Google DeepMind describes the new model, Veo 3, as capable of delivering "best in class quality, excelling in physics, realism and prompt adherence" - and as videos posted to social media indicate, that marketing doesn't fall too far short.

The caliber of the video is indeed impressive. But the real quantum leap is that the system can produce audio that goes with the clip, ranging from sound effects to music to human speech and singing.
-------------------

This has even just had a spot on the BBC TV Evening News, showing how people are amazed or horrified at how realistic the created videos are. The samples shown on TV were very convincing! Hard to believe they were created by AI.
BillK

From pharos at gmail.com Fri May 23 22:39:54 2025
From: pharos at gmail.com (BillK)
Date: Fri, 23 May 2025 23:39:54 +0100
Subject: [ExI] Student creates AI tool for de-radicalization
Message-ID:

This AI scans Reddit for "extremist" terms and plots bot-led intervention.
The PrismX tool assigns users radicalization scores and can hold covert conversations to try to reverse their views.
By Eve Upton-Clark 05-23-2025

Quotes:
A computer science student is behind a new AI tool designed to track down Redditors showing signs of radicalization and deploy bots to "deradicalize" them through conversation. While PrismX is not currently being tested on real unconsenting users, it piles on the ever-growing question of the role of artificial intelligence in human spaces.
-------------------

As a 'proof-of-concept' tech, it should ring alarm bells. It means that anyone voicing an 'unapproved' opinion on social media could find themselves arguing with AI bots. (And AI bots are better at arguing than most humans.) This could destroy any online discussion. When faced with a relentless AI that will never weaken, most people would just stop responding.

BillK
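As a toy illustration of what "assigning users radicalization scores" could mean mechanically, here is a weighted term-frequency scorer over a user's comments. This is entirely hypothetical - the article does not describe PrismX's actual method, and the lexicon below is made up:

TERM_WEIGHTS = {"purge": 3.0, "traitors": 2.0, "uprising": 2.5}  # invented lexicon

def radicalization_score(comments):
    """Average weighted hits per comment (higher = more flagged terms)."""
    total = 0.0
    for text in comments:
        words = text.lower().split()
        total += sum(TERM_WEIGHTS.get(w, 0.0) for w in words)
    return total / max(len(comments), 1)

print(radicalization_score(["the uprising is coming", "nice weather today"]))  # 1.25

Even this crude sketch shows why such tools are contentious: keyword matching flags quotation, satire, and reporting just as readily as advocacy, and whoever writes the lexicon decides what counts as 'extremist'.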
From atymes at gmail.com Sat May 24 00:22:01 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Fri, 23 May 2025 20:22:01 -0400
Subject: Re: [ExI] Student creates AI tool for de-radicalization
In-Reply-To:
References:
Message-ID:

So how long until someone steals it and lets it go rogue (except doing it on purpose)?

On Fri, May 23, 2025 at 6:42 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> This AI scans Reddit for "extremist" terms and plots bot-led intervention.
> https://www.fastcompany.com/91340556/this-ai-scans-reddit-for-extremist-terms-and-plots-bot-led-intervention
> [...]

From ilsa.bartlett at gmail.com Sat May 24 00:26:06 2025
From: ilsa.bartlett at gmail.com (ilsa)
Date: Fri, 23 May 2025 17:26:06 -0700
Subject: Re: [ExI] Student creates AI tool for de-radicalization
In-Reply-To:
References:
Message-ID:

Sad incursion of dehumanizing big foot. Sad situation of students as pessimist police.

Ilsa Bartlett
Institute for Rewiring the System
http://ilsabartlett.wordpress.com
http://www.google.com/profiles/ilsa.bartlett
www.hotlux.com/angel

"Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane

On Fri, May 23, 2025, 5:22 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> So how long until someone steals it and lets it go rogue (except doing it on purpose)?
> [...]

From hkeithhenson at gmail.com Sat May 24 03:06:53 2025
From: hkeithhenson at gmail.com (Keith Henson)
Date: Fri, 23 May 2025 20:06:53 -0700
Subject: Re: [ExI] Google's New Video-Generating AI Veo 3 May Be the End of Reality as We Know It
In-Reply-To:
References:
Message-ID:

We have lived through the whole Extropy experience. A few of us have been here since the start.

Bitcoin and the entire political situation might be considered an outgrowth.

It's been a wild ride and the rate of change is rapidly coming to a peak. No idea if anything we recognize will survive.
Best wishes,

Keith

On Fri, May 23, 2025 at 11:25 AM BillK via extropy-chat wrote:
> Google DeepMind describes the new model, Veo 3, as capable of delivering "best in class quality, excelling in physics, realism and prompt adherence" - and as videos posted to social media indicate, that marketing doesn't fall too far short.
> [...]

From avant at sollegro.com Sat May 24 02:31:57 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Fri, 23 May 2025 19:31:57 -0700
Subject: Re: [ExI] Student creates AI tool for de-radicalization
In-Reply-To:
References:
Message-ID: <38ffddafd95a862c7f5f84fc2259e9a0@sollegro.com>

On 2025-05-23 15:39, BillK via extropy-chat wrote:
> This AI scans Reddit for "extremist" terms and plots bot-led intervention.
> [...]

I think it makes a huge difference, in terms of both morality and consequences, whether or not the PrismX bot is honest about being a bot. In other words, if you ask it if it is a bot and it denies being one, then we have a huge problem down the line. Researchers have already demonstrated that when you finetune narrow improprieties into LLMs, like writing insecure code, it starts to snowball until the AI becomes broadly misaligned, praising Hitler and calling for the elimination of the human race.
https://www.alignmentforum.org/posts/ifechgnJRtJdduFGC/emergent-misalignment-narrow-finetuning-can-produce-broadly

If the student fine-tuned the AI to lie, then it might lead to a bad place. Remember that HAL not being allowed to tell the astronauts about the real purpose of the Discovery's mission to Jupiter is what drives HAL to kill the astronauts.

Stuart LaForge

From giulio at gmail.com Sat May 24 06:28:08 2025
From: giulio at gmail.com (Giulio Prisco)
Date: Sat, 24 May 2025 08:28:08 +0200
Subject: [ExI] Is AI about to take off fast?
Message-ID:

Is AI about to take off fast? Also, new writings about plasma and zero point metafysiks.
https://www.turingchurch.com/p/is-ai-about-to-take-off-fast

From atymes at gmail.com Sat May 24 12:48:45 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Sat, 24 May 2025 08:48:45 -0400
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To:
References:
Message-ID:

Like basically all articles with this sort of question for a title, the answer is "no".

Why do people bother writing articles like this when they are basically, and so predictably, always wrong?

On Sat, May 24, 2025 at 2:29 AM Giulio Prisco via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Is AI about to take off fast? Also, new writings about plasma and zero point metafysiks.
> https://www.turingchurch.com/p/is-ai-about-to-take-off-fast

From giulio at gmail.com Sat May 24 16:33:33 2025
From: giulio at gmail.com (Giulio Prisco)
Date: Sat, 24 May 2025 18:33:33 +0200
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To:
References:
Message-ID:

On 2025. May 24., Sat at 14:50, Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Like basically all articles with this sort of question for a title, the answer is "no".

The question is meaningful and experts disagree. I don't bet on yes, and I don't bet on no. Time will tell.

> Why do people bother writing articles like this when they are basically, and so predictably, always wrong?
> [...]

From pharos at gmail.com Sat May 24 16:55:43 2025
From: pharos at gmail.com (BillK)
Date: Sat, 24 May 2025 17:55:43 +0100
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To:
References:
Message-ID:

On Sat, 24 May 2025 at 17:36, Giulio Prisco via extropy-chat wrote:
> The question is meaningful and experts disagree. I don't bet on yes, and I don't bet on no. Time will tell.

It's called Betteridge's Law of Headlines, named after British technology journalist Ian Betteridge. The law states: "Any headline that ends in a question mark can be answered by the word 'no.'" It is often a clickbait technique, but it is also used when the author wants to open up a subject for discussion. :)
BillK
From pharos at gmail.com Sun May 25 11:17:55 2025
From: pharos at gmail.com (BillK)
Date: Sun, 25 May 2025 12:17:55 +0100
Subject: [ExI] Rate of Change
Message-ID:

On Sat, 24 May 2025 at 03:07, Keith Henson wrote:
> We have lived through the whole Extropy experience. A few of us have been here since the start.
> [...]
> It's been a wild ride and the rate of change is rapidly coming to a peak. No idea if anything we recognize will survive.

So, here is a news article about the announcements for last week. That's right - just one week's worth of changes! Did someone mention exponential rate of change?
BillK

Quotes:
AI race goes supersonic in milestone-packed week. 24 May 2025
The AI industry unleashed a torrent of major announcements this week, accelerating the race to control how humans search, create and ultimately integrate AI into the fabric of everyday life.
Why it matters: The breakneck pace of innovation, paired with the sky-high ambitions of tech's capitalist titans, is reshaping the AI landscape faster than regulators or the public can fully comprehend.
-------------------

From atymes at gmail.com Sun May 25 11:26:56 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Sun, 25 May 2025 07:26:56 -0400
Subject: Re: [ExI] Rate of Change
In-Reply-To:
References:
Message-ID:

On Sun, May 25, 2025 at 7:20 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> So, here is a news article about the announcements for last week.
> That's right - just one week's worth of changes!
> Did someone mention exponential rate of change?

And yet, outside of the AI industry, there has been no direct impact yet from last week's changes.

Almost as if these changes weren't actually that important when measured relative to the rest of the world, and this is just another industry breathlessly hyping itself over and over (compare to the previous week, and the one before that), once again.

From pharos at gmail.com Sun May 25 20:55:14 2025
From: pharos at gmail.com (BillK)
Date: Sun, 25 May 2025 21:55:14 +0100
Subject: Re: [ExI] Rate of Change
In-Reply-To:
References:
Message-ID:

On Sun, 25 May 2025 at 12:29, Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> And yet, outside of the AI industry, there has been no direct impact yet from last week's changes.
> [...]

Well, you wouldn't expect last week's announcements to have had much impact yet! :)

My main interest is AI and advances towards AGI. But I wondered if rapid changes were affecting any other industries. So I asked. :) It seems that there is a lot of rapid change going on everywhere! It is not just the AI industry.
BillK

Perplexity AI -

The global economy is witnessing rapid transformation across multiple industries, rivaling the pace of AI's evolution. Cybersecurity is surging due to escalating digital threats, driving innovation in AI-powered fraud detection and IoT security solutions. Blockchain extends beyond cryptocurrencies, revolutionizing sectors like finance (via decentralized platforms) and supply chains through transparent smart contracts. Healthcare is being reshaped by telemedicine, AI-driven drug discovery, and personalized genomics, while renewable energy accelerates with advancements in solar/wind tech, battery storage, and smart grid systems.

Fintech disrupts traditional banking through decentralized finance and AI-driven investment tools, while e-commerce evolves via augmented reality shopping and hyper-personalized subscription models. Transportation innovations include autonomous vehicles and electric mobility infrastructure, while logistics leverages robotics and blockchain for optimized supply chains. Education and workforce development prioritize AI-powered adaptive learning and VR-based training to address skill gaps in fast-changing industries.

Common drivers include technological convergence (AI, IoT, big data), sustainability imperatives, and shifting consumer expectations. These sectors exemplify how cross-industry innovation, regulatory adaptation, and investment in emerging technologies are redefining global business landscapes at unprecedented speeds. Success in this environment demands agility, continuous upskilling, and strategic integration of transformative technologies.

From postmowoods at gmail.com Mon May 26 01:29:28 2025
From: postmowoods at gmail.com (Kelly Anderson)
Date: Sun, 25 May 2025 19:29:28 -0600
Subject: Re: [ExI] Rate of Change
In-Reply-To:
References:
Message-ID:

And don't forget Trump's frenetic pace of throwing everything at the wall and seeing what sticks while the press is looking elsewhere.

-Kelly

On Sun, May 25, 2025, 2:56 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Well, you wouldn't expect last week's announcements to have had much impact yet! :)
> [...]

From spike at rainier66.com Mon May 26 02:22:55 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 25 May 2025 19:22:55 -0700
Subject: Re: [ExI] Rate of Change
In-Reply-To:
References:
Message-ID: <005301dbcde5$1764f9e0$462eeda0$@rainier66.com>

...> On Behalf Of Kelly Anderson via extropy-chat
Subject: Re: [ExI] Rate of Change

>... frenetic pace of throwing everything at the wall and seeing what sticks while the press is looking elsewhere. -Kelly

Kelly what if... the current POTUS (whose name I cannot recall) is using AI to generate distractions? He could get some junior staffer or intern to use AI to spin out letters suggesting reopening Alcatraz for instance, which would have environmental groups, historians and other special interest groups organizing to oppose something he never intended to do.
It wouldn't cost much of anything to create a distraction, while he went off and carried out the real agenda, which he never told to the press.

That would be the first example of government by AI, in a sense.

spike

From dsunley at gmail.com Tue May 27 16:59:02 2025
From: dsunley at gmail.com (Darin Sunley)
Date: Tue, 27 May 2025 10:59:02 -0600
Subject: Re: [ExI] Rate of Change
In-Reply-To: <005301dbcde5$1764f9e0$462eeda0$@rainier66.com>
References: <005301dbcde5$1764f9e0$462eeda0$@rainier66.com>
Message-ID:

One of the Trump administration's most senior and trusted advisors personally owns the largest piece of AI infrastructure in Western civilization.

I realize it's fashionable to write Trump off as an authoritarian baboon, but he is smart, and has surrounded himself with people even smarter. And the first thing smart Machiavellians do, if they can, is arrange to be massively underestimated by their opponents.

The probability that the Trump admin is not leveraging AI to the hilt is approximately zero.

On Sun, May 25, 2025, 8:24 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Kelly what if... the current POTUS (whose name I cannot recall) is using AI to generate distractions?
> [...]

From dsunley at gmail.com Tue May 27 17:17:20 2025
From: dsunley at gmail.com (Darin Sunley)
Date: Tue, 27 May 2025 11:17:20 -0600
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To:
References:
Message-ID:

Nothing ever happens, and nothing will continue to ever happen, until something suddenly happens.

1913 seemed like a perfectly normal year.

On Sat, May 24, 2025, 6:50 AM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Like basically all articles with this sort of question for a title, the answer is "no".
> [...]

From spike at rainier66.com Tue May 27 17:28:16 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 27 May 2025 10:28:16 -0700
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To:
References:
Message-ID: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>

Darin, it just seems wrong: we in this forum have been discussing AI for over three decades (more than that really, but I have only been reading it for 30 yrs) and yet... right when everything starts to happen, all the things we foresaw and discussed long ago are happening real time... we don't know what to do. We may have a few lame insights, but we don't know what to do, any more than the rest of the yahoos who showed up last week. All that insightful discussion, and we don't know what to do. Does anyone here know what the heck to do?

spike

From: extropy-chat On Behalf Of Darin Sunley via extropy-chat
Sent: Tuesday, 27 May, 2025 10:17 AM
To: ExI chat list
Cc: Darin Sunley
Subject: Re: [ExI] Is AI about to take off fast?

Nothing ever happens, and nothing will continue to ever happen, until something suddenly happens.

1913 seemed like a perfectly normal year.
[...]
In-Reply-To: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
Message-ID: 

Yes. See what you, personally, can do or contribute - and what you are interested in contributing - in both cases, keeping it to effective contributions.

No one cares about opinion screeds, manifestos, or the like, so don't do those. Bans won't stop enough people to matter - they'd need to stop everyone, which they won't - so don't waste your time arguing or lobbying for those.

If there is an AI system that you think should exist, make it. Learn the skills you need and then create at least a prototype.

Or if, like me, you can best contribute in some other way, do that instead.

The Extropian movement was never solely about AI. The importance of AI does not mean that all resources must be dedicated solely to AI, no more than any one topic can ever justify deresourcing everything else.

On Tue, May 27, 2025, 1:29 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Darin, it just seems wrong: we in this forum have been discussing AI for over three decades... Does anyone here know what the heck to do?
>
> spike

From spike at rainier66.com  Tue May 27 17:43:39 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 27 May 2025 10:43:39 -0700
Subject: [ExI] Is AI about to take off fast?
In-Reply-To: 
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
Message-ID: <00ab01dbcf2e$e23486a0$a69d93e0$@rainier66.com>

From: Darin Sunley
Subject: Re: [ExI] Is AI about to take off fast?

>...Recognize that intelligence is a force multiplier towards success in every area of your life, so leverage AI everywhere you can...

Do let us hope that AI acts as an intelligence-multiplier rather than a stupid-multiplier.

>...Buy and hold on Nvidia...

I do, just because the founders were local heroes. I live near the famous Denny's where it all started.

>...And hold on to your butts.

Sure, but I request an open-minded definition of the term "your".

spike

From atymes at gmail.com  Tue May 27 17:43:43 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 27 May 2025 13:43:43 -0400
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: 
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
Message-ID: 

That too. Even if you don't write AI yourself, there are plenty of free tools out there now that you can take advantage of. Whatever you do, if free automation can do part of it well and much faster than you can, let it if there are no drawbacks - which there often aren't.

On Tue, May 27, 2025, 1:38 PM Darin Sunley via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Recognize that intelligence is a force multiplier towards success in every area of your life, so leverage AI everywhere you can.
>
> [snip]
From spike at rainier66.com  Tue May 27 18:00:27 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 27 May 2025 11:00:27 -0700
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: 
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
Message-ID: <00c701dbcf31$3b5adfc0$b2109f40$@rainier66.com>

From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat
Subject: Re: [ExI] Is AI about to take off fast?

>...Yes. See what you, personally, can do or contribute - and what you are interested in contributing - in both cases, keeping it to effective contributions.

>...No one cares about opinion screeds, manifestos, or the like, so don't do those. Bans won't stop enough people to matter - they'd need to stop everyone, which they won't - so don't waste your time arguing or lobbying for those.

>...If there is an AI system that you think should exist, make it. Learn the skills you need and then create at least a prototype.

>...Or if, like me, you can best contribute in some other way, do that instead.

>...The Extropian movement was never solely about AI. The importance of AI does not mean that all resources must be dedicated solely to AI, no more than any one topic can ever justify deresourcing everything else... Adrian

Adrian, it is because of fellers like you that I have been reading ExI for all these years: you think before you start pecking away. I left your entire quote in there unedited. Well done, sir.

Regarding your comment on bans, this is the same conclusion I reached 25 years ago, at a Singularity conference at Stanford that Eliezer organized, the one where I struggled with my conscience, then decided to go out there at the break and steal the Singularity Ahead poster, only to find that some other immoral reprehensible bahstid had already stolen it before I got there. The evil sleazy wretch! Some depraved, unprincipled scoundrel took it. Inexcusable it is, vile, nefarious, entirely lacking conscience or human decency, swiping that poster before I could. But I digress.

I concluded in 2000 that any ban would only stop those who obey laws, leaving those who flagrantly disregard societal guidelines (such as the unscrupulous reprobate who stole that Singularity Ahead poster (that sneaky spalpeen is probably in his hideout somewhere coding up a runaway AI right now.))

spike

From avant at sollegro.com  Tue May 27 21:10:52 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 27 May 2025 14:10:52 -0700
Subject: [ExI] Is AI about to take off fast?
In-Reply-To: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com>
Message-ID: <746da2b7f75d04694db6a9fd48657398@sollegro.com>

On 2025-05-27 10:28, spike jones via extropy-chat wrote:

> Darin, it just seems wrong: we in this forum have been discussing AI for over three decades... All that insightful discussion, and we don't know what to do. Does anyone here know what the heck to do?

To what Darin and Adrian said, I would like to add a couple of things.

Firstly, if you thought fake news was bad, now you will have to deal with deepfake news. Video and audio recordings are now very flimsy evidence of anything that matters unless you are positively certain who they are from. This includes family members in some circumstances, so a "family password" might be in order, especially with regard to sending money or personal information. This is an example of why you can no longer trust video evidence:
https://www.youtube.com/watch?v=JxC4qZyFjvw&pp=ygUiZWxvbiBtdXNrIGl2ZiBjbGluaWMgYm9tYmluZyBzcGVybQ%3D%3D

Also, expect Nigerian princes to have perfect spelling and grammar when they are trying to scam you these days, even if they don't use a deepfake of your son asking you for money.

Secondly, assume that in an AI-related emergency, the government might cut power to large sections of the grid to thwart a runaway AI. So having generators, non-perishable food, a water supply, etc. might be a good idea.

Thirdly, plan on Universal Basic Income or massive civil unrest, with not much in between.

That's all I can think of for now. I will consider this more and get back to you.

Stuart LaForge

From atymes at gmail.com  Tue May 27 21:22:51 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 27 May 2025 17:22:51 -0400
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: <746da2b7f75d04694db6a9fd48657398@sollegro.com>
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com> <746da2b7f75d04694db6a9fd48657398@sollegro.com>
Message-ID: 

On Tue, May 27, 2025 at 5:12 PM Stuart LaForge via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Also, expect Nigerian princes to have perfect spelling and grammar when they are trying to scam you these days...

Maybe in the future, but most of the low-grade scams like that I've seen have still been readily identifiable.

> Secondly, assume that in an AI-related emergency, the government might cut power to large sections of the grid to thwart a runaway AI...

That's a good idea even aside from AI. Natural disasters still happen.

> Thirdly, plan on Universal Basic Income or massive civil unrest, with not much in between.

People keep promising massive civil unrest. Now that conditions are here which should generate it, I'm seeing a lot of nothing on that front. Complaints, sure. Increased vitriol and hate, sure. But actually going out on the street and doing something? The Floyd protests did a lot more than what's going on now.
I heard of one political tracker who kept an ear out for which animal people likened each major political party to. For years, Democrats had been likened to turtles or other slow, plodding creatures. Now they're being likened to deer in headlights: they see trouble incoming, and just stand there letting it happen.

From jasonresch at gmail.com  Tue May 27 22:00:39 2025
From: jasonresch at gmail.com (Jason Resch)
Date: Tue, 27 May 2025 17:00:39 -0500
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: 
References: 
Message-ID: 

On Sat, May 24, 2025, 1:29 AM Giulio Prisco via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Is AI about to take off fast? Also, new writings about plasma and zero point metafysiks.
> https://www.turingchurch.com/p/is-ai-about-to-take-off-fast

If we take a step back and appreciate the progress we've seen in AI in less than 10 years, it's impossible to avoid concluding that AI is already taking off fast.

We went from this crude image generation: https://arxiv.org/pdf/1605.05396 (see the birds and flowers)

To this (full video and sound generation nearly indistinguishable from reality): https://youtu.be/Y9UX2qnJwPA

Imagine going to the year 2000 and telling them what would be possible in 25 years:

1. General purpose learning algorithms that can teach themselves to play any game given only the rules; one taught itself to play chess better than any human (or any human-programmed chess engine) in just four hours.

2. AI that has effectively solved the protein folding problem.

3. AI that can receive a question in natural language and generate a reasonable response on any topic, pass engineering interviews, win at programming competitions, pass the bar, score over 100 on IQ tests (155 verbal IQ), and again, self-taught from reading the internet.

4. Cars that pilot themselves through our busy city streets, in full production and available to the public.

Would you be believed? Would anyone deny that is incredible progress, or that we're in the midst of an AI takeoff?

I think humans have less than 2 years left as the undisputed smartest creature on this planet.

Jason

From spike at rainier66.com  Tue May 27 23:48:43 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 27 May 2025 16:48:43 -0700
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: 
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com> <746da2b7f75d04694db6a9fd48657398@sollegro.com>
Message-ID: <016001dbcf61$e1a92e60$a4fb8b20$@rainier66.com>

>... On Behalf Of Adrian Tymes via extropy-chat

>...People keep promising massive civil unrest. Now that conditions are here which should generate it, I'm seeing a lot of nothing on that front...

The protest signs would be incomprehensible:

Stop the Singularity

No AI in Our Schools

Carbon Based Intelligence Only

Rage Against the MACHINE

What do we want? We don't really know! When do we want it? We don't know that either!

Before civil unrest can possibly mean anything, it needs focus. We have had over 3 decades, and we don't know what the focus is either.
Imagine how puzzling this is to people who learned about AI for the first time from the then-VPOTUS and AI czar, who made a speech on it in July 2023:

"I think the first part of this issue that should be articulated is AI is kind of a fancy thing. First of all, it's two letters. It means artificial intelligence, but ultimately what it is, is it's about machine learning. And so, the machine is taught - and part of the issue here is what information is going into the machine that will then determine - and we can predict then, if we think about what information is going in, what then will be produced in terms of decisions and opinions that may be made through that process."

OK. Suppose one of the civil restless refers to that speech, knowing nothing else about AI. What will she put on her protest sign? How about:

No Fancy Things

More than Two Letters!

Teach Machines the TRUTH

Well, what then?

spike

From spike at rainier66.com  Wed May 28 00:10:00 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 27 May 2025 17:10:00 -0700
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: 
References: 
Message-ID: <018101dbcf64$dae08cb0$90a1a610$@rainier66.com>

>... On Behalf Of Jason Resch via extropy-chat
Subject: Re: [ExI] Is AI about to take off fast?

In all that discussion, I don't remember anyone predicting that when it becomes clear the singularity really is happening, no one, not even those who have been discussing AI for decades, will know how to control it. The closest I can think of is a series of posts from Eliezer on Less Wrong where he seemed to go into a deep depression. The phrase I recall from him is "I have cried my last tear for humanity." He seemed to be resigned to our being unable to stop AI-based destruction.

>...4. Cars that pilot themselves through our busy city streets, in full production and available to the public... Jason

Jason, I was in San Francisco with Anders Sandberg a coupla weeks ago. I had read about robo-taxis, but I hadn't realized how many of them there are. In San Francisco where we were, they had jillions of those robo-taxis whirring about. They solved the problem of privacy: the back windows were blacked out. We know what they were doing back there of course, but no free porno shows.

spike

From avant at sollegro.com  Wed May 28 00:50:34 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 27 May 2025 17:50:34 -0700
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: <016001dbcf61$e1a92e60$a4fb8b20$@rainier66.com>
References: <009501dbcf2c$bb9c3990$32d4acb0$@rainier66.com> <746da2b7f75d04694db6a9fd48657398@sollegro.com> <016001dbcf61$e1a92e60$a4fb8b20$@rainier66.com>
Message-ID: <64aa5aead8ae2a4469f06dabc31b80fe@sollegro.com>

On 2025-05-27 16:48, spike jones via extropy-chat wrote:

> >...People keep promising massive civil unrest. Now that conditions are here which should generate it, I'm seeing a lot of nothing on that front...
>
> The protest signs would be incomprehensible:
>
> Stop the Singularity
> No AI in Our Schools
> Carbon Based Intelligence Only
> Rage Against the MACHINE
>
> What do we want? We don't really know! When do we want it? We don't know that either!

Give it time. The layoffs have already begun.
Take into account that most states offer 26 weeks of unemployment benefits, and then the shit will start hitting the fan.
https://www.forbes.com/sites/jackkelly/2025/05/10/big-layoffs-are-hitting-these-sectors-the-hardest/

> Before civil unrest can possibly mean anything, it needs focus. We have had over 3 decades, and we don't know what the focus is either.
>
> [snip]
>
> Well, what then?

Alfred Henry Lewis once wrote "There are only nine meals between mankind and anarchy". Hunger has been the focus of many instances of historical civil unrest, such as the French Revolution. While every CEO loves the idea of being able to use AI to cut the costs of producing their goods and services, if every company and government lays off enough of their employees, then there will be fewer and fewer people who can afford to buy those goods and services. At that point the rich will either have to live in fortresses protected by private armies while civilization collapses around them, or they will have to pay the taxes necessary to fund a Universal Basic Income program. Never in history has a large percentage of the population meekly starved to death for the sake of the ruling class.

Stuart LaForge

From atymes at gmail.com  Wed May 28 01:35:17 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 27 May 2025 21:35:17 -0400
Subject: Re: [ExI] Is AI about to take off fast?
In-Reply-To: <018101dbcf64$dae08cb0$90a1a610$@rainier66.com>
References: <018101dbcf64$dae08cb0$90a1a610$@rainier66.com>
Message-ID: 

On Tue, May 27, 2025 at 8:11 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> In all that discussion, I don't remember anyone predicting that when it becomes clear the singularity really is happening, no one, not even those who have been discussing AI for decades, will know how to control it.

Oh, that? That one's easy. You don't. Any more than a tidal wave, a hurricane, or the like: you see when and where it's going to happen, plan and anticipate, then when it happens you react as best you can. But control it? Not happening, by anyone.
From avant at sollegro.com  Wed May 28 03:22:48 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 27 May 2025 20:22:48 -0700
Subject: Re: [ExI] Rate of Change
In-Reply-To: 
References: <005301dbcde5$1764f9e0$462eeda0$@rainier66.com>
Message-ID: <15e44a244e7da76a646e0efc325d7b4e@sollegro.com>

On 2025-05-27 09:59, Darin Sunley via extropy-chat wrote:

> One of the Trump administration's most senior and trusted advisors personally owns the largest piece of AI infrastructure in Western civilization.
>
> I realize it's fashionable to write Trump off as an authoritarian baboon, but he is smart, and has surrounded himself with people even smarter. And the first thing smart Machiavellians do, if they can, is arrange to be massively underestimated by their opponents.
>
> The probability that the Trump admin is not leveraging AI to the hilt is approximately zero.

I do think Trump is hip to AI, but I think it is hit or miss with regards to his cabinet. Take for example our illustrious secretary of education: https://www.youtube.com/shorts/6QL0c5BbCR4

Stuart LaForge

From pharos at gmail.com  Wed May 28 08:44:54 2025
From: pharos at gmail.com (BillK)
Date: Wed, 28 May 2025 09:44:54 +0100
Subject: [ExI] Eliezer at SXSW March 2025
Message-ID: 

The prophet of Silicon Valley doom: 'We must stop, we are not ready for AI'
Eliezer Yudkowsky, a pioneer in AI research, called for halting AI development before it's too late; according to him, humanity lacks the ability to control technology that will eventually surpass human intelligence; 'It can evolve beyond our control'
Amir Bogen  03.17.25

Quotes:
Yudkowsky also criticized public misconceptions about AI, particularly the belief that AI will remain confined to an advisory role. "People think no one would be foolish enough to create an AI that acts independently," he said. "They also assume there's no need for AGI when specialized AI can handle tasks like language translation or medical diagnostics."

According to Yudkowsky, these assumptions have already been debunked. "Not only do we already have AGI systems delivering impressive results," he said, "but some are also capable of making decisions and acting independently. We must stop everything. We are not ready. We do not have the technological capability to design a super intelligent AI that is polite, obedient and aligned with human intentions - and we are nowhere close to achieving that."

Drawing on his knowledge of AI and his readings in history, Yudkowsky believes AI poses a greater threat than any political conflict. "The human brain is flawed. We are plagued by self-doubt and the fear of failure," he explained. "Machines, by contrast, pursue their objectives relentlessly. When the Stockfish chess engine plays against a human, it does not hesitate. AI systems will operate similarly - without second-guessing or considering the consequences."
---------------------

"Danger, Will Robinson!" indeed.
BillK

From nebathenemi at yahoo.co.uk  Wed May 28 12:11:12 2025
From: nebathenemi at yahoo.co.uk (Tom Nowell)
Date: Wed, 28 May 2025 12:11:12 +0000 (UTC)
Subject: [ExI] Looking back on the list
References: <673635861.3998574.1748434272456.ref@mail.yahoo.com>
Message-ID: <673635861.3998574.1748434272456@mail.yahoo.com>

I've decided to try and tackle the stubborn "unread" email folder.
I cleared most of it, but there was a period in 2013 when I was working long shifts and had unreliable internet access and just didn't check my email for a couple of months. Looking through these, I've found old extropy-chat digests which entertain me greatly. Anders contributed regularly, we could just about talk about politics without the list melting down, and people were more interested in bitcoin than AI. (On October 23, 2013, Bitcoin hit $210, which pleased the boosters. Who would have guessed how the crypto market would twist and turn?)

Depressingly, all the doomy talk of the future of US politics and the budget deficit turned out to be laughably optimistic. Reality has dumbfounded our expectations over and over. Here's hoping that in the 2030s we'll all be pleasantly surprised by how things are turning out.

Tom

From atymes at gmail.com  Wed May 28 12:52:26 2025
From: atymes at gmail.com (Adrian Tymes)
Date: Wed, 28 May 2025 08:52:26 -0400
Subject: Re: [ExI] Eliezer at SXSW March 2025
In-Reply-To: 
References: 
Message-ID: 

> but some are also capable of making decisions and acting independently.

He's right about that. See "agentic" AI, among other initiatives.

> We must stop everything. We are not ready.

"Who 'we', kemosabe?"

These systems will be developed, and soon - if not by us, then by the bad guys, who are defined in their badness in this case by intentionally designing the systems toward malicious ends (biasing their service in favor of authoritarian regimes, for instance). "We" halting development will only empower them.

He has called for all-out war on them. Even that would probably not suffice, and in any case, it's not happening. Given this reality, calls for "we" to stop are counterproductive.

From pharos at gmail.com  Wed May 28 16:16:39 2025
From: pharos at gmail.com (BillK)
Date: Wed, 28 May 2025 17:16:39 +0100
Subject: Re: [ExI] Eliezer at SXSW March 2025
In-Reply-To: 
References: 
Message-ID: 

On Wed, 28 May 2025 at 13:55, Adrian Tymes via extropy-chat wrote:

> These systems will be developed, and soon - if not by us, then by the bad guys, who are defined in their badness in this case by intentionally designing the systems toward malicious ends (biasing their service in favor of authoritarian regimes, for instance). "We" halting development will only empower them.
>
> He has called for all-out war on them. Even that would probably not suffice, and in any case, it's not happening. Given this reality, calls for "we" to stop are counterproductive.

Oh, I think Eliezer is well aware that nobody is going to stop AI research. Vance has said that the USA is in an AI arms race with China. Of course, if Eliezer is correct in thinking that a runaway AGI will end the human race, then it makes no difference whether it is developed by the USA or China. He is just trying to persuade all AI researchers to be really, really careful.

As Jason said, the AI problem will probably be resolved in the near future, for good or ill.
BillK

From spike at rainier66.com  Wed May 28 21:09:11 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Wed, 28 May 2025 14:09:11 -0700
Subject: Re: [ExI] Looking back on the list
In-Reply-To: <673635861.3998574.1748434272456@mail.yahoo.com>
References: <673635861.3998574.1748434272456.ref@mail.yahoo.com> <673635861.3998574.1748434272456@mail.yahoo.com>
Message-ID: <009201dbd014$c2be5b20$483b1160$@rainier66.com>

>...Depressingly, all the doomy talk of the future of US politics and the budget deficit turned out to be laughably optimistic. Reality has dumbfounded our expectations over and over.

Tom, regarding the doomy talk about budget deficits, I don't take a bit of pleasure in being proven right, not one single binary digit of pleasure in it.

>...Here's hoping that in the 2030s we'll all be pleasantly surprised by how things are turning out. Tom

May we all.

spike

From avant at sollegro.com  Thu May 29 20:11:35 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Thu, 29 May 2025 13:11:35 -0700
Subject: [ExI] Drug cocktail extends healthy lifespan in mice
Message-ID: <7fa009866634289fef7c5686cf5ce83b@sollegro.com>

A combination of rapamycin and trametinib extends the healthspan and lifespan of mice by 30%. I have not had a chance to read the article yet because I am at work right now, but rapamycin is an immunosuppressant used in organ transplants to prevent tissue rejection, and trametinib is an anti-cancer drug used to treat a variety of cancers. Both work by inhibiting cell growth. Together they target the oncogenic Ras/MEK/ERK pathway that is involved in numerous cancers.

https://www.nature.com/articles/s43587-025-00876-4

Abstract
Suppression of the insulin-IGF-mTORC1-Ras network ameliorates aging in animals. Many drugs have targets in the network because of its roles in cancer and metabolic disease and are candidates for repurposing as geroprotectors. Rapamycin, an established geroprotective drug, blocks mTORC1 signaling, and trametinib inhibits the Ras-MEK-ERK pathway. In this study, we assessed survival and health of male and female mice treated with trametinib, rapamycin or their combination. We show here that trametinib treatment extended lifespan in both sexes and that its combination with rapamycin was additive. Combination treatment reduced liver tumors in both sexes and spleen tumors in male mice, blocked the age-related increase in brain glucose uptake and strongly reduced inflammation in brain, kidney, spleen and muscle and circulating levels of pro-inflammatory cytokines. We conclude that trametinib is a geroprotector in mice and that its combination with rapamycin is more effective than either drug alone, making the combination a candidate for repurposing as a gerotherapy in humans.
----------------

Stuart LaForge

From avant at sollegro.com  Fri May 30 15:27:56 2025
From: avant at sollegro.com (Stuart LaForge)
Date: Fri, 30 May 2025 08:27:56 -0700
Subject: [ExI] AI Model Collapse
Message-ID: <17c39a446e51db7e03e0403561452eec@sollegro.com>

Since we have been talking about AIs recursively self-improving on the lists, I thought this was a pertinent article that could affect the timeline to AGI. There is a phenomenon called AI model collapse, which occurs when AIs are trained on their own output. This produces an echo-chamber effect that reinforces hallucinations, biases, and misinformation, resulting in a degradation of output quality.
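You can see the dynamic in miniature with the simplest possible generative model. Here is a toy Python sketch (my own back-of-the-envelope illustration, not code from the paper): fit a single Gaussian to samples drawn from the previous generation's fit, over and over, and watch what happens to the distribution's spread.

import random
import statistics

random.seed(1)
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n = 20                 # small training sets make the effect visible quickly

for gen in range(1, 201):
    # each generation trains only on data sampled from the previous model
    data = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mu = {mu:+.3f}  sigma = {sigma:.3f}")

On average each refit slightly underestimates sigma (small-sample bias), and resampling compounds the error, so the fitted distribution typically narrows toward a spike: the tails, the rare events, are the first things forgotten. The Nature paper below demonstrates the same tail loss in LLMs, VAEs, and Gaussian mixture models.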
Since, these days, much of the output of AI gets put on the Internet, and new AI models are then trained on the Internet, their training data becomes contaminated with AI output, which can lead to AI model collapse. This is the informatic equivalent of biological inbreeding, where deleterious mutations get amplified and reinforced in a genetic lineage, resulting in all sorts of pathologies.

https://www.nature.com/articles/s41586-024-07566-y

Abstract
Stable diffusion revolutionized image creation from descriptive text. GPT-2 (ref. 1), GPT-3(.5) (ref. 2) and GPT-4 (ref. 3) demonstrated high performance across a variety of language tasks. ChatGPT introduced such language models to the public. It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as "model collapse" and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
---------------------------------

Stuart LaForge

From dsunley at gmail.com  Fri May 30 15:36:05 2025
From: dsunley at gmail.com (Darin Sunley)
Date: Fri, 30 May 2025 09:36:05 -0600
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: <17c39a446e51db7e03e0403561452eec@sollegro.com>
References: <17c39a446e51db7e03e0403561452eec@sollegro.com>
Message-ID: 

I think the word "indiscriminate" is doing a lot of heavy lifting here. Afaik, all of the major AI companies are now using highly curated text corpora for training, which do include text generated by other LLMs and possibly their own, but not in an indiscriminate manner. The paper describes a degenerate case where an LLM's output becomes the majority of its own training corpus, and this is not something that has been observed in the real world yet.

On Fri, May 30, 2025, 9:30 AM Stuart LaForge via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Since we have been talking about AIs recursively self-improving on the lists, I thought this was a pertinent article that could affect the timeline to AGI. There is a phenomenon called AI model collapse, which occurs when AIs are trained on their own output.
>
> [snip]
From spike at rainier66.com  Fri May 30 15:47:14 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Fri, 30 May 2025 08:47:14 -0700
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: <17c39a446e51db7e03e0403561452eec@sollegro.com>
References: <17c39a446e51db7e03e0403561452eec@sollegro.com>
Message-ID: <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>

> On Behalf Of Stuart LaForge via extropy-chat
Subject: [ExI] AI Model Collapse

>...Since we have been talking about AIs recursively self-improving on the lists...  Stuart LaForge

I was astonished by a comment by a TV host who does programs about various jobs and industries, Mike Rowe. We have been focused on recursive AI, the singularity, etc. for so long that we completely take it for granted. I had forgotten that many, or perhaps most, people think of AI as some new tech development which will be just fine as an office assistant, helping solve the labor shortage there. Rowe commented Wednesday:

"...We've got a huge labor shortage and the push to get to the AI thing over the finish line. That means data centers. We've got to train people to build these data centers..."  Mike Rowe

Get the "AI thing" over the finish line. He is thinking of it as artificial office workers, doing the grunt work that humans don't want to do. Is there any reason to think that human-level AI wouldn't write self-improving software? I can't think of one. We bio-intelligences work our entire lives on learning and self-improvement. But some bio-I is content to sit in some menial office job for years, making no effort to rewire itself.

How do we find out if there is general public unawareness of the danger of recursive AI? Stuart, or anyone else here, do you know enough regular people to find out?
spike

From pharos at gmail.com  Fri May 30 16:30:28 2025
From: pharos at gmail.com (BillK)
Date: Fri, 30 May 2025 17:30:28 +0100
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>
Message-ID: 

On Fri, 30 May 2025 at 16:49, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> How do we find out if there is general public unawareness of the danger of recursive AI? Stuart, or anyone else here, do you know enough regular people to find out?
>
> spike

I asked Perplexity AI to research this question. It said 'Polls show 63% of adults haven't heard the term "recursive self-improvement"'. But techies are aware of the problem. Full report below.
BillK

Perplexity AI -

Public awareness of AI's recursive self-improvement risks remains *limited but growing*, concentrated primarily in technical and policy circles rather than mainstream discourse. Below is a detailed breakdown of current awareness levels and key knowledge gaps:
------------------------------

Awareness Levels Across Groups

*1. Technical Communities*
- *High awareness*: 78% of AI researchers recognize RSI as a critical risk factor in AGI development [5].
- *Active debates*: Forums like LessWrong and arXiv host ongoing discussions about alignment challenges and containment strategies.

*2. Policy Makers*
- *Moderate awareness*: Recent EU AI Act amendments reference "autonomous self-improvement systems" but lack specific RSI safeguards [4].
- *Gaps*: Only 12% of national AI strategies explicitly address recursive capability growth [2].

*3. General Public*
- *Low awareness*: Polls show 63% of adults haven't heard the term "recursive self-improvement" [6].
- *Misinformed perceptions*: Media coverage often conflates RSI with generic AI risks like job displacement or bias.
------------------------------

Key Public Knowledge Gaps
- *Instrumental convergence*: Few understand how RSI systems might develop unplanned goals (e.g., self-preservation) to achieve objectives [5].
- *Temporal risks*: The concept of "hard takeoff" (exponential capability gains in weeks/days) remains poorly communicated [3].
- *Containment challenges*: Only 22% recognize current AI safety protocols (e.g., OpenAI's "unbreakable box") as untested theoretical constructs [3].
------------------------------

Drivers of Growing Awareness
- *Viral content*: January 2025 X post about OpenAI's RSI experiments reached 432k views [3].
- *Industry leaks*: Anthropic's 2024 "alignment faking" study revealed in Claude documentation [5].
- *Fiction/media*: Netflix's *Singularity* drama (2024) dramatized RSI risks for 18M viewers.
------------------------------

Critical Unaddressed Issues
1. *Security literacy*: 89% of software engineers lack training in RSI-specific threat modeling [4].
2. *Ethical frameworks*: No consensus exists on liability for harms caused by self-modified AI systems [1].
3. *Detection capability*: Current tools fail to identify 38% of autonomous code modifications in tests [6].
------------------------------

While technical communities increasingly prioritize RSI risks, effective public understanding remains hampered by *complex terminology*, *competing narratives*, and *lack of accessible educational resources*. Bridging this gap requires coordinated efforts between researchers, journalists, and educators to translate esoteric risks into actionable public knowledge.

Citations:
1. https://www.linkedin.com/pulse/understanding-recursive-self-improvement-ai-marty-crean-szfec
2. https://www.linkedin.com/pulse/future-ai-why-recursive-self-improvement-may-lead-hard-gary-ramah-grn5c
3. https://www.ctol.digital/news/openai-recursive-self-improvement-breakthrough-ai-takeoff/
4. https://www.datapro.news/p/the-risks-of-recursive-self-improvement
5. https://en.wikipedia.org/wiki/Recursive_self-improvement
6. https://lew.ro/in-5-years-ai-will-write-and-improve-its-own-code-are-we-on-the-brink-of-recursive-self-improvement/
7. https://jacobbuckman.substack.com/p/we-arent-close-to-creating-a-rapidly
8. https://www.reddit.com/r/artificial/comments/182bsfa/if_you_are_confident_that_recursive_ai/
9. https://techinsights.linklaters.com/post/102j9z6/are-we-approaching-the-foothills-of-genai-recursive-self-improvement-and-what-mi
10. https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/ai-2030-scenarios-report-html-annex-c
11. https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/
12. https://community.openai.com/t/recursive-self-awareness-development/1165681
13. https://gh.bmj.com/content/8/5/e010435
14. https://openreview.net/forum?id=46Zgqo4QIU
15. https://www.lawfaremedia.org/article/ai-will-not-want-to-self-improve
16. https://www.lesswrong.com/posts/byKF3mnaNRrbkDPWv/evidence-on-recursive-self-improvement-from-current-ml
-------------------------------------------------

From spike at rainier66.com  Fri May 30 16:56:53 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Fri, 30 May 2025 09:56:53 -0700
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: 
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>
Message-ID: <00ab01dbd183$d8a87040$89f950c0$@rainier66.com>

From: extropy-chat On Behalf Of BillK via extropy-chat
Subject: Re: [ExI] AI Model Collapse
>...I asked Perplexity AI to research this question. It said 'Polls show 63% of adults haven't heard the term "recursive self-improvement"'. But techies are aware of the problem. Full report below. BillK

>...[snip]

BillK, thx. If you read thru the text of the former VPOTUS speech on AI, the one with the infamous "...it's two letters... a kind of fancy thing..." comments, the risk she pointed out was that AI would run companies without understanding or compensating for human bias. BIAS! This is how the government thinks of the risk of AI? We have a TV guy who is an expert in industry interested in getting "the AI thing over the line" in order to solve the labor shortage in the office, while the former VP and AI czar warns of bias, while coming across like a fifth grader reporting on a book she hadn't read.

Elon Musk commented recently that AI poses a real existential risk, a 20% chance of destroying civilization in the next decade. Yet that comment made so little public impact that I can't even find it now, which is making me wonder if he ever said it.

spike

From dsunley at gmail.com  Fri May 30 17:10:26 2025
From: dsunley at gmail.com (Darin Sunley)
Date: Fri, 30 May 2025 11:10:26 -0600
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>
Message-ID: 

We find out if there is general public unawareness of the dangers of recursive AI by rewriting the sentence as "The general public is generally unaware." and calling it a day. :)

On Fri, May 30, 2025 at 9:49 AM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> How do we find out if there is general public unawareness of the danger of recursive AI? Stuart, or anyone else here, do you know enough regular people to find out?
>
> spike
From pharos at gmail.com  Fri May 30 17:33:04 2025
From: pharos at gmail.com (BillK)
Date: Fri, 30 May 2025 18:33:04 +0100
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: <00ab01dbd183$d8a87040$89f950c0$@rainier66.com>
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com> <00ab01dbd183$d8a87040$89f950c0$@rainier66.com>
Message-ID: 

On Fri, 30 May 2025 at 17:56, <spike at rainier66.com> wrote:

> Elon Musk commented recently that AI poses a real existential risk, a 20% chance of destroying civilization in the next decade. Yet that comment made so little public impact that I can't even find it now, which is making me wonder if he ever said it.
>
> spike
----------------------------------------

Tut! You must be asking the wrong search engine.
Musk has been saying that many times since 2024. Startpage gives a long list.
Quote:
Mar 1, 2025, 12:18 AM GMT
Elon Musk said in a Joe Rogan interview there's "only a 20% chance of annihilation."
--------------------

BillK

From spike at rainier66.com  Fri May 30 17:45:00 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Fri, 30 May 2025 10:45:00 -0700
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: 
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com> <00ab01dbd183$d8a87040$89f950c0$@rainier66.com>
Message-ID: <00dc01dbd18a$919ae0a0$b4d0a1e0$@rainier66.com>

>... On Behalf Of BillK via extropy-chat
Subject: Re: [ExI] AI Model Collapse
>...Tut! You must be asking the wrong search engine...

No doubt about that. Google has demoted Joe Rogan ever since he gave the three-hour interview to the current POTUS (whose name I cannot recall at the moment) and anyone who is thought to be allied with him. Thanks for finding the Musk comment. You have repeatedly demonstrated your expertise in finding stuff. You don't suck more than anyone I know in that discipline. Oy vey, I am around teenagers a lot these days because of my school coaching activities. Now I am picking up far too much teen-speak.

>...Musk has been saying that many times since 2024. Startpage gives a long list. Quote: Mar 1, 2025, 12:18 AM GMT Elon Musk said in a Joe Rogan interview there's "only a 20% chance of annihilation." BillK

When I add the specific term "annihilation", Google can find it.

spike

From spike at rainier66.com  Fri May 30 17:49:01 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Fri, 30 May 2025 10:49:01 -0700
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: 
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com>
Message-ID: <00dd01dbd18b$2113b2c0$633b1840$@rainier66.com>

>... On Behalf Of Darin Sunley via extropy-chat
Subject: Re: [ExI] AI Model Collapse

>...We find out if there is general public unawareness of the dangers of recursive AI by rewriting the sentence as "The general public is generally unaware." and calling it a day. :)

Darin, you point out another danger. What if... somehow... the public suddenly becomes aware of the danger, then really doesn't understand it? Then they react with absurd overcorrections, such as destroying data centers - the very data centers Mike Rowe says we need to train people to build. Only then do they realize how brittle our commerce system has become. It cannot function without electronic communications. We have no plausible or feasible backup exchange medium if communications go down for more than a day or two (how much hard cash do you own right now?)

spike

From mbb386 at main.nc.us  Fri May 30 18:53:32 2025
From: mbb386 at main.nc.us (MB)
Date: Fri, 30 May 2025 14:53:32 -0400
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: <00dd01dbd18b$2113b2c0$633b1840$@rainier66.com>
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com> <00dd01dbd18b$2113b2c0$633b1840$@rainier66.com>
Message-ID: 

Indeed, some discovered this "no backup" thing during Hurricane Helene last fall. Once some stores began to open again, they only accepted cash - no way to deal with cards. The banks opened but would only allow a certain low amount of cash withdrawal.

I was very glad to have water, food, and cash on hand - but I'm *old* and that is "old school" prep.

That hurricane was *localized destruction* - I cannot see living through a full-scale version of such a thing, like an EMP or the bombing of major data centers. We did not have potable water for... a month?

Regards,
MB

On Fri, May 30, 2025 13:49, spike jones via extropy-chat wrote:

> Darin, you point out another danger. What if... somehow... the public suddenly becomes aware of the danger, then really doesn't understand it? Then they react with absurd overcorrections, such as destroying data centers - the very data centers Mike Rowe says we need to train people to build.
From spike at rainier66.com  Fri May 30 20:44:04 2025
From: spike at rainier66.com (spike at rainier66.com)
Date: Fri, 30 May 2025 13:44:04 -0700
Subject: Re: [ExI] AI Model Collapse
In-Reply-To: 
References: <17c39a446e51db7e03e0403561452eec@sollegro.com> <007c01dbd17a$1e07bb00$5a173100$@rainier66.com> <00dd01dbd18b$2113b2c0$633b1840$@rainier66.com>
Message-ID: <002101dbd1a3$95726c20$c0574460$@rainier66.com>

-----Original Message-----
From: extropy-chat On Behalf Of MB via extropy-chat
Subject: Re: [ExI] AI Model Collapse

>...Indeed, some discovered this "no backup" thing during Hurricane Helene last fall. Once some stores began to open again, they only accepted cash - no way to deal with cards... We did not have potable water for... a month? Regards, MB

Oy vey, MB, we are so glad you are safe. We had no idea you went thru that. So sorry, and glad all is well now.

We have cheerfully devoured every change in the way we do trading, so delightfully convenient, so secure, so good. Until... it breaks. Then we see how brittle it is, and we see how difficult it is to get everything running again. I used the term brittle rather than fragile for a reason: it conveys the notion that it shatters into small pieces, which slows down everything else, breaking other pieces of our trade system. When that is broken, there are almost immediate consequences due to lack of necessities, as you and the other Helene survivors discovered.

No lesson was learned, I fear. Most people will not make an emergency plan nor have the necessities on hand. As an instructor of an emergency preparedness course, I am appalled at how little real understanding there is of the subject.

spike

From pharos at gmail.com  Fri May 30 22:30:20 2025
From: pharos at gmail.com (BillK)
Date: Fri, 30 May 2025 23:30:20 +0100
Subject: [ExI] Pausing cell death could slow aging effects
Message-ID: 

May 29, 2025
Could 'pausing' cell death be the final frontier in medicine on Earth and beyond?
by University College London

Quote:
The process of necrosis, a form of cell death, may represent one of the most promising ways to change the course of human aging, disease and even space travel, according to a new study by researchers at UCL, drug discovery company LinkGevity and the European Space Agency (ESA).

Challenging prevailing views, the paper brings together evidence from cancer biology, regenerative medicine, kidney disease, and space health to argue that necrosis is not merely an endpoint, but a key driver of aging that presents an opportunity for intervention.
---------------------

BillK