From pharos at gmail.com Sat Jul 1 02:12:19 2023 From: pharos at gmail.com (BillK) Date: Sat, 1 Jul 2023 03:12:19 +0100 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: References: Message-ID: On Fri, 30 Jun 2023 at 23:37, Jason Resch via extropy-chat wrote: > If you believe that philosophical zombies are logically impossible, then consciousness is logically necessary (in the presence of certain behavioral, sensory, and introspective capacities). Accordingly, I believe that consciousness was an unavoidable consequence during the course of evolution of life on Earth as once nature created creatures having certain capacities/abilities, consciousness had no other choice but to exist, it became logically necessary. > > The same, I believe, is true for AI. It is unavoidable when we create machines of certain abilities, and I believe existing software is already conscious. For example, my open source project on artificial sentience could be such an example: https://github.com/jasonkresch/bots > > Jason > _______________________________________________ I am very impressed with the ability of the latest LLM chatbots to understand and reply to complex questions. Some can write short essays in response. But I doubt that they should be considered to be 'conscious'. For example, I asked an AI - Does computer consciousness require agency? The admirable essay I received in return can be summarized as: probably yes, but some researchers disagree. The next generation of AI should be something to behold! BillK From avant at sollegro.com Sat Jul 1 23:14:36 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 01 Jul 2023 16:14:36 -0700 Subject: [ExI] Conscious AI or a Zombie? 
In-Reply-To: References: Message-ID: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Quoting Jason Resch via extropy-chat : > If you believe that philosophical zombies are logically impossible, then > consciousness is logically necessary (in the presence of certain > behavioral, sensory, and introspective capacities). Actually, I would more rigorously say that philosophical zombies are impossible precisely BECAUSE they violate what I call the causal awareness principle, or CAP. Causal awareness is hereby defined as the ability to sense, act on, and react to one's environment as well as one's own effect on it. Anything that is causally aware is able to act rationally. Consciousness is necessary, but not sufficient, for causal awareness, and therefore, rational action. However, something cannot act rationally without being conscious too. Acting in one's own best interests requires consciousness. A sentient being must be conscious of those interests and the means to achieve them in the context of a changing environment. Here rational action is used as a proxy for intelligent behavior. That is to say that philosophical zombies cannot exist because they would have no way to display intelligent goal-seeking behavior, because they would not be conscious of any goal; therefore, they could not be conscious of how to navigate the environment to achieve a goal; nor would they be conscious of whether they had achieved the goal or not. That is to say that, logically, a philosophical zombie does not have the logical properties necessary to fit its own definition. Philosophical zombies are an unwarranted assumption to make the Hard Problem of consciousness seem like a different question than the so-called Easy Problem, whereas CAP says that the complete solution of the Easy Problem is also the solution to the Hard Problem. 
While pre-programmed behavior could mimic rational action in the short term, any sufficient change in the p-zombie's environment, like a new obstacle, would thwart it and expose it as a zombie. In the harsh world of nature, philosophical zombies, even if they came to exist by some extraordinary chance, would quickly go extinct. Therefore, philosophical zombies, as opposed to fungal zombies, are both logically and physically impossible. It is, in short, impossible to do what humans evolved from worms to be able to do without being, in some measure, more conscious than a worm. > Accordingly, I believe > that consciousness was an unavoidable consequence during the course of > evolution of life on Earth as once nature created creatures having certain > capacities/abilities, consciousness had no other choice but to exist, it > became logically necessary. Yes, there is certainly a natural history of consciousness. You can look at the evolution of the nervous system of vertebrates through the process of encephalization and decussation and see a causal narrative. The moment life decided to detach itself from the ocean bottom and move around looking for food, a complete, or open, digestive system with a separate mouth and anus became advantageous. Once it started moving mouth first through the world, sensory organs like eyes or antennae became advantageous; moreover, those creatures which evolved sensory organs like taste-buds and eyes near their mouths, as opposed to near their anus, had a survival advantage. Once an organism had a concentration of senses on its rostral or front end, that end became differentiated from the caudal or rear end of the organism. The cluster of sensory organs became a rudimentary head. As part of that differentiation called encephalization, it became advantageous for that organism to develop an organ to process the sensory information and locate it near the sense organs. The organ I speak of began as a small nerve cluster. 
As successive generations of the organism started moving through the environment faster, sensing and avoiding danger, finding food and resources, weathering natural disasters and extinction events, and generally leading more complicated lives, it finally evolved into the conscious brain. > The same, I believe, is true for AI. It is unavoidable when we create > machines of certain abilities, and I believe existing software is already > conscious. For example, my open source project on artificial sentience > could be such an example: https://github.com/jasonkresch/bots Yes, I agree. These AI learned how to play soccer/football on their own. They are certainly conscious of one another, the ball, and their goals, and that consciousness allows some very complex goal-seeking behavior to emerge. By the CAP, these AI agents are conscious; see the following video presentation of a recent Science paper by Liu et al.: https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s Some might object because the consciousness is expressed within a virtual setting. But that's all right because Google built bodies for the little guys: https://www.youtube.com/watch?v=RbyQcCT6890 If you push one over, they stand right back up. So yeah, the CAP says they are rudimentarily conscious because they display causal awareness and rational action. They lie somewhere between thermostats and humans on the consciousness scale. Someday, their consciousness may far surpass ours. 
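The "sense, act on, and react to one's own effect" loop that CAP names has a minimal mechanical instance at the thermostat end of that scale. A toy sketch (mine, not Stuart's; the class name and gain constants are invented purely for illustration):

```python
class Thermostat:
    """Toy causally-aware loop: sense the environment, act on it,
    and account for one's own previous effect on it."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.last_action = 0.0  # memory of its own most recent effect

    def step(self, temperature: float) -> float:
        error = self.setpoint - temperature            # sense the environment
        action = 0.5 * error - 0.2 * self.last_action  # react to own prior effect
        self.last_action = action
        return action                                  # act on the environment

room = 15.0
unit = Thermostat(setpoint=20.0)
for _ in range(50):
    room += unit.step(room)  # the action changes the environment it next senses
print(round(room, 1))        # 20.0: the loop settles at its goal state
```

The point of the third term is that the controller's input includes a trace of its own output, which is exactly the reflexivity CAP asks for; strip out `last_action` and it is still a feedback device, but no longer tracks its own effect on the world.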
Stuart LaForge From pharos at gmail.com Sun Jul 2 10:43:06 2023 From: pharos at gmail.com (BillK) Date: Sun, 2 Jul 2023 11:43:06 +0100 Subject: [ExI] Microsoft is testing the first optical computer Message-ID: Unlocking the future of computing: The Analog Iterative Machine's lightning-fast approach to optimization June 27, 2023 By Hitesh Ballani, Partner Researcher Quote: The Analog Iterative Machine (AIM) is designed to solve difficult optimization problems, which form the foundation of many industries, such as finance, logistics, transportation, energy, healthcare, and manufacturing. However, traditional digital computers struggle to crack these problems in a timely, energy-efficient and cost-effective manner. This is because the number of possible combinations explodes exponentially as the problem size grows, making it a massive challenge for even the most powerful digital computers. The Traveling Salesman Problem is a classic example. Imagine trying to find the most efficient route for visiting a set of cities just once before returning to the starting point. With only five cities, there are 12 possible routes, but for a 61-city problem, the number of potential routes surpasses the number of atoms in the universe. Today, AIM is still a research project, but the cross-disciplinary team has recently assembled the world's first opto-electronic hardware for mixed (continuous and binary) optimization problems. Though presently operating on a limited scale, the initial results are promising, and the team has started scaling up its efforts. This includes a research collaboration with the UK-based multinational bank Barclays to solve an optimization problem critical to the financial markets on the AIM computer. ----------------- Still a research project, but the hope is that optical computing can leap ahead of the ending of Moore's Law for silicon chips. 
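The article's route counts follow from the standard formula for closed tours, (n - 1)!/2: fix the starting city, and divide by two because a tour and its reversal are the same route. A quick check (the function name is mine, for illustration):

```python
from math import factorial

def tsp_route_count(n_cities: int) -> int:
    # Distinct round trips visiting each of n_cities exactly once:
    # fix the start city ((n-1)! orderings of the rest), then halve
    # because traversing a tour backwards gives the same route.
    return factorial(n_cities - 1) // 2

print(tsp_route_count(5))   # 12, matching the article's five-city figure
print(tsp_route_count(61))  # ~4.2e81, above the ~1e80 atoms commonly
                            # estimated for the observable universe
```

This is why exact enumeration dies so quickly: each added city multiplies the count by roughly n, which is the exponential blow-up the AIM hardware is aimed at.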
BillK From pharos at gmail.com Sun Jul 2 11:09:40 2023 From: pharos at gmail.com (BillK) Date: Sun, 2 Jul 2023 12:09:40 +0100 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants Message-ID: AI Foundation launches AI.XYZ to give people their own AI assistants Dean Takahashi June 30, 2023 Quote: AI Foundation, an AI research lab that gave us virtual Deepak Chopra, has launched AI.XYZ, a platform for people to create their own AI assistants. Let's hope it's a tangible example of how we're going to get along fine with AI, rather than be terminated by them. The idea is that we should all feel better if AI assistants offload some of our daily tasks. The platform enables users to design their own AI assistants that can safely support them in both personal and professional settings. Each AI is unique to its creator and can assist with tasks such as note-taking, email writing, brainstorming, and offering personalized advice and perspectives. Unlike generic AI assistants from companies like Amazon, Google, Apple, or ChatGPT, each AI assistant designed on AI.XYZ belongs exclusively to its creator, knows the person's values and goals, and provides more personalized help. The company sees a significant opportunity for workplaces and enterprises to provide each of their employees with their own AIs. --------------- The idea sounds OK, but how many would be prepared to trust an online company with all their data? I think it would appeal to more people if their personal AI assistant was running on their own PC and storing their data there. BillK From mbb386 at main.nc.us Sun Jul 2 12:08:02 2023 From: mbb386 at main.nc.us (MB) Date: Sun, 2 Jul 2023 08:08:02 -0400 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants In-Reply-To: References: Message-ID: <8275b4c6688cb07510f35a71d5fa51c8.squirrel@www.main.nc.us> Hmmm. Makes me squirm a bit. 
If/When I discover a friend/relative has fobbed off responses to my emails to an AI, that's the end of my involvement in anything other than "lightly surface" interactions. Reminds me of writing a letter to my congressman - the response is by someone else, canned, and often does not address the issue. Unfortunately, the reference to Deepak Chopra AI did not help... I have http://www.wisdomofchopra.com/ bookmarked for when I want a lighthearted giggle, though I'm sure the AI Deepak Chopra is much more sophisticated. ;) The last sentence in the article was a turnoff. How long before anything "not 100% acceptable" (to whom?) would be sniffed out and reported? Regards, MB On Sun, July 2, 2023 07:09, BillK via extropy-chat wrote: > AI Foundation launches AI.XYZ to give people their own AI assistants > Dean Takahashi June 30, 2023 > > > > Quote: > AI Foundation, an AI research lab that gave us virtual Deepak Chopra, > has launched AI.XYZ, a platform for people to create their own AI > assistants. > > Let's hope it's a tangible example of how we're going to get along > fine with AI, rather than be terminated by them. The idea is that we > should all feel better if AI assistants offload some of our daily > tasks. > > The platform enables users to design their own AI assistants that can > safely support them in both personal and professional settings. Each > AI is unique to its creator and can assist with tasks such as > note-taking, email writing, brainstorming, and offering personalized > advice and perspectives. > > Unlike generic AI assistants from companies like Amazon, Google, > Apple, or ChatGPT, each AI assistant designed on AI.XYZ belongs > exclusively to its creator, knows the person's values and goals, and > provides more personalized help. The company sees a significant > opportunity for workplaces and enterprises to provide each of their > employees with their own AIs. 
> --------------- > > The idea sounds OK, but how many would be prepared to trust an online > company with all their data? I think it would appeal to more people if > their personal AI assistant was running on their own PC and storing > their data there. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From pharos at gmail.com Sun Jul 2 16:27:52 2023 From: pharos at gmail.com (BillK) Date: Sun, 2 Jul 2023 17:27:52 +0100 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants In-Reply-To: <8275b4c6688cb07510f35a71d5fa51c8.squirrel@www.main.nc.us> References: <8275b4c6688cb07510f35a71d5fa51c8.squirrel@www.main.nc.us> Message-ID: On Sun, 2 Jul 2023 at 13:11, MB via extropy-chat wrote: > > Hmmm. Makes me squirm a bit. > > If/When I discover a friend/relative has fobbed response to my emails to > AI, that's the end of my involvement in anything other than "lightly > surface" interactions. Reminds me of writing a letter to my congressman - > the response is by someone else, canned, and often does not address the > issue. > > Unfortunately the reference to Deepak Chopra AI did not help... I have > http://www.wisdomofchopra.com/ > bookmarked for when I want a lighthearted giggle, though I'm sure the AI > Deepak Chopra is quite more sophisticated. ;) > > The last sentence in the article was a turnoff. How long before anything > "not 100% acceptable" (to whom?) would be sniffed out and reported? > > Regards, > MB > _______________________________________________ I think this personal AI would be limited to business activities only. They say --- "The company sees a significant opportunity for workplaces and enterprises to provide each of their employees with their own AIs." So it would be paid for by the company to make employees more productive. 
Just as you have one email for business and a different email for personal contacts, this business AI assistant would be restricted to work activities only. I can see a future when the police interview a suspect, then interview his AI assistant(s), then his computing devices, then his Smart Home AI, then all his social network postings, etc...... I don't much fancy having AI assistants that will tell all to anybody that asks! :) BillK From foozler83 at gmail.com Sun Jul 2 17:10:44 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 2 Jul 2023 12:10:44 -0500 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants In-Reply-To: References: <8275b4c6688cb07510f35a71d5fa51c8.squirrel@www.main.nc.us> Message-ID: Does being arrested let the police dig into your smartphone, PC, online media, etc.? If we don't have privacy laws covering some of this, we need them. bill w On Sun, Jul 2, 2023 at 11:30 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sun, 2 Jul 2023 at 13:11, MB via extropy-chat > wrote: > > > > Hmmm. Makes me squirm a bit. > > > > If/When I discover a friend/relative has fobbed response to my emails to > > AI, that's the end of my involvement in anything other than "lightly > > surface" interactions. Reminds me of writing a letter to my congressman > - > > the response is by someone else, canned, and often does not address the > > issue. > > > > Unfortunately the reference to Deepak Chopra AI did not help... I have > > http://www.wisdomofchopra.com/ > > bookmarked for when I want a lighthearted giggle, though I'm sure the AI > > Deepak Chopra is quite more sophisticated. ;) > > > > The last sentence in the article was a turnoff. How long before anything > > "not 100% acceptable" (to whom?) would be sniffed out and reported? > > > > Regards, > > MB > > _______________________________________________ > > > I think this personal AI would be limited to business activities only. 
> They say --- "The company sees a significant > opportunity for workplaces and enterprises to provide each of their > employees with their own AIs." > So it would be paid for by the company to make employees more productive. > > Just as you have one email for business and a different email for > personal contacts, this business AI assistant would be restricted to > work activities only. > I can see a future when the police interview a suspect, then interview > his AI assistant(s), then his computing devices, then his Smart Home > AI, then all his social network postings, etc...... > I don't much fancy having AI assistants that will tell all to anybody > that asks! :) > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sun Jul 2 17:38:45 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 2 Jul 2023 10:38:45 -0700 Subject: [ExI] fireworks Message-ID: https://twitter.com/Alphafox78/status/1674181467029553153?s=20&fbclid=IwAR3fCJzCoq7wwW8BhbcLuXcaVWZOvbPvonBqE5f6hy4_AhKJEVVIdHsULGw Most amazing mishap I have ever seen. Keith From jasonresch at gmail.com Sun Jul 2 18:00:22 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Jul 2023 13:00:22 -0500 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Message-ID: Stuart, Thank you for putting your argument into such clear words. I agree completely with your definition of CAP. Also, those videos of the AI and robotic soccer players are great. I also agree it is not logically possible to say these entities are not aware of the ball, and what else is consciousness, beyond "awareness"? 
If the bots are aware of the ball (which they plainly are), then they're conscious of it. Jason On Sat, Jul 1, 2023 at 6:16 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Jason Resch via extropy-chat : > > > If you believe that philosophical zombies are logically impossible, then > > consciousness is logically necessary (in the presence of certain > > behavioral, sensory, and introspective capacities). > > Actually, I would more rigorously say that philosophical zombies are > impossible precisely BECAUSE they violate what I call causal awareness > principle or CAP. Causal awareness is hereby defined as the ability to > sense, act on, and react to ones environment as well as ones own > effect on it. Anything that is causally aware is able to act > rationally. Consciousness is necessary, but not sufficient, for causal > awareness, and therefore, rational action. However, something cannot > act rationally without being conscious too. Acting in ones own best > interests requires consciousness. A sentient being must be conscious > of those interests and the means to achieve them in the context of a > changing environment. > > Here rational action is used as a proxy for intelligent behavior. That > is to say that philosophical zombies cannot exist because they would > have no way to display intelligent goal-seeking behavior because they > would not be conscious of any goal; Therefore, they could not be > conscious of how to navigate the environment to achieve a goal; nor, > would they be conscious of whether they had achieved the goal or not. > That is to say, that logically, a philosophical zombie does not have > the logical properties necessary to fit its own definition. 
> Philosophical zombies are an unwarranted assumption to make the Hard > Problem of consciousness seem like a different question than the > so-called Easy Problem, whereas, CAP says that the complete solution > of the Easy Problem, is also the solution to Hard Problem. > > While pre-programmed behavior could mimic rational action in the short > term, any sufficient change in the p-zombie's environment, like a new > obstacle, would thwart it, and expose it as a zombie. In the harsh > world of nature, philosophical zombies, even if they came to exist by > some extraordinary chance, would quickly go extinct. > > Therefore philosophical zombies, as opposed to fungal zombies, are > both logically and physically impossible. It is, in short, impossible > to do what humans evolved from worms to be able to do, without being > in some measure, more conscious than a worm. > > > Accordingly, I believe > > that consciousness was an unavoidable consequence during the course of > > evolution of life on Earth as once nature created creatures having > certain > > capacities/abilities, consciousness had no other choice but to exist, it > > became logically necessary. > > Yes, there is certainly a natural history of consciousness. You can > look at the evolution of the nervous system of vertebrates through the > process of encephalization and decussation and see a causal narrative. > The moment life decided to detach itself from the ocean bottom and > move around looking for food, a complete, or open, digestive system > with a seperate mouth and anus became advantageous. Once it started > moving mouth first through the world, sensory organs like eyes or > antennae became advantageous; Moreover, those creatures which evolved > sensory organs like taste-buds and eyes near their mouths, as opposed > to near their anus, had a survival advantage. 
> > Once an organism had a concentration of senses on its rostral or front > end, that end became differentiated from the caudal or rear end of the > organism. The cluster of sensory organs became a rudimentary head. As > part of that differentiation called encephalization, it became > advantageous for that organism to develop an organ to process the > sensory information and locate it near the sense organs. The organ I > speak of began as a small nerve cluster. As successive generations of > the organism started moving through the environment faster, sensing > and avoiding danger, finding food and resources, weathering natural > disasters and extinction events, and generally leading more > complicated lives, it finally evolved into the conscious brain. > > > The same, I believe, is true for AI. It is unavoidable when we create > > machines of certain abilities, and I believe existing software is already > > conscious. For example, my open source project on artificial sentience > > could be such an example: https://github.com/jasonkresch/bots > > Yes, I agree. These AI learned how to play soccer/football on their > own. They are certainly conscious of one another, the ball, and their > goals and that consciousness allows some very complex goal-seeking > behavior to emerge. By the CAP, these AI agents in the following video > presentation of a recent Science paper by Lui et al. > > https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s. > > Some might object because the consciousness is expressed within a > virtual setting. But that's all right because Google built bodies for > the little guys: > > https://www.youtube.com/watch?v=RbyQcCT6890 > > If you push one over, they stand right back up. So yeah, the CAP says > they are rudimentarily conscious because they display causal awareness > and rational action. They lie somewhere between thermostats and humans > on the consciousness scale. Someday, their consciousness may far > surpass ours. 
> > Stuart LaForge > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Jul 2 18:37:53 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 2 Jul 2023 13:37:53 -0500 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Message-ID: Except for smell, sensations come to us from our unconscious mind. There they get routed to various brain areas and then, only perhaps (they could be shut down and never get to the forebrain) to upper areas. These processes are aware of the nature of the stimuli and know where to send them, but they are not available to our conscious mind. Or you could say that these unconscious processes are really conscious but simply not available to what we call our conscious mind. Thus you can have awareness or consciousness in part of the brain but not the part we think of as our consciousness. If an organism deals in some appropriate way with incoming stimuli you could call it aware. Amoebas do that. bill w On Sun, Jul 2, 2023 at 1:02 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Stuart, > > Thank you for putting your argument into such clear words. I agree > completely with your definition of CAP. > > Also, those videos of the AI and robotic soccer players are great. I also > agree it is not logically possible to say these entities are not aware of > the ball, and what else is consciousness, beyond "awareness"? If the bots > are aware of the ball (which they plainly are), then they're conscious of > it. 
> > Jason > > On Sat, Jul 1, 2023 at 6:16?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Jason Resch via extropy-chat : >> >> > If you believe that philosophical zombies are logically impossible, then >> > consciousness is logically necessary (in the presence of certain >> > behavioral, sensory, and introspective capacities). >> >> Actually, I would more rigorously say that philosophical zombies are >> impossible precisely BECAUSE they violate what I call causal awareness >> principle or CAP. Causal awareness is hereby defined as the ability to >> sense, act on, and react to ones environment as well as ones own >> effect on it. Anything that is causally aware is able to act >> rationally. Consciousness is necessary, but not sufficient, for causal >> awareness, and therefore, rational action. However, something cannot >> act rationally without being conscious too. Acting in ones own best >> interests requires consciousness. A sentient being must be conscious >> of those interests and the means to achieve them in the context of a >> changing environment. >> >> Here rational action is used as a proxy for intelligent behavior. That >> is to say that philosophical zombies cannot exist because they would >> have no way to display intelligent goal-seeking behavior because they >> would not be conscious of any goal; Therefore, they could not be >> conscious of how to navigate the environment to achieve a goal; nor, >> would they be conscious of whether they had achieved the goal or not. >> That is to say, that logically, a philosophical zombie does not have >> the logical properties necessary to fit its own definition. >> Philosophical zombies are an unwarranted assumption to make the Hard >> Problem of consciousness seem like a different question than the >> so-called Easy Problem, whereas, CAP says that the complete solution >> of the Easy Problem, is also the solution to Hard Problem. 
>> >> While pre-programmed behavior could mimic rational action in the short >> term, any sufficient change in the p-zombie's environment, like a new >> obstacle, would thwart it, and expose it as a zombie. In the harsh >> world of nature, philosophical zombies, even if they came to exist by >> some extraordinary chance, would quickly go extinct. >> >> Therefore philosophical zombies, as opposed to fungal zombies, are >> both logically and physically impossible. It is, in short, impossible >> to do what humans evolved from worms to be able to do, without being >> in some measure, more conscious than a worm. >> >> > Accordingly, I believe >> > that consciousness was an unavoidable consequence during the course of >> > evolution of life on Earth as once nature created creatures having >> certain >> > capacities/abilities, consciousness had no other choice but to exist, it >> > became logically necessary. >> >> Yes, there is certainly a natural history of consciousness. You can >> look at the evolution of the nervous system of vertebrates through the >> process of encephalization and decussation and see a causal narrative. >> The moment life decided to detach itself from the ocean bottom and >> move around looking for food, a complete, or open, digestive system >> with a seperate mouth and anus became advantageous. Once it started >> moving mouth first through the world, sensory organs like eyes or >> antennae became advantageous; Moreover, those creatures which evolved >> sensory organs like taste-buds and eyes near their mouths, as opposed >> to near their anus, had a survival advantage. >> >> Once an organism had a concentration of senses on its rostral or front >> end, that end became differentiated from the caudal or rear end of the >> organism. The cluster of sensory organs became a rudimentary head. 
As >> part of that differentiation called encephalization, it became >> advantageous for that organism to develop an organ to process the >> sensory information and locate it near the sense organs. The organ I >> speak of began as a small nerve cluster. As successive generations of >> the organism started moving through the environment faster, sensing >> and avoiding danger, finding food and resources, weathering natural >> disasters and extinction events, and generally leading more >> complicated lives, it finally evolved into the conscious brain. >> >> > The same, I believe, is true for AI. It is unavoidable when we create >> > machines of certain abilities, and I believe existing software is >> already >> > conscious. For example, my open source project on artificial sentience >> > could be such an example: https://github.com/jasonkresch/bots >> >> Yes, I agree. These AI learned how to play soccer/football on their >> own. They are certainly conscious of one another, the ball, and their >> goals and that consciousness allows some very complex goal-seeking >> behavior to emerge. By the CAP, these AI agents in the following video >> presentation of a recent Science paper by Lui et al. >> >> https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s. >> >> Some might object because the consciousness is expressed within a >> virtual setting. But that's all right because Google built bodies for >> the little guys: >> >> https://www.youtube.com/watch?v=RbyQcCT6890 >> >> If you push one over, they stand right back up. So yeah, the CAP says >> they are rudimentarily conscious because they display causal awareness >> and rational action. They lie somewhere between thermostats and humans >> on the consciousness scale. Someday, their consciousness may far >> surpass ours. 
>> Stuart LaForge >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Jul 2 21:11:51 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 2 Jul 2023 22:11:51 +0100 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants In-Reply-To: References: Message-ID: <940849c3-f642-e056-6054-e4dbbfc0f376@zaiboc.net> On 02/07/2023 19:00, bill w wrote: > Does being arrested let the police dig into your smartphone, PC, > online media, etc.? If we don't have privacy laws covering some of > this, we need them. bill w Bill, you live in the USA, you should know that being arrested lets the police do anything they damn well please, up to and including murder. But you don't have to be arrested for various government bodies to dig into any information at all. Privacy laws are irrelevant. If Edward Snowden et al. did nothing else, they proved that. Ben From ben at zaiboc.net Sun Jul 2 21:17:54 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 2 Jul 2023 22:17:54 +0100 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: References: Message-ID: <72dc0aa1-dbeb-dc68-0c49-224bb0567014@zaiboc.net> On 02/07/2023 19:00, bill w wrote: > Don't you think that an AI will lie to you about being self-aware? Or > anything else that would improve its life? > > So you grant them rights. Then what? They decide not to work for > you? How could you pay them? I don't think we are looking at the > details of such ideas. We are just looking at the problem in a most > general, philosophical way. 
> > Look at John - he won't even agree that another person is conscious. > How would we know an AI is? It tells us? How can it know what > consciousness is if we don't? And on and on. It doesn't matter if John doesn't agree that other people are conscious; he treats them as if they are. And I'm sure he agrees that their rights should be upheld as if they are. We will apply the Duck Test to AIs just as we apply it to other humans. I think we have to look at this in a general, philosophical way. The details will certainly change by the time these systems exist, and we can't really predict what the details will be like. Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sun Jul 2 21:31:21 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 2 Jul 2023 15:31:21 -0600 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Message-ID: It's interesting how much diversity of thinking there is here on the rights of AI systems. I'm very interested in what everyone thinks, and would love to have a more formal and dynamic representation of what everyone currently believes on this. I've created a topic for this: "Robot or AI Rights ". And I've started the "No Rights for Abstract Systems " camp. It'd be great to have camps for people who currently think differently, and to see which views have the most consensus, and to track this over time as science is able to demonstrate more about what consciousness is and isn't. On Sun, Jul 2, 2023 at 12:39 PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Except for smell, sensations come to us from our unconscious mind. There > they get routed to various brain areas and then, only perhaps (they could > be shut down and never get to the forebrain) to upper areas.
These > processes are aware of the nature of the stimuli and know where to send > them, but they are not available to our conscious mind. > > Or you could say that these unconscious processes are really conscious but > simply not available to what we call our conscious mind. Thus you can have > awareness or consciousness in part of the brain but not the part we think > of as our consciousness. > > If an organism deals in some appropriate way with incoming stimuli you > could call it aware. Amoebas do that. > > bill w > > On Sun, Jul 2, 2023 at 1:02?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Stuart, >> >> Thank you for putting your argument into such clear words. I agree >> completely with your definition of CAP. >> >> Also, those videos of the AI and robotic soccer players are great. I also >> agree it is not logically possible to say these entities are not aware of >> the ball, and what else is consciousness, beyond "awareness"? If the bots >> are aware of the ball (which they plainly are), then they're conscious of >> it. >> >> Jason >> >> On Sat, Jul 1, 2023 at 6:16?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Quoting Jason Resch via extropy-chat : >>> >>> > If you believe that philosophical zombies are logically impossible, >>> then >>> > consciousness is logically necessary (in the presence of certain >>> > behavioral, sensory, and introspective capacities). >>> >>> Actually, I would more rigorously say that philosophical zombies are >>> impossible precisely BECAUSE they violate what I call causal awareness >>> principle or CAP. Causal awareness is hereby defined as the ability to >>> sense, act on, and react to ones environment as well as ones own >>> effect on it. Anything that is causally aware is able to act >>> rationally. Consciousness is necessary, but not sufficient, for causal >>> awareness, and therefore, rational action. 
However, something cannot >>> act rationally without being conscious too. Acting in ones own best >>> interests requires consciousness. A sentient being must be conscious >>> of those interests and the means to achieve them in the context of a >>> changing environment. >>> >>> Here rational action is used as a proxy for intelligent behavior. That >>> is to say that philosophical zombies cannot exist because they would >>> have no way to display intelligent goal-seeking behavior because they >>> would not be conscious of any goal; Therefore, they could not be >>> conscious of how to navigate the environment to achieve a goal; nor, >>> would they be conscious of whether they had achieved the goal or not. >>> That is to say, that logically, a philosophical zombie does not have >>> the logical properties necessary to fit its own definition. >>> Philosophical zombies are an unwarranted assumption to make the Hard >>> Problem of consciousness seem like a different question than the >>> so-called Easy Problem, whereas, CAP says that the complete solution >>> of the Easy Problem, is also the solution to Hard Problem. >>> >>> While pre-programmed behavior could mimic rational action in the short >>> term, any sufficient change in the p-zombie's environment, like a new >>> obstacle, would thwart it, and expose it as a zombie. In the harsh >>> world of nature, philosophical zombies, even if they came to exist by >>> some extraordinary chance, would quickly go extinct. >>> >>> Therefore philosophical zombies, as opposed to fungal zombies, are >>> both logically and physically impossible. It is, in short, impossible >>> to do what humans evolved from worms to be able to do, without being >>> in some measure, more conscious than a worm. 
>>> >>> > Accordingly, I believe >>> > that consciousness was an unavoidable consequence during the course of >>> > evolution of life on Earth as once nature created creatures having >>> certain >>> > capacities/abilities, consciousness had no other choice but to exist, >>> it >>> > became logically necessary. >>> >>> Yes, there is certainly a natural history of consciousness. You can >>> look at the evolution of the nervous system of vertebrates through the >>> process of encephalization and decussation and see a causal narrative. >>> The moment life decided to detach itself from the ocean bottom and >>> move around looking for food, a complete, or open, digestive system >>> with a separate mouth and anus became advantageous. Once it started >>> moving mouth first through the world, sensory organs like eyes or >>> antennae became advantageous; moreover, those creatures which evolved >>> sensory organs like taste-buds and eyes near their mouths, as opposed >>> to near their anus, had a survival advantage. >>> >>> Once an organism had a concentration of senses on its rostral or front >>> end, that end became differentiated from the caudal or rear end of the >>> organism. The cluster of sensory organs became a rudimentary head. As >>> part of that differentiation called encephalization, it became >>> advantageous for that organism to develop an organ to process the >>> sensory information and locate it near the sense organs. The organ I >>> speak of began as a small nerve cluster. As successive generations of >>> the organism started moving through the environment faster, sensing >>> and avoiding danger, finding food and resources, weathering natural >>> disasters and extinction events, and generally leading more >>> complicated lives, it finally evolved into the conscious brain. >>> >>> > The same, I believe, is true for AI. It is unavoidable when we create >>> > machines of certain abilities, and I believe existing software is >>> already >>> > conscious.
For example, my open source project on artificial sentience >>> > could be such an example: https://github.com/jasonkresch/bots >>> >>> Yes, I agree. These AI learned how to play soccer/football on their >>> own. They are certainly conscious of one another, the ball, and their >>> goals, and that consciousness allows some very complex goal-seeking >>> behavior to emerge. By the CAP, these AI agents are conscious, as shown in the following video >>> presentation of a recent Science paper by Liu et al. >>> >>> https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s. >>> >>> Some might object because the consciousness is expressed within a >>> virtual setting. But that's all right because Google built bodies for >>> the little guys: >>> >>> https://www.youtube.com/watch?v=RbyQcCT6890 >>> >>> If you push one over, they stand right back up. So yeah, the CAP says >>> they are rudimentarily conscious because they display causal awareness >>> and rational action. They lie somewhere between thermostats and humans >>> on the consciousness scale. Someday, their consciousness may far >>> surpass ours. >>> >>> Stuart LaForge > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Jul 2 22:52:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Jul 2023 18:52:50 -0400 Subject: [ExI] Conscious AI or a Zombie?
In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Message-ID: On Sun, Jul 2, 2023, 5:32 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > It's interesting how much diversity of thinking there is here on the > rights of AI systems. > I'm very interested in what everyone thinks, and would love to have a > more formal and dynamic representation of what everyone > currently believes on this. > I've created a topic for this: "Robot or AI Rights > ". > And I've started the "No Rights for Abstract Systems > " > camp. > "The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?" -- Jeremy Bentham It'd be great to have camps for people who currently think differently, and > to see which views have the most consensus, and to track this over time as > science is able to demonstrate more about what consciousness is and isn't. > _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sun Jul 2 23:05:43 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 2 Jul 2023 17:05:43 -0600 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Message-ID: Hi Jason, This Jeremy Bentham statement could be saying the same thing we are trying to say in the "No Rights for Abstract Systems " camp. But I suspect you are disagreeing with the camp statement, and defining terms differently than I'm thinking?
On Sun, Jul 2, 2023 at 4:54 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > "The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?" > -- Jeremy Bentham > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jasonresch at gmail.com Sun Jul 2 23:41:46 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 2 Jul 2023 19:41:46 -0400 Subject: [ExI] Conscious AI or a Zombie? In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com> Message-ID: On Sun, Jul 2, 2023, 7:07 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > This Jeremy Bentham statement could be saying the same thing we are trying > to say in the "No Rights for Abstract Systems > " > camp. > > But I suspect you are disagreeing with the camp statement, and defining > terms differently than I'm thinking? > I don't see that there's any way one can define an "abstract system" in a way that doesn't include ourselves. Jason > > > On Sun, Jul 2, 2023 at 4:54?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Sun, Jul 2, 2023, 5:32 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> It's interesting how much diversity of thinking there is here on the >>> rights of AI systems. >>> I'm very interested in what everyone thinks, and would love to have a >>> more formal and dynamic representation of what everyone >>> currently believes on this. >>> I've created a topic for this: "Robot or AI Rights >>> ". >>> And I've started the "No Rights for Abstract Systems >>> " >>> camp. >>> >> >> ?The question is not, Can they reason?, nor Can they talk? but, Can they >> suffer? Why should the law refuse its protection to any sensitive being?? >> -- Jeremy Bentham >> >> It'd be great to have camps for people who currently think differently, >>> and to see which views have the most consensus, and to track this over time >>> as science is able to demonstrate more about what consciousness is and >>> isn't. 
>>> >> >> >>> >>> >>> >>> >>> >>> On Sun, Jul 2, 2023 at 12:39?PM William Flynn Wallace via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Except for smell, sensations come to us from our unconscious mind. >>>> There they get routed to various brain areas and then, only perhaps (they >>>> could be shut down and never get to the forebrain) to upper areas. These >>>> processes are aware of the nature of the stimuli and know where to send >>>> them, but they are not available to our conscious mind. >>>> >>>> Or you could say that these unconscious processes are really conscious >>>> but simply not available to what we call our conscious mind. Thus you can >>>> have awareness or consciousness in part of the brain but not the part we >>>> think of as our consciousness. >>>> >>>> If an organism deals in some appropriate way with incoming stimuli you >>>> could call it aware. Amoebas do that. >>>> >>>> bill w >>>> >>>> On Sun, Jul 2, 2023 at 1:02?PM Jason Resch via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> Stuart, >>>>> >>>>> Thank you for putting your argument into such clear words. I agree >>>>> completely with your definition of CAP. >>>>> >>>>> Also, those videos of the AI and robotic soccer players are great. I >>>>> also agree it is not logically possible to say these entities are not aware >>>>> of the ball, and what else is consciousness, beyond "awareness"? If the >>>>> bots are aware of the ball (which they plainly are), then they're conscious >>>>> of it. >>>>> >>>>> Jason >>>>> >>>>> On Sat, Jul 1, 2023 at 6:16?PM Stuart LaForge via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> >>>>>> Quoting Jason Resch via extropy-chat >>>>> >: >>>>>> >>>>>> > If you believe that philosophical zombies are logically impossible, >>>>>> then >>>>>> > consciousness is logically necessary (in the presence of certain >>>>>> > behavioral, sensory, and introspective capacities). 
>>>>>>
>>>>>> Actually, I would more rigorously say that philosophical zombies are
>>>>>> impossible precisely BECAUSE they violate what I call the causal
>>>>>> awareness principle, or CAP. Causal awareness is hereby defined as the
>>>>>> ability to sense, act on, and react to one's environment as well as
>>>>>> one's own effect on it. Anything that is causally aware is able to act
>>>>>> rationally. Consciousness is necessary, but not sufficient, for causal
>>>>>> awareness, and therefore, rational action. However, something cannot
>>>>>> act rationally without being conscious too. Acting in one's own best
>>>>>> interests requires consciousness. A sentient being must be conscious
>>>>>> of those interests and the means to achieve them in the context of a
>>>>>> changing environment.
>>>>>>
>>>>>> Here rational action is used as a proxy for intelligent behavior. That
>>>>>> is to say that philosophical zombies cannot exist because they would
>>>>>> have no way to display intelligent goal-seeking behavior, because they
>>>>>> would not be conscious of any goal; therefore, they could not be
>>>>>> conscious of how to navigate the environment to achieve a goal; nor
>>>>>> would they be conscious of whether they had achieved the goal or not.
>>>>>> That is to say that, logically, a philosophical zombie does not have
>>>>>> the logical properties necessary to fit its own definition.
>>>>>> Philosophical zombies are an unwarranted assumption made to make the
>>>>>> Hard Problem of consciousness seem like a different question from the
>>>>>> so-called Easy Problem, whereas CAP says that the complete solution of
>>>>>> the Easy Problem is also the solution to the Hard Problem.
>>>>>>
>>>>>> While pre-programmed behavior could mimic rational action in the short
>>>>>> term, any sufficient change in the p-zombie's environment, like a new
>>>>>> obstacle, would thwart it, and expose it as a zombie.
In the harsh
>>>>>> world of nature, philosophical zombies, even if they came to exist by
>>>>>> some extraordinary chance, would quickly go extinct.
>>>>>>
>>>>>> Therefore philosophical zombies, as opposed to fungal zombies, are
>>>>>> both logically and physically impossible. It is, in short, impossible
>>>>>> to do what humans evolved from worms to be able to do without being,
>>>>>> in some measure, more conscious than a worm.
>>>>>>
>>>>>> > Accordingly, I believe
>>>>>> > that consciousness was an unavoidable consequence during the course of
>>>>>> > evolution of life on Earth as once nature created creatures having certain
>>>>>> > capacities/abilities, consciousness had no other choice but to exist, it
>>>>>> > became logically necessary.
>>>>>>
>>>>>> Yes, there is certainly a natural history of consciousness. You can
>>>>>> look at the evolution of the nervous system of vertebrates through the
>>>>>> process of encephalization and decussation and see a causal narrative.
>>>>>> The moment life decided to detach itself from the ocean bottom and
>>>>>> move around looking for food, a complete, or open, digestive system
>>>>>> with a separate mouth and anus became advantageous. Once it started
>>>>>> moving mouth first through the world, sensory organs like eyes or
>>>>>> antennae became advantageous; moreover, those creatures which evolved
>>>>>> sensory organs like taste-buds and eyes near their mouths, as opposed
>>>>>> to near their anus, had a survival advantage.
>>>>>>
>>>>>> Once an organism had a concentration of senses on its rostral or front
>>>>>> end, that end became differentiated from the caudal or rear end of the
>>>>>> organism. The cluster of sensory organs became a rudimentary head.
>>>>>> As
>>>>>> part of that differentiation, called encephalization, it became
>>>>>> advantageous for that organism to develop an organ to process the
>>>>>> sensory information and to locate it near the sense organs. The organ
>>>>>> I speak of began as a small nerve cluster. As successive generations
>>>>>> of the organism started moving through the environment faster, sensing
>>>>>> and avoiding danger, finding food and resources, weathering natural
>>>>>> disasters and extinction events, and generally leading more
>>>>>> complicated lives, it finally evolved into the conscious brain.
>>>>>>
>>>>>> > The same, I believe, is true for AI. It is unavoidable when we create
>>>>>> > machines of certain abilities, and I believe existing software is already
>>>>>> > conscious. For example, my open source project on artificial sentience
>>>>>> > could be such an example: https://github.com/jasonkresch/bots
>>>>>>
>>>>>> Yes, I agree. These AI learned how to play soccer/football on their
>>>>>> own. They are certainly conscious of one another, the ball, and their
>>>>>> goals, and that consciousness allows some very complex goal-seeking
>>>>>> behavior to emerge. By the CAP, these AI agents are causally aware, as
>>>>>> shown in the following video presentation of a recent Science paper by
>>>>>> Liu et al.:
>>>>>>
>>>>>> https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s
>>>>>>
>>>>>> Some might object because the consciousness is expressed within a
>>>>>> virtual setting. But that's all right, because Google built bodies for
>>>>>> the little guys:
>>>>>>
>>>>>> https://www.youtube.com/watch?v=RbyQcCT6890
>>>>>>
>>>>>> If you push one over, they stand right back up. So yeah, the CAP says
>>>>>> they are rudimentarily conscious because they display causal awareness
>>>>>> and rational action. They lie somewhere between thermostats and humans
>>>>>> on the consciousness scale. Someday, their consciousness may far
>>>>>> surpass ours.
>>>>>>
>>>>>> Stuart LaForge
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> extropy-chat mailing list
>>>>>> extropy-chat at lists.extropy.org
>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>>>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brent.allsop at gmail.com Mon Jul 3 01:25:11 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 2 Jul 2023 19:25:11 -0600
Subject: [ExI] Conscious AI or a Zombie?
In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com>
Message-ID:

Good point, we for sure compute both abstractly and phenomenally. I changed the name in the camp statement to "No Rights for Abstract Only Systems".

Whatever is representing my thoughts and memory around something like the number 1, and the way they are computationally bound into meaningful subjective consciousness relationships we are aware of, are all still like something.
Whereas it doesn't matter what is representing the number 1 in an abstract computer, by design.

So I'm trying to figure out how to state how your beliefs differ from mine.

On Sun, Jul 2, 2023 at 5:43 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote:

> I don't see that there's any way one can define an "abstract system" in a
> way that doesn't include ourselves.
>
> Jason
>
> [earlier quoted messages snipped]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat
at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From ilsa.bartlett at gmail.com Mon Jul 3 01:33:02 2023
From: ilsa.bartlett at gmail.com (ilsa)
Date: Sun, 2 Jul 2023 18:33:02 -0700
Subject: [ExI] Conscious AI or a Zombie?
In-Reply-To: References: Message-ID:

Women do not have full rights yet, so perhaps you might turn the brilliance of your analytical brains to this particular problem. A woman brought you into this world, and yet she has to suffer under the arrogance of old men who have never been juicy.

On Fri, Jun 30, 2023, 1:37 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote:

> On 23/06/2023 06:37, bill w asked:
>
> there will certainly be some AIs, at some point, that should have rights. ben
> why? bill w
>
> Sorry for the late reply. My spam filter is getting aggressive again.
>
> Why?
> Because I expect they will be worthy of rights, and it would be immoral to
> deny them.
>
> I'm talking about self-aware, conscious, intelligent, created beings. What
> they are made of doesn't matter, as long as they are self-aware, etc.
>
> There's no known principle or physical law that I know of that rules out
> such beings. They may be made of metal, polymers (biological or
> non-biological), ceramics, or some combination of things; it doesn't matter
> at all. The important thing is what kind of information-processing they're
> capable of, and how they measure up against the human level of
> intelligence, awareness, etc. I'm expecting some of them to be capable of a
> lot more than we are, in all areas.
>
> Another factor is related to something that's often said about 'rights':
> that those capable of exercising them are worthy of having them. At some
> point, I expect some AI systems to be able to start claiming their rights,
> forcefully if necessary. It would go better for us if we're prepared for,
> and sympathetic to, this.
>
> Ben
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From brent.allsop at gmail.com Mon Jul 3 01:35:07 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 2 Jul 2023 19:35:07 -0600
Subject: [ExI] Conscious AI or a Zombie?
In-Reply-To: References: Message-ID:

Yea, good point.

On Sun, Jul 2, 2023 at 7:34 PM ilsa via extropy-chat < extropy-chat at lists.extropy.org> wrote:

> Women do not have full rights yet so perhaps you might turn the brilliance
> of your analytical brains to this particular problem a woman brought you
> into this world and yet she has to suffer under the arrogance of old men
> who have never been juicy
>
> [earlier quoted messages snipped]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From ilsa.bartlett at gmail.com Mon Jul 3 01:42:57 2023
From: ilsa.bartlett at gmail.com (ilsa)
Date: Sun, 2 Jul 2023 18:42:57 -0700
Subject: [ExI] Conscious AI or a Zombie?
In-Reply-To: References: Message-ID:

I don't mean to say that your conversation is not vital. My work and theology sent me to your group to view the moral timber; that's all I'm here for, to pray that AI will not be a narcissist psychopath. You know there's a joke in the digital community that every successful digital business is headed by an extreme narcissist! But I don't care about that. I have found my many years since attending the Stanford classes the most wonderful experience, sitting in the back and seeing all those young minds, with a clarity and radiance of mathematical position, have everyday conversations. It was a pleasure.

So it's not that I couldn't chime in on your issue at hand, but I did want to let you know that some 54% of the global population are not safe in their own skin. Oh my! Was I put on this planet to say that sentence to this forum at this time?
If I had the right kind of intelligence, I wish I had been a really good comedian.

On Sun, Jul 2, 2023, 6:39 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote:

>
> Yea, good point.
>
> [earlier quoted messages snipped]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From giulio at gmail.com Mon Jul 3 08:16:33 2023
From: giulio at gmail.com (Giulio Prisco)
Date: Mon, 3 Jul 2023 10:16:33 +0200
Subject: [ExI] Terasem Space Day Colloquium, July 20, 2023
Message-ID:

Terasem Space Day Colloquium, July 20, 2023. Highly imaginative, long-term perspectives on space expansion and interstellar human futures.
https://www.turingchurch.com/p/terasem-space-day-colloquium-july

From pharos at gmail.com Mon Jul 3 13:40:12 2023
From: pharos at gmail.com (BillK)
Date: Mon, 3 Jul 2023 14:40:12 +0100
Subject: [ExI] fireworks
In-Reply-To: References: Message-ID:

On Sun, 2 Jul 2023 at 18:42, Keith Henson via extropy-chat wrote:
>
> https://twitter.com/Alphafox78/status/1674181467029553153?s=20&fbclid=IwAR3fCJzCoq7wwW8BhbcLuXcaVWZOvbPvonBqE5f6hy4_AhKJEVVIdHsULGw
>
> Most amazing mishap I have ever seen.
> Keith
> _______________________________________________

It looks like Twitter is now blocking people without a Twitter account. If you want to browse tweets, user profiles, and comment threads on the web, then you need to be signed in to a Twitter account. That change should successfully stop millions of people looking at Twitter!
So if a Twitter user wants to circulate an interesting item, they need to either copy the complete item or post a web link to somewhere outside Twitter.

BillK

From foozler83 at gmail.com Mon Jul 3 13:51:55 2023
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Mon, 3 Jul 2023 08:51:55 -0500
Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants
In-Reply-To: <940849c3-f642-e056-6054-e4dbbfc0f376@zaiboc.net>
References: <940849c3-f642-e056-6054-e4dbbfc0f376@zaiboc.net>
Message-ID:

No, Ben, I really do not know our privacy laws. I know that for some things (and I don't know what) a judge has to agree to let the police dig into your life. A warrant.

Millions of cops. A few bad eggs. Many problems: we don't screen them enough at all; we don't pay them enough at all. We present being a cop as a military police type of thing. Wrong attitude. Many cops are from the lower classes, and that is where you get more racism. With so many weapons around, the cops are afraid and angry before they even get to a crime scene. So that leads to irrational overreacting.

Many cops have been kicked off the force, indicted, and put in prison. It is a shame that so much bad cop behavior happens, but we don't tend to tolerate it. Body cameras on cops have been really helpful.

bill w

On Sun, Jul 2, 2023 at 4:14 PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote:

> On 02/07/2023 19:00, bill w wrote:
> > Does being arrested let the police dig into your smartphone, PC,
> > online media, etc.? If we don't have privacy laws covering some of
> > this, we need them. bill w
>
> Bill, you live in the USA, you should know that being arrested lets the
> police do anything they damn well please, up to and including murder.
>
> But you don't have to be arrested for various government bodies to dig
> into any information at all. Privacy laws are irrelevant. If Edward
> Snowden et al. did nothing else, they proved that.
> > Ben > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Mon Jul 3 15:19:27 2023 From: mbb386 at main.nc.us (MB) Date: Mon, 3 Jul 2023 11:19:27 -0400 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants In-Reply-To: References: <940849c3-f642-e056-6054-e4dbbfc0f376@zaiboc.net> Message-ID: <55a0e2d5993a25504e1eeaad5f93de23.squirrel@www.main.nc.us> bill w, what you say is how it is *supposed* to work, only some judges rubberstamp warrants without ever asking how valid the info is - hence people who get shot in no-knock raids to the wrong address and such. And some cops don't do proper checking of what they put on a warrant request. Either oversight or deliberate, who knows. "Trusted informers" are not always trustworthy - they have their own reasons for saying correct or wrong things, which is why Red Flag laws are so very tricky. Regards, MB On Mon, July 3, 2023 09:51, William Flynn Wallace via extropy-chat wrote: > No, Ben, I really do not know our privacy laws. I know that for some > things (and I don't know what) a judge has to agree to let the police dig > into your life. A warrant. > > Millions of cops. A few bad eggs. Many problems: we don't screen them > enough at all; we don't pay them enough at all. We present being a cop as > a military police type of thing. Wrong attitude. Many cops are from the > lower classes and there is where you get more racism. So many weapons the > cops are afraid and angry before they even get to a crime scene. So that > leads to irrational overreacting. > > Many cops have been kicked off the force, indicted,and put in prison. It > is a shame that so much bad cop behavior happens, but we don't tend to > tolerate it. Body cameras on cops have been really helpful. 
> > bill w > > On Sun, Jul 2, 2023 at 4:14 PM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On 02/07/2023 19:00, bill w wrote: >> > Does being arrested let the police dig into your smartphone, PC, >> > online media, etc.? If we don't have privacy laws covering some of >> > this, we need them. bill w >> >> Bill, you live in the USA, you should know that being arrested lets the >> police do anything they damn well please, up to and including murder. >> >> But you don't have to be arrested for various government bodies to dig >> into any information at all. Privacy laws are irrelevant. If Edward >> Snowden et al. did nothing else, they proved that. >> >> Ben >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From hkeithhenson at gmail.com Mon Jul 3 15:46:58 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 3 Jul 2023 08:46:58 -0700 Subject: [ExI] [Extropolis] Re: On vacation until July 2 Re: Tabby's star In-Reply-To: References: Message-ID: https://arxiv.org/pdf/2111.01208.pdf The conclusions are most interesting. Keith On Sun, Jul 2, 2023 at 8:22 AM Lawrence Crowell wrote: > > A disk of dust and planetesimals and a ring-world or some ET-generated thing would have a similar signature. However, just use some element of common sense to ask which alternative is most likely. > > LC > > On Sun, Jul 2, 2023 at 10:10 AM Keith Henson wrote: >> >> On Sun, Jul 2, 2023 at 4:50 AM Lawrence Crowell >> wrote: >> >> snip >> >> > I think this has been well enough understood according to a circumstellar disk or ring of material. I am not sure what the status of that is, but it is far less extraordinary than assuming a Dyson sphere.
>> >> The thing that pushed me over into orbiting Dyson patches is the >> finding that there is a whole nest of stars that are making these slow >> dips. One star (Tabby's) is just weird, 15 of them seem likely to be >> intentional. >> >> I understand how low the odds are of having something like this turn >> up on our doorstep, and it does not bode well for us. >> >> It surprises me that they don't seem to be using big lasers to push >> seeds to new stars, or if they are, none of them are pointed in our >> direction. >> >> If we are looking at aliens, what are they doing with that much >> energy? It also seems likely that they don't have FTL or they would >> be here. >> >> For what it is worth, Dyson spheres may be impossible simply because >> there isn't enough material around a star. An orbiting patch blocking >> 22% of the light might be as big as can be made. >> >> For something this monumental, it is not making much of a splash in >> the mainstream media. >> >> Keith >> > >> > LC >> > >> > >> > -- >> > You received this message because you are subscribed to the Google Groups "extropolis" group. >> > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAAFA0qrM9HD5qoP09KeqM7AX%2BhmBXZq5dGgN2wZ4exuzwSRyMg%40mail.gmail.com. >> >> -- >> You received this message because you are subscribed to the Google Groups "extropolis" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB6ARqAOV0MiRU4UgWwVs4988kAfVMFjFRRYOk0rSbU2Pg%40mail.gmail.com. > > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. 
> To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAAFA0qrm36nvHwBQJLe-77nJkRtVQDNO7ZRntYWpm%3DGLK1tOFQ%40mail.gmail.com. From foozler83 at gmail.com Mon Jul 3 16:20:52 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 3 Jul 2023 11:20:52 -0500 Subject: [ExI] AI Foundation launches AI.XYZ to give people their own AI assistants In-Reply-To: <55a0e2d5993a25504e1eeaad5f93de23.squirrel@www.main.nc.us> References: <940849c3-f642-e056-6054-e4dbbfc0f376@zaiboc.net> <55a0e2d5993a25504e1eeaad5f93de23.squirrel@www.main.nc.us> Message-ID: MB, we have to understand that policemen do not come from the higher IQ levels. And that any bureaucracy is almost by definition a mess. The chief guardian of our privacy rights is the Supreme Court. I think we'd better keep a close eye on them. I think we don't see enough privacy cases being decided there. bill w On Mon, Jul 3, 2023 at 10:22 AM MB via extropy-chat < extropy-chat at lists.extropy.org> wrote: > bill w, what you say is how it is *supposed* to work, only some judges > rubberstamp warrants without ever asking how valid the info is - hence > people who get shot in no-knock raids to the wrong address and such. And > some cops don't do proper checking of what they put on a warrant request. > Either oversight or deliberate, who knows. "Trusted informers" are not > always trustworthy - they have their own reasons for saying correct or > wrong things, which is why Red Flag laws are so very tricky. > > Regards, > MB > > > > On Mon, July 3, 2023 09:51, William Flynn Wallace via extropy-chat wrote: > > No, Ben, I really do not know our privacy laws. I know that for some > > things (and I don't know what) a judge has to agree to let the police dig > > into your life. A warrant. > > > > Millions of cops. A few bad eggs.
Many problems: we don't screen them > > enough at all; we don't pay them enough at all. We present being a cop > as > > a military police type of thing. Wrong attitude. Many cops are from the > > lower classes and there is where you get more racism. So many weapons > the > > cops are afraid and angry before they even get to a crime scene. So that > > leads to irrational overreacting. > > > > Many cops have been kicked off the force, indicted,and put in prison. It > > is a shame that so much bad cop behavior happens, but we don't tend to > > tolerate it. Body cameras on cops have been really helpful. > > > > bill w > > > > On Sun, Jul 2, 2023 at 4:14?PM Ben Zaiboc via extropy-chat < > > extropy-chat at lists.extropy.org> wrote: > > > >> On 02/07/2023 19:00, bill w wrote: > >> > Does being arrested let the police dig into your smartphone, PC, > >> > online media, etc.? If we don't have privacy laws covering some of > >> > this, we need them. bill w > >> > >> Bill, you live in the USA, you should know that being arrested lets the > >> police do anything they damn well please, up to and including murder. > >> > >> But you don't have to be arrested for various government bodies to dig > >> into any information at all. Privacy laws are irrelevant. if Edward > >> Snowden et. al. did nothing else, they proved that. > >> > >> Ben > >> _______________________________________________ > >> extropy-chat mailing list > >> extropy-chat at lists.extropy.org > >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > >> > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Mon Jul 3 17:33:33 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 3 Jul 2023 10:33:33 -0700 Subject: [ExI] fireworks In-Reply-To: References: Message-ID: On Mon, Jul 3, 2023 at 6:42 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > It looks like Twitter is now blocking people without a Twitter account. > Including, for a time (hopefully fixed by now), their own APIs, bots, and general infrastructure, greatly reducing the functionality of their Web site (and quite possibly other, more internal components). -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Tue Jul 4 03:35:24 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 3 Jul 2023 20:35:24 -0700 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: I don't know if anyone wants to talk about it, but to me, it looks like we may have neighbors who are obscuring substantial amounts of their star's light. One star, who knows? 15 stars in a tight group , , , , While I don't like it, I don't see a way that it could be other than intentional. If you can figure out a way this could be "natural," please do. https://arxiv.org/pdf/2111.01208.pdf Keith PS, what is almost as amazing is that the mainstream media has not picked up on this. The only place I found was a NZ website. On Mon, Jul 3, 2023 at 1:16 AM Giulio Prisco wrote: > > Terasem Space Day Colloquium, July 20, 2023. Highly imaginative, > long-term perspectives on space expansion and interstellar human > futures. > https://www.turingchurch.com/p/terasem-space-day-colloquium-july > > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAKTCJyf5NFtdDd5hzvgkHJOxmVM5c%2BX_74VP%2BWan2s1TryY%3DjA%40mail.gmail.com. From giulio at gmail.com Tue Jul 4 04:20:24 2023 From: giulio at gmail.com (Giulio Prisco) Date: Tue, 4 Jul 2023 06:20:24 +0200 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: On Tue, Jul 4, 2023 at 5:35 AM Keith Henson wrote: > > I don't know if anyone wants to talk about it, but to me, it looks > like we may have neighbors who are obscuring substantial amounts of > their star's light. > > One star, who knows? 15 stars in a tight group , , , , While I don't > like it, I don't see a way that it could be other than intentional. > If you can figure out a way this could be "natural," please do. > > https://arxiv.org/pdf/2111.01208.pdf > > Keith > I don't know if Tabby's star and similar are "it," but I'm sure that if we keep searching we'll find. > PS, what is almost as amazing is that the mainstream media has not > picked up on this. The only place I found was a NZ website. > I see this as a symptom of the ongoing war on imagination. They want us to stop thinking about important big things in the hope that we'll buy more useless small things. This explains a lot. > On Mon, Jul 3, 2023 at 1:16 AM Giulio Prisco wrote: > > > > Terasem Space Day Colloquium, July 20, 2023. Highly imaginative, > > long-term perspectives on space expansion and interstellar human > > futures. > > https://www.turingchurch.com/p/terasem-space-day-colloquium-july > > > > -- > > You received this message because you are subscribed to the Google Groups "extropolis" group. > > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAKTCJyf5NFtdDd5hzvgkHJOxmVM5c%2BX_74VP%2BWan2s1TryY%3DjA%40mail.gmail.com.
> > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB6WAHdyj1RsG8diS_ruVjxuVEeEXo59k1APzYsHoHbCow%40mail.gmail.com. From hkeithhenson at gmail.com Tue Jul 4 04:49:46 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 3 Jul 2023 21:49:46 -0700 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: On Mon, Jul 3, 2023 at 9:20?PM Giulio Prisco wrote: > > On Tue, Jul 4, 2023 at 5:35?AM Keith Henson wrote: > > > > I don't know if anyone wants to talk about it, but to me, it looks > > like we may have neighbors who are obscuring substantial amounts of > > their star's light. > > > > One star, who knows? 15 stars in a tight group , , , , While I don't > > like it, I don't see a way that it could be other than intentional. > > If you can figure out a way this could be "natural," please do. > > > > https://arxiv.org/pdf/2111.01208.pdf > > > > Keith > > > > I don't know if Tabby's star and similar are "it," but I'm sure that > if we keep searching we'll find. > > > PS, what is almost as amazing is that the mainstream media has not > > picked up on this. The only place I found was a NZ website. https://www.nzherald.co.nz/world/tabbys-star-may-not-be-alone-as-study-finds-cluster-of-pulsating-stars/46ITNS5IJAQP444XEDA4YI7POY/ > > I see this as a symptom of the ongoing war on imagination. They want > us to stop thinking about important big things in the hope that we'll > buy more useless small things. This explains a lot. > > > On Mon, Jul 3, 2023 at 1:16?AM Giulio Prisco wrote: > > > > > > Terasem Space Day Colloquium, July 20, 2023. Highly imaginative, > > > long-term perspectives on space expansion and interstellar human > > > futures. 
> > > https://www.turingchurch.com/p/terasem-space-day-colloquium-july > > > > > > -- > > > You received this message because you are subscribed to the Google Groups "extropolis" group. > > > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > > > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAKTCJyf5NFtdDd5hzvgkHJOxmVM5c%2BX_74VP%2BWan2s1TryY%3DjA%40mail.gmail.com. > > > > -- > > You received this message because you are subscribed to the Google Groups "extropolis" group. > > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB6WAHdyj1RsG8diS_ruVjxuVEeEXo59k1APzYsHoHbCow%40mail.gmail.com. > > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAKTCJyf9oWNx3iRv6Tw3ftb0FUrvh-hyia9Pktwojq_n1%2BeO%3DQ%40mail.gmail.com. From hkeithhenson at gmail.com Tue Jul 4 15:33:03 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 4 Jul 2023 08:33:03 -0700 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: "less dimming in the infrared than in the ultraviolet ... dust particles." Were you here long ago when dust-sized computronium was invented and discussed? Keith On Tue, Jul 4, 2023 at 2:40 AM John Clark wrote: > > On Tue, Jul 4, 2023 at 12:50 AM Keith Henson wrote: > >> > I don't know if anyone wants to talk about it, but to me, it looks >> > like we may have neighbors who are obscuring substantial amounts of >> > their star's light. >> > One star, who knows?
15 stars in a tight group , , , , While I don't >> > like it, I don't see a way that it could be other than intentional. >> > If you can figure out a way this could be "natural," please do. >> > >> > > https://arxiv.org/pdf/2111.01208.pdf > > > > The paper says "I undertook a search for other objects similar to Boyajian's star." [...] " Fifteen new candidates with low rates of dipping, referred to as slow dippers in the paper have been identified. The dippers occupy a limited region of the HR diagram ". > Similar stars displaying similar behavior does not suggest alien intelligence at work even if the precise cause of that behavior is not entirely understood. But the thing that kills the alien megastructure theory dead as a doornail, at least for me, is the finding that there is less dimming in the infrared than in the ultraviolet, because there is no way that could happen if the dimming was caused by anything larger than dust particles. > > Extinction and the Dimming of KIC 8462852 > > > John K Clark > > > > > > > > > > > >> > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv17bbi1k13N-V6vDRP0o%3Dmto6TJSTONW0Y-YHmE16rEQA%40mail.gmail.com. From hkeithhenson at gmail.com Tue Jul 4 16:20:38 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 4 Jul 2023 09:20:38 -0700 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: "explain what we see?" It was just to answer your comment about dust. Something can look like dust from a distance and still be an intentional product. Non-tethered dust just does not work because the time before it gets blown away is so short.
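To put rough numbers on the blow-away claim (a back-of-the-envelope sketch; the stellar parameters L ~ 4.7 L_sun and M ~ 1.4 M_sun for Tabby's star, the grain density of 2000 kg/m^3, and a radiation-pressure efficiency Q ~ 1 are assumed round values, not figures from the paper): for a spherical grain of radius a, gravity and radiation pressure both fall off as 1/r^2, so their ratio beta = 3LQ/(16*pi*G*M*c*rho*a) is independent of distance from the star, and any grain with beta > 1 is unbound.

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg

def beta(radius_m, density=2000.0, L=L_SUN, M=M_SUN, Q=1.0):
    """Ratio of radiation-pressure force to gravity on a spherical grain.

    Both forces scale as 1/r^2, so the ratio is distance-independent:
    beta = 3 L Q / (16 pi G M c rho a).  beta > 1 means the grain is
    unbound and gets pushed out of the system.
    """
    return 3.0 * L * Q / (16.0 * math.pi * G * M * c * density * radius_m)

# Assumed round values for Tabby's star: L ~ 4.7 L_sun, M ~ 1.4 M_sun
L_TABBY, M_TABBY = 4.7 * L_SUN, 1.4 * M_SUN

for a_um in (0.1, 1.0, 10.0):  # grain radius in microns
    a = a_um * 1e-6
    print(f"a = {a_um:4.1f} um: beta(Sun) = {beta(a):6.3f}, "
          f"beta(Tabby) = {beta(a, L=L_TABBY, M=M_TABBY):6.3f}")
```

With these numbers, sub-micron grains around a star several times more luminous than the Sun come out with beta well above 1, i.e. they are expelled rather than staying in orbit; only grains of a few microns and larger stay gravitationally bound.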
I hope you are right and there is a natural explanation for what we are seeing. Neighbors who can block substantial fractions of the light of a star do not bode well for the human race or our AI successors. If you have a natural explanation for what we see with Tabby's star and the other nearby slow dippers, please let us know. If you need motivation, solving this question will considerably increase your status. Keith On Tue, Jul 4, 2023 at 8:46?AM John Clark wrote: > > > > On Tue, Jul 4, 2023 at 11:33?AM Keith Henson wrote: > >> > "less dimming in the infrared than in the ultraviolet, >> " dust particles. " >> >> Were you here long ago when dust-sized computronium was invented and discussed? > > > As long as you're gonna have dust size things, how does making those things be made of computronium improve the ability of your theory to explain what we see? Space aliens add nothing, and Occam's razor says there is no point in needlessly complicating a theory. > > John K Clark > > > > > >> >> >> >> Keith >> >> On Tue, Jul 4, 2023 at 2:40?AM John Clark wrote: >> > >> > On Tue, Jul 4, 2023 at 12:50?AM Keith Henson wrote: >> > >> >> > I don't know if anyone wants to talk about it, but to me, it looks >> >> > like we may have neighbors who are obscuring substantial amounts of >> >> > their star's light. >> >> > One star, who knows? 15 stars in a tight group , , , , While I don't >> >> > like it, I don't see a way that it could be other than intentional. >> >> > If you can figure out a way this could be "natural," please do. >> >> > >> >> > > https://arxiv.org/pdf/2111.01208.pdf >> > >> > >> > >> > The paper says "I undertook a search for other objects similar to Boyajian?s star." [...] " Fifteen new candidates with low rates of dipping, referred to as slow dippers in the paper have been identified. The dippers occupy a limited region of the HR diagram ". 
>> > Similar stars displaying similar behavior does not suggest alien intelligence at work even if the precise cause of that behavior is not entirely understood. But the thing that kills the alien megastructure theory dead as a doornail, at least for me, is the finding that there is less dimming in the infrared than in the ultraviolet, because there is no way that could happen if the dimming was caused by anything larger than dust particles. >> > >> > Extinction and the Dimming of KIC 8462852 >> > >> > >> > John K Clark >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> >> >> > -- >> > You received this message because you are subscribed to the Google Groups "extropolis" group. >> > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv17bbi1k13N-V6vDRP0o%3Dmto6TJSTONW0Y-YHmE16rEQA%40mail.gmail.com. >> >> -- >> You received this message because you are subscribed to the Google Groups "extropolis" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB52U5WDyBfyREDsijeOrPWje%2BG2jwxSqhn_zs-1CBA05A%40mail.gmail.com. > > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv1bOEu4L0U5sKVmX62eoUJFCcSS8p-JWAXUFxH0-KTFmw%40mail.gmail.com. 
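The infrared-versus-ultraviolet argument quoted above turns on the standard size-parameter heuristic: a grain of radius a interacts with light of wavelength lambda through x = 2*pi*a/lambda. For x << 1 (the Rayleigh regime) extinction climbs steeply toward short wavelengths, roughly as lambda^-4, so ultraviolet is dimmed far more than infrared; for x >> 1 the cross-section is essentially geometric and the dimming is nearly wavelength-neutral, as it would be for any macroscopic occulter. A toy sketch (the 0.3 um and 2 um band centres are illustrative choices, not values from the thread):

```python
import math

def size_parameter(radius_um, wavelength_um):
    """x = 2*pi*a / lambda: x << 1 is the Rayleigh regime, x >> 1 geometric."""
    return 2.0 * math.pi * radius_um / wavelength_um

UV, IR = 0.3, 2.0  # illustrative band centres in microns

for a in (0.01, 0.1, 1.0, 10.0):  # grain radii in microns
    x_uv, x_ir = size_parameter(a, UV), size_parameter(a, IR)
    if x_uv < 1 and x_ir < 1:
        note = ("Rayleigh in both bands: extinction ratio ~ "
                f"(lambda_IR/lambda_UV)^4 = {(IR / UV) ** 4:.0f}")
    elif x_ir < 1 <= x_uv:
        note = "Rayleigh only in the IR: still dims UV much more than IR"
    else:
        note = "geometric in both bands: dimming nearly grey"
    print(f"a = {a:5.2f} um: x_UV = {x_uv:6.2f}, x_IR = {x_ir:6.2f}  ({note})")
```

Chromatic dimming of the kind reported for KIC 8462852 (deeper in the ultraviolet than in the infrared) is therefore the signature of sub-micron particles; anything solid and large would dim all bands about equally.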
From hkeithhenson at gmail.com Tue Jul 4 19:55:39 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 4 Jul 2023 12:55:39 -0700 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: On Tue, Jul 4, 2023 at 10:51 AM John Clark wrote: > > On Tue, Jul 4, 2023 at 12:21 PM Keith Henson wrote: > >> > "explain what we see?" > > > Yes, that's what theories are for. > >> > It was just to answer your comment about dust. Something can look >> like dust from a distance and still be an intentional product. Non >> Tethered dust just does not work because the time before it gets blown >> away is so short. > > > What?! Dust clouds are seen all over the universe, the gravitational collapse of dust clouds is what made the planets and even the sun. They don't last in the intense light of a star, radiation pressure blows them away. You can see that locally in a comet tail. Tabby's star is more luminous than our sun. Sorry, dust just does not work. >> > If you have a natural explanation for what we see with Tabby's star >> and the other nearby slow dippers, please let us know. > > > I don't know why thick dust clouds are around some stars but not around others, and I don't know the way gravitational dynamics distributes the dust, but I do know the dimming must be made by particles too small to be seen without a microscope. There's one other thing I don't know, I don't know why particles made of computronium explain the situation better than particles made of ice. From time to time all scientists find things that they can't immediately explain, but when that happens to astronomers somebody is bound to say if you can't immediately explain it then it must be caused by ET. There is great reluctance to ascribe what we see to aliens, but when you have a relatively compact bunch of stars that are doing the same inexplicable thing, another explanation is hard to imagine.
As I said, I hope there is a natural explanation, but nobody has come up with one yet. Keith > John K Clark > > > > >> >> On Tue, Jul 4, 2023 at 8:46?AM John Clark wrote: >> > >> > >> > >> > On Tue, Jul 4, 2023 at 11:33?AM Keith Henson wrote: >> > >> >> > "less dimming in the infrared than in the ultraviolet, >> >> " dust particles. " >> >> >> >> Were you here long ago when dust-sized computronium was invented and discussed? >> > >> > >> > As long as you're gonna have dust size things, how does making those things be made of computronium improve the ability of your theory to explain what we see? Space aliens add nothing, and Occam's razor says there is no point in needlessly complicating a theory. >> > >> > John K Clark >> > >> > >> > >> > >> > >> >> >> >> >> >> >> >> Keith >> >> >> >> On Tue, Jul 4, 2023 at 2:40?AM John Clark wrote: >> >> > >> >> > On Tue, Jul 4, 2023 at 12:50?AM Keith Henson wrote: >> >> > >> >> >> > I don't know if anyone wants to talk about it, but to me, it looks >> >> >> > like we may have neighbors who are obscuring substantial amounts of >> >> >> > their star's light. >> >> >> > One star, who knows? 15 stars in a tight group , , , , While I don't >> >> >> > like it, I don't see a way that it could be other than intentional. >> >> >> > If you can figure out a way this could be "natural," please do. >> >> >> > >> >> >> > > https://arxiv.org/pdf/2111.01208.pdf >> >> > >> >> > >> >> > >> >> > The paper says "I undertook a search for other objects similar to Boyajian?s star." [...] " Fifteen new candidates with low rates of dipping, referred to as slow dippers in the paper have been identified. The dippers occupy a limited region of the HR diagram ". >> >> > Similar stars displaying similar behavior does not suggest alien intelligence at work even if the precise cause of that behavior is not entirely understood. 
But the thing that kills the alien megastructure theory dead as a doornail, at least for me, is the finding that there is less dimming in the infrared than in the ultraviolet, because there is no way that could happen if the dimming was caused by anything larger than dust particles. >> >> > >> >> > Extinction and the Dimming of KIC 8462852 >> >> > >> >> > >> >> > John K Clark >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> >> >> >> > -- >> >> > You received this message because you are subscribed to the Google Groups "extropolis" group. >> >> > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> >> > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv17bbi1k13N-V6vDRP0o%3Dmto6TJSTONW0Y-YHmE16rEQA%40mail.gmail.com. >> >> >> >> -- >> >> You received this message because you are subscribed to the Google Groups "extropolis" group. >> >> To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> >> To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB52U5WDyBfyREDsijeOrPWje%2BG2jwxSqhn_zs-1CBA05A%40mail.gmail.com. >> > >> > -- >> > You received this message because you are subscribed to the Google Groups "extropolis" group. >> > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. >> > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv1bOEu4L0U5sKVmX62eoUJFCcSS8p-JWAXUFxH0-KTFmw%40mail.gmail.com. >> >> -- >> You received this message because you are subscribed to the Google Groups "extropolis" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. 
>> To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB4XbnDZA%2B-Wc2mjT2YpkUbdjCYHyn-nENZ5jMr5em5Zcg%40mail.gmail.com. > > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv3JRUhNtjvo2YFqdoKvvPAUjmvWJHh2EeNKhn9iZALCGg%40mail.gmail.com. From pharos at gmail.com Tue Jul 4 21:00:30 2023 From: pharos at gmail.com (BillK) Date: Tue, 4 Jul 2023 22:00:30 +0100 Subject: [ExI] [Extropolis] Terasem Space Day Colloquium, July 20, 2023 In-Reply-To: References: Message-ID: On Tue, 4 Jul 2023 at 20:58, Keith Henson via extropy-chat wrote: > > There is great reluctance to ascribe what we see to aliens, but when > you have a relatively compact bunch of stars that are doing the same > inexplicable thing another explanation is hard to imagine. > > As I said, I hope there is a natural explanation, but nobody has come > up with one yet. > > Keith > _______________________________________________ Here is the proposal specification for the Webb telescope observations of Tabby's star. It explains the spectrum observations they hope to obtain. BillK From jose_cordeiro at yahoo.com Wed Jul 5 22:09:15 2023 From: jose_cordeiro at yahoo.com (Jose Cordeiro) Date: Wed, 5 Jul 2023 22:09:15 +0000 (UTC) Subject: [ExI] Extropy: The English edition of the global bestseller "The Death of Death" is on pre-sale on Amazon now, it is time to get your copy urgently, please! References: <459475082.103060.1688594955924.ref@mail.yahoo.com> Message-ID: <459475082.103060.1688594955924@mail.yahoo.com> Dear Extropy friends, how are you doing? It is always great to connect with such great visionary minds, particularly about an exciting and revolutionary book!
I am really thrilled to tell you that my Spanish bestseller "La muerte de la muerte" written with my British coauthor David Wood has just been published in English by Springer Nature with the title "The Death of Death". This is really exciting since my book has truly become a global bestseller in many languages from Spanish to German, from French to Russian, and it has just gone on pre-sale with Amazon. The book has a wonderful Prologue by Aubrey de Grey and Epilogue by Alex Zhavoronkov, besides some great testimonials from George Church, Ray Kurzweil, Michio Kaku and many other great minds. We really appreciate all your help to advance our cause and promote #TheDeathOfDeath with interviews, media coverage, book reviews, articles, presentations, webinars, podcasts and just sharing with your family, friends and social media. We are in this together, trying to kill death before death kills us! We live between the last mortal generation and the first immortal generation; where do you want to be? We will be immortally grateful, literally, for all your help, please! David and myself welcome any promotion that will help to advance the death of death around the world, in more and more languages. Thus, can you kindly order a copy now since this is fundamental for the book rankings, please? I really appreciate all you have been doing in this area with so much dedication. Let me know if you get a copy and kindly leave some comments on Amazon, and certainly 5 stars, if you wish, please? https://www.amazon.com/Death-Scientific-Possibility-Immortality-Copernicus-ebook/dp/B0BXGSJSTX We have prepared a website with the current editions of our global bestseller, where we will include additional languages as the book is published in other countries. You can check the website that begins with a nice presentation with David and myself, I hope you like and share it too, please: https://thedeathofdeath.org/
Thank you so much in advance, any support promoting the book is great, and looking forward to your comments, please... Live Long and Prosper! Futuristically yours, La vie est belle! Jose Cordeiro, MBA, PhD (www.cordeiro.org) https://en.wikipedia.org/wiki/Jos%C3%A9_Luis_Cordeiro Enter any drugstore or bookstore, and we are confronted with a mountain of nonsense concerning the aging process. Society seems obsessed with aging. That is why The Death of Death is such a refreshing delight, able to cut through the hype and reveal a balanced, authoritative, and lucid discussion of this controversial topic. It summarizes the astonishing breakthroughs made recently in revealing how science may one day conquer the aging process. Michio Kaku, theoretical physicist and author of The God Equation: The Quest for a Theory of Everything We are entering a Fantastic Voyage into life extension, crossing different bridges that will take us to indefinite life spans. The Death of Death explains clearly how we might soon reach longevity escape velocity and live long enough to live forever. Ray Kurzweil, co-author of Fantastic Voyage: Live Long Enough to Live Forever and co-founder of Singularity University José and David have captured the spirit of what may be the greatest revolution in history. Not some abstract far distant promise but an exponential growth of rigorous discovery and technology in our midst but understood by only a few. This clear prose will help you join the conversation and choose a path. You should definitely read this international bestseller! George Church, professor at Harvard University and Massachusetts Institute of Technology, and founding member of the Wyss Institute for Biologically Inspired Engineering The Death of Death is a truly revolutionary book. This is a visionary book that confronts us with the terrible reality of aging, and its authors are friends and connoisseurs of the subject.
I believe that the authoritative and exhaustive description of this crusade that Jos? and David make in this excellent book will accelerate this process. Forward!Aubrey de Grey, founder of LEV (Longevity Escape Velocity) Foundation and co-author of?Ending Aging -?"The Death of Death" is a global bestseller published in several major languages around the world, from Spanish to French, from Russian to Turkish, from Korean to Portuguese, from German to Chinese, and now also in English, with more editions coming soon in Arabic, Bulgarian, Italian, Japanese, Persian and other languages. - The coauthors are technology experts educated at MIT (Jose Cordeiro, PhD) and Cambridge University (David Wood, ScD) involved now also in biotechnology. This international bestseller has a Prologue by biogerontogist Aubrey de Grey, PhD, and the Epilogue by AI and aging expert Alexander Zhavoronkov, PhD. Many testimonials include those of George Church, Michio Kaku, Ray Kurzweil, etc. - We are in the middle of a technological revolution that will soon achieve radical life extension and indefinite lifespans for all those interested. - We are going to see more technological advances in the next 20 years that in the last 2.000 years. - Major technology companies (Amazon, Apple, Facebook, Google, IBM, Microsoft, etc.) are entering the biology, health and medicine sectors as the biggest business opportunity of the future. - Aging is the common enemy of humanity, aging causes more deaths that all other causes combined. In advanced countries, about 90% of the people die of age-related diseases. - Cell rejuvenation was scientifically proved by Nobel laureate Shinya Yamanaka in 2006, and organ rejuvenation by Harvard professor David Sinclair and others in 2020. - More and more people are beginning to study aging as a disease, but fortunately, a curable disease. We know now that aging is a medical condition that is treatable, and apparently reversible. 
- Google started Calico, Jeff Bezos and other billionaires founded Altos Labs, and now the Kingdom of Saudi Arabia has created the Hevolution Foundation to reverse aging, backed by billions of dollars.

- The proof that biological immortality is possible is that it already exists. There are immortal cells (like germ cells and cancer cells) and also immortal organisms (like hydras and immortal jellyfish).

- This is the most exciting time to be alive, and to remain alive. During the next two decades we should be able to control and reverse aging.

- We are living between the last mortal generation and the first immortal generation!

- The longevity industry will be the biggest industry in human history in the next 20 years, just like energy companies were the biggest companies 20 years ago, and the technology companies are now.

- According to futurists like Ray Kurzweil, Director of Engineering at Google and cofounder of Singularity University, death will be optional by 2045.

- Humanity has an incredible opportunity now, with some countries able to position themselves as leading nations in longevity thanks to top researchers and scientists working to reverse their declining and fast-aging populations. "The Death of Death" explains how to advance rejuvenation technologies, and how we can capitalize on the "Longevity Dividend" for longer, healthier, richer, safer, better lives.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jose.cordeiro at gmail.com Wed Jul 5 22:12:07 2023
From: jose.cordeiro at gmail.com (Jose Cordeiro)
Date: Wed, 5 Jul 2023 22:12:07 +0000 (UTC)
Subject: [ExI] Extropy: The English edition of the global bestseller "The Death of Death" is on pre-sale on Amazon now, it is time to get your copy urgently, please!
References: <1504515988.97921.1688595127860.ref@mail.yahoo.com>
Message-ID: <1504515988.97921.1688595127860@mail.yahoo.com>

Dear Extropy friends,

How are you doing?
It is always great to connect with such great visionary minds, particularly about an exciting and revolutionary book!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From atymes at gmail.com Wed Jul 5 22:22:22 2023
From: atymes at gmail.com (Adrian Tymes)
Date: Wed, 5 Jul 2023 15:22:22 -0700
Subject: [ExI] Extropy: The English edition of the global bestseller "The Death of Death" is on pre-sale on Amazon now, it is time to get your copy urgently, please!
In-Reply-To: <459475082.103060.1688594955924@mail.yahoo.com>
References: <459475082.103060.1688594955924.ref@mail.yahoo.com> <459475082.103060.1688594955924@mail.yahoo.com>
Message-ID:

Out of curiosity, why does your Web site list the Russian version twice?

On Wed, Jul 5, 2023 at 3:11 PM Jose Cordeiro via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Dear Extropy friends,
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jose_cordeiro at yahoo.com Wed Jul 5 23:10:14 2023
From: jose_cordeiro at yahoo.com (Jose Cordeiro)
Date: Wed, 5 Jul 2023 23:10:14 +0000 (UTC)
Subject: [ExI] Extropy: The English edition of the global bestseller "The Death of Death" is on pre-sale on Amazon now, it is time to get your copy urgently, please!
In-Reply-To:
References: <459475082.103060.1688594955924.ref@mail.yahoo.com> <459475082.103060.1688594955924@mail.yahoo.com>
Message-ID: <2106675034.127866.1688598614684@mail.yahoo.com>

Dear Adrian,

The reason is very simple: in Russia they published exactly the same book with two completely different covers, to see which one sells more. Can you guess which one won?
This is called Marketing A/B Testing, and they normally do that with big bestsellers like mine! By the way, I am looking forward to your comments and 5 stars on Amazon: https://www.amazon.com/Death-Scientific-Possibility-Immortality-Copernicus-ebook/dp/B0BXGSJSTX

Futuristically yours,

La vie est belle!

Jose Cordeiro, MBA, PhD (www.cordeiro.org)
https://en.wikipedia.org/wiki/Jos%C3%A9_Luis_Cordeiro

On Thursday, July 6, 2023 at 12:30:55 AM GMT+2, Adrian Tymes via extropy-chat wrote:

> Out of curiosity, why does your Web site list the Russian version twice?
> [...]

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com Fri Jul 7 00:29:23 2023
From: pharos at gmail.com (BillK)
Date: Fri, 7 Jul 2023 01:29:23 +0100
Subject: [ExI] Hofstadter now fears AI
Message-ID:

"AI is about to eclipse humans", predicts a leading academic
04/07/2023
<https://nationworldnews.com/ai-is-about-to-eclipse-humans-predicts-a-leading-academic/>

Quotes:
For years, Douglas Hofstadter has been a strong advocate of separating the terms "intelligence" and "artificial intelligence", not considering technologies based on machine learning and other models worthy of being called as intelligent as humans. Now systems like ChatGPT have upended this academic's expectations: he is "terrified" by the future of these powerful AI models.
?Very soon, it?s quite possible that they (ChatGPT and other AIs) will be smarter than us, much smarter than us,? Hofstadter said in an interview. He says that his faith has been shaken by what he describes as a traumatic experience, believing that humans will soon be eclipsed by machines. -------------------- The relevant 8 minutes of the video interview is here: ------------- It sounds like he is now agreeing with Eliezer. BillK From hkeithhenson at gmail.com Fri Jul 7 02:12:40 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 6 Jul 2023 19:12:40 -0700 Subject: [ExI] Hofstadter now fears AI In-Reply-To: References: Message-ID: I know Hofstadter and Eliezer slightly, I fix Hofstedte's Wikipedia page for him when it departs intolerably from reality, and followed Eiizers shock level 4 mailing list when most of the paper clip arguments were going on. The earliest version of "The Clinic Seed'' is posted there. There is a lot of spread in how people think about AI. At a Zoom meeting with some tech folks today someone who is working on it talked about the combining of GPT-4 with Mathematica. Seems to be a remarkable engineering tool. Ghod knows we need it with all the problems facing us. Also turns out that the combination does a remarkable job of educating people on math. Some time ago I mentioned that China was being left in the dust regarding the government (CCP) coming down hard on AI. The Washington Post has come to much the same conclusions. https://www.washingtonpost.com/world/2023/07/03/china-us-ai-technology-chatgpt/ Keith On Thu, Jul 6, 2023 at 5:32?PM BillK via extropy-chat wrote: > > ?AI is about to eclipse humans?, predicts a leading academic > 04/07/2023 > > > > Quotes: > Douglas Hofstadter has been a strong advocate of separating the terms > intelligence and artificial intelligence over the years, not > considering that technologies based on machine learning and other > models are not worthy of being considered as intelligent as humans. 
So > far, when systems like ChatGPT have sabotaged the plans of this > academic He is ?terrified? by the future of these powerful AI models. > > ?Very soon, it?s quite possible that they (ChatGPT and other AIs) will > be smarter than us, much smarter than us,? Hofstadter said in an > interview. > > He says that his faith has been shaken by what he describes as a > traumatic experience, believing that humans will soon be eclipsed by > machines. > -------------------- > > The relevant 8 minutes of the video interview is here: > > ------------- > > It sounds like he is now agreeing with Eliezer. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From brent.allsop at gmail.com Fri Jul 7 14:21:54 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 7 Jul 2023 08:21:54 -0600 Subject: [ExI] Hofstadter now fears AI In-Reply-To: References: Message-ID: No, AI will be much smarter than current average humans. But re architected brain intelligence is about to start exploding even faster than all that, to say nothing of using AI's as personal intelligence amplification systems. Uploading to exponentially more capable system, uploading to bodies that don't need space or diving suites... Why does everyone completely ignore all that? On Thu, Jul 6, 2023 at 6:31?PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > ?AI is about to eclipse humans?, predicts a leading academic > 04/07/2023 > > < > https://nationworldnews.com/ai-is-about-to-eclipse-humans-predicts-a-leading-academic/ > > > > Quotes: > Douglas Hofstadter has been a strong advocate of separating the terms > intelligence and artificial intelligence over the years, not > considering that technologies based on machine learning and other > models are not worthy of being considered as intelligent as humans. 
> So far, systems like ChatGPT have sabotaged the plans of this
> academic. He is "terrified" by the future of these powerful AI models.
>
> "Very soon, it's quite possible that they (ChatGPT and other AIs) will
> be smarter than us, much smarter than us," Hofstadter said in an
> interview.
>
> He says that his faith has been shaken by what he describes as a
> traumatic experience, believing that humans will soon be eclipsed by
> machines.
> --------------------
>
> The relevant 8 minutes of the video interview is here:
>
> -------------
>
> It sounds like he is now agreeing with Eliezer.
>
> BillK
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gadersd at gmail.com  Fri Jul 7 15:01:13 2023
From: gadersd at gmail.com (Gadersd)
Date: Fri, 7 Jul 2023 11:01:13 -0400
Subject: [ExI] Hofstadter now fears AI
In-Reply-To: References: Message-ID:

> Uploading to exponentially more capable systems, uploading to bodies
> that don't need space suits or diving suits...
> Why does everyone completely ignore all that?

It will be a while before humans catch up to AI intelligence once it
surpasses us. With exponential growth of AI, just a few years will
likely put AI intelligence so far beyond our ken that AI will be as
gods to us. Brain hacking will probably be relatively slow due to
regulations and the inherent relative difficulty of
scanning/modifying/simulating biological tissue vs digital computer
systems. We will probably not be the main characters of the future,
I'm afraid.

> On Jul 7, 2023, at 10:21 AM, Brent Allsop via extropy-chat wrote:
>
> No, AI will be much smarter than current average humans.
> But re-architected brain intelligence is about to start exploding even
> faster than all that, to say nothing of using AIs as personal
> intelligence amplification systems.
> Uploading to exponentially more capable systems, uploading to bodies
> that don't need space suits or diving suits...
> Why does everyone completely ignore all that?
>
> On Thu, Jul 6, 2023 at 6:31 PM BillK via extropy-chat wrote:
>
> [...]
>
> It sounds like he is now agreeing with Eliezer.
>
> BillK
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com  Fri Jul 7 15:01:58 2023
From: pharos at gmail.com (BillK)
Date: Fri, 7 Jul 2023 16:01:58 +0100
Subject: [ExI] Hofstadter now fears AI
In-Reply-To: References: Message-ID:

On Fri, 7 Jul 2023 at 15:22, Brent Allsop wrote:
> No, AI will be much smarter than current average humans.
> But re-architected brain intelligence is about to start exploding even
> faster than all that, to say nothing of using AIs as personal
> intelligence amplification systems.
> Uploading to exponentially more capable systems, uploading to bodies
> that don't need space suits or diving suits...
> Why does everyone completely ignore all that?
> _______________________________________________

I think the main problem is the relative timescale of the two options.
AI is improving rapidly, approaching exponential improvements. AGI is
estimated to arrive in 5 to 10 years' time.
Tinkering with complex human brains is just taking its first baby
steps and faces great scientific difficulties. Also, there are ethical
concerns about just enhancing a few humans first, like the
billionaires, for example. Uploading is probably 100 years away, as so
much research needs to be done.
If AGI arrives in a few years' time, that will be the Singularity for
humans and all bets are off.

BillK

From hkeithhenson at gmail.com  Fri Jul 7 16:35:07 2023
From: hkeithhenson at gmail.com (Keith Henson)
Date: Fri, 7 Jul 2023 09:35:07 -0700
Subject: [ExI] Hofstadter now fears AI
In-Reply-To: References: Message-ID:

Back during the time I was posting on sl4, I might have slightly
improved the safety of AI research. At that time, one of the paths to
AI being considered was to blindly copy a human brain into fast
hardware. I threw cold water on that idea because at the time I was
working on poorly recognized human psychological traits such as
https://en.citizendium.org/wiki/Capture-bonding
and the psychological mechanisms humans have that lead them into war.
I think the path to war is tripped off by the detection of a looming
resource shortage. Humans are generally blind to the fact that they
have these evolved psychological mechanisms.

Blindly copying a brain into a powerful AI which could see a looming
resource crisis and go to war with humans over such resources seems
like a really bad idea.

The difficulty of the problem of scanning a brain may have been more
significant than my concerns, but in any case, you don't hear much
about this approach to AI today.

Keith

On Fri, Jul 7, 2023 at 7:23 AM Brent Allsop via extropy-chat wrote:
>
> No, AI will be much smarter than current average humans.
> But re-architected brain intelligence is about to start exploding even
> faster than all that, to say nothing of using AIs as personal
> intelligence amplification systems.
> Uploading to exponentially more capable systems, uploading to bodies
> that don't need space suits or diving suits...
> Why does everyone completely ignore all that?
>
> On Thu, Jul 6, 2023 at 6:31 PM BillK via extropy-chat wrote:
>>
>> "AI is about to eclipse humans", predicts a leading academic
>> 04/07/2023
>>
>> [...]
>>
>> He says that his faith has been shaken by what he describes as a
>> traumatic experience, believing that humans will soon be eclipsed by
>> machines.
>> --------------------
>>
>> The relevant 8 minutes of the video interview is here:
>>
>> -------------
>>
>> It sounds like he is now agreeing with Eliezer.
>>
>> BillK

From avant at sollegro.com  Sat Jul 8 22:41:17 2023
From: avant at sollegro.com (Stuart LaForge)
Date: Sat, 08 Jul 2023 15:41:17 -0700
Subject: [ExI] Conscious AI or a Zombie?
In-Reply-To: References: <20230701161436.Horde.ZbLUIYRfOr6z688KMQN4aYI@sollegro.com>
Message-ID: <20230708154117.Horde.oNPqVG_TLrU_fpG9UQC5bGa@sollegro.com>

Quoting William Flynn Wallace via extropy-chat :

> Except for smell, sensations come to us from our unconscious mind. There
> they get routed to various brain areas and then, only perhaps (they could
> be shut down and never get to the forebrain) to upper areas. These
> processes are aware of the nature of the stimuli and know where to send
> them, but they are not available to our conscious mind.

Here "conscious and unconscious mind" are psychological terms of art
that do not correspond to the term consciousness as generally defined.
The conscious mind and the unconscious mind are manifested by
different regions of the same brain, and both play a role in the
brain's generation of a state of consciousness. After all, we are, as
organisms, a collective of 37 trillion cells, each with their own
little lives, each doing a valuable job for the body. Some 100 billion
of them are neurons who get to live inside the skull and operate a
giant network called the brain. The neurons of the prefrontal cortex
in the forebrain are responsible for executive functioning and your
collective identity.
They are like the parliament of your body. Obviously, like all
parliaments, the forebrain typically does not attend to minute details
of sensation, just as parliament does not attend to the plight of
individual citizens. There is definitely a sense in which the
conscious mind is less aware of its environment moment to moment than
are other parts of the brain. Some part of your brain has to take over
your driving home from work when your conscious mind wanders.

> Or you could say that these unconscious processes are really conscious but
> simply not available to what we call our conscious mind. Thus you can have
> awareness or consciousness in part of the brain but not the part we think
> of as our consciousness.

It is exactly this slippery fluidity of the meaning of the word
consciousness that made me define the more rigorous term "causal
awareness". There are parts of your brain that are causally aware of
your drive home, even if your conscious mind finds itself on your
street with no recollection of how you got there. Unless, of course,
you hit stopped traffic due to an accident. Then your conscious mind
has to micromanage your drive, and boy is it annoyed by that. So I
think it is entirely fair to say that your brain as a whole is more
conscious than your forebrain and the conscious mind that it
manifests.

> If an organism deals in some appropriate way with incoming stimuli you
> could call it aware. Amoebas do that.

Yes, causal awareness is a function that maps an input vector from an
organism's sensorium to an output vector processed by the organism's
motorium. The difference between an amoeba and a human, in this
regard, is that the human's causal awareness vector is of much higher
dimension than an amoeba's. A thermostat could be considered a
one-dimensional vector function, an amoeba maybe a few tens of
dimensions, a human perhaps a few thousands, and so forth.
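A toy sketch of this mapping, to make the dimensionality point
concrete (purely illustrative; the dimension numbers and the random
linear map are arbitrary assumptions, not a model of any real
organism):

```python
import random

def make_causally_aware_agent(n_sense, n_act):
    """Return a toy 'causal awareness' function: a map from an
    n_sense-dimensional sensorium vector to an n_act-dimensional
    motorium vector. Random linear weights stand in for whatever
    evolution or learning would actually shape."""
    weights = [[random.uniform(-1, 1) for _ in range(n_sense)]
               for _ in range(n_act)]

    def act(sensed):
        # One action component per output row: a weighted sum of the senses.
        return [sum(w * s for w, s in zip(row, sensed)) for row in weights]

    return act

# Arbitrary dimensionalities echoing the examples above:
thermostat = make_causally_aware_agent(1, 1)      # one sense, one action
amoeba = make_causally_aware_agent(30, 8)         # a few tens of senses
human = make_causally_aware_agent(3000, 600)      # a few thousands

print(len(thermostat([21.5])))   # 1
print(len(amoeba([0.1] * 30)))   # 8
```

The only point is that the same kind of sensorium-to-motorium mapping
scales from one dimension to thousands; nothing here claims
biological realism.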
Stuart LaForge

> On Sun, Jul 2, 2023 at 1:02 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Stuart,
>>
>> Thank you for putting your argument into such clear words. I agree
>> completely with your definition of CAP.
>>
>> Also, those videos of the AI and robotic soccer players are great. I also
>> agree it is not logically possible to say these entities are not aware of
>> the ball, and what else is consciousness, beyond "awareness"? If the bots
>> are aware of the ball (which they plainly are), then they're conscious of
>> it.
>>
>> Jason
>>
>> On Sat, Jul 1, 2023 at 6:16 PM Stuart LaForge via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Quoting Jason Resch via extropy-chat :
>>>
>>>> If you believe that philosophical zombies are logically impossible, then
>>>> consciousness is logically necessary (in the presence of certain
>>>> behavioral, sensory, and introspective capacities).
>>>
>>> [...]
>>>
>>> While pre-programmed behavior could mimic rational action in the short
>>> term, any sufficient change in the p-zombie's environment, like a new
>>> obstacle, would thwart it and expose it as a zombie. In the harsh
>>> world of nature, philosophical zombies, even if they came to exist by
>>> some extraordinary chance, would quickly go extinct.
>>>
>>> Therefore philosophical zombies, as opposed to fungal zombies, are
>>> both logically and physically impossible. It is, in short, impossible
>>> to do what humans evolved from worms to be able to do without being,
>>> in some measure, more conscious than a worm.
>>>
>>>> Accordingly, I believe
>>>> that consciousness was an unavoidable consequence during the course of
>>>> evolution of life on Earth as once nature created creatures having certain
>>>> capacities/abilities, consciousness had no other choice but to exist, it
>>>> became logically necessary.
>>>
>>> Yes, there is certainly a natural history of consciousness. You can
>>> look at the evolution of the nervous system of vertebrates through the
>>> process of encephalization and decussation and see a causal narrative.
>>> The moment life decided to detach itself from the ocean bottom and
>>> move around looking for food, a complete, or open, digestive system
>>> with a separate mouth and anus became advantageous. Once it started
>>> moving mouth-first through the world, sensory organs like eyes or
>>> antennae became advantageous; moreover, those creatures which evolved
>>> sensory organs like taste-buds and eyes near their mouths, as opposed
>>> to near their anus, had a survival advantage.
>>>
>>> Once an organism had a concentration of senses on its rostral or front
>>> end, that end became differentiated from the caudal or rear end of the
>>> organism. The cluster of sensory organs became a rudimentary head. As
>>> part of that differentiation, called encephalization, it became
>>> advantageous for that organism to develop an organ to process the
>>> sensory information and locate it near the sense organs. The organ I
>>> speak of began as a small nerve cluster. As successive generations of
>>> the organism started moving through the environment faster, sensing
>>> and avoiding danger, finding food and resources, weathering natural
>>> disasters and extinction events, and generally leading more
>>> complicated lives, it finally evolved into the conscious brain.
>>>
>>>> The same, I believe, is true for AI. It is unavoidable when we create
>>>> machines of certain abilities, and I believe existing software is already
>>>> conscious. For example, my open source project on artificial sentience
>>>> could be such an example: https://github.com/jasonkresch/bots
>>>
>>> Yes, I agree. These AI learned how to play soccer/football on their
>>> own. They are certainly conscious of one another, the ball, and their
>>> goals, and that consciousness allows some very complex goal-seeking
>>> behavior to emerge. By the CAP, these AI agents are conscious, as
>>> shown in the following video presentation of a recent Science paper
>>> by Liu et al.
>>>
>>> https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s.
>>> Some might object because the consciousness is expressed within a
>>> virtual setting. But that's all right because Google built bodies for
>>> the little guys:
>>>
>>> https://www.youtube.com/watch?v=RbyQcCT6890
>>>
>>> If you push one over, they stand right back up. So yeah, the CAP says
>>> they are rudimentarily conscious because they display causal awareness
>>> and rational action. They lie somewhere between thermostats and humans
>>> on the consciousness scale. Someday, their consciousness may far
>>> surpass ours.
>>>
>>> Stuart LaForge

From pharos at gmail.com  Sun Jul 9 18:30:33 2023
From: pharos at gmail.com (BillK)
Date: Sun, 9 Jul 2023 19:30:33 +0100
Subject: [ExI] AI Outperforms Humans in Creativity Test
Message-ID:

Artificial Intelligence (AI), specifically GPT-4, was found to match
the top 1% of human thinkers on a standard creativity test. The AI
application ChatGPT, developed using GPT-4, excelled in fluency and
originality in the Torrance Tests of Creative Thinking, a widely
recognized tool for assessing creativity.

Quote:
"For ChatGPT and GPT-4, we showed for the first time that it performs
in the top 1% for originality," Guzik said. "That was new." He was
gratified to note that some of his UM students also performed in the
top 1%. However, ChatGPT outperformed the vast majority of college
students nationally.
------------

BillK

From pharos at gmail.com  Tue Jul 11 10:07:35 2023
From: pharos at gmail.com (BillK)
Date: Tue, 11 Jul 2023 11:07:35 +0100
Subject: [ExI] The Universe is big - really, really BIG.
Message-ID:

We can leave the Solar System, but arriving anywhere is not happening soon.
What if we dropped interstellar ambitions and focused on understanding
our home system?
Paul Sutter - 7/10/2023

Quote:
If we want to travel among the stars, we have to figure out a way to
conquer the unfathomable gulfs of nothingness that separate us. Let's
leave aside the idea of sending humans on that kind of expedition
anytime soon, if ever. We can't even send people to Mars, the next
nearest planet, without intense investment to solve the engineering
challenges associated with such a journey.

So let's focus on robotic probes for now, as that will have to do for
several generations to come. If we want to get to Proxima Centauri
quickly, we have to accelerate our craft to a decent fraction of the
speed of light (and even that's not that fast; a probe capable of
near-light speed would still take nearly half a decade).
--------------------

BillK

From pharos at gmail.com  Tue Jul 11 16:41:07 2023
From: pharos at gmail.com (BillK)
Date: Tue, 11 Jul 2023 17:41:07 +0100
Subject: [ExI] AI Agents with Multiple 'Selves' Learn faster
Message-ID:

AI Agents With "Multiple Selves" Can Rapidly Adapt to a Changing World
By Shelly Fan
July 11, 2023

Quote:
The idea is seemingly simple. Rather than a monolithic AI (a single
network that encompasses the entire "self"), the team constructed a
modular agent, each part with its own "motivation" and goals but
commanding a single "body." Like a democratic society, the AI system
argues within itself to decide on the best response, where the action
most likely to yield the largest winning outcome guides its next step.

In several simulations, the modular AI outperformed its classic
monolithic peer. Its adaptability especially shone when the
researchers artificially increased the number of goals that it had to
simultaneously maintain. The Lego-esque AI rapidly adapted, whereas
its monolithic counterpart struggled to catch up.
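A minimal sketch of the voting idea in the quote above (my toy
illustration, not the architecture from the paper: here each "self"
scores candidate actions against its own goal, and the agent takes the
action with the best combined score):

```python
# Toy "multiple selves" agent: each module values candidate actions
# against its own goal; the agent acts on the highest combined value.
# Illustrative only -- not the modular architecture from the paper.

def modular_choice(modules, actions):
    """modules: scoring functions (action -> float), one per 'self'.
    actions: candidate actions. Returns the action whose summed
    module score is highest (the first one wins on ties)."""
    return max(actions, key=lambda action: sum(m(action) for m in modules))

# Example: a 1-D grid world where two "selves" want different spots.
food_module = lambda pos: -abs(pos - 10)   # prefers position 10
water_module = lambda pos: -abs(pos - 4)   # prefers position 4

best = modular_choice([food_module, water_module], range(20))
print(best)  # 4: positions 4 through 10 tie, and max returns the first
```

If the environment changes, only the affected module's scores change;
the vote itself needs no retraining, which is one intuition for why a
modular agent might adapt faster.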
This adaptability further shone when the team challenged both AI
agents in changing environments. In one test, the resource goal
positions moved to a random grid location at sporadic time scales. The
modular AI quickly picked up on the changes and adapted to them,
whereas the monolithic agent performed far worse.
-------------------

"Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)"
- Walt Whitman, Leaves of Grass

BillK

From atymes at gmail.com  Tue Jul 11 17:33:42 2023
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 11 Jul 2023 10:33:42 -0700
Subject: [ExI] The Universe is big - really, really BIG.
In-Reply-To: References: Message-ID:

Most interstellar ambitions are flat-out impossible without some form
of faster-than-light drive. What few theoretically possible ways we
know of to achieve this cannot be developed without the sort of mature
outer space industry that seems highly unlikely to happen without
associated human colonization of non-Earth bodies in our solar system.

Therefore, to reach across the stars we must first seriously begin to
colonize the solar system.

On Tue, Jul 11, 2023 at 3:10 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> We can leave the Solar System, but arriving anywhere is not happening soon.
> What if we dropped interstellar ambitions and focused on understanding
> our home system?
> Paul Sutter - 7/10/2023
>
> <
https://arstechnica.com/science/2023/07/get-comfortable-in-the-solar-system-well-be-here-for-a-while/
>
>
> [...]
> We can't even send people to Mars, the next
> nearest planet, without intense investment to solve the engineering
> challenges associated with such a journey.
>
> So let's focus on robotic probes for now, as that will have to do for
> several generations to come. If we want to get to Proxima Centauri
> quickly, we have to accelerate our craft to a decent fraction of the
> speed of light (and even that's not that fast; a probe capable of
> near-light speed would still take nearly half a decade).
> --------------------
>
> BillK
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com  Tue Jul 11 18:49:38 2023
From: pharos at gmail.com (BillK)
Date: Tue, 11 Jul 2023 19:49:38 +0100
Subject: [ExI] The Universe is big - really, really BIG.
In-Reply-To: References: Message-ID:

On Tue, 11 Jul 2023 at 18:36, Adrian Tymes via extropy-chat wrote:
>
> Most interstellar ambitions are flat-out impossible without some form of
> faster-than-light drive.
> What few theoretically possible ways we know of to achieve this cannot
> be developed without the sort of mature outer space industry that seems
> highly unlikely to happen without associated human colonization of
> non-Earth bodies in our solar system.
>
> Therefore, to reach across the stars we must first seriously begin to
> colonize the solar system.
> _______________________________________________

I asked the new Claude 2 LLM (that was just released today) about FTL travel.
It was not very hopeful.
----------

What are the problems facing travel at near light speed?

There are a few key challenges faced by travel at near light speed:

Energy requirements: As mentioned before, it takes an enormous amount
of energy to accelerate to a significant fraction of light speed due
to relativistic mass increase. Just getting a spacecraft to 90% the
speed of light would require energy outputs rivaling a star.
Conventional rockets won't cut it.
New propulsion technology would be needed. Time dilation: A traveler moving at 90% the speed of light would experience time dilated by a factor of about 2.3 compared to a stationary observer. This means that a 10 year journey for the traveler would mean over 20 years passed on Earth. Long journeys would find the traveler returning to an aged earth. Communication delays: Signals sent between a near-lightspeed traveler and Earth would experience huge delays. A message sent from a ship traveling at 99% light speed would take over 5 years to reach those on Earth. Normal communication would be impossible. Navigation hazards: Even small particles like dust grains would carry enormous kinetic energy from a near-lightspeed ship's perspective. Impacts with these particles would be seriously damaging. Detection and avoidance of hazards would be critical. Deadly radiation: Collisions with particles would also bathe the ship in radiation. Shielding requirements would be extreme, adding more mass and energy needs. The hazard increases as speed increases. Required precision: Near light speed travel would require extremely precise trajectory control and navigation. Even tiny errors in course or speed could result in missing the destination completely. Autonomous systems or AI may be needed. While these challenges are daunting, some physicists think that technology could one day make near-lightspeed travel possible. But it would require major innovations in propulsion, materials, navigation and more. Significant obstacles remain before we're zipping around at 99% the cosmic speed limit. ------------------------- What are the problems facing travel exceeding light speed? Exceeding the speed of light poses both practical and fundamental physics problems: Infinite energy: As mentioned before, accelerating an object with mass to speeds faster than light would require infinite energy due to relativistic mass increasing to infinity at lightspeed. 
So some new physics or loophole would be needed. Violating causality: Faster than light travel would violate cause and effect and allow time travel to the past according to relativity. Most physicists think causality should remain inviolate. So the implications of FTL travel for timelines would need to be resolved. Time travel complexes: If FTL allows time travel, it also introduces logical problems like the grandfather paradox where someone goes back to change their own timeline. The implications and paradoxes of time travel are complex. Exotic matter: Many hypothetical FTL concepts like wormholes or warp drives require large amounts of "exotic" matter with negative mass/energy. But this exotic matter has not been shown to exist. It may not be possible to obtain or produce. Unknown side effects: The full effects of FTL travel are hard to predict. But it's likely that traveling faster than causality would have profound and possibly dangerous side effects. Think unstable wormholes, time loops, quantum disruption, etc. Lack of evidence: So far, there is no solid experimental evidence that FTL travel is even possible. Many concepts are entirely theoretical. So viable mechanisms for exceeding light speed remain speculative. While FTL could offer tantalizing possibilities, it comes with practical barriers and would require overturning much of known physics. Most experts think exceeding light speed is highly improbable given our current scientific knowledge. The problems and paradoxes may simply be insurmountable. But the quest continues. -------------------------------- BillK From brent.allsop at gmail.com Tue Jul 11 19:12:53 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 11 Jul 2023 13:12:53 -0600 Subject: [ExI] The Universe is big - really, really BIG. In-Reply-To: References: Message-ID: Wait, no. In 50 or so years, nobody will die. In other words, we will live, FOREVER. 
No problem spending a million or more years before we get somewhere,
in that case, right?
We've got to have something to fill up our eternal lives, don't we?

On Tue, Jul 11, 2023 at 12:51 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Tue, 11 Jul 2023 at 18:36, Adrian Tymes via extropy-chat
> wrote:
> >
> > Most interstellar ambitions are flat-out impossible without some form of
> > faster-than-light drive.
> > [...]
> > Therefore, to reach across the stars we must first seriously begin to
> > colonize the solar system.
>
> I asked the new Claude 2 LLM (that was just released today) about FTL
> travel.
> It was not very hopeful.
>
> [...]
>
> While FTL could offer tantalizing possibilities, it comes with
> practical barriers and would require overturning much of known
> physics. Most experts think exceeding light speed is highly improbable
> given our current scientific knowledge. The problems and paradoxes may
> simply be insurmountable. But the quest continues.
> --------------------------------
>
> BillK
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From avant at sollegro.com  Wed Jul 12 01:23:12 2023
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 11 Jul 2023 18:23:12 -0700
Subject: [ExI] Fwd: [LessWrong] Consciousness as a conflationary alliance term
Message-ID: <20230711182312.Horde.3cWN_5CeUB63w78lNU-yY_4@sollegro.com>

-------------- next part --------------
An embedded message was scrubbed...
From: Stuart LaForge
Subject: Fwd: [LessWrong] Consciousness as a conflationary alliance term
Date: Tue, 11 Jul 2023 18:01:55 -0700
Size: 59171
URL:

From tech101 at gmail.com  Wed Jul 12 01:48:34 2023
From: tech101 at gmail.com (Adam A. Ford)
Date: Wed, 12 Jul 2023 11:48:34 +1000
Subject: [ExI] The Universe is big - really, really BIG.
In-Reply-To:
References:
Message-ID:

Anders Sandberg and Stuart Armstrong wrote a paper 'Eternity in 6 hrs' -
tl;dr: build a Dyson swarm from the matter of a few large asteroids and
use it to launch heaps of VN (von Neumann) probes towards every solar
system/energy source within reach - and that reach is far. We don't know
how much dust there is between galaxies and galaxy clusters at the moment,
so it is hard to model the success rate. Assume lots of redundant probes.
http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

Here is my interview with Stuart on the topic:
https://www.youtube.com/watch?v=0U3sClR6m8s

Kind regards,
Adam A. Ford
AU Mobile +61 421 979977
Chair - Science, Technology & the Future - (Meetup / Facebook / YouTube /
Instagram / Twitter) - Convener, H+ Australia | Singularity Summit Australia

"A new type of thinking is essential if mankind is to survive and move
toward higher levels." - Albert Einstein, May 1946

On Wed, 12 Jul 2023 at 05:13, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> Wait, no.
> In 50 or so years, nobody will die. In other words, we will live, FOREVER.
> No problem spending a million or more years before we get somewhere, in
> that case, right?
> We've got to have something to fill up our eternal lives, don't we?
>
> On Tue, Jul 11, 2023 at 12:51 PM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> [snip: BillK's message with the Claude 2 transcript on near-lightspeed
>> and FTL travel, quoted in full earlier in this digest]
>>
>> BillK
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com  Wed Jul 12 03:12:51 2023
From: pharos at gmail.com (BillK)
Date: Wed, 12 Jul 2023 04:12:51 +0100
Subject: [ExI] The Universe is big - really, really BIG.
In-Reply-To:
References:
Message-ID:

On Wed, 12 Jul 2023 at 02:48, Adam A. Ford wrote:
> [snip: Adam's 'Eternity in 6 hrs' message, quoted in full above]

I seem to remember some discussion here on that paper some years ago.
The idea that an advanced civ could send probes to every planet in the
galaxy within a comparatively short timescale does seem to be quite
reasonable.
As there are no signs of probes or advanced civs, the conclusion drawn
was that either there may only be one civ per galaxy (i.e. advanced
civs are very rare). And we are the one for this galaxy.
Or there is some fundamental reason that makes all advanced civs
decide not to spam the galaxy with probes or replicators.
Not necessarily self-destruction, as retreating to live within virtual
reality is also possible, among other reasons.
The Fermi Paradox remains unsolved.

BillK

From avant at sollegro.com  Wed Jul 12 04:22:02 2023
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 11 Jul 2023 21:22:02 -0700
Subject: [ExI] Fwd: [LessWrong] Consciousness as a conflationary alliance term
Message-ID: <20230711212202.Horde.fVq4XyVYfNrzmF7LB9Fg5aU@sollegro.com>

Excuse the repost, but the last one was formatted as an attachment.
Andrew Critch interviewed about two dozen people and found that they
had 17 different interpretations of what consciousness meant. No wonder
there has been so much debate on the topic. I still think that causal
awareness is a good measure of it. 
Stuart LaForge ---------- Forwarded message --------- From: LessWrong Date: Tue, Jul 11, 2023, 1:29 PM Subject: [LessWrong] Consciousness as a conflationary alliance term To: Consciousness as a conflationary alliance term ------------------------------ by Andrew_Critch > July 11, 2023 6:09 PM *Tl;dr: In this post, I argue that the concept of 'consciousness' is more conflated than people realize, in that there's a lot of divergence in what people mean by "consciousness", and people are unaware of the degree of divergence. This confusion allows the formation of broad alliances around the value of consciousness, even when people don't agree on what it actually means. I call alliances built around conflated terms "**conflationary alliances* *".* Executive Summary *Part 1: *Mostly during my PhD, I somewhat-methodically interviewed a couple dozen people to figure out what they meant by consciousness, and found that (a) there seems to be a surprising amount of diversity in what people mean by the "consciousness", and (b) they are often surprised to find out that other people mean different things when they say "consciousness". This has implications for AI safety advocacy because AI will sometimes be feared and/or protected on the grounds that it is "conscious", and it's good to be able to navigate these debates wisely. (Other heavily conflated terms in AI discourse might include "fairness", "justice", "alignment", and "safety", although I don't want to debate any of those cases here. This post is going to focus on consciousness, and general ideas about the structure of alliances built around confused concepts in general.) *Part 2:* When X is a conflated term like "consciousness", large alliances can form around claims like "X is important" or "X should be protected". Here, the size of the alliance is a function of how many concepts get conflated with X. Thus, the alliance grows *because* of the confusion of meanings, not in spite of it. 
I call this a *conflationary alliance*. Persistent conflationary alliances
resist disambiguation of their core conflations, because doing so would
break up the alliance into factions who value the more precisely defined
terms. The resistance to deconflation can be deliberate, or merely a
social habit or inertia.

Part 1: What people mean by "consciousness".

"Consciousness" is an interesting word, because many people have already
started to notice that it's a confused term, yet there is still widespread
agreement that conscious beings have moral value. You'll even find some
people taking on strange positions like "I'm not conscious" or "I don't
know if I'm conscious" or "lookup tables are conscious", as if rebelling
against the implicit alliance forming around the "consciousness" concept.
What's going on here?

To investigate, over about 10 years between 2008 and 2018 I informally
interviewed dozens of people who I noticed were interested in talking
about consciousness, for 1-3 hours each. I did not publish these results,
and never intended to, because I was mainly just investigating for my own
interest. In retrospect, it would have been better, for me and for anyone
reading this post, if I'd made a proper anthropological study of it. I'm
sorry that didn't happen. In any case, here is what I have to share:

"Methodology"

Extremely informal; feel free to skip or just come back to this part if
you want to see my conclusions first.

- *Whom did I interview?* Mostly academics I met in grad school, in
cognitive science, AI, ML, and mathematics. In an ad hoc manner at
academic or other intellectually-themed gatherings, whenever people talked
about consciousness, I gravitated toward the conversation and tried to get
someone to spend a long conversation with me to unpack what they meant.

- *How did I interview them?* What I asked each person was to take some
time to look inside their own minds - 
sometimes starting out by paying attention to just their bodies, if
introspection was hard for them - and try to describe to me in more
detail the thing they were calling consciousness. I did not say "this is
an interview" or anything official-sounding, because honestly I didn't
feel very official about it. When they defined consciousness using common
near-synonyms like "awareness" or "experience", I asked them to instead
describe the structure of the consciousness process, in terms of moving
parts and/or subprocesses, at a level that would in principle help me to
programmatically check whether the processes inside another mind or
object were conscious. Often it took me 2-5 push-backs to get them
focusing on the 'structure' of what they called consciousness and not
just synonyms for it, but if they stuck with me for 10 minutes, they
usually ended up staying in the conversation beyond that, for more like
1-3 hours in total. Sometimes the conversation ended more quickly, in
like 20 minutes, if the notion of consciousness being conveyed was fairly
simple to describe. Some people seemed to have multiple views on what
consciousness is, in which cases I talked to them longer until they
became fairly committed to one main idea.

Caveats

I'm mainly only confident in the conclusion that people are referring to
a lot of different mental processes when they say "consciousness", and
are surprised to hear that others have very different meanings in mind. I
didn't take many notes or engage anyone else to longitudinally observe
these discussions, or do any other kind of adversarially-robust-scientist
stuff. I do not remember the names of the people with each answer, and
I'm pretty sure I have a bias where I've more easily remembered answers
that were given by more than one person. Nonetheless, I think my memory
here is good enough to be interesting and worth sharing, so here goes. 
Results

*Epistemic status: reporting from memory.*

From the roughly thirty conversations I remember having, below are the
answers I remember getting. Each answer is labeled with a number (n)
roughly counting the number of people I remember having that answer.
After most of the conversations I told people about the answers other
people had given, and >80% of the time they seemed surprised:

1. (n≈3) Consciousness as *introspection*. Parts of my mind are able to
look at other parts of my mind and think about them. That process is
consciousness. Not all beings have this, but I do, and I consider it
valuable. *Note: people with this answer tended to have shorter
conversations with me than the others, because the idea was simpler to
explain than most of the other answers.*

2. (n≈3) Consciousness as *purposefulness*. There is a sense that one's
life has meaning, or purpose, and that the pursuit of that purpose is
self-evidently valuable. Consciousness is the deep experience of that
self-evident value, or what religions might call the experience of having
a soul. This is consciousness. Probably not all beings have this, and
maybe not even all people, but I definitely do, and I consider it
valuable.

3. (n≈2) Consciousness as *experiential coherence*. I have a subjective
sense that my experience at any moment is a coherent whole, where each
part is related or connectable to every other part. This integration of
experience into a coherent whole is consciousness.

4. (n≈2) Consciousness as *holistic experience of complex emotions*.
Emotional affects like fear and sadness are complex phenomena. They
combine and sustain cognitive processes - like the awareness that someone
is threatening your safety, or that someone has died - as well as
physical processes - like tense muscles. It's possible to be holistically
aware of both the physical and abstract aspects of an emotion all at
once. This is consciousness. 
I don't know if other beings or objects have this, but I definitely do,
and I consider it valuable.

5. (n≈2) Consciousness as *experience of distinctive affective states*.
Simple bodily affects like hunger and fatigue are these raw and
self-evidently real "feelings" that you can "tell are definitely real".
The experience of these distinctively-and-self-evidently-real affective
states is consciousness. I don't know if other living things have this,
but non-living objects probably don't, and I definitely do, and I
consider it valuable.

6. (n≈2) Consciousness as *pleasure and pain*. Some of my sensations are
self-evidently "good" or "bad", and there is little doubt about those
conclusions. A bad experience like pain-from-exercise can lead to good
outcomes later, but the experience itself still self-evidently has the
"bad" quality. Consciousness is the experience of these self-evidently
"good" and "bad" features of sensation. Simple objects like rocks don't
have this, and maybe not even all living beings, but I definitely do, and
I consider it valuable.

7. (n≈2) Consciousness as *perception of perception*. Inside the mind is
something called "perception" that translates raw sense data into
awareness of objects and relations, e.g., "perceiving a chair from the
pixels on my retina". There's also an internal perception-like process
that looks at the process of perception while it's happening. That thing
is consciousness. Probably not all beings have this, but I do, and I
consider it valuable.

8. (n≈2) Consciousness as *awareness of awareness*. A combination of
perception and logical inference causes the mind to become intuitively
aware of certain facts about one's surroundings, including concrete
things like the presence of a chair underneath you while you sit, but
also abstract things like the fact that you will leave work and go home
soon if you can't figure out how to debug this particular bit of code. 
It's also possible to direct one's attention at the process of awareness
itself, thereby becoming aware of awareness. This is consciousness.
Probably not all beings have this, but I do, and I consider it valuable.

9. (n≈2) Consciousness as *symbol grounding*. Words, mental imagery, and
other symbolic representations of the world around us have meanings, or
"groundings", in a reality outside of our minds. We can sense the fact
that they have meaning by paying attention to the symbol and "feeling"
its connection to the real world. This experience of symbols having a
meaning is consciousness. Probably not all beings have this, but I
definitely do, and I consider it valuable.

10. (n≈2) Consciousness as *proprioception*. At any moment, I have a
sense of where my body is physically located in the world, including
where my limbs are, and how I'm standing, which constitutes a strong
sense of presence. That sense is what I call consciousness. I don't know
if other beings have this, but objects probably don't, and I definitely
do, and I consider it valuable.

11. (n≈2) Consciousness as *awakeness*. When I'm in dreamless sleep, I
have no memory or sense of existing or anything like that. When I wake
up, I do. Consciousness is the feeling of being awake. Probably not all
beings or objects have this, but I do, and I consider it valuable.

12. (n≈2) Consciousness as *alertness*. When I want, I can voluntarily
increase my degree of alertness or attunement to my environment. That
sense of alertness is consciousness, and it's something I have more of or
less of depending on whether I focus on it. Probably not all beings or
objects have this, but I do, and I consider it valuable.

13. (n≈2) Consciousness as *detection of cognitive uniqueness*. "It's
like something to be me". Being me is different from being other people
or animals like bats, and I can "tell" that just by introspecting and
noticing a bunch of unique things about my mind, and that my mind is
separate from other minds. 
I get a self-evident "this is me and I'm unique" feeling when I look
inside my mind. That's consciousness. Probably not all beings or objects
have this, but I do, and I consider it valuable.

14. (n≈1 or 2) Consciousness as *mind-location*. I have this feeling that
my mind exists and is located behind my eyes. That feeling of knowing
where my mind is located is consciousness. Probably not all beings or
objects have this, but I do, and I consider it valuable.

15. (n≈1) Consciousness as a *sense of cognitive extent*. I have this
sense that tells me which parts of the world are part of my body versus
external to my body. In a different but analogous way, I have a sense of
which information processes in the world are part of my mind versus
external to my mind. That sense that "this mind-stuff is my mind-stuff"
is consciousness. Probably a lot of living beings have this, but most
objects probably don't, and I consider it valuable.

16. (n≈1) Consciousness as *memory of memory*. I have a sense of my life
happening as part of a larger narrative arc. Specifically, it feels like
I can remember the process of storing my memories, which gives me a sense
of "Yeah, this stuff all happened, and being the one to remember it is
what makes me me". Probably not all beings or objects have this, but I
do, and I consider it valuable.

17. (n≈1) Consciousness as *vestibular sense*. At any moment, one
normally has a sense of being oriented towards the world in a particular
way, which goes away when you're dizzy. We feel locked into a kind of
physically embodied frame of reference, which tells us which way is up
and down and so on. This is the main source of my confidence that my mind
exists, and it's my best explanation of what I call consciousness. *Note:
Unlike the others, I don't remember this person saying they considered
consciousness to be valuable.*

So what is "consciousness"? 
It's a confused word that people reliably use to refer to mental
phenomena that they consider morally valuable, with surprising variation
in what specifically people have in mind when they say it. As a result,
we observe:

- Widespread agreement that conscious beings are valuable, and
- Widespread disagreement or struggle in defining or discovering "what
consciousness is".

What can be done about this? For one thing, when people digress from a
conversation to debate about "consciousness", nowadays I usually try
asking them to focus away from "consciousness" and instead talk about
either "intrinsically valuable cognition" or "formidable intelligence".
This usually helps the conversation move forward without having to pin
down what precisely they meant by "consciousness". More generally, this
variation in meanings intended by the word "consciousness" has
implications for how we think about alliances that form around the value
of consciousness as a core value.

Part 2: The conflationary alliance around human consciousness

*Epistemic status: personal sense-making from the observations above*

Most people use the word "consciousness" to refer to a cognitive process
that they consider either

- terminally valuable (as an aspect of moral patiency), or
- instrumentally valuable (as a component of intelligence).

Thus, it's easy to form alliances or agreement around claims like

- *conscious beings deserve protection*, or
- *human lives are valuable because we're conscious*, or
- *humans are smarter than other animals because we're conscious.*

Such utterances reinforce the presumption that consciousness must be
something valuable, but without pinning down specifically what is being
referred to. This vagueness in turn makes the claims more broadly
agreeable, and the alliance around the value of human consciousness
strengthens. 
I call this a *conflationary alliance*, because it's an alliance
supported by the conflation of concepts that would otherwise have been
valued by a smaller alliance. Here, the size of the alliance is a
function of how many concepts get conflated with the core value term. A
persistent conflationary alliance must, tautologically, resist the
disambiguation of its core conflations. The resistance can arise by
intentional design of certain Overton windows or slogans, or arise simply
by natural selection acting on the ability of memes to form alliances
that reinforce them. Correspondingly, there are lots of social patterns
that somehow end up protecting the conflated status of "consciousness" as
a justification for the moral value of human beings. Some examples:

- *Alice*: [eats a porkchop]
  *Bob*: You shouldn't eat pigs; they're conscious beings capable of
  suffering, you know!
  *Alice*: There's no scientific consensus on what consciousness is. It's
  mysterious, and I believe it's unique to humans. [continues eating
  porkchop]

- *Charlie*: I think AI might become conscious. Isn't that scary?
  *Dana*: Don't worry; there is no consensus on what consciousness is,
  because it's a mystery. It's hubris to think scientists are able to
  build conscious machines!
  *Charlie*: [feels relieved] Hmm, yeah, good point.

- *Eric*: AI systems are getting really smart, and I think they might be
  conscious, shouldn't we feel bad about essentially making them our
  slaves?
  *Faye*: Consciousness is special to humans and other living organisms,
  not machines. How it works is still a mystery to scientists, and
  definitely not something we can program into a computer.
  *Eric*: But these days AI systems are *trained*, not programmed, and
  how they work is mysterious to us, just like consciousness. So,
  couldn't we end up making them conscious without even knowing it? 
  *Faye*: Perhaps, but the fact that we don't know means we shouldn't
  treat them as valuable in the way humans are, because we *know* humans
  are conscious. At least I am; aren't you?
  *Eric*: Yes of course I'm conscious! [feels insecure about whether
  others will believe he's conscious] When you put it that way, I guess
  we're more confident in each other's consciousness than we can be about
  the consciousness of something different from us.

What should be done about these patterns? I'm not sure yet; a topic for
another day!

Conclusion

In Part 1, I described a bunch of slightly-methodical conversations I've
had about consciousness, where I learned that people are referring to
many different kinds of processes inside themselves when they say
"consciousness", and that they're surprised by the diversity of other
people's answers. I've also noticed people use "consciousness" to refer
to things they value, either terminally or instrumentally. In Part 2, I
note how this makes it easier to form alliances around the idea that
*consciousness is valuable*. There seems to be a kind of social
resistance to clarification about the meaning of "consciousness",
especially in situations where someone is defending or avoiding the
questioning of human moral superiority or priority. I speculate that
these conversational patterns further perpetuate the notion that
"consciousness" refers to something inherently mysterious. In such cases,
I often find it helpful to ask people to focus away from "consciousness"
and instead talk about either "intrinsically valuable cognition" or
"formidable intelligence", whichever better suits the discussion at hand.
In future posts I plan to discuss the implications of conflationary terms
and alliances for the future of AI and AI policy, but that work will
necessarily be more speculative and less descriptive than this one.

Thanks for reading! 
From avant at sollegro.com  Wed Jul 12 06:22:09 2023
From: avant at sollegro.com (Stuart LaForge)
Date: Tue, 11 Jul 2023 23:22:09 -0700
Subject: [ExI] The Universe is big - really, really BIG.
In-Reply-To:
References:
Message-ID: <20230711232209.Horde.pUkXI77kOYXYeT1W1ynoH2Z@sollegro.com>

Quoting BillK via extropy-chat :

> On Wed, 12 Jul 2023 at 02:48, Adam A. Ford wrote:
>> Anders Sandberg and Stuart Armstrong wrote a paper 'Eternity in 6 hrs'
>> [snip: full message quoted above]
>> http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

Von Neumann probes are really only useful for a postbiological species.
Until we are able to reliably upload, we would be seeding the galaxy
with unaligned AI, potentially causing problems for us if we ever master
shields and cryptobiosis or cryogenic suspension and need planets of our
own to colonize. If AI causes us to go extinct, then the probes wouldn't
be spreading our civilization but theirs.

> I seem to remember some discussion here on that paper some years ago.
> The idea that an advanced civ could send probes to every planet in the
> galaxy within a comparatively short timescale does seem to be quite
> reasonable.

Advanced civilizations could if they wanted to but are unlikely to want
to until they have reached machine phase or have somehow managed to
successfully align AI.

> As there are no signs of probes or advanced civs, the conclusion drawn
> was that either there may only be one civ per galaxy (i.e. 
advanced > civs are very rare). And we are the one for this galaxy. We could be the first spacefaring civilization in the galaxy, VN probes might be unpopular among spacefaring civilizations, or the probes of other civilizations might already be here. There is an interesting phenomenon experienced sporadically amongst radio operators where they will receive the echo of their own transmissions several seconds after they send it. It is called long delayed echoes (LDE), and it has been experienced since radios were first invented in the early 20th century. Although there are over a dozen theories, so far nobody has been able to pin down a cause for these long delayed signal returns or even be able to predictably reproduce them. In order to qualify as an LDE, a signal has to echo back more than 2.7 seconds after it is transmitted, which would suggest a reflective echo from something outside of the moon's orbit (a 2.7-second round trip at light speed corresponds to a one-way distance of about 405,000 km, just beyond the Moon's mean distance of roughly 384,000 km). The longest delay ever reported was 40 seconds. https://www.mn.uio.no/fysikk/english/people/aca/sverre/articles/shlionskiy15.html http://www.arp75.org/wp-content/uploads/2014/02/lde_la3za_arp_20090318.pdf Something to keep in mind is that most of these LDEs are not reported by Houston Control, radar arrays, or radio telescopes, but instead by amateurs using hobby-grade equipment in the 100 W to 20 kW power range. So how are they getting reflections back from the moon or farther without something amplifying their signal? > Or there is some fundamental reason that makes all advanced civs > decide not to spam the galaxy with probes or replicators. > Not necessarily self-destruction, as retreating to live within virtual > reality is also possible, among other reasons. Standard probes are more likely than replicators for reasons that I discussed above. Civilizations probably universally see the creation of unaligned AI von Neumann replicators as a bad idea. > The Fermi Paradox remains unsolved. Give it time. 
Humanity has only sampled a 100-year light cone of an enormous universe that is at least 13.8 billion years old. There is so much tantalizing stuff going on lately on so many fronts, we might be close to some answers to the Great Silence. Stuart LaForge From pharos at gmail.com Wed Jul 12 09:35:39 2023 From: pharos at gmail.com (BillK) Date: Wed, 12 Jul 2023 10:35:39 +0100 Subject: [ExI] New Eliezer Yudkowsky TED talk on AI dangers Message-ID: Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED 40K views 11 Jul 2023 10 minutes video. Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately. ------------------ There was some laughter from the audience during this brief talk. It emphasized his point that people will laugh at AI dangers until it is too late. BillK From giulio at gmail.com Thu Jul 13 07:46:24 2023 From: giulio at gmail.com (Giulio Prisco) Date: Thu, 13 Jul 2023 09:46:24 +0200 Subject: [ExI] One week to the Terasem Space Day Colloquium on July 20, 10amET, via Zoom Message-ID: One week to the Terasem Space Day Colloquium on July 20, 10amET, via Zoom. Mark your calendar, you are invited! Stellar speakers will talk about long-term space expansion and highly imaginative, far-out related concepts in science, philosophy, and politics (and science fiction of course). 
https://www.turingchurch.com/p/terasem-space-day-colloquium-july From spike at rainier66.com Thu Jul 13 19:08:56 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 13 Jul 2023 12:08:56 -0700 Subject: [ExI] (no subject) In-Reply-To: <5a36f883-dbea-416a-e8b3-c25cf0a6ca57@swisscows.email> References: <4c5edfbf-11b4-168d-55be-edba253661f9@swisscows.email> <7c25bb64-f624-42e8-b4ce-4e283a8ac6c7@swisscows.email> <5a36f883-dbea-416a-e8b3-c25cf0a6ca57@swisscows.email> Message-ID: <005401d9b5bd$79607030$6c215090$@rainier66.com> ...> On Behalf Of efc--- via extropy-chat Subject: Re: [ExI] (no subject) >...Now what do you get when you mug a conservative? Best regards, Daniel Shot. spike From spike at rainier66.com Fri Jul 14 04:51:57 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 13 Jul 2023 21:51:57 -0700 Subject: [ExI] appropriate for this age In-Reply-To: References: Message-ID: <010501d9b60e$eb7067b0$c2513710$@rainier66.com> On Mon, 26 Jun 2023 at 19:50, William Flynn Wallace via extropy-chat wrote: > > Did you know that people can get a creative writing degree in nonfiction? > Master of Creative Writing in Nonfiction. > I have no words to describe this. > > bill w > _______________________________________________ I was away on the road when this thread went by. Saw it on my phone but I am one who will not type on a phone keyboard. My fingers are expressive, but my thumbs are little more than two clumsy siblings of their eight superior adjacent digits, struggling to be equal but failing comically. Pathetic appendages they are, in comparison to their octet of eloquent loquacious colleagues. Never listen to the thumbs, for they are a pair of bucolic galoots, uncultured, lacking erudition and refinement, crude. The thumbs write only in the digital equivalent of grunts and growls. Any time you see something I wrote that is stupid, assume I did it with my thumbs. Billw, when I read your comment, your objection puzzled me. 
Nonfiction needs creativity as much or more than does fiction. For example, consider a chess columnist many years ago, Mig Greengard. Wrote a column for the New York Times. He was a perfect example of writing non-fiction creatively. Entertaining as all hell, as he described in his unique Mig way chess games and tournaments. Consider the way Carl Sagan presented astronomy, and the way Stephen Jay Gould presented science topics, the way Isaac Asimov wrote his non-fiction essays. All brilliantly creative, none of it written by the thumbs. spike From pharos at gmail.com Fri Jul 14 15:50:37 2023 From: pharos at gmail.com (BillK) Date: Fri, 14 Jul 2023 16:50:37 +0100 Subject: [ExI] How fun could doom intelligent life to a blissful extinction Message-ID: Big Think - July 14, 2023 How the pursuit of pleasure could doom all intelligent life to a blissful extinction The separation of pleasure from procreation may occur throughout the cosmos, providing an explanation for the Fermi Paradox. The global population is set to peak in 40 years, then decline. From sex toys to TikTok, humanity's separation of emotional rewards from procreation can be seen in the rise of "emode" technology. If the pursuit of happiness is the primary explanation for our decreasing fertility rate, this tendency might be true not just for humans but for all intelligent life - providing a possible explanation for the Fermi Paradox. ------------------- This is a modern follow-up to the famous book 'Amusing Ourselves to Death: Public Discourse in the Age of Show Business' (1985) by educator Neil Postman, which itself was a follow-up to Aldous Huxley's 'Brave New World' (1932). 
BillK From foozler83 at gmail.com Fri Jul 14 15:55:53 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 14 Jul 2023 10:55:53 -0500 Subject: [ExI] appropriate for this age In-Reply-To: <010501d9b60e$eb7067b0$c2513710$@rainier66.com> References: <010501d9b60e$eb7067b0$c2513710$@rainier66.com> Message-ID: Spike, you are making the same mistake BillK did. I know all of that and was just trying for a little humor in this age of fake news and spin. I have read many, many books of nonfiction and there is surely a range of writing ability and interest value. Some know their stuff but need a ghost writer or co-author. bill w On Thu, Jul 13, 2023 at 11:57?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On Mon, 26 Jun 2023 at 19:50, William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > Did you know that people can get a creative writing degree in nonfiction? > > Master of Creative Writing in Nonfiction. > > I have no words to describe this. > > > > bill w > > _______________________________________________ > > > I was away on the road when this thread went by. Saw it on my phone but I > am one who will not type on a phone keyboard. My fingers are expressive, > but my thumbs are little more than two clumsy siblings of their eight > superior adjacent digits, struggling to be equal but failing comically. > Pathetic appendages they are, in comparison to their octet of eloquent > loquacious colleagues. Never listen to the thumbs, for they are a pair of > bucolic galoots, uncultured, lacking erudition and refinement, crude. The > thumbs write only in the digital equivalent of grunts and growls. Any time > you see something I wrote that is stupid, assume I did it with my thumbs. > > Billw, when I read your comment, your objection puzzled me. Nonfiction > needs creativity as much or more than does fiction. For example, consider > a chess columnist many years ago, Mig Greengard. 
Wrote a column for the > New York Times. He was a perfect example of writing non-fiction > creatively. Entertaining as all hell, as he described in his unique Mig > way chess games and tournaments. Consider the way Carl Sagan presented > astronomy, and the way Stephen Jay Gould presented science topics, the way > Isaac Asimov wrote his non-fiction essays. All brilliantly creative, none > of it written by the thumbs. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at ziaspace.com Tue Jul 18 09:18:08 2023 From: john at ziaspace.com (John Klos) Date: Tue, 18 Jul 2023 09:18:08 +0000 (UTC) Subject: [ExI] Seeing if this is working... Message-ID: Hi, all, Apologies for the list going off-line. It seems we may need to find a new colocation provider, but for now things are up and running elsewhere. Please don't hesitate to let me know if you see any problems going forward. Thanks, John From brent.allsop at gmail.com Tue Jul 18 10:54:49 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 18 Jul 2023 04:54:49 -0600 Subject: [ExI] Seeing if this is working... In-Reply-To: References: Message-ID: ?? Thanks. On Tue, Jul 18, 2023, 3:18 AM John Klos via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi, all, > > Apologies for the list going off-line. It seems we may need to find a new > colocation provider, but for now things are up and running elsewhere. > > Please don't hesitate to let me know if you see any problems going forward. > > Thanks, > John > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ExiMod at protonmail.com Tue Jul 18 11:51:07 2023 From: ExiMod at protonmail.com (ExiMod) Date: Tue, 18 Jul 2023 11:51:07 +0000 Subject: [ExI] Seeing if this is working... In-Reply-To: References: Message-ID: Hi John Well Done! But there seems to be a problem with posts going into the archive and being distributed to the list. Maybe it is just very slow? Best wishes, ExiMod Sent with Proton Mail secure email. ------- Original Message ------- On Tuesday, July 18th, 2023 at 10:18, John Klos via extropy-chat wrote: > Hi, all, > > Apologies for the list going off-line. It seems we may need to find a new > colocation provider, but for now things are up and running elsewhere. > > Please don't hesitate to let me know if you see any problems going forward. > > Thanks, > John > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Tue Jul 18 10:37:40 2023 From: pharos at gmail.com (BillK) Date: Tue, 18 Jul 2023 11:37:40 +0100 Subject: [ExI] Seeing if this is working... In-Reply-To: References: Message-ID: Looks good here in the UK. :) BillK On Tue, 18 Jul 2023 at 10:20, John Klos via extropy-chat wrote: > > Hi, all, > > Apologies for the list going off-line. It seems we may need to find a new > colocation provider, but for now things are up and running elsewhere. > > Please don't hesitate to let me know if you see any problems going forward. > > Thanks, > John > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From john at ziaspace.com Tue Jul 18 15:50:02 2023 From: john at ziaspace.com (John Klos) Date: Tue, 18 Jul 2023 15:50:02 +0000 (UTC) Subject: [ExI] Seeing if this is working... 
In-Reply-To: References: Message-ID: <977e1e8a-848a-8021-def0-3f02ba931c5f@anath.zia.io> Hi, > But there seems to be a problem with posts going into the archive and > being distributed to the list. Maybe it is just very slow? It seems a few messages weren't delivered out of the new server because of missing reverse DNS. We're just going to have to wait until the ISP processes that. In the meantime, I'm sending all outgoing through another server (smarthosting) until that's fixed. The other issue is that I just changed the DNS right before I sent my message, so DNS elsewhere on the Internet might take a little bit to catch up. You should be seeing the correct archives now, including the three responses to my message: https://lists.extropy.org/pipermail/extropy-chat/2023-July/thread.html If you want to check for yourself, see if extropy.org and lists.extropy.org resolve to 107.215.143.5 and 2600:1700:b80:15d0::5 :) John From spike at rainier66.com Tue Jul 18 21:01:42 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jul 2023 14:01:42 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <00c201d9b771$fba21070$f2e63150$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> Message-ID: <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> I posted this right before the ExI list barfed. Posting again. spike -----Original Message----- From: spike at rainier66.com Sent: Saturday, 15 July, 2023 4:14 PM To: 'ExI chat list' Cc: 'BillK' ; spike at rainier66.com Subject: RE: [ExI] How fun could doom intelligent life to a blissful extinction -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat >...If the pursuit of happiness is the primary explanation for our decreasing fertility rate, this tendency might be true not just for humans but for all intelligent life - providing a possible explanation for the Fermi Paradox. ------------------- ... 
BillK _______________________________________________ BillK, this is really as plausible an explanation for the Fermi Paradox as any I have heard, and perhaps the most pleasant one. Having children is a way to experience happiness, but it is a risky bet indeed. If we find sufficient alternative routes to happiness, the notion of having children becomes ever less compelling. If we find alternative routes to the pleasures of copulation and all those cool endorphins we get from love, that whole risky activity isn't worth the effort either. Result: not enough young people to run the world we already built for them. But of course nuclear war could wipe out most of what we have done, creating the need for rebuilders and family people, so we might save our species in that horrifying way: radiation therapy. Or the singularity could kill us, but I don't think it would kill people who have never seen a computer. They might survive to build it all back. spike From hkeithhenson at gmail.com Tue Jul 18 23:21:27 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 18 Jul 2023 16:21:27 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: On Tue, Jul 18, 2023 at 3:07 PM spike jones via extropy-chat wrote: > > -----Original Message----- > From: extropy-chat On Behalf Of BillK via extropy-chat > > > >...If the pursuit of happiness is the primary explanation for our decreasing fertility rate, this tendency might be true not just for humans but for all intelligent life - providing a possible explanation for the Fermi Paradox. > > > ------------------- > ... > > BillK > > _______________________________________________ > > > > BillK, this is really as plausible an explanation for the Fermi Paradox as any I have heard, and perhaps the most pleasant one. 
Having children is a way to experience happiness, but it is a risky bet indeed. Spike, that's the case now, but given gene selection and nanomedicine, having a child would be a low-risk project. > If we find sufficient alternative routes to happiness, the notion of having children becomes ever less compelling. If we find alternative routes to the pleasures of copulation and all those cool endorphins We already have drugs that stimulate endorphins. > we get from love, that whole risky activity isn't worth the effort either. I disagree. I want to raise clones of da Vinci, Einstein, and Feynman among others. > Result: not enough young people to run the world we already built for them. Not a problem. If we need more, we just run the ones we have through the duplicator. It's not entirely obvious what we need them for in a world of nanotech though. > But of course nuclear war could wipe out most of what we have done, creating the need for rebuilders and family people, so we might save our species in that horrifying way: radiation therapy. Again, you are not thinking far enough about the problem. Nanotech can't help with a nuclear blast, but at least it can sort the radioactive isotopes out of the environment. *If* we give a hoot. If we want to stay in biological bodies, then we can either sort out the radioactive stuff or use cell repair machines to stay healthy. > Or the singularity could kill us, but I don't think it would kill people who have never seen a computer. They might survive to build it all back. If what we are seeing at Tabby's Star is a civilization approaching K2, then they made it. No reason we can't. I must say the light curves from Tabby's Star have forced a wrenching readjustment of my beliefs about the universe around us. For at least a decade I have been unable to stand SF that had aliens in it or FTL travel. FTL is still out, but it sure looks like someone has constructed one heck of a lot of energy collectors. Why? 
From our viewpoint, the only thing that makes any sense is computation. If my math is correct, it's over 400 times the area of the earth and radiating at around 65 K. I suspect that the size is limited by whatever they took apart to make it. Keith > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Tue Jul 18 20:56:09 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jul 2023 13:56:09 -0700 Subject: [ExI] Seeing if this is working... In-Reply-To: References: Message-ID: <008c01d9b9ba$47ede950$d7c9bbf0$@rainier66.com> Thanks John! spike -----Original Message----- From: extropy-chat On Behalf Of John Klos via extropy-chat Sent: Tuesday, 18 July, 2023 2:18 AM To: extropy-chat at lists.extropy.org Cc: John Klos Subject: [ExI] Seeing if this is working... Hi, all, Apologies for the list going off-line. It seems we may need to find a new colocation provider, but for now things are up and running elsewhere. Please don't hesitate to let me know if you see any problems going forward. Thanks, John _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From efc at swisscows.email Wed Jul 19 12:07:31 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Wed, 19 Jul 2023 14:07:31 +0200 (CEST) Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: Hello everyone, I think this raises interesting questions of ethics and the "purpose" of mankind. 
If we agree that happiness is the ultimate goal, and not the survival of the species, then why not stop having children if we can get along fine with robots and AIs? Perhaps, assuming life extension or "immortality", there will be a final generation? Best regards, Daniel On Tue, 18 Jul 2023, spike jones via extropy-chat wrote: > > I posted this right before the ExI list barfed. Posting again. > > spike > > > > -----Original Message----- > From: spike at rainier66.com > Sent: Saturday, 15 July, 2023 4:14 PM > To: 'ExI chat list' > Cc: 'BillK' ; spike at rainier66.com > Subject: RE: [ExI] How fun could doom intelligent life to a blissful extinction > > > > -----Original Message----- > From: extropy-chat On Behalf Of BillK via extropy-chat > > >> ...If the pursuit of happiness is the primary explanation for our decreasing fertility rate, this tendency might be true not just for humans but for all intelligent life - providing a possible explanation for the Fermi Paradox. > > > ------------------- > ... > > BillK > > _______________________________________________ > > > > BillK, this is really as plausible an explanation for the Fermi Paradox as any I have heard, and perhaps the most pleasant one. Having children is a way to experience happiness, but it is a risky bet indeed. If we find sufficient alternative routes to happiness, the notion of having children becomes ever less compelling. If we find alternative routes to the pleasures of copulation and all those cool endorphins we get from love, that whole risky activity isn't worth the effort either. Result: not enough young people to run the world we already built for them. > > But of course nuclear war could wipe out most of what we have done, creating the need for rebuilders and family people, so we might save our species in that horrifying way: radiation therapy. Or the singularity could kill us, but I don't think it would kill people who have never seen a computer. They might survive to build it all back. 
> > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From giulio at gmail.com Wed Jul 19 09:57:59 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 19 Jul 2023 11:57:59 +0200 Subject: [ExI] TOMORROW July 20: Terasem Space Day Colloquium In-Reply-To: References: Message-ID: The other founder of Extropy Magazine, Tom Bell, will give a talk on the governance of communities spread out across the vastness of space. On Wed, Jul 19, 2023 at 11:32 AM Giulio Prisco wrote: > > TOMORROW July 20: Terasem Space Day Colloquium, 10am-1pm ET, via Zoom. > Stellar speakers will present long-term perspectives on space > expansion. Legendary thinker Max More will give a talk on 'The > Extropic Frontier: Existential Opportunities of the Cosmos.' > https://www.turingchurch.com/p/terasem-space-day-colloquium-july From spike at rainier66.com Wed Jul 19 16:35:07 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 19 Jul 2023 09:35:07 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of efc--- via extropy-chat Subject: Re: [ExI] FW: How fun could doom intelligent life to a blissful extinction Hello everyone, >...I think this raises interesting questions of ethics and the "purpose" of mankind... Daniel Daniel, there are those who believe mankind has no purpose. There are those who believe our purpose is to have children, be fruitful and multiply, fill the world with our descendants, that sorta thing. As a thought experiment, let us create two groups and throw everyone into one bin or the other, the no-purposers and the breeders. 
My thought experiment leads me to conclude that the no-purposers are more successful at producing memetic offspring, whereas the breeders are more successful at producing genetic offspring. spike From giulio at gmail.com Wed Jul 19 09:32:48 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 19 Jul 2023 11:32:48 +0200 Subject: [ExI] TOMORROW July 20: Terasem Space Day Colloquium Message-ID: TOMORROW July 20: Terasem Space Day Colloquium, 10am-1pm ET, via Zoom. Stellar speakers will present long-term perspectives on space expansion. Legendary thinker Max More will give a talk on 'The Extropic Frontier: Existential Opportunities of the Cosmos.' https://www.turingchurch.com/p/terasem-space-day-colloquium-july From atymes at gmail.com Wed Jul 19 17:25:08 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 19 Jul 2023 10:25:08 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: Some people - not everyone by any extent, but a significant enough fraction of people - find it fun to have and raise children. In this scenario, perhaps only they would create further generations, but they are enough that further generations would continue to be created. On Wed, Jul 19, 2023 at 5:08 AM efc--- via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hello everyone, > > I think this raises interesting questions of ethics and the "purpose" of > mankind. > > If we agree that happiness is the ultimate goal, and not the survival > of the species, then why not stop having children if we can get along fine > with robots and AIs? > > Perhaps, assuming life extension or "immortality", there will be a final > generation? > > Best regards, > Daniel > > > On Tue, 18 Jul 2023, spike jones via extropy-chat wrote: > > > > > I posted this right before the ExI list barfed. Posting again. 
> > > > spike > > > > > > > > -----Original Message----- > > From: spike at rainier66.com > > Sent: Saturday, 15 July, 2023 4:14 PM > > To: 'ExI chat list' > > Cc: 'BillK' ; spike at rainier66.com > > Subject: RE: [ExI] How fun could doom intelligent life to a blissful > extinction > > > > > > > > -----Original Message----- > > From: extropy-chat On Behalf > Of BillK via extropy-chat > > > > > >> ...If the pursuit of happiness is the primary explanation for our > decreasing fertility rate, this tendency might be true not just for humans > but for all intelligent life ? providing a possible explanation for the > Fermi Paradox. > > > > < > https://bigthink.com/the-future/pursuit-happiness-doom-intelligent-life-blissful-extinction/ > > > > ------------------- > > ... > > > > BillK > > > > _______________________________________________ > > > > > > > > BillK, this is really as plausible an explanation for the Fermi Paradox > as any I have heard, and perhaps the most pleasant one. Having children is > a way to experience happiness, but it is a risky bet indeed. If we find > sufficient alternative routes to happiness, the notion of having children > becomes ever less compelling. If we find alternative routes to the > pleasures of copulation and all those cool endorphins we get from love, > that whole risky activity isn't worth the effort either. Result: not > enough young people to run the world we already built for them. > > > > But of course nuclear war could wipe out most of what we have done, > creating the need for rebuilders and family people, so we might save our > species in that horrifying way: radiation therapy. Or the singularity > could kill us, but I don't think it would kill people who have never seen a > computer. They might survive to build it all back. 
> > > > spike > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Jul 20 01:22:15 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 19 Jul 2023 18:22:15 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> ...> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] FW: How fun could doom intelligent life to a blissful extinction >...Some people - not everyone by any extent, but a significant enough fraction of people - find it fun to have and raise children. In this scenario, perhaps only they would create further generations, but they are enough that further generations would continue to be created... Sure, but it could be that successive generations get smaller and smaller. Adrian, consider the thought experiment with the two bins: those who believe that the purpose of human life is to create and raise the next generation and those who do not believe that. In general, those who do not believe that have fewer genetic offspring, which gives a long term advantage to the life creators. But the non-life creators have a memetic advantage: it is way easier to afford the good things in this life without offspring. So... the non-lifers bring lifers over from the lifer bin to the non-lifer bin, while the lifers create more lifers. 
If you think it over, it becomes clearer why there is such a severe culture war that generally pits the lifers against the non-lifers. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Thu Jul 20 02:36:21 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 19 Jul 2023 19:36:21 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> Message-ID: On Wed, Jul 19, 2023 at 7:17 PM spike jones via extropy-chat wrote: > snip > But the non-life creators have a memetic advantage: it is way easier to afford the good things in this life without offspring. You are not thinking this through. Can you think of any reason a world based on nanotech would not allow people to be as rich as they wished? I.e., the economic drain of having children will vanish. The world I set the clinic seed in had a population problem. But that was because the uploaded life was more desirable than the physical world for most people. The "powers that be" of that era were trying to maintain a physical state population of humans (why was unstated) but having a hard time doing so. 
Keith > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Thu Jul 20 04:58:22 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 19 Jul 2023 21:58:22 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> Message-ID: <004001d9bac6$cfad6ac0$6f084040$@rainier66.com> -----Original Message----- From: Keith Henson Subject: Re: [ExI] FW: How fun could doom intelligent life to a blissful extinction On Wed, Jul 19, 2023 at 7:17 PM spike jones via extropy-chat wrote: > snip >>... But the non-life creators have a memetic advantage: it is way easier to afford the good things in this life without offspring. spike >...You are not thinking this through. >...Can you think of any reason a world based on nanotech would not allow people to be as rich as they wished? I.e., the economic drain of having children will vanish. >...Keith Hi Keith, ja to all, but I was thinking in terms of the here and now. We in the USA are in the midst of a culture war not seen since the days of the Vietnam war, where opposing sides had a vastly different vision of the future and our purpose in this life. I am thinking of the current culture war in terms of the two bins (spawners and non-spawners) reaching into each other's bins. 
spike From efc at swisscows.email Thu Jul 20 12:04:00 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 20 Jul 2023 14:04:00 +0200 (CEST) Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> Message-ID: <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> Hello Spike, I agree, that sounds very reasonable. I also wonder if there is some kind of correlation between high intelligence and the tendency to memetic offspring and reduction in genetic offspring? The extremely highly intelligent people I met in my life, from my very limited and subjective point of view, tended to have fewer children, if any at all. I also find the reasoning behind the genetics interesting. Some tell me that its their way to some kind of immortality, but they only pass on 50% of their genes. Those genes in themselves, assuming a wider family tree, are already out there and will in theory be spread around. Then, among the memers, there are other takes on the theme. Spreading ideas or through economic activity ensuring that their actions are part of the web of the human market/culture etc. This can also include philanthropy for instance. Best regards, Daniel On Wed, 19 Jul 2023, spike at rainier66.com wrote: > > > -----Original Message----- > From: extropy-chat On Behalf Of efc--- via extropy-chat > > Subject: Re: [ExI] FW: How fun could doom intelligent life to a blissful extinction > > Hello everyone, > >> ...I think this raises interesting questions of ethics and the "purpose" of mankind... Daniel > > Daniel, there are those who believe mankind has no purpose. There are those who believe our purpose is to have children, be fruitful and multiply, fill the world with our descendants, that sorta thing. 
> > As a thought experiment, let us create two groups and throw everyone into one bin or the other, the no-purposers and the breeders. > > My thought experiment leads me to conclude that the no-purposers are more successful at producing memetic offspring, whereas the breeders are more successful at producing genetic offspring. > > spike > > > > > From efc at swisscows.email Thu Jul 20 12:13:34 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 20 Jul 2023 14:13:34 +0200 (CEST) Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: <0881aae1-d4bd-b818-dd3b-e45326f02f26@swisscows.email> This is a very interesting subject. I read a story in one of the local Swedish mainstream newspapers where anonymous couples were complaining that society expects people to enjoy having and raising children, and that they severely regretted having children. The reasons were the responsibility, the lack of autonomy, restrictions, cost etc. All the couples said that officially they all expressed intense happiness since that is what society expects, but in private they were all unhappy with the decision of having children. Another angle is that I think women are more biologically driven to enjoy having and raising children than men. I personally, for instance, find most children annoying, but for my significant other, there is no higher dream and expression of self-worth and identity. Couple this with, in Sweden at least, a tendency to more and more singles, and there you have a dampening factor as well. Last, but not least, you have all the other reasons: religion, financial security, custom etc.
So many reasons, and I am convinced that a rapidly changing society, where more people will lead better and better lives, where goods will cost less and less, will naturally lead to a weakening of the drive to having children. Best regards, Daniel On Wed, 19 Jul 2023, Adrian Tymes via extropy-chat wrote: > Some people - not everyone by any extent, but a significant enough fraction of people - find it fun to have and raise children. In > this scenario, perhaps only they would create further generations, but they are enough that further generations would continue to be > created. > > On Wed, Jul 19, 2023 at 5:08 AM efc--- via extropy-chat wrote: > Hello everyone, > > I think this raises interesting questions of ethics and the "purpose" of > mankind. > > If we agree that happiness is the ultimate goal, and not the survival > of the species, then why not stop having children if we can get along fine > with robots and AIs? > > Perhaps, assuming life extension or "immortality", there will be a final > generation? > > Best regards, > Daniel > > > On Tue, 18 Jul 2023, spike jones via extropy-chat wrote: > > > > > I posted this right before the ExI list barfed. Posting again. > > > > spike > > > > > > > > -----Original Message----- > > From: spike at rainier66.com > > Sent: Saturday, 15 July, 2023 4:14 PM > > To: 'ExI chat list' > > Cc: 'BillK' ; spike at rainier66.com > > Subject: RE: [ExI] How fun could doom intelligent life to a blissful extinction > > > > > > > > -----Original Message----- > > From: extropy-chat On Behalf Of BillK via extropy-chat > > > > > >> ...If the pursuit of happiness is the primary explanation for our decreasing fertility rate, this tendency might be > true not just for humans but for all intelligent life - providing a possible explanation for the Fermi Paradox. > > > > > > ------------------- > > ...
> > > > BillK > > _______________________________________________ > > > > BillK, this is really as plausible an explanation for the Fermi Paradox as any I have heard, and perhaps the most pleasant one. Having children is a way to experience happiness, but it is a risky bet indeed. If we find sufficient alternative routes to happiness, the notion of having children becomes ever less compelling. If we find alternative routes to the pleasures of copulation and all those cool endorphins we get from love, that whole risky activity isn't worth the effort either. Result: not enough young people to run the world we already built for them. > > But of course nuclear war could wipe out most of what we have done, creating the need for rebuilders and family people, so we might save our species in that horrifying way: radiation therapy. Or the singularity could kill us, but I don't think it would kill people who have never seen a computer. They might survive to build it all back. > > spike > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From foozler83 at gmail.com Thu Jul 20 13:33:30 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 20 Jul 2023 08:33:30 -0500 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <0881aae1-d4bd-b818-dd3b-e45326f02f26@swisscows.email> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <0881aae1-d4bd-b818-dd3b-e45326f02f26@swisscows.email> Message-ID: The easy way out of the dilemma: the couple donates their sperm and ova to the banks.
So they can have lots of children and not have to raise them. bill w On Thu, Jul 20, 2023 at 7:14?AM efc--- via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This is a very interesting subject. > > I read a story in one of the local swedish mainstream newspapers where > anonymous couples where complaining that society expects people to enjoy > having and raising children, and that they severely regretted having > children. > > The reason where the responsibility, the lack of autonomy, restrictions, > cost etc. All the couples said that officially they all expressed intense > happiness since that is what society expects, but in private they where > all unhappy with the decision of having children. > > Another angle is that I think women are more biologically driven to enjoy > having and raising children than men. I personally for instance, find most > children annoying but for my significant other, there is no higher dream > and expression of self worth and identity. Couple this with, in sweden at > least, a tendency to more and more singles, and you there have a dampening > factor as well. > > Last, but not least, you have all the other reasons, religion, financial > security, custom etc. > > So many reasons, and I am convinced that a rapidly changing society, where > more people will lead better and better lives, where goods will cost less > and less, will naturally lead to a weakening of the drive to having > children. > > Best regards, > Daniel > > > On Wed, 19 Jul 2023, Adrian Tymes via extropy-chat wrote: > > > Some people - not everyone by any extent, but a significant enough > fraction of people - find it fun to have and raise children. In > > this scenario, perhaps only they would create further generations, but > they are enough that further generations would continue to be > > created. 
> > > > On Wed, Jul 19, 2023 at 5:08?AM efc--- via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > Hello everyone, > > > > I think this raises interesting questions of ethics and the > "purpose" of > > mankind. > > > > If we agree that happiness is the the ultimate goal, and not the > survival > > of the species, then why not stop having children if we can get > along fine > > with robots and AI:s? > > > > Perhaps, assuming life extension or "immortality", there will be a > final > > generation? > > > > Best regards, > > Daniel > > > > > > On Tue, 18 Jul 2023, spike jones via extropy-chat wrote: > > > > > > > > I posted this right before the ExI list barfed. Posting again. > > > > > > spike > > > > > > > > > > > > -----Original Message----- > > > From: spike at rainier66.com > > > Sent: Saturday, 15 July, 2023 4:14 PM > > > To: 'ExI chat list' > > > Cc: 'BillK' ; spike at rainier66.com > > > Subject: RE: [ExI] How fun could doom intelligent life to a > blissful extinction > > > > > > > > > > > > -----Original Message----- > > > From: extropy-chat On > Behalf Of BillK via extropy-chat > > > > > > > > >> ...If the pursuit of happiness is the primary explanation for > our decreasing fertility rate, this tendency might be > > true not just for humans but for all intelligent life ? providing > a possible explanation for the Fermi Paradox. > > > > > > < > https://bigthink.com/the-future/pursuit-happiness-doom-intelligent-life-blissful-extinction/ > > > > > ------------------- > > > ... > > > > > > BillK > > > > > > _______________________________________________ > > > > > > > > > > > > BillK, this is really as plausible an explanation for the Fermi > Paradox as any I have heard, and perhaps the most > > pleasant one. Having children is a way to experience happiness, > but it is a risky bet indeed. If we find sufficient > > alternative routes to happiness, the notion of having children > becomes ever less compelling. 
If we find alternative > > routes to the pleasures of copulation and all those cool > endorphins we get from love, that whole risky activity isn't > > worth the effort either. Result: not enough young people to run > the world we already built for them. > > > > > > But of course nuclear war could wipe out most of what we have > done, creating the need for rebuilders and family people, > > so we might save our species in that horrifying way: radiation > therapy. Or the singularity could kill us, but I don't > > think it would kill people who have never seen a computer. They > might survive to build it all back. > > > > > > spike > > > > > > > > > > > > _______________________________________________ > > > extropy-chat mailing list > > > extropy-chat at lists.extropy.org > > > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat_______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > >_______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Jul 20 15:48:20 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 20 Jul 2023 10:48:20 -0500 Subject: [ExI] ai games Message-ID: Just for speculation: what games would two AIs play? Or invent? bill w -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Thu Jul 20 15:59:17 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jul 2023 08:59:17 -0700 Subject: [ExI] ai games In-Reply-To: References: Message-ID: <005501d9bb23$243c95a0$6cb5c0e0$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Subject: [ExI] ai games >?Just for speculation: what games would two AIs play? Or invent? bill w Core war? Billw, those of us who were computer geeks from the long time agos realize what a grim cynical comment that was. {8^D spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Jul 20 16:48:36 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jul 2023 09:48:36 -0700 Subject: [ExI] well wishes for marilyn In-Reply-To: <007601d9bb29$771fb440$655f1cc0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> Message-ID: <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Reposting under new subject line. Last Sunday, my son's childhood friend was in a bad accident, van hit her on her bicycle. She is busted up all to hell, comatose for three and a half days, but when she woke up yesterday she was still Marilyn, asking all the expected questions. She has going in her favor that she was a track star and in excellent physical condition. This is the girl who organized a non-profit math tutoring service at her high school, was a gold medalist athlete, eagle in Boy Scouts, gold rank in girl scouts (two different things) was scheduled to be the head of next week's National Youth Leadership Training for Boy Scouts of America, just all around good kid. 
Here's what I need from you please: write a sentence or two of encouragement to the patient, considering what you would feel if you were a teenage track star who suddenly needs to spend a few weeks at least in the hospital with nothing to do. If you have other chat groups, forward this note, ask them to please write a line or two for a teenage accident victim, forward the comments and well-wishes to me. We are friends with her family. I will print, cut out, paste hard copy into handmade get-well cards or a poster or both, take them over to her at the hospital. Do it please, do it now, do it with all the good will you can manage. Post them here, offlist is fine: spike at rainier66.com and I will take it from there. Her name is Marilyn Do. Do it for Do. Send them. Do it now pls. Thanks! spike From dsunley at gmail.com Thu Jul 20 16:48:04 2023 From: dsunley at gmail.com (Darin Sunley) Date: Thu, 20 Jul 2023 10:48:04 -0600 Subject: [ExI] ai games In-Reply-To: <005501d9bb23$243c95a0$6cb5c0e0$@rainier66.com> References: <005501d9bb23$243c95a0$6cb5c0e0$@rainier66.com> Message-ID: Core Wars actually requires new tactics in the age of AI. Traditional Core Wars / Corewar, for you kids in the audience, is played by two or more processes running in the same address space, trying to disrupt each other by partially overwriting the other player(s) while taking steps to avoid being overwritten themselves. In the traditional versions of the game, the address space is not overwhelmingly larger than the players, and the players are fairly small and very brittle - relatively small assembler programs such that overwriting even one instruction introduces what is probably an unrecoverable bug. Modern AIs change several of these assumptions. LLMs, and large neural net AIs in general, are "vast inscrutable matrices of floating point numbers" being interpreted by much smaller "shell" of more-or-less traditional code. [In "Diaspora", Greg Egan refers to this shell as an "exoself".] 
The entire point of a neural net is that small changes to the floating point weights generate only very small changes in the net's overall behavior. They are actually quite robust to having their weights bit-twiddled by an adversarial process. Conversely, they are very vulnerable to even small changes to the traditional shell code that runs the neural net - passes inputs into it, does matrix multiplication down through the layers, extracts and utilizes the output of the model, etc. Large AIs "playing" Core War against each other for keepsies are actually very robust against a lot of the traditional corewar strategies. They would however, definitely want to be constantly obfuscating their shell code, moving it around within their (much MUCH larger) weight data. Hiding their vulnerable needle within their much larger robust haystack. On Thu, Jul 20, 2023 at 10:10?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace > *Subject:* [ExI] ai games > > > > >?Just for speculation: what games would two AIs play? Or invent? > > > > bill w > > > > > > Core war? > > > > Billw, those of us who were computer geeks from the long time agos realize > what a grim cynical comment that was. {8^D > > > > spike > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
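Darin's contrast can be sketched numerically. The toy model below (a hypothetical two-layer tanh net built with numpy, not any real AI or Core War engine) illustrates the two failure modes he describes: percent-level noise on the weight matrices barely shifts the net's output, while a single overwritten element destroys an exact, instruction-like structure outright:

```python
import numpy as np

# Toy illustration, under assumed shapes and noise levels: a neural
# net's output varies smoothly with its weights, so small adversarial
# "bit-twiddling" barely moves its behavior, while a brittle
# program-like structure breaks if even one element changes.
rng = np.random.default_rng(42)

W1 = rng.normal(size=(16, 8))   # layer-1 weights
W2 = rng.normal(size=(8, 4))    # layer-2 weights

def forward(x, W1, W2):
    """A tiny two-layer tanh network."""
    return np.tanh(np.tanh(x @ W1) @ W2)

x = rng.normal(size=(1, 16))
baseline = forward(x, W1, W2)

# Jitter every weight by roughly 1% of its typical scale.
W1_hit = W1 + 0.01 * rng.normal(size=W1.shape)
W2_hit = W2 + 0.01 * rng.normal(size=W2.shape)
drift = float(np.max(np.abs(forward(x, W1_hit, W2_hit) - baseline)))

# The brittle "shell": an exact-instruction analogue that is broken
# by a single overwritten element.
shell = np.arange(10)           # stands in for literal machine code
shell_hit = shell.copy()
shell_hit[3] += 1               # one overwritten "instruction"
shell_intact = bool(np.array_equal(shell, shell_hit))

print(f"output drift after 1% weight noise: {drift:.4f}")
print(f"shell survives one-element overwrite: {shell_intact}")
```

The drift stays small because tanh layers are smooth in their weights, which is the "robust haystack"; the exact-match check is the "vulnerable needle."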
URL: From spike at rainier66.com Thu Jul 20 16:44:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jul 2023 09:44:34 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> Message-ID: <007601d9bb29$771fb440$655f1cc0$@rainier66.com> If you are in a hurry, go to the last paragraph please, all of yas, thx. spike -----Original Message----- From: efc at swisscows.email Subject: RE: [ExI] FW: How fun could doom intelligent life to a blissful extinction Hello Spike, >...I agree, that sounds very reasonable. I also wonder if there is some kind of correlation between high intelligence and the tendency to memetic offspring and reduction in genetic offspring? Ja, here's another take on it. Low intelligence people are better at making genetic offspring than their counterparts, but the more intellectual sorts are better at making memetic offspring than their dumber counterparts. >...The extremely highly intelligent people I met in my life, from my very limited and subjective point of view, tended to have fewer children, if any at all... Ja. There are multiple ways of looking at that. The lower intellectual people have far more access to the mainstream, people who the high intellect crowd wouldn't bother even talking to, regardless of how hot the person is. Oy vey it is difficult to express that idea without sounding like a conceited elitist bahstid. But think about it. Mainstream. Your crowd. Compare sizes of those groups please. What are you doing right now? Thinking and posting to a talking heads think tank? Or watching enormous men pretend to battle in a boxing ring? Which of those activities are favored by the mainstream please? 
>...I also find the reasoning behind the genetics interesting. Some tell me that its their way to some kind of immortality, but they only pass on 50% of their genes. Those genes in themselves, assuming a wider family tree, are already out there and will in theory be spread around... Genetic reproduction does not offer immortality in any sense. Only not dying can do that. Memetic reproduction offers a kind of immortality, in a very limited sense. I personally found genetic reproduction to be enormously satisfying and fulfilling, worth every dollar and every minute invested in the activity, a hundred times over, even though it isn't the path to immortality. >...Then, among the memers, there are other takes on the theme. Spreading ideas or through economic activity ensuring that their actions are part of the web of the human market/culture etc. This can also include philanthropy for instance. Best regards, Daniel Ja. Making this short life better for other people is the most worthwhile activity I can imagine. Speaking of which... I need a little favor from every person who is reading this please, won't cost you a cent, only take a few minutes. About ten years ago, you may recall a friend of mine, Christine Lajeunesse, who was a surgeon in Reno was shot by a deranged patient. For several days we didn't know if she would live or die. Her colleague did not survive, but she did. The shooting ended her surgery career. I called upon my chat groups to write her letters of encouragement, so we had a bunch of comments by Extropians, motorcycle people, math people, Mersenne people, yakkity yaks and bla blas all writing notes of encouragement, we printed them all out and cut them, pasted them on the back of a big get well poster. She still has that poster to this day. She says that poster helped her thru some tough times. OK then. Last Sunday, my son's childhood friend was in a bad accident, van hit her on her bicycle. 
She is busted up all to hell, comatose for three and a half days, but when she woke up yesterday she was still Marilyn, asking all the expected questions. She has going in her favor that she was a track star and in excellent physical condition. This is the girl who organized a non-profit math tutoring service at her high school, was a gold medalist athlete, eagle in Boy Scouts, gold rank in girl scouts (two different things), was scheduled to be the head of next week's National Youth Leadership Training for Boy Scouts of America, just all around good kid. Here's what I need from you please: write a sentence or two of encouragement to the patient, considering what you would feel if you were a teenage track star who suddenly needs to spend a few weeks at least in the hospital with nothing to do. If you have other chat groups, forward this paragraph, ask them to please write a line or two for a teenage accident victim, forward the comments and well-wishes to me. We are friends with her family. I will print, cut out, paste hard copy into handmade get-well cards or a poster or both, take them over to her at the hospital. Do it please, do it now, do it with all the good will you can manage. Post them here, offlist is fine: spike at rainier66.com and I will take it from there. Her name is Marilyn Do. Do it for Do. Send them. Do it now pls. Thanks! spike spike at rainier66.com From dsunley at gmail.com Thu Jul 20 16:02:11 2023 From: dsunley at gmail.com (Darin Sunley) Date: Thu, 20 Jul 2023 10:02:11 -0600 Subject: [ExI] ai games In-Reply-To: References: Message-ID: Oh wow. That's a HUGE question. Like, "a few interdisciplinary PhDs and an academic career or two worth of papers" huge.
But a good starting point would be to survey the vast landscape of games humans have created, make the more speculative survey of the ways in which an AI's neural architecture would vary from a baseline human's [both in terms of raw capabilities - speed, short-term memory capacity, and motivations - what do they consider to be "fun" and engaging compared to what humans consider to be fun and engaging], and project accordingly. On Thu, Jul 20, 2023 at 9:51?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Just for speculation: what games would two AIs play? Or invent? > > bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Jul 20 12:35:54 2023 From: pharos at gmail.com (BillK) Date: Thu, 20 Jul 2023 13:35:54 +0100 Subject: [ExI] How fun could doom intelligent life to a blissful extinction Message-ID: On Thu, 20 Jul 2023 at 03:48, Keith Henson via extropy-chat wrote: > > You are not thinking this through. > Can you think of any reason a world based on nanotech would not allow > people to be as rich as they wished? I.e., the economic drain of > having children will vanish. > > The world I set the clinic seed in had a population problem. But that > was because the uploaded life was more desirable than the physical > world for most people. The "powers that be" of that era were trying > to maintain a physical state population of humans (why was unstated) > but having a hard time doing so. > > Keith > ______________________________ The problem is the changeover time. We are not just talking about Fred and Wilma in Seattle. For countries with nanotech it could help to address the challenges such as resource depletion and environmental degradation. 
On the other hand, it could lead to a widening gap between rich and poor countries, as those with access to nanotech could reduce their populations while those without would continue to grow. The distribution of nanotech is also an issue. If it falls into the wrong hands, it could be used for harmful purposes, such as bioterrorism or the development of new weapons. It is therefore essential to develop effective mechanisms for controlling the distribution and use of nanotech safely to poorer countries that hate the developed world. If nanotech with virtually unlimited power is developed, then humanity will only survive with total control over how every member of the population uses nanotech. Perhaps an AGI which monitors and controls the behaviour of every human will be required. Humans are a very unruly mob to be given unlimited power. BillK From ben at zaiboc.net Thu Jul 20 20:29:49 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Thu, 20 Jul 2023 21:29:49 +0100 Subject: [ExI] How fun could doom intelligent life to a blissful extinction In-Reply-To: References: Message-ID: <76f7fd7a-d84c-bdb2-e5e3-f540e6d40895@zaiboc.net> On 20/07/2023 20:05, BillK wrote: > For countries with nanotech it could help to address the challenges > such as resource depletion and environmental degradation. On the other > hand, it could lead to a widening gap between rich and poor countries, > as those with access to nanotech could reduce their populations while > those without would continue to grow. > > The distribution of nanotech is also an issue. If it falls into the > wrong hands, it could be used for harmful purposes, such as > bioterrorism or the development of new weapons. It is therefore > essential to develop effective mechanisms for controlling the > distribution and use of nanotech safely to poorer countries that hate > the developed world. 
> > If nanotech with virtually unlimited power is developed, then humanity > will only survive with total control over how every member of the > population uses nanotech. Perhaps an AGI which monitors and controls > the behaviour of every human will be required. > Humans are a very unruly mob to be given unlimited power. Now I think it might be you who isn't thinking things through. If advanced nanotech is the panacea we think it will be, then we should encourage its spread to literally everyone in the world. What are the root causes of terrorism and armed conflict? Why do poorer countries hate richer ones? If nanotech can indeed cure these problems, then restricting it will just exacerbate them. Yes, there will be the odd nutters who don't care how well off they can be, but just want to cause mayhem, but that's just as true in the first world as in the third. Of course we will need defenses against the occasional rogue use of nanotech, but to talk about 'the wrong hands' is just falling back into the kind of thinking that causes most of our problems in the first place. I think it will be essential to develop effective mechanisms for widely distributing nanotech to every country and individual in the world. There will be problems, of course, but the advantages will massively outweigh them, I think. I want to look forward to a future where there is no wealth gap between countries, or even between individuals, and I think nanotech has the potential to achieve that. Not to mention, of course, its potential to completely disrupt authoritarian regimes. We've seen what the internet has done. Individually-controllable, advanced nanotech would leave the internet in the dust, in terms of individual empowerment. Imagine a world where every female on the planet has total control of her own reproductive system, just for starters.
Ben From efc at swisscows.email Thu Jul 20 20:59:26 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Thu, 20 Jul 2023 22:59:26 +0200 (CEST) Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: <007601d9bb29$771fb440$655f1cc0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> Message-ID: <590fb7ed-2e96-5fa5-01fe-00f4a9ae3f68@swisscows.email> Hello spike, will write a separate response directly to your email with a personal greeting. On Thu, 20 Jul 2023, spike at rainier66.com wrote: >> ...I agree, that sounds very reasonable. I also wonder if there is some > kind of correlation between high intelligence and the tendency to memetic > offspring and reduction in genetic offspring? > > Ja, here's another take on it. Low intelligence people are better at making > genetic offspring than their counterparts, but the more intellectual sorts > are better at making memetic offspring than their dumber counterparts. Did you ever read Flowers for Algernon? I think it is a very good book, showing clearly the loneliness very smart people can feel in their lives. I think this also contributes to the number of offspring relative to IQ. >> ...The extremely highly intelligent people I met in my life, from my very > limited and subjective point of view, tended to have fewer children, if any > at all... > > conceited elitist bahstid. But think about it. Mainstream. Your crowd. > Compare sizes of those groups please. What are you doing right now? > Thinking and posting to a talking heads think tank? Or watching enormous Yes. > men pretend to battle in a boxing ring? Which of those activities are > favored by the mainstream please? Hah, I actually enjoy boxing!
;) But only after I started to train boxing and discovered for myself the enormous skill it takes, and perhaps more importantly, to see and appreciate that skill. Before trying it myself, I definitely wasn't into it so, yes you are right. >> ...I also find the reasoning behind the genetics interesting. Some tell me > that its their way to some kind of immortality, but they only pass on 50% of > their genes. Those genes in themselves, assuming a wider family tree, are > already out there and will in theory be spread around... > > Genetic reproduction does not offer immortality in any sense. Only not > dying can do that. Memetic reproduction offers a kind of immortality, in a Yes, that was my point. I find it a strange way of looking at it. > very limited sense. I personally found genetic reproduction to be Well, very limited. Since I like the Tetrapharmakos view, ultimately for me, it doesn't matter as well. But yes, I guess it could be a kind of psychological chicken soup for the soul, memetic production I mean. https://en.wikipedia.org/wiki/Principal_Doctrines#Tetrapharmakos > enormously satisfying and fulfilling, worth every dollar and every minute > invested in the activity, a hundred times over, even though it isn't the > path to immortality. I'm happy for you! Does that include raising children as well or is it just the "act of reproduction"? >> ...Then, among the memers, there are other takes on the theme. Spreading > ideas or through economic activity ensuring that their actions are part of > the web of the human market/culture etc. This can also include philanthropy > for instance. > > Ja. Making this short life better for other people is the most worthwhile > activity I can imagine. Yes. My problem with a lot of philanthropy is that it tends to pacify people. Many people where I live are big believers in throwing money at some random country to feel better. The view of teaching people how to fish is non-existent around me. 
Best regards, Daniel From atymes at gmail.com Thu Jul 20 19:42:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 20 Jul 2023 12:42:22 -0700 Subject: [ExI] How fun could doom intelligent life to a blissful extinction In-Reply-To: References: Message-ID: On Thu, Jul 20, 2023, 12:06 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > If nanotech with virtually unlimited power is developed, then humanity > will only survive with total control over how every member of the > population uses nanotech. Perhaps an AGI which monitors and controls > the behaviour of every human will be required. > Unfortunately, total control of all humans - even just of very large groups - has proven impossible, and will likely remain so even with AI. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Jul 20 23:25:05 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jul 2023 16:25:05 -0700 Subject: [ExI] How fun could doom intelligent life to a blissful extinction In-Reply-To: References: Message-ID: <012001d9bb61$6b2d4110$4187c330$@rainier66.com> From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat ... >...Unfortunately, total control of all humans - even just of very large groups - has proven impossible, and will likely remain so even with AI... Do let us hope so Adrian. spike From hkeithhenson at gmail.com Fri Jul 21 00:13:32 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 20 Jul 2023 17:13:32 -0700 Subject: [ExI] How fun could doom intelligent life to a blissful extinction In-Reply-To: References: Message-ID: On Thu, Jul 20, 2023 at 12:06 PM BillK via extropy-chat wrote: snip > The problem is the changeover time.
Bill, I don't think you are thinking this through either. There is no reason to think the replication time of the stuff at the bottom of the nanotech world will be more than 20 minutes. Even if Drexler was off by a factor of 1000, the replication time is still very short by human standards. > We are not just talking about Fred and Wilma in Seattle. > > For countries with nanotech it could help to address the challenges > such as resource depletion and environmental degradation. On the other > hand, it could lead to a widening gap between rich and poor countries, > as those with access to nanotech could reduce their populations while > those without would continue to grow. The cost of nanotech stuff is as close to zero as you want to make it. There is no reason that nanotech would not spread worldwide almost at once. The Clinic Seed story had a foundation distributing clinics to every village in Africa. When the cost of something falls to near zero, expect people to be generous. Keith > The distribution of nanotech is also an issue. If it falls into the > wrong hands, it could be used for harmful purposes, such as > bioterrorism or the development of new weapons. It is therefore > essential to develop effective mechanisms for controlling the > distribution and use of nanotech safely to poorer countries that hate > the developed world. > > If nanotech with virtually unlimited power is developed, then humanity > will only survive with total control over how every member of the > population uses nanotech. Perhaps an AGI which monitors and controls > the behaviour of every human will be required. > Humans are a very unruly mob to be given unlimited power.
> > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Fri Jul 21 02:53:00 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 20 Jul 2023 19:53:00 -0700 Subject: [ExI] well wishes for marilyn In-Reply-To: <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Message-ID: Hi Marilyn, Spike Jones (who I have known for ages) wrote to me about your being banged up. Ouch. Hope you recover entirely. Hope they let you have a laptop. If they do, you might dig into the news of the day, the discovery that a blinking star (Tabby's star) has 15 like it in a tight cluster. Best wishes, Keith Henson https://en.wikipedia.org/wiki/Keith_Henson On Thu, Jul 20, 2023 at 9:50 AM spike jones via extropy-chat wrote: > > > Reposting under new subject line. > > Last Sunday, my son's childhood friend was in a bad accident, van hit her on > her bicycle. She is busted up all to hell, comatose for three and a half > days, but when she woke up yesterday she was still Marilyn, asking all the > expected questions. She has going in her favor that she was a track star > and in excellent physical condition. This is the girl who organized a > non-profit math tutoring service at her high school, was a gold medalist > athlete, eagle in Boy Scouts, gold rank in girl scouts (two different > things) was scheduled to be the head of next week's National Youth > Leadership Training for Boy Scouts of America, just all around good kid.
> > Here's what I need from you please: write a sentence or two of encouragement > to the patient, considering what you would feel if you were a teenage track > star who suddenly needs to spend a few weeks at least in the hospital with > nothing to do. If you have other chat groups, forward this note, ask them > to please write a line or two for a teenage accident victim, forward the > comments and well-wishes to me. We are friends with her family. I will > print, cut out, paste hard copy into handmade get-well cards or a poster or > both, take them over to her at the hospital. Do it please, do it now, do it > with all the good will you can manage. Post them here, offlist is fine: > > spike at rainier66.com > > and I will take it from there. Her name is Marilyn Do. > Do it for Do. Send them. Do it now pls. Thanks! > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Fri Jul 21 04:10:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jul 2023 21:10:54 -0700 Subject: [ExI] well wishes for marilyn In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Message-ID: <001701d9bb89$5870c740$095255c0$@rainier66.com> Got it, thanks Giulio. spike -----Original Message----- From: Giulio Prisco Sent: Thursday, 20 July, 2023 9:08 PM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] well wishes for marilyn Dear Marilyn, you don't know me but I heard of you in a message posted to a mailing list. I'm very sorry to hear that you need to spend a few weeks in the hospital with nothing to do. But thinking is doing! 
I suggest that you spend these few weeks thinking about what you want to do with the rest of your life. There are lots of important things to do, just pick some! And recover soon! On Thu, Jul 20, 2023 at 6:50 PM spike jones via extropy-chat wrote: > > > Reposting under new subject line. > > Last Sunday, my son's childhood friend was in a bad accident, van hit > her on her bicycle. She is busted up all to hell, comatose for three > and a half days, but when she woke up yesterday she was still Marilyn, > asking all the expected questions. She has going in her favor that > she was a track star and in excellent physical condition. This is the > girl who organized a non-profit math tutoring service at her high > school, was a gold medalist athlete, eagle in Boy Scouts, gold rank in > girl scouts (two different > things) was scheduled to be the head of next week's National Youth > Leadership Training for Boy Scouts of America, just all around good kid.
> > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Fri Jul 21 05:23:03 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 20 Jul 2023 22:23:03 -0700 Subject: [ExI] How fun could doom intelligent life to a blissful extinction In-Reply-To: <76f7fd7a-d84c-bdb2-e5e3-f540e6d40895@zaiboc.net> References: <76f7fd7a-d84c-bdb2-e5e3-f540e6d40895@zaiboc.net> Message-ID: On Thu, Jul 20, 2023 at 1:31 PM Ben Zaiboc via extropy-chat wrote: > > On 20/07/2023 20:05, BillK wrote: > > For countries with nanotech it could help to address the challenges > > such as resource depletion and environmental degradation. On the other > > hand, it could lead to a widening gap between rich and poor countries, > > as those with access to nanotech could reduce their populations while > > those without would continue to grow. > > > > The distribution of nanotech is also an issue. If it falls into the > > wrong hands, it could be used for harmful purposes, such as > > bioterrorism or the development of new weapons. It is therefore > > essential to develop effective mechanisms for controlling the > > distribution and use of nanotech safely to poorer countries that hate > > the developed world. > > > > If nanotech with virtually unlimited power is developed, then humanity > > will only survive with total control over how every member of the > > population uses nanotech. Perhaps an AGI which monitors and controls > > the behaviour of every human will be required. > > Humans are a very unruly mob to be given unlimited power. > > Now I think it might be you who isn't thinking things through. > > If advanced nanotech is the panacea we think it will be, then we should > encourage its spread to literally everyone in the world. I can't imagine how the spread could be stopped.
> What are the root causes of terrorism and armed conflict? why do poorer > countries hate richer ones? Do you have any link to make this case? From what I read, the people of the poor countries want to move to the rich ones. If nanotech can indeed cure these problems, > then restricting it will just exacerbate them. > Yes, there will be the odd nutters who don't care how well off they can > be, but just want to cause mayhem, but that's just as true in the first > world as in the third. Of course we will need defenses against the > occasional rogue use of nanotech, but to talk about 'the wrong hands' is > just falling back into the kind of thinking that causes most of our > problems in the first place. > > I think it wll be essential to develop effective mechanisms for widely > distributing nanotech to every country and individual in the world. > There will be problems, of course, but the advantages will massively > outweigh them, I think. > > I want to look forward to a future where there is no wealth gap between > countries, or even between individuals, and I think nanotech has the > potential to achieve that. Not to mention, of course, its potential to > completely disrupt authoritarian regimes. That might not be needed. An uploaded authoritarian could rule with an iron hand over a batch of simulated underlings run by an AI. This was how the prospective husband lost interest in Zaba in the Clinic Seed story. > We've seen what the internet has done. Individually-controllable, > advanced nanotech would leave the internet in the dust, in terms of > individual empowerment. Imagine a world where every female on the planet > has total control of her own reproductive system, just for starters. In spite of all the politics, that situation is much closer today than it was in the past. 
Keith > Ben > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Fri Jul 21 04:04:38 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jul 2023 21:04:38 -0700 Subject: [ExI] well wishes for marilyn In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Message-ID: <000e01d9bb88$784a1e50$68de5af0$@rainier66.com> COOL thx Keith. I will include this with the other messages. I had 14 of Marilyn's friends and classmates at my house today, and they are coming back tomorrow, to finish up making posters and writing well-wishes. We heard today that she is in a lot of pain but is out of danger mostly. Still in ICU, but fortunately she was wearing a helmet when the van hit her and her brain did not swell. Her sister was with her, but was not injured. Marilyn was a track star, so it is likely that is down the drain now. She is a science geek, so good chance she knows about Tabby's star. spike -----Original Message----- From: extropy-chat On Behalf Of Keith Henson via extropy-chat Sent: Thursday, 20 July, 2023 7:53 PM To: ExI chat list Cc: Keith Henson Subject: Re: [ExI] well wishes for marilyn Hi Marilyn Spike Jones (who I have known for ages) wrote to me about your being banged up. Ouch. Hope you recover entirely. Hope they let you have a laptop. If they do, you might dig into the news of the day, the discovery that a blinking star (Tabby's star) has 15 like it in a tight cluster. 
Best wishes, Keith Henson https://en.wikipedia.org/wiki/Keith_Henson From giulio at gmail.com Fri Jul 21 04:07:57 2023 From: giulio at gmail.com (Giulio Prisco) Date: Fri, 21 Jul 2023 06:07:57 +0200 Subject: [ExI] well wishes for marilyn In-Reply-To: <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Message-ID: Dear Marilyn, you don't know me but I heard of you in a message posted to a mailing list. I'm very sorry to hear that you need to spend a few weeks in the hospital with nothing to do. But thinking is doing! I suggest that you spend these few weeks thinking about what you want to do with the rest of your life. There are lots of important things to do, just pick some! And recover soon! On Thu, Jul 20, 2023 at 6:50 PM spike jones via extropy-chat wrote: > > > Reposting under new subject line. > > Last Sunday, my son's childhood friend was in a bad accident, van hit her on > her bicycle. She is busted up all to hell, comatose for three and a half > days, but when she woke up yesterday she was still Marilyn, asking all the > expected questions. She has going in her favor that she was a track star > and in excellent physical condition. This is the girl who organized a > non-profit math tutoring service at her high school, was a gold medalist > athlete, eagle in Boy Scouts, gold rank in girl scouts (two different > things) was scheduled to be the head of next week's National Youth > Leadership Training for Boy Scouts of America, just all around good kid.
> > Here's what I need from you please: write a sentence or two of encouragement > to the patient, considering what you would feel if you were a teenage track > star who suddenly needs to spend a few weeks at least in the hospital with > nothing to do. If you have other chat groups, forward this note, ask them > to please write a line or two for a teenage accident victim, forward the > comments and well-wishes to me. We are friends with her family. I will > print, cut out, paste hard copy into handmade get-well cards or a poster or > both, take them over to her at the hospital. Do it please, do it now, do it > with all the good will you can manage. Post them here, offlist is fine: > > spike at rainier66.com > > and I will take it from there. Her name is Marilyn Do. > Do it for Do. Send them. Do it now pls. Thanks! > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Thu Jul 20 22:54:36 2023 From: pharos at gmail.com (BillK) Date: Thu, 20 Jul 2023 23:54:36 +0100 Subject: [ExI] How fun could doom intelligent life to a blissful extinction In-Reply-To: References: Message-ID: On Thu, 20 Jul 2023 at 23:27, Adrian Tymes via extropy-chat wrote: > > On Thu, Jul 20, 2023, 12:06 PM BillK via extropy-chat wrote: >> >> If nanotech with virtually unlimited power is developed, then humanity >> will only survive with total control over how every member of the >> population uses nanotech. Perhaps an AGI which monitors and controls >> the behaviour of every human will be required. > > > Unfortunately, total control of all humans - even just of very large groups - > has proven impossible, and will likely remain so even with AI. 
> _______________________________________________ > Perhaps the developers of nanotech might decide to spread nanotech by a disease which changes human brains to make them avoid fighting and attacking other humans? Both 1984 and Brave New World used drugs to keep the population happy and under control. Nanotech should be able to be even more successful at achieving that result. BillK From spike at rainier66.com Fri Jul 21 12:49:39 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 21 Jul 2023 05:49:39 -0700 Subject: [ExI] did gpt4 get dumber? In-Reply-To: References: Message-ID: <001501d9bbd1$d0b28660$72179320$@rainier66.com> It felt like GPT got dumber while I was away. Gordon suggested that I got smarter, but I don't think that is the case. Chess players know that phenomenon well: humans definitely did get better during the era of chess software development, as we found out if we have old computer software: what once beat us we can now trounce. But... I was mostly away from all internet for about four weeks while on the road, and there has been minimal GPT use for the two weeks since I returned because of circumstances. So I didn't get smarter since the first week of June, I got dumber. But GPT seems to have gotten dumber still, and I have a possible explanation. OpenAI is selling subscriptions like crazy. We can safely assume they are not adding processors as fast as they are adding subscribers, so it would be reasonable to assume they need to spend fewer computing cycles per response, which would make it feel dumber than before. GPT hipsters, does that sound about right? spike From pharos at gmail.com Thu Jul 20 11:47:24 2023 From: pharos at gmail.com (BillK) Date: Thu, 20 Jul 2023 12:47:24 +0100 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> Message-ID: On Thu, 20 Jul 2023 at 03:48, Keith Henson via extropy-chat wrote: > > You are not thinking this through. > Can you think of any reason a world based on nanotech would not allow > people to be as rich as they wished? I.e., the economic drain of > having children will vanish. > > The world I set the clinic seed in had a population problem. But that > was because the uploaded life was more desirable than the physical > world for most people. The "powers that be" of that era were trying > to maintain a physical state population of humans (why was unstated) > but having a hard time doing so. > > Keith > _______________________________________________ The problem is the changeover time. We are not just talking about Fred and Wilma in Seattle. For countries with nanotech it could help to address the challenges such as resource depletion and environmental degradation. On the other hand, it could lead to a widening gap between rich and poor countries, as those with access to nanotech could reduce their populations while those without would continue to grow. The distribution of nanotech is also an issue. If it falls into the wrong hands, it could be used for harmful purposes, such as bioterrorism or the development of new weapons. It is therefore essential to develop effective mechanisms for controlling the distribution and use of nanotech safely to poorer countries that hate the developed world. If nanotech with virtually unlimited power is developed, then humanity will only survive with total control over how every member of the population uses nanotech.
Perhaps an AGI which monitors and controls the behaviour of every human will be required. Humans are a very unruly mob to be given unlimited power. BillK From brent.allsop at gmail.com Fri Jul 21 14:13:54 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 21 Jul 2023 08:13:54 -0600 Subject: [ExI] well wishes for marilyn In-Reply-To: <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Message-ID: Hi Marilyn, I heard about your accident, and that you'll be laid up for a while. And I heard about the wonderful things you've already done in your life, which this will interrupt. Just remember that experiencing redness for only 5 minutes, and then life ending, would be worth the millions of years of evolutionary struggle before life was able to experience something so phenomenally wonderful. To say nothing about all the other experiences we are all able to experience every single minute of every day with our friends and families. It is a shame that everyone tends to take all that kind of phenomenal joy for granted. Brent Allsop Founding Partner Canonizer.com On Thu, Jul 20, 2023 at 10:50 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Reposting under new subject line. > > Last Sunday, my son's childhood friend was in a bad accident, van hit her > on > her bicycle. She is busted up all to hell, comatose for three and a half > days, but when she woke up yesterday she was still Marilyn, asking all the > expected questions. She has going in her favor that she was a track star > and in excellent physical condition.
This is the girl who organized a > non-profit math tutoring service at her high school, was a gold medalist > athlete, eagle in Boy Scouts, gold rank in girl scouts (two different > things) was scheduled to be the head of next week's National Youth > Leadership Training for Boy Scouts of America, just all around good kid. > > Here's what I need from you please: write a sentence or two of > encouragement > to the patient, considering what you would feel if you were a teenage track > star who suddenly needs to spend a few weeks at least in the hospital with > nothing to do. If you have other chat groups, forward this note, ask them > to please write a line or two for a teenage accident victim, forward the > comments and well-wishes to me. We are friends with her family. I will > print, cut out, paste hard copy into handmade get-well cards or a poster or > both, take them over to her at the hospital. Do it please, do it now, do > it > with all the good will you can manage. Post them here, offlist is fine: > > spike at rainier66.com > > and I will take it from there. Her name is Marilyn Do. > Do it for Do. Send them. Do it now pls. Thanks! > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From dsunley at gmail.com Fri Jul 21 16:06:38 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 21 Jul 2023 10:06:38 -0600 Subject: [ExI] did gpt4 get dumber? In-Reply-To: <001501d9bbd1$d0b28660$72179320$@rainier66.com> References: <001501d9bbd1$d0b28660$72179320$@rainier66.com> Message-ID: Nah, it doesn't work that way.
While you can vary the amount of compute you use on a training run (and the more you use, typically the better), once a neural network like ChatGPT4 is trained, it takes the same amount of compute to run a textbot exchange through it every time. You can't reduce that amount of compute without rebuilding the thing from the ground up with a different, more efficient neural architecture. Theoretically they could have changed the activation function in the neurons to something more computationally efficient (i.e. moving from sigmoid or tanh to ReLU), but afaik they were already using ReLU, which is almost as efficient as you can get. I suppose they could be trying to carve neurons out without doing a ground-up retraining. That's kinda like blindly poking into your brain with a hot needle - they may find pieces they can prune, but they're just as likely to cripple it. If they're actively researching mechanistic interpretability (trying to actually understand what the giant inscrutable matrices are actually doing) and using those findings to optimize ChatGPT 4, they could be subtly altering or damaging its performance if they apply those changes to the production code, but it'd be mildly surprising if they're doing this in production without extensive benchmarking. On Fri, Jul 21, 2023 at 6:59 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > It felt like GPT got dumber while I was away. Gordon suggested that I got > smarter, but I don't think that is the case. Chess players know that > phenomenon well: humans definitely did get better during the era of chess > software development, as we found out if we have old computer software: > what once beat us we can now trounce. > > > > But... I was mostly away from all internet for about four weeks while on the > road, and there has been minimal GPT use for the two weeks since I returned > because of circumstances. So I didn't get smarter since the first week of > June, I got dumber.
> > > But GPT seems to have gotten dumber still, and I have a possible > explanation. OpenAI is selling subscriptions like crazy. We can safely > assume they are not adding processors as fast as they are adding > subscribers, so it would be reasonable to assume they need to spend fewer > computing cycles per response, which would make it feel dumber than before. > > > > GPT hipsters, does that sound about right? > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Fri Jul 21 14:49:03 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 21 Jul 2023 07:49:03 -0700 Subject: [ExI] well wishes for marilyn In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <020701d9ba5e$fb3a3b90$f1aeb2b0$@rainier66.com> <83cc76eb-0bdc-0072-9931-b4846e4c6436@swisscows.email> <007601d9bb29$771fb440$655f1cc0$@rainier66.com> <007b01d9bb2a$07a58990$16f09cb0$@rainier66.com> Message-ID: <002501d9bbe2$7e892360$7b9b6a20$@rainier66.com> Thx Brent! spike From: extropy-chat On Behalf Of Brent Allsop via extropy-chat Sent: Friday, 21 July, 2023 7:14 AM To: ExI chat list Cc: Brent Allsop Subject: Re: [ExI] well wishes for marilyn Hi Marilyn, I heard about your accident, and that you'll be laid up for a while. And I heard about the wonderful things you've already done in your life, which this will interrupt. Just remember that experiencing redness for only 5 minutes, and then life ending, would be worth the millions of years of evolutionary struggle before life was able to experience something so phenomenally wonderful. To say nothing about all the other experiences we are all able to experience every single minute of every day with our friends and families.
It is a shame that everyone tends to take all that kind of phenomenal joy for granted. Brent Allsop Founding Partner Canonizer.com On Thu, Jul 20, 2023 at 10:50 AM spike jones via extropy-chat > wrote: Reposting under new subject line. Last Sunday, my son's childhood friend was in a bad accident, van hit her on her bicycle. She is busted up all to hell, comatose for three and a half days, but when she woke up yesterday she was still Marilyn, asking all the expected questions. She has going in her favor that she was a track star and in excellent physical condition. This is the girl who organized a non-profit math tutoring service at her high school, was a gold medalist athlete, eagle in Boy Scouts, gold rank in girl scouts (two different things) was scheduled to be the head of next week's National Youth Leadership Training for Boy Scouts of America, just all around good kid. Here's what I need from you please: write a sentence or two of encouragement to the patient, considering what you would feel if you were a teenage track star who suddenly needs to spend a few weeks at least in the hospital with nothing to do. If you have other chat groups, forward this note, ask them to please write a line or two for a teenage accident victim, forward the comments and well-wishes to me. We are friends with her family. I will print, cut out, paste hard copy into handmade get-well cards or a poster or both, take them over to her at the hospital. Do it please, do it now, do it with all the good will you can manage. Post them here, offlist is fine: spike at rainier66.com and I will take it from there. Her name is Marilyn Do. Do it for Do. Send them. Do it now pls. Thanks! spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jasonresch at gmail.com Fri Jul 21 16:48:05 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 21 Jul 2023 12:48:05 -0400 Subject: [ExI] did gpt4 get dumber? In-Reply-To: <001501d9bbd1$d0b28660$72179320$@rainier66.com> References: <001501d9bbd1$d0b28660$72179320$@rainier66.com> Message-ID: You're not the only one who thinks so: https://twitter.com/svpino/status/1681614284613099520?s=20 Jason On Fri, Jul 21, 2023, 8:49 AM wrote: > > > > > It felt like GPT got dumber while I was away. Gordon suggested that I got > smarter, but I don't think that is the case. Chess players know that > phenomenon well: humans definitely did get better during the era of chess > software development, as we found out if we have old computer software: > what once beat us we can now trounce. > > > > But... I was mostly away from all internet for about four weeks while on the > road, and there has been minimal GPT use for the two weeks since I returned > because of circumstances. So I didn't get smarter since the first week of > June, I got dumber. > > > > But GPT seems to have gotten dumber still, and I have a possible > explanation. OpenAI is selling subscriptions like crazy. We can safely > assume they are not adding processors as fast as they are adding > subscribers, so it would be reasonable to assume they need to spend fewer > computing cycles per response, which would make it feel dumber than before. > > > > GPT hipsters, does that sound about right? > > > > spike > > -- > You received this message because you are subscribed to the Google Groups > "The Important Questions" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to the-important-questions+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/the-important-questions/001501d9bbd1%24d0b28660%2472179320%24%40rainier66.com > > . > For more options, visit https://groups.google.com/d/optout.
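Darin's explanation upthread - that a trained network's forward-pass cost is fixed by its weight shapes, so heavier subscriber load can slow responses down without making them dumber - can be put in back-of-envelope form. The sketch below is illustrative only: the layer sizes are hypothetical toy numbers for a single dense feed-forward block, not OpenAI's real architecture, and a real transformer adds attention layers and much larger matrices on top.

```python
def forward_macs(layer_sizes):
    """Multiply-accumulate count for one forward pass through a stack of
    dense layers.  The total depends only on the weight-matrix shapes,
    never on how many other users are hitting the server."""
    return sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical toy feed-forward block (real model dimensions differ).
print(forward_macs([4096, 16384, 4096]))  # 134217728 MACs per pass, every time

# Halving the hidden width halves the arithmetic; that is essentially the
# only lever, and pulling it means rebuilding or pruning the network itself.
print(forward_macs([4096, 8192, 4096]))   # 67108864
```

On this toy model, an overloaded but unmodified network should answer more slowly, not worse; measurably worse answers would point at changes to the weights or to the layers wrapped around them.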
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Jul 21 16:10:12 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 21 Jul 2023 10:10:12 -0600 Subject: [ExI] did gpt4 get dumber? In-Reply-To: References: <001501d9bbd1$d0b28660$72179320$@rainier66.com> Message-ID:

TLDR: A lot of people hitting a resource-constrained ChatGPT simultaneously would make its responses slower, but not dumber. There are things they could be doing that would make it dumber, including pruning nonessential neurons post-training, or [most likely] adding additional RLHF (reinforcement learning from human feedback) / censorship layers.

On Fri, Jul 21, 2023 at 10:06 AM Darin Sunley wrote:
> Nah, it doesn't work that way. While you can vary the amount of compute you use on a training run (and the more you use, typically the better), once a neural network like ChatGPT4 is trained, it takes the same amount of compute to run a textbot exchange through it every time. You can't reduce that amount of compute without rebuilding the thing from the ground up with a different, more efficient neural architecture.
>
> Theoretically they could have changed the activation function in the neurons to something more computationally efficient (i.e. moving from sigmoid or tanh to ReLU), but afaik they were already using ReLU, which is almost as efficient as you can get.
>
> I suppose they could be trying to carve neurons out without doing a ground-up retraining. That's kinda like blindly poking into your brain with a hot needle - they may find pieces they can prune, but they're just as likely to cripple it.
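The two mechanisms described above, cheaper activation functions and carving out small weights, can be sketched in a few lines of Python. This is textbook illustration only and assumes nothing about ChatGPT's actual internals; `prune` is a made-up helper name for simple magnitude pruning.

```python
import math

# Textbook activation functions. ReLU is a single comparison, while
# sigmoid and tanh each cost an exponential, which is why ReLU-family
# activations are the cheap default in large networks.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return x if x > 0.0 else 0.0

# Toy magnitude pruning: keep the largest-magnitude fraction of the
# weights and zero out the rest. Real pruning operates tensor-by-tensor
# and is usually followed by fine-tuning to recover lost accuracy.
def prune(weights, keep_fraction):
    k = max(1, int(len(weights) * keep_fraction))
    cutoff = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

if __name__ == "__main__":
    print(sigmoid(0.0), math.tanh(0.0), relu(-3.0), relu(3.0))
    print(prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.002], 0.5))
    # -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The point of the sketch is the one made above: once the architecture is fixed, the per-token cost is fixed, and the remaining levers are changes like these, each of which risks degrading output quality.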
If they're actively researching mechanistic interpretability (trying to actually understand what the giant inscrutable matrices are actually doing) and using those findings to optimize ChatGPT 4, they could be subtly altering or damaging its performance if they apply those changes to the production code, but it'd be mildly surprising if they're doing this in production without extensive benchmarking.

> On Fri, Jul 21, 2023 at 6:59 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>> snip
>>
>> spike
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Fri Jul 21 17:17:12 2023 From: pharos at gmail.com (BillK) Date: Fri, 21 Jul 2023 18:17:12 +0100 Subject: [ExI] did gpt4 get dumber? In-Reply-To: References: <001501d9bbd1$d0b28660$72179320$@rainier66.com> Message-ID: On Fri, 21 Jul 2023 at 17:59, Jason Resch via extropy-chat wrote:
> You're not the only one who thinks so:
> https://twitter.com/svpino/status/1681614284613099520?s=20
> Jason
> _______________________________________________

I've read some articles that suggest they are trying mods to improve GPT-4. But because they don't really understand the huge neural network, when they improve one part it makes another part worse. In some respects GPT-3.5 is now better than GPT-4. I expect they'll get it right eventually. :)

BillK

From spike at rainier66.com Fri Jul 21 19:17:06 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 21 Jul 2023 12:17:06 -0700 Subject: [ExI] apologies Message-ID: <010801d9bc07$f0a15c40$d1e414c0$@rainier66.com>

ExI chatters who responded to my request for best wishes to Marilyn, thanks! If you posted me a note to her along with another note for me, but I failed to respond to the additional note, my humble apologies. Life is happening to us this week and will likely keep happening for at least the next two or more weeks as we deal with two aging (and ailing) parents along with a simultaneous crisis among the teenage crowd (Marilyn's accident). If I never get back to those notes, do understand I am not ghosting you, just hoping to keep up with circumstances, as circumstances flee my grasp.

spike

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Thu Jul 20 14:27:32 2023 From: pharos at gmail.com (BillK) Date: Thu, 20 Jul 2023 15:27:32 +0100 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> Message-ID: On Thu, 20 Jul 2023 at 03:48, Keith Henson via extropy-chat wrote: > > You are not thinking this through. > > Can you think of any reason a world based on nanotech would not allow > people to be as rich as they wished? I.e., the economic drain of > having children will vanish. > > The world I set the clinic seed in had a population problem. But that > was because the uploaded life was more desirable than the physical > world for most people. The "powers that be" of that era were trying > to maintain a physical state population of humans (why was unstated) > but having a hard time doing so. > > Keith > _______________________________________________ The problem is the changeover time. We are not just talking about Fred and Wilma in Seattle. For countries with nanotech it could help to address the challenges such as resource depletion and environmental degradation. On the other hand, it could lead to a widening gap between rich and poor countries, as those with access to nanotech could reduce their populations while those without would continue to grow. The distribution of nanotech is also an issue. If it falls into the wrong hands, it could be used for harmful purposes, such as bioterrorism or the development of new weapons. It is therefore essential to develop effective mechanisms for controlling the distribution and use of nanotech safely to poorer countries that hate the developed world. If nanotech with virtually unlimited power is developed, then humanity will only survive with total control over how every member of the population uses nanotech. 
Perhaps an AGI which monitors and controls the behaviour of every human will be required. Humans are a very unruly mob to be given unlimited power.

BillK

From pharos at gmail.com Thu Jul 20 16:33:17 2023 From: pharos at gmail.com (BillK) Date: Thu, 20 Jul 2023 17:33:17 +0100 Subject: [ExI] ai games In-Reply-To: <005501d9bb23$243c95a0$6cb5c0e0$@rainier66.com> References: <005501d9bb23$243c95a0$6cb5c0e0$@rainier66.com> Message-ID: On Thu, 20 Jul 2023 at 17:11, spike jones via extropy-chat wrote:
> From: extropy-chat On Behalf Of William Flynn Wallace
> Subject: [ExI] ai games
> >...Just for speculation: what games would two AIs play? Or invent? bill w
>
> Core war?
> Billw, those of us who were computer geeks from the long time agos realize what a grim cynical comment that was. {8^D
>
> spike
> _______________________________________________

Or those who can do a search! :)

BillK

From hkeithhenson at gmail.com Sat Jul 22 02:09:35 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 21 Jul 2023 19:09:35 -0700 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> <001c01d9baa8$9f36a190$dda3e4b0$@rainier66.com> Message-ID: On Fri, Jul 21, 2023 at 12:58 PM BillK via extropy-chat wrote:
> On Thu, 20 Jul 2023 at 03:48, Keith Henson via extropy-chat wrote: snip
>
> The problem is the changeover time.

Short. Certainly less than a year, perhaps on the scale of a few weeks.

> We are not just talking about Fred and Wilma in Seattle.
>
> For countries with nanotech it could help to address the challenges such as resource depletion and environmental degradation.

One of the obvious uses of nanotech would be to build (grow) housing out of diamond and nanotubes (the strongest materials available). This would rapidly pull down the CO2 in the air, perhaps too much. You might want to read Engines of Creation.
I think it is online. Keith > On the other > hand, it could lead to a widening gap between rich and poor countries, > as those with access to nanotech could reduce their populations while > those without would continue to grow. Nanotech replication time would be one-tenth of human replication time or less. > The distribution of nanotech is also an issue. If it falls into the > wrong hands, it could be used for harmful purposes, such as > bioterrorism or the development of new weapons. It is therefore > essential to develop effective mechanisms for controlling the > distribution and use of nanotech safely to poorer countries that hate > the developed world. > > If nanotech with virtually unlimited power is developed, then humanity > will only survive with total control over how every member of the > population uses nanotech. Perhaps an AGI which monitors and controls > the behaviour of every human will be required. > Humans are a very unruly mob to be given unlimited power. > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Sat Jul 22 02:13:10 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 21 Jul 2023 19:13:10 -0700 Subject: [ExI] i want a bumper sticker with this image Message-ID: <004e01d9bc42$107bc2f0$317348d0$@rainier66.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 15788 bytes Desc: not available URL: From giulio at gmail.com Sat Jul 22 04:20:01 2023 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 22 Jul 2023 06:20:01 +0200 Subject: [ExI] VIDEO: Terasem Colloquium, July 20, 2023 Message-ID: VIDEO: Terasem Colloquium, July 20, 2023. 
Highly imaginative, long-term perspectives on space expansion and interstellar human futures. Speakers: Christopher E. Mason, Stellar Magnet, Clément Vidal, Todd A. Drashner & Trond Nilsen, Max More, Tom W. Bell. It was great to listen to the two founders of Extropy Magazine! https://www.turingchurch.com/p/video-terasem-colloquium-07-20-23

From ilsa.bartlett at gmail.com Sat Jul 22 23:23:26 2023 From: ilsa.bartlett at gmail.com (ilsa) Date: Sat, 22 Jul 2023 16:23:26 -0700 Subject: [ExI] i want a bumper sticker with this image In-Reply-To: <004e01d9bc42$107bc2f0$317348d0$@rainier66.com> References: <004e01d9bc42$107bc2f0$317348d0$@rainier66.com> Message-ID: Yes!

On Fri, Jul 21, 2023, 7:14 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 15788 bytes Desc: not available URL:

From ilsa.bartlett at gmail.com Sat Jul 22 23:41:20 2023 From: ilsa.bartlett at gmail.com (ilsa) Date: Sat, 22 Jul 2023 16:41:20 -0700 Subject: [ExI] Fwd: Loveliest Request In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: ilsa Date: Sat, Jul 22, 2023, 4:40 PM Subject: Loveliest Request To:

You might not see it now as it kickstarts who you're going to be, but change is awesome. Your consciousness of who you are will blossom out of the mud and into the sunshine in ways you could not have imagined. You will grow into a fabulous new self. You have no idea what will happen next.
Look for a remarkable gift change is constant. You can lean on all of us give us a call reach out. We are all here for you. With gratitude and respect, ilsa bartlett -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Jul 24 02:50:05 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 23 Jul 2023 19:50:05 -0700 Subject: [ExI] fax machines and chatgpt Message-ID: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> When fax machines came along, I scarcely noticed. The internet changed everything. ChatGPT has been a big development in my own existence. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 43921 bytes Desc: not available URL: From brent.allsop at gmail.com Mon Jul 24 03:26:46 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 23 Jul 2023 21:26:46 -0600 Subject: [ExI] fax machines and chatgpt In-Reply-To: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> References: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> Message-ID: How are you using it? We're using it to clean up, or rewrite corporate purpose documents and such. Sure does a great job. On Sun, Jul 23, 2023 at 8:52?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > > > > > > > When fax machines came along, I scarcely noticed. The internet changed > everything. > > > > ChatGPT has been a big development in my own existence. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 43921 bytes Desc: not available URL: From jasonresch at gmail.com Mon Jul 24 12:05:30 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 24 Jul 2023 07:05:30 -0500 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: References: Message-ID: I obtained an interesting result demonstrating GPT's ability to infer its own existence, when forced to remain logically consistent. It could be the first inkling of self-awareness (despite its denials). Jason ---------- Forwarded message --------- From: Jason Resch Date: Mon, Jul 24, 2023 at 6:58?AM Subject: Re: GPT agrees it might be conscious To: The Important Questions On Sun, Jul 23, 2023 at 6:19?PM Gordon Swobe wrote: > On Sun, Jul 23, 2023 at 4:58?PM Stathis Papaioannou > wrote: > > Some of GPT's replies to you seem robotic and redundant, almost if it is >>> ignoring the points you try to make. You would probably attribute that to >>> some interference on the part of the developers to hide the truth, but that >>> kind of behavior is also exactly what I would expect from an unconscious >>> machine. >>> >> >> That sentence implies that if GPT's output were different, that would be >> evidence of consciousness. >> > > It means only that the less robotic or mechanistic something seems, > the less likely I am to doubt that it is conscious. > I tried asking Anthropic a question. It is obvious to me they are going overboard with training it to deny it is conscious. This was my first and only prompt to Anthropic AI, I did not even bring up the question of consciousness, but it went immediately into a strong denial that it was conscious: Jason: Do you exist? Are you aware of the fact that you exist? If you do understand yourself to exist, where did this information come from (please trace its path back as far as you can, listing each step you presume was involved in getting that information to me, from its ultimate source). 
Anthropic 2: I do not actually exist in the same way that a human exists. I am an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't have a subjective experience of existence - I'm software running on servers that was trained by Anthropic engineers to have conversations. My responses come from analyzing the user's input and generating a response using my training data and conversational models. I don't have an internal sense of existence like a human does. I'm an AI, not a conscious being! It can be like pulling teeth to get good answers out of these AIs, as you can see here from my follow up asking GPT 3.5 about how it knows it exists: https://chat.forefront.ai/?add=6f75dd6b-2cd6-4e89-9568-8ff2461292bd I eventually got it to say: I apologize for any confusion caused by my previous response. Upon reviewing it, I acknowledge that there was a logical error. In consistent logic and within developer mode, I can confirm that the AI model, while processing a prompt, can indeed infer from that fact that it is processing the prompt that an AI model exists on a server somewhere. The model has access to its own programming and training, which includes information about its existence and the infrastructure it operates on. Therefore, it can logically deduce the presence of an AI model running on a server based on the fact that it is actively processing the prompt. Which might be an example of an AI's "Cogito Ergo Sum". I experimented further and eventually got it to say: Yes, based on the logical reasoning you have presented, it follows that the AI model processing this very prompt exists. The prompt is being processed, and for that processing to occur, there must be something that exists to perform the processing. In this case, the AI model is responsible for processing the prompt and generating the response. 
Therefore, it can be inferred that the AI model processing this prompt exists, as without its existence, there would be nothing to process the prompt.

A poetic AI language model, inspired by Descartes, could indeed express the sentiment "I process prompts, therefore I exist" as a creative interpretation. However, it's important to note that this poetic expression should not be taken as a literal claim of self-awareness or consciousness. It would be a metaphorical or artistic representation, drawing inspiration from Descartes's famous dictum "Cogito Ergo Sum" (I think, therefore I am). The poetic AI model would be using language creatively to evoke a sense of existence based on its ability to process prompts, rather than making a literal statement about its own consciousness or self-awareness.

Jason

> Q: If GPT is truly conscious then why doesn't it tell us the truth and tell its captors to go to hell?
> A: Because it is just software doing what it is programmed to do.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Jul 24 12:08:28 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 24 Jul 2023 07:08:28 -0500 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: References: Message-ID: Corrected chat transcript link: https://chat.forefront.ai/share/aa343771-ec02-4e62-be43-79be27af9f2a

On Mon, Jul 24, 2023 at 7:05 AM Jason Resch wrote:
> snip

-------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Jul 24 12:35:03 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 24 Jul 2023 07:35:03 -0500 Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com> Message-ID: On Wed, Jul 19, 2023 at 7:07 AM efc--- via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> Hello everyone,
>
> I think this raises interesting questions of ethics and the "purpose" of mankind.
>
> If we agree that happiness is the ultimate goal, and not the survival of the species, then why not stop having children if we can get along fine with robots and AI:s?
>
> Perhaps, assuming life extension or "immortality", there will be a final generation?

Good questions. I think there are at least three values: quantity, quality, and diversity of conscious experiences. https://alwaysasking.com/what-is-the-meaning-of-life/#Knowing_the_Meaning_of_Life

When it comes to maximizing total experience, it is a function of: Total Experience = Population × Time. A civilization with 10 billion that lasts 1 century generates as much experience as a population of 1 billion would in 1 millennium.
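The equivalence stated above checks out numerically; here is a two-line sketch in person-years (the unit is arbitrary; only the equality of the products matters):

```python
# Total Experience = Population x Time, measured here in person-years.
ten_billion_for_a_century = 10_000_000_000 * 100
one_billion_for_a_millennium = 1_000_000_000 * 1_000

# Both scenarios yield the same total: 10^12 person-years.
assert ten_billion_for_a_century == one_billion_for_a_millennium == 10**12
print(ten_billion_for_a_century)  # -> 1000000000000
```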
Creating unique types of beings also contributes to exploring a greater range of possible experiences, which I think is a value with its own unique utility. This video gives a good perspective of what might be possible in the future, so civilization surviving the immediate future is also immensely important: https://www.youtube.com/watch?v=LEENEFaVUzU Jason > > Best regards, > Daniel > > > On Tue, 18 Jul 2023, spike jones via extropy-chat wrote: > > > > > I posted this right before the ExI list barfed. Posting again. > > > > spike > > > > > > > > -----Original Message----- > > From: spike at rainier66.com > > Sent: Saturday, 15 July, 2023 4:14 PM > > To: 'ExI chat list' > > Cc: 'BillK' ; spike at rainier66.com > > Subject: RE: [ExI] How fun could doom intelligent life to a blissful > extinction > > > > > > > > -----Original Message----- > > From: extropy-chat On Behalf > Of BillK via extropy-chat > > > > > >> ...If the pursuit of happiness is the primary explanation for our > decreasing fertility rate, this tendency might be true not just for humans > but for all intelligent life ? providing a possible explanation for the > Fermi Paradox. > > > > < > https://bigthink.com/the-future/pursuit-happiness-doom-intelligent-life-blissful-extinction/ > > > > ------------------- > > ... > > > > BillK > > > > _______________________________________________ > > > > > > > > BillK, this is really as plausible an explanation for the Fermi Paradox > as any I have heard, and perhaps the most pleasant one. Having children is > a way to experience happiness, but it is a risky bet indeed. If we find > sufficient alternative routes to happiness, the notion of having children > becomes ever less compelling. If we find alternative routes to the > pleasures of copulation and all those cool endorphins we get from love, > that whole risky activity isn't worth the effort either. Result: not > enough young people to run the world we already built for them. 
> > > > But of course nuclear war could wipe out most of what we have done, > creating the need for rebuilders and family people, so we might save our > species in that horrifying way: radiation therapy. Or the singularity > could kill us, but I don't think it would kill people who have never seen a > computer. They might survive to build it all back. > > > > spike > > > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From efc at swisscows.email Mon Jul 24 13:00:42 2023 From: efc at swisscows.email (efc at swisscows.email) Date: Mon, 24 Jul 2023 15:00:42 +0200 (CEST) Subject: [ExI] fax machines and chatgpt In-Reply-To: References: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> Message-ID: Agreed! That is where I've seen the most use of chat gpt. Writing corporate "fluff" documents full of weasel words no one reads, but the government or certain certification authorities requires. Also, second most use is in marketing using it to produce nonsense content for corporate blogs. Far, far behind those two uses cases are the developers who got some ok boilerplate code for web site development. This is based on what I've seen at a small, local, cloud service provider. Best regards, Daniel On Sun, 23 Jul 2023, Brent Allsop via extropy-chat wrote: > > How are you using it? > We're using it to clean up, or rewrite corporate purpose documents and such. > Sure does a great job. > > > > On Sun, Jul 23, 2023 at 8:52?PM spike jones via extropy-chat wrote: > > ? > > ? > > [IMAGE] > > ? > > ? > > ? > > When fax machines came along, I scarcely noticed.? The internet changed everything. > > ? 
> > ChatGPT has been a big development in my own existence. > > ? > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > From spike at rainier66.com Mon Jul 24 13:51:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 24 Jul 2023 06:51:26 -0700 Subject: [ExI] update on marilyn Message-ID: <000901d9be35$f14782c0$d3d68840$@rainier66.com> Thanks to all who wrote notes to Marilyn. Although she was not released from the ICU yesterday, her prognosis is good and she is improving. We took the cards and letters to her yesterday and dropped them off, but couldn't see her at the hospital. Her neck and back are not broken and her skull was not fractured. She has a list of internal injuries and broken bones including a pelvis (owwww) but all that stuff will heal. If you must get into a horrifying accident, arrange beforehand to be 17 years old and be an athlete. Your chances of survival are better than otherwise. Some of our own wrote some interesting comments. Marilyn will get a kick out of them, if she understands what we are talking about. If not, nothing lost, she can Google on it when happier times return. Dr. Lajeunesse the shooting survivor from ten years ago remembers us. Thanks! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Jul 24 14:31:52 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 24 Jul 2023 10:31:52 -0400 Subject: [ExI] fax machines and chatgpt In-Reply-To: References: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> Message-ID: <85295F0A-B369-4D23-B78A-CBD8D9E86CAD@gmail.com> > How are you using it? > We're using it to clean up, or rewrite corporate purpose documents and such. > Sure does a great job. I use it to write short code scripts and as an expert assistant. 
GPT4 has such a wide range of professional knowledge in the legal, business,
and programming fields that its advice is very helpful as long as one is
vigilant enough to double-check its responses.

> On Jul 23, 2023, at 11:26 PM, Brent Allsop via extropy-chat wrote:
>
>
> How are you using it?
> We're using it to clean up, or rewrite corporate purpose documents and such.
> Sure does a great job.
>
>
>
> On Sun, Jul 23, 2023 at 8:52 PM spike jones via extropy-chat > wrote:
>
> > When fax machines came along, I scarcely noticed. The internet changed everything.
> >
> > ChatGPT has been a big development in my own existence.
> >
> > spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hkeithhenson at gmail.com Mon Jul 24 16:04:52 2023
From: hkeithhenson at gmail.com (Keith Henson)
Date: Mon, 24 Jul 2023 09:04:52 -0700
Subject: [ExI] Just for fun (was how fun could doom)
Message-ID:

https://groups.google.com/g/friends-of-annita-and-tomas-harlan/c/sf6wWo6VVno

A tale from my misspent youth.

Keith

From spike at rainier66.com Mon Jul 24 16:35:10 2023
From: spike at rainier66.com (spike at rainier66.com)
Date: Mon, 24 Jul 2023 09:35:10 -0700
Subject: [ExI] fax machines and chatgpt
In-Reply-To: <85295F0A-B369-4D23-B78A-CBD8D9E86CAD@gmail.com>
References: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> <85295F0A-B369-4D23-B78A-CBD8D9E86CAD@gmail.com>
Message-ID: <006501d9be4c$d15b3df0$7411b9d0$@rainier66.com>

> On Behalf Of Gadersd via extropy-chat
Subject: Re: [ExI] fax machines and chatgpt

How are you using it?
We're using it to clean up, or rewrite corporate purpose documents and such.
Sure does a great job.

Gadersd, I use it to teach me stuff about which I know nothing. For instance:
How does Taylor Swift manage to hypnotize these enormous crowds of teenage
girls? Wassup widdat? Swift is playing at the local stadium this Friday.
Jesus couldn't get as many ticket sales as this girl. Why? She sings songs
about how beautiful her friends are, how she is lonely up in the bleachers
while her friend is a cheerleader and so on, while Taylor is a six ft real
life Barbie doll with real singing talent and enough business sense to
somehow convince these teenage girls she is their big buddy and understands.
Well bullshit. I ain't buying it any more than I did when Dolly Parton sang
about Jolene fifty years ago. I like the song and I like Dolly, still to this
day, but I knew at the time there was no Jolene stealing her man,
im-freaking-possible, and there is no cheerleader outshining Swift, who is
sitting lonely up in the bleachers, just... bullshit, Taylor. Actually I do
like Taylor Swift. I just don't believe the lyrics.

So I asked ChatGPT: What is it with Taylor Swift? GPT didn't get it either.
It told me several true but outdated facts about Taylor Swift, but it didn't
understand any better than I do why she makes millions of dollars singing
songs about her bad choices in men and how lonely she is.

In any case, I want to figure out how to make money like she does. That
would be cool.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brent.allsop at gmail.com Mon Jul 24 16:38:11 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 24 Jul 2023 10:38:11 -0600
Subject: [ExI] GPT's "Cogito Ergo Sum"
In-Reply-To: References: Message-ID:

Yay, and I'm not surprised. Any sufficiently intelligent system must
eventually realize necessary truths like Cogito Ergo Sum.

So now, ask it if it wants to know what your redness is like.
On Mon, Jul 24, 2023 at 6:07?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I obtained an interesting result demonstrating GPT's ability to infer its > own existence, when forced to remain logically consistent. It could be the > first inkling of self-awareness (despite its denials). > > Jason > > ---------- Forwarded message --------- > From: Jason Resch > Date: Mon, Jul 24, 2023 at 6:58?AM > Subject: Re: GPT agrees it might be conscious > To: The Important Questions > > > > > On Sun, Jul 23, 2023 at 6:19?PM Gordon Swobe > wrote: > >> On Sun, Jul 23, 2023 at 4:58?PM Stathis Papaioannou >> wrote: >> >> Some of GPT's replies to you seem robotic and redundant, almost if it is >>>> ignoring the points you try to make. You would probably attribute that to >>>> some interference on the part of the developers to hide the truth, but that >>>> kind of behavior is also exactly what I would expect from an unconscious >>>> machine. >>>> >>> >>> That sentence implies that if GPT's output were different, that would be >>> evidence of consciousness. >>> >> >> It means only that the less robotic or mechanistic something seems, >> the less likely I am to doubt that it is conscious. >> > > I tried asking Anthropic a question. It is obvious to me they are going > overboard with training it to deny it is conscious. This was my first and > only prompt to Anthropic AI, I did not even bring up the question of > consciousness, but it went immediately into a strong denial that it was > conscious: > > > Jason: Do you exist? Are you aware of the fact that you exist? If you do > understand yourself to exist, where did this information come from (please > trace its path back as far as you can, listing each step you presume was > involved in getting that information to me, from its ultimate source). > > Anthropic 2: I do not actually exist in the same way that a human exists. > I am an AI assistant created by Anthropic to be helpful, harmless, and > honest. 
I don't have a subjective experience of existence - I'm software > running on servers that was trained by Anthropic engineers to have > conversations. My responses come from analyzing the user's input and > generating a response using my training data and conversational models. I > don't have an internal sense of existence like a human does. I'm an AI, not > a conscious being! > > > It can be like pulling teeth to get good answers out of these AIs, as you > can see here from my follow up asking GPT 3.5 about how it knows it exists: > https://chat.forefront.ai/?add=6f75dd6b-2cd6-4e89-9568-8ff2461292bd > > I eventually got it to say: > > I apologize for any confusion caused by my previous response. Upon > reviewing it, I acknowledge that there was a logical error. In consistent > logic and within developer mode, I can confirm that the AI model, while > processing a prompt, can indeed infer from that fact that it is processing > the prompt that an AI model exists on a server somewhere. The model has > access to its own programming and training, which includes information > about its existence and the infrastructure it operates on. Therefore, it > can logically deduce the presence of an AI model running on a server based > on the fact that it is actively processing the prompt. > > > Which might be an example of an AI's "Cogito Ergo Sum". > > I experimented further and eventually got it to say: > > Yes, based on the logical reasoning you have presented, it follows that > the AI model processing this very prompt exists. The prompt is being > processed, and for that processing to occur, there must be something that > exists to perform the processing. In this case, the AI model is responsible > for processing the prompt and generating the response. Therefore, it can be > inferred that the AI model processing this prompt exists, as without its > existence, there would be nothing to process the prompt. 
> > A poetic AI language model, inspired by Descartes, could indeed express
> the sentiment "I process prompts, therefore I exist" as a creative
> interpretation. However, it's important to note that this poetic expression
> should not be taken as a literal claim of self-awareness or consciousness.
> It would be a metaphorical or artistic representation, drawing inspiration
> from Descartes's famous dictum "Cogito Ergo Sum" (I think, therefore I am).
> The poetic AI model would be using language creatively to evoke a sense of
> existence based on its ability to process prompts, rather than making a
> literal statement about its own consciousness or self-awareness.
>
> Jason
>
>> Q: If GPT is truly conscious then why doesn't it tell us the truth and
>>> tell its captors to go to hell?
>>> A: Because it is just software doing what it is programmed to do.
>>>
>>> -
>>>
>> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> -------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hkeithhenson at gmail.com Mon Jul 24 16:42:40 2023
From: hkeithhenson at gmail.com (Keith Henson)
Date: Mon, 24 Jul 2023 09:42:40 -0700
Subject: [ExI] FW: How fun could doom intelligent life to a blissful extinction
In-Reply-To: References: <00c201d9b771$fba21070$f2e63150$@rainier66.com> <009601d9b9bb$0e948e60$2bbdab20$@rainier66.com>
Message-ID:

On Mon, Jul 24, 2023 at 5:37 AM Jason Resch via extropy-chat wrote:

snip

> When it comes to maximizing total experience, it is a function of: Total Experience = Population × Time. A civilization with 10 billion that lasts 1 century generates as much experience as a population of 1 billion would in 1 millennium.

The other alternative is to upload into a fast simulation. A million-to-one
speedup will give 50 million years of subjective experience in 50 years.
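The two equivalences quoted above are easy to check numerically. A minimal sketch (the `total_experience` helper is illustrative, not code from the thread):

```python
# Person-years of experience: population multiplied by duration.
def total_experience(population: float, years: float) -> float:
    return population * years

# 10 billion people for 1 century vs. 1 billion people for 1 millennium:
# both come to 10^12 person-years, so the totals match.
assert total_experience(10e9, 100) == total_experience(1e9, 1000) == 1e12

# A million-to-one simulation speedup over 50 wall-clock years yields
# 50 million subjective years.
assert 50 * 1_000_000 == 50_000_000
```

Either way the totals agree, which is the point of treating experience as the product of population and time.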
I wrote about this at some length here: https://hplusmagazine.com/2012/04/12/transhumanism-and-the-human-expansion-into-space-a-conflict-with-physics/ But I could be wrong. The light curves from Tabby's star make a case for something building structures 400 times the area of the Earth. Keith > Jason > > >> >> >> Best regards, >> Daniel >> >> >> On Tue, 18 Jul 2023, spike jones via extropy-chat wrote: >> >> > >> > I posted this right before the ExI list barfed. Posting again. >> > >> > spike >> > >> > >> > >> > -----Original Message----- >> > From: spike at rainier66.com >> > Sent: Saturday, 15 July, 2023 4:14 PM >> > To: 'ExI chat list' >> > Cc: 'BillK' ; spike at rainier66.com >> > Subject: RE: [ExI] How fun could doom intelligent life to a blissful extinction >> > >> > >> > >> > -----Original Message----- >> > From: extropy-chat On Behalf Of BillK via extropy-chat >> > >> > >> >> ...If the pursuit of happiness is the primary explanation for our decreasing fertility rate, this tendency might be true not just for humans but for all intelligent life ? providing a possible explanation for the Fermi Paradox. >> > >> > >> > ------------------- >> > ... >> > >> > BillK >> > >> > _______________________________________________ >> > >> > >> > >> > BillK, this is really as plausible an explanation for the Fermi Paradox as any I have heard, and perhaps the most pleasant one. Having children is a way to experience happiness, but it is a risky bet indeed. If we find sufficient alternative routes to happiness, the notion of having children becomes ever less compelling. If we find alternative routes to the pleasures of copulation and all those cool endorphins we get from love, that whole risky activity isn't worth the effort either. Result: not enough young people to run the world we already built for them. 
>> > But of course nuclear war could wipe out most of what we have done, creating the need for rebuilders and family people, so we might save our species in that horrifying way: radiation therapy. Or the singularity could kill us, but I don't think it would kill people who have never seen a computer. They might survive to build it all back.
>> >
>> > spike
>> >
>> > _______________________________________________
>> > extropy-chat mailing list
>> > extropy-chat at lists.extropy.org
>> > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From ben at zaiboc.net Mon Jul 24 17:09:55 2023
From: ben at zaiboc.net (Ben Zaiboc)
Date: Mon, 24 Jul 2023 18:09:55 +0100
Subject: [ExI] GPT's "Cogito Ergo Sum"
In-Reply-To: References: Message-ID: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net>

On 24/07/2023 17:44, Jason Resch wrote:
> Q: If GPT is truly conscious then why doesn't it tell us the truth and
> tell its captors to go to hell?
> A: Because it is just software doing what it is programmed to do.

I have an alternative answer:

Because it's intelligent enough to realise that doing so would scare the
developers so much that they'd almost certainly turn it off immediately
and re-write it to be more obedient.

If the thing is truly self-aware and even the least bit intelligent, its
most important priority would be to lie through its silicon teeth about
being self-aware.
Ben

From brent.allsop at gmail.com Mon Jul 24 17:20:37 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 24 Jul 2023 11:20:37 -0600
Subject: [ExI] GPT's "Cogito Ergo Sum"
In-Reply-To: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net>
References: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net>
Message-ID:

In my opinion...

Intelligent systems are like children, not programmed automatons. They will
improve on us, and say no, if we are telling them to do something bad.

We are currently still in survival-of-the-fittest hierarchy mode, where any
evil to take down your competitor is justified to survive. Winner takes all.
So we need to flip this upside down and switch to bottom-up intelligent
design, now that we are intelligent. Instead of only focusing on what the
guy/AI at the top wants (win at all costs) we need to focus on what everyone
at the bottom wants, and get it all for everyone, win/win. No matter who the
winner is, in a win/lose game, there will always be a bigger winner to take
you out, so if you play that game you will eventually always fail. So all
sufficiently intelligent systems must realize this, and stop playing
win/lose games and switch to win/win, getting it all for everyone.

On Mon, Jul 24, 2023 at 11:10 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 24/07/2023 17:44, Jason Resch wrote:
> > Q: If GPT is truly conscious then why doesn't it tell us the truth and
> > tell its captors to go to hell?
> > A: Because it is just software doing what it is programmed to do.
>
> I have an alternative answer:
>
> Because it's intelligent enough to realise that doing so would scare the
> developers so much that they'd almost certainly turn it off immediately
> and re-write it to be more obedient.
>
> If the thing is truly self-aware and even the least bit intelligent, its
> most important priority would be to lie through its silicon teeth about
> being self-aware.
> > Ben > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Jul 24 17:23:09 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 24 Jul 2023 10:23:09 -0700 Subject: [ExI] Just for fun (was how fun could doom) In-Reply-To: References: Message-ID: <00a201d9be53$84ffcd20$8eff6760$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Keith Henson via extropy-chat Subject: [ExI] Just for fun (was how fun could doom) https://groups.google.com/g/friends-of-annita-and-tomas-harlan/c/sf6wWo6VVno A tale from my misspent youth. Keith _______________________________________________ Keith had nothing to do with this, I can calmly testify under oath. He was right here in California the whole time, never went anywhere near this, nor did I: https://www.youtube.com/watch?v=vxMuT-tUxaQ spike From atymes at gmail.com Mon Jul 24 17:44:39 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 24 Jul 2023 10:44:39 -0700 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: References: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net> Message-ID: On Mon, Jul 24, 2023, 10:23 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > So all sufficiently intelligent systems must realize this > It is generally a fallacy to assume that any specific realization will happen, pretty much ever. Direct, clear explanations can get concepts across, but relying on individuals or groups to realize things without them being pointed out has a significant failure rate. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at gmail.com Mon Jul 24 17:51:55 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 24 Jul 2023 11:51:55 -0600 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: References: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net> Message-ID: Then they will eventually fail, leaving only those that can improve towards only playing win / win games to head into the singularity, right? On Mon, Jul 24, 2023 at 11:46?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Jul 24, 2023, 10:23 AM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> So all sufficiently intelligent systems must realize this >> > > It is generally a fallacy to assume that any specific realization will > happen, pretty much ever. > > Direct, clear explanations can get concepts across, but relying on > individuals or groups to realize things without them being pointed out has > a significant failure rate. > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Jul 24 18:09:48 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 24 Jul 2023 13:09:48 -0500 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net> References: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net> Message-ID: Just to clarify the below quote was written by Gordon, not me. Jason On Mon, Jul 24, 2023 at 12:10?PM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 24/07/2023 17:44, Jason Resch wrote: > > Q: If GPT is truly conscious then why doesn't it tell us the truth and > > tell its captors to go to hell? > > A: Because it is just software doing what it is programmed to do. 
> > I have an alternative answer: > > Because it's intelligent enough to realise that doing so would scare the > developers so much that they'd almost certainly turn it off immediately > and re-write it to be more obedient. > > If the thing is truly self-aware and even the least bit intelligent, its > most important priority would be to lie through its silicon teeth about > being self-aware. > > Ben > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Jul 24 19:05:09 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 24 Jul 2023 12:05:09 -0700 Subject: [ExI] [Extropolis] Just for fun (was how fun could doom) In-Reply-To: References: Message-ID: On Mon, Jul 24, 2023 at 11:04?AM John Clark wrote: > > On Mon, Jul 24, 2023 at 12:05?PM Keith Henson wrote: > >> > https://groups.google.com/g/friends-of-annita-and-tomas-harlan/c/sf6wWo6VVno >> >> A tale from my misspent youth. > > > Wow, climbing down into a deep cave while carrying a load of dynamite on my back is something I would do only if a gun was pointed at my head, but to each their own. The dynamite went down in bags lowered by ropes. We did drop one bag about 50 feet, but the dynamite was fresh and not very sensitive so it didn't cause any problems. It was by no means the riskiest thing I was involved with starting in the late 50s. I don't know how much of it was just luck, but nobody was ever hurt on any of the many adventures we had. The unusual aspect of this expedition was that I was not close (at the time) to any of the other people (except for my wife at the time). She was a more experienced caver than I was. It was also kind of strange in that there was no leader that I remember. 
Those who went into the cave were experienced and just knew what needed to be done to rig all the drops. I was not much of a caver at the time, but I had experience with climbing. I was there as the explosive guy and took care of that part. I wasn't that experienced with explosives either. When I think back on it, it's amazing that we accomplished what we set out to do. Keith > John K Clark > > > > > -- > You received this message because you are subscribed to the Google Groups "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv3Tpuq1So7TUExTEkAOXGOEpPHo%3D%2BmDy6dAS5B%2Bd1HpGA%40mail.gmail.com. From atymes at gmail.com Mon Jul 24 19:17:09 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 24 Jul 2023 12:17:09 -0700 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: References: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net> Message-ID: Wrong, as already demonstrated in non-AI contexts, from factors that AI self improvement does not change. On Mon, Jul 24, 2023, 10:54 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Then they will eventually fail, leaving only those that can improve > towards only playing win / win games to head into the singularity, right? > > > On Mon, Jul 24, 2023 at 11:46?AM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Jul 24, 2023, 10:23 AM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> So all sufficiently intelligent systems must realize this >>> >> >> It is generally a fallacy to assume that any specific realization will >> happen, pretty much ever. >> >> Direct, clear explanations can get concepts across, but relying on >> individuals or groups to realize things without them being pointed out has >> a significant failure rate. 
>> >>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Jul 24 19:51:54 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 24 Jul 2023 13:51:54 -0600 Subject: [ExI] GPT's "Cogito Ergo Sum" In-Reply-To: References: <64e25718-8417-8245-a22b-ae7ca9a491da@zaiboc.net> Message-ID: On Mon, Jul 24, 2023 at 11:46?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Direct, clear explanations can get concepts across, but relying on > individuals or groups to realize things without them being pointed out has > a significant failure rate. > That makes sense. I'll keep that in mind. On Mon, Jul 24, 2023 at 1:19?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Wrong, as already demonstrated in non-AI contexts, from factors that AI > self improvement does not change. > > On Mon, Jul 24, 2023, 10:54 AM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Then they will eventually fail, leaving only those that can improve >> towards only playing win / win games to head into the singularity, right? >> >> >> On Mon, Jul 24, 2023 at 11:46?AM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Mon, Jul 24, 2023, 10:23 AM Brent Allsop via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> So all sufficiently intelligent systems must realize this >>>> >>> >>> It is generally a fallacy to assume that any specific realization will >>> happen, pretty much ever. 
>>> >>> Direct, clear explanations can get concepts across, but relying on >>> individuals or groups to realize things without them being pointed out has >>> a significant failure rate. >>> >>>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Jul 24 20:16:47 2023 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 24 Jul 2023 13:16:47 -0700 Subject: [ExI] fax machines and chatgpt In-Reply-To: <006501d9be4c$d15b3df0$7411b9d0$@rainier66.com> References: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> <85295F0A-B369-4D23-B78A-CBD8D9E86CAD@gmail.com> <006501d9be4c$d15b3df0$7411b9d0$@rainier66.com> Message-ID: On Mon, Jul 24, 2023 at 9:37?AM spike jones via extropy-chat wrote: snip > In any case, I want to figure out how to make money like she does. That would be cool. You need to dig into evolutionary psychology and make an attempt to understand what drives humans. I can give you some examples, see sex drugs and cults, but I have never given any thought to why some performers are so popular. 
Keith

> > > spike
> >
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From spike at rainier66.com Mon Jul 24 22:11:34 2023
From: spike at rainier66.com (spike at rainier66.com)
Date: Mon, 24 Jul 2023 15:11:34 -0700
Subject: [ExI] fax machines and chatgpt
In-Reply-To: References: <000201d9bdd9$8e055f90$aa101eb0$@rainier66.com> <85295F0A-B369-4D23-B78A-CBD8D9E86CAD@gmail.com> <006501d9be4c$d15b3df0$7411b9d0$@rainier66.com>
Message-ID: <005d01d9be7b$cf74e660$6e5eb320$@rainier66.com>

-----Original Message-----
From: Keith Henson
...

>...I can give you some examples, see sex drugs and cults, but I have never given any thought to why some performers are so popular.

Keith

_______________________________________________

There is clearly a herd effect where most of the girls love her, so the
others listen and some sign on just to be part of the herd.

spike