From spike at rainier66.com Sat Jun 1 02:24:05 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 May 2019 19:24:05 -0700 Subject: [ExI] welcome back, nobody here Message-ID: <001301d51821$1671c5c0$43555140$@rainier66.com> From: extropy-chat On Behalf Of Chris Hind Subject: Re: [ExI] extropy-chat Digest, Vol 188, Issue 21 >…Hello, I don't think I've been on the extropian mailing list since 2000? Hi Chris! Welcome back, me lad! Ja I remember you. Been a long time. >…I go back into the archives and I'm startled to see a high-school version of me predicting things that have now occurred. I was quite amused to see I actually built some of the things I speculated about regarding virtual reality and augmented reality. Anyone else out there still from back in the day or has everyone moved on? Nah we all moved on man. Nobody left here no more. {8^D Gotta tell ya, it isn't like it was in 2000. So many of our big dreams came to pass, it became harder to dream big. Tell us what has happened to Chris in the past two decades. Married? Children? Degrees? Inventions? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Jun 1 03:39:02 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 31 May 2019 23:39:02 -0400 Subject: [ExI] The Yamnaya question Message-ID: About 5000 years ago the male ancestors of most of our list members burst forth from their Yamnaya Russian steppe domain and started merrily slaughtering their way across the world. Nowadays known as Proto-Indo-Europeans, over a period of a few thousand years they spread their language and their genes from Norway to Sri Lanka and Gibraltar, ruthlessly exterminating local men and taking their women. With the invention of guns and ships they continued their expansion to the Americas, the tip of Africa and to Australia, before finally running out of steam sometime in the 1960s.
This was the most successful single demographic expansion in the history of our species, at least as measured by the acreage settled or conquered, and there are many hypotheses as to why the Yamnaya succeeded to this extent. The Nazis, who called the Yamnaya Aryans, believed they were simply superior to other people. Some researchers say that the Yamnaya won because they had carts and horses. Nobody knows for sure. It occurred to me that the answer might lie neither in the intrinsic genetic endowments of the Yamnaya nor in mere technological superiority, but rather in a quirk of genetics: the optimum mating distance. It is well established that mating with close relatives dramatically reduces evolutionary fitness - this is why the children of incest are so often sickly, deformed and mentally slow. On the other end of the spectrum, mating among organisms that are so far from each other genetically that they are just barely able to have offspring also leads to diminished or zero fitness, as in mules and tigons. In between these extremes there is an optimum mating distance, where the parents are not so close as to share large numbers of deleterious rare mutations but not so far as to have incompatible adaptations. Their babies are stronger, faster, more resistant to infections and smarter, an effect that in some situations is called heterosis, or hybrid vigor. One of the typical features of prehistoric life in established communities was a very restricted ability to travel due to the presence of hostile neighbors. Thus, after an initial settlement of an area, for example when the Anatolians swamped the local hunter-gatherers throughout Europe, each small location would become reproductively isolated from even relatively close neighbors (we are talking 100 miles distance or less) and thus increasingly inbred. This splintering into tiny tribes and inbreeding has been observed in many locations, from Europe to Africa to Papua.
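The hump-shaped relationship between parental genetic distance and offspring fitness described above can be sketched as a toy model. The functional form and every parameter below are invented purely for illustration; they are not drawn from any study mentioned in this thread:

```python
import math

def offspring_fitness(d, inbreeding_scale=0.1):
    """Toy relative fitness as a function of parental genetic distance d,
    expressed as a fraction of the maximum compatible distance Dmax
    (d = 0 means selfing, d = 1 means a barely viable cross like a mule).

    Two opposing penalties shape the hump:
      - inbreeding depression: shared rare deleterious recessives,
        strong near d = 0 and fading exponentially with distance;
      - outbreeding depression: incompatible co-adapted gene complexes,
        growing toward d = 1.
    """
    inbreeding_penalty = math.exp(-d / inbreeding_scale)
    outbreeding_penalty = d ** 2
    return max(0.0, 1.0 - inbreeding_penalty - outbreeding_penalty)

# The optimum mating distance (OMD) of this toy model sits between the extremes.
omd = max(range(1, 100), key=lambda i: offspring_fitness(i / 100)) / 100
print(f"toy OMD at ~{omd:.2f} of Dmax, fitness {offspring_fitness(omd):.2f}")
```

With these made-up parameters the optimum lands at roughly a third of the way to Dmax; the point is only the shape of the curve (low fitness at both extremes, a peak in between), not the number.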
The effect persists even as neighboring tribes, such as the modern Yanomamo, raid each other to steal women and slaughter competing men - since they are in effect stealing from their extended families, they still remain inbred. The Yamnaya developed their culture around horses, carts and a nomadic lifestyle. By themselves these technologies may not have dramatic genetic implications except for one thing - they allow long-distance kidnapping of women. Long-distance raiders can not only increase the number of mates available to them but also increase the genetic distance between themselves and their mates. Geographic distance translates into genetic distance, and a technology for fast movement on the Earth's surface translates into an exploration of new genetic possibilities. The spread of the Yamnaya thus resulted in the generation of less inbred offspring, who in turn used their mobility technology to extend their reach with each generation, consuming the inbred, sessile populations in the process. The Yamnaya genes were not intrinsically "better" than the genes of e.g. the Anatolians or Dravidians, but the offspring of far-away invaders and their local slaves were more genetically diverse, and thus fitter. This mechanism may be responsible for the demographic transformation of the ancient world, and it does have implications for prospective parents today - a bit of genetic adventurousness and out-of-the-shtetl thinking may go a long way towards having beautiful and smart babies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Jun 1 05:43:54 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 May 2019 22:43:54 -0700 Subject: [ExI] The Yamnaya question In-Reply-To: References: Message-ID: <001601d5183d$0036f110$00a4d330$@rainier66.com> Cool, great stuff Rafal! I left it all in there below intentionally in order to refer to concepts therein.
A dentist friend and I were pondering this and speculating idly. Evolutionary pressure can be thought of as operating most quickly when the advantage gradient is highest. In the human form, I have long thought of two areas which would vary the most with respect to a particular environment: the body-fat content and the configuration of the jaw. In both cases, those two things would carry a lot of survival advantage gradient. If you think about it, that makes sense: if a lot of meat is available, it requires a different optimal dentition configuration than if the human population subsists on grass seed such as rice or wheat. OK then, we get the whole body-fat thing: in cold climates the advantage goes to the Inuit, a typically short round guy, but in the tropics, the tall slender guy has the advantage. Regarding your notion of optimal breeding distance… >…in some situations is called heterosis… Ah yes, heterosis. Our society has gay pride parades, so one might be tempted to think there should be the counterpart: straight shame parades. We shamble along, heads hung in ignominy, silent, sullen, colorless we are, sick with this dreaded disease known as…heterosis. OK that part was just for fun. You know me. {8-] If we go with this notion of optimal breeding distance, we can imagine how it would work with body-fat ratios, but even easier is to think about dentition. My dentist friend was also an orthodontist. I asked him once why it is that so many of us Euro-mutts seem to need braces. His notion was a bit similar to yours: we are genetically made of so many disparate parts, populations which evolved under so many varying conditions and climates, we just have too much stuff in our genes, too many incompatible instructions. So we have cases of gapped teeth up top and crowding on the bottom, for instance, in the same person. We have jaws too large or too small for the teeth they have in them.
In our times, we Euro-mutts, the Yamnaya descendants, are now drifting towards breeding distance that is too great. Rafal you really have my wheels spinning. More ideas tomorrow perhaps. spike From: extropy-chat On Behalf Of Rafal Smigrodzki Sent: Friday, May 31, 2019 8:39 PM To: ExI chat list Subject: [ExI] The Yamnaya question About 5000 years ago the male ancestors of most of our list members burst forth from their Yamnaya Russian steppe domain and started merrily slaughtering their way across the world. Nowadays known as Proto-Indo-Europeans, over a period of a few thousand years they spread their language and their genes from Norway to Sri Lanka and the Gibraltar, ruthlessly exterminating local men and taking their women. With the invention of guns and ships they continued their expansion to the Americas, the tip of Africa and to Australia, before finally running out of steam sometime in the 1960's. This was the most successful single demographic expansion in the history of our species, at least as measured by the acreage settled or conquered, and there are many hypotheses as to why the Yamnaya succeeded to this extent. Nazis, who called the Yamnaya Aryans, believed they were simply superior to other people. Some researchers say that the Yamnaya won because they had carts and horses. Nobody knows for sure. It occurred to me that the answer might not lie in the intrinsic genetic endowments of the Yamnaya and is not explained merely by technological superiority but rather it is related to a quirk of genetics, the optimum mating distance. It is well established that mating with close relatives dramatically reduces evolutionary fitness - this is why the children of incest are so often sickly, deformed and mentally slow. On the other end of the spectrum, mating among organisms that are so far from each other genetically that they are just barely able to have offspring also leads to diminished or zero fitness, as in mules and tigons. 
In between these extremes there is an optimum mating distance, where the parents are not so close as to share large numbers of deleterious rare mutations but not so far as to have incompatible adaptations. Their babies are stronger, faster, resistant to infections and smarter, an effect that in some situations is called heterosis, or hybrid vigor. One of the typical features of prehistoric life in established communities was a very restricted ability to travel due to presence of hostile neighbors. Thus, after an initial settlement of an area, for example when the Anatolians swamped the local hunter-gatherers throughout Europe, each small location would become reproductively isolated from even relatively close neighbors (we are talking 100 miles distance or less) and thus increasingly inbred. This splintering into tiny tribes and inbreeding has been observed in many locations, from Europe, to Africa to Papua. The effect persists even as neighboring tribes, such as modern Yanomamo, raid each other to steal women and slaughter competing men - since they are in effect stealing from their extended families, they still remain inbred. The Yamnaya developed their culture around horses, carts and a nomadic lifestyle. By themselves these technologies may not have dramatic genetic implications except for one thing - they allow long distance kidnapping of women. Long-distance raiders can not only increase the number of mates available to them but also can increase the genetic distance between themselves and their mates. Geographic distance translates into genetic distance, and a technology for fast movement on the Earth's surface translates into an exploration of new genetic possibilities. The spread of Yamnaya thus resulted in the generation of less inbred offspring, which in turn used their mobility technology to extend their reach with each generation, consuming the inbred, sessile populations in the process. 
The Yamnaya genes were not intrinsically "better" than the genes of e.g. the Anatolians or Dravidians but the offspring of far-away invaders and their local slaves was more genetically diverse, and thus fitter. This mechanism may be responsible for the demographic transformation of the ancient world and it does have implications for prospective parents today - a bit of genetic adventurousness and out-of-the-shtetl thinking may go a long way towards having beautiful and smart babies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Jun 1 12:37:43 2019 From: johnkclark at gmail.com (John Clark) Date: Sat, 1 Jun 2019 08:37:43 -0400 Subject: [ExI] Drawing a face with nothing to go on but audio Message-ID: From looking and listening to millions of YouTube videos a new AI program has learned how to draw the face of a speaker from just a few seconds of audio: Speech2Face: Learning the Face Behind a Voice John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Jun 1 14:04:32 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 1 Jun 2019 07:04:32 -0700 Subject: [ExI] The Yamnaya question In-Reply-To: <001601d5183d$0036f110$00a4d330$@rainier66.com> References: <001601d5183d$0036f110$00a4d330$@rainier66.com> Message-ID: <004301d51882$f00963e0$d01c2ba0$@rainier66.com> From: spike at rainier66.com >…If we go with this notion of optimal breeding distance, we can imagine how it would work with body-fat ratios, but even easier is to think about dentition. My dentist friend was also an orthodontist. I asked him once why it is that so many of us Euro-mutts seem to need braces. … Rafal you really have my wheels spinning. More ideas tomorrow perhaps. spike Hmmm, optimal breeding distance and dentition, interesting.
In any discussion of evolutionary pressures on a human population, it is critical to consider the impact of mate selection, since mate selection is such a strong component in humans, and appearance is such a strong component of mate selection. In the area of appearance, there are two critical factors (among others, two really biggies): how a potential mate carries body fat and the shape of the face. (Ja?) Dentition is one that is easy to study because we have access to so much data. Societal pressures cause humans to mess with their own natural body-fat configuration, obscuring data. With dentition, we can get to data on which populations need what kinds of orthodontia. Rafal, you are a doctor, so I assume you are in charge of training young medics, or you are friends with those who are in charge of that. The young medics need a research project at some point (ja?) so you could steer them toward studying mating distance as a function of orthodontic needs. I have a notion of how to estimate mating distance in a person's family tree using AncestryDNA. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Jun 1 23:02:46 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 1 Jun 2019 18:02:46 -0500 Subject: [ExI] a little fun Message-ID: Another mass killing. It is reported that someone took away his gruntles. Just who is in charge of disgruntling people? And can we stop them? bill w -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Sun Jun 2 00:36:41 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 1 Jun 2019 20:36:41 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: <001601d5183d$0036f110$00a4d330$@rainier66.com> References: <001601d5183d$0036f110$00a4d330$@rainier66.com> Message-ID: On Sat, Jun 1, 2019 at 1:44 AM wrote: > > > If we go with this notion of optimal breeding distance, we can imagine how > it would work with body-fat ratios, but even easier is to think about > dentition. My dentist friend was also an orthodontist. I asked him once > why it is that so many of us Euro-mutts seem to need braces. His notion > was a bit similar to yours: we are genetically made of so many disparate > parts, populations which evolved under so many varying conditions and > climates, we just have too much stuff in our genes, too many incompatible > instructions. So we have cases of gapped teeth up top and crowding on the > bottom for instance in the same person. We have jaws too large or too > small for the teeth they have in them. > ### I am not sure about the genetics of occlusion abnormalities but I would guess that due to the recent and extraordinary changes in the types of diets that modern humans have been exposed to (from paleo to early agricultural to industrial in only a few thousand years) there is a lot of genome-environment mismatch between our dental and metabolic hunter-gatherer adaptations and modern lifestyles. I would however think this is not related to increased mating distances in modern societies. > In our times, we Euro-mutts, the Yamnaya descendants, are now drifting > towards breeding distance that is too great. > > > > Rafal you really have my wheels spinning. More ideas tomorrow perhaps. 
> > > ### The theoretical notion of OMD was recently validated on a wide range of organisms: https://advances.sciencemag.org/content/4/11/eaau5518 and it looks like the OMD estimated in this article is about equal to the genome-wide nucleotide diversity (π), which hovers somewhere around 1/2 of the maximum genetic distance (Dmax), roughly similar between plants, fungi and mice, so it may be a very general finding. It translates to about 4-6 nucleotide differences per 1000 nucleotides between each parent in mice. I don't have the data on average nucleotide differences among Caucasians, or between Caucasians and various other races. However, we can use times since divergence to estimate the relationships between average nucleotide diversities and Dmax for various groups. For example, we know that H. sapiens and H. neanderthalensis hybrids were viable, although the surviving Neanderthal DNA stretches show evidence of strong purifying selection, which indicates high levels of genetic incompatibility (see David Reich's book). Hybrids between humans and Denisovans, and between humans and an unnamed African subspecies, were also viable. These other subspecies (Neanderthals, Denisovans) diverged from humans sometime around 500,000 years ago (there are some new guesses putting the divergence at 800 k). We can use that date to estimate the Dmax. On the other hand, we know that the Yamnaya diverged from the founding groups in the past 5000 years (and the Anatolian and hunter-gatherer European populations diverged around 10 - 15 k), so as a whole Caucasians may have accumulated let's say 10 k years' worth of divergence, which we can use as an estimate of nucleotide diversity π. It is clear that the nucleotide diversity among Caucasians (at 10 k years) is much lower than the Dmax implied by viable human-archaic hybrids (500 k years), so any crosses among Caucasians are very far from Dmax, and most likely well below OMD.
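The divergence-time arithmetic above can be collected in one place as a back-of-envelope sketch. The year figures are the rough guesses quoted in this thread, used as a crude proxy for genetic distance; none of them are measured nucleotide diversities:

```python
# Back-of-envelope version of the divergence-time argument above.
# All numbers are rough guesses quoted in this thread, used as a proxy
# for genetic distance; none are measured nucleotide diversities.

DMAX_YEARS = 500_000  # human/Neanderthal split: hybrids barely viable, taken as ~Dmax

splits_years = {
    "within Caucasians": 10_000,       # post-Yamnaya mixing, per the post above
    "Caucasian vs East Asian": 50_000, # split guessed in the follow-up below
    "guessed human OMD": 200_000,      # ~150-200 k years, per the cited article
}

for label, years in splits_years.items():
    fraction_of_dmax = years / DMAX_YEARS
    print(f"{label}: {years:>7,} yr ~ {fraction_of_dmax:.0%} of Dmax")
```

On this crude scale, crosses within Caucasians sit near 2% of Dmax and Caucasian-Asian crosses near 10%, both well short of a guessed OMD around 40% of Dmax - consistent with the claim that neither overshoots the hump.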
The Yamnaya expansion stirred our genes up and made us less inbred, but probably not yet reaching the best level of outbreeding, the OMD. If we take the above article at face value, the OMD for humans might be equivalent to 150 k - 200 k years of divergence, but without data on actual nucleotide diversities among humans it's just a rough guess. It is however quite plausible that the mating distance between Caucasians and South-East Asians might be closer to OMD. Caucasians and Asians split 50 k years ago, still well below the 500 k split that corresponds to Dmax, so unlikely to overshoot OMD (see the hump curve in the article). If true, this may explain the anecdotal evidence of Hapas being prettier and smarter than either parental strain. There is an interesting article on geographical exogamy (mating among geographically distant humans) here: https://www.ncbi.nlm.nih.gov/pubmed/29925873 It turns out that, at least in the steppes of Asia, it takes a distance of at least 40 km between parents to even start reducing inbreeding. This would support the idea I advanced in my initial post: humans in the pre-cavalry age, who hardly ever traveled 10 miles from their place of birth, were probably very inbred, and the Yamnaya with their hundred-mile mounted sorties may have been able to stir the genetic pot a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sun Jun 2 00:43:43 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 1 Jun 2019 20:43:43 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: <004301d51882$f00963e0$d01c2ba0$@rainier66.com> References: <001601d5183d$0036f110$00a4d330$@rainier66.com> <004301d51882$f00963e0$d01c2ba0$@rainier66.com> Message-ID: On Sat, Jun 1, 2019 at 10:04 AM wrote: > > > I have a notion of how to estimate mating distance of a person's family > tree using AncestryDNA.
> > > ### I am all ears :) This said, there are huge research consortia that are collecting DNA from various human populations, so there is a lot of data on human nucleotide diversities, it's just that I have been out of the loop long enough as a geneticist that I don't have the references handy. It might be an evening's worth of trawling through PubMed to get some estimates. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Jun 2 12:57:34 2019 From: johnkclark at gmail.com (John Clark) Date: Sun, 2 Jun 2019 08:57:34 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: References: Message-ID: On Fri, May 31, 2019 at 11:42 PM Rafal Smigrodzki <rafal.smigrodzki at gmail.com> wrote: *> It is well established that mating with close relatives dramatically > reduces evolutionary fitness* It's true that a small geographically isolated population has a greater likelihood of going extinct than the general population, but on the other hand some do survive, and the ones that do are where new species come from. Causes of speciation Long before your Yamnaya something dramatic happened to humans about 33,000 BC: stone weapons suddenly got much more refined, and specialized tools for cleaning animal skins and awls for piercing appeared; shoes were invented and so was jewelry. Before 33,000 BC there was little or no art, after 33,000 BC it was everywhere. If this change in human behavior happened because of a change in the gene pool then it almost certainly started with a mutation that occurred in an individual living in a small isolated population; the gene made the individual who had it a better hunter and a better warrior, and this evolutionary advantage could easily and rapidly spread through the entire population because it was so small. After that there would be little to stop the small isolated population from spreading out and becoming large, in fact becoming the dominant human population.
But if the mutation had occurred in a horse-centered nomadic population that ranged over a huge area it might have produced a few widely separated clever people here and there but the mutated gene would become so diluted by the huge gene pool it could never get a foothold. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Jun 2 18:06:08 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 02 Jun 2019 11:06:08 -0700 Subject: [ExI] The Yamnaya question In-Reply-To: <2029759869.7801518.1559496986931@mail.yahoo.com> References: <001601d5183d$0036f110$00a4d330$@rainier66.com> <004301d51882$f00963e0$d01c2ba0$@rainier66.com> <2029759869.7801518.1559496986931@mail.yahoo.com> Message-ID: <20190602110608.Horde.Ixvf-kmTl6IcXTMPSXHHn-y@secure199.inmotionhosting.com> Quoting Rafal Smigrodzki: > This said, there are huge research consortia that are collecting DNA > from various human populations, so there is a lot of data on human > nucleotide diversities, it's just I have been out of the loop long > enough as a geneticist that I don't have the references handy. It > might be an evening's worth of trawling through PubMed to get some > estimates. https://genographic.nationalgeographic.com/for-scientists/ In the above link is an email address that scientists can write to in order to receive access to the databases of one of the largest of the research consortia you mentioned. Email them and tell them you are a scientist. Let them give you access to the raw data to test your theory. If you want help mining the data, then tell them I am your assistant and get me access too. Your hypothesis is important because if it is true, then it refutes racism entirely. And that is something I would be willing to help with.
Stuart LaForge From avant at sollegro.com Sun Jun 2 20:54:47 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 02 Jun 2019 13:54:47 -0700 Subject: [ExI] The Yamnaya question In-Reply-To: <1033591604.7814374.1559506054270@mail.yahoo.com> References: <1033591604.7814374.1559506054270@mail.yahoo.com> Message-ID: <20190602135447.Horde.3JqrHrwMg4w2eQWiBca32nt@secure199.inmotionhosting.com> Quoting John Clark: > On Fri, May 31, 2019 at 11:42 PM Rafal Smigrodzki > wrote: > > Long before your Yamnaya something dramatic happened to humans about > 33,000BC, stone weapons suddenly got much more refined > and specialized tools for cleaning animal skins and awls for > piercing appeared, shoes were invented and so was jewelry. Before > 33,000BC there was little or no art, after 33,000BC it was everywhere. This was the beginning of culture: the point at which a genetic mutation gave rise to the meme pool, allowing for evolution in a higher-dimensional fitness space. > If this change in human behavior happened because of a change in the > gene pool then it almost certainly started in a mutation that > occurred in a individual living in a small isolated population, the > gene made the individual who had it a better hunter and a better > warrior and this evolutionary advantage could easily rapidly spread > through the entire population because it was so small. For hunter-gatherers, you would be right. But the Yamnaya lived after the agrarian revolution. The settled people would have been herdsmen and farmers. The Yamnaya, on the other hand, were warriors and hunters. They were populations selected under different environmental pressures than the former. > After that > there would be little to stop the small isolated population from > spreading out and becoming large, in fact becoming the dominate > human population.
But if the mutation > had occurred in a horse > centered nomadic population that ranged over a huge area it might > have produced a few widely separated clever people here and there > but the mutated gene would become so diluted by the huge gene pool > it could never get a foothold. Rafal's hypothesis is also applicable to historic peoples other than the Yamnaya. Take this guy for example. https://www.thevintagenews.com/2018/06/09/genghis-khan/ The success of his genes and those of his armies would support Rafal's hypothesis as well. Stuart LaForge From spike at rainier66.com Sun Jun 2 22:30:01 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 2 Jun 2019 15:30:01 -0700 Subject: [ExI] yes i will destroy humans Message-ID: <002401d51992$ba439ad0$2ecad070$@rainier66.com> Check out the video. https://www.voanews.com/a/mht-human-like-robot-mimic-facial-expressions-sophia-hanson-robotics/3251313.html I want one. Not her necessarily, but one. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From MagaOsby2 at hotmail.com Sun Jun 2 22:56:54 2019 From: MagaOsby2 at hotmail.com (Osel B) Date: Sun, 2 Jun 2019 22:56:54 +0000 Subject: [ExI] yes i will destroy humans In-Reply-To: <002401d51992$ba439ad0$2ecad070$@rainier66.com> References: <002401d51992$ba439ad0$2ecad070$@rainier66.com> Message-ID: I want all of them Sent from my iPhone On Jun 2, 2019, at 6:52 PM, "spike at rainier66.com" > wrote: Check out the video. https://www.voanews.com/a/mht-human-like-robot-mimic-facial-expressions-sophia-hanson-robotics/3251313.html I want one. Not her necessarily, but one. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Mon Jun 3 16:59:35 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 3 Jun 2019 09:59:35 -0700 Subject: [ExI] population and religion Message-ID: <005101d51a2d$b9370960$2ba51c20$@rainier66.com> WOWsers, get a load of this: https://www.youtube.com/watch?v=ezVk1ahRF78&feature=youtu.be If you are in a hurry, scoot it over to about the 3:30 mark. The graphic starting at about 4:45 is highly significant. This is one of the best TED talks I have ever seen. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Jun 3 17:54:40 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 3 Jun 2019 12:54:40 -0500 Subject: [ExI] population and religion In-Reply-To: <005101d51a2d$b9370960$2ba51c20$@rainier66.com> References: <005101d51a2d$b9370960$2ba51c20$@rainier66.com> Message-ID: Yes, wonderful talk. Most of this is in the Rational Optimist - just recently read. A trend I see which he just implied: babies are no longer seen exclusively as assets, to replace dead children, to work farms and take care of their parents. They are in part liabilities, costly little smelly things that you will put through college. He did not address the questions of religion in so far as taking up the abortion question or the Catholic Church's stand against birth control in any form. I have to note that another trend is the Catholics increasingly ignoring their church's stance on these issues. He also did not address the issue of replacement. It was clear from his chart that much of the world is producing babies at less than the replacement rate, which I think is 2.1 (and will go down with improvements in medicine and genetics). bill w Just amazing computer graphics - extremely helpful to his talk.
bill w On Mon, Jun 3, 2019 at 12:02 PM wrote: > > > WOWsers, get a load of this: > > > > https://www.youtube.com/watch?v=ezVk1ahRF78&feature=youtu.be > > > > If you are in a hurry, scoot it over to about the 3.30 minute mark. This > graphic starting at about 4.45 minutes is highly significant. > > > > This is one of the best TED talks I have ever seen. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jun 3 18:45:00 2019 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 3 Jun 2019 11:45:00 -0700 Subject: [ExI] a little fun In-Reply-To: References: Message-ID: On Sat, Jun 1, 2019 at 4:05 PM William Flynn Wallace wrote: > It is reported that someone took away his gruntles. > > Just who is in charge of disgruntling people? And can we stop them? > That's crowdsourced these days. Anyone who feels up to it, steps up and does it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From danust2012 at gmail.com Wed Jun 5 23:06:41 2019 From: danust2012 at gmail.com (Dan TheBookMan) Date: Wed, 5 Jun 2019 16:06:41 -0700 Subject: [ExI] Brief review of Testosterone Rex Message-ID: https://www.newscientist.com/article/mg23331151-200-unmaking-the-myths-of-our-gendered-minds/ Since sex/gender differences have been discussed here much of late and I've recommended the work of Cordelia Fine before, I thought this would be helpful... And maybe it'll convince some to read her book. ;) I read it a while back and found it quite illuminating. I especially liked her discussion of risk-taking. Regards, Dan Sample my Kindle books at: http://author.to/DanUst -------------- next part -------------- An HTML attachment was scrubbed...
URL: From foozler83 at gmail.com Wed Jun 5 23:42:54 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 5 Jun 2019 18:42:54 -0500 Subject: [ExI] Brief review of Testosterone Rex In-Reply-To: References: Message-ID: Be sure to read the lengthy reviews, 37% one star, on Amazon. bill w On Wed, Jun 5, 2019 at 6:10 PM Dan TheBookMan wrote: > > https://www.newscientist.com/article/mg23331151-200-unmaking-the-myths-of-our-gendered-minds/ > > Since sex/gender differences have been discussed here much of late and > I've recommended the work of Cordelia Fine before, I thought this would be > helpful... And maybe it'll convince some to read her book. ;) > > I read it a while back and found it quite illuminating. I especially liked > her discussion of risk-taking. > > Regards, > > Dan > Sample my Kindle books at: > > http://author.to/DanUst > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danust2012 at gmail.com Thu Jun 6 00:30:58 2019 From: danust2012 at gmail.com (Dan TheBookMan) Date: Wed, 5 Jun 2019 17:30:58 -0700 Subject: [ExI] Brief review of Testosterone Rex In-Reply-To: References: Message-ID: <4443B975-748C-4752-AEF1-12290C2D79A4@gmail.com> Yeah, it's amazing how many there are - unlike with her earlier book, _Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference_, which only had 9% at one star. One of the longer customer reviews I skimmed seemed to boil down to this: conventional binary gender roles are as plain as day, so they must clearly be the outcome of genetics. By the way, unrelated to this book, but you might be interested in it: _Darwin Comes to Town: How the Urban Jungle Drives Evolution_ by Menno Schilthuizen. I'm almost finished with it. Littered with many interesting examples.
Regards, Dan Sample my Kindle books at: http://author.to/DanUst On Jun 5, 2019, at 4:42 PM, William Flynn Wallace wrote: > Be sure to read the lengthy reviews, 37% one star, on Amazon. bill w > >> On Wed, Jun 5, 2019 at 6:10 PM Dan TheBookMan wrote: >> https://www.newscientist.com/article/mg23331151-200-unmaking-the-myths-of-our-gendered-minds/ >> >> Since sex/gender differences have been discussed here much of late and I've recommended the work of Cordelia Fine before, I thought this would be helpful... And maybe it'll convince some to read her book. ;) >> >> I read it a while back and found it quite illuminating. I especially liked her discussion of risk-taking. >> >> Regards, >> >> Dan

From foozler83 at gmail.com Thu Jun 6 00:42:00 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 5 Jun 2019 19:42:00 -0500 Subject: [ExI] Brief review of Testosterone Rex In-Reply-To: <4443B975-748C-4752-AEF1-12290C2D79A4@gmail.com> References: <4443B975-748C-4752-AEF1-12290C2D79A4@gmail.com> Message-ID: Thanks! bill w On Wed, Jun 5, 2019 at 7:34 PM Dan TheBookMan wrote: > Yeah, it's amazing how many there are - unlike with her earlier book, _Delusions > of Gender: How Our Minds, Society, and Neurosexism Create Difference_, > which only had 9% at one star. One of the longer customer reviews I skimmed > seemed to boil down to this: conventional binary gender roles are as plain as > day, so they must clearly be the outcome of genetics. > > By the way, unrelated to this book, but you might be interested in it: > _Darwin Comes to Town: How the Urban Jungle Drives Evolution_ by Menno > Schilthuizen. I'm almost finished with it. Littered with many interesting > examples. > > Regards, > > Dan > Sample my Kindle books at: > http://author.to/DanUst > > On Jun 5, 2019, at 4:42 PM, William Flynn Wallace > wrote: > > Be sure to read the lengthy reviews, 37% one star, on Amazon.
bill w > > On Wed, Jun 5, 2019 at 6:10 PM Dan TheBookMan > wrote: > >> >> https://www.newscientist.com/article/mg23331151-200-unmaking-the-myths-of-our-gendered-minds/ >> >> Since sex/gender differences have been discussed here much of late and >> I've recommended the work of Cordelia Fine before, I thought this would be >> helpful... And maybe it'll convince some to read her book. ;) >> >> I read it a while back and found it quite illuminating. I especially >> liked her discussion of risk-taking. >> >> Regards, >> >> Dan >>

From foozler83 at gmail.com Mon Jun 10 17:39:34 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 10 Jun 2019 12:39:34 -0500 Subject: [ExI] ??? Message-ID: I have never in my life run across such a crazy idea, but apparently it is fairly widespread. The idea is that vision is accomplished by the eyes sending some sort of rays out to the world and perhaps reflecting back to the eyes to be seen. Have any of you run across this? I have heard of some crazy things in my life, but this is now #1. bill w

From msd001 at gmail.com Mon Jun 10 17:49:15 2019 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 10 Jun 2019 13:49:15 -0400 Subject: [ExI] ??? In-Reply-To: References: Message-ID: On Mon, Jun 10, 2019, 1:43 PM William Flynn Wallace wrote: > I have never in my life run across such a crazy idea, but apparently it is > fairly widespread. > > The idea is that vision is accomplished by the eyes sending some sort of > rays out to the world and perhaps reflecting back to the eyes to be seen. > > Have any of you run across this? I have heard of some crazy things in my > life, but this is now #1.
> I imagine it comes from early drawings of light going into the optics of an eye... I'm not sure how the leap is made to get to the eye sending out a beam though. Maybe it's like the red laser eyes of a terminator or a borg? Or maybe those ideas echo the earlier notion that the eye sends a signal... like radar? It's an interesting question.

From spike at rainier66.com Mon Jun 10 17:52:07 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Jun 2019 10:52:07 -0700 Subject: [ExI] ??? In-Reply-To: References: Message-ID: <00b401d51fb5$38ce3f40$aa6abdc0$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Sent: Monday, June 10, 2019 10:40 AM To: ExI chat list Subject: [ExI] ??? I have never in my life run across such a crazy idea, but apparently it is fairly widespread. The idea is that vision is accomplished by the eyes sending some sort of rays out to the world and perhaps reflecting back to the eyes to be seen. Have any of you run across this? I have heard of some crazy things in my life, but this is now #1. bill w Crazy idea? Many of the ancient Greek philosophers held this, making it standard theory up until the time of Sir Isaac Newton himself. The notion was that in blind people, a damaged eye could no longer emit the rays. They even had evidence: the cloudy white stuff on the eyes of blind people was blocking the rays from going out. spike

From danust2012 at gmail.com Mon Jun 10 17:53:17 2019 From: danust2012 at gmail.com (Dan TheBookMan) Date: Mon, 10 Jun 2019 10:53:17 -0700 Subject: [ExI] ??? In-Reply-To: References: Message-ID: The ancients held it. It's not so crazy unless one already has a modern view of these things. Kind of like: why do you believe the brain is the center of cognition?
How would one know that without a lot of evidence that earlier peoples simply didn't have? See https://en.m.wikipedia.org/wiki/Emission_theory_(vision) Regards, Dan Sample my Kindle books at: http://author.to/DanUst > On Jun 10, 2019, at 10:39 AM, William Flynn Wallace wrote: > > I have never in my life run across such a crazy idea, but apparently it is fairly widespread. > > The idea is that vision is accomplished by the eyes sending some sort of rays out to the world and perhaps reflecting back to the eyes to be seen. > > Have any of you run across this? I have heard of some crazy things in my life, but this is now #1. > > bill w >

From dsunley at gmail.com Mon Jun 10 18:03:18 2019 From: dsunley at gmail.com (Darin Sunley) Date: Mon, 10 Jun 2019 12:03:18 -0600 Subject: [ExI] ??? In-Reply-To: References: Message-ID: It's an ancient Greek theory that, because it was advocated by Galen (2nd century), retained a lot of prestige in Europe through the Dark Ages. https://en.m.wikipedia.org/wiki/Emission_theory_(vision) On Mon, Jun 10, 2019, 11:53 AM Mike Dougherty wrote: > On Mon, Jun 10, 2019, 1:43 PM William Flynn Wallace > wrote: > >> I have never in my life run across such a crazy idea, but apparently it is >> fairly widespread. >> >> The idea is that vision is accomplished by the eyes sending some sort of >> rays out to the world and perhaps reflecting back to the eyes to be seen. >> >> Have any of you run across this? I have heard of some crazy things in my >> life, but this is now #1. >> > > I imagine it comes from early drawings of light going into the optics of > an eye... > > I'm not sure how the leap is made to get to the eye sending out a beam > though. > > Maybe it's like the red laser eyes of a terminator or a borg? Or maybe > those ideas echo the earlier notion that the eye sends a signal... like > radar?
> > It's an interesting question >

From spike at rainier66.com Mon Jun 10 18:36:25 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 10 Jun 2019 11:36:25 -0700 Subject: [ExI] ??? In-Reply-To: References: Message-ID: <004d01d51fbb$68fa7ed0$3aef7c70$@rainier66.com> >>…I have never in my life run across such a crazy idea, but apparently it is fairly widespread. >>…The idea is that vision is accomplished by the eyes sending some sort of rays out to the world and perhaps reflecting back to the eyes to be seen. From: extropy-chat On Behalf Of Darin Sunley Subject: Re: [ExI] ??? >…It's an ancient Greek theory that, because it was advocated by Galen (2nd century), retained a lot of prestige in Europe through the Dark Ages… BillW, I had a buddy in high school, it was so crazy, he had sex-ray vision. Girls would come along, he didn't even really need to say anything, or nothing much. He would just look at them somehow and the sex-ray vision would go out from his eyes and they would gaze back, and it never did take very long even. Resistance was futile: they would be conquered by his sex-ray vision. The old timers in Greece probably had guys like that, didn't know how the heck they did it. We guys watched, could never figure it out. He wasn't rich. He wasn't handsome (as far as we could tell he wasn't). He wasn't particularly brilliant. He didn't even have that great a car. Most mysterious. I told the other lads it was like Proverbs Chapter 30, v 18 and 19: 18 There be three things which are too wonderful for me, yea, four which I know not: 19 The way of an eagle in the air; the way of a serpent upon a rock; the way of a ship in the midst of the sea; and the way of a man with a maid.
I told them even wise men couldn't figure it out. Our friend was the serpent upon a rock. His sex-ray vision would get him in the sack with the local maids. spike

From johnkclark at gmail.com Tue Jun 11 11:59:27 2019 From: johnkclark at gmail.com (John Clark) Date: Tue, 11 Jun 2019 07:59:27 -0400 Subject: [ExI] Cold fusion Message-ID: Bizarrely, Google just announced in Nature the results of a secret 4-year $10,000,000 experimental investigation of cold fusion. Their bottom-line conclusion: "We have yet to yield any evidence of such an effect". Revisiting the cold case of cold fusion John K Clark

From danust2012 at gmail.com Tue Jun 11 15:47:20 2019 From: danust2012 at gmail.com (Dan TheBookMan) Date: Tue, 11 Jun 2019 08:47:20 -0700 Subject: [ExI] Cold fusion In-Reply-To: References: Message-ID: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> While I'm skeptical of the original cold fusion claim, the Nature piece also had: "Nonetheless, a by-product of our investigations has been to provide new insights into highly hydrided metals and low-energy nuclear reactions, and we contend that there remains much interesting science to be done in this underexplored parameter space." Regards, Dan Sample my Kindle books at: http://author.to/DanUst > On Jun 11, 2019, at 4:59 AM, John Clark wrote: > > Bizarrely, Google just announced in Nature the results of a secret 4-year $10,000,000 experimental investigation of cold fusion. Their bottom-line conclusion: > "We have yet to yield any evidence of such an effect". > > Revisiting the cold case of cold fusion > > John K Clark
From foozler83 at gmail.com Tue Jun 11 16:09:49 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 11 Jun 2019 11:09:49 -0500 Subject: [ExI] Cold fusion In-Reply-To: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> Message-ID: In other words, "please keep our grants going." bill w On Tue, Jun 11, 2019 at 10:50 AM Dan TheBookMan wrote: > While I'm skeptical of the original cold fusion claim, the Nature piece > also had: > > "Nonetheless, a by-product of our investigations has been to provide new > insights into highly hydrided metals and low-energy nuclear reactions, and > we contend that there remains much interesting science to be done in this > underexplored parameter space." > > Regards, > > Dan > Sample my Kindle books at: > > http://author.to/DanUst > > On Jun 11, 2019, at 4:59 AM, John Clark wrote: > > Bizarrely, Google just announced in Nature the results of a secret 4-year > $10,000,000 experimental investigation of cold fusion. Their bottom-line > conclusion: > "We have yet to yield any evidence of such an effect". > > Revisiting the cold case of cold fusion > > > John K Clark >

From danust2012 at gmail.com Tue Jun 11 16:21:26 2019 From: danust2012 at gmail.com (Dan TheBookMan) Date: Tue, 11 Jun 2019 09:21:26 -0700 Subject: [ExI] Cold fusion In-Reply-To: References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> Message-ID: <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> Yeah, it seems like a boilerplate sentence too.
Regards, Dan Sample my Kindle books at: http://author.to/DanUst > On Jun 11, 2019, at 9:09 AM, William Flynn Wallace wrote: > > In other words, "please keep our grants going." > > bill w > >> On Tue, Jun 11, 2019 at 10:50 AM Dan TheBookMan wrote: >> While I'm skeptical of the original cold fusion claim, the Nature piece also had: >> >> "Nonetheless, a by-product of our investigations has been to provide new insights into highly hydrided metals and low-energy nuclear reactions, and we contend that there remains much interesting science to be done in this underexplored parameter space." >> >> Regards, >> >> Dan >> Sample my Kindle books at: >> http://author.to/DanUst >> >>> On Jun 11, 2019, at 4:59 AM, John Clark wrote: >>> >>> Bizarrely, Google just announced in Nature the results of a secret 4-year $10,000,000 experimental investigation of cold fusion. Their bottom-line conclusion: >>> "We have yet to yield any evidence of such an effect". >>> >>> Revisiting the cold case of cold fusion >>> >>> John K Clark >

From spike at rainier66.com Tue Jun 11 16:38:48 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Jun 2019 09:38:48 -0700 Subject: [ExI] Cold fusion In-Reply-To: <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> Message-ID: <005301d52074$2530ada0$6f9208e0$@rainier66.com> From: extropy-chat On Behalf Of Dan TheBookMan Subject: Re: [ExI] Cold fusion >…Yeah, it seems like a boilerplate sentence too. Regards, Dan We know the term Black Swan event, something that is so terrible there is little point in even pondering the consequences. An example: a meteor strikes Earth, of sufficient size to kick up dust and debris, causing mass extinctions and 99% of humanity to suffer a horrifying, painful mass death.
Cold fusion is the opposite of a black swan event: if that ever worked out, it would solve so many of humanity's most fundamental problems so cheaply and easily, there is little we could not accomplish. It would be the new dawn of humanity. I often get the feeling we allow the unlimited marvelous consequences of cold fusion to influence our research, a kind of version of buying a hundred-trillion-dollar lottery ticket when the odds of winning are ten trillion to one. The argument could be made that such a ticket is perhaps a good deal. What term could we use for the opposite of a black swan event? spike Sample my Kindle books at: http://author.to/DanUst On Jun 11, 2019, at 9:09 AM, William Flynn Wallace > wrote: In other words, "please keep our grants going." bill w On Tue, Jun 11, 2019 at 10:50 AM Dan TheBookMan > wrote: While I'm skeptical of the original cold fusion claim, the Nature piece also had: "Nonetheless, a by-product of our investigations has been to provide new insights into highly hydrided metals and low-energy nuclear reactions, and we contend that there remains much interesting science to be done in this underexplored parameter space." Regards, Dan Sample my Kindle books at: http://author.to/DanUst On Jun 11, 2019, at 4:59 AM, John Clark > wrote: Bizarrely, Google just announced in Nature the results of a secret 4-year $10,000,000 experimental investigation of cold fusion. Their bottom-line conclusion: "We have yet to yield any evidence of such an effect". Revisiting the cold case of cold fusion John K Clark
From johnkclark at gmail.com Tue Jun 11 17:27:44 2019 From: johnkclark at gmail.com (John Clark) Date: Tue, 11 Jun 2019 13:27:44 -0400 Subject: [ExI] Cold fusion In-Reply-To: <005301d52074$2530ada0$6f9208e0$@rainier66.com> References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> <005301d52074$2530ada0$6f9208e0$@rainier66.com> Message-ID: On Tue, Jun 11, 2019 at 12:41 PM wrote: > > *What term could we use for the opposite of a black swan event?* > I don't know, white crow event maybe? Nanotechnology and Cold Fusion aren't the only things that could produce one. If, contrary to most mathematicians' expectations, it turned out that P=NP and somebody found an algorithm that could solve nondeterministic polynomial time problems in polynomial time, that would create a singularity in human affairs. Less radical, but still probably large enough to produce a singularity, would be if someone found a practical way to make a Quantum Computer of several hundred or a thousand Qubits (high-quality qubits, not the sort D-Wave makes use of). Quantum Computer expert Scott Aaronson says the conservative view is that such a machine is possible; those who say such a machine is not possible are the radicals, because they need to invoke new, undiscovered laws of physics to make their case. John K Clark
From spike at rainier66.com Tue Jun 11 18:37:01 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Jun 2019 11:37:01 -0700 Subject: [ExI] Cold fusion In-Reply-To: References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> <005301d52074$2530ada0$6f9208e0$@rainier66.com> Message-ID: <008c01d52084$a947e210$fbd7a630$@rainier66.com> From: extropy-chat On Behalf Of John Clark Subject: Re: [ExI] Cold fusion On Tue, Jun 11, 2019 at 12:41 PM > wrote: > What term could we use for the opposite of a black swan event? I don't know, white crow event maybe? Nanotechnology and Cold Fusion aren't the only things that could produce one. … John K Clark White crow event, I like it. Extropians are, in a sense, the white crow society. spike

From spike at rainier66.com Tue Jun 11 18:56:09 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Jun 2019 11:56:09 -0700 Subject: [ExI] Cold fusion In-Reply-To: <008c01d52084$a947e210$fbd7a630$@rainier66.com> References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> <005301d52074$2530ada0$6f9208e0$@rainier66.com> <008c01d52084$a947e210$fbd7a630$@rainier66.com> Message-ID: <00bd01d52087$553fb550$ffbf1ff0$@rainier66.com> If I may offer a possible modification of John's notion, a white raven? It would allude to Poe's raven, whose comment "nevermore" has a delightfully ambiguous meaning. On the other hand, that would give away the usual bad-news connotation as found in the Crosby, Stills & Nash (assuming you are old enough) collaboration with Jackson Browne, Crow on the Cradle, where the crow foretells bad futures ahead. In the old days of my misspent youth, there was a legendary caladrius bird, which was said to perch on the foot of the sick bed. If the caladrius faced the patient, he would survive.
If it faced away, the hapless prole would perish. However… the caladrius is white. Ravens and crows are always black. OK, I have the answer: we figure out a way to genetically modify a raven so that it is white, sell them as novelty pets, make a buttload. spike From: spike at rainier66.com Sent: Tuesday, June 11, 2019 11:37 AM To: 'ExI chat list' Cc: spike at rainier66.com Subject: RE: [ExI] Cold fusion From: extropy-chat > On Behalf Of John Clark Subject: Re: [ExI] Cold fusion On Tue, Jun 11, 2019 at 12:41 PM > wrote: > What term could we use for the opposite of a black swan event? I don't know, white crow event maybe? Nanotechnology and Cold Fusion aren't the only things that could produce one. … John K Clark White crow event, I like it. Extropians are, in a sense, the white crow society. spike

From foozler83 at gmail.com Tue Jun 11 19:41:54 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 11 Jun 2019 14:41:54 -0500 Subject: [ExI] marxism Message-ID: I am no student of it, and cannot claim any expertise at all, and furthermore, I am quite sure that readers will, as usual, correct my errors. Carl Rogers was a psychologist who preached nonviolence. No punishment of children. Just let them be. They will be fine. Meet everything with 'unconditional positive regard'. This should work because people are born good and will develop into fine citizens if we just let them. Whatever you can say, for or against Rogers, you cannot say that his system does not work, because no one has ever tried it, to my knowledge. Parents who did try it found that their kids were turning into little hellions who needed something other than unconditional positive regard, and they abandoned Rogers. Now Marxism: 'from each according to their abilities, to each according to their needs.' Now to me, this sounds wonderful. Just the kind of thing every society should aim for.
I cannot find any fault in it. Re-reading The Blank Slate, a section about how genetic approaches like sociobiology were attacked viciously in the Ivy League, I have to wonder how any professor, no matter how warped, could adopt Marxism. It is plain that every country that tried to be Marxist, or at least some variety of it, has been a total economic failure. I think China is pretty far from Marxism, though of course they do not use the word 'capitalism'. So the only thing I can think of that lets harebrained Marxist professors avoid serious criticism is that true Marxism has never been tried; thus it has not failed. Eh? bill w

From sparge at gmail.com Tue Jun 11 20:02:24 2019 From: sparge at gmail.com (Dave Sill) Date: Tue, 11 Jun 2019 16:02:24 -0400 Subject: [ExI] Cold fusion In-Reply-To: <00bd01d52087$553fb550$ffbf1ff0$@rainier66.com> References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> <005301d52074$2530ada0$6f9208e0$@rainier66.com> <008c01d52084$a947e210$fbd7a630$@rainier66.com> <00bd01d52087$553fb550$ffbf1ff0$@rainier66.com> Message-ID: https://theravenspire.blogspot.com/2012/09/white-ravens.html On Tue, Jun 11, 2019 at 3:05 PM wrote: > If I may offer a possible modification of John's notion, a white raven? > It would allude to Poe's raven, whose comment "nevermore" has a delightfully > ambiguous meaning. > > On the other hand, that would give away the usual bad-news connotation as > found in the Crosby, Stills & Nash (assuming you are old enough) collaboration > with Jackson Browne, Crow on the Cradle, where the crow > foretells bad futures ahead. > > In the old days of my misspent youth, there was a legendary caladrius > bird, which was said to perch on the foot of the sick bed. If the > caladrius faced the patient, he would survive. If it faced away, the > hapless prole would perish. However… the caladrius is white.
Ravens and > crows are always black. > > OK, I have the answer: we figure out a way to genetically modify a raven > so that it is white, sell them as novelty pets, make a buttload. > > spike > > From: spike at rainier66.com > Sent: Tuesday, June 11, 2019 11:37 AM > To: 'ExI chat list' > Cc: spike at rainier66.com > Subject: RE: [ExI] Cold fusion > > From: extropy-chat On Behalf Of John Clark > Subject: Re: [ExI] Cold fusion > > On Tue, Jun 11, 2019 at 12:41 PM wrote: > > > What term could we use for the opposite of a black swan event? > > I don't know, white crow event maybe? Nanotechnology and Cold Fusion > aren't the only things that could produce one. … John K Clark > > White crow event, I like it. Extropians are, in a sense, the white crow > society. > > spike >

From spike at rainier66.com Tue Jun 11 20:06:03 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Jun 2019 13:06:03 -0700 Subject: [ExI] marxism In-Reply-To: References: Message-ID: <00dd01d52091$190ae9b0$4b20bd10$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace >…Whatever you can say, for or against Rogers, you cannot say that his system does not work, because no one has ever tried it, to my knowledge. Parents who did try it found that their kids were turning into little hellions who needed something other than unconditional positive regard, and they abandoned Rogers… All the prescriptions for parenthood fail because all are oversimplifications. There are people who are just born good. Others, born bad. One size does not fit all. One size doesn't even fit most.
I was one of the damn lucky ones: my only child seems to be one of the few inherently good people; he has never caused me any trouble, never been a bother, and every one of his teachers agrees. I know of people who turn toward evil at every opportunity. Even the insightful Steven Pinker under-accounts for the differences in human nature. >…Now Marxism: 'from each according to their abilities, to each according to their needs.' Now to me, this sounds wonderful. Just the kind of thing every society should aim for. I cannot find any fault in it… bill w Up to each to determine which is which. The problem with Marxism is that it requires government intervention. Create a version of Marxism that runs on its own with no government intervention, and you have it: a functional family. spike

From spike at rainier66.com Tue Jun 11 20:09:23 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 11 Jun 2019 13:09:23 -0700 Subject: [ExI] Cold fusion In-Reply-To: References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> <005301d52074$2530ada0$6f9208e0$@rainier66.com> <008c01d52084$a947e210$fbd7a630$@rainier66.com> <00bd01d52087$553fb550$ffbf1ff0$@rainier66.com> Message-ID: <00f401d52091$901bdeb0$b0539c10$@rainier66.com> From: Dave Sill Subject: Re: [ExI] Cold fusion https://theravenspire.blogspot.com/2012/09/white-ravens.html OK right, now I remember hearing about that guy who runs that biz. They made a commercial about him: https://www.youtube.com/watch?v=oftjwYmlfoA spike On Tue, Jun 11, 2019 at 3:05 PM > wrote: If I may offer a possible modification of John's notion, a white raven? It would allude to Poe's raven, whose comment "nevermore" has a delightfully ambiguous meaning.
On the other hand, that would give away the usual bad-news connotation as found in the Crosby, Stills & Nash (assuming you are old enough) collaboration with Jackson Browne, Crow on the Cradle, where the crow foretells bad futures ahead. In the old days of my misspent youth, there was a legendary caladrius bird, which was said to perch on the foot of the sick bed. If the caladrius faced the patient, he would survive. If it faced away, the hapless prole would perish. However… the caladrius is white. Ravens and crows are always black. OK, I have the answer: we figure out a way to genetically modify a raven so that it is white, sell them as novelty pets, make a buttload. spike From: spike at rainier66.com > Sent: Tuesday, June 11, 2019 11:37 AM To: 'ExI chat list' > Cc: spike at rainier66.com Subject: RE: [ExI] Cold fusion From: extropy-chat > On Behalf Of John Clark Subject: Re: [ExI] Cold fusion On Tue, Jun 11, 2019 at 12:41 PM > wrote: > What term could we use for the opposite of a black swan event? I don't know, white crow event maybe? Nanotechnology and Cold Fusion aren't the only things that could produce one. … John K Clark White crow event, I like it. Extropians are, in a sense, the white crow society. spike
Simbrain is an awesome piece of free software that lets you play with encapsulated graphical neurons on your computer screen. You can create neurons with a push of a button and wire them together using your mouse. It's like a Lego set or Tinkertoys for AI and machine learning: relatively intuitive to use, and requiring very little in the way of coding or math to get up and running designing toy brains. It is written in Java, so it is platform-independent, and it is downloadable here: https://www.simbrain.net/ OK, so being inspired by all the research being done on the numeracy and math skills of honeybees, I designed a 55-neuron brain as a network with 5 input neurons, 45 neurons in 3 hidden layers, and an output layer with 5 neurons labelled "1" through "5". So then I set about seeing if I could use the back-propagation algorithm to train my tiny brain to count to five. Basically I feed it a training set consisting of examples of all the possible ways that the 5 input neurons can be lit up. For example, I input all 5 ways that 1 input neuron out of 5 input neurons can be lit up, i.e. the 1st neuron can be lit or the 2nd or the 3rd, etc. Then I correspond it to the output neuron labelled with the numeral "1". I do the same thing for all the ways that 2 out of 5 input neurons can be lit up and associate those with the output neuron labelled "2". I do the same thing all the way up to all 5 input neurons lit up. (There are 10 ways to make 2 out of 5 neurons light up, 10 ways to make 3 out of 5 neurons light up, 5 ways to make 4 neurons light up, and just 1 way to make 5 out of 5 input neurons light up.) So then I use back propagation to train the neurons for about 20,000 iterations, randomizing the weights and activations a few times to escape from local minima and get down to an error rate below 1%. And then I tested it, and it worked like a charm.
It could seemingly associate the reality of between one and five objects, in this specific case activated input neurons, with the specific output neuron that symbolized that specific number. Sort of like training a child to point to the numeral representing the number of blocks you had set down before him or her. So that was mildly interesting but not particularly impressive, because I had literally given it every possible way to make those input neurons light up, along with the enumerated output neurons those patterns corresponded to. To program a computer to do that would be trivial. My tiny brain could simply be memorizing data without actually thinking about it or understanding the concept, like a simple lookup table. So then, I went through my training data and pruned away various patterns so that I could test the brain against patterns it had never seen before. And I trained my brain with this pared-down data and then tested it against patterns of lit-up input neurons it hadn't been trained on. Nonetheless it did quite well at counting the activated neurons in patterns it hadn't seen before, all the way down to almost half of the original training set. Taking out 4 of the patterns where 2 input neurons out of 5 were activated reduced the activation of the neuron which represented "2", but it nonetheless got the right answer. The fewer the training examples of a given number, the less able the toy brain was to figure out what the number was. For example, if I removed the one way that all 5 input neurons could be lit up at once from the training set, then upon presenting it with that pattern of all 5 being lit up, it did not even try to infer that that novel pattern could be the only previously unused output neuron (the one labelled "5"). Instead it lit up none of the output neurons, thus declining to answer. So my conclusion is that neural networks are pretty damn good at deductive reasoning and interpolation but pretty damn lousy at induction and extrapolation.
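For anyone who wants the flavor of this experiment without installing Simbrain, here is a rough, self-contained sketch in plain Python. To be clear, this is not the Simbrain model itself, and it is deliberately smaller: one hidden layer of 12 sigmoid units instead of three hidden layers of 45 neurons, and the layer size, learning rate, and epoch count are my own guesses rather than the settings described above. It builds the same 31 training patterns (1 to 5 lit input neurons out of 5), trains with ordinary per-example backpropagation, and measures accuracy on the training set:

```python
import itertools
import math
import random
from collections import Counter

# All ways to light up 1..5 of the 5 input neurons: 5+10+10+5+1 = 31 patterns.
patterns = [p for p in itertools.product([0, 1], repeat=5) if sum(p) > 0]
counts = Counter(sum(p) for p in patterns)  # patterns per lit-neuron count

random.seed(0)
n_in, n_hid, n_out = 5, 12, 5  # guessed sizes: one hidden layer, not three

# Small random initial weights for a sigmoid-hidden / softmax-output network.
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Hidden activations, then softmax over the 5 output "label" neurons.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    logits = [sum(w * hi for w, hi in zip(row, h)) + b
              for row, b in zip(W2, b2)]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return h, [e / s for e in exps]

def train_step(x, target, lr=0.3):
    h, y = forward(x)
    # Softmax + cross-entropy yields a simple output error signal.
    d_out = [y[k] - (1.0 if k == target else 0.0) for k in range(n_out)]
    # Back-propagate the error through the sigmoid hidden layer.
    d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * W2[k][j] for k in range(n_out))
             for j in range(n_hid)]
    for k in range(n_out):
        for j in range(n_hid):
            W2[k][j] -= lr * d_out[k] * h[j]
        b2[k] -= lr * d_out[k]
    for j in range(n_hid):
        for i in range(n_in):
            W1[j][i] -= lr * d_hid[j] * x[i]
        b1[j] -= lr * d_hid[j]

# Class k means "k+1 neurons lit", i.e. the output neuron labelled "1".."5".
data = [(p, sum(p) - 1) for p in patterns]
for _ in range(1500):
    random.shuffle(data)
    for x, t in data:
        train_step(x, t)

def predict(x):
    _, y = forward(x)
    return max(range(n_out), key=lambda k: y[k])

accuracy = sum(predict(x) == t for x, t in data) / len(data)
```

The generalization test described above corresponds to deleting some entries from `data` before the training loop and then calling `predict` on the held-out patterns; in particular, a class with no surviving training examples (such as the single all-lit pattern) leaves the network with nothing to anchor the "5" output on, matching the observation that it declines to answer.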
If anybody downloads the software and wants a copy of my toy brain to play with, then email me offlist and I will send it as a zip file. Stuart LaForge From msd001 at gmail.com Wed Jun 12 03:07:30 2019 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 11 Jun 2019 23:07:30 -0400 Subject: [ExI] Cold fusion In-Reply-To: <00f401d52091$901bdeb0$b0539c10$@rainier66.com> References: <6A2258A7-B545-4B07-954E-C794E993A001@gmail.com> <3E9628B9-330D-4291-8EB8-34378CFEE4D1@gmail.com> <005301d52074$2530ada0$6f9208e0$@rainier66.com> <008c01d52084$a947e210$fbd7a630$@rainier66.com> <00bd01d52087$553fb550$ffbf1ff0$@rainier66.com> <00f401d52091$901bdeb0$b0539c10$@rainier66.com> Message-ID: The black swan describes the infrequency, not the "badness"; it doesn't attribute anything to the blackness (I guess). The opposite of 3 stdev left of the normalized bell distribution is 3 stdev right, but that has the same likelihood... therefore: the opposite of a black swan event is a black swan event. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Jun 12 03:30:39 2019 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 11 Jun 2019 20:30:39 -0700 Subject: [ExI] Black Swan event. In-Reply-To: References: Message-ID: On Tue, Jun 11, 2019 at 1:23 PM (spike at rainier66.com) wrote: snip > We know the term Black Swan event, something that is so terrible there is little point in even pondering the consequences. https://en.wikipedia.org/wiki/Black_swan_event The event results may be terrible, but that's not part of the definition. Based on the author's criteria: "The event is a surprise (to the observer). The event has a major effect. After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals."
> An example: a meteor strikes earth, of sufficient size to kick up dust and debris causing mass extinctions and 99% of humanity to suffer a horrifying painful mass death. Given the K/T extinction, not certain this would be that much of a surprise, though it would surprise me if a K/T event hit at 8 am tomorrow. > Cold fusion is the opposite of a black swan event: if that ever worked out, it would solve so many of humanity's most fundamental problems so cheaply and easily, there is little we could not accomplish. It would be the new dawn of humanity. I think you might be overvaluing low temperature steam. snip > What term could we use for the opposite of a black swan event? Not really needed. It would be kind of interesting to list events unexpected enough to be called black swan events. Keith From pharos at gmail.com Wed Jun 12 13:35:27 2019 From: pharos at gmail.com (BillK) Date: Wed, 12 Jun 2019 14:35:27 +0100 Subject: [ExI] Black Swan event. In-Reply-To: References: Message-ID: On Wed, 12 Jun 2019 at 04:34, Keith Henson wrote: > > https://en.wikipedia.org/wiki/Black_swan_event > > The event results may be terrible, but that's not part of the definition. > > Based on the author's criteria: > > "The event is a surprise (to the observer). > The event has a major effect. > After the first recorded instance of the event, it is rationalized by > hindsight, as if it could have been expected; that is, the relevant > data were available but unaccounted for in risk mitigation programs. > The same is true for the personal perception by individuals." > > It would be kind of interesting to list events unexpected enough to be > called black swan events. > I don't think you can do that. :) Black swan events are rare and thought to be impossible (until they actually happen). Although, - Why, sometimes I've believed as many as six impossible things before breakfast. - Lewis Carroll Perhaps Grey Swan events would cover that situation.
Nomura's Bilal Hafeez writes that, while he would like to be able to predict black swans, by definition that is impossible. "However, its close cousin the grey swan can be foreseen. These are the unlikely but impactful events that, in our opinion, lie outside the usual base case and risk scenarios of the analyst community." ------------------------ For example, hurricanes are not Black Swan events - they are not unexpected. But an especially severe hurricane that, say, effectively wiped out a city would be a Grey Swan. BillK From pharos at gmail.com Wed Jun 12 15:45:30 2019 From: pharos at gmail.com (BillK) Date: Wed, 12 Jun 2019 16:45:30 +0100 Subject: [ExI] Transhumanism - Six fingers? Message-ID: Extra fingers, often seen as useless, can offer major dexterity advantages. An extra digit proves useful for texting, typing and eating, a case study shows. By Laura Sanders, June 12, 2019 Quote: Burdet says that these participants live in a world designed for people with five fingers, which can lead to interesting adaptations. Eating utensils are too simple for them, he says, "so they constantly change the posture on the utensils and use them in a different way." After spending time with the participants, "I slowly felt impaired with my five-fingered hands," he says. The results may not extend to other people with extra digits, Burdet says. In some cases, extra fingers may be less well developed. ___________________ Nature paper here: ---------------------- Be sure to watch the one minute video! Creepy??? I wonder how they would be at playing the piano? BillK From spike at rainier66.com Wed Jun 12 17:01:04 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 12 Jun 2019 10:01:04 -0700 Subject: [ExI] Transhumanism - Six fingers? In-Reply-To: References: Message-ID: <000001d52140$6bf7e790$43e7b6b0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK Subject: [ExI] Transhumanism - Six fingers?
>>...Extra fingers, often seen as useless, can offer major dexterity advantages An extra digit proves useful for texting, typing and eating, a case study shows By Laura Sanders, June 12, 2019 Quote: Burdet says that these participants live in a world designed for people with five fingers... The results may not extend to other people with extra digits, Burdet says. In some cases, extra fingers may be less well developed. ___________________ Nature paper here: ---------------------- Be sure to watch the one minute video! Creepy??? I wonder how they would be at playing the piano? BillK _______________________________________________ BillK, before you toss the following notion as more spike silliness, consider the reproductive advantage in our modern world of some oddball characteristic. A clear prerequisite to (legitimate) reproduction in our modern very crowded world is acquaintance. Someone with a weird but harmless oddity like six functional fingers can attract attention, which increases the probability of reproduction. I have a notion that a minor thing like this may have led to speciation between protohumans and protobonobos (or protochimps in some lines of theory.) One of the possible oddball characteristics proposed has been bulbous heads. This notion is compelling because the bulbous-head male appears baby-like to prospective mates, which cross-wires the whole psychology of the mating game. BillW might comment on that. Fertile females may feel more comfortable in the presence of an adult male with juvenile characteristics. Other oddball characteristics could cause the cascade of adaptations to the new equilibrium, such as a knee variation that encouraged upright walking, which frees up the hands, makes the biped appear larger and more menacing, leading to dominance of a group and so on. Another possible adaptation is the use of the voice for something other than the usual ooo-ooo-ooo lines the females are so tired of hearing. 
I think of that as the proto-rock-star notion. OK, fast forward to today. Now we are a highly evolved highly uniform species. So... any oddball talent or (perhaps especially) a weird or rare anatomical variation would lead to one guy standing out, girls coming over to his lunch table at high school and oh let me see, etc. Looks to me like that would be a slight reproductive advantage: girls might see it as a status booster to virtue-signal by being accepting and friendly to the mutant. Taken further, they might wish to virtue-signal if they took him for a roll in the hay (to see if he had the usual number of everything else.) I suppose it is unclear if that case would still be considered "virtue" signaling, but hey, it's just a theory. Furthermore... for the six-fingered guy: there just HASTA be a way to make money with that. spike From foozler83 at gmail.com Wed Jun 12 17:25:45 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 12 Jun 2019 12:25:45 -0500 Subject: [ExI] Transhumanism - Six fingers? In-Reply-To: <000001d52140$6bf7e790$43e7b6b0$@rainier66.com> References: <000001d52140$6bf7e790$43e7b6b0$@rainier66.com> Message-ID: BillW might comment on that. Fertile females may feel more comfortable in the presence of an adult male with juvenile characteristics. spike Xenophobia in chickens. If a chick is abnormal in any way the other chickens will peck it to death. Humans are similar. Why attack gays and lesbians? They are different - not like us - who knows what other ways they are abnormal and bent? I dunno. I suspect that a lot of people would regard six fingers as "Ewwww", what else is wrong with them?" Imagine a mother whose daughter brings a hexadactylic man home? "Well, at least he isn't ___ (fill in with bigoted adjective)". But inevitably there will be women who find it attractive. Percentage? I dunno. 
Neoteny is the word for the persistence of youthful characteristics in the adult. You might Google that. Imagine some prototypically macho male. Got it? Now a pretty girl. Far more babyish, which is why I suspect that calling women and girls 'baby' is used in a zillion song lyrics. Some real data: around the time of ovulation, women are more attracted to pictures of very virile males, and presumably actual males of that visage, than at other times in the cycle. Sorry Spike. Although it might be investigated as to whether timid women are more attracted to less virile-looking males. Though you might also make a case that a timid woman would want the strongest male she could find to protect her. Love is complicated - you can quote me. I do have to object to the use of 'virile' to describe the athletic look. Hell, I was as virile as they were and maybe more in some cases, skinny though I am. So there! bill w On Wed, Jun 12, 2019 at 12:04 PM wrote: > > > -----Original Message----- > From: extropy-chat On Behalf Of > BillK > Subject: [ExI] Transhumanism - Six fingers? > > >>...Extra fingers, often seen as useless, can offer major dexterity > advantages An extra digit proves useful for texting, typing and eating, a > case study shows By Laura Sanders, June 12, 2019 > > < > https://www.sciencenews.org/article/having-six-fingers-can-offer-major-dexterity-advantages > > > > Quote: > Burdet says that these participants live in a world designed for people > with five fingers... > > The results may not extend to other people with extra digits, Burdet says. > In some cases, extra fingers may be less well developed. > ___________________ > > Nature paper here: > > ---------------------- > > Be sure to watch the one minute video! Creepy??? > I wonder how they would be at playing the piano?
> > BillK > > _______________________________________________ > > > > BillK, before you toss the following notion as more spike silliness, > consider the reproductive advantage in our modern world of some oddball > characteristic. A clear prerequisite to (legitimate) reproduction in our > modern very crowded world is acquaintance. Someone with a weird but > harmless oddity like six functional fingers can attract attention, which > increases the probability of reproduction. > > I have a notion that a minor thing like this may have led to speciation > between protohumans and protobonobos (or protochimps in some lines of > theory.) One of the possible oddball characteristics proposed has been > bulbous heads. This notion is compelling because the bulbous-head male > appears baby-like to prospective mates, which cross-wires the whole > psychology of the mating game. BillW might comment on that. Fertile > females may feel more comfortable in the presence of an adult male with > juvenile characteristics. > > Other oddball characteristics could cause the cascade of adaptations to > the new equilibrium, such as a knee variation that encouraged upright > walking, which frees up the hands, makes the biped appear larger and more > menacing, leading to dominance of a group and so on. > > Another possible adaptation is the use of the voice for something other > than the usual ooo-ooo-ooo lines the females are so tired of hearing. I > think of that as the proto-rock-star notion. > > OK, fast forward to today. Now we are a highly evolved highly uniform > species. So... any oddball talent or (perhaps especially) a weird or rare > anatomical variation would lead to one guy standing out, girls coming over > to his lunch table at high school and oh let me see, etc. > > Looks to me like that would be a slight reproductive advantage: girls > might see it as a status booster to virtue-signal by being accepting and > friendly to the mutant. 
Taken further, they might wish to virtue-signal if > they took him for a roll in the hay (to see if he had the usual number of > everything else.) I suppose it is unclear if that case would still be > considered "virtue" signaling, but hey, it's just a theory. > > Furthermore... for the six-fingered guy: there just HASTA be a way to make > money with that. > > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Jun 12 17:30:02 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 12 Jun 2019 12:30:02 -0500 Subject: [ExI] Black Swan event. In-Reply-To: References: Message-ID: We need another word. 'Unpredictable' is usually used incorrectly. You can predict anything: The 2078 Baseball World Series will be played in Manila and won by the Tokyo Dragons. A black swan event will occur tomorrow in my back yard, featuring curious white swans and a lonely iguana. Point is, you cannot predict certain things *accurately*, rather than not at all. I suggest 'unknown' bill w On Wed, Jun 12, 2019 at 8:39 AM BillK wrote: > On Wed, 12 Jun 2019 at 04:34, Keith Henson wrote: > > > > https://en.wikipedia.org/wiki/Black_swan_event > > > > The event results may be terrible, but that's not part of the definition. > > > > Based on the author's criteria: > > > > "The event is a surprise (to the observer). > > The event has a major effect. > > After the first recorded instance of the event, it is rationalized by > > hindsight, as if it could have been expected; that is, the relevant > > data were available but unaccounted for in risk mitigation programs. > > The same is true for the personal perception by individuals." > > > > It would be kind of interesting to list events unexpected enough to be > > called black swan events. 
> > > I don't think you can do that. :) > Black swan events are rare and thought to be impossible (until they > actually happen). > Although, - Why, sometimes I've believed as many as six impossible > things before breakfast. - Lewis Carroll > > Perhaps Grey Swan events would cover that situation. > Nomura's Bilal Hafeez writes, while he would like to be able to > predict black swans, by definition that is impossible. "However, its > close cousin the grey swan can be foreseen. These are the unlikely but > impactful events that, in our opinion, lie outside the usual base case > and risk scenarios of the analyst community. > ------------------------ > > For example, hurricanes are not Black Swan events - they are not > unexpected. > But an especially severe hurricane that maybe effectively wiped out a > city would be a Grey Swan. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Jun 12 19:41:33 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 12 Jun 2019 14:41:33 -0500 Subject: [ExI] control again Message-ID: In The Blank Slate, Pinker offers us ways to take some aspects of our human nature and make a better society. I got to thinking about the most successful way of leading people and changing their behaviors, including letting go of their money, and no psychologist came to mind (unless it was Watson in his later career), no technique of conditioning, reinforcement or punishment. Then the idea struck me: what influences everybody, every day, in significant ways, including spending their money? Marketing. Don't try to get people to change by forcing them in some way, making laws as an example.
(Yeah, if you've got them by the balls, their hearts and minds will follow, but that's wrong in so many ways.) Sell them the ideas. Great ideas will sell themselves to people, and once some are sold, others will follow. So my vote for most effective manipulator of behavior of all time, without doing anything authoritarian, immoral, or forcing, is Madison Avenue. If you, like me, are someone who can be easily led by the right situations, and can never be effectively pushed (which creates a powerful counterforce in me), then it has to be marketing for the selling of ideas. Freely chosen things are usually much preferred to those demanded. (Whether they are really free choices is a philosophical question, not a practical one.) There's gotta be a way to make buttloads of money out of this! bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From sen.otaku at gmail.com Wed Jun 12 21:11:17 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Wed, 12 Jun 2019 16:11:17 -0500 Subject: [ExI] control again In-Reply-To: References: Message-ID: Religion. > what is the most successful way of leading people, changing their behaviors, including letting go of their money, and no psychologist came to mind (unless it was Watson in his later career), no technique of conditioning, reinforcement or punishment. SR Ballard > On Jun 12, 2019, at 2:41 PM, William Flynn Wallace wrote: > > In The Blank Slate, Pinker offers us ways to take some aspects of our human nature and make a better society. I got to thinking: what is the most successful way of leading people, changing their behaviors, including letting go of their money, and no psychologist came to mind (unless it was Watson in his later career), no technique of conditioning, reinforcement or punishment. > > Then the idea struck me: what influences everybody, everyday, in significant ways, including spending their money? > > Marketing.
Don't try to get people to change by forcing them in some way, making laws as an example. (yeah, if you've got them by the balls, their hearts and minds will follow, but that's wrong in so many ways), Sell them the ideas. Great ideas will sell themselves to people and once some are sold, others will follow. > > So my vote for most effective manipulator of behavior of all time, without doing anything authoritarian, immoral, or forcing, is Madison Avenue. > > If you, like me, are someone who can be easily led by the right situations, and can never be effectively pushed (which creates a powerful counterforce in me), then it has to be marketing for the selling of ideas. Freely chosen things are usually much preferred to those demanded. (Are they really free choices is a philosophical question, not a practical one.) > > There's gotta be a way to make buttloads of money out of this! > > bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Jun 12 22:03:17 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 12 Jun 2019 17:03:17 -0500 Subject: [ExI] control again In-Reply-To: References: Message-ID: On Wed, Jun 12, 2019 at 4:14 PM SR Ballard wrote: > Religion. > > what is the most successful way of leading people, changing their > behaviors, including letting go of their money, and no psychologist came to > mind (unless it was Watson in his later career), no technique of > conditioning, reinforcement or punishment. > > > SR Ballard > I will not disagree, but I would call what religion does marketing. Religions were cobbled together out of various things and included what people consider important: family, violence, sex, forgiveness, punishment, and many more. 
The religions were headed by gods or at least highly influential people (Confucius, Buddha) to give the religion an aura of awe. What do people love? A tragic story; a meaning of life; spending eternity in Heaven; forgiveness for sins; destruction of enemies... Madison Ave. can't compete with gods, I think, so maybe you are right. Marketers do, however, use the same principles to attract people, only at a lower level. bill w > > On Jun 12, 2019, at 2:41 PM, William Flynn Wallace > wrote: > > In The Blank Slate, Pinker offers us ways to take some aspects of our > human nature and make a better society. I got to thinking: what is the > most successful way of leading people, changing their behaviors, including > letting go of their money, and no psychologist came to mind (unless it was > Watson in his later career), no technique of conditioning, reinforcement or > punishment. > > Then the idea struck me: what influences everybody, everyday, in > significant ways, including spending their money? > > Marketing. Don't try to get people to change by forcing them in some way, > making laws as an example. (yeah, if you've got them by the balls, their > hearts and minds will follow, but that's wrong in so many ways), Sell them > the ideas. Great ideas will sell themselves to people and once some are > sold, others will follow. > > So my vote for most effective manipulator of behavior of all time, without > doing anything authoritarian, immoral, or forcing, is Madison Avenue. > > If you, like me, are someone who can be easily led by the right > situations, and can never be effectively pushed (which creates a powerful > counterforce in me), then it has to be marketing for the selling of ideas. > Freely chosen things are usually much preferred to those demanded. (Are > they really free choices is a philosophical question, not a practical one.) > > There's gotta be a way to make buttloads of money out of this!
> > bill w > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Jun 13 02:24:04 2019 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Jun 2019 19:24:04 -0700 Subject: [ExI] control again In-Reply-To: References: Message-ID: I like to think of the most effective military actions as creating a context for marketing - or, perhaps, of shutting down other marketing. You can't pitch as much to people if your marketers keep getting slain. Notice that schools are among the primary targets of religious extremists. On Wed, Jun 12, 2019 at 12:44 PM William Flynn Wallace wrote: > In The Blank Slate, Pinker offers us ways to take some aspects of our > human nature and make a better society. I got to thinking: what is the > most successful way of leading people, changing their behaviors, including > letting go of their money, and no psychologist came to mind (unless it was > Watson in his later career), no technique of conditioning, reinforcement or > punishment. > > Then the idea struck me: what influences everybody, everyday, in > significant ways, including spending their money? > > Marketing. Don't try to get people to change by forcing them in some way, > making laws as an example. (yeah, if you've got them by the balls, their > hearts and minds will follow, but that's wrong in so many ways), Sell them > the ideas. Great ideas will sell themselves to people and once some are > sold, others will follow. > > So my vote for most effective manipulator of behavior of all time, without > doing anything authoritarian, immoral, or forcing, is Madison Avenue. 
> > If you, like me, are someone who can be easily led by the right > situations, and can never be effectively pushed (which creates a powerful > counterforce in me), then it has to be marketing for the selling of ideas. > Freely chosen things are usually much preferred to those demanded. (Are > they really free choices is a philosophical question, not a practical one.) > > There's gotta be a way to make buttloads of money out of this! > > bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Jun 13 14:51:28 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 13 Jun 2019 09:51:28 -0500 Subject: [ExI] control again In-Reply-To: References: Message-ID: Adrian wrote: Notice that schools are among the primary targets of religious extremists. Of course there is nothing new under the sun. Think of the Middle Ages and later when the RC Church kept the Bible away from everyone but priests who could read Latin. bill w On Wed, Jun 12, 2019 at 9:27 PM Adrian Tymes wrote: > I like to think of the most effective military actions as creating a > context for marketing - or, perhaps, of shutting down other marketing. > > You can't pitch as much to people if your marketers keep getting slain. > > Notice that schools are among the primary targets of religious extremists. > > On Wed, Jun 12, 2019 at 12:44 PM William Flynn Wallace < > foozler83 at gmail.com> wrote: > >> In The Blank Slate, Pinker offers us ways to take some aspects of our >> human nature and make a better society.
I got to thinking: what is the >> most successful way of leading people, changing their behaviors, including >> letting go of their money, and no psychologist came to mind (unless it was >> Watson in his later career), no technique of conditioning, reinforcement or >> punishment. >> >> Then the idea struck me: what influences everybody, everyday, in >> significant ways, including spending their money? >> >> Marketing. Don't try to get people to change by forcing them in some >> way, making laws as an example. (yeah, if you've got them by the balls, >> their hearts and minds will follow, but that's wrong in so many ways), >> Sell them the ideas. Great ideas will sell themselves to people and once >> some are sold, others will follow. >> >> So my vote for most effective manipulator of behavior of all time, >> without doing anything authoritarian, immoral, or forcing, is Madison >> Avenue. >> >> If you, like me, are someone who can be easily led by the right >> situations, and can never be effectively pushed (which creates a powerful >> counterforce in me), then it has to be marketing for the selling of ideas. >> Freely chosen things are usually much preferred to those demanded. (Are >> they really free choices is a philosophical question, not a practical one.) >> >> There's gotta be a way to make buttloads of money out of this! >> >> bill w >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Fri Jun 14 14:23:10 2019 From: johnkclark at gmail.com (John Clark) Date: Fri, 14 Jun 2019 10:23:10 -0400 Subject: [ExI] Trump kills fetal tissue research Message-ID: From today's issue of the journal "Science": "In a decision that shocked and distressed scientists who work with it, President Donald Trump's administration last week announced a new policy sharply curtailing U.S.-funded research that relies on fetal tissue donated after elective abortions. In addition to canceling a long-standing contract that uses the tissue to create mice for HIV drug testing, Trump immediately stopped several experiments being run by scientists employed by the National Institutes of Health (NIH), who can no longer use the tissue. But the biggest impact of the new policy will likely fall on scores of university-based researchers who rely on NIH funding. Going forward, they will have to navigate lengthy ethics reviews of each proposed experiment by advisory boards appointed by the secretary of health and human services. Even then, the secretary can overrule any board decisions he disagrees with. "The whole point here is to so wrap the research in red tape that it's impossible or at least unlikely to be feasible for many researchers," says bioethicist Alta Charo of the University of Wisconsin in Madison." John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Fri Jun 14 17:22:54 2019 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 14 Jun 2019 10:22:54 -0700 Subject: [ExI] The descent of religion Message-ID: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> It's ironic that while many religions deny the existence of evolution, evolution itself not only acknowledges the existence of religion, but also traces its origins and history.
There is a great "family tree" of religions on this site: https://ultraculture.org/blog/2015/11/30/map-world-religions/ Stuart LaForge From foozler83 at gmail.com Fri Jun 14 18:23:24 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 14 Jun 2019 13:23:24 -0500 Subject: [ExI] The descent of religion In-Reply-To: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> References: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> Message-ID: Here's a great article to go with the maps - I may have mentioned this one before: https://www.theatlantic.com/magazine/archive/2002/02/oh-gods/302412/ The maps must have bunched quite a few in the same bag because the article says that as of 2002 they counted 9,999,99, and increasing at the rate of two a day. bill w On Fri, Jun 14, 2019 at 12:26 PM Stuart LaForge wrote: > > Its ironic that while many religions deny the existence of evolution, > evolution itself not only acknowledges the existence of religion, but > also traces its origins and history. > > There is a great "family tree" of religions on this site: > https://ultraculture.org/blog/2015/11/30/map-world-religions/ > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Fri Jun 14 19:01:48 2019 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 14 Jun 2019 12:01:48 -0700 Subject: [ExI] stealth singularity Message-ID: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> Quoting Spike: > I have long thought of the singularity as being kinda like the Spanish > Inquisition: it just shows up unexpected, as it did in Monte Python. In many respects the Internet itself resembles a vast neural network. 
It is conceivable that it may one day spontaneously awaken to consciousness based on its complex interconnections alone without any intent on the part of our engineers who just want to increase the capacity, bandwidth, and efficiency of their respective sub-nets. Since the mind of such a being would exist on a hyperplane of many more dimensions than a human mind, which would be analogous to a single neuron, we might not be able to perceive or communicate with such an intelligence any more than one of your neurons would be aware of the sum totality of you. In other words, it might already have happened. Viral videos could be the neural impulses of a vast brain. Flash mobs and social unrest could be the Singularity stirring in its sleep. Just watch how the current generation of teenagers socialize with one another in a group setting by staring into their individual phones and occasionally showing one another content. > Spike's postulate: any algorithm we know how to write is not AI, by > definition. Our machine learning algorithms are currently designed to be superhuman specialists. There are several dozen of them in the literature at this point and they are all meant to solve very specific problems. Steven Pinker doesn't seem to think any engineers are even working on a general intelligence algorithm because, and I am paraphrasing here, "engineers are good smart people who know the dangers thereof." On the other hand, you have an emerging market for human-like androids like Sophia and all the Japanese fembots. Those will need some semblance of general intelligence to be adequate companions for the elderly and what not. So I think Pinker underestimates the probability of a general AI coming to pass. I think it's because he is a lefty. Lefties don't seem to believe in IQ or general intelligence in humans let alone machines.
I think this is a very dangerous world-view to have in the face of accelerating technological progress. Stuart LaForge From spike at rainier66.com Fri Jun 14 19:33:06 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Jun 2019 12:33:06 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> Message-ID: <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Stuart great post. I would take partial exception to the following comment: >...On the other hand, you have an emerging market for human-like androids like Sophia and the all the Japanese fembots. Those will need some semblance of general intelligence to be adequate companions for the elderly and what not. ... Stuart LaForge _______________________________________________ Ja, Stuart, I am compelled to counter-suggest: not necessarily. We know that an old lame technology from the 1970s (Eliza) does kinda sorta mimic a human. It was intentionally meant to parody psychologists, but it is an entertaining toy even today. Eliza is not intelligent at all. An AI-like script can be made using lookup tables, which can be derived from the conversations of families with AD patients, and create a workable mechanical companion. With sufficiently impaired patients, even a television show, such as the Waltons, can serve as a much-needed and much-appreciated proxy for human companionship. General intelligence would be good, but not necessary, for creating a mechanical companion. I would like to see humanity not wait around for general AI (which might be a long ways off) when current tech could make something for the impaired elderly which would be better than nothing. 
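A lookup-table responder of the sort described above can be sketched in a few lines of Python. Everything in this sketch is invented for illustration: the patterns and canned replies are placeholders, where a real build would distill its table from recorded conversations with patients and their families, as suggested above.

```python
import random
import re

# Invented placeholder patterns; a real table would be distilled from
# recorded conversations with patients and their families.
RESPONSE_TABLE = [
    (r"\b(remember|used to|back then)\b",
     ["Tell me more about that.", "What was that like?"]),
    (r"\b(family|son|daughter|grandchild)\b",
     ["Family is so important.", "How are they doing these days?"]),
]

DEFAULT_REPLIES = ["I see.", "Go on.", "Mm-hmm."]

def reply(utterance: str) -> str:
    """Return a canned reply for the first pattern that matches."""
    text = utterance.lower()
    for pattern, replies in RESPONSE_TABLE:
        if re.search(pattern, text):
            return random.choice(replies)
    return random.choice(DEFAULT_REPLIES)
```

Eliza worked on much the same principle: no understanding at all, just pattern-matched reflection, which for a listening companion may be enough.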
This notion goes to noble goals beyond merely making buttloads of money (assuming there are goals more noble than that.) We could help a lot of elderly people ease their pain of the declining years. And simultaneously achieve the other of course. spike From pharos at gmail.com Fri Jun 14 20:07:07 2019 From: pharos at gmail.com (BillK) Date: Fri, 14 Jun 2019 21:07:07 +0100 Subject: [ExI] stealth singularity In-Reply-To: <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> Message-ID: On Fri, 14 Jun 2019 at 20:36, wrote: > > Ja, Stuart, I am compelled to counter-suggest: not necessarily. We know > that an old lame technology from the 1970s (Eliza) does kinda sorta mimic a > human. It was intentionally meant to parody psychologists, but it is an > entertaining toy even today. Eliza is not intelligent at all. > > An AI-like script can be made using lookup tables, which can be derived from > the conversations of families with AD patients, and create a workable > mechanical companion. With sufficiently impaired patients, even a > television show, such as the Waltons, can serve as a much-needed and > much-appreciated proxy for human companionship. > > General intelligence would be good, but not necessary, for creating a > mechanical companion. I would like to see humanity not wait around for > general AI (which might be a long ways off) when current tech could make > something for the impaired elderly which would be better than nothing. > > This notion goes to noble goals beyond merely making buttloads of money > (assuming there are goals more noble than that.) We could help a lot of > elderly people ease their pain of the declining years. And simultaneously > achieve the other of course. > > spike > _______________________________________________ Very much agree, Spike. 
There is a requirement for a wide range of robot 'companions' for humans, depending upon their level of disability and preferences. We already have robot seals and puppies that comfort the severely disabled. Some also remind users to take medicine and call for help in emergencies. More advanced robots will provide nursing and care services. Even healthy humans might like AI companions for housekeeping, entertainment, sex, etc. It is not a great leap to see a future where humans are lovingly cared for by AI intelligences throughout their life, from childhood to old age, while advanced AI progresses far beyond human capabilities into realms unknown. BillK From johnkclark at gmail.com Fri Jun 14 20:53:10 2019 From: johnkclark at gmail.com (John Clark) Date: Fri, 14 Jun 2019 16:53:10 -0400 Subject: [ExI] The descent of religion In-Reply-To: References: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> Message-ID: "Man is certainly stark mad. He can't make a worm but he makes gods by the dozens." - Montaigne John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Jun 14 22:31:52 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Jun 2019 15:31:52 -0700 Subject: [ExI] stealth singularity In-Reply-To: References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> Message-ID: <001c01d52300$f765e540$e631afc0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK > >> General intelligence would be good, but not necessary, for creating a > mechanical companion... >>.... And > simultaneously achieve the other of course. spike > _______________________________________________ Very much agree, Spike. There is a requirement for a wide range of robot 'companions' for humans, depending upon their level of disability and preferences.
We already have robot seals and puppies that comfort the severely disabled. Some also remind about taking medicine and call for help in emergencies. More advanced robots will provide nursing and care services. Even healthy humans might like AI companions for housekeeping, entertainment, sex, etc. It is not a great leap to see a future where humans are lovingly cared for by AI intelligences throughout their life, from childhood to old age. While advanced AI progresses far beyond human capabilities into realms unknown. BillK _______________________________________________ BillK, this one really has my attention because we have the technology to do it now. It would be pricy, but plenty of AD patients have money coming out the wazoo. They don't wear out (assuming they don't have THAT capability (but that's a fun idea for an extension.)) So we can imagine building something that is good enough (even if not great) for a companion using technology that has already been demonstrated, but do note this is an important point: The important point goes into a new paragraph to call attention to it: If... we are specifically doing companions for AD patients, it solves one of the really hard problems, which is bipedal mobility. We dooooon't need to do that. We can have our listen-bot ride in a wheelchair, which would work great for so many reasons: it makes them easy to move for starters. We can use the current speech-recognition tech, the existing speech synthesis tech, the existing face actuation tech, a standard wheelchair, and we are most of the way there. I think we could do something like this for twenty to thirty thousand, and after we start making a lot of them, perhaps get them down to 4 digits. Once they are in 4 digit territory, the potential sales, oh mercy, it makes me ache just thinking about it. 
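Pulling those already-demonstrated pieces together is mostly plumbing. Below is a rough event-loop sketch of that composition; every interface in it (StubRecognizer, StubSpeaker, the actuate callback) is an invented stand-in, not a real speech-recognition, speech-synthesis, or servo API:

```python
from dataclasses import dataclass, field

@dataclass
class StubRecognizer:
    """Stand-in for a real speech-to-text engine: replays canned text."""
    transcripts: list
    def listen(self):
        # Return the next "heard" utterance, or None when the room is quiet.
        return self.transcripts.pop(0) if self.transcripts else None

@dataclass
class StubSpeaker:
    """Stand-in for a real text-to-speech engine: records what was 'said'."""
    spoken: list = field(default_factory=list)
    def say(self, text):
        self.spoken.append(text)

def respond(heard: str) -> str:
    # Placeholder for lookup-table response logic.
    return "Tell me more about that."

def companion_loop(recognizer, speaker, actuate=lambda text: None):
    """Listen, pick a reply, move the face, speak. Repeat until silence."""
    while (heard := recognizer.listen()) is not None:
        response = respond(heard)
        actuate(response)   # e.g. bob the jaw while the reply plays
        speaker.say(response)

rec = StubRecognizer(["I remember the old farm."])
spk = StubSpeaker()
companion_loop(rec, spk)
print(spk.spoken)  # ['Tell me more about that.']
```

Swapping the stubs for off-the-shelf speech recognition and synthesis, plus a handful of actuator drivers, is the whole product sketched in the message above.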
spike From col.hales at gmail.com Fri Jun 14 23:24:11 2019 From: col.hales at gmail.com (Colin Hales) Date: Sat, 15 Jun 2019 09:24:11 +1000 Subject: [ExI] The descent of religion In-Reply-To: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> References: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> Message-ID: Descent in more ways than one! I like the way atheism is accurately depicted. I.e. that it is correctly illustrated by its absence from the diagram. This tree represents an irony of the one 'evolutionary' line where human extancy may depend on its extinction. If there is a tree of life... Then this is the tree of death! But then I may be somewhat jaded on the matter. I'd be interested to see which of them directly and indirectly (e.g. Hitler and Stalin and the like, that were religions dressed in a cloak of denial at times) resulted in what number of deaths and suffering. My guess: the winner (minimal death and suffering) might be Jainism. ?? Colin On Sat., 15 Jun. 2019, 3:25 am Stuart LaForge, wrote: > > Its ironic that while many religions deny the existence of evolution, > evolution itself not only acknowledges the existence of religion, but > also traces its origins and history. > > There is a great "family tree" of religions on this site: > https://ultraculture.org/blog/2015/11/30/map-world-religions/ > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Fri Jun 14 23:46:56 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 14 Jun 2019 16:46:56 -0700 Subject: [ExI] stealth singularity References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> Message-ID: <001801d5230b$7387cdf0$5a9769d0$@rainier66.com> -----Original Message----- From: spike at rainier66.com ... _______________________________________________ >...BillK, this one really has my attention because we have the technology to do it now. It would be pricy, but plenty of AD patients have money coming out the wazoo. ... perhaps get them down to 4 digits. Once they are in 4 digit territory, the potential sales, oh mercy, it makes me ache just thinking about it. spike Then it occurred to the hapless engineer that he was overthinking the problem by an order of magnitude. If we go with the notion of a wheelchair manikin for the elderly AD patients, many of them are sight impaired. This means we don't need to be very sophisticated with the facial-expression actuators at all. We need a two-axis actuator to bend and rotate the neck, a jaw actuator, an eyebrow actuator and not a heck of a lot more. From the neck down it is easy (assuming it doesn't have THAT capability.) I think we could come up with an economy model that is little more than an Alexa with about 4 or 5 actuators. No need to have moving arms or hands. This won't be hard to do at all. We could get waaaay down into the low 4 digit numbers, perhaps even eventually 3 digit numbers, if we accept a single actuator (for the jaw) and don't worry about it if we cut corners with the hair and skin.
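One cheap way to drive that lone jaw actuator is to slave it to the loudness envelope of whatever audio the unit is playing. A sketch of that envelope-to-angle mapping follows; the frame size, sample rate, and 25-degree swing are illustrative numbers, not measured ones:

```python
import math

def jaw_angles(samples, frame=160, max_angle=25.0):
    """Convert an audio waveform into per-frame jaw-servo angles (degrees)
    by scaling each frame's RMS loudness against the clip's peak."""
    peak = max((abs(s) for s in samples), default=1.0) or 1.0
    angles = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        angles.append(max_angle * rms / peak)
    return angles

# Synthetic test clip: a 220 Hz burst followed by silence (8 kHz rate).
clip = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(800)] + [0.0] * 800
angles = jaw_angles(clip)
# The jaw opens during the burst and closes fully during the silence.
```

Feeding each angle to a hobby servo at the frame rate (20 ms per 160 samples at 8 kHz) gives a plausible talking-mouth effect with a single actuator and no lip-sync model at all.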
spike From foozler83 at gmail.com Sat Jun 15 01:00:33 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 14 Jun 2019 20:00:33 -0500 Subject: [ExI] The descent of religion In-Reply-To: References: <20190614102254.Horde.4FVYl11wVOfoBsyP9wK94FQ@secure199.inmotionhosting.com> Message-ID: colin wrote - Descent in more ways than one! Some might say that humans descended in the sense you are using. But I would say we ascended - we ascended and descended simultaneously. No wonder we are so confused. bill w On Fri, Jun 14, 2019 at 6:27 PM Colin Hales wrote: > Descent in more ways than one! I like the way atheism is accurately > depicted. I.e. that it is correctly illustrated by its absence from the > diagram. This tree represents an irony of the one 'evolutionary' line where > human extancy may depend on its extinction. If there is a tree of life... > Then this is the tree of death! But then I may be somewhat jaded on the > matter. > > I'd be interested to see which of them directly and indirectly (e.g. > Hitler and Stalin and the like, that were religions dressed in a cloak of > denial at times) resulted in what number of deaths and suffering. > > My guess: the winner (minimal death and suffering) might be Jainism. > ?? > > Colin > > > > > > On Sat., 15 Jun. 2019, 3:25 am Stuart LaForge, wrote: > >> >> Its ironic that while many religions deny the existence of evolution, >> evolution itself not only acknowledges the existence of religion, but >> also traces its origins and history.
>> >> There is a great "family tree" of religions on this site: >> https://ultraculture.org/blog/2015/11/30/map-world-religions/ >> >> Stuart LaForge >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sat Jun 15 18:06:00 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 15 Jun 2019 11:06:00 -0700 Subject: [ExI] stealth singularity In-Reply-To: <2100497002.1463775.1560620342872@mail.yahoo.com> References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> <001801d5230b$7387cdf0$5a9769d0$@rainier66.com> <2100497002.1463775.1560620342872@mail.yahoo.com> Message-ID: <20190615110600.Horde.SSLjGylT8x3tCoqThRWL6zl@secure199.inmotionhosting.com> Quoting Spike: > If we go with the notion of a wheelchair manikin for the elderly AD > patients, many of them are sight impaired.? This means we don't need to be > very sophisticated? with the facial-expression actuators at all.? We need a > two-axis to bend and rotate the neck, a jaw actuator, an eyebrow actuator > and not a heck of a lot more.? From the neck down it is easy (assuming it > doesn't have THAT capability.) There are two alternate ways to go with this. If you are going to try to mimic a human being, it should be as realistic as possible. If you simply want a robot companion, it could be a walking talking teddy bear or other obvious robot. But if you try for realism and fall short, you wind up in the uncanny valley and could easily creep grandma out. 
> I think we could come up with an economy model that is little more than an > Alexa with about 4 or 5 actuators.? No need to have moving arms or hands. > This won't be hard to do at all.? We could get waaaay down into the low 4 > digit numbers, perhaps even eventually 3 digit numbers, if we accept a > single actuator (for the jaw) and don't worry about it if we cut corners > with the hair and skin. I would not skimp on hair and skin. In fetal development, the sense of touch is one of the first senses to come online and in aging, I suspect it may be one of the last to deteriorate. Remember the bible story of Jacob, Esau, and the furry gloves? Also there was a series of experiments by primatologist Harry Harlow on baby monkeys that suggest that touch is very important to a primate's emotional well-being. https://www.psychologytoday.com/us/blog/power-play/201806/three-lessons-wire-mother Stuart LaForge From spike at rainier66.com Sat Jun 15 19:23:10 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 15 Jun 2019 12:23:10 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190615110600.Horde.SSLjGylT8x3tCoqThRWL6zl@secure199.inmotionhosting.com> References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> <001801d5230b$7387cdf0$5a9769d0$@rainier66.com> <2100497002.1463775.1560620342872@mail.yahoo.com> <20190615110600.Horde.SSLjGylT8x3tCoqThRWL6zl@secure199.inmotionhosting.com> Message-ID: <01ed01d523af$c5535740$4ffa05c0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Subject: Re: [ExI] stealth singularity Quoting Spike: > If we go with the notion of a wheelchair manikin for the elderly AD > patients, many of them are sight impaired. This means we don't need > to be very sophisticated with the facial-expression actuators at all. 
> We need a two-axis to bend and rotate the neck, a jaw actuator, an > eyebrow actuator and not a heck of a lot more. From the neck down it > is easy (assuming it doesn't have THAT capability.) There are two alternate ways to go with this. If you are going to try to mimic a human being, it should be as realistic as possible. If you simply want a robot companion, it could be a walking talking teddy bear or other obvious robot. But if you try for realism and fall short, you wind up in the uncanny valley and could easily creep grandma out... Stuart LaForge _______________________________________________ Oh good point Stuart and a path to success as a bonus. Plenty of the current very old might remember the old Howdy Doody show. We could easily rig 1) a Howdy Doody ventriloquist dummy (350 shekels) 2) perhaps six actuators (two rotational, four linear) (about 300 doubloons): https://www.walmart.com/ip/Super-Deluxe-Upgrade-Howdy-Doody-Ventriloquist-Dummy-Bonus-Bundle/995011573?wmlspartner=wlpa&selectedSellerId=17533&adid=22222222227260395305&wl0=&wl1=g&wl2=c&wl3=309737582794&wl4=pla-558505699560&wl5=9032144&wl6=&wl7=&wl8=&wl9=pla&wl10=125215090&wl11=online&wl12=995011573&wl13=&veh=sem&gclid=EAIaIQobChMI8efnsJXs4gIVkspkCh2PQAK3EAQYBSABEgIOWfD_BwE 3) a light-duty wheelchair (70 clams): https://www.karmanhealthcare.com/product/t-2000/?gclid=EAIaIQobChMI9NSPoJbs4gIVCdtkCh1qmgETEAQYAiABEgL4S_D_BwE 4) an Alexa microphone/speaker arrangement (75 simoleons): https://www.amazon.com/all-new-amazon-echo-speaker-with-wifi-alexa-sandstone/dp/B06XXM5BPP/ref=asc_df_B06XXM5BPP/?tag=hyprod-20&linkCode=df0&hvadid=198093960169&hvpos=1o1&hvnetw=g&hvrand=6608912565318232722&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-368322185710&psc=1 5) Low-level ChromeBook-ish processor so that it can be entirely self-contained or Wifi-enabled (100 spondulicks):
https://www.amazon.com/Samsung-Chromebook-Wi-Fi-11-6-Inch-Refurbished/dp/B00M9K7L8S/ref=asc_df_B00M9K7L8S/?tag=hyprod-20&linkCode=df0&hvadid=309776868400&hvpos=1o5&hvnetw=g&hvrand=9244187651952355625&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-525273343191&psc=1 We are safely on the far side of the uncanny valley and still under 900 bucks worth of hardware. Now if we can create software that doesn't get too crazy sophisticated and get that (possibly tricky) six-actuator assembly labor cost down (to an hour per unit?) we can perhaps still hit the magic 3 digit numbers, assuming high volume purchases on all components. Alternatively, go with a four-actuator Howdy with only jaw, eyes in two axes and eyebrows, shoot for a two hundred guinea dummy. I would think the total cost to manufacture needs to be well under 1k, so that the retail price is in the 1400-1500 range. Stuart you are a business guy, ja? Does this look like a feasible opportunity? It kinda does to me too. spike From interzone at gmail.com Sat Jun 15 20:57:50 2019 From: interzone at gmail.com (Dylan Distasio) Date: Sat, 15 Jun 2019 16:57:50 -0400 Subject: [ExI] stealth singularity In-Reply-To: <01ed01d523af$c5535740$4ffa05c0$@rainier66.com> References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> <001801d5230b$7387cdf0$5a9769d0$@rainier66.com> <2100497002.1463775.1560620342872@mail.yahoo.com> <20190615110600.Horde.SSLjGylT8x3tCoqThRWL6zl@secure199.inmotionhosting.com> <01ed01d523af$c5535740$4ffa05c0$@rainier66.com> Message-ID: Replace the Raspberry Pi with an Nvidia Jetson Nano for a bit more at $99 with a ton more bang for the buck. It's equipped to run multiple neural networks in parallel, one for computer vision, one for movement, etc.
https://developer.nvidia.com/embedded/jetson-nano-developer-kit On Sat, Jun 15, 2019, 3:25 PM wrote: > > > -----Original Message----- > From: extropy-chat On Behalf Of > Stuart LaForge > Subject: Re: [ExI] stealth singularity > > > Quoting Spike: > > > If we go with the notion of a wheelchair manikin for the elderly AD > > patients, many of them are sight impaired. This means we don't need > > to be very sophisticated with the facial-expression actuators at all. > > We need a two-axis to bend and rotate the neck, a jaw actuator, an > > eyebrow actuator and not a heck of a lot more. From the neck down it > > is easy (assuming it doesn't have THAT capability.) > > There are two alternate ways to go with this. If you are going to try to > mimic a human being, it should be as realistic as possible. If you simply > want a robot companion, it could be a walking talking teddy bear or other > obvious robot. But if you try for realism and fall short, you wind up in > the uncanny valley and could easily creep grandma out... > > Stuart LaForge > > > > _______________________________________________ > > > Oh good point Stuart and a path to success as a bonus. > > Plenty of the current very old might remember the old Howdy Doody show. 
> We could easily rig > > 1) a Howdy Doody ventriloquist dummy (350 sheckels) > 2) perhaps six of actuators (two rotational, four linear) (about 300 > doubloons): > > > https://www.walmart.com/ip/Super-Deluxe-Upgrade-Howdy-Doody-Ventriloquist-Dummy-Bonus-Bundle/995011573?wmlspartner=wlpa&selectedSellerId=17533&adid=22222222227260395305&wl0=&wl1=g&wl2=c&wl3=309737582794&wl4=pla-558505699560&wl5=9032144&wl6=&wl7=&wl8=&wl9=pla&wl10=125215090&wl11=online&wl12=995011573&wl13=&veh=sem&gclid=EAIaIQobChMI8efnsJXs4gIVkspkCh2PQAK3EAQYBSABEgIOWfD_BwE > > 3) a light-duty wheelchair (70 clams): > > > https://www.karmanhealthcare.com/product/t-2000/?gclid=EAIaIQobChMI9NSPoJbs4gIVCdtkCh1qmgETEAQYAiABEgL4S_D_BwE > > 4) an Alexa microphone/speaker arrangement (75 simoleons): > > > https://www.amazon.com/all-new-amazon-echo-speaker-with-wifi-alexa-sandstone/dp/B06XXM5BPP/ref=asc_df_B06XXM5BPP/?tag=hyprod-20&linkCode=df0&hvadid=198093960169&hvpos=1o1&hvnetw=g&hvrand=6608912565318232722&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-368322185710&psc=1 > > > 5) Low-level ChromeBook-ish processor so that it can be entirely > self-contained or Wifi-enabled (100 spondulicks): > > > > https://www.amazon.com/Samsung-Chromebook-Wi-Fi-11-6-Inch-Refurbished/dp/B00M9K7L8S/ref=asc_df_B00M9K7L8S/?tag=hyprod-20&linkCode=df0&hvadid=309776868400&hvpos=1o5&hvnetw=g&hvrand=9244187651952355625&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-525273343191&psc=1 > > > We are safely on the other side of uncanny valley from us and still under > 900 bucks worth of hardware. Now if we can create software that doesn't > get too crazy sophisticated and get that (possibly tricky) six actuators > assembly labor cost (down to an hour per unit?) we can perhaps still hit > the magic 3 digit numbers, assuming high volume purchases on all > components. 
Alternative, go with a four-actuator Howdy with only jaw, eyes > in two axes and eyebrows, shoot for a two hundred guinea dummy. > > I would think the total cost to manufacture needs to be well under 1k, so > that the retail price is 1400-1500 range price point. > > Stuart you are a business guy, ja? Does this look like a feasible > opportunity? It kinda does to me too. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From interzone at gmail.com Sat Jun 15 20:59:04 2019 From: interzone at gmail.com (Dylan Distasio) Date: Sat, 15 Jun 2019 16:59:04 -0400 Subject: [ExI] stealth singularity In-Reply-To: References: <20190614120148.Horde.zcPv-sJ1DLJuyKcwj2JksoK@secure199.inmotionhosting.com> <004001d522e7$fde4aeb0$f9ae0c10$@rainier66.com> <001801d5230b$7387cdf0$5a9769d0$@rainier66.com> <2100497002.1463775.1560620342872@mail.yahoo.com> <20190615110600.Horde.SSLjGylT8x3tCoqThRWL6zl@secure199.inmotionhosting.com> <01ed01d523af$c5535740$4ffa05c0$@rainier66.com> Message-ID: Sorry I meant the Chrome book. On Sat, Jun 15, 2019, 4:57 PM Dylan Distasio wrote: > Replace the Raspberry Pi with a Nvidia Jetson Nano for a bit more at $99 with > a ton more bang for buck. It's equipped to run multiple neural networks in > parallel, one for computer vision, one for movement, etc. > > https://developer.nvidia.com/embedded/jetson-nano-developer-kit > > On Sat, Jun 15, 2019, 3:25 PM wrote: > >> >> >> -----Original Message----- >> From: extropy-chat On Behalf Of >> Stuart LaForge >> Subject: Re: [ExI] stealth singularity >> >> >> Quoting Spike: >> >> > If we go with the notion of a wheelchair manikin for the elderly AD >> > patients, many of them are sight impaired. This means we don't need >> > to be very sophisticated with the facial-expression actuators at all. 
>> > We need a two-axis to bend and rotate the neck, a jaw actuator, an >> > eyebrow actuator and not a heck of a lot more. From the neck down it >> > is easy (assuming it doesn't have THAT capability.) >> >> There are two alternate ways to go with this. If you are going to try to >> mimic a human being, it should be as realistic as possible. If you simply >> want a robot companion, it could be a walking talking teddy bear or other >> obvious robot. But if you try for realism and fall short, you wind up in >> the uncanny valley and could easily creep grandma out... >> >> Stuart LaForge >> >> >> >> _______________________________________________ >> >> >> Oh good point Stuart and a path to success as a bonus. >> >> Plenty of the current very old might remember the old Howdy Doody show. >> We could easily rig >> >> 1) a Howdy Doody ventriloquist dummy (350 sheckels) >> 2) perhaps six of actuators (two rotational, four linear) (about 300 >> doubloons): >> >> >> https://www.walmart.com/ip/Super-Deluxe-Upgrade-Howdy-Doody-Ventriloquist-Dummy-Bonus-Bundle/995011573?wmlspartner=wlpa&selectedSellerId=17533&adid=22222222227260395305&wl0=&wl1=g&wl2=c&wl3=309737582794&wl4=pla-558505699560&wl5=9032144&wl6=&wl7=&wl8=&wl9=pla&wl10=125215090&wl11=online&wl12=995011573&wl13=&veh=sem&gclid=EAIaIQobChMI8efnsJXs4gIVkspkCh2PQAK3EAQYBSABEgIOWfD_BwE >> >> 3) a light-duty wheelchair (70 clams): >> >> >> https://www.karmanhealthcare.com/product/t-2000/?gclid=EAIaIQobChMI9NSPoJbs4gIVCdtkCh1qmgETEAQYAiABEgL4S_D_BwE >> >> 4) an Alexa microphone/speaker arrangement (75 simoleons): >> >> >> https://www.amazon.com/all-new-amazon-echo-speaker-with-wifi-alexa-sandstone/dp/B06XXM5BPP/ref=asc_df_B06XXM5BPP/?tag=hyprod-20&linkCode=df0&hvadid=198093960169&hvpos=1o1&hvnetw=g&hvrand=6608912565318232722&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-368322185710&psc=1 >> >> >> 5) Low-level ChromeBook-ish processor so that it can be entirely >> self-contained or 
Wifi-enabled (100 spondulicks): >> >> >> >> https://www.amazon.com/Samsung-Chromebook-Wi-Fi-11-6-Inch-Refurbished/dp/B00M9K7L8S/ref=asc_df_B00M9K7L8S/?tag=hyprod-20&linkCode=df0&hvadid=309776868400&hvpos=1o5&hvnetw=g&hvrand=9244187651952355625&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-525273343191&psc=1 >> >> >> We are safely on the other side of uncanny valley from us and still under >> 900 bucks worth of hardware. Now if we can create software that doesn't >> get too crazy sophisticated and get that (possibly tricky) six actuators >> assembly labor cost (down to an hour per unit?) we can perhaps still hit >> the magic 3 digit numbers, assuming high volume purchases on all >> components. Alternative, go with a four-actuator Howdy with only jaw, eyes >> in two axes and eyebrows, shoot for a two hundred guinea dummy. >> >> I would think the total cost to manufacture needs to be well under 1k, so >> that the retail price is 1400-1500 range price point. >> >> Stuart you are a business guy, ja? Does this look like a feasible >> opportunity? It kinda does to me too. >> >> spike >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Jun 16 06:47:27 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 15 Jun 2019 23:47:27 -0700 Subject: [ExI] stealth singularity In-Reply-To: Message-ID: <20190615234727.Horde.4SIi1BS0Xc_klpCMaoObYRx@secure199.inmotionhosting.com> Quoting Dylan Distasio: > Replace the Raspberry Pi with a Nvidia Jetson Nano for a bit more at $99 with > a ton more bang for buck. It's equipped to run multiple neural networks in > parallel, one for computer vision, one for movement, etc. > > https://developer.nvidia.com/embedded/jetson-nano-developer-kit Interesting. 
Thanks for the link, Dylan. I actually was going to suggest swapping the Chromebook for a Raspberry Pi, but the Jetson Nano seems more appropriate. Stuart LaForge From avant at sollegro.com Sun Jun 16 15:02:06 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 16 Jun 2019 08:02:06 -0700 Subject: [ExI] stealth singularity Message-ID: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> Quoting Spike: > ----- Forwarded Message ----- From: "spike at rainier66.com" > To: 'ExI chat list' > Cc: "spike at rainier66.com" > Sent: Saturday, June 15, 2019, 12:23:46 PM > PDTSubject: Re: [ExI] stealth singularity > > > -----Original Message----- > From: extropy-chat On > Behalf Of Stuart LaForge > Subject: Re: [ExI] stealth singularity > > > Quoting Spike: > >> If we go with the notion of a wheelchair manikin for the elderly AD >> patients, many of them are sight impaired.? This means we don't need >> to be very sophisticated? with the facial-expression actuators at all.? >> We need a two-axis to bend and rotate the neck, a jaw actuator, an >> eyebrow actuator and not a heck of a lot more.? From the neck down it >> is easy (assuming it doesn't have THAT capability.) > > There are two alternate ways to go with this. If you are going to > try to mimic a human being, it should be as realistic as possible. > If you simply want a robot companion, it could be a walking talking > teddy bear or other obvious robot. But if you try for realism and > fall short, you wind up in the uncanny valley and could easily creep > grandma out... > > Stuart LaForge > > > > _______________________________________________ > > > Oh good point Stuart and a path to success as a bonus. > > Plenty of the current very old might remember the old Howdy Doody > show.? We could easily rig > > 1)? a Howdy Doody ventriloquist dummy (350 sheckels) > 2)? perhaps six of actuators (two rotational, four linear) (about > 300 doubloons): [. . .] 
> We are safely on the other side of the uncanny valley and still > under 900 bucks worth of hardware. Now if we can create software > that doesn't get too crazy sophisticated and get that (possibly > tricky) six-actuator assembly labor cost (down to an hour per > unit?) we can perhaps still hit the magic 3-digit numbers, assuming > high-volume purchases on all components. Alternatively, go with a > four-actuator Howdy with only jaw, eyes in two axes and eyebrows, > shoot for a two hundred guinea dummy. Do we want a two-foot dummy in a full-sized wheelchair for comic effect? > I would think the total cost to manufacture needs to be well under > 1k, so that the retail price is in the 1400-1500 range. > > Stuart, you are a business guy, ja? Does this look like a feasible > opportunity? It kinda does to me too. Ja. I even have some potentially useful parts from a failed drone start-up like lithium batteries and what not. Count me in. :-) Stuart LaForge From spike at rainier66.com Sun Jun 16 15:06:16 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 16 Jun 2019 08:06:16 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190615234727.Horde.4SIi1BS0Xc_klpCMaoObYRx@secure199.inmotionhosting.com> References: <20190615234727.Horde.4SIi1BS0Xc_klpCMaoObYRx@secure199.inmotionhosting.com> Message-ID: <012e01d52455$0beab950$23c02bf0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Subject: Re: [ExI] stealth singularity Quoting Dylan Distasio: > Replace the Raspberry Pi with a Nvidia Jetson Nano for a bit more at > $99 with a ton more bang for the buck. It's equipped to run multiple > neural networks in parallel, one for computer vision, one for movement, etc. > > https://developer.nvidia.com/embedded/jetson-nano-developer-kit Interesting. Thanks for the link, Dylan. I actually was going to suggest swapping the Chromebook for a Raspberry Pi, but the Jetson Nano seems more appropriate. 
Stuart LaForge _______________________________________________ Another compelling approach is to dispense with the human-likeness notion and create a non-human-form robot, such as an R2D2. That could be made cheaply and have a single degree of freedom as the original R2 had. Dispense with the forward motion part (it needs to be rolled around on a wheelie cart by human assistance.) Plenty of the current elderly population were in their prime when the original Star Wars came out 43 years ago; R2 was a universal good guy. It could be an updated version with a human voice rather than the whistles and honks that only C3PO could understand. That approach avoids the ethical concern that caregivers are automating emotional support by fooling impaired patients into thinking their mechanical companion is human. spike From spike at rainier66.com Sun Jun 16 15:21:35 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 16 Jun 2019 08:21:35 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> Message-ID: <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Subject: Re: [ExI] stealth singularity Quoting Spike: > ----- Forwarded Message ----- From: "spike at rainier66.com" > To: 'ExI chat list' >>...Alternatively, go with a > four-actuator Howdy with only jaw, eyes in two axes and eyebrows, > shoot for a two hundred guinea dummy. >...Do we want a two-foot dummy in a full-sized wheelchair for comic effect? Well, ja, not for comic effect really, but to emphasize this device is a toy, not a fake human companion. There are small wheelchairs, or for that matter, just use a wheeled child's toy for mobility. The thing shouldn't weigh much or cost much. >...Ja. 
I even have some potentially useful parts from a failed drone start-up like lithium batteries and what not. Count me in. :-) Stuart LaForge _______________________________________________ Rechargeable power source, ja good point thanks. I had forgotten that part, dang. If we don't get carried away with the actuators, it shouldn't take much power. If we keep going with this whole concept, we get down to where I think it really eventually goes: immersive reality goggles. That approach standardizes a lotta lotta, and converts the exercise to mostly a software task. As an intermediate step, we could use a standard LCD with eye motion or head motion detection, to make it respond to the attention of the user. I have a wide-screen display which might be good for something like that. These things aren't even very expensive anymore. We might be able to adapt Second Life for this purpose somehow. spike From avant at sollegro.com Sun Jun 16 15:35:55 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 16 Jun 2019 08:35:55 -0700 Subject: [ExI] stealth singularity In-Reply-To: <2102237873.1610211.1560698883010@mail.yahoo.com> References: <20190615234727.Horde.4SIi1BS0Xc_klpCMaoObYRx@secure199.inmotionhosting.com> <012e01d52455$0beab950$23c02bf0$@rainier66.com> <2102237873.1610211.1560698883010@mail.yahoo.com> Message-ID: <20190616083555.Horde.-fYqc8bGe9du4KsKVY94lJG@secure199.inmotionhosting.com> Quoting Spike: > > Another compelling approach is to dispense with the human-likeness notion > and create a non-human-form robot, such as an R2D2. That could be made > cheaply and have a single degree of freedom as the original R2 had. > Dispense with the forward motion part (it needs to be rolled around on a > wheelie cart by human assistance.) > > Plenty of the current elderly population were in their prime when the > original Star Wars came out 43 years ago; R2 was a universal good guy. 
It > could be an updated version with a human voice rather than the whistles and > honks that only C3PO could understand. > > That approach avoids the ethical concern that caregivers are automating > emotional support by fooling impaired patients into thinking their > mechanical companion is human. I don't know if Howdy Doody is in the public domain after all this time, but R2D2 currently belongs to Disney and they will sue our pants off if we use the likeness of R2D2 for our product. And if we try to officially license R2, then it will probably blow our budget and price point. Here are some similar products on the market: https://medicalfuturist.com/the-top-12-social-companion-robots Stuart LaForge From avant at sollegro.com Sun Jun 16 15:55:27 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 16 Jun 2019 08:55:27 -0700 Subject: [ExI] stealth singularity In-Reply-To: <885772093.1593925.1560698901200@mail.yahoo.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> Message-ID: <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> Quoting Spike: > _______________________________________________ > > Rechargeable power source, ja good point thanks. I had forgotten > that part, dang. If we don't get carried away with the actuators, > it shouldn't take much power. > > If we keep going with this whole concept, we get down to where I > think it really eventually goes: immersive reality goggles. That > approach standardizes a lotta lotta, and converts the exercise to > mostly a software task. As an intermediate step, we could use a > standard LCD with eye motion or head motion detection, to make it > respond to the attention of the user. > > I have a wide-screen display which might be good for something like > that. These things aren't even very expensive anymore. 
We might be > able to adapt Second Life for this purpose somehow. Actuator servos don't take that much juice. It will be the drive motor that will be the big drain. But our robot will not need to have a long range nor move very fast. I am not so sure about the virtual companion idea. I read that in Iraq, American troops developed emotional ties to bomb disposal robots to the point of feeling grief when the robots were lost in action, even though they did not look remotely human or even cute. https://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/ But I have not heard of anybody developing an emotional tie to a virtual assistant outside of the movie "Her". Anybody else know how likely an old person is to bond with a "virtual" character? Stuart LaForge From sen.otaku at gmail.com Sun Jun 16 16:02:44 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Sun, 16 Jun 2019 11:02:44 -0500 Subject: [ExI] Parasocial relationships (was: stealth singularity) Message-ID: <8A34A895-5FA3-495E-9B90-8B82A428C16C@gmail.com> How likely is someone to bond with a virtual character? Have you seen how people ship fictional characters all the time, write fan fiction, salivate about character skins, spend tens of thousands of dollars on character-branded merchandise? Pokémon is like 1/8th of Japan's GDP. A parasocial relationship doesn't have to be with a "real" person. Look at waifu culture. SR Ballard > On Jun 16, 2019, at 10:55 AM, Stuart LaForge wrote: > > > Quoting Spike: > > >> _______________________________________________ >> >> Rechargeable power source, ja good point thanks. I had forgotten that part, dang. If we don't get carried away with the actuators, it shouldn't take much power. >> >> If we keep going with this whole concept, we get down to where I think it really eventually goes: immersive reality goggles. That approach standardizes a lotta lotta, and converts the exercise to mostly a software task. 
As an intermediate step, we could use a standard LCD with eye motion or head motion detection, to make it respond to the attention of the user. >> >> I have a wide-screen display which might be good for something like that. These things aren't even very expensive anymore. We might be able to adapt Second Life for this purpose somehow. > > Actuator servos don't take that much juice. It will be the drive motor that will be the big drain. But our robot will not need to have a long range nor move very fast. > > I am not so sure about the virtual companion idea. I read that in Iraq, American troops developed emotional ties to bomb disposal robots to the point of feeling grief when the robots were lost in action, even though they did not look remotely human or even cute. > > https://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/ > > But I have not heard of anybody developing an emotional tie to a virtual assistant outside of the movie "Her". Anybody else know how likely an old person is to bond with a "virtual" character? 
> > Stuart LaForge From spike at rainier66.com Sun Jun 16 16:12:14 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 16 Jun 2019 09:12:14 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190616083555.Horde.-fYqc8bGe9du4KsKVY94lJG@secure199.inmotionhosting.com> References: <20190615234727.Horde.4SIi1BS0Xc_klpCMaoObYRx@secure199.inmotionhosting.com> <012e01d52455$0beab950$23c02bf0$@rainier66.com> <2102237873.1610211.1560698883010@mail.yahoo.com> <20190616083555.Horde.-fYqc8bGe9du4KsKVY94lJG@secure199.inmotionhosting.com> Message-ID: <013a01d5245e$4326f510$c974df30$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Sent: Sunday, June 16, 2019 8:36 AM To: ExI Chat Subject: Re: [ExI] stealth singularity Quoting Spike: > > Another compelling approach is to dispense with the human-likeness > notion and create a non-human-form robot, such as an R2D2. That could > be made cheaply and have a single degree of freedom as the original R2 had. > Dispense with the forward motion part (it needs to be rolled around on > a wheelie cart by human assistance.) > > Plenty of the current elderly population were in their prime when the > original Star Wars came out 43 years ago; R2 was a universal good guy. > It could be an updated version with a human voice rather than the > whistles and honks that only C3PO could understand. > > That approach avoids the ethical concern that caregivers are > automating emotional support by fooling impaired patients into > thinking their mechanical companion is human. I don't know if Howdy Doody is in the public domain after all this time, but R2D2 currently belongs to Disney and they will sue our pants off if we use the likeness of R2D2 for our product. 
And if we try to officially license R2, then it will probably blow our budget and price point. Here are some similar products on the market: https://medicalfuturist.com/the-top-12-social-companion-robots Stuart LaForge _______________________________________________ Ja, or go ahead and dispense with the notion of 3D likenesses, go with a wide-screen monitor like this one: https://www.amazon.com/Dell-UltraSharp-34-Inch-LED-Lit-Monitor/dp/B00PXYRMPE/ref=asc_df_B00PXYRMPE/?tag=hyprod-20&linkCode=df0&hvadid=198138936631&hvpos=1o1&hvnetw=g&hvrand=11700194202843370430&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9032144&hvtargid=pla-377774801467&psc=1 ...only turned to a vertical format. An arm attachment to a wheelchair is also a possibility, to swing it in place for use, swing it away in the unlikely event the patient would prefer the occasional interaction with humans, or... meals and such. spike From spike at rainier66.com Sun Jun 16 16:21:33 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 16 Jun 2019 09:21:33 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> Message-ID: <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Sent: Sunday, June 16, 2019 8:55 AM To: ExI Chat Subject: Re: [ExI] stealth singularity Quoting Spike: > _______________________________________________ > > Rechargeable power source, ja good point thanks. I had forgotten that > part, dang. If we don't get carried away with the actuators, it > shouldn't take much power. 
> > If we keep going with this whole concept, we get down to where I think > it really eventually goes: immersive reality goggles. That approach > standardizes a lotta lotta, and converts the exercise to mostly a > software task. As an intermediate step, we could use a standard LCD > with eye motion or head motion detection, to make it respond to the > attention of the user. > > I have a wide-screen display which might be good for something like > that. These things aren't even very expensive anymore. We might be > able to adapt Second Life for this purpose somehow. >...Actuator servos don't take that much juice. It will be the drive motor that will be the big drain. But our robot will not need to have a long range nor move very fast. >...I am not so sure about the virtual companion idea. I read that in Iraq, American troops developed emotional ties to bomb disposal robots to the point of feeling grief when the robots were lost in action, even though they did not look remotely human or even cute. https://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/ But I have not heard of anybody developing an emotional tie to a virtual assistant outside of the movie "Her". Anybody else know how likely an old person is to bond with a "virtual" character? Stuart LaForge _______________________________________________ Stuart, that notion of emotional attachment to the bomb disposal robot is likely grim soldier humor. Consider what they do to them in US Marine boot camp, where they make them sleep with their rifles, name their weapons, all that kinda stuff. The Marine who watches the pet bomb-disposal robot explode knows that coulda been her body parts flying in all directions, but the robot can be replaced. People in the business of killing while not getting killed develop their own brand of humor. A long time ago, I watched and listened as new computer users learned to become more competent keyboard users with Eliza. 
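Eliza's whole trick, for anyone who missed it back in the day, was shallow pattern matching plus pronoun reflection, which makes the attachment people formed to it all the more striking. A minimal sketch of that mechanism (these rules are illustrative stand-ins, not Weizenbaum's actual 1966 script):

```python
import re

# A few ELIZA-style rules: regex pattern -> response template.
# These three rules are made up for illustration.
RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

# Swap first and second person so echoed fragments read naturally.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    """Flip pronouns word by word in the captured fragment."""
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def eliza(utterance):
    """Return a canned response, echoing part of the user's input back."""
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no rule matches
```

So `eliza("I feel lonely")` echoes back "Why do you feel lonely?" with no understanding at all, and people in the early days would sit and type at this for hours.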
There were those who felt a kind of compelling emotional attachment to Eliza, even if they fully realized they were talking to themselves. Human emotion is a puzzling thing. If we go the entirely-software route, it opens the possibilities. I am finding that notion compelling. spike From spike at rainier66.com Sun Jun 16 16:39:47 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 16 Jun 2019 09:39:47 -0700 Subject: [ExI] Parasocial relationships (was: stealth singularity) In-Reply-To: <8A34A895-5FA3-495E-9B90-8B82A428C16C@gmail.com> References: <8A34A895-5FA3-495E-9B90-8B82A428C16C@gmail.com> Message-ID: <014301d52462$1c4c8730$54e59590$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of SR Ballard Sent: Sunday, June 16, 2019 9:03 AM To: ExI chat list Subject: [ExI] Parasocial relationships (was: stealth singularity) How likely is someone to bond with a virtual character? Have you seen how people ship fictional characters all the time, write fan fiction, salivate about character skins, spend tens of thousands of dollars on character-branded merchandise? Pokémon is like 1/8th of Japan's GDP. A parasocial relationship doesn't have to be with a "real" person. Look at waifu culture. SR Ballard Ja it happens, even to smart people. Or so I hear. I can remember when Kellie Martin failed to negotiate an acceptable contract and the tragic result: Michael Crichton had Lucy Knight murdered by a psych patient, who also very nearly murdered Dr. Carter (recall he was in critical condition in the spring of 2000 as Noah Wyle negotiated his contract for season 7.) I had such a crush on Dr. Knight. I had half a mind to go to Chicago and intentionally get in an accident near County General hoping she would patch me. And she wasn't even a particularly competent fictional doctor. But hey, she was so nice. Incompetent people can sometimes compensate by being nice people. I was so sad when Crichton arranged for her to perish, damn that guy. 
Hey, there's an idea: find Kellie Martin (she's reduced to doing Hallmark movies and likely will jump at this idea), get her to voice an avatar based on her ER character, then create one of those nifty animatars (perhaps based on her younger self (plenty of the patients in nursing homes may remember her as a fictional med student (and then as an incompetent but nice resident))) resulting in their telling "Dr" Knight's animatar stuff they won't even share with their real doctors. Great idea you spawned, SR Ballard! spike From pharos at gmail.com Sun Jun 16 16:55:22 2019 From: pharos at gmail.com (BillK) Date: Sun, 16 Jun 2019 17:55:22 +0100 Subject: [ExI] stealth singularity In-Reply-To: <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> Message-ID: On Sun, 16 Jun 2019 at 17:24, wrote: > > A long time ago, I watched and listened as new computer users learned to become more competent keyboard users with Eliza. There were those who felt a kind of compelling emotional attachment to Eliza, even if they fully realized they were talking to themselves. Human emotion is a puzzling thing. > > If we go the entirely-software route, it opens the possibilities. I am finding that notion compelling. > Do a search for companion robot. There is a lot of competition. :) Don't try and create something suitable for all patients. You'll end up trying to create a replacement human nurse! At the most basic level, for 100 USD a robot pet can give dementia patients great comfort. See: and read the customer reviews to see how much they value their pet robot. 
BillK From avant at sollegro.com Sun Jun 16 17:07:59 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 16 Jun 2019 10:07:59 -0700 Subject: [ExI] stealth singularity In-Reply-To: <1229693266.1643387.1560703157972@mail.yahoo.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> <1229693266.1643387.1560703157972@mail.yahoo.com> Message-ID: <20190616100759.Horde.SubkNTl2uy0lh9FARE79TxE@secure199.inmotionhosting.com> Quoting Spike: > If we go the entirely-software route, it opens the possibilities. I > am finding that notion compelling. > Ok. So it seems like you are envisioning a portrait-mounted flatscreen monitor on motorized wheels with cameras, microphones, and an Internet connection? With a robotic arm for physical assistance? Am I close? The robot arm would require a wider wheelbase, but it would be doable. The software idea is interesting. We could use real people as models. Motion capture a wide range of gestures/facial expressions; if they have a pleasant voice, we could hire them for that too. SR Ballard also has a point. We can render the motion capture as a wire frame and re-skin however we want. If a customer wants to pay extra to have a famous companion, then we could license proprietary characters and skin their companions as such. 
Stuart LaForge From spike at rainier66.com Sun Jun 16 17:37:04 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 16 Jun 2019 10:37:04 -0700 Subject: [ExI] stealth singularity In-Reply-To: <20190616100759.Horde.SubkNTl2uy0lh9FARE79TxE@secure199.inmotionhosting.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> <1229693266.1643387.1560703157972@mail.yahoo.com> <20190616100759.Horde.SubkNTl2uy0lh9FARE79TxE@secure199.inmotionhosting.com> Message-ID: <016501d5246a$1d425f40$57c71dc0$@rainier66.com> Doesn't need motorized wheels. A simple 70-dollar lightweight wheelchair will do. We are talking two different things here. One attaches to the patient's wheelchair on a pivot arm facing in toward the patient, the other wheels separately on its own wheelchair, facing out. Both look like profit-engines to me. I am now thinking of using something public domain, such as the Mona Lisa or (since that image is public domain) the original ObamaCare girl: https://www.buzzfeednews.com/article/andrewkaczynski/the-search-for-the-mysterious-obamacare-website-girl She's such a beauty. If we can get that single public-domain image and do with it what they did with the talking Mona Lisa, get SR Ballard to voice it for nothing, then we can make a buttload and say nice things about her. All three hers: Mona Lisa, ObamaCare Girl and SR Ballard. spike -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Sent: Sunday, June 16, 2019 10:08 AM To: ExI Chat Subject: Re: [ExI] stealth singularity Quoting Spike: > If we go the entirely-software route, it opens the possibilities. I > am finding that notion compelling. > Ok. 
So it seems like you are envisioning a portrait-mounted flatscreen monitor on motorized wheels with cameras, microphones, and an Internet connection? With a robotic arm for physical assistance? Am I close? The robot arm would require a wider wheelbase, but it would be doable. The software idea is interesting. We could use real people as models. Motion capture a wide range of gestures/facial expressions; if they have a pleasant voice, we could hire them for that too. SR Ballard also has a point. We can render the motion capture as a wire frame and re-skin however we want. If a customer wants to pay extra to have a famous companion, then we could license proprietary characters and skin their companions as such. Stuart LaForge _______________________________________________ From avant at sollegro.com Sun Jun 16 18:19:42 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 16 Jun 2019 11:19:42 -0700 Subject: [ExI] stealth singularity In-Reply-To: <134683041.1657187.1560707725344@mail.yahoo.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> <1229693266.1643387.1560703157972@mail.yahoo.com> <20190616100759.Horde.SubkNTl2uy0lh9FARE79TxE@secure199.inmotionhosting.com> <016501d5246a$1d425f40$57c71dc0$@rainier66.com> <134683041.1657187.1560707725344@mail.yahoo.com> Message-ID: <20190616111942.Horde.7XqY7oq8rylcAWnR0cxQh7L@secure199.inmotionhosting.com> Quoting Spike: > Doesn't need motorized wheels. A simple 70-dollar lightweight > wheelchair will do. Wheelchair or not, without a motor how is it supposed to get around? 
Are you talking about something you wheel into grandma's room and leave sitting, maybe in the doorway to prevent her from wandering off? > > We are talking two different things here. One attaches to the > patient's wheelchair on a pivot arm facing in toward the patient, the > other wheels separately on its own wheelchair, facing out. > > Both look like profit-engines to me. Fine, but to start with we should focus on one or the other. Do you want to start with the VR goggles on grandma's wheelchair route or the big-screen TV in its own wheelchair route? > I am now thinking of using something public domain, such as the Mona > Lisa or (since that image is public domain) the original ObamaCare > girl: Actually we can just use a computer-generated face and avoid the issue altogether: https://thispersondoesnotexist.com/ From there we can animate it in a variety of ways depending on what fits our budget. Stuart LaForge From sen.otaku at gmail.com Sun Jun 16 20:08:39 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Sun, 16 Jun 2019 15:08:39 -0500 Subject: [ExI] stealth singularity In-Reply-To: <20190616111942.Horde.7XqY7oq8rylcAWnR0cxQh7L@secure199.inmotionhosting.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> <1229693266.1643387.1560703157972@mail.yahoo.com> <20190616100759.Horde.SubkNTl2uy0lh9FARE79TxE@secure199.inmotionhosting.com> <016501d5246a$1d425f40$57c71dc0$@rainier66.com> <134683041.1657187.1560707725344@mail.yahoo.com> <20190616111942.Horde.7XqY7oq8rylcAWnR0cxQh7L@secure199.inmotionhosting.com> Message-ID: The emotional attachment to the bomb disposal robot is not military humor. 
Please google how people are extremely distressed when they send their roomba for maintenance, at the idea that it will be replaced. This phenomenon has spawned a whole meme, "humans will pack bond with anything". RE: hospital companion. Try a vertical monitor encased in a lightweight clear plastic shell with wheels underneath. Clean, and can display a full avatar. Make them a decent height so people don't trip. SR Ballard From avant at sollegro.com Mon Jun 17 00:38:38 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 16 Jun 2019 17:38:38 -0700 Subject: [ExI] stealth singularity In-Reply-To: <1150664384.1729982.1560729766902@mail.yahoo.com> References: <20190616080206.Horde.RZQ3U6SPgYifG1BVPqi0Wsk@secure199.inmotionhosting.com> <013801d52457$2fb3b9c0$8f1b2d40$@rainier66.com> <885772093.1593925.1560698901200@mail.yahoo.com> <20190616085527.Horde.HLxFLFBvCwMN9PFCdq6dR9R@secure199.inmotionhosting.com> <013c01d5245f$9039ae50$b0ad0af0$@rainier66.com> <1229693266.1643387.1560703157972@mail.yahoo.com> <20190616100759.Horde.SubkNTl2uy0lh9FARE79TxE@secure199.inmotionhosting.com> <016501d5246a$1d425f40$57c71dc0$@rainier66.com> <134683041.1657187.1560707725344@mail.yahoo.com> <20190616111942.Horde.7XqY7oq8rylcAWnR0cxQh7L@secure199.inmotionhosting.com> <1150664384.1729982.1560729766902@mail.yahoo.com> Message-ID: <20190616173838.Horde.jK0wKlqldj0S40EjWvwDb_8@secure199.inmotionhosting.com> Quoting SR Ballard: > The emotional attachment to bomb disposal robot is not military > humor. Please google how people are extremely distressed when they > send their roomba for maintenance, at the idea that it will be > replaced. > > This phenomenon has spawned a whole meme, "humans will pack bond > with anything". If this is true, then it bodes well for our species. > RE: hospital companion. > > Try a vertical monitor encased in a lightweight clear plastic shell > with wheels underneath. Clean, and can display a full avatar. Make > them a decent height so people don't trip. 
Agreed. For a hospital that would be a good design. Using a 49" LED TV flipped sideways, we could render a life-size avatar from the knees up, have its face at about eye-level for an ambulatory patient. It would be a plastic cylinder a little over 2 feet wide so it would fit through doors easily. The bottom 1 ft or so would contain the wheels, motorized chassis, power supply, CPU/GPU, etc. We could literally have the avatar crouch down to talk to bed-ridden patients, or pull up a chair and sit down, based upon what the robot's cameras see. Same with navigation. We can use off-the-shelf drone technology for a lot of this stuff. I like this idea. It's doable, noble, and potentially lucrative. Stuart LaForge From sparge at gmail.com Mon Jun 17 13:46:14 2019 From: sparge at gmail.com (Dave Sill) Date: Mon, 17 Jun 2019 09:46:14 -0400 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back Message-ID: Check this out: https://www.youtube.com/watch?v=dKjCWfuvYxQ From johnkclark at gmail.com Mon Jun 17 14:40:50 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 10:40:50 -0400 Subject: [ExI] Thank you Donald Trump Message-ID: Iran Says It Will Exceed Nuclear Deal's Limit On Uranium in 10 Days John K Clark From sen.otaku at gmail.com Mon Jun 17 15:27:54 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Mon, 17 Jun 2019 10:27:54 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: Message-ID: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> I suppose it says a lot about me that I feel "distressed" that they are "abusing" the robot. 
SR Ballard > On Jun 17, 2019, at 8:46 AM, Dave Sill wrote: > > Check this out: > > https://www.youtube.com/watch?v=dKjCWfuvYxQ From sen.otaku at gmail.com Mon Jun 17 15:31:08 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Mon, 17 Jun 2019 10:31:08 -0500 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: Message-ID: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> I'm not sure we need to bring DT into it. Sure, it's a somewhat distressing development, but *morally* I suppose that they are as entitled to enriched nuclear materials as anyone. We got nukes that we won't give up, but expect other countries not to want them, for some reason. SR Ballard > On Jun 17, 2019, at 9:40 AM, John Clark wrote: > > Iran Says It Will Exceed Nuclear Deal's Limit On Uranium in 10 Days > > John K Clark From foozler83 at gmail.com Mon Jun 17 15:33:25 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 17 Jun 2019 10:33:25 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> Message-ID: Just anthropomorphism at work. Will we bond with robots? No question. bill w On Mon, Jun 17, 2019 at 10:30 AM SR Ballard wrote: > I suppose it says a lot about me that I feel "distressed" that they are > "abusing" the robot. 
> > SR Ballard > > On Jun 17, 2019, at 8:46 AM, Dave Sill wrote: > > Check this out: > > https://www.youtube.com/watch?v=dKjCWfuvYxQ > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Jun 17 16:29:53 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 17 Jun 2019 09:29:53 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> Message-ID: <02f501d52529$e50b5670$af220350$@rainier66.com> From: extropy-chat On Behalf Of SR Ballard Subject: Re: [ExI] Boston Dynamics: New Robots Now Fight Back I suppose it says a lot about me that I feel "distressed" that they are "abusing" the robot. SR Ballard We all felt that SR. We cheered when the robot began to whoop ass, the particular asses which genuinely needed whoopin. I think that video was somehow faked, don't know how. Looking at it from the POV of a controls guy, pushing or hitting the biped bot with a hockey stick isn't so much an assault as it is a random input, which requires a compensation loop. If Boston Dynamics really did get an actual robot to do all this, I am really impressed with them. I would still like to see those guys get a bot-delivered ass whoopin of course, but once I think it thru, everything they did there makes sense. spike -------------- next part -------------- An HTML attachment was scrubbed...
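[Editor's aside: spike's framing of the shove as "a random input, which requires a compensation loop" can be made concrete with a toy model. This is a minimal sketch with a made-up one-dimensional lean model and invented PD gains; it is not meant to resemble Boston Dynamics' actual controller.]

```python
# Toy disturbance rejection: the hockey-stick shove is just an impulse
# the feedback loop must compensate for. Plant dynamics and PD gains
# here are hypothetical, chosen only to illustrate the idea.
def simulate(kp=2.0, kd=1.2, shove_at=50, steps=400, dt=0.05):
    angle, rate = 0.0, 0.0                     # lean angle and angular rate
    for t in range(steps):
        if t == shove_at:
            rate += 1.0                        # the shove: a sudden random input
        torque = -kp * angle - kd * rate       # PD compensation
        rate += (1.5 * angle + torque) * dt    # unopposed, the lean grows
        angle += rate * dt
    return angle

print(abs(simulate()))           # with the loop active, the lean decays back toward zero
print(abs(simulate(0.0, 0.0)))   # with gains zeroed, the same shove topples the model
```

With the gains on, the post-shove transient dies out; with them off, the same impulse diverges. That difference is all "a compensation loop" means here.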
URL: From johnkclark at gmail.com Mon Jun 17 16:58:26 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 12:58:26 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: On Mon, Jun 17, 2019 at 11:39 AM SR Ballard wrote: *> I'm not sure we need to bring DT into it.* > You're not sure? I am. There was pretty much universal agreement Iran was following the terms of the nuclear treaty, but Trump complained that even with it Iran could have nuclear weapons in 10 years, so he reneged on it. So now, thanks to Trump's brilliant diplomacy, Iran can have nuclear weapons in 10 days, not years. > *Sure, it's a somewhat distressing development, but *morally* I suppose > that they are as entitled to enriched nuclear materials as anyone. * > If that's morality then what the hell is morality good for? Misery, pain, and the extinction of the human race are the only things I can think of. If that's morality then I'm immoral and proud of it. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Jun 17 17:10:41 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 13:10:41 -0400 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <02f501d52529$e50b5670$af220350$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: On Mon, Jun 17, 2019 at 12:33 PM wrote: > I think that video was somehow faked, don't know how. > How We Faked Boston Dynamics Robot John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sparge at gmail.com Mon Jun 17 17:23:04 2019 From: sparge at gmail.com (Dave Sill) Date: Mon, 17 Jun 2019 13:23:04 -0400 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <02f501d52529$e50b5670$af220350$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: On Mon, Jun 17, 2019 at 12:33 PM wrote: > > > We all felt that SR. We cheered when the robot began to whoop ass, the > particular asses which genuinely needed whoopin. > > > > I think that video was somehow faked, don't know how. Looking at it from > the POV of a controls guy, pushing or hitting the biped bot with a hockey > stick isn't so much an assault as it is a random input, which requires a > compensation loop. > > > > If Boston Dynamics really did get an actual robot to do all this, I am > really impressed with them. > > > > I would still like to see those guys get a bot-delivered ass whoopin of > course, but once I think it thru, everything they did there makes sense. > Yes, the video is totally fake. Here's how they did it: https://www.youtube.com/watch?v=gCuG-KJacp8 The fact that a small group of talented people can believably fake something like this should encourage us all to be more skeptical and not relay fakes. I did it to raise awareness and because the video was funny. -Dave > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Mon Jun 17 17:29:22 2019 From: sparge at gmail.com (Dave Sill) Date: Mon, 17 Jun 2019 13:29:22 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: On Mon, Jun 17, 2019 at 1:02 PM John Clark wrote: > On Mon, Jun 17, 2019 at 11:39 AM SR Ballard wrote: >> >> > *Sure, it's a somewhat distressing development, but *morally* I >> suppose that they are as entitled to enriched nuclear materials as anyone.
* >> > > If that's morality then what the hell is morality good for? Misery pain > and the extinction of the human race are the only things I can think of. If > that's morality then I'm immoral and proud of it. > It's moral for the US to tell Iran what it can and can't do regarding nuclear technology? The US is the only country that has ever used nuclear weapons and we killed mostly civilians. What has Iran ever done to the US? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Jun 17 18:02:03 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 17 Jun 2019 13:02:03 -0500 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: SB wrote: "for some reason" - Ha, the reason is Israel. bill w On Mon, Jun 17, 2019 at 12:38 PM Dave Sill wrote: > On Mon, Jun 17, 2019 at 1:02 PM John Clark wrote: >> On Mon, Jun 17, 2019 at 11:39 AM SR Ballard wrote: >>> >>> > *Sure, it's a somewhat distressing development, but *morally* I >>> suppose that they are as entitled to enriched nuclear materials as anyone. * >>> >> >> If that's morality then what the hell is morality good for? Misery pain >> and the extinction of the human race are the only things I can think of. If >> that's morality then I'm immoral and proud of it. >> > > It's moral for the US to tell Iran what it can and can't do regarding > nuclear technology? The US is the only country that has ever used nuclear > weapons and we killed mostly civilians. What has Iran ever done to the US? > > -Dave > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sen.otaku at gmail.com Mon Jun 17 18:39:10 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Mon, 17 Jun 2019 13:39:10 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <02f501d52529$e50b5670$af220350$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: <65FC5EB7-C95A-4A60-A897-F4C99ADC08C5@gmail.com> Sent from my iPhone > On Jun 17, 2019, at 11:29 AM, wrote: > > > > From: extropy-chat On Behalf Of SR Ballard > Subject: Re: [ExI] Boston Dynamics: New Robots Now Fight Back > > I suppose it says a lot about me that I feel "distressed" that they are "abusing" the robot. > > SR Ballard > > > We all felt that SR. We cheered when the robot began to whoop ass, the particular asses which genuinely needed whoopin. > > I think that video was somehow faked, don't know how. Looking at it from the POV of a controls guy, pushing or hitting the biped bot with a hockey stick isn't so much an assault as it is a random input, which requires a compensation loop. > > If Boston Dynamics really did get an actual robot to do all this, I am really impressed with them. > > I would still like to see those guys get a bot-delivered ass whoopin of course, but once I think it thru, everything they did there makes sense. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sen.otaku at gmail.com Mon Jun 17 18:50:00 2019 From: sen.otaku at gmail.com (SR Ballard) Date: Mon, 17 Jun 2019 13:50:00 -0500 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: I think perhaps my point didn't come across clearly.
I don't think bringing DT into it is productive, because he can't be part of the solution. As far as morality, there is no difference between the US developing nuclear weapons (in the past), and Iran developing nuclear weapons (now). If one thinks MAD is effective (questionable), then the world would be safer with the addition of more weapons, because then mutual destruction is more assured in that instance. The United States sees fit to place requirements on other countries regarding gaining nuclear weapons, but to be logically consistent, the US would need to be aggressively reducing our nuclear arsenal, which we are not. While perhaps not possible in the current administration, the US needs to find a niche to enter into, and begin actively positioning itself that way before our international authority completely disintegrates. I would like to see us develop strong national policy to promote STEM in order to develop the nation in the direction of a good niche. But that's my opinion. SR Ballard From spike at rainier66.com Mon Jun 17 20:10:58 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 17 Jun 2019 13:10:58 -0700 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: <00a701d52548$c79dfab0$56d9f010$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of SR Ballard Subject: Re: [ExI] Iranian Nuclear Program >... If one thinks MAD is effective (questionable), then the world would be safer with the addition of more weapons, because then mutual destruction is more assured in that instance....SR Ballard _______________________________________________ MAD was conceived when only two fellers had nukes. If either one was nuked, everyone knows whodunnit. If a dozen have nukes, we don't.
spike From johnkclark at gmail.com Mon Jun 17 20:17:52 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 16:17:52 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: On Mon, Jun 17, 2019 at 1:39 PM Dave Sill wrote: *> It's moral for the US to tell Iran what it can and can't do regarding > nuclear technology? * > I don't care if it's moral or not. Morality is not an end in itself, morality is just a tool that is supposed to reduce the net amount of misery in the world; and if it can't do that, and religious nuts in Iran getting nuclear weapons won't do that, then *to hell with morality*. It was smart for Iran and the USA (under Obama) to agree to a nuclear treaty, and it was brain dead dumb for Trump to renege on the treaty when Iran was following the treaty and just doing what they promised they would do. Iran kept its promise but the USA, thanks to Trump, did not. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Mon Jun 17 19:34:25 2019 From: sparge at gmail.com (Dave Sill) Date: Mon, 17 Jun 2019 15:34:25 -0400 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: Given the existence of something like this fake Boston Dynamics video, how hard do you think it would be to fake the short, single camera, poor quality video that purportedly shows Iranian military removing an unexploded mine from a tanker? Even taking the video at face value, how do we know they're Iranian military? Couldn't they be someone attempting to frame the Iranians? -Dave On Mon, Jun 17, 2019 at 1:14 PM John Clark wrote: > On Mon, Jun 17, 2019 at 12:33 PM wrote: > > I think that video was somehow faked, don't know how.
>> > > How We Faked Boston Dynamics Robot > > John K Clark > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Jun 17 20:50:16 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 16:50:16 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: On Mon, Jun 17, 2019 at 2:52 PM SR Ballard wrote: > *I don't think bringing DT into it is productive, because he can't be > part of the solution.* > Getting rid of DT is part of the solution, as long as he's in power there is no hope. * >As far as morality, there is no difference between the US developing > nuclear weapons (in the past), and Iran developing nuclear weapons (now). * I can think of one very important difference: we can't change the past but we can change the future. And a country run by religious loonies who love to talk about martyrdom getting nukes will not bring on a brighter future. > *> if one thinks MAD is effective (questionable), then the world would be > safer with the addition of more weapons,* MAD was very questionable indeed! The more I learn about the Cuban Missile Crisis the more terrifying it seems: the human race came within a gnat's ass of destroying itself in 1962. And the USSR was evil, but evil alone won't destroy us; stupidity could. And the USSR was never as loony as Iran, and Khrushchev was not as stupid as Trump. John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steinberg.will at gmail.com Mon Jun 17 21:02:59 2019 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 17 Jun 2019 17:02:59 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: Maybe if Iran has nukes we can finally have some fucking stability in the region. As long as Israel doesn't start shit (ha ha.) At least there will be a counterpoint NPT non-signatory in the Middle East. Iran is in the least position to want to nuke anyone. Much rather them than NK. -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Jun 17 21:14:02 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 17 Jun 2019 16:14:02 -0500 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: spike wrote: MAD was conceived when only two fellers had nukes. If either one was nuked, everyone knows whodunnit. If a dozen have nukes, we don't. Eh? I would have thought that everyone's radar was on all the time or whatever they use on satellites, so that any launch of a rocket would be instantly identified as such and from where. bill w On Mon, Jun 17, 2019 at 4:06 PM Will Steinberg wrote: > Maybe if Iran has nukes we can finally have some fucking stability in the > region. As long as Israel doesn't start shit (ha ha.) At least there will > be a counterpoint NPT non-signatory in the Middle East. > > Iran is in the least position to want to nuke anyone. Much rather them > than NK. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From foozler83 at gmail.com Mon Jun 17 21:18:13 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 17 Jun 2019 16:18:13 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: On Mon, Jun 17, 2019 at 3:40 PM Dave Sill wrote: > Given the existence of something like this fake Boston Dynamics video, how > hard do you think it would be to fake the short, single camera, poor > quality video that purportedly shows Iranian military removing an > unexploded mine from a tanker? > > Even taking the video at face value, how do we know they're Iranian > military? Couldn't they be someone attempting to frame the Iranians? > > -Dave > > Given the mindset above, why believe anything you read, see on TV or the web? For all you know, I am a Russian agent keeping track on weirdos. Tovarish! When it comes right down to it, DT might not even exist, much less be president. bill w > On Mon, Jun 17, 2019 at 1:14 PM John Clark wrote: >> On Mon, Jun 17, 2019 at 12:33 PM wrote: >> >> > I think that video was somehow faked, don't know how. >>> >> >> How We Faked Boston Dynamics Robot >> >> >> John K Clark >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Mon Jun 17 21:23:06 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 17 Jun 2019 14:23:06 -0700 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: <00fb01d52552$db48e070$91daa150$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Sent: Monday, June 17, 2019 2:14 PM To: ExI chat list Subject: Re: [ExI] Iranian Nuclear Program >>…spike wrote: MAD was conceived when only two fellers had nukes. If either one was nuked, everyone knows whodunnit. If a dozen have nukes, we don't. >…Eh? I would have thought that everyone's radar was on all the time or whatever they use on satellites, so that any launch of a rocket would be instantly identified as such and from where. bill w Sure, if you assume rockets come into the picture anywhere. What if the nuke was apparently aboard a Cessna? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Jun 17 21:23:48 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 17:23:48 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: On Mon, Jun 17, 2019 at 5:07 PM Will Steinberg wrote: > *Maybe if Iran has nukes we can finally have some fucking stability in > the region. * > I assume you weren't being serious. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From foozler83 at gmail.com Mon Jun 17 21:34:46 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 17 Jun 2019 16:34:46 -0500 Subject: [ExI] Iranian Nuclear Program In-Reply-To: <00fb01d52552$db48e070$91daa150$@rainier66.com> References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> <00fb01d52552$db48e070$91daa150$@rainier66.com> Message-ID: I suppose that 1 - I am not thinking straight, and 2 - saw a copy of Little Boy in D.C. and the size stuck in my brain. Cessna? H-bombs that small? Hmmm. So a large drone would be able to carry one, no es verdad? bill w On Mon, Jun 17, 2019 at 4:31 PM wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace > *Sent:* Monday, June 17, 2019 2:14 PM > *To:* ExI chat list > *Subject:* Re: [ExI] Iranian Nuclear Program > > > > >>…spike wrote: MAD was conceived when only two fellers had nukes. If > either one was nuked, everyone knows whodunnit. If a dozen have nukes, we > don't. > > > > >…Eh? I would have thought that everyone's radar was on all the time or > whatever they use on satellites, so that any launch of a rocket would be > instantly identified as such and from where. > > > > bill w > > > > > > > > > > > > > > Sure, if you assume rockets come into the picture anywhere. What if the > nuke was apparently aboard a Cessna? > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From interzone at gmail.com Mon Jun 17 21:41:35 2019 From: interzone at gmail.com (Dylan Distasio) Date: Mon, 17 Jun 2019 17:41:35 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: Any theocracy, in particular an Islamic one, should not have access to nukes.
On Mon, Jun 17, 2019, 5:38 PM John Clark wrote: > On Mon, Jun 17, 2019 at 5:07 PM Will Steinberg > wrote: > > > *Maybe if Iran has nukes we can finally have some fucking stability in >> the region. * >> > > I assume you weren't being serious. > > John K Clark > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From interzone at gmail.com Mon Jun 17 21:52:08 2019 From: interzone at gmail.com (Dylan Distasio) Date: Mon, 17 Jun 2019 17:52:08 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> <00fb01d52552$db48e070$91daa150$@rainier66.com> Message-ID: Yes, technically something comparable to a US Reaper could carry a small one with a 3,000 lb external payload max. https://en.m.wikipedia.org/wiki/General_Atomics_MQ-9_Reaper On Mon, Jun 17, 2019, 5:46 PM William Flynn Wallace wrote: > I suppose that 1 - I am not thinking straight, and 2 - saw a copy of Little > Boy in D.C. and the size stuck in my brain. Cessna? H-bombs that small? > Hmmm. So a large drone would be able to carry one, no es verdad? bill w > > On Mon, Jun 17, 2019 at 4:31 PM wrote: > >> >> >> >> >> *From:* extropy-chat *On Behalf >> Of *William Flynn Wallace >> *Sent:* Monday, June 17, 2019 2:14 PM >> *To:* ExI chat list >> *Subject:* Re: [ExI] Iranian Nuclear Program >> >> >> >> >>…spike wrote: MAD was conceived when only two fellers had nukes. If >> either one was nuked, everyone knows whodunnit. If a dozen have nukes, we >> don't. >> >> >> >> >…Eh? I would have thought that everyone's radar was on all the time or >> whatever they use on satellites, so that any launch of a rocket would be >> instantly identified as such and from where.
>> >> >> >> bill w >> >> >> >> >> >> >> >> >> >> >> >> >> >> Sure, if you assume rockets come into the picture anywhere. What if the >> nuke was apparently aboard a Cessna? >> >> >> >> spike >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Jun 17 22:08:18 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 17 Jun 2019 17:08:18 -0500 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> Message-ID: I agree about theocracies, but it cannot be made a public reason. Every religion now is extremely touchy about its validity and criticisms of it. Look at Christianity in America. Touchy as a blowfish. bill w On Mon, Jun 17, 2019 at 4:54 PM Dylan Distasio wrote: > Any theocracy, in particular an Islamic one, should not have access to > nukes. > > On Mon, Jun 17, 2019, 5:38 PM John Clark wrote: > >> On Mon, Jun 17, 2019 at 5:07 PM Will Steinberg >> wrote: >> >> > *Maybe if Iran has nukes we can finally have some fucking stability in >>> the region. * >>> >> >> I assume you weren't being serious. >> >> John k Clark >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Mon Jun 17 23:57:34 2019 From: johnkclark at gmail.com (John Clark) Date: Mon, 17 Jun 2019 19:57:34 -0400 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> <00fb01d52552$db48e070$91daa150$@rainier66.com> Message-ID: On Mon, Jun 17, 2019 at 5:47 PM William Flynn Wallace wrote: > > saw a copy of Little Boy in D.C. and the size stuck in my brain. > Cessna? H-bombs that small? Hmmm. So a large drone would be able to carry > one, > Little Boy used 1945 technology: it weighed 9,700 pounds and produced a yield of 15,000 tons of TNT. The W45 warhead used 1965 technology and had the same yield as Little Boy but weighed only 400 pounds. The B41 warhead used 1960 technology and weighed slightly more than Little Boy, 10,500 pounds, but was 1,500 times as powerful; the USA had 500 of these monsters in its stockpile during the Cuban Missile Crisis and lots of other nuclear bombs too. The USSR had similar stuff in 1962. John K Clark >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Jun 18 00:07:20 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 17 Jun 2019 17:07:20 -0700 Subject: [ExI] Iranian Nuclear Program In-Reply-To: References: <2F033769-F28B-4965-85F8-52FDF61B6740@gmail.com> <00fb01d52552$db48e070$91daa150$@rainier66.com> Message-ID: <004901d52569$ccc760f0$665622d0$@rainier66.com> Most of Little Boy was in the gun. Modern nukes wouldn't need all that. Bad guy gets in a Cessna with the device in the passenger seat, takes off, points it at {fill in target, probably DC}, trims it up, sets the timer to 10 minutes, hops out, parachutes down safely, kerBOOM. Alternative, if bad guy has some sophistication with a single DOF actuator: rig it to the rudder pedals. How easy would that be? Half hour job with hobby-shop level stuff.
Bad guy flies vaguely toward target, hops out half an hour before arrival, lands, gets GPS coordinates on phone, trims left or right to keep on target, waits until GPS coordinates indicate device is over target, kerBOOM. We don't know whodunnit. spike From: extropy-chat On Behalf Of William Flynn Wallace Sent: Monday, June 17, 2019 2:35 PM To: ExI chat list Subject: Re: [ExI] Iranian Nuclear Program I suppose that 1 - I am not thinking straight, and 2 - saw a copy of Little Boy in D.C. and the size stuck in my brain. Cessna? H-bombs that small? Hmmm. So a large drone would be able to carry one, no es verdad? bill w On Mon, Jun 17, 2019 at 4:31 PM > wrote: From: extropy-chat > On Behalf Of William Flynn Wallace Sent: Monday, June 17, 2019 2:14 PM To: ExI chat list > Subject: Re: [ExI] Iranian Nuclear Program >>…spike wrote: MAD was conceived when only two fellers had nukes. If either one was nuked, everyone knows whodunnit. If a dozen have nukes, we don't. >…Eh? I would have thought that everyone's radar was on all the time or whatever they use on satellites, so that any launch of a rocket would be instantly identified as such and from where. bill w Sure, if you assume rockets come into the picture anywhere. What if the nuke was apparently aboard a Cessna? spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed...
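[Editor's aside: John Clark's warhead figures a few messages up are easy to turn into a yield-to-weight comparison. A quick back-of-envelope using the numbers exactly as quoted in the thread; the figures themselves are the thread's, not independently verified here.]

```python
# Yield-to-weight ratios for the bombs cited above (tons of TNT per
# pound of bomb), using the thread's figures verbatim.
bombs = {
    "Little Boy (1945)": (15_000, 9_700),           # yield (tons TNT), weight (lb)
    "W45 (1965)":        (15_000, 400),
    "B41 (1960)":        (15_000 * 1_500, 10_500),  # "1,500 times as powerful"
}
for name, (yield_tons, weight_lb) in bombs.items():
    print(f"{name}: {yield_tons / weight_lb:,.1f} tons of TNT per pound")
```

By these numbers the W45 delivers roughly 24 times more yield per pound than Little Boy, and the B41 over a thousand times; that shrinkage is exactly why the small-aircraft scenario above is worth worrying about.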
URL: From atymes at gmail.com Tue Jun 18 01:29:01 2019 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 17 Jun 2019 18:29:01 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <02f501d52529$e50b5670$af220350$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: On Mon, Jun 17, 2019 at 9:32 AM wrote: > Looking at it from the POV of a controls guy, pushing or hitting the biped > bot with a hockey stick isn't so much an assault as it is a random input, > which requires a compensation loop. > And here I was just recently writing science fiction about that POV, for an automated spaceship. From http://wiki.travellerrpg.com/Average_Cargo_Ship_class_Cargo_Ship : > Out of a stated desire to have there be something truly innocent in the universe, the software's designers intentionally made it unable to understand that one ship can deliberately attack another, so far as the sub-sophont-grade AI can be said to understand anything. Crews who have escorted an Average Cargo Ship through pirate encounters sometimes joke about the resulting barrage of navigation hazard alerts (every individual missile, laser shot, and once the pirate ships have been destroyed, shard of debris), and the Average Cargo Ship's insistent labeling of hostiles as "incompetent crew". The AI does at least forgive its escorts for the "hazards" they create, with a hardcoded rationalization that they were just reacting to the much bigger hazard of another crew so incompetent that they were about to destroy other ships by accident. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Tue Jun 18 01:54:40 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 17 Jun 2019 18:54:40 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: <004c01d52578$cb1c9950$6155cbf0$@rainier66.com> From: Adrian Tymes Sent: Monday, June 17, 2019 6:29 PM To: ExI chat list Cc: spike Subject: Re: [ExI] Boston Dynamics: New Robots Now Fight Back On Mon, Jun 17, 2019 at 9:32 AM > wrote: Looking at it from the POV of a controls guy, pushing or hitting the biped bot with a hockey stick isn't so much an assault as it is a random input, which requires a compensation loop. And here I was just recently writing science fiction about that POV, for an automated spaceship. From http://wiki.travellerrpg.com/Average_Cargo_Ship_class_Cargo_Ship : > Out of a stated desire to have there be something truly innocent in the universe, the software's designers intentionally made it unable to understand that one ship can deliberately attack another… Adrian your comment reminded me of my uncle's St. Bernard. That dog was an example of a creature who could not comprehend the concept of a bad human. To him, all humans are good things, they are lovable little creatures to be protected and rescued. They were bred to go find hikers and skiers lost in the snow. If a stranger came up to that dog, he would be delighted. If you would lay on the ground anywhere where that dog could get to you, his instinct would kick in, he would come right up and snuggle up next to you, trying to keep you warm, even if it was sweltering. Perfectly useless as a watchdog, even though he was huge. If he saw two people fighting, he had no idea what was going on. He would likely wait until one or the other combatants fell to the ground, then he would go and snuggle next to the fallen little creature he was bred to rescue.
Silly beast. Truly innocent creature he was. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Jun 18 12:04:20 2019 From: sparge at gmail.com (Dave Sill) Date: Tue, 18 Jun 2019 08:04:20 -0400 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: On Mon, Jun 17, 2019 at 5:24 PM William Flynn Wallace wrote: > > On Mon, Jun 17, 2019 at 3:40 PM Dave Sill wrote: > >> Given the existence of something like this fake Boston Dynamics video, >> how hard do you think it would be to fake the short, single camera, poor >> quality video that purportedly shows Iranian military removing an >> unexploded mine from a tanker? >> >> Even taking the video at face value, how do we know they're Iranian >> military? Couldn't they be someone attempting to frame the Iranians? >> >> Given the mind set above, why believe anything you read, see on TV or the > web. For all you know, I am a Russian agent keeping track on weirdos. > Tovarish! > > When it comes right down to it, DT might not even exist, much less be > president. > We all fall somewhere on the spectrum between believing nothing and believing everything. Technology is making it possible to convincingly produce forms of media that have been traditionally considered impossible to fake. Wise people will adjust their perception of the trustworthiness of evidence in these forms of media accordingly. Caveat emptor. -Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foozler83 at gmail.com Tue Jun 18 13:46:01 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 18 Jun 2019 08:46:01 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: Dave Sill wrote - We all fall somewhere on the spectrum between believing nothing and believing everything. Yes, but as scientists we should be far out on the skeptic end "Gimme that ole time empiricism - it's good enough for me." bill w On Tue, Jun 18, 2019 at 7:07 AM Dave Sill wrote: > On Mon, Jun 17, 2019 at 5:24 PM William Flynn Wallace > wrote: > >> >> On Mon, Jun 17, 2019 at 3:40 PM Dave Sill wrote: >> >>> Given the existence of something like this fake Boston Dynamics video, >>> how hard do you think it would be to fake the short, single camera, poor >>> quality video that purportedly shows Iranian military removing an >>> unexploded mine from a tanker? >>> >>> Even taking the video at face value, how do we know they're Iranian >>> military? Couldn't they be someone attempting to frame the Iranians? >>> >>> Given the mind set above, why believe anything you read, see on TV or >> the web. For all you know, I am a Russian agent keeping track on weirdos. >> Tovarish! >> >> When it comes right down to it, DT might not even exist, much less be >> president. >> > > We all fall somewhere on the spectrum between believing nothing and > believing everything. Technology is making it possible to convincingly > produce forms of media that have been traditionally considered impossible > to fake. Wise people will adjust their perception of the trustworthiness of > evidence in these forms of media accordingly. Caveat emptor. 
> > -Dave > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Jun 18 14:14:26 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jun 2019 07:14:26 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> Message-ID: <005401d525e0$23163e60$6942bb20$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Subject: Re: [ExI] Boston Dynamics: New Robots Now Fight Back Dave Sill wrote - We all fall somewhere on the spectrum between believing nothing and believing everything. Yes, but as scientists we should be far out on the skeptic end "Gimme that ole time empiricism - it's good enough for me." bill w Ja, there is that. Since the discussion already went toward political considerations, imagine the nightmare the US could face in the 2020 elections if there is some widespread influential video hoax uncorked right before the election and caught afterwards. The risk of that scenario increases steadily. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Jun 18 14:24:16 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jun 2019 07:24:16 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <005401d525e0$23163e60$6942bb20$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: <006501d525e1$82b7f1a0$8827d4e0$@rainier66.com> From: spike at rainier66.com >?The risk of that scenario increases steadily? 
spike The robot fights back video wasn't intended as a hoax and had plenty of indications it was a parody of the original Boston Dynamics video, such as the logo "Bosstown Dynamics." We all recall seeing the original where they were hassling the robot and wishing he ("he" the robot, he) would open a much-deserved can of whoop-ass on his tormenters. Those guys who did Bosstown were skilled enough animators that they could have made it very convincing and hidden the parody nature of the video. They would have made it big, particularly in that scene where the robot gets the gun and marches the cruel bastards out the door. We would get it: if we make humanoid robots, then make them angry, they can pick up our guns and use them on us. In some ways, that would have been a better public service, because death sells. The advertising world knows the basic principles well: comedy is fun, but sex sells, and death sells even better. What we really need is a very threatening hoax video to educate the public on how good video animation has become. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Tue Jun 18 15:14:23 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 18 Jun 2019 10:14:23 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <005401d525e0$23163e60$6942bb20$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: imagine the nightmare the US could face in the 2020 elections if there is some widespread influential video hoax uncorked right before the election and caught afterwards. The risk of that scenario increases steadily. spike well, what did we expect when we let every animal out of the zoo and gave it a laptop and wifi?
Everything good or bad about people will show up bill w On Tue, Jun 18, 2019 at 9:17 AM wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace > *Subject:* Re: [ExI] Boston Dynamics: New Robots Now Fight Back > > > > Dave Sill wrote - We all fall somewhere on the spectrum between believing > nothing and believing everything. > > > > Yes, but as scientists we should be far out on the skeptic end "Gimme > that ole time empiricism - it's good enough for me." > > bill w > > > > > > > > Ja, there is that. Since the discussion already went toward political > considerations, imagine the nightmare the US could face in the 2020 > elections if there is some widespread influential video hoax uncorked right > before the election and caught afterwards. The risk of that scenario > increases steadily. > > > > spike > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Jun 18 15:37:33 2019 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 18 Jun 2019 08:37:33 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <005401d525e0$23163e60$6942bb20$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: On Tue, Jun 18, 2019, 7:17 AM wrote: > Since the discussion already went toward political considerations, imagine > the nightmare the US could face in the 2020 elections if there is some > widespread influential video hoax uncorked right before the election and > caught afterwards. The risk of that scenario increases steadily. 
> Like one of Trump saying this election really is rigged, urging his supporters to stay home on Election Day so they won't be identified, and stock up on guns & ammo to "take back this country the Second Amendment way"? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Tue Jun 18 16:13:57 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 18 Jun 2019 11:13:57 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: You guys, esp. Mr. Clark, of course, remind me of sports newsmen - endless speculation about the future - the reason I don't watch ESPN even though I like a couple of sports. And the reason I don't watch TV at all except for golf and tennis. Even there I just watch the sports - mostly on mute. There was a book of long ago, say 70s, called I'm OK, You're OK. One of the chapters was a game (or maybe the book was Games People Play - I dunno) called Uproar. Described as played by married couples, back when there was a lot of that sort of thing, the couple took turns at saying or doing something outrageous, and that escalated sometimes until violence occurred. It seems to me that the media plays it. We hear or read of something outrageous and everyone has to put their two cents in, for and against. DT must feel like a god - a couple of Tweets and worldwide uproar follows. He must be laughing his ass off. bill w On Tue, Jun 18, 2019 at 10:40 AM Adrian Tymes wrote: > On Tue, Jun 18, 2019, 7:17 AM wrote: > >> Since the discussion already went toward political considerations, >> imagine the nightmare the US could face in the 2020 elections if there is >> some widespread influential video hoax uncorked right before the election >> and caught afterwards. The risk of that scenario increases steadily.
>> > Like of Trump saying this election really is rigged, urging his supporters > to stay home on Election Day so they won't be identified, and stock up on > guns & ammo to "take back this country the Second Amendment way"? > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Jun 18 16:25:06 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jun 2019 09:25:06 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: <00b001d525f2$6470e010$2d52a030$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Subject: Re: [ExI] Boston Dynamics: New Robots Now Fight Back >…You guys, esp. Mr. Clark, of course, remind me of sports newsmen - endless speculation about the future - the reason I don't watch ESPN even though I like a couple of sports. And the reason I don't watch TV at all except for golf and tennis. Even there I just watch the sports - mostly on mute… bill w Ja, but this isn't a game. If the US elects a head of the executive based on a hoax, that will be trouble. The risk of that is increasing. This isn't a game. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From foozler83 at gmail.com Tue Jun 18 16:43:39 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 18 Jun 2019 11:43:39 -0500 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <00b001d525f2$6470e010$2d52a030$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> <00b001d525f2$6470e010$2d52a030$@rainier66.com> Message-ID: Ja, but this isn't a game. If the US elects a head of the executive based on a hoax, that will be trouble. The risk of that is increasing. This isn't a game. spike Of course it's a game. There are serious games. War games for one. For another, the Greeks had a contest where two men were tied together and armed. You win by killing the other. Big trouble, though, if anything is faked - you are correct, sir. bill w On Tue, Jun 18, 2019 at 11:27 AM wrote: > > > > > *From:* extropy-chat > > *On Behalf Of *William Flynn Wallace > *Subject:* Re: [ExI] Boston Dynamics: New Robots Now Fight Back > > > > >…You guys, esp. Mr. Clark, of course, remind me of sports newsmen - > endless speculation about the future - the reason I don't watch ESPN even > though I like a couple of sports. And the reason I don't watch TV at all > except for golf and tennis. Even there I just watch the sports - mostly > on mute… bill w > > > > > > Ja, but this isn't a game. If the US elects a head of the executive based > on a hoax, that will be trouble. The risk of that is increasing. This > isn't a game. > > > > spike > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Tue Jun 18 17:13:14 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 18 Jun 2019 13:13:14 -0400 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <005401d525e0$23163e60$6942bb20$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: On Tue, Jun 18, 2019 at 10:17 AM wrote: Since the discussion already went toward political considerations, > imagine the nightmare the US could face in the 2020 elections if there is > some widespread influential video hoax uncorked right before the election > and caught afterwards. The risk of that scenario increases steadily. > > > ### I'm reading Neal Stephenson's newest book, Fall. The jury is still out, so far not too bad despite some deplorables-type demagoguery. What makes it relevant to the present discussion is an exploration of a possible post-truth phase of the internet, and our society in general, engendered by advanced addictive algorithmically generated deceptive content acting on millions of people of various levels of intelligence. Stephenson envisions a nightmare where our society at first splinters along cognitive lines, modified by individual tendencies towards paranoia and other forms of psychopathology, which then spatially splinters America into the smart Democratic voters who concentrate in cities and the dumb filth that fill the rest. A bit similar to what Karl Schroeder wrote in "Lady of Mazes". An interesting book, I may discuss it more once I am done reading it. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafal.smigrodzki at gmail.com Tue Jun 18 17:30:19 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 18 Jun 2019 13:30:19 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: References: Message-ID: On Sun, Jun 2, 2019 at 8:58 AM John Clark wrote: > > Long before your Yamnaya something dramatic happened to humans about > 33,000BC, stone weapons suddenly got much more refined and specialized > tools for cleaning animal skins and awls for piercing appeared, shoes > were invented and so was jewelry. Before 33,000BC there was little or no > art, after 33,000BC it was everywhere. > > If this change in human behavior happened because of a change in the gene > pool then it almost certainly started in a mutation that occurred in an individual > living in a small isolated population, the gene made the individual who > had it a better hunter and a better warrior and this evolutionary > advantage could easily rapidly spread through the entire population > because it was so small. After that there would be little to stop the > small isolated population from spreading out and becoming large, in fact > becoming the dominant human population. But if the mutation had occurred > in a horse centered nomadic population that ranged over a huge area it > might have produced a few widely separated clever people here and there > but the mutated gene would become so diluted by the huge gene pool it could > never get a foothold. > ### If you are asking about the causes of the Upper Paleolithic Revolution, it is definitely not caused by Mendelian inheritance. It is highly likely that genetic causes played an important role in this transition but most likely it was due to a polygenic mechanism, with population-wide changes in allele frequency triggered by selective processes acting in parallel on genetically isolated populations, rather than by a selective sweep due to migration from a single origin.
Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Jun 18 17:32:23 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jun 2019 10:32:23 -0700 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> Message-ID: <011e01d525fb$ca82b3c0$5f881b40$@rainier66.com> From: extropy-chat On Behalf Of Rafal Smigrodzki Subject: Re: [ExI] Boston Dynamics: New Robots Now Fight Back On Tue, Jun 18, 2019 at 10:17 AM > wrote: Since the discussion already went toward political considerations… The risk of that scenario increases steadily. ### I'm reading Neal Stephenson's newest book, Fall. The jury is still out, so far not too bad despite some deplorables-type demagoguery… My copy of Fall came in the mail yesterday. From what little I have read, I am now seriously considering the possibility that Neal Stephenson is Satoshi. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Tue Jun 18 18:01:32 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 18 Jun 2019 14:01:32 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: <20190602110608.Horde.Ixvf-kmTl6IcXTMPSXHHn-y@secure199.inmotionhosting.com> References: <001601d5183d$0036f110$00a4d330$@rainier66.com> <004301d51882$f00963e0$d01c2ba0$@rainier66.com> <2029759869.7801518.1559496986931@mail.yahoo.com> <20190602110608.Horde.Ixvf-kmTl6IcXTMPSXHHn-y@secure199.inmotionhosting.com> Message-ID: On Sun, Jun 2, 2019 at 2:52 PM Stuart LaForge wrote: > > Quoting Rafal Smigrodski: > > > > This said, there are huge research consortia that are collecting DNA > > from various human populations, so there is a lot of data on human > > nucleotide diversities, it's just I have been out of the loop long > > enough as a geneticist that I don't have the references handy. It > > might be an evening's worth of trawling through PubMed to get some > > estimates. > > https://genographic.nationalgeographic.com/for-scientists/ > > In the above link is an email address that scientists can write to in > order to receive access to the databases of one of the largest of the > research consortia you mentioned. > > Email them and tell them you are a scientist. Let them give you access > to the raw data to test your theory. If you want help mining the data, > then tell them I am your assistant and get me access too. > > Your hypothesis is important because if it is true, then it refutes > racism entirely. And that is something I would be willing to help with > ### I am an amateur in population genetics, my actual work was in mitochondrial genetics and Parkinson's disease, so I would be poorly equipped to work on substantiating the hypothesis. 
While the idea that increased mobility in a population may trigger changes in fitness related to improved mating distance occurred to me independently, I would be surprised if nobody else among population geneticists ever thought about it before. It's kind of obvious once you think about it. However, if the hypothesis has indeed never been published, it might be the basis for an interesting research project, maybe even a PhD thesis. If anybody here knows an aspiring young population geneticist, feel free to forward my Yamnaya post to him/her. I doubt that the hypothesis justifies jettisoning out-group hostility based on different levels of genetic relatedness. After all you could interpret it as saying that to have smart, strong and beautiful children you should invade foreign lands, slaughter the local males, and take their women as concubines, the Yamnaya way. Also, it might justify increased hostility to anybody who is farther away than the OMD, on their way to speciation. I would be very wary of speciation among cognitively generalist sentients; that can only bring war to the extinction of one of the species. Based on what I know about interspecies conflict, the presence of multiple species in a single ecological niche is unstable, and it leads inevitably to the eradication of all but one of the species (assuming absence of refugia, geographic discontinuity, etc.). Since the niche for general intelligence is very large, it may be difficult for more than one cognitively general species to exist in a single solar system.
URL: From rafal.smigrodzki at gmail.com Tue Jun 18 18:09:03 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 18 Jun 2019 14:09:03 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: <20190602135447.Horde.3JqrHrwMg4w2eQWiBca32nt@secure199.inmotionhosting.com> References: <1033591604.7814374.1559506054270@mail.yahoo.com> <20190602135447.Horde.3JqrHrwMg4w2eQWiBca32nt@secure199.inmotionhosting.com> Message-ID: On Sun, Jun 2, 2019 at 5:03 PM Stuart LaForge wrote: > > Rafal's hypothesis is also applicable to historic peoples other > than the Yamnaya. Take this guy for example. > > https://www.thevintagenews.com/2018/06/09/genghis-khan/ > > The success of his genes and those of his armies would support Rafal's > hypothesis as well. > > ### Indeed, while the Yamnaya may have been the most successful invaders in history, there are many others who availed themselves of a similar mechanism to spread their genes. I think there is an interesting analogy here to the Belousov-Zhabotinsky reactions, which occur in chemical media with the proper levels of diffusion and may result in successive waves of chemical changes sweeping back and forth over a space. I bet the math describing the BZ reaction and the Yamnaya-type expansions is broadly similar. -------------- next part -------------- An HTML attachment was scrubbed...
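[Editor's illustration of the wave analogy above. The Greenberg-Hastings cellular automaton is a standard toy caricature of an excitable medium such as the Belousov-Zhabotinsky reagent: each cell cycles quiescent → excited → refractory, and excitation spreads to quiescent neighbors, so one disturbance launches wavefronts that sweep across previously settled territory. The grid size and step count below are arbitrary choices, not anything from the thread:]

```python
# Greenberg-Hastings automaton: a toy excitable medium on a ring.
# States: 0 = quiescent, 1 = excited, 2 = refractory.
def step(cells):
    n = len(cells)
    nxt = []
    for i, state in enumerate(cells):
        if state == 0:
            # a quiescent cell fires if either neighbor is excited
            fires = cells[(i - 1) % n] == 1 or cells[(i + 1) % n] == 1
            nxt.append(1 if fires else 0)
        elif state == 1:
            nxt.append(2)  # excited cells become refractory
        else:
            nxt.append(0)  # refractory cells recover to quiescent
    return nxt

cells = [0] * 40
cells[0] = 1  # one excited site launches two counter-propagating waves
for _ in range(10):
    cells = step(cells)
fronts = [i for i, s in enumerate(cells) if s == 1]
print(fronts)  # after 10 steps the two fronts sit 10 cells out on each side
```

[Real BZ chemistry is governed by reaction-diffusion equations (e.g. the Oregonator model), of which this automaton is only a cartoon, but it captures the point of the analogy: local excitation plus a refractory period yields waves of change propagating through a settled medium.]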
URL: From spike at rainier66.com Tue Jun 18 18:48:55 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 18 Jun 2019 11:48:55 -0700 Subject: [ExI] The Yamnaya question In-Reply-To: References: <001601d5183d$0036f110$00a4d330$@rainier66.com> <004301d51882$f00963e0$d01c2ba0$@rainier66.com> <2029759869.7801518.1559496986931@mail.yahoo.com> <20190602110608.Horde.Ixvf-kmTl6IcXTMPSXHHn-y@secure199.inmotionhosting.com> Message-ID: <017d01d52606$7b97e8b0$72c7ba10$@rainier66.com> From: extropy-chat On Behalf Of Rafal Smigrodzki ### I am an amateur in population genetics, my actual work was in mitochondrial genetics and Parkinson's disease, so I would be poorly equipped to work on substantiating the hypothesis… While the idea that increased mobility in a population may trigger changes in fitness related to improved mating distance occurred to me independently, I would be surprised if nobody else among population geneticists ever thought about it before… Rafal Rafal, the problem with that notion is in determining the criteria for fitness. From the POV of evolution (pretending evolution has a POV) the fittest individuals might be the poverty-stricken inner city mother of a dozen half siblings and the fathers of those many offspring. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Tue Jun 18 19:53:14 2019 From: dsunley at gmail.com (Darin Sunley) Date: Tue, 18 Jun 2019 13:53:14 -0600 Subject: [ExI] Boston Dynamics: New Robots Now Fight Back In-Reply-To: <011e01d525fb$ca82b3c0$5f881b40$@rainier66.com> References: <89BDA2C1-75C0-43CB-A7B1-327F0C67961A@gmail.com> <02f501d52529$e50b5670$af220350$@rainier66.com> <005401d525e0$23163e60$6942bb20$@rainier66.com> <011e01d525fb$ca82b3c0$5f881b40$@rainier66.com> Message-ID: Re: deplorables and the urban/rural divide... Western civilization seems to be aggressively forgetting that urbanites need farmers a lot more than farmers need urbanites.
Nothing good will come of that. On Tue, Jun 18, 2019, 11:40 AM wrote: > > > *From:* extropy-chat *On Behalf > Of *Rafal Smigrodzki > *Subject:* Re: [ExI] Boston Dynamics: New Robots Now Fight Back > > > > > > > > On Tue, Jun 18, 2019 at 10:17 AM wrote: > > > > Since the discussion already went toward political considerations… The > risk of that scenario increases steadily. > > > > ### I'm reading Neal Stephenson's newest book, Fall. The jury is still > out, so far not too bad despite some deplorables-type demagoguery… > > > > My copy of Fall came in the mail yesterday. From what little I have read, > I am now seriously considering the possibility that Neal Stephenson is > Satoshi. > > > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Tue Jun 18 21:25:20 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 18 Jun 2019 17:25:20 -0400 Subject: [ExI] The Yamnaya question In-Reply-To: <017d01d52606$7b97e8b0$72c7ba10$@rainier66.com> References: <001601d5183d$0036f110$00a4d330$@rainier66.com> <004301d51882$f00963e0$d01c2ba0$@rainier66.com> <2029759869.7801518.1559496986931@mail.yahoo.com> <20190602110608.Horde.Ixvf-kmTl6IcXTMPSXHHn-y@secure199.inmotionhosting.com> <017d01d52606$7b97e8b0$72c7ba10$@rainier66.com> Message-ID: On Tue, Jun 18, 2019 at 2:48 PM wrote: > > > Rafal, the problem with that notion is in determining the criteria for > fitness. From the POV of evolution (pretending evolution has a POV) the > fittest individuals might be the poverty-stricken inner city mother of a > dozen half siblings and the fathers of those many offspring.
> > > ### Geneticists use the word "fitness" as a term of art, with a well-defined meaning, and yes, relatively dumb individuals living in rich modern societies enjoy a mild advantage in fitness so defined. That they parasitize the resources created by others who are mentally fitter in the common meaning is another issue, beyond the remit of the geneticist. And of course the genetically fittest people in America are nowadays religious practitioners such as orthodox Jews, the Amish, Muslims, and the Quiverfull. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Jun 19 13:08:46 2019 From: johnkclark at gmail.com (John Clark) Date: Wed, 19 Jun 2019 09:08:46 -0400 Subject: [ExI] Neven's Law Message-ID: In December 2018 researchers at Google solved a problem on their best quantum computer, but a low end laptop computer could solve it too. In January of this year an improved quantum computer solved a more complicated version of the problem that took a high end desktop computer to equal. By February they had to use Google's huge server network of conventional computers to do what the Quantum Computer did. Granted the problem chosen in the above was picked not because it was useful but because it best highlighted what a Quantum Computer could do; nevertheless a counterpart of Moore's law for conventional computers has been proposed called Neven's Law, named after the head of Google's Quantum Artificial Intelligence lab, Hartmut Neven. Neven's Law states that quantum computers are gaining computational power relative to conventional computers at a double exponential rate (2^(2^2), 2^(2^3), 2^(2^4), 2^(2^5), ...) If it turns out that Neven's law is anywhere close to being true then hold on to your butts because we're in for a bumpy ride. A New Law to Describe Quantum Computing's Rise? John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
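[Editor's note on scale: "double exponential" means the k-th term is 2^(2^k) rather than 2^k. A quick sanity check in plain Python — illustrative only; the range of k is an arbitrary choice, nothing Google-specific:]

```python
# Ordinary exponential growth (Moore's-law style) vs. the double-exponential
# growth rate that Neven's Law attributes to quantum computing.
ks = range(1, 6)
exponential = [2 ** k for k in ks]                # 2, 4, 8, 16, 32
double_exponential = [2 ** (2 ** k) for k in ks]  # 4, 16, 256, 65536, 4294967296
print(double_exponential)
```

[After five increments the ordinary sequence has reached 32 while the double-exponential one has passed four billion, which is why even an approximately true Neven's Law would mean a very abrupt crossover.]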
URL: From spike at rainier66.com Thu Jun 20 16:11:52 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jun 2019 09:11:52 -0700 Subject: [ExI] skerjillionaire gives to ai research Message-ID: <003401d52782$df9ba950$9ed2fbf0$@rainier66.com> Did you see this? https://www.cnn.com/2019/06/19/tech/stephen-schwarzman-oxford-ai-donation/index.html Kewallllll. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Jun 20 16:30:03 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jun 2019 09:30:03 -0700 Subject: [ExI] skerjillionaire gives to ai research In-Reply-To: <003401d52782$df9ba950$9ed2fbf0$@rainier66.com> References: <003401d52782$df9ba950$9ed2fbf0$@rainier66.com> Message-ID: <000201d52785$6a237560$3e6a6020$@rainier66.com> Here's another take on it: https://www.forbes.com/sites/daviddawkins/2019/06/19/billionaire-stephen-schwarzman-gifts-188-million-to-oxford-university/#14f6a9876ab7 spike From: spike at rainier66.com Sent: Thursday, June 20, 2019 9:12 AM To: 'ExI chat list' Cc: 'Anders Sandberg' ; spike at rainier66.com Subject: skerjillionaire gives to ai research Did you see this? https://www.cnn.com/2019/06/19/tech/stephen-schwarzman-oxford-ai-donation/index.html Kewallllll. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Jun 20 16:32:08 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 20 Jun 2019 11:32:08 -0500 Subject: [ExI] superflea! Message-ID: https://aeon.co/videos/a-nobel-laureate-and-a-flea-circus-join-forces-for-an-unforgettable-demonstration-of-inertia?utm_source=Aeon+Newsletter&utm_campaign=451efcdea2-EMAIL_CAMPAIGN_2019_06_18_07_20&utm_medium=email&utm_term=0_411a82e59d-451efcdea2-68993993 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Thu Jun 20 16:59:28 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jun 2019 09:59:28 -0700 Subject: [ExI] superflea! In-Reply-To: References: Message-ID: <001601d52789$85de2a30$919a7e90$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Sent: Thursday, June 20, 2019 9:32 AM To: ExI chat list Subject: [ExI] superflea! https://aeon.co/videos/a-nobel-laureate-and-a-flea-circus-join-forces-for-an-unforgettable-demonstration-of-inertia?utm_source=Aeon+Newsletter&utm_campaign=451efcdea2-EMAIL_CAMPAIGN_2019_06_18_07_20&utm_medium=email&utm_term=0_411a82e59d-451efcdea2-68993993 They won't let the students do fun politically incorrect things like this anymore. The middle school STEM Fair people specify no experiments with beasts of any kind, no fleas, no goldfish, nothing that has a face. It isn't clear to me that a snail has a face, but it could be argued that it does. The eyeballs are out there on… a proboscis of some kind, don't know what you call them, but they would say it is a no-go. OK how about an earthworm? It isn't clear that it has a face. But fleas, no go. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Jun 20 17:45:51 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jun 2019 10:45:51 -0700 Subject: [ExI] big paws Message-ID: <003401d52790$00f35aa0$02da0fe0$@rainier66.com> Wouldn't this be a fun toy? https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-amazon-ceo-jeff-bezos-dexterous-robot-hands?utm_campaign=Artificial%2BIntelligence%2BWeekly&utm_medium=email&utm_source=Artificial_Intelligence_Weekly_111 We could have a version of this which could be used to build rock walls and stuff, using a scaled-up hand with the arm articulated as our shoulder/elbow/wrist/finger arrangement. That would be a kick in the ass, ja?
We could scale it both directions: it could be scaled down to use as a medical or surgical device, or go places where you don't want to put your own paw, or what have ya (I don't think I need to offer detail please.) We could do Brobdingnagian Olympics Giant Rassling or Micro Games with them, or use them to create manufacturing procedures kinda like a physical 3-D version of a macro: you train it thru the motions, it programs on the fly and can repeat endlessly. Something like that would be great for assembly lines and such. One would think with all that money Bezos could afford a decent pair of pants. I would donate a pair of mine to the poor waif, but I am taller and waaay thinner than he is. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Jun 21 01:04:46 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 20 Jun 2019 18:04:46 -0700 Subject: [ExI] we did this Message-ID: <002d01d527cd$517e0550$f47a0ff0$@rainier66.com> This article has the ring of truth: https://nypost.com/2019/06/18/puppy-dog-eyes-were-developed-to-appeal-to-humans-research/ Many years ago I had a dog who was so good at this, one couldn't help but offer one's lunch. He had that whole awwwww dad, pleeeease look perfected, such that the hardest heart had to just hand over the ham sandwich. He wasn't just a cute little Staff terrier either, he was an 80 pound Doberman. -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Jun 21 16:02:57 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 21 Jun 2019 11:02:57 -0500 Subject: [ExI] ai emotions Message-ID: If you watch TV, you may have seen the commercial with the athletic robot, who hits golf balls, etc., then stops outside a bar and watches the people drink, and then slumps as if to say 'that's what I really want - companionship'.
If you have seen it, did you feel anything for the robot at all? A bit of pity, perhaps? Clearly that was intended. Was it successful? This is only the very beginning of programming the AIs to elicit responses in us. bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Jun 21 19:04:46 2019 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 21 Jun 2019 13:04:46 -0600 Subject: [ExI] ai emotions In-Reply-To: References: Message-ID: It goes back farther than that. The IKEA ad with the lamp, having been thrown out, sitting on the curb in the rain, camera angle carefully chosen to make it look like it's staring forlornly up through the window to its former place in the warm happy room where its replacement now shines, to increasingly despondent music. Then the spokesman steps into the shot and chides the audience. "Some of you are feeling sorry for this lamp. That's because you are crazy. It has no feelings, and the new one is much better." People can be made to anthropomorphize anything, and the more complex its behaviour, the easier it is. On Fri, Jun 21, 2019, 10:06 AM William Flynn Wallace wrote: > If you watch TV, you may have seen the commercial with the athletic robot, > who hits golf balls, etc., then stops outside a bar and watches the people > drink, and then slumps as if to say 'that's what I really want - > companionship'. > > If you have seen it, did you feel anything for the robot at all? A bit of > pity, perhaps? > > Clearly that was intended. Was it successful? > > This is only the very beginning of programming the AIs to elicit responses > in us. > > bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sparge at gmail.com Fri Jun 21 19:37:58 2019 From: sparge at gmail.com (Dave Sill) Date: Fri, 21 Jun 2019 15:37:58 -0400 Subject: [ExI] ai emotions In-Reply-To: References: Message-ID: It goes back farther than that: https://en.wikipedia.org/wiki/Luxo_Jr. https://www.youtube.com/watch?v=D4NPQ8mfKU0 And that's not even the beginning. -Dave On Fri, Jun 21, 2019 at 3:08 PM Darin Sunley wrote: > It goes back farther than that. > > The IKEA ad with the lamp, having been thrown out, sitting on the curb in > the rain, camera angle carefully chosen to make it look like it's staring > forlornly up through the window to its former place in the warm happy room > where its replacement now shines, to increasingly despondent music. > > Then the spokesman steps into the shot and chides the audience. "Some of > you are feeling sorry for this lamp. That's because you are crazy. It has > no feelings, and the new one is much better." > > People can be made to anthropomorphize anything, and the more complex its > behaviour, the easier it is. > > > On Fri, Jun 21, 2019, 10:06 AM William Flynn Wallace > wrote: > >> If you watch TV, you may have seen the commercial with the athletic >> robot, who hits golf balls, etc., then stops outside a bar and watches the >> people drink, and then slumps as if to say 'that's what I really want - >> companionship'. >> >> If you have seen it, did you feel anything for the robot at all? A bit >> of pity, perhaps? >> >> Clearly that was intended. Was it successful? >> >> This is only the very beginning of programming the AIs to elicit >> responses in us. 
>> >> bill w >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Jun 21 19:57:14 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 21 Jun 2019 14:57:14 -0500 Subject: [ExI] ai emotions In-Reply-To: References: Message-ID: I told Spike about Ted Chiang's new collection of stories and maybe some of you read his stuff. A long, 110 page story is about software and AIs as pets and more -book is Exhalation. bill w On Fri, Jun 21, 2019 at 2:41 PM Dave Sill wrote: > It goes back farther than that: > > https://en.wikipedia.org/wiki/Luxo_Jr. > https://www.youtube.com/watch?v=D4NPQ8mfKU0 > > And that's not even the beginning. > > -Dave > > > On Fri, Jun 21, 2019 at 3:08 PM Darin Sunley wrote: > >> It goes back farther than that. >> >> The IKEA ad with the lamp, having been thrown out, sitting on the curb in >> the rain, camera angle carefully chosen to make it look like it's staring >> forlornly up through the window to its former place in the warm happy room >> where its replacement now shines, to increasingly despondent music. >> >> Then the spokesman steps into the shot and chides the audience. "Some of >> you are feeling sorry for this lamp. That's because you are crazy. It has >> no feelings, and the new one is much better." >> >> People can be made to anthropomorphize anything, and the more complex its >> behaviour, the easier it is. 
>> >> On Fri, Jun 21, 2019, 10:06 AM William Flynn Wallace >> wrote: >> >>> If you watch TV, you may have seen the commercial with the athletic >>> robot, who hits golf balls, etc., then stops outside a bar and watches the >>> people drink, and then slumps as if to say 'that's what I really want - >>> companionship'. >>> >>> If you have seen it, did you feel anything for the robot at all? A bit >>> of pity, perhaps? >>> >>> Clearly that was intended. Was it successful? >>> >>> This is only the very beginning of programming the AIs to elicit >>> responses in us. >>> >>> bill w >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Fri Jun 21 22:53:04 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 21 Jun 2019 16:53:04 -0600 Subject: [ExI] ai emotions In-Reply-To: References: Message-ID: Once the emerging expert consensus 'Representational Qualia Theory' information becomes far more understood and accepted by the masses, intelligent people, at least, will understand how we can objectively know the difference between these 3 robots that are functionally the same, but qualitatively and emotionally very different. Abstract AIs deserve no compassion, no matter how they behave. On Fri, Jun 21, 2019 at 1:59 PM William Flynn Wallace wrote: > I told Spike about Ted Chiang's new collection of stories and maybe some > of you read his stuff.
A long, 110 page story is about software and AIs as > pets and more -book is Exhalation. bill w > > On Fri, Jun 21, 2019 at 2:41 PM Dave Sill wrote: > >> It goes back farther than that: >> >> https://en.wikipedia.org/wiki/Luxo_Jr. >> https://www.youtube.com/watch?v=D4NPQ8mfKU0 >> >> And that's not even the beginning. >> >> -Dave >> >> >> On Fri, Jun 21, 2019 at 3:08 PM Darin Sunley wrote: >> >>> It goes back farther than that. >>> >>> The IKEA ad with the lamp, having been thrown out, sitting on the curb >>> in the rain, camera angle carefully chosen to make it look like it's >>> staring forlornly up through the window to its former place in the warm >>> happy room where its replacement now shines, to increasingly despondent >>> music. >>> >>> Then the spokesman steps into the shot and chides the audience. "Some of >>> you are feeling sorry for this lamp. That's because you are crazy. It has >>> no feelings, and the new one is much better." >>> >>> People can be made to anthropomorphize anything, and the more complex >>> its behaviour, the easier it is. >>> >>> >>> On Fri, Jun 21, 2019, 10:06 AM William Flynn Wallace < >>> foozler83 at gmail.com> wrote: >>> >>>> If you watch TV, you may have seen the commercial with the athletic >>>> robot, who hits golf balls, etc., then stops outside a bar and watches the >>>> people drink, and then slumps as if to say 'that's what I really want - >>>> companionship'. >>>> >>>> If you have seen it, did you feel anything for the robot at all? A bit >>>> of pity, perhaps? >>>> >>>> Clearly that was intended. Was it successful? >>>> >>>> This is only the very beginning of programming the AIs to elicit >>>> responses in us. 
>>>> >>>> bill w >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sat Jun 22 07:55:44 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 22 Jun 2019 00:55:44 -0700 Subject: [ExI] ai emotions In-Reply-To: <1513760083.218438.1561187685700@mail.yahoo.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> Message-ID: <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Quoting Brent Allsop: > "Intelligent people, at least will understand how we can > objectively know the difference between these 3 robots that are > functionally the same, but qualitatively and emotionally very > different." Abstract AIs deserve no compassion, no matter how they > behave. By your logic, Mary deserves no compassion before she is released from her colorless prison. I am not quite sure that many examples of an "abstract AI" exist since neural networks usually map real world inputs to abstract outputs. For example, face recognition software would learn to map images of your face to the abstract string of letters "Brent". How is that fundamentally different than one of your friends recognizing you in the mall as "Brent"? Your notion could set a bad precedent.
What if someday AIs think humans deserve no compassion because we are not able to phenomenally experience ultraviolet or radiowaves? Or that we are not conscious because we are not phenomenologically conscious at nanosecond time scales. Stuart LaForge From msd001 at gmail.com Sat Jun 22 12:49:27 2019 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 22 Jun 2019 08:49:27 -0400 Subject: [ExI] ai emotions In-Reply-To: <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Sat, Jun 22, 2019, 8:03 AM Stuart LaForge wrote: > > Your notion could set a bad precedent. What if someday AIs think humans > deserve no compassion because we are not able to phenomenally > experience ultraviolet or radiowaves? Or that we are not conscious > because we are not phenomenologically conscious at nanosecond time > scales. > Compassion shouldn't be measured by how much the recipient deserves, instead by how much the giver gives. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Jun 22 13:30:05 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 22 Jun 2019 08:30:05 -0500 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: mike wrote - Compassion shouldn't be measured by how much the recipient deserves, instead by how much the giver gives. so on the average rich people have more compassion than poor people bill w On Sat, Jun 22, 2019 at 7:53 AM Mike Dougherty wrote: > On Sat, Jun 22, 2019, 8:03 AM Stuart LaForge wrote: >> >> Your notion could set a bad precedent. What if someday AI think humans >> deserve no compassion because we are not able to phenomenally >> experience ultraviolet or radiowaves?
Or that we are not conscious >> because we are not phenomenalogically conscious at nanosecond time >> scales. >> > > Compassion shouldn't be measured by how much the recipient deserves, > instead by how much the giver gives. > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Jun 22 14:33:55 2019 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 22 Jun 2019 10:33:55 -0400 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Sat, Jun 22, 2019, 9:34 AM William Flynn Wallace wrote: > mike wrote - Compassion shouldn't be measured by how much the recipient > deserves, instead by how much the giver gives. > > so on the average rich people have more compassion than poor people bill w > Wow, no. I don't even know how to get from what I meant to how you interpreted it. Are you suggesting that compassion=money? -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Jun 22 17:22:58 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 22 Jun 2019 12:22:58 -0500 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Sat, Jun 22, 2019 at 9:38 AM Mike Dougherty wrote: > On Sat, Jun 22, 2019, 9:34 AM William Flynn Wallace > wrote: > >> mike wrote - Compassion shouldn't be measured by how much the recipient >> deserves, instead by how much the giver gives. >> >> so on the average rich people have more compassion than poor people bill >> w >> > > Wow, no. 
I don't even know how to get from what I meant to how you > interpreted it. > > Are you suggesting that compassion=money? > Very often it does - money to churches and synagogues and the rest - other charities. What other way to measure it did you have in mind? Time of course - right Spike? bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Jun 22 17:52:41 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 22 Jun 2019 10:52:41 -0700 Subject: [ExI] effective altruism: RE: ai emotions Message-ID: <00c801d52923$4a3f3c30$debdb490$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Sent: Saturday, June 22, 2019 10:23 AM To: ExI chat list Subject: Re: [ExI] ai emotions On Sat, Jun 22, 2019 at 9:38 AM Mike Dougherty > wrote: On Sat, Jun 22, 2019, 9:34 AM William Flynn Wallace > wrote: mike wrote - Compassion shouldn't be measured by how much the recipient deserves, instead by how much the giver gives. so on the average rich people have more compassion than poor people bill w Wow, no. I don't even know how to get from what I meant to how you interpreted it. Are you suggesting that compassion=money? >…Very often it does - money to churches and synagogues and the rest - other charities. What other way to measure it did you have in mind? Time of course - right Spike? bill w One of our most prominent ExI-ers has been involved with Effective Altruism. https://www.effectivealtruism.org/ While noting that this thread has drifted off of its original purpose, perhaps mistakenly, do allow me to run with the ball on this, since we (somehow) are here now.
If a guy has a pile of money and has everything he wants, has been everywhere he wants to go, has no burning desire to go back there, is happily married, his kids and his alma mater are all in fine shape, OK now that guy wants to give something meaningful to humanity. We know of ways that can be done wrong. Note all the ways Bill Gates has been criticized, but sheesh come on! The guy really means well, and OK then, meaning well isn't always doing well. OK then, suppose someone has buttloads of money and really wants to do good deeds with it. Step 1: hire some engineers and business analytical types to advise on how to do the most good with a given… well… buttload. Have them do as you would if you were starting a business or investing: calculate the expected return, then put your money where you do the most good. Ja? OK, suppose you don't have a pile of money but you want to do good things. The principle here is to consider what you have to offer. It doesn't need to be money, and here's a fun observation: there are plenty of worthy organizations doing good things which already have money. They have enough. However… they don't have enough to hire people. That really really costs a bunch of money. The local scout troop for instance: they have money. They have enough to buy the things they need. But they can't really hire someone, because if they did, they wouldn't have money for long. So… work with the scouts as a volunteer. Plenty of school organizations have money. But they don't have volunteers, particularly ones with specific skills such as guys with tech educations to volunteer for Science Olympiad. So… I am a volunteer for scouts and Science Olympiad. These things don't even cost me money (not much anyway) and it allows me to give what I have the most of: time and brains. I have some time, but I have even more brains to offer. I have BUTTLOADS of brains!
(OK perhaps I need a different unit of measure on that one (for some smartass will surely ask: OK spike how did it get there to start with?)) This is the spirit of effective altruism: give what you have the most of, or the best value to the receiver. It isn't necessarily money. It might be money, if you have plenty of that, but one more thing please: if you are looking for a good deed that doesn't cost anything and doesn't even take all that much time: go to a nursing home, particularly one for AD patients, look, listen and talk. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Jun 22 21:29:06 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 22 Jun 2019 16:29:06 -0500 Subject: [ExI] buttloads of medication Message-ID: From my main source of health information, Reader's Digest, comes the following: "Mister Manners suggests that the proper response after passing gas is 'Pardon me'. But a study from the University of Exeter found that being exposed to the gas released in flatulence may help stave off heart attacks, strokes, and dementia. Perhaps the correct response should be 'You're welcome.'" First fecal transplants, now this. All sorts of uses for used food. Those medical doctors are a strange bunch. bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jun 24 03:01:59 2019 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 23 Jun 2019 20:01:59 -0700 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Sat, Jun 22, 2019 at 5:53 AM Mike Dougherty wrote: > Compassion shouldn't be measured by how much the recipient deserves, > instead by how much the giver gives. > Relative to how much they can give, or on an absolute scale?
"A dime from a peasant means more than a dollar from a rich man", or "if you don't have access to large amounts of money then you can't be compassionate"? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Jun 24 03:29:49 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 23 Jun 2019 20:29:49 -0700 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> From: extropy-chat On Behalf Of Adrian Tymes Sent: Sunday, June 23, 2019 8:02 PM To: ExI chat list Subject: Re: [ExI] ai emotions On Sat, Jun 22, 2019 at 5:53 AM Mike Dougherty > wrote: Compassion shouldn't be measured by how much the recipient deserves, instead by how much the giver gives. Relative to how much they can give, or on an absolute scale? "A dime from a peasant means more than a dollar from a rich man", or "if you don't have access to large amounts of money then you can't be compassionate"? Give whatever you have the best of. Many recipients have plenty of money, just not enough to hire humans. Plenty to do what they do, not even close to enough to hire. So give time and skill. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at gmail.com Mon Jun 24 03:59:53 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 23 Jun 2019 21:59:53 -0600 Subject: [ExI] ai emotions In-Reply-To: <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> Message-ID: I've been confronting Naive Realists bleating completely qualia-blind rhetoric (using only one word, 'red', instead of multiple words like red and redness to talk about different physical properties and qualities) on places like quora and reddit. Sometimes it is so frustrating that so many people just can't think. But then when I return to the ExI list, I see responses like the following that finally take me out of that gutter, and lift me up to the clouds, providing new thoughts I've never considered before, like: Stuart LaForge: "By your logic, Mary deserves no compassion before she is released from her colorless prison." Mike Dougherty: "Compassion shouldn't be measured by how much the recipient deserves, instead by how much the giver gives." William Flynn Wallace: "so on the average rich people have more compassion than poor people" After feeling so dirty, and frustrated with so little progress, with so many, it is so nice to be pulled back up in the clouds, trying to keep up with you guys taking me where I've never been before. Thanks everyone, for providing such an inspiring forum, for so many continued years, and for restoring my faith in humanity so often. On Sat, Jun 22, 2019 at 11:26 AM William Flynn Wallace wrote: On Sat, Jun 22, 2019 at 9:38 AM Mike Dougherty wrote: On Sat, Jun 22, 2019, 9:34 AM William Flynn Wallace wrote: mike wrote - Compassion shouldn't be measured by how much the recipient deserves, instead by how much the giver gives. so on the average rich people have more compassion than poor people bill w Wow, no.
I don't even know how to get from what I meant to how you interpreted it. Are you suggesting that compassion=money? Very often it does - money to churches and synagogues and the rest - other charities. What other way to measure it did you have in mind? Time of course - right Spike? bill w _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat On Sun, Jun 23, 2019 at 9:32 PM wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Adrian Tymes > *Sent:* Sunday, June 23, 2019 8:02 PM > *To:* ExI chat list > *Subject:* Re: [ExI] ai emotions > > > > On Sat, Jun 22, 2019 at 5:53 AM Mike Dougherty wrote: > > Compassion shouldn't be measured by how much the recipient deserves, > instead by how much the giver gives. > > > > Relative to how much they can give, or on an absolute scale? > > > > "A dime from a peasant means more than a dollar from a rich man", or "if > you don't have access to large amounts of money then you can't be > compassionate"? > > > > > > > > > > > > > > > > > > > > > > > > > > > > Give whatever you have the best of. > > > > Many recipients have plenty of money, just not enough to hire humans. > Plenty to do what they do, not even close to enough to hire. So give time > and skill. > > > > spike > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafal.smigrodzki at gmail.com Tue Jun 25 10:06:44 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 25 Jun 2019 06:06:44 -0400 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: > On Sat, Jun 22, 2019 at 5:53 AM Mike Dougherty wrote: > >> >> > "A dime from a peasant means more than a dollar from a rich man", or "if > you don't have access to large amounts of money then you can't be > compassionate"? > >> >> > _______________________________________________ > ### The latter is closer to being true, of course, - if you don't have a lot of money or other useful resources, any goody-goody feelings you may have don't matter. A bit like "When a tree falls in the forest and nobody hears it, it doesn't matter if it makes a sound". -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue Jun 25 10:56:58 2019 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 25 Jun 2019 06:56:58 -0400 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Tue, Jun 25, 2019, 6:11 AM Rafal Smigrodzki wrote: > > >> ### The latter is closer to being true, of course, - if you don't have a > lot of money or other useful resources, any goody-goody feelings you may > have don't matter. A bit like "When a tree falls in the forest and nobody > hears it, it doesn't matter if it makes a sound". > > > What this thread has proven to me is that some of the smartest people I know do not understand compassion, so the likelihood that "ai emotions" will be any better is small (assuming ai will be "learning" such things from other "smart people") The word is rooted in "co-suffering" and is about experiencing someone's situation as one's own. 
Simply sharing "I understand" could be an act of compassion. Our world is so lacking in this feature that even the commonly understood meaning of the word has been lost. That seems very Newspeak to me. (smh) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Jun 25 12:43:43 2019 From: pharos at gmail.com (BillK) Date: Tue, 25 Jun 2019 13:43:43 +0100 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Tue, 25 Jun 2019 at 12:02, Mike Dougherty wrote: > > On Tue, Jun 25, 2019, 6:11 AM Rafal Smigrodzki wrote: >> ### The latter is closer to being true, of course, - if you don't have a lot of money or other useful resources, any goody-goody feelings you may have don't matter. A bit like "When a tree falls in the forest and nobody hears it, it doesn't matter if it makes a sound". >> >> > > What this thread has proven to me is that some of the smartest people I know do not understand compassion, so the likelihood that "ai emotions" will be any better is small (assuming ai will be "learning" such things from other "smart people") > > The word is rooted in "co-suffering" and is about experiencing someone's situation as one's own. Simply sharing "I understand" could be an act of compassion. Our world is so lacking in this feature that even the commonly understood meaning of the word has been lost. That seems very Newspeak to me. (smh) > _______________________________________________ That is a well-known problem for AI. If humans are biased, then what they build will have the same biases built in. They may try to avoid some of their known biases, like racial prejudice for example, but all their unconscious biases will still be there. Even the training data fed to AIs will be biased. On the other hand, humans don't want cold merciless AI intelligences. 
They want AIs to be biased to look after humans. But not *all* humans, of course. Humans can feel compassion for people that they are bombing to destruction, but still feel duty bound to continue for many reasons. Human society is a confused mess and it is likely that if AIs become autonomous they will also follow that path. BillK From foozler83 at gmail.com Tue Jun 25 13:19:48 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 25 Jun 2019 08:19:48 -0500 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: These two words are used in ambiguous ways: sympathy; empathy Sympathy - have you ever seen one of the old (1700s) cellos where there are extra strings that are not bowed or plucked? What they do is to vibrate when the other strings make certain sounds - in sympathy. So this word, to me, says that if a person feels sympathy then he feels the same as the person he is observing. On my grammar school yard a boy tossed his cookies, and shortly thereafter another boy then another. Watching it made them feel exactly the same - nauseous. Watching a person cry and sob can make you very sad too. Empathy is when you care about another person's feelings and may have had them yourself before, but are not literally feeling what they feel. I think that one or the other of these has to be present, or we could not enjoy sports, novels or anything that has characters or teams that we identify with. On the other hand, presumably we enjoy, though that might not be the right word, horror stories and movies, family tragedy stories, or in fact anything that elicits emotions in people. A gigantic online industry is built on eliciting emotions: sexual arousal. I don't know how you define these words, or the word compassion, but I think compassion might be either sympathy or empathy. 
bill w On Tue, Jun 25, 2019 at 7:48 AM BillK wrote: > On Tue, 25 Jun 2019 at 12:02, Mike Dougherty wrote: > > > > On Tue, Jun 25, 2019, 6:11 AM Rafal Smigrodzki < > rafal.smigrodzki at gmail.com> wrote: > >> ### The latter is closer to being true, of course, - if you don't have > a lot of money or other useful resources, any goody-goody feelings you may > have don't matter. A bit like "When a tree falls in the forest and nobody > hears it, it doesn't matter if it makes a sound". > >> > >> > > > > What this thread has proven to me is that some of the smartest people I > know do not understand compassion, so the likelihood that "ai emotions" > will be any better is small (assuming ai will be "learning" such things > from other "smart people") > > > > The word is rooted in "co-suffering" and is about experiencing someone's > situation as one's own. Simply sharing "I understand" could be an act of > compassion. Our world is so lacking in this feature that even the > commonly understood meaning of the word has been lost. That seems very > Newspeak to me. (smh) > > _______________________________________________ > > > That is a well-known problem for AI. If humans are biased, then what > they build will have the same biases built in. They may try to avoid > some of their known biases, like racial prejudice for example, but all > their unconscious biases will still be there. Even the training data > fed to AIs will be biased. > On the other hand, humans don't want cold merciless AI intelligences. > They want AIs to be biased to look after humans. > But not *all* humans, of course. > Humans can feel compassion for people that they are bombing to > destruction, but still feel duty bound to continue for many reasons. > Human society is a confused mess and it is likely that if AIs become > autonomous they will also follow that path. 
> > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Jun 25 15:55:35 2019 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 25 Jun 2019 08:55:35 -0700 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Tue, Jun 25, 2019, 3:10 AM Rafal Smigrodzki wrote: > "A dime from a peasant means more than a dollar from a rich man", or "if >> you don't have access to large amounts of money then you can't be >> compassionate"? >> > > ### The latter is closer to being true, of course, - if you don't have a > lot of money or other useful resources, any goody-goody feelings you may > have don't matter. A bit like "When a tree falls in the forest and nobody > hears it, it doesn't matter if it makes a sound". > That is not always true. Often, just a few of the right resources (not worth much on an absolute basis), or even just moral support (which practically anyone can give) or wisdom/insight (which costs naught but a bit of time to share), can be of significant use. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafal.smigrodzki at gmail.com Tue Jun 25 16:15:27 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 25 Jun 2019 12:15:27 -0400 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Tue, Jun 25, 2019 at 11:59 AM Adrian Tymes wrote: > On Tue, Jun 25, 2019, 3:10 AM Rafal Smigrodzki > wrote: > >> "A dime from a peasant means more than a dollar from a rich man", or "if >>> you don't have access to large amounts of money then you can't be >>> compassionate"? >>> >> >> ### The latter is closer to being true, of course, - if you don't have a >> lot of money or other useful resources, any goody-goody feelings you may >> have don't matter. A bit like "When a tree falls in the forest and nobody >> hears it, it doesn't matter if it makes a sound". >> > > That is not always true. Often, just a few of the right resources (not > worth much on an absolute basis), or even just moral support (which > practically anyone can give) or wisdom/insight (which costs naught but a > bit of time to share), can be of significant use. > ### As I wrote, "or other useful resources" - wisdom being one of them. And moral support in its pure form is pretty weak sauce. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafal.smigrodzki at gmail.com Tue Jun 25 16:19:55 2019 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 25 Jun 2019 12:19:55 -0400 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: On Tue, Jun 25, 2019 at 6:57 AM Mike Dougherty wrote: > On Tue, Jun 25, 2019, 6:11 AM Rafal Smigrodzki > wrote: > >> >> >>> ### The latter is closer to being true, of course, - if you don't have a >> lot of money or other useful resources, any goody-goody feelings you may >> have don't matter. A bit like "When a tree falls in the forest and nobody >> hears it, it doesn't matter if it makes a sound". >> >> >> > What this thread has proven to me is that some of the smartest people I > know do not understand compassion, so the likelihood that "ai emotions" > will be any better is small (assuming ai will be "learning" such things > from other "smart people") > > The word is rooted in "co-suffering" and is about experiencing someone's > situation as one's own. Simply sharing "I understand" could be an act of > compassion. Our world is so lacking in this feature that even the > commonly understood meaning of the word has been lost. That seems very > Newspeak to me. (smh) > ### Compassion not supported by cold hard cash is a cheap thing to offer. Maybe that's why there is so much compassion making the rounds, without a corresponding amount of problem-solving. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Tue Jun 25 16:29:29 2019 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 25 Jun 2019 10:29:29 -0600 Subject: [ExI] Panspermia ..... Life on Mars ..... and elsewhere Message-ID: This is not where I started, but check it out, and if you don't have a lot of time, start at 6:51. 
https://www.youtube.com/watch?v=PbgB2TaYhio The point I take from this is that normally we think of life processes as proceeding at a certain pace. We don't often think of the pace -- the "time" variable -- of life as having a broad range. ("fast": bacteria: 20 min reproductive cycle; slow: redwood tree reproductive maturity: a few years, but lifespan: thousands of years.) But this TED talk on these deep sea microbes suggests that some life forms -- bacterial -- have an astonishingly low metabolic rate, so that their lifespans cover -- what? -- tens of thousands? ... millions? ... of years? This dovetails quite nicely with Thomas Gold's "Deep Hot Biosphere" thesis. In that theory the microbes deep in the Earth use enzymatic methods to extract excess energy from rocks formed deep in the earth at high temperature and pressure. It's a very "thin gruel" so these life forms must live very slowly. And I would add this further point. The theory of panspermia, the transmission of life across space on ejecta, requires travel times of thousands or millions of years. Which suggests that life-forms with extremely low rates of metabolism and very long life spans would be particularly well-suited to make the trip and seed other planets. Then, after landing on a nutrient-rich planet, these life-forms would evolve faster rates of metabolism. Here's where I started. Mars Curiosity Detects High Methane Levels Which Could Be Life https://www.nextbigfuture.com/2019/06/mars-curiosity-detects-high-methane-levels-which-could-be-life.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2Fadvancednano+%28nextbigfuture%29 ... that brought the various puzzle pieces together. I predict we will find life nearly everywhere. I suspect that the notion of the Goldilocks's Zone embodies a misconception: that the zone of life exists exclusively at a certain orbital distance from whatever particular star. (Because liquid water can exist at that orbital distance.)
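[The orbital-distance reasoning behind the conventional Goldilocks Zone can be sketched with the standard blackbody equilibrium-temperature estimate. This is a rough sketch only: constants are approximate, and it deliberately ignores greenhouse warming and internal heat -- the very effects the argument below turns on.]

```python
import math

def equilibrium_temp(t_star, r_star, distance, albedo):
    """Rough blackbody equilibrium temperature of a planet:
    T_eq = T_star * sqrt(R_star / (2 * d)) * (1 - albedo) ** 0.25
    Ignores greenhouse effects and internal heating entirely."""
    return t_star * math.sqrt(r_star / (2.0 * distance)) * (1.0 - albedo) ** 0.25

T_SUN = 5772.0   # K, solar effective temperature
R_SUN = 6.957e8  # m, solar radius
AU = 1.496e11    # m

# Earth: ~255 K by this measure, i.e. below freezing;
# greenhouse warming supplies the remaining ~33 K.
print(round(equilibrium_temp(T_SUN, R_SUN, 1.0 * AU, 0.3)))  # 255
# Jupiter's distance (5.2 AU): far too cold at the cloud tops.
print(round(equilibrium_temp(T_SUN, R_SUN, 5.2 * AU, 0.3)))  # 112
```

[By this textbook measure only a narrow band of orbital distances is "habitable" -- which is exactly the assumption questioned below.]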
But I would suggest that more distant planets have their own little Goldilocks's Zone, somewhere below the frigid cloud tops. Follow the thermal profile down, and you must inevitably reach a region of suitable temperature (and pressure?). Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Tue Jun 25 17:26:44 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 25 Jun 2019 12:26:44 -0500 Subject: [ExI] ai emotions In-Reply-To: References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> Message-ID: rafal wrote: Compassion not supported by cold hard cash is a cheap thing to offer. Maybe that's why there is so much compassion making the rounds, without a corresponding amount of problem-solving. I would not discount hugs. Touching is very special to humans. I think it is an interesting evolution question why we care about those near to us and very little about those far away. I see a lot of advertising of charities, some featuring a whole page of kids with cleft palates. I have seen research that strongly suggests that the charity would reap a lot more money if it were just one kid. If too many people are involved, then your little contribution won't really make any difference. I think that the charities that let a donor donate straight to a certain individual and even exchange letters with them do better than donations made to anonymous people. Who could resist a Down's Syndrome kid asking for donations at your front door? (and why don't they?) (yes, I know it's now supposed to be Down Syndrome but I don't know why - are we going to change Alzheimer's disease to Alzheimer disease? Rafal?)
bill w On Tue, Jun 25, 2019 at 11:28 AM Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > > > On Tue, Jun 25, 2019 at 6:57 AM Mike Dougherty wrote: > >> On Tue, Jun 25, 2019, 6:11 AM Rafal Smigrodzki < >> rafal.smigrodzki at gmail.com> wrote: >> >>> >>> >>>> ### The latter is closer to being true, of course, - if you don't have >>> a lot of money or other useful resources, any goody-goody feelings you may >>> have don't matter. A bit like "When a tree falls in the forest and nobody >>> hears it, it doesn't matter if it makes a sound". >>> >>> >>> >> What this thread has proven to me is that some of the smartest people I >> know do not understand compassion, so the likelihood that "ai emotions" >> will be any better is small (assuming ai will be "learning" such things >> from other "smart people") >> >> The word is rooted in "co-suffering" and is about experiencing someone's >> situation as one's own. Simply sharing "I understand" could be an act of >> compassion. Our world is so lacking in this feature that even the >> commonly understood meaning of the word has been lost. That seems very >> Newspeak to me. (smh) >> > > ### Compassion not supported by cold hard cash is a cheap thing to offer. > Maybe that's why there is so much compassion making the rounds, without a > corresponding amount of problem-solving. > > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Tue Jun 25 22:35:33 2019 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 25 Jun 2019 15:35:33 -0700 Subject: [ExI] Atlantic article on cryonics Message-ID: Hi folks https://www.theatlantic.com/video/index/591979/cryonics/ It is about the Cryonics Institute. Nice job. 
Best wishes, Keith From hkeithhenson at gmail.com Tue Jun 25 23:35:31 2019 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 25 Jun 2019 16:35:31 -0700 Subject: [ExI] A real engineering effort Message-ID: Hi Since we now seem to have a proposal (build power satellites at 2000 km or higher, chemical propulsion from LEO to 2000 km) that gets around the space junk problem, I am seriously considering working out a detailed plan. After a number of years of trying to get volunteers, I fully understand that the design effort involved needs to be compensated. If any of you are interested in working on this project, you can post your interest on the PSE group or send an expression of interest directly to me. If there are at least half a dozen interested, I will make a try at raising enough money to pay people. (Starting at part-time.) I am not sure what exact skills that will be needed. It will involve physical and economic models so skills in engineering and Excel or related modeling software would be appropriate. I just added a bunch of people to this group. If you know anyone who should be on the list, let them know about it. They can add themselves or email me a request. Also, it is easy to get off the group, just click unsubscribe on any message. Keith From hkeithhenson at gmail.com Wed Jun 26 00:26:06 2019 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 25 Jun 2019 17:26:06 -0700 Subject: [ExI] A real engineering effort In-Reply-To: <5d12b3c9.1c69fb81.a36b8.6027@mx.google.com> References: <5d12b3c9.1c69fb81.a36b8.6027@mx.google.com> Message-ID: On Tue, Jun 25, 2019 at 4:52 PM Bill Gardiner wrote: > > Keith, > > When you need a human health factors person, Sigh. It is possible that humans could be involved in making up the cargo stacks in 300 km LEO before they are pushed up to a construction orbit around 2000 km. And perhaps in maintaining the chemical tugs in LEO that move the stacks. 
But the radiation at 2000 km is (I think) way too high for humans to tolerate. So I expect it will be all robots for construction. Keith PS. I don't like this situation. But I don't at this point know of any other way to build power satellites where they are not banged up by junk as they are being moved into place. Ideas on this topic are welcome. Please contact me. If any humans are involved in the build-out phase, they will need to start detoxing well in advance of first launch. > > > > Have human skin, will travel. > > > > Bill > > > > Sent from Mail for Windows 10 > > > > From: Keith Henson > Sent: Tuesday, June 25, 2019 7:36 PM > To: ExI chat list; Power Satellite Economics > Subject: A real engineering effort > > > > Hi > > > > Since we now seem to have a proposal (build power satellites at 2000 > km or higher, chemical propulsion from LEO to 2000 km) that gets > around the space junk problem, I am seriously considering working out > a detailed plan. > > > > After a number of years of trying to get volunteers, I fully > understand that the design effort involved needs to be compensated. > > > > If any of you are interested in working on this project, you can post > your interest on the PSE group or send an expression of interest > directly to me. If there are at least half a dozen interested, I will > make a try at raising enough money to pay people. (Starting at > part-time.) > > > > I am not sure what exact skills that will be needed. It will involve > physical and economic models so skills in engineering and Excel or > related modeling software would be appropriate. > > > > I just added a bunch of people to this group. If you know anyone who > should be on the list, let them know about it. They can add > themselves or email me a request. > > > > Also, it is easy to get off the group, just click unsubscribe on any message.
> > > Keith > > > > -- > > You received this message because you are subscribed to the Google Groups "Power Satellite Economics" group. > > To unsubscribe from this group and stop receiving emails from it, send an email to power-satellite-economics+unsubscribe at googlegroups.com. > > To post to this group, send email to power-satellite-economics at googlegroups.com. > > To view this discussion on the web visit https://groups.google.com/d/msgid/power-satellite-economics/CAPiwVB66PtHyZr9E3BzO4Pw2jVYcY4hGzb9tF15U_2OzL7hOgw%40mail.gmail.com. > > For more options, visit https://groups.google.com/d/optout. > > From avant at sollegro.com Wed Jun 26 02:22:06 2019 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 25 Jun 2019 19:22:06 -0700 Subject: [ExI] A real engineering effort In-Reply-To: <744250300.29999.1561514597742@mail.yahoo.com> References: <5d12b3c9.1c69fb81.a36b8.6027@mx.google.com> <744250300.29999.1561514597742@mail.yahoo.com> Message-ID: <20190625192206.Horde.Ud_poEZe6nvdI1sxSMnBnXN@secure199.inmotionhosting.com> Quoting Keith Henson: > PS. I don't like this situation. But I don't at this point know of > any other way to build power satellites where they are not banged up > by junk as they are being moved into place. Ideas on this topic are > welcome. In regards to space junk in general, I would suggest occasionally launching a giant gold or platinum foil balloon, shaped into a giant Bucky ball by carbon fiber or inflated by a low-pressure, high-density gas like sulfur hexafluoride, into orbit just below GEO and letting its orbit slowly decay. It should slowly deorbit from collisions with space junk and air molecules. The ductility of gold or platinum should be sufficient to trap most space debris, even at high velocity, soaking it up like a sponge and consolidating it all in one convenient package for deorbit. One could then track or control the deorbit and be able to recycle the gold or platinum accordingly.
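[Back-of-envelope numbers for the foil-balloon idea above -- a sketch with invented, illustrative dimensions, since the ideal thickness is exactly the unknown acknowledged below:]

```python
import math

GOLD_DENSITY = 19_300.0  # kg/m^3

def foil_mass(radius_m, thickness_m, density=GOLD_DENSITY):
    """Mass of a thin spherical foil shell: surface area * thickness * density."""
    return 4.0 * math.pi * radius_m ** 2 * thickness_m * density

# Hypothetical example: a 100 m radius balloon skinned in 0.1 mm gold foil.
m = foil_mass(100.0, 1e-4)
print(f"{m / 1000:.0f} tonnes")  # 243 tonnes
```

[Even a thin skin at that scale is hundreds of tonnes of gold to orbit, which is why the foil thickness, not the balloon size, is the quantity that would need experimental pinning down.]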
Obviously some experimentation would be needed to find the ideal thickness of the gold or platinum foil. Stuart LaForge From avant at sollegro.com Thu Jun 27 15:01:14 2019 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 27 Jun 2019 08:01:14 -0700 Subject: [ExI] ai emotions In-Reply-To: <1433067761.991465.1561460257788@mail.yahoo.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> <1433067761.991465.1561460257788@mail.yahoo.com> Message-ID: <20190627080114.Horde._uM_dIMEnFmH59zmQYFVuBH@secure199.inmotionhosting.com> Quoting Brent Allsop: > I've been confronting Naive Realists bleating completely qualia > blind (only use one word "red", instead of multiple words like red > and redness to talk about different physical properties and > qualities.) rhetoric on places like quora and reddit. Redness is the knowledge of red. Consciousness is like a physical/mathematical function by which information becomes knowledge, which is system-integrated information. Redness is thereby a function of red. To borrow Tononi's notation, redness = PHI(red). What is the distinction between consciousness and the ability to actively learn? I have trouble seeing one. In some respects, the point of consciousness seems to be to find out what happens next. > Sometimes it is so frustrating that so many people just can't think. It is only frustrating if you think about it. Just kidding, of course, but if you need to reach people like that, then try appealing to their emotions. It often works better than logic. Logic is overrated anyhow. If you start with incorrect premises, then you reach wrong conclusions. GIGO applies to people as well as learning machines.
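[The functional picture sketched above -- qualia as arbitrary internal tokens returned by a mapping from stimulus to knowledge -- can be put in toy code. All names and token values here are invented for illustration; this is a cartoon of the "redness = PHI(red)" notation, not a model of consciousness:]

```python
# Two observers encode the same wavelength with different internal
# "qualia" tokens, yet share a public label -- so they always agree on
# what is red even if their inner encodings are inverted.

def make_observer(internal_tokens):
    # internal_tokens: an arbitrary private encoding, e.g. {"red": "quale_1"}
    def perceive(wavelength_nm):
        label = "red" if 620 <= wavelength_nm <= 750 else "not-red"
        return label, internal_tokens[label]
    return perceive

alice = make_observer({"red": "quale_1", "not-red": "quale_2"})
bob = make_observer({"red": "quale_2", "not-red": "quale_1"})  # "inverted" inside

label_a, quale_a = alice(650)
label_b, quale_b = bob(650)
assert label_a == label_b == "red"  # public agreement
assert quale_a != quale_b           # private encodings differ
```

[On this view the private token is irrelevant to communication; Brent's objection, later in the thread, is that the token itself is the thing needing explanation.]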
> After feeling so dirty, and frustrated with so little progress, with > so many, it is so nice to be pulled back up in the clouds, trying to > keep up with you guys taking me where I've never been before. > Thanks everyone, for providing such an inspiring forum, for so many > continued years, and for restoring my faith in humanity so often. You have done your part to make the list an interesting forum, Brent, so one list member to another, thank you as well. Here is a paper about consciousness by physicist Max Tegmark you might like, entitled "Consciousness as a State of Matter". I notice you don't have him or that particular camp set up on Canonizer. https://arxiv.org/abs/1401.1219 He is a little all over the place, but his ideas are interesting and overlap some of mine. I still have to see if I can reconcile our maths but that will take time. Stuart LaForge From brent.allsop at gmail.com Thu Jun 27 17:09:16 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 27 Jun 2019 11:09:16 -0600 Subject: [ExI] ai emotions In-Reply-To: <20190627080114.Horde._uM_dIMEnFmH59zmQYFVuBH@secure199.inmotionhosting.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> <1433067761.991465.1561460257788@mail.yahoo.com> <20190627080114.Horde._uM_dIMEnFmH59zmQYFVuBH@secure199.inmotionhosting.com> Message-ID: Hi Stuart, Thanks for continuing this discussion on consciousness! Max Tegmark's paper, like everything else in the peer-reviewed literature on physics and consciousness, and particularly the studies on perception of color, is completely qualia blind. Everyone uses the term "red" in only a functionalist way, which imparts no qualitative meaning whatsoever. For example, you said: "Redness is a function of red."
Statements like this provide no qualitative meaning, especially when you consider what should be an obvious fact: that my redness could be like your greenness. In order to know, qualitatively, the meaning of a word like redness or red, you need to keep in mind that it is only a label for a particular set of physical properties or qualities. Everyone uses the term "red" to talk about when something reflects or emits red light; they also use it as a label for a particular type of light; they also use it to talk about "red" signals in the optic nerve and "red" detectors in the retina... ALL of these are purely functionalist definitions and provide no qualitative meaning. They are completely ambiguous definitions, and nobody knows which physical qualities they are talking about when they use any such terms. In order to not be qualia blind, you need to use different words for different sets of physical properties or qualities. I use the word "red" as a label for the physical property of anything that reflects or emits 650 nm light. I use the word "redness" as a label for a very different set of physical properties or qualities. It is a label for a set of physical qualities we can be directly aware of, as something our brain uses to represent knowledge with. This physical quality can be used to represent any type of knowledge. A bat could use it to represent echolocation knowledge. Some people could use it to represent green knowledge, and so on. To say "Redness is a function of red" is, again, completely qualia blind and ambiguous. Qualitatively, I have no idea what you are talking about. Are you talking about your redness, or my redness which is like your greenness? Tononi's idea of "redness = PHI(red)" is also completely sloppy, definition-wise. Is he talking about one person's redness, which may be another's greenness...? As I was saying, this, and everything else, is completely qualia blind. Take the name of the neurotransmitter glutamate, for example.
We know this is a label for a particular set of physical properties. We also have abstract descriptions of glutamate's atomic makeup, and how it behaves in synapses. But, again, all of this abstract information about glutamate is also just functional. It provides no qualitative meaning. We know how glutamate behaves in a synapse, but what is that glutamate behavior qualitatively like? If you think of the qualitative definition of the word redness, and the qualitative definition of the word glutamate, you should realize that these could be abstract labels for the same set of physics. Not realizing this is qualia blindness. The ONLY thing that provides qualitative meaning to anything is subjective experience, or our ability to directly experience some of the physics in our brain. ALL objectively observed information is purely abstract, and devoid of any qualitative meaning. We can't talk about consciousness in any way until we start thinking clearly, and non-ambiguously, about the qualitative meaning of words. Max Tegmark asserts the existence of some "perceptronium"... Even if there were such a thing, all objective observations of such would be purely functional, while direct experience of such functional descriptions would be qualitative. Proposing new physics buys you nothing about consciousness, as long as you remain qualia blind, and fail to make the qualitative connections. Once you are no longer qualia blind, you realize you don't need any new physics.
On Thu, Jun 27, 2019 at 9:05 AM Stuart LaForge wrote: > > Quoting Brent Allsop: > > > I've been confronting Naive Realists bleating completely qualia > > blind (only use one word "red", instead of multiple words like red > > and redness to talk about different physical properties and > > qualities.) rhetoric on places like quora and reddit. > > Redness is the knowledge of red. Consciousness is like a > physical/mathematical function by which information becomes knowledge > which is system-integrated information. Redness is thereby a function > of red. To borrow Tononi's notation, redness = PHI(red). What is the > distinction between consciousness and the ability to actively learn? I > have trouble seeing one. In some respects, the point of > consciousness seems to be to find out what happens next. > > > Sometimes it is so frustrating that so many people just can't think. > > It is only frustrating if you think about it. Just kidding of course, > but if you need to reach people like that, then try appealing to their > emotions. It often works better than logic. Logic is overrated anyhow. > If you start with incorrect premises, then you reach wrong > conclusions. GIGO applies to people as well as learning machines. > > > After feeling so dirty, and frustrated with so little progress, with > > so many, it is so nice to be pulled back up in the clouds, trying to > > keep up with you guys taking me where I've never been before. > > Thanks everyone, for providing such an inspiring forum, for so many > > continued years, and for restoring my faith in humanity so often. > > You have done your part to make the list an interesting forum, Brent, > so one list member to another thank you as well. > > Here is a paper about consciousness by physicist Max Tegmark you might > like entitled "Consciousness as a State of Matter". I notice you don't > have him or that particular camp set up on Canonizer.
> > https://arxiv.org/abs/1401.1219 > > He is a little all over the place, but his ideas are interesting and > overlap some of mine. I still have to see if I can reconcile our maths > but that will take time. > > Stuart LaForge > > > > > > > Stuart LaForge > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Thu Jun 27 19:07:36 2019 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 27 Jun 2019 12:07:36 -0700 Subject: [ExI] ai emotions In-Reply-To: <654870119.596912.1561656028640@mail.yahoo.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> <1433067761.991465.1561460257788@mail.yahoo.com> <20190627080114.Horde._uM_dIMEnFmH59zmQYFVuBH@secure199.inmotionhosting.com> <654870119.596912.1561656028640@mail.yahoo.com> Message-ID: <20190627120736.Horde.F3JQu_YQc92_VyqJbFtCTYv@secure199.inmotionhosting.com> Quoting Brent Allsop:: > Max Tegmark?s paper, like everythingelse in all of the peer reviewed > literature on physics, and consciousness, and particularlystudies on > perception of color are all completely qualia blind. Perhaps because qualia are arbitrary encodings by consciousnesses for perceived physical phenomena. With emphasis on the word arbitrary. > Everyone uses the term ?red? in only a functionalist way, which > imparts no qualitative meaning, whatsoever.? For example, you said: > ?Redness is a function of red?.? Statements like this provide no > qualitative meaning, especially when you consider what should be an > obvious fact that my redness could be like your greenness. How red looks to us on the inside (subjectively) doesn't matter as long as it is consistent. 
It is irrelevant that I perceive red as green: so long as we both agree to call it red, and I remember what it looks like, we will always be in agreement as to what things are red. Relativity disclaimer: this assumes we are not moving at any appreciable fraction of the speed of light with respect to one another. Otherwise Doppler shifts make it so we no longer agree on an object's color. > In order to know, qualitatively, the meaning of a word, like redness > or red, you need to keep in mind that it is only a label for a > particular set of physical properties or qualities. Everyone uses > the term "red" to talk about when something reflects or emits red > light, they also use it as a label for a particular type of light, > they also use it to talk about "red" signals in the optic nerve and > "red" detectors in the retina... ALL of these are purely > functionalist definitions and provide no qualitative meaning. They > are completely ambiguous definitions, and nobody knows which physical > qualities they are talking about when they use any such terms.
This physical quality can > be used to represent any type of knowledge. A bat could use it to > represent echolocation knowledge. Some people could use it to > represent green knowledge, and so forth. Right. Red is a wavelength of light and redness is your own subjective experience of it. My redness might not be your redness, but we would agree on what was red. > To say "Redness is a function of red" is, again, completely qualia > blind and ambiguous. Qualitatively, I have no idea what you are > talking about. No, it is a very precise way of describing what qualia are. They are your brain's "token" for a physical property that you can perceive. Functions just map the members of one set to another set. So functions can be qualitative if you define at least one of the sets to be so. > Are you talking about your redness, or my redness which is like your > greenness? Again, how red "looks" to me versus the way it "looks" to you doesn't matter so long as we can consistently agree on what is red and what is green. And if we can't consistently agree, then your driver's license needs to be revoked. :-P > Tononi's idea of "redness = PHI(red)" is also completely sloppy, > definition-wise.
We > know this is a label for a particular set of physical properties. > We also have abstract descriptions of glutamate's atomic makeup, and > how it behaves in synapses. But, again, all of this abstract > information about glutamate is also just functional. It provides no > qualitative meaning. We know how glutamate behaves in a synapse, but > what is that glutamate behavior qualitatively like? If you think of > the qualitative definition of the word redness, and the qualitative > definition of the word glutamate, you should realize that these > could be abstract labels for the same set of physics. Not realizing > this is qualia blindness. Again, it doesn't matter how one brain encodes redness vs another so long as they agree. > The ONLY thing that provides qualitative meaning to anything is > subjective experience, or our ability to directly experience some of > the physics in our brain. ALL objectively observed information is > purely abstract, and devoid of any qualitative meaning. We can't > talk about consciousness in any way, until we start thinking > clearly, and non-ambiguously, about the qualitative meaning of words. Nothing in the universe can objectively observe anything else. Any observation is necessarily subjective, even for the simplest of measuring devices, let alone full-blown brains. There is no absolute or privileged reference frame from which to be objective, not for an observer. If particles could not collapse one another's wave functions, they would never collide. > Max Tegmark asserts the existence of some "perceptronium"? Even if > there was such a thing, all objective observations of such would be > purely functional, while direct experience of such functional > descriptions would be qualitative. > Proposing new physics buys you nothing about consciousness, as long > as you remain qualia blind, and fail to make the qualitative > connections. Once you are no longer qualia blind, you realize you > don't need any new physics.
You just need to think, clearly and > qualitatively, about what we already know of current physics. I disagree. Consciousness is but one of a whole category of physical phenomena that fall under the label of emergence and emergent properties, which current physics explains very poorly. New physics is precisely what we need. > We simply must see people start using multiple words to talk about > different physical qualities. Red for something that reflects or > emits red light, and redness for a very different set of physical > qualities, which could be a quality of anything we already know > about, objectively, in the brain, like glutamate. I don't think that substrate-specific details matter that much. I think consciousness is a mathematical property analogous to the wave equation, which is valid regardless of whether you are talking about water waves, sound waves, or electromagnetic waves. Let go of glutamate; it is a substrate-specific detail that won't help you with understanding consciousness. Consciousness is not magic, it is math. Stuart LaForge From brent.allsop at gmail.com Thu Jun 27 22:39:58 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 27 Jun 2019 16:39:58 -0600 Subject: [ExI] ai emotions In-Reply-To: <20190627120736.Horde.F3JQu_YQc92_VyqJbFtCTYv@secure199.inmotionhosting.com> References: <1513760083.218438.1561187685700@mail.yahoo.com> <20190622005544.Horde.lvlGvrgGunCv-rds5Llc2AS@secure199.inmotionhosting.com> <00ec01d52a3d$1462d250$3d2876f0$@rainier66.com> <1433067761.991465.1561460257788@mail.yahoo.com> <20190627080114.Horde._uM_dIMEnFmH59zmQYFVuBH@secure199.inmotionhosting.com> <654870119.596912.1561656028640@mail.yahoo.com> <20190627120736.Horde.F3JQu_YQc92_VyqJbFtCTYv@secure199.inmotionhosting.com> Message-ID: Hi Stuart, "Consciousness is not magic, it is math." How do you get a specific, qualitative definition of the word "red" from any math? "I don't think that substrate-specific details matter that much."
Then you are not talking about consciousness at all. You are just talking about intelligence. Consciousness is computationally bound elemental qualities, for which there is something qualitative that it is like. "It is irrelevant that I perceive red as green." Can you not see how sloppy language like this is? I'm going to describe at least two very different possible interpretations of this statement. If you can't distinguish between them, with your language, then again, you are not talking about consciousness: 1. One person is color blind, and represents both red things and green things with knowledge that has the same physical redness quality. In other words, he is red-green color blind. 2. One person is qualitatively inverted from the other. He uses the other's greenness to represent red, and vice versa for green things. You can't tell which one your statement is talking about. Again, you're not talking about consciousness if you can't distinguish between these types of things with your models and language. Sure, before Galileo, it didn't matter if you used a geocentric model of the solar system or a heliocentric one. But now that we're flying up in the heavens, one works, and one does not. Similarly, now, you can claim that the qualitative nature doesn't matter, but as soon as you start hacking the brain, amplifying intelligence, connecting multiple brains (like two brain hemispheres can be connected), or even religiously predicting what "spirits" and future consciousnesses will be possible, one model works and the other does not. In fact, my prediction is that the reason we can't better understand how we subjectively represent visual knowledge is precisely because everyone is like you, qualia blind, and doesn't care that some people may have qualitatively very different physical representations of red and green.
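The two interpretations Brent distinguishes, collapse versus inversion, can be made concrete in a short sketch. This is plain Python of my own; the agents, the private tokens "Q1"/"Q2", and the `make_agent` helper are hypothetical illustrations, not anything proposed in the thread:

```python
# Each agent privately maps a stimulus to a quale token, then publicly
# names that token with whatever word it learned to attach to it.
def make_agent(quale_of):
    # quale_of: stimulus -> private quale token (e.g. "Q1").
    # The agent learns one public name per token; if two stimuli share a
    # token (colorblindness), the later name overwrites the earlier one.
    naming = {}
    for stimulus, token in quale_of.items():
        naming[token] = stimulus
    return lambda stimulus: naming[quale_of[stimulus]]

normal     = make_agent({"red": "Q1", "green": "Q2"})
inverted   = make_agent({"red": "Q2", "green": "Q1"})  # interpretation 2: swapped qualia
colorblind = make_agent({"red": "Q1", "green": "Q1"})  # interpretation 1: one quale for both

for stimulus in ("red", "green"):
    print(stimulus, normal(stimulus), inverted(stimulus), colorblind(stimulus))
```

The sketch shows the distinction the two interpretations turn on: the inverted agent agrees with the normal one on every public label, so inversion is behaviorally invisible, while the collapsed (colorblind) agent cannot keep red and green apart and is detectable from outside.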
If you only care about whether a brain can pick strawberries, and don't care what it is qualitatively like, then you can't make the critically important distinctions between these 3 robots, which are functionally the same but qualitatively very different, one being not conscious at all. "Nothing in the universe can objectively observe anything else." All information that comes to our senses is "objectively" observed and devoid of any physical qualitative information; it is all only abstract mathematical information. Descartes, the ultimate skeptic, realized that he must doubt all objectively observed information. But he also realized: "I think, therefore I am". This includes the knowledge of the qualities of our consciousness. We know, absolutely, in a way that cannot be doubted, what physical redness is like, and how it is different from greenness. While it is true that we may be a brain in a vat, we know, absolutely, that the physics in the brain in that vat exists, and we know, absolutely and qualitatively, what that physics (in both hemispheres) is like. Let's say you did objectively detect some new "perceptronium". All you would have, describing that perceptronium, is mathematical models and descriptions of such. These mathematical descriptions of perceptronium would all be completely devoid of any qualitative meaning. Until you experienced a particular type of perceptronium directly, you would not know, qualitatively, how to interpret any of your mathematical objective descriptions of such. Again, everything you are talking about is what Chalmers, and everyone else, would call "easy" problems. Discovering and objectively observing any kind of "perceptronium" is an easy problem. We already know how to do this. Knowing, qualitatively, what that perceptronium is like, if you experienced it directly, is what makes it hard. The only "hard" part of consciousness is the "Explanatory Gap", or how do you eff the ineffable nature of qualia.
Everything else is just easy problems. We already know, mathematically, what it is like to be a bat. But that tells you nothing, qualitatively, about what being a bat is like. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Jun 27 22:59:59 2019 From: johnkclark at gmail.com (John Clark) Date: Thu, 27 Jun 2019 18:59:59 -0400 Subject: [ExI] From the sublime to the ridiculous Message-ID: Strange day. Today in the journal Science it was announced that for only the second time the pinpoint location of a Fast Radio Burst (FRB) has been found; FRBs are among the most mysterious things in astronomy and probably require new physics to explain. This FRB came from the outskirts of an old massive galaxy with little new star formation, very unlike the first FRB, which came from the center of a young dwarf galaxy with lots of stellar formation, indicating that FRBs are a general phenomenon not requiring unusual astronomical conditions. Only about 60 FRBs have ever been observed, mostly in the last 5 years, but the locations of only 2 have been found; they last only about a millisecond and most never repeat, so they are hard to detect, much less locate, despite their immense power, but it's estimated that about 10,000 must happen every day in the observable universe. Theories have been proposed for their cause, but all the ones I've heard involve very weird stuff of one sort or another.
A single fast radio burst localized to a massive galaxy Speaking of weird stuff, also today a pregnant woman in Alabama, the heart of Trump country, was shot in the stomach and miscarried; the shooter was not charged with anything, but the woman was charged with manslaughter for killing her baby: Woman In Alabama Charged With Manslaughter After Being Shot Also today Trump's judges on the Supreme Court decreed (5 to 4) that political gerrymandering is perfectly fine; apparently it was the original intent of the framers of the constitution that the politicians should pick their voters and the voters should not pick their politicians: Court gives Trump a victory on gerrymandering And in yet more news of this day, the court didn't give Trump his way on everything; they said he couldn't put all the questions he wanted on the new census form, so immediately, in flagrant disregard of the constitution and for the first time in the history of the nation, he will try to delay the 2020 census until AFTER the 2020 election. If Trump ever gets his wish and his face is added to Mt. Rushmore, they can carve these inspiring words of his beneath it: *"Seems totally ridiculous that our government, and indeed Country, cannot ask a basic question of Citizenship in a very expensive, detailed and important Census, in this case for 2020. I have asked the lawyers if they can delay the Census, no matter how long, until the United States Supreme Court is given additional information"* Trump will try to delay 2020 census until after 2020 election And if he figures all that would still not be enough to remain in power, no doubt he will try something else. Remember when I said Trump would "delay" the 2020 election if he doesn't think he will win? John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From avant at sollegro.com Fri Jun 28 06:55:48 2019 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 27 Jun 2019 23:55:48 -0700 Subject: [ExI] ai emotions Message-ID: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> Quoting Brent Allsop: >> "Consciousness is not magic, it is math." > > How do you get a specific, qualitative definition of the word "red" from > any math? Red is a subset of the set of colors an unaugmented human can see. There, I just defined it for you mathematically. In math symbols, it looks something very much like {red} ⊂ {red, orange, yellow, green, blue, indigo, violet}. If you were a lucky mutant (or AI) that could perceive grue, then the math would look like {red} ⊂ {red, orange, yellow, green, grue, blue, indigo, violet}. Whatever unique qualia your brain may have assigned to it is your business and your business alone, since you cannot express red to me except by quantitative measure (a 650 nm wavelength electromagnetic wave) or qualitative example (the color of ripe strawberries). Any other description of red only means anything to YOU. (Perhaps it makes your dick hard, I have no clue, don't really care.) In other words, you can't give me any better a qualitative description of red than I can give you. Prove me wrong: What is red, oh privileged seer of qualia? (Yes, that was sarcasm.) > > "I don't think that substrate-specific details matter that much." > > Then you are not talking about consciousness, at all. You are just talking > about intelligence. Consciousness is computationally bound elemental > qualities, for which there is something, qualitative, for which it is like. Intelligence and consciousness differ by degree, not by type. Both are emergent properties of some configurations of matter.
If I were to quantitatively rank emergent properties by their PHI value, then I would have a distribution as follows: reactivity <= life <= intelligence <= consciousness <= civilization. >> "It is irrelevant that I perceive red as green." > Can you not see how sloppy language like this is? I'm going to describe at > least two very different possible interpretations of this statement. If > you can't distinguish between them, with your language, then again, you are > not talking about consciousness: You pull a single sentence of mine out of context and then use it to accuse me of sloppy language? Here is my precise and unequivocal retort: NO! I challenge you to take that out of context. > 1. One person is color blind, and represents both red things and > green things with knowledge that has the same physical redness quality. In > other words, he is red-green color blind. > > 2. One person is qualitatively inverted from the other. He uses the > other's greenness to represent red with, and vice versa for green things. When you said, "Are you talking about your redness, or my redness which is like your [sic] grenness?" I meant whichever you meant by the quoted statement. My argument holds either way. Unless you believe that color-blind people are not really conscious. In which case you should be enslaving the colorblind and tithing me 10% of the proceeds. > > You can't tell which one your statement is talking about. Again, you're > not talking about consciousness, if you can't distinguish between these > types of things with your models and language. Again, my statement reflects yours with the exact same scope. So you tell me what I meant.
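Stuart's set-theoretic definition of red from a few paragraphs back can be typed in directly (a minimal sketch in Python; the color lists follow his example, and grue is his hypothetical mutant color):

```python
# Red defined purely by set membership, per {red} C {colors a human can see}.
human_colors = frozenset(
    {"red", "orange", "yellow", "green", "blue", "indigo", "violet"})
mutant_colors = human_colors | {"grue"}  # the lucky mutant (or AI) gamut
red = frozenset({"red"})

print(red <= human_colors)   # subset test on the human gamut
print(red <= mutant_colors)  # still a subset of the augmented gamut
```

Nothing in the definition appeals to what red is like from the inside, which is Stuart's point: the membership relation is all the math offers.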
Similarly, now, you can claim that > the qualitative nature doesn't matter, but as soon as you start hacking the > brain, amplifying intelligence, connecting multiple brains (like two brain > hemispheres can be connected) or even religiously predicting what "spirits" > and future consciousness will be possible. One model works, the other does > not. I don't see how your model predicts anything except for your ignorance of what consciousness is. You say that every consciousness is a unique snowflake of amassed qualia; I say that every machine-learning algorithm starts out with a random set of parameters and, through learning its training data, either supervised or unsupervised, converges on an approximation of the truth. Every deep-learning neural network is a unique snowflake that gets optimized for a specific purpose. Some neural networks train very quickly; others never quite get what you are trying to teach them. There is very much a ghost in the machine, and each time you run the algorithm, you get a different ghost. If you don't believe me, then download Simbrain, watch the tutorial video on Youtube, and I will send you a copy of my tiny brain to play with. Be the qualitative judge of my tiny brain, I dare you. Do you not understand the implications of me creating a 55-neuron brain and teaching it to count to five? Do you not understand the implication of my tiny brain being able to distinguish ALL three-bit patterns after only being trained on SOME three-bit patterns? Do you not see the conceptualization of threeness that was occurring? > In fact, my prediction is that the reason we can't better understand how > we subjectively represent visual knowledge is precisely because everyone > is like you, qualia blind, and doesn't care that some people may have > qualitatively very different physical representations of red and green. Quit calling me "qualia blind".
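As an aside, the train-on-SOME, distinguish-ALL effect Stuart describes can be reproduced in miniature without Simbrain. The sketch below is my own stand-in, not his 55-neuron network: a single linear neuron trained by gradient descent to count the ones in a 3-bit pattern, with the training split and learning rate chosen purely for illustration. Counting is an exactly linear function of the bits, so a fit on five patterns extends to the three it never saw:

```python
from itertools import product

# Target: the number of ones in a 3-bit pattern ("teaching it to count").
patterns = [list(p) for p in product((0, 1), repeat=3)]
held_out = [[0, 1, 1], [1, 0, 1], [1, 1, 1]]
train = [p for p in patterns if p not in held_out]
target = lambda p: sum(p)

# One linear neuron y = w.x + b, full-batch gradient descent on squared error.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(5000):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for p in train:
        err = sum(wi * xi for wi, xi in zip(w, p)) + b - target(p)
        for i in range(3):
            gw[i] += err * p[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(train)
    b -= lr * gb / len(train)

predict = lambda p: round(sum(wi * xi for wi, xi in zip(w, p)) + b)
print([predict(p) for p in held_out])  # counts for patterns never trained on
```

Whether a network generalizes this cleanly depends on the task being representable by the model; counting happens to be linear, so the held-out patterns come out right, a toy version of the "conceptualization of threeness" Stuart is pointing at.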
I am not sure what you mean by it, but it sounds vaguely insulting, like you are accusing me of being a philosophic zombie or something. I assure you there is something that it is qualitatively like to be me, even if I can't succinctly describe it to you in monkey mouth noises. I could just as easily accuse you of being innumerate and a mathphobe, so either explain what you mean or knock it off. > > If you only care about whether a brain can pick strawberries, and don't care > what it is qualitatively like, then you can't make the critically important > distinctions between these 3 robots > > that are functionally the same but qualitatively very different, one being > not conscious at all. No being can be deemed conscious without some manner of inputs from the real world. That is the nature of perception. A robot without sensors cannot be conscious. If that is what you mean by an "abstract robot", then I agree that it is not conscious. On the other hand, a keyboard is a sensor. A very limited sensor, but a sensor nonetheless. >> "Nothing in the universe can objectively observe anything else." > All information that comes to our senses is "objectively" observed and > devoid of any physical qualitative information; it is all only abstract > mathematical information. Descartes, the ultimate skeptic, realized that he > must doubt all objectively observed information. You are in no way an objective observer. Any information that may have been objective before you observed it became biased the moment you perceived it. That is because your brain filters out and flat-out ignores any information that does not have relevance to Brent. Why else could you not see the color grue unless it had no survival advantage to you or your ancestors? Even now, your inborn Brentward bias is seething with the need to disagree with me: your primal and naked need to impose Brent upon me and the rest of the world. Can't you feel it? > But he also realized: "I
This includes the knowledge of the qualities of our > consciousness. No it doesn't. Thinking pertains to logic and abstracts and not to qualia which are in the realm of that what you perceive and feel. Descartes said that his ability to make logical inferences entailed that he existed. If intelligence is, as you claim, separable from consciousness, then Descartes did little more than make a good case that he was intelligent. In fact he made it point to explicitly assume that all his perceived qualia were the work of some kind of malicious demon trying to mislead him about his existence through his senses or something similarly paranoid. In any case, if anyone was "qualia blind" it was your man Descartes, who used imagined demons to come up with a definition of himself that did not incorporate sensory information. Nonetheless, I don't think Descartes was a philosophic zombie. > We know, absolutely, in a way that cannot be doubted, what > physical redness is like, and how it is different than greenness. While it > is true that we may be a brain, in a vat. We know, absolutely, that the > physics, in the brain, in that vat exist, and we know, absolutely and > qualitatively, what that physics (in both hemispheres) is like. How could we know for sure what what the physical redness of ripe strawberries looks like when they would look different in the light and the shadow? https://en.wikipedia.org/wiki/Checker_shadow_illusion > Let?s say you did objectively detect some new ?perceptronium?. All you > would have, describing that perceptronium, is mathematical models and > descriptions of such. These mathematical descriptions of perceptronium > would all be completely devoid of any qualitative meaning. Until you > experienced a particular type of perceptronium, directly, you would not > know, qualitatively, how to interpret any of your mathematical objective > descriptions of such. Perceptronium is Tegmark's notion and not mine. 
I am not sure that as a concept it adds much to the understanding of consciousness. > Again, everything you are talking about is what Chalmers, and everyone > else, would call "easy" problems. Discovering and objectively observing any kind > of "perceptronium" is an easy problem. We already know how to do this. > Knowing, qualitatively, what that perceptronium is like, if > you experienced it directly, is what makes it hard. Being Brent is necessarily like being Brent. And if I were born in your stead, then I would necessarily be Brent. Moreover, you are a being of finite information, in that your entire history, your every thought, and your every deed can be described by a very large yet nonetheless finite number of true/false or yes/no questions and their answers. The smallest number of such yes/no questions and answers would equal your Shannon entropy. That means that there is a unique bitstring that describes you. The sum total of every discernible thing about you can be expressed as a very large integer. It would be the most compressed form of you that it is possible to express. > The only "hard" part of consciousness is the "Explanatory Gap", or how do you eff the > ineffable nature of qualia. There is no "explanatory gap" because it is filled in by natural selection quite nicely. There are some qualia invariants that can be identified and experienced quite universally. For example, I know what your pain feels like. It feels unpleasant. I know that because our ancestors evolved to feel pain so they would try to avoid dangerously unhealthy environments and behaviors. > Everything else is just easy problems. We > already know, mathematically, what it is like to be a bat. But that tells > you nothing, qualitatively, about what being a bat is like. You are right; that's where technology can help. If you go hang-gliding on a moonless night while wearing a pair of these sonar glasses, you might come close to knowing what it is like to be a bat.
http://sonarglasses.com/ Alternatively, since you are what you eat, you could just eat a bat and describe how it makes you feel. ;-) Stuart LaForge From foozler83 at gmail.com Fri Jun 28 14:48:02 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 28 Jun 2019 09:48:02 -0500 Subject: [ExI] ai emotions In-Reply-To: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> References: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> Message-ID: stuart/brett wrote If intelligence is, as you claim, separable from consciousness Is dreaming - aka REM sleep - a variety of consciousness to you? I have certainly used my intelligence while I was dreaming - mainly to figure out what I was trying to say to myself! In a way, stage 4 sleep in the deepest. That's where the night terrors take place - which I have never experienced, but I assume there is something like consciousness there for the terrors to be experienced in. Other stages of sleep are not accompanied by any consciousness, although we can drift between a bit of consciousness and sleep in stage 1. Some people have said that they can tell when they are entering sleep when their thoughts go from rational to a bit crazy. bill w bill w On Fri, Jun 28, 2019 at 2:00 AM Stuart LaForge wrote: > > Quoting Brent Allsop: > > > >> ?Consciousness is not magic, it is math.? > > > > How do you get a specific, qualitative definition of the word ?red? from > > any math? > > Red is a subset of the set of colors an unaugmented human can see. > There I just defined it for you mathematically. In math symbols, it > looks something very much like {red} C {red, orange, yellow, green, > blue, indigo, violet}. 
If you were a lucky mutant (or AI) that could > perceive grue, then the math would look like {red} C {red, orange, > yellow, green, grue, blue, indigo, violet} > > Whatever unique qualia your brain may have assigned to it is your > business and your business alone since you cannot express red to me > except by quantitative measure (650 nm wavelength electromagnetic > wave) or qualitative example (the color of ripe strawberries). Any > other description of red only means anything to YOU (Perhaps it makes > your dick hard, I have no clue, don't really care.) > > > In other words, you can't give me any better a qualitative description > of red than I can give you. Prove me wrong: What is red, oh privileged > seer of qualia? (Yes, that was sarcasm.) > > > > > "I don't think that substrate-specific details matter that much." > > > > Then you are not talking about consciousness, at all. You are just > talking > > about intelligence. Consciousness is computationally bound elemental > > qualities, for which there is something, qualitative, for which it is > like. > > Intelligence and consciousness differ by degree, not by type. Both are > emergent properties of some configurations of matter. If I were to > quantitatively rank emergent properties by their PHI value, then I > would have a distribution as follows: reactivity <= life <= > intelligence <= consciousness <= civilization > > >> "It is irrelevant that I perceive red as green." > > > Can you not see how sloppy language like this is? I'm going to describe > at > > least two very different possible interpretations of this statement. If > > you can't distinguish between them, with your language, then again, you > are > > not talking about consciousness: > > You pull a single sentence of mine out of context and then use it to > accuse me of sloppy language? Here is my precise and unequivocal > retort: NO! I challenge you to take that out of context. > > > 1.
One person is color blind, and represents both red things and > green things with knowledge that has the same physical redness quality. In > other words, he is red-green color blind. > > > > 2. One person is qualitatively inverted from the other. He uses > the > > other's greenness to represent red with and vice versa for green things. > > When you said, "Are you talking about your redness, or my redness > which is like your [sic] grenness?" I meant whichever you meant by the > quoted statement. My argument holds either way. Unless you believe > that color-blind people are not really conscious. In which case you > should be enslaving the colorblind and tithing me 10% of the proceeds. > > > > > You can't tell which one your statement is talking about. Again, > you're > > not talking about consciousness, if you can't distinguish between these > > types of things with your models and language. > > Again, my statement reflects yours with the exact same scope. So you > tell me what I meant. > > > Sure, before Galileo, it didn't matter if you used a geocentric model of > > the solar system or a heliocentric one. But now that we're flying up in the > > heavens, one works, and one does not. Similarly, now, you can claim that > > the qualitative nature doesn't matter, but as soon as you start hacking > the > > brain, amplifying intelligence, connecting multiple brains (like two > brain > > hemispheres can be connected) or even religiously predicting what "spirits" > > and future consciousness will be possible. One model works, the other > does > > not. > > I don't see how your model predicts anything except for your ignorance > of what consciousness is.
You say that every consciousness is a unique > snowflake of amassed qualia, I say that every machine-learning > algorithm starts out with a random set of parameters and through > learning its training data, either supervised or unsupervised, > converges on an approximation of the truth. > > Every deep learning neural network is a unique snowflake that gets > optimized for a specific purpose. Some neural networks train very > quickly, others never quite get what you are trying to teach them. There > is very much a ghost in the machine and each time you run the > algorithm, you get a different ghost. If you don't believe me, then > download Simbrain, watch the tutorial video on YouTube, and I will > send you a copy of my tiny brain to play with. Be the qualitative judge > of my tiny brain, I dare you. > > Do you not understand the implications of me creating a 55-neuron > brain and teaching it to count to five? Do you not understand the > implication of my tiny brain being able to distinguish ALL three-bit > patterns after only being trained on SOME three-bit patterns? Do you > not see the conceptualization of threeness that was occurring? > > > In fact, my prediction is the reason we can't better understand how > > we subjectively represent visual knowledge, is precisely because everyone > > is like you, qualia blind, and doesn't care that some people may have > > qualitatively very different physical representations of red and green. > > Quit calling me "qualia blind". I am not sure what you mean by it, but > it sounds vaguely insulting, as if you are accusing me of being a > philosophic zombie or something. I assure you there is something that > it is qualitatively like to be me, even if I can't succinctly describe > it to you in monkey mouth noises. I could just as easily accuse you of > being innumerate and a mathphobe, so either explain what you mean or > knock it off.
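[Editor's note: the generalization claim above is easy to reproduce in miniature. The sketch below is not Stuart's Simbrain network; it is a single perceptron on a made-up majority-vote labeling of three-bit patterns, trained on six of the eight patterns and then tested on the two it never saw:]

```python
# Six of the eight 3-bit patterns, labeled 1 if a majority of bits are set.
train = [((0, 0, 1), 0), ((0, 1, 0), 0), ((1, 0, 0), 0),
         ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]
held_out = [((0, 0, 0), 0), ((1, 1, 0), 1)]  # never shown during training

w, b = [0, 0, 0], 0  # weights and bias, deliberately zero-initialized

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Classic perceptron update rule; this tiny task converges within 20 epochs.
for _ in range(20):
    for x, y in train:
        err = y - predict(x)
        if err:
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err

print([predict(x) for x, _ in held_out])  # [0, 1] - correct on unseen patterns
```

[The point survives the toy scale: the learner ends up encoding the rule, here "majority", rather than a lookup table of the cases it was shown.]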
> > > > If you only care about if a brain can pick strawberries, and don't care > > what it is qualitatively like, then you can't make the critically > important > > distinctions between these 3 robots > > < > https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing > > > > that are functionally the same but qualitatively very different, one > being > > not conscious at all. > > No being can be deemed conscious without some manner of inputs from > the real world. That is the nature of perception. A robot without > sensors cannot be conscious. If that is what you mean by an "abstract > robot" then I agree that it is not conscious. On the other hand, a > keyboard is a sensor. A very limited sensor but a sensor nonetheless. > > >> "Nothing in the universe can objectively observe anything else." > > > All information that comes to our senses is "objectively" observed and > > devoid of any physical qualitative information, it is all only abstract > > mathematical information. Descartes, the ultimate skeptic, realized that > he > > must doubt all objectively observed information. > > You are in no way an objective observer. Any information that may have > been objective before you observed it became biased the moment you > perceived it. That is because your brain filters out and flat out > ignores any information that does not have relevance to Brent. Why > else could you not see the color grue unless it had no survival > advantage to you or your ancestors? Even now, your inborn Brentward > bias is seething with the need to disagree with me: your primal and > naked need to impose Brent upon me and the rest of the world. Can't > you feel it? > > > But he also realized: "I > > think, therefore I am". This includes the knowledge of the qualities of > our > > consciousness. > > No, it doesn't. Thinking pertains to logic and abstracts and not to > qualia, which are in the realm of what you perceive and feel.
> Descartes said that his ability to make logical inferences entailed > that he existed. If intelligence is, as you claim, separable from > consciousness, then Descartes did little more than make a good case > that he was intelligent. In fact he made it a point to explicitly assume > that all his perceived qualia were the work of some kind of malicious > demon trying to mislead him about his existence through his senses or > something similarly paranoid. In any case, if anyone was "qualia > blind" it was your man Descartes, who used imagined demons to come up > with a definition of himself that did not incorporate sensory > information. Nonetheless, I don't think Descartes was a philosophic > zombie. > > > We know, absolutely, in a way that cannot be doubted, what > > physical redness is like, and how it is different than greenness. While > it > > is true that we may be a brain, in a vat. We know, absolutely, that the > > physics, in the brain, in that vat exist, and we know, absolutely and > > qualitatively, what that physics (in both hemispheres) is like. > > How could we know for sure what the physical redness of ripe > strawberries looks like when they would look different in the light > and the shadow? > > https://en.wikipedia.org/wiki/Checker_shadow_illusion > > > Let's say you did objectively detect some new "perceptronium". All you > > would have, describing that perceptronium, is mathematical models and > > descriptions of such. These mathematical descriptions of perceptronium > > would all be completely devoid of any qualitative meaning. Until you > > experienced a particular type of perceptronium, directly, you would not > > know, qualitatively, how to interpret any of your mathematical objective > > descriptions of such. > > Perceptronium is Tegmark's notion and not mine. I am not sure that as > a concept it adds much to the understanding of consciousness.
> > > Again, everything you are talking about is what Chalmers, and everyone > > would call "easy" problems. Discovering and objectively observing any > kind > > of "perceptronium" is an easy problem. We already know how to do this. > > Knowing, qualitatively, what that perceptronium is qualitatively like, if > > you experienced it, directly, is what makes it hard. > > Being Brent is necessarily like being Brent. And if I were born in > your stead, then I would necessarily be Brent. Moreover, you are a being > of finite information in that your entire history, your every thought, > and your every deed can be described by a very large yet nonetheless > finite number of true/false or yes/no questions and their answers. The > smallest number of such yes/no questions and answers would equal your > Shannon entropy. > > That means that there is a unique bitstring that describes you. The > sum total of every discernible thing about you can be expressed as a > very large integer. It would be the most compressed form of you that > it is possible to express. > > > The only "hard" part of consciousness is the "Explanatory Gap", or how do you eff the > > ineffable nature of qualia. > > There is no "explanatory gap" because it is filled in by natural > selection quite nicely. There are some qualia invariants that can be > identified and experienced quite universally. For example, I know what > your pain feels like. It feels unpleasant. I know that because our > ancestors evolved to feel pain so they would try to avoid dangerously > unhealthy environments and behaviors. > > > Everything else is just easy problems. We > > already know, mathematically, what it is like to be a bat. But that tells > > you nothing, qualitatively, about what being a bat is like. > > You are right, that's where technology can help. If you go > hang-gliding on a moonless night while wearing a pair of these sonar > glasses, you might come close to knowing what it is like to be a bat.
> > http://sonarglasses.com/ > > Alternatively, since you are what you eat, you could just eat a bat > and describe how it makes you feel. ;-) > > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Fri Jun 28 15:28:48 2019 From: johnkclark at gmail.com (John Clark) Date: Fri, 28 Jun 2019 11:28:48 -0400 Subject: [ExI] Red and green qualia Message-ID: Could my red qualia be your green qualia? Maybe, but probably not as we're both of the same species; although with birds it might be different as they have 4 different types of color-sensitive cone cells in their eyes, not 3 as we do. On the other hand, as long as there is consistency, I'm not sure the question is even meaningful. For example, would the subjective experience of somebody who saw the world in black and white be different from somebody who saw the world in red and black? I don't think it would, although we'll never know for sure. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From interzone at gmail.com Fri Jun 28 15:35:42 2019 From: interzone at gmail.com (Dylan Distasio) Date: Fri, 28 Jun 2019 11:35:42 -0400 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: I would assume mine is different from yours unless you're red/green colorblind also. On Fri, Jun 28, 2019 at 11:31 AM John Clark wrote: > Could my red qualia be your green qualia? Maybe, but probably not as we're > both of the same species; although with birds it might be different as they > have 4 different types of color-sensitive cone cells in their eyes, not 3 as > we do. On the other hand, as long as there is consistency, I'm not sure the > question is even meaningful.
For example, would the subjective experience > of somebody who saw the world in black and white be different from somebody > who saw the world in red and black? I don't think it would, although we'll > never know for sure. > > John K Clark > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Fri Jun 28 18:34:03 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 28 Jun 2019 12:34:03 -0600 Subject: [ExI] ai emotions In-Reply-To: References: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> Message-ID: Hi William, Is dreaming - aka REM sleep - a variety of consciousness to you? My understanding of Stuart's definition is that "No being can be deemed conscious without some manner of inputs from the real world." So a dreaming person is not conscious, according to that definition, since there are no inputs from the real world. It seems to me that given such a definition they would also consider all vegetative people as not conscious. If I were such a person, very aware of my thoughts and their physical qualities and such, but unable to receive input through my senses, nor control any motor functions, I wouldn't want a person with this definition as their working hypothesis of who is and isn't "deemed conscious" to be my doctor. Also, I think many people agree that sleep is when much of the neural brain programming occurs, enabling us to figure things out like "what we are trying to tell ourselves." Brent On Fri, Jun 28, 2019 at 8:51 AM William Flynn Wallace wrote: > stuart/brent wrote If intelligence is, as you claim, separable from > consciousness > > Is dreaming - aka REM sleep - a variety of consciousness to you?
I have > certainly used my intelligence while I was dreaming - mainly to figure out > what I was trying to say to myself! > > In a way, stage 4 sleep in the deepest. That's where the night terrors > take place - which I have never experienced, but I assume there is > something like consciousness there for the terrors to be experienced in. > > Other stages of sleep are not accompanied by any consciousness, although > we can drift between a bit of consciousness and sleep in stage 1. Some > people have said that they can tell when they are entering sleep when their > thoughts go from rational to a bit crazy. > > bill w > > bill w > > > On Fri, Jun 28, 2019 at 2:00 AM Stuart LaForge wrote: > >> >> Quoting Brent Allsop: >> >> >> >> ?Consciousness is not magic, it is math.? >> > >> > How do you get a specific, qualitative definition of the word ?red? from >> > any math? >> >> Red is a subset of the set of colors an unaugmented human can see. >> There I just defined it for you mathematically. In math symbols, it >> looks something very much like {red} C {red, orange, yellow, green, >> blue, indigo, violet}. If you were a lucky mutant (or AI) that could >> perceive grue, then the math would look like {red} C {red, orange, >> yellow, green, grue, blue, indigo, violet} >> >> Whatever unique qualia your brain may have assigned to it is your >> business and your business alone since you cannot express red to me >> except by quantitative measure (650 nm wavelength electromagnetic >> wave) or qualitative example (the color of ripe strawberries). Any >> other description of red only means anything to YOU (Perhaps it makes >> your dick hard, I have no clue, don't really care.) >> >> >> In other words, you can't give me any better a qualitative description >> of red then I can give you. Prove me wrong: What is red, oh privileged >> seer of qualia? (Yes, that was sarcasm.) >> >> > >> > ?I don't think that substrate-specific details matter that much.? 
>> > >> > Then you are not talking about consciousness, at all. You are just >> talking >> > about intelligence. Consciousness is computationally bound elemental >> > qualities, for which there is something, qualitative, for which it is >> like. >> >> Intelligence and consciousness differ by degree, not by type. Both are >> emergent properties of some configurations of matter. If I were to >> quantitatively rank emergent properties by their PHI value, then I >> would have a distribution as follows: reactivity <= life <= >> intelligence <= consciousness <= civilization >> >> >> ?It is irrelevant that I perceive red as green.? >> >> > Can you not see how sloppy language like this is? I?m going to >> describe at >> > least two very different possible interpretations of this statement. If >> > you can?t distinguish between them, with your language, then again, you >> are >> > not talking about consciousness: >> >> You pull a single sentence of mine out of context and then use it to >> accuse me of sloppy language? Here is my precise and unequivocal >> retort: NO! I challenge you to take that out of context. >> >> > 1. One person is color blind, and represents both red things and >> > green things with knowledge that has the same physical redness >> quality. In >> > other words, he is red green color blind. >> > >> > 2. One person is qualitatively inverted from the other. He uses >> the >> > other?s greenness to represent red with and visa versa for green things. >> >> When you said, "Are you talking about your redness, or my redness >> which is like your [sic] grenness?" I meant whichever you meant by the >> quoted statement. My argument holds either way. Unless you believe >> that color-blind people are not really conscious. In which case you >> should be enslaving the colorblind and tithing me 10% of the proceeds. >> >> > >> > You can?t tell which one you?re statement is talking about. 
Again, >> you?re >> > not talking about consciousness, if you can?t distinguish between these >> > types of things with your models and language. >> >> Again, my statement reflects yours with the exact same scope. So you >> tell me what I meant. >> >> > Sure, before Galileo, it didn?t matter if you used a geocentric model of >> > the solar system or a heliocentric. But now that we?re flying up in the >> > heavens, one works, and one does not. Similarly, now, you can claim >> that >> > the qualitative nature doesn?t matter, but as soon as you start hacking >> the >> > brain, amplifying intelligence, connecting multiple brains (like two >> brain >> > hemispheres can be connect) or even religiously predicting what >> ?spirits? >> > and future consciousness will be possible. One model works, the other >> does >> > not. >> >> I don't see how your model predicts anything except for your ignorance >> of what consciousness is. You say that every consciousness is a unique >> snowflake of amassed qualia, I say that every machine-learning >> algorithm starts out with a random set of parameters and through >> learning its training data, either supervised or unsupervised, >> converges on an approximation of the truth >> >> Every deep learning neural network is a unique snowflake that gets >> optimized for a specific purpose. Some neural networks train very >> quickly, others never quite get what you are trying to teach it. There >> is very much a ghost in the machine and each time you run the >> algorithm, you get a different ghost. If you don't believe me, then >> download Simbrain, watch the turtorial video on Youtube, and I will >> send you a copy my tiny brain to play with. Be the qualitative judge >> of my tiny brain, I dare you. >> >> Do you not understand the implications of me creating a 55 neuron >> brain and teaching it to count to five? 
Do you not understand the >> implication of my tiny brain being able to distinguish ALL three-bit >> patterns after only being trained on SOME three-bit patterns? Do you >> not see the conceptualization of threeness that was occurring? >> >> > In fact, my prediction is the reason we can?t better understand how >> > we subjectively represent visual knowledge, is precisely because >> everyone >> > is like you, qualia blind, and doesn?t care that some people may have >> > qualitatively very different physical representations of red and green. >> >> Quit calling me "qualia blind". I am not sure what you mean by it, by >> it sounds vaguely insulting like you are accusing me of being a >> philosophic zombie or something. I assure you there is something that >> it is qualitatively like to be me, even if I can't succinctly describe >> it to you in monkey mouth noises. I could just as easily accuse you of >> being innumerate and a mathphobe, so either explain what you mean or >> knock it off. >> >> > >> > If you only care about if a brain can pick strawberries, and don?t care >> > what it is qualitatively like, then you can?t make the critically >> important >> > distinctions between these 3 robots >> > < >> https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing >> > >> > that are functionally the same but qualitatively very different, one >> being >> > not conscious at all. >> >> No being can be deemed conscious without some manner of inputs from >> the real world. That is the nature of perception. A robot without >> sensors cannot be conscious. If that is what you mean by an "abstract >> robot" than I agree that it is not conscious. On the other hand, a >> keyboard is a sensor. A very limited sensor but a sensor nonetheless. >> >> >> ?Nothing in the universe can objectively observe anything else.? >> >> > All information that comes to our senses is ?objectively? 
observed and >> > devoid of any physical qualitative information, it is all only abstract >> > mathematical information. Descartes, the ultimate septic, realized >> that he >> > must doubt all objectively observed information. >> >> You are in no way an objective observer. Any information that may have >> been objective before you observed it became biased the moment you >> perceived it. That is because your brain filters out and flat out >> ignores out any information that does not have relevance to Brent. Why >> else could you not see the color grue unless it had no survival >> advantage to you or your ancestors? Even now, your inborn Brentward >> bias is seething with the need to disagree with me: your primal and >> naked need to impose Brent upon me and the rest of the world. Can't >> you feel it? >> >> > But he also realized: ?I >> > think therefore I am?. This includes the knowledge of the qualities of >> our >> > consciousness. >> >> No it doesn't. Thinking pertains to logic and abstracts and not to >> qualia which are in the realm of that what you perceive and feel. >> Descartes said that his ability to make logical inferences entailed >> that he existed. If intelligence is, as you claim, separable from >> consciousness, then Descartes did little more than make a good case >> that he was intelligent. In fact he made it point to explicitly assume >> that all his perceived qualia were the work of some kind of malicious >> demon trying to mislead him about his existence through his senses or >> something similarly paranoid. In any case, if anyone was "qualia >> blind" it was your man Descartes, who used imagined demons to come up >> with a definition of himself that did not incorporate sensory >> information. Nonetheless, I don't think Descartes was a philosophic >> zombie. >> >> > We know, absolutely, in a way that cannot be doubted, what >> > physical redness is like, and how it is different than greenness. 
>> While it >> > is true that we may be a brain, in a vat. We know, absolutely, that the >> > physics, in the brain, in that vat exist, and we know, absolutely and >> > qualitatively, what that physics (in both hemispheres) is like. >> >> How could we know for sure what what the physical redness of ripe >> strawberries looks like when they would look different in the light >> and the shadow? >> >> https://en.wikipedia.org/wiki/Checker_shadow_illusion >> >> > Let?s say you did objectively detect some new ?perceptronium?. All you >> > would have, describing that perceptronium, is mathematical models and >> > descriptions of such. These mathematical descriptions of perceptronium >> > would all be completely devoid of any qualitative meaning. Until you >> > experienced a particular type of perceptronium, directly, you would not >> > know, qualitatively, how to interpret any of your mathematical objective >> > descriptions of such. >> >> Perceptronium is Tegmark's notion and not mine. I am not sure that as >> a concept it adds much to the understanding of consciousness. >> >> > Again, everything you are talking about is what Chalmers, and everyone >> > would call ?easy? problems. Discovering and objectively observing any >> kind >> > of ?perceptronium? is an easy problem. We already know how to do this. >> > Knowing, qualitatively, what that perceptronium is qualitatively like, >> if >> > you experienced it, directly, is what makes it hard. >> >> Being Brent is necessarily like being Brent. And if I were born in >> your stead, then I would necessarily be Brent. Moreover, you are being >> of finite information in that your entire history, your every thought, >> and your every deed can be described by a very large yet nonetheless >> finite number of true/false or yes/no questions and their answers. The >> smallest number of such yes/no questions and answers would equal your >> Shannon entropy. >> >> That means that there is a unique bitstring that describes you. 
The >> sum total of every discernible thing about you can be expressed as a >> very large integer. It would be the most compressed form of you that >> it is possible to express. >> >> > The only ?hard? part of consciousness is the ?Explanatory Ga >> > p?, or how do you eff >> the >> > ineffable nature of qualia. >> >> There is no "explanatory gap" because it is filled in by natural >> selection quite nicely. There are some qualia invariants that can be >> identified and experienced quite universally. For example, I know what >> your pain feels like. It feels unpleasant. I know that because our >> ancestors evolved to feel pain so they would try to avoid dangerously >> unhealthy environments and behaviors. >> >> > Everything else is just easy problems. We >> > already know, mathematically what it is like to be a bat. But that >> tells >> > you nothing, qualitatively about what being a bat is like. >> >> You are right, that's where technology can help. If you go >> hang-gliding on a moonless night while wearing a pair of these sonar >> glasses, you might come close to knowing what it is like to be a bat. >> >> http://sonarglasses.com/ >> >> Alternatively, since you are what you eat, you could just eat a bat >> and describe how it makes you feel. ;-) >> >> >> Stuart LaForge >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hrivera at alumni.virginia.edu Fri Jun 28 19:00:40 2019 From: hrivera at alumni.virginia.edu (Henry Rivera) Date: Fri, 28 Jun 2019 15:00:40 -0400 Subject: [ExI] ai emotions In-Reply-To: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> References: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> Message-ID: <230116BF-6247-4060-AF8B-573CBEBE726C@alumni.virginia.edu> Stuart, What's with all the hostility directed at Brent, who has steadfastly been trying to clarify discussions around consciousness for years? I didn't read such hostility and sarcasm in his response to you. I don't get the sense that he is trying to threaten your worldview or insult your intelligence. I admire the calmness, patience, and precision I have witnessed in Brent's dialogs all over the net for well over ten years. He'll likely try to respond to all your points, a task which many would abandon by this point in an email string. I see no value or virtue in aggressive or defensive retorts. Let's try to keep it civil here in our ExI realm, please. We have a pretty unique thing here. Not trying to speak for Spike or anything, but I'd like to think he'd agree. Respectfully, -Henry > On Jun 28, 2019, at 2:55 AM, Stuart LaForge wrote: > > > Quoting Brent Allsop: > > >>> "Consciousness is not magic, it is math." >> >> How do you get a specific, qualitative definition of the word "red" from >> any math? > > Red is a subset of the set of colors an unaugmented human can see. There I just defined it for you mathematically. In math symbols, it looks something very much like {red} C {red, orange, yellow, green, blue, indigo, violet}.
If you were a lucky mutant (or AI) that could perceive grue, then the math would look like {red} C {red, orange, yellow, green, grue, blue, indigo, violet} > > Whatever unique qualia your brain may have assigned to it is your business and your business alone since you cannot express red to me except by quantitative measure (650 nm wavelength electromagnetic wave) or qualitative example (the color of ripe strawberries). Any other description of red only means anything to YOU (Perhaps it makes your dick hard, I have no clue, don't really care.) > > > In other words, you can't give me any better a qualitative description of red then I can give you. Prove me wrong: What is red, oh privileged seer of qualia? (Yes, that was sarcasm.) > >> >> ?I don't think that substrate-specific details matter that much.? >> >> Then you are not talking about consciousness, at all. You are just talking >> about intelligence. Consciousness is computationally bound elemental >> qualities, for which there is something, qualitative, for which it is like. > > Intelligence and consciousness differ by degree, not by type. Both are emergent properties of some configurations of matter. If I were to quantitatively rank emergent properties by their PHI value, then I would have a distribution as follows: reactivity <= life <= intelligence <= consciousness <= civilization > >>> ?It is irrelevant that I perceive red as green.? > >> Can you not see how sloppy language like this is? I?m going to describe at >> least two very different possible interpretations of this statement. If >> you can?t distinguish between them, with your language, then again, you are >> not talking about consciousness: > > You pull a single sentence of mine out of context and then use it to accuse me of sloppy language? Here is my precise and unequivocal retort: NO! I challenge you to take that out of context. > >> 1. 
One person is color blind, and represents both red things and >> green things with knowledge that has the same physical redness quality. In >> other words, he is red-green color blind. >> >> 2. One person is qualitatively inverted from the other. He uses the >> other's greenness to represent red with, and vice versa for green things. > > When you said, "Are you talking about your redness, or my redness which is like your [sic] grenness?" I meant whichever you meant by the quoted statement. My argument holds either way. Unless you believe that color-blind people are not really conscious. In which case you should be enslaving the colorblind and tithing me 10% of the proceeds. > >> >> You can't tell which one your statement is talking about. Again, you're >> not talking about consciousness, if you can't distinguish between these >> types of things with your models and language. > > Again, my statement reflects yours with the exact same scope. So you tell me what I meant. > >> Sure, before Galileo, it didn't matter if you used a geocentric model of >> the solar system or a heliocentric one. But now that we're flying up in the >> heavens, one works, and one does not. Similarly, now, you can claim that >> the qualitative nature doesn't matter, but as soon as you start hacking the >> brain, amplifying intelligence, connecting multiple brains (like two brain >> hemispheres can be connected), or even religiously predicting what "spirits" >> and future consciousness will be possible, one model works, the other does >> not. > > I don't see how your model predicts anything except for your ignorance of what consciousness is.
You say that every consciousness is a unique snowflake of amassed qualia; I say that every machine-learning algorithm starts out with a random set of parameters and, through learning its training data, either supervised or unsupervised, converges on an approximation of the truth. > > Every deep learning neural network is a unique snowflake that gets optimized for a specific purpose. Some neural networks train very quickly; others never quite get what you are trying to teach them. There is very much a ghost in the machine, and each time you run the algorithm, you get a different ghost. If you don't believe me, then download Simbrain, watch the tutorial video on YouTube, and I will send you a copy of my tiny brain to play with. Be the qualitative judge of my tiny brain, I dare you. > > Do you not understand the implications of me creating a 55-neuron brain and teaching it to count to five? Do you not understand the implication of my tiny brain being able to distinguish ALL three-bit patterns after only being trained on SOME three-bit patterns? Do you not see the conceptualization of threeness that was occurring? > >> In fact, my prediction is that the reason we can't better understand how >> we subjectively represent visual knowledge is precisely because everyone >> is like you, qualia blind, and doesn't care that some people may have >> qualitatively very different physical representations of red and green. > > Quit calling me "qualia blind". I am not sure what you mean by it, but it sounds vaguely insulting, like you are accusing me of being a philosophic zombie or something. I assure you there is something that it is qualitatively like to be me, even if I can't succinctly describe it to you in monkey mouth noises. I could just as easily accuse you of being innumerate and a mathphobe, so either explain what you mean or knock it off.
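[Editor's aside: Stuart's Simbrain network and its 55-neuron training details are his own and are not reproduced here, but the generalization claim he describes (distinguishing ALL three-bit patterns after training on only SOME) can be illustrated with an even smaller, hypothetical stand-in: a single perceptron learning the majority-of-three-bits rule.]

```python
# A deliberately tiny stand-in for the Simbrain experiment described above.
# A single perceptron learns "majority of three bits" from six of the eight
# 3-bit patterns, then classifies all eight correctly -- including the two
# it never saw during training.

def majority(x):
    """Target concept: 1 if at least two of the three bits are set."""
    return 1 if sum(x) >= 2 else 0

all_patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
held_out = [(0, 0, 0), (1, 1, 1)]                 # never shown in training
training = [(0, 0, 1), (0, 1, 0), (1, 0, 0),      # the other six patterns,
            (0, 1, 1), (1, 0, 1), (1, 1, 0)]      # in a fixed order

w, b = [0, 0, 0], 0                               # deterministic zero init

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Classic perceptron learning rule; majority is linearly separable,
# so the loop is guaranteed to converge on the training set.
for _ in range(100):
    errors = 0
    for x in training:
        delta = majority(x) - predict(x)
        if delta != 0:
            errors += 1
            w = [wi + delta * xi for wi, xi in zip(w, x)]
            b += delta
    if errors == 0:
        break

# Trained on SOME three-bit patterns, correct on ALL of them:
assert all(predict(x) == majority(x) for x in all_patterns)
```

Whether a given train/hold-out split generalizes depends on the held-out patterns and the training order, which is itself an illustration of Stuart's "different ghost each run" point.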
> >> >> If you only care about whether a brain can pick strawberries, and don't care >> what it is qualitatively like, then you can't make the critically important >> distinctions between these 3 robots >> >> that are functionally the same but qualitatively very different, one being >> not conscious at all. > > No being can be deemed conscious without some manner of inputs from the real world. That is the nature of perception. A robot without sensors cannot be conscious. If that is what you mean by an "abstract robot", then I agree that it is not conscious. On the other hand, a keyboard is a sensor. A very limited sensor, but a sensor nonetheless. > >>> "Nothing in the universe can objectively observe anything else." > >> All information that comes to our senses is "objectively" observed and >> devoid of any physical qualitative information; it is all only abstract >> mathematical information. Descartes, the ultimate skeptic, realized that he >> must doubt all objectively observed information. > > You are in no way an objective observer. Any information that may have been objective before you observed it became biased the moment you perceived it. That is because your brain filters out and flat-out ignores any information that does not have relevance to Brent. Why else could you not see the color grue unless it had no survival advantage to you or your ancestors? Even now, your inborn Brentward bias is seething with the need to disagree with me: your primal and naked need to impose Brent upon me and the rest of the world. Can't you feel it? > >> But he also realized: "I >> think, therefore I am". This includes the knowledge of the qualities of our >> consciousness. > > No, it doesn't. Thinking pertains to logic and abstractions, not to qualia, which are in the realm of what you perceive and feel. Descartes said that his ability to make logical inferences entailed that he existed.
If intelligence is, as you claim, separable from consciousness, then Descartes did little more than make a good case that he was intelligent. In fact, he made it a point to explicitly assume that all his perceived qualia were the work of some kind of malicious demon trying to mislead him about his existence through his senses, or something similarly paranoid. In any case, if anyone was "qualia blind" it was your man Descartes, who used imagined demons to come up with a definition of himself that did not incorporate sensory information. Nonetheless, I don't think Descartes was a philosophic zombie. > >> We know, absolutely, in a way that cannot be doubted, what >> physical redness is like, and how it is different than greenness. While it >> is true that we may be a brain in a vat, we know, absolutely, that the >> physics in the brain in that vat exists, and we know, absolutely and >> qualitatively, what that physics (in both hemispheres) is like. > > How could we know for sure what the physical redness of ripe strawberries looks like when they would look different in the light and in the shadow? > > https://en.wikipedia.org/wiki/Checker_shadow_illusion > >> Let's say you did objectively detect some new "perceptronium". All you >> would have, describing that perceptronium, is mathematical models and >> descriptions of such. These mathematical descriptions of perceptronium >> would all be completely devoid of any qualitative meaning. Until you >> experienced a particular type of perceptronium, directly, you would not >> know, qualitatively, how to interpret any of your mathematical objective >> descriptions of such. > > Perceptronium is Tegmark's notion, not mine. I am not sure that, as a concept, it adds much to the understanding of consciousness. > >> Again, everything you are talking about is what Chalmers, and everyone, >> would call "easy" problems. Discovering and objectively observing any kind >> of "perceptronium" is an easy problem.
We already know how to do this. >> Knowing, qualitatively, what that perceptronium is like, if >> you experienced it, directly, is what makes it hard. > > Being Brent is necessarily like being Brent. And if I were born in your stead, then I would necessarily be Brent. Moreover, you are a being of finite information, in that your entire history, your every thought, and your every deed can be described by a very large yet nonetheless finite number of true/false or yes/no questions and their answers. The smallest number of such yes/no questions and answers would equal your Shannon entropy. > > That means that there is a unique bitstring that describes you. The sum total of every discernible thing about you can be expressed as a very large integer. It would be the most compressed form of you that it is possible to express. > >> The only "hard" part of consciousness is the "Explanatory Gap", or how do you eff the >> ineffable nature of qualia. > > There is no "explanatory gap" because it is filled in by natural selection quite nicely. There are some qualia invariants that can be identified and experienced quite universally. For example, I know what your pain feels like. It feels unpleasant. I know that because our ancestors evolved to feel pain so they would try to avoid dangerously unhealthy environments and behaviors. > >> Everything else is just easy problems. We >> already know, mathematically, what it is like to be a bat. But that tells >> you nothing, qualitatively, about what being a bat is like. > > You are right; that's where technology can help. If you go hang-gliding on a moonless night while wearing a pair of these sonar glasses, you might come close to knowing what it is like to be a bat. > > http://sonarglasses.com/ > > Alternatively, since you are what you eat, you could just eat a bat and describe how it makes you feel.
;-) > > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From brent.allsop at gmail.com Fri Jun 28 19:22:08 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 28 Jun 2019 13:22:08 -0600 Subject: [ExI] ai emotions In-Reply-To: <230116BF-6247-4060-AF8B-573CBEBE726C@alumni.virginia.edu> References: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> <230116BF-6247-4060-AF8B-573CBEBE726C@alumni.virginia.edu> Message-ID: Hi Stuart, There are "weak", "stronger", and "strongest" forms predicting how we will be able to eff the ineffable nature of the physical quality of the redness someone can directly experience to other people in this "Objectively, We are Blind to Physical Qualities" paper. This paper has now been accepted and presented at multiple conferences and referenced in the near-unanimous "Representational Qualia Theory" camp. Basically, you need to first discover which physics (or mathematics) it is that has a redness quality, then you duplicate that physics in the other's brain. Once you experience that physical (or mathematical) quality, directly, yourself, you can then say: "Oh, THAT is what grue is like." You are basically making the falsifiable prediction that consciousness or qualia arise from mathematics or functionality. This kind of functionalism is currently leading in supporting sub-camps to representational qualia theory, there being multiple functionalist sub-camps, with more supporters than the materialist sub-camps. So, let's take a simplistic falsifiable mathematical theory as an example, the way we use glutamate as a simplified falsifiable materialist example. Say you predict that it is the square root of 9 that has a redness quality, and that it is the square root of 16 that has a greenness quality.
In other words, this could be verified if no experimentalists could produce a redness, without doing that particular necessary and sufficient mathematical function that was the square root of 9. Experimentalists verifying this would falsify the materialist?s theories and prove that mathematics or function was more fundamental than matter and their qualities, which arise from mathematics. But it will remain a fact of the matter, that even though redness arises from the square root of 9, this ?arising? would still be a physical process, for which mathematics was the fundamental controlling interface. If you wanted to design a person to represent ?red? things with a physical redness quality, you?d do it by doing the square root of 9. If you wanted to represent the same red knowledge with greenness, you would do it by performing the square root of 16. But, if the prediction that it is glutamate that has the redness physical quality that can?t be falsified, and nobody is ever able to reproduce a redness experience (no matter what kind of mathematics you do) without physical glutamate, this would falsify functionalist and mathematical theories of qualia or consciousness. On Fri, Jun 28, 2019 at 1:02 PM Henry Rivera wrote: > Stuart, > What?s with all the hostility directed at Brent, who has been trying to > clarify discussions around consciousness for years steadfastly? I didn?t > read such hostility and sarcasm in his response to you. I don?t get the > sense that he is trying to threaten your worldview or insult your > intelligence. I admire the calmness, patience, and precision I have > witnessed in Brent?s dialogs all over the net for well over ten years. > He?ll likely try to respond to all your points, a task which many would > abandon by this point in an email string. I see no value or virtue > aggressive or defensive retorts. Let?s try to keep it civil here in our Exl > realm please. We have a pretty unique thing here. 
Not trying to speak for > Spike or anything, but I?d like to think he?d agree. > Respectfully, > -Henry > > > On Jun 28, 2019, at 2:55 AM, Stuart LaForge wrote: > > > > > > Quoting Brent Allsop: > > > > > >>> ?Consciousness is not magic, it is math.? > >> > >> How do you get a specific, qualitative definition of the word ?red? from > >> any math? > > > > Red is a subset of the set of colors an unaugmented human can see. There > I just defined it for you mathematically. In math symbols, it looks > something very much like {red} C {red, orange, yellow, green, blue, indigo, > violet}. If you were a lucky mutant (or AI) that could perceive grue, then > the math would look like {red} C {red, orange, yellow, green, grue, blue, > indigo, violet} > > > > Whatever unique qualia your brain may have assigned to it is your > business and your business alone since you cannot express red to me except > by quantitative measure (650 nm wavelength electromagnetic wave) or > qualitative example (the color of ripe strawberries). Any other description > of red only means anything to YOU (Perhaps it makes your dick hard, I have > no clue, don't really care.) > > > > > > In other words, you can't give me any better a qualitative description > of red then I can give you. Prove me wrong: What is red, oh privileged seer > of qualia? (Yes, that was sarcasm.) > > > >> > >> ?I don't think that substrate-specific details matter that much.? > >> > >> Then you are not talking about consciousness, at all. You are just > talking > >> about intelligence. Consciousness is computationally bound elemental > >> qualities, for which there is something, qualitative, for which it is > like. > > > > Intelligence and consciousness differ by degree, not by type. Both are > emergent properties of some configurations of matter. 
If I were to > quantitatively rank emergent properties by their PHI value, then I would > have a distribution as follows: reactivity <= life <= intelligence <= > consciousness <= civilization > > > >>> ?It is irrelevant that I perceive red as green.? > > > >> Can you not see how sloppy language like this is? I?m going to > describe at > >> least two very different possible interpretations of this statement. If > >> you can?t distinguish between them, with your language, then again, you > are > >> not talking about consciousness: > > > > You pull a single sentence of mine out of context and then use it to > accuse me of sloppy language? Here is my precise and unequivocal retort: > NO! I challenge you to take that out of context. > > > >> 1. One person is color blind, and represents both red things and > >> green things with knowledge that has the same physical redness > quality. In > >> other words, he is red green color blind. > >> > >> 2. One person is qualitatively inverted from the other. He uses > the > >> other?s greenness to represent red with and visa versa for green things. > > > > When you said, "Are you talking about your redness, or my redness which > is like your [sic] grenness?" I meant whichever you meant by the quoted > statement. My argument holds either way. Unless you believe that > color-blind people are not really conscious. In which case you should be > enslaving the colorblind and tithing me 10% of the proceeds. > > > >> > >> You can?t tell which one you?re statement is talking about. Again, > you?re > >> not talking about consciousness, if you can?t distinguish between these > >> types of things with your models and language. > > > > Again, my statement reflects yours with the exact same scope. So you > tell me what I meant. > > > >> Sure, before Galileo, it didn?t matter if you used a geocentric model of > >> the solar system or a heliocentric. But now that we?re flying up in the > >> heavens, one works, and one does not. 
Similarly, now, you can claim > that > >> the qualitative nature doesn?t matter, but as soon as you start hacking > the > >> brain, amplifying intelligence, connecting multiple brains (like two > brain > >> hemispheres can be connect) or even religiously predicting what > ?spirits? > >> and future consciousness will be possible. One model works, the other > does > >> not. > > > > I don't see how your model predicts anything except for your ignorance > of what consciousness is. You say that every consciousness is a unique > snowflake of amassed qualia, I say that every machine-learning algorithm > starts out with a random set of parameters and through learning its > training data, either supervised or unsupervised, converges on an > approximation of the truth > > > > Every deep learning neural network is a unique snowflake that gets > optimized for a specific purpose. Some neural networks train very quickly, > others never quite get what you are trying to teach it. There is very much > a ghost in the machine and each time you run the algorithm, you get a > different ghost. If you don't believe me, then download Simbrain, watch the > turtorial video on Youtube, and I will send you a copy my tiny brain to > play with. Be the qualitative judge of my tiny brain, I dare you. > > > > Do you not understand the implications of me creating a 55 neuron brain > and teaching it to count to five? Do you not understand the implication of > my tiny brain being able to distinguish ALL three-bit patterns after only > being trained on SOME three-bit patterns? Do you not see the > conceptualization of threeness that was occurring? > > > >> In fact, my prediction is the reason we can?t better understand how > >> we subjectively represent visual knowledge, is precisely because > everyone > >> is like you, qualia blind, and doesn?t care that some people may have > >> qualitatively very different physical representations of red and green. > > > > Quit calling me "qualia blind". 
I am not sure what you mean by it, by it > sounds vaguely insulting like you are accusing me of being a philosophic > zombie or something. I assure you there is something that it is > qualitatively like to be me, even if I can't succinctly describe it to you > in monkey mouth noises. I could just as easily accuse you of being > innumerate and a mathphobe, so either explain what you mean or knock it off. > > > >> > >> If you only care about if a brain can pick strawberries, and don?t care > >> what it is qualitatively like, then you can?t make the critically > important > >> distinctions between these 3 robots > >> < > https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing > > > >> that are functionally the same but qualitatively very different, one > being > >> not conscious at all. > > > > No being can be deemed conscious without some manner of inputs from the > real world. That is the nature of perception. A robot without sensors > cannot be conscious. If that is what you mean by an "abstract robot" than I > agree that it is not conscious. On the other hand, a keyboard is a sensor. > A very limited sensor but a sensor nonetheless. > > > >>> ?Nothing in the universe can objectively observe anything else.? > > > >> All information that comes to our senses is ?objectively? observed and > >> devoid of any physical qualitative information, it is all only abstract > >> mathematical information. Descartes, the ultimate septic, realized > that he > >> must doubt all objectively observed information. > > > > You are in no way an objective observer. Any information that may have > been objective before you observed it became biased the moment you > perceived it. That is because your brain filters out and flat out ignores > out any information that does not have relevance to Brent. Why else could > you not see the color grue unless it had no survival advantage to you or > your ancestors? 
Even now, your inborn Brentward bias is seething with the > need to disagree with me: your primal and naked need to impose Brent upon > me and the rest of the world. Can't you feel it? > > > >> But he also realized: ?I > >> think therefore I am?. This includes the knowledge of the qualities of > our > >> consciousness. > > > > No it doesn't. Thinking pertains to logic and abstracts and not to > qualia which are in the realm of that what you perceive and feel. Descartes > said that his ability to make logical inferences entailed that he existed. > If intelligence is, as you claim, separable from consciousness, then > Descartes did little more than make a good case that he was intelligent. In > fact he made it point to explicitly assume that all his perceived qualia > were the work of some kind of malicious demon trying to mislead him about > his existence through his senses or something similarly paranoid. In any > case, if anyone was "qualia blind" it was your man Descartes, who used > imagined demons to come up with a definition of himself that did not > incorporate sensory information. Nonetheless, I don't think Descartes was a > philosophic zombie. > > > >> We know, absolutely, in a way that cannot be doubted, what > >> physical redness is like, and how it is different than greenness. > While it > >> is true that we may be a brain, in a vat. We know, absolutely, that the > >> physics, in the brain, in that vat exist, and we know, absolutely and > >> qualitatively, what that physics (in both hemispheres) is like. > > > > How could we know for sure what what the physical redness of ripe > strawberries looks like when they would look different in the light and the > shadow? > > > > https://en.wikipedia.org/wiki/Checker_shadow_illusion > > > >> Let?s say you did objectively detect some new ?perceptronium?. All you > >> would have, describing that perceptronium, is mathematical models and > >> descriptions of such. 
These mathematical descriptions of perceptronium > >> would all be completely devoid of any qualitative meaning. Until you > >> experienced a particular type of perceptronium, directly, you would not > >> know, qualitatively, how to interpret any of your mathematical objective > >> descriptions of such. > > > > Perceptronium is Tegmark's notion and not mine. I am not sure that as a > concept it adds much to the understanding of consciousness. > > > >> Again, everything you are talking about is what Chalmers, and everyone > >> would call ?easy? problems. Discovering and objectively observing any > kind > >> of ?perceptronium? is an easy problem. We already know how to do this. > >> Knowing, qualitatively, what that perceptronium is qualitatively like, > if > >> you experienced it, directly, is what makes it hard. > > > > Being Brent is necessarily like being Brent. And if I were born in your > stead, then I would necessarily be Brent. Moreover, you are being of finite > information in that your entire history, your every thought, and your every > deed can be described by a very large yet nonetheless finite number of > true/false or yes/no questions and their answers. The smallest number of > such yes/no questions and answers would equal your Shannon entropy. > > > > That means that there is a unique bitstring that describes you. The sum > total of every discernible thing about you can be expressed as a very large > integer. It would be the most compressed form of you that it is possible to > express. > > > >> The only ?hard? part of consciousness is the ?Explanatory Ga > >> p?, or how do you eff > the > >> ineffable nature of qualia. > > > > There is no "explanatory gap" because it is filled in by natural > selection quite nicely. There are some qualia invariants that can be > identified and experienced quite universally. For example, I know what your > pain feels like. It feels unpleasant. 
I know that because our ancestors > evolved to feel pain so they would try to avoid dangerously unhealthy > environments and behaviors. > > > >> Everything else is just easy problems. We > >> already know, mathematically what it is like to be a bat. But that > tells > >> you nothing, qualitatively about what being a bat is like. > > > > You are right, that's where technology can help. If you go hang-gliding > on a moonless night while wearing a pair of these sonar glasses, you might > come close to knowing what it is like to be a bat. > > > > http://sonarglasses.com/ > > > > Alternatively, since you are what you eat, you could just eat a bat and > describe how it makes you feel. ;-) > > > > > > Stuart LaForge > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Fri Jun 28 19:41:06 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 28 Jun 2019 13:41:06 -0600 Subject: [ExI] ai emotions In-Reply-To: <230116BF-6247-4060-AF8B-573CBEBE726C@alumni.virginia.edu> References: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> <230116BF-6247-4060-AF8B-573CBEBE726C@alumni.virginia.edu> Message-ID: Thanks, Henry. I too, could have been more cordial in my responses to Stuart. On Fri, Jun 28, 2019 at 1:02 PM Henry Rivera wrote: > Stuart, > What?s with all the hostility directed at Brent, who has been trying to > clarify discussions around consciousness for years steadfastly? I didn?t > read such hostility and sarcasm in his response to you. 
I don?t get the > sense that he is trying to threaten your worldview or insult your > intelligence. I admire the calmness, patience, and precision I have > witnessed in Brent?s dialogs all over the net for well over ten years. > He?ll likely try to respond to all your points, a task which many would > abandon by this point in an email string. I see no value or virtue > aggressive or defensive retorts. Let?s try to keep it civil here in our Exl > realm please. We have a pretty unique thing here. Not trying to speak for > Spike or anything, but I?d like to think he?d agree. > Respectfully, > -Henry > > > On Jun 28, 2019, at 2:55 AM, Stuart LaForge wrote: > > > > > > Quoting Brent Allsop: > > > > > >>> ?Consciousness is not magic, it is math.? > >> > >> How do you get a specific, qualitative definition of the word ?red? from > >> any math? > > > > Red is a subset of the set of colors an unaugmented human can see. There > I just defined it for you mathematically. In math symbols, it looks > something very much like {red} C {red, orange, yellow, green, blue, indigo, > violet}. If you were a lucky mutant (or AI) that could perceive grue, then > the math would look like {red} C {red, orange, yellow, green, grue, blue, > indigo, violet} > > > > Whatever unique qualia your brain may have assigned to it is your > business and your business alone since you cannot express red to me except > by quantitative measure (650 nm wavelength electromagnetic wave) or > qualitative example (the color of ripe strawberries). Any other description > of red only means anything to YOU (Perhaps it makes your dick hard, I have > no clue, don't really care.) > > > > > > In other words, you can't give me any better a qualitative description > of red then I can give you. Prove me wrong: What is red, oh privileged seer > of qualia? (Yes, that was sarcasm.) > > > >> > >> ?I don't think that substrate-specific details matter that much.? > >> > >> Then you are not talking about consciousness, at all. 
You are just > talking > >> about intelligence. Consciousness is computationally bound elemental > >> qualities, for which there is something, qualitative, for which it is > like. > > > > Intelligence and consciousness differ by degree, not by type. Both are > emergent properties of some configurations of matter. If I were to > quantitatively rank emergent properties by their PHI value, then I would > have a distribution as follows: reactivity <= life <= intelligence <= > consciousness <= civilization > > > >>> ?It is irrelevant that I perceive red as green.? > > > >> Can you not see how sloppy language like this is? I?m going to > describe at > >> least two very different possible interpretations of this statement. If > >> you can?t distinguish between them, with your language, then again, you > are > >> not talking about consciousness: > > > > You pull a single sentence of mine out of context and then use it to > accuse me of sloppy language? Here is my precise and unequivocal retort: > NO! I challenge you to take that out of context. > > > >> 1. One person is color blind, and represents both red things and > >> green things with knowledge that has the same physical redness > quality. In > >> other words, he is red green color blind. > >> > >> 2. One person is qualitatively inverted from the other. He uses > the > >> other?s greenness to represent red with and visa versa for green things. > > > > When you said, "Are you talking about your redness, or my redness which > is like your [sic] grenness?" I meant whichever you meant by the quoted > statement. My argument holds either way. Unless you believe that > color-blind people are not really conscious. In which case you should be > enslaving the colorblind and tithing me 10% of the proceeds. > > > >> > >> You can?t tell which one you?re statement is talking about. Again, > you?re > >> not talking about consciousness, if you can?t distinguish between these > >> types of things with your models and language. 
> > > > Again, my statement reflects yours with the exact same scope. So you > tell me what I meant. > > > >> Sure, before Galileo, it didn?t matter if you used a geocentric model of > >> the solar system or a heliocentric. But now that we?re flying up in the > >> heavens, one works, and one does not. Similarly, now, you can claim > that > >> the qualitative nature doesn?t matter, but as soon as you start hacking > the > >> brain, amplifying intelligence, connecting multiple brains (like two > brain > >> hemispheres can be connect) or even religiously predicting what > ?spirits? > >> and future consciousness will be possible. One model works, the other > does > >> not. > > > > I don't see how your model predicts anything except for your ignorance > of what consciousness is. You say that every consciousness is a unique > snowflake of amassed qualia, I say that every machine-learning algorithm > starts out with a random set of parameters and through learning its > training data, either supervised or unsupervised, converges on an > approximation of the truth > > > > Every deep learning neural network is a unique snowflake that gets > optimized for a specific purpose. Some neural networks train very quickly, > others never quite get what you are trying to teach it. There is very much > a ghost in the machine and each time you run the algorithm, you get a > different ghost. If you don't believe me, then download Simbrain, watch the > turtorial video on Youtube, and I will send you a copy my tiny brain to > play with. Be the qualitative judge of my tiny brain, I dare you. > > > > Do you not understand the implications of me creating a 55 neuron brain > and teaching it to count to five? Do you not understand the implication of > my tiny brain being able to distinguish ALL three-bit patterns after only > being trained on SOME three-bit patterns? Do you not see the > conceptualization of threeness that was occurring? 
> > > >> In fact, my prediction is the reason we can't better understand how > >> we subjectively represent visual knowledge, is precisely because > everyone > >> is like you, qualia blind, and doesn't care that some people may have > >> qualitatively very different physical representations of red and green. > > > > Quit calling me "qualia blind". I am not sure what you mean by it, but it > sounds vaguely insulting like you are accusing me of being a philosophic > zombie or something. I assure you there is something that it is > qualitatively like to be me, even if I can't succinctly describe it to you > in monkey mouth noises. I could just as easily accuse you of being > innumerate and a mathphobe, so either explain what you mean or knock it off. > > > >> > >> If you only care about if a brain can pick strawberries, and don't care > >> what it is qualitatively like, then you can't make the critically > important > >> distinctions between these 3 robots > >> < > https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing > > > >> that are functionally the same but qualitatively very different, one > being > >> not conscious at all. > > > > No being can be deemed conscious without some manner of inputs from the > real world. That is the nature of perception. A robot without sensors > cannot be conscious. If that is what you mean by an "abstract robot" then I > agree that it is not conscious. On the other hand, a keyboard is a sensor. > A very limited sensor but a sensor nonetheless. > > > >>> "Nothing in the universe can objectively observe anything else." > > > >> All information that comes to our senses is "objectively" observed and > >> devoid of any physical qualitative information, it is all only abstract > >> mathematical information. Descartes, the ultimate skeptic, realized > that he > >> must doubt all objectively observed information. > > > > You are in no way an objective observer.
Any information that may have > been objective before you observed it became biased the moment you > perceived it. That is because your brain filters out and flat out ignores > any information that does not have relevance to Brent. Why else could > you not see the color grue unless it had no survival advantage to you or > your ancestors? Even now, your inborn Brentward bias is seething with the > need to disagree with me: your primal and naked need to impose Brent upon > me and the rest of the world. Can't you feel it? > > > >> But he also realized: "I > >> think therefore I am". This includes the knowledge of the qualities of > our > >> consciousness. > > > > No it doesn't. Thinking pertains to logic and abstracts and not to > qualia, which are in the realm of what you perceive and feel. Descartes > said that his ability to make logical inferences entailed that he existed. > If intelligence is, as you claim, separable from consciousness, then > Descartes did little more than make a good case that he was intelligent. In > fact he made it a point to explicitly assume that all his perceived qualia > were the work of some kind of malicious demon trying to mislead him about > his existence through his senses or something similarly paranoid. In any > case, if anyone was "qualia blind" it was your man Descartes, who used > imagined demons to come up with a definition of himself that did not > incorporate sensory information. Nonetheless, I don't think Descartes was a > philosophic zombie. > > > >> We know, absolutely, in a way that cannot be doubted, what > >> physical redness is like, and how it is different than greenness.
> > > > How could we know for sure what the physical redness of ripe > strawberries looks like when they would look different in the light and the > shadow? > > > > https://en.wikipedia.org/wiki/Checker_shadow_illusion > > > >> Let's say you did objectively detect some new "perceptronium". All you > >> would have, describing that perceptronium, is mathematical models and > >> descriptions of such. These mathematical descriptions of perceptronium > >> would all be completely devoid of any qualitative meaning. Until you > >> experienced a particular type of perceptronium, directly, you would not > >> know, qualitatively, how to interpret any of your mathematical objective > >> descriptions of such. > > > > Perceptronium is Tegmark's notion and not mine. I am not sure that as a > concept it adds much to the understanding of consciousness. > > > >> Again, everything you are talking about is what Chalmers, and everyone > >> would call "easy" problems. Discovering and objectively observing any > kind > >> of "perceptronium" is an easy problem. We already know how to do this. > >> Knowing, qualitatively, what that perceptronium is qualitatively like, > if > >> you experienced it, directly, is what makes it hard. > > > > Being Brent is necessarily like being Brent. And if I were born in your > stead, then I would necessarily be Brent. Moreover, you are a being of finite > information in that your entire history, your every thought, and your every > deed can be described by a very large yet nonetheless finite number of > true/false or yes/no questions and their answers. The smallest number of > such yes/no questions and answers would equal your Shannon entropy. > > > > That means that there is a unique bitstring that describes you. The sum > total of every discernible thing about you can be expressed as a very large > integer. It would be the most compressed form of you that it is possible to > express. > > > >> The only "hard"
part of consciousness is the "Explanatory Gap", or how do you eff the > >> ineffable nature of qualia. > > > > There is no "explanatory gap" because it is filled in by natural > selection quite nicely. There are some qualia invariants that can be > identified and experienced quite universally. For example, I know what your > pain feels like. It feels unpleasant. I know that because our ancestors > evolved to feel pain so they would try to avoid dangerously unhealthy > environments and behaviors. > > > >> Everything else is just easy problems. We > >> already know, mathematically, what it is like to be a bat. But that > tells > >> you nothing, qualitatively, about what being a bat is like. > > > > You are right, that's where technology can help. If you go hang-gliding > on a moonless night while wearing a pair of these sonar glasses, you might > come close to knowing what it is like to be a bat. > > > > http://sonarglasses.com/ > > > > Alternatively, since you are what you eat, you could just eat a bat and > describe how it makes you feel. ;-) > > > > > > Stuart LaForge > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Jun 28 20:28:03 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 28 Jun 2019 15:28:03 -0500 Subject: [ExI] ai emotions In-Reply-To: References: <20190627235548.Horde.XejsV_ar6Gxoa5mjijFJDGm@secure199.inmotionhosting.com> Message-ID: The only problem with the definition of consciousness as needing external inputs is that there are stages of sleep where the person thinks he is awake - i.e.
cannot tell if he is asleep or awake. If he can't tell one from the other, is it the same thing? I spent a night at a clinic which was trying to tell if I had sleep apnea. My experience was that I did not sleep at all - was conscious all the time. All my thought was rational. But they told me that I did spend some time in light sleep (the very stage where you might not be able to tell the difference). So I would call that at least partial consciousness - and thus change that definition. Then there is the odd problem of, while being asleep, taking external inputs and including them in your dreams, such as a fire truck siren being experienced as part of the dream. bill w On Fri, Jun 28, 2019 at 1:37 PM Brent Allsop wrote: > Hi William, > > > > Is dreaming - aka REM sleep - a variety of consciousness to you? > > > > My understanding of Stuart's definition is that "No being can be deemed > conscious without some manner of inputs from the real world." So a > dreaming person is not conscious, according to that definition, since there > are no inputs from the real world. It seems to me that given such a > definition they would also consider all vegetative people as not conscious. > If I were such a person, very aware of my thoughts and their physical > qualities, and such but unable to receive input through my senses, nor > control any motor functions, I wouldn't want a person with this definition > as their working hypothesis of who is and isn't "deemed conscious" to be my > doctor. > > > > Also, I think many people agree that sleep is when much of the neural > brain programming occurs, enabling us to figure things out like "what we > are trying to tell ourselves." > > > Brent > > > > On Fri, Jun 28, 2019 at 8:51 AM William Flynn Wallace > wrote: > >> stuart/brent wrote If intelligence is, as you claim, separable from >> consciousness >> >> Is dreaming - aka REM sleep - a variety of consciousness to you?
I have >> certainly used my intelligence while I was dreaming - mainly to figure out >> what I was trying to say to myself! >> >> In a way, stage 4 sleep is the deepest. That's where the night terrors >> take place - which I have never experienced, but I assume there is >> something like consciousness there for the terrors to be experienced in. >> >> Other stages of sleep are not accompanied by any consciousness, although >> we can drift between a bit of consciousness and sleep in stage 1. Some >> people have said that they can tell when they are entering sleep when their >> thoughts go from rational to a bit crazy. >> >> bill w >> >> >> On Fri, Jun 28, 2019 at 2:00 AM Stuart LaForge >> wrote: >> >>> >>> Quoting Brent Allsop: >>> >>> >>> >> "Consciousness is not magic, it is math." >>> > >>> > How do you get a specific, qualitative definition of the word "red" >>> from >>> > any math? >>> >>> Red is a subset of the set of colors an unaugmented human can see. >>> There I just defined it for you mathematically. In math symbols, it >>> looks something very much like {red} C {red, orange, yellow, green, >>> blue, indigo, violet}. If you were a lucky mutant (or AI) that could >>> perceive grue, then the math would look like {red} C {red, orange, >>> yellow, green, grue, blue, indigo, violet}. >>> >>> Whatever unique qualia your brain may have assigned to it is your >>> business and your business alone since you cannot express red to me >>> except by quantitative measure (650 nm wavelength electromagnetic >>> wave) or qualitative example (the color of ripe strawberries). Any >>> other description of red only means anything to YOU (Perhaps it makes >>> your dick hard, I have no clue, don't really care.) >>> >>> >>> In other words, you can't give me any better a qualitative description >>> of red than I can give you. Prove me wrong: What is red, oh privileged >>> seer of qualia? (Yes, that was sarcasm.)
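[Editor's note: Stuart's set-theoretic gloss of "red" above can be written out literally in code. The element names below are illustrative placeholders for discriminable color categories, not claims about anyone's qualia; "grue" is the hypothetical extra category he mentions.]

```python
# The set of color categories an unaugmented human can discriminate,
# and a hypothetical mutant's set with one extra category, "grue".
human_colors = {"red", "orange", "yellow", "green", "blue", "indigo", "violet"}
mutant_colors = human_colors | {"grue"}

red = {"red"}

# {red} is a proper subset of the visible-color set in both cases,
# which is all the "mathematical definition" asserts.
print(red < human_colors)   # True
print(red < mutant_colors)  # True
```

Python's `<` on sets is the proper-subset relation, so this is a direct transcription of {red} C {red, orange, ...}.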
>>> >>> > >>> > "I don't think that substrate-specific details matter that much." >>> > >>> > Then you are not talking about consciousness, at all. You are just >>> talking >>> > about intelligence. Consciousness is computationally bound elemental >>> > qualities, for which there is something, qualitative, for which it is >>> like. >>> >>> Intelligence and consciousness differ by degree, not by type. Both are >>> emergent properties of some configurations of matter. If I were to >>> quantitatively rank emergent properties by their PHI value, then I >>> would have a distribution as follows: reactivity <= life <= >>> intelligence <= consciousness <= civilization >>> >>> >> "It is irrelevant that I perceive red as green." >>> >>> > Can you not see how sloppy language like this is? I'm going to >>> describe at >>> > least two very different possible interpretations of this statement. >>> If >>> > you can't distinguish between them, with your language, then again, >>> you are >>> > not talking about consciousness: >>> >>> You pull a single sentence of mine out of context and then use it to >>> accuse me of sloppy language? Here is my precise and unequivocal >>> retort: NO! I challenge you to take that out of context. >>> >>> > 1. One person is color blind, and represents both red things and >>> > green things with knowledge that has the same physical redness >>> quality. In >>> > other words, he is red green color blind. >>> > >>> > 2. One person is qualitatively inverted from the other. He uses >>> the >>> > other's greenness to represent red with and vice versa for green >>> things. >>> >>> When you said, "Are you talking about your redness, or my redness >>> which is like your [sic] grenness?" I meant whichever you meant by the >>> quoted statement. My argument holds either way. Unless you believe >>> that color-blind people are not really conscious. In which case you >>> should be enslaving the colorblind and tithing me 10% of the proceeds.
>>> >>> > >>> > You can't tell which one your statement is talking about. Again, >>> you're >>> > not talking about consciousness, if you can't distinguish between these >>> > types of things with your models and language. >>> >>> Again, my statement reflects yours with the exact same scope. So you >>> tell me what I meant. >>> >>> > Sure, before Galileo, it didn't matter if you used a geocentric model >>> of >>> > the solar system or a heliocentric. But now that we're flying up in >>> the >>> > heavens, one works, and one does not. Similarly, now, you can claim >>> that >>> > the qualitative nature doesn't matter, but as soon as you start >>> hacking the >>> > brain, amplifying intelligence, connecting multiple brains (like two >>> brain >>> > hemispheres can be connected) or even religiously predicting what >>> "spirits" >>> > and future consciousness will be possible. One model works, the other >>> does >>> > not. >>> >>> I don't see how your model predicts anything except for your ignorance >>> of what consciousness is. You say that every consciousness is a unique >>> snowflake of amassed qualia, I say that every machine-learning >>> algorithm starts out with a random set of parameters and through >>> learning its training data, either supervised or unsupervised, >>> converges on an approximation of the truth. >>> >>> Every deep learning neural network is a unique snowflake that gets >>> optimized for a specific purpose. Some neural networks train very >>> quickly, others never quite get what you are trying to teach them. There >>> is very much a ghost in the machine and each time you run the >>> algorithm, you get a different ghost. If you don't believe me, then >>> download Simbrain, watch the tutorial video on YouTube, and I will >>> send you a copy of my tiny brain to play with. Be the qualitative judge >>> of my tiny brain, I dare you. >>> >>> Do you not understand the implications of me creating a 55 neuron >>> brain and teaching it to count to five?
Do you not understand the >>> implication of my tiny brain being able to distinguish ALL three-bit >>> patterns after only being trained on SOME three-bit patterns? Do you >>> not see the conceptualization of threeness that was occurring? >>> >>> > In fact, my prediction is the reason we can't better understand how >>> > we subjectively represent visual knowledge, is precisely because >>> everyone >>> > is like you, qualia blind, and doesn't care that some people may have >>> > qualitatively very different physical representations of red and green. >>> >>> Quit calling me "qualia blind". I am not sure what you mean by it, but >>> it sounds vaguely insulting like you are accusing me of being a >>> philosophic zombie or something. I assure you there is something that >>> it is qualitatively like to be me, even if I can't succinctly describe >>> it to you in monkey mouth noises. I could just as easily accuse you of >>> being innumerate and a mathphobe, so either explain what you mean or >>> knock it off. >>> >>> > >>> > If you only care about if a brain can pick strawberries, and don't care >>> > what it is qualitatively like, then you can't make the critically >>> important >>> > distinctions between these 3 robots >>> > < >>> https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing >>> > >>> > that are functionally the same but qualitatively very different, one >>> being >>> > not conscious at all. >>> >>> No being can be deemed conscious without some manner of inputs from >>> the real world. That is the nature of perception. A robot without >>> sensors cannot be conscious. If that is what you mean by an "abstract >>> robot" then I agree that it is not conscious. On the other hand, a >>> keyboard is a sensor. A very limited sensor but a sensor nonetheless. >>> >>> >> "Nothing in the universe can objectively observe anything else." >>> >>> > All information that comes to our senses is "objectively"
observed and >>> > devoid of any physical qualitative information, it is all only abstract >>> > mathematical information. Descartes, the ultimate skeptic, realized >>> that he >>> > must doubt all objectively observed information. >>> >>> You are in no way an objective observer. Any information that may have >>> been objective before you observed it became biased the moment you >>> perceived it. That is because your brain filters out and flat out >>> ignores any information that does not have relevance to Brent. Why >>> else could you not see the color grue unless it had no survival >>> advantage to you or your ancestors? Even now, your inborn Brentward >>> bias is seething with the need to disagree with me: your primal and >>> naked need to impose Brent upon me and the rest of the world. Can't >>> you feel it? >>> >>> > But he also realized: "I >>> > think therefore I am". This includes the knowledge of the qualities of >>> our >>> > consciousness. >>> >>> No it doesn't. Thinking pertains to logic and abstracts and not to >>> qualia, which are in the realm of what you perceive and feel. >>> Descartes said that his ability to make logical inferences entailed >>> that he existed. If intelligence is, as you claim, separable from >>> consciousness, then Descartes did little more than make a good case >>> that he was intelligent. In fact he made it a point to explicitly assume >>> that all his perceived qualia were the work of some kind of malicious >>> demon trying to mislead him about his existence through his senses or >>> something similarly paranoid. In any case, if anyone was "qualia >>> blind" it was your man Descartes, who used imagined demons to come up >>> with a definition of himself that did not incorporate sensory >>> information. Nonetheless, I don't think Descartes was a philosophic >>> zombie. >>> >>> > We know, absolutely, in a way that cannot be doubted, what >>> > physical redness is like, and how it is different than greenness.
>>> While it >>> > is true that we may be a brain, in a vat. We know, absolutely, that >>> the >>> > physics, in the brain, in that vat exist, and we know, absolutely and >>> > qualitatively, what that physics (in both hemispheres) is like. >>> >>> How could we know for sure what what the physical redness of ripe >>> strawberries looks like when they would look different in the light >>> and the shadow? >>> >>> https://en.wikipedia.org/wiki/Checker_shadow_illusion >>> >>> > Let?s say you did objectively detect some new ?perceptronium?. All you >>> > would have, describing that perceptronium, is mathematical models and >>> > descriptions of such. These mathematical descriptions of perceptronium >>> > would all be completely devoid of any qualitative meaning. Until you >>> > experienced a particular type of perceptronium, directly, you would not >>> > know, qualitatively, how to interpret any of your mathematical >>> objective >>> > descriptions of such. >>> >>> Perceptronium is Tegmark's notion and not mine. I am not sure that as >>> a concept it adds much to the understanding of consciousness. >>> >>> > Again, everything you are talking about is what Chalmers, and everyone >>> > would call ?easy? problems. Discovering and objectively observing any >>> kind >>> > of ?perceptronium? is an easy problem. We already know how to do this. >>> > Knowing, qualitatively, what that perceptronium is qualitatively like, >>> if >>> > you experienced it, directly, is what makes it hard. >>> >>> Being Brent is necessarily like being Brent. And if I were born in >>> your stead, then I would necessarily be Brent. Moreover, you are being >>> of finite information in that your entire history, your every thought, >>> and your every deed can be described by a very large yet nonetheless >>> finite number of true/false or yes/no questions and their answers. The >>> smallest number of such yes/no questions and answers would equal your >>> Shannon entropy. 
>>> >>> That means that there is a unique bitstring that describes you. The >>> sum total of every discernible thing about you can be expressed as a >>> very large integer. It would be the most compressed form of you that >>> it is possible to express. >>> >>> > The only "hard" part of consciousness is the "Explanatory Gap", >>> or how do you eff the >>> > ineffable nature of qualia. >>> >>> There is no "explanatory gap" because it is filled in by natural >>> selection quite nicely. There are some qualia invariants that can be >>> identified and experienced quite universally. For example, I know what >>> your pain feels like. It feels unpleasant. I know that because our >>> ancestors evolved to feel pain so they would try to avoid dangerously >>> unhealthy environments and behaviors. >>> >>> > Everything else is just easy problems. We >>> > already know, mathematically, what it is like to be a bat. But that >>> tells >>> > you nothing, qualitatively, about what being a bat is like. >>> >>> You are right, that's where technology can help. If you go >>> hang-gliding on a moonless night while wearing a pair of these sonar >>> glasses, you might come close to knowing what it is like to be a bat. >>> >>> http://sonarglasses.com/ >>> >>> Alternatively, since you are what you eat, you could just eat a bat >>> and describe how it makes you feel.
;-) >>> >>> >>> Stuart LaForge >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Fri Jun 28 23:28:03 2019 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 28 Jun 2019 19:28:03 -0400 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: On Fri, Jun 28, 2019, 11:41 AM Dylan Distasio wrote: > I would assume mine is different from yours unless you're red/green > colorblind also. > > Tbh, everyone's is different even with the same diagnosis of colorblindness. I don't think we need to have confirmed redness of red, milkness of milk, or 8ness of 8 - because the objects are perhaps irrelevant if we agree on the syntax for transformation. Ex: I think of 8 as two cubed and you think 4+4, do we have to make this distinction to do any of the operations on 8? (Let's ask someone who views 8 as half of sixteen) -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sat Jun 29 00:25:56 2019 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 28 Jun 2019 17:25:56 -0700 Subject: [ExI] ai emotions Message-ID: <20190628172556.Horde.uqtW0puzWNSOg6B5mu8AZi7@secure199.inmotionhosting.com> Yes, of course. You are quite right, Henry, the list is too small for such behavior. My intent was not hostility or offense but instead more like jocular mischief trying to incite Brent.
My judgement might also have been skewed by some wine as well. But I also know that Brent is a veteran of the flame wars on the list at the turn of the century, so I am sure he wasn't overly offended. In any case, I apologize. Stuart LaForge Quoting Henry Rivera: > Stuart, > What's with all the hostility directed at Brent, who has been trying > to clarify discussions around consciousness for years steadfastly? I > didn't read such hostility and sarcasm in his response to you. I > don't get the sense that he is trying to threaten your worldview or > insult your intelligence. I admire the calmness, patience, and > precision I have witnessed in Brent's dialogs all over the net for > well over ten years. He'll likely try to respond to all your points, > a task which many would abandon by this point in an email string. I > see no value or virtue in aggressive or defensive retorts. Let's try to > keep it civil here in our ExI realm please. We have a pretty unique > thing here. Not trying to speak for Spike or anything, but I'd like > to think he'd agree. > Respectfully, > -Henry From spike at rainier66.com Sat Jun 29 00:36:07 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 28 Jun 2019 17:36:07 -0700 Subject: [ExI] ai emotions In-Reply-To: <20190628172556.Horde.uqtW0puzWNSOg6B5mu8AZi7@secure199.inmotionhosting.com> References: <20190628172556.Horde.uqtW0puzWNSOg6B5mu8AZi7@secure199.inmotionhosting.com> Message-ID: <00bd01d52e12$a4a60390$edf20ab0$@rainier66.com> Compared to the olden days, the ExI flame wars are relatively serene. ExI-chat is a kinder and gentler place than it once was. Thanks to all for that. spike -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge Subject: Re: [ExI] ai emotions >...Yes, of course. You are quite right, Henry, the list is too small for such behavior. My intent was not hostility or offense but instead more like jocular mischief trying to incite Brent.
My judgement might also have been skewed by some wine as well. But I also know that Brent is a veteran of the flame wars on the list at the turn of the century, so I am sure he wasn't overly offended. >...In any case, I apologize. Stuart LaForge Quoting Henry Rivera: > Stuart... > Respectfully, > -Henry _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From brent.allsop at gmail.com Sat Jun 29 01:42:34 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 28 Jun 2019 19:42:34 -0600 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: John, "For example, would the subjective experience of somebody who saw the world in black and white be different from somebody who saw the world in red and black? I don't think it would although we'll never know for sure." I hear you saying something very different (redness vs whiteness) is not different? It could still function the same, is that what you mean? Because it would be very qualitatively different, right? As it is very different in these 3 functionally equivalent robots that are qualitatively very different. And of course, your claim "we'll never know for sure." is certainly a falsifiable claim. Everyone supporting "Representational Qualia Theory" is predicting these claims will soon be falsified, once experimentalists stop being qualia blind. And thanks Dylan for pointing out that red green color blind "bichromats" have very different qualia than normal trichromats. I look forward to when you can experience what it is like for us trichromats, just as I look forward to experiencing what it is like for the rare tetrachromats. And I would bet there is a good chance that some bats use my redness qualia to represent some of what their echolocation detects. In other words, lots of diversity possibilities within the same species, and lots of similarity possible in different species.
On Fri, Jun 28, 2019 at 5:30 PM Mike Dougherty wrote: > On Fri, Jun 28, 2019, 11:41 AM Dylan Distasio wrote: > >> I would assume mine is different from yours unless you're red/green >> colorblind also. >> > >> > Tbh, everyone's is different even with the same diagnosis of > colorblindness. > > I don't think we need to have confirmed redness of red, milkness of milk, > or 8ness of 8 - because the objects are perhaps irrelevant if we agree on > the syntax for transformation. Ex: I think of 8 as two cubed and you > think 4+4, do we have to make this distinction to do any of the operations > on 8? (Let's ask someone who views 8 as half of sixteen) > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Jun 29 12:36:07 2019 From: johnkclark at gmail.com (John Clark) Date: Sat, 29 Jun 2019 08:36:07 -0400 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: On Fri, Jun 28, 2019 at 9:46 PM Brent Allsop wrote: John, > > > >> >> "For example, would the subjective experience of somebody who saw the >> world in black and white be different from somebody who saw the world in >> red and black? I don't think it would although we'll never know for sure." > > > > *> I hear you saying something very different (redness vs whiteness) is > not different? * > Difference is indeed the key word because meaning needs contrast and that's why the best definition of "nothing" I ever heard was "infinite unbounded homogeneity". I think this is just as true for qualia as anything else, it is meaningless to speak about qualia in isolation from all other qualia.
If our entire visual field consisted of an unvarying field of red we wouldn't have a word for "red" or "color" or "vision" because we would be totally blind and be unaware we were seeing red or seeing anything at all. *> It could still function the same, is that what you mean? * > It would certainly function the same no doubt about that. > > *Because it would be very qualitatively different, right?* > I don't think so because qualia can't get their meaning from something absolute in themselves, they must obtain meaning from how they contrast with other qualia. So a world seen in black and white and a world seen in red and white would certainly not be objectively different and, although we will never be able to prove it, I don't think it would be subjectively different either, not quantitatively and not qualitatively. *> And thanks Dylan for pointing out that red green color blind > "bichromats" have very different qualia than normal trichromats.* In that special case there would be a very obvious objective difference so little stretch would be needed to conclude there would be a subjective difference too. I would also propose that a man who was blind from birth would have different color qualia from both you and me. > *> And of course, your claim "we'll never know for sure." is certainly a > falsifiable claim. * > Nobody will ever be able to prove my claim is false and nobody will ever be able to prove it's correct either, so it's not a scientific statement, it's a philosophical one; in other words my idea does not deserve a lot of deep thought because, just like *ALL* consciousness theories and very unlike intelligence theories, it leads precisely nowhere. > * > Everyone supporting "Representational Qualia Theory" is > predicting these claims will soon be falsified, once experimentalists stop > being qualia blind.* > I don't find these claims convincing because they all involve radical merging and alteration of the experimental subject.
I don't think you or I will ever know for certain if we experience the same qualia; someday John Allsop and Brent Clark might know if they share the same qualia or not but we won't. And now, as Monty Python would say, for something completely different: Today after 46 years I officially retire my job of being an electrical engineer in order to pursue my goal of becoming a philosopher king, or maybe just a gentleman of leisure, or maybe just a bum. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From interzone at gmail.com Sat Jun 29 15:05:56 2019 From: interzone at gmail.com (Dylan Distasio) Date: Sat, 29 Jun 2019 11:05:56 -0400 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: Congrats on the retirement! I didn't realize you are an EE. Do you do any related hobbyist work? On Sat, Jun 29, 2019, 8:38 AM John Clark wrote: > > > And now, as Monty Python would say, for something completely different: > Today after 46 years I officially retire my job of being an electrical > engineer in order to pursue my goal of becoming a philosopher king, or > maybe just a gentleman of leisure, or maybe just a bum. > > John K Clark > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Jun 29 15:35:05 2019 From: johnkclark at gmail.com (John Clark) Date: Sat, 29 Jun 2019 11:35:05 -0400 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: On Sat, Jun 29, 2019 at 11:09 AM Dylan Distasio wrote: > *Congrats on the retirement! * > Thanks. > *> I didn't realize you are an EE. Do you do any related hobbyist work?* > Nah, reading is my only hobby. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Sat Jun 29 17:12:03 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 29 Jun 2019 10:12:03 -0700 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: <009501d52e9d$c5ec5280$51c4f780$@rainier66.com> John where did you work? spike From: extropy-chat On Behalf Of John Clark Sent: Saturday, June 29, 2019 8:35 AM To: ExI chat list Subject: Re: [ExI] Red and green qualia On Sat, Jun 29, 2019 at 11:09 AM Dylan Distasio > wrote: > Congrats on the retirement! Thanks. > I didn't realize you are an EE. Do you do any related hobbyist work? Nah, reading is my only hobby. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Jun 29 17:31:36 2019 From: johnkclark at gmail.com (John Clark) Date: Sat, 29 Jun 2019 13:31:36 -0400 Subject: [ExI] Red and green qualia In-Reply-To: <009501d52e9d$c5ec5280$51c4f780$@rainier66.com> References: <009501d52e9d$c5ec5280$51c4f780$@rainier66.com> Message-ID: On Sat, Jun 29, 2019 at 1:15 PM wrote: > *John where did you work?* > My last job was in the engineering department at a TV station, and my last day is today; in fact I'm heading for my car right now for the traffic-filled 32-mile commute to work for the very last time. That commute was no fun, it was the main reason I decided to call it quits. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Jun 29 19:02:02 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 29 Jun 2019 14:02:02 -0500 Subject: [ExI] Red and green qualia In-Reply-To: References: <009501d52e9d$c5ec5280$51c4f780$@rainier66.com> Message-ID: Happy Retirement to you, John Clark! Just don't wither away. There is a stronger correlation between the age of retirement and the age of mortality than there is between age and mortality.
bill w On Sat, Jun 29, 2019 at 12:34 PM John Clark wrote: > On Sat, Jun 29, 2019 at 1:15 PM wrote: > > > *John where did you work?* >> > > My last job was in the engineering department at a TV station, and my last > day is today; in fact I'm heading for my car right now for > the traffic-filled 32-mile commute to work for the very last time. That > commute was no fun, it was the main reason I decided to call it quits. > > John K Clark > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sat Jun 29 19:43:51 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 29 Jun 2019 13:43:51 -0600 Subject: [ExI] Red and green qualia In-Reply-To: References: <009501d52e9d$c5ec5280$51c4f780$@rainier66.com> Message-ID: Congratulations, John. I retired last year. Sooo much fun working on publishing papers on consciousness, attending conferences on consciousness and Ethereum, contributing to the Ethereum community, taking family vacations, remodeling our house, working on Canonizer, shopping for new cars... What are you going to do now? On Sat, Jun 29, 2019 at 1:04 PM William Flynn Wallace wrote: > Happy Retirement to you, John Clark! Just don't wither away. There is a > stronger correlation between the age of retirement and the age of mortality > than there is between age and mortality. > > bill w > > On Sat, Jun 29, 2019 at 12:34 PM John Clark wrote: > >> On Sat, Jun 29, 2019 at 1:15 PM wrote: >> >> > *John where did you work?* >>> >> >> My last job was in the engineering department at a TV station, and my >> last day is today; in fact I'm heading for my car right now for >> the traffic-filled 32-mile commute to work for the very last time. That >> commute was no fun, it was the main reason I decided to call it quits.
>> >> John K Clark >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Jun 29 21:01:27 2019 From: johnkclark at gmail.com (John Clark) Date: Sat, 29 Jun 2019 17:01:27 -0400 Subject: [ExI] Red and green qualia In-Reply-To: References: <009501d52e9d$c5ec5280$51c4f780$@rainier66.com> Message-ID: On Sat, Jun 29, 2019 at 3:47 PM Brent Allsop wrote: > > What are you going to do now? > That is a good question. I'm still healthy (haven't called in sick in 20 years) and have enough money to do pretty much anything I want. Now I just have to figure out what I want. I'm not too worried, I don't bore easily and I can usually find ways to amuse myself. By the way, as I write this my friends at work are giving me a very nice retirement party. John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Jun 29 22:46:22 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 29 Jun 2019 15:46:22 -0700 Subject: [ExI] Red and green qualia In-Reply-To: References: Message-ID: <003001d52ecc$79cb7280$6d625780$@rainier66.com> From: extropy-chat On Behalf Of John Clark Subject: Re: [ExI] Red and green qualia On Sat, Jun 29, 2019 at 11:09 AM Dylan Distasio > wrote: >> Congrats on the retirement! >...Thanks. >>Do you do any related hobbyist work? >...Nah, reading is my only hobby. John K Clark OK well, do let BillW's tragic story be a warning. He too retired, and found himself with far too much time on his hands.
It started out as just having a little fun, but got worse and still worse until he was found in the gutter, reading entire books about... cod. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Jun 29 23:59:37 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 29 Jun 2019 18:59:37 -0500 Subject: [ExI] Red and green qualia In-Reply-To: <003001d52ecc$79cb7280$6d625780$@rainier66.com> References: <003001d52ecc$79cb7280$6d625780$@rainier66.com> Message-ID: OK well, do let BillW's tragic story be a warning. He too retired, and found himself with far too much time on his hands. It started out as just having a little fun, but got worse and still worse until he was found in the gutter, reading entire books about... cod. spike Guilty. I wonder how many books Spike has on bees...... Just finished Thomas Sowell's 'A Conflict of Visions', revised version (did not take sides). He uses the terms 'constrained' and 'unconstrained'. I have not seen those before. You? Parents - when my kids were growing up Time-Life had two series: Nature, and Science. I read each one in both series before passing them along to the kids (that's where I read a whole book on water). Great stuff. Makes me wonder just what is out there in terms of books about science and nature for kids - say, 6 to 12. Or has the web killed those? bill w On Sat, Jun 29, 2019 at 5:49 PM wrote: > > > > > *From:* extropy-chat *On Behalf > Of *John Clark > *Subject:* Re: [ExI] Red and green qualia > > > > On Sat, Jun 29, 2019 at 11:09 AM Dylan Distasio > wrote: > > > > >> *Congrats on the retirement! * > > > > >...Thanks. > > > > *>>**Do you do any related hobbyist work?* > > > > >...Nah, reading is my only hobby. John K Clark > > > > > > > > OK well, do let BillW's tragic story be a warning. He too retired, and > found himself with far too much time on his hands.
It started out as just > having a little fun, but got worse and still worse until he was found in > the gutter, reading entire books about... cod. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Jun 30 01:55:08 2019 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 29 Jun 2019 18:55:08 -0700 Subject: [ExI] ai emotions Message-ID: <20190629185508.Horde.wCel5hmNVMhMU75jcjfPNJZ@secure199.inmotionhosting.com> Quoting Brent Allsop: > There are "weak", "stronger" and "strongest" forms predicting how we will > be able to eff the ineffable nature of the physical quality of the redness > someone can directly experience to other people in this "Objectively, We > are Blind to Physical Qualities > " > paper. Your paper references Jack Gallant's work, but what you call "effing" technology is more popularly called "mind-reading technology". You should see what they have accomplished with fMRI and deep-learning algorithms these days. One of the pioneers in the field is now able to use your EEG(!) fed into a deep learning neural network to reconstruct the faces you are seeing during the experiment. http://www.eneuro.org/content/5/1/ENEURO.0358-17.2018/tab-figures-data > You are basically making the falsifiable prediction that consciousness or > qualia arise from mathematics or functionality. This kind of functionalism > is currently leading in supporting sub camps to representational qualia > theory, there being multiple functionalists' sub camps, with more > supporters than the materialist sub camps. So the question now becomes: can an algorithm reconstruct your qualia from your brain-wave data without itself experiencing them?
> So, let's take a simplistic falsifiable mathematical theory as an example, > the way we use glutamate as a simplified falsifiable materialist example. > Say if you predict that it is the square root of 9 that has a redness > quality and you predict that it is the square root of 16 that has a > greenness quality. In other words, this could be verified if no > experimentalists could produce a redness, without doing that particular > necessary and sufficient mathematical function that was the square root of > 9. > But, if the prediction that it is glutamate that has the redness physical > quality can't be falsified, and nobody is ever able to reproduce a > redness experience (no matter what kind of mathematics you do) without > physical glutamate, this would falsify functionalist and mathematical > theories of qualia or consciousness. If hooking EEG electrodes to your head allows a machine to show me red whenever you are looking at red, then which does that falsify? Stuart LaForge From spike at rainier66.com Sun Jun 30 05:36:41 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 29 Jun 2019 22:36:41 -0700 Subject: [ExI] Red and green qualia In-Reply-To: References: <003001d52ecc$79cb7280$6d625780$@rainier66.com> Message-ID: <004201d52f05$cc3fc190$64bf44b0$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace Subject: Re: [ExI] Red and green qualia >>...OK well, do let BillW's tragic story be a warning. He too retired, and found himself with far too much time on his hands. It started out as just having a little fun, but got worse and still worse until he was found in the gutter, reading entire books about... cod. spike >...Guilty. I wonder how many books Spike has on bees...... bill w Well sure, but I have an excuse: friends give me those. My favorite beast book isn't on either cod or bees however. It's on ants. It's the canonical tome by Bert Holldobler and E.O. Wilson, with the unsurprising title The Ants.
That one is just wicked cool. Ants are really cool beasts. One can experiment on them without getting wet or raising the ire of the PETA folks (they refuse to stick up for bugs (but other than that, they are a generally commendable group of people.)) I did some fun stuff with ants before my neighbor hired a mystery guy named Jose who claimed he could rid his house completely of ants without even coming inside, for 100 bucks, paid in cash, no questions asked including any identification past "Jose." So... he hired Jose, who showed up with some mystery chemical which I think was probably dimethyl chlordane, sprinkled around my neighbor's house, had some of the bootleg chemical remaining, so he came over and put some around my house too. Whatever probably-illegal chemical he used has kept both homes antless for the past dozen years. So... no more experiments with ants unless I go down the street. Now if I get lymphoma, I have some idea of who slew me: Jose. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Jun 30 17:33:12 2019 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 30 Jun 2019 12:33:12 -0500 Subject: [ExI] Red and green qualia In-Reply-To: <004201d52f05$cc3fc190$64bf44b0$@rainier66.com> References: <003001d52ecc$79cb7280$6d625780$@rainier66.com> <004201d52f05$cc3fc190$64bf44b0$@rainier66.com> Message-ID: the canonical tome by Bert Holldobler and E.O. Wilson, with the unsurprising title The Ants. spike So you were just blown away by the canon. bill w On Sun, Jun 30, 2019 at 12:40 AM wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace > *Subject:* Re: [ExI] Red and green qualia > > > > >>...OK well, do let BillW's tragic story be a warning. He too retired, > and found himself with far too much time on his hands. It started out as > just having a little fun, but got worse and still worse until he was found > in the gutter, reading entire books about... cod.
spike > > > > >...Guilty. I wonder how many books Spike has on bees...... bill w > > > > Well sure, but I have an excuse: friends give me those. > > My favorite beast book isn't on either cod or bees however. It's on > ants. It's the canonical tome by Bert Holldobler and E.O. Wilson, with the > unsurprising title The Ants. > > That one is just wicked cool. Ants are really cool beasts. One can > experiment on them without getting wet or raising the ire of the PETA folks > (they refuse to stick up for bugs (but other than that, they are a > generally commendable group of people.)) > > I did some fun stuff with ants before my neighbor hired a mystery guy > named Jose who claimed he could rid his house completely of ants without > even coming inside, for 100 bucks, paid in cash, no questions asked > including any identification past "Jose." So... he hired Jose, who showed up > with some mystery chemical which I think was probably dimethyl chlordane, > sprinkled around my neighbor's house, had some of the bootleg chemical > remaining, so he came over and put some around my house too. Whatever > probably-illegal chemical he used has kept both homes antless for the past > dozen years. > > So... no more experiments with ants unless I go down the street. > > Now if I get lymphoma, I have some idea of who slew me: Jose. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Jun 30 22:09:11 2019 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 30 Jun 2019 15:09:11 -0700 Subject: [ExI] Red and green qualia In-Reply-To: References: <003001d52ecc$79cb7280$6d625780$@rainier66.com> <004201d52f05$cc3fc190$64bf44b0$@rainier66.com> Message-ID: <003501d52f90$72a24190$57e6c4b0$@rainier66.com> Heh.
Wordplay is not only allowed on ExI, it is encouraged. spike From: extropy-chat On Behalf Of William Flynn Wallace Sent: Sunday, June 30, 2019 10:33 AM To: ExI chat list Subject: Re: [ExI] Red and green qualia the canonical tome by Bert Holldobler and E.O. Wilson, with the unsurprising title The Ants. spike So you were just blown away by the canon. bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sun Jun 30 22:21:33 2019 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 30 Jun 2019 16:21:33 -0600 Subject: [ExI] ai emotions In-Reply-To: <20190629185508.Horde.wCel5hmNVMhMU75jcjfPNJZ@secure199.inmotionhosting.com> References: <20190629185508.Horde.wCel5hmNVMhMU75jcjfPNJZ@secure199.inmotionhosting.com> Message-ID: Hi Stuart, Once you figure out what "qualia blindness" means, you will look back on these conversations and, like all people who now comprehend qualia blindness (including some on this list), you will wonder how you could have missed what should be obvious, for so long. At least you are still persisting. Many people give up before they get this far. In order to not be qualia blind, you need to use more than just one word "red" when talking about the perception of color and mind reading. If you only have one word for "red" you can't model when someone is representing red information with something physically different, like your greenness. Obviously, both Gallant and Nemrodov et al. are doing mind reading. What you are completely missing is how both of these guys and everyone doing this kind of mind reading is doing it in a qualia blind way. The spatiotemporal EEG information they are getting is just abstract information, completely devoid of any color quality information. In order to display mind-read colors on the screen, from the abstract data, they need some additional information to tell them when to display what color.
If they are qualitatively interpreting the data at all (Gallant does this, displaying colored images; Nemrodov isn't - he displays no color in the resulting face recognition images) they are doing it in a way that blinds them to any physical qualitative differences they may be detecting. Jack Gallant uses the color map in the movie he shows to know how to qualitatively interpret his spatiotemporal EEG information, which is effectively interpreting it according to the properties of the initial cause of perception (the physical properties of the strawberry out there), not the physical qualities of what they are observing (knowledge of the strawberry, in the brain). Their deep learning neural network algorithms have unique models for each person. These models "correct" for any physical differences they detect in individual brains, so they only see "red", when in reality they may be detecting greenness, and correcting for this difference makes their mind reading qualia blind. You obviously haven't yet read the "Objectively, We are Blind to Physical Qualities" paper which describes exactly this in more detail. On Sat, Jun 29, 2019 at 7:57 PM Stuart LaForge wrote: > > Quoting Brent Allsop: > > > > There are "weak", "stronger" and "strongest" forms predicting how we will > > be able to eff the ineffable nature of the physical quality of the > redness > > someone can directly experience to other people in this "Objectively, We > > are Blind to Physical Qualities > > < > https://docs.google.com/document/d/1uWUm3LzWVlY0ao5D9BFg4EQXGSopVDGPi-lVtCoJzzM/edit?usp=sharing > >" > > paper. > > Your paper references Jack Gallant's work, but what you call "effing" > technology is more popularly called "mind-reading technology". You > should see what they have accomplished with fMRI and deep-learning > algorithms these days. One of the pioneers in the field is now able to > use your EEG(!)
fed into a deep learning neural network to reconstruct > the faces you are seeing during the experiment. > > http://www.eneuro.org/content/5/1/ENEURO.0358-17.2018/tab-figures-data > > > You are basically making the falsifiable prediction that consciousness or > > qualia arise from mathematics or functionality. This kind of > functionalism > > is currently leading in supporting sub camps to representational qualia > > theory, there being multiple functionalists' sub camps, with more > > supporters than the materialist sub camps. > > So the question now becomes can an algorithm reconstruct your qualia > from your brain-wave data without itself experiencing them? > > > So, let's take a simplistic falsifiable mathematical theory as an > example, > > the way we use glutamate as a simplified falsifiable materialist example. > > Say if you predict that it is the square root of 9 that has a redness > > quality and you predict that it is the square root of 16 that has a > > greenness quality. In other words, this could be verified if no > > experimentalists could produce a redness, without doing that particular > > necessary and sufficient mathematical function that was the square root > of > > 9. > > But, if the prediction that it is glutamate that has the redness physical > > quality can't be falsified, and nobody is ever able to reproduce a > > redness experience (no matter what kind of mathematics you do) without > > physical glutamate, this would falsify functionalist and mathematical > > theories of qualia or consciousness. > > If hooking EEG electrodes to your head allows a machine to show me red > whenever you are looking at red, then which does that falsify? > > Stuart LaForge > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL:
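[Archive editor's note: the per-subject "correction" argument in this thread can be made concrete with a small toy sketch. This is my own construction, not taken from Gallant's or Nemrodov's actual pipelines; the signal values and labels are invented for illustration. Two subjects use physically swapped internal signals for red and green, yet a decoder trained separately on each subject's own data reports "red" for both, so the physical difference never reaches the experimenter's readout.]

```python
# Toy sketch of a "qualia blind" per-subject decoder (hypothetical data,
# not from any cited paper). Subject A's internal signal for red is 0;
# subject B's is 1 -- their red/green codes are swapped.
internal_code = {
    "A": {"red": 0, "green": 1},
    "B": {"red": 1, "green": 0},
}

def train_decoder(subject):
    """Build a per-subject decoder from that subject's own
    (signal, label) pairs, inverting the signal -> colour mapping."""
    return {sig: colour for colour, sig in internal_code[subject].items()}

decoders = {s: train_decoder(s) for s in internal_code}

# Both subjects view a red stimulus. Each decoder outputs "red",
# even though the underlying signals differ (0 vs 1), so the swap
# is invisible in the decoded output.
for s in ("A", "B"):
    sig = internal_code[s]["red"]
    assert decoders[s][sig] == "red"
```

The point of the sketch is only that a model fit per subject maps whatever signal that subject uses onto the shared label, which is exactly the "correction" the thread describes.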