From spike at rainier66.com  Sun May  1 01:07:45 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sat, 30 Apr 2022 18:07:45 -0700
Subject: [ExI] gaming the system
In-Reply-To: <623C6678-D315-414A-97CF-E9AD02A73714@gmail.com>
References: <001901d85c99$1443f7e0$3ccbe7a0$@rainier66.com> <623C6678-D315-414A-97CF-E9AD02A73714@gmail.com>
Message-ID: <004101d85cf7$de98e830$9bcab890$@rainier66.com>

From: extropy-chat On Behalf Of Dan TheBookMan via extropy-chat
Sent: Saturday, 30 April, 2022 3:59 PM
To: ExI chat list
Cc: Dan TheBookMan
Subject: Re: [ExI] gaming the system

>...Instead of gaming the system as in making edgy content, maybe think of gaming the system as in making spam that's ever harder to filter out. In which case, transparent filtering algorithms make the spammer's job easier, no? Regards, Dan

We don't know, Dan. But we get to see, ja? And since that software will be public domain, we can use a copy of it to do the moderation on ExI, ja?

What I really want to know is how Twitter was filtered before the purchase, who was filtered, what exactly was filtered, etc. It isn't often we get to see if a conspiracy theory is true or false. If Musk manages to purchase Twitter, we get to see if all those accusations of shadow banning were true or if they were false. To me that will be worth 44 billion of Elon Musk's money. I will feel like I got my money's worth out of Elon's money. Will you?

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rafal.smigrodzki at gmail.com  Sun May  1 03:38:39 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Sat, 30 Apr 2022 23:38:39 -0400
Subject: [ExI] ok this explains it
In-Reply-To:
References: <012a01d85cc1$cf1e1b90$6d5a52b0$@rainier66.com> <003d01d85ccd$dd5ed3a0$981c7ae0$@rainier66.com> <007501d85cd2$2e517ed0$8af47c70$@rainier66.com>
Message-ID:

On Sat, Apr 30, 2022 at 4:46 PM William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> There have to be content moderators. How else could they stop posts that the algorithms can't figure out?

### No, there is no need for company-employed content moderators. If somebody wishes not to be exposed to some type of content, he can ask other people to filter available content for him.

For example, Twitter could give users the option to see only posts from users they follow. By choosing who you follow you are in fact outsourcing the task of content filtering to them. Every user who has followers would act as a voluntary content moderator for them. Since the choice of who to follow is an individual one, this system would assure individually customized content filtering for everybody, without Twitter spending a dime.

So, no, no honest person needs content moderators on Twitter. Only psychopaths who want to steal elections, like the last presidential election, want to force their own "moderators" on the rest of us.

Rafal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rafal.smigrodzki at gmail.com  Sun May  1 04:01:03 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Sun, 1 May 2022 00:01:03 -0400
Subject: [ExI] Atlas Shrugged
In-Reply-To: <011501d85c38$2e3db6b0$8ab92410$@rainier66.com>
References: <8435DD35-78F1-4642-A8FB-3A509D01AA00@hxcore.ol> <001901d85b89$d3293550$797b9ff0$@rainier66.com> <011501d85c38$2e3db6b0$8ab92410$@rainier66.com>
Message-ID:

On Fri, Apr 29, 2022 at 10:15 PM wrote:

> No. True Marxism is impossible.
### Yes, true Marxism is possible and it has been done a bunch of times. Marxism is what Marxists do, not what they say. Once you have Marxists in power, and it did happen a number of times, the outcome *is* Marxism, that is, the consequence of applying Marxist principles (dictatorship of the proletariat by the vanguard of the proletariat, abolition of private ownership of means of production, etc.) in real life. To say "What happens after Marxist principles are used is not true Marxism" is a form of the "no true Scotsman" fallacy.

Marxists and other mountebanks often tell a tall tale of Utopia ("true communism") and great goodness ("million dollars") that you are sure to reach if you follow their bullshit advice ("give us totalitarian power over the society", "send me $1000 to unlock the bank account of Miss Ulala Akinyere, widow of Mr Akinyere of the Big Bank of Nigeria"). Only stupid people fall for it.

Unfortunately, there is a large supply of stupid people.

Rafal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From avant at sollegro.com  Sun May  1 04:14:31 2022
From: avant at sollegro.com (Stuart LaForge)
Date: Sat, 30 Apr 2022 21:14:31 -0700
Subject: [ExI] Fwd: Is Artificial Life Conscious?
In-Reply-To:
Message-ID: <20220430211431.Horde.C5IhPuPfWtwqMoSEkaT-tb5@sollegro.com>

Quoting Rafal Smigrodzki:

> Message: 6
> Date: Fri, 29 Apr 2022 01:49:04 -0400
> From: Rafal Smigrodzki
> To: ExI chat list
> Subject: Re: [ExI] Fwd: Is Artificial Life Conscious?
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, Apr 28, 2022 at 11:02 PM Stuart LaForge via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>> The biggest problem that I have with the EM field mediating intelligence or consciousness is that my own studies into the mathematics of neural networks indicate that learning is mediated by multiplication of high-dimensional tensors divided into "layers", and the more layers, the deeper the AI. EM fields are governed by quantum mechanics and therefore linear and subject to an unlimited additive property. That is to say that any number of wave functions may be added together to make a new wave function.
>>
>> On the other hand, the most important element of the neural networks are neurons that are characterized by non-linear activation functions. The necessity of the non-linearity of the activation function is that if neurons have a linear activation function, they cannot be organized into separate "layers" and instead all act as one giant layer, and a single layer is quite stupid no matter how large. Therefore the sum of quantum fields or other waves do seem mathematically able to exhibit observable intelligence, whereas the sum of non-linear neurons do.
>
> ### Yes, absolutely. (I think you meant "the sum of quantum fields or other waves do NOT seem mathematically able to exhibit observable intelligence").
----------------

Yes, precisely. Thanks for catching that, Rafal. Waves are linear systems and obey the superposition principle of homogeneity and additivity. The most complex waves are precisely the sum of their parts, nothing more. Non-linear systems, however, are more than the sum of their parts due to the non-linearity of those parts, resulting in emergent phenomena such as intelligence, consciousness, or even chaos.
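To make the layer-collapse point concrete, here is a minimal numpy sketch (toy dimensions, purely illustrative): any stack of linear layers is exactly equivalent to a single linear layer, while one non-linear activation between them breaks that equivalence, which is what makes depth meaningful.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # toy input vector
W1 = rng.normal(size=(5, 4))  # "layer 1" weights
W2 = rng.normal(size=(3, 5))  # "layer 2" weights

# Two stacked linear layers...
stacked = W2 @ (W1 @ x)
# ...are identical to one linear layer whose weights are W2 @ W1:
collapsed = (W2 @ W1) @ x
print(np.allclose(stacked, collapsed))   # True: depth bought nothing

# A non-linear activation (here ReLU) between the layers blocks the collapse:
relu = lambda v: np.maximum(v, 0.0)
deep = W2 @ relu(W1 @ x)
print(np.allclose(deep, collapsed))      # False in general: depth now matters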
> I think we can both agree on the following:
>
> Synaptic organization and transmission via ions and neurotransmitters that involve sparse synaptic connections between non-linearly activated neurons is how the brain implements complex recursive mathematical function on tensor-space, which enables intelligence, and which, one might hypothesize, has something to do with consciousness.
>
> You and I just pointed to different levels of this integrated process.

I agree our viewpoints are compatible. But I am suggesting that intelligence, and presumably consciousness, is an abstraction separable from the physical substrate. It is a somewhat dualist approach in that regard.

Stuart LaForge

From avant at sollegro.com  Sun May  1 04:35:33 2022
From: avant at sollegro.com (Stuart LaForge)
Date: Sat, 30 Apr 2022 21:35:33 -0700
Subject: [ExI] new neuron learning theory
Message-ID: <20220430213533.Horde.4PZI9bv1pWadHjjA_Y_pdcQ@sollegro.com>

Quoting Darin Sunley:

> Perceptron-style "neurons" were a simplified caricature of how neurologists thought neurons /might/ work back in the 70s, even when they were first implemented.

Yes, ML neurons are very simple compared to real neurons. But neither the moon nor the earth is a point mass, yet mathematical models which assumed they were point masses allowed us to send people to the moon and back.

> Time and neurological research hasn't been kind to the comparison.
>
> At this point, the only similarity between the basic elements of network-based machine learning algorithms and mammalian brain cells is the name. ML "neurons" are basically pure mathematical abstractions, completely unmoored from anything biological cells actually do.

As I have stated elsewhere, one key similarity between ML neurons and biological neurons is that both are non-linear with respect to inputs and outputs. Also, both are found in networks with their peers where each assumes unique parameters.

Stuart LaForge

From rafal.smigrodzki at gmail.com  Sun May  1 09:14:05 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Sun, 1 May 2022 05:14:05 -0400
Subject: [ExI] The Shape of Twitter
Message-ID:

The pattern of connections between Twitter accounts that describes who is following whom, antagonistic vs. friendly relationships, and primary focus or theme is a complex multidimensional shape that reflects the underlying social, intellectual and political relationships between users. The pattern can be described in an algorithmic fashion and it contains a huge amount of information that is valuable to users. Users could use this shape to choose whom to interact with, to join groups and cliques, and to shape various aspects of their engagement with the network, such as finding themes of interest, keeping abreast of common interests and emerging trends, and avoiding content that the individual finds objectionable while still being able to detect the existence and reach of such content (i.e. knowing one's enemies).
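As a sketch of what "described in an algorithmic fashion" might mean in practice, the follow graph can be loaded into a standard graph library, clustered into affinity groups, and projected to 2-D for display. The follow data and names below are made up; this is an illustration, not Twitter's actual pipeline.

import networkx as nx

# Hypothetical follow data: (follower, followed) pairs.
follows = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
           ("dave", "erin"), ("erin", "dave"), ("alice", "erin")]

G = nx.DiGraph()
G.add_edges_from(follows)

# Cluster accounts into cliques/affinity groups by modularity.
groups = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())

# Assign each account 2-D coordinates for a visualization layer.
layout = nx.spring_layout(G, seed=42)

for i, group in enumerate(groups):
    print(f"group {i}: {sorted(group)}")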
If I were to advise Twitter's owners, I would suggest the following:

- Create a visualization of the pattern of connections between accounts that would be easy to use and would show many dimensions (thematic, political, affinity, ethnic)
- Enable individually controlled content filtering, based on one's preferences informed by the visualization
- Make it impossible to delete your own content, except if the user can show he is acting under a US court order to delete that content
- Make it impossible to hide your own content from any other user
- Allow only one named account per verified human user
- Never deny an account to a verified human user, regardless of his previous history of account use, such as history of posting content condemned by other users
- Allow all users to access all posts at will, regardless of the content of the posts, including posts that may contain illegal content, until specifically taken down by a US court order
- Allow one anonymous account per verified human user
- Mark accounts owned by verified institutions (corporations and other legal persons in the US)
- Make all posts open source, not protected by copyright
- Make it easy for all users to scrape Twitter content and repost on other sites
- Use both automated and manual censorship on all unverified accounts to prevent illegal or merely annoying content
- Provide a wide array of preformed, explicit and implicit filtering options to suit the needs of those who do not wish to directly control their own filtering algorithm

This Twitter would allow all users to shape their experience, avoid bots (by choosing to pre-emptively block all non-verified accounts), while maintaining accountability to other users. This system would also export enforcement of the law to the courts, rather than internalize it in Twitter, such that Twitter would act as a mere conduit for the users' speech.

Of course, this is likely to be illegal in the US under many laws. Twitter's content database might have to be split into the legal tweets and the illegal tweets, with the latter being hosted offshore, perhaps in space and under a separate corporate entity but still accessible to all users.

This is what it would take to make Twitter the arena for absolutely free speech.

Rafal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Sun May  1 13:30:45 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 06:30:45 -0700
Subject: [ExI] ok this explains it
In-Reply-To:
References: <012a01d85cc1$cf1e1b90$6d5a52b0$@rainier66.com> <003d01d85ccd$dd5ed3a0$981c7ae0$@rainier66.com> <007501d85cd2$2e517ed0$8af47c70$@rainier66.com>
Message-ID: <006001d85d5f$aa2a8820$fe7f9860$@rainier66.com>

...> On Behalf Of Rafal Smigrodzki via extropy-chat
Subject: Re: [ExI] ok this explains it

On Sat, Apr 30, 2022 at 4:46 PM William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> There have to be content moderators. How else could they stop posts that the algorithms can't figure out?

### No, there is no need for company-employed content moderators. If somebody wishes not to be exposed to some type of content, he can ask other people to filter available content for him. For example, Twitter could give users the option to see only posts from users they follow. ...Rafal

Hey cool idea! Twitter could make a slider bar where the user sets the desired filtering level. There aughta be plenty of money available for Twitter to put some code jockeys on that task after the human moderators are no longer needed.
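A minimal sketch of what such a slider might drive, assuming (hypothetically) that every post already carries a machine-generated 0-to-1 "edginess" score; the field and names are invented for illustration:

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    edginess: float  # hypothetical 0.0-1.0 score from some public classifier

def visible_posts(posts, slider):
    """Keep only posts at or below the user's chosen tolerance.

    slider=0.0 hides anything remotely edgy; slider=1.0 shows everything.
    """
    return [p for p in posts if p.edginess <= slider]

feed = [Post("a", "kittens!", 0.05),
        Post("b", "mild hot take", 0.40),
        Post("c", "flamebait", 0.90)]
print([p.text for p in visible_posts(feed, slider=0.5)])
# -> ['kittens!', 'mild hot take']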
The user would find their comfort zone on a two-axis political map, perhaps identify coordinates, then set a tolerance radius for ideas, or sketch the boundaries of their safe space. Then the software would filter away anything too dangerous for them to be exposed to! Rafal, this is brilliant as hell me lad. We could keep people safe online and make a buttload of money doing it.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Sun May  1 13:57:44 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 06:57:44 -0700
Subject: [ExI] dang they caught up with us
Message-ID: <006b01d85d63$6f5ad980$4e108c80$@rainier66.com>

I thought we were really hip with that discussion of creating a virtual mate then marrying her. I am already married, and my bride wouldn't like it if I had a virtual mistress, so I didn't, and another reason is that I don't actually know how anyway, but a Japanese guy actually did it:

https://www.mirror.co.uk/news/weird-news/fictosexual-man-who-married-hologram-26794504

It's soooo annoying when my favorite weird ideas are actually done by ordinary weird people.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com  Sun May  1 14:29:39 2022
From: pharos at gmail.com (BillK)
Date: Sun, 1 May 2022 15:29:39 +0100
Subject: Re: [ExI] ok this explains it
In-Reply-To: <006001d85d5f$aa2a8820$fe7f9860$@rainier66.com>
References: <012a01d85cc1$cf1e1b90$6d5a52b0$@rainier66.com> <003d01d85ccd$dd5ed3a0$981c7ae0$@rainier66.com> <007501d85cd2$2e517ed0$8af47c70$@rainier66.com> <006001d85d5f$aa2a8820$fe7f9860$@rainier66.com>
Message-ID:

On Sun, 1 May 2022 at 14:33, spike jones via extropy-chat wrote:
> Hey cool idea! Twitter could make a slider bar where the user sets the desired filtering level. There aughta be plenty of money available for Twitter to put some code jockeys on that task after the human moderators are no longer needed.
>
> The user would find their comfort zone on a two-axis political map, perhaps identify coordinates, then set a tolerance radius for ideas, or sketch the boundaries of their safe space. Then the software would filter away anything too dangerous for them to be exposed to! Rafal, this is brilliant as hell me lad. We could keep people safe online and make a buttload of money doing it.
>
> spike
> _______________________________________________

See:
Quote:
"By outsourcing our jobs, Facebook implies that the 35,000 of us who work in moderation are somehow peripheral to social media," the letter read.
----------
So Facebook probably has about 35,000 moderators and they still can't keep up with the horrors being posted. (Not all these moderators are Facebook employees. Facebook also subcontracts moderation to companies in countries where wages are low).

See:
Quote:
While Twitter has never formally said how many human moderators it uses, a 2020 report by New York business school NYU Stern suggested that it had about 1,500 to cope with the 199 million daily Twitter users worldwide.
-------------
But Twitter flags far fewer bad tweets than Facebook flags bad posts. Twitter has a policy to generally not moderate tweet content. They suggest users block or unfollow offensive users.
---------
BillK

From spike at rainier66.com  Sun May  1 15:40:26 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 08:40:26 -0700
Subject: [ExI] ok this explains it
In-Reply-To:
References: <012a01d85cc1$cf1e1b90$6d5a52b0$@rainier66.com> <003d01d85ccd$dd5ed3a0$981c7ae0$@rainier66.com> <007501d85cd2$2e517ed0$8af47c70$@rainier66.com> <006001d85d5f$aa2a8820$fe7f9860$@rainier66.com>
Message-ID: <000801d85d71$c7e3c3b0$57ab4b10$@rainier66.com>

...> On Behalf Of BillK via extropy-chat
Sent: Sunday, 1 May, 2022 7:30 AM
Subject: Re: [ExI] ok this explains it

On Sun, 1 May 2022 at 14:33, spike jones via extropy-chat wrote:
> Hey cool idea! Twitter could make a slider bar where the user sets the desired filtering level. ...
> spike
> _______________________________________________

See:
Quote:
"By outsourcing our jobs, Facebook implies that the 35,000 of us who work in moderation are somehow peripheral to social media," the letter read.
----------
So Facebook probably has about 35,000 moderators and they still can't keep up with the horrors being posted...
-------------
>...But Twitter flags far fewer bad tweets than Facebook flags bad posts. Twitter has a policy to generally not moderate tweet content. They suggest users block or unfollow offensive users.
---------
BillK
_______________________________________________

Whoa! HOLeh schlMOLeh! 35k human moderators, sheesh, of course there is an enormous cost savings available there. Cost for a human is at best about 1k/week so Twitter is bleeding 35 million bucks a week just on something I think can be done by software. Thanks for that BillK.

This demonstrates the reason why Musk succeeds at everything he does: he finds obvious cost savings and rebuilds a segment of the industry using those savings. A perfect example of that is in SpaceX, where he intentionally chose subcontractors for his rockets which are all located nearby, making it far more practical and cost-competitive. The DoD and NASA intentionally chose far-flung widely dispersed subcontractors (for some perfectly-justifiable reasons) which always made it a pain in the ass to visit subcontractors. The subcontract tech leads spent waaaay too much time travelling. SpaceX chooses local subcontractors, so a prole can just drive around and do her in-person meetings with her companies. The savings in that alone are huge.

Regarding Twitter, all those human moderators must justify their existence somehow, so they flag a lotta stuff. If the software is public, a prole can test run her post to see if it will get flagged or filtered, work it until it is acceptable, then post. A coupla hundred million Twitter users will turn into a coupla billion Twitter users, which woulda overwhelmed the 35k human moderators anyway.

spike

From foozler83 at gmail.com  Sun May  1 15:40:28 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Sun, 1 May 2022 10:40:28 -0500
Subject: [ExI] Atlas Shrugged
In-Reply-To:
References: <8435DD35-78F1-4642-A8FB-3A509D01AA00@hxcore.ol> <001901d85b89$d3293550$797b9ff0$@rainier66.com> <011501d85c38$2e3db6b0$8ab92410$@rainier66.com>
Message-ID:

Only stupid people fall for it. Unfortunately, there is a large supply of stupid people.  Rafal

What gets me, Rafal, is that the would-be dictators and rabble rousers should be pointing to us as an example of how to make an economy work, and yet all the examples of Marxism that they can give their followers are economic failures. Maybe "Everything belongs to everybody" is a telling point.
I don't listen to such people and don't know what they are saying. Anyone have an idea?   bill w

On Sat, Apr 30, 2022 at 11:03 PM Rafal Smigrodzki via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> [snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com  Sun May  1 15:51:04 2022
From: sparge at gmail.com (Dave S)
Date: Sun, 1 May 2022 11:51:04 -0400
Subject: [ExI] Turing test was passed in 1989
In-Reply-To:
References:
Message-ID:

On Thu, Apr 28, 2022 at 4:02 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> What an abusive chatbot teaches us about the art of conversation
> Tim Harford 28th April, 2022
> <https://timharford.com/2022/04/what-an-abusive-chatbot-teaches-us-about-the-art-of-conversation/>
>
> Quotes:
> But while MGonz was abusive, it was not a troll - it was a simple chatbot programmed by UCD undergrad Mark Humphrys that was left to lurk online while Humphrys went to the pub. The next day, Humphrys reviewed the chat logs in astonishment. His MGonz chatbot had passed the Turing test.

That's not one of the Turing test variations I'm familiar with. See https://en.wikipedia.org/wiki/Turing_test

-Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Sun May  1 15:55:50 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 08:55:50 -0700
Subject: [ExI] airtag amateur crimefighters
Message-ID: <001601d85d73$eed3cdb0$cc7b6910$@rainier66.com>

Oy vey I am sooo not hip, mercy. News story about a family who claims a stalker put an Apple airtag in or on their clothing somehow, then used it to stalk them at Disneyland. I knew there were RFID tags, I have one, but I didn't know they have a range of a coupla hundred meters. That's what Apple is claiming. They only cost about 30 bucks, cool.

Idea: we create a network of amateur crime fighters, volunteers. We sell the proles these trackers, prole hides it somewhere in her car. She drives to San Francisco, parks, within minutes car is stolen. She calls me, I post the RFID of the stolen wheels, volunteers everywhere start pinging, track the car, catch the bastard! San Francisco DA lets him go.
OK well, the last part of that idea needs work, but in the meantime, at least we get the fun of catching the bastard.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com  Sun May  1 16:09:09 2022
From: sparge at gmail.com (Dave S)
Date: Sun, 1 May 2022 12:09:09 -0400
Subject: [ExI] bee having fun
In-Reply-To: <001b01d85c11$a9a48300$fced8900$@rainier66.com>
References: <000f01d85be1$0014c500$003e4f00$@rainier66.com> <004a01d85c03$050ff9e0$0f2feda0$@rainier66.com> <007201d85c08$c3035960$490a0c20$@rainier66.com> <001b01d85c11$a9a48300$fced8900$@rainier66.com>
Message-ID:

On Fri, Apr 29, 2022 at 5:42 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Is there a legitimate reason, or even a logical illegitimate reason for stopping a guy from buying social media in order to make its filtering algorithm public?

I think it's naive to assume that making the source code public will reveal how decisions are made. It's highly likely that machine learning is involved. If that's the case, then bias in filtering can be a result of bias in the training data. I doubt that the training data will be available or its biases understood any time soon.

-Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Sun May  1 16:26:44 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 09:26:44 -0700
Subject: [ExI] bee having fun
In-Reply-To:
References: <000f01d85be1$0014c500$003e4f00$@rainier66.com> <004a01d85c03$050ff9e0$0f2feda0$@rainier66.com> <007201d85c08$c3035960$490a0c20$@rainier66.com> <001b01d85c11$a9a48300$fced8900$@rainier66.com>
Message-ID: <003801d85d78$3f93d020$bebb7060$@rainier66.com>

...> On Behalf Of Dave S via extropy-chat
Subject: Re: [ExI] bee having fun

On Fri, Apr 29, 2022 at 5:42 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

>>...Is there a legitimate reason, or even a logical illegitimate reason for stopping a guy from buying social media in order to make its filtering algorithm public?

>...I think it's naive to assume that making the source code public will reveal how decisions are made. It's highly likely that machine learning is involved. If that's the case, then bias in filtering can be a result of bias in the training data. I doubt that the training data will be available or its biases understood any time soon. -Dave

Ja of course. What we have now are plausible accusations that the 35k human moderators are biased already, but it doesn't matter much because with the number of users, particularly the number of new users we can easily anticipate, 35k humans would be completely overwhelmed anyway, even if Elon had any intentions of keeping them employed. That man is building fleets of electric cars with fewer humans than that.

Dave if you and I had the money to buy Twitter, the first thing we would do is re-assign those 35k people to write software that flags the posts they flag currently, or failing that, come on in and build electric cars or recoverable rockets. No damn way we would keep bleeding 30 million bucks a week for some service that many users already said is grossly inadequate, for there are still tweets somehow getting thru which have both the terms "Hunter" and "laptop." In the same tweet! Horrors.
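As a toy illustration, a filter that flags any post containing every term in a given pair really is only a few lines; the post data here is made up:

def contains_all_terms(post, terms):
    """True if every term appears in the post, case-insensitively."""
    text = post.lower()
    return all(term.lower() in text for term in terms)

posts = ["Hunter's laptop is back in the news", "my laptop died today"]
flagged = [p for p in posts if contains_all_terms(p, ("hunter", "laptop"))]
print(flagged)  # -> ["Hunter's laptop is back in the news"]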
Hell even I could write software which would filter every post which contains both of those terms, sheesh, and I suck. Get a good programmer, she could have that code written in short order, along with a switch or slider bar for the user to allow that kind of horrifying abuse to come to his Twitter feed.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Sun May  1 16:33:24 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 09:33:24 -0700
Subject: [ExI] bee having fun
In-Reply-To: <003801d85d78$3f93d020$bebb7060$@rainier66.com>
References: <000f01d85be1$0014c500$003e4f00$@rainier66.com> <004a01d85c03$050ff9e0$0f2feda0$@rainier66.com> <007201d85c08$c3035960$490a0c20$@rainier66.com> <001b01d85c11$a9a48300$fced8900$@rainier66.com> <003801d85d78$3f93d020$bebb7060$@rainier66.com>
Message-ID: <004a01d85d79$2e25ba00$8a712e00$@rainier66.com>

From: spike at rainier66.com

>...Hell even I could write software which would filter every post which contains both of those terms, sheesh, and I suck. Get a good programmer, she could have that code written in short order, along with a switch or slider bar for the user to allow that kind of horrifying abuse to come to his Twitter feed. spike

Hey cool, I gave me an idea. Thanks spike!

Set a feature on Twitter such that the user can filter posts which contain terms which they don't like! For instance, a user could create a custom filter which would filter any post which contains the term "second amendment" or "Don't tread on me" that kinda thing. Oh we could soooo make buttloads of money and keep Twitter proles feeling safe online at the same time! Double score, such a deal.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com  Sun May  1 16:50:44 2022
From: pharos at gmail.com (BillK)
Date: Sun, 1 May 2022 17:50:44 +0100
Subject: [ExI] Review - TED interview with Elon Musk re Twitter takeover
Message-ID:

This article is an analysis of Musk's interview comments. The article claims that the interview demonstrates how little Musk understands about content moderation. (Not surprising really, as it is not a field Musk has been involved in). Many good points made in this article!

Quotes:
Indeed, what struck me about his views is how much they sound like what the techies who originally created social media said in the early days. And here's the important bit: all of them eventually learned that their simplistic belief in how things should work does not work in reality and have spent the past few decades trying to iterate.
------
Simply saying that moderation should follow the law generally shows that one has never actually tried to moderate anything. Because it's much more complicated than that, as Musk will implicitly admit later on in this interview, without the self-awareness to see how he's contradicting himself.
--------
There's then a slightly more interesting discussion of open sourcing the algorithm, which is its own can of worms that I'm not sure Musk understands. First of all, it's often not the algorithm that is the issue. Second, algorithms that are built up in a proprietary stack are not so easy to just randomly "open source" without revealing all sorts of other stuff. Third, the biggest beneficiaries of open sourcing the ranking algorithm will be spammers (which is doubly amusing because in just a few moments Musk is going to whine about spammers).
Open sourcing the algorithm will be most interesting to those looking to abuse and game the system to promote their own stuff. We know this. We've seen it. There's a reason why Google's search algorithm has become more and more opaque over the years. Not because it's trying to suppress people, but because the people who were most interested in understanding how it all worked were search engine spammers. Open sourcing the Twitter algorithm would do the same thing.
-----
Indeed, to get to the spot that we're in now, basically all of these companies started with that same premise, realized it wasn't workable, and then iterated. And Musk is basically saying "I have a brilliant idea: let's go back to step 1 and pretend none of the things experts in this space have learned over the past decade actually happened."
--------
The problem is not "someone I dislike saying something I dislike"; the problem is spam, abuse, harassment, threats of violence, dangerously misleading false information, and more.
--------
BillK

From spike at rainier66.com  Sun May  1 17:21:33 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 10:21:33 -0700
Subject: Re: [ExI] Review - TED interview with Elon Musk re Twitter takeover
In-Reply-To:
References:
Message-ID: <006001d85d7f$e7ef2970$b7cd7c50$@rainier66.com>

...> On Behalf Of BillK via extropy-chat

>...Many good points made in this article!
--------
>...The problem is not "someone I dislike saying something I dislike"; the problem is spam, abuse, harassment, threats of violence, dangerously misleading false information, and more.
--------
BillK
_______________________________________________

Thx BillK, the reason (one of the reasons) I am optimistic is that with Twitter, a user chooses which twitterer she wishes to follow. In that sense, she creates a custom filter and can make her space as safe or as dangerous as she wishes.

Second: even with its 35k human moderators, Twitter was making some appalling mistakes. They "proactively blocked" a guy who currently has eeeiiiighty niiiine miiiilllllion freaking followers. It isn't even political: he is not eligible for high office, he's African American. Not born here, can't run for POTUS so that isn't it. Twitter proactively filtered a guy who now has nearly half the twitterverse following him, while allowing posts which openly call for violence.

What we have in the social media are companies with the power of a publisher but the legal protections of a platform. Unaccountable power leads to abuse. Evidence: Twitter proactively blocking Musk. Elon is a guy who has inherent credibility because of his astonishing business successes (Lions and rockets and cars, oh my!) Of course he is going to have skerjillions of followers. Twitter "proactively blocked" him, not because of what he actually posted, but because of what they feared he would post. Show me any other social medium owner in that situation please. There are none? OK then.

I am betting with the guy who wants to try to robo-moderate Twitter, and I think he can succeed where others failed. I think software can find spam, abuse, harassment, threats of violence. That whole "dangerously misleading false information" bit is clearly a lot more tricky, but all those human Twitter moderators collectively failed at that as well. At some point a Ministry of Truth must arbitrate on what is truth. The person chosen in the USA to head the newly-formed Ministry of Truth is the same person who told us the Russia dossier was true but the laptop story was fake. Eh.
I choose... Elon Musk for 500 please Alex.

spike

From spike at rainier66.com  Sun May  1 17:52:05 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 10:52:05 -0700
Subject: Re: [ExI] Review - TED interview with Elon Musk re Twitter takeover
In-Reply-To:
References:
Message-ID: <000d01d85d84$2c433d60$84c9b820$@rainier66.com>

-----Original Message-----
From: extropy-chat On Behalf Of BillK via extropy-chat
Sent: Sunday, 1 May, 2022 9:51 AM
...
--------
>...The problem is not "someone I dislike saying something I dislike"; the problem is spam, abuse, harassment, threats of violence, dangerously misleading false information, and more.
--------
BillK
_______________________________________________

BillK, regarding dangerously misleading information and filtering, consider the article from the Daily Mail below, which I think is a news source from the UK. It clearly contains dangerously misleading information: some idiot is pouring cola in her fuel tank.

Think about whoever bought that ad space. Clearly she paid a lotta lotta money for it because that ad is found in a lotta places. Do you suppose she wanted to help people save 55 percent on their fuel bill? Or sell cola? I think neither. I think the motive for that ad purchase was to damage the credibility of any news source willing to run the ad. Some news agencies will run any ad anyone will pay them for.

This is a clearly absurd example, but a dangerous one in some ways. Some people are stupid enough to actually do it, and it will be an expensive repair if the engine drinks some of that.

Fun aside, they included a little sight-gag. The sticker right above the cola bottle says 97+ octane. You can't even get that stuff at the airport. That is specialty racing fuel now. I haven't even seen it for sale at normal gas stations since about the early 80s.

The lesson here is clear: the less moderated information sites are buyer beware. The highly moderated sites are buyer be even more aware.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image003.jpg
Type: image/jpeg
Size: 31168 bytes
Desc: not available
URL:

From pharos at gmail.com  Sun May  1 17:54:06 2022
From: pharos at gmail.com (BillK)
Date: Sun, 1 May 2022 18:54:06 +0100
Subject: Re: [ExI] Review - TED interview with Elon Musk re Twitter takeover
In-Reply-To: <006001d85d7f$e7ef2970$b7cd7c50$@rainier66.com>
References: <006001d85d7f$e7ef2970$b7cd7c50$@rainier66.com>
Message-ID:

On Sun, 1 May 2022 at 18:24, spike jones via extropy-chat wrote:
> Thx BillK, the reason (one of the reasons) I am optimistic is that with Twitter, a user chooses which twitterer she wishes to follow. In that sense, she creates a custom filter and can make her space as safe or as dangerous as she wishes.
>
> Second: even with its 35k human moderators, Twitter was making some appalling mistakes. They "proactively blocked" a guy who currently has eeeiiiighty niiiine miiiilllllion freaking followers. It isn't even political: he is not eligible for high office, he's African American. Not born here, can't run for POTUS so that isn't it. Twitter proactively filtered a guy who now has nearly half the twitterverse following him, while allowing posts which openly call for violence.
>
> spike
> _______________________________________________

Em, excuse me, it is Facebook that has 35,000 moderators. Their moderation failed because of the billions of posts they had to moderate.
Twitter moderation failed because it just wasn't much bothered about doing moderation. Free speech, you know.....

BillK

From spike at rainier66.com  Sun May  1 18:12:37 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 11:12:37 -0700
Subject: Re: [ExI] Review - TED interview with Elon Musk re Twitter takeover
In-Reply-To:
References:
Message-ID: <001801d85d87$0a1b2970$1e517c50$@rainier66.com>

-----Original Message-----
From: extropy-chat On Behalf Of BillK via extropy-chat
...
--------
The problem is not "someone I dislike saying something I dislike"; the problem is spam, abuse, harassment, threats of violence, dangerously misleading false information, and more.
--------
BillK
_______________________________________________

Another take: the current POTUS has about 34 million Twitter followers after being in office for well over a year and in government for... no one really knows how long. Elon has been unblocked for a coupla weeks and has 89 million followers, even though he is not even eligible for high office. Which of those guys is really the important leader?

spike

From spike at rainier66.com  Sun May  1 18:21:42 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 11:21:42 -0700
Subject: Re: [ExI] Review - TED interview with Elon Musk re Twitter takeover
In-Reply-To:
References: <006001d85d7f$e7ef2970$b7cd7c50$@rainier66.com>
Message-ID: <001c01d85d88$4f6b5800$ee420800$@rainier66.com>

...> On Behalf Of BillK via extropy-chat
...
> _______________________________________________

>...Em, excuse me, it is Facebook that has 35,000 moderators. Their moderation failed because of the billions of posts they had to moderate. Twitter moderation failed because it just wasn't much bothered about doing moderation. Free speech, you know..... BillK
_______________________________________________

OK cool thx BillK, I am not a social media user so I am not up to speed on these various companies. ExI is my closest thing to social media. Twitter made its appalling mistakes with few moderators. Even with few moderators, their appalling mistake was not under-moderating a false story, it was in blocking a true story. How the heck does Facebook afford 35k human moderators? The New York Post claims Facebook hid their story too.

spike

From jasonresch at gmail.com  Sun May  1 21:19:32 2022
From: jasonresch at gmail.com (Jason Resch)
Date: Sun, 1 May 2022 16:19:32 -0500
Subject: [ExI] The relevance of glutamate in color experience
Message-ID:

I am curious if Brent is familiar with this passage from David Chalmers' "The Conscious Mind" (pages 267-268), which concerns the plausibility of different qualia arising between two functional isomorphs made of different physical materials (e.g. silicon chips vs. neurons), and if so, what does this thought experiment imply for the role of molecules in the emergence of distinct qualia?

"For the purposes of the illustration, let these systems be me and Bill. Where I have a red experience, Bill has a slightly different experience. We may as well suppose that Bill sees blue; perhaps his experience will be more similar to mine than that, but it makes no difference to the argument. The two systems are also different in that where there are neurons in some small region of my brain, there are silicon chips in Bill's brain. This substitution of a silicon circuit for a neural circuit is the only physical difference between Bill and me.
The crucial step in the thought experiment is to take a silicon circuit just like Bill's and install it in my own head as a backup circuit. This circuit will be functionally isomorphic to a circuit already present in my head. We equip the circuit with transducers and effectors so that it can interact with the rest of my brain, but we do not hook it up directly. Instead, we install a switch that can switch directly between the neural and silicon circuits. Upon flipping the switch, the neural circuit becomes irrelevant and the silicon circuit takes over. We can imagine that the switch controls the points of interface where the relevant circuits affect the rest of the brain. When it is switched, the connections from the neural circuit are pushed out of the way, and the silicon circuit's effectors are attached. (We can imagine that the transducers for both circuits are attached the entire time, so that the state of both circuits evolves appropriately, but so that only one circuit at a time is involved in the processing. We could also run a similar experiment where both transducers and effectors are disconnected, to ensure that the backup circuit is entirely isolated from the rest of the system. This would change a few details, but the moral would be the same.)

Immediately after flipping the switch, processing that was once performed by the neural circuit is now performed by the silicon circuit. The flow of control within the system has been redirected. However, my functional organization is exactly the same as it would have been if we had not flipped the switch. The only relevant difference between the two cases is the physical makeup of one circuit within the system. There is also a difference in the physical makeup of another "dangling" circuit, but this is irrelevant to the functional organization, as it plays no role in affecting other components of the system and directing behavior.

What happens to my experience when we flip the switch? Before installing the circuit, I was experiencing red. After we install it but before we flip the switch, I will presumably still be experiencing red, as the only difference is the addition of a circuit that is not involved in processing in any way; for all the relevance it has to my processing, I might as well have eaten it. After flipping the switch, however, I am more or less the same person as Bill. The only difference between Bill and me now is that I have a causally irrelevant neural circuit dangling from the system (we might even imagine the circuit is destroyed when the switch is flipped). Bill, by hypothesis, was enjoying a blue experience. After the switch, then, I will have a blue experience too.

What will happen, then, is that my experience will change "before my eyes." Where I was once experiencing red, I will now experience blue. All of a sudden, I will have a blue experience of the apple on my desk. We can even imagine flipping the switch back and forth a number of times, so that the red and blue experiences "dance" before my eyes.

This might seem reasonable at first - it is a strangely appealing image - but something very odd is going on here. My experiences are switching from red to blue, but I do not notice any change. Even as we flip the switch a number of times and my qualia dance back and forth, I will simply go about my business, noticing nothing unusual. By hypothesis, my functional organization after flipping the switch evolves just as it would have if the switch had not been flipped.
There is no special difference in my behavioral dispositions. I am not suddenly disposed to say "Hmm! Something strange is going on!" There is no room for a sudden start, for an exclamation, or even for a distraction of attention. Any unusual reaction would imply a functional difference between the two circuits, contrary to their stipulated isomorphism."

Is there an error in this reasoning? If so, where is it?

Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brent.allsop at gmail.com  Mon May  2 02:04:45 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 1 May 2022 20:04:45 -0600
Subject: [ExI] The relevance of glutamate in color experience
In-Reply-To:
References:
Message-ID:

Hi Jason,

Yes, this is the Neuro Substitution Argument for functionalism that Stathis, I, and others have been rehashing, forever, trying to convince the other side. Stathis, Chalmers, and other functionalists believe they must accept functionalism because of this argument. This is a specific example of the 'dancing qualia' contradiction (one of many) which results if you accept this argument.

I like to point out that this argument is dependent on two assumptions: 1. that all the neurons do is the same thing discrete logic gates do in abstract computers; 2. that the neuro substitution will succeed. If either of these two fail, the argument doesn't work.

Steven Leahar, I, and others (there are more functionalists than us) predict that the neurons are doing more than just the kind of discrete logic function abstract computers do. Somehow they use qualities, like redness and greenness, to represent information, in a way that can be "computationally bound", doing similar computation to what the mere discrete logic gates are doing when they represent things with 1s and 0s. A required functionality is that if redness changes to blueness, or anything else, the system must behave differently and report the difference. But this functionality isn't possible in abstract systems, because no matter how the substrate changes, it still functions the same. This is by design. (i.e. no matter what is representing a value like '1', whether redness or blueness or +5 volts, or punch in paper..., you need a different dictionary for each different representation to tell you what is still representing the 1.) Redness, on the other hand, is just a fact. No dictionary required, and substrate independence is impossible, by design.

So, the prediction is that it is a fact that something in the brain has a redness quality. Our brain uses this quality to represent conscious knowledge of red things with. Nothing else in the universe has that redness quality. So, when you get to the point of swapping out the first pixel of glutamate/redness quality, with anything else, the system must be able to report that it is no longer the same redness quality. Otherwise, it isn't functioning sufficiently to have conscious redness and greenness qualities. So, the prediction is, no functionalist will ever be able to produce any function, nor anything else, that will result in a redness experience, so the substitution will fail. If this is true, all the 'dancing', 'fading', and all the other 'hard problem' contradictions no longer exist. It simply becomes a color problem, which can be resolved through experimentally demonstrating which of all our descriptions of stuff in the brain is a description of redness.
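To pin down the "functional equivalence" premise that this whole argument turns on, here is a toy sketch (purely illustrative, nothing to do with real neurons): two modules with completely different internal representations that no input-output test can distinguish. The substitution argument asks what would happen if qualia differed while a test like this always passed; the prediction above is that for redness no such equivalent substitute can exist.

class NeuralCircuit:
    """Toy stand-in for the original circuit: thresholds a wavelength."""
    def report(self, wavelength_nm):
        return "red" if wavelength_nm > 620 else "not red"

class SiliconCircuit:
    """Toy replacement: different internals, same input-output mapping."""
    _labels = ("not red", "red")
    def report(self, wavelength_nm):
        return self._labels[int(wavelength_nm > 620)]

def behaviorally_identical(a, b, stimuli):
    """True if no external probe over these stimuli can tell a from b."""
    return all(a.report(s) == b.report(s) for s in stimuli)

stimuli = range(380, 751)  # visible wavelengths, 1 nm steps
print(behaviorally_identical(NeuralCircuit(), SiliconCircuit(), stimuli))  # True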
So, if you understand that, does this argument convince you you must be a functionalist, like the majority of people?

Brent

On Sun, May 1, 2022 at 3:20 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> [snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stathisp at gmail.com  Mon May  2 02:21:28 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 2 May 2022 12:21:28 +1000
Subject: [ExI] The relevance of glutamate in color experience
In-Reply-To:
References:
Message-ID:

On Mon, 2 May 2022 at 07:21, Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> [snip]
By hypothesis, my functional > organization after flipping the switch evolves just as it would have if the > switch had not been flipped. There is no special difference in my > behavioral dispositions. I am not suddenly disposed to say ?Hmm! Something > strange is going on!? There is no room for a sudden start, for an > explanation, or even for a distraction of attention. Any unusual reaction > would imply a functional difference between the two circuits, contrary to > their stipulated isomorphism.? > > > Is there an error in this reasoning? If so, where is it? > >From presenting this argument to various people over years, the main problem seems to be with the concept of functional equivalence. They often respond that if the qualia were different the subject would notice, although this could only happen if there were no functional equivalence. > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon May 2 02:29:11 2022 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 2 May 2022 12:29:11 +1000 Subject: [ExI] The relevance of glutamate in color experience In-Reply-To: References: Message-ID: On Mon, 2 May 2022 at 12:06, Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Hi Jason, > > Yes, this is the Neuro Substitution Argument for functionalism > Stathis, > I and others have been rehashing, forever, trying to convince the other > side.. Stathis, Chalmers, and other functionalists > > believe they must accept functionalism because of this argument. This is a > specific example of the 'dancing qualia' contradiction (one of many) which > results if you accept this argument. > > I like to point out that this argument is dependent on two assumptions. > 1., that all the neurons do is the same thing discrete logic gates do in > abstract computers. 2. That the neuro substitution will succeed. If > either of these two fail, the argument doesn't work. > The argument does not depend on 1 or 2 being true. It only depends on the conclusion being true IF there is functional equivalence. If you accept that, then you accept functionalism. ?If all dogs have 5 legs and Spot is a dog then Spot has 5 legs? is valid, even though it is false that all dogs have 5 legs. Steven Leahar, I, and others (there are more functionalists than us) > predict that the neurons are doing more than just the kind of discrete > logic function abstract computers do. Somehow they use qualities, like > redness and greenness to represent information, in a way that can be > "computationally bound" doing similar computation to what the mere discrete > logic gates are doing, when they represent things with 1s and 0s. A > required functionality is if redness changes to blueness, or anything else, > the system must behave differently and report the difference. But this > functionality isn't possible in abstract systems, because no matter how the > substrate changes, it still functions the same. This is by design. (i.e. > no matter what is representing a value like '1', whether redness or > bluenness or +5 volts, or punch in paper..., you need a different > dictionary for each different representation to tell you what is still > representing the 1.) Redness, on the other hand, is just a fact. No > dictionary required, and substrate independence is impossible, by design. > > So, the prediction is that it is a fact that something in the brain has a > redness quality. 
> Our brain uses this quality to represent conscious knowledge of red things with. Nothing else in the universe has that redness quality. So, when you get to the point of swapping out the first pixel of glutamate/redness quality with anything else, the system must be able to report that it is no longer the same redness quality. Otherwise, it isn't functioning sufficiently to have conscious redness and greenness qualities. So, the prediction is, no functionalist will ever be able to produce any function, nor anything else, that will result in a redness experience, so the substitution will fail. If this is true, all the 'dancing', 'fading', and all the other 'hard problem' contradictions no longer exist. It simply becomes a color problem, which can be resolved through experimentally demonstrating which of all our descriptions of stuff in the brain is a description of redness.
>
> So, if you understand that, does this argument convince you you must be a functionalist, like the majority of people?
>
> Brent
>
> On Sun, May 1, 2022 at 3:20 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> [...]

--
Stathis Papaioannou
From spike at rainier66.com Mon May 2 03:35:51 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 1 May 2022 20:35:51 -0700
Subject: [ExI] face recognition software went wrong

While I was setting up my phone's face recognition software, I sneezed. Now I hafta do this to unlock my phone.

[attached image: image001.jpg]

spike

From brent.allsop at gmail.com Mon May 2 03:53:37 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 1 May 2022 21:53:37 -0600
Subject: Re: [ExI] The relevance of glutamate in color experience

On Sun, May 1, 2022 at 8:22 PM Stathis Papaioannou via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> From presenting this argument to various people over the years, the main problem seems to be with the concept of functional equivalence. They often respond that if the qualia were different the subject would notice, although this could only happen if there were no functional equivalence.

The required functionality is, if redness changes, it MUST notice it and change behavior. Otherwise, it isn't a consciousness composed of that particular redness, and able to detect differences from redness, as my consciousness is able to do. I must be able to detect when redness changes to anything else. There must be some change in the system which is responsible for the change from redness to anything else, which enables the ability to know what redness is, and when it is not redness. If it can't do all that, it isn't qualitatively like me.

Not sure I understand your five-legged-dog argument, as it is kind of like saying 1 = 2 (the same as 5 = 4). If you do that, you can prove anything and everything to be true. It is just a contradictory set of assumptions that gives the system zero utility.

From jasonresch at gmail.com Mon May 2 13:25:57 2022
From: jasonresch at gmail.com (Jason Resch)
Date: Mon, 2 May 2022 09:25:57 -0400
Subject: Re: [ExI] The relevance of glutamate in color experience

On Sun, May 1, 2022, 10:04 PM Brent Allsop wrote:

> [...]
> I like to point out that this argument is dependent on two assumptions: 1, that all the neurons do is the same thing discrete logic gates do in abstract computers; 2, that the neuro substitution will succeed. If either of these two fails, the argument doesn't work.

I think you may be misapplying the computational universality argument as it pertains to machines and minds. What I, and other functionalists, claim is not that neurons are like logic gates or that the brain is like a computer, but quite the opposite: it's not that the brain is like a computer but that a computer (with the right program) can be like a brain.
The universality of computers means computers are sufficiently versatile and flexible that they can predict the behavior of neurons. (Or molecules, or atoms, or quarks, or chemical interactions, or physical field interactions, or anything that is computable.)

Your assumption that functional equivalence is impossible to achieve between a computer and a neuron implies neurons must do something uncomputable. They must do something that would take a Turing machine an infinite number of steps to do, or have to process an infinite quantity of information in one step, but what function could this be in a neuron and how could it be relevant to neuronal behavior?

All known physical laws are computable, so if everything neurons do is in accordance with known physical laws, then neurons can in principle be perfectly simulated. For your argument to work, one must suppose there are undiscovered, uncomputable physical laws which neurons have learned to tap into and that this is important to their function and behavior. But what is the motivation for this supposition?

For Penrose it was the idea that human mathematicians can know something is true that a consistent machine following fixed rules could not prove. But this is flawed in many ways. Truth is different from proof; human mathematicians are not consistent; they don't necessarily stay within one system when reasoning; and further, they are subject to the same Gödelian constraints. For example: "Roger Penrose cannot consistently believe this sentence is true." You and I can see it as true and believe it, but Penrose cannot. He is stuck the same way any consistent proving machine can be stuck. He can only see the sentence as true if he becomes inconsistent himself.

To assume something as big as a new class of uncomputable physical laws, for which we have no indication or evidence, requires some compelling reason. What is the reason in your case?

> Steven Leahar, I, and others (there are more functionalists than us) predict that the neurons are doing more than just the kind of discrete logic function abstract computers do.

Certainly, neurons are far more complex than AND or XOR gates. I agree with you there, but that is irrelevant to the point: the right arrangement of logic gates together with an unbounded working memory (a computer) is able to be programmed to perform any computable mathematical function or any computable physical simulation of any physical object following computable physical laws.

So the question to answer is: What is the nature of this uncomputable physics, and how does the neuron use it? It's okay to say: I don't know. But it's important to recognize what you are required to assume for this argument to hold: the presence and importance of a new, uncomputable physics, which plays a necessary functional role in the behavior of neurons. Neurons must decide to fire or not fire based on this new physics, and presently known factors such as ion concentrations and inputs and dendritic connections etc. are insufficient to make the determination of whether or not a neuron will fire.

> Somehow they use qualities, like redness and greenness, to represent information, in a way that can be "computationally bound", doing similar computation to what the mere discrete logic gates are doing when they represent things with 1s and 0s.

Why presume that red and green must be low-level constructs rather than high-level constructs?
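As an aside, the computability claim above is easy to make concrete. Below is a minimal sketch of a leaky integrate-and-fire model neuron in Python; every name and parameter value is invented for the illustration, and it is not offered as a faithful model of a biological neuron, only as an existence proof that a neuron-like update rule is an ordinary computable function of state and input.

# Minimal leaky integrate-and-fire neuron: an illustration that a
# neuron-like update rule is an ordinary computable function.
# All parameter values are arbitrary, chosen only for the demo.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Return the times (in ms) at which the model neuron 'fires'."""
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by input.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_thresh:              # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset after firing
    return spike_times

print(simulate_lif([2.0] * 1000))      # constant drive for 100 ms

Nothing in the update rule goes beyond arithmetic on the current state and the input, which is exactly the kind of thing a universal computer can do; the burden on the anti-functionalist is to point to a step in the real neuron that could not, even in principle, be captured this way.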
> A required functionality is that if redness changes to blueness, or anything else, the system must behave differently and report the difference. But this functionality isn't possible in abstract systems, because no matter how the substrate changes, it still functions the same. This is by design. (i.e. no matter what is representing a value like '1', whether redness or blueness or +5 volts, or punched paper..., you need a different dictionary for each different representation to tell you what is still representing the 1.) Redness, on the other hand, is just a fact. No dictionary required, and substrate independence is impossible, by design.

That redness appears as a brute fact to a mind does not mean it must be a primitive brute fact or property of the substrate itself. Again from Chalmers:

"Now, let us take the system's 'point of view' toward what is going on. What sort of judgments will it form? Certainly it will form a judgment such as 'red object there,' but if it is a rational, reflective system, we might also expect it to be able to reflect on the process of perception itself. How does perception 'strike' the system, we might ask? The crucial feature here is that when the system perceives a red object, central processes do not have direct access to the object itself, and they do not have direct access to the physical processes underlying perception. All that these processes have access to is the color information itself, which is merely a location in a three-dimensional information space. When it comes to linguistically reporting on the situation, the system cannot report, 'This patch is saturated with 500- to 600-nanometer reflections,' as all access to the original wavelengths is gone. Similarly, it cannot report about the neural structure, 'There's a 50-hertz spiking frequency now,' as it has no direct access to neural structures. The system has access only to the location in information space. Indeed, as far as central processing is concerned, it simply finds itself in a location in this space. The system is able to make distinctions, and it knows it is able to make distinctions, but it has no idea how it does it. We would expect after a while that it could come to label the various locations it is thrown into -- 'red,' 'green,' and the like -- and that it would be able to know just which state it is in at a given time. But when asked just how it knows, there is nothing it can say, over and above 'I just know, directly.' If one asks it, 'What is the difference between these states?' it has no answer to give beyond 'They're just different,' or 'This is one of those,' or 'This one is red, and that one is green.' When pressed as to what that means, the system has nothing left to say but 'They're just different, qualitatively.' What else could it say?"

"It is natural to suppose that a system that can know directly the location it occupies in an information space, without having access to any further knowledge, will simply label the states as brutely and primitively different, differing in their 'quality.' Certainly, we should expect these differences to strike the system in an 'immediate' way: it is thrown into these states which in turn are immediately available for the direction of later processing; there is nothing inferential, for example, about its knowledge of which state it is in.
And we should expect these states to be quite 'ineffable': the system lacks access to any further relevant information, so there is nothing it can say about the states beyond pointing to their similarities and differences with each other, and to the various associations they might have. Certainly, one would not expect the 'quality' to be something it could explicate in more basic terms."

> So, the prediction is that it is a fact that something in the brain has a redness quality.

What are your base assumptions here from which this prediction follows? Is it something like:

1. Qualia like red seem primitively real.
2. Therefore we should assume qualia like red are primitively real.
3. It then follows the brain must use this primitively real stuff to compose red experiences.

I see the logic of this, but how things seem and how reality is are often different. Given that this line of reasoning leads to fading/dancing/absent qualia (short of requiring that neuronal behavior involves an unknown uncomputable physics), the bar of doubt is raised for me, enough to tilt me towards the idea that in this case appearances perhaps do not reflect reality as it is (as is often the case in science, e.g. life seemed designed until Darwin).

In other words, to keep a simpler physics, I am willing to give up a simple account of qualia. This makes qualia into complex, high-level properties of complex processes within minds, but preserves the relative simplicity of our physical theories as we presently know and understand them. Is this a price worth paying? It means we still have more explaining to do regarding qualia, but this is no different a quest than biologists faced when they gave up on a simple "elan vital" in explaining the unique properties and abilities of "organic matter." In the end, the unique abilities and properties of organic matter -- to grow, to initiate movement, to cook rather than melt -- turned out not to be due to a primitive property inherent to organic matter but rather to be a matter of its highly complex organization. I think the same could be true for qualia: that it's not the result of a simple primitive, but the result of a complex organization.

> Our brain uses this quality to represent conscious knowledge of red things with. Nothing else in the universe has that redness quality. So, when you get to the point of swapping out the first pixel of glutamate/redness quality with anything else, the system must be able to report that it is no longer the same redness quality. Otherwise, it isn't functioning sufficiently to have conscious redness and greenness qualities. So, the prediction is, no functionalist will ever be able to produce any function, nor anything else, that will result in a redness experience, so the substitution will fail. If this is true, all the 'dancing', 'fading', and all the other 'hard problem' contradictions no longer exist. It simply becomes a color problem, which can be resolved through experimentally demonstrating which of all our descriptions of stuff in the brain is a description of redness.
>
> So, if you understand that, does this argument convince you you must be a functionalist, like the majority of people?

Do you agree with my assessment of the trade-offs? That is, we either have:

1. Simple primitive qualia, but new uncomputable physical laws, or
2. Simple computable physics, but complex functional organizations necessary for even simple qualia like red?

If not, is there a third possibility I have overlooked?
Jason

From rafal.smigrodzki at gmail.com Mon May 2 17:59:28 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Mon, 2 May 2022 13:59:28 -0400
Subject: Re: [ExI] The relevance of glutamate in color experience

On Sun, May 1, 2022 at 10:07 PM Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Hi Jason,
>
> Yes, this is the Neuro Substitution Argument for functionalism that Stathis, I, and others have been rehashing forever, trying to convince the other side. Stathis, Chalmers, and other functionalists believe they must accept functionalism because of this argument. This is a specific example of the 'dancing qualia' contradiction (one of many) which results if you accept this argument.
>
> I like to point out that this argument is dependent on two assumptions: 1, that all the neurons do is the same thing discrete logic gates do in abstract computers; 2, that the neuro substitution will succeed. If either of these two fails, the argument doesn't work.

### This is not true. The argument is valid regardless of the mechanism of computation in the device that is substituting for a part of the brain. The only requirement for the substitution argument is that the substituted device must not change the way the rest of the recipient brain works (i.e. the overall pattern of neural activity and behavior controlled by the brain).

By way of illustration, instead of using a digital device for substitution, we may consider a genetically engineered brain that has the identical functional organization as a normal human brain but substitutes e.g. D-glutamate for L-glutamate as the transmitter. This would require re-engineering the structure of the relevant glutamate receptors, adding a glutamate isomerase to make D-glutamate out of L-glutamate, and perhaps other minor tweaks, but it would not change the functional aspects of neurotransmission in the modified brain or its parts.

Obviously, if the chemical structure of glutamate somehow determined qualia, then such a modified brain would have different qualia. If however the modified D-glutamate brain is able to substitute for a part of the standard L-glutamate brain without changing the overall patterns of neural activation and without changing behavior, then the substitution would prove that glutamate has nothing to do with qualia.

Rafal

From stathisp at gmail.com Mon May 2 18:38:15 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 3 May 2022 04:38:15 +1000
Subject: Re: [ExI] The relevance of glutamate in color experience

On Tue, 3 May 2022 at 04:01, Rafal Smigrodzki via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> [...]
> Obviously, if the chemical structure of glutamate somehow determined qualia, then such a modified brain would have different qualia. If however the modified D-glutamate brain is able to substitute for a part of the standard L-glutamate brain without changing the overall patterns of neural activation and without changing behavior, then the substitution would prove that glutamate has nothing to do with qualia.

The argument can be generalised by using a black box that interacts with the brain in the same way as the replaced tissue. It is an argument showing that qualia cannot be separated from behaviour.

--
Stathis Papaioannou

From brent.allsop at gmail.com Mon May 2 19:35:18 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 2 May 2022 13:35:18 -0600
Subject: Re: [ExI] The relevance of glutamate in color experience

Hi Rafal,
We're still talking about completely different things. I'm just saying that when you look at a strawberry, and if a pixel on the surface of that strawberry is changing from redness to greenness, there must be something, physical, that is responsible for that change in the one pixel. The prediction is, nothing but that physics will be able to duplicate the redness quality of that one pixel, especially not some abstract function. This redness quality, and what it must be, has nothing to do with what you are talking about.

Again, like I was asking Jason: could I get you guys to support one of the functionalist camps? I'm kind of proud of the fact that I'm in a minority camp, and that so many of you believe functionalism is a reasonable theory. I'd really like to track this more rigorously, and I'd like to know if any of you ever jump camps, based on any evidence, or new arguments, in the future. There is Stathis' camp, Functional Property Dualism, or there is also the Qualia Emerge from Function camp.

And would anyone care to place a bet (I'd bet 10 to 1 odds) on which camp will have, by far, the most support in 10 years, functionalism or materialism?
From brent.allsop at gmail.com Mon May 2 19:40:11 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 2 May 2022 13:40:11 -0600
Subject: Re: [ExI] The relevance of glutamate in color experience

Stathis, as usual, we are always talking about completely different things.

The prediction is that for each redness pixel of our knowledge of the surface of a strawberry, there must be something that has that redness quality. Nothing will be able to produce that redness quality for that one pixel, other than that set of physics, an example being glutamate. And the system must be able to detect when the pixel changes, and it must be able to act differently when it does change. The neural substitution doesn't allow for that redness to be anything, even something functional, for the same reason, so it is just an absurd argument. The system must be able to detect when redness changes to greenness, or anything else.
What, if not something physical, could be responsible for any such change in the quality of the substrate of our knowledge of the strawberry, in a way so that we can report that it has changed?

On Mon, May 2, 2022 at 12:39 PM Stathis Papaioannou via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> [...]
> The argument can be generalised by using a black box that interacts with the brain in the same way as the replaced tissue. It is an argument showing that qualia cannot be separated from behaviour.
From stathisp at gmail.com Mon May 2 21:13:15 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 3 May 2022 07:13:15 +1000
Subject: Re: [ExI] The relevance of glutamate in color experience

On Tue, 3 May 2022 at 05:43, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Stathis, as usual, we are always talking about completely different things.
> [...]
> What, if not something physical, could be responsible for any such change in the quality of the substrate of our knowledge of the strawberry, in a way so that we can report that it has changed?

If the substituted structure can reproduce what you call the abstract qualities without reproducing the qualia, which means the subject will behave the same despite a change in qualia, how does this fit with your concept of what qualia are?
--
Stathis Papaioannou

From atymes at gmail.com Mon May 2 22:25:01 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 2 May 2022 15:25:01 -0700
Subject: Re: [ExI] ok this explains it

On Sat, Apr 30, 2022 at 11:42 AM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Furthermore, we get to find out if this bad evil practice he describes has already been done. So... stopping the evil practice, this is a good thing, ja? There are credible accusations that it has been done and is being done now. But Ari makes it sound like exposing that and stopping that is a bad thing. I don't understand. Can anyone explain please?

He knows it's being done to politicians he opposes. He wants that to continue but not be so obvious that most people know, and so phrases things to suggest it's not being done (much, if at all) today.

From atymes at gmail.com Mon May 2 22:30:04 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 2 May 2022 15:30:04 -0700
Subject: Re: [ExI] ok this explains it

On Sat, Apr 30, 2022 at 1:39 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> OK so... what if... Twitter human content moderators exist. What do they do, and how do they do it? What criteria do they use to determine if they will override the moderation software? If Musk makes his filtering algorithm public, then he wouldn't need humans in the loop overriding the software, ja? So... out they go, adios amigo.

Nope. The filtering practice can be considered cyborg: part algorithm, part human. Even if the algorithm part is open-sourced, it still kicks back to humans to allow for review.
It's a first pass/front line to greatly reduce human workloads (and respond, at least in extreme cases, much faster than humans), but it does not entirely eliminate them. That's how the filtering algorithms I have worked on functioned, and I have no reason to believe that Twitter is any different.

From atymes at gmail.com Mon May 2 22:41:27 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 2 May 2022 15:41:27 -0700
Subject: Re: [ExI] Hibernation for human space travel not possible

On Sat, Apr 30, 2022 at 5:06 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> "Humans are simply too large, so the benefits of hibernation are little -- as in bears -- if we think just on energy savings," Nespolo says.
> ----------------------
> So humans might as well stay awake during long space trips. But they will have to find some method of avoiding years of boredom during longer journeys.

You missed a nuance: "...if we think just on energy savings". Hibernation is one way to avoid boredom.

(Though there are many other ways, too.)

From atymes at gmail.com Mon May 2 22:56:43 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 2 May 2022 15:56:43 -0700
Subject: Re: [ExI] gaming the system

On Sat, Apr 30, 2022 at 4:00 PM Dan TheBookMan via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Instead of gaming the system as in making edgy content, maybe think of gaming the system as in making spam that's ever harder to filter out. In which case, transparent filtering algorithms make the spammer's job easier, no?

Only in the short term. As spammers adjust, the exploits they use will get pointed out, and the algorithm fixed. It's an arms race, but it winds up with better filters in the end. Further, it reduces the ranks of the spammers over time, since most spammers' entire point is to do their job with a minimum of resources.

It's kind of like why open source software that's been well-known for a while is more trustable from a security point of view: rather than hope the proprietary software developers have fixed every problem behind closed doors (which they probably haven't), you can see for yourself if there remain any flaws, and when flaws do come up, they can be fixed faster (e.g. by outside parties who ran into the problem and thus are quite familiar with what it is and its context).

From pharos at gmail.com Mon May 2 23:08:26 2022
From: pharos at gmail.com (BillK)
Date: Tue, 3 May 2022 00:08:26 +0100
Subject: Re: [ExI] Hibernation for human space travel not possible

On Mon, 2 May 2022 at 23:43, Adrian Tymes via extropy-chat wrote:
> On Sat, Apr 30, 2022 at 5:06 AM BillK via extropy-chat wrote:
>> "Humans are simply too large, so the benefits of hibernation are little -- as in bears -- if we think just on energy savings," Nespolo says.
>> ----------------------
>> So humans might as well stay awake during long space trips. But they will have to find some method of avoiding years of boredom during longer journeys.
> You missed a nuance: "...if we think just on energy savings". Hibernation is one way to avoid boredom.
>
> (Though there are many other ways, too.)

The article is mainly concerned with very long space trips. Weight loss during hibernation is about five pounds per year, which becomes significant on multi-year trips. Travel time to Mars would be about 9 months, so the weight loss would only be about four pounds during hibernation (nine months at five pounds per year is about 3.75 pounds). This should not be a problem for most people. Muscle weakness might be significant, though. Waking up early to do fitness training should fix that.

BillK

From rafal.smigrodzki at gmail.com Tue May 3 00:26:47 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Mon, 2 May 2022 20:26:47 -0400
Subject: Re: [ExI] The relevance of glutamate in color experience

On Mon, May 2, 2022 at 3:35 PM Brent Allsop wrote:

> Hi Rafal,
> We're still talking about completely different things. I'm just saying that when you look at a strawberry, and if a pixel on the surface of that strawberry is changing from redness to greenness, there must be something, physical, that is responsible for that change in the one pixel. The prediction is, nothing but that physics will be able to duplicate the redness quality of that one pixel, especially not some abstract function. This redness quality, and what it must be, has nothing to do with what you are talking about.

### Do you acknowledge that your answer to Jason's post was incorrect? Your claim that the substitution scenario would only be true if neurons were acting as logical gates is logically incorrect. If you refuse to acknowledge and withdraw your claim, it would be impossible to continue a discussion, since discussions where logically untrue statements are made are not worth having. If you specifically answer a challenge question, and your answer is logically incorrect, you don't get to weasel out of it by saying "we're still talking about completely different things".

Also, it's pretty arrogant to say that what "redness" must be has nothing to do with what I am talking about. Don't tell me you have a justified belief that would let you say it "must" be as you say, and don't tell me I don't know what I am talking about.

Rafal
From rafal.smigrodzki at gmail.com Tue May 3 00:40:56 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Mon, 2 May 2022 20:40:56 -0400
Subject: Re: [ExI] The relevance of glutamate in color experience

On Mon, May 2, 2022 at 2:38 PM Stathis Papaioannou wrote:

> The argument can be generalised by using a black box that interacts with the brain in the same way as the replaced tissue. It is an argument showing that qualia cannot be separated from behaviour.

### Yes, indeed. I used glutamate as an example because Brent has this obsession with the "redness quality of glutamate" and I was curious about how he would respond. Well, he responded with evasion and by claiming we don't know what we are talking about. Typical.

Rafal

From rafal.smigrodzki at gmail.com Tue May 3 01:15:22 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Mon, 2 May 2022 21:15:22 -0400
Subject: Re: [ExI] The relevance of glutamate in color experience

On Mon, May 2, 2022 at 3:44 PM Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Stathis, as usual, we are always talking about completely different things.
> [...]
> The neural substitution doesn't allow for that redness to be anything, even something functional, for the same reason, so it is just an absurd argument.

### Hey, so we-all are using an "absurd argument"? Does it mean you think we are stupid, since it's mostly stupid people who use absurd arguments?

So how stupid is it to say that "glutamate has a redness quality"? There are over twenty thousand distinct molecular species present in the color-perception cortex. Why not the redness quality of any other molecule? Why not say the water in the brain has a redness quality? What about the redness quality of glutamate in your Chinese takeout?

Rafal

From rafal.smigrodzki at gmail.com Tue May 3 10:57:39 2022
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Tue, 3 May 2022 06:57:39 -0400
Subject: Re: [ExI] Is Artificial Life Conscious?

On Fri, Apr 29, 2022 at 9:31 AM Jason Resch wrote:

>> Is this register-by-register and time-step by time-step record of synaptic and axonal activity conscious when stored in RAM? In a book?
>
> A record, even a highly detailed one as you describe, I don't believe is conscious. For if you alter any bit/bits in that record, say the bits representing visual information sent from the optic nerves, none of those changes are reflected in any of the neuron states downstream from that modification, so in what sense are they conscious of other information, or the firing of neighboring neurons, or the visual data coming in, etc. within the representation?
>
> There is no response to any change, and so I conclude there is no awareness of any of that information. This is why I think counterfactuals are necessary. If you make a relevant change to the inputs, that change must be reflected in the right ways throughout the rest of the system, otherwise you aren't dealing with something that has the right functional relations and organizations. If no other bits change, then you're dealing with a bit string that is a record only; it is devoid of all functional relations.

### This is a good point. Consciousness is a process, not a 3D structure. An alteration of a record of a brain which fails to preserve the usual causal relationships between the recorded brain parts is not likely to create conscious experience. The altered record does not propagate the change through its structure in the way that would happen in a functioning brain -- it is just like a frozen brain.
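A toy contrast makes the counterfactual point concrete. The details below are invented purely for illustration: the same trivial update rule is used once to produce a recording and once as a live process, and only the live process responds to a changed input.

# Toy contrast between a recording and a process, in the spirit of the
# record-vs-brain point above. The rule and numbers are invented.
def process(state, stimulus):
    # A trivial 'causal rule': next state depends on state and input.
    return (2 * state + stimulus) % 97

# Run the process once and record the trajectory.
state, recording = 5, []
for stimulus in [1, 0, 3, 3, 7]:
    state = process(state, stimulus)
    recording.append(state)

# Replaying the recording ignores any change in stimulus: it is frozen,
# like the altered record of a brain that propagates nothing.
replay = list(recording)

# Re-running the process responds to a counterfactual input.
state, rerun = 5, []
for stimulus in [1, 0, 9, 3, 7]:    # third stimulus changed: 3 -> 9
    state = process(state, stimulus)
    rerun.append(state)

print(replay)   # [11, 22, 47, 0, 7] -- unchanged by the new stimulus
print(rerun)    # [11, 22, 53, 12, 31] -- diverges from the third step on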
So we could say that for consciousness to exist there must be a timelike sequence of states of a material object, and these states must have a proper causal relationship so as to model something. That something is the subject, or content, of consciousness -- you are either conscious of a subject, or else you are not conscious at all. There is no pure consciousness; all consciousness has a subject.

----------------------------

> I don't think "running" is the right word either, as relativity reveals objective time as an illusion. So we must accept the plausibility of consciousness in timeless four-dimensionalism. It then must be the structure of relations and counterfactuals implied by laws (whether they be physical or mathematical or some other physics in some other universe) that are necessary for consciousness.

### I am reading "Out of Time" by Baron et al -- awfully boring and pedantic philosophy, but the idea that all you need for time to exist is to have a causal structure, which may be imprinted on a block universe, sounds pretty reasonable.

Now, putting together the causal theory of time and the above thought experiments on consciousness, I seem to discern a connection between time and consciousness.

The block universe is the set of all states that could be in some way described. There are places in the block universe that have a causal structure: you can derive each state from another state through the application of some sort of a potentially quite simple rule, which defines the physics of each such place. If the rule is unidirectional, the physics of that place may be said to contain causality -- one state causes another by application of the rule. Causality in the block universe is equivalent to time, and of course there is an infinite number of such causally connected sets of states, and an infinity of separate streams of time.

Some small fragments of the states in the timelike series go up one level: they are not just a result of applying a rule over the preceding state but also contain higher-order causal relationships, such that these special states model some content. It may be a model of an object that is outside the special state and is fed by a stream of sensory input, or it may be content generated internally within the special state.

So consciousness is something that exists in areas of the block universe where there are multilevel timelike, or causal, relationships between states. I need to mull this sentence over to make sure I understand what I just wrote :)

-----------------------------

>> And what if you run the same synaptic model on two computers? Is the consciousness double?
>
> Nick Bostrom has a paper arguing that it does create a duplicate with more "weight"; Arnold Zuboff argues for a position called Unificationism, in which there is only one unique mind even if run twice, and there's no change in its "weight".
>
> If reality is infinite and all possible minds and conscious experiences exist, then if Unificationism is true we should expect to be experiencing a totally random (think snow on a TV) kind of experience now, since there are so many more random than ordered unique conscious experiences. Zuboff uses this to argue that reality is not infinite. But if you believe reality is infinite, it can be used as a basis to reject Unificationism.

### I feel I have bitten off more than I can handle with the above questions.
My guess is that consciousness is local, not global, so copies of a mind do not have a global meaning, they are just separate minds. So I guess I am not an Unificationist but then I also don't think there is a "weight" related to copies of minds, just as there is no global "weight" of all the different independent minds. Minds are just separate, unless they exchange information. I need to let the multilevel timelike causality theory of consciousness settle in my mind for a while before I can start asking more useful questions. -------------------------------- > > Is there something special about dissipation of energy, >> > > This is just a reflection of the fact that in physics, information is > conserved. If you overwrite/erase a bit in a computer memory, that bit has > to go somewhere. In practice, for our current computers, it is leaked into > the environment and this requires leaking energy into the environment as > implied by the Landauer limit. But if no information is erased/overwritten, > which is possible to do in reversible computers (and is in fact necessary > in quantum computers), then you can compute without dissipating any energy > at all. So I conclude dissipating energy is unrelated to computation or > consciousness. > ### I agree, although non-dissipating consciousness may have a number of limitations. -------------------------------------- > > or about causal processes that add something special to the digital, >> mathematical entities represented by such processes? >> > > The causality (though I would say relations since causality itself is > poorly understood and poorly defined) is key, I think. If you study a bit > of cryptography (see "one time pad" encryption) you can come to understand > why any bit string can have any meaning. It is therefore meaningless > without the context of it's interpreter. > ### Yes, I think here we are hitting pay dirt! --------------------------------- > > So to be "informative" we need both information and a system to be > informed by or otherwise interpret that information. Neither by itself is > sufficient. > ### Yes, multilevel causal structure - base level physics organized into more complex states that model other states. ------------------------ > > >> I struggle to understand what is happening. I have a feeling that two >> instances of a simple and pure mathematical entity (a triangle or an >> equation) under consideration by two mathematicians are one and the same >> but then two pure mathematical entities that purport to reflect a mind >> (like the synapse-level model of a brain) being run on two computers are >> separate and presumably independently conscious. Something doesn't fit >> here. >> > > The problem you are referencing is the distinction between types and > tokens. > > A type is something like "Moby Dick", of which there is only one uniquely > defined type which is that story. > > A token is any concrete instance of a given type. For example any > particular book of Moby Dick is a token of the type Moby Dick. > > I think you may be asking: should we think of minds as types or tokens? I > think a particular mind at a particular point in time (one > "observer-moment") can be thought of as a type. But across an infinite > universe that mind state or observer moment may have many, (perhaps an > infinite number of) different tokens -- different instantiations in terms > of different brains or computers with uploaded minds -- representing that > type. 
> > So two instances of the same mind being run on two different computers are > independently conscious in the sense that turning either one off doesn't > destroy the type, even if one token is destroyed, just as the story of Moby > Dick isn't destroyed if one book is lost. > > The open question to me is: does running two copies increase the > likelihood of finding oneself in that mind state? This is the > Unificationism/Duplicationism debate. > ### Asking about probability in the context of consciousness is asking for trouble, because our understanding of both probability and consciousness is tenuous, and errors explode when you let poorly defined notions interact. I distrust the thought experiments in this area of philosophy. -------------------------- > > > Maybe there is something special about the physical world that imbues >> models of mathematical entities contained in the physical world with a >> different level of existence from the Platonic ideal level. >> > > We can't rule out (especially given all the other fine-tuning > coincidences we observe) that our physics has a special property necessary > for consciousness, but I tend not to think so, given all the problems > entailed by philosophical zombies and zombie worlds -- where we have > philosophers of mind and books about consciousness and exact copies of the > conversations such as in this thread, being written by entities in a > universe that has no consciousness. This idea just doesn't seem coherent to me. > ### Well, if our physics is timelike, and a multilevel causal structure is > needed for consciousness, then you need our physics, or an equivalent, for > consciousness. It's complicated. ---------------------------- > > Or maybe different areas of the Platonic world are imbued with different >> properties, such as consciousness, even as they copy other parts of the >> Platonic world. >> > > As Bruno Marchal points out in his filmed graph thought experiment, if one > accepts mechanism (a.k.a. functionalism, or computationalism), this implies > that platonically existing number relations and computations are sufficient > for consciousness. Therefore consciousness is in a sense more fundamental > than the physical worlds we experience. The physics, in a sense, drops out > as the consistent extensions of the infinite indistinguishable computations > defining a particular observer's current mind state. > ### I would say otherwise - the causal structure of time in our physics > (the sequence of Platonic states connected by a causal rule) is the thing > that allows consciousness, by being the basis for building additional > levels of causal relationships between Platonic objects. -------------------------------- > > This is explored in detail by Markus P. Mueller, in his paper on deriving > laws of physics from algorithmic information theory. He is able to predict > from these first principles that most observers should find themselves to > be in a universe having simple, but probabilistic laws, with time, and a > point in the past beyond which further retrodiction is impossible. > > Indeed we find this to be true of our own physics and universe. I cover > this subject in some detail in my "Why does anything exist?" article (on > AlwaysAsking.com ). I am currently working on an article about > consciousness. The two questions are quite interrelated. > ### Indeed. Are you familiar with Wolfram's Physics Project? 
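(For anyone who hasn't looked at it: the project tries to derive physics from simple rewriting rules. Here is a toy Python sketch of the narrower point above - a simple unidirectional rule generating a timelike, causally ordered sequence of states - using an elementary cellular automaton rather than Wolfram's actual hypergraph formalism:

# Rule 30: each cell's next state is a fixed function of its neighborhood.
RULE = 30  # bit i of the number 30 is the output for neighborhood pattern i

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

state = [0] * 15 + [1] + [0] * 15    # one "on" cell in the middle
for _ in range(12):                  # each row is caused by the row above
    print("".join(".#"[c] for c in state))
    state = step(state)

Each printed row follows from its predecessor by nothing but the rule - a miniature of the "causal structure imprinted on a block universe" picture.)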
I feel his approach may help us eventually put metaphysics on a firmer ground and maybe connect physics to the theory of consciousness in a more rigorous way. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue May 3 12:18:01 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 05:18:01 -0700 Subject: [ExI] publish or perish? Message-ID: <004101d85ee7$d5a9f010$80fdd030$@rainier66.com> Suppose one guy has 89 million twitter followers and another guy has 8 followers. Does anyone here wish to argue that their tweets are currently being scrutinized and moderated equally? Should they be? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue May 3 13:24:03 2022 From: sparge at gmail.com (Dave S) Date: Tue, 3 May 2022 09:24:03 -0400 Subject: [ExI] Hibernation for human space travel not possible In-Reply-To: References: Message-ID: On Mon, May 2, 2022 at 7:11 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > The article is mainly concerned with very long space trips. Weight > loss during hibernation is about five pounds per year, which becomes > significant on multi-year trips. > Weight loss would be expected to be about five pounds if humans could hibernate like other animals, which they can't. Animals also can't hibernate for years. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue May 3 13:35:43 2022 From: sparge at gmail.com (Dave S) Date: Tue, 3 May 2022 09:35:43 -0400 Subject: [ExI] publish or perish? In-Reply-To: <004101d85ee7$d5a9f010$80fdd030$@rainier66.com> References: <004101d85ee7$d5a9f010$80fdd030$@rainier66.com> Message-ID: On Tue, May 3, 2022 at 8:20 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > Suppose one guy has 89 million twitter followers and another guy has 8 > followers. Does anyone here wish to argue that their tweets are currently > being scrutinized and moderated equally? > I think the automatic filtering is probably the same for everyone, but the human scrutiny would likely take followers into account. > Should they be? > Maybe. What if the 89M guy follows the 8 guy and retweets him? Does the retweet avoid scrutiny because it was already allowed? And what happens when 89M guy posts something innocuous and later edits it into something that's not? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue May 3 13:41:51 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 06:41:51 -0700 Subject: [ExI] Hibernation for human space travel not possible In-Reply-To: References: Message-ID: <006801d85ef3$8bae4ef0$a30aecd0$@rainier66.com> ?> On Behalf Of Dave S via extropy-chat >?Weight loss would be expected to be about five pounds if humans could hibernate like other animals, which they can't. Animals also can't hibernate for years. -Dave Dave could a beast hibernate for years if we rigged up some kind of feeding tube and some kind of device up its rear to get rid of the waste? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue May 3 13:45:50 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 06:45:50 -0700 Subject: [ExI] publish or perish? 
In-Reply-To: References: <004101d85ee7$d5a9f010$80fdd030$@rainier66.com> Message-ID: <006f01d85ef4$1a3b1360$4eb13a20$@rainier66.com> ?> On Behalf Of Dave S via extropy-chat Subject: Re: [ExI] publish or perish? On Tue, May 3, 2022 at 8:20 AM spike jones via extropy-chat > wrote: >>?Suppose one guy has 89 million twitter followers and another guy has 8 followers. Does anyone here wish to argue that their tweets are currently being scrutinized and moderated equally? >?I think the automatic filtering is probably the same for everyone, but the human scrutiny would likely take followers into account? Ja I agree. Thanks for that. >>?Should they be? >?Maybe. -Dave Cool excellent. The question trifurcates into a simple yes, a simple no or a subtle, complicated and interesting set of maybes. I will attempt to defend all three branches of that trifurcation in a future post, but I want to hear other thoughts on the matter please. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue May 3 13:56:54 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 06:56:54 -0700 Subject: [ExI] publish or perish? In-Reply-To: References: <004101d85ee7$d5a9f010$80fdd030$@rainier66.com> Message-ID: <008001d85ef5$a60a4900$f21edb00$@rainier66.com> ?> On Behalf Of Dave S via extropy-chat Should they be? >?Maybe. What if the 89M guy follows the 8 guy and retweets him? Does the retweet avoid scrutiny because it was already allowed? That depends. In a platform/publisher where moderation is done by humans, the re-tweet of a previously-allowed tweet is blocked. In a platform/publisher where all moderation is done entirely by software, every retweet is treated as a new tweet and all the same rules are applied. Software offers liberty and justice for all. Humans don?t. >?And what happens when 89M guy posts something innocuous and later edits it into something that's not? -Dave Same answer as above. Every edit turns the tweet into a new tweet and is re-adjudicated, if the moderation is being done by software. Otherwise not. Humans can do some things modern software cannot yet do, but modern software can do some things that humans can never do. Hey cool I like it. I will call it spike?s law of softmod. Aren?t you glad you lived into the 20s so you get to watch this drama unfold? Me too! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue May 3 14:21:02 2022 From: pharos at gmail.com (BillK) Date: Tue, 3 May 2022 15:21:02 +0100 Subject: [ExI] Hibernation for human space travel not possible In-Reply-To: References: Message-ID: On Tue, 3 May 2022 at 14:27, Dave S via extropy-chat wrote: > > On Mon, May 2, 2022 at 7:11 PM BillK via extropy-chat wrote: >> >> The article is mainly concerned with very long space trips. Weight >> loss during hibernation is about five pounds per year, which becomes >> significant on multi-year trips. > > > Weight loss would be expected to be about five pounds if humans could hibernate like other animals, which they can't. Animals also can't hibernate for years. > > -Dave > _______________________________________________ Agreed - that's what the original article says. They are discussing what might be possible for future long-term space flights. They also point out that there are ethical problems concerning potential research on hibernation in humans. 
?Who will be the volunteer for testing a drug, genetic modification, or a surgery for inducing hibernation?? says Nespolo. Present day medically induced coma with patients on a ventilator becomes very risky if longer than a few days to one week. One of the longest surviving patients is Michael Schumacher, who was in a medical coma for almost two months after his brain injury accident. For future astronauts, as well as muscle weakness, there is also bone strength loss to contend with. It may be that 'hibernating' astronauts would have to be maintained in an artificial gravity module rotating around the spaceship. BillK From spike at rainier66.com Tue May 3 14:52:25 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 07:52:25 -0700 Subject: [ExI] Hibernation for human space travel not possible In-Reply-To: References: Message-ID: <002601d85efd$67a8eb50$36fac1f0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat For future astronauts, as well as muscle weakness, there is also bone strength loss to contend with. It may be that 'hibernating' astronauts would have to be maintained in an artificial gravity module rotating around the spaceship. BillK _______________________________________________ Hi BillK, it isn't clear to me that hibernating astronauts would need a gravity module, but it is certainly plausible. If so, it doesn't need to be anything elaborate. You have an artificial gravity module in your own home, capable of a coupla hundred gees: your washing machine. spike From spike at rainier66.com Tue May 3 15:49:00 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 08:49:00 -0700 Subject: [ExI] publish or perish? In-Reply-To: <006f01d85ef4$1a3b1360$4eb13a20$@rainier66.com> References: <004101d85ee7$d5a9f010$80fdd030$@rainier66.com> <006f01d85ef4$1a3b1360$4eb13a20$@rainier66.com> Message-ID: <000801d85f05$4f8856c0$ee990440$@rainier66.com> From: spike at rainier66.com ?> On Behalf Of Dave S via extropy-chat Subject: Re: [ExI] publish or perish? On Tue, May 3, 2022 at 8:20 AM spike jones via extropy-chat >>?Should they be? >>?Maybe. -Dave >?Cool excellent. The question trifurcates into a simple yes, a simple no or a subtle, complicated and interesting set of maybes. >?I will attempt to defend all three branches of that trifurcation in a future post, but I want to hear other thoughts on the matter please. spike OK here goes. Should all tweets be adjudicated equally? 1. Ja of course: pretty simple really. Create a set of rules, then have the moderators follow them. 2. No of course not. The posters with huge followings have the power to influence the masses, so with great power comes great responsibility, so extra scrutiny is justifiable. 3. Maybe: These sets of arguments are complicated but generally intuitive. A company has a certain level of resources available for this sort of thing, and they must be used judiciously. It is flat impossible to monitor 200 million tweeters, so even if we want to make it perfectly fair, it isn?t possible. It would take a thousand times the number of moderators to do it. OK then, cool. As a thought experiment, imagine that Musk leaves aaaallll those meatmods in place, and adds a new, additional layer of moderation done by software. So now you have softmods and meatmods working in parallel, not second guessing each other, but only filtering according to their understanding of what they are supposed to do. 
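In code the arrangement is trivial. The following is only a sketch, and both filter functions are hypothetical placeholders, since nobody outside Twitter knows what the real ones look like:

# Hypothetical stand-ins for the two independent moderation layers.
def softmod_allows(tweet):      # the public, auditable software filter
    return "forbidden-word" not in tweet

def meatmod_allows(tweet):      # the existing human moderation, unchanged
    return not tweet.startswith("SPAM:")

def is_visible(tweet):
    # each layer filters independently; a tweet is shown only if BOTH allow it
    return softmod_allows(tweet) and meatmod_allows(tweet)

print(is_visible("hello world"))        # True - passes both layers
print(is_visible("SPAM: hello world"))  # False - blocked by the meat layer

Note that AND-composing filters is monotone: adding a layer can only shrink the set of visible tweets, never grow it.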
So now, the space cannot possibly be less safe than it was before, only more safe ja? Neither group overrides the other?s decision. If either group filters a text, the other group allowing it is irrelevant. Every post must pass both the soft layer and the meat layer. Fair enough? OK now what happens please? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Tue May 3 15:50:47 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 3 May 2022 09:50:47 -0600 Subject: [ExI] Fwd: Is Artificial Life Conscious? In-Reply-To: References: Message-ID: Hi Colin, This is exactly the same kind of stuff I'm always talking about. Stathis is right about everything, but missing the "what are the representations phenomenally like" part. It sounds like we are in the same camp? On Tue, Apr 26, 2022 at 8:16 PM Colin Hales via extropy-chat < extropy-chat at lists.extropy.org> wrote: > No. What you are saying is both right and irrelevant. Brain's use > information encode in the field system, which is degenerately related to > the position of ions. Stop talking about ion positions and start talking > about "what it is like to BE ions. If you cannot see the problem space, > then this discussion cannot progress anywhere. > > The information content in the total, emergent field system (not just > their ionic charge source locations) is what I am talking about. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Tue May 3 15:59:49 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 3 May 2022 09:59:49 -0600 Subject: [ExI] Fwd: Is Artificial Life Conscious? In-Reply-To: <20220428200001.Horde.qWXOhdfGoMnH54ql0v8vpz7@sollegro.com> References: <20220428200001.Horde.qWXOhdfGoMnH54ql0v8vpz7@sollegro.com> Message-ID: On Thu, Apr 28, 2022 at 9:01 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > In any case, I disagree with both you and Rafal. You think EM fields > mediate consiousness, Rafal thinks that synaptic organization and > transmission via ions and neurotransmitters mediate consciousness. I > think that consciousness is a complex recursive mathematical function > on tensor-space mediated by sparse synaptic connections between > non-linearly activated neurons. I think we all would benefit from > keeping the conversation going. > Would any of you disagree that conscious knowledge of a strawberry is composed of pixel elements of visual knowledge that have various intrinsic qualities like redness and greenness? Seems to me, we must accept that, the question is what is it that has the redness and greenness qualities, and how is it all computationally bound into one unified conscious experience. Each of these different sets of predictions are all possibilities, right? It's simply a matter of experimentally demonstrating which of all these theories is correct, and demonstrating which of all these competing predictions about what a redness quality is, is THE ONE theory that can't be falsified. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Tue May 3 16:22:10 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 3 May 2022 10:22:10 -0600 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: On Wed, Apr 27, 2022 at 9:47 AM Giulio Prisco via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 2022. 
Apr 27., Wed at 16:57, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Do you have any intuition it theories for what our current devices lack >> that they would need? >> > > Perhaps integrated information (Tononi), or some quantum thing (Penrose), > or some combination of the two. > Yes, the amount of parallel computational binding going on is a measure of the level of intelligence. For example, if you just have a set of pixels that are only spatially computationally bound to each other, you will be missing all the higher level cognitive functions like names, and which pixels are the strawberry. Even this alone is very complex, as each pixel must be informationally related to all the other pixels. You know the difference in quality of all of them. Something must be doing this relational meaning for all of this situational visual knowledge. In addition to that, you must be aware of higher level cognitive knowledge, like a particular set of pixels make up the strawberry, and a different set make up the leaves, and the relationship between these two entities. something must be doing each and every one of these kinds of meaningful binding. Additionally, there is your knowledge of yourself being aware of this. Some additional binding knowledge is required for that also. Each of these different relationships must be programmed differently. Tononi only seems to focus on the amount of binding, but one must also include the fact that each particular type of computational binding must be differently programmed to provide the many different types of computational meaning. For example, In cases like the Hogan Twins who can see out of each other's eyes , not everything may be computationally bound. Each of their brains may be computationally bound to the conscious knowledge being rendered based on what is being seen out of each other's eyes. But, their individual knowledge of each of their spirits may not be computationally bound to each other. So while both of their conscious experiences may be bound into the same visual conscious knowledge. Each of their knowledge of their spirits bound, giving them higher order knowledge of them perceiving the knowledge. They may not have computational binding in a way that they are aware of the other's spirits being aware of the same knowledge. That kind of awareness would require even more computational binding. So computational binding is at least an Order(n^2) problem. Every piece of knowledge in conscious awareness must be computationally bound to all the other pieces of information. That's why CPUs wouldn't be considered conscious, as there is not enough CPU hardware to do any more computation than binding a handful of registers. Instead, everything is achieved with very little computational binding being done in long yet rapid sequences. We touch on this in the "Computational Binding" chapter of our video . Surely the type of wave computation being done in the brain is far more capable than the discrete logic gates we use in CPUs. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at gmail.com Tue May 3 16:56:22 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 3 May 2022 10:56:22 -0600 Subject: [ExI] The relevance of glutamate in color experience In-Reply-To: References: Message-ID: On Mon, May 2, 2022 at 6:28 PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, May 2, 2022 at 3:35 PM Brent Allsop > wrote: > >> >> Hi Rafal, >> We're still talking about completely different things. I'm just saying >> that when you look at a strawberry, and if a pixel on the surface of that >> strawberry is changing from redness to grenness, there must be something, >> physical, that is responsible for that change in the one pixel. The >> prediction is, nothing but that physics will be able to duplicate the >> redness quality of that one pixel, especially not some abstract function. >> This redness quality, and what it must be, has nothing to do with what you >> are talking about. >> > > ### Do you acknowledge that your answer to Jason's post was incorrect? > > Your claim that the substitution scenario would only be true if neurons > were acting as logical gates is logically incorrect. If you refuse to > acknowledge and withdraw your claim, it would be impossible to continue a > discussion, since discussions where logically untrue statements are made > are not worth having. > There are surely still mistakes in most sets of consensus knowledge (and in my knowledge), even though those supporting said mistaken consensus don't yet see those mistakes. Giving up on the other side and refusing to move forward like this, everyone pushing into their own polarized bubble, is part of the problem. Consensus building and tracking systems are all about pulling this back together and tracking all this, and measuring the progress towards fewer mistakes. It is designed to enable the first person who sees a mistake in the consensus to be able to start a new camp, and from there more effectively get everyone on board with seeing those mistakes, revolutionizing the consensus as fast as possible. That is why the "Start new camp here" is the most important operation any individual can do, facilitating these kinds of revolutions. I think I can see what you are talking about in this mistake of mine you are pointing out. I will need to adjust my models, and descriptions of stuff to account for this, possibly including jumping camps. If you look at the history, using the as-of mechanism on the side bar, you can see multiple times where I, and others have made such realizations, and jumped camps. I appreciate your help with accelerating this process. You can see what camp I'm in, it'd be nice to definitively know what camp you are currently in, that is if you are also willing to admit you could be mistaken or missing something, also. On Mon, May 2, 2022 at 7:16 PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > So how stupid is it to say that "glutamate has a redness quality"?. There > are over twenty thousand distinct molecular species present in the > color-perception cortex. Why not the redness quality of any other molecule? > Why not say the water in the brain has a redness quality? What about the > redness quality of glutamate in your Chinese takeout? > The fact that glutamate is so easily falsifiable, the way you are describing here, is the entire point of why I pick glutamate. 
If anyone experiences redness, without glutamate present, the theory that it is glutamate that has the redness will be falsified. We're trying to illustrate how to falsify things in the simplest possible way. So, just as you say when you illustrate other possibilities, if you falsify glutamate = redness, you simply substitute glutamate with a description of something else in the brain, and repeat the experiment, till you find a set of necessary and sufficient physical stuff which can no longer be falsified. THEN we will know the necessary and sufficient set of physics which will result in a redness experience, giving us the required dictionary. Glutamate is just a temporary (intentionally easily falsifiable) placeholder for whatever it is we will discover which does have the intrinsic redness quality. Any theory that hasn't yet been falsified is valid, and we should start with the simplest examples which are the easiest to be falsified first, before we move onto more difficult yet more capable theories. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue May 3 17:00:28 2022 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 3 May 2022 12:00:28 -0500 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Surely the type of wave computation being done in the brain is far more > capable than the discrete logic gates we use in CPUs. > > This comment above suggests to me that you perhaps haven't come to terms with the full implications of the Church-Turing Thesis or the stronger Church-Turing-Deutsch Principle . Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue May 3 17:20:28 2022 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 3 May 2022 12:20:28 -0500 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: On Tue, May 3, 2022 at 5:59 AM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Apr 29, 2022 at 9:31 AM Jason Resch wrote:. > >> >>> Is this register-by-register and time-step by time-step record of >>> synaptic and axonal activity conscious when stored in RAM? In a book? >>> >> >> A record, even a highly detailed one as you describe, I don't believe is >> conscious. For if you alter any bit/bits in that record, say the bits >> representing visual information sent from the optic nerves, none of those >> changes are reflected in any of the neuron states downstream from that >> modification, so in what sense are they consciousness of other information, >> or the firing of neighboring neurons, or the visual data coming in, etc. >> within the representation? >> >> There is no response to any change and so I conclude there is no >> awareness of any of that information. This is why I think counterfactuals >> are necessary. If you make a relevant change to the inputs, that change >> must be reflected in the right ways throughout the rest of the system, >> otherwise you aren't dealing with something that has the right functional >> relations and organizations. If no other bits change, then you're dealing >> with a bit string that is a record only, it is devoid of all functional >> relations. >> > > ### This is a good point. Consciousness is a process, not a 3d structure. 
> An alteration of a record of a brain which fails to preserve the usual > causal relationships between the recorded brain parts is not likely to > create conscious experience. The altered record does not propagate the > change through its structure in the way that would happen in a functioning > brain - it is just like a frozen brain. > I agree. Each bit in the bit in that sequence might as well be stored on a separate computer, if there is no causal link or interrelation between them. > > So we could say that for consciousness to exist there must be a timelike > sequence of states of a material object, and these states must have a > proper causal relationship as to model something. That something is the > subject, or content, of consciousness - you are either conscious of a > subject, or else you are not conscious at all. There is no pure > consciousness, all consciousness has a subject. > All computers need a time-like dimension across which to order its succession of states and to process information, so I think the same is likely true of our brains. > > ---------------------------- > >> >> I don't think "running" is the right word either, as relativity reveals >> objective time as an illusion. So we must accept the plausibility of >> consciousness in timeless four dimensionalism. It then must be the >> structure of relations and counterfactuals implied by laws (whether they be >> physical or mathematical or some other physics in some other universe) that >> are necessary for consciousness. >> > > ### I am reading "Out of Time" by Baron et al - awfully boring and > pedantic philosophy but the idea that all you need for time to exist is to > have a causal structure, which may be imprinted on a block universe, > sounds pretty reasonable. Now putting together the causal theory of time > and the above thought experiments on consciousness I seem to discern a > connection between time and consciousness. > In that case, you might enjoy my video and/or article on the same subject: https://alwaysasking.com/what-is-time/ https://www.youtube.com/watch?v=QC52vRmtQoU There is also a good PBS Spacetime Episode on causality: https://www.youtube.com/watch?v=1YFrISfN7jo > > The block universe is the set of all states that could be in some way > described. There are places in the block universe that have a causal > structure - you can derive each state from another state through the > application of some sort of a potentially quite simple rule, which defines > the physics of each such place. If the rule is unidirectional, the physics > of that place may be said to contain causality - one state causes another > by application of the rule. Causality in the block universe is equivalent > to time, and of course there is an infinite number of such causally > connected sets of states, and an infinity of separate streams of time. > > Some of small fragments of the states in the timelike series go up one > level - they are not just a result of applying a rule over the preceding > state but also contain higher-order causal relationships, such that these > special states model some content - it may be a model of an object that is > outside the special state and is fed by a stream of sensory input, or it > may be content generated internally within the special state. > > So consciousness is something that exists in areas of the block universe > where there are multilevel timelike or causal relationships between states. > That is how I once saw it. 
While I still hold the view that the flow of time is a subjective illusion, I now see the computations implementing mind states as more fundamental than the physical patterns and regularities observed by those minds. > > I need to mull this sentence over to make sure I understand what I just > wrote :) > ----------------------------- > >> >> And what if you run the same synaptic model on two computers? Is the >>> consciousness double? >>> >> >> Nick Bostrom has a paper arguing that it does create a duplicate with >> more "weight", Arnold Zuboff argues for a position called Unificationism in >> which there is only one unique mind even if run twice, and there's no >> change in its "weight". >> >> If reality is infinite and all possible minds and conscious experiences >> exist, then if Unificationism is true we should expect to be experiencing a >> totally random (think snow on a TV) kind of experience now, since there's >> so many more random than ordered unique conscious experiences. Zuboff uses >> this to argue that reality is not infinite. But if you believe reality is >> infinite it can be used as a basis to reject Unificationism. >> > > ### I feel I have bitten off more than I can handle with the above > questions. My guess is that consciousness is local, not global, so copies > of a mind do not have a global meaning, they are just separate minds. So I > guess I am not an Unificationist but then I also don't think there is a > "weight" related to copies of minds, just as there is no global "weight" of > all the different independent minds. Minds are just separate, unless they > exchange information. > The only time this is relevant is when making predictions regarding expectations of future observations. For example, if you stepped into a destructive teletransporter, which created one copy of you in NY, one copy in LA, and one copy in Paris, and I asked, what is the probability you will find your next conscious moment to be as someone in the United States? If we are only concerned with matters of what makes a mind or what makes qualia, and don't care about expectations, then we can ignore the duplicationism/unificationism debate. > > I need to let the multilevel timelike causality theory of consciousness > settle in my mind for a while before I can start asking more useful > questions. > -------------------------------- > >> >> Is there something special about dissipation of energy, >>> >> >> This is just a reflection of the fact that in physics, information is >> conserved. If you overwrite/erase a bit in a computer memory, that bit has >> to go somewhere. In practice, for our current computers, it is leaked into >> the environment and this requires leaking energy into the environment as >> implied by the Landauer limit. But if no information is erased/overwritten, >> which is possible to do in reversible computers (and is in fact necessary >> in quantum computers), then you can compute without dissipating any energy >> at all. So I conclude dissipating energy is unrelated to computation or >> consciousness. >> > > ### I agree, although non-dissipating consciousness may have a number of > limitations. > > Reversible computers built on reversible logic gates are Turing universal, so they can compute anything a normal computer can. I've sometimes speculated if it isn't suggestive that our own laws of physics are reversible. For example, if our universe is a simulation in a higher-level universe like (or identical to) ours, it could be done efficiently without leaking information/energy. 
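As a toy demonstration (a Python sketch, not how real reversible hardware is built): the Toffoli gate - a controlled-controlled-NOT, universal for classical computation given ancilla bits - is its own inverse, so a circuit built from such gates never erases a bit, which is why the Landauer cost can in principle be avoided:

from itertools import product

# Toffoli gate: flips the target bit c iff both control bits a, b are 1.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for bits in product([0, 1], repeat=3):
    assert toffoli(*toffoli(*bits)) == bits  # applying it twice is the
                                             # identity: inputs are never
                                             # erased, only permuted
print("all 8 inputs recovered; the gate is reversible")
# e.g. AND without erasure: with c = 0, the output's third bit is a AND b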
> -------------------------------------- > >> >> or about causal processes that add something special to the digital, >>> mathematical entities represented by such processes? >>> >> >> The causality (though I would say relations since causality itself is >> poorly understood and poorly defined) is key, I think. If you study a bit >> of cryptography (see "one time pad" encryption) you can come to understand >> why any bit string can have any meaning. It is therefore meaningless >> without the context of it's interpreter. >> > > ### Yes, I think here we are hitting pay dirt! > :-) > > --------------------------------- > >> >> So to be "informative" we need both information and a system to be >> informed by or otherwise interpret that information. Neither by itself is >> sufficient. >> > > ### Yes, multilevel causal structure - base level physics organized into > more complex states that model other states. > ------------------------ > >> >> >>> I struggle to understand what is happening. I have a feeling that two >>> instances of a simple and pure mathematical entity (a triangle or an >>> equation) under consideration by two mathematicians are one and the same >>> but then two pure mathematical entities that purport to reflect a mind >>> (like the synapse-level model of a brain) being run on two computers are >>> separate and presumably independently conscious. Something doesn't fit >>> here. >>> >> >> The problem you are referencing is the distinction between types and >> tokens. >> >> A type is something like "Moby Dick", of which there is only one uniquely >> defined type which is that story. >> >> A token is any concrete instance of a given type. For example any >> particular book of Moby Dick is a token of the type Moby Dick. >> >> I think you may be asking: should we think of minds as types or tokens? I >> think a particular mind at a particular point in time (one >> "observer-moment") can be thought of as a type. But across an infinite >> universe that mind state or observer moment may have many, (perhaps an >> infinite number of) different tokens -- different instantiations in terms >> of different brains or computers with uploaded minds -- representing that >> type. >> >> So two instances of the same mind being run on two different computers >> are independently conscious in the sense that turning either one off >> doesn't destroy the type, even if one token is destroyed, just as the story >> of Moby Dick isn't destroyed if one book is lost. >> >> The open question to me is: does running two copies increase the >> likelihood of finding oneself in that mind state? This is the >> Unificationism/Duplicationism debate. >> > > ### Asking about probability in the context of consciousness is asking for > trouble because our understanding of either - probability and consciousness > is tenuous, and errors explode when you let poorly defined notions > interact. > > I distrust the thought experiments in this area of philosophy. > It is certainly a contentious area. > -------------------------- > >> >> >> Maybe there is something special about the physical world that imbues >>> models of mathematical entities contained in the physical world with a >>> different level of existence from the Platonic ideal level. 
>>> >> >> We can't rule out, (especially given all the other fine-tuning >> coincidences we observe), that our physics has a special property necessary >> for consciousness, but I tend to not think so, given all the problems >> entailed by philosophical zombies and zombie worlds -- where we have >> philosophers of mind and books about consciousness and exact copies of the >> conversations such as in this thread, being written by entities in a >> universe that has no conscious. This idea just doesn't seem coherent to me. >> > > ### Well, if our physics is timelike, and a multilevel causal structure is > needed for consciousness, then you need our physics, or an equivalent, for > consciousness. > > It's complicated. > > ---------------------------- > > >> >> Or maybe different areas of the Platonic world are imbued with different >>> properties, such as consciousness, even as they copy other parts of the >>> Platonic world. >>> >> >> As Bruno Marchal points out in his filmed graph thought experiment, if >> one accepts mechanism (a.k.a. functionalism, or computationalism), this >> implies that platonically existing number relations and computations are >> sufficient for consciousness. Therefore consciousness is in a sense more >> fundamental than the physical worlds we experience. The physics in a sense, >> drops out as the consistent extensions of the infinite indistinguishable >> computations defining a particular observer's current mind state. >> > > ### I would say otherwise - the causal structure of time in our physics > (the sequence of Platonic states connected by a causal rule) is the thing > that allows consciousness, by being the basis for building additional > levels of causal relationships between Platonic objects > I agree. I would also say there could be a single platonic object whose structure (and its internal time-like sequence) is likewise defined by such rules. It becomes a matter of taste I guess how to draw the lines between objects in Plato's heaven. > -------------------------------- > >> >> This is explored in detail by Markus P Mueller, in his paper on deriving >> laws of physics from algorithmic information theory. He is able to predict >> from these first principles that most observers should find themselves to >> be in a universe having simple, but probabilistic laws, with time, and a >> point in the past beyond which further retrodiction is impossible. >> >> Indeed we find this to be true of our own physics and universe. I cover >> this subject in some detail in my "Why does anything exist?" article (on >> AlwaysAsking.com ). I am currently working on an article about >> consciousness. The two questions are quite interrelated. >> > > ### Indeed. Are you familiar with Wolfram's Physics Project? I feel his > approach may help us eventually put metaphysics on a firmer ground and > maybe connect physics to the theory of consciousness in a more rigorous way. > > His project to frame physics in terms of cellular automata? I think his project is, due to a subtle argument, shown to be impossible. A result by Bruno Marchal implies that if digital Mechanism (in philosophy of mind) is true, then digital physics cannot be true. And because digital physics implies digital mechanism, the idea of digital physics leads to contradiction and so must be false. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Tue May 3 17:32:32 2022 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 3 May 2022 12:32:32 -0500 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: If you agree with the concept of the Church-Turing Thesis, then you should know that "wave computation" cannot be any more capable than the "discrete logic gate" computation we use in CPUs. All known forms of computation are exactly equivalent in what they can compute. If it can be computed by one type, it can be computed by all types. If it can't be computed by one type, it can't be computed by any type. This discovery has major implications in the philosophy of mind, especially if one rejects the possibility of zombies. It leads directly to multiple realizability, and substrate independence, as Turing noted 72 years ago: ?The fact that Babbage's Analytical Engine was to be entirely mechanical will help us rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and the nervous system is also electrical. Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. [...] If we wish to find such similarities we should look rather for mathematical analogies of function.? -- Alan Turing in Computing Machinery and Intelligence (1950) Further, if you reject the plausibility of absent, fading, or dancing qualia, then equivalent computations (regardless of substrate) must be equivalently aware and conscious. To believe otherwise, is to believe your color qualia could start inverting every other second without you being able to comment on it or in any way "notice" that it was happening. You wouldn't be caught off guard, you wouldn't suddenly pause to notice, you wouldn't alert anyone to your condition. This should tell you that behavior and the underlying functions that can drive behavior, must be directly tied to conscious experience in a very direct way. Jason On Tue, May 3, 2022 at 12:11 PM Brent Allsop wrote: > > OK, let me see if I am understanding this correctly. consider this image: > [image: 3_robots_tiny.png] > > I would argue that all 3 of these systems are "turing complete", and that > they can all tell you the strawberry is 'red'. > I agree with you on this. > Which brings us to a different point that they would all answer the > question: "What is redness like for you?" differently. > First: "My redness is like your redness." > Second: "My redness is like your greenness." > Third: "I represent knowledge of red things with an abstract word like > "red", I need a definition to know what that means." > > You are focusing on the turing completeness, which I agree with, I'm just > focusing on something different. > > > On Tue, May 3, 2022 at 11:00 AM Jason Resch wrote: > >> >> >> On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Surely the type of wave computation being done in the brain is far more >>> capable than the discrete logic gates we use in CPUs. >>> >>> >> This comment above suggests to me that you perhaps haven't come to terms >> with the full implications of the Church-Turing Thesis >> or the >> stronger Church-Turing-Deutsch Principle >> >> . >> >> Jason >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From brent.allsop at gmail.com Tue May 3 21:04:05 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 3 May 2022 15:04:05 -0600 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: Hi Jason, We continue to talk past each other. I agree with what you are saying but... [image: 3_robots_tiny.png] First off, you seem to be saying you don't care about the fact that the first two systems represent the abstract notion of red with different qualities, and that they achieve their Turing completeness in different ways. If that is the case, why are we talking? I want to know what your redness knowledge is like, you don't seem to care about anything other than all these systems can tell you the strawberry is red, and are all turing complete? In addition to turing completeness, what I am interested in is the efficiency by which computation can be accomplished by different models. Is the amount of hardware used in one model more than is required in another? The reason there are only a few registers in a CPU, is because of the extreme brute force way you must do computational operations like addition and comparison when using discrete logic. It takes far too much hardware to have any more than a handful of registers, which can be computationally bound to each other at any one time. Whereas if knowledge composed of redness and greenness is a standing wave in neural tissue EM fields, every last pixel of knowledge can be much more efficiently meaningfully bound to all the other pixels in a 3D standing wave. If standing waves require far less hardware to do the same amount of parallel computational binding, this is what I'm interested in. They are both turing complete, one is far more efficient than the other. Similarly, in order to achieve substrate independence, like the 3rd system in the image, you need additional dictionaries to tell you whether redness or greenness or +5 volts, or anything else is representing the binary 1, or the word 'red'. Virtual machines, capable of running on different lower level hardware, are less efficient than machines running on nacked hardware. This is because they require the additional translation layer to enable virtual operation on different types of hardware. The first two systems representing information directly on qualities does not require the additional dictionaries required to achieve the substrate independence as architected in the 3rd system. So, again, the first two systems are more efficient, since they require less mapping hardware. On Tue, May 3, 2022 at 11:34 AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > If you agree with the concept of the Church-Turing Thesis, then you should > know that "wave computation" cannot be any more capable than the "discrete > logic gate" computation we use in CPUs. All known forms of computation are > exactly equivalent in what they can compute. If it can be computed by one > type, it can be computed by all types. If it can't be computed by one type, > it can't be computed by any type. > > This discovery has major implications in the philosophy of mind, > especially if one rejects the possibility of zombies. It leads directly to > multiple realizability, and substrate independence, as Turing noted 72 > years ago: > > ?The fact that Babbage's Analytical Engine was to be entirely mechanical > will help us rid ourselves of a superstition. 
Importance is often attached > to the fact that modern digital computers are electrical, and the nervous > system is also electrical. Since Babbage's machine was not electrical, and > since all digital computers are in a sense equivalent, we see that this use > of electricity cannot be of theoretical importance. [...] If we wish to > find such similarities we should look rather for mathematical analogies of > function.? > -- Alan Turing in Computing Machinery and Intelligence > > (1950) > > > Further, if you reject the plausibility of absent, fading, or dancing > qualia, then equivalent computations (regardless of substrate) must be > equivalently aware and conscious. To believe otherwise, is to believe your > color qualia could start inverting every other second without you being > able to comment on it or in any way "notice" that it was happening. You > wouldn't be caught off guard, you wouldn't suddenly pause to notice, you > wouldn't alert anyone to your condition. This should tell you that behavior > and the underlying functions that can drive behavior, must be directly tied > to conscious experience in a very direct way. > > Jason > > On Tue, May 3, 2022 at 12:11 PM Brent Allsop > wrote: > >> >> OK, let me see if I am understanding this correctly. consider this image: >> [image: 3_robots_tiny.png] >> >> I would argue that all 3 of these systems are "turing complete", and that >> they can all tell you the strawberry is 'red'. >> I agree with you on this. >> Which brings us to a different point that they would all answer the >> question: "What is redness like for you?" differently. >> First: "My redness is like your redness." >> Second: "My redness is like your greenness." >> Third: "I represent knowledge of red things with an abstract word like >> "red", I need a definition to know what that means." >> >> You are focusing on the turing completeness, which I agree with, I'm just >> focusing on something different. >> >> >> On Tue, May 3, 2022 at 11:00 AM Jason Resch wrote: >> >>> >>> >>> On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> Surely the type of wave computation being done in the brain is far more >>>> capable than the discrete logic gates we use in CPUs. >>>> >>>> >>> This comment above suggests to me that you perhaps haven't come to terms >>> with the full implications of the Church-Turing Thesis >>> or the >>> stronger Church-Turing-Deutsch Principle >>> >>> . >>> >>> Jason >>> >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From foozler83 at gmail.com Tue May 3 21:40:10 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 3 May 2022 16:40:10 -0500 Subject: [ExI] dna Message-ID: Someone who is not even that close to you can collect your DNA. I assume any use of the would be illegal. Check out the Nature article: https://www.nature.com/articles/d41586-022-01206-z? bill w -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johntc at gmail.com Tue May 3 22:13:53 2022 From: johntc at gmail.com (John Tracy Cunningham) Date: Tue, 3 May 2022 18:13:53 -0400 Subject: [ExI] dna In-Reply-To: References: Message-ID: We leave a trail of DNA behind us wherever we go - skin cells, saliva, etc. Not usually enough to submit to a genetic genealogy company, but enough for police labs and equivalent. That's the final step (before the arrest) in solving cold cases with DNA. Regards, John On Tue, May 3, 2022 at 5:41 PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Someone who is not even that close to you can collect your DNA. I assume > any use of the would be illegal. Check out the Nature article: > > https://www.nature.com/articles/d41586-022-01206-z? > bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue May 3 22:16:14 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 3 May 2022 15:16:14 -0700 Subject: [ExI] dna In-Reply-To: References: Message-ID: <000501d85f3b$67b69dc0$3723d940$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: [ExI] dna Someone who is not even that close to you can collect your DNA. I assume any use of the would be illegal. Check out the Nature article: https://www.nature.com/articles/d41586-022-01206-z? bill w Meh, legal schmegal, we could have so dang much fun with this. Just imagine some of the gags you could play. Lung tissue is fast-turnaround: cells perish, you sneeze or cough them out, some go aerosol, filter picks them up, recover the cells, 60 dollar DNA kit, polymerase chain reaction does what it does, coupla months go by, get the results, now you are ready for some excellent frat-boy fun. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue May 3 22:25:33 2022 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 4 May 2022 08:25:33 +1000 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: On Wed, 4 May 2022 at 07:06, Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Hi Jason, > We continue to talk past each other. I agree with what you are saying > but... > [image: 3_robots_tiny.png] > First off, you seem to be saying you don't care about the fact that the > first two systems represent the abstract notion of red with different > qualities, and that they achieve their Turing completeness in different > ways. > If that is the case, why are we talking? I want to know what your redness > knowledge is like, you don't seem to care about anything other than all > these systems can tell you the strawberry is red, and are all turing > complete? > > In addition to turing completeness, what I am interested in is the > efficiency by which computation can be accomplished by different models. > Is the amount of hardware used in one model more than is required in > another? > The reason there are only a few registers in a CPU, is because of the > extreme brute force way you must do computational operations like addition > and comparison when using discrete logic. It takes far too much hardware > to have any more than a handful of registers, which can be computationally > bound to each other at any one time. 
Whereas if knowledge composed of > redness and greenness is a standing wave in neural tissue EM fields, every > last pixel of knowledge can be much more efficiently meaningfully bound to > all the other pixels in a 3D standing wave. If standing waves require far > less hardware to do the same amount of parallel computational binding, this > is what I'm interested in. They are both turing complete, one is far more > efficient than the other. > > Similarly, in order to achieve substrate independence, like the 3rd system > in the image, you need additional dictionaries to tell you whether redness > or greenness or +5 volts, or anything else is representing the binary 1, or > the word 'red'. Virtual machines, capable of running on different lower > level hardware, are less efficient than machines running on naked > hardware. This is because they require the additional translation layer to > enable virtual operation on different types of hardware. The first two > systems representing information directly on qualities does not require the > additional dictionaries required to achieve the substrate independence as > architected in the 3rd system. So, again, the first two systems are more > efficient, since they require less mapping hardware. > Substrate independence is not something that is 'achieved', it is just the way it works. Hammering is substrate independent because you can make a hammer out of many different things, even though a particular set of materials may be more durable and easier to work with, because it is impossible to separate hammering from the behaviour associated with hammering. Similarly, it is impossible to separate qualia from the behaviour associated with qualia (the abstract properties, as you call them), because otherwise you could make a partial zombie, and you have agreed that is absurd. On Tue, May 3, 2022 at 11:34 AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> If you agree with the concept of the Church-Turing Thesis, then you >> should know that "wave computation" cannot be any more capable than the >> "discrete logic gate" computation we use in CPUs. All known forms of >> computation are exactly equivalent in what they can compute. If it can be >> computed by one type, it can be computed by all types. If it can't be >> computed by one type, it can't be computed by any type. >> >> This discovery has major implications in the philosophy of mind, >> especially if one rejects the possibility of zombies. It leads directly to >> multiple realizability, and substrate independence, as Turing noted 72 >> years ago: >> >> "The fact that Babbage's Analytical Engine was to be entirely mechanical >> will help us rid ourselves of a superstition. Importance is often attached >> to the fact that modern digital computers are electrical, and the nervous >> system is also electrical. Since Babbage's machine was not electrical, and >> since all digital computers are in a sense equivalent, we see that this use >> of electricity cannot be of theoretical importance. [...] If we wish to >> find such similarities we should look rather for mathematical analogies of >> function." >> -- Alan Turing in Computing Machinery and Intelligence >> >> (1950) >> >> >> Further, if you reject the plausibility of absent, fading, or dancing >> qualia, then equivalent computations (regardless of substrate) must be >> equivalently aware and conscious.
To believe otherwise, is to believe your >> color qualia could start inverting every other second without you being >> able to comment on it or in any way "notice" that it was happening. You >> wouldn't be caught off guard, you wouldn't suddenly pause to notice, you >> wouldn't alert anyone to your condition. This should tell you that behavior >> and the underlying functions that can drive behavior, must be directly tied >> to conscious experience in a very direct way. >> >> Jason >> >> On Tue, May 3, 2022 at 12:11 PM Brent Allsop >> wrote: >> >>> >>> OK, let me see if I am understanding this correctly. consider this >>> image: >>> [image: 3_robots_tiny.png] >>> >>> I would argue that all 3 of these systems are "turing complete", and >>> that they can all tell you the strawberry is 'red'. >>> I agree with you on this. >>> Which brings us to a different point that they would all answer the >>> question: "What is redness like for you?" differently. >>> First: "My redness is like your redness." >>> Second: "My redness is like your greenness." >>> Third: "I represent knowledge of red things with an abstract word like >>> "red", I need a definition to know what that means." >>> >>> You are focusing on the turing completeness, which I agree with, I'm >>> just focusing on something different. >>> >>> >>> On Tue, May 3, 2022 at 11:00 AM Jason Resch >>> wrote: >>> >>>> >>>> >>>> On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> Surely the type of wave computation being done in the brain is far >>>>> more capable than the discrete logic gates we use in CPUs. >>>>> >>>>> >>>> This comment above suggests to me that you perhaps haven't come to >>>> terms with the full implications of the Church-Turing Thesis >>>> or the >>>> stronger Church-Turing-Deutsch Principle >>>> >>>> . >>>> >>>> Jason >>>> >>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From jasonresch at gmail.com Tue May 3 22:27:05 2022 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 3 May 2022 17:27:05 -0500 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: On Tue, May 3, 2022 at 4:05 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Hi Jason, > We continue to talk past each other. > Correct. If I raise points, corrections, and ask questions, which you ignore, we are guaranteed to talk past each other. > I agree with what you are saying but... > [image: 3_robots_tiny.png] > First off, you seem to be saying you don't care about the fact that the > first two systems represent the abstract notion of red with different > qualities, and that they achieve their Turing completeness in different > ways. 
> "Turing completeness" refers to programming languages or systems that can realize a Turing machine. This is something different from my claim that physics (and accordingly any physical system or object) is Turing emulable (something that can be perfectly emulated/simulated by a Turing machine having the right program). You have shown this image multiple times, but not asked me anything about it. I don't see its relevance to the conversation, unless you have a specific point to make about this image, or a question to ask me about it. > If that is the case, why are we talking? I want to know what your redness > knowledge is like, > That's incommunicable. You would have to possess my brain/mind to know what red is like to me, but then you would be me and not Brent, and so you would be stuck in the same position we are in now, being unable to communicate to someone with Brent's brain what red is like to someone with Jason's brain. > you don't seem to care about anything other than all these systems can > tell you the strawberry is red, and are all turing complete? > If that's what you think then I think you have missed my point. The reason I bring up the Church-Turing thesis is to relay to you the implied independence of the material substrate from the behaviors of an implemented mind. This substrate independence means that any effort to link glutamate (or name your molecule/compound) to the quality of perceptions must fail; the particular substrate can have nothing to do with them. If you think otherwise, I can show you how it leads to a contradiction or an absurdity (like dancing qualia). > > In addition to turing completeness, what I am interested in is the > efficiency by which computation can be accomplished by different models. > Is the amount of hardware used in one model more than is required in > another? > There is a result from computational complexity theory known as the "extended Church-Turing Thesis" which says: "All reasonable computation models can simulate each other with only polynomial slowdown." Which is to say that there can be different efficiencies, but generally they will not be significant. There is, however, a substantial efficiency difference between quantum computers and classical computers. Simulating quantum computers on a classical computer, for some problems, requires an exponential slowdown, to the point where even if all the matter and energy in the universe were used to build a classical computer, it would be unable to keep up with a quantum computer that could fit on top of a table. > The reason there are only a few registers in a CPU, is because of the > extreme brute force way you must do computational operations like addition > and comparison when using discrete logic. It takes far too much hardware > to have any more than a handful of registers, which can be computationally > bound to each other at any one time. > The way I view it is that a single-threaded CPU spreads out a computation minimally through space, and maximally through time, while a highly parallel computer or a biological brain, spreads out the computation through space, and less across time. In neither case are the time or space dimensions zero, the computation always has some positive dimensionality across the dimensions of space and time. Thus there is no "binding" through time, nor across space, aside from those bounds implied by the logical/computational operation.
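As a minimal sketch of this substrate-independence point (illustrative only: the encodings, names, and values below are invented for this post, not anyone's actual implementation), the same abstract program can be run over two different "substrate" encodings, with a small dictionary doing the translation Brent describes:

# Illustrative sketch only: one abstract program, two invented "substrates".
# The dictionaries translate substrate states into abstract bits; the
# program itself never changes.

substrate_a = {"+5V": 1, "0V": 0}              # voltages carry the bits
substrate_b = {"redness": 1, "greenness": 0}   # color qualities carry the bits

def xor_program(bits):
    """An abstract computation: XOR over a list of bits."""
    result = 0
    for b in bits:
        result ^= b
    return result

input_a = ["+5V", "0V", "+5V"]                 # one input, voltage encoding
input_b = ["redness", "greenness", "redness"]  # same input, color encoding

# Decode through each substrate's dictionary, then run the one program.
out_a = xor_program([substrate_a[s] for s in input_a])
out_b = xor_program([substrate_b[s] for s in input_b])

assert out_a == out_b  # identical observable behavior on both substrates

In this invented example the behavior is fixed by the program's logic, not by whether a 1 is carried by volts or by redness; all that differs is the small translation dictionary, which is where Brent locates an efficiency cost and where, on the view above, none of the computation's behavior can depend.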
> Whereas if knowledge composed of redness and greenness is a standing wave > in neural tissue EM fields, every last pixel of knowledge can be much more > efficiently meaningfully bound to all the other pixels in a 3D standing > wave. If standing waves require far less hardware to do the same amount of > parallel computational binding, this is what I'm interested in. They are > both turing complete, one is far more efficient than the other. > If they're equivalent computationally, then they're equivalent behaviorally, and therefore they must experience the same qualia (e.g. redness or greenness) as to believe otherwise is to accept dancing qualia (being unable to comment on a redness and greenness swapping back and forth in one's field of vision). > > Similarly, in order to achieve substrate independence, like the 3rd system > in the image, you need additional dictionaries > to tell you whether redness or greenness or +5 volts, or anything else is > representing the binary 1, or the word 'red'. > What are these dictionaries? I don't see how it is possible for any dictionary to specify how a particular quale feels. Qualia are first-person properties, while dictionaries concern themselves only with third-person communicable information. > Virtual machines, capable of running on different lower level hardware, > are less efficient than machines running on nacked hardware. This is > because they require the additional translation layer to enable virtual > operation on different types of hardware. > True, but irrelevant. > The first two systems representing information directly on qualities does > not require the additional dictionaries required to achieve the substrate > independence as architected in the 3rd system. So, again, the first two > systems are more efficient, since they require less mapping hardware. > I am not able to make any sense of the above paragraph. If I understand your example correctly, the three systems: A) the conventionally red seeing man B) the seeing green when light of 700nm strikes his retina man and C) the robot that maps 700nm light to the state of outputting the string "red" Each experience '700nm' light differently, they each have different qualia. Do we agree so far? I have no objection to the possibility of this situation. All I say, is that for this situation to exist, for different systems (A, B, and C) to experience differently, they must process information differently. They "run different programs", or you could say, they have different "high level functional organizations". If they ran the same programs, had the same high level functional organizations, processed information equivalently, then they would necessarily have the same quale for 700nm light. This would be true if one functional organization was made of a computer composed of wooden groves and marbles, if one was made of copper wires and electronics, or if made of fiber optic cables and photonics. Each is a computer capable of running the same program, so each will realize the same mind, exhibiting the same behaviors. Jason > > > > > > On Tue, May 3, 2022 at 11:34 AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> If you agree with the concept of the Church-Turing Thesis, then you >> should know that "wave computation" cannot be any more capable than the >> "discrete logic gate" computation we use in CPUs. All known forms of >> computation are exactly equivalent in what they can compute. If it can be >> computed by one type, it can be computed by all types. 
If it can't be >> computed by one type, it can't be computed by any type. >> >> This discovery has major implications in the philosophy of mind, >> especially if one rejects the possibility of zombies. It leads directly to >> multiple realizability, and substrate independence, as Turing noted 72 >> years ago: >> >> ?The fact that Babbage's Analytical Engine was to be entirely mechanical >> will help us rid ourselves of a superstition. Importance is often attached >> to the fact that modern digital computers are electrical, and the nervous >> system is also electrical. Since Babbage's machine was not electrical, and >> since all digital computers are in a sense equivalent, we see that this use >> of electricity cannot be of theoretical importance. [...] If we wish to >> find such similarities we should look rather for mathematical analogies of >> function.? >> -- Alan Turing in Computing Machinery and Intelligence >> >> (1950) >> >> >> Further, if you reject the plausibility of absent, fading, or dancing >> qualia, then equivalent computations (regardless of substrate) must be >> equivalently aware and conscious. To believe otherwise, is to believe your >> color qualia could start inverting every other second without you being >> able to comment on it or in any way "notice" that it was happening. You >> wouldn't be caught off guard, you wouldn't suddenly pause to notice, you >> wouldn't alert anyone to your condition. This should tell you that behavior >> and the underlying functions that can drive behavior, must be directly tied >> to conscious experience in a very direct way. >> >> Jason >> >> On Tue, May 3, 2022 at 12:11 PM Brent Allsop >> wrote: >> >>> >>> OK, let me see if I am understanding this correctly. consider this >>> image: >>> [image: 3_robots_tiny.png] >>> >>> I would argue that all 3 of these systems are "turing complete", and >>> that they can all tell you the strawberry is 'red'. >>> I agree with you on this. >>> Which brings us to a different point that they would all answer the >>> question: "What is redness like for you?" differently. >>> First: "My redness is like your redness." >>> Second: "My redness is like your greenness." >>> Third: "I represent knowledge of red things with an abstract word like >>> "red", I need a definition to know what that means." >>> >>> You are focusing on the turing completeness, which I agree with, I'm >>> just focusing on something different. >>> >>> >>> On Tue, May 3, 2022 at 11:00 AM Jason Resch >>> wrote: >>> >>>> >>>> >>>> On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> Surely the type of wave computation being done in the brain is far >>>>> more capable than the discrete logic gates we use in CPUs. >>>>> >>>>> >>>> This comment above suggests to me that you perhaps haven't come to >>>> terms with the full implications of the Church-Turing Thesis >>>> or the >>>> stronger Church-Turing-Deutsch Principle >>>> >>>> . >>>> >>>> Jason >>>> >>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From rafal.smigrodzki at gmail.com Wed May 4 03:03:52 2022 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 3 May 2022 23:03:52 -0400 Subject: [ExI] Nonspecific effects of Covid vaccines Message-ID: Un-fucking-believable! https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4072489 It is highly likely that mRNA Covid vaccines have dramatic negative so-called non-specific effects on non-Covid mortality, which may completely negate their proven beneficial effects on Covid mortality. In contrast, adenovirus Covid vaccines have a beneficial and highly statistically significant nonspecific effect on all-cause mortality. Fucking jesus h christ! Spike, I am sorry for dismissing your concerns about the new Covid vaccines. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Wed May 4 05:31:58 2022 From: max at maxmore.com (Max More) Date: Wed, 4 May 2022 05:31:58 +0000 Subject: [ExI] George Church will speak at Alcor-50 conference, June 2-5 Message-ID: I?m very pleased to announce that we have added a particularly special speaker for the conference: Prof. George Church. For those who are not familiar with his work: George Church is an American geneticist, molecular engineer, and chemist. He is the Robert Winthrop Professor of Genetics at Harvard Medical School, Professor of Health Sciences and Technology at Harvard and MIT, and a founding member of the Wyss Institute for Biologically Inspired Engineering. Church is known for his professional contributions in the sequencing of genomes and interpreting such data, in synthetic biology and genome engineering, and in an emerging area of neuroscience that proposes to map brain activity and establish a "functional connectome." Among these, Church is known for pioneering the specialized fields of personal genomics and synthetic biology. He has co-founded commercial concerns spanning these areas, and others from green and natural products chemistry to infectious agent testing and fuel production, including Knome, LS9, and Joule Unlimited (respectively, human genomics, green chemistry, and solar fuel companies). George Church (geneticist) - Wikipedia We also have leading philosopher David Chalmers speaking (in person). There?s also the first ever Presidents Panel, bringing together leaders of four cryonics organizations. And lots more: 2022 Conference - Alcor If you haven?t already registered, what are you waiting for?! --Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed May 4 09:57:00 2022 From: pharos at gmail.com (BillK) Date: Wed, 4 May 2022 10:57:00 +0100 Subject: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: References: Message-ID: On Wed, 4 May 2022 at 04:07, Rafal Smigrodzki via extropy-chat wrote: > > Un-fucking-believable! > > https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4072489 > > It is highly likely that mRNA Covid vaccines have dramatic negative so-called non-specific effects on non-Covid mortality, which may completely negate their proven beneficial effects on Covid mortality. In contrast, adenovirus Covid vaccines have a beneficial and highly statistically significant nonspecific effect on all-cause mortality. > > Fucking jesus h christ! 
> > Spike, I am sorry for dismissing your concerns about the new Covid vaccines. > > Rafal > _______________________________________________ I found one British news website reporting this paper. It explains the results in plain English for the non-scientist reader. Quotes: A bombshell new study by a distinguished team of Danish researchers led by Prof. Christine Stabell-Benn suggests a surprisingly nuanced answer. In the randomized trials of the covid vaccines, the adenovector-based vaccines, including the AstraZeneca and Johnson & Johnson vaccines, reduced all-cause mortality of study participants relative to people randomly assigned a placebo. Indeed, the reduction in mortality is larger than expected from the Covid effect and may suggest additional beneficial 'non-specific effects' from those vaccines against other health threats. On the other hand, Stabell-Benn and her colleagues found no statistically meaningful evidence in the trial data that the mRNA vaccines reduced all-cause mortality. The numbers of deaths from other causes including cardiovascular deaths appear to be increased in this group, compensating for the beneficial effect of the vaccines on Covid. ------------ Other sites are also starting to query the all-cause mortality statistics. The problem is that these effects take a long time to become evident. By which time it may be too late. BillK From spike at rainier66.com Wed May 4 13:37:05 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 4 May 2022 06:37:05 -0700 Subject: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: References: Message-ID: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] Nonspecific effects of Covid vaccines On Wed, 4 May 2022 at 04:07, Rafal Smigrodzki via extropy-chat wrote: > > Un-fucking-believable! > > https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4072489 > > It is highly likely that mRNA Covid vaccines have dramatic negative so-called non-specific effects on non-Covid mortality... > Spike, I am sorry for dismissing your concerns about the new Covid vaccines. > > Rafal > _______________________________________________ Rafal, there were (and are) so many unknowns with the mRNA vaccines, yet look at how society handled it. Oh we collectively screwed up. The risks were systematically downplayed, the news sites which went anywhere near a balanced presentation were vilified as being anti-vaxxers and Trump followers (ironic since Trump endorsed the vaccines while the current VP said she wouldn't trust any vaccine developed in Trump's administration (so why weren't the real anti-vaxxers accused of being Harris followers?)) The whole thing was highly politicized when it shoulda been highly science-ized. Lesson: talkta ya docta. Don't listen to some goof who makes his living getting elected to stuff, talkta ya docta. Those guys are fine lads (in my case lass.) They made it thru the marathon of med school, they know stuff. Every one of them I know personally is a do-the-right-thing sorta person. They seldom have ulterior motives, they aren't running for office. They risked their freaking lives to come to work during the peak of the pandemic in order to help people, and they didn't do that for money because they know they might end up the richest guy in the cemetery, but in they came (thank you doctors (my own doctor diagnosed me while I was so damn sick I couldn't walk to her office.)) Rafal, in my own circle, there were four deaths close to me, two of covid with covid, two of covid without covid. Reasoning: my step mother and the father of a good friend both caught it, both recovered but were weakened enough they couldn't make it, died within a month. But both were in their 80s and both had major medical problems: one had bad heart disease and atherosclerosis, my step mother had survived cancer twice and had it again. But I had two cousins: one was a 28 yr old who suicided after his business failed from being shut down. The other was 64, had COPD but the medics kept it under control with diuretics for 15 years. Covid caused the local hospital to close permanently, he wouldn't go to the hospital in the city, ran outta meds, adios amigo. >...I found one British news website reporting this paper. It explains the results in plain English for the non-scientist reader. ... BillK _______________________________________________ An important lesson here is in information distribution. I think we have seen systematic suppression of reports on the risks of the vaccines and systematic exaggeration of the benefits. Aside: I saw BillK's response to Rafal, but not Rafal's original post from 4:07 today. I still don't see Rafal's original post. spike From spike at rainier66.com Wed May 4 14:54:14 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 4 May 2022 07:54:14 -0700 Subject: RE: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> References: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> Message-ID: <004001d85fc6$d2f4c6d0$78de5470$@rainier66.com> -----Original Message----- From: spike at rainier66.com Subject: RE: [ExI] Nonspecific effects of Covid vaccines ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] Nonspecific effects of Covid vaccines On Wed, 4 May 2022 at 04:07, Rafal Smigrodzki via extropy-chat wrote: > > Un-fucking-believable! > > https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4072489 >... > > Rafal > _______________________________________________ >...Rafal, there were (and are) so many unknowns with the mRNA vaccines, yet look at how society handled it... >>...I found one British news website reporting this paper. It explains the results in plain English for the non-scientist reader. ... BillK _______________________________________________ ... >...Aside: I saw BillK's response to Rafal, but not Rafal's original post from 4:07 today. I still don't see Rafal's original post. spike OK I saw BillK's response to Rafal at 6:37 but never saw Rafal's original post from 4:07. Rafal's post is not in my spam folder, not in my inbox, not nowhere. Rafal's post is missing in action. Did anyone else here see BillK's post but not Rafal's? It's a CONSPIRACY, I tells ya! spike From spike at rainier66.com Wed May 4 15:05:22 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 4 May 2022 08:05:22 -0700 Subject: [ExI] quantum computing Message-ID: <000c01d85fc8$60f61eb0$22e25c10$@rainier66.com> Those of us who remember the original Spiderman show may recall these lyrics: ...Is he strong? Listen bud, He has radioactive blood.
Back in the old days the popular press thought anything atomic or radioactive was a super ramped up version of chemical anything. Radioactivity ended up being enormously useful of course, but it wasn't what we were told back then. So... are there any parallels with quantum computing? The US government just authorized a huge pile of money to study QC. I still can't really tell if it is what they say. Opinions please? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed May 4 15:49:56 2022 From: pharos at gmail.com (BillK) Date: Wed, 4 May 2022 16:49:56 +0100 Subject: Re: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> References: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> Message-ID: On Wed, 4 May 2022 at 14:40, spike jones via extropy-chat wrote: > > > Lesson: talkta ya docta. Don't listen to some goof who makes his living getting elected to stuff, talkta ya docta. Those guys are fine lads (in my case lass.) They made it thru the marathon of med school, they know stuff. Every one of them I know personally is a do-the-right-thing sorta person. They seldom have ulterior motives, they aren't running for office. They risked their freaking lives to come to work during the peak of the pandemic in order to help people, and they didn't do that for money because they know they might end up the richest guy in the cemetery, but in they came (thank you doctors (my own doctor diagnosed me while I was so damn sick I couldn't walk to her office.)) > > spike > _______________________________________________ Talking to your doctor is definitely a good idea... But what they really think will only be said in private. Almost all doctors will not
Almost all doctors will not > publicly contradict the official story coming from on high. Some have > lost their jobs for voicing opposition to the official edicts. > In the middle of a pandemic (with huge profits at stake and the public > demands to do *something*) big Pharma forgot "I will abstain from all > intentional wrong-doing and harm" (the original version of primum non > nocere). I don?t know any doctors who privately said anything different to what they said to their patients about this. The only rational basis for doing so would be if they had access to secret information. > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed May 4 16:23:39 2022 From: pharos at gmail.com (BillK) Date: Wed, 4 May 2022 17:23:39 +0100 Subject: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: References: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> Message-ID: On Wed, 4 May 2022 at 17:16, Stathis Papaioannou via extropy-chat wrote: > > I don?t know any doctors who privately said anything different to what they said to their patients about this. The only rational basis for doing so would be if they had access to secret information. > -- > Stathis Papaioannou > _______________________________________________ Patient consultations count as private. Speaking to reporters or writing articles for the press, count as public. BillK From stathisp at gmail.com Wed May 4 16:38:06 2022 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 5 May 2022 02:38:06 +1000 Subject: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: References: <002401d85fbc$0bba78d0$232f6a70$@rainier66.com> Message-ID: On Thu, 5 May 2022 at 02:25, BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 4 May 2022 at 17:16, Stathis Papaioannou via extropy-chat > wrote: > > > > I don?t know any doctors who privately said anything different to what > they said to their patients about this. The only rational basis for doing > so would be if they had access to secret information. > > -- > > Stathis Papaioannou > > _______________________________________________ > > > Patient consultations count as private. Speaking to reporters or > writing articles for the press, count as public. If a doctor proposes some unconventional treatment to his patients he must assume that this can become public knowledge, since the patient may tell other people, see other doctors, end up in hospital and so on. > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed May 4 17:19:24 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 4 May 2022 12:19:24 -0500 Subject: [ExI] dna In-Reply-To: <000501d85f3b$67b69dc0$3723d940$@rainier66.com> References: <000501d85f3b$67b69dc0$3723d940$@rainier66.com> Message-ID: ready for some excellent frat-boy fun. spike examples, please? bill w On Tue, May 3, 2022 at 5:22 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > *Subject:* [ExI] dna > > > > Someone who is not even that close to you can collect your DNA. I assume > any use of the would be illegal. Check out the Nature article: > > > > https://www.nature.com/articles/d41586-022-01206-z? > > bill w > > > > > > Meh, legal schmegal, we could have so dang much fun with this. Just > imagine some of the gags you could play. 
> > Lung tissue is fast-turnaround: cells perish, you sneeze or cough them > out, some go aerosol, filter picks them up, recover the cells, 60 dollar > DNA kit, polymerase chain reaction does what it does, coupla months go by, > get the results, now you are ready for some excellent frat-boy fun. > > spike > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed May 4 18:26:20 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 4 May 2022 11:26:20 -0700 Subject: Re: [ExI] dna In-Reply-To: References: <000501d85f3b$67b69dc0$3723d940$@rainier66.com> Message-ID: <004001d85fe4$743433b0$5c9c9b10$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] dna ready for some excellent frat-boy fun. spike examples, please? bill w We secretly collect DNA from air in a local restaurant. We don't even know who it belongs to. We give a DNA kit to a sibling, register the unknown DNA under his name, send the restaurant sample to AncestryDNA a week before he sends his. Register his DNA under a different name. A few weeks later, results under his name come back from the mystery person in the restaurant, and Hey bro! You're ADOPTED buddy! We suspected it all along! Of course the parents deny everything. A few days later, results from my sibling come in, registered under a phony name. Phase two of the gag, call him up: BRO! They SWITCHED the BABIES at the HOSPITAL! You weren't adopted, you were STOLEN! Heh. Let that mess with his head for a few weeks. Then decide if you will ever let him in on the gag. All this shows to go ya, a comment I hear a lot in DNA circles: it's so much fun to find second cousins. You make friends, share histories, sometimes become close. They become like a sibling, except without the baggage. If you do crap like that gag I described, that would definitely fall under the term baggage. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natashavita-more.com Wed May 4 18:18:15 2022 From: natasha at natashavita-more.com (Natasha natashavita-more.com) Date: Wed, 4 May 2022 18:18:15 +0000 Subject: [ExI] NEW! H+DAO - Decentralized Transhumanist Projects In-Reply-To: References: Message-ID: Announcing: H+DAO. [cid:ac2a177d-547a-4ffb-b133-a54a3ef9abc4] Transhumanist Launch to the Future! Please support us and vote now to get this funded. Below is the Proposal and link for your convenience. Proposal Link: Swae (deepfunding.ai) Scroll to the bottom of the proposal to vote! Thank you, Onward! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dao-draft-7a1.png Type: image/png Size: 140904 bytes Desc: dao-draft-7a1.png URL: From atymes at gmail.com Wed May 4 19:16:42 2022 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 4 May 2022 12:16:42 -0700 Subject: Re: [ExI] NEW! H+DAO - Decentralized Transhumanist Projects In-Reply-To: References: Message-ID: > The core mission of H+DAO will be to create, launch, and market > project-specific DAOs (PS-DAOs) associated with specific projects which > advocate for and manifest beneficial transhumanism, especially those which > emphasize innovative educational opportunities.
Commonly, the initial role > of a PS-DAO will be to raise funds for a project via the sale of governance > tokens in the PS-DAO. So...you're raising funds to create an organization, the main point of which will be to create other organizations to raise funds? -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed May 4 19:26:36 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 4 May 2022 14:26:36 -0500 Subject: [ExI] dna In-Reply-To: <004001d85fe4$743433b0$5c9c9b10$@rainier66.com> References: <000501d85f3b$67b69dc0$3723d940$@rainier66.com> <004001d85fe4$743433b0$5c9c9b10$@rainier66.com> Message-ID: I think you would be lucky if you didn't get killed doing stuff like that. I don't know what you should be doing, but you are wasted. SNL, maybe? bill w On Wed, May 4, 2022 at 1:28 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace via extropy-chat > *Subject:* Re: [ExI] dna > > > > ready for some excellent frat-boy fun. > > > > spike examples, please? bill w > > > > We secretly collect DNA from air in a local restaurant. We don?t even > know who it belongs to. We give a DNA kit to a sibling, register the > unknown DNA under his name, send the restaurant sample to AncestryDNA a > week before he sends his. Register his DNA under a different name. A few > weeks later, results under his name come back from the mystery person in > the restaurant, and Hey bro! You?re ADOPTED buddy! We suspected it all > along! > > Of course the parents deny everything. > > A few days later, results from my sibling come in, registered under a > phony name. Phase two of the gag, call him up: BRO! They SWITCHED the > BABIES at the HOSPITAL! You weren?t adopted, you were STOLEN! > > Heh. Let that mess with his head for a few weeks. Then decide if you > will ever let him in on the gag. > > All this shows to go ya, a comment I hear a lot in DNA circles: it?s so > much fun to find second cousins. You make friends, share histories, > sometimes become close. They become like a sibling, except without the > baggage. > > If you do crap like that gag I described, that would definitely fall under > the term baggage. > > spike > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed May 4 19:33:03 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 4 May 2022 14:33:03 -0500 Subject: [ExI] singing opera Message-ID: I read where breathing techniques used by opera singers can help those with problems stemming from Covid. Check yourself out: when you breathe in do your shoulders rise and your chest inflate? Of course you say. Nope. Proper breathing means to take air in all the way down. The stomach should inflate a lot more than the chest. Take it from a singer - music scholarship to LSU in voice. Bill Clinton had perpetual trouble with losing his voice. I noticed that he is a big chest breather. Someone should have told him how to breathe. bill w -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed May 4 19:35:20 2022 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 4 May 2022 12:35:20 -0700 Subject: [ExI] The irony of funding and non-funding transhumanism Message-ID: One moment I'm looking over a proposal for a H+ DAO, seeing all the signs that it is hype and will not fund any actual new technology, though the effort itself will likely achieve its requested funding. The next moment I'm talking with some colleagues about an idea for a living CubeSat - a demonstration of a created organism in space, actually intended as a science project to advance the state of the art but perhaps justified as an orbital garbage collector (something that can eat space debris) - and noting it is unlikely to reach the attention of anyone who would actually fund it. What's a fellow to do? Spike, anyone, any suggestions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed May 4 20:08:50 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 4 May 2022 13:08:50 -0700 Subject: [ExI] dna In-Reply-To: References: <000501d85f3b$67b69dc0$3723d940$@rainier66.com> <004001d85fe4$743433b0$5c9c9b10$@rainier66.com> Message-ID: <00a801d85ff2$c60859b0$52190d10$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace via extropy-chat Sent: Wednesday, 4 May, 2022 12:27 PM To: ExI chat list Cc: William Flynn Wallace Subject: Re: [ExI] dna >?I think you would be lucky if you didn't get killed doing stuff like that. I don't know what you should be doing, but you are wasted. SNL, maybe? bill w Eh, Billw, if you knew my brother you wouldn?t think so. You already know how I am, well, he?s worse. Not kidding, he really is. If I were to get feeling guilty about doing something like that, I would just remember back on all the stuff he did to me when we were kids, then I would feel much better. Fortunately he really is the kind of guy who would laugh his ass off rolling in the floor if you let him in on it. The whole plan does have risk however. Suppose you collect DNA randomly in a restaurant and you happen to get the son of someone who also did the test, whose wife is into the whole DNA scene. He is minding his own business, his wife suddenly says BILL! Who is this Jones guy who just showed up on AncestryDNA as your SON? Or? random restaurant guy?s son did AncestryDNA, Junior is minding his own business and BILL! Your father isn?t Smith! It?s JONES! OK never mind, this gag carries too much risk. Regarding SNL: I woulda jumped at the opportunity back when it was funny. Now? it is just sad to see what it became. spike On Wed, May 4, 2022 at 1:28 PM spike jones via extropy-chat > wrote: ? We secretly collect DNA from air in a local restaurant. ? A few days later, results from my sibling come in, registered under a phony name. Phase two of the gag, call him up: BRO! They SWITCHED the BABIES at the HOSPITAL! You weren?t adopted, you were STOLEN!? If you do crap like that gag I described, that would definitely fall under the term baggage. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Wed May 4 20:09:11 2022 From: pharos at gmail.com (BillK) Date: Wed, 4 May 2022 21:09:11 +0100 Subject: Re: [ExI] The irony of funding and non-funding transhumanism In-Reply-To: References: Message-ID: On Wed, 4 May 2022 at 20:42, Adrian Tymes via extropy-chat wrote: > > One moment I'm looking over a proposal for a H+ DAO, seeing all the signs that it is hype and will not fund any actual new technology, though the effort itself will likely achieve its requested funding. > > The next moment I'm talking with some colleagues about an idea for a living CubeSat - a demonstration of a created organism in space, actually intended as a science project to advance the state of the art but perhaps justified as an orbital garbage collector (something that can eat space debris) - and noting it is unlikely to reach the attention of anyone who would actually fund it. > > What's a fellow to do? Spike, anyone, any suggestions? > _______________________________________________ The Pentagon is funding companies to develop orbital garbage collectors. Quote: US Space Force's 'Orbital Prime' project aims to attack space debris by recycling or removing junk By Elizabeth Howell published February 06, 2022 The military branch wants to test on-orbit systems within two to four years with its Orbital Prime project. --------- BillK From spike at rainier66.com Wed May 4 20:12:02 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 4 May 2022 13:12:02 -0700 Subject: Re: [ExI] The irony of funding and non-funding transhumanism In-Reply-To: References: Message-ID: <00b101d85ff3$3844bc30$a8ce3490$@rainier66.com> ... On Behalf Of Adrian Tymes via extropy-chat Subject: [ExI] The irony of funding and non-funding transhumanism >... and noting it is unlikely to reach the attention of anyone who would actually fund it. >...What's a fellow to do? Spike, anyone, any suggestions? Any time I had ideas like that, I would call a DARPA rep. I would offer one, but the guy I worked with is long since retired. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed May 4 20:21:47 2022 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 4 May 2022 13:21:47 -0700 Subject: Re: [ExI] The irony of funding and non-funding transhumanism In-Reply-To: References: Message-ID: Yeah, I know. We applied. 8 proposals, all denied. (There are some who suspect this is yet another case of preselected winners, where those who weren't in before the project was announced had no chance of funding.) On Wed, May 4, 2022 at 1:16 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 4 May 2022 at 20:42, Adrian Tymes via extropy-chat > wrote: > > > > One moment I'm looking over a proposal for a H+ DAO, seeing all the > signs that it is hype and will not fund any actual new technology, though > the effort itself will likely achieve its requested funding. > > > > The next moment I'm talking with some colleagues about an idea for a > living CubeSat - a demonstration of a created organism in space, actually > intended as a science project to advance the state of the art but perhaps > justified as an orbital garbage collector (something that can eat space > debris) - and noting it is unlikely to reach the attention of anyone who > would actually fund it. > > > > What's a fellow to do? Spike, anyone, any suggestions? > > _______________________________________________ > > > The Pentagon is funding companies to develop orbital garbage collectors.
> > > > Quote: > US Space Force's 'Orbital Prime' project aims to attack space debris > by recycling or removing junk > By Elizabeth Howell published February 06, 2022 > The military branch wants to test on-orbit systems within two to four > years with it Orbital Prime project. > --------- > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Wed May 4 20:27:04 2022 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 4 May 2022 16:27:04 -0400 Subject: [ExI] Nonspecific effects of Covid vaccines In-Reply-To: References: Message-ID: On Wed, May 4, 2022 at 5:59 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Other sites are also starting to query the all-cause mortality statistics. > The problem is that these effects take a long time to become evident. > By which time it may be too late. > > ### The problem is that the Covid vaccine studies were terminated very early, after only four months. These were the only placebo-controlled studies, or in other words, the only studies that are unlikely to be garbage. The unexpected mortality related to mRNA vaccines showed up in these four months and of course we don't know if it would have tapered off, persisted or even increased with longer observation. All the other studies on Covid vaccines are observational, case-control and other low-confidence designs. This means we really don't know anything reliable about long-term effects and, since it is very doubtful there will be any additional placebo-controlled studies, we will never know. One interesting tidbit of information from Dr Stabell-Benn is that the nonspecific mortality effects of non-live vaccines are more pronounced in girls. The practical upshot of this all is a big shift in what I would recommend to my patients: 1. Strictly avoid non-Covid mRNA vaccines of all kinds unless conclusively shown in placebo-controlled extended studies that the vaccine has a clear net positive effect on all cause mortality. 2. Give preference to adenovirus Covid vaccines if you consider getting vaccinated. 3. Strictly avoid mRNA Covid vaccines if you are at low risk of Covid morbidity (i.e. generally healthy individual less than 50 years old) 4. Strictly avoid vaccination of children, especially girls, for Covid using mRNA vaccines. 5. Avoid booster Covid mRNA vaccinations until proven safe in placebo-controlled studies. The above points are not meant as medical advice. I provide medical advice only to specific patient's whom I accept into my practice. Consult with your medical provider before making any decisions about prescription medical treatments. The difference from my previous recommendations was that I did not differentiate between mRNA and adenovirus vaccines and I did not strongly advise against vaccination of children and healthy adults up to 50 years of age. I am sorry for failing to scrutinize the available information on Covid vaccines before dismissing concerns about them. Naively, I thought that the pivotal vaccine studies would have a built-in all-cause mortality endpoint but it turns out they didn't. All they did was to look at Covid-related mortality and indeed the vaccines worked fine in that respect. 
Unfortunately, there are many situations in medicine where success in a particular endpoint (disease-specific mortality, improvement in a proxy measure of health) is completely overshadowed by concurrent effects on all cause mortality and other morbidity. Every pivotal study *must* have all-cause mortality as an endpoint, or else it's garbage. There is a lot of garbage out there. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed May 4 20:40:13 2022 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 4 May 2022 13:40:13 -0700 Subject: [ExI] The irony of funding and non-funding transhumanism In-Reply-To: <00b101d85ff3$3844bc30$a8ce3490$@rainier66.com> References: <00b101d85ff3$3844bc30$a8ce3490$@rainier66.com> Message-ID: On Wed, May 4, 2022 at 1:26 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Any time I had ideas like that, I would call a DARPA rep. I would offer > one, but the guy I worked with is long since retired. > All the contacts we make with people who knew people at DARPA, all their contacts have since retired. No one seems to know how, or be able, to make new contacts at DARPA - at least outside of specific efforts that they announce, and then only for those specific efforts, as opposed to out-of-band/unsolicited things like this. Which means they have to think of it (without us telling them) before they'll fund it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed May 4 23:24:35 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 4 May 2022 17:24:35 -0600 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: Hi Stathis, On Tue, May 3, 2022 at 4:28 PM Stathis Papaioannou via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, 4 May 2022 at 07:06, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Hi Jason, >> We continue to talk past each other. I agree with what you are saying >> but... >> [image: 3_robots_tiny.png] >> First off, you seem to be saying you don't care about the fact that the >> first two systems represent the abstract notion of red with different >> qualities, and that they achieve their Turing completeness in different >> ways. >> If that is the case, why are we talking? I want to know what your >> redness knowledge is like, you don't seem to care about anything other than >> all these systems can tell you the strawberry is red, and are all turing >> complete? >> >> In addition to turing completeness, what I am interested in is the >> efficiency by which computation can be accomplished by different models. >> Is the amount of hardware used in one model more than is required in >> another? >> The reason there are only a few registers in a CPU, is because of the >> extreme brute force way you must do computational operations like addition >> and comparison when using discrete logic. It takes far too much hardware >> to have any more than a handful of registers, which can be computationally >> bound to each other at any one time. Whereas if knowledge composed of >> redness and greenness is a standing wave in neural tissue EM fields, every >> last pixel of knowledge can be much more efficiently meaningfully bound to >> all the other pixels in a 3D standing wave. If standing waves require far >> less hardware to do the same amount of parallel computational binding, this >> is what I'm interested in. 
They are both turing complete, one is far more >> efficient than the other. >> >> Similarly, in order to achieve substrate independence, like the 3rd >> system in the image, you need additional dictionaries to tell you whether >> redness or greenness or +5 volts, or anything else is representing the >> binary 1, or the word 'red'. Virtual machines, capable of running on >> different lower level hardware, are less efficient than machines running on >> naked hardware. This is because they require the additional translation >> layer to enable virtual operation on different types of hardware. The >> first two systems representing information directly on qualities does not >> require the additional dictionaries required to achieve the substrate >> independence as architected in the 3rd system. So, again, the first two >> systems are more efficient, since they require less mapping hardware. >> > > > Substrate independence is not something that is 'achieved', it is just the > way it works. Hammering is substrate independent because you can make a > hammer out of many different things, even though a particular set of > materials may be more durable and easier to work with, because it is > impossible to separate hammering from the behaviour associated with > hammering. Similarly, it is impossible to separate qualia from the > behaviour associated with qualia (the abstract properties, as you call > them), because otherwise you could make a partial zombie, and you have > agreed that is absurd. > I've been reading this over and over again, since you sent it several days ago, trying to figure out how you could be thinking about qualia, and the "behavior associated with qualia". Because this must be radically different from what I understand qualia to be. First off, I don't believe I've ever talked about anything close to "the abstract properties, as [Brent] calls them". I always talk about an abstract word like "redness" which is meaningless until we define it by pointing to a particular physical (or subjective) quality or property. And I've also talked about abstract descriptions of such physical properties, like descriptions of how glutamate reacts in a synapse, stressing how abstract descriptions of this physical behavior tell us nothing of the intrinsic physical qualities that could be being described. The actual physical properties being described are not "abstract properties", they can just be described with abstract descriptions, which tell you nothing of what they are like. The fact that you conflate my descriptions of abstract descriptions of properties with "abstract properties", whatever those are, seems to tell me volumes about how you think about reality, descriptions of reality, and knowledge of reality. I'm probably mistaken, but it seems to me that anyone that doesn't clearly distinguish between these things won't be able to understand what intrinsic colorness qualities, or knowledge of color, and perception (through senses) could be. Then when you compare hammering behavior with something like an intrinsic redness quality of either the strawberry or intrinsic qualities of our knowledge of the strawberry (or the behavior of such, which can be described with abstract descriptions which tell us nothing of the quality of what we are describing...?) When you describe hammering behavior (or dancing or any other similar behavior) to a blind person, you will be able to 100% communicate to them what you are describing.
but when you describe any "behavior of redness" to a blind person, you will fail, completely. The differences between these two things is what is all important, this is what you seem to be ignorring. What do you think colorness qualities are? Why can't you describe them to blind people? What is it, in this world, that has all these colorness qualities? How can anyone think that colorness qualities are anything like hammering behavior? If someone was honestly claiming they experience a colorness quality you can't experience, and if they called that colorness quality grue. What would that mean, to you? And I've asked you this before, and I don't recall your answer. It's easy to understand how someone could come up with a description of some new behavior, something different than a hammering behavior. But how would you come up with (or discover) a new color nobody has ever experienced before. How would you know what a description of that colorness quality was like? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From stathisp at gmail.com Thu May 5 00:47:26 2022 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 5 May 2022 10:47:26 +1000 Subject: [ExI] Is Artificial Life Conscious? In-Reply-To: References: Message-ID: On Thu, 5 May 2022 at 09:26, Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Hi Stathis, > On Tue, May 3, 2022 at 4:28 PM Stathis Papaioannou via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, 4 May 2022 at 07:06, Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Hi Jason, >>> We continue to talk past each other. I agree with what you are saying >>> but... >>> [image: 3_robots_tiny.png] >>> First off, you seem to be saying you don't care about the fact that the >>> first two systems represent the abstract notion of red with different >>> qualities, and that they achieve their Turing completeness in different >>> ways. >>> If that is the case, why are we talking? I want to know what your >>> redness knowledge is like, you don't seem to care about anything other than >>> all these systems can tell you the strawberry is red, and are all turing >>> complete? >>> >>> In addition to turing completeness, what I am interested in is the >>> efficiency by which computation can be accomplished by different models. >>> Is the amount of hardware used in one model more than is required in >>> another? >>> The reason there are only a few registers in a CPU, is because of the >>> extreme brute force way you must do computational operations like addition >>> and comparison when using discrete logic. It takes far too much hardware >>> to have any more than a handful of registers, which can be computationally >>> bound to each other at any one time. Whereas if knowledge composed of >>> redness and greenness is a standing wave in neural tissue EM fields, every >>> last pixel of knowledge can be much more efficiently meaningfully bound to >>> all the other pixels in a 3D standing wave. If standing waves require far >>> less hardware to do the same amount of parallel computational binding, this >>> is what I'm interested in. They are both turing complete, one is far more >>> efficient than the other. 
I apologise if I misapplied your use of the word "abstract". Part of the difficulty is that we use different terminology. I am referring to the physical properties, the properties that can be "abstractly" described (if I am using that term correctly now), the observable properties, the functional properties. These are contrasted with the qualia, the phenomenal properties, the private properties, the consciousness.
The functionalist position is that you cannot reproduce the first type of properties without also reproducing the second type of properties, because if you could, it would result in absurdity. That is the entire argument summarised in a sentence.

> [...] When you describe hammering behavior (or dancing, or any other similar behavior) to a blind person, you will be able to communicate 100% of what you are describing; but when you describe any "behavior of redness" to a blind person, you will fail completely.

I struggle to find an analogy, and this one is imperfect, but just as reproducing the hammering behaviour necessarily reproduces the hammering, reproducing the glutamate behaviour necessarily reproduces any qualia associated with glutamate. If you can't reproduce the glutamate behaviour, that doesn't mean that qualia are substrate specific, any more than not being able to reproduce the hammering behaviour would mean that hammering is substrate specific.

> [...] How would you know what a description of that colorness quality was like?

I think colourness qualities are what the human behaviour associated with distinguishing between colours, describing them, reacting to them emotionally etc. looks like from inside the system. If you make a physical change to the system and perfectly reproduce this behaviour, you will also necessarily perfectly reproduce the colourness qualities.

--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From spike at rainier66.com Thu May 5 04:46:09 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Wed, 4 May 2022 21:46:09 -0700
Subject: [ExI] swedish perspective
Message-ID: <000901d8603b$0ab12c70$20138550$@rainier66.com>

This chart offers a whole nuther perspective. Sweden didn't shut down their economy for covid.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image003.jpg
Type: image/jpeg
Size: 43066 bytes
Desc: not available
URL: 

From stathisp at gmail.com Thu May 5 04:59:46 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 5 May 2022 14:59:46 +1000
Subject: [ExI] swedish perspective
In-Reply-To: <000901d8603b$0ab12c70$20138550$@rainier66.com>
References: <000901d8603b$0ab12c70$20138550$@rainier66.com>
Message-ID: 

On Thu, 5 May 2022 at 14:47, spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> This chart offers a whole nuther perspective. Sweden didn't shut down their economy for covid.
> spike

In the old days, it was accepted that people just got sick and died. They had a dozen children so that they had a few spares. We live in better times.

--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image003.jpg
Type: image/jpeg
Size: 43066 bytes
Desc: not available
URL: 

From brent.allsop at gmail.com Thu May 5 16:34:24 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 5 May 2022 10:34:24 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> I think colourness qualities are what the human behaviour associated with distinguishing between colours, describing them, reacting to them emotionally etc. looks like from inside the system. If you make a physical change to the system and perfectly reproduce this behaviour, you will also necessarily perfectly reproduce the colourness qualities.

An abstract description of the behavior of redness can perfectly capture 100% of the behavior, one to one, isomorphically perfectly modeled. Are you saying that since you abstractly reproduce 100% of the behavior, you have duplicated the quale?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com Thu May 5 18:59:20 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 6 May 2022 04:59:20 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> [...]
> An abstract description of the behavior of redness can perfectly capture 100% of the behavior, one to one, isomorphically perfectly modeled. Are you saying that since you abstractly reproduce 100% of the behavior, you have duplicated the quale?

Here is where I end up misquoting you because I don't understand what exactly you mean by "abstract description".
But the specific example I have used is that if you perfectly reproduce the physical effect of glutamate on the rest of the brain using a different substrate, and glutamate is involved in redness qualia, then you necessarily also reproduce the redness qualia. This is because if it were not so, it would be possible to grossly change the qualia without the subject noticing any change, which is absurd.

--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com Thu May 5 21:45:36 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 5 May 2022 15:45:36 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Hi Stathis,
On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> [...]
> Here is where I end up misquoting you because I don't understand what exactly you mean by "abstract description".

[image: 3_robots_tiny.png]

This is the best possible illustration of what I mean by abstract vs intrinsic physical qualities. The first two represent knowledge with two different intrinsic physical qualities, redness and greenness. "Red" is just an abstract word, composed of strings of ones and zeros. You can't know what it means, by design, without a dictionary. The redness intrinsic quality your brain uses to represent knowledge of 'red' things is your definition for the abstract word 'red'.

> But the specific example I have used is that if you perfectly reproduce the physical effect of glutamate on the rest of the brain using a different substrate, and glutamate is involved in redness qualia, then you necessarily also reproduce the redness qualia. This is because if it were not so, it would be possible to grossly change the qualia without the subject noticing any change, which is absurd.

I think the confusion comes in the different ways we think about this: "the physical effect of glutamate on the rest of the brain using a different substrate". Everything in your model seems to be based on this kind of "cause and effect" or "interpretations of interpretations". I think of things in a different way. I would imagine you would say that the causal properties of glutamate or redness would result in someone saying: "That is red." However, to me, the redness quality, alone, isn't the cause of someone saying "That is red", as someone could lie and say "That is green", proving the redness isn't the only cause of what the person is saying. The computational system, and the way the knowledge is consciously represented, is different from simple cause and effect.
The entire system is aware of all of the intrinsic qualities of each of the pixels on the surface of the strawberry (along with any reasoning for why it would lie or not), and it is this composite awareness that is the cause of the system choosing to say "that is red", or choosing to lie in some way. It is the entire composite 'free will system' that is the initial cause of someone choosing to say something, not any single quality like the redness of a single pixel. For you, everything is just a chain of causes and effects, with no composite awareness and no composite free will system involved.

If I recall correctly, you admit that the quality of your conscious knowledge is dependent on the particular quality of your redness, so qualia can be thought of as a substrate on which the quality of your consciousness is dependent, right? If you are only focusing on a different substrate being able to produce the same 'redness behavior', then all you are doing is making two contradictory assumptions. If you take that assumption, then you can prove that nothing, not even a redness function, can have redness, for the same reason. There must be something that is redness, and the system must be able to know when redness changes to anything else. All you are saying is that nothing can do that.

That is why I constantly ask you what could be responsible for redness. Because whatever you say it is, I could use your same argument and say it can't be that, either. If you could describe to me what redness could be, this would falsify my camp, and I would jump to the functionalist camp. But that is impossible, because all your so-called proof is claiming is that nothing can be redness.

If it isn't glutamate that has the redness quality, what can it be? Nothing you say will work, because of your so-called proof. When you have contradictory assumptions you can prove all claims to be both true and false, which has no utility.

Until you can provide some falsifiable example of what could be responsible for your redness quality, further conversation seems to be a waste. My assertion is that, given your assumptions, NOTHING will work, and until you falsify that, with at least one example possibility, why go on with this contradictory assumption under which qualitative consciousness, based on substrates like redness and greenness, simply isn't possible?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From stathisp at gmail.com Fri May 6 00:00:59 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 6 May 2022 10:00:59 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> [...] However, to me, the redness quality, alone, isn't the cause of someone saying "That is red", as someone could lie and say "That is green", proving the redness isn't the only cause of what the person is saying.

The causal properties of the glutamate are basically the properties that cause motion in other parts of the system. Consider a glutamate molecule as a part of a clockwork mechanism. If you remove the glutamate molecule, you will disrupt the movement of the entire clockwork mechanism. But if you replace it with a different molecule that has similar physical properties, the rest of the clockwork mechanism will continue functioning the same. Not all of the physical properties are relevant, and they only have to be replicated to within a certain tolerance.
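(The software analogue of that swap is a drop-in replacement behind an interface. A minimal Python sketch; the class names, the scaling factor, and the tolerance are invented for illustration, not a model of a real synapse:)

from typing import Protocol

class Transmitter(Protocol):
    # The only "causal property" the rest of the mechanism depends on.
    def push(self, signal: float) -> float: ...

class Glutamate:
    def push(self, signal: float) -> float:
        return signal * 1.0

class SyntheticSubstitute:
    # Different internals, same relevant behaviour to within tolerance.
    def push(self, signal: float) -> float:
        return signal * 1.00001

def downstream_mechanism(t: Transmitter) -> float:
    # The "clockwork" only ever sees the push() behaviour.
    return t.push(1.0) + 2.0

TOLERANCE = 1e-3
assert abs(downstream_mechanism(Glutamate()) - downstream_mechanism(SyntheticSubstitute())) < TOLERANCE

As far as downstream_mechanism can tell, the two components are the same part; whether anything that matters can escape this kind of black-box equivalence is precisely what the two sides of this thread disagree about.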
> [...] For you, everything is just a chain of causes and effects, with no composite awareness and no composite free will system involved.

I am proposing that the awareness of the system be completely ignored, and only the relevant physical properties be replicated. If this is done, then whether you like it or not, the awareness of the system will also be replicated. It's impossible to do one without the other.

> [...] There must be something that is redness, and the system must be able to know when redness changes to anything else. All you are saying is that nothing can do that.

I am saying that redness is not a substrate, but that it supervenes on a certain type of behaviour, regardless of the substrate of its implementation. This allows the system to know when the redness changes to something else, since the behaviour on which the redness supervenes would change to a different behaviour, on which different colour qualia supervene.

> [...] Because when you have contradictory assumptions you can prove all claims to be both true and false, which has no utility.

Glutamate doesn't have the redness quality, but glutamate, or something that functions like glutamate, is necessary to produce the redness quality. We know this because it is what we observe: certain brain structures are needed in order to have certain experiences. We know that it can't be substrate specific because then we could grossly change the qualia without the subject noticing, which is absurd, meaning there would be no difference between having and not having qualia.

> [...]

--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From foozler83 at gmail.com Fri May 6 19:50:05 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Fri, 6 May 2022 14:50:05 -0500
Subject: [ExI] local warming
Message-ID: 

Next week it is predicted that every day will be hot - mid 90s, setting records. But this is May, not July or August! I wonder what the summer will bring. bill w
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From henrik.ohrstrom at gmail.com Fri May 6 21:21:30 2022
From: henrik.ohrstrom at gmail.com (Henrik Ohrstrom)
Date: Fri, 6 May 2022 23:21:30 +0200
Subject: [ExI] swedish perspective
In-Reply-To: <000901d8603b$0ab12c70$20138550$@rainier66.com>
References: <000901d8603b$0ab12c70$20138550$@rainier66.com>
Message-ID: 

Interesting. Do you have access to the same data for other countries? As of now, COVID-19 is not an interesting problem at the societal level in Sweden. Everyone and her uncle is vaccinated and/or has had the virus. I finally caught it 3 weeks ago, after being in full hospital/ICU contact since the beginning. Protection works. It is still a nasty virus, and the effect on the lungs is no laughing matter; you really notice, when someone has anesthesia, whether they have had a COVID infection. But as for the ICU, patients have something else and happen to be COVID positive; they are not in the ICU because of covid, and the virus is currently not particularly interesting. The latest influenza is as big a problem. I caught that one too and was worse off from it, except for the coughing, which is bad - I don't want to have anesthesia for a while yet. Also, the effect on taste is unfun. So, for the next pandemic, do as the experts recommend. Science works.
/Henrik

On Thu, 5 May 2022 at 06:49, spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> This chart offers a whole nuther perspective. Sweden didn't shut down their economy for covid.
> spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image003.jpg
Type: image/jpeg
Size: 43066 bytes
Desc: not available
URL: 

From pharos at gmail.com Sat May 7 14:16:25 2022
From: pharos at gmail.com (BillK)
Date: Sat, 7 May 2022 15:16:25 +0100
Subject: [ExI] local warming
In-Reply-To: 
References: 
Message-ID: 

On Fri, 6 May 2022 at 20:53, William Flynn Wallace via extropy-chat wrote:
> Next week it is predicted that every day will be hot - mid 90s, setting records. But this is May, not July or August! I wonder what the summer will bring. bill w

In India and Pakistan the midsummer heat has arrived two months early. It has reached almost 50 C (122 F). This heat is destroying crops and means people can only work during the cooler nighttime hours. "The hottest temperatures recorded are south-east and south-west of Ahmedabad, with maximum land-surface temperatures of around 65 C (149 F)," the European Space Agency said on its website.
BillK

From spike at rainier66.com Sun May 8 16:05:25 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 8 May 2022 09:05:25 -0700
Subject: [ExI] 1950 census
Message-ID: <006601d862f5$6e436610$4aca3230$@rainier66.com>

The 1950 census was released on 1 April, which is part of the reason you have been hearing a little less from me in the last few weeks. I have been very busy documenting, and finding those still living who might be willing to answer questions. Fun aside: check out the screenshot below of the instructions to the pollster that appear on every page of the 1950 census. Good for a laugh on a fine Sunday morning in May:
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.jpg
Type: image/jpeg
Size: 65314 bytes
Desc: not available
URL: 

From atymes at gmail.com Mon May 9 07:18:30 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 9 May 2022 00:18:30 -0700
Subject: [ExI] eating what you need
In-Reply-To: 
References: 
Message-ID: 

On Tue, Apr 26, 2022 at 8:51 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> https://neurosciencenews.com/nutritional-intelligence-20464/
> I read of this baby study (which could have been replicated ethically, I am sure) many years ago and have validated it in my own experience: when I get short of some nutrient I start eating foods that supply it. This is esp. true of green things: after some time of not eating them, when I do eat them my brain rewards me with great taste, far more than I usually experience with that food. The next day the same food doesn't taste like anything special - I have gotten the nutrients I needed.
> "You know, I think I'd like some ________." And you have no idea where this came from. And when you do eat it, it is esp. good.
> Have you had these experiences before?

I have, though it's not entirely reliable.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From foozler83 at gmail.com Mon May 9 15:02:02 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Mon, 9 May 2022 10:02:02 -0500
Subject: [ExI] eating what you need
In-Reply-To: 
References: 
Message-ID: 

Not reliable - ok. I am sure we begin learning food preferences early, and these could override our tendencies to eat what we need. That's pretty clear. But when we do eat what we need we know it; our bodies tell us it's good, and why not have some more? Maybe this even extends to vision: we just see the food we need and it spurs a desire to buy it. bill w

On Mon, May 9, 2022 at 2:22 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> On Tue, Apr 26, 2022 at 8:51 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>> [...]
> I have, though it's not entirely reliable.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com Mon May 9 21:53:00 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 9 May 2022 15:53:00 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Hi Stathis,

OK, let me try saying it this way. You use the neural substitution argument to "prove" that redness cannot be substrate dependent. Then you conclude that redness "supervenes" on some function. The problem is, you can prove that redness can't "supervene" on any function via the same neural substitution proof.

On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From stathisp at gmail.com Mon May 9 22:01:38 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 10 May 2022 08:01:38 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Tue, 10 May 2022 at 07:55, Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> Hi Stathis,
> OK, let me try saying it this way. You use the neural substitution argument to "prove" that redness cannot be substrate dependent. Then you conclude that redness "supervenes" on some function. The problem is, you can prove that redness can't "supervene" on any function via the same neural substitution proof.

It supervenes on any substrate that preserves the redness behaviour. In other words, if you replace a part of the brain with a black box that affects the rest of the brain in the same way as the original, the subject must behave the same and must have the same qualia. It doesn't matter what's in the black box.

> [...]
--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From brent.allsop at gmail.com Mon May 9 22:40:42 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 9 May 2022 16:40:42 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

OK, let me explain in more detail. Redness can't supervene on a particular function, because you can substitute that function (say, bubble sort) with some other function (quick sort) that behaves the same. By your own argument - if "you replace a part (or function) of the brain with a black box that affects the rest of the brain in the same way as the original, the subject must behave the same" - the substituted system must be equivalent. So redness can't supervene on any function.

On Mon, May 9, 2022 at 4:03 PM Stathis Papaioannou via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>> [...]
>> It supervenes on any substrate that preserves the redness behaviour. In other words, if you replace a part of the brain with a black box that affects the rest of the brain in the same way as the original, the subject must behave the same and must have the same qualia. It doesn't matter what's in the black box.
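(The sort example is easy to make concrete. A minimal Python sketch; both implementations are standard textbook versions, and the only point is that they are behaviourally indistinguishable from outside:)

import random

def bubble_sort(xs):
    # O(n^2) exchange sort: repeatedly swap adjacent out-of-order pairs.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    # Recursive partition sort: very different internals, same input-output behaviour.
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

# From the outside (the "rest of the brain"), the two are interchangeable black boxes.
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert bubble_sort(data) == quick_sort(data) == sorted(data)

Whether two systems that pass every such external test can still differ in anything that matters is, of course, the whole disagreement.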
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
On Mon, May 9, 2022 at 4:03 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> It supervenes on any substrate that preserves the redness behaviour. In
> other words, if you replace a part of the brain with a black box that
> affects the rest of the brain in the same way as the original, the subject
> must behave the same and must have the same qualia. It doesn't matter
> what's in the black box.
>
> --
> Stathis Papaioannou
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From atymes at gmail.com Mon May 9 22:50:31 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 9 May 2022 15:50:31 -0700
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Those are not reasons why redness can't supervene.

On Mon, May 9, 2022 at 3:42 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> OK, let me explain in more detail.
> Redness can't supervene on a function, because you can substitute one
> function (say, bubble sort) with some other function (quick sort) that
> affects everything outside it in the same way. By your own argument, if
> "you replace a part (or function) of the brain with a black box that
> affects the rest of the brain in the same way as the original, the subject
> must behave the same", so the substitution would be undetectable. So
> redness can't supervene on any function.
>
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From brent.allsop at gmail.com Mon May 9 22:53:42 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 9 May 2022 16:53:42 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Exactly, I couldn't have said it better myself.
Those aren't reasons why redness isn't substrate dependent.
If you ask the system, "How do you do your sorting?", one system must be
able to say "bubble sort" and the other must be able to say "quick sort".
Just the same as if you asked, "What is redness like for you?": one would
say your redness, the other would say your greenness.
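As a minimal sketch of that point (continuing the illustrative Python from
the earlier message; the class and method names are hypothetical), the two
systems sort identically but report different internals when asked:

    class BubbleSorter:
        def sort(self, xs):
            # Bubble-sort internals: swap adjacent out-of-order pairs.
            xs = list(xs)
            for i in range(len(xs)):
                for j in range(len(xs) - 1 - i):
                    if xs[j] > xs[j + 1]:
                        xs[j], xs[j + 1] = xs[j + 1], xs[j]
            return xs

        def how_do_you_sort(self):
            return "bubble sort"

    class QuickSorter:
        def sort(self, xs):
            # Quick-sort internals: partition around a pivot and recurse.
            if len(xs) <= 1:
                return list(xs)
            pivot, rest = xs[0], xs[1:]
            return (self.sort([x for x in rest if x < pivot]) + [pivot] +
                    self.sort([x for x in rest if x >= pivot]))

        def how_do_you_sort(self):
            return "quick sort"

    a, b = BubbleSorter(), QuickSorter()
    assert a.sort([3, 1, 2]) == b.sort([3, 1, 2]) == [1, 2, 3]
    # Same sorting behaviour, different self-report of the internals.
    assert a.how_do_you_sort() != b.how_do_you_sort()

Whether the answer to how_do_you_sort() counts as part of the behaviour
that must be preserved is exactly where the two positions in this thread
diverge: if it does, behavioural equivalence forces the same report; if it
doesn't, identical behaviour leaves the internals, and any report of them,
free to differ.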
On Mon, May 9, 2022 at 4:51 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Those are not reasons why redness can't supervene.
>
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From brent.allsop at gmail.com Mon May 9 23:02:59 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 9 May 2022 17:02:59 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Redness isn't about the black box functionality; redness is about how the
black box achieves that functionality.

On Mon, May 9, 2022 at 4:53 PM Brent Allsop wrote:

> Exactly, I couldn't have said it better myself.
> Those aren't reasons why redness isn't substrate dependent.
> If you ask the system, "How do you do your sorting?", one system must be
> able to say "bubble sort" and the other must be able to say "quick sort".
> Just the same as if you asked, "What is redness like for you?": one would
> say your redness, the other would say your greenness.
>
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From atymes at gmail.com Mon May 9 23:08:29 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 9 May 2022 16:08:29 -0700
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

You could have said it much better. I said that your argument doesn't
make logical sense.

Your response, in fact, did say it better. However, you are asserting as
true things that Stathis is asserting as false.

On Mon, May 9, 2022 at 3:58 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Exactly, I couldn't have said it better myself.
> Those aren't reasons why redness isn't substrate dependent.
> If you ask the system, "How do you do your sorting?", one system must be
> able to say "bubble sort" and the other must be able to say "quick sort".
> Just the same as if you asked, "What is redness like for you?": one would
> say your redness, the other would say your greenness.
>
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 

From brent.allsop at gmail.com Tue May 10 00:00:09 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 9 May 2022 18:00:09 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Which things am I asserting as true that Stathis is asserting as false?
Stathis is using an argument for why redness cannot be substrate
dependent. I'm using the same argument to show why redness cannot
supervene on any function.
I'm just demonstrating that neither of these is really an argument: both
simply assume that redness is only about function, and not about how the
function is achieved, which entirely misses the point of what redness is.

On Mon, May 9, 2022 at 5:10 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> You could have said it much better. I said that your argument doesn't
> make logical sense.
>
> Your response, in fact, did say it better. However, you are asserting as
> true things that Stathis is asserting as false.
>
>>>>> >>>>> >>>>> >>>>>> On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via extropy-chat < >>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat < >>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>> >>>>>>>> >>>>>>>> Hi Stathis, >>>>>>>> On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via extropy-chat >>>>>>>> wrote: >>>>>>>> >>>>>>>>> On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat < >>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>> >>>>>>>>>> On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via >>>>>>>>>> extropy-chat wrote: >>>>>>>>>> >>>>>>>>>>> I think colourness qualities are what the human behaviour >>>>>>>>>>> associated with distinguishing between colours, describing them, reacting >>>>>>>>>>> to them emotionally etc. is seen from inside the system. If you make a >>>>>>>>>>> physical change to the system and perfectly reproduce this behaviour, you >>>>>>>>>>> will also necessarily perfectly reproduce the colourness qualities. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> An abstract description of the behavior of redness can perfectly >>>>>>>>>> capture 100% of the behavior, one to one, isomorphically perfectly >>>>>>>>>> modeled. Are you saying that since you abstractly reproduce 100% of the >>>>>>>>>> behavior, that you have duplicated the quale? >>>>>>>>>> >>>>>>>>> >>>>>>>>> Here is where I end up misquoting you because I don?t understand >>>>>>>>> what exactly you mean by ?abstract description?. >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> [image: 3_robots_tiny.png] >>>>>>>> >>>>>>>> This is the best possible illustration of what I mean by abstract >>>>>>>> vs intrinsic physical qualities. >>>>>>>> The first two represent knowledge with two different intrinsic >>>>>>>> physical qualities, redness and greenness. >>>>>>>> "Red" is just an abstract word, composed of strings of ones and >>>>>>>> zeros. You can't know what it means, by design, without a dictionary. >>>>>>>> The redness intrinsic quality your brain uses to represent >>>>>>>> knowledge of 'red' things, is your definition for the abstract word 'red'. >>>>>>>> >>>>>>>> >>>>>>>>> But the specific example I have used is that if you perfectly >>>>>>>>> reproduce the physical effect of glutamate on the rest of the brain using a >>>>>>>>> different substrate, and glutamate is involved in redness qualia, then you >>>>>>>>> necessarily also reproduce the redness qualia. This is because if it were >>>>>>>>> not so, it would be possible to grossly change the qualia without the >>>>>>>>> subject noticing any change, which is absurd. >>>>>>>>> >>>>>>>> >>>>>>>> I think the confusion comes in the different ways we think about >>>>>>>> this: >>>>>>>> >>>>>>>> "the physical effect of glutamate on the rest of the brain using a >>>>>>>> different substrate" >>>>>>>> >>>>>>>> Everything in your model seems to be based on this kind of "cause >>>>>>>> and effect" or "interpretations of interpretations". I think of things in >>>>>>>> a different way. >>>>>>>> I would emagine you would say that the causal properties of >>>>>>>> glutamate or redness would result in someone saying: "That is red." >>>>>>>> However, to me, the redness quality, alone, isn't the cause of >>>>>>>> someone saying: "That is Red", as someone could lie, and say: "That is >>>>>>>> Green", proving the redness isn't the only cause of what the person is >>>>>>>> saying. 
>>>>>>> The causal properties of the glutamate are basically the properties
>>>>>>> that cause motion in other parts of the system. Consider a
>>>>>>> glutamate molecule as a part of a clockwork mechanism. If you
>>>>>>> remove the glutamate molecule, you will disrupt the movement of the
>>>>>>> entire clockwork mechanism. But if you replace it with a different
>>>>>>> molecule that has similar physical properties, the rest of the
>>>>>>> clockwork mechanism will continue functioning the same. Not all of
>>>>>>> the physical properties are relevant, and they only have to be
>>>>>>> replicated to within a certain tolerance.
>>>>>>>
>>>>>>>> The computational system, and the way the knowledge is consciously
>>>>>>>> represented, is different from simple cause and effect.
>>>>>>>> The entire system is aware of all of the intrinsic qualities of
>>>>>>>> each of the pixels on the surface of the strawberry (along with
>>>>>>>> any reasoning for why it would lie or not).
>>>>>>>> It is because of this composite awareness that the system chooses
>>>>>>>> to say "that is red", or chooses to lie in some way.
>>>>>>>> It is the entire composite 'free will system' that is the initial
>>>>>>>> cause of someone choosing to say something, not any single quality
>>>>>>>> like the redness of a single pixel.
>>>>>>>> For you, everything is just a chain of causes and effects, with no
>>>>>>>> composite awareness and no composite free will system involved.
>>>>>>>
>>>>>>> I am proposing that the awareness of the system be completely
>>>>>>> ignored, and only the relevant physical properties be replicated.
>>>>>>> If this is done, then whether you like it or not, the awareness of
>>>>>>> the system will also be replicated. It's impossible to do one
>>>>>>> without the other.
>>>>>>>
>>>>>>>> If I recall correctly, you admit that the quality of your conscious
>>>>>>>> knowledge is dependent on the particular quality of your redness,
>>>>>>>> so qualia can be thought of as a substrate on which the quality of
>>>>>>>> your consciousness is dependent, right? If you are only focusing
>>>>>>>> on a different substrate being able to produce the same 'redness
>>>>>>>> behavior', then all you are doing is making two contradictory
>>>>>>>> assumptions. If you take that assumption, then you can prove that
>>>>>>>> nothing, not even a redness function, can have redness, for the
>>>>>>>> same reason. There must be something that is redness, and the
>>>>>>>> system must be able to know when redness changes to anything else.
>>>>>>>> All you are saying is that nothing can do that.
>>>>>>>
>>>>>>> I am saying that redness is not a substrate, but that it supervenes
>>>>>>> on a certain type of behaviour, regardless of the substrate of its
>>>>>>> implementation. This allows the system to know when the redness
>>>>>>> changes to something else, since the behaviour on which the redness
>>>>>>> supervenes would change to a different behaviour on which different
>>>>>>> colour qualia supervene.
>>>>>>>
>>>>>>>> That is why I constantly ask you what could be responsible for
>>>>>>>> redness. Because whatever you say it is, I could use your same
>>>>>>>> argument and say it can't be that, either.
>>>>>>>> If you could describe to me what redness could be, this would
>>>>>>>> falsify my camp, and I would jump to the functionalist camp. But
>>>>>>>> that is impossible, because all your so-called proof claims is
>>>>>>>> that nothing can be redness.
>>>>>>>> If it isn't glutamate that has the redness quality, what can it
>>>>>>>> be? Nothing you say will work, because of your so-called proof:
>>>>>>>> when you have contradictory assumptions you can prove all claims
>>>>>>>> to be both true and false, which has no utility.
>>>>>>>
>>>>>>> Glutamate doesn't have the redness quality, but glutamate, or
>>>>>>> something that functions like glutamate, is necessary to produce
>>>>>>> the redness quality. We know this because it is what we observe:
>>>>>>> certain brain structures are needed in order to have certain
>>>>>>> experiences. We know that it can't be substrate specific, because
>>>>>>> then we could grossly change the qualia without the subject
>>>>>>> noticing, which is absurd, as it would mean there is no difference
>>>>>>> between having and not having qualia.
>>>>>>>
>>>>>>>> Until you can provide some falsifiable example of what could be
>>>>>>>> responsible for your redness quality, further conversation seems
>>>>>>>> to be a waste. My assertion is that given your assumptions NOTHING
>>>>>>>> will work, and until you falsify that, with at least one example
>>>>>>>> possibility, why go on with this contradictory assumption, under
>>>>>>>> which qualitative consciousness, based on substrates like redness
>>>>>>>> and greenness, simply isn't possible?
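To make the disputed substitution concrete, here is a minimal Python
sketch (purely illustrative; it is not from the thread, and the function
names are invented for the example). It shows two internally different
"black boxes" with identical input-output behaviour, which is the setup
both sides appeal to:

    # Two sorting "black boxes": same inputs and outputs, different
    # internal processes (an illustrative sketch, not from the thread).

    def bubble_sort(xs):
        # Repeatedly swap adjacent out-of-order pairs.
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def quick_sort(xs):
        # Recursively partition around a pivot.
        if len(xs) <= 1:
            return list(xs)
        pivot, rest = xs[0], xs[1:]
        return (quick_sort([x for x in rest if x < pivot])
                + [pivot]
                + quick_sort([x for x in rest if x >= pivot]))

    # Judged purely by input-output behaviour, the two functions are
    # indistinguishable:
    assert bubble_sort([3, 1, 2]) == quick_sort([3, 1, 2]) == [1, 2, 3]

Stathis's claim is that preserving this interface-level behaviour is all
that matters; Brent's claim is that the two still differ in how the result
is achieved, which is what he takes redness to be about.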
From atymes at gmail.com Tue May 10 00:41:57 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Mon, 9 May 2022 17:41:57 -0700
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: References: Message-ID:

> How do you do your sorting: one system must be able to say "bubble sort"
> and the other must be able to say "quick sort"

You assert the system must be able to say this. Stathis appears to be
asserting that the system does not have to be able to say this.

On Mon, May 9, 2022 at 5:02 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Which things is Stathis (false) and I (true) on?
> Stathis is using an argument as to why redness is not substrate
> dependent.
> I'm using the same argument as to why redness cannot supervene on
> function.
> I'm just demonstrating that neither of these arguments is an argument:
> both simply assume that redness is only about function, and not about
> how the function is achieved, which is entirely missing the point of
> what redness is.
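Adrian's point can be put the same way: whether the two systems "must be
able to say" which sort they run is a question of what the interface
exposes. A hedged sketch, reusing bubble_sort and quick_sort from the
sketch above (again illustrative, with invented class and method names):

    # If the interface only exposes sort(), the two systems are
    # behaviourally identical; they differ only if a report of "how"
    # is itself part of the exposed behaviour.

    class BubbleSorter:
        def sort(self, xs):
            return bubble_sort(xs)
        def how(self):
            return "bubble sort"

    class QuickSorter:
        def sort(self, xs):
            return quick_sort(xs)
        def how(self):
            return "quick sort"

    a, b = BubbleSorter(), QuickSorter()
    assert a.sort([2, 1]) == b.sort([2, 1])   # identical at sort()
    assert a.how() != b.how()                 # differ only via how()

If how() counts as part of the behaviour to be preserved, the substitution
is detectable, as Brent's reading requires; if only sort() counts, it is
not, as Stathis's reading requires.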
From stathisp at gmail.com Tue May 10 00:48:54 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 10 May 2022 10:48:54 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: References: Message-ID:

On Tue, 10 May 2022 at 09:04, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Redness isn't about the black box functionality, redness is about how
> the black box achieves the functionality.

That may seem plausible, but the functionalist position is that however
the functionality is achieved, redness will be preserved.
>>>>> >>>>> >>>>> >>>>>> On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via extropy-chat < >>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat < >>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>> >>>>>>>> >>>>>>>> Hi Stathis, >>>>>>>> On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via extropy-chat >>>>>>>> wrote: >>>>>>>> >>>>>>>>> On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat < >>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>> >>>>>>>>>> On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via >>>>>>>>>> extropy-chat wrote: >>>>>>>>>> >>>>>>>>>>> I think colourness qualities are what the human behaviour >>>>>>>>>>> associated with distinguishing between colours, describing them, reacting >>>>>>>>>>> to them emotionally etc. is seen from inside the system. If you make a >>>>>>>>>>> physical change to the system and perfectly reproduce this behaviour, you >>>>>>>>>>> will also necessarily perfectly reproduce the colourness qualities. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> An abstract description of the behavior of redness can perfectly >>>>>>>>>> capture 100% of the behavior, one to one, isomorphically perfectly >>>>>>>>>> modeled. Are you saying that since you abstractly reproduce 100% of the >>>>>>>>>> behavior, that you have duplicated the quale? >>>>>>>>>> >>>>>>>>> >>>>>>>>> Here is where I end up misquoting you because I don?t understand >>>>>>>>> what exactly you mean by ?abstract description?. >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> [image: 3_robots_tiny.png] >>>>>>>> >>>>>>>> This is the best possible illustration of what I mean by abstract >>>>>>>> vs intrinsic physical qualities. >>>>>>>> The first two represent knowledge with two different intrinsic >>>>>>>> physical qualities, redness and greenness. >>>>>>>> "Red" is just an abstract word, composed of strings of ones and >>>>>>>> zeros. You can't know what it means, by design, without a dictionary. >>>>>>>> The redness intrinsic quality your brain uses to represent >>>>>>>> knowledge of 'red' things, is your definition for the abstract word 'red'. >>>>>>>> >>>>>>>> >>>>>>>>> But the specific example I have used is that if you perfectly >>>>>>>>> reproduce the physical effect of glutamate on the rest of the brain using a >>>>>>>>> different substrate, and glutamate is involved in redness qualia, then you >>>>>>>>> necessarily also reproduce the redness qualia. This is because if it were >>>>>>>>> not so, it would be possible to grossly change the qualia without the >>>>>>>>> subject noticing any change, which is absurd. >>>>>>>>> >>>>>>>> >>>>>>>> I think the confusion comes in the different ways we think about >>>>>>>> this: >>>>>>>> >>>>>>>> "the physical effect of glutamate on the rest of the brain using a >>>>>>>> different substrate" >>>>>>>> >>>>>>>> Everything in your model seems to be based on this kind of "cause >>>>>>>> and effect" or "interpretations of interpretations". I think of things in >>>>>>>> a different way. >>>>>>>> I would emagine you would say that the causal properties of >>>>>>>> glutamate or redness would result in someone saying: "That is red." >>>>>>>> However, to me, the redness quality, alone, isn't the cause of >>>>>>>> someone saying: "That is Red", as someone could lie, and say: "That is >>>>>>>> Green", proving the redness isn't the only cause of what the person is >>>>>>>> saying. 
>>>>>>>> >>>>>>> >>>>>>> The causal properties of the glutamate are basically the properties >>>>>>> that cause motion in other parts of the system. Consider a glutamate >>>>>>> molecule as a part of a clockwork mechanism. If you remove the glutamate >>>>>>> molecule, you will disrupt the movement of the entire clockwork mechanism. >>>>>>> But if you replace it with a different molecule that has similar physical >>>>>>> properties, the rest of the clockwork mechanism will continue functioning >>>>>>> the same. Not all of the physical properties are relevant, and they only >>>>>>> have to be replicated to within a certain tolerance. >>>>>>> >>>>>>> The computational system, and the way the knowledge is >>>>>>>> consciousnessly represented is different from simple cause and effect. >>>>>>>> The entire system is aware of all of the intrinsic qualities of >>>>>>>> each of the pixels on the surface of the strawberry (along with any >>>>>>>> reasoning for why it would lie or not) >>>>>>>> And it is because of this composite awareness, that is the cause of >>>>>>>> the system choosing to say: "that is red", or choose to lie in some way. >>>>>>>> It is the entire composit 'free will system' that is the initial >>>>>>>> cause of someone choosing to say something, not any single quality like the >>>>>>>> redness of a single pixel. >>>>>>>> For you, everything is just a chain of causes and effects, no >>>>>>>> composite awareness and no composit free will system involved. >>>>>>>> >>>>>>> >>>>>>> I am proposing that the awareness of the system be completely >>>>>>> ignored, and only the relevant physical properties be replicated. If this >>>>>>> is done, then whether you like it or not, the awareness of the system will >>>>>>> also be replicated. It?s impossible to do one without the other. >>>>>>> >>>>>>> If I recall correctly, you admit that the quality of your conscious >>>>>>>> knowledge is dependent on the particular quality of your redness, so qualia >>>>>>>> can be thought of as a substrate, on which the quality of your >>>>>>>> consciousness is dependent, right? If you are only focusing on a different >>>>>>>> substrate being able to produce the same 'redness behavior' then all you >>>>>>>> are doing is making two contradictory assumptions. If you take that >>>>>>>> assumption, then you can prove that nothing, even a redness function can't >>>>>>>> have redness, for the same reason. There must be something that is >>>>>>>> redness, and the system must be able to know when redness changes to >>>>>>>> anything else. All you are saying is that nothing can do that. >>>>>>>> >>>>>>> >>>>>>> I am saying that redness is not a substrate, but it supervenes on a >>>>>>> certain type of behaviour, regardless of the substrate of its >>>>>>> implementation. This allows the system to know when the redness changes to >>>>>>> something else, since the behaviour on which the redness supervenes would >>>>>>> change to a different behaviour on which different colour qualia supervene. >>>>>>> >>>>>>> That is why I constantly ask you what could be responsible for >>>>>>>> redness. Because whatever you say that is, I could use your same argument >>>>>>>> and say it can't be that, either. >>>>>>>> If you could describe to me what redness could be, this would >>>>>>>> falsify my camp, and I would jump to the functionalist camp. But that is >>>>>>>> impossible, because all your so-called proof is claiming, is that nothing >>>>>>>> can be redness. 
>>>>>>>> >>>>>>>> If it isn't glutamate that has the redness quality, what can? >>>>>>>> Nothing you say will work, because of your so-called proof. Because when >>>>>>>> you have contradictory assumptions you can prove all claims to be both true >>>>>>>> and false, which has no utility. >>>>>>>> >>>>>>> >>>>>>> Glutamate doesn?t have the redness quality, but glutamate or >>>>>>> something that functions like glutamate is necessary to produce the redness >>>>>>> quality. We know this because it is what we observe: certain brain >>>>>>> structures are needed in order to have certain experiences. We know that it >>>>>>> can?t be substrate specific because then we could grossly change the qualia >>>>>>> without the subject noticing, which is absurd, meaning there is no >>>>>>> difference between having and not having qualia. >>>>>>> >>>>>>> Until you can provide some falsifiable example of what could be >>>>>>>> responsible for your redness quality, further conversation seems to be a >>>>>>>> waste. Because my assertion is that given your assumptions NOTHING will >>>>>>>> work, and until you falsify that, with at least one example possibility, >>>>>>>> why go on with this contradictory assumption where qualitative >>>>>>>> consciousness, based on substrates like redness and greenness, simply isn't >>>>>>>> possible? >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> extropy-chat mailing list >>>>>>>> extropy-chat at lists.extropy.org >>>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>>>> >>>>>>> -- >>>>>>> Stathis Papaioannou >>>>>>> _______________________________________________ >>>>>>> extropy-chat mailing list >>>>>>> extropy-chat at lists.extropy.org >>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>>> >>>>>> _______________________________________________ >>>>>> extropy-chat mailing list >>>>>> extropy-chat at lists.extropy.org >>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>> >>>>> -- >>>>> Stathis Papaioannou >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_robots_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From brent.allsop at gmail.com Tue May 10 00:58:42 2022 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 9 May 2022 18:58:42 -0600 Subject: [ExI] Is Artificial Life Conscious? 
In-Reply-To: References: Message-ID:

On Mon, May 9, 2022 at 6:49 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Tue, 10 May 2022 at 09:04, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Redness isn't about the black box functionality, redness is about how
>> the black box achieves the functionality.
>
> That may seem plausible, but the functionalist position is that however
> the functionality is achieved, redness will be preserved.

In other words, you are not understanding my point, below, that according
to this argument you can't achieve redness via functionality, either.
So why do you accept your substitution argument against substrate
dependence, but not the same substitution argument for why redness can't
supervene on function (as you claim it does), either?
>>>>>> >>>>>> >>>>>> >>>>>>> On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via extropy-chat < >>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat < >>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> Hi Stathis, >>>>>>>>> On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via >>>>>>>>> extropy-chat wrote: >>>>>>>>> >>>>>>>>>> On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat < >>>>>>>>>> extropy-chat at lists.extropy.org> wrote: >>>>>>>>>> >>>>>>>>>>> On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via >>>>>>>>>>> extropy-chat wrote: >>>>>>>>>>> >>>>>>>>>>>> I think colourness qualities are what the human behaviour >>>>>>>>>>>> associated with distinguishing between colours, describing them, reacting >>>>>>>>>>>> to them emotionally etc. is seen from inside the system. If you make a >>>>>>>>>>>> physical change to the system and perfectly reproduce this behaviour, you >>>>>>>>>>>> will also necessarily perfectly reproduce the colourness qualities. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> An abstract description of the behavior of redness can perfectly >>>>>>>>>>> capture 100% of the behavior, one to one, isomorphically perfectly >>>>>>>>>>> modeled. Are you saying that since you abstractly reproduce 100% of the >>>>>>>>>>> behavior, that you have duplicated the quale? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Here is where I end up misquoting you because I don?t understand >>>>>>>>>> what exactly you mean by ?abstract description?. >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> [image: 3_robots_tiny.png] >>>>>>>>> >>>>>>>>> This is the best possible illustration of what I mean by abstract >>>>>>>>> vs intrinsic physical qualities. >>>>>>>>> The first two represent knowledge with two different intrinsic >>>>>>>>> physical qualities, redness and greenness. >>>>>>>>> "Red" is just an abstract word, composed of strings of ones and >>>>>>>>> zeros. You can't know what it means, by design, without a dictionary. >>>>>>>>> The redness intrinsic quality your brain uses to represent >>>>>>>>> knowledge of 'red' things, is your definition for the abstract word 'red'. >>>>>>>>> >>>>>>>>> >>>>>>>>>> But the specific example I have used is that if you perfectly >>>>>>>>>> reproduce the physical effect of glutamate on the rest of the brain using a >>>>>>>>>> different substrate, and glutamate is involved in redness qualia, then you >>>>>>>>>> necessarily also reproduce the redness qualia. This is because if it were >>>>>>>>>> not so, it would be possible to grossly change the qualia without the >>>>>>>>>> subject noticing any change, which is absurd. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I think the confusion comes in the different ways we think about >>>>>>>>> this: >>>>>>>>> >>>>>>>>> "the physical effect of glutamate on the rest of the brain using a >>>>>>>>> different substrate" >>>>>>>>> >>>>>>>>> Everything in your model seems to be based on this kind of "cause >>>>>>>>> and effect" or "interpretations of interpretations". I think of things in >>>>>>>>> a different way. >>>>>>>>> I would emagine you would say that the causal properties of >>>>>>>>> glutamate or redness would result in someone saying: "That is red." >>>>>>>>> However, to me, the redness quality, alone, isn't the cause of >>>>>>>>> someone saying: "That is Red", as someone could lie, and say: "That is >>>>>>>>> Green", proving the redness isn't the only cause of what the person is >>>>>>>>> saying. 
>>>>>>>>> >>>>>>>> >>>>>>>> The causal properties of the glutamate are basically the properties >>>>>>>> that cause motion in other parts of the system. Consider a glutamate >>>>>>>> molecule as a part of a clockwork mechanism. If you remove the glutamate >>>>>>>> molecule, you will disrupt the movement of the entire clockwork mechanism. >>>>>>>> But if you replace it with a different molecule that has similar physical >>>>>>>> properties, the rest of the clockwork mechanism will continue functioning >>>>>>>> the same. Not all of the physical properties are relevant, and they only >>>>>>>> have to be replicated to within a certain tolerance. >>>>>>>> >>>>>>>> The computational system, and the way the knowledge is >>>>>>>>> consciousnessly represented is different from simple cause and effect. >>>>>>>>> The entire system is aware of all of the intrinsic qualities of >>>>>>>>> each of the pixels on the surface of the strawberry (along with any >>>>>>>>> reasoning for why it would lie or not) >>>>>>>>> And it is because of this composite awareness, that is the cause >>>>>>>>> of the system choosing to say: "that is red", or choose to lie in some way. >>>>>>>>> It is the entire composit 'free will system' that is the initial >>>>>>>>> cause of someone choosing to say something, not any single quality like the >>>>>>>>> redness of a single pixel. >>>>>>>>> For you, everything is just a chain of causes and effects, no >>>>>>>>> composite awareness and no composit free will system involved. >>>>>>>>> >>>>>>>> >>>>>>>> I am proposing that the awareness of the system be completely >>>>>>>> ignored, and only the relevant physical properties be replicated. If this >>>>>>>> is done, then whether you like it or not, the awareness of the system will >>>>>>>> also be replicated. It?s impossible to do one without the other. >>>>>>>> >>>>>>>> If I recall correctly, you admit that the quality of your conscious >>>>>>>>> knowledge is dependent on the particular quality of your redness, so qualia >>>>>>>>> can be thought of as a substrate, on which the quality of your >>>>>>>>> consciousness is dependent, right? If you are only focusing on a different >>>>>>>>> substrate being able to produce the same 'redness behavior' then all you >>>>>>>>> are doing is making two contradictory assumptions. If you take that >>>>>>>>> assumption, then you can prove that nothing, even a redness function can't >>>>>>>>> have redness, for the same reason. There must be something that is >>>>>>>>> redness, and the system must be able to know when redness changes to >>>>>>>>> anything else. All you are saying is that nothing can do that. >>>>>>>>> >>>>>>>> >>>>>>>> I am saying that redness is not a substrate, but it supervenes on a >>>>>>>> certain type of behaviour, regardless of the substrate of its >>>>>>>> implementation. This allows the system to know when the redness changes to >>>>>>>> something else, since the behaviour on which the redness supervenes would >>>>>>>> change to a different behaviour on which different colour qualia supervene. >>>>>>>> >>>>>>>> That is why I constantly ask you what could be responsible for >>>>>>>>> redness. Because whatever you say that is, I could use your same argument >>>>>>>>> and say it can't be that, either. >>>>>>>>> If you could describe to me what redness could be, this would >>>>>>>>> falsify my camp, and I would jump to the functionalist camp. But that is >>>>>>>>> impossible, because all your so-called proof is claiming, is that nothing >>>>>>>>> can be redness. 
>>>>>>>>> >>>>>>>>> If it isn't glutamate that has the redness quality, what can? >>>>>>>>> Nothing you say will work, because of your so-called proof. Because when >>>>>>>>> you have contradictory assumptions you can prove all claims to be both true >>>>>>>>> and false, which has no utility. >>>>>>>>> >>>>>>>> >>>>>>>> Glutamate doesn?t have the redness quality, but glutamate or >>>>>>>> something that functions like glutamate is necessary to produce the redness >>>>>>>> quality. We know this because it is what we observe: certain brain >>>>>>>> structures are needed in order to have certain experiences. We know that it >>>>>>>> can?t be substrate specific because then we could grossly change the qualia >>>>>>>> without the subject noticing, which is absurd, meaning there is no >>>>>>>> difference between having and not having qualia. >>>>>>>> >>>>>>>> Until you can provide some falsifiable example of what could be >>>>>>>>> responsible for your redness quality, further conversation seems to be a >>>>>>>>> waste. Because my assertion is that given your assumptions NOTHING will >>>>>>>>> work, and until you falsify that, with at least one example possibility, >>>>>>>>> why go on with this contradictory assumption where qualitative >>>>>>>>> consciousness, based on substrates like redness and greenness, simply isn't >>>>>>>>> possible? >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> extropy-chat mailing list >>>>>>>>> extropy-chat at lists.extropy.org >>>>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>>>>> >>>>>>>> -- >>>>>>>> Stathis Papaioannou >>>>>>>> _______________________________________________ >>>>>>>> extropy-chat mailing list >>>>>>>> extropy-chat at lists.extropy.org >>>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> extropy-chat mailing list >>>>>>> extropy-chat at lists.extropy.org >>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>>> >>>>>> -- >>>>>> Stathis Papaioannou >>>>>> _______________________________________________ >>>>>> extropy-chat mailing list >>>>>> extropy-chat at lists.extropy.org >>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>>> >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From jasonresch at gmail.com Tue May 10 01:27:28 2022
From: jasonresch at gmail.com (Jason Resch)
Date: Mon, 9 May 2022 21:27:28 -0400
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: References: Message-ID:

On Mon, May 9, 2022, 8:59 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Mon, May 9, 2022 at 6:49 PM Stathis Papaioannou via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, 10 May 2022 at 09:04, Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Redness isn't about the black box functionality, redness is about how
>>> the black box achieves the functionality.
>>
>> That may seem plausible, but the functionalist position is that however
>> the functionality is achieved, redness will be preserved.
>
> In other words, you are not understanding my point, below, that according
> to this argument you can't achieve redness via functionality, either.
> So why do you accept your substitution argument against substrate
> dependence, but not the same substitution argument for why redness can't
> supervene on function (as you claim it does), either?

Great question, Brent. This is the same issue that led Putnam to abandon
functionalism. But I think there's a way out of applying multiple
realizability to attack functionalism.

There's a concept of "substitution level": the idea that there is a
required minimum level of fidelity which must be functionally preserved in
order to faithfully implement the mind.

To see this clearly, consider that a person adding 4 and 5 and a
calculator adding 4 and 5 perform the same function, insofar as at the
highest level they have the same inputs and outputs. But this does not
imply the conscious experience of the calculator is the same as that of
the human, nor that two different humans doing that will experience the
exact same thing while doing the addition in their head.

Whether the consciousness produced by a mind that at some level relies on
bubble sort is the same as one relying on quicksort depends on the level
at which the function is implemented relative to the mind in question. If
the sorting algorithm is sufficiently isolated and at a low enough level,
then any sorting algorithm could be substituted interchangeably without
impacting the mind, just as two functionally equivalent neurons of silicon
or carbon might be swapped out without changing the consciousness.

But now consider a mind consciously implementing the two sorting
algorithms in its high-level thoughts. In that case the conscious
experience of the mind implementing the two algorithms would be different:
you've elevated the function to a level where it can no longer be
substituted without changing the abstract processes and functional
organization of the mind and the information it has access to.

Jason
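Jason's "substitution level" distinction admits a similar hedged sketch
(illustrative only; it reuses bubble_sort and quick_sort from the earlier
sketch, and the dictionary fields are invented for the example):

    # Below the substitution level: only the sort's output enters the
    # mind's state, so any correct implementation can be swapped in
    # without changing anything the mind has access to.
    def mind_low(xs, sort_impl):
        return {"sees": sort_impl(xs)}

    assert mind_low([2, 1], bubble_sort) == mind_low([2, 1], quick_sort)

    # Elevated into the mind's high-level state: the mind knows which
    # procedure it is running, so substitution changes the information
    # it has access to.
    def mind_high(xs, sort_impl):
        return {"sees": sort_impl(xs), "running": sort_impl.__name__}

    assert mind_high([2, 1], bubble_sort) != mind_high([2, 1], quick_sort)

Below the substitution level, the implementation is invisible to the mind;
elevated into the mind's high-level thoughts, the same substitution changes
the mind's state, not just a hidden detail.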
>>>>
>>>> On Mon, May 9, 2022 at 4:51 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Those are not reasons why redness can't supervene.
>>>>>
>>>>> On Mon, May 9, 2022 at 3:42 PM Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>> OK, let me explain in more detail.
>>>>>> Redness can't supervene on a function, because you can substitute the function (say bubble sort) with some other function (quick sort).
>>>>>> So redness can't supervene on a function, either, because "you replace a part (or function) of the brain with a black box that affects the rest of the brain in the same way as the original, the subject must behave the same".
>>>>>> So redness can't supervene on any function.
>>>>>>
>>>>>> On Mon, May 9, 2022 at 4:03 PM Stathis Papaioannou via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>> On Tue, 10 May 2022 at 07:55, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>> Hi Stathis,
>>>>>>>>
>>>>>>>> OK, let me try saying it this way.
>>>>>>>> You use the neural substitution argument to "prove" redness cannot be substrate dependent.
>>>>>>>> Then you conclude that redness "supervenes" on some function.
>>>>>>>> The problem is, you can prove that redness can't "supervene" on any function, via the same neural substitution proof.
>>>>>>>
>>>>>>> It supervenes on any substrate that preserves the redness behaviour. In other words, if you replace a part of the brain with a black box that affects the rest of the brain in the same way as the original, the subject must behave the same and must have the same qualia. It doesn't matter what's in the black box.
>>>>>>>
>>>>>>>> On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via extropy-chat wrote:
>>>>>>>>
>>>>>>>>> On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Stathis,
>>>>>>>>>> On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via extropy-chat wrote:
>>>>>>>>>>
>>>>>>>>>>> On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via extropy-chat wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I think colourness qualities are what the human behaviour associated with distinguishing between colours, describing them, reacting to them emotionally etc. looks like, seen from inside the system. If you make a physical change to the system and perfectly reproduce this behaviour, you will also necessarily perfectly reproduce the colourness qualities.
>>>>>>>>>>>>
>>>>>>>>>>>> An abstract description of the behavior of redness can perfectly capture 100% of the behavior, one to one, isomorphically perfectly modeled. Are you saying that since you abstractly reproduce 100% of the behavior, you have duplicated the quale?
>>>>>>>>>>>
>>>>>>>>>>> Here is where I end up misquoting you, because I don't understand what exactly you mean by "abstract description".
>>>>>>>>>>
>>>>>>>>>> [image: 3_robots_tiny.png]
>>>>>>>>>>
>>>>>>>>>> This is the best possible illustration of what I mean by abstract vs intrinsic physical qualities.
>>>>>>>>>> The first two represent knowledge with two different intrinsic physical qualities, redness and greenness.
>>>>>>>>>> "Red" is just an abstract word, composed of strings of ones and zeros. You can't know what it means, by design, without a dictionary.
>>>>>>>>>> The redness intrinsic quality your brain uses to represent knowledge of 'red' things is your definition for the abstract word 'red'.
>>>>>>>>>>
>>>>>>>>>>> But the specific example I have used is that if you perfectly reproduce the physical effect of glutamate on the rest of the brain using a different substrate, and glutamate is involved in redness qualia, then you necessarily also reproduce the redness qualia. This is because if it were not so, it would be possible to grossly change the qualia without the subject noticing any change, which is absurd.
>>>>>>>>>>
>>>>>>>>>> I think the confusion comes from the different ways we think about this:
>>>>>>>>>>
>>>>>>>>>> "the physical effect of glutamate on the rest of the brain using a different substrate"
>>>>>>>>>>
>>>>>>>>>> Everything in your model seems to be based on this kind of "cause and effect" or "interpretations of interpretations". I think of things in a different way.
>>>>>>>>>> I would imagine you would say that the causal properties of glutamate or redness would result in someone saying: "That is red."
>>>>>>>>>> However, to me, the redness quality, alone, isn't the cause of someone saying "That is red", as someone could lie and say "That is green", proving that redness isn't the only cause of what the person is saying.
>>>>>>>>>
>>>>>>>>> The causal properties of the glutamate are basically the properties that cause motion in other parts of the system. Consider a glutamate molecule as a part of a clockwork mechanism. If you remove the glutamate molecule, you will disrupt the movement of the entire clockwork mechanism. But if you replace it with a different molecule that has similar physical properties, the rest of the clockwork mechanism will continue functioning the same. Not all of the physical properties are relevant, and they only have to be replicated to within a certain tolerance.
>>>>>>>>>
>>>>>>>>>> The computational system, and the way the knowledge is consciously represented, is different from simple cause and effect.
>>>>>>>>>> The entire system is aware of all of the intrinsic qualities of each of the pixels on the surface of the strawberry (along with any reasoning for why it would lie or not).
>>>>>>>>>> And it is this composite awareness that is the cause of the system choosing to say "that is red", or choosing to lie in some way.
>>>>>>>>>> It is the entire composite 'free will system' that is the initial cause of someone choosing to say something, not any single quality like the redness of a single pixel.
>>>>>>>>>> For you, everything is just a chain of causes and effects, with no composite awareness and no composite free will system involved.
>>>>>>>>>
>>>>>>>>> I am proposing that the awareness of the system be completely ignored, and only the relevant physical properties be replicated. If this is done, then whether you like it or not, the awareness of the system will also be replicated. It's impossible to do one without the other.
>>>>>>>>>
>>>>>>>>>> If I recall correctly, you admit that the quality of your conscious knowledge is dependent on the particular quality of your redness, so qualia can be thought of as a substrate on which the quality of your consciousness is dependent, right? If you are only focusing on a different substrate being able to produce the same 'redness behavior', then all you are doing is making two contradictory assumptions. If you take that assumption, then you can prove that nothing, not even a redness function, can have redness, for the same reason. There must be something that is redness, and the system must be able to know when redness changes to anything else. All you are saying is that nothing can do that.
>>>>>>>>>
>>>>>>>>> I am saying that redness is not a substrate, but that it supervenes on a certain type of behaviour, regardless of the substrate of its implementation. This allows the system to know when the redness changes to something else, since the behaviour on which the redness supervenes would change to a different behaviour on which different colour qualia supervene.
>>>>>>>>>
>>>>>>>>>> That is why I constantly ask you what could be responsible for redness. Because whatever you say it is, I could use your same argument and say it can't be that, either.
>>>>>>>>>> If you could describe to me what redness could be, this would falsify my camp, and I would jump to the functionalist camp. But that is impossible, because all your so-called proof is claiming is that nothing can be redness.
>>>>>>>>>>
>>>>>>>>>> If it isn't glutamate that has the redness quality, what can? Nothing you say will work, because of your so-called proof. When you have contradictory assumptions you can prove all claims to be both true and false, which has no utility.
>>>>>>>>>
>>>>>>>>> Glutamate doesn't have the redness quality, but glutamate, or something that functions like glutamate, is necessary to produce the redness quality. We know this because it is what we observe: certain brain structures are needed in order to have certain experiences. We know that it can't be substrate specific because then we could grossly change the qualia without the subject noticing, which is absurd, meaning there would be no difference between having and not having qualia.
>>>>>>>>>
>>>>>>>>>> Until you can provide some falsifiable example of what could be responsible for your redness quality, further conversation seems to be a waste. My assertion is that given your assumptions NOTHING will work, and until you falsify that, with at least one example possibility, why go on with this contradictory assumption, under which qualitative consciousness, based on substrates like redness and greenness, simply isn't possible?
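The "three robots" illustration referenced in the exchange above can be made concrete in code. Here is a minimal sketch (the class names, the 620 nm threshold, and the token "0xFF0000" are invented for illustration, not taken from the thread): three agents that represent knowledge of red things in different ways, two by stand-ins for an intrinsic quality and one by an abstract token that is meaningless without a dictionary, yet all three are functionally identical when asked whether the strawberry is red.

# Sketch: three ways to represent the same 'red' information.
class QualeAgent:
    """Represents colours directly by some internal 'quality' (a stand-in here)."""
    def __init__(self, quality_for_red):
        self.quality_for_red = quality_for_red  # e.g. "redness" or "greenness"

    def sees_red(self, wavelength_nm):
        # Long-wavelength light gets represented with this agent's own quality.
        return self.quality_for_red if wavelength_nm > 620 else None

class SymbolAgent:
    """Represents colours by abstract tokens plus a lookup dictionary."""
    DICTIONARY = {"0xFF0000": "red"}  # the token means nothing by itself

    def sees_red(self, wavelength_nm):
        token = "0xFF0000" if wavelength_nm > 620 else None
        return self.DICTIONARY.get(token)

agents = [QualeAgent("redness"), QualeAgent("greenness"), SymbolAgent()]
for agent in agents:
    # All three report the strawberry (~650 nm) as red: identical
    # functionality, even though how they represent it differs.
    print(type(agent).__name__, "reports red?", agent.sees_red(650) is not None)

The sketch shows only that identical externally observable functionality, by itself, says nothing about which kind of representation sits behind it; that is the gap the two sides above keep arguing across.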
From stathisp at gmail.com  Tue May 10 01:58:02 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 10 May 2022 11:58:02 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References:
Message-ID:

On Tue, 10 May 2022 at 11:00, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> On Mon, May 9, 2022 at 6:49 PM Stathis Papaioannou via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>> On Tue, 10 May 2022 at 09:04, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>
>>> Redness isn't about the black box functionality, redness is about how the black box achieves the functionality.
>>
>> That may seem plausible, but the functionalist position is that however the functionality is achieved, redness will be preserved.
>
> In other words, you are not understanding where I pointed out, below, that you can't achieve redness via functionality either, according to this argument.
> So, why do you accept your substitution argument against substrate dependence, but not the same substitution argument for why redness can't supervene on function, as you claim, either?

The function that must be preserved in order to preserve the qualia is ultimately the behaviour presented to the environment. Obviously, if you swap living tissue for electronic circuits, the function of the new components is different.

> On Mon, May 9, 2022 at 4:53 PM Brent Allsop wrote:
>>
>> Exactly, I couldn't have said it better myself.
>> Those aren't reasons why redness isn't substrate dependent.
>> If you ask the system "How do you do your sorting?", one system must be able to say "bubble sort" and the other must be able to say "quick sort".
>> Just the same as if you asked "What is redness like for you?": one would say your redness, the other would say your greenness.

--
Stathis Papaioannou
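The bubble sort / quick sort analogy above is easy to make concrete. A minimal sketch, assuming nothing beyond the textbook algorithms: two routines whose internals differ completely but which no input-output ("black box") test can distinguish.

import random

def bubble_sort(xs):
    # O(n^2): repeatedly swap adjacent out-of-order pairs.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    # Average O(n log n): partition around a pivot and recurse.
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot]) + [pivot] +
            quick_sort([x for x in rest if x >= pivot]))

# Black-box test: no sequence of inputs distinguishes the two functions,
# even though "how they do their sorting" differs completely.
for _ in range(1000):
    data = [random.randint(0, 99) for _ in range(random.randint(0, 20))]
    assert bubble_sort(data) == quick_sort(data)
print("indistinguishable on every black-box test")

This is the sense in which the substitution argument trades on behaviour: the test exercises everything externally visible, while "how the sorting is done" remains a fact about the internals.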
From brent.allsop at gmail.com  Tue May 10 02:10:36 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 9 May 2022 20:10:36 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References:
Message-ID:

Right, so you are agreeing that what consciousness is like is substrate dependent, at the elemental level.
Elemental greenness is not like elemental redness, and the word 'red' is not like either one, even though all 3 can represent 'red' information sufficiently for the system to tell you the strawberry is red.
From stathisp at gmail.com  Tue May 10 02:26:19 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 10 May 2022 12:26:19 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References:
Message-ID:

On Tue, 10 May 2022 at 12:12, Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Right, so you are agreeing that what consciousness is like is substrate dependent, at the elemental level.
> Elemental greenness is not like elemental redness, and the word 'red' is not like either one, even though all 3 can represent 'red' information sufficiently for the system to tell you the strawberry is red.

No, I don't think consciousness can be tied to any particular substrate or structure. I agree that greenness is different to redness, and I agree that the word 'red' is not like either one. I also think that glutamate and electronic circuits are unlike any qualia; they are different categories of things. If the system can tell that something is red, that does not mean that it has redness qualia. A blind person can use an instrument to tell you that a strawberry is red. However, a blind person with an instrument is not functionally identical to someone with normal vision, since the blind man will readily tell you that he can't see the strawberry, an obvious functional difference.

--
Stathis Papaioannou
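The blind-person example is a reminder that in these substitution arguments "same behaviour" has to mean same behaviour under every probe, not just one. A toy sketch of that point (both classes and probes are invented for illustration):

class SightedObserver:
    def colour_of(self, fruit):
        return "red" if fruit == "strawberry" else "unknown"
    def can_see(self):
        return True

class BlindObserverWithInstrument:
    def colour_of(self, fruit):
        # A colourimeter reading stands in for vision here.
        return "red" if fruit == "strawberry" else "unknown"
    def can_see(self):
        return False  # readily admits to not seeing the fruit

sighted, blind = SightedObserver(), BlindObserverWithInstrument()
print(sighted.colour_of("strawberry") == blind.colour_of("strawberry"))  # True: one probe agrees
print(sighted.can_see() == blind.can_see())  # False: another probe exposes the difference

So matching one behaviour (naming the colour) is not functional identity; the black-box argument only applies when every probe of the replaced part comes out the same.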
From pharos at gmail.com  Tue May 10 09:57:42 2022
From: pharos at gmail.com (BillK)
Date: Tue, 10 May 2022 10:57:42 +0100
Subject: [ExI] Federal Fiscal Shortfall Nears $1 Million Per Household
Message-ID:

The U.S. Treasury has published a major report revealing that the federal
government has amassed $124.1 trillion in debts, liabilities, and unfunded
obligations.
By James D. Agresti, May 5, 2022

Quotes:
Although the report discloses information of crucial import to the citizens
of the United States, Google News indicates that no major media outlet has
informed anyone about it since it was released on February 17, 2022.

Contrary to the media narrative that tax cuts and military spending are to
blame for the runaway national debt, the primary cause is greater spending
on social programs which provide healthcare, income security, education,
nutrition, housing, and cultural services. These programs have grown from
21% of all federal spending in 1960 to 73% in 2020. Under current laws and
policies, CBO projects that almost all future growth in spending will be
due to social programs and interest on the national debt.

While some believe the U.S. government can spend and borrow with abandon
because it can print money, one of the most established laws of economics
is that there is no such thing as a free lunch. The prolific economist
William A. McEachern explains why this is so:

There is no free lunch because all goods and services involve a cost to
someone. The lunch may seem free to you, but it draws scarce resources away
from the production of other goods and services, and whoever provides a
free lunch often expects something in return.

A Russian proverb makes a similar point but with a bit more bite: "The only
place you find free cheese is in a mousetrap."
-----------------

Don't worry!  Limits don't exist for us!  Yee-Ha!

BillK
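The headline figure in the subject line is straightforward to check. A quick sketch of the arithmetic, assuming roughly 128 million U.S. households (a round, Census-scale figure; the report itself may use a slightly different count):

total_obligations = 124.1e12  # dollars: debts, liabilities, unfunded obligations per the report
households = 128e6            # assumed round figure for U.S. households

# 124.1 trillion / 128 million is about $969,500 per household,
# which is where "nears $1 million per household" comes from.
print(f"${total_obligations / households:,.0f} per household")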
From brent.allsop at gmail.com  Tue May 10 13:04:30 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Tue, 10 May 2022 07:04:30 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References:
Message-ID:

Hi Stathis,

[image: 3_robots_tiny.png]

We can say *functionality* is multiply realizable, the above systems being different examples of the same knowledge-of-the-strawberry functionality. We can say the same for *intelligence*.
But if you define "*consciousness*" to be computationally bound elemental intrinsic qualities, like redness and greenness, that is basically saying it is important to ask questions like: what is your consciousness like? Which of the above 3 qualities are you using to paint your conscious knowledge of the strawberry with?
And given that definition of "*consciousness*", isn't this the opposite of: "No, I don't think *consciousness* can be tied to any particular [colorness quality of the] substrate or structure."?
> However, a blind person with an instrument is not functionally
> identical to someone with normal vision, since the blind man will
> readily tell you that he can't see the strawberry, an obvious
> functional difference.
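To make the three-robots picture concrete, here is a toy sketch (all
tokens and names are invented for illustration; this is not a model of
a brain): three systems that represent 'red' with different internal
tokens, each needing its own dictionary to produce the same public
report.

# Toy sketch of the three-robots diagram (invented values).
# Robot 1 represents red with a "redness" token, robot 2 with an
# inverted "greenness" token, robot 3 with the abstract word "red".
representations = {
    "robot1": "REDNESS_QUALITY",
    "robot2": "GREENNESS_QUALITY",  # inverted: greenness represents red
    "robot3": "red",                # abstract word: just bits
}

# Each robot's dictionary maps its internal token to a public report.
dictionaries = {
    "robot1": {"REDNESS_QUALITY": "the strawberry is red"},
    "robot2": {"GREENNESS_QUALITY": "the strawberry is red"},
    "robot3": {"red": "the strawberry is red"},
}

for robot, token in representations.items():
    print(robot, "->", dictionaries[robot][token])
# All three emit identical reports while the internal tokens differ.

Both sides of the thread agree on what this toy shows; they disagree
over whether anything beyond the report is preserved when the internal
tokens are swapped.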
From stathisp at gmail.com  Tue May 10 13:46:45 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 10 May 2022 23:46:45 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References:
Message-ID:

On Tue, 10 May 2022 at 23:06, Brent Allsop via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

> Hi Stathis,
> [image: 3_robots_tiny.png]
>
> We can say *functionality* is multiply realizable, the above systems
> being different examples of the same knowledge-of-the-strawberry
> functionality. We can say the same for *intelligence*. But if you
> define "*consciousness*" to be computationally bound elemental
> intrinsic qualities, like redness and greenness, that is basically
> saying it is important to ask questions like: what is your
> consciousness like? Which of the above 3 qualities are you using to
> paint your conscious knowledge of the strawberry with?
>
> And given that definition of "*consciousness*", isn't this the
> opposite of: "No, I don't think *consciousness* can be tied to any
> particular [colorness quality of the] substrate or structure."?

If the three subjects differ in their behaviour, such as their
description of the strawberry or the nature of the redness experience,
then they have different qualia. If they say exactly the same things
under all possible circumstances about strawberries, redness,
greenness and everything else, they have the same qualia.
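The thread's own bubble-sort/quick-sort analogy makes this criterion
concrete. A minimal, runnable sketch (purely illustrative, not a claim
about anything biological): two internally different routines whose
behaviour presented to the environment is identical for every input.

import random

def bubble_sort(xs):
    # One internal realization: repeated adjacent swaps.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    # A different internal realization: recursive partitioning.
    if len(xs) <= 1:
        return list(xs)
    pivot, *rest = xs
    return (quick_sort([x for x in rest if x < pivot]) + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

# Seen as black boxes, the two are indistinguishable:
for _ in range(1000):
    data = [random.randint(0, 99) for _ in range(20)]
    assert bubble_sort(data) == quick_sort(data)
print("identical behaviour under all tested inputs")

On the functionalist reading, swapping one box for the other changes
nothing the rest of the system can detect; on Brent's reading, the
swap changes how the result is achieved even though it never changes
what is achieved, and the "how" is what a colorness quality is.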
--
Stathis Papaioannou

From brent.allsop at gmail.com  Tue May 10 14:00:10 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Tue, 10 May 2022 08:00:10 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References:
Message-ID:

Yes, that is all true, but it is still missing the point.
There must be something in the system which has a colorness quality.
You must be able to change redness to greenness, and if you do, the
system must be able to report that the quality has changed. If that
functionality is not included somewhere in the system, it does not
have sufficient functionality to be considered conscious.

On Tue, May 10, 2022 at 7:48 AM Stathis Papaioannou via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

> If the three subjects differ in their behaviour, such as their
> description of the strawberry or the nature of the redness
> experience, then they have different qualia. If they say exactly the
> same things under all possible circumstances about strawberries,
> redness, greenness and everything else, they have the same qualia.
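Brent's "report the change" requirement can be put in toy code (a
sketch with invented names; the closing comment notes the
functionalist rejoinder): swap the internal quality codes and ask
whether the system can report the change.

# Toy sketch of the "report the change" requirement (invented names).
class ToySystem:
    def __init__(self):
        self.codes = {"red": "REDNESS", "green": "GREENNESS"}
        self.memory = dict(self.codes)      # remembered qualities

    def invert(self):
        # The substitution: redness and greenness swapped internally.
        self.codes["red"], self.codes["green"] = (
            self.codes["green"], self.codes["red"])

    def changed_qualities(self):
        # The requirement: compare current qualities against memory.
        return {c for c in self.codes if self.codes[c] != self.memory[c]}

s = ToySystem()
s.invert()
print(s.changed_qualities())    # {'red', 'green'}: the swap is reportable

# The functionalist rejoinder: a substitution that also rewrote
# s.memory (i.e. replicated the whole black box, reporting included)
# would make changed_qualities() return an empty set, and the dispute
# is over whether anything real would have changed in that case.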
From spike at rainier66.com  Tue May 10 14:22:30 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 10 May 2022 07:22:30 -0700
Subject: [ExI] Federal Fiscal Shortfall Nears $1 Million Per Household
In-Reply-To:
References:
Message-ID: <005601d86479$62232340$266969c0$@rainier66.com>

-----Original Message-----
From: extropy-chat On Behalf Of BillK via extropy-chat
Subject: [ExI] Federal Fiscal Shortfall Nears $1 Million Per Household

The U.S. Treasury has published a major report revealing that the
federal government has amassed $124.1 trillion in debts, liabilities,
and unfunded obligations.
By James D. Agresti, May 5, 2022
https://www.justfactsdaily.com/federal-fiscal-shortfall-nears-1-million-per-household
...
Quotes:
Although the report discloses information of crucial import to the
citizens of the United States, Google News indicates that no major
media outlet has informed anyone about it since it was released on
February 17, 2022.
...
-----------------
>...Don't worry!  Limits don't exist for us!  Yee-Ha!  BillK
_______________________________________________

BillK, I have been told the US Government cannot go bankrupt.
Reasoning: it never has in the past. Somehow, I fail to find that
reassuring. At least since the mid 90s, it has been very clear to me
that the fed has been running up debts it can only deal with through
massive inflation. That might explain the popularity of BitCoin.

spike
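A rough sketch of the inflation arithmetic behind that worry
(illustrative rates only, not forecasts): the time for a fixed nominal
debt to lose half its real value is ln 2 / ln(1 + r).

import math

# Years for inflation to halve the real value of a fixed nominal debt,
# at a few illustrative inflation rates.
for rate in (0.02, 0.05, 0.08):
    years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.0%} inflation -> real value halves in ~{years:.0f} years")
# 2% -> ~35 years, 5% -> ~14 years, 8% -> ~9 years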
From stathisp at gmail.com  Tue May 10 15:14:19 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Wed, 11 May 2022 01:14:19 +1000
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Wed, 11 May 2022 at 00:02, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Yes, that is all true, but it is still missing the point.
> There must be something in the system which has a colorness quality.
> You must be able to change redness to greenness, and if you do, the system
> must be able to report that the quality has changed.
> If that functionality is not included somewhere in the system, it does not
> have sufficient functionality to be considered conscious.

There is something in the system that gives rise to colourness qualities,
but it can be replicated by anything else that replicates the reporting of
it.

> On Tue, May 10, 2022 at 7:48 AM Stathis Papaioannou via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, 10 May 2022 at 23:06, Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Hi Stathis,
>>> [image: 3_robots_tiny.png]
>>>
>>> We can say *functionality* is multiply realizable, the above systems
>>> being different examples of the same knowledge of the strawberry
>>> functionality. We can say the same for *intelligence*.
>>> But if you define "*consciousness*" to be computationally bound
>>> elemental intrinsic qualities, like redness and greenness, that is
>>> basically saying it is important to ask questions like: what is your
>>> consciousness like? Which of the above 3 qualities are you using to
>>> paint your conscious knowledge of the strawberry with?
>>> And given that definition of "*consciousness*", isn't this the opposite
>>> of: "No, I don't think *consciousness* can be tied to any particular
>>> [colorness quality of the] substrate or structure."
>>
>> If the three subjects differ in their behaviour, such as their
>> description of the strawberry or the nature of the redness experience,
>> then they have different qualia. If they say exactly the same things
>> under all possible circumstances about strawberries, redness, greenness
>> and everything else, they have the same qualia.
>>
>>> On Mon, May 9, 2022 at 8:27 PM Stathis Papaioannou via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Tue, 10 May 2022 at 12:12, Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Right, so you are agreeing that what consciousness is like is
>>>>> substrate dependent, at the elemental level.
>>>>> Elemental greenness is not like elemental redness, and the word 'red'
>>>>> is not like either one, even though all 3 can represent 'red'
>>>>> information sufficiently for the system to tell you the strawberry is
>>>>> red.
>>>>
>>>> No, I don't think consciousness can be tied to any particular substrate
>>>> or structure. I agree that greenness is different to redness and I
>>>> agree that the word 'red' is not like either one. I also think that
>>>> glutamate and electronic circuits are unlike any qualia; they are
>>>> different categories of things. If the system can tell that something
>>>> is red, that does not mean that it has redness qualia. A blind person
>>>> can use an instrument to tell you that a strawberry is red. However, a
>>>> blind person with an instrument is not functionally identical to
>>>> someone with normal vision, since the blind man will readily tell you
>>>> that he can't see the strawberry, an obvious functional difference.
>>>>
>>>>> On Mon, May 9, 2022 at 7:59 PM Stathis Papaioannou via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>> On Tue, 10 May 2022 at 11:00, Brent Allsop via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>> On Mon, May 9, 2022 at 6:49 PM Stathis Papaioannou via extropy-chat
>>>>>>> < extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>> On Tue, 10 May 2022 at 09:04, Brent Allsop via extropy-chat <
>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>
>>>>>>>>> Redness isn't about the black box functionality, redness is about
>>>>>>>>> how the black box achieves the functionality.
>>>>>>>>
>>>>>>>> That may seem plausible, but the functionalist position is that
>>>>>>>> however the functionality is achieved, redness will be preserved.
>>>>>>>
>>>>>>> In other words, you are not understanding where I pointed out,
>>>>>>> below, that you can't achieve redness via functionality, either,
>>>>>>> according to this argument.
>>>>>>> So, why do you accept your substitution argument against substrate
>>>>>>> dependence, but not the same substitution argument for why redness
>>>>>>> can't supervene on function, as you claim, either?
>>>>>>
>>>>>> The function that must be preserved in order to preserve the qualia
>>>>>> is ultimately the behaviour presented to the environment. Obviously
>>>>>> if you swap living tissue for electronic circuits the function of the
>>>>>> new components is different.
>>>>>>
>>>>>>>>> On Mon, May 9, 2022 at 4:53 PM Brent Allsop wrote:
>>>>>>>>>
>>>>>>>>>> Exactly, I couldn't have said it better myself.
>>>>>>>>>> Those aren't reasons why redness isn't substrate dependent.
>>>>>>>>>> If you ask the system "How do you do your sorting?", one system
>>>>>>>>>> must be able to say "bubble sort" and the other must be able to
>>>>>>>>>> say "quick sort", just the same as if you asked "What is redness
>>>>>>>>>> like for you?", one would say your redness, the other would say
>>>>>>>>>> your greenness.
>>>>>>>>>>
>>>>>>>>>> On Mon, May 9, 2022 at 4:51 PM Adrian Tymes via extropy-chat <
>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> Those are not reasons why redness can't supervene.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, May 9, 2022 at 3:42 PM Brent Allsop via extropy-chat <
>>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> OK, let me explain in more detail.
>>>>>>>>>>>> Redness can't supervene on a function, because you can
>>>>>>>>>>>> substitute the function (say bubble sort) with some other
>>>>>>>>>>>> function (quick sort).
>>>>>>>>>>>> So redness can't supervene on a function, either, because "you
>>>>>>>>>>>> replace a part (or function) of the brain with a black box that
>>>>>>>>>>>> affects the rest of the brain in the same way as the original,
>>>>>>>>>>>> the subject must behave the same".
>>>>>>>>>>>> So redness can't supervene on any function.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, May 9, 2022 at 4:03 PM Stathis Papaioannou via
>>>>>>>>>>>> extropy-chat wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, 10 May 2022 at 07:55, Brent Allsop via extropy-chat <
>>>>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Stathis,
>>>>>>>>>>>>>> OK, let me try saying it this way.
>>>>>>>>>>>>>> You use the neural substitution argument to "prove" redness
>>>>>>>>>>>>>> cannot be substrate dependent.
>>>>>>>>>>>>>> Then you conclude that redness "supervenes" on some function.
>>>>>>>>>>>>>> The problem is, you can prove that redness can't "supervene"
>>>>>>>>>>>>>> on any function, via the same neural substitution proof.
>>>>>>>>>>>>>
>>>>>>>>>>>>> It supervenes on any substrate that preserves the redness
>>>>>>>>>>>>> behaviour. In other words, if you replace a part of the brain
>>>>>>>>>>>>> with a black box that affects the rest of the brain in the
>>>>>>>>>>>>> same way as the original, the subject must behave the same and
>>>>>>>>>>>>> must have the same qualia. It doesn't matter what's in the
>>>>>>>>>>>>> black box.
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via
>>>>>>>>>>>>>> extropy-chat wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat <
>>>>>>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Stathis,
>>>>>>>>>>>>>>>> On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via
>>>>>>>>>>>>>>>> extropy-chat wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat
>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via
>>>>>>>>>>>>>>>>>> extropy-chat wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I think colourness qualities are what the human
>>>>>>>>>>>>>>>>>>> behaviour associated with distinguishing between
>>>>>>>>>>>>>>>>>>> colours, describing them, reacting to them emotionally
>>>>>>>>>>>>>>>>>>> etc. looks like from inside the system. If you make a
>>>>>>>>>>>>>>>>>>> physical change to the system and perfectly reproduce
>>>>>>>>>>>>>>>>>>> this behaviour, you will also necessarily perfectly
>>>>>>>>>>>>>>>>>>> reproduce the colourness qualities.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> An abstract description of the behavior of redness can
>>>>>>>>>>>>>>>>>> perfectly capture 100% of the behavior, one to one,
>>>>>>>>>>>>>>>>>> isomorphically perfectly modeled. Are you saying that
>>>>>>>>>>>>>>>>>> since you abstractly reproduce 100% of the behavior, you
>>>>>>>>>>>>>>>>>> have duplicated the quale?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Here is where I end up misquoting you, because I don't
>>>>>>>>>>>>>>>>> understand what exactly you mean by "abstract description".
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: 3_robots_tiny.png]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This is the best possible illustration of what I mean by
>>>>>>>>>>>>>>>> abstract vs intrinsic physical qualities.
>>>>>>>>>>>>>>>> The first two represent knowledge with two different
>>>>>>>>>>>>>>>> intrinsic physical qualities, redness and greenness.
>>>>>>>>>>>>>>>> "Red" is just an abstract word, composed of strings of ones
>>>>>>>>>>>>>>>> and zeros. You can't know what it means, by design, without
>>>>>>>>>>>>>>>> a dictionary. The redness intrinsic quality your brain uses
>>>>>>>>>>>>>>>> to represent knowledge of 'red' things is your definition
>>>>>>>>>>>>>>>> for the abstract word 'red'.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> But the specific example I have used is that if you
>>>>>>>>>>>>>>>>> perfectly reproduce the physical effect of glutamate on
>>>>>>>>>>>>>>>>> the rest of the brain using a different substrate, and
>>>>>>>>>>>>>>>>> glutamate is involved in redness qualia, then you
>>>>>>>>>>>>>>>>> necessarily also reproduce the redness qualia. This is
>>>>>>>>>>>>>>>>> because if it were not so, it would be possible to grossly
>>>>>>>>>>>>>>>>> change the qualia without the subject noticing any change,
>>>>>>>>>>>>>>>>> which is absurd.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I think the confusion comes in the different ways we think
>>>>>>>>>>>>>>>> about this:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> "the physical effect of glutamate on the rest of the brain
>>>>>>>>>>>>>>>> using a different substrate"
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Everything in your model seems to be based on this kind of
>>>>>>>>>>>>>>>> "cause and effect" or "interpretations of interpretations".
>>>>>>>>>>>>>>>> I think of things in a different way.
>>>>>>>>>>>>>>>> I would imagine you would say that the causal properties of
>>>>>>>>>>>>>>>> glutamate or redness would result in someone saying: "That
>>>>>>>>>>>>>>>> is red." However, to me, the redness quality, alone, isn't
>>>>>>>>>>>>>>>> the cause of someone saying "That is red", as someone could
>>>>>>>>>>>>>>>> lie and say "That is green", proving the redness isn't the
>>>>>>>>>>>>>>>> only cause of what the person is saying.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The causal properties of the glutamate are basically the
>>>>>>>>>>>>>>> properties that cause motion in other parts of the system.
>>>>>>>>>>>>>>> Consider a glutamate molecule as a part of a clockwork
>>>>>>>>>>>>>>> mechanism. If you remove the glutamate molecule, you will
>>>>>>>>>>>>>>> disrupt the movement of the entire clockwork mechanism. But
>>>>>>>>>>>>>>> if you replace it with a different molecule that has similar
>>>>>>>>>>>>>>> physical properties, the rest of the clockwork mechanism
>>>>>>>>>>>>>>> will continue functioning the same. Not all of the physical
>>>>>>>>>>>>>>> properties are relevant, and they only have to be replicated
>>>>>>>>>>>>>>> to within a certain tolerance.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The computational system, and the way the knowledge is
>>>>>>>>>>>>>>>> consciously represented, is different from simple cause and
>>>>>>>>>>>>>>>> effect. The entire system is aware of all of the intrinsic
>>>>>>>>>>>>>>>> qualities of each of the pixels on the surface of the
>>>>>>>>>>>>>>>> strawberry (along with any reasoning for why it would lie
>>>>>>>>>>>>>>>> or not), and it is because of this composite awareness that
>>>>>>>>>>>>>>>> the system chooses to say "that is red", or chooses to lie
>>>>>>>>>>>>>>>> in some way. It is the entire composite 'free will system'
>>>>>>>>>>>>>>>> that is the initial cause of someone choosing to say
>>>>>>>>>>>>>>>> something, not any single quality like the redness of a
>>>>>>>>>>>>>>>> single pixel. For you, everything is just a chain of causes
>>>>>>>>>>>>>>>> and effects, no composite awareness and no composite free
>>>>>>>>>>>>>>>> will system involved.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am proposing that the awareness of the system be
>>>>>>>>>>>>>>> completely ignored, and only the relevant physical
>>>>>>>>>>>>>>> properties be replicated. If this is done, then whether you
>>>>>>>>>>>>>>> like it or not, the awareness of the system will also be
>>>>>>>>>>>>>>> replicated. It's impossible to do one without the other.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If I recall correctly, you admit that the quality of your
>>>>>>>>>>>>>>>> conscious knowledge is dependent on the particular quality
>>>>>>>>>>>>>>>> of your redness, so qualia can be thought of as a
>>>>>>>>>>>>>>>> substrate, on which the quality of your consciousness is
>>>>>>>>>>>>>>>> dependent, right? If you are only focusing on a different
>>>>>>>>>>>>>>>> substrate being able to produce the same 'redness
>>>>>>>>>>>>>>>> behavior', then all you are doing is making two
>>>>>>>>>>>>>>>> contradictory assumptions. If you take that assumption,
>>>>>>>>>>>>>>>> then you can prove that nothing, not even a redness
>>>>>>>>>>>>>>>> function, can have redness, for the same reason. There must
>>>>>>>>>>>>>>>> be something that is redness, and the system must be able
>>>>>>>>>>>>>>>> to know when redness changes to anything else. All you are
>>>>>>>>>>>>>>>> saying is that nothing can do that.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am saying that redness is not a substrate, but it
>>>>>>>>>>>>>>> supervenes on a certain type of behaviour, regardless of the
>>>>>>>>>>>>>>> substrate of its implementation. This allows the system to
>>>>>>>>>>>>>>> know when the redness changes to something else, since the
>>>>>>>>>>>>>>> behaviour on which the redness supervenes would change to a
>>>>>>>>>>>>>>> different behaviour on which different colour qualia
>>>>>>>>>>>>>>> supervene.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That is why I constantly ask you what could be responsible
>>>>>>>>>>>>>>>> for redness. Because whatever you say that is, I could use
>>>>>>>>>>>>>>>> your same argument and say it can't be that, either.
>>>>>>>>>>>>>>>> If you could describe to me what redness could be, this
>>>>>>>>>>>>>>>> would falsify my camp, and I would jump to the
>>>>>>>>>>>>>>>> functionalist camp. But that is impossible, because all
>>>>>>>>>>>>>>>> your so-called proof is claiming is that nothing can be
>>>>>>>>>>>>>>>> redness.
>>>>>>>>>>>>>>>> If it isn't glutamate that has the redness quality, what
>>>>>>>>>>>>>>>> can? Nothing you say will work, because of your so-called
>>>>>>>>>>>>>>>> proof. Because when you have contradictory assumptions you
>>>>>>>>>>>>>>>> can prove all claims to be both true and false, which has
>>>>>>>>>>>>>>>> no utility.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Glutamate doesn't have the redness quality, but glutamate or
>>>>>>>>>>>>>>> something that functions like glutamate is necessary to
>>>>>>>>>>>>>>> produce the redness quality. We know this because it is what
>>>>>>>>>>>>>>> we observe: certain brain structures are needed in order to
>>>>>>>>>>>>>>> have certain experiences. We know that it can't be substrate
>>>>>>>>>>>>>>> specific because then we could grossly change the qualia
>>>>>>>>>>>>>>> without the subject noticing, which is absurd, meaning there
>>>>>>>>>>>>>>> is no difference between having and not having qualia.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Until you can provide some falsifiable example of what
>>>>>>>>>>>>>>>> could be responsible for your redness quality, further
>>>>>>>>>>>>>>>> conversation seems to be a waste. Because my assertion is
>>>>>>>>>>>>>>>> that given your assumptions NOTHING will work, and until
>>>>>>>>>>>>>>>> you falsify that, with at least one example possibility,
>>>>>>>>>>>>>>>> why go on with this contradictory assumption where
>>>>>>>>>>>>>>>> qualitative consciousness, based on substrates like redness
>>>>>>>>>>>>>>>> and greenness, simply isn't possible?

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: 
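The bubble sort / quick sort example in this exchange is easy to make
concrete. A minimal sketch in Python (names and data are illustrative): two
sorters that differ internally but are indistinguishable from their
input-output behaviour alone, which is what the functionalist substitution
argument trades on; Brent's question "How do you do your sorting?" asks for
an extra, introspective piece of functionality on top of that behaviour.

# Two sorters with identical input-output behaviour but different internals.
def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot, *rest = xs
    return (quick_sort([x for x in rest if x < pivot]) + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

data = [3, 1, 4, 1, 5, 9, 2, 6]
# No test on the outputs alone can tell the two implementations apart:
assert bubble_sort(data) == quick_sort(data) == sorted(data)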
From jasonresch at gmail.com  Tue May 10 15:34:14 2022
From: jasonresch at gmail.com (Jason Resch)
Date: Tue, 10 May 2022 11:34:14 -0400
Subject: Re: [ExI] Federal Fiscal Shortfall Nears $1 Million Per Household
In-Reply-To: <005601d86479$62232340$266969c0$@rainier66.com>
References: <005601d86479$62232340$266969c0$@rainier66.com>
Message-ID: 

On Tue, May 10, 2022, 10:23 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> BillK, I have been told the US Government cannot go bankrupt.  Reasoning:
> it never has in the past.

It's defaulted at least 4 times:

In 1790, it defaulted on its war debts, deferring interest payments for 10
years in the Funding Act of 1790.

In 1861, it created greenbacks (legal tender), but the next year refused
on-demand convertibility to gold.

Liberty bonds issued in 1918 were to be repaid in gold, but when the gold
reserves ran dry in 1933, the government stopped repaying them in gold.
It also led the government to confiscate gold from the population, then
afterwards revalue the dollar from $20/oz of gold to $35/oz.

Related to the debts of the Vietnam war, the 1971 termination of the
dollar's gold convertibility can be seen as a default on the foreign
reserves other nations held in dollars.

In 1979, the Treasury failed to send out debt payments for treasury bonds
on time, and further refused to pay interest for being late.

So depending on how you count them, that's 4-5 times in the 250 year
history of the U.S., and 3 in the last century. Almost all the defaults
were related to war debts and gold convertibility.

Now that the dollar is entirely untethered from gold, it remains to be seen
whether default or money printing is the preferred way out. In 1979, after
leaving the gold standard, it chose default, and not to compensate bond
holders. Money printing can also be viewed as a kind of default, but one
where the loss is not borne entirely by the creditors, but is instead
distributed among all holders of dollars and dollar-denominated debts.

Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Tue May 10 16:58:34 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Tue, 10 May 2022 10:58:34 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Tue, May 10, 2022 at 9:15 AM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> There is something in the system that gives rise to colourness qualities,

This is a falsifiable claim. The prediction is that our abstract
description of something like glutamate is a description of redness, and
that nothing else but that will be able to get redness to 'arise'. Nothing
will have an intrinsic redness quality without glutamate (or whatever it is
that has the redness quality).

> but it can be replicated by anything else that replicates the reporting
> of it.

Yes, I agree. The prediction is that the neural substitution will fail,
because nothing but glutamate has the redness quality.
And saying that consciousness must work in a discrete logic way, where
there is no substrate dependence on redness, and no ability to report the
change to greenness, is missing the point. Because if that is the case,
redness can't supervene on anything, including functions, for the same
substitution reason.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com  Tue May 10 20:24:29 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Wed, 11 May 2022 06:24:29 +1000
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Wed, 11 May 2022 at 03:00, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> This is a falsifiable claim. The prediction is that our abstract
> description of something like glutamate is a description of redness, and
> that nothing else but that will be able to get redness to 'arise'.
> Nothing will have an intrinsic redness quality without glutamate (or
> whatever it is that has the redness quality).
>
> Yes, I agree. The prediction is that the neural substitution will fail,
> because nothing but glutamate has the redness quality.
> Because if that is the case, redness can't supervene on anything,
> including functions, for the same substitution reason.

You have never explained what you think would happen if the glutamate were
replaced with something that could replicate the observable behaviour, the
pattern of glutamate molecules' interactions with other molecules via the
electromagnetic force, but not the qualia.

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Wed May 11 00:30:21 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Tue, 10 May 2022 18:30:21 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

There are lots of competing theories making predictions about what qualia
are. It will be an answer to the question: which of all our descriptions of
stuff in the brain is a description of redness?
My assumption is there is some necessary and sufficient set of observable
physical behavior or chemical reactions which are the descriptions of
redness. So anything that is not the qualia is anything outside of this
necessary and sufficient set of physics, and, by definition, if anything
varies from the necessary and sufficient set, it would no longer be
redness.
I like to think of it as being similar to when you burn certain metals: it
emits different colored light. Obviously, if you change or remove the
metal, the color changes. And nothing but those metals will produce the
same chemical reaction that emits that particular color.
It isn't the light, since many things could produce the same light; it is
possibly only the particular chemical reaction that can be computationally
bound, such that if it changes, the redness will change in a way that the
entire system must be aware of.

On Tue, May 10, 2022 at 2:25 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> You have never explained what you think would happen if the glutamate
> were replaced with something that could replicate the observable
> behaviour, the pattern of glutamate molecules' interactions with other
> molecules via the electromagnetic force, but not the qualia.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jasonresch at gmail.com  Wed May 11 01:10:21 2022
From: jasonresch at gmail.com (Jason Resch)
Date: Tue, 10 May 2022 21:10:21 -0400
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Tue, May 10, 2022, 8:31 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> There are lots of competing theories making predictions about what qualia
> are. It will be an answer to the question: which of all our descriptions
> of stuff in the brain is a description of redness?

I think it is an error to presume it must be "stuff". Many things we know
are not stuff: a game of chess, the story of Gulliver's Travels, bits, etc.
What all these things have in common is that they can be implemented using
various kinds of stuff.
You can implement a chess game using marble, wooden, or plastic pieces. You
can have the story of Gulliver's Travels in hard cover, ebook, PDF, or
webpage. You can implement bits in punch cards, magnetic tape, optical
flashes, or charges in flash memory. None of these things (chess, the
story, the bits) is stuff; they are ideas and abstractions. Any instance
can be made of particular stuff, but the material is largely irrelevant.

My question to you is: how do you know "red" is a certain kind of stuff,
rather than an idea or abstraction?

Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
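Jason's point about multiple realizability can be sketched in code. A toy
illustration (all names are hypothetical): the same abstract bit pattern,
realized in three different "substrates", decodes to the same word.

# One abstraction (the bits of the word "red"), three realizations.
bits = ''.join(f"{ord(c):08b}" for c in "red")         # the abstract pattern

as_bools   = [b == '1' for b in bits]                  # e.g. voltage levels
as_symbols = ['x' if b == '1' else '.' for b in bits]  # e.g. punched holes
as_ints    = [int(b) for b in bits]                    # e.g. magnetic domains

def decode(seq, one):
    # Recover the abstract pattern from any substrate, given what counts as a 1.
    s = ''.join('1' if x == one else '0' for x in seq)
    return ''.join(chr(int(s[i:i + 8], 2)) for i in range(0, len(s), 8))

# Different stuff, same information: each substrate yields the same word.
assert decode(as_bools, True) == decode(as_symbols, 'x') == decode(as_ints, 1) == "red"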
From stathisp at gmail.com  Wed May 11 01:13:09 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Wed, 11 May 2022 11:13:09 +1000
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Wed, 11 May 2022 at 10:32, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> There are lots of competing theories making predictions about what qualia
> are. It will be an answer to the question: which of all our descriptions
> of stuff in the brain is a description of redness?
> My assumption is there is some necessary and sufficient set of observable
> physical behavior or chemical reactions which are the descriptions of
> redness.

It's possible that for technical reasons nothing can be found that will
affect the rest of the system in the same way as glutamate does. However,
saying this is avoiding the question. There is no logical reason why a
substitute either for glutamate or one of the thousands of other components
in the brain could not be found. As an example that would probably work,
substitute some of the atoms in a molecule with different isotopes. This is
a quite common technique to track molecules in biomedical research. You can
order some online if you want:
https://www.moravek.com/what-exactly-is-radiolabeling/
So radiolabeled glutamate will behave the same as regular glutamate, but it
is a different substrate. What do you think would happen to the qualia if
radiolabeled glutamate replaced regular glutamate?

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From msd001 at gmail.com  Wed May 11 01:57:56 2022
From: msd001 at gmail.com (Mike Dougherty)
Date: Tue, 10 May 2022 21:57:56 -0400
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Tue, May 10, 2022, 9:12 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> My question to you is: how do you know "red" is a certain kind of stuff,
> rather than an idea or abstraction?

This is a very good question.

I'd also like to know how the millions of kinds of stuff necessary to
represent the various stuffness of stuff actually fit inside the brain.

I'm also wondering how such things as the fourness of four explain 2+2,
2x2, 2^2: is there a twoness of two, a plusness of plus, etc. that create
an ensemble that would be indiscernible from the fourness of four?

I know, I know... words are hard.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
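Mike's 2+2 / 2x2 / 2^2 puzzle can be restated in the thread's functionalist
terms. A toy sketch: three computations with different internal structure
whose results are indiscernible.

# Three different routes to one "fourness":
routes = {"2+2": 2 + 2, "2*2": 2 * 2, "2**2": 2 ** 2}
assert len(set(routes.values())) == 1  # every route lands on the same 4
print(routes)  # {'2+2': 4, '2*2': 4, '2**2': 4}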
From foozler83 at gmail.com  Thu May 12 01:10:22 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Wed, 11 May 2022 20:10:22 -0500
Subject: [ExI] nutrition report
Message-ID: 

According to the BBC, scientists report that the following, in order, are
the most nutritious foods, almonds being number 1. Look at #8.  bill w

The top 10 most nutritious foods ranked in order: almonds, cherimoya, ocean
perch, flatfish, chia seeds, pumpkin seeds, swiss chard, pork fat, beet
greens, snapper.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Thu May 12 22:21:11 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 12 May 2022 16:21:11 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

It is a necessary truth that if you know something, that knowledge must be
something. And if you have knowledge that has a redness quality, there must
be something in your brain that has that quality that is your conscious
knowledge. This is true whether that stuff is some kind of "material",
"electromagnetic field", "spiritual", or "functional" stuff; it remains a
fact that your knowledge, composed of that, has a redness quality.

On Tue, May 10, 2022 at 7:59 PM Mike Dougherty via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I'd also like to know how the millions of kinds of stuff necessary to
> represent the various stuffness of stuff actually fit inside the brain.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Thu May 12 22:23:29 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 12 May 2022 16:23:29 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Hi Stathis,
I think we are in agreement with that. There may be different things, or
variants of things, that all have a redness quality.
But there must be something physically different from that which has the
greenness quality, and you can't get redness from the stuff that has a
greenness quality, unless you have a dictionary that says greenness is
representing red.

On Tue, May 10, 2022 at 7:17 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> So radiolabeled glutamate will behave the same as regular glutamate, but
> it is a different substrate. What do you think would happen to the qualia
> if radiolabeled glutamate replaced regular glutamate?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike at rainier66.com  Thu May 12 22:30:33 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Thu, 12 May 2022 15:30:33 -0700
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: <003401d8664f$e5303160$af909420$@rainier66.com>

From: extropy-chat On Behalf Of Brent Allsop via extropy-chat
Subject: Re: [ExI] Is Artificial Life Conscious?

>...It is a necessary truth that if you know something, that knowledge must
be something. And if you have knowledge that has a redness quality, there
must be something in your brain that has that quality that is your
conscious knowledge.
This is true whether that stuff is some kind of "material",
"electromagnetic field", "spiritual", or "functional" stuff; it remains a
fact that your knowledge, composed of that, has a redness quality...

Hi Brent, if I understand it (I don't necessarily) the qualia notion is
another approach to the question "is consciousness substrate dependent?"

Answer: I don't know.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From msd001 at gmail.com  Thu May 12 22:33:36 2022
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 12 May 2022 18:33:36 -0400
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Thu, May 12, 2022, 6:23 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> It is a necessary truth that if you know something, that knowledge must
> be something.

You know we can agree on a conversational level despite having exchanged
zero informational content, right?

You seem to be accepting that my appreciation or understanding of ...
well, anything... is antithetical to every premise you have thus far
attempted to promote.

"Stuffness comes from fundamental stuff" ... is hardly a point of
successful understanding.

You can walk a mile in my shoes, but all you can assert is how that feels
on your feet. Maybe that's all you ever wanted. Maybe that's all you'll
ever get. Idk.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Thu May 12 22:57:13 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 12 May 2022 16:57:13 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Hi Mike,
Sorry, I didn't mean to seem as if I wasn't trying to understand your POV.
In fact, I very much care about others' POVs. That's why we created
Canonizer, with camps, in the first place. I'm just saying what I currently
think, and basically asking (even if not explicitly) if anyone else sees
things differently. That's why the most important thing on Canonizer is the
"create a competing camp here" button.

On Thu, May 12, 2022 at 4:43 PM Mike Dougherty via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> "Stuffness comes from fundamental stuff" ... is hardly a point of
> successful understanding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Thu May 12 23:07:17 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 12 May 2022 17:07:17 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: <003401d8664f$e5303160$af909420$@rainier66.com>
References: <003401d8664f$e5303160$af909420$@rainier66.com>
Message-ID: 

Hi Spike,
I agree with you, but I think functionalists like Stathis disagree. And
functionalism seems to be the leading popular consensus.
Even Stathis admits his knowledge of a strawberry has a redness quality,
which is different from greenness, and if redness changed to greenness, it
would be different; i.e. its quality is dependent on the quality of
whatever substrate that is.
What I don't understand is how anyone can think redness, itself, can
"arise" from (or supervene on?) different substrates with different
properties in a substrate independent way. And even if that was the case,
that there was some function from which "redness" could arise, that would
still be a physical fact, the quality of which consciousness would be
dependent on.

On Thu, May 12, 2022 at 4:36 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Hi Brent, if I understand it (I don't necessarily) the qualia notion is
> another approach to the question "is consciousness substrate dependent?"
>
> Answer: I don't know.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com  Fri May 13 00:34:15 2022
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 13 May 2022 10:34:15 +1000
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

On Fri, 13 May 2022 at 08:30, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Hi Stathis,
> I think we are in agreement with that. There may be different things, or
> variants of things, that all have a redness quality.

So you agree it isn't substrate specific.

> But there must be something physically different from that which has the
> greenness quality, and you can't get redness from the stuff that has a
> greenness quality, unless you have a dictionary that says greenness is
> representing red.
The same thing in the same configuration will give the same qualia; a
different thing in a different configuration may give the same or different
qualia.

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com  Fri May 13 04:52:35 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Thu, 12 May 2022 22:52:35 -0600
Subject: Re: [ExI] Is Artificial Life Conscious?
In-Reply-To: 
References: 
Message-ID: 

Yes, I agree with that. Just like there are a bunch of different things
that have a redness color, and a bunch of things, different from any of
those, which have a greenness color. The sets will be completely disjoint,
and nothing but the redness set will have a redness quality.
A greenness property, or any other physical property, will be able to
represent 'red' knowledge, but only if you have a dictionary to allow such
substrate independence. That is how perception and communication across any
substrate works - it requires transducing dictionaries to achieve the
substrate independence.
Redness is just a physical fact, and your redness is your definition of
'red'.

On Thu, May 12, 2022 at 6:35 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Fri, 13 May 2022 at 08:30, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Hi Stathis,
>> I think we are in agreement with that. There may be different things, or
>> variants of things, that all have a redness quality.
>
> So you agree it isn't substrate specific.
>
>> But, there must be something physically different than that, which has
>> the greenness quality, and you can't get redness, from the stuff that has
>> a greenness quality, unless you have a dictionary that says greenness is
>> representing red.
>
> The same thing in the same configuration will give the same qualia, a
> different thing in a different configuration may give the same or
> different qualia.
>
>> On Tue, May 10, 2022 at 7:17 PM Stathis Papaioannou via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Wed, 11 May 2022 at 10:32, Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> There are lots of competing theories making predictions about what
>>>> qualia are.
>>>> It will be an answer to the question: Which of all our descriptions of
>>>> stuff in the brain is a description of redness.
>>>> My assumption is there is some necessary and sufficient set of
>>>> observable physical behavior or chemical reactions which are the
>>>> descriptions of redness.
>>>> So to say anything that is not the qualia, is anything outside of this
>>>> necessary and sufficient set of physics.
>>>> So, by definition, if anything varies from the necessary and sufficient
>>>> set, it would no longer be redness.
>>>> I like to think of it as being similar to when you burn certain metals,
>>>> it emits different colored light.
>>>> Obviously, if you change or remove the metal, the color changes. And
>>>> nothing but those metals will produce the same chemical reaction that
>>>> emits that particular color.
>>>> It isn't the light, which many things could produce the same light, it
>>>> is possible that only the particular chemical reaction that can be
>>>> computationally bound, such that if it changes, the redness will change
>>>> in a way that the entire system must be aware of that change from
>>>> redness.
>>>
>>> It's possible that for technical reasons nothing can be found that will
>>> affect the rest of the system in the same way as glutamate does.
>>> However, saying this is avoiding the question. There is no logical
>>> reason why a substitute either for glutamate or one of the thousands of
>>> other components in the brain could not be found. As an example that
>>> would probably work, substitute some of the atoms in a molecule with
>>> different isotopes. This is a quite common technique to track molecules
>>> in biomedical research. You can order some online if you want:
>>> https://www.moravek.com/what-exactly-is-radiolabeling/
>>> So radiolabeled glutamate will behave the same as regular glutamate, but
>>> it is a different substrate.
What do you think would happen to the qualia
>>> if radiolabeled glutamate replaced regular glutamate?
>>>
>>> --
>>> Stathis Papaioannou
>>> _______________________________________________
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org
>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
> --
> Stathis Papaioannou
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From avant at sollegro.com  Fri May 13 05:45:59 2022
From: avant at sollegro.com (Stuart LaForge)
Date: Thu, 12 May 2022 22:45:59 -0700
Subject: [ExI] Is Artificial Life Conscious?
Message-ID: <20220512224559.Horde.VvO08gkJCDWwZgywIhaklG2@sollegro.com>

Quoting Brent Allsop:

> It is a necessary truth, that if you know something, that knowledge must
> be something.
> And if you have knowledge that has a redness quality, there must be
> something in your brain that has that quality that is your conscious
> knowledge.

I think what you have done with your color problem is entangle the
hard problem of consciousness with the millennia-old problem of
universals. Does redness actually exist at all? Does redness exist
only in the brain? Can something have the redness quality without
actually being red? Does it have the redness quality when it is
outside of the brain? Can abstract information have the redness
quality? Is something red if nobody can see it?

> This is true if that stuff is some kind of "Material" or "electromagnetic
> field" "spiritual" or "functional" stuff, it remains a fact that your
> knowledge, composed of that, has a redness quality.

It seems you are quite open-minded when it comes to what qualifies as
"stuff". If so, then why does your 3-robot-scenario single out
information as not being stuff? If you wish to insist that something
physical in the brain has the redness quality and conveys knowledge of
redness, then why glutamate? Why not instead hypothesize that it is the
only thing that prima facie has the redness property to begin with,
i.e. red light? After all there are photoreceptors in the deep brain.

https://www.frontiersin.org/articles/10.3389/fnana.2016.00048/full#:~:text=The%20existence%20of%20multiple%20opsin,for%20photoreception%20in%20particular%20regions.

Stuart LaForge

From brent.allsop at gmail.com  Fri May 13 06:28:24 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Fri, 13 May 2022 00:28:24 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: <20220512224559.Horde.VvO08gkJCDWwZgywIhaklG2@sollegro.com>
References: <20220512224559.Horde.VvO08gkJCDWwZgywIhaklG2@sollegro.com>
Message-ID:

Hi Stuart,

On Thu, May 12, 2022 at 11:46 PM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Quoting Brent Allsop:
> > It is a necessary truth, that if you know something, that knowledge
> > must be something.
> > And if you have knowledge that has a redness quality, there must be
> > something in your brain that has that quality that is your conscious
> > knowledge.
> I think what you have done with your color problem is entangle the
> hard problem of consciousness with the millennia-old problem of
> universals. Does redness actually exist at all? Does redness exist
> only in the brain? Can something have the redness quality without
> actually being red? Does it have the redness quality when it is
> outside of the brain? Can abstract information have the redness
> quality? Is something red if nobody can see it?

I'm probably arguing that there isn't an impossible to solve "hard problem
of consciousness", there is just the solvable "problem of universals", or
in other words, just an intrinsic color problem.
That's the title of our video: "Consciousness: Not a 'Hard Problem' just a
color problem."
First, we must recognize that redness is not an intrinsic quality of the
strawberry, it is a quality of our knowledge of the strawberry in our
brain. This must be true since we can invert our knowledge by simply
inverting any transducing system anywhere in the perception process.
If we have knowledge of a strawberry that has a redness quality, and if we
objectively observed this redness in someone else's brain, and fully
described that redness, would that tell us the quality we are describing?
No, for the same reason you can't communicate to a blind person what
redness is like. The entirety of our objective knowledge tells us nothing
of the intrinsic qualities of any of that stuff we are describing.
The only way to know the qualities of the stuff we are abstractly
describing is to directly apprehend those qualities as computationally
bound conscious knowledge.
Once we do discover and demonstrate which of all our descriptions of stuff
in the brain is a description of redness and greenness, then we will know
the qualities we are describing.
Once we have this dictionary, defining our abstract terms, we will then be
able to eff the ineffable.
Let's assume, for a moment, that we can directly apprehend glutamate's
qualities, and they are redness, and our description of glycine is a
description of greenness.
(If you don't like glutamate and glycine, pick anything else in the brain,
until we get one that can't be falsified.)
Given that, these would then be saying the same thing:
My redness is like your greenness, both of which we call red.
My glutamate is like your glycine, both of which we represent red
information with.

> This is true if that stuff is some kind of "Material" or "electromagnetic
> field" "spiritual" or "functional" stuff, it remains a fact that your
> knowledge, composed of that, has a redness quality.
>
> It seems you are quite open-minded when it comes to what qualifies as
> "stuff". If so, then why does your 3-robot-scenario single out
> information as not being stuff? If you wish to insist that something
> physical in the brain has the redness quality and conveys knowledge of
> redness, then why glutamate? Why not instead hypothesize that it is the
> only thing that prima facie has the redness property to begin with,
> i.e. red light? After all there are photoreceptors in the deep brain.

Any physical property like redness, greenness, +5votes, holes in a punch
card... can represent (convey) an abstract 1. There must be something
physical representing that one, but, again, you can't know what that is
unless you have a transducing dictionary telling you which is which.
Then once you define a pattern of ones and zeros to be words like 'red' and
'green', again, you need a 3rd dictionary to get from a word like 'red'
back to physical reality.
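To make that three-dictionary chain concrete, here is a minimal sketch in
Python. Every name in it (glutamate and glycine as physical tokens, the bit
and word tables) is invented purely for illustration; it is not a claim
about real neurochemistry, just the bookkeeping of the idea:

# Dictionary 1: which physical token stands for the abstract bit 1.
# Two systems can use completely different physical stuff for the same bit.
physical_to_bit_me  = {"glutamate": 1, "glycine": 0}   # my convention
physical_to_bit_you = {"glycine": 1, "glutamate": 0}   # your convention

# Dictionary 2: which bit patterns spell which word.
bits_to_word = {(1,): "red", (0,): "green"}

# Dictionary 3: which word points back to which physical quality.
word_to_quality_me  = {"red": "my-redness", "green": "my-greenness"}
word_to_quality_you = {"red": "your-redness", "green": "your-greenness"}

def interpret(token, physical_to_bit, word_to_quality):
    """Run one physical token through all three dictionaries."""
    bit = physical_to_bit[token]   # transduce physical stuff -> abstract bit
    word = bits_to_word[(bit,)]    # decode bit pattern -> word
    return word_to_quality[word]   # ground the word back in a quality

# The same physical token carries opposite red/green information under
# the two conventions:
print(interpret("glutamate", physical_to_bit_me, word_to_quality_me))
# -> 'my-redness'
print(interpret("glutamate", physical_to_bit_you, word_to_quality_you))
# -> 'your-greenness'

The only point of the sketch is that the same physical token conveys
opposite color information under the two conventions; take the transducing
dictionaries away and the token by itself decides nothing.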
The redness quality of your knowledge of red things is your definition of
the word red.
Does that answer your questions?

> https://www.frontiersin.org/articles/10.3389/fnana.2016.00048/full#:~:text=The%20existence%20of%20multiple%20opsin,for%20photoreception%20in%20particular%20regions.
>
> Stuart LaForge
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brent.allsop at gmail.com  Fri May 13 07:13:02 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Fri, 13 May 2022 01:13:02 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To:
References: <20220512224559.Horde.VvO08gkJCDWwZgywIhaklG2@sollegro.com>
Message-ID:

On Fri, May 13, 2022 at 12:28 AM Brent Allsop wrote:

> My redness is like your greenness, both of which we call red.
> My glutamate is like your glycine, both of which we represent red
> information with.

Sorry, I said that completely wrong. Let me try again. Both of these are
the same:
My redness is like your greenness, both of which we chose to represent or
convey red information with.
I use glycine (which you use to represent green with) and you use glutamate
to represent or convey red information.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com  Fri May 13 10:33:38 2022
From: pharos at gmail.com (BillK)
Date: Fri, 13 May 2022 11:33:38 +0100
Subject: [ExI] Deepmind Gato - Another step towards AGI
Message-ID:

Published May 12, 2022
Quote:
Inspired by progress in large-scale language modelling, we apply a similar
approach towards building a single generalist agent beyond the realm of
text outputs. The agent, which we refer to as Gato, works as a multi-modal,
multi-task, multi-embodiment generalist policy. The same network with the
same weights can play Atari, caption images, chat, stack blocks with a real
robot arm and much more, deciding based on its context whether to output
text, joint torques, button presses, or other tokens.
-----------
Combining more tasks into a single AI is progress towards a generalised AI
that can handle multiple environments and many tasks.
The next step is AI self-teaching when it encounters new tasks and
environments.

BillK

From avant at sollegro.com  Fri May 13 10:41:47 2022
From: avant at sollegro.com (Stuart LaForge)
Date: Fri, 13 May 2022 03:41:47 -0700
Subject: [ExI] Is Artificial Life Conscious?
Message-ID: <20220513034147.Horde.ieK90AsoQ4E0ABuIpUIpm5C@sollegro.com>

Quoting Brent Allsop:
> Hi Stuart,
>
> On Thu, May 12, 2022 at 11:46 PM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I think what you have done with your color problem is entangle the
>> hard problem of consciousness with the millennia-old problem of
>> universals. Does redness actually exist at all? Does redness exist
>> only in the brain? Can something have the redness quality without
>> actually being red? Does it have the redness quality when it is
>> outside of the brain? Can abstract information have the redness
>> quality? Is something red if nobody can see it?
>
> I'm probably arguing that there isn't an impossible to solve "hard problem
> of consciousness", there is just the solvable "problem of universals", or
> in other words, just an intrinsic color problem.
> That's the title of our video: "Consciousness: Not a 'Hard Problem' just a
> color problem."

The hard problem of consciousness has been around for less than 50
years, but you think it is impossible to solve. The problem of
universals has been around for thousands of years, yet you think it is
solvable.

> First, we must recognize that redness is not an intrinsic quality of the
> strawberry, it is a quality of our knowledge of the strawberry in our
> brain. This must be true since we can invert our knowledge by simply
> inverting any transducing system anywhere in the perception process.
> If we have knowledge of a strawberry that has a redness quality, and if we
> objectively observed this redness in someone else's brain, and fully
> described that redness, would that tell us the quality we are describing?
> No, for the same reason you can't communicate to a blind person what
> redness is like.

Why not? If redness is not intrinsic to the strawberry but is instead
a quality of our knowledge of the strawberry, then why can't we
explain to a blind person what redness is like? Blind people have
knowledge of strawberries and plenty of glutamate in their brains.
Just tell them that redness is what strawberries are like, and they
will understand you just fine.

> The entirety of our objective knowledge tells us nothing
> of the intrinsic qualities of any of that stuff we are describing.

Ok, but you just said that redness was not an intrinsic quality of
strawberries but of our knowledge of them, so our objective knowledge
of them should be sufficient to describe redness.

> The only way to know the qualities of the stuff we are abstractly
> describing is to directly apprehend those qualities as computationally
> bound conscious knowledge.

But blind people have computationally bound conscious knowledge and
can directly apprehend qualities too. Why can't they directly
apprehend the glutamate in their brains? Is blind people's glutamate
different than normal glutamate?

> Once we do discover and demonstrate which of all our descriptions of stuff
> in the brain is a description of redness and greenness, then we will know
> the qualities we are describing.
> Once we have this dictionary, defining our abstract terms, we will then be
> able to eff the ineffable.

Why is telling a blind person that redness is what perceiving a
strawberry is like not sufficient to eff the ineffable?

> Let's assume, for a moment, that we can directly apprehend glutamate's
> qualities, and they are redness, and our description of glycine is a
> description of greenness.

If we assume this, then why can't a blind person apprehend the
qualities of all that glutamate in their brain?

> (If you don't like glutamate and glycine, pick anything else in the brain,
> until we get one that can't be falsified.)
> Given that, these would then be saying the same thing:
> My redness is like your greenness, both of which we call red.
> My glutamate is like your glycine, both of which we represent red
> information with.

You subsequently correct this to: "My redness is like your greenness,
both of which we chose to represent or convey red information with. I use
glycine (which you use to represent green with) and you use glutamate
to represent or convey red information."

So if this "stuff" is glutamate, glycine, or whatever, and it exists
in the brains of blind people, then why can't it represent redness (or
greenness) information to them also?
>>> This is true if that stuff is some kind of "Material" or "electromagnetic
>>> field" "spiritual" or "functional" stuff, it remains a fact that your
>>> knowledge, composed of that, has a redness quality.
>>
>> It seems you are quite open-minded when it comes to what qualifies as
>> "stuff". If so, then why does your 3-robot-scenario single out
>> information as not being stuff? If you wish to insist that something
>> physical in the brain has the redness quality and conveys knowledge of
>> redness, then why glutamate? Why not instead hypothesize that it is the
>> only thing that prima facie has the redness property to begin with,
>> i.e. red light? After all there are photoreceptors in the deep brain.
>
> Any physical property like redness, greenness, +5votes, holes in a punch
> card... can represent (convey) an abstract 1. There must be something
> physical representing that one, but, again, you can't know what that is
> unless you have a transducing dictionary telling you which is which.

You may need something physical to represent the abstract 1, but that
abstract 1 in turn represents some different physical thing. One should
be careful to distinguish between how information is represented and
what it represents.

> Then once you define a pattern of ones and zeros to be words like 'red' and
> 'green', again, you need a 3rd dictionary to get from a word like 'red'
> back to physical reality.
> The redness quality of your knowledge of red things is your definition of
> the word red.

Using ones and zeros to represent the word 'red' is not the same thing
as using ones and zeros to represent the redness quality.

> Does that answer your questions?

No, you did not even address my question. You just repeated your mantra.
But that is ok, because my question was largely rhetorical.

Stuart LaForge

From pharos at gmail.com  Fri May 13 16:14:54 2022
From: pharos at gmail.com (BillK)
Date: Fri, 13 May 2022 17:14:54 +0100
Subject: Re: [ExI] eating what you need
In-Reply-To:
References:
Message-ID:

On Mon, 9 May 2022 at 16:05, William Flynn Wallace via extropy-chat
wrote:
>
> Not reliable - ok. I am sure we begin early learning food preferences
> that could override our tendencies to eat what we need. That's pretty
> clear. But when we do eat what we need we know it; our bodies tell us
> it's good and why not have some more? Maybe this even extends to vision:
> we just see the food we need and it spurs a desire to buy it. bill w
> _______________________________________________

By coincidence, this has just been received.... :)
<https://www.thedailymash.co.uk/news/food/woman-listening-to-what-her-body-needs-eats-entire-packet-of-hobnobs-20220513221001>
Quote:
A WOMAN who is trusting her body to tell her what to eat has
discovered it wants an entire sleeve of biscuits to be consumed in one
sitting.
Kelly Howard recently became an expert on the practice of intuitive
eating after reading a whole Facebook post about it. This extensive
research led her to conclude that the best choice of afternoon snack
was 16 Hobnobs in a row.
etc.....

BillK :)

From foozler83 at gmail.com  Fri May 13 18:01:39 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Fri, 13 May 2022 13:01:39 -0500
Subject: Re: [ExI] eating what you need
In-Reply-To:
References:
Message-ID:

16 Hobnobs - sugary? - Eating disorders aside, her brain must have told her
that she needed a lot of what was in the things - or she was stoned as a
rat. bill w

On Fri, May 13, 2022 at 11:17 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Mon, 9 May 2022 at 16:05, William Flynn Wallace via extropy-chat
> wrote:
> >
> > Not reliable - ok.
I am sure we begin early learning food preferences
> that could override our tendencies to eat what we need. That's pretty
> clear. But when we do eat what we need we know it; our bodies tell us it's
> good and why not have some more? Maybe this even extends to vision: we
> just see the food we need and it spurs a desire to buy it. bill w
> _______________________________________________
>
> By coincidence, this has just been received.... :)
> <https://www.thedailymash.co.uk/news/food/woman-listening-to-what-her-body-needs-eats-entire-packet-of-hobnobs-20220513221001>
> Quote:
> A WOMAN who is trusting her body to tell her what to eat has
> discovered it wants an entire sleeve of biscuits to be consumed in one
> sitting.
> Kelly Howard recently became an expert on the practice of intuitive
> eating after reading a whole Facebook post about it. This extensive
> research led her to conclude that the best choice of afternoon snack
> was 16 Hobnobs in a row.
> etc.....
>
> BillK :)
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Fri May 13 18:29:38 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Fri, 13 May 2022 11:29:38 -0700
Subject: Re: [ExI] eating what you need
In-Reply-To:
References:
Message-ID: <006301d866f7$68001220$38003660$@rainier66.com>

...> On Behalf Of BillK via extropy-chat
Subject: Re: [ExI] eating what you need

On Mon, 9 May 2022 at 16:05, William Flynn Wallace via extropy-chat
wrote:
>
> Not reliable - ok. I am sure we begin early learning food preferences
that could override our tendencies to eat what we need... bill w
> _______________________________________________

By coincidence, this has just been received.... :)
Quote:
>...A WOMAN who is trusting her body to tell her what to eat has discovered
it wants an entire sleeve of biscuits to be consumed in one sitting.
etc.....

BillK :)
_______________________________________________

I am sooooo there Bill, both Bills. I always trust my body to tell me what
it needs, knowing that my body is an untrustworthy bastard. I do it anyway
of course, but I know I am pulling a fast one on me. I am trying to trick
me into giving myself donuts and coffee. It works of course. I give me
whatever I ask. My wish is my command.

Here's the real deal: I am of the school of thought that holds one can eat
pretty much whatever one wants, so long as it isn't too much of it. If one
keeps the total calorie count low, that works for some reason: it seems to
lead to good health. Some yahoo, a professor of human nutrition,
demonstrated it with the twinkie diet:

http://www.cnn.com/2010/HEALTH/11/08/twinkie.diet.professor/index.html

Wouldn't take it that far, but I see his point.

spike

From brent.allsop at gmail.com  Fri May 13 19:54:07 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Fri, 13 May 2022 13:54:07 -0600
Subject: [ExI] Is Artificial Life Conscious?
In-Reply-To: <20220513034147.Horde.ieK90AsoQ4E0ABuIpUIpm5C@sollegro.com>
References: <20220513034147.Horde.ieK90AsoQ4E0ABuIpUIpm5C@sollegro.com>
Message-ID:

Hi Stuart,

On Fri, May 13, 2022 at 4:42 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Quoting Brent Allsop:
> > Hi Stuart,
> >
> > On Thu, May 12, 2022 at 11:46 PM Stuart LaForge via extropy-chat <
> > extropy-chat at lists.extropy.org> wrote:
> >
> >> I think what you have done with your color problem is entangle the
> >> hard problem of consciousness with the millennia-old problem of
> >> universals. Does redness actually exist at all? Does redness exist
> >> only in the brain? Can something have the redness quality without
> >> actually being red? Does it have the redness quality when it is
> >> outside of the brain? Can abstract information have the redness
> >> quality? Is something red if nobody can see it?
> >
> > I'm probably arguing that there isn't an impossible to solve "hard
> > problem of consciousness", there is just the solvable "problem of
> > universals", or in other words, just an intrinsic color problem.
> > That's the title of our video: "Consciousness: Not a 'Hard Problem'
> > just a color problem."
>
> The hard problem of consciousness has been around for less than 50
> years, but you think it is impossible to solve. The problem of
> universals has been around for thousands of years, yet you think it is
> solvable.

No. It is only the popular consensus functionalists, led by Chalmers with
his derivative and mistaken "substitution argument" work, whose thinking
makes it a hard problem, leading the whole world astray. The hard problem
would be solved by now, if it wasn't for all that. If you understand why
the substitution argument is a mistaken sleight of hand, that so-called
"hard problem" goes away. All the stuff like "What is it like to be a
bat?", "How do you bridge the explanatory gap?", and all that simply falls
away, once you know the colorness quality of something.
And I don't really know much about the problem of universals. I just know
that we live in a world full of LOTS of colourful things, yet all we know
are the colors things seem to be. Nobody yet knows the true intrinsic
colorness quality of anything.
The emerging consensus Representational Qualia Theory, and all the
supporters of all the sub camps, are predicting that once we discover which
of all our descriptions of stuff in the brain is a description of redness,
this will falsify all but THE ONE camp finally demonstrated to be true.
All the supporters of all the falsified camps will then be seen jumping to
this one yet to be falsified camp. We are tracking all this in real time,
and already seeing significant progress. In other words, there will be
irrefutable consensus proof that the 'hard problem' has finally been
resolved. I predict this will happen within 10 years.
Anyone care to make a bet that THE ONE camp will have over 90% "Mind
Expert consensus", with more than 1000 experts in total participating,
within 10 years?

> > First, we must recognize that redness is not an intrinsic quality of the
> > strawberry, it is a quality of our knowledge of the strawberry in our
> > brain. This must be true since we can invert our knowledge by simply
> > inverting any transducing system anywhere in the perception process.
> > If we have knowledge of a strawberry that has a redness quality, and if
> > we objectively observed this redness in someone else's brain, and fully
> > described that redness, would that tell us the quality we are
> > describing? No, for the same reason you can't communicate to a blind
> > person what redness is like.
>
> Why not? If redness is not intrinsic to the strawberry but is instead
> a quality of our knowledge of the strawberry, then why can't we
> explain to a blind person what redness is like? Blind people have
> knowledge of strawberries and plenty of glutamate in their brains.
> Just tell them that redness is what strawberries are like, and they
> will understand you just fine.

Wait, what? No you can't. Sure, maybe if they've been sighted, seen a
strawberry with their eyes (i.e. directly experienced redness knowledge),
then became blind. They will be able to kind of remember what that redness
was like, but they will no longer be able to experience it.

> > The entirety of our objective knowledge tells us nothing
> > of the intrinsic qualities of any of that stuff we are describing.
>
> Ok, but you just said that redness was not an intrinsic quality of
> strawberries but of our knowledge of them, so our objective knowledge
> of them should be sufficient to describe redness.

Sure, it is sufficient, but until you know which sufficient description is
a description of redness, and which sufficient description is a description
of greenness, we won't know which is which.

> > The only way to know the qualities of the stuff we are abstractly
> > describing is to directly apprehend those qualities as computationally
> > bound conscious knowledge.
>
> But blind people have computationally bound conscious knowledge and
> can directly apprehend qualities too. Why can't they directly
> apprehend the glutamate in their brains? Is blind people's glutamate
> different than normal glutamate?
>
> > Once we do discover and demonstrate which of all our descriptions of
> > stuff in the brain is a description of redness and greenness, then we
> > will know the qualities we are describing.
> > Once we have this dictionary, defining our abstract terms, we will then
> > be able to eff the ineffable.
>
> Why is telling a blind person that redness is what perceiving a
> strawberry is like not sufficient to eff the ineffable?
>
> > Let's assume, for a moment, that we can directly apprehend glutamate's
> > qualities, and they are redness, and our description of glycine is a
> > description of greenness.
>
> If we assume this, then why can't a blind person apprehend the
> qualities of all that glutamate in their brain?
>
> > (If you don't like glutamate and glycine, pick anything else in the
> > brain, until we get one that can't be falsified.)
> > Given that, these would then be saying the same thing:
> > My redness is like your greenness, both of which we call red.
> > My glutamate is like your glycine, both of which we represent red
> > information with.
>
> You subsequently correct this to: "My redness is like your greenness,
> both of which we chose to represent or convey red information with. I use
> glycine (which you use to represent green with) and you use glutamate
> to represent or convey red information."
>
> So if this "stuff" is glutamate, glycine, or whatever, and it exists
> in the brains of blind people, then why can't it represent redness (or
> greenness) information to them also?

People may be able to dream redness.
Or they may take some psychedelics that enable them to experience redness,
or surgeons may stimulate a part of the brain during brain surgery,
producing a redness experience. Those rare cases are possible, but that
isn't yet normal.
Once they discover which of all our descriptions of stuff in the brain is a
description of redness, someone like Neuralink will be producing that
redness quality in blind people's brains all the time, with artificial
eyes, and so on. But to date, normal blind people can't experience the
redness quality.

> >>> This is true if that stuff is some kind of "Material" or
> >>> "electromagnetic field" "spiritual" or "functional" stuff, it remains
> >>> a fact that your knowledge, composed of that, has a redness quality.
> >>
> >> It seems you are quite open-minded when it comes to what qualifies as
> >> "stuff". If so, then why does your 3-robot-scenario single out
> >> information as not being stuff? If you wish to insist that something
> >> physical in the brain has the redness quality and conveys knowledge of
> >> redness, then why glutamate? Why not instead hypothesize that it is the
> >> only thing that prima facie has the redness property to begin with,
> >> i.e. red light? After all there are photoreceptors in the deep brain.
> >
> > Any physical property like redness, greenness, +5votes, holes in a punch
> > card... can represent (convey) an abstract 1. There must be something
> > physical representing that one, but, again, you can't know what that is
> > unless you have a transducing dictionary telling you which is which.
>
> You may need something physical to represent the abstract 1, but that
> abstract 1 in turn represents some different physical thing.

Only if you have a transducing dictionary that enables such, or you think
of it in that particular way. Other than that, it's just a set of physical
facts, which can be interpreted as something else, that is all.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From foozler83 at gmail.com  Fri May 13 20:30:45 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Fri, 13 May 2022 15:30:45 -0500
Subject: [ExI] eating what you need
In-Reply-To: <006301d866f7$68001220$38003660$@rainier66.com>
References: <006301d866f7$68001220$38003660$@rainier66.com>
Message-ID:

I read of a boy of about 9 whose mother was complaining to the doctor about
his nutrition. He ate only French fries and peanut butter. The doctor
examined him and said he was fine. But I think this could not go on
indefinitely, as those foods don't contain all we need. Probably it was
just a 'stage', as we used to say, that would end with some other peculiar
fad of his.

Moderation in all things, as long as it allows for the occasional overdoing
of one's favorites. I once ate a 2.5 pound roast and a pound of shrimp at
one sitting. And why not? They weren't going to be as good later. I was
much younger, of course. 50 years ago. bill w

On Fri, May 13, 2022 at 1:31 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> ...> On Behalf Of BillK via extropy-chat
> Subject: Re: [ExI] eating what you need
>
> On Mon, 9 May 2022 at 16:05, William Flynn Wallace via extropy-chat
> wrote:
> >
> > Not reliable - ok. I am sure we begin early learning food preferences
> that could override our tendencies to eat what we need... bill w
> > _______________________________________________
>
> By coincidence, this has just been received....
:)
> <https://www.thedailymash.co.uk/news/food/woman-listening-to-what-her-body-needs-eats-entire-packet-of-hobnobs-20220513221001>
> Quote:
> >...A WOMAN who is trusting her body to tell her what to eat has
> discovered it wants an entire sleeve of biscuits to be consumed in one
> sitting.
> etc.....
>
> BillK :)
> _______________________________________________
>
> I am sooooo there Bill, both Bills. I always trust my body to tell me what
> it needs, knowing that my body is an untrustworthy bastard. I do it anyway
> of course, but I know I am pulling a fast one on me. I am trying to trick
> me into giving myself donuts and coffee. It works of course. I give me
> whatever I ask. My wish is my command.
>
> Here's the real deal: I am of the school of thought that holds one can eat
> pretty much whatever one wants, so long as it isn't too much of it. If one
> keeps the total calorie count low, that works for some reason: it seems to
> lead to good health. Some yahoo, a professor of human nutrition,
> demonstrated it with the twinkie diet:
>
> http://www.cnn.com/2010/HEALTH/11/08/twinkie.diet.professor/index.html
>
> Wouldn't take it that far, but I see his point.
>
> spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mbb386 at main.nc.us  Fri May 13 21:03:57 2022
From: mbb386 at main.nc.us (MB)
Date: Fri, 13 May 2022 17:03:57 -0400
Subject: [ExI] eating what you need
In-Reply-To: References: <006301d866f7$68001220$38003660$@rainier66.com>
Message-ID:

When DS was little we put food on the highchair tray: meat, pasta,
broccoli, cookie, piece of apple, etc. All at one time, just there, not
even on a dish. He ate. We were interested that he did not choose the
cookie first. He ate bits of this, and a bite of that, and then something
else.

DD was another story. Half of what was on her tray was met with "I don't
like that" and she would eat nothing until the offending food was removed.
It was most distressing. We could have been more "together" and refused to
remove anything, let her work around it, but we didn't think of that, we
just wanted her to quit complaining throughout the meal.

They both grew up ok and will eat all sorts of foods now, and neither one
was overweight as a child. Part of that may have been lack of between-meal
snacks. Food was not a battle I wanted to fight with them. Bad childhood
memories haunted me; I was a classic picky eater.

There are foods I fear to buy because I will gorge on them until they are
*all* gone. Crackers, cheddar popcorn, potato chips. The newer thing of
single-serving packets suits me well. ;)

Regards,
MB

From john at ziaspace.com  Sun May 15 23:05:46 2022
From: john at ziaspace.com (John Klos)
Date: Sun, 15 May 2022 23:05:46 +0000 (UTC)
Subject: [ExI] Maintenance
Message-ID:

Hi, all,

Just a heads up: I'll be visiting the datacenter to do some maintenance on
several machines, and updating one will mean the network will be down for a
while, perhaps up to twenty minutes, and perhaps more than once. This is
planned for Thursday. I'll make another announcement with more specifics
closer to then.

Note that no messages will be lost because we already have a backup MX, but
messages will be delayed during maintenance.

Thanks!
John

From steinberg.will at gmail.com  Mon May 16 01:09:09 2022
From: steinberg.will at gmail.com (Will Steinberg)
Date: Sun, 15 May 2022 21:09:09 -0400
Subject: [ExI] calling all artists and writers
Message-ID:

WHO: a team of strange folk who see in crypto the same freedom of use and
expression that came in the early days of the WWW. we want to show the
world that crypto isn't a ponzi scheme or speculative bubble, but a
necessary technology that will bring massive advances to global equity and
consumer agency, and also is just a really fun and weird space where you
can make cool shit

THING: a project -- crypto adjacent -- will give more info privately

VIBE: cypherpunk, cyberpunk, vaporwave, diy, gonzo, crustpunk, psychedelic,
hellenic, alchemypunk, anarchist, h+, philip k dick, 1990-2000, playstation
1, anime, psychonaut, physics, tarot, &c.

NEED: {Short form}: art, writing, comics, humor, photo, design, puzzles,
games. video/audio/videogame and other dynamic media also desired but less
prioritized

GIVE: more than adequate compensation

DO: forward to potentially interested parties

thanks all xo
-w
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From foozler83 at gmail.com  Tue May 17 01:34:42 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Mon, 16 May 2022 20:34:42 -0500
Subject: [ExI] tennis
Message-ID:

Is anyone on this list a serious tennis player? I would like to have a few
words with you. bill w
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Tue May 17 02:45:58 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Mon, 16 May 2022 19:45:58 -0700
Subject: [ExI] tennis
In-Reply-To:
References:
Message-ID: <003501d86998$3d084fd0$b718ef70$@rainier66.com>

...> On Behalf Of William Flynn Wallace via extropy-chat
Subject: [ExI] tennis

>...Is anyone on this list a serious tennis player? I would like to have a
few words with you.

>...bill w

Funny aside Billw. That's what my mother used to say when she was about to
chew my ass over something: I would like to have a few words with you.
That always meant she would be saying them and we would be receiving them.
We always knew that meant the annoyometer needle had zoomed past the
annoyed, exasperated, irked and irritated regions, and slammed against the
sincerely pissed peg.

So when I read your comment, my first thought was: Eh? What's wrong with
playing serious tennis? It's good exercise, doesn't cost much, doesn't
risk wrecking anything. Then: Oh wait, retract, Billw wants to literally
exchange a few actual words (not necessarily acrimonious or caustic ones)
with a serious tennis player, OK no worries, carry on, tennis lads. 8^D

Another funny aside: the 1950 census just came out last month. I have been
in genealogy a long time. Saw the 1920 census published (1992) then the
1930, then the 1940. Each time, 72 years after it was created, each time
it answered and created a lotta new questions, each time it caused an
uproar, and I always wondered what was the big deal. So you geezers are
public domain now, sheesh, don't have a cow, etc.

Well... last month the 1950 census came out, and this time I damn well do
know what is the big deal. Now I kinda agree, 72 years is probably not
long enough to keep that wrapped up. The needle on the annoyometer is
hovering near the old sincerely pissed peg over some information in the
1950 census that my alive-and-well cousins would prefer would not become
public.
Today, there are a loooootta lotta people who are living and perfectly
healthy who suddenly are unwillingly public domain.

If you get feeling we are not making progress as a species, do ponder that
last sentence.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Tue May 17 02:57:15 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Mon, 16 May 2022 19:57:15 -0700
Subject: [ExI] 1950 census, was: RE: tennis
Message-ID: <004401d86999$d0afe210$720fa630$@rainier66.com>

From: spike at rainier66.com
... Today, there are a loooootta lotta people who are living and perfectly
healthy who suddenly are unwillingly public domain...spike

Fun thought experiment that works better if you are at least about 50 now:
think of someone you knew as a child who was in their 80s back then, pretty
much anyone in their 80s. Recall your impression of their general
condition. Here's an example, my much beloved great grandmother, who was
age 82 when I took this photo over 50 yrs ago:

OK have your mental picture? Now compare to someone you know who is in
their 80s now. Perhaps you have friends or know someone locally, or
parents perhaps in their 80s. Are there differences between todays 80
somethings and the 80ers from a long time ago? What differences?

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 34781 bytes
Desc: not available
URL:

From pharos at gmail.com  Tue May 17 08:59:40 2022
From: pharos at gmail.com (BillK)
Date: Tue, 17 May 2022 09:59:40 +0100
Subject: [ExI] 1950 census, was: RE: tennis
In-Reply-To: <004401d86999$d0afe210$720fa630$@rainier66.com>
References: <004401d86999$d0afe210$720fa630$@rainier66.com>
Message-ID:

On Tue, 17 May 2022 at 03:59, spike jones via extropy-chat wrote:
>
> From: spike at rainier66.com
> ... Today, there are a loooootta lotta people who are living and
> perfectly healthy who suddenly are unwillingly public domain...spike
> spike
> _______________________________________________

In the UK the census data is released to the public after 100 years.
So fewer people will still be alive to be embarrassed by the data.

BillK

From spike at rainier66.com  Tue May 17 13:23:55 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 17 May 2022 06:23:55 -0700
Subject: [ExI] 1950 census, was: RE: tennis
In-Reply-To: References: <004401d86999$d0afe210$720fa630$@rainier66.com>
Message-ID: <005801d869f1$5c4d1520$14e73f60$@rainier66.com>

-----Original Message-----
From: extropy-chat On Behalf Of BillK via extropy-chat
Sent: Tuesday, 17 May, 2022 2:00 AM
To: ExI chat list
Cc: BillK
Subject: Re: [ExI] 1950 census, was: RE: tennis

On Tue, 17 May 2022 at 03:59, spike jones via extropy-chat wrote:
>
> From: spike at rainier66.com ... Today, there are a
> loooootta lotta people who are living and perfectly healthy who
> suddenly are unwillingly public domain...spike
> spike
> _______________________________________________

In the UK the census data is released to the public after 100 years.
So fewer people will still be alive to be embarrassed by the data.

BillK
_______________________________________________

Hi BillK,

My quirky hobby is DNA-based genealogy. I am conflicted on that:
personally I think it should be 100 years, but I am thankful that it is 72.
So I have moral cognitive dissonance over the whole thing and I do confess
I have found out stuff that I don't talk about with other family members.
Not telling something one knows is another source of moral cognitive
dissonance. My comment above creates still more cog-dis. For instance, I
sometimes comment that I learned something I wish I didn't know.

Does anyone here have a good example of finding out something you really
wish you didn't know? I don't. I have learned some disturbing things, but
I would rather be disturbed than ignorant. So... while respecting others'
privacy, I am good with the 72 year delay on census records.

spike

From foozler83 at gmail.com  Tue May 17 13:38:46 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Tue, 17 May 2022 08:38:46 -0500
Subject: [ExI] 1950 census, was: RE: tennis
In-Reply-To: <004401d86999$d0afe210$720fa630$@rainier66.com>
References: <004401d86999$d0afe210$720fa630$@rainier66.com>
Message-ID:

IMO: age is harder on women's looks even if they do live longer. bill w

On Mon, May 16, 2022 at 9:59 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> *From:* spike at rainier66.com
> *...* Today, there are a loooootta lotta people who are living and
> perfectly healthy who suddenly are unwillingly public domain...spike
>
> Fun thought experiment that works better if you are at least about 50 now:
> think of someone you knew as a child who was in their 80s back then,
> pretty much anyone in their 80s. Recall your impression of their general
> condition. Here's an example, my much beloved great grandmother, who was
> age 82 when I took this photo over 50 yrs ago:
>
> OK have your mental picture? Now compare to someone you know who is in
> their 80s now. Perhaps you have friends or know someone locally, or
> parents perhaps in their 80s. Are there differences between todays 80
> somethings and the 80ers from a long time ago? What differences?
>
> spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 34781 bytes
Desc: not available
URL:

From spike at rainier66.com  Tue May 17 15:03:23 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 17 May 2022 08:03:23 -0700
Subject: [ExI] 1950 census, was: RE: tennis
In-Reply-To: References: <004401d86999$d0afe210$720fa630$@rainier66.com>
Message-ID: <007101d869ff$4173ba70$c45b2f50$@rainier66.com>

...> On Behalf Of William Flynn Wallace via extropy-chat
Subject: Re: [ExI] 1950 census, was: RE: tennis

>...IMO: age is harder on women's looks even if they do live longer. bill w

Ja but what I am comparing is people who were in their 80s in the 1960s
(such as my great grandmother) to people in their 80s now. I recall anyone
who made it to 80 back then appeared to have one foot... you know the
saying. The stuff they said generally didn't make sense, they seemed to
mysteriously know stuff like how to operate those weird devices we kids
didn't recognize.

But now, plenty of us know people in their 80s who are just going along
doing the normal things, in what appears to be good health, they use the
internet, they do normal things.
Might do them a bit slower, but they do them. 80 is the new 60.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Tue May 17 15:53:37 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Tue, 17 May 2022 08:53:37 -0700
Subject: [ExI] 1950 census, was: RE: tennis
In-Reply-To: <005801d869f1$5c4d1520$14e73f60$@rainier66.com>
References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com>
Message-ID: <001301d86a06$4656eec0$d304cc40$@rainier66.com>

-----Original Message-----
From: spike at rainier66.com

>...I sometimes comment that I learned something I wish I didn't know.
Does anyone here have a good example of finding out something you really
wish you didn't know? I don't. I have learned some disturbing things, but
I would rather be disturbed than ignorant. So... while respecting others'
privacy, I am good with the 72 year delay on census records.

spike

I have a related question to the above about cognitive dissonance and
learning something you wish you didn't know.

In my quirky hobby of DNA-based genealogy, I sometimes learn things that
are disturbing about my own family which I intentionally withhold from the
rest of the family. This is a question even more loaded with moral and
ethical ambiguity, the kind of problem at which I do not do well: if it
can't be modeled with a system of simultaneous differential equations, good
chance I can't solve it. I don't do ethical dilemmas. This is why I
didn't go to medical school and I am so glad I didn't. I have withheld
some biiiiig stuff from my own family and continue to do so.

Note the above comment assumes the questionable position that one does not
do wrong by doing nothing. It's the train switch dilemma where you choose
to do nothing and five are slain rather than switch the tracks and
sacrifice a different three. Maybe.

Another good example: thru DNA genealogy, I discovered that one of my
adopted third cousins was sired by her great grandfather (her great
grandfather met a teenage girl who was a squatter on his land; his son
never confessed that he had met a squatter on the same land 19 years
previously and had sired a daughter, the very girl the old man later met
and sired two children with before he married her and sired five more).
The father and the great grandfather were the same man. I never told my
adopted cousin that information, even though it might be medically
relevant. It's a moral dilemma which almost made me give up DNA genealogy
when I was first starting.

Question please: if you find out something (legitimately) which you
realize is relevant but perhaps disturbing to someone else, do you withhold
or tell? First think about a related question: is there information you
wish you didn't know? So if you want to know, but are willing to withhold
relevant true information from someone else, does not that violate the
golden rule? Looks to me like an exception to the golden rule, which is
ordinarily a solid ethical guide.

If you ponder the father = great grandfather dilemma, I have an even
thornier one for you, also related to DNA genealogy.

spike

From pharos at gmail.com  Tue May 17 17:05:46 2022
From: pharos at gmail.com (BillK)
Date: Tue, 17 May 2022 18:05:46 +0100
Subject: [ExI] Deepmind Gato - Another step towards AGI
In-Reply-To:
References:
Message-ID:

On Fri, 13 May 2022 at 11:33, BillK wrote:
>
> Combining more tasks into a single AI is progress towards a
> generalised AI that can handle multiple environments and many tasks.
> The next step is AI self-teaching when it encounters new tasks and
> environments.
>
> BillK

Quotes:
"The Game is Over": Google's DeepMind says it is on verge of achieving
human-level AI
Anthony Cuthbertson  17 May 2022

Human-level artificial intelligence is close to finally being achieved,
according to a lead researcher at Google's DeepMind AI division.
Dr Nando de Freitas said "the game is over" in the decades-long quest to
realise artificial general intelligence (AGI) after DeepMind unveiled an AI
system capable of completing a wide range of complex tasks, from stacking
blocks to writing poetry.
Described as a "generalist agent", DeepMind's new Gato AI needs to just be
scaled up in order to create an AI capable of rivalling human intelligence,
Dr de Freitas said.
-----------

BillK

From msd001 at gmail.com  Tue May 17 18:46:42 2022
From: msd001 at gmail.com (Mike Dougherty)
Date: Tue, 17 May 2022 14:46:42 -0400
Subject: [ExI] Deepmind Gato - Another step towards AGI
In-Reply-To:
References:
Message-ID:

On Tue, May 17, 2022, 1:08 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> Described as a "generalist agent", DeepMind's new Gato AI needs to
> just be scaled up in order to create an AI capable of rivalling human
> intelligence, Dr de Freitas said.
> -----------

Yeah right, like an abacus just needs to be scaled up to model quantum
computing: imagine this, but with millions more bars and discs that slide
not only back&forth but also front&third :p

We could totally do it any time we wanted, but we're running low on pixie
dust
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From msd001 at gmail.com  Tue May 17 19:12:59 2022
From: msd001 at gmail.com (Mike Dougherty)
Date: Tue, 17 May 2022 15:12:59 -0400
Subject: [ExI] 1950 census, was: RE: tennis
In-Reply-To: <001301d86a06$4656eec0$d304cc40$@rainier66.com>
References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com>
Message-ID:

On Tue, May 17, 2022, 11:55 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> If you ponder the father = great grandfather dilemma, I have an even
> thornier one for you, also related to DNA genealogy.

I don't understand why that's any dilemma other than cultural taboo. I
understand inbreeding is bad for genetic diversity reasons at general
population level, but as we take manual control of fixing outright defects
what difference does it make?

I make the same point about ye olde artifacts of food safety: don't eat
pork because dirty animals (whatever religious reason) and cross-species
infection. Now that we have better processes and refrigeration the other
white meat is probably cleaner than the latest round of recalled lettuce;
is the prohibition on pork still a good idea or is it an unquestioned
tradition of kosher eating?

Heh, "don't run with scissors" is good advice but might have legit
exceptions. I ask a lot of questions. Answers lead to better questions. :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From foozler83 at gmail.com  Tue May 17 19:14:57 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Tue, 17 May 2022 14:14:57 -0500
Subject: [ExI] 1950 census - using the dead
In-Reply-To: <001301d86a06$4656eec0$d304cc40$@rainier66.com>
References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com>
Message-ID:

Just whose privacy is being protected? The dead? I think that we should
use the dead in any way possible if it will help us with our lives, such as
the medical info you referred to, Spike. Therefore, I would contact the
descendants and tell them that you have information on their dead
relatives. You don't tell what info, but hint that some might be good,
some bad, some neutral, and let them decide.

But then, I am a very open person and will tell all to anyone who asks.
When I am dead they can think what they will - can't hurt me. bill w

On Tue, May 17, 2022 at 10:55 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> -----Original Message-----
> From: spike at rainier66.com
>
> >...I sometimes comment that I learned something I wish I didn't know.
> Does anyone here have a good example of finding out something you really
> wish you didn't know? I don't. I have learned some disturbing things, but
> I would rather be disturbed than ignorant. So... while respecting others'
> privacy, I am good with the 72 year delay on census records.
>
> spike
>
> I have a related question to the above about cognitive dissonance and
> learning something you wish you didn't know.
>
> In my quirky hobby of DNA-based genealogy, I sometimes learn things that
> are disturbing about my own family which I intentionally withhold from the
> rest of the family. This is a question even more loaded with moral and
> ethical ambiguity, the kind of problem at which I do not do well: if it
> can't be modeled with a system of simultaneous differential equations,
> good chance I can't solve it. I don't do ethical dilemmas. This is why I
> didn't go to medical school and I am so glad I didn't. I have withheld
> some biiiiig stuff from my own family and continue to do so.
>
> Note the above comment assumes the questionable position that one does
> not do wrong by doing nothing. It's the train switch dilemma where you
> choose to do nothing and five are slain rather than switch the tracks and
> sacrifice a different three. Maybe.
>
> Another good example: thru DNA genealogy, I discovered that one of my
> adopted third cousins was sired by her great grandfather (her great
> grandfather met a teenage girl who was a squatter on his land; his son
> never confessed that he had met a squatter on the same land 19 years
> previously and had sired a daughter, the very girl the old man later met
> and sired two children with before he married her and sired five more).
> The father and the great grandfather were the same man. I never told my
> adopted cousin that information, even though it might be medically
> relevant. It's a moral dilemma which almost made me give up DNA genealogy
> when I was first starting.
>
> Question please: if you find out something (legitimately) which you
> realize is relevant but perhaps disturbing to someone else, do you
> withhold or tell? First think about a related question: is there
> information you wish you didn't know? So if you want to know, but are
> willing to withhold relevant true information from someone else, does not
> that violate the golden rule?
URL: From foozler83 at gmail.com Tue May 17 19:14:57 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 17 May 2022 14:14:57 -0500 Subject: [ExI] 1950 census - using the dead In-Reply-To: <001301d86a06$4656eec0$d304cc40$@rainier66.com> References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com> Message-ID: Just whose privacy is being protected? The dead? I think that we should use the dead in any way possible if it will help us with our lives, such as the medical info you referred to, Spike. Therefore, I would contact the descendants and tell them that you have information on their dead relatives. You don't tell what info, but hint that some might be good, some bad, some neutral, and let them decide. But then, I am a very open person and will tell all to anyone who asks. When I am dead they can think what they will - can't hurt me. bill w On Tue, May 17, 2022 at 10:55 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > -----Original Message----- > From: spike at rainier66.com > > > > >...I sometimes comment that I learned something I wish I didn't know. > Does anyone here have a good example of finding out something you really > wish you didn't know? I don't. I have learned some disturbing things, but > I would rather be disturbed than ignorant. So... while respecting others' > privacy, I am good with the 72 year delay on census records. > > spike > > > > > > I have a related question to the above about cognitive dissonance and > learning something you wish you didn't know. > > In my quirky hobby of DNA-based genealogy, I sometimes learn things that > are disturbing about my own family which I intentionally withhold from the > rest of the family. This is a question even more loaded with moral and > ethical ambiguity, the kind of problem at which I do not do well: if it > can't be modeled with a system of simultaneous differential equations, good > chance I can't solve it. I don't do ethical dilemmas. This is why I > didn't go to medical school and I am so glad I didn't. I have withheld > some biiiiig stuff from my own family and continue to do so. > > Note the above comment assumes the questionable position that one does not > do wrong by doing nothing. It's the train switch dilemma where you choose > to do nothing and five are slain rather than switch the tracks and > sacrifice a different three. Maybe. > > Another good example: thru DNA genealogy, I discovered that one of my > adopted third cousins was sired by her great grandfather (her great > grandfather met a teenage girl who was a squatter on his land, his son > never confessed that he had met a squatter on the same land 19 years > previously and had sired a daughter, who the old man later met and sired > two before he married her and sired five more. The father and great > grandfather were the same man. I never told my adopted cousin that > information, even though it might be medically relevant. It's a moral > dilemma which almost made me give up DNA genealogy when I was first > starting. > > Question please: if you find out something (legitimately) which you > realize is relevant but perhaps disturbing to someone else, do you withhold > or tell? First think about a related question: is there information you > wish you didn't know? So if you want to know, but are willing to withhold > relevant true information from someone else, does not that violate the > golden rule? 
Looks to me like an exception to the golden rule, which is > ordinarily a solid ethical guide. > > If you ponder the father = great grandfather dilemma, I have an even > thornier one for you, also related to DNA genealogy. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue May 17 19:20:17 2022 From: pharos at gmail.com (BillK) Date: Tue, 17 May 2022 20:20:17 +0100 Subject: [ExI] Deepmind Gato - Another step towards AGI In-Reply-To: References: Message-ID: On Tue, 17 May 2022 at 19:46, Mike Dougherty wrote: > > Yeah right, like an abacus just needs to be scaled up to model quantum computing: imagine this, but with millions more bars and discs that slide not only back&forth but also front&third :p > We could totally do it any time we wanted, but we're running low on pixie dust. ---------------------- Dr de Freitas is DeepMind's research director. Obviously, part of his job is to publicise their achievements, with an eye on more funding and more researchers. GATO is a 'proof-of-concept' generalised AI application. He knows that scaling up requires much more research. He says "It's all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI." It won't happen next week, but in a few more years........... BillK From spike at rainier66.com Tue May 17 20:55:12 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 17 May 2022 13:55:12 -0700 Subject: [ExI] 1950 census, was: RE: tennis In-Reply-To: References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com> Message-ID: <005401d86a30$6750e980$35f2bc80$@rainier66.com> From: Mike Dougherty Subject: Re: [ExI] 1950 census, was: RE: tennis On Tue, May 17, 2022, 11:55 AM spike jones via extropy-chat > wrote: If you ponder the father = great grandfather dilemma, I have an even thornier one for you, also related to DNA genealogy. >...I don't understand why that's any dilemma other than cultural taboo... I ask a lot of questions... In the part of the country where this took place, cultural taboos are taken very seriously, but it gets worse. The man who married his own granddaughter didn't realize she was a blood relative. His own son denied having gone into the back country sowing his wild oats, so he didn't know. The old man sired two with his granddaughter. The local church ladies either knew or suspected; they went back there unbeknownst to him, saw the wretched condition of the teenage mother and her two children, and realized the babies would both probably die. So... they did something which isn't in any of the records: they stole the baby and put her up for adoption. We don't know how that was done, whether they got the mother stoned or what happened, and no one is talking. The other child was about 2 years old. They left him behind. The old man divorced his fourth wife, went up and married the teenage girl, who now had just the one child. Together they had five more. I learned of all this from my cousin's elderly niece. Ja, that sounds like a contradiction in terms, but the niece was 26 years senior to her aunt.
She told me the whole story of the man who owned two square miles of property where there were squatters living since the long time agos, who married five times and had children whose ages spanned 55 years. The elderly niece knew the younger baby had disappeared and no one knew what had become of the missing child, didn?t even know if she had survived, until DNA genealogy came along. Between us we figured it out. Ethical dilemma bigtime: do I tell my cousin the circumstances of her birth and that she was stolen? Answer: no. I might do that to my brother as a joke, but I would not do that to an innocent person. I don?t know what psychological impact that would have. So? I didn?t. I did introduce my cousin to her elderly niece and decided to step out of that loop entirely. I don?t know what they told her and what they didn?t tell her, but my cousin was adopted outside the area and had a good life. Her six siblings did not. Mike, what would you do in that situation please? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue May 17 21:52:59 2022 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 17 May 2022 17:52:59 -0400 Subject: [ExI] Deepmind Gato - Another step towards AGI In-Reply-To: References: Message-ID: On Tue, May 17, 2022, 3:26 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Dr de Freitas is DeepMind?s research director. > Obviously, part of his job is to publicise their achievements, with an > eye on more funding and more researchers. > GATO is a 'proof-of-concept' generalised AI application. He knows that > scaling-up requires much more research. He says ?It?s all about making > these models bigger, safer, compute efficient, faster at sampling, > smarter memory, more modalities, innovative data, on/offline... > Solving these challenges is what will deliver AGI.? > It won't happen next week, but in a few more years........... > I have no doubt it will happen. I have no reason to question credentials. I was mostly commenting on "just scale up" has been the plan now for half a century -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue May 17 22:24:52 2022 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 17 May 2022 18:24:52 -0400 Subject: [ExI] 1950 census, was: RE: tennis In-Reply-To: <005401d86a30$6750e980$35f2bc80$@rainier66.com> References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com> <005401d86a30$6750e980$35f2bc80$@rainier66.com> Message-ID: On Tue, May 17, 2022, 4:57 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Ethical dilemma bigtime: do I tell my cousin the circumstances of her > birth and that she was stolen? > > Answer: no. I might do that to my brother as a joke, but I would not do > that to an innocent person. I don?t know what psychological impact that > would have. So? I didn?t. > > Mike, what would you do in that situation please? > I wouldn't do anything And I would feel confident that choice is correct (for me) The only usefulness I see in sharing that information would be around donor matching close family members... but even that usefulness is waning as medical technology improves. As far as stolen may be a concern it has little bearing on that person's life today. 
Until they give you some indication that question needs an answer, you should not allow the answer to beg the question. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue May 17 22:29:30 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 17 May 2022 15:29:30 -0700 Subject: [ExI] 1950 census, was: RE: tennis In-Reply-To: References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com> <005401d86a30$6750e980$35f2bc80$@rainier66.com> Message-ID: <009b01d86a3d$93bbd720$bb338560$@rainier66.com> From: Mike Dougherty ? Mike, what would you do in that situation please? >?I wouldn't do anything And I would feel confident that choice is correct (for me) >?The only usefulness I see in sharing that information would be around donor matching close family members... but even that usefulness is waning as medical technology improves. >?As far as stolen may be a concern it has little bearing on that person's life today. Until they give you some indication that question needs an answer, you should not allow the answer to beg the question? Thanks for that Mike. It?s pretty much in parallel with my thinking and what I did: I took the time-honored approach to solving that ethical dilemma: spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 16645 bytes Desc: not available URL: From spike at rainier66.com Thu May 19 01:07:22 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 18 May 2022 18:07:22 -0700 Subject: [ExI] art thru careful planning In-Reply-To: <004901d86b1b$10a1d540$31e57fc0$@rainier66.com> References: <004901d86b1b$10a1d540$31e57fc0$@rainier66.com> Message-ID: <005c01d86b1c$cbbb0f80$63312e80$@rainier66.com> For the math fans among us, you can calculate to get this shot, Peter woulda had to be about 9 miles away. The moon is about half a degree and you can get the dimensions of the Lick Observatory complex, then do the scaling: Lunar eclipse over Lick Observatory | I headed to the Sierra? | Flickr Everything had to work out just perfectly for this: a calm very clear evening over San Jose, trail access, cool as all get out. I am thinking of trying a closer shot such that the main refractor dome on Mount Ham would span about a quarter of a degree. But it won?t match that eclipse photo Peter took, heh. spike From: spike at rainier66.com Sent: Wednesday, 18 May, 2022 5:55 PM To: 'ExI chat list' Cc: spike at rainier66.com Subject: art thru careful planning Peter Thoeny really nailed this one. This is the Lick Observatory on Mount Hamilton with this week?s lunar eclipse in the background. I know the exact trail he was on to get this photo. I hiked that one coupla weeks ago. Well done indeed Peter! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 15168 bytes Desc: not available URL: From pharos at gmail.com Thu May 19 10:24:17 2022 From: pharos at gmail.com (BillK) Date: Thu, 19 May 2022 11:24:17 +0100 Subject: [ExI] Another Fermi Paradox Paper Message-ID: The fact that we can?t find aliens may not be due to us, but to them! By Charlie Elliott 18 May 2022 Quotes: We haven?t found any aliens yet. 
And that may have a macabre reason; the aliens are (before we can see them) doomed to destroy their civilization or, in the best-case scenario, to put their expansionism on the back burner. Researchers present this hypothesis in the Journal of the Royal Society Interface. The hypothesis is loosely based on what we see happening in cities here on Earth. "Other scientists have already determined that cities are growing in ways that are unsustainable in the long term," study researcher Michael Wong told Scientias.nl. "It is because resource consumption increases disproportionately as cities grow." And that is obviously a problem, because it means that there will come a time when cities will, for example, need more energy than is available. "It results in crises we call 'singularities', where population and energy demand increase endlessly over a finite period of time." In such a scenario, civilization is doomed to run into shortages, causing, without intervention, the entire system to collapse. Innovation The only way to avert or at least postpone such a crisis is to come up with innovations. But because the population and energy demand in a growing city increase exponentially, those innovations will have to follow each other more and more quickly if we want to prevent a crisis. And therein lies the challenge, not just for us but possibly, the researchers argue in their research article, for aliens as well. "We hypothesize that planetary civilizations behave like cities," Wong said. "And if that's the case, sooner or later they'll hit a boundary that limits their growth. We call this boundary 'asymptotic burnout': an ultimate crisis in which the time that elapses between singularities (or crises, ed.) is shorter than the time between innovations." In other words, the aliens are innovating too slowly to escape their self-created fate. Downfall or other priorities When such an asymptotic burnout threatens, there are actually two options, say Wong and colleague Stuart Bartlett. One: the alien civilization is either oblivious or burying its head in the sand, and completely collapses. Or, two, the aliens become aware that they are headed for their doom and change course. "They prioritize homeostasis: a state in which cosmic expansion is no longer a goal." In both scenarios, it becomes difficult, if not impossible, to detect the aliens from a considerable distance. In the first scenario, they, or at least the advanced and therefore detectable civilization they once formed, are no more. And in the second scenario, they are no longer focused on exploring space or on increasing and proclaiming their presence, but on preserving what they have. "Unbridled growth and productivity gives way to a focus on health, balance and maximum longevity," says Wong. -------------------------- Original Research Paper here:- This idea sounds reasonable to me. Currently our civilisation is looking for a new innovation - fusion energy - to catch up with our energy requirements. Declining oil supplies cannot be fully replaced by solar power and wind power. If a new energy resource cannot be found, then the shortages could well lead to new World Wars. A shrinkage of our civilisation, with the remnants forced to 'live within their means', no longer seems out of the question.
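The mechanics behind 'asymptotic burnout' are just superlinear growth. If growth scales as dN/dt = a * N^beta with beta > 1 (the kind of city-scaling law the paper builds on), the population blows up at the finite time t* = N0^(1-beta) / (a * (beta - 1)). Each innovation resets the cycle, but growth resumes from a larger base, so the interval to the next singularity keeps shrinking. A minimal numerical sketch, with toy constants of my own rather than anything from the paper:

# Toy sketch of 'asymptotic burnout': superlinear growth dN/dt = a*N**beta
# (beta > 1) reaches a singularity at t* = N0**(1-beta) / (a*(beta-1)).
# The constants and the 10x gain per innovation cycle are arbitrary.
a, beta = 0.02, 1.5
N = 1.0e6  # starting population, arbitrary units
for cycle in range(1, 6):
    t_star = N ** (1.0 - beta) / (a * (beta - 1.0))
    print(f"cycle {cycle}: N = {N:.3g}, time to next singularity = {t_star:.3g}")
    N *= 10.0  # assume each innovation cycle ends with a ~10x larger population

With these numbers, every tenfold gain in N cuts the time to the next crisis by a factor of about three, which is the sense in which innovations must arrive ever faster.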
BillK From dsunley at gmail.com Thu May 19 18:31:41 2022 From: dsunley at gmail.com (Darin Sunley) Date: Thu, 19 May 2022 12:31:41 -0600 Subject: [ExI] Another Fermi Paradox Paper In-Reply-To: References: Message-ID: The Fermi Paradox is really easy to solve, once you appreciate even the most conservative possible parameters of a technological singularity. The problem is the universe is really really old, such that the odds of two independently developing technological species being aligned in their development even to within one million years is literally astronomical. And therein lies the problem. Because if we're a million years ahead of them, they're still pre-linguistic Stone Age hunter gatherers, and we wouldn't be able to detect them. And if they're a million years ahead of us, we're currently running as the screensaver on their desktop computers, and they have absolute control as to whether they want to be detected or not. There is no in-between. On Thu, May 19, 2022 at 4:27 AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The fact that we can?t find aliens may not be due to us, but to them! > By Charlie Elliott 18 May 2022 > > < > http://techzle.com/the-fact-that-we-cant-find-aliens-may-not-be-due-to-us-but-to-them > > > > Quotes: > We haven?t found any aliens yet. And that may have a macabre reason; > the aliens are (before we can see them) doomed to destroy their > civilization or ? in the best case scenario ? put their expansionism > on the back burner. > > Researchers come up with this hypothesis in the Journal of the Royal > Society Interface. The hypothesis is loosely based on what we see > happening in cities here on Earth. ?Other scientists have already > determined that cities are growing in ways that are unsustainable in > the long term,? said study researcher Michael Wong. Scientias.nl from. > ?It is because resource consumption increases disproportionately as > cities grow.? And that is obviously a problem. Because it means that > there will come a time when cities will, for example, need more energy > than is available. ?It results in crises we call ?singularities? where > population and energy demand increase endlessly over a finite period > of time.? In such a scenario, civilization is doomed to run into > shortages, causing ? without intervention ? the entire system to > collapse. > > Innovation > > The only way to avert or at least postpone such a crisis is to come up > with innovations. But because the population and energy demand in a > growing city are increasing exponentially, those innovations will have > to follow each other more and more quickly if we want to prevent a > crisis. And therein lies the challenge. Not just for us. But also > possible, the researchers argue in their research article, for aliens. > ?We hypothesize that planetary civilizations behave like cities,? Wong > said. ?And if that?s the case, sooner or later they?ll hit a limit > that limits their growth. We call this boundary ?asymptotic burnout?: > an ultimate crisis in which the time that elapses between > singularities (or crises, ed.) is shorter than the time between > innovations.? In other words, the aliens are innovating too slowly to > escape their self-created fate. > > Downfall or other priorities > > When such an asymptotic burnout threatens, there are actually two > options, say Wong and colleague Stuart Bartlett. One: the alien > civilization is either oblivious or burying its head in the sand and > completely collapses. 
Or, two, the aliens become aware that they are > headed for their doom and change course. ?They prioritize homeostasis: > a state in which cosmic expansion is no longer a goal.? In both > scenarios, it becomes difficult, if not impossible, to detect the > aliens from a considerable distance. Because in the first scenario, > they, or at least the advanced and therefore detectable civilization > they once formed, are no more. And in the second scenario, they are no > longer focused on exploring space or increasing and proclaiming their > presence, but on preserving what they have. ?Unbridled growth and > productivity gives way to a focus on health, balance and maximum > longevity,? says Wong. > -------------------------- > > Original Research Paper here:- > > > This idea sounds reasonable to me. Currently our civilisation is > looking for a new innovation - Fusion energy - to catch up with our > energy requirements. Reducing oil consumption cannot be replaced by > solar power and wind power. If a new energy resource cannot be found, > then the shortages could well lead to new World Wars. A shrinkage of > our civilisation, with the remnants forced to 'live within their > means' no longer seems out of the question. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Thu May 19 18:53:35 2022 From: giulio at gmail.com (Giulio Prisco) Date: Thu, 19 May 2022 20:53:35 +0200 Subject: [ExI] Another Fermi Paradox Paper In-Reply-To: References: Message-ID: On 2022. May 19., Thu at 20:33, Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The Fermi Paradox is really easy to solve, once you appreciate even the > most conservative possible parameters of a technological singularity. > > The problem is the universe is really really old, such that the odds of > two independently developing technological species being aligned in their > development even to within one million years is literally astronomical. > > And therein lies the problem. Because if we're a million years ahead of > them, they're still pre-linguistic Stone Age hunter gatherers, and we > wouldn't be able to detect them. > > And if they're a million years ahead of us, we're currently running as the > screensaver on their desktop computers, and they have absolute control as > to whether they want to be detected or not. > > There is no in-between. > My take: https://www.turingchurch.com/p/et-is-smarter-than-enrico-fermi-and?s=r > On Thu, May 19, 2022 at 4:27 AM BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> The fact that we can?t find aliens may not be due to us, but to them! >> By Charlie Elliott 18 May 2022 >> >> < >> http://techzle.com/the-fact-that-we-cant-find-aliens-may-not-be-due-to-us-but-to-them >> > >> >> Quotes: >> We haven?t found any aliens yet. And that may have a macabre reason; >> the aliens are (before we can see them) doomed to destroy their >> civilization or ? in the best case scenario ? put their expansionism >> on the back burner. >> >> Researchers come up with this hypothesis in the Journal of the Royal >> Society Interface. The hypothesis is loosely based on what we see >> happening in cities here on Earth. ?Other scientists have already >> determined that cities are growing in ways that are unsustainable in >> the long term,? 
said study researcher Michael Wong. Scientias.nl from. >> ?It is because resource consumption increases disproportionately as >> cities grow.? And that is obviously a problem. Because it means that >> there will come a time when cities will, for example, need more energy >> than is available. ?It results in crises we call ?singularities? where >> population and energy demand increase endlessly over a finite period >> of time.? In such a scenario, civilization is doomed to run into >> shortages, causing ? without intervention ? the entire system to >> collapse. >> >> Innovation >> >> The only way to avert or at least postpone such a crisis is to come up >> with innovations. But because the population and energy demand in a >> growing city are increasing exponentially, those innovations will have >> to follow each other more and more quickly if we want to prevent a >> crisis. And therein lies the challenge. Not just for us. But also >> possible, the researchers argue in their research article, for aliens. >> ?We hypothesize that planetary civilizations behave like cities,? Wong >> said. ?And if that?s the case, sooner or later they?ll hit a limit >> that limits their growth. We call this boundary ?asymptotic burnout?: >> an ultimate crisis in which the time that elapses between >> singularities (or crises, ed.) is shorter than the time between >> innovations.? In other words, the aliens are innovating too slowly to >> escape their self-created fate. >> >> Downfall or other priorities >> >> When such an asymptotic burnout threatens, there are actually two >> options, say Wong and colleague Stuart Bartlett. One: the alien >> civilization is either oblivious or burying its head in the sand and >> completely collapses. Or, two, the aliens become aware that they are >> headed for their doom and change course. ?They prioritize homeostasis: >> a state in which cosmic expansion is no longer a goal.? In both >> scenarios, it becomes difficult, if not impossible, to detect the >> aliens from a considerable distance. Because in the first scenario, >> they, or at least the advanced and therefore detectable civilization >> they once formed, are no more. And in the second scenario, they are no >> longer focused on exploring space or increasing and proclaiming their >> presence, but on preserving what they have. ?Unbridled growth and >> productivity gives way to a focus on health, balance and maximum >> longevity,? says Wong. >> -------------------------- >> >> Original Research Paper here:- >> >> >> This idea sounds reasonable to me. Currently our civilisation is >> looking for a new innovation - Fusion energy - to catch up with our >> energy requirements. Reducing oil consumption cannot be replaced by >> solar power and wind power. If a new energy resource cannot be found, >> then the shortages could well lead to new World Wars. A shrinkage of >> our civilisation, with the remnants forced to 'live within their >> means' no longer seems out of the question. >> >> BillK >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Thu May 19 20:42:40 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 19 May 2022 13:42:40 -0700 Subject: [ExI] Another Fermi Paradox Paper In-Reply-To: References: Message-ID: <009901d86bc0$fbd82910$f3887b30$@rainier66.com> From: extropy-chat On Behalf Of Darin Sunley via extropy-chat Subject: Re: [ExI] Another Fermi Paradox Paper >?The Fermi Paradox is really easy to solve, once you appreciate even the most conservative possible parameters of a technological singularity?. And if they're a million years ahead of us, we're currently running as the screensaver on their desktop computers? Darin Darin do let me offer a more optimistic view if I may. In the days when we were having singularity conferences regularly (LOCALLY even! Cool!) we looked a number of possibilities regarding the Fermi silence. One of them I think (and hope) was under-studied at that time has grown far more appealing since then, both in the positive view it presents and in my estimation, the probability of its adequately explaining the Fermi observation. Consider this thought experiment: imagine the universe expands dramatically by six orders of magnitude, where the solar system stays as is, but the nearest stars go from a few light-years to a few million light years, and the nearest galaxies move out to a few trillion light years. Our perspective on signals from out there is changed, for we realize we can never go there, they can never come here, there is no possible or practical actual travel. In that scenario, we no longer bother to even think much about space travel, for it is pointless. We can do things with the local rocks in orbit about the sun if we wish, but beyond that? space is a forever formidable chasm over which travel can never occur, and even if we did somehow, the signal of its success will be millions of years future, so? don?t even bother. Sending out signals into space becomes a pointless activity as well, so there is no reason to squander the energy to do it. In that thought experiment, we turn our full attention of our greatest minds to increased organization of the matter right here and right now, for it is all we will ever have. OK then, suppose we discover some kind of mathematical proof or something to convince us that consciousness is not substrate dependent, and that it can be modeled and simulated in software, and so? our new task before us is to create computing platforms which will be ultra-reliable and suitable for having us move into them. Ja? It follows that if we discover that consciousness is not substrate dependent and that we are collectively smart enough to create computing devices competent enough for consciousness to move into, then we will do that, and we don?t send out signals. We communicate directly through actual conductors, rather than transmitting across empty space with EM signals. We eventually organize every atom available into being-space for consciousness. Otherwise? we go the way of other tech civilizations which eventually discover fusion and nuke themselves back out of technological capability, in which case we don?t send out signals. A third branch of that scenario is that we don?t nuke ourselves and we don?t inload, but generally just use up the available free energy, fail to find a suitable path to sustainability and just fade away as a tech-enabled civilization. All three of those scenarios explain the Fermi silence. I really like the notion that we figure out how to inload. That would be way cool. 
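Running the numbers on that thought experiment, with toy values only (the nearest star at roughly 4.2 light-years today, scaled up by the six orders of magnitude imagined above):

# Toy arithmetic for the expanded-universe thought experiment.
# All values are illustrative, nothing more.
scale = 1.0e6                  # six orders of magnitude expansion
nearest_star_ly = 4.2          # roughly Proxima Centauri today
d = nearest_star_ly * scale    # ~4.2 million light-years afterward
print(f"one-way signal time: {d:,.0f} years")
print(f"round-trip signal time: {2 * d:,.0f} years")
print(f"one-way trip at 0.1 c: {d / 0.1:,.0f} years")

A round trip of over eight million years for a single exchange of messages is the point at which signalling stops being communication at all, which is why the attention turns inward.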
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Thu May 19 21:01:18 2022 From: dsunley at gmail.com (Darin Sunley) Date: Thu, 19 May 2022 15:01:18 -0600 Subject: [ExI] Another Fermi Paradox Paper In-Reply-To: <009901d86bc0$fbd82910$f3887b30$@rainier66.com> References: <009901d86bc0$fbd82910$f3887b30$@rainier66.com> Message-ID: I'm a simulation hypothesis guy myself. It only takes one Von Neumann machine having gotten here to make all that moot, and if it got here more than about 2500 years ago we wouldn't even have legends of the brief uploading discontinuity. "The movie goes on, and no one in the audience has any idea." On Thu, May 19, 2022 at 2:44 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Darin Sunley via extropy-chat > *Subject:* Re: [ExI] Another Fermi Paradox Paper > > > > >?The Fermi Paradox is really easy to solve, once you appreciate even the > most conservative possible parameters of a technological singularity?. > > And if they're a million years ahead of us, we're currently running as the > screensaver on their desktop computers? Darin > > > > > > > > Darin do let me offer a more optimistic view if I may. > > > > In the days when we were having singularity conferences regularly (LOCALLY > even! Cool!) we looked a number of possibilities regarding the Fermi > silence. One of them I think (and hope) was under-studied at that time has > grown far more appealing since then, both in the positive view it presents > and in my estimation, the probability of its adequately explaining the > Fermi observation. > > > > Consider this thought experiment: imagine the universe expands > dramatically by six orders of magnitude, where the solar system stays as > is, but the nearest stars go from a few light-years to a few million light > years, and the nearest galaxies move out to a few trillion light years. > Our perspective on signals from out there is changed, for we realize we can > never go there, they can never come here, there is no possible or practical > actual travel. In that scenario, we no longer bother to even think much > about space travel, for it is pointless. We can do things with the local > rocks in orbit about the sun if we wish, but beyond that? space is a > forever formidable chasm over which travel can never occur, and even if we > did somehow, the signal of its success will be millions of years future, > so? don?t even bother. Sending out signals into space becomes a pointless > activity as well, so there is no reason to squander the energy to do it. > > > > In that thought experiment, we turn our full attention of our greatest > minds to increased organization of the matter right here and right now, for > it is all we will ever have. > > > > OK then, suppose we discover some kind of mathematical proof or something > to convince us that consciousness is not substrate dependent, and that it > can be modeled and simulated in software, and so? our new task before us is > to create computing platforms which will be ultra-reliable and suitable for > having us move into them. > > > > Ja? > > > > It follows that if we discover that consciousness is not substrate > dependent and that we are collectively smart enough to create computing > devices competent enough for consciousness to move into, then we will do > that, and we don?t send out signals. 
We communicate directly through > actual conductors, rather than transmitting across empty space with EM > signals. We eventually organize every atom available into being-space for > consciousness. > > > > Otherwise? we go the way of other tech civilizations which eventually > discover fusion and nuke themselves back out of technological capability, > in which case we don?t send out signals. > > > > A third branch of that scenario is that we don?t nuke ourselves and we > don?t inload, but generally just use up the available free energy, fail to > find a suitable path to sustainability and just fade away as a tech-enabled > civilization. > > > > All three of those scenarios explain the Fermi silence. I really like the > notion that we figure out how to inload. That would be way cool. > > > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu May 19 22:04:33 2022 From: pharos at gmail.com (BillK) Date: Thu, 19 May 2022 23:04:33 +0100 Subject: [ExI] Augmented reality will give us superpowers Message-ID: Augmented reality will give us superpowers AR promises us a magical world. Louis Rosenberg May 16, 2022 Quotes: Virtual and augmented reality have been through some false starts. But this time is different. Rapid advances in AR have made doctors the first superhumans, and these abilities are only growing. As AR hits the consumer market, it will sell superpowers to all of us. AR eyewear will soon replace the smartphone as our interface for digital content. Over the next decade, the handheld mobile phone will be replaced by augmented-reality glasses that you will wear during most of your waking hours. While many consumers, myself included, are skeptical that we?ll ever want to wear digital hardware on our faces for hours each day, we will. The reason is simple: Augmented reality will give us superpowers. This brings me back to my thesis: Over the next ten years, augmented reality will replace the mobile phone as our primary interface for digital content. Early adopters will embrace the lure of new, magical capabilities. Everyone else, skeptics included, will quickly find themselves at a disadvantage without omniscience, x-ray vision, superhuman recall, and dozens of other capabilities that are not even on the drawing board yet. This will drive adoption as quickly as the transition from flip phones to smartphones. After all, not upgrading your hardware will mean missing out on layers of useful information that everyone else can see. End Quotes ------------------------ For many years, I have thought that the smartphone is an interim device. AR spectacles are the next step. In turn they will be followed by AR contact lenses, as AR tech merges into the human body. As the article says, who doesn't want superpowers? Resistance is futile. We will be the Borg. BillK From sparge at gmail.com Fri May 20 12:39:47 2022 From: sparge at gmail.com (Dave S) Date: Fri, 20 May 2022 08:39:47 -0400 Subject: [ExI] Augmented reality will give us superpowers In-Reply-To: References: Message-ID: On Thu, May 19, 2022 at 6:08 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > For many years, I have thought that the smartphone is an interim > device. AR spectacles are the next step. 
In turn they will be followed > by AR contact lenses, as AR tech merges into the human body. > As the article says, who doesn't want superpowers? > I was disappointed that Google cancelled the Glass product but I suspect they're still working on AR. > Resistance is futile. We will be the Borg. > The Borg were much more than a population of individuals with AR. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri May 20 13:31:57 2022 From: pharos at gmail.com (BillK) Date: Fri, 20 May 2022 14:31:57 +0100 Subject: [ExI] Augmented reality will give us superpowers In-Reply-To: References: Message-ID: On Fri, 20 May 2022 at 13:40, Dave S wrote: > > On Thu, May 19, 2022 at 6:08 PM BillK via extropy-chat wrote: >> >> >> For many years, I have thought that the smartphone is an interim >> device. AR spectacles are the next step. In turn they will be followed >> by AR contact lenses, as AR tech merges into the human body. >> As the article says, who doesn't want superpowers? > > > I was disappointed that Google cancelled the Glass product but I suspect they're still working on AR. > Oh, yes, they are working on AR! Other companies have opened up and changed the market for smart specs. Did you see the demo of Google's new instant translate specs for foreign language speech? >> >> Resistance is futile. We will be the Borg. > > > The Borg were much more than a population of individuals with AR. > -Dave Just as humans will change into much more than just people with smart specs. BillK From spike at rainier66.com Sat May 21 04:28:27 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 20 May 2022 21:28:27 -0700 Subject: [ExI] 1950 census - using the dead In-Reply-To: References: <004401d86999$d0afe210$720fa630$@rainier66.com> <005801d869f1$5c4d1520$14e73f60$@rainier66.com> <001301d86a06$4656eec0$d304cc40$@rainier66.com> Message-ID: <009101d86ccb$38101a90$a8304fb0$@rainier66.com> From: extropy-chat On Behalf Of William Flynn Wallace via extropy-chat Sent: Tuesday, 17 May, 2022 12:15 PM To: ExI chat list Cc: William Flynn Wallace Subject: Re: [ExI] 1950 census - using the dead >?Just whose privacy is being protected? The dead? I think that we should use the dead in any way possible if it will help us with our lives, such as the medical info you referred to, Spike. >?Therefore, I would contact the descendants and tell them that you have information on their dead relatives. You don't tell what info, but hint that some might be good, some bad, some neutral, and let them decide. But then, I am a very open person and will tell all to anyone who asks. When I am dead they can think what they will - can't hurt me. bill w An example to answer the question of whose privacy is being protected: a person I match contacted me asking if I could help her find out the identity of her biological father. I asked some questions: she was born recently and her mother refused to give her a hint of her parentage. She never thought much of it until she went to a nearby town with some girlfriends. Her mother freaked out, told her they would get drunk, end up preggers, etc, described it in such a way that she seemed to know a lot about that. So? I figured out who it was by the DNA signature we shared. I went into YearbookUSA, found the guy?s name, went into WhitePages.com and found the guy was married, had four children, etc. I decided to not tell. I did tell the cousin all the resources she needed to figure it out. 
If she chose to pursue it at that point, it?s her business. Whose privacy is being protected? The living, in that case. spike On Tue, May 17, 2022 at 10:55 AM spike jones via extropy-chat > wrote: -----Original Message----- From: spike at rainier66.com > >...I sometimes comment that I learned something I wish I didn't know. Does anyone here have a good example of finding out something you really wish you didn't know? I don't. I have learned some disturbing things, but I would rather be disturbed than ignorant. So... while respecting others' privacy, I am good with the 72 year delay on census records. spike I have a related question to the above about cognitive dissonance and learning something you wish you didn't know. In my quirky hobby of DNA-based genealogy, I sometimes learn things that are disturbing about my own family which I intentionally withhold from the rest of the family. This is a question even more loaded with moral and ethical ambiguity, the kind of problem at which I do not do well: if it can't be modeled with a system of simultaneous differential equations, good chance I can't solve it. I don't do ethical dilemmas. This is why I didn't go to medical school and I am so glad I didn't. I have withheld some biiiiig stuff from my own family and continue to do so. Note the above comment assumes the questionable position that one does not do wrong by doing nothing. It's the train switch dilemma where you choose to do nothing and five are slain rather than switch the tracks and sacrifice a different three. Maybe. Another good example: thru DNA genealogy, I discovered that one of my adopted third cousins was sired by her great grandfather (her great grandfather met a teenage girl who was a squatter on his land, his son never confessed that he had met a squatter on the same land 19 years previously and had sired a daughter, who the old man later met and sired two before he married her and sired five more. The father and great grandfather were the same man. I never told my adopted cousin that information, even though it might be medically relevant. It's a moral dilemma which almost made me give up DNA genealogy when I was first starting. Question please: if you find out something (legitimately) which you realize is relevant but perhaps disturbing to someone else, do you withhold or tell? First think about a related question: is there information you wish you didn't know? So if you want to know, but are willing to withhold relevant true information from someone else, does not that violate the golden rule? Looks to me like an exception to the golden rule, which is ordinarily a solid ethical guide. If you ponder the father = great grandfather dilemma, I have an even thornier one for you, also related to DNA genealogy. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sat May 21 09:26:02 2022 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 21 May 2022 02:26:02 -0700 Subject: [ExI] Substitution argument was Re: Is Artificial Life Conscious? 
In-Reply-To: <1881527377.1859575.1652543735686@mail.yahoo.com> References: <20220513034147.Horde.ieK90AsoQ4E0ABuIpUIpm5C@sollegro.com> <1881527377.1859575.1652543735686@mail.yahoo.com> Message-ID: <20220521022602.Horde.fVoNOwiSXPLhoXtnVvdZSkN@webmail.sollegro.com> Quoting Brent Allsop: > No, only the popular consensus functionalists, led by Chalmers, with > his derivative and mistaken "substitution argument" work. results in > them thinking it is a hard problem, leading the whole world astray.? > The hard problem would be solved by now,?if it wasn't for all that. > If you understand why the substitution argument is a mistaken > sleight of hand, that so-called "hard problem" goes away.? All the > stuff like What is it like to be a bat, how do you bridge the > explanatory gap, and all that simply fall away, once you know the > colorness quality of something. I would probably be lumped in with the functionalists since I think intelligence is a literal mathematical fitness function on tensors being optimized by having their partial derivatives minimized by gradient descent against environmental parameters. In the brain, these tensors represent the relative weights and bias of the neurons in the neural network. I am toying with calling these tensor functions SELFs for scalable epistemic learning functions. That being said, I have issues with the substitution argument. For one thing, the larger a network gets, the more information lies between nodes relative to information within nodes. That is to say that relationships between components increase in importance relative to the components themselves. In my theory, this is the essence of emergence. It might intuitively aid the understanding of my argument to examine a higher order network. The substitution argument suggests that a small part of my brain could be replaced by a functionally identical artificial part, and I would not be able to tell the difference. The problem with this argument is that the function of any neuron or neural circuit of the brain is not determined solely by the properties of the neuron or neural circuit, but by its holistic relationship with all the other neurons it is connected to. So not only could an artificial neuron not be an "indistinguishable substitute" for the native neuron, but even another identical biological neuron would not be a sufficient replacement unless it was somehow grown or developed in the context of a brain identical to yours. It might be more intuitively obvious to consider your family, than a brain. If you were instantaneously replaced with a clone of yourself, even if that clone had been trained in your memories up until let's say last month, your family would notice some pretty jarring differences between you and your clone. Those problems could eventually go away as your family adapted to your clone, and your clone adapted to your family, but the actual replacement itself would be obvious to your family when it occurred. Similarly, an artificial replacement neuron/neural circuit (or even a biological one) would have to undergo "on the job training" to sufficiently substitute for the component it was replacing. And if the circuit was extensive enough, you and the people around you would notice a difference. > And I don't really know much about the problem of universals.? I > just know that we live in a world full of LOTS of colourful things, > yet all we know are the colors things seem to be.? Nobody yet knows > the true intrinsic colorness quality of anything.? 
The emerging > consensus Representational Qualia Theory, and all the supporters of > all the sub camps, are predicting that once we discover which of all our > descriptions of stuff in the brain is a description of redness, this > will falsify all but THE ONE camp finally demonstrated to be true. > All the supporters of all the falsified camps will then be seen > jumping to this one yet-to-be-falsified camp. We are tracking all > this in real time, and already seeing significant progress. In > other words, there will be irrefutable consensus proof that the > 'hard problem' has finally been resolved. I predict this will > happen within 10 years. Anyone care to make a bet that THE ONE > camp will have over 90% "Mind Expert consensus", and there will be > > 1000 experts in total, participating, within 10 years? Consensus is simply consensus; it is not proof. The majority and even the totality have been wrong about a great many things over the long span of history. >> >>> First, we must recognize that redness is not an intrinsic quality of the >>> strawberry; it is a quality of our knowledge of the strawberry in our >>> brain. This must be true since we can invert our knowledge by simply >>> inverting any transducing system anywhere in the perception process. >>> If we have knowledge of a strawberry that has a redness quality, and if we >>> objectively observed this redness in someone else's brain, and fully >>> described that redness, would that tell us the quality we are describing? >>> No, for the same reason you can't communicate to a blind person what >>> redness is like. >> >> Why not? If redness is not intrinsic to the strawberry but is instead >> a quality of our knowledge of the strawberry, then why can't we >> explain to a blind person what redness is like? Blind people have >> knowledge of strawberries and plenty of glutamate in their brains. >> Just tell them that redness is what strawberries are like, and they >> will understand you just fine. > > Wait, what? No you can't. Sure, maybe if they've been sighted, > seen a strawberry with their eyes (i.e. directly experienced > redness knowledge), then became blind. They will be able to kind of > remember what that redness was like, but they will no longer be able > to experience it. But how does the experience of redness in the sighted change the glutamate (or whatever representational "stuff" you hypothesize) versus the glutamate of the blind? Surely you can see my point that redness must be learned, and the brains of the color-learned are chemically indistinguishable from the brains of the blind. And if there were any representational "stuff", then it would lie in the difference between the brains of the sighted and the blind. I would posit that any such difference would lie in the neural wiring and synaptic weights, which would be chemically indistinguishable but structurally and functionally distinct. >> >>> The entirety of our objective knowledge tells us nothing >>> of the intrinsic qualities of any of that stuff we are describing. >> >> Ok, but you just said that redness was not an intrinsic quality of >> strawberries but of our knowledge of them, so our objective knowledge >> of them should be sufficient to describe redness. > > Sure, it is sufficient, but until you know which sufficient > description is a description of redness, and which sufficient > description is a description of greenness, we won't know which is > which. We don't need to know anything as long as we are constantly learning.
If you woke up tomorrow and everything that was red looked green to you, at first you would be confused, but after a week, you would adapt and be functionally equivalent to now. You might eventually even forget there ever was a switch. >> >> So if this "stuff" is glutamate, glycine, or whatever, and it exists >> in the brains of blind people, then why can't it represent redness (or >> greenness) information to them also? > > People may be able to dream redness. Or they may take some > psychedelics that enable them to experience redness, or surgeons > may stimulate a part of the brain, while doing brain surgery, > producing a redness experience. Those rare cases are possible, but > that isn't yet normal. Once they discover which of all our > descriptions of stuff in the brain is a description of redness, > someone like Neuralink will be producing that redness quality in > blind people's brains all the time, with artificial eyes, and so > on. But to date, normal blind people can't experience redness > quality. Sure they can, even if it is just through a frequency of sound output by an Orcam MyEye. You learn what redness is by some manner of perception, and how you perceive it does not matter. Synesthetes might even be able to taste or smell redness. >> >>> >>> >>>>> This is true if that stuff is some kind of "Material" or "electromagnetic >>>>> field" "spiritual" or "functional" stuff, it remains a fact that your >>>>> knowledge, composed of that, has a redness quality. >>>> >>>> It seems you are quite open-minded when it comes to what qualifies as >>>> "stuff". If so, then why does your 3-robot-scenario single out >>>> information as not being stuff? If you wish to insist that something >>>> physical in the brain has the redness quality and conveys knowledge of >>>> redness, then why glutamate? Why not instead hypothesize the only thing >>>> that prima facie has the redness property to begin with, >>>> i.e. red light? After all there are photoreceptors in the deep brain. >>>> >>> >>> Any physical property like redness, greenness, +5 volts, holes in a punch >>> card... can represent (convey) an abstract 1. There must be something >>> physical representing that one, but, again, you can't know what that is >>> unless you have a transducing dictionary telling you which is which. >> >> You may need something physical to represent the abstract 1, but that >> abstract 1 in turn represents some different physical thing. > > Only if you have a transducing dictionary that enables such, or you > think of it in that particular way. Other than that, it's just a > set of physical facts, which can be interpreted as something else, > that is all. A transducing dictionary is not enough. Something has to read the dictionary, and all meaning is relative. If your wife is lost in the jungle, then her cries for help would mean something very different to you than they would to a hungry tiger. In communication, it takes meaning to understand meaning. The reason you can understand abstract information is because you yourself are abstract information. Stuart LaForge From stathisp at gmail.com Sat May 21 14:14:55 2022 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 22 May 2022 00:14:55 +1000 Subject: [ExI] Substitution argument was Re: Is Artificial Life Conscious?
In-Reply-To: <20220521022602.Horde.fVoNOwiSXPLhoXtnVvdZSkN@webmail.sollegro.com> References: <20220513034147.Horde.ieK90AsoQ4E0ABuIpUIpm5C@sollegro.com> <1881527377.1859575.1652543735686@mail.yahoo.com> <20220521022602.Horde.fVoNOwiSXPLhoXtnVvdZSkN@webmail.sollegro.com> Message-ID: On Sat, 21 May 2022 at 19:27, Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Brent Allsop: > > > > No, only the popular consensus functionalists, led by Chalmers, with > > his derivative and mistaken "substitution argument" work. results in > > them thinking it is a hard problem, leading the whole world astray. > > The hard problem would be solved by now, if it wasn't for all that. > > If you understand why the substitution argument is a mistaken > > sleight of hand, that so-called "hard problem" goes away. All the > > stuff like What is it like to be a bat, how do you bridge the > > explanatory gap, and all that simply fall away, once you know the > > colorness quality of something. > > I would probably be lumped in with the functionalists since I think > intelligence is a literal mathematical fitness function on tensors > being optimized by having their partial derivatives minimized by > gradient descent against environmental parameters. In the brain, these > tensors represent the relative weights and bias of the neurons in the > neural network. I am toying with calling these tensor functions SELFs > for scalable epistemic learning functions. > > That being said, I have issues with the substitution argument. For one > thing, the larger a network gets, the more information lies between > nodes relative to information within nodes. That is to say that > relationships between components increase in importance relative to > the components themselves. In my theory, this is the essence of > emergence. > > It might intuitively aid the understanding of my argument to examine a > higher order network. The substitution argument suggests that a small > part of my brain could be replaced by a functionally identical > artificial part, and I would not be able to tell the difference. The > problem with this argument is that the function of any neuron or > neural circuit of the brain is not determined solely by the properties > of the neuron or neural circuit, but by its holistic relationship with > all the other neurons it is connected to. So not only could an > artificial neuron not be an "indistinguishable substitute" for the > native neuron, but even another identical biological neuron would not > be a sufficient replacement unless it was somehow grown or developed > in the context of a brain identical to yours. ?Functionally identical? means that the replacement interacts with the remaining tissue exactly the same way as the original did. If it doesn?t, then it isn?t functionally identical. It might be more intuitively obvious to consider your family, than a > brain. If you were instantaneously replaced with a clone of yourself, > even if that clone had been trained in your memories up until let's > say last month, your family would notice some pretty jarring > differences between you and your clone. Those problems could > eventually go away as your family adapted to your clone, and your > clone adapted to your family, but the actual replacement itself would > be obvious to your family when it occurred. How would your family notice a difference if your behaviour were exactly the same? 
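To make 'functionally identical' concrete, here is a minimal sketch, a toy example of my own rather than anything from Chalmers: two neuron implementations that differ internally but agree on every input, so that nothing downstream can tell which one is installed.

# Two internally different but functionally identical ReLU neurons.
# If they agree on every input, the rest of the network cannot
# distinguish the replacement from the original.
import random

def neuron_original(xs, ws, b):
    s = 0.0
    for w, x in zip(ws, xs):
        s += w * x
    return max(0.0, s + b)

def neuron_replacement(xs, ws, b):
    # different control flow, same accumulation order, same outputs
    s = 0.0
    for w, x in zip(ws, xs):
        s += w * x
    s += b
    return s if s > 0.0 else 0.0

random.seed(0)
for _ in range(10000):
    xs = [random.uniform(-1, 1) for _ in range(8)]
    ws = [random.uniform(-1, 1) for _ in range(8)]
    b = random.uniform(-1, 1)
    assert neuron_original(xs, ws, b) == neuron_replacement(xs, ws, b)
print("10000 trials: no downstream difference detected")

The holistic relationships you describe are all mediated by the component's inputs and outputs, so if those are preserved exactly, the relationships are preserved too. That is the whole force of the argument.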
Similarly, an artificial replacement neuron/neural circuit (or even a > biological one) would have to undergo "on the job training" to > sufficiently substitute for the component it was replacing. And if the > circuit was extensive enough, you and the people around you would > notice a difference. Technical difficulty is not a problem in a thought experiment. The argument is that IF a part of your brain were replaced with a functionally identical analogue THEN your consciousness would necessarily be preserved. > And I don't really know much about the problem of universals. I > > just know that we live in a world full of LOTS of colourful things, > > yet all we know are the colors things seem to be. Nobody yet knows > > the true intrinsic colorness quality of anything. The emerging > > consensus Representational Qualia Theory, and all the supporters of > > all the sub camps are predicting once we discover which of all our > > descriptions of stuff in the brain is a description of redness, this > > will falsify all but THE ONE camp finally demonstrated to be true. > > All the supporters of all the falsified camps will then be seen > > jumping to this one yet to be falsified camp. We are tracking all > > this in real time, and already seeing significant progress. In > > other words, there will be irrefutable consensus proof that the > > 'hard problem' has finally been resolved. I predict this will > > happen within 10 years. Anyone care to make a bet, that THE ONE > > camp will have over 90% "Mind Expert consensus", and there will be > > > 1000 experts in total, participating, within 10 years? > > Consensus is simply consensus; it is not proof. The majority and even > the totality have been wrong about a great deal many things over the > long span of history. > > >> > >>> First, we must recognize that redness is not an intrinsic quality of > the > >>> strawberry, it is a quality of our knowledge of the strawberry in our > >>> brain. This must be true since we can invert our knowledge by simply > >>> inverting any transducing system anywhere in the perception process. > >>> If we have knowledge of a strawberry that has a redness quality, and > if we > >>> objectively observed this redness in someone else's brain, and fully > >>> described that redness, would that tell us the quality we are > describing? > >>> No, for the same reason you can't communicate to a blind person what > >>> redness is like. > >> > >> Why not? If redness is not intrinsic to the strawberry but is instead > >> a quality of our knowledge of the strawberry, then why can't we > >> explain to a blind person what redness is like? Blind people have > >> knowledge of strawberries and plenty of glutamate in their brains. > >> Just tell them that redness is what strawberries are like, and they > >> will understand you just fine. > > > > Wait, what? No you can't. Sure, maybe if they've been sighted, > > seen a strawberry, with their eyes, (i.e. directly experienced > > redness knowledge) then became blind. They will be able to kind of > > remember what that redness was like, but the will no longer be able > > to experience it. > > But how does the experience of redness in the sighted change the > glutamate (or whatever representational "stuff" you hypothesize) > versus the glutamate of the blind? Surely you can see my point that > redness must be learned, and the brains of the color-learned are > chemically indistinguishable from the brains of the blind. 
And if
> there were any representational "stuff", then it would lie in the
> difference between the brains of the sighted and the blind. I would
> posit that any such difference would lie in the neural wiring and
> synaptic weights, which would be chemically indistinguishable but
> structurally and functionally distinct.
>
> >>
> >>> The entirety of our objective knowledge tells us nothing
> >>> of the intrinsic qualities of any of that stuff we are describing.
> >>
> >> Ok, but you just said that redness was not an intrinsic quality of
> >> strawberries but of our knowledge of them, so our objective knowledge
> >> of them should be sufficient to describe redness.
> >
> > Sure, it is sufficient, but until you know which sufficient
> > description is a description of redness, and which sufficient
> > description is a description of greenness, we won't know which is
> > which.
>
> We don't need to know anything as long as we are constantly learning.
> If you woke up tomorrow and everything that was red looked green to
> you, at first you would be confused, but after a week, you would
> adapt and be functionally equivalent to now. You might eventually
> even forget there ever was a switch.
>
> >>
> >> So if this "stuff" is glutamate, glycine, or whatever, and it exists
> >> in the brains of blind people, then why can't it represent redness (or
> >> greenness) information to them also?
> >
> > People may be able to dream redness. Or they may take some
> > psychedelics that enable them to experience redness, or surgeons
> > may stimulate a part of the brain, while doing brain surgery,
> > producing a redness experience. Those rare cases are possible, but
> > that isn't yet normal. Once they discover which of all our
> > descriptions of stuff in the brain is a description of redness,
> > someone like Neuralink will be producing that redness quality in
> > blind people's brains all the time, with artificial eyes, and so
> > on. But to date, normal blind people can't experience redness
> > quality.
>
> Sure they can, even if it is just through a frequency of sound output by
> an OrCam MyEye. You learn what redness is by some manner of perception,
> and how you perceive it does not matter. Synesthetes might even be
> able to taste or smell redness.
>
> >>
> >>>
> >>>
> >>>>> This is true if that stuff is some kind of "material" or "electromagnetic
> >>>>> field" or "spiritual" or "functional" stuff; it remains a fact that your
> >>>>> knowledge, composed of that, has a redness quality.
> >>>>
> >>>> It seems you are quite open-minded when it comes to what qualifies as
> >>>> "stuff". If so, then why does your 3-robot-scenario single out
> >>>> information as not being stuff? If you wish to insist that something
> >>>> physical in the brain has the redness quality and conveys knowledge of
> >>>> redness, then why glutamate? Why not instead hypothesize the one thing
> >>>> that prima facie has the redness property to begin with,
> >>>> i.e. red light? After all, there are photoreceptors in the deep brain.
> >>>>
> >>>
> >>> Any physical property like redness, greenness, +5 volts, holes in a punch
> >>> card... can represent (convey) an abstract 1. There must be something
> >>> physical representing that one, but, again, you can't know what that is
> >>> unless you have a transducing dictionary telling you which is which.
> >>
> >> You may need something physical to represent the abstract 1, but that
> >> abstract 1 in turn represents some different physical thing.
> > > > Only if you have a transducing dictionary that enables such, or you > > think of it in that particular way. Other than that, it's just a > > set of physical facts, which can be interpreted as something else, > > that is all. > > A transducing dictionary is not enough. Something has to read the > dictionary, and all meaning is relative. If your wife is lost in the > jungle, then her cries for help would mean something very different to > you than they would to a hungry tiger. In communication, it takes > meaning to understand meaning. The reason you can understand abstract > information is because you yourself are abstract information. > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat May 21 21:42:59 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 21 May 2022 16:42:59 -0500 Subject: [ExI] education Message-ID: Education: no politician would say that it is anything but #1 on their agenda. But what are the facts? For a long time teaching was done by women, often single, and were underpaid by any standard. Has that changed? Read the following, from Quora (I did not try to validate the statistics - 100K, Spike?). Right now, 65% of new teachers will quit the profession within their first 5 years. Five years ago that figure was 50%. Over the past 5 years at my high school, we have had to replace 20% of the teachers each year. This is due to new teachers quitting, older teachers retiring, and middle-aged teachers taking early retirement and doing something else. A few teachers were assaulted so badly by students they had to take medical retirement because they suffered permanent injuries and can no longer carry out the duties of a teacher. If I am lucky, I get a 3 or 4 week summer break. I am expected to do all sorts of training over summer that I do not get paid for. The district will pay for me to attend the training but not pay my wages to go to the training. I have to engage in continuing education to renew my credential every 5 years. It is my district that signs off on the form that is submitted to the state to renew my credential. So if I do not do enough of these summer institutes, I do not get to renew my credential. This past summer, I could not attend any training because I helped my son move out of state to take a job and I was building on an addition to my house. So I have to make up those hours taking night classes. I am doing 12 to 16 hour days and I still have to do all the lesson planning, grading, and attend all the meetings I normally do. I have over 40 students with IEPs and 504 plans so I have to attend those meetings and meet with those parents when they wish. My district also requires I fill out paperwork on all of those students to ensure we are following the legal requirements. After school today, I was called to the office because one student in a class is being bullied by two other students in the same class. I spent an hour dealing with that. I get to spend another two hours tomorrow in a conflict mediation meeting tomorrow with parents after school which means I have to cancel a doctor?s appointment, I have no choice but to cancel this appointment. The next slot is in two months for this specialist. 
My son also has an advanced degree in science like I do and he earns more than I do with his first job and he has much better health benefits than I do. I have to pay $1,000 a month for my health insurance, the district only pays $200 a month. If I were to quit my teaching job right now and go work for the company my son works for as a lab tech, I would actually get a pay raise and move to a part of the country where the cost of living is much lower. I could go back to the career I had before teaching and after brushing up I would earn two to three times what I do now. In CA, there is a shortage of over 100,000 science teachers. Graduates with science degrees will earn more than twice the money in private industry than teaching. Twenty or thirty years ago that was not the case. Well over half of my students are failing because they will not even pick up a pencil and try. They only play games on their cell phones. I am discouraged from confiscating them because if I ask for them the students refuse to hand them over. I then have to write a referral and security removes them from class. An administrator then confiscates the phone and sends the student to in-school suspension for that day and the next day. Parents come in and blame the teacher or the school for the policy. So I let the students have their phones and my policy is if your phone is out and you do not finish the assignment by the end of the period, you get a 0 and no opportunity to make it up. The students that are failing are good with it. They tell me they do not need school. They plan to live on welfare like their parents or they can make more money selling drugs so they do not need school. The past few years, I have not been doing much teaching. That is why this is my last year. I am tired of parents coming in and telling me they can do a better job. I took a parent up on the offer and after 5 minutes the parent left in tears. The students ran her out with their profanity and complete lack of respect. I am tired of being assaulted by parents. I am tired of administrators telling me I have to do more to get students to pass. I am tired of every time I turn my back being hit with things that are thrown at me. When I do catch the person doing the throwing, the admin does absolutely nothing. I am tired of having student fights in my room and having to wait 10 or more minutes for admin to respond and then the students are returned the next day because admin does not want to suspend them. We are being told by the state we have to lower the number of suspensions and if we do not, we will lose money. I cannot be part of a dysfunctional education system. There are still parts in this country where education still works. I have a friend that moved to a place and teaches at a place that does not have these problems. I plan to do the same next year. The upside is I will also get a pay raise because there are still a few places left in this country that are willing to pay teachers more. It is not much more but enough to make me feel that my skills as a teacher are appreciated. I did not become a teacher to get rich. I did it to help students but I need to earn a decent salary. Where I am right now, I have not had a pay raise in over 10 years. Ten years ago we took a 10% pay cut to help the district balance the budget. 
Now that the economy is better and the state has provided them more money for COLA they have raised the salaries of the administration by over 30% but they claim they have no money to replace the money they took in the pay cut. I am tired of the phrase it is for the children meaning that teachers are supposed to work harder, longer, with less, and for less and do major miracles with student success when we cannot even get students to pick up a pencil and try. The ones that want to do school cannot because the ones that do not want to do school make it impossible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlatorra at gmail.com Sat May 21 22:34:47 2022 From: mlatorra at gmail.com (Michael LaTorra) Date: Sat, 21 May 2022 16:34:47 -0600 Subject: [ExI] education In-Reply-To: References: Message-ID: I was born in the 1950s. Most of my teachers in elementary school were women. Some were already advanced in age. One lady fresh out of college. Most of them were married. (The one fresh out of college got married while she was teaching my class.) A female teacher who was married did not have to earn as much to live well as a single female would. These ladies were quite bright and almost all of them were good teachers. In today's society, many of these ladies would be able to earn much more money in other professions than they would as teachers. Back in the '50s and '60s, not many women were employed in traditionally male professions as they are today. This is definitely a mixed blessing. I had some really excellent female science teachers in junior high school and high school -- they really knew their stuff. Mike LaTorra On Sat, May 21, 2022 at 3:44 PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Education: no politician would say that it is anything but #1 on their > agenda. But what are the facts? For a long time teaching was done by > women, often single, and were underpaid by any standard. Has that > changed? Read the following, from Quora (I did not try to validate the > statistics - 100K, Spike?). > > Right now, 65% of new teachers will quit the profession within their first > 5 years. Five years ago that figure was 50%. Over the past 5 years at my > high school, we have had to replace 20% of the teachers each year. This is > due to new teachers quitting, older teachers retiring, and middle-aged > teachers taking early retirement and doing something else. A few teachers > were assaulted so badly by students they had to take medical retirement > because they suffered permanent injuries and can no longer carry out the > duties of a teacher. > > If I am lucky, I get a 3 or 4 week summer break. I am expected to do all > sorts of training over summer that I do not get paid for. The district will > pay for me to attend the training but not pay my wages to go to the > training. I have to engage in continuing education to renew my credential > every 5 years. It is my district that signs off on the form that is > submitted to the state to renew my credential. So if I do not do enough of > these summer institutes, I do not get to renew my credential. > > This past summer, I could not attend any training because I helped my son > move out of state to take a job and I was building on an addition to my > house. So I have to make up those hours taking night classes. I am doing 12 > to 16 hour days and I still have to do all the lesson planning, grading, > and attend all the meetings I normally do. 
I have over 40 students with > IEPs and 504 plans so I have to attend those meetings and meet with those > parents when they wish. My district also requires I fill out paperwork on > all of those students to ensure we are following the legal requirements. > > After school today, I was called to the office because one student in a > class is being bullied by two other students in the same class. I spent an > hour dealing with that. I get to spend another two hours tomorrow in a > conflict mediation meeting tomorrow with parents after school which means I > have to cancel a doctor?s appointment, I have no choice but to cancel this > appointment. The next slot is in two months for this specialist. > > My son also has an advanced degree in science like I do and he earns more > than I do with his first job and he has much better health benefits than I > do. I have to pay $1,000 a month for my health insurance, the district only > pays $200 a month. If I were to quit my teaching job right now and go work > for the company my son works for as a lab tech, I would actually get a pay > raise and move to a part of the country where the cost of living is much > lower. > > I could go back to the career I had before teaching and after brushing up > I would earn two to three times what I do now. > > In CA, there is a shortage of over 100,000 science teachers. Graduates > with science degrees will earn more than twice the money in private > industry than teaching. Twenty or thirty years ago that was not the case. > > Well over half of my students are failing because they will not even pick > up a pencil and try. They only play games on their cell phones. I am > discouraged from confiscating them because if I ask for them the students > refuse to hand them over. I then have to write a referral and security > removes them from class. An administrator then confiscates the phone and > sends the student to in-school suspension for that day and the next day. > Parents come in and blame the teacher or the school for the policy. So I > let the students have their phones and my policy is if your phone is out > and you do not finish the assignment by the end of the period, you get a 0 > and no opportunity to make it up. The students that are failing are good > with it. They tell me they do not need school. They plan to live on welfare > like their parents or they can make more money selling drugs so they do not > need school. > > The past few years, I have not been doing much teaching. That is why this > is my last year. I am tired of parents coming in and telling me they can do > a better job. I took a parent up on the offer and after 5 minutes the > parent left in tears. The students ran her out with their profanity and > complete lack of respect. > > I am tired of being assaulted by parents. I am tired of administrators > telling me I have to do more to get students to pass. I am tired of every > time I turn my back being hit with things that are thrown at me. When I do > catch the person doing the throwing, the admin does absolutely nothing. I > am tired of having student fights in my room and having to wait 10 or more > minutes for admin to respond and then the students are returned the next > day because admin does not want to suspend them. We are being told by the > state we have to lower the number of suspensions and if we do not, we will > lose money. > > I cannot be part of a dysfunctional education system. There are still > parts in this country where education still works. 
I have a friend that
> moved to a place and teaches at a place that does not have these problems.
> I plan to do the same next year. The upside is I will also get a pay raise
> because there are still a few places left in this country that are willing
> to pay teachers more. It is not much more but enough to make me feel that
> my skills as a teacher are appreciated. I did not become a teacher to get
> rich. I did it to help students but I need to earn a decent salary. Where I
> am right now, I have not had a pay raise in over 10 years. Ten years ago we
> took a 10% pay cut to help the district balance the budget. Now that the
> economy is better and the state has provided them more money for COLA they
> have raised the salaries of the administration by over 30% but they claim
> they have no money to replace the money they took in the pay cut. I am
> tired of the phrase "it is for the children" meaning that teachers are
> supposed to work harder, longer, with less, and for less and do major
> miracles with student success when we cannot even get students to pick up a
> pencil and try. The ones that want to do school cannot because the ones
> that do not want to do school make it impossible.
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike at rainier66.com Sat May 21 23:47:50 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sat, 21 May 2022 16:47:50 -0700
Subject: [ExI] education
In-Reply-To: 
References: 
Message-ID: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>

On Sat, May 21, 2022 at 3:44 PM William Flynn Wallace via extropy-chat
> wrote:

Education: no politician would say that it is anything but #1 on their
agenda. But what are the facts? For a long time teaching was done by
women, often single, and were underpaid by any standard. Has that changed?
Read the following, from Quora (I did not try to validate the statistics -
100K, Spike?).

Supply and demand is what determines the value of any profession. We have
the notion that all elementary and secondary teachers should be paid the
same, but those who know science, technology, engineering and math have the
option of higher paying careers outside of teaching. Consequently, there
aren't enough STEM teachers.

I see in some areas workarounds: STEM-oriented private schools where
teachers are on different pay scales. Like any other business, they pay
what the market demands for the skillset they need.

Public schools are union-dominated, which will lead to the same pay scale
for a STEM teacher as for one whose degree is in gender studies or
journalism or some goofy thing. Their shortage of STEM teachers is never
going away.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jose_cordeiro at yahoo.com Sun May 22 02:56:07 2022
From: jose_cordeiro at yahoo.com (Jose Cordeiro)
Date: Sun, 22 May 2022 02:56:07 +0000 (UTC)
Subject: [ExI] World Build 2045: welcome all your "feedback" to our entry 468 with friends from The Millennium Project: https://worldbuild.ai/W-0000000468/
References: <2096490935.1276099.1653188167695.ref@mail.yahoo.com>
Message-ID: <2096490935.1276099.1653188167695@mail.yahoo.com>

Dear Extropy friends,

Several from The Millennium Project (Jerome Glenn, Ted Gordon, Elizabeth Florescu, Veronica Agreda and myself) have entered the fascinating World Build 2045 competition organized by the Future of Life Institute (led by MIT professor Max Tegmark and funded by Elon Musk, Jaan Tallinn and other luminaries) that calls for: 1) a timeline from 2022 to 2045 with events and data points; 2) a short description of a day in the life in 2045 (vignettes); 3) answers to 13 questions about the future; and 4) an art/media piece.

Our entry is at: https://worldbuild.ai/W-0000000468/ Scroll down to see the timeline, scroll down further to see the two vignettes of a day in the life in 2045, scroll down further to see our answers to 13 questions about the future, and scroll down further to see our futuristic 5-minute video on 2045.

At any point you can scroll back to the top and click on "Feedback" to include your comments. This is very important to improve our entry and move up the competition during the next two weeks. So, kindly click on "Feedback" and add your views to our entry 468; we appreciate your feedback eternally: https://worldbuild.ai/W-0000000468/

Futuristically yours,

La vie est belle!

Jose Cordeiro, MBA, PhD (www.cordeiro.org)
https://en.wikipedia.org/wiki/Jos%C3%A9_Luis_Cordeiro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From foozler83 at gmail.com Sun May 22 16:13:05 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Sun, 22 May 2022 11:13:05 -0500
Subject: [ExI] education
In-Reply-To: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
Message-ID: 

Spike - what if you had a teacher whose STEM skills were so low that they
could not get a higher paying job in industry? Could they supplement their
teaching with Khan Academy? A teacher coming into the classroom, turning on
the TV and sitting with their paperback book while the class watches,
brings to mind my experience with teaching psych 101 by TV. Students hated
it. I suppose Khan is better? bill w

On Sat, May 21, 2022 at 6:49 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> On Sat, May 21, 2022 at 3:44 PM William Flynn Wallace via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> Education: no politician would say that it is anything but #1 on their
> agenda. But what are the facts? For a long time teaching was done by
> women, often single, and were underpaid by any standard. Has that
> changed? Read the following, from Quora (I did not try to validate the
> statistics - 100K, Spike?).
>
> Supply and demand is what determines the value of any profession. We have
> the notion that all elementary and secondary teachers should be paid the
> same, but those who know science, technology, engineering and math have the
> option of higher paying careers outside of teaching. Consequently, there
> aren't enough STEM teachers.
>
> I see in some areas workarounds: STEM-oriented private schools where
> teachers are on different pay scales. Like any other business, they pay
> what the market demands for the skillset they need.
>
> Public schools are union-dominated, which will lead to the same pay scale
> for a STEM teacher as for one whose degree is in gender studies or
> journalism or some goofy thing. Their shortage of STEM teachers is never
> going away.
> > > > spike > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun May 22 16:18:31 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 22 May 2022 11:18:31 -0500 Subject: [ExI] j a corey Message-ID: A few months ago I offered to ship the set of books to whomever would pay the shipping costs. Someone replied and I don't remember who that was. I have had several life-shaking events since then and am just now trying to get in touch with the person who wanted the books. So if you are that person, please let me know. bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun May 22 16:37:55 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 22 May 2022 09:37:55 -0700 Subject: [ExI] education In-Reply-To: References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com> Message-ID: <008801d86dfa$4a590830$df0b1890$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] education >?Spike - what if you had a teacher whose STEM skills were so low that they could not get a higher paying job in industry? Could they supplement their teaching with Khan Academy?... Of course, and plenty of them do exactly that. Thanks for the question Billw, for it allows me to bang on one of my favorite drums. A prominent calculus teacher in the Los Angeles area did a lot to promote Khan Academy in the early days, when Sal was getting it moving. Khan didn?t really have a lot of content there yet, but had some, and it was good. This calculus teacher recognized the potential and what was already available, promoted it, encouraged his students to get going on that. Some did, and those students demonstrated what was possible in what we now call asynchronous learning. The ones who had some drive, some giterdun, would look ahead in the syllabus, check it out on Khan Academy, listen to Sal?s explanation, come to class prepared, get more out of the lecture, way more. This calculus teacher eventually contributed to filling out and rounding out Sal?s previously gappy (and at times cobby, incomplete) curriculum, they created KA?s World of Math, which has somewhere over 1100 videos with exercise sets, all free. If a student went thru all of that, the student was ready for anything college throws at them. >?.A teacher coming into the classroom, turning on the TV and sitting with their paperback book while the class watches, brings to mind my experience with teaching psych 101 by TV? Nooooooo that isn?t how Khan Academy is used, and it defeats the strength and value of asynchronous learning. The students don?t view anything simultaneously and don?t use up class time on it. They go on their own time and progress at their own pace. The calculus teacher in the first part of this post realized that if he let all the calculus students study and progress at their own pace, forget teaching them all the same lessons at the same time, then true: the gap between the best and weakest students opens dramatically, however? the class, collectively, the average student does better. The class average goes up dramatically. The teaching is far more efficient. The average student gets more, the best students get waaay more outta the class. >?Students hated it. 
I suppose Khan is better? bill w Ja, Khan Academy is way better because Sal Khan recognizes a fundamental truth about education which he writes about in his book: every classroom contains both eagles and pigeons. Accept it and work with it. Billw, you are a professor, so you know exactly what I mean. Asynchronous learning doesn?t pretend to produce equal outcomes for all. It freely acknowledges that to do the most good for the most students, we just hafta do what Khan Academy does best: it lets the pigeons peck and the eagles soar. Do comment on the previous paragraph please Billw. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun May 22 19:03:14 2022 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 22 May 2022 14:03:14 -0500 Subject: [ExI] education In-Reply-To: <008801d86dfa$4a590830$df0b1890$@rainier66.com> References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com> <008801d86dfa$4a590830$df0b1890$@rainier66.com> Message-ID: Asynchronous learning doesn?t pretend to produce equal outcomes for all. It freely acknowledges that to do the most good for the most students, we just hafta do what Khan Academy does best: it lets the pigeons peck and the eagles soar. spike My question: what is the role of the teacher? Wandering around the classroom helping students with the page they are on? An advanced student can do this - no grad work needed (for 1-12 grade applications). I agree that learning at one's own rate is the ideal. I just don't see how a teacher fits into this. bill w On Sun, May 22, 2022 at 11:39 AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > *Subject:* Re: [ExI] education > > > > >?Spike - what if you had a teacher whose STEM skills were so low that > they could not get a higher paying job in industry? Could they supplement > their teaching with Khan Academy?... > > > > Of course, and plenty of them do exactly that. Thanks for the question > Billw, for it allows me to bang on one of my favorite drums. > > > > A prominent calculus teacher in the Los Angeles area did a lot to promote > Khan Academy in the early days, when Sal was getting it moving. Khan > didn?t really have a lot of content there yet, but had some, and it was > good. This calculus teacher recognized the potential and what was already > available, promoted it, encouraged his students to get going on that. Some > did, and those students demonstrated what was possible in what we now call > asynchronous learning. The ones who had some drive, some giterdun, would > look ahead in the syllabus, check it out on Khan Academy, listen to Sal?s > explanation, come to class prepared, get more out of the lecture, way > more. This calculus teacher eventually contributed to filling out and > rounding out Sal?s previously gappy (and at times cobby, incomplete) > curriculum, they created KA?s World of Math, which has somewhere over 1100 > videos with exercise sets, all free. If a student went thru all of that, > the student was ready for anything college throws at them. > > > > > > >?.A teacher coming into the classroom, turning on the TV and sitting > with their paperback book while the class watches, brings to mind my > experience with teaching psych 101 by TV? > > > > Nooooooo that isn?t how Khan Academy is used, and it defeats the strength > and value of asynchronous learning. 
The students don?t view anything > simultaneously and don?t use up class time on it. They go on their own > time and progress at their own pace. > > > > The calculus teacher in the first part of this post realized that if he > let all the calculus students study and progress at their own pace, forget > teaching them all the same lessons at the same time, then true: the gap > between the best and weakest students opens dramatically, however? the > class, collectively, the average student does better. The class average > goes up dramatically. The teaching is far more efficient. The average > student gets more, the best students get waaay more outta the class. > > > > >?Students hated it. I suppose Khan is better? bill w > > > > Ja, Khan Academy is way better because Sal Khan recognizes a fundamental > truth about education which he writes about in his book: every classroom > contains both eagles and pigeons. Accept it and work with it. > > > > Billw, you are a professor, so you know exactly what I mean. Asynchronous > learning doesn?t pretend to produce equal outcomes for all. It freely > acknowledges that to do the most good for the most students, we just hafta > do what Khan Academy does best: it lets the pigeons peck and the eagles > soar. > > > > Do comment on the previous paragraph please Billw. > > > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun May 22 20:02:37 2022 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 22 May 2022 13:02:37 -0700 Subject: [ExI] education In-Reply-To: References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com> <008801d86dfa$4a590830$df0b1890$@rainier66.com> Message-ID: <002901d86e16$e307ba60$a9172f20$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] education >>?Asynchronous learning doesn?t pretend to produce equal outcomes for all. It freely acknowledges that to do the most good for the most students, we just hafta do what Khan Academy does best: it lets the pigeons peck and the eagles soar. spike >?My question: what is the role of the teacher? Wandering around the classroom helping students with the page they are on? An advanced student can do this - no grad work needed (for 1-12 grade applications). I agree that learning at one's own rate is the ideal. I just don't see how a teacher fits into this. bill w Billw, your question is exactly the reason why I was hoping you would respond to this thread: you bring experience and insight to the table which is outside of my own and outside the usual paradigms found here. Thanks for that. Now to answer your question: I don?t know. Lets see what the others come up with. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun May 22 23:05:24 2022 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 22 May 2022 16:05:24 -0700 Subject: [ExI] Substitution argument Message-ID: <20220522160524.Horde.FYvjPSeQfA_34I0bTjvSa59@sollegro.com> Quoting Stathis Papaioannou: >> >> It might intuitively aid the understanding of my argument to examine a >> higher order network. The substitution argument suggests that a small >> part of my brain could be replaced by a functionally identical >> artificial part, and I would not be able to tell the difference. 
The
>> problem with this argument is that the function of any neuron or
>> neural circuit of the brain is not determined solely by the properties
>> of the neuron or neural circuit, but by its holistic relationship with
>> all the other neurons it is connected to. So not only could an
>> artificial neuron not be an "indistinguishable substitute" for the
>> native neuron, but even another identical biological neuron would not
>> be a sufficient replacement unless it was somehow grown or developed
>> in the context of a brain identical to yours.
>
>
> "Functionally identical" means that the replacement interacts with the
> remaining tissue exactly the same way as the original did. If it doesn't,
> then it isn't functionally identical.

I don't have an issue with the meaning of "functionally identical"; I
just don't believe such a functionally identical replacement is
possible. Not when the function of a neuron is so dependent on the
neurons that it is connected to. It is a flawed premise and
invalidates the argument.

> It might be more intuitively obvious to consider your family, rather than a
>> brain. If you were instantaneously replaced with a clone of yourself,
>> even if that clone had been trained in your memories up until let's
>> say last month, your family would notice some pretty jarring
>> differences between you and your clone. Those problems could
>> eventually go away as your family adapted to your clone, and your
>> clone adapted to your family, but the actual replacement itself would
>> be obvious to your family when it occurred.
>
>
> How would your family notice a difference if your behaviour were exactly
> the same?

The clone would have no memory of any family interactions that
happened since the clone was last updated. Commitments, arguments,
obligations, promises, plans and other relationship details would
seemingly be forgotten. At best, the family would wonder why you had
lost your memories of the last 30 days, possibly assuming you were
doing drugs or something. You can't behave correctly when that
behavior is based on knowledge you don't possess.

> Similarly, an artificial replacement neuron/neural circuit (or even a
>> biological one) would have to undergo "on the job training" to
>> sufficiently substitute for the component it was replacing. And if the
>> circuit was extensive enough, you and the people around you would
>> notice a difference.
>
>
> Technical difficulty is not a problem in a thought experiment. The argument
> is that IF a part of your brain were replaced with a functionally identical
> analogue THEN your consciousness would necessarily be preserved.

Technical difficulty bordering on impossibility can make a thought
experiment irrelevant. For example, IF I had a functional time
machine, THEN I could undo all my mistakes in the past. The
substitution argument is logically valid but it stems from false
premises.

Secondly, it is a mistake to assume that one's consciousness cannot be
changed while preserving one's identity. Your consciousness changes all
the time but you do not cease being you because of it. When I have too
much to drink, my consciousness changes, but I am still me, albeit
drunk. So a not-quite-functionally-identical analogue to a neural
circuit that noticeably changes your consciousness would not suddenly
render you no longer yourself. It would simply change you in ways
similar to the numerous changes you have already experienced in the
course of your life. The whole point of the brain is neural plasticity.
Stuart LaForge From stathisp at gmail.com Sun May 22 23:47:55 2022 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 23 May 2022 09:47:55 +1000 Subject: [ExI] Substitution argument In-Reply-To: <20220522160524.Horde.FYvjPSeQfA_34I0bTjvSa59@sollegro.com> References: <20220522160524.Horde.FYvjPSeQfA_34I0bTjvSa59@sollegro.com> Message-ID: On Mon, 23 May 2022 at 09:06, Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Stathis Papaioannou: > > > >> > >> It might intuitively aid the understanding of my argument to examine a > >> higher order network. The substitution argument suggests that a small > >> part of my brain could be replaced by a functionally identical > >> artificial part, and I would not be able to tell the difference. The > >> problem with this argument is that the function of any neuron or > >> neural circuit of the brain is not determined solely by the properties > >> of the neuron or neural circuit, but by its holistic relationship with > >> all the other neurons it is connected to. So not only could an > >> artificial neuron not be an "indistinguishable substitute" for the > >> native neuron, but even another identical biological neuron would not > >> be a sufficient replacement unless it was somehow grown or developed > >> in the context of a brain identical to yours. > > > > > > ?Functionally identical? means that the replacement interacts with the > > remaining tissue exactly the same way as the original did. If it doesn?t, > > then it isn?t functionally identical. > > I don't have an issue with the meaning of "functionally identical"; I > have just don't believe such a functionally identical replacement is > possible. Not when the function of a neuron is so dependent on the > neurons that it is connected to. It is a flawed premise and > invalidates the argument. > It does not invalidate the argument since the argument does not depend on it being practically possible to make the substitution. > > It might be more intuitively obvious to consider your family, than a > >> brain. If you were instantaneously replaced with a clone of yourself, > >> even if that clone had been trained in your memories up until let's > >> say last month, your family would notice some pretty jarring > >> differences between you and your clone. Those problems could > >> eventually go away as your family adapted to your clone, and your > >> clone adapted to your family, but the actual replacement itself would > >> be obvious to your family when it occurred. > > > > > > How would your family notice a difference if your behaviour were exactly > > the same? > > The clone would have no memory of any family interactions that > happened since the clone was last updated. Commitments, arguments, > obligations, promises, plans and other relationship details would > seemingly be forgotten. At best, the family would wonder why you had > lost your memories of the last 30 days; possibly assuming you were > doing drugs or something. You can't behave correctly when that > behavior is based on knowledge you don't posses. > If it's missing memories, it isn't functionally identical. > > Similarly, an artificial replacement neuron/neural circuit (or even a > >> biological one) would have to undergo "on the job training" to > >> sufficiently substitute for the component it was replacing. And if the > >> circuit was extensive enough, you and the people around you would > >> notice a difference. > > > > > > Technical difficulty is not a problem in a thought experiment. 
The
> argument
> > is that IF a part of your brain were replaced with a functionally
> identical
> > analogue THEN your consciousness would necessarily be preserved.
>
> Technical difficulty bordering on impossibility can make a thought
> experiment irrelevant. For example, IF I had a functional time
> machine, THEN I could undo all my mistakes in the past. The
> substitution argument is logically valid but it stems from false
> premises.
>

Well, if you had a time machine you could undo the mistakes of the past,
and if that's all you want to show then the argument is valid (ignoring the
logical paradoxes of time travel). The fact that a time machine may be
impossible does not change this. In a similar way, all that is claimed in
the substitution argument is that if you reproduced the behaviour of the
substituted part then you would also reproduce any associated consciousness.

> Secondly, it is a mistake to assume that one's consciousness cannot be
> changed while preserving one's identity. Your consciousness changes all
> the time but you do not cease being you because of it. When I have too
> much to drink, my consciousness changes, but I am still me, albeit
> drunk. So a not-quite-functionally-identical analogue to a neural
> circuit that noticeably changes your consciousness would not suddenly
> render you no longer yourself. It would simply change you in ways
> similar to the numerous changes you have already experienced in the
> course of your life. The whole point of the brain is neural plasticity.
>

The argument is not about identity, it is about consciousness not being
substrate specific.

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike at rainier66.com Sun May 22 23:48:36 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Sun, 22 May 2022 16:48:36 -0700
Subject: [ExI] smoking weed
Message-ID: <001c01d86e36$74ae77f0$5e0b67d0$@rainier66.com>

Remember talking about this over 20 years ago? It seemed to me like a
solvable controls problem. The image-recognition part I didn't get then,
but it seemed to me you could plant at very precise locations and smoke
anything growing very far outside a small radius around each planting spot,
which wouldn't even need image recognition:

https://twitter.com/i/status/1528367575146475522

Looks to me like this could work on insects and possibly mice as well.
Stay tuned on this because we are being told humanity is facing a food
emergency.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com Mon May 23 00:00:55 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Sun, 22 May 2022 18:00:55 -0600
Subject: [ExI] Substitution argument was Re: Is Artificial Life Conscious?
In-Reply-To: <20220521022602.Horde.fVoNOwiSXPLhoXtnVvdZSkN@webmail.sollegro.com>
References: <20220513034147.Horde.ieK90AsoQ4E0ABuIpUIpm5C@sollegro.com>
	<1881527377.1859575.1652543735686@mail.yahoo.com>
	<20220521022602.Horde.fVoNOwiSXPLhoXtnVvdZSkN@webmail.sollegro.com>
Message-ID: 

First off, thank you Stuart, and everyone else who has indulged me so
extensively in conversations like this. I know you guys are surely way
smarter than I am on most everything, and it is very interesting to see
how intelligent people think about this kind of stuff.
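On spike's weed zapper above: the no-image-recognition version reduces to a nearest-neighbour test against the known planting coordinates. A rough sketch, with the grid spacing, tolerance and sprout positions all invented for illustration:

import math

# Crop seeds planted at precisely known coordinates (metres), e.g. by a GPS seeder.
SEED_SPOTS = [(col * 0.75, row * 0.30) for col in range(4) for row in range(10)]
TOLERANCE_M = 0.05   # anything sprouting more than 5 cm from a seed spot is a weed

def is_weed(sprout_xy):
    """Flag a detected sprout as a weed if no planted seed is within tolerance."""
    nearest = min(math.dist(sprout_xy, seed) for seed in SEED_SPOTS)
    return nearest > TOLERANCE_M

print(is_weed((0.40, 0.15)))   # True: mid-row sprout, aim the laser here
print(is_weed((0.75, 0.30)))   # False: right on a seed spot, leave it alone

At field scale the seed list would go in a k-d tree rather than a linear scan, but the control logic is the same.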
On Sat, May 21, 2022 at 3:27 AM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Brent Allsop: > > > > No, only the popular consensus functionalists, led by Chalmers, with > > his derivative and mistaken "substitution argument" work. results in > > them thinking it is a hard problem, leading the whole world astray. > > The hard problem would be solved by now, if it wasn't for all that. > > If you understand why the substitution argument is a mistaken > > sleight of hand, that so-called "hard problem" goes away. All the > > stuff like What is it like to be a bat, how do you bridge the > > explanatory gap, and all that simply fall away, once you know the > > colorness quality of something. > > I would probably be lumped in with the functionalists since I think > intelligence is a literal mathematical fitness function on tensors > being optimized by having their partial derivatives minimized by > gradient descent against environmental parameters. In the brain, these > tensors represent the relative weights and bias of the neurons in the > neural network. I am toying with calling these tensor functions SELFs > for scalable epistemic learning functions. > > That being said, I have issues with the substitution argument. For one > thing, the larger a network gets, the more information lies between > nodes relative to information within nodes. That is to say that > relationships between components increase in importance relative to > the components themselves. In my theory, this is the essence of > emergence. > > It might intuitively aid the understanding of my argument to examine a > higher order network. The substitution argument suggests that a small > part of my brain could be replaced by a functionally identical > artificial part, and I would not be able to tell the difference. The > problem with this argument is that the function of any neuron or > neural circuit of the brain is not determined solely by the properties > of the neuron or neural circuit, but by its holistic relationship with > all the other neurons it is connected to. So not only could an > artificial neuron not be an "indistinguishable substitute" for the > native neuron, but even another identical biological neuron would not > be a sufficient replacement unless it was somehow grown or developed > in the context of a brain identical to yours. > > It might be more intuitively obvious to consider your family, than a > brain. If you were instantaneously replaced with a clone of yourself, > even if that clone had been trained in your memories up until let's > say last month, your family would notice some pretty jarring > differences between you and your clone. Those problems could > eventually go away as your family adapted to your clone, and your > clone adapted to your family, but the actual replacement itself would > be obvious to your family when it occurred. > > Similarly, an artificial replacement neuron/neural circuit (or even a > biological one) would have to undergo "on the job training" to > sufficiently substitute for the component it was replacing. And if the > circuit was extensive enough, you and the people around you would > notice a difference. > I read this, before I read Stathis response to this. And I fully expected him to jump in and reply, exactly as he did. And I happen to agree with what stathis is replying in response to this. He is saying it far better than I could have said it. > > And I don't really know much about the problem of universals. 
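A toy version of what "functionally identical" is being asked to mean in this exchange may help: a replacement unit built from different machinery but tuned to the original's exact input/output map, so that downstream tissue, which only ever sees outputs, cannot tell the difference. Everything here (the weights, the threshold, the three-input "circuit") is invented for illustration:

import numpy as np

W = np.array([0.4, -0.7, 1.1])   # the unit's synaptic weights (made up)

def biological_neuron(x):
    """The native unit: weighted inputs through a non-linearity."""
    return np.tanh(x @ W)

def artificial_neuron(x):
    """Different internal machinery (explicit exponentials instead of
    np.tanh), but the same input/output map."""
    e = np.exp(2 * (x @ W))
    return (e - 1) / (e + 1)

def rest_of_brain(unit_output):
    """Downstream tissue only ever sees the unit's output."""
    return unit_output > 0.2     # a crude fire/don't-fire decision

rng = np.random.default_rng(1)
stimuli = rng.normal(size=(100000, 3))

print(np.allclose(biological_neuron(stimuli), artificial_neuron(stimuli)))   # True
print(np.array_equal(rest_of_brain(biological_neuron(stimuli)),
                     rest_of_brain(artificial_neuron(stimuli))))              # True

Stuart's objection is that for a real neuron no such fixed map exists apart from the whole network it sits in; Stathis's reply is that whatever the map turns out to be, holistic context and all, is by definition what the replacement has to reproduce.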
I > > just know that we live in a world full of LOTS of colourful things, > > yet all we know are the colors things seem to be. Nobody yet knows > > the true intrinsic colorness quality of anything. The emerging > > consensus Representational Qualia Theory, and all the supporters of > > all the sub camps are predicting once we discover which of all our > > descriptions of stuff in the brain is a description of redness, this > > will falsify all but THE ONE camp finally demonstrated to be true. > > All the supporters of all the falsified camps will then be seen > > jumping to this one yet to be falsified camp. We are tracking all > > this in real time, and already seeing significant progress. In > > other words, there will be irrefutable consensus proof that the > > 'hard problem' has finally been resolved. I predict this will > > happen within 10 years. Anyone care to make a bet, that THE ONE > > camp will have over 90% "Mind Expert consensus", and there will be > > > 1000 experts in total, participating, within 10 years? > > Consensus is simply consensus; it is not proof. The majority and even > the totality have been wrong about a great deal many things over the > long span of history. > Oh, of course. I just enjoy knowing this and tracking as much of this as possible, and I believe that which you measure, improves. > > >> > >>> First, we must recognize that redness is not an intrinsic quality of > the > >>> strawberry, it is a quality of our knowledge of the strawberry in our > >>> brain. This must be true since we can invert our knowledge by simply > >>> inverting any transducing system anywhere in the perception process. > >>> If we have knowledge of a strawberry that has a redness quality, and > if we > >>> objectively observed this redness in someone else's brain, and fully > >>> described that redness, would that tell us the quality we are > describing? > >>> No, for the same reason you can't communicate to a blind person what > >>> redness is like. > >> > >> Why not? If redness is not intrinsic to the strawberry but is instead > >> a quality of our knowledge of the strawberry, then why can't we > >> explain to a blind person what redness is like? Blind people have > >> knowledge of strawberries and plenty of glutamate in their brains. > >> Just tell them that redness is what strawberries are like, and they > >> will understand you just fine. > > > > Wait, what? No you can't. Sure, maybe if they've been sighted, > > seen a strawberry, with their eyes, (i.e. directly experienced > > redness knowledge) then became blind. They will be able to kind of > > remember what that redness was like, but the will no longer be able > > to experience it. > > But how does the experience of redness in the sighted change the > glutamate (or whatever representational "stuff" you hypothesize) > versus the glutamate of the blind? Surely you can see my point that > redness must be learned, and the brains of the color-learned are > chemically indistinguishable from the brains of the blind. And if > there were any representational "stuff", then it would lie in the > difference between the brains of the sighted and the blind. I would > posit that any such difference would lie in the neural wiring and > synaptic weights which would be chemically indistinguishable but > structurally and functionally distinct. > I don't think of it this way. One thing is for sure, there is a 3D model of visual reality, somewhere in your brain. 
It is this model that has
the colorness qualities, and it must be that it is this model that your
brain does all the intelligent thinking and recognition with/about, post
perception process, not the useless raw data coming from the senses. It is
the perception system that has all the "neural wiring", and the networks
that do the recognition on the rendered models. But I would seriously doubt
there are any "neural wirings or synaptic weights" in the actual
representations with colorness qualities, themselves.

Teslas weren't able to be very intelligent until they also started
rendering 3D model (including time) representations, then having all the
intelligence work on these (abstract) rendered 3D models. Our perception
system renders this 3D model, with non-abstract colorness qualities, into
your consciousness.

Sure, you learn to use glutamate (or something), which has the redness
quality, to represent a point on this model that has a redness quality.
Your consciousness doesn't change the glutamate; your brain just uses
glutamate when it wants to render a 3D voxel of knowledge with a redness
quality into consciousness.

Let me say it this way. YOU cannot experience redness with your eyes
closed. Your memory of recalled redness is even far less than what you
experience (not glutamate) when looking at something red. And if not for
your memory of glutamate, YOU would not be able to find out what redness
was like, with your eyes closed, under normal operation.

> >>
> >>> The entirety of our objective knowledge tells us nothing
> >>> of the intrinsic qualities of any of that stuff we are describing.
> >>
> >> Ok, but you just said that redness was not an intrinsic quality of
> >> strawberries but of our knowledge of them, so our objective knowledge
> >> of them should be sufficient to describe redness.
> >
> > Sure, it is sufficient, but until you know which sufficient
> > description is a description of redness, and which sufficient
> > description is a description of greenness, we won't know which is
> > which.
>
> We don't need to know anything as long as we are constantly learning.
> If you woke up tomorrow and everything that was red looked green to
> you, at first you would be confused, but after a week, you would
> adapt and be functionally equivalent to now. You might eventually
> even forget there ever was a switch.
>

Right, but this doesn't change the quality your brain uses to paint your
conscious knowledge of red things with. And I predict that once we know
what it is in our brain that represents that redness quality, it will be
objectively observable. In other words, any of the above changes that you
are describing (i.e. changing a brain to use greenness instead of redness
to represent red things) will be objectively observable. We'll be able to
objectively observe when your greenness has changed to redness, and also
whether I use your greenness to represent red with, and so on.

> >>
> >> So if this "stuff" is glutamate, glycine, or whatever, and it exists
> >> in the brains of blind people, then why can't it represent redness (or
> >> greenness) information to them also?
> >
> > People may be able to dream redness. Or they may take some
> > psychedelics that enable them to experience redness, or surgeons
> > may stimulate a part of the brain, while doing brain surgery,
> > producing a redness experience. Those rare cases are possible, but
> > that isn't yet normal. Once they discover which of all our
> > descriptions of stuff in the brain is a description of redness,
> > someone like Neuralink will be producing that redness quality in
> > blind people's brains all the time, with artificial eyes, and so
> > on. But to date, normal blind people can't experience redness
> > quality.
>
> Sure they can, even if it is just through a frequency of sound output by
> an OrCam MyEye. You learn what redness is by some manner of perception,
> and how you perceive it does not matter. Synesthetes might even be
> able to taste or smell redness.
>

Yes, of course. As I was saying, in special cases like synesthesia, drug
inducement, the correct Neuralink stimulation, and such, where the brain is
induced into rendering glutamate knowledge into your consciousness, you
will experience redness. But, again, you won't be able to experience
redness, when your eyes are closed, without something additional like this.

> >>
> >>>
> >>>
> >>>>> This is true if that stuff is some kind of "material" or "electromagnetic
> >>>>> field" or "spiritual" or "functional" stuff; it remains a fact that your
> >>>>> knowledge, composed of that, has a redness quality.
> >>>>
> >>>> It seems you are quite open-minded when it comes to what qualifies as
> >>>> "stuff". If so, then why does your 3-robot-scenario single out
> >>>> information as not being stuff? If you wish to insist that something
> >>>> physical in the brain has the redness quality and conveys knowledge of
> >>>> redness, then why glutamate? Why not instead hypothesize the one thing
> >>>> that prima facie has the redness property to begin with,
> >>>> i.e. red light? After all, there are photoreceptors in the deep brain.
> >>>>
> >>>
> >>> Any physical property like redness, greenness, +5 volts, holes in a punch
> >>> card... can represent (convey) an abstract 1. There must be something
> >>> physical representing that one, but, again, you can't know what that is
> >>> unless you have a transducing dictionary telling you which is which.
> >>
> >> You may need something physical to represent the abstract 1, but that
> >> abstract 1 in turn represents some different physical thing.
> >
> > Only if you have a transducing dictionary that enables such, or you
> > think of it in that particular way. Other than that, it's just a
> > set of physical facts, which can be interpreted as something else,
> > that is all.
>
> A transducing dictionary is not enough. Something has to read the
> dictionary, and all meaning is relative. If your wife is lost in the
> jungle, then her cries for help would mean something very different to
> you than they would to a hungry tiger. In communication, it takes
> meaning to understand meaning. The reason you can understand abstract
> information is because you yourself are abstract information.
>

All true, yes. But you're getting away from the fact that a Tesla
represents red knowledge with an abstract word like "red", and a dictionary
is required to know what that means, while you represent red knowledge with
something that has a redness quality. That quality is your definition of
the word "red". My prediction is that it is something like glutamate that
has the redness quality: when we describe glutamate reacting in a synapse,
we are describing what you can directly apprehend as redness conscious
knowledge.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jasonresch at gmail.com Mon May 23 03:08:04 2022
From: jasonresch at gmail.com (Jason Resch)
Date: Sun, 22 May 2022 23:08:04 -0400
Subject: [ExI] Substitution argument
In-Reply-To:
References: <20220522160524.Horde.FYvjPSeQfA_34I0bTjvSa59@sollegro.com>
Message-ID:

Here's a simpler way to consider a substitution with functionally
equivalent parts, in a way less subject to worrying whether or not all
the important behaviors of neurons are captured:

A) Consider a full brain simulation down to the quark-gluon level.

1. Run on a Windows computer with AMD processors.

2. Run on a Mac computer with Intel processors.

The two computers can be swapped without having any bearing on the
behavior of the simulated brain.

Jason

P.S. As I understand it, the generally described neuron substitution
argument does not use generic and identically behaving neurons, but ones
wired up in the same way as the neuron each replaces, with the same
weights and biases as the original, such that its spiking/firing behavior
and interactions with its neighboring connected neurons will be the same
as they were with the original.

On Sun, May 22, 2022, 7:49 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> On Mon, 23 May 2022 at 09:06, Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Quoting Stathis Papaioannou:
>>
>> >> It might intuitively aid the understanding of my argument to examine
>> >> a higher order network. The substitution argument suggests that a
>> >> small part of my brain could be replaced by a functionally identical
>> >> artificial part, and I would not be able to tell the difference. The
>> >> problem with this argument is that the function of any neuron or
>> >> neural circuit of the brain is not determined solely by the
>> >> properties of the neuron or neural circuit, but by its holistic
>> >> relationship with all the other neurons it is connected to. So not
>> >> only could an artificial neuron not be an "indistinguishable
>> >> substitute" for the native neuron, but even another identical
>> >> biological neuron would not be a sufficient replacement unless it
>> >> was somehow grown or developed in the context of a brain identical
>> >> to yours.
>> >
>> > "Functionally identical" means that the replacement interacts with
>> > the remaining tissue exactly the same way as the original did. If it
>> > doesn't, then it isn't functionally identical.
>>
>> I don't have an issue with the meaning of "functionally identical"; I
>> just don't believe such a functionally identical replacement is
>> possible. Not when the function of a neuron is so dependent on the
>> neurons that it is connected to. It is a flawed premise and
>> invalidates the argument.
>>
>
> It does not invalidate the argument, since the argument does not depend
> on it being practically possible to make the substitution.
>
>> >> It might be more intuitively obvious to consider your family, rather
>> >> than a brain. If you were instantaneously replaced with a clone of
>> >> yourself, even if that clone had been trained in your memories up
>> >> until let's say last month, your family would notice some pretty
>> >> jarring differences between you and your clone. Those problems could
>> >> eventually go away as your family adapted to your clone, and your
>> >> clone adapted to your family, but the actual replacement itself
>> >> would be obvious to your family when it occurred.
>> >
>> > How would your family notice a difference if your behaviour were
>> > exactly the same?
>>
>> The clone would have no memory of any family interactions that
>> happened since the clone was last updated. Commitments, arguments,
>> obligations, promises, plans and other relationship details would
>> seemingly be forgotten. At best, the family would wonder why you had
>> lost your memories of the last 30 days, possibly assuming you were
>> doing drugs or something. You can't behave correctly when that
>> behavior is based on knowledge you don't possess.
>>
>
> If it's missing memories, it isn't functionally identical.
>
>> >> Similarly, an artificial replacement neuron/neural circuit (or even
>> >> a biological one) would have to undergo "on the job training" to
>> >> sufficiently substitute for the component it was replacing. And if
>> >> the circuit was extensive enough, you and the people around you
>> >> would notice a difference.
>> >
>> > Technical difficulty is not a problem in a thought experiment. The
>> > argument is that IF a part of your brain were replaced with a
>> > functionally identical analogue THEN your consciousness would
>> > necessarily be preserved.
>>
>> Technical difficulty bordering on impossibility can make a thought
>> experiment irrelevant. For example, IF I had a functional time
>> machine, THEN I could undo all my mistakes in the past. The
>> substitution argument is logically sound but it stems from false
>> premises.
>>
>
> Well, if you had a time machine you could undo the mistakes of the
> past, and if that's all you want to show then the argument is valid
> (ignoring the logical paradoxes of time travel). The fact that a time
> machine may be impossible does not change this. In a similar way, all
> that is claimed in the substitution argument is that if you reproduced
> the behaviour of the substituted part then you would also reproduce any
> associated consciousness.
>
>> Secondly, it is a mistake to assume that one's consciousness cannot be
>> changed while preserving one's identity. Your consciousness changes
>> all the time, but you do not cease being you because of it. When I
>> have too much to drink, my consciousness changes, but I am still me,
>> albeit drunk. So a not-quite-functionally-identical analogue to a
>> neural circuit that noticeably changes your consciousness would not
>> suddenly render you no longer yourself. It would simply change you in
>> ways similar to the numerous changes you have already experienced in
>> the course of your life. The whole point of the brain is neural
>> plasticity.
>>
>
> The argument is not about identity, it is about consciousness not being
> substrate specific.
>
> --
> Stathis Papaioannou
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
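Jason's P.S. above can be made concrete in a few lines of Python. This is
a minimal sketch with invented names and toy numbers, assuming a simple
rate-model neuron rather than anything biological: two implementations of
the same input-output function, on different "substrates", are swapped
without the rest of the network being able to tell.

import functools
import math

class BiologicalNeuron:
    """A toy rate neuron: output = tanh(bias + sum of weighted inputs)."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def fire(self, inputs):
        total = self.bias
        for w, x in zip(self.weights, inputs):  # accumulate synaptic input
            total += w * x
        return math.tanh(total)

class SiliconNeuron:
    """A different implementation with the same weights and the same
    function. reduce() keeps the same accumulation order, so even the
    floating-point results are bit-identical."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def fire(self, inputs):
        total = functools.reduce(
            lambda acc, wx: acc + wx[0] * wx[1],
            zip(self.weights, inputs),
            self.bias)
        return math.tanh(total)

def network_output(neurons, stimulus):
    # A toy one-layer "brain": every neuron sees the same stimulus.
    return [n.fire(stimulus) for n in neurons]

brain = [BiologicalNeuron([0.5, -1.2], 0.1),
         BiologicalNeuron([2.0, 0.3], -0.4)]
before = network_output(brain, [1.0, 0.5])

brain[0] = SiliconNeuron([0.5, -1.2], 0.1)  # the substitution
after = network_output(brain, [1.0, 0.5])

assert before == after  # the neighbors cannot tell the difference

The sketch only illustrates the "functionally identical" premise itself;
whether such a replacement is achievable for a real neuron is exactly
what the thread above is arguing about.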
From brent.allsop at gmail.com Mon May 23 14:58:53 2022
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 23 May 2022 08:58:53 -0600
Subject: [ExI] Substitution argument
In-Reply-To:
References: <20220522160524.Horde.FYvjPSeQfA_34I0bTjvSa59@sollegro.com>
Message-ID:

Hi Stuart,

What do you mean by: "The substitution argument is logically sound but it
stems from false premises."
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com Mon May 23 15:21:18 2022
From: sparge at gmail.com (Dave S)
Date: Mon, 23 May 2022 11:21:18 -0400
Subject: Re: [ExI] education
In-Reply-To: <002901d86e16$e307ba60$a9172f20$@rainier66.com>
References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
 <008801d86dfa$4a590830$df0b1890$@rainier66.com>
 <002901d86e16$e307ba60$a9172f20$@rainier66.com>
Message-ID:

On Sun, May 22, 2022 at 4:04 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Now to answer your question [what is the role of the teacher]: I don't
> know.
>

Seems to me they'd be there to explain things that the student doesn't
understand from the Khan material, as well as answering any other
questions the students have.

-Dave

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From foozler83 at gmail.com Mon May 23 15:38:52 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Mon, 23 May 2022 10:38:52 -0500
Subject: Re: [ExI] education
In-Reply-To:
References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
 <008801d86dfa$4a590830$df0b1890$@rainier66.com>
 <002901d86e16$e307ba60$a9172f20$@rainier66.com>
Message-ID:

Dave, if a student doesn't understand the Khan material, then either the
material is not neatly arranged in tiny steps, in which case it needs to
be rewritten, or the student didn't get the previous material and should
not have advanced to the point where he had to ask a question.

I am assuming that Khan has frequent self-tests to measure progress to
date. If a test is not passed, then the student is sent back to a
particular page (or whatever) for relearning. I also assume that Khan has
pretests to decide which video to start with. Spike - comment on the
above, please.

bill w

On Mon, May 23, 2022 at 10:23 AM Dave S via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, May 22, 2022 at 4:04 PM spike jones via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Now to answer your question [what is the role of the teacher]: I don't
>> know.
>>
>
> Seems to me they'd be there to explain things that the student doesn't
> understand from the Khan material, as well as answering any other
> questions the students have.
>
> -Dave
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com Mon May 23 15:48:24 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Mon, 23 May 2022 08:48:24 -0700
Subject: Re: [ExI] education
In-Reply-To:
References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
 <008801d86dfa$4a590830$df0b1890$@rainier66.com>
 <002901d86e16$e307ba60$a9172f20$@rainier66.com>
Message-ID: <005601d86ebc$89b0ec10$9d12c430$@rainier66.com>

From: extropy-chat On Behalf Of Dave S via extropy-chat
Subject: Re: [ExI] education

On Sun, May 22, 2022 at 4:04 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

Now to answer your question [what is the role of the teacher]: I don't
know.

> Seems to me they'd be there to explain things that the student doesn't
understand from the Khan material, as well as answering any other
questions the students have. -Dave

Ja, Dave, keep in mind the corner of education with which I am most
familiar is STEM, which is really better suited for online video
training. Billw's expertise is specific to the area of psychology, which
I think is more dependent on professor interaction and classroom
interaction of students with each other. Humans somehow defy being
modeled by a system of simultaneous differential equations. But we are
working that problem.

Undergraduate engineering education, most technology, the hard sciences
(the kind which are rigidly structured with equations (lotsa marvelous
equations (in all their refulgent glory (equations are my friends)))) and
all fields of math are well-suited for completely asynchronous online
video education. I think back on my own engineering education: that could
have all been done online with video.

Excellent example, check this. I am astonished at how good this guy is:

https://www.youtube.com/channel/UCm5mt-A4w61lknZ9lCsZtBw

I saw Steve's presentation of optimal nonlinear control theory, and was
hooked!

Nonlinear Control: Hamilton Jacobi Bellman (HJB) and Dynamic Programming
- YouTube

This was the best explanation of the mind-bending Hamilton-Jacobi-Bellman
equation I have ever heard. Too bad Steve wasn't around 30 years ago.
Excellent engineering education is now free to anyone anywhere on the
planet with an internet connection. How cool is that?

Fun aside: if a prole really really knows the hell outta the HJB equation
(referenced above), knows all the gritty details, how to use it, how to
solve it with all the modern dynamic programming techniques, that prole
can make a good living just doing her favorite thing. Full disclosure: I
am not one of those proles. I admire those who have a good handle on HJB
theory but tragically, I suck. I know a local prole who is really good at
it (and is making a good living with that skill (the equation I meant.))

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com Mon May 23 15:52:38 2022
From: spike at rainier66.com (spike at rainier66.com)
Date: Mon, 23 May 2022 08:52:38 -0700
Subject: Re: [ExI] education
In-Reply-To:
References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
 <008801d86dfa$4a590830$df0b1890$@rainier66.com>
 <002901d86e16$e307ba60$a9172f20$@rainier66.com>
Message-ID: <005f01d86ebd$215da850$6418f8f0$@rainier66.com>

From: extropy-chat On Behalf Of William Flynn Wallace via extropy-chat

I also assume that Khan has pretests to decide which video to start with.
Spike - comment on the above, please.
bill w

Hi Billw,

Ja, Khan Academy has pre-tests so you know where to start, and also the
KA world of math is almost entirely set up to have exercises to go along
with the lectures. You can't get through world of math without doing
skerjillions of exercises and passing the tests. It can be frustrating:
the way they used to do it, a prole had to get five correct answers in a
row to pass a skill. That can be damn hard to do in some areas. I don't
know how that could be done in disciplines which are not well-suited for
textbook exercises.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com Mon May 23 17:56:24 2022
From: sparge at gmail.com (Dave S)
Date: Mon, 23 May 2022 13:56:24 -0400
Subject: Re: [ExI] education
In-Reply-To:
References: <00c201d86d6d$2eeebb90$8ccc32b0$@rainier66.com>
 <008801d86dfa$4a590830$df0b1890$@rainier66.com>
 <002901d86e16$e307ba60$a9172f20$@rainier66.com>
Message-ID:

On Mon, May 23, 2022 at 11:40 AM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Dave, if a student doesn't understand the Khan material, then either the
> material is not neatly arranged in tiny steps, in which case it needs to
> be rewritten, or the student didn't get the previous material and should
> not have advanced to the point where he had to ask a question.
>

If only everything were so black and white... No educational material is
going to be perfectly understood by every student, every time. And it
can't answer every question a student might ask in advance. Obviously,
teachers would provide feedback to refine the material.

-Dave

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
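An aside on spike's HJB plug above: the Hamilton-Jacobi-Bellman equation
is the continuous-time version of the discrete Bellman equation, and the
dynamic programming techniques he mentions amount to iterating that
equation to a fixed point. Here is a minimal value-iteration sketch on a
made-up four-state chain; all numbers are invented for illustration, and
nothing here is taken from Steve's lecture:

# Discrete Bellman recursion solved by value iteration:
#   V(s) = max_a [ R(s, a) + gamma * V(next(s, a)) ]

GAMMA = 0.9
STATES = [0, 1, 2, 3]   # state 3 is the goal
ACTIONS = [-1, +1]      # step left or right

def step(s, a):
    return min(max(s + a, 0), 3)        # walls at both ends

def reward(s, a):
    return 1.0 if step(s, a) == 3 else 0.0

V = {s: 0.0 for s in STATES}
for _ in range(100):                    # iterate to (near) convergence
    V = {s: max(reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
         for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: reward(s, a) + GAMMA * V[step(s, a)])
          for s in STATES}
print(V)       # values rise toward the goal state
print(policy)  # the optimal action everywhere is +1, toward the goal

The same recursion, taken to a continuum of states and times, is what the
HJB equation expresses; real optimal-control solvers fight the curse of
dimensionality rather than a four-state chain.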
From ExiMod at protonmail.com Wed May 25 15:59:55 2022
From: ExiMod at protonmail.com (ExiMod)
Date: Wed, 25 May 2022 15:59:55 +0000
Subject: [ExI] Proton secure email has big update
Message-ID:

Hi

Proton Mail has just had a big update / improvement. Recommended!

"Evolving into a unified Proton reflects our growth from [an end-to-end
encrypted email provider to an entire privacy
ecosystem](https://proton.me/news/evolving-privacy), allowing us to
deliver even more benefits to the Proton community and make privacy
accessible to everyone".

Proton now provides:
- Proton Mail ensures your emails are private
- Proton VPN keeps your internet activity private and unblocks all content
- Proton Calendar encrypts your life events
- Proton Drive stores and backs up your important documents with
end-to-end encryption.

There is a range of plans, a free encrypted email service and two
paid-for levels of service.
https://proton.me/pricing

Regards, ExiMod

Sent with [Proton Mail](https://proton.me/) secure email.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From atymes at gmail.com Fri May 27 19:23:46 2022
From: atymes at gmail.com (Adrian Tymes)
Date: Fri, 27 May 2022 12:23:46 -0700
Subject: Re: [ExI] World Build 2045: welcome all your "feedback" to our
 entry 468 with friends from The Millennium Project:
 https://worldbuild.ai/W-0000000468/
In-Reply-To: <2096490935.1276099.1653188167695@mail.yahoo.com>
References: <2096490935.1276099.1653188167695.ref@mail.yahoo.com>
 <2096490935.1276099.1653188167695@mail.yahoo.com>
Message-ID:

Not bad, though it assumes a rationality and willingness to release
control (more precisely, accept outside control) that China's leaders
have, in recent years, demonstrated the opposite of.

On Sat, May 21, 2022 at 7:57 PM Jose Cordeiro via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Dear Extropy friends,
>
> Several from The Millennium Project (Jerome Glenn, Ted Gordon,
> Elizabeth Florescu, Veronica Agreda and myself) have entered the
> fascinating World Build 2045 competition organized by the Future of
> Life Institute (led by MIT professor Max Tegmark and funded by Elon
> Musk, Jaan Tallin and other luminaries) that calls for: 1) a timeline
> from 2022 to 2045 with events and data points; 2) a short description
> of a day in the life in 2045 (vignettes); 3) answers to 13 questions
> about the future; and 4) an art/media piece.
>
> Our entry is at: https://worldbuild.ai/W-0000000468/ scroll down to
> see the timeline, scroll down further to see the two vignettes of a day
> in the life in 2045, scroll down further to see our answers to 13
> questions about the future, and scroll down further to see our
> futuristic 5-minute video on 2045.
>
> At any point you can scroll back to the top and click on "Feedback" to
> include your comments. This is very important to improve our entry and
> move up the competition during the next two weeks, please. So, kindly
> click on "Feedback" and add your views to our entry 468, we appreciate
> your feedback eternally, please: https://worldbuild.ai/W-0000000468/
>
> Futuristically yours,
>
> La vie est belle!
>
> Jose Cordeiro, MBA, PhD (www.cordeiro.org)
> https://en.wikipedia.org/wiki/Jos%C3%A9_Luis_Cordeiro
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From foozler83 at gmail.com Tue May 31 19:19:46 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Tue, 31 May 2022 14:19:46 -0500
Subject: [ExI] physiognomy by AI
Message-ID:

Troublingly, some modern AI applications are delving into physiognomy, a
set of pseudoscientific ideas that first appeared thousands of years ago
with the ancient Greeks. In antiquity, Pythagoras, the Greek
mathematician, based his decisions about accepting students on whether
they "looked" gifted. To the philosopher Aristotle, bulbous noses denoted
an insensitive person and round faces signaled courage.

Recent research has tried to show that political leaning, sexuality, and
criminality can be inferred from pictures of people's faces.

*Political orientation*. In a 2021 article in *Nature Scientific
Reports*, Stanford researcher Michal Kosinski found that using
open-source code and publicly available data and facial images, facial
recognition technology can judge a person's political orientation
accurately 68 percent of the time, even when controlling for demographic
factors. In this research, the primary algorithm learned the average face
for conservatives and liberals and then predicted the political leanings
of unknown faces by comparing them to the reference images. Kosinski
wrote that his findings about AI's abilities have grave implications:
"The privacy threats posed by facial recognition technology are, in many
ways, unprecedented."

While the questions behind this line of inquiry may not immediately
trigger an alarm, the underlying premise still fits squarely within
physiognomy, predicting personality traits from face features.

(excerpt) from - geneticliteracyproject.com

bill w

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
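The scheme the excerpt describes (learn an "average face" per group, then
label an unknown face by whichever reference it sits closer to) is a
plain nearest-centroid classifier. A schematic sketch with made-up
three-number "faces" follows; this shows the general idea only, not
Kosinski's actual pipeline, which worked on features from a
facial-recognition model:

import math

def centroid(vectors):
    # component-wise mean of the training vectors for one class
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def distance(u, v):
    return math.dist(u, v)  # Euclidean distance (Python 3.8+)

# Pretend each face has already been reduced to a 3-number feature vector.
faces_a = [[0.9, 0.1, 0.3], [0.8, 0.2, 0.4]]  # class A training faces
faces_b = [[0.2, 0.9, 0.6], [0.1, 0.8, 0.7]]  # class B training faces
ref = {"A": centroid(faces_a), "B": centroid(faces_b)}

def classify(face):
    return min(ref, key=lambda label: distance(face, ref[label]))

print(classify([0.7, 0.3, 0.3]))  # -> "A": closer to the class-A average

Note that nothing in the mechanism checks whether the label means
anything; the classifier will happily separate any two groups of vectors,
which is the article's complaint about dressing physiognomy up as AI.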
From pharos at gmail.com Tue May 31 21:32:01 2022
From: pharos at gmail.com (BillK)
Date: Tue, 31 May 2022 22:32:01 +0100
Subject: Re: [ExI] physiognomy by AI
In-Reply-To:
References:
Message-ID:

On Tue, 31 May 2022 at 20:22, William Flynn Wallace via extropy-chat
wrote:
>
> Troublingly, some modern AI applications are delving into physiognomy,
> a set of pseudoscientific ideas that first appeared thousands of years
> ago with the ancient Greeks. In antiquity, Pythagoras, the Greek
> mathematician, based his decisions about accepting students on whether
> they "looked" gifted. To the philosopher Aristotle, bulbous noses
> denoted an insensitive person and round faces signaled courage.
>
> Recent research has tried to show that political leaning, sexuality,
> and criminality can be inferred from pictures of people's faces.
>
> Political orientation. In a 2021 article in Nature Scientific Reports,
> Stanford researcher Michal Kosinski found that using open-source code
> and publicly available data and facial images, facial recognition
> technology can judge a person's political orientation accurately 68
> percent of the time, even when controlling for demographic factors. In
> this research, the primary algorithm learned the average face for
> conservatives and liberals and then predicted the political leanings
> of unknown faces by comparing them to the reference images. Kosinski
> wrote that his findings about AI's abilities have grave implications:
> "The privacy threats posed by facial recognition technology are, in
> many ways, unprecedented."
>
> While the questions behind this line of inquiry may not immediately
> trigger an alarm, the underlying premise still fits squarely within
> physiognomy, predicting personality traits from face features.
> (excerpt)
>
> from - geneticliteracyproject.com bill w
> _______________________________________________

(The link should be to geneticliteracyproject.org )
The complete article is at
<https://thebulletin.org/2022/05/is-your-face-gay-conservative-criminal-ai-researchers-are-asking-the-wrong-questions/>
They are complaining about the misuse of AI in trying to validate junk
science.

Also consider the amount of plastic surgery that is available today.
This makes physiognomy even less relevant.
<https://www.pulse.com.gh/lifestyle/beauty-health/why-south-korea-is-the-plastic-surgery-capital-of-the-world/j7kyx8q>
Quote:
Why South Korea is the plastic surgery capital of the world
Temi Iwalaiye May 24, 2022

In South Korea, being ugly, unconventionally attractive or even
ordinary looking is a crime no one wants to be guilty of.
About one-third of South Korean women between the ages of 19 and 29
have undergone cosmetic surgery before.
Plastic surgery is so popular in South Korea that 70% of high school
students undergo eye surgery or receive it as graduation presents from
family members.
--------------------

Some remarkable before and after photos in that article!
Hmm, maybe a trip to South Korea is recommended......

BillK

From foozler83 at gmail.com Tue May 31 21:38:13 2022
From: foozler83 at gmail.com (William Flynn Wallace)
Date: Tue, 31 May 2022 16:38:13 -0500
Subject: Re: [ExI] physiognomy by AI
In-Reply-To:
References:
Message-ID:

eye surgery? to 'fix' epicanthal folds maybe? bill w
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pharos at gmail.com Tue May 31 22:22:00 2022
From: pharos at gmail.com (BillK)
Date: Tue, 31 May 2022 23:22:00 +0100
Subject: [ExI] Sexual Assault problem in the Metaverse
Message-ID:

A Researcher Says She Was Just Sexually Assaulted In The Metaverse
Within an hour of putting on her Oculus headset, her avatar was raped
while she was in the virtual realm.
Published May 31, 2022 By Katie Hutton

<https://themindunleashed.com/2022/05/a-researcher-says-she-was-just-sexually-assaulted-in-the-metaverse.html>

Quote:
"Metaverse: another cesspool of toxic content," a new report published
by the researcher on Tuesday, details the researcher's encounter in
Meta's Horizon World.
Researchers working with SumOfUs said that they were exposed to
homophobic and racist remarks while in Horizon World.
-------------------

Looks like Lowest Common Denominator stuff at present.

BillK
Researchers working with SumOfUs said that they were exposed to homophobic and racist remarks while in Horizon World. ------------------- Looks like Lowest Common Denominator stuff at present. BillK From atymes at gmail.com Tue May 31 23:23:10 2022 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 31 May 2022 16:23:10 -0700 Subject: [ExI] Sexual Assault problem in the Metaverse In-Reply-To: References: Message-ID: I wonder why she was using a platform where avatars even had the parts to rape or be raped, if she was not there for sexual content. On Tue, May 31, 2022 at 3:24 PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > A Researcher Says She Was Just Sexually Assaulted In The Metaverse > Within an hour of putting on her Oculus headset, her avatar was raped > while she was in the virtual realm. > Published May 31, 2022 By Katie Hutton > > < > https://themindunleashed.com/2022/05/a-researcher-says-she-was-just-sexually-assaulted-in-the-metaverse.html > > > Quote: > ?Metaverse: another cesspool of toxic content,? a new report published > by the researcher Tuesday, details the researcher?s encounter in > Meta?s Horizon World. > Researchers working with SumOfUs said that they were exposed to > homophobic and racist remarks while in Horizon World. > ------------------- > > Looks like Lowest Common Denominator stuff at present. > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: