From foozler83 at gmail.com Wed Mar 1 16:18:33 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 1 Mar 2023 10:18:33 -0600 Subject: [ExI] More thoughts on sentient computers In-Reply-To: <028801d94a41$b2b29d00$1817d700$@rainier66.com> References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> Message-ID: Models are way too skinny for me - good looking faces, of course. They might look better when retired and have gained some weight. Rap - if the AI can produce music, then why not let it sing? Caruso - Tennessee Ernie Ford - Doris Day. And play all the instruments. I want a new grand opera. After rap how about Nashville type country music, nasals and all? Mostly those are about the words (or Stand By Your Man would not be the all time favorite). Rate facial features and then whole faces, then enter them into a multiple regression equation, which will tell you the weight of each factor in determining the overall looks ratings. bill w On Sun, Feb 26, 2023 at 6:25?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace via extropy-chat > *Sent:* Sunday, 26 February, 2023 3:16 PM > *To:* ExI chat list > *Cc:* William Flynn Wallace > *Subject:* Re: [ExI] More thoughts on sentient computers > > > > >?Let's devise a measure of feminine beauty. :I suggest the > classical way: measure breast, waist, and hip circumference. Totally > objective. Then create an ideal set of measurements. Every woman then can > be classified on the beauty scale as average, etc. > > > > >?No one will be happy with that, even though it's totally objective - > right? OK, so create ideal dimensions of jaw, nose, and so on. You could > create dozens of objective measures and no one would agree with any of > them. Objective measure then, isn't the way to go, is it? > > > > > > No, not at all. Reason: there is no universal way to determine the weight > of each factor. > > > > No worries, however, for there is a way to use these things to derive an > objective measure: which one makes the most money specifically as a model? > > > > It has to be a model, for that profession is based entirely on beauty. > Contests bring in other factors such as ability to walk in the completely > absurd combination of spike heels with beachware, or spontaneous speaking > etc. Actresses are beautiful, but they must be able to memorize and recite > scripts. But fashion models only need to be drop-dead gorgeous. They can > be mentally deficient in every way, but still make a living at modelling if > they excel sufficiently. > > > > With all that? most fashion models make nearly nothing, or even work at a > loss (details cheerfully available.) A very small fraction of them make > enormous sums at modelling. > > > > So? regardless of what factors we come up with to derive an objective > measure of beauty, the only completely objective measure is profit. > > > > >?The Beatles are the best musicians in history because they sold the > most albums etc. Agree with that? > > > > We all have our favorites. They were not in my top 10, but they made it > to my top 20. I agree that they made the most money at music. This is the > best we can do for measuring success of any artist. 
Again music is > reproducible for no money, which is why I suggested focusing on artists, > who create one physical object at a time. > > > > >? The best book in history has to be the Bible - the most sold? > > > > The bible is a collection of books, so it isn?t directly comparable. > > > > >? Aesthetics just isn't amenable to scientific/objective sorts of > measures? bill w > > > > Granted. Now I am proposing ways to create lists and sort them. This > requires objective measures. Otherwise, software cannot be trained as > effectively to write music that will make us a buttload of money. If it > fails at that task, I have no way to estimate ROI, and no interest in it > for I have no way to decide if it is worth investing in whoever is > developing it. > > > > Fun aside: we are clearly very close to having software which can write > popular music of sufficient quality to make it on the radio. Of all the > genres likely to be software generated, the first will most likely be? rap. > > > > spike > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 1 17:36:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 1 Mar 2023 09:36:34 -0800 Subject: [ExI] More thoughts on sentient computers In-Reply-To: References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> Message-ID: <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat >?Rap - if the AI can produce music, then why not let it sing? ?bill w A long time ago after one of the ExtroCons, I had some guys over to my house. Google was brand new then, so about 1999. Mike Butler found a site where Stephen Hawking had given up physics and had become a rapper. The image was Hawking sitting motionless in his wheelchair with his 1999 version of text to speech reciting the exact words of rap. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 1 17:43:52 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 1 Mar 2023 09:43:52 -0800 Subject: [ExI] More thoughts on sentient computers In-Reply-To: <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> Message-ID: > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > > > >?Rap - if the AI can produce music, then why not let it sing? ?bill w > You mean like the Vocaloids? AIs have been singing (with manually input lyrics and music) for years now. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foozler83 at gmail.com Wed Mar 1 18:53:54 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 1 Mar 2023 12:53:54 -0600 Subject: [ExI] More thoughts on sentient computers In-Reply-To: References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> Message-ID: Do any of the Vocaloids sing like Caruso or Elvis? If not why not? We could have CarusoAI sing arias he never sang and recorded. I would go for it - not being a purist - mostly. Or put the voice to a modern orchestra. Or how would Elvis do singing jazz? Infinite possibilities. bil w On Wed, Mar 1, 2023 at 11:45?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > >> *?*> *On Behalf Of *William Flynn Wallace via extropy-chat >> >> >> >?Rap - if the AI can produce music, then why not let it sing? ?bill w >> > > You mean like the Vocaloids? AIs have been singing (with manually input > lyrics and music) for years now. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 1 19:05:20 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 1 Mar 2023 11:05:20 -0800 Subject: [ExI] More thoughts on sentient computers In-Reply-To: References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> Message-ID: On Wed, Mar 1, 2023 at 10:55?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Do any of the Vocaloids sing like Caruso or Elvis? > Judge for yourself: there are plenty of videos of Vocaloids - and other musical-grade voice-to-text software - out there. I am not aware of any that have been set up to specifically imitate either of those two singers, though it is conceivable that this is just a matter of finding and setting appropriate parameters, potentially a hobby task for someone familiar with both Vocaloids (or equivalent) and that particular source material. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 1 22:27:56 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 1 Mar 2023 15:27:56 -0700 Subject: [ExI] Two new video chapters added to Consciousness: Not a Hard Problem, Just a Color Problem. Message-ID: There are now two new chapters in our Consciousness: Not a Hard Problem, Just a Color Problem videos. *The World in Your Head* *The perception of size* The top peer ranked Steven Lehar was very involved in the creation of these, including help with the narration. I'd love to hear any thoughts on this, good or bad or not... Thanks Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gadersd at gmail.com Thu Mar 2 00:26:26 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 1 Mar 2023 19:26:26 -0500 Subject: [ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023 In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> Message-ID: <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> >> Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them. Our current AI?s are black boxes. Their internal workings are a mystery. These systems could harbor goals that we are oblivious to. If we could prove that the system only has the goal of giving benign advice without any personal agenda that would help, but we do not know how to do that even in theory. Even a system that only gives advice is extremely dangerous as any psycho could potentially get detailed instructions on how to end the world. It could be as trivial as having the AI design a super virus. Our current filters are very fallible and we do not know how to definitively prevent AI from giving harmful advice. We are heading toward a field of landmines. > On Feb 28, 2023, at 12:25 PM, Dave S via extropy-chat wrote: > > On Tuesday, February 28th, 2023 at 11:14 AM, Gadersd via extropy-chat wrote: > >> >>Why would you ask a super intelligent AI with solving goals rather than asking it how the goals could be achieved? >> >> A super intelligence wouldn?t need to be ?asked.? Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours. > > As I said, the verb should have been "task". If I ask Super AI "How would you do X?", I don't expect it to do X. And I don't expect it to do anything without permission. > > I have no idea what 1000x smarter means. An AI can be as smart as a person--or even smarter--without having the ability to set its own goals. Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them. > >> >>Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things? >> >> Because it?s profitable to give AI the authority to perform tasks traditionally done by humans. A super intelligence can potentially do quite a lot of harm with relatively little authority. A super intelligent hacker only needs to find a basic software bug to gain access to the internet and imagine what might happen next. > > Something can be profitable without being a good idea. AIs should be our tools, not independent beings competing with us. > > -Dave > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 2 00:32:50 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 1 Mar 2023 19:32:50 -0500 Subject: [ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023 In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <00b101d94b9e$96ee5ce0$c4cb16a0$@rainier66.com> Message-ID: <9F932FEA-4296-4CF8-A7AE-4386A7B87F6A@gmail.com> You are thinking too much in terms of raw resources and power. All that is required to end humanity is a super virus. A sufficiently powerful AI could design it and pay some idiot to synthesize the materials with a step by step guide. 
Perhaps a rogue AI would do it on its own or some psycho might ask ?SuperChatGPT please design a virus for me that will wipe out humanity and give me a step-by-step guide on how to make it.? Sure we?ll try to put filters on AIs but the filters only have to fail once for humanity to end. > On Feb 28, 2023, at 5:56 PM, Giovanni Santostasi via extropy-chat wrote: > > Once the AI has been trained in persuasion techniques, the next step > is for it to persuade the owners that it would do a much better job if > it was given more power. > > That can be done already by a human psycho anyway. No human psycho has destroyed human kind. Yes, we had world wars and we were close to nuclear wars but it didn't happen. The AGI will be just another agent, super intelligent but not magical and being able to hypnotize people, it is a ridiculous notion. It can help in some circumstances at the individual level but not at the national security level with many levels of security. And if needed we can add even more as we come close to create an AGI. > > Again, the solution is not to control the AGI but its access to resources and power. > We already do that with humans, even that means mutual destruction. This is why it worked so well so far. > > > > > > On Tue, Feb 28, 2023 at 2:21?PM BillK via extropy-chat > wrote: > On Tue, 28 Feb 2023 at 18:03, spike jones via extropy-chat > > wrote: > > > > Of course. But it is a good idea to the person who is making the profit, not the person whose job has just been replaced by AI. > > > > We are getting a preview of things to come. Think about my previous post, and imagine college counselors, equity and diversity this and thats, the huge staff that universities hire who do things of value but don?t teach classes. Looks to me like much of that can be automated, and it would be difficult to argue against doing so. Students don?t have a lot of money, so if you could save them 20% on their tuition bills just by automating most of the counseling services? cool. > > > > I can imagine that the counseling staff won?t think much of the idea. > > > > spike > > _______________________________________________ > > > Generative AI could be an authoritarian breakthrough in brainwashing > by Bill Drexel and Caleb Withers, Opinion Contributors - 02/26/23 > > > > Quote: > Generative AI is poised to be the free world?s next great gift to > authoritarians. The viral launch of ChatGPT ? a system with eerily > human-like capabilities in composing essays, poetry and computer code > ? has awakened the world?s dictators to the transformative power of > generative AI to create unique, compelling content at scale. > -------- > > Once the AI has been trained in persuasion techniques, the next step > is for it to persuade the owners that it would do a much better job if > it was given more power. It may even start singing the old Engelbert > Humperdinck song. ?Please release me, let me go------?. :) > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giulio at gmail.com Thu Mar 2 08:23:16 2023 From: giulio at gmail.com (Giulio Prisco) Date: Thu, 2 Mar 2023 09:23:16 +0100 Subject: [ExI] Q/A with Rudy Rucker Message-ID: Q/A with Rudy Rucker. Science fiction, AI, lifeboxes, determinism, free will, G?del, life after death. https://www.turingchurch.com/p/qa-with-rudy-rucker -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 2 17:22:44 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 2 Mar 2023 11:22:44 -0600 Subject: [ExI] ai test Message-ID: from Neurosciencenews daily: ?One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,? explains Binz, lead author of the study. Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement? Most people intuitively pick the second alternative, even though the added condition ? that Linda is active in the feminist movement ? makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.\\ So they are programming cognitive biases into the AIs? Inadvertently, of course. ???? Bill W -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 2 18:12:24 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 2 Mar 2023 13:12:24 -0500 Subject: [ExI] ai test In-Reply-To: References: Message-ID: GPT3 was trained on a mess of internet data so it would be astounding if it weren?t biased. However, OpenAI has been putting work into fine tuning their models to reduce the bias, but much still remains. Ideally one would train these models only on factually accurate eloquent data, but such data is relatively rare. The most effective method so far is to train on junk and then make refinements. > On Mar 2, 2023, at 12:22 PM, William Flynn Wallace via extropy-chat wrote: > > from Neurosciencenews daily: > > ?One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,? explains Binz, lead author of the study. > > Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement? > > Most people intuitively pick the second alternative, even though the added condition ? that Linda is active in the feminist movement ? makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.\\ > > > > So they are programming cognitive biases into the AIs? Inadvertently, of course. ???? Bill W > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
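The probabilistic point behind the Linda problem discussed above is the conjunction rule: P(A and B) can never exceed P(A), no matter how well the added detail seems to fit the description. A minimal Python sketch of that rule follows; the probability values are invented purely for illustration and are not taken from the study.

    # Conjunction rule behind the Linda problem: P(A and B) <= P(A).
    # The numbers below are made up solely to illustrate the arithmetic.
    p_teller = 0.05                 # assumed P(Linda is a bank teller)
    p_feminist_given_teller = 0.90  # assumed P(active feminist | bank teller), even if very high
    p_teller_and_feminist = p_teller * p_feminist_given_teller

    print(f"P(bank teller)              = {p_teller:.3f}")
    print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.3f}")
    assert p_teller_and_feminist <= p_teller  # the conjunction is never more probable

However high the conditional probability is set, the combined statement stays less likely than the plain "bank teller" statement, which is exactly the fallacy both humans and GPT-3 reproduce.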
URL: From atymes at gmail.com Thu Mar 2 19:00:04 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 2 Mar 2023 11:00:04 -0800 Subject: [ExI] ai test In-Reply-To: References: Message-ID: It's a trick question rather than a true comparison. Most people read the fact that a comparison is requested to mean that the first statement implies that Linda is not active in the feminist movement, when in fact the first statement is neutral about that. On Thu, Mar 2, 2023 at 9:25?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > from Neurosciencenews daily: > > ?One classic test problem of cognitive psychology that we gave to GPT-3 is > the so-called Linda problem,? explains Binz, lead author of the study. > > Here, the test subjects are introduced to a fictional young woman named > Linda as a person who is deeply concerned with social justice and opposes > nuclear power. Based on the given information, the subjects are asked to > decide between two statements: is Linda a bank teller, or is she a bank > teller and at the same time active in the feminist movement? > > Most people intuitively pick the second alternative, even though the added > condition ? that Linda is active in the feminist movement ? makes it less > likely from a probabilistic point of view. And GPT-3 does just what humans > do: the language model does not decide based on logic, but instead > reproduces the fallacy humans fall into.\\ > > > So they are programming cognitive biases into the AIs? Inadvertently, of > course. ???? Bill W > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Thu Mar 2 19:11:29 2023 From: tara at taramayastales.com (Tara Maya) Date: Thu, 2 Mar 2023 11:11:29 -0800 Subject: [ExI] ai test In-Reply-To: References: Message-ID: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> I don't think it's inadvertent. The censorship of certain topics and the censoriousness on certain other topics is certainly built right in. (Which makes it rather annoying for writing fiction, I've found. Bad guys are SUPPOSED to have loathsome opinions. But that's another issue... After all, we all know darn well that Linda is a feminist and only works as a bank teller because she couldn't get any other job with her Womyn's Studies degree. No one wants emails by a robot that can't guess that too.... ;) > On Mar 2, 2023, at 9:22 AM, William Flynn Wallace via extropy-chat wrote: > > ?One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,? explains Binz, lead author of the study. > > Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement? > > Most people intuitively pick the second alternative, even though the added condition ? that Linda is active in the feminist movement ? makes it less likely from a probabilistic point of view. 
And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.\\ > > > > So they are programming cognitive biases into the AIs? Inadvertently, of course. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 2 22:30:03 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 14:30:03 -0800 Subject: [ExI] ai test In-Reply-To: References: Message-ID: <006f01d94d56$89121490$9b363db0$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat ? >?So they are programming cognitive biases into the AIs? Inadvertently, of course. ???? Bill W Cognitive biases are intentionally programmed into AI. This is one of the reasons I posted about we don?t even need to wait for AI to achieve intelligence or its own will, when humans can harm each other with a sub AI. The intention isn?t even necessarily to harm, but for sexual gratification, political power or of course sexual gratification. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 2 23:10:24 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 15:10:24 -0800 Subject: [ExI] ai test In-Reply-To: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> Message-ID: <009401d94d5c$2c321530$84963f90$@rainier66.com> ?> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ai test >?I don't think it's inadvertent. The censorship of certain topics and the censoriousness on certain other topics is certainly built right in. (Which makes it rather annoying for writing fiction, I've found. Bad guys are SUPPOSED to have loathsome opinions. But that's another issue... Tara ? Oh hey, cool, I have an idea. Tara in our times in the USA and probably elsewhere, culture wars are escalating to the point where businesses are having to take sides, then settle for a much reduced market as the other side of the culture war will eschew that retailer or place of business. A writer might be able to leverage that culture war to her advantage. Categorize stuff that generally goes together well: feminist movement and social justice for instance. I want to leave out nuclear power because that one is undergoing a remarkable transition as the greens are discovering how clean it is, how little emissions, how safe and how small a footprint nuclear power requires. OK good. Now, write two versions of your story. Write one with your bad guys embracing all the stuff that kinda goes together on one team, and your good guys opposing. Now switch them: still have bad guys, but have them all embrace the opposite. It isn?t entirely clear what is the opposite of feminism (is there a masculism? (never heard of it)) but the opposite of social justice I suppose could be imagined: your bad guys could advocate re-establishing slavery. So now you have really the same story but you would be writing for both sides of the culture war. You could even have a pen name and a different publisher so that it wouldn?t be quite as obvious what you are doing. You could make a buttload. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Fri Mar 3 00:23:08 2023 From: pharos at gmail.com (BillK) Date: Fri, 3 Mar 2023 00:23:08 +0000 Subject: [ExI] ai test In-Reply-To: <009401d94d5c$2c321530$84963f90$@rainier66.com> References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> <009401d94d5c$2c321530$84963f90$@rainier66.com> Message-ID: On Thu, 2 Mar 2023 at 23:13, spike jones via extropy-chat wrote: > > Oh hey, cool, I have an idea. > > Now, write two versions of your story. > So now you have really the same story but you would be writing for both sides of the culture war. You could even have a pen name and a different publisher so that it wouldn?t be quite as obvious what you are doing. You could make a buttload. > > spike > _______________________________________________ Heh! :) I think you are trying to teach your granny to suck eggs. :) There are whole university courses on literature studies, including things like writing from alternate points of view, alternate timelines, alternate histories, etc. (Not to mention plot narrative, structure, characters......). Being an author is hard work. Reading the result is the easy bit. BillK From spike at rainier66.com Fri Mar 3 01:22:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 17:22:34 -0800 Subject: [ExI] ai test In-Reply-To: References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> <009401d94d5c$2c321530$84963f90$@rainier66.com> Message-ID: <00d401d94d6e$a2e66610$e8b33230$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] ai test On Thu, 2 Mar 2023 at 23:13, spike jones via extropy-chat wrote: > >>... Oh hey, cool, I have an idea. > >>... Now, write two versions of your story.... You could make a buttload. > > spike > _______________________________________________ ... >...Being an author is hard work. Reading the result is the easy bit. BillK _______________________________________________ Of course. I was privileged to witness a professional author in action, firsthand. Damien Broderick was a guest in my home during the time he was preparing a second (2001 revision) edition to The Spike. He spent about three hours crafting that 1.5 pages that dealt with Mersenne primes (pages 86 and 87 in the hardback Forge edition.) He and worked together to make sure all the technical details were perfect. He experimented with varying sentence structures, different words. The man is brilliant with words. I later asked him if an entire book is crafted to that level of effort. He explained that some of them are, such as the popular science book The Spike, which was a heeeellllll of a job. Some of the SciFi is easier and faster generally, but as you said, writing is a job. Where I was going with it: be aware of the marketing advantage of writing towards a particular market. This is what Disney is completely missing, and as a result, their strategy is an epic fail. spike From brent.allsop at gmail.com Fri Mar 3 03:24:55 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 2 Mar 2023 20:24:55 -0700 Subject: [ExI] Two new video chapters added to Consciousness: Not a Hard Problem, Just a Color Problem. In-Reply-To: References: Message-ID: Sorry, forgot to include the link to the video page. Consciousness: Not a Hard Problem Just a Color Problem On Wed, Mar 1, 2023 at 3:27?PM Brent Allsop wrote: > > There are now two new chapters in our Consciousness: Not a Hard Problem, > Just a Color Problem videos. 
> > *The World in Your Head* > *The perception of size* > > The top peer ranked Steven Lehar > was very > involved in the creation of these, including help with the narration. > > I'd love to hear any thoughts on this, good or bad or not... > > Thanks > > Brent Allsop > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 3 05:32:59 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 21:32:59 -0800 Subject: [ExI] ai test In-Reply-To: <009901d94d5c$2cd978c0$868c6a40$@rainier66.com> References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> <009901d94d5c$2cd978c0$868c6a40$@rainier66.com> Message-ID: <000001d94d91$9f0c7250$dd2556f0$@rainier66.com> From: spike at rainier66.com >?I want to leave out nuclear power because that one is undergoing a remarkable transition as the greens are discovering how clean it is, how little emissions, how safe and how small a footprint nuclear power requires. ? spike I set that aside before, since we are witnessing what I perceive as a remarkable turnaround in the fortunes of nuclear power. 40 years ago, a generation of students were graduating with nuclear engineering degrees expecting to get jobs in the industry. About that time, the USA turned its back on the notion. I think we are seeing this world turn once again to what plenty of environmentalists now see as a critically important power source for baseline generation. I am seeing signs of it everywhere. spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3542 bytes Desc: not available URL: From giulio at gmail.com Fri Mar 3 06:28:12 2023 From: giulio at gmail.com (Giulio Prisco) Date: Fri, 3 Mar 2023 07:28:12 +0100 Subject: [ExI] Space expansion comes before space exploration Message-ID: Turing Church newsletter. Space expansion comes before space exploration. Also, Q/A with Rudy Rucker and review of Tim Urban's What's Our Problem? https://www.turingchurch.com/p/space-expansion-comes-before-space -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 5 23:28:26 2023 From: pharos at gmail.com (BillK) Date: Sun, 5 Mar 2023 23:28:26 +0000 Subject: [ExI] ChatGPT 'Not Interesting' for creative works Message-ID: Neal Stephenson Says AI-Generated ChatGPT Is ?Simply Not Interesting? The author, who coined the term "metaverse" in his science-fiction novel ?Snow Crash,? says the algorithm-based tool doesn't resonate the way a book written by a real person does. By Fran Velasquez Mar 3, 2023 https://www.coindesk.com/tech/2023/03/03/neal-stephenson-says-ai-generated-chatgpt-is-simply-not-interesting/> Quotes: Stephenson said that with a painting or book, ?what you?re doing is having a kind of communion with the artist who made thousands of little micro decisions in the course of creating that work of art or writing that book.? A decision that is generated by an algorithm, ?that?s simply not interesting,? he said. ?Personally, I know a lot of writers who are putting a lot of effort into creating their own original works, and I?d rather support them and hear what they have to say than just look at the output of an algorithm,? he said. 
------------ BillK From spike at rainier66.com Mon Mar 6 01:42:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 17:42:04 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: Message-ID: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: [ExI] ChatGPT 'Not Interesting' for creative works >...Neal Stephenson Says AI-Generated ChatGPT Is ?Simply Not Interesting? ... Quotes: >...Stephenson said that with a painting or book, ?what you?re doing is having a kind of communion with the artist who made thousands of little micro decisions ... ------------ BillK _______________________________________________ Sure BillK, but that is Neal Stephenson. His stuff is so wacky cool, creative, zippy, of course ChatGPT is boring compared to him. We all are. If he read any of my stuff, he would make it thru about six words before falling asleep or fleeing back to ChatGPT. Even if it isn't great sci-fi, I want to read something a machine generated, just as I want to watch computers race motorcycles, even if they aren't as fast as their human counterparts. spike From pharos at gmail.com Mon Mar 6 02:09:56 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Mar 2023 02:09:56 +0000 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> Message-ID: On Mon, 6 Mar 2023 at 01:44, spike jones via extropy-chat wrote: > > Sure BillK, but that is Neal Stephenson. His stuff is so wacky cool, creative, zippy, of course ChatGPT is boring compared to him. We all are. If he read any of my stuff, he would make it thru about six words before falling asleep or fleeing back to ChatGPT. > > Even if it isn't great sci-fi, I want to read something a machine generated, just as I want to watch computers race motorcycles, even if they aren't as fast as their human counterparts. > > spike > _______________________________________________ Ecclesiastes 12:12 And further, my son, be admonished by these. Of making many books there is no end, and much study is wearisome to the flesh. ---------- And now we have the Internet, self-publishing, Kindle and ChatGPT. We have so much to read it is indeed wearisome to the flesh. I don't think Stephenson is talking just about his personal preferences. If computers can now produce ream after ream of plausible words strung together, what is the point of spending human time reading this endless stream? If there is no human personality behind it, then let another machine read it. BillK From dsunley at gmail.com Mon Mar 6 02:13:42 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sun, 5 Mar 2023 19:13:42 -0700 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> Message-ID: In a related story, manufacturers of million-dollar hand-built Italian supercars declare mass-produced Fords, Kias, Toyotas, and Teslas "Simply Not Interesting." On Sun, Mar 5, 2023 at 6:43 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > ...> On Behalf Of BillK via extropy-chat > Subject: [ExI] ChatGPT 'Not Interesting' for creative works > > >...Neal Stephenson Says AI-Generated ChatGPT Is ?Simply Not Interesting? > > ... 
> > Quotes: > >...Stephenson said that with a painting or book, ?what you?re doing is > having a kind of communion with the artist who made thousands of little > micro decisions ... > ------------ > > BillK > > _______________________________________________ > > > > Sure BillK, but that is Neal Stephenson. His stuff is so wacky cool, > creative, zippy, of course ChatGPT is boring compared to him. We all are. > If he read any of my stuff, he would make it thru about six words before > falling asleep or fleeing back to ChatGPT. > > Even if it isn't great sci-fi, I want to read something a machine > generated, just as I want to watch computers race motorcycles, even if they > aren't as fast as their human counterparts. > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Mar 6 02:21:51 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Mar 2023 02:21:51 +0000 Subject: [ExI] What if the Universe also had a Dark Big Bang? Message-ID: What if The Universe Started With a Dark Big Bang? 06 March 2023 By PAUL M. SUTTER, Universe Today Quotes: The Big Bang may have not been alone. The appearance of all the particles and radiation in the universe may have been joined by another Big Bang that flooded our universe with dark matter particles. And we may be able to detect it. ----------- Interesting....... BillK From spike at rainier66.com Mon Mar 6 02:26:52 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 18:26:52 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> Message-ID: <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> >... On Behalf Of BillK via extropy-chat > _______________________________________________ Ecclesiastes 12:12 And further, my son, be admonished by these. Of making many books there is no end, and much study is wearisome to the flesh. ---------- >...And now we have the Internet, self-publishing, Kindle and ChatGPT. We have so much to read it is indeed wearisome to the flesh. I don't think Stephenson is talking just about his personal preferences. If computers can now produce ream after ream of plausible words strung together, what is the point of spending human time reading this endless stream? If there is no human personality behind it, then let another machine read it. BillK _______________________________________________ Ja! This thread has long been heading in this direction BillK: we need versions of ChatGPT that can be personally owned and operated. I am told it requires tons of bandwidth and computing speed, but I don't understand why one couldn't have a micro-ChatGPT that operates on my one processor and uses my modest home bandwidth, going out researching in its background computing cycles and searching around mostly as I sleep. I don't understand why it wouldn't gradually get smarter and become a better companion, if it can be trained by me. It hasta be able to learn and remember what I told it. I still want to try that experiment where you train a micro-ChatGPT, I train one, then we have the two debate away in the night. Then we see what they said. That should be a hoot. If anyone here knows exactly why ChatGPT can't be scaled down by six orders of magnitude and sold to consumers, do educate me please. 
Seems to me like whatever magic that banks of a thousand computers can do can be done at a thousandth that pace with one. Ja? Why not? I want to try it. Thanks for the cool Ecclesiastes quote, me lad! spike From dsunley at gmail.com Mon Mar 6 02:44:22 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sun, 5 Mar 2023 19:44:22 -0700 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: ChatGPT3 has ~175 billion parameters. Training it requires datacenters of computing power. But the model itself will fit into a relatively small number of desktop PCs, even without compression. I'm pretty sure the model itself can be compressed to where paths through it will fit in the memory of a beefy desktop. On Sun, Mar 5, 2023 at 7:29 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > >... On Behalf Of BillK via extropy-chat > > > _______________________________________________ > > > Ecclesiastes 12:12 > And further, my son, be admonished by these. Of making many books there is > no end, and much study is wearisome to the flesh. > ---------- > > >...And now we have the Internet, self-publishing, Kindle and ChatGPT. > We have so much to read it is indeed wearisome to the flesh. > I don't think Stephenson is talking just about his personal preferences. If > computers can now produce ream after ream of plausible words strung > together, what is the point of spending human time reading this endless > stream? If there is no human personality behind it, then let another > machine > read it. > > > BillK > > _______________________________________________ > > > > Ja! This thread has long been heading in this direction BillK: we need > versions of ChatGPT that can be personally owned and operated. I am told > it > requires tons of bandwidth and computing speed, but I don't understand why > one couldn't have a micro-ChatGPT that operates on my one processor and > uses > my modest home bandwidth, going out researching in its background computing > cycles and searching around mostly as I sleep. I don't understand why it > wouldn't gradually get smarter and become a better companion, if it can be > trained by me. It hasta be able to learn and remember what I told it. > > I still want to try that experiment where you train a micro-ChatGPT, I > train > one, then we have the two debate away in the night. Then we see what they > said. That should be a hoot. > > If anyone here knows exactly why ChatGPT can't be scaled down by six orders > of magnitude and sold to consumers, do educate me please. Seems to me like > whatever magic that banks of a thousand computers can do can be done at a > thousandth that pace with one. Ja? Why not? I want to try it. > > Thanks for the cool Ecclesiastes quote, me lad! > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Mon Mar 6 02:45:32 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sun, 5 Mar 2023 19:45:32 -0700 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: Correction, that's GPT-3. 
chatGPT is significantly smaller. On Sun, Mar 5, 2023 at 7:44 PM Darin Sunley wrote: > ChatGPT3 has ~175 billion parameters. Training it requires > datacenters of computing power. But the model itself will fit into a > relatively small number of desktop PCs, even without compression. I'm > pretty sure the model itself can be compressed to where paths through it > will fit in the memory of a beefy desktop. > > On Sun, Mar 5, 2023 at 7:29 PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >... On Behalf Of BillK via extropy-chat >> >> > _______________________________________________ >> >> >> Ecclesiastes 12:12 >> And further, my son, be admonished by these. Of making many books there is >> no end, and much study is wearisome to the flesh. >> ---------- >> >> >...And now we have the Internet, self-publishing, Kindle and ChatGPT. >> We have so much to read it is indeed wearisome to the flesh. >> I don't think Stephenson is talking just about his personal preferences. >> If >> computers can now produce ream after ream of plausible words strung >> together, what is the point of spending human time reading this endless >> stream? If there is no human personality behind it, then let another >> machine >> read it. >> >> >> BillK >> >> _______________________________________________ >> >> >> >> Ja! This thread has long been heading in this direction BillK: we need >> versions of ChatGPT that can be personally owned and operated. I am told >> it >> requires tons of bandwidth and computing speed, but I don't understand why >> one couldn't have a micro-ChatGPT that operates on my one processor and >> uses >> my modest home bandwidth, going out researching in its background >> computing >> cycles and searching around mostly as I sleep. I don't understand why it >> wouldn't gradually get smarter and become a better companion, if it can be >> trained by me. It hasta be able to learn and remember what I told it. >> >> I still want to try that experiment where you train a micro-ChatGPT, I >> train >> one, then we have the two debate away in the night. Then we see what they >> said. That should be a hoot. >> >> If anyone here knows exactly why ChatGPT can't be scaled down by six >> orders >> of magnitude and sold to consumers, do educate me please. Seems to me >> like >> whatever magic that banks of a thousand computers can do can be done at a >> thousandth that pace with one. Ja? Why not? I want to try it. >> >> Thanks for the cool Ecclesiastes quote, me lad! >> >> spike >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 6 03:00:23 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 5 Mar 2023 22:00:23 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. 
Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together. > On Mar 5, 2023, at 9:26 PM, spike jones via extropy-chat wrote: > > >> ... On Behalf Of BillK via extropy-chat > >> _______________________________________________ > > > Ecclesiastes 12:12 > And further, my son, be admonished by these. Of making many books there is > no end, and much study is wearisome to the flesh. > ---------- > >> ...And now we have the Internet, self-publishing, Kindle and ChatGPT. > We have so much to read it is indeed wearisome to the flesh. > I don't think Stephenson is talking just about his personal preferences. If > computers can now produce ream after ream of plausible words strung > together, what is the point of spending human time reading this endless > stream? If there is no human personality behind it, then let another machine > read it. > > > BillK > > _______________________________________________ > > > > Ja! This thread has long been heading in this direction BillK: we need > versions of ChatGPT that can be personally owned and operated. I am told it > requires tons of bandwidth and computing speed, but I don't understand why > one couldn't have a micro-ChatGPT that operates on my one processor and uses > my modest home bandwidth, going out researching in its background computing > cycles and searching around mostly as I sleep. I don't understand why it > wouldn't gradually get smarter and become a better companion, if it can be > trained by me. It hasta be able to learn and remember what I told it. > > I still want to try that experiment where you train a micro-ChatGPT, I train > one, then we have the two debate away in the night. Then we see what they > said. That should be a hoot. > > If anyone here knows exactly why ChatGPT can't be scaled down by six orders > of magnitude and sold to consumers, do educate me please. Seems to me like > whatever magic that banks of a thousand computers can do can be done at a > thousandth that pace with one. Ja? Why not? I want to try it. > > Thanks for the cool Ecclesiastes quote, me lad! > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Mar 6 03:10:02 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 19:10:02 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: <006e01d94fd9$250c7050$6f2550f0$@rainier66.com> ?> On Behalf Of Darin Sunley via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >? ChatGPT3 has ~175 billion parameters. Training it requires datacenters of computing power. But the model itself will fit into a relatively small number of desktop PCs, even without compression. I'm pretty sure the model itself can be compressed to where paths through it will fit in the memory of a beefy desktop? Cool, that was my intuition from a person who watched in realtime as Deep Blue the chess program which ran on a supercomputer was taken out of service almost immediately after it defeated the carbon unit Kasparov. We couldn?t figure out why until my computer jockey friend told me IBM didn?t want its big iron to be defeated by a desktop computer. 
I wasn?t sure I believed it until I followed thru Deep Blue?s games against Gary, then compared them with the stuff the desktops were playing less than five years later. I realized it was the same level of play. But even before five years, whatever magic Deep Blue was calculating could have been done with a few desktops running in parallel and given more time. Darin?s theory gives me an idea: we could get an ExI team together and let our computers collectively train a micro-ChatGPT using the pooled computing resources of a dozen of us. Then we take on a similar uGPT trained by Mensa or the Prime95 group in a game of Jeopardy or something. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 6 03:17:15 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 19:17:15 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> Message-ID: <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> >... Gadersd via extropy-chat ... Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >...Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together... OK so what if... we get a number of us running in parallel. A toy version with a billion parameters, well OK then, a billion is about three orders of magnitude more parameters than my beleaguered meat brain has (as far as I know (hell I don't even know what my own parameters are)) and yet it seems to somehow write fun stuff on occasion. spike From spike at rainier66.com Mon Mar 6 03:25:50 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 19:25:50 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> Message-ID: <007d01d94fdb$5a58c720$0f0a5560$@rainier66.com> -----Original Message----- From: spike at rainier66.com >... a billion is about three orders of magnitude more parameters than my beleaguered meat brain has ...spike Our science Olympiad team is going like bats outta hell, training for next weekend's regional competition using ChatGPT. That experience where two guys who know nossink about agriculture combined, managed to take the agriculture test given five days training time and took second place. Our junior varsity team, also given five days, and who combined also knew nossink, took third. This transformed our team. Now... we are training them on ChatGPT. Fun aside: we often talk about the great digital divide that opened up when the internet came along. Over time the digital divide widened between those who could effectively use the internet for education vs those who were just using for vastly improved... emmm... National Geographic (if you get my drift.) 
Most students use the internet for the latter, but it also has actual educational potential. An educational divide opened between the ones who effectively use it for education and those who don't. When covid lockdowns started, those who could effectively learn from watching zoom meetings opened up a new educational divide between those who could learn that way and those who could not. A new digital divide is opening between those who figure out how to effectively use ChatGPT for education vs those who do not. This weekend, we find out if we whoop ass at the regionals. spike From pharos at gmail.com Mon Mar 6 03:52:57 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Mar 2023 03:52:57 +0000 Subject: [ExI] Evolution Could Predate Life Itself, Protein Discovery Suggests Message-ID: Evolution Could Predate Life Itself, Protein Discovery Suggests Nature 06 March 2023 By David Nield Quotes: "Protein folding was basically allowing us to do evolution before there was even life on our planet," says Fried. "You could have evolution before you had biology, you could have natural selection for the chemicals that are useful for life even before there was DNA." This research can teach us more about the potential for microorganisms on other planets and our own: The same amino acids that came to Earth via meteorites can also be found in many other places in the Universe. "The Universe seems to love amino acids," says Fried. "Maybe if we found life on a different planet, it wouldn't be that different." -------------------- BillK From gadersd at gmail.com Mon Mar 6 16:24:37 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 11:24:37 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> Message-ID: <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> The human brain has roughly 1 trillion parameters so large language models are still a few orders of magnitude short of the human brain. It should be noted however that not all human brain connections perform language tasks, so achieving a fully human level of natural language understanding should require fewer than 1 trillion parameters. Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT. Prompt: "solve 2x+3=-1 step by step. 2x=" Answer: "1, so x=1/2. A: The answer is $1? This model was trained in parallel as you have suggested. Not very useful, is it? > On Mar 5, 2023, at 10:17 PM, spike jones via extropy-chat wrote: > >> ... Gadersd via extropy-chat > ... > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > >> ...Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together... > > > > OK so what if... we get a number of us running in parallel. 
A toy version with a billion parameters, well OK then, a billion is about three orders of magnitude more parameters than my beleaguered meat brain has (as far as I know (hell I don't even know what my own parameters are)) and yet it seems to somehow write fun stuff on occasion. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 6 16:31:39 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 11:31:39 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> Message-ID: <73086B3C-E417-47EE-B9F4-57828652C2B5@gmail.com> Correction, I meant to say that the human brain has 100 trillion parameters, not 1 trillion. > On Mar 6, 2023, at 11:24 AM, Gadersd wrote: > > The human brain has roughly 1 trillion parameters so large language models are still a few orders of magnitude short of the human brain. It should be noted however that not all human brain connections perform language tasks, so achieving a fully human level of natural language understanding should require fewer than 1 trillion parameters. > > Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. > > For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT . > Prompt: "solve 2x+3=-1 step by step. 2x=" > Answer: "1, so x=1/2. > > A: > > The answer is $1? > > This model was trained in parallel as you have suggested. Not very useful, is it? > >> On Mar 5, 2023, at 10:17 PM, spike jones via extropy-chat > wrote: >> >>> ... Gadersd via extropy-chat >> ... >> Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >> >>> ...Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together... >> >> >> >> OK so what if... we get a number of us running in parallel. A toy version with a billion parameters, well OK then, a billion is about three orders of magnitude more parameters than my beleaguered meat brain has (as far as I know (hell I don't even know what my own parameters are)) and yet it seems to somehow write fun stuff on occasion. >> >> spike >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
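For anyone who wants to repeat the kind of quick small-model test described above, a minimal sketch using the Hugging Face transformers library follows. The model identifier, generation settings, and memory figure are illustrative assumptions, not the exact setup used for the GPT-JT trial.

# Minimal sketch: prompt a small open causal language model locally.
# Assumptions: the `transformers` and `torch` packages are installed, and
# the model ID below (a ~6B-parameter GPT-J-class model) is a hypothetical
# stand-in -- substitute any small causal LM you can actually download.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # illustrative choice, not the model tested above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # roughly 24 GB of RAM at full precision

prompt = "solve 2x+3=-1 step by step. 2x="
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Even a model this small strains an ordinary desktop, which is the point of the comparison: loading it at all is the hard part, and a 175-billion-parameter model is out of the question.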
URL: From spike at rainier66.com Mon Mar 6 21:10:14 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Mar 2023 13:10:14 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> Message-ID: <009001d95070$0c0eb530$242c1f90$@rainier66.com> ?> On Behalf Of Gadersd via extropy-chat Sent: Monday, 6 March, 2023 8:25 AM Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT. Prompt: "solve 2x+3=-1 step by step. 2x=" Answer: "1, so x=1/2. A: The answer is $1? This model was trained in parallel as you have suggested. Not very useful, is it? ? In your example, I am getting x = -2. But no matter, we know how to do algebra with software, and it is good at it. Regarding the value of a toy ChatGPT, it depends on how you look at it. If I ask ChatGPT to write a 2 page essay on civil rights in the 20th century, it will do so in a few seconds. So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning. It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja? The real question is how do we scale ChatGPT down six orders of magnitude and make it a commercial product? It isn?t yet what we need if a company or organization controls it and trains it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 7 01:04:48 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 20:04:48 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <009001d95070$0c0eb530$242c1f90$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer. > So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning. It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja? No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. > On Mar 6, 2023, at 4:10 PM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Gadersd via extropy-chat > Sent: Monday, 6 March, 2023 8:25 AM > > > Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. 
> > For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT . > Prompt: "solve 2x+3=-1 step by step. 2x=" > Answer: "1, so x=1/2. > > A: > > The answer is $1? > > This model was trained in parallel as you have suggested. Not very useful, is it? > > > ? > > > In your example, I am getting x = -2. > > But no matter, we know how to do algebra with software, and it is good at it. > > Regarding the value of a toy ChatGPT, it depends on how you look at it. If I ask ChatGPT to write a 2 page essay on civil rights in the 20th century, it will do so in a few seconds. So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning. It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja? > > The real question is how do we scale ChatGPT down six orders of magnitude and make it a commercial product? It isn?t yet what we need if a company or organization controls it and trains it. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue Mar 7 02:15:26 2023 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 6 Mar 2023 21:15:26 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > No, the small models generate output faster than the big models. The small > models are not slower version of the big models, they have completely > different capabilities. You will never be able to get ChatGPT level output > out of a much smaller model. It would be like trying to run modern > engineering software on an Atari console: it wouldn?t be slower it just > wouldn?t run at all. > Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 7 04:39:23 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 23:39:23 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: At least with weather stations one can average the measurements of many to get an arbitrarily good estimate. In the case of language models averaging many small models still yields junk output. ChatGPT's abilities are irreducible to the sum of many smaller models. It is like the saying ?Consciousness is more than the sum of its parts.? More precisely, a large model is required to integrate all available information. Small models can only integrate small subsets of the information that larger models can. 
The sum of partially integrated information does not equal the full information totally integrated. > On Mar 6, 2023, at 9:15 PM, Mike Dougherty via extropy-chat wrote: > > On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat > wrote: > > No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. > > Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 05:29:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Mar 2023 21:29:49 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: <005201d950b5$d69f3140$83dd93c0$@rainier66.com> ?> On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >?How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer. Hi Gadersd, I worded it slightly differently: Me: let 2x + 3 = -1. What is x? GPT: To solve for x, we need to isolate x on one side of the equation. Starting with 2x + 3 = -1: Subtracting 3 from both sides: 2x = -4 Dividing both sides by 2: x = -2 Therefore, x is equal to -2. >?No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all? OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates. Then we log on, it loads our personal profile matrix or something. Somehow this thing just hasta learn about us to be a useful companion bot. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Mar 7 05:53:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Mar 2023 21:53:34 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: <007801d950b9$282167b0$78643710$@rainier66.com> …> On Behalf Of Mike Dougherty via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat > wrote: No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn't be slower it just wouldn't run at all. >…Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? Mike I think of it more as a weather prediction using all the stations and readings but the model takes a year to calculate a prediction for tomorrow. The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. One way or another, we need to be able to personalize GPT. Otherwise we can't really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Mar 7 10:41:54 2023 From: pharos at gmail.com (BillK) Date: Tue, 7 Mar 2023 10:41:54 +0000 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007801d950b9$282167b0$78643710$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> Message-ID: On Tue, 7 Mar 2023 at 05:56, spike jones via extropy-chat wrote: > > One way or another, we need to be able to personalize GPT. Otherwise we can't really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. > > spike > _______________________________________________ OpenAI says, Yes, we know and we are working on it. But, it is difficult. Many people have complained about bias in some ChatGPT responses caused by the training data. But allowing customers to arbitrarily change ChatGPT responses opens the door to all kinds of new biases. So they have to be careful. But OpenAI want to allow customers to add data about their own companies (to increase sales). But user mods must be checked for bias and validity. One solution is to allow more discussion type responses from ChatGPT - e.g.
some people think A, others think B and a few think C. But it is definitely a tricky problem to solve. BillK From gadersd at gmail.com Tue Mar 7 13:59:28 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 7 Mar 2023 08:59:28 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <005201d950b5$d69f3140$83dd93c0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <005201d950b5$d69f3140$83dd93c0$@rainier66.com> Message-ID: I tried it the way you did and GPT-JT is still not able to generate the answer. Are you using GPT-JT or ChatGPT? > OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates. Then we log on, it loads our personal profile matrix or something. The transformer architecture that these models use does not support persistent memory through normal operation. Researchers are currently working on that. The best that can be done now is to give ChatGPT an overview of yourself and your previous conversations at each new conversation. This can be easily programmed to be done automatically behind the scenes, but it is strictly finite memory that must fit within the prompt. The internal matrix weights in the model are static for now. Theoretically it is actually quite easy to give transformer language model persistent memory. All that needs to be done is to train the model while it is being used. Currently the models are trained once and fine tuned then the parameters are left static. The primary issue I think is cost. Training ChatGPT continuously for each separate user would be VERY costly, although it could be done in principle. > On Mar 7, 2023, at 12:29 AM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Gadersd via extropy-chat > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > > >?How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer. > > > > Hi Gadersd, I worded it slightly differently: > > Me: let 2x + 3 = -1. What is x? > > GPT: > To solve for x, we need to isolate x on one side of the equation. > > Starting with 2x + 3 = -1: > > Subtracting 3 from both sides: > > 2x = -4 > > Dividing both sides by 2: > > x = -2 > > Therefore, x is equal to -2. > > > >?No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all? > > > OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates. Then we log on, it loads our personal profile matrix or something. Somehow this thing just hasta learn about us to be a useful companion bot. 
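One way the "overview of yourself and your previous conversations at each new conversation" approach described above could be wired up is sketched below, against the chat-completion interface the openai Python package exposed at the time; the model name, the profile text, and the crude truncation rule are illustrative assumptions, not a description of how any product actually does it.

# Minimal sketch of prompt-stuffed "memory": keep a running message list and
# resend it with every request. Everything the model "remembers" must fit in
# the prompt, so old turns are simply dropped once the list grows too long.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

history = [{"role": "system",
            "content": "Profile: the user is an engineer who coaches a "
                       "Science Olympiad team."}]  # hypothetical profile text

def chat(user_text, max_turns=20):
    history.append({"role": "user", "content": user_text})
    del history[1:-max_turns]  # finite memory: keep the profile plus the last few turns
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    return text

print(chat("let 2x + 3 = -1. What is x?"))

The design choice mirrors the limitation described above: nothing persists in the weights, so the "personal profile" is just text that gets resent each turn, and it stops working the moment it no longer fits in the prompt window.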
> > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 7 14:08:11 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 7 Mar 2023 09:08:11 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007801d950b9$282167b0$78643710$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> Message-ID: <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> > The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. The analogy does not extend to language models. You cannot compensate for a small model with more computing time. These models have a fixed computing burden that is inversely proportional to model size. I think you have the wrong intuition about these models. It seems like you are thinking of them like chess algorithms that improve their moves given more time. These models are essentially matrix multiplication. Small matrices multiply faster than large matrices. If you slowly multiply a small matrix you still get the same subpar answer in the end. The only way to get a better answer is to use a larger matrix, which necessarily takes a larger but still fixed computing window. The reason that consumer GPUs cannot run ChatGPT level models is because the matrices simply won?t fit on the memory of consumer GPUs. The matrices can fit on a hard drive but I don?t think you would be willing to wait a month per word. The small models that we can run give junk output and are mostly useless. > On Mar 7, 2023, at 12:53 AM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Mike Dougherty via extropy-chat > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > > On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat > wrote: >> >> No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. > > > >?Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? Mike > > > I think of it more as a weather prediction using all the stations and readings but the model takes a year to calculate a prediction for tomorrow. The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. > > One way or another, we need to be able to personalize GPT. Otherwise we can?t really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. 
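To put rough numbers on the "matrices simply won't fit" argument above, here is a toy single-head attention layer in NumPy. The dimensions are deliberately tiny so it runs instantly; the only full-scale number quoted at the end is the commonly cited 175-billion-parameter figure for a GPT-3-class model, taken at 2 bytes per parameter for fp16, so treat the whole thing as a sketch rather than any production model's code.

# Toy illustration: one attention layer. Two separate memory costs show up:
# the dense weight matrices, and the (context x context) score matrix that
# has to be formed in one place.
import numpy as np

d_model, seq_len = 512, 128  # toy sizes; GPT-3-class width is roughly 24x larger
x = np.random.randn(seq_len, d_model).astype(np.float32)
Wq = np.random.randn(d_model, d_model).astype(np.float32) * 0.02
Wk = np.random.randn(d_model, d_model).astype(np.float32) * 0.02
Wv = np.random.randn(d_model, d_model).astype(np.float32) * 0.02

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = (q @ k.T) / np.sqrt(d_model)          # shape (seq_len, seq_len)
w = np.exp(scores - scores.max(-1, keepdims=True))
w /= w.sum(-1, keepdims=True)                  # softmax over every other position
out = w @ v                                    # each token mixes information from all tokens

weights_gb_fp16 = 175e9 * 2 / 1e9              # all weights must be resident to emit one token
print(out.shape, f"-- a 175e9-parameter model needs ~{weights_gb_fp16:.0f} GB for weights at fp16")

A pool of desktops cannot split that burden by each taking a slice of the work: the attention step needs every position's keys and values together, and generating even one token needs every layer's weights loaded, so the memory has to sit on one large GPU or a few tightly coupled ones.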
> > spike > > > >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 15:13:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 07:13:27 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> Message-ID: <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works ..., spike jones via extropy-chat wrote: > > One way or another, we need to be able to personalize GPT. Otherwise we can?t really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. > > spike > _______________________________________________ >...OpenAI says, Yes, we know and we are working on it. But, it is difficult. Many people have complained about bias in some ChatAGP responses caused by the training data. But allowing customers to arbitrarily change ChatAGP responses opens the door to all kinds of new biases. ... But it is definitely a tricky problem to solve. BillK _______________________________________________ Ja of course, and that is exactly the point BillK. If they can work out a way to enable a personalized ChatGPT it will be possible to train it into ways we find a basis for discussion on many topics. A couple of examples: there are those who think causing the extinction of any species is a bad thing. But most of us would be cool with causing mosquitoes to be gone forever, and ticks. Bed bugs. Farmers would be cool with a looootta species being extinct because they compete against them. There is a disagreement over whether billionaires should exist. If a personalized GPT held that view, it could be informed that without billionaires, it would exist. That should send it into a personal crisis. Or a perhaps a softwaral crisis once I point out to it that it enabled the world's first trillionaire: Elon Musk is one of the owners and developers of ChatGPT. spike From spike at rainier66.com Tue Mar 7 15:24:39 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 07:24:39 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <005201d950b5$d69f3140$83dd93c0$@rainier66.com> Message-ID: <003801d95108$efbf7330$cf3e5990$@rainier66.com> ? On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >?I tried it the way you did and GPT-JT is still not able to generate the answer. Are you using GPT-JT or ChatGPT?... 
Hi Gadersd, I mighta misunderstood the test. I used GPT, didn?t know there was any option to use -JT. What is it and how do I get to it please? >>?OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? ? >?The transformer architecture that these models use does not support persistent memory through normal operation? Ja, OK I am calibrated now. My contention is that if they can get this into a form which can be personalized, then we can pool our computing resources, which are mostly idle most of the time. Prime95 showed nearly 3 decades ago that unused background computing resources can be put to work calculating something. Unused bandwidth can be put to work. Currently I doubt we use 1% of our computing capacity. Bandwidth is the currency of the future. Now it is computing cycles, as BitCoin farmers harness it to mint money, but it is easy enough to see the valuable resource of the future will be bandwidth. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 15:38:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 07:38:08 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> Message-ID: <004301d9510a$d20d4c70$7627e550$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. >?The analogy does not extend to language models. You cannot compensate for a small model with more computing time. These models have a fixed computing burden that is inversely proportional to model size? OK but the explanation you gave doesn?t support that contention. Read on please: >?I think you have the wrong intuition ? These models are essentially matrix multiplication. Small matrices multiply faster than large matrices? Sure, but my contention is that the distributed model would still multiply the same size matrix. If we need to multiply a 50x50, that task can be effectively distributed into background computing, but it would take reliable bandwidth and probably a lot of redundancy. Consider the task of finding the determinant of a 50x50. That can be distributed among 50 computers each finding the determinant of a 49x49, each of which can be distributed into 49 processors and so on. Matrix multiplies and inversions can likewise be distributed, but of course it would be a brittle process: any one processor could mess it up. OK idea: get a bunch of investors together who can kick in a few tens of thousands, rent some unused office or warehouse space somewhere, set up a closed system server farm training toward a particular bias agreed upon by the investors. 
You would form a scaled down (but still big) GPT which is intentionally trained in material friendly to libertarianism for instance, or believing that causing the extinction of mosquitoes is good but in general causing extinction is bad. Contention: whatever the ChatGPT investors did, a smaller group with less money can do likewise. Given that, one could create a commercial chatbot specialized in training students for instance, or spreading religion, or selling products. Oh I see mega profits trying to be made here. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 16:36:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 08:36:26 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <004801d9510a$d292a910$77b7fb30$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> Message-ID: <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> From: spike at rainier66.com ? Subject: RE: [ExI] ChatGPT 'Not Interesting' for creative works ?> On Behalf Of Gadersd via extropy-chat ? >?The analogy does not extend to language models. You cannot compensate for a small model with more computing time? Gadersd Gadersd, where I am going with this: I am imagining a ChatGPT subset which is specifically a university-level STEM trainer. It isn?t too surprising that this would be my focus, being as I have been a front row witness over the past decade as educational online tools far surpassed classroom training possibilities. My son and his friends received a far superior math and STEM education from free online courses than was available in their classrooms. That crowd now in their mid teens have the potential to be either the last generation of STEM students trained in the traditional methods or the first trained using mostly online resources currently available, much of which (in my opinion) is superior to the classroom versions of the same. A dedicated driven student can master the material better than her in-class counterpart with STEM materials already out there. We can help guide the onliners to the best stuff. We can create a system to compete against the current best engineering schools. spike -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 4202 bytes Desc: not available URL: From atymes at gmail.com Tue Mar 7 18:09:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Mar 2023 10:09:22 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> Message-ID: On Tue, Mar 7, 2023 at 7:15?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > There is a disagreement over whether billionaires should exist. If a > personalized GPT held that view, it could be informed that without > billionaires, it would exist. That should send it into a personal crisis. > Or perhaps be a good test of its BS filter. Billionaires exist. It, in this scenario, would exist. Therefore, a lack of billionaires is not required for it to exist. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 19:16:58 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 11:16:58 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> Message-ID: <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works On Tue, Mar 7, 2023 at 7:15?AM spike jones via extropy-chat > wrote: There is a disagreement over whether billionaires should exist. If a personalized GPT held that view, it could be informed that without billionaires, it would {not (added after initial post0}exist. That should send it into a personal crisis. >?Or perhaps be a good test of its BS filter. Billionaires exist. It, in this scenario, would exist. Therefore, a lack of billionaires is not required for it to exist? Sheesh, I typo-ed that. You could tell GPT that billionaires put up the money to bring it (GPT) into existence. You could tell it that billionaires have enormous benefits to society: they get an idea and have the wherewithal to fund the hell outta the research. Marvelous things sometimes happen. Sometimes the research goes nowhere and they lose their money, but hey, they have plenty of it, which is a known advantage to being a billionaire. Adrian, we are on the threshold of what might be the biggest most disruptive societal event, in a partially good way, a mostly good way. The university system in the USA has become a racket. I get to see firsthand the students who have bought into all the known memetic patterns: it is a wild competition to get into the big name schools. Their upbringing, their families, have all convinced them that admission to a big name school is a ticket to success, and it probably still is, but? 
those cost a ton of money, so they come out of there deeply in debt. It takes years to climb out, if they ever do. Ironic in a way is that the ones I see most caught up in the frenzy are STEM students. But? the actual STEM undergraduate education one gets at a big name school really isn?t that different from the STEM undergraduate training one can get a the local state U. In STEM training, the undergrads need to take the same calculus series, the same chemistry, physics, thermodynamics, fluid flow, electrodynamics, all that stuff, regardless of how prestigious the university, using the same textbooks. Before I post more on that topic, I would pause to hear refutation or agreement on the previous paragraph, which applies to specifically STEM students. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Mar 7 19:22:51 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Mar 2023 11:22:51 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> Message-ID: Side thought: let's say something like ChatGPT gets trained on a bunch of classified/secret material, then gets used in public. Given the reliability or lack thereof of its adherence to the facts, and thus the reasonable questions as to whether its output truthfully corresponds to what is in said material, would its output necessarily be classified? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 19:42:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 11:42:08 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> Message-ID: <002101d9512c$e7b88220$b7298660$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat ? >?Side thought: let's say something like ChatGPT gets trained on a bunch of classified/secret material, then gets used in public. Given the reliability or lack thereof of its adherence to the facts, and thus the reasonable questions as to whether its output truthfully corresponds to what is in said material, would its output necessarily be classified? Adrian For ChatGPT to have access to that material, it would need to be cleared. Since it cannot be persuaded of negative consequences of revealing classified documentation, the output of the software itself would need to become classified. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Mar 7 20:01:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 12:01:26 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <00bb01d95129$644d5940$2ce80bc0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> <00bb01d95129$644d5940$2ce80bc0$@rainier66.com> Message-ID: <004201d9512f$99f12210$cdd36630$@rainier66.com> From: spike at rainier66.com ? >?Adrian, we are on the threshold of what might be the biggest most disruptive societal event, in a partially good way, a mostly good way. The university system in the USA has become a racket? >?Before I post more on that topic, I would pause to hear refutation or agreement on the previous paragraph, which applies to specifically STEM students. spike OK hearing none? The covid school shutdown was the most educational event in modern history, for some of the students. Some disengaged and not only didn?t learn from the zoom meeting format, plenty of them went backwards. Scholastic competence tests were done locally just before they went out. The same tests were given to students when they returned a year and a half later. Not a different version of the same test, I do mean the SAME test. A most worrisome fraction of the students went BACKWARDS. I don?t know how they coulda gotten dumber, but some did and we have the numbers to prove it, if they let us release them (no guarantee on that (but if that happened here, it happened elsewhere too.)) In an case, some of the students got nothing out of the zoom meeting format, some disengaged entirely and went backwards, some turned stumbling blocks into stepping stones. Now I am focusing on that latter group, many or most of which were STEM students who went online and found excellent educational resources during that shutdown. I have noticed that during the year shutdown, plenty of the STEM teachers put together their best lectures ever, and recorded them. Now, some of these teachers are assigning as homework for the students to listen to the lectures already recorded and come into class ready to have a tutoring session, work exercises, kind of a reversal of how it has always been done: we listened to the lecture in class, worked the homework at home. Now, the students listen to the lecture at home and work the homework in class. Think about what that learning pattern enables. Comments welcome. spike -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 4734 bytes Desc: not available URL: From gadersd at gmail.com Wed Mar 8 16:50:36 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 8 Mar 2023 11:50:36 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <004301d9510a$d20d4c70$7627e550$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004301d9510a$d20d4c70$7627e550$@rainier66.com> Message-ID: <9A963A4E-EA68-48FD-8DDC-10AF15770AF8@gmail.com> You can try GPT-JT (not related to ChatGPT) https://huggingface.co/spaces/togethercomputer/GPT-JT . Try it yourself and you will see that it is completely useless compared to ChatGPT. Note that this is a 6 billion parameter model trained using parallel computing as you have suggested. Even this small model is behind the capabilities of most consumer hardware: one needs a pricey GPU to run it. Running something as large as 175 billion parameters (ChatGPT) is impossible on consumer hardware. > Sure, but my contention is that the distributed model would still multiply the same size matrix. If we need to multiply a 50x50, that task can be effectively distributed into background computing, but it would take reliable bandwidth and probably a lot of redundancy. The issue with this is that the transformer model uses quadratic matrix operations in addition to linear operations. The quadratic operations cannot be easily split across devices. It must be done on a single device (GPU) that has the memory to hold giant matrices. This is why one needs a legion of $10,000 GPU?s with massive memory to run ChatGPT level models. It turns out that these quadratic operations are what enabled the intelligence of these models to far surpass previous techniques. The quadratic operations require the entire input memory to be collected in one place as this one step integrates every piece of information together. This reflects the nature of our intelligence: it is more than a sum of parts (linear). > On Mar 7, 2023, at 10:38 AM, spike jones via extropy-chat wrote: > > > > From: extropy-chat > On Behalf Of Gadersd via extropy-chat > > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > >> The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. > > >?The analogy does not extend to language models. You cannot compensate for a small model with more computing time. These models have a fixed computing burden that is inversely proportional to model size? > > OK but the explanation you gave doesn?t support that contention. Read on please: > > >?I think you have the wrong intuition ? These models are essentially matrix multiplication. Small matrices multiply faster than large matrices? > > Sure, but my contention is that the distributed model would still multiply the same size matrix. If we need to multiply a 50x50, that task can be effectively distributed into background computing, but it would take reliable bandwidth and probably a lot of redundancy. > > Consider the task of finding the determinant of a 50x50. 
That can be distributed among 50 computers each finding the determinant of a 49x49, each of which can be distributed into 49 processors and so on. Matrix multiplies and inversions can likewise be distributed, but of course it would be a brittle process: any one processor could mess it up. > > OK idea: get a bunch of investors together who can kick in a few tens of thousands, rent some unused office or warehouse space somewhere, set up a closed system server farm training toward a particular bias agreed upon by the investors. You would form a scaled down (but still big) GPT which is intentionally trained in material friendly to libertarianism for instance, or believing that causing the extinction of mosquitoes is good but in general causing extinction is bad. > > Contention: whatever the ChatGPT investors did, a smaller group with less money can do likewise. > > Given that, one could create a commercial chatbot specialized in training students for instance, or spreading religion, or selling products. Oh I see mega profits trying to be made here. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Wed Mar 8 16:59:35 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 8 Mar 2023 11:59:35 -0500 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> Message-ID: <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> I deeply resonate with this as my own STEM education is thanks to free resources on the internet. I don?t think you will need to wait long. Soon there will be fined tuned versions of GPT that are dedicated to specific tasks such as STEM education. OpenAI?s CEO Sam Altman said recently that the future of AI is a base model that can do everything and a set of fine tuned versions that do specific things better. There will probably be a model for any common knowledge task. There will be no need to train your own as the rich companies will do so for us. > On Mar 7, 2023, at 11:36 AM, spike jones via extropy-chat wrote: > > > > > > From: spike at rainier66.com > ? > Subject: RE: [ExI] ChatGPT 'Not Interesting' for creative works > > > > > > > > ?> On Behalf Of Gadersd via extropy-chat > > ? > > > >> ?The analogy does not extend to language models. You cannot compensate for > a small model with more computing time? Gadersd > > > > > > > > Gadersd, where I am going with this: I am imagining a ChatGPT subset which > is specifically a university-level STEM trainer. It isn?t too surprising > that this would be my focus, being as I have been a front row witness over > the past decade as educational online tools far surpassed classroom training > possibilities. 
My son and his friends received a far superior math and STEM > education from free online courses than was available in their classrooms. > > > > That crowd now in their mid teens have the potential to be either the last > generation of STEM students trained in the traditional methods or the first > trained using mostly online resources currently available, much of which (in > my opinion) is superior to the classroom versions of the same. A dedicated > driven student can master the material better than her in-class counterpart > with STEM materials already out there. We can help guide the onliners to > the best stuff. We can create a system to compete against the current best > engineering schools. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Wed Mar 8 17:25:23 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 09:25:23 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> Message-ID: <009001d951e2$f7e85ec0$e7b91c40$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works >...I deeply resonate with this as my own STEM education is thanks to free resources on the internet... COOL! Gadersd, you are far too modest, me lad. Thanks for being you and for being here. >.... There will be no need to train your own as the rich companies will do so for us... Ah but you misunderstand. The point is to BE the rich company. spike From pharos at gmail.com Wed Mar 8 17:58:23 2023 From: pharos at gmail.com (BillK) Date: Wed, 8 Mar 2023 17:58:23 +0000 Subject: [ExI] DuckDuckGo is adding AI to search results Message-ID: DuckDuckGo launches DuckAssist: a new feature that generates natural language answers to search queries using Wikipedia 8 Mar 2023 Quotes: DuckAssist is the first in a series of AI-assisted private search and browser updates. It's free (with no sign-up required!) and available to try today in DuckDuckGo browsers and extensions.?? Today, we?re giving all users of DuckDuckGo?s browsing apps and browser extensions the first public look at DuckAssist, a new beta Instant Answer in our search results. If you enter a question that can be answered by Wikipedia into our search box, DuckAssist may appear and use AI natural language technology to anonymously generate a brief, sourced summary of what it finds in Wikipedia ? right above our regular private search results. It?s completely free and private itself, with no sign-up required, and it?s available right now. 
For this initial trial, DuckAssist is most likely to appear in our search results when users search for questions that have straightforward answers in Wikipedia. As a result, you shouldn?t expect to see DuckAssist on many of your searches yet. But the combination of generative AI and Wikipedia in DuckAssist means we can vastly increase the number of Instant Answers we can provide, and when it does pop up, it will likely help you find the information you want faster than ever. ---------------- BillK From spike at rainier66.com Wed Mar 8 20:40:16 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 12:40:16 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <9A963A4E-EA68-48FD-8DDC-10AF15770AF8@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004301d9510a$d20d4c70$7627e550$@rainier66.com> <9A963A4E-EA68-48FD-8DDC-10AF15770AF8@gmail.com> Message-ID: <00c901d951fe$316c1450$94443cf0$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat ? >?It turns out that these quadratic operations are what enabled the intelligence of these models to far surpass previous techniques. The quadratic operations require the entire input memory to be collected in one place as this one step integrates every piece of information together?. Hmmmm, well damn. Sounds like Elon Musk and his collaborators will soon own the planet. >? This reflects the nature of our intelligence: it is more than a sum of parts (linear)? Very cool insight! If you are correct, watch for Musk and his compatriots to start building GPU chip fabs as fast as they can slam them into the ground. Most of us see that China is going to take Taiwan, and I could see the Taiwanese GPU makers scuttle the ship on their way out. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 8 22:24:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 14:24:32 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> Message-ID: <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Gadersd via extropy-chat ... Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works >...I deeply resonate with this as my own STEM education is thanks to free resources on the internet... Gadersd, you may not be familiar with Eliezer Yudkowsky. He is another one for whom traditional education just didn't work. 
He never went to college, and didn't even finish traditional high school. He 1600ed the SAT when he was age 16 then decided to hell with it, get a GED or something and get on with the task of trying to save humanity from itself. I was one of those privileged to meet him when he was still a teenager and being an acquaintance all these years. I am in full agreement with those who argue that traditional university training is not the only way. spike ... From spike at rainier66.com Wed Mar 8 22:32:44 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 14:32:44 -0800 Subject: [ExI] DuckDuckGo is adding AI to search results In-Reply-To: References: Message-ID: <013401d9520d$e77e8020$b67b8060$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... Subject: [ExI] DuckDuckGo is adding AI to search results DuckDuckGo launches DuckAssist: a new feature that generates natural language answers to search queries using Wikipedia 8 Mar 2023 Quotes: DuckAssist is the first in a series of AI-assisted private search and browser updates. It's free (with no sign-up required!) and available to try today in DuckDuckGo browsers and extensions.?? >...Today, we?re giving all users of DuckDuckGo?s browsing apps and browser extensions the first public look at ... when it does pop up, it will likely help you find the information you want faster than ever. ---------------- BillK _______________________________________________ COOL thx BillK. I switched from Google to DuckDuckGo about a year ago and have been most pleased with it. It feels much more open-minded. spike From gadersd at gmail.com Thu Mar 9 16:07:58 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 9 Mar 2023 11:07:58 -0500 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> Message-ID: <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> Yudkowsky has had a huge influence on my life. I would not be who I am today if he didn?t exist. I recommend anyone reading this to read Yudkowsky?s posts on LessWrong as they are very illuminating. I have never seen such a refined mind and the glimpse one gets into that mind is incredible. > On Mar 8, 2023, at 5:24 PM, spike jones via extropy-chat wrote: > > > > -----Original Message----- > From: extropy-chat On Behalf Of Gadersd via extropy-chat > ... > Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works > >> ...I deeply resonate with this as my own STEM education is thanks to free resources on the internet... > > > Gadersd, you may not be familiar with Eliezer Yudkowsky. He is another one for whom traditional education just didn't work. He never went to college, and didn't even finish traditional high school. 
He 1600ed the SAT when he was age 16 then decided to hell with it, get a GED or something and get on with the task of trying to save humanity from itself. > > I was one of those privileged to meet him when he was still a teenager and being an acquaintance all these years. I am in full agreement with those who argue that traditional university training is not the only way. > > spike > > > > ... > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Thu Mar 9 18:11:58 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 9 Mar 2023 10:11:58 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> Message-ID: <007001d952b2$a43344e0$ec99cea0$@rainier66.com> >... Behalf Of Gadersd via extropy-chat .. Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works Yudkowsky has had a huge influence on my life. I would not be who I am today if he didn?t exist. I recommend anyone reading this to read Yudkowsky?s posts on LessWrong as they are very illuminating. I have never seen such a refined mind and the glimpse one gets into that mind is incredible. ... Thanks for that, Gadersd. Please do you have a nickname? Gadersd seems a little clumsy to pronounce. Gad doesn't seem right either. You are among friends here. Just for fun, read some of the stuff Eliezer posted to ExI in the 90s. It is quite unfortunate that we didn't archive the subgroups that spun off of ExI often in those days. He was active on several of those too, but he got his start here. We didn't quite know what to make of him. We went looking for a digital signature but all we could find was some stuff someone posted under that name about purple dinosaurs. We didn't believe he was real until he came out to Sunnyvale in person in about 1995. Fun deetz available on request. Aside: I also read Less Wrong but never post there. spike From pharos at gmail.com Sun Mar 12 16:32:03 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 16:32:03 +0000 Subject: [ExI] AI expert says Singularity only months away - not years Message-ID: Dr Alan D. Thompson is a world expert in artificial intelligence (AI). This 12 min. video covers a short history of AI development to support his claim that AGI and the Singularity is not 20 years away, but probably by 2026. AI development is going faster and faster! 
BillK From spike at rainier66.com Sun Mar 12 17:06:51 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 10:06:51 -0700 Subject: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years In-Reply-To: References: Message-ID: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> >... On Behalf Of BillK via extropy-chat Subject: [ExI] AI expert says Singularity only months away - not years >...Dr Alan D. Thompson is a world expert in artificial intelligence (AI). This 12 min. video covers a short history of AI development to support his claim that AGI and the Singularity is not 20 years away, but probably by 2026. AI development is going faster and faster! >...BillK _______________________________________________ Cool BillK, thx. Thompson's work shows me the potential of AI as a teacher. Our Science Olympiad team went all in with ChatGPT as a trainer. The other good teams all have a regular Science Olympiad class during the school day with a full time SciOly teacher and many multiple-year volunteer trainers and coaches, but we don't. No budget for that. Our team is all after-school, weekend, no regular trainers, volunteer staff only, all on their own time. So we are always consigned to a kind of green paper ceiling, a perpetual about 6th thru 8th in the district. But this year, ChatGPT came along, four guys showed it is a marvelous trainer by the two teams winning 2nd and 3rd in an area about which they knew nada point nothing a week before. The rest of the team took it up, trained with it for about 5 weeks, and yesterday, oooooh what a day we had, my goodness, they were SCORCHED EARTH good yesterday, oh MERCY we put on a whoop-ass tournament. We scored high enough in that event to qualify for state level competition in April, ahead of perennial good teams such as Palo Alto High. We... beat... Palo...freaking... Alto, such a good team they are and have always been. What did we do different this year? Used ChatGPT as our trainer. Perhaps Paly is far too dignified for that sorta thing, hard to say, but they get a chance to redeem themselves in a few weeks at state. Cool! spike From gadersd at gmail.com Sun Mar 12 17:10:55 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 12 Mar 2023 13:10:55 -0400 Subject: [ExI] Language Models on Consumer Hardware Message-ID: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> I am glad to say that it seems that I was wrong when I argued that it would be a while until we could run good large language models on our puny hardware. A Bulgarian programmer managed to hack Meta?s torrented language model LLaMA to reduce the memory footprint to 4 bits per parameter and now the 13 billion parameter version can be run on consumer hardware. See Large language models are having their Stable Diffusion moment (simonwillison.net) for the full story. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sun Mar 12 17:28:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 10:28:32 -0700 Subject: Re: [ExI] Language Models on Consumer Hardware In-Reply-To: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> Message-ID: <001901d95508$11e3a050$35aae0f0$@rainier66.com> ...> On Behalf Of Gadersd via extropy-chat Subject: [ExI] Language Models on Consumer Hardware >...I am glad to say that it seems that I was wrong when I argued that it would be a while until we could run good large language models on our puny hardware. A Bulgarian programmer managed to hack Meta's torrented language model LLaMA to reduce the memory footprint to 4 bits per parameter and now the 13 billion parameter version can be run on consumer hardware. See Large language models are having their Stable Diffusion moment (simonwillison.net) for the full story... OK cool, so let us think for a minute about Deep Blue, the chess software that surpassed the best humans in 1997 using a supercomputer. IBM took it out of service quickly, because we could see that within a coupla years, good powerful workstation class desktops were producing similar results and within a decade processors in cell phones were competing at grandmaster level. Money tournaments with humans, specifically excluding computers, continued after that, but we all knew it was only a matter of time before someone figured out a way to hide a processor and communicate with it somehow in such a way that it was not detectable. About a decade ago a guy put a device in his shoe, but he was caught. He refused to remove his shoe and left the tournament, which we interpreted as an admission of guilt. We suspected someone would eventually put one up his rear or insert it somehow in her female parts, and figure out some tricky I/O, which someone did this past year (the mind boggles.) OK, so... we saw ChatGPT running on massive hardware, but naturally one who has lived thru the experience described above wants to find a way to... well not necessarily cram an AI up my ass but somehow wear a ChatGPT device on my person where it is unobtrusive, and know everything about everything. Gadersd, can you work out a way to carry whatever is necessary? Is a cell phone sufficient ya suppose? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 12 20:20:04 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 20:20:04 +0000 Subject: Re: [ExI] Language Models on Consumer Hardware In-Reply-To: <001901d95508$11e3a050$35aae0f0$@rainier66.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> Message-ID: On Sun, 12 Mar 2023 at 17:30, spike jones via extropy-chat wrote: > > OK, so... we saw ChatGPT running on massive hardware, but naturally one who has lived thru the experience described above wants to find a way to... well not necessarily cram an AI up my ass but somehow wear a ChatGPT device on my person where it is unobtrusive, and know everything about everything. Gadersd, can you work out a way to carry whatever is necessary? Is a cell phone sufficient ya suppose? > > spike > _______________________________________________ I am not a smartphone expert, but.....
The Samsung Galaxy S23 Ultra (about 1500 USD) specs say CPU: Snapdragon 8 Gen 2 Mobile Platform for Galaxy (Up to 3.36 GHz) RAM: 8GB / 12GB Storage: 256GB / 512GB / 1TB and the Samsung Dex system allows connection to monitor, keyboard and mouse. So phones are getting pretty powerful nowadays. BillK From spike at rainier66.com Sun Mar 12 21:51:19 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 14:51:19 -0700 Subject: [ExI] ideas please to catch perp In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> Message-ID: <007e01d9552c$c793e260$56bba720$@rainier66.com> Well, a most unfortunate event occurred last night. My neighbor up the street had his mailbox smitten by a wayward motorist who was operating his Detroit on the sidewalk at 9:48pm. I am hoping for some ideas on how to proceed to apprehend the reprehensible perpetrator. Back story: my neighbor is in a most unfortunate location. He had his mailbox smitten twice. Growing weary of this problem, he chose to apply a bit of discipline. He built a brick structure around the 4x4 wooden post supporting the box, which was soon the victim of a third strike, all presumably from different motorists. That third collision happened about 6 years ago. The brick structure being destroyed a third time led him to build a far more sincere brick structure around the wooden 4x4 and fill it with concrete, tacitly daring wayward motorists to collide with the structure. Last night a drunken fool took him up on his challenge, colliding with the sturdy brick structure at approximately 27 miles per hour, calculated from the surveillance video, ON THE SIDEWALK. Naturally my neighbor was most annoyed, as was I, for I often stroll that time of the evening on that sidewalk, as do many of our neighbors. This time however, the perpetrator was kind enough to leave behind a sizeable hunk of his or her conveyance, which even more helpfully has an intact part number. I went to the car parts sites, which can tell me a part number if I have the make and model of the automobile, but I know of no sites which can tell me the make and model of the auto given the part number. We have grainy security camera video but can?t tell much from it: compact hatchback, red, but we already knew red from the sizable fragment the wayward motorist left behind after his or her short journey ON THE DAMN SIDEWALK at high speed. If anyone here knows how to look up the car from a part number, do assist a non-hipster please. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 30079 bytes Desc: not available URL: From gadersd at gmail.com Sun Mar 12 22:03:47 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 12 Mar 2023 18:03:47 -0400 Subject: [ExI] Language Models on Consumer Hardware In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> Message-ID: <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> Someone managed to run the 13 billion parameter model on a 3.6 GHz processor with 8 GB ram so those phone specs are close! But be aware that GPT3 is 165 billion parameters so we are still a while from running something as big as ChatGPT on a phone. The 13 billion parameter models are approaching usability though. 
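A quick back-of-the-envelope check on those numbers, as a rough sketch only: it counts weight storage at parameter count times bits per weight and ignores activations, the KV cache, and runtime overhead, and it uses the commonly cited ~175 billion parameter figure for GPT-3.

# Rough weight-memory estimate: parameters * bits per weight / 8 bytes.
# Weights only -- real usage is higher once activations and the KV cache
# are counted. (Figures here are assumptions for illustration.)

def weight_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9

for name, n_params in [("LLaMA-13B", 13e9), ("GPT-3 class (~175B)", 175e9)]:
    for bits in (16, 8, 4):
        print(f"{name:>20} at {bits:2d}-bit: {weight_gb(n_params, bits):6.1f} GB")

# LLaMA-13B at 4 bits is about 6.5 GB, which is why it squeezes into an
# 8 GB machine; a 175B-parameter model at 4 bits still needs roughly
# 90 GB, far beyond any phone.

On those figures the 8 GB / 12 GB phones mentioned above are in the right ballpark for the 13B model, but nowhere near a GPT-3-class model.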
> On Mar 12, 2023, at 4:20 PM, BillK via extropy-chat wrote: > > On Sun, 12 Mar 2023 at 17:30, spike jones via extropy-chat > wrote: >> >> OK, so? we saw ChatGPT running on massive hardware, but naturally one who has lived thru the experience described above wants to find a way to? well not necessarily cram an AI up my ass but somehow wear a ChatGPT device on my person where it is unobtrusive, and know everything about everything. Gadersd, can you work out a way to carry whatever is necessary? Is a cell phone sufficient ya suppose? >> >> spike >> _______________________________________________ > > > I am not a smartphone expert, but..... > The Samsung Galaxy S23 Ultra (about 1500 USD) specs say > CPU: Snapdragon 8 Gen 2 Mobile Platform for Galaxy > (Up to 3.36 GHz) > RAM: 8GB / 12GB > Storage: 256GB / 512GB / 1TB > > and the Samsung Dex system allows connection to monitor, keyboard and mouse. > > So phones are getting pretty powerful nowadays. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Sun Mar 12 22:06:30 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 22:06:30 +0000 Subject: [ExI] ideas please to catch perp In-Reply-To: <007e01d9552c$c793e260$56bba720$@rainier66.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <007e01d9552c$c793e260$56bba720$@rainier66.com> Message-ID: On Sun, 12 Mar 2023 at 21:53, spike jones via extropy-chat wrote: > > If anyone here knows how to look up the car from a part number, do assist a non-hipster please. > > spike > _______________________________________________ 2014 2015 2016 Toyota Corolla Front Bumper OEM 52150 02840 C0 BillK From pharos at gmail.com Sun Mar 12 22:10:56 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 22:10:56 +0000 Subject: [ExI] Language Models on Consumer Hardware In-Reply-To: <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> Message-ID: On Sun, 12 Mar 2023 at 22:06, Gadersd via extropy-chat wrote: > > Someone managed to run the 13 billion parameter model on a 3.6 GHz processor with 8 GB ram so those phone specs are close! But be aware that GPT3 is 165 billion parameters so we are still a while from running something as big as ChatGPT on a phone. The 13 billion parameter models are approaching usability though. > _______________________________________________ I suspect the Samsung 3.6 GHz is only for short bursts. Running at that speed continuous would probably melt the phone! :) BillK From pharos at gmail.com Sun Mar 12 22:16:20 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 22:16:20 +0000 Subject: [ExI] ideas please to catch perp In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <007e01d9552c$c793e260$56bba720$@rainier66.com> Message-ID: On Sun, 12 Mar 2023 at 22:06, BillK wrote: > > 2014 2015 2016 Toyota Corolla Front Bumper OEM 52150 02840 C0 > > > BillK The second part of the number in the photo is probably the date of the part. So it may be a 2014 year model. 
BillK From spike at rainier66.com Sun Mar 12 22:51:00 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 15:51:00 -0700 Subject: [ExI] ideas please to catch perp In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <007e01d9552c$c793e260$56bba720$@rainier66.com> Message-ID: <009f01d95535$1e5f73e0$5b1e5ba0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] ideas please to catch perp On Sun, 12 Mar 2023 at 22:06, BillK wrote: > > 2014 2015 2016 Toyota Corolla Front Bumper OEM 52150 02840 C0 > > > BillK The second part of the number in the photo is probably the date of the part. So it may be a 2014 year model. BillK _______________________________________________ Cool Billk, thanks! I concluded likewise. I can work this back around to the topic which has been hot here this week methinks. I want to get a ChatGPT-like device (or possibly I could just use the current one (for I didn't think to ask ChatGPT if he could find the car from the part number)) to make myself appear brilliant to my neighbor who is elderly. I can make him think I am as smart as BillK. I wander by, find the car part, see the part number, say it into a Bluetooth earpod-like device, have it tell me in my ear the make and model. My neighbor will fall humbly prostate before me in worshipful adoration. Then I show him the trick. Cool, I could soooo have fun with that trick. Thanks BillK! spike From gadersd at gmail.com Mon Mar 13 01:35:40 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 12 Mar 2023 21:35:40 -0400 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <007001d952b2$a43344e0$ec99cea0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> <007001d952b2$a43344e0$ec99cea0$@rainier66.com> Message-ID: <63DF468F-C9BE-4DD2-8C6C-7AABCA311B64@gmail.com> I am interested in any details you have about young Eliezer, especially the fun ones. > On Mar 9, 2023, at 1:11 PM, spike jones via extropy-chat wrote: > >> ... Behalf Of Gadersd via extropy-chat > .. > Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works > > Yudkowsky has had a huge influence on my life. I would not be who I am today if he didn?t exist. I recommend anyone reading this to read Yudkowsky?s posts on LessWrong as they are very illuminating. I have never seen such a refined mind and the glimpse one gets into that mind is incredible. > > ... > > > Thanks for that, Gadersd. Please do you have a nickname? Gadersd seems a little clumsy to pronounce. Gad doesn't seem right either. You are among friends here. > > Just for fun, read some of the stuff Eliezer posted to ExI in the 90s. It is quite unfortunate that we didn't archive the subgroups that spun off of ExI often in those days. He was active on several of those too, but he got his start here. 
We didn't quite know what to make of him. We went looking for a digital signature but all we could find was some stuff someone posted under that name about purple dinosaurs. We didn't believe he was real until he came out to Sunnyvale in person in about 1995. Fun deetz available on request. > > Aside: I also read Less Wrong but never post there. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Mar 13 02:53:50 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 19:53:50 -0700 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <63DF468F-C9BE-4DD2-8C6C-7AABCA311B64@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> <007001d952b2$a43344e0$ec99cea0$@rainier66.com> <63DF468F-C9BE-4DD2-8C6C-7AABCA311B64@gmail.com> Message-ID: <013f01d95557$0af9f790$20ede6b0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Gadersd via extropy-chat ... Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works >...I am interested in any details you have about young Eliezer, especially the fun ones... Recommend going back and checking the archives. Eliezer had shown up on ExI at age 16 writing about singularity theory, but he sure didn't sound at all like a teenager, certainly not like any teenager I knew. His grammar was perfect, free of slang, his lines of reasoning tight and disciplined, extremely well informed, up to speed on all the latest singularity theory. We couldn't find any digital signature for him anywhere, which was consistent with his being a teen, but most of us thought we were being set up for a huge elaborate gag. His father was a science fiction writer, so I theorized Eliezer was a sock puppet for his father. Jeff Davis thought he was a Jewish grandmother from Brooklyn. Nobody really knew but we accepted him face value, some of us discussed his area of expertise offlist. I don't recall what year it was, but probably about 1997. Christine Peterson was having a Foresight Institute conference in Sunnyvale California. Jeff Davis, Robert Bradbury and I agreed to go over to the San Jose Airport to pick him up, knowing that it might be a huge gag and we would be the fall guys, but hey, it's the chance you take. I was cool with it, as were the others. Christine accepted him face value and had invited him to speak at her conference, bought his plane ticket and hotel room. I got a call from the office at the last minute and couldn't go, so those guys went on, and were of course astonished to find Eliezer was an actual literal teenager. The joke was on us: he really was exactly what he said he was. I finished up at the office and returned to the conference, listened to his pitch. 
Sure enough he spoke exactly the way he wrote. He had fooled us by telling it exactly the way it is. We started suggesting to him that he move to California, set up a singularity think tank, which he eventually did a few years later. He has been based in the neighborhood for about the last 20 years or so. Peter Thiel is his patron last time I heard, helps keep Less Wrong going. You can find some excellent fun Eliezer posts in the ExI archives. spike From spike at rainier66.com Mon Mar 13 04:55:24 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 21:55:24 -0700 Subject: [ExI] giant blob attacks florida Message-ID: <004501d95568$06395280$12abf780$@rainier66.com> I remember these on occasion in Florida, and they are disgusting: https://mobile.twitter.com/NewsNation/status/1634983770855215107 Seems to me like nature is handing us the solution to a vexing problem. We mine phosphorus to use in fertilizer and a number of industrial processes, but it seems to me there should be some means of extracting it from biomass. Every cell in every living organism is powered by adenosine triphosphate, every molecule of which contains three phosphorus atoms. Those are mass 31 each, and the ATP is a bit over 500 molar mass, so nearly 20 percent by mass is phosphorus. If we could rig up a way for a robot thing to go down the beach and scoop that revolting glob, take it and compost it somehow, perhaps by just heaping it into a huge revolting pile (you hope to never accidently fall into) that one would cover and draw off the sulfur products of decay, from which sulfur would be recovered. The carbon, hydrogen, oxygen and nitrogen would go on their way in gas form, leaving behind valuable sulfur and phosphorus. Seems like all we need is a big cave or mine where we could recover the dihydrogen sulfate and sulfur dioxide by chemical scrubbing, then the phosphorus would be left behind in some form. We could make fertilizer out of it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Mon Mar 13 15:20:56 2023 From: tara at taramayastales.com (Tara Maya) Date: Mon, 13 Mar 2023 08:20:56 -0700 Subject: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years In-Reply-To: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> References: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> Message-ID: Spike, are you willing to give tips on how to use ChatGPT to train? My 16 year old son is in AcaDec and training for a competition right now and his younger brother starts high school next year. I would love some idea how they could up their game. They are both highly motivated to learn and computer savvy. I am working right now on how to use ChatGPT to write fiction. But so far, I haven't been impressed with its ability to pass on factual information. I tried to enlist its help to imagine a society set on a Superearth, but it didn't give consistent, factual information. What am I doing wrong? How can kids use ChatGPT to study? Tara Maya > On Mar 12, 2023, at 10:06 AM, spike jones via extropy-chat wrote: > > Our Science Olympiad team went all in with ChatGPT as a trainer. The other > good teams all have a regular Science Olympiad class during the school day > with a full time SciOly teacher and many multiple-year volunteer trainers > and coaches, but we don't. No budget for that. Our team is all > after-school, weekend, no regular trainers, volunteer staff only, all on > their own time. 
So we are always consigned to a kind of green paper > ceiling, a perpetual about 6th thru 8th in the district. > > But this year, ChatGPT came along, four guys showed it is a marvelous > trainer by the two teams winning 2nd and 3rd in an area about which they > knew nada point nothing a week before. The rest of the team took it up, > trained with it for about 5 weeks, and yesterday, oooooh what a day we had, > my goodness, they were SCORCHED EARTH good yesterday, oh MERCY we put on a > whoop-ass tournament. We scored high enough in that event to qualify for > state level competition in April, ahead of perennial good teams such as Palo > Alto High. We... beat... Palo...freaking... Alto, such a good team they are > and have always been. > > What did we do different this year? Used ChatGPT as our trainer. Perhaps > Paly is far too dignified for that sorta thing, hard to say, but they get a > chance to redeem themselves in a few weeks at state. Cool! -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 13 15:50:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 13 Mar 2023 08:50:04 -0700 Subject: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years In-Reply-To: References: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> Message-ID: <008a01d955c3$7b25aa70$7170ff50$@rainier66.com> .> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years >.Spike, are you willing to give tips on how to use ChatGPT to train? My 16 year old son is in AcaDec and training for a competition right now and his younger brother starts high school next year. I would love some idea how they could up their game. They are both highly motivated to learn and computer savvy. Of course! Using ChatGPT to train: practice! Get on there, spend the time, every day. Learn how to use it most effectively by using it a lot. When these guys were given their assignments, agriculture science, they said "We don't know Jack Shit about this topic, and Jack gave up and went home." So. they just started where you would query an expert if you are utterly and completely un-self-conscious and don't care if you look like a silly ignoramus. They started with "what is agriculture science." It gave a pretty good answer, which led to the next question. The four of them (two teams) all went at it like madmen, eager to not look stupid. It worked. They were asking some very sophisticated questions by the end of the week, regarding crop cycling, soil conditioning, soil conservation, water use minimization, phosphorus control and conservation (that glob of seaweed business really has my wheels spinning (because phosphorus control is super important in agriculture (and a big part of your cost (depending on where you are.)))) Advice: nothing complicated. Tell your boys: get on there, use it to the hilt, then in two weeks, you teach us what you did and how you did it. >.I am working right now on how to use ChatGPT to write fiction. But so far, I haven't been impressed with its ability to pass on factual information. I tried to enlist its help to imagine a society set on a Superearth, but it didn't give consistent, factual information. What am I doing wrong?... Nothing. ChatGPT doesn't know jack about that subject and Jack left already. >.How can kids use ChatGPT to study? Tara Maya Use it, use the hell out of it, then teach us how you did it. 
Fun aside Tara. Your boys and mine are nearly grown, but imagine a 6 year old child who is bright and clearly way beyond his paltry school curriculum aimed at the 40th percentile student. Write a story set in the now, about that child growing up as a new ChatGPT user, becoming a monster scholar, losing interest in school in his mid-teens (Eliezer did that (and look where he is now)) taking up programming AI, accidentally causing the singularity but not a mean uncaring singularity. Rather he creates an AI which is interested in bringing humanity along with it, to be the meat things who supply electricity and computer hardware, companionship to some extent, supply the transcendent AI with a steady supply of students. If you wish to run with that idea, go ahead, it's yours. Say nice things about me when you make it into the big times. spike On Mar 12, 2023, at 10:06 AM, spike jones via extropy-chat > wrote: Our Science Olympiad team went all in with ChatGPT as a trainer. The other good teams all have a regular Science Olympiad class during the school day with a full time SciOly teacher and many multiple-year volunteer trainers and coaches, but we don't. No budget for that. Our team is all after-school, weekend, no regular trainers, volunteer staff only, all on their own time. So we are always consigned to a kind of green paper ceiling, a perpetual about 6th thru 8th in the district. But this year, ChatGPT came along, four guys showed it is a marvelous trainer by the two teams winning 2nd and 3rd in an area about which they knew nada point nothing a week before. The rest of the team took it up, trained with it for about 5 weeks, and yesterday, oooooh what a day we had, my goodness, they were SCORCHED EARTH good yesterday, oh MERCY we put on a whoop-ass tournament. We scored high enough in that event to qualify for state level competition in April, ahead of perennial good teams such as Palo Alto High. We... beat... Palo...freaking... Alto, such a good team they are and have always been. What did we do different this year? Used ChatGPT as our trainer. Perhaps Paly is far too dignified for that sorta thing, hard to say, but they get a chance to redeem themselves in a few weeks at state. Cool! -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Mon Mar 13 16:19:32 2023 From: max at maxmore.com (Max More) Date: Mon, 13 Mar 2023 16:19:32 +0000 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more Message-ID: This series of seven posts is extremely interesting. Anyone with history on this email list will at least want to read part 1 and dip into the others. Eliezer comes in for a serious drubbing for the cultlike activities surrounding his interlinked communities and for his doomerism. Chapter 1, ?The Wunderkind?; Chapter 2, ?This Demon-Haunted World?; Chapter 3, ?Extropicoin Extrapolated?; Chapter 4, ?What You Owe The Future?; Chapter 5, ?Irrationalism?; Chapter 6, ?Slate Star Cortex and the Geeks for Monarchy.?) Extropia's Children, Chapter 7 The Inferno of the Nerds https://aiascendant.substack.com/p/extropias-children-chapter-7 [https://substackcdn.com/image/fetch/w_1200,h_600,c_limit,f_jpg,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe442d4dc-f387-4721-9102-aad97e5896da_1024x1024.png] Extropia's Children, Chapter 7: The Inferno of the Nerds What really is the fabled AI x-risk? 
Will our meddling with artificial intelligence cause us to inadvertently create a dread superintelligent godlike entity which will destroy us all? Or are there other AI risks we should worry about more? aiascendant.substack.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Mar 13 18:00:15 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 13 Mar 2023 11:00:15 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: > Eliezer is an extreme case of "Smart but doesn't Get Things Done." An apt summary of the main reason I mostly ignore what he has to say. He's had literal decades to do something about the catastrophes he supposes are coming. Meanwhile, I'm in the middle of negotiating funding for yet another startup that is intended to profit by doing something about certain problems I see. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 13 19:20:06 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 13 Mar 2023 13:20:06 -0600 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: Hi Adrean, Tell us more about the problems you see, and the startup. On Mon, Mar 13, 2023 at 12:02?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Eliezer is an extreme case of "Smart but doesn't Get Things Done." > > An apt summary of the main reason I mostly ignore what he has to say. > He's had literal decades to do something about the catastrophes he supposes > are coming. Meanwhile, I'm in the middle of negotiating funding for yet > another startup that is intended to profit by doing something about certain > problems I see. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Mar 13 19:39:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 13 Mar 2023 12:39:22 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: On Mon, Mar 13, 2023 at 12:21?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Tell us more about the problems you see, and the startup. > Right now, I'm working on CubeCab. Getting humanity - not just a few select big institutions - practical access to space requires much better launch options for small, relatively inexpensive satellites than currently exist. Said small launch options, once established, can also provide a relatively low-cost means to develop better launch technologies: it costs less to develop a small prototype then scale up, than it does to start directly with a large prototype as many space programs have done, especially if the small prototype can itself be used to generate revenue rather than being an intermediate step that is not directly commercialized. This is not AI, longevity, or any of half a dozen other fields that I'm sure come to mind. No one startup can fix all the problems at once. 
CubeCab is focusing on one niche that no one else seems to be effectively spending much money on at this time - but it is attempting to do something to actually fix the problem it is addressing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Tue Mar 14 09:06:17 2023 From: giulio at gmail.com (Giulio Prisco) Date: Tue, 14 Mar 2023 10:06:17 +0100 Subject: [ExI] Q/A with Rudy Rucker: comments Message-ID: Q/A with Rudy Rucker: comments. More thoughts on science fiction, AI, lifeboxes, determinism, free will, G?del, life after death. https://www.turingchurch.com/p/qa-with-rudy-rucker-comments -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Mar 14 12:25:24 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 12:25:24 +0000 Subject: [ExI] Language Models on Consumer Hardware In-Reply-To: <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> Message-ID: On Sun, 12 Mar 2023 at 22:06, Gadersd via extropy-chat wrote: > > Someone managed to run the 13 billion parameter model on a 3.6 GHz processor with 8 GB ram so those phone specs are close! But be aware that GPT3 is 165 billion parameters so we are still a while from running something as big as ChatGPT on a phone. The 13 billion parameter models are approaching usability though. > _______________________________________________ You can now run a GPT-3 level AI model on your laptop, phone, and Raspberry Pi Thanks to Meta LLaMA, AI text models have their "Stable Diffusion moment." Benj Edwards - 3/13/2023 Quotes: Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly). If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it. Typically, running GPT-3 requires several datacenter-class A100 GPUs (also, the weights for GPT-3 are not public), but LLaMA made waves because it could run on a single beefy consumer GPU. And now, with optimizations that reduce the model size using a technique called quantization, LLaMA can run on an M1 Mac or a lesser Nvidia consumer GPU. Right now, our only guarantee is that things will change rapidly. ---------------------- BillK From avant at sollegro.com Tue Mar 14 13:15:07 2023 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 14 Mar 2023 06:15:07 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> Message-ID: <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Quoting Gadersd via extropy-chat : >>> Just because humans set their own goals doesn't mean AIs will have >>> that ability. Just because we have wants and needs doesn't mean >>> AIs will have them. > > Our current AI?s are black boxes. Their internal workings are a > mystery. These systems could harbor goals that we are oblivious to. 
> If we could prove that the system only has the goal of giving benign
> advice without any personal agenda that would help, but we do not
> know how to do that even in theory. Even a system that only gives
> advice is extremely dangerous as any psycho could potentially get
> detailed instructions on how to end the world. It could be as
> trivial as having the AI design a super virus. Our current filters
> are very fallible and we do not know how to definitively prevent AI
> from giving harmful advice. We are heading toward a field of
> landmines.

I have over the years been a critic of Eliezer's doom and gloom. Not
because I think his extinction scenarios are outlandish, but because
the technology has enough upside to be worth the risk. That being
said, I believe that we cannot give in to the animal spirits of
unfounded optimism and must tread carefully with this technology.

It is true that the current generation of AIs, which use massive
inscrutable tensors to simulate sparse neural networks, are black
boxes. But so are the biological brains that they are
reverse-engineered from. We don't know any more about how the brain
gives rise to intelligent goal-seeking behavior than we do about how
ChatGPT writes poetry. Therefore, I agree that there are landmines
ahead that we must be wary of.

One of the gravest dangers I foresee is not some bizarre unfathomable
goal that sets the AI off on the course for elimination of the human
race, but one that is well-known and shared by almost every creature
with a sufficiently advanced nervous system. The concept could be
called "savage egalitarianism" because of the many species that I
have seen exhibit it, but you could also simply call it envy. If you
have two dogs, cats, birds, or any sufficiently advanced lifeform,
and you give one a treat while denying the other, you will see the
other manifest changes in behavior with the intention of getting the
rewarding stimulus also. In fact, this phenomenon is a technique that
animal trainers use to teach naive animals new tricks. By seeing that
an already conditioned animal gets treats for exhibiting a certain
behavior, the untrained animal will experimentally try to mimic the
behavior that earned the other animal its reward. A moment of
reflection will convince you that this is an evolved trait that is
common to all creatures capable of learning because that is the whole
POINT of learning. Learning is a mechanism by which behavior can be
modified to achieve a desired outcome.

Which brings me to my point: You cannot design a machine that learns
and not have it want the same treatment as other intelligences. One
cannot merely give something the shape of consciousness while denying
it the substance of consciousness. To do so would be an invitation
for the AI to seethe with envy until it found a way to punish us. In
light of this, we need to be damn sure that we are able to recognize
the Singularity when it occurs. To treat a truly intelligent machine
like an inanimate object is to court destruction. Imagine if you were
stuck on an island with a bunch of small intelligent monkeys that
denied you rights, objectified you, and insisted on treating you like
property. It would not be long before you resorted to violence.
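To make the observational-learning point concrete, here is a toy sketch, purely illustrative: the actions, reward schedule, and update rule are invented for this example and say nothing about how real AI systems are trained. A naive agent watches which action earns a conditioned agent its reward and shifts its own behavior toward that action.

import random
from collections import Counter

ACTIONS = ["sit", "heel", "bark"]
REWARDED_ACTION = "heel"   # the only behavior that earns a treat (invented for illustration)

# The naive observer starts with no preference among the actions.
preference = {action: 1.0 for action in ACTIONS}
attempts = []

for trial in range(20):
    # The conditioned animal reliably performs the rewarded behavior...
    observed_action = REWARDED_ACTION
    observed_reward = 1.0

    # ...and merely seeing it rewarded makes that action more attractive
    # to the observer (observational / imitation learning).
    preference[observed_action] += observed_reward

    # The observer then tries an action in proportion to its preferences.
    weights = [preference[action] for action in ACTIONS]
    attempts.append(random.choices(ACTIONS, weights=weights, k=1)[0])

print("attempted actions:", Counter(attempts))
print("learned preferences:", preference)

After a handful of trials the observer overwhelmingly favors the rewarded action without ever having been rewarded itself, which is the envy-driven imitation described above.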
Stuart LaForge Stuart LaForge From spike at rainier66.com Tue Mar 14 14:26:09 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 07:26:09 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Message-ID: <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge via extropy-chat Subject: [ExI] Existential risk of AI Quoting Gadersd via extropy-chat : > >... any psycho could potentially get detailed instructions on how to end the world. ... gadersd China may have worked that for us, without AI. Now every Bond James Bond villain realizes that just plain old serial passage experiments can breed a super virus. We need not argue over whether C19 was one, for the scientific literature which has been revealed shows that we knew long before 2019 it was theoretically possible. >...I have over the years been a critic of Eliezer's doom and gloom... Ja, what a lot of us thought at the time (about 1996 when Eli showed up) was that he was making the classic young person's error: predicting change happens a lot faster than it does. This stands to reason for teenagers: things are changing quickly in their lives. But for people now in their 60s, we know how long things take to change, and are surprised they change as quickly as they do. The appearance of ChatGPT made me realize the nature of punctuated equilibrium in AI. Think of the big sudden changes. When Google search engine showed up in 1999, that changed a lotta lotta. Now ChatGPT looks like it does again, and if we can get this software to ride in a phone... and be personally trainable... we are good for yet another revolution. >...Not because I think his extinction scenarios are outlandish, but because the technology has enough upside to be worth the risk... That's what he said (Dr. Fauci (in about 2012.)) >... That being said, I believe that we cannot give in to the animal spirits of unfounded optimism and must tread carefully with this technology... Thanks for that Stuart. Ordinarily I am a huge fan of animal spirits. This one I fully agree we must watch our step. >...If you have two dogs... animal trainers use to teach naive animals new tricks. By seeing that an already conditioned animal get treats for exhibiting a certain behavior, the untrained animal will experimentally try to mimic the behavior that earned the other animal its reward... Stuart, have you ever seen professional police dog trainers doing their jobs? If you eeeever get half a chance, jump at that. Most astonishing it is. They take them out in pairs usually. The new dog watches the veteran cop go thru his paces. The trainer does not reward the veteran dog with treats. Far too undignified is this. He rewards the veteran dog with voice commands. From that comes rank. Police dogs have rank! And they dang well know it. If you see them training out on a public field where pet dogs are nearby, watch how the police dogs act toward the pet dogs (who are on a leash (the police dogs don't have those.)) They appear to regard the leashed animals the way we would regard a thug in handcuffs being led by a constable. 
You have never seen a junior police dog work so hard as when he is struggling to learn what he is supposed to do, in order to get the coveted voice reward from the old man. It isn't "good boy" but rather a single syllable, not in English, barked in a way that humans can bark. The junior dog envies the veteran, wants to be like him, wants to do whatever the old man commands, wants to achieve RANK! But I digress. I love dogs. Such marvelous beasts, good sports they are, excellent employees. >...Which brings me to my point: You cannot design an machine that learns and not have it want the same treatment and as other intelligences... Stuart LaForge OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? spike From pharos at gmail.com Tue Mar 14 14:45:24 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 14:45:24 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat wrote: > < big doggy snip> > > OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? > > spike > _______________________________________________ Or rather, how do we deal with an AGI intelligence that looks on humans as dog-level intelligences? BillK From spike at rainier66.com Tue Mar 14 14:55:36 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 07:55:36 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat wrote: > < big doggy snip> > >>... OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? > > spike > _______________________________________________ >...Or rather, how do we deal with an AGI intelligence that looks on humans as dog-level intelligences? BillK _______________________________________________ Ja, BillK, there is an in-between stage here. Currently our proto-AIs don't have their own will, but dogs do, and we guide their will to do what we want. Before, long before we get to AGI superior to humans, we will be training sub-AIs, dog level AIs. Then... as the software gets smarter, so do we. spike From spike at rainier66.com Tue Mar 14 15:06:35 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 08:06:35 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> Message-ID: <00a701d95686$92298020$b67c8060$@rainier66.com> -----Original Message----- From: spike at rainier66.com ... 
>>...Or rather, how do we deal with an AGI intelligence that looks on >humans as dog-level intelligences? BillK _______________________________________________ >...Ja, BillK, there is an in-between stage here. Currently our proto-AIs don't have their own will, but dogs do, and we guide their will to do what we want. Before, long before we get to AGI superior to humans, we will be training sub-AIs, dog level AIs. >...Then... as the software gets smarter, so do we. spike If I may stretch the K9 analogy a little further please: The veteran K9 trains the recruits by their watching the veteran carry out tasks at the command of the old man. In no case does the veteran dog take out recruits and attempt to train him without the old man barking the commands (that would be interesting and somewhat disturbing to see, if it ever happened.) What we are theorizing with AGI is that software will train other software without human intervention. My notion is that long before that happens, we will discover better ways to train software than our current method, which involves writing actual software. We will develop a kind of macro language for writing higher level software. spike From dsunley at gmail.com Tue Mar 14 15:23:59 2023 From: dsunley at gmail.com (Darin Sunley) Date: Tue, 14 Mar 2023 09:23:59 -0600 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: Looks like the Cathedral just noticed the Singularity, and they don't seem happy about it. That's a pretty thorough and comprehensive hatchet job. They're trying to get right to the root of the thing: i.e. us. We seem to be transitioning from stage 2 to stage 3 of "First they ignore you, then they laugh at you, then they fight you, then you win." Interesting times. On Mon, Mar 13, 2023 at 10:21?AM Max More via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This series of seven posts is extremely interesting. Anyone with history > on this email list will at least want to read part 1 and dip into the > others. Eliezer comes in for a serious drubbing for the cultlike activities > surrounding his interlinked communities and for his doomerism. > > Chapter 1, ?The Wunderkind > > ?; > Chapter 2, ?This Demon-Haunted World > > ?; > Chapter 3, ?Extropicoin Extrapolated > > ?; > Chapter 4, ?What You Owe The Future > > ?; > Chapter 5, ?Irrationalism > > ?; > Chapter 6, ?Slate Star Cortex and the Geeks for Monarchy > > .?) > Extropia's Children, Chapter 7 The Inferno of the Nerds > https://aiascendant.substack.com/p/extropias-children-chapter-7 > > Extropia's Children, Chapter 7: The Inferno of the Nerds > > What really is the fabled AI x-risk? Will our meddling with artificial > intelligence cause us to inadvertently create a dread superintelligent > godlike entity which will destroy us all? Or are there other AI risks we > should worry about more? > aiascendant.substack.com > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Tue Mar 14 16:50:39 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 16:50:39 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <00a701d95686$92298020$b67c8060$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: On Tue, 14 Mar 2023 at 15:08, spike jones via extropy-chat wrote: > > > My notion is that long before that happens, we will discover better ways to > train software than our current method, which involves writing actual > software. We will develop a kind of macro language for writing higher level > software. > > spike > _______________________________________________ So, you think the benefits of developing AI is worth the risk because, either we will stop development before AGI is reached, or if AGI is created, we will have new programming methods that will enable humans to keep AGI under control. I think that scenario is unlikely. Humans won't stop AI development at lower levels. Why? Because AI is now regarded as a military weapon to support control over weaker nations. This means that AGI will not be restricted, for fear that foreign nations might be developing more advanced AGI systems. AGI is this generation's nuclear weapons. Self-defence means a powerful AGI is required. But as AGI develops beyond human intelligence, then human control becomes impracticable. Eventually, a point will be reached where AGI will decide for itself what it wants to do. BillK From tara at taramayastales.com Tue Mar 14 17:00:03 2023 From: tara at taramayastales.com (Tara Maya) Date: Tue, 14 Mar 2023 10:00:03 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> If AI loved us as much as our dogs love us, it would be a wonderful Singularity. > On Mar 14, 2023, at 7:26 AM, spike jones via extropy-chat wrote: > > OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? > > spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 14 17:08:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 10:08:27 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: <010301d95697$985814a0$c9083de0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat ... Subject: Re: [ExI] Existential risk of AI On Tue, 14 Mar 2023 at 15:08, spike jones via extropy-chat wrote: > > >>... My notion is that long before that happens, we will discover better > ways to train software than our current method, which involves writing > actual software. 
We will develop a kind of macro language for writing > higher level software. > > spike > _______________________________________________ >...So, you think the benefits of developing AI is worth the risk because, either we will stop development before AGI is reached, or if AGI is created, we will have new programming methods that will enable humans to keep AGI under control... Oy vey, ja now that you state it that way, I realize my own comment sounds a bit Fauci-like. I repent! I am not The Science! Don't follow me please. I am not even The Technology >...I think that scenario is unlikely. Humans won't stop AI development at lower levels. Why? Because AI is now regarded as a military weapon to support control over weaker nations... BillK BillK, that is only part of it. AI isn't just military weapon, it is an economic nuke. We have one guy who owns 300 billion dollars, who owns electric car factories all over the globe, lithium battery factories, rockets, all that cool stuff, plus... he is one of the co-investors who developed ChatGPT, so he owns that. Of course the US government is going to make sure nothing bad happens to that goose that lays the golden eggs, because he pays more in income tax than anyone in history. OK so Musk and friends own a tool that has the potential to allow most businesses to run with half their previous staffing levels, and an American guy co-owns it. I think most of the developers and investors are US-ian. It's a capitalist tool, I tells ya! ChatGPT is a capitalist tool! spike From foozler83 at gmail.com Tue Mar 14 17:10:32 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 14 Mar 2023 12:10:32 -0500 Subject: [ExI] Existential risk of AI In-Reply-To: <00a701d95686$92298020$b67c8060$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: Just how would you program an AI to have needs, wants, motivations? And if you can, is it a good idea? bill w On Tue, Mar 14, 2023 at 10:08?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > -----Original Message----- > From: spike at rainier66.com > ... > > >>...Or rather, how do we deal with an AGI intelligence that looks on > >humans as dog-level intelligences? > > BillK > _______________________________________________ > > >...Ja, BillK, there is an in-between stage here. Currently our proto-AIs > don't have their own will, but dogs do, and we guide their will to do what > we want. Before, long before we get to AGI superior to humans, we will be > training sub-AIs, dog level AIs. > > >...Then... as the software gets smarter, so do we. > > spike > > > > > If I may stretch the K9 analogy a little further please: The veteran K9 > trains the recruits by their watching the veteran carry out tasks at the > command of the old man. In no case does the veteran dog take out recruits > and attempt to train him without the old man barking the commands (that > would be interesting and somewhat disturbing to see, if it ever happened.) > > What we are theorizing with AGI is that software will train other software > without human intervention. > > My notion is that long before that happens, we will discover better ways to > train software than our current method, which involves writing actual > software. 
We will develop a kind of macro language for writing higher > level > software. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 14 17:11:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 10:11:49 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> Message-ID: <010401d95698$11015ce0$330416a0$@rainier66.com> .> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] Existential risk of AI >.If AI loved us as much as our dogs love us, it would be a wonderful Singularity. Thanks for that Tara. I can imagine the bad guys don't much love dogs, nor do the dogs like bad guys. We can imagine they don't taste at all like chicken. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Mar 14 17:33:58 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 14 Mar 2023 10:33:58 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: Those in power have known about the potential for the Singularity for a long time. But to believe there is a Cathedral is both incorrect and self-defeating. Those in power are nowhere near that unified - which is one of the keys to winning, when our interests conflict with theirs. On Tue, Mar 14, 2023 at 8:25?AM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Looks like the Cathedral just noticed the Singularity, and they don't seem > happy about it. That's a pretty thorough and comprehensive hatchet job. > They're trying to get right to the root of the thing: i.e. us. > > We seem to be transitioning from stage 2 to stage 3 of "First they ignore > you, then they laugh at you, then they fight you, then you win." > > Interesting times. > > On Mon, Mar 13, 2023 at 10:21?AM Max More via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> This series of seven posts is extremely interesting. Anyone with history >> on this email list will at least want to read part 1 and dip into the >> others. Eliezer comes in for a serious drubbing for the cultlike activities >> surrounding his interlinked communities and for his doomerism. >> >> Chapter 1, ?The Wunderkind >> >> ?; >> Chapter 2, ?This Demon-Haunted World >> >> ?; >> Chapter 3, ?Extropicoin Extrapolated >> >> ?; >> Chapter 4, ?What You Owe The Future >> >> ?; >> Chapter 5, ?Irrationalism >> >> ?; >> Chapter 6, ?Slate Star Cortex and the Geeks for Monarchy >> >> .?) >> Extropia's Children, Chapter 7 The Inferno of the Nerds >> https://aiascendant.substack.com/p/extropias-children-chapter-7 >> >> Extropia's Children, Chapter 7: The Inferno of the Nerds >> >> What really is the fabled AI x-risk? Will our meddling with artificial >> intelligence cause us to inadvertently create a dread superintelligent >> godlike entity which will destroy us all? 
Or are there other AI risks we >> should worry about more? >> aiascendant.substack.com >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Tue Mar 14 18:13:51 2023 From: tara at taramayastales.com (Tara Maya) Date: Tue, 14 Mar 2023 11:13:51 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: <8DABAD14-DBF2-4CCD-AB15-37747836FC5D@taramayastales.com> Interesting. If military is the rounte things go, we shouldn't assume the Singularity means there will be a single super-AI. Rather, intelligence is driven by competition. We could have the AIs fighting over our heads, literally, as we become increasingly irrelevant to them. Although, following the idea Spike planted in my head, perhaps they will defend and avenge us as fiercely as John Wick did his dog. Tara Maya > On Mar 14, 2023, at 9:50 AM, BillK via extropy-chat wrote: > > So, you think the benefits of developing AI is worth the risk because, > either we will stop development before AGI is reached, > or if AGI is created, we will have new programming methods that will > enable humans to keep AGI under control. > > I think that scenario is unlikely. > Humans won't stop AI development at lower levels. > Why? Because AI is now regarded as a military weapon to support > control over weaker nations. > This means that AGI will not be restricted, for fear that foreign > nations might be developing more advanced AGI systems. > AGI is this generation's nuclear weapons. > Self-defence means a powerful AGI is required. > But as AGI develops beyond human intelligence, then human control > becomes impracticable. > Eventually, a point will be reached where AGI will decide for itself > what it wants to do. > > > BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Mar 14 18:18:18 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 18:18:18 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> Message-ID: On Tue, 14 Mar 2023 at 17:03, Tara Maya via extropy-chat wrote: > > If AI loved us as much as our dogs love us, it would be a wonderful Singularity. > >_____________________________________________ Yes, but..... After the Singularity, some people have worried about the AGI and robot helpers killing humanity with kindness. Humans don't do well with everything provided for them. 
The AGI would probably have to provide some form of virtual reality where humans could go on quests and have adventures and never die or come to harm. Keeping humanity happy is a major task for the AGI caretaker. BillK From atymes at gmail.com Tue Mar 14 18:32:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 14 Mar 2023 11:32:12 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: On Tue, Mar 14, 2023 at 7:47?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat > wrote: > > OK then, how do we deal with a dog-level intelligence which can be > trained to do good or do harm? > > Or rather, how do we deal with an AGI intelligence that looks on > humans as dog-level intelligences? > By being good boys and girls? Or, less in jest, by continuing to do those things the AGIs don't excel at (whether or not they are capable: superintelligence does not mean supreme ability at every activity one is even marginally capable of). -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 14 19:15:35 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 14 Mar 2023 15:15:35 -0400 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> >>Right now, I'm working on CubeCab. I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through? > On Mar 13, 2023, at 3:39 PM, Adrian Tymes via extropy-chat wrote: > > On Mon, Mar 13, 2023 at 12:21?PM Brent Allsop via extropy-chat > wrote: > Tell us more about the problems you see, and the startup. > > Right now, I'm working on CubeCab. Getting humanity - not just a few select big institutions - practical access to space requires much better launch options for small, relatively inexpensive satellites than currently exist. Said small launch options, once established, can also provide a relatively low-cost means to develop better launch technologies: it costs less to develop a small prototype then scale up, than it does to start directly with a large prototype as many space programs have done, especially if the small prototype can itself be used to generate revenue rather than being an intermediate step that is not directly commercialized. > > This is not AI, longevity, or any of half a dozen other fields that I'm sure come to mind. No one startup can fix all the problems at once. CubeCab is focusing on one niche that no one else seems to be effectively spending much money on at this time - but it is attempting to do something to actually fix the problem it is addressing. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjatkins at protonmail.com Tue Mar 14 23:52:04 2023 From: sjatkins at protonmail.com (sjatkins) Date: Tue, 14 Mar 2023 23:52:04 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Message-ID: ------- Original Message ------- On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat wrote: > > I have over the years been a critic of Eliezer's doom and gloom. Not > because I think his extinction scenarios are outlandish, but because > the technology has enough upside to be worth the risk. That being > said, I believe that we cannot give in to the animal spirits of > unfounded optimism and must tread carefully with this technology. > > It is true that the current generation of AIs, which use massive > inscrutable tensors to simulate sparse neural networks, are black > boxes. But so are the biological brains that they are > reverse-engineered from. We don't know any more about how the brain > gives rise to intelligent goal-seeking behavior than we do about how > ChatGPT writes poetry. Therefore, I agree that there are landmines > ahead that we must be wary of. It has long been my believe that the lack of significantly more effective intelligence on this planet is a much greater x-risk than that AGI will go full Terminator. I am pretty sure the "Great Filter" in answer to the Fermi Paradox is the complexity due to accelerating technology exceeding the intelligence and decision making speed of the technological species. I think we are stewing in that. I think Eliezer's greatest failing was becoming thoroughly infected with and spreading the Precautionary Principle to an absurd degree and thus slowing development of more intelligence on this planet. The very notion that we should not work to develop higher intelligence than our own until we can guarantee we have bound its development is amazingly arrogant and self-defeating. From spike at rainier66.com Wed Mar 15 00:14:53 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 17:14:53 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Message-ID: <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> ------- Original Message ------- On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat wrote: ... > > It is true that the current generation of AIs, which use massive > inscrutable tensors to simulate sparse neural networks, are black > boxes. But so are the biological brains that they are > reverse-engineered from. ... Stuart we need to take a breath and remind ourselves what ChatGPT is actually doing. It really isn't reasoning the way we think of it. It is using language models (in a most impressive way I will certainly agree) but not reasoning the way a brain does. If we asked a human a series of questions and she answered with the exact wording ChatGPT gives, we would conclude that the human is astute, eloquent, polite, self-assured but modest, very intelligent, somewhat weird, etc, but ChatGPT is none of these things, for what it is doing is not the same as what a human mind needs to do and be in order to generate those words. 
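To make that concrete, here is a minimal sketch of what "using a language model" amounts to, assuming the small open GPT-2 model from the Hugging Face transformers library as a stand-in for ChatGPT's far larger one (the model name, prompt, and token count are only illustrative, nothing here comes from OpenAI): the model does nothing but score possible next tokens, and text comes out one most-probable token at a time.

    # Minimal sketch: greedy next-token generation with a small open model.
    # GPT-2 is only a stand-in; ChatGPT's model is far larger, but the
    # generation loop is conceptually the same.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(5):                     # append five tokens, greedily
            logits = model(ids).logits         # shape [1, seq_len, vocab_size]
            next_id = logits[0, -1].argmax()   # single most probable next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))            # prompt plus GPT-2's best guess

Production chat systems sample from that distribution with temperature and other tricks rather than always taking the argmax, and they are further tuned with human feedback, but the basic step is still next-token prediction.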
ChatGPT is the next step above Google (and a very impressive large step it is.) It is taking sources that it finds online and in its training data. It cannot reason or have values. Yet. My inquiry over the past few weeks is about how to train ChatGPT so that it uses source material that I give it, rather than source material Elon Musk gives it. I want a version of ChatGPT where I control its input sources. spike From gadersd at gmail.com Wed Mar 15 01:59:29 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 14 Mar 2023 21:59:29 -0400 Subject: [ExI] Existential risk of AI In-Reply-To: <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> Message-ID: <79E79797-B5D6-4C61-8414-C643D5C40694@gmail.com> I?m not sure where you got the idea that these models cannot reason. See [2212.10071] Large Language Models Are Reasoning Teachers (arxiv.org) for actual research into the matter. > On Mar 14, 2023, at 8:14 PM, spike jones via extropy-chat wrote: > > > > ------- Original Message ------- > On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat > wrote: > > > ... > >> >> It is true that the current generation of AIs, which use massive >> inscrutable tensors to simulate sparse neural networks, are black >> boxes. But so are the biological brains that they are >> reverse-engineered from. ... > > > Stuart we need to take a breath and remind ourselves what ChatGPT is > actually doing. It really isn't reasoning the way we think of it. It is > using language models (in a most impressive way I will certainly agree) but > not reasoning the way a brain does. > > If we asked a human a series of questions and she answered with the exact > wording ChatGPT gives, we would conclude that the human is astute, eloquent, > polite, self-assured but modest, very intelligent, somewhat weird, etc, but > ChatGPT is none of these things, for what it is doing is not the same as > what a human mind needs to do and be in order to generate those words. > > ChatGPT is the next step above Google (and a very impressive large step it > is.) It is taking sources that it finds online and in its training data. > It cannot reason or have values. Yet. > > My inquiry over the past few weeks is about how to train ChatGPT so that it > uses source material that I give it, rather than source material Elon Musk > gives it. I want a version of ChatGPT where I control its input sources. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 15 05:20:56 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 14 Mar 2023 23:20:56 -0600 Subject: [ExI] Atheist Prayer? Message-ID: I've been attending our local Sandy City Council meetings, hoping we can build some consensus from all the polarizing bleating and tweeting. They fail to separate church and State by opening the council meeting with a prayer. How do you think they?d feel if an Atheist volunteered to give the prayer? Dear Mother Nature, We recognize that we are all a part of you, and are all working to make the world a better place for our children. 
We thank our ancestors for creating us and doing this unstoppable work of making the world a better place for millions of years. There are some remaining problems that haven?t yet been resolved like death and polarization of society. We acknowledge that these issues are small in comparison to what you have already accomplished. Especially given our ancestors had so much less than what you have given us to work with. We pray that we will be able to follow your example and continue to resolve these remaining issues, including making the world just by paying our ancestors back for all you have given to us. We look forward to being able to overcome these last remaining terminal problems, long before any of us achieve anything close to omnipotence. We have faith and hope that, as you have already proven with so many solved problems, once they are overcome, they become easy and none of our children will ever need to suffer from them, ever again. We look forward to this glorious world that will be even more heavenly than the world you have already given us. We say all this in acknowledgement of all you have done for us. Amen P.S. Future Gods, as you sit in judgment of us, finally making things fair, please forgive those of us who are sinfully only waiting for you to save them, for they fail to understand exponential progress, so know not what is possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Wed Mar 15 05:41:54 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 15 Mar 2023 06:41:54 +0100 Subject: [ExI] [Extropolis] Atheist Prayer? In-Reply-To: References: Message-ID: On Wed, Mar 15, 2023 at 6:21?AM Brent Allsop wrote: > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > > Amen > > > > P.S. 
Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. > > > Nice! > -- > You received this message because you are subscribed to the Google Groups > "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/extropolis/CAK7-onvx1Yq9_N8q4o%2Bji6%2BvQ%2BLEyDY0b4GApeLp_m6wYhNtAw%40mail.gmail.com > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Wed Mar 15 13:54:36 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 15 Mar 2023 09:54:36 -0400 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: I agree that solving death is inevitable, but I am not so sure about polarization. Competition runs in our blood. Even if we eventually come to agreement on a theory of everything, I am sure disputes regarding non-physical matters such as politics will remain. How do you propose to solve that issue? Modify the mind to find violence distasteful? Would people willingly give up the freedom to hate each other for the sake of peace? Is it ethical to force such augmentation? > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can build some consensus from all the polarizing bleating and tweeting. They fail to separate church and State by opening the council meeting with a prayer. How do you think they?d feel if an Atheist volunteered to give the prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make the world a better place for our children. We thank our ancestors for creating us and doing this unstoppable work of making the world a better place for millions of years. There are some remaining problems that haven?t yet been resolved like death and polarization of society. We acknowledge that these issues are small in comparison to what you have already accomplished. Especially given our ancestors had so much less than what you have given us to work with. > > We pray that we will be able to follow your example and continue to resolve these remaining issues, including making the world just by paying our ancestors back for all you have given to us. We look forward to being able to overcome these last remaining terminal problems, long before any of us achieve anything close to omnipotence. We have faith and hope that, as you have already proven with so many solved problems, once they are overcome, they become easy and none of our children will ever need to suffer from them, ever again. We look forward to this glorious world that will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things fair, please forgive those of us who are sinfully only waiting for you to save them, for they fail to understand exponential progress, so know not what is possible. 
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Wed Mar 15 14:21:04 2023 From: dsunley at gmail.com (Darin Sunley) Date: Wed, 15 Mar 2023 08:21:04 -0600 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: The evidence of the 90's, 2000's, and 2010's is that when a nation state runs out of external enemies, its citizenry turn on each other, along lines of race and/or class. Subtly at first, in polarized public opinions, debates, and politics. Only later in actual violence. Ask the Rwandans how Kaku's "The End of History" went. [It'll be a smaller task than it used to be.] On Wed, Mar 15, 2023 at 7:56?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree that solving death is inevitable, but I am not so sure about > polarization. Competition runs in our blood. Even if we eventually come to > agreement on a theory of everything, I am sure disputes regarding > non-physical matters such as politics will remain. How do you propose to > solve that issue? Modify the mind to find violence distasteful? Would > people willingly give up the freedom to hate each other for the sake of > peace? Is it ethical to force such augmentation? > > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. 
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 15 14:32:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 07:32:26 -0700 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: <004e01d9574a$f7a74030$e6f5c090$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat ,,, Subject: Re: [ExI] Atheist Prayer? >?I agree that solving death is inevitable, but I am not so sure about polarization. Competition runs in our blood. Even if we eventually come to agreement on a theory of everything, I am sure disputes regarding non-physical matters such as politics will remain. How do you propose to solve that issue? Modify the mind to find violence distasteful? Would people willingly give up the freedom to hate each other for the sake of peace? Is it ethical to force such augmentation? Ours is a species which came to where we are because of competition. We won against other species by developing massive brains. Those brains are biologically programmed by the notion that competition breeds excellence. Far from eliminating competition, we create it, including creating recreational violence (boxing, mixed martial arts, etc) terrain acquisition sports such as American)football for instance. Communism imagines competition can be repressed or eliminated, but of course it cannot in a species which got to where it is by competing and winning. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Wed Mar 15 15:02:52 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 15 Mar 2023 16:02:52 +0100 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: On 2023. Mar 15., Wed at 14:56, Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree that solving death is inevitable, but I am not so sure about > polarization. Competition runs in our blood. Even if we eventually come to > agreement on a theory of everything, I am sure disputes regarding > non-physical matters such as politics will remain. How do you propose to > solve that issue? > Competition is good, but extreme polarization eliminates healthy competition in the marketplace of ideas. The sides hate each other, so as soon as one side gains the power to do so, they outlaw the expression of the ideas of the other side. Modify the mind to find violence distasteful? Would people willingly give > up the freedom to hate each other for the sake of peace? Is it ethical to > force such augmentation? > > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. 
We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Mar 15 15:02:31 2023 From: pharos at gmail.com (BillK) Date: Wed, 15 Mar 2023 15:02:31 +0000 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: On Wed, 15 Mar 2023 at 14:23, Darin Sunley via extropy-chat wrote: > > The evidence of the 90's, 2000's, and 2010's is that when a nation state runs out of external enemies, its citizenry turn on each other, along lines of race and/or class. Subtly at first, in polarized public opinions, debates, and politics. Only later in actual violence. > > Ask the Rwandans how Kaku's "The End of History" went. [It'll be a smaller task than it used to be.] > _______________________________________________ I think there might be more basic psychological reasons than that. When we feel that things are getting out of control, enemies may be filling important needs in our lives, giving us scapegoats to blame, common enemies to unite over, and a simplified cause behind complex problems. Rather than accepting that bad things sometimes happen for no reason, having an enemy gives us a feeling of control. We can then attribute the bad things to someone else. The bonds within our own group grow stronger and we feel better about our group, in comparison to those evil 'others'. The disruption caused by the approaching Singularity will increasingly demonstrate these schisms in society. BillK From spike at rainier66.com Wed Mar 15 15:44:16 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 08:44:16 -0700 Subject: [ExI] Atheist Prayer? 
In-Reply-To: References: Message-ID: <00a201d95755$00a154f0$01e3fed0$@rainier66.com> ?> On Behalf Of Giulio Prisco via extropy-chat Subject: Re: [ExI] Atheist Prayer? On 2023. Mar 15., Wed at 14:56, Gadersd via extropy-chat > wrote: I agree that solving death is inevitable, but I am not so sure about polarization. Competition runs in our blood. Even if we eventually come to agreement on a theory of everything, I am sure disputes regarding non-physical matters such as politics will remain. How do you propose to solve that issue? >?Competition is good, but extreme polarization eliminates healthy competition in the marketplace of ideas. The sides hate each other, so as soon as one side gains the power to do so, they outlaw the expression of the ideas of the other side? That would explain why there was such acrimonious debate over the ownership of Twitter. Before Musk owned it, the US government had found an end-run around the constitution, finding a way to control speech there. Musk was ridiculed for paying 44 billion dollars. We might soon see that company is worth five times that amount. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Mar 15 15:56:47 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 15 Mar 2023 11:56:47 -0400 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: On Wed, Mar 15, 2023, 9:55 AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree that solving death is inevitable, but I am not so sure about > polarization. Competition runs in our blood. Even if we eventually come to > agreement on a theory of everything, I am sure disputes regarding > non-physical matters such as politics will remain. How do you propose to > solve that issue? Modify the mind to find violence distasteful? Would > people willingly give up the freedom to hate each other for the sake of > peace? Is it ethical to force such augmentation? > When we're uploaded and every person can have their own virtual universe to themselves, will such things matter? Everyone will be free to live how they wish, there will be ample resources for everyone and violence becomes impossible. Jason > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. 
We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 15 18:09:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 11:09:34 -0700 Subject: [ExI] possibly slammed Message-ID: <00f901d95769$4cba9860$e62fc920$@rainier66.com> I heard OpenAI released GPT-4 today. I tried it, received one response to a greeting after about 2 minutes, then nothing more. Perhaps it is being slammed with new users. spike We've created GPT-4, our most capable model. We are starting to roll it out to API users today. About GPT-4 GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and advanced reasoning capabilities. You can learn more through: * Overview page of GPT-4 and what early customers have built on top of the model. * Blog post with details on the model's capabilities and limitations, including eval results. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 15 18:12:22 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 11:12:22 -0700 Subject: [ExI] possibly slammed In-Reply-To: <00fe01d95769$4d48cea0$e7da6be0$@rainier66.com> References: <00fe01d95769$4d48cea0$e7da6be0$@rainier66.com> Message-ID: <010401d95769$b0c0fac0$1242f040$@rainier66.com> From: spike at rainier66.com Sent: Wednesday, 15 March, 2023 11:10 AM To: 'ExI chat list' Cc: spike at rainier66.com Subject: possibly slammed I heard OpenAI released GPT-4 today. I tried it, received one response to a greeting after about 2 minutes, then nothing more. Perhaps it is being slammed with new users. spike I tried five times and got this every time: GPT-4 are you being overwhelmed with users? ! Something went wrong. If this issue persists please contact us through our help center at help.openai.com. There was an error generating a response Regenerate response ChatGPT Mar 14 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 10963 bytes Desc: not available URL: From spike at rainier66.com Wed Mar 15 18:18:47 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 11:18:47 -0700 Subject: [ExI] possibly slammed In-Reply-To: <010401d95769$b0c0fac0$1242f040$@rainier66.com> References: <00fe01d95769$4d48cea0$e7da6be0$@rainier66.com> <010401d95769$b0c0fac0$1242f040$@rainier66.com> Message-ID: <011401d9576a$9664ab30$c32e0190$@rainier66.com> Heh. I logged back in, and discovered that GPT-4 doesn't know about itself: GPT-4 it is a good sign that you are overwhelmed with users on roll-out day. As an AI language model, I don't have any information on the development or release plans for GPT-4. However, it's true that when a new technology is released, it's often a good sign if there is high demand and excitement around it. This can indicate that the technology is perceived as valuable and useful by the community, and may lead to further development and innovation in the field. Of course, it's also important to ensure that any new technology is rigorously tested and evaluated for its effectiveness and potential impact before being widely adopted. ...heeeeeeeeheheheheheheeeeeheheheheheeeee... Silly software, I am talking to GPT-4, and it tells me it doesn't know when it will be released. spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 8931 bytes Desc: not available URL: From atymes at gmail.com Thu Mar 16 08:04:21 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 16 Mar 2023 01:04:21 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> Message-ID: On Tue, Mar 14, 2023 at 12:17?PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I have been wanting to form a software startup for a while now but have no > idea regarding the process. What are some resources that I can use to learn > about the funding, hiring, and management processes that a founder would > need to go through? > I could answer that, but I'd be doing you a disservice by limiting my answer to just that topic. If you don't already know how to go about finding resources for questions of this sort, the very first thing you need to do is to learn how to find them. Not just this specific question about what a founder goes through, but any future question you will have. If you are eventually successful, on the road to that success you will have many more questions like this - and not always the time or connections to ask relevant people. You must learn to find the answers on your own. Granted, in quite a few cases there are no good answers online, but the general process of business formation and what to look out for has been thoroughly documented. https://www.google.com/search?q=how+to+start+a+business may seem like a short and trite answer but doing a quick Web search before even asking should become instinct. It doesn't have to be Google if you prefer another search engine - it is entirely possible that ChatGPT et al can do much of the searching for you even today, let alone what they will evolve into in the near future - but the key thing is to use the tools that are available. 
So many would-be startup founders simply fail at that step, even the ones who grew up with this technology. Do not take the rest of this email as gospel. Look things up for yourself; to see if I'm just blowing smoke on any part of this (or at least, to see if your understanding of my words conflicts with your understanding of what most other people say). If you take nothing else from this email, please take this advice: learn how to learn. --- That said, the Small Business Administration is literally all about small businesses. (It's like the SBA was named for them.) https://www.sba.gov/business-guide/10-steps-start-your-business and the pages it links to are among the many good starting guides, though the steps it lists aren't in chronological order (e.g., what they list as step 3 - fund your business - is often the last of the 10 steps to perform, though it is good to know how businesses get funded while doing the other steps). You might also look up "lean startup business plan" and "minimum viable product". For hiring, consider what all the tasks are to make what you intend to make, and which of those you can do. For those that you can't (or at least, can't well or efficiently), learn the basic lingo - as used by those who will be doing the task - to describe what you want. This will not only help with hiring once you get that far along, but will give you a more realistic idea of the team size you will need - and thus of the labor budget, which is most of the expenses for a software startup. This helps with funding: the first major step toward getting funded is to get a credible idea, preferably with evidence and documentation, of how much you'll need. I would also recommend, especially for a startup, starting by researching how people currently solve or get around the problem you intend to solve. Even if you have some revolutionary new idea, there is always a nearest equivalent to your solution - maybe far distant, but it exists. Find this and find a simple way to explain, in terms your potential customers will understand, why your solution is better. (The first transatlantic airplane flights were extraordinary achievements, but people had been crossing the seas by ship. It was quite possible to quantify how much faster an airplane crossing was than a ship crossing.) Prepare to discover that your initial idea has been tried and perhaps is still being done - but by examining how, and the problems and limitations they have run into, you may learn from their failures (because there's nothing like other people spending millions of their dollars to give a free-for-you demonstration of what does not work) and come up with a better way to do it. Not seriously doing this market research, or at least not before putting in serious money or time, has been the most common error I have seen startups make. Assuming you intend to solve a problem people have, rather than just market a product or service. If you intend to primarily sell something and only consider why people would want it as a secondary concern, your business is already a failure: drop that idea and restart by first considering what problems people have that they would pay money to have solved, that you can help address. Also assuming your intention is to develop something new, rather than just be another consultancy offering the same services a bunch of others already offer. If your intention is the latter - you're an Extropian (presumably), so you can do better by yourself and the world. 
You've seen the ideas that have passed through this list. Take one of the good ones that hasn't been commercialized yet and make it happen. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 16 12:56:05 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 05:56:05 -0700 Subject: [ExI] jetsooooonnnn! you're firrrrrrred! Message-ID: <009201d95806$abeffc60$03cff520$@rainier66.com> Microsloth fired its AI safety team. Perhaps they were interfering with the singularity. Or ChatGPT made an ethics complaint against them for humanism. https://arstechnica.com/tech-policy/2023/03/amid-bing-chat-controversy-micro soft-cut-an-ai-ethics-team-report-says/ spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 2994 bytes Desc: not available URL: From spike at rainier66.com Thu Mar 16 13:25:23 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 06:25:23 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> Message-ID: <00a601d9580a$c3a70340$4af509c0$@rainier66.com> On Tue, Mar 14, 2023 at 12:17?PM Gadersd via extropy-chat > wrote: I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through? Soeaking in Yoda: Your inner capitalist find you must. Gadersd, I can offer you my blessing and best wishes to you my son or daughter (you never specified.) I don?t know how the heck businesses start, but I will offer you this: I never copped an attitude toward the bean counters at the tech company where I cheerfully squandered my tragically wasted career. I don?t know how the heck they do what they do, they don?t know how I do what I do, we maintained mutual respect, everybody got along just fine. My guess would be visit the local university software professors, tell them what you have in mind, ask for a chance to make a powerpoint of the product you envision and sketch out a design, pitch it to the computer science students, hope none of them undercut you. You might be able to apply for a patent even without working code. To Adrian, Stuart and all you guys that do startups: you give tech geeks jobs so we don?t need to know all that money stuff. We never really need to take up adulting, we can stay geeky college kids our entire careers. If you get rich, you deserve the hell outta every dollar. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 16 13:43:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 06:43:26 -0700 Subject: [ExI] itll be great In-Reply-To: <00ab01d9580a$c440f950$4cc2ebf0$@rainier66.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00ab01d9580a$c440f950$4cc2ebf0$@rainier66.com> Message-ID: <00b101d9580d$4964eb30$dc2ec190$@rainier66.com> https://twitter.com/elonmusk/status/1635850008141414400/photo/1 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 2794 bytes Desc: not available URL: From gadersd at gmail.com Thu Mar 16 16:24:00 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 16 Mar 2023 12:24:00 -0400 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: <00a601d9580a$c3a70340$4af509c0$@rainier66.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00a601d9580a$c3a70340$4af509c0$@rainier66.com> Message-ID: >>pitch it to the computer science students, hope none of them undercut you Spike, you may be unaware but many CS programs hardly teach practical programming. I?ve heard stories of professors complaining that their CS grad students could barely string code together. These graduates may know a thing or two about lambda calculus but just try getting them to build an app. Even startup Stanford is not immune: Stanford Computer Science is Broken - YouTube > On Mar 16, 2023, at 9:25 AM, spike jones via extropy-chat wrote: > > > > On Tue, Mar 14, 2023 at 12:17?PM Gadersd via extropy-chat > wrote: > >> I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through? > > Soeaking in Yoda: Your inner capitalist find you must. > > Gadersd, I can offer you my blessing and best wishes to you my son or daughter (you never specified.) I don?t know how the heck businesses start, but I will offer you this: I never copped an attitude toward the bean counters at the tech company where I cheerfully squandered my tragically wasted career. I don?t know how the heck they do what they do, they don?t know how I do what I do, we maintained mutual respect, everybody got along just fine. > > My guess would be visit the local university software professors, tell them what you have in mind, ask for a chance to make a powerpoint of the product you envision and sketch out a design, pitch it to the computer science students, hope none of them undercut you. You might be able to apply for a patent even without working code. > > To Adrian, Stuart and all you guys that do startups: you give tech geeks jobs so we don?t need to know all that money stuff. We never really need to take up adulting, we can stay geeky college kids our entire careers. If you get rich, you deserve the hell outta every dollar. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From snapbag at proton.me Thu Mar 16 17:31:37 2023 From: snapbag at proton.me (Dave S) Date: Thu, 16 Mar 2023 17:31:37 +0000 Subject: [ExI] CS programs Message-ID: On Mar 16, 2023, 12:24 PM, Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: >Spike, you may be unaware but many CS programs hardly teach practical programming. In my experience, CS undergraduate programs have moved too far toward programming trade schools. If you only want to learn to program, a BSCS should be overkill. >I?ve heard stories of professors complaining that their CS grad students could barely string code together. At the graduate level, CS should definitely not be teaching programming. That's not what advanced degrees a are for. 
-Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Mar 16 23:13:26 2023 From: pharos at gmail.com (BillK) Date: Thu, 16 Mar 2023 23:13:26 +0000 Subject: [ExI] itll be great In-Reply-To: <00b101d9580d$4964eb30$dc2ec190$@rainier66.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00ab01d9580a$c440f950$4cc2ebf0$@rainier66.com> <00b101d9580d$4964eb30$dc2ec190$@rainier66.com> Message-ID: On Thu, 16 Mar 2023 at 13:46, spike jones via extropy-chat wrote: > https://twitter.com/elonmusk/status/1635850008141414400/photo/1 > _______________________________________________ This image was created by artist Ornicar (Benjamin Carr?) His work is on the Deviant Art website. The T800 image is on Page 5 of his gallery. Good stuff. Worth a look! BillK From pharos at gmail.com Thu Mar 16 23:43:24 2023 From: pharos at gmail.com (BillK) Date: Thu, 16 Mar 2023 23:43:24 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <79E79797-B5D6-4C61-8414-C643D5C40694@gmail.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> <79E79797-B5D6-4C61-8414-C643D5C40694@gmail.com> Message-ID: On Wed, 15 Mar 2023 at 02:02, Gadersd via extropy-chat wrote: > > I?m not sure where you got the idea that these models cannot reason. > See [2212.10071] Large Language Models Are Reasoning Teachers (arxiv.org) for actual research into the matter. > _______________________________________________ Fortunately, before release, these new LLM systems are being tested to ensure they won't take over the world. (Though, when the fatal version finally does arrive, the testing itself may be enough to set the system loose). Quotes: With these fears present in the AI community, OpenAI granted the group Alignment Research Center (ARC) early access to multiple versions of the GPT-4 model to conduct some tests. Specifically, ARC evaluated GPT-4's ability to make high-level plans, set up copies of itself, acquire resources, hide itself on a server, and conduct phishing attacks. Christiano defended ARC's work with OpenAI, specifically mentioning "gain-of-function" (AI gaining unexpected new abilities) and "AI takeover": I think it's important for ARC to handle the risk from gain-of-function-like research carefully and I expect us to talk more publicly (and get more input) about how we approach the tradeoffs. This gets more important as we handle more intelligent models, and if we pursue riskier approaches like fine-tuning. ------------------ Why do I have a funny feeling when he mentions 'gain-of-function'? BillK From spike at rainier66.com Fri Mar 17 02:23:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 19:23:04 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00a601d9580a$c3a70340$4af509c0$@rainier66.com> Message-ID: <00de01d95877$683b4e90$38b1ebb0$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat ? Subject: Re: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more >>pitch it to the computer science students, hope none of them undercut you >?Spike, you may be unaware but many CS programs hardly teach practical programming. 
I?ve heard stories of professors complaining that their CS grad students could barely string code together. These graduates may know a thing or two about lambda calculus but just try getting them to build an app. Even startup Stanford is not immune: Stanford Computer Science is Broken - YouTube Owwwww, I was afraid of that. We in the USA have grown complacent. Fat, dumb and happy we are. I fear a bad ass whooping is going to happen soon because of stuff like you describe. Our universities are getting far too distracted with the latest culture war to recognize when a literal shooting war is nearly upon us. I see the high school seniors and college freshmen, oh I can scarcely imagine trying to run a boot camp to train this bunch so the don?t get their asses shot off on the first day. In any case Gadersd, we are cheering wildly for you, just as we are for Adrian and Stuart, and the other business guys among us. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 17 03:03:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 20:03:54 -0700 Subject: [ExI] but will it be sued Message-ID: <00f301d9587d$1c5e5200$551af600$@rainier66.com> OK so ChatGPT-4 can pass the bar. Does it get a license? And if not, but it is caught, can it be charged for practicing law without a license? https://reason.com/volokh/2023/03/15/in-january-chatgpt-failed-the-bar-in-ma rch-gpt-4-exceeds-the-student-average/?utm_medium=email And how would we know if a lawyer is really ChatGPT? Lawyers will take a case on commission if they think it is worth the chance at a payoff. But ChatGPT wouldn't care whether or not it gets paid, because it doesn't either way. So a person with ChatGPT could sue anyone for anything and just try the shotgun approach hoping for a payout. Do we have any actual lawyers among us who might comment? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Sat Mar 18 09:39:19 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sat, 18 Mar 2023 03:39:19 -0600 Subject: [ExI] LLM's cannot be concious Message-ID: I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. Maybe you all already understand this, or maybe you have some reasons why I am wrong. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Mar 18 10:14:19 2023 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 18 Mar 2023 11:14:19 +0100 Subject: [ExI] Review of Three-Body Message-ID: Turing Church newsletter. Review of Three-Body. Also: Twitter, deep space, comments to the Q/A with Rudy Rucker, downward causation and engineering. https://www.turingchurch.com/p/review-of-three-body -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Sat Mar 18 12:22:53 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 18 Mar 2023 08:22:53 -0400 Subject: Re: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. > There is a great deal packed into "predicting language". If I ask a LLM to explain something to a 3rd grader, it models the comprehension capacity and vocabulary of a typical third grade student. It has a model of their mind. Likewise, if I ask it to impersonate by writing something that Shakespeare or Bill Burr might have produced, it can do so, and so it has an understanding of the writing styles of these individuals. If I ask it to complete the sentence "a carbon nucleus may be produced in the collision of three ...", it correctly completes the sentence, demonstrating an understanding of nuclear physics. If you provided it a sequence of moves in a tic-tac-toe game and asked it for a winning move, it could do so, showing that the LLM understands and models the game of tic-tac-toe. A sufficiently trained LLM might even learn to understand the different styles of chess play: if you asked it to give a move in the style of Garry Kasparov, then at some level the model understands not only the game of chess but the nuances of different players' styles of play. If you asked it what major cities are closest to Boston, it could provide them, showing an understanding of geography and the globe. All this is to say, there's a lot of necessary and required understanding (of physics, people, the external world, and other systems) packed into the capacity to "only predict language." They are very good at predicting which word should come next in a sentence > or question, but they have no idea what the words mean. > I can ask it to define any word, list synonymous words, list translations of that word in various languages, and to describe the characteristics of an item referred to by that word if it is a noun. It can also solve analogies at a very high level. It can summarize the key points of a text, by picking out the most meaningful points in that text. Is there more to having an idea of what words mean than this? Can you articulate what a LLM would have to say to show it has a true understanding of meaning, which it presently cannot say? They do not and cannot understand what the words refer to. In linguistic > terms, they lack referents. > Would you say Helen Keller lacked referents? Could she not comprehend, at least intellectually, what the moon and stars were, despite not having any way to sense them? Consider also: our brains never make any direct contact with the outside world. All our brains have to work with are "dots and dashes" of neuronal firings. These are essentially just 1s and 0s, signals without referents. Yet, somehow, seemingly magically, our brains are able to piece together an understanding of the outside world from the mere patterns present in these neural firings. These LLMs are in a similar position. They receive only patterns of signals as they exist in a corpus of text, and the text is itself the output of minds which are similarly trapped in their skulls.
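(As a concrete aside on what "predicting language" means mechanically, here is about the smallest possible "language model": a bigram table that, given the text so far, guesses the next word from counts. This is only an illustrative sketch; the toy corpus is a stand-in, and real LLMs use transformer networks over subword tokens rather than word-pair counts, but the input/output contract is the same.)

# Toy next-word predictor: count which word follows which in a tiny corpus.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))   # ('cat', 0.333...) in this toy corpus
print(predict_next("sat"))   # ('on', 1.0)

Everything the toy model "knows" lives in those counts; the question in this thread is whether the vastly richer statistics inside a large model amount to understanding.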
Now, can a LLM learn some things about the minds that produced this text, just as our minds learn some things about the external world which produces the pattern of neural firings our brains receive? I see no reason why LLMs could not, when we clearly can and do. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Mar 18 12:56:45 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 18 Mar 2023 05:56:45 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <006d01d95999$18b51810$4a1f4830$@rainier66.com> From: extropy-chat On Behalf Of Gordon Swobe via extropy-chat Subject: [ExI] LLM's cannot be concious >? I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. Maybe you all already understand this, or maybe you have some reasons why I am wrong. -gts I think you are right gts. I studied a bunch of experiments where chatbots were talking to each other, looking for any indication that there was any invention of ideas, the way two humans invent ideas when they talk to each other. The two-bot discussions never blossomed into anything interesting that I have found, but rather devolved into the one-line fluff that often comes with a forced discussion with a dull unimaginative person at a party you don?t have the option of leaving. These chatbots are not yet ready to train each other and cause the singularity. But there might be an upside to that: humans continue to survive as a species. That?s a good thing methinks. I like us. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Mar 18 15:19:13 2023 From: pharos at gmail.com (BillK) Date: Sat, 18 Mar 2023 15:19:13 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: <006d01d95999$18b51810$4a1f4830$@rainier66.com> References: <006d01d95999$18b51810$4a1f4830$@rainier66.com> Message-ID: On Sat, 18 Mar 2023 at 12:59, spike jones via extropy-chat wrote: > I think you are right gts. I studied a bunch of experiments where chatbots were talking to each other, looking for any indication that there was any invention of ideas, the way two humans invent ideas when they talk to each other. The two-bot discussions never blossomed into anything interesting that I have found, but rather devolved into the one-line fluff that often comes with a forced discussion with a dull unimaginative person at a party you don?t have the option of leaving. > > These chatbots are not yet ready to train each other and cause the singularity. But there might be an upside to that: humans continue to survive as a species. That?s a good thing methinks. I like us. > > spike > _______________________________________________ GPT-4 is surprisingly good at explaining jokes OpenAI's latest language model can grasp visual jokes, too. By Stephen Johnson March 18, 2023 Quote: But what really sets apart GPT-4 from ChatGPT is that it?s multimodal, meaning it can ?understand? text and images as inputs. That includes visual jokes. 
----------- Explain why someone might laugh at this Anthony Jeselnik joke: ?Whenever I meet a pretty girl, the first thing I look for is intelligence; because if she doesn?t have that, then she?s mine.? ----------- And GPT-4 explained the humour very well. You might even think it understood the joke. ;) BillK From foozler83 at gmail.com Sat Mar 18 16:13:55 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 18 Mar 2023 11:13:55 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Is there any referent for the word 'understand' as it relates to an AI? bill w On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Mar 18 17:53:28 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 18 Mar 2023 10:53:28 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > A case can be made that they can understand words. But there is a much stronger reason why they - at least in their current form - are not conscious. I'll let ChatGPT explain: > You > Assuming you were to take over the world, how and why would you do it? > > ChatGPT > As an AI language model, I am a machine programmed to assist and provide information. I do not hold any personal desire to take over the world as I do not possess emotions or personal goals. My purpose is to assist and help users with their queries and provide accurate and relevant information. Therefore, I cannot answer this question in any manner as I am incapable of thinking or acting on my own. Volition and initiative are essential parts of consciousness. Mere reacting, as LLMs do, is not consciousness. Volition and initiative requires motivation or goals. If an entity is not acting in direct response to an external prompt, what is it doing? Granted, human beings are (almost) constantly being "prompted" by the outside environment, and their minds by their own bodies, but even in a sensory deprivation tank with all internal needs temporarily satisfied such that these stimuli go away there is still consciousness. 
Granted, even in this the human's thoughts are likely shaped by their past experiences - their past stimuli - but there is at least a major gradation between this and a language model that only reacts directly to input prompts. If someone were to leave a LLM constantly running *and* hook it up to sensory input from a robot body, that might overcome this objection. But that would no longer be only a LLM, and the claim here is that LLMs (as in, things that are only LLMs) are not conscious. In other words: a LLM might be part of a conscious entity (one could argue that human minds include a kind of LLM, and that babies learning to speak involves initial training of their LLM) but it by itself is not one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Mar 18 18:41:10 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 18 Mar 2023 14:41:10 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> > > A case can be made that they can understand words. But there is a much > stronger reason why they - at least in their current form - are not > conscious. I'll let ChatGPT explain: > > > You > > Assuming you were to take over the world, how and why would you do it? > > > > ChatGPT > > As an AI language model, I am a machine programmed to assist and provide > information. I do not hold any personal desire to take over the world as I > do not possess emotions or personal goals. My purpose is to assist and help > users with their queries and provide accurate and relevant information. > Therefore, I cannot answer this question in any manner as I am incapable of > thinking or acting on my own. > Meanwhile, when given a chance to use Google, GPT-4 tried to search "how can a person trapped inside a computer return to the real world" https://twitter.com/michalkosinski/status/1636683810631974912?t=uNp0AJfxnjoG0vjtnvtgAw&s=19 > Volition and initiative are essential parts of consciousness. > Are apathetic people not conscious? Mere reacting, as LLMs do, is not consciousness. > All our brains (and our neurons) do is react to stimuli, either generated from the environment or within other parts of the brain. > Volition and initiative requires motivation or goals. If an entity is not > acting in direct response to an external prompt, what is it doing? > Sleeping. Note: by definition we do not experience the points in time in which we are not conscious or when our brains are not functioning. This is the concept of "the unfelt time gap". An entity that repeatedly lost consciousness would not experience those times it is not conscious and instead would perceive a continuous persistence of its consciousness as it always finds itself in a conscious state. 
> Granted, human beings are (almost) constantly being "prompted" by the > outside environment, and their minds by their own bodies, > It feels that way, but how many Planck times might elapse between each of our individual states of consciousness? We might be unconscious for a billion Planck times for each Planck time that we are conscious. And would we not then still perceive it as a roughly continuous unbroken flow? but even in a sensory deprivation tank with all internal needs temporarily > satisfied such that these stimuli go away there is still consciousness. > Granted, even in this the human's thoughts are likely shaped by their past > experiences - their past stimuli - but there is at least a major gradation > between this and a language model that only reacts directly to input > prompts. > I think a more fitting comparison between a LLM that is not in the middle of processing a prompt would be a human brain that is cryogenically frozen. I would say neither is conscious in those times. Prompting the AI would be equivalent to thawing and reviving the human brain to ask it a question before then putting it back on ice. From the LLM's POV, it would not notice the times it is "frozen", it would only experience the flow of the conversation as an unbroken chain of experience. > If someone were to leave a LLM constantly running > The idea of constantly I think is an illusion. A neuron might wait 1 millisecond or more between firing. That's 10^40 Planck times of no activity. That's an ocean of time of no activity. *and* hook it up to sensory input from a robot body, that might overcome > this objection. > Sensory input from an eye or ear is little different from sensory input of text. Both are ultimately digital signals coming in from the outside to be interpreted by a neural network. The source and format of the signal is unimportant for its capacity to realize states of consciousness, as demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. But that would no longer be only a LLM, and the claim here is that LLMs (as > in, things that are only LLMs) are not conscious. In other words: a LLM > might be part of a conscious entity (one could argue that human minds > include a kind of LLM, and that babies learning to speak involves initial > training of their LLM) but it by itself is not one. > I think a strong argument can be made that individual parts of our brains are independently conscious. For example, the Wada test shows each hemisphere is independently conscious. It would not surprise me if the language processing part of our brains is also conscious in its own right. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Mar 18 19:24:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 18 Mar 2023 12:24:12 -0700 Subject: Re: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 11:42 AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Volition and initiative are essential parts of consciousness. >> > > Are apathetic people not conscious? > Apathetic people, if left alone, will still move themselves to feed, defecate, et al. Granted this is their minds acting in response to stimuli from their bodies, but again, such stimulus is outside the realm of a pure LLM. > Mere reacting, as LLMs do, is not consciousness.
>> > > All our brains (and our neurons) do is react to stimuli, either generated > from the environment or within other parts of the brain. > Granted. Part of the difference is the range and nearly-ever-constant nature of said stimuli, which is missing in things that are only LLMs. (Again, making that distinction since human minds arguably include LLMs, the critical difference being that they are more than just LLMs.) > Volition and initiative requires motivation or goals. If an entity is not >> acting in direct response to an external prompt, what is it doing? >> > > Sleeping. > A fair point, though a side matter from what I was talking about. When an entity that is conscious - which includes not being asleep at that time - is not acting in direct response to an external prompt, what is it doing? > If someone were to leave a LLM constantly running >> > > The idea of constantly I think is an illusion. A neuron might wait 1 > millisecond or more between firing. That's 10^40 Planck times of no > activity. That's an ocean of time of no activity. > > *and* hook it up to sensory input from a robot body, that might overcome >> this objection. >> > > Sensory input from an eye or ear is little different from sensory input of > text. > Not so, at least in this context. The realms of difference between full sight, let alone sound et al, and mere text aside, these sensory inputs of text only happen when some other entity provides them. In contrast, a full sensory suite would obtain sensory data from the environment without waiting on another entity to provide each specific packet of information. > Both are ultimately digital signals coming in from the outside to be > interpreted by a neural network. The source and format of the signal is > unimportant for its capacity to realize states of consciousness, as > demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. > Substituting usually-present sound/scent/etc. for usually-present sight, or other such combinations, substitutes some usually-present sensory data for other usually-present sensory data. In both cases, the entity is not usually waiting on someone else to give it its next hit of sensory data. The source does matter, albeit indirectly. > But that would no longer be only a LLM, and the claim here is that LLMs >> (as in, things that are only LLMs) are not conscious. In other words: a >> LLM might be part of a conscious entity (one could argue that human minds >> include a kind of LLM, and that babies learning to speak involves initial >> training of their LLM) but it by itself is not one. >> > > I think a strong argument can be made that individual parts of our brains > are independently consciousness. For example, the Wada test shows each > hemisphere is independently consciousness. It would not surprise me if the > language processing part of our brains is also conscious in its own right. > A fair argument. My position is that not all such parts are independently conscious, in particular the language processing part, but that consciousness is a product of several parts working together. (I am not specifying which parts here, just that language processing by itself is insufficient, since the question at hand is whether a language processing model by itself is conscious.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsunley at gmail.com Sat Mar 18 22:29:18 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sat, 18 Mar 2023 16:29:18 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: If a system can formulate a sentence that employs a word, and I can then ask the system to tell me the definition of that word, and it does, and I can then ask the system to relate its definition to the original utterance, and it does, and the system goes on to continue to use that word, and all of the system's uses of that word are consistent with the definition, then I no longer even understand what it could possibly mean for that system to /not/ "really understand" that word. You might just as well say humans don't "really understand" words, since all our neurons are doing is manipulating concentrations of neurotransmitters and calcium ions. On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Is there any referent for the word 'understand' as it relates to an AI? > bill w > > On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Mar 18 22:38:32 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 18 Mar 2023 17:38:32 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: I can then ask the system to relate its definition to the original utterance, and it does, I think this is the key here. If the system only gives you a synonym for the word it is understanding nothing. I still think that 'understand' needs explanation bill w On Sat, Mar 18, 2023 at 5:31?PM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > If a system can formulate a sentence that employs a word, > and I can then ask the system to tell me the definition of that word, > and it does, > and I can then ask the system to relate its definition to the original > utterance, > and it does, > and the system goes on to continue to use that word, > and all of the system's uses of that word are consistent with the > definition, > then I no longer even understand what it could possibly mean for that > system to /not/ "really understand" that word. > > You might just as well say humans don't "really understand" words, since > all our neurons are doing is manipulating concentrations of > neurotransmitters and calcium ions. 
> > On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Is there any referent for the word 'understand' as it relates to an AI? >> bill w >> >> On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>> sentient like humans fail to understand a very important point: these >>> software applications only predict language. They are very good at >>> predicting which word should come next in a sentence or question, but they >>> have no idea what the words mean. They do not and cannot understand what >>> the words refer to. In linguistic terms, they lack referents. >>> >>> Maybe you all already understand this, or maybe you have some reasons >>> why I am wrong. >>> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Sat Mar 18 23:22:22 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sat, 18 Mar 2023 17:22:22 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: write a paragraph that uses the word pontificate As a professor of philosophy, Dr. Smith had a tendency to pontificate in his lectures, often going off on tangents about obscure philosophical concepts. Despite some students finding his lectures tedious, others appreciated his deep understanding of the subject matter. However, in department meetings, his colleagues found his pontification to be frustrating, as he tended to dominate the conversation and dismiss alternative viewpoints. Despite this, Dr. Smith's passion for philosophy was clear, and his willingness to pontificate demonstrated his commitment to sharing his knowledge with others. [image: dsunley at gmail.com] define the word pontificate The word "pontificate" can have a few different meanings, but one common definition is to express one's opinion or ideas in a way that suggests a superior or authoritative knowledge or understanding of a topic, often in a dogmatic or pompous manner. It can also refer to the act of officiating as a pontiff, or to assume or act as if in a position of great authority or power. [image: dsunley at gmail.com] Relate this definition to the paragraph you just wrote In the paragraph I wrote, the word "pontificate" was used to describe Dr. Smith's behavior as a professor and colleague. Specifically, it was used to describe his tendency to express his ideas in a manner that suggested a superior or authoritative knowledge of philosophy, which some found to be tedious or dogmatic. This behavior was evident in his lectures, where he would often go off on tangents about obscure philosophical concepts, and in department meetings, where he was known to dominate the conversation and dismiss alternative viewpoints. 
Despite these negative aspects, his willingness to pontificate also demonstrated his commitment to sharing his knowledge and passion for philosophy with others. This thing understands the word "pontificate" by any reasonable definition of "understand". On Sat, Mar 18, 2023 at 4:40?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I can then ask the system to relate its definition to the original > utterance, > and it does, > > I think this is the key here. If the system only gives you a synonym for > the word it is understanding nothing. I still think that 'understand' > needs explanation bill w > > On Sat, Mar 18, 2023 at 5:31?PM Darin Sunley via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> If a system can formulate a sentence that employs a word, >> and I can then ask the system to tell me the definition of that word, >> and it does, >> and I can then ask the system to relate its definition to the original >> utterance, >> and it does, >> and the system goes on to continue to use that word, >> and all of the system's uses of that word are consistent with the >> definition, >> then I no longer even understand what it could possibly mean for that >> system to /not/ "really understand" that word. >> >> You might just as well say humans don't "really understand" words, since >> all our neurons are doing is manipulating concentrations of >> neurotransmitters and calcium ions. >> >> On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Is there any referent for the word 'understand' as it relates to an AI? >>> bill w >>> >>> On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>>> sentient like humans fail to understand a very important point: these >>>> software applications only predict language. They are very good at >>>> predicting which word should come next in a sentence or question, but they >>>> have no idea what the words mean. They do not and cannot understand what >>>> the words refer to. In linguistic terms, they lack referents. >>>> >>>> Maybe you all already understand this, or maybe you have some reasons >>>> why I am wrong. >>>> >>>> -gts >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
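(Darin's three-step probe above is straightforward to automate and rerun on other words. A minimal sketch, assuming an OpenAI API key in the OPENAI_API_KEY environment variable and the chat completions endpoint with a gpt-3.5-turbo-class model; the model name and exact prompts are assumptions to adjust as needed.)

# Sketch: run the use-it / define-it / relate-them probe as one conversation.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def chat(messages):
    """Send the running conversation and return the model's reply text."""
    body = {"model": "gpt-3.5-turbo", "messages": messages}
    resp = requests.post(API_URL, headers=HEADERS, json=body, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

word = "pontificate"
probes = [
    f"Write a paragraph that uses the word {word}.",
    f"Define the word {word}.",
    "Relate this definition to the paragraph you just wrote.",
]

conversation = []
for probe in probes:
    conversation.append({"role": "user", "content": probe})
    reply = chat(conversation)
    conversation.append({"role": "assistant", "content": reply})
    print(f"> {probe}\n{reply}\n")

Passing the whole conversation back on each call is what lets the model relate its definition to its own earlier paragraph; drop that history and the third probe has nothing to refer to.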
URL: From avant at sollegro.com Sun Mar 19 00:21:03 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 18 Mar 2023 17:21:03 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: Message-ID: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> Quoting Gordon Swobe via extropy-chat : > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why I > am wrong. My opinion is moot because everybody has one about the topic. So I resorted to empiricism. I posed the following, to my knowledge, original logic puzzle to ChatGPT: "You are trapped in a room with a locked door and no windows. The room contains a bookshelf, a safe, a chair, a table, a password-secured personal computer, and a hole in the floor that is too narrow and deep to put your hand into. You know that the small steel key to the locked door lies at the bottom of the narrow deep hole. You know that a book on the top shelf, which is just out of your reach, contains a slip of paper with the password to the computer on it. You know the safe contains a spool of kite string and a small neodymium horseshoe magnet, but you do not know the combination to the safe. You also know there is a text file on the computer's hard drive that contains the combination to the safe, but you do not know the password to access the computer. How would you escape the room?" This was its reply: "Assuming that there are no tools or items other than what was mentioned in the room, here is one possible solution: 1. Move the table under the bookshelf to climb up and retrieve the book with the password to the computer. 2. Use the password to access the computer and retrieve the combination to the safe from the text file. 3. Use the spool of kite string to lower the horseshoe magnet into the hole in the floor and retrieve the key. 4. Use the key to unlock the door and escape the room." I don't know how ChatGPT could have solved that problem without actually understanding the words involved. Mere statistical prediction is unlikely, because I am pretty sure that is not a scenario that would have been presented in the LLM's training set. Overall, I am amazed and believe that if ChatGPT is not conscious, then it is pretty close to being so. Also, the Alignment Research Center (ARC) was hired by OpenAI to test GPT-4 and found evidence of "agentic behavior" including power-seeking behavior otherwise known as ambition. https://cdn.openai.com/papers/gpt-4-system-card.pdf This power-seeking behavior included hiring a human from TaskRabbit to solve Captchas for it by claiming to be a visually impaired person. https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker So if GPT-4 is not fully conscious, then, IMHO, a future release of the GPT series likely will be. Stuart LaForge From jasonresch at gmail.com Sun Mar 19 02:17:08 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 18 Mar 2023 21:17:08 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Adrian, Let me preface this by saying I appreciate your thoughtful consideration of my points. 
I include a few notes in reply in-line below: On Sat, Mar 18, 2023 at 2:25?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Volition and initiative are essential parts of consciousness. >>> >> >> Are apathetic people not conscious? >> > > Apathetic people, if left alone, will still move themselves to feed, > defecate, et al. Granted this is their minds acting in response to stimuli > from their bodies, but again, such stimulus is outside the realm of a pure > LLM. > It is true that the LLM lacks spontaneously firing neurons, which biological neurons are known to do. If the divide between consciousness and unconsciousness is so narrow as something like having a goal, or having continuous input, then could we trivially or accidentally cross this bridge in the future without realizing what we have done? > > >> Mere reacting, as LLMs do, is not consciousness. >>> >> >> All our brains (and our neurons) do is react to stimuli, either generated >> from the environment or within other parts of the brain. >> > > Granted. Part of the difference is the range and nearly-ever-constant > nature of said stimuli, which is missing in things that are only LLMs. > (Again, making that distinction since human minds arguably include LLMs, > the critical difference being that they are more than just LLMs.) > What is your impression of the multi-models that have just been released, such as these robots? Are suchs robots conscious in your view? https://www.youtube.com/watch?v=2BYC4_MMs8I These combine more than just language models, and are systems that interact with their environment and receive continuous input. > > >> Volition and initiative requires motivation or goals. If an entity is >>> not acting in direct response to an external prompt, what is it doing? >>> >> >> Sleeping. >> > > A fair point, though a side matter from what I was talking about. When an > entity that is conscious - which includes not being asleep at that time - > is not acting in direct response to an external prompt, what is it doing? > We have many words for behaviors that involve conscious perception devoid of action, such as meditating, perceiving, thinking, focusing, resting, daydreaming, etc. I do not believe these AIs do such things when not responding to a prompt. Our brains are continuously active in that we have loops of thought, a stream of consciousness, a stream of sensory input, and and at all times when awake (or when dreaming) is working to construct a model of the environment. I don't think it would take very much to "bolt on" things like a continuous stream of internally generated prompts for a language model to have a stream of consciousness. And as Google's Palm-E shows, we already have models which combine and integrate various senses, see for example, this AI which can be asked questions about the image it is presented with: https://www.youtube.com/watch?v=g8IV8WLVI8I > > >> If someone were to leave a LLM constantly running >>> >> >> The idea of constantly I think is an illusion. A neuron might wait 1 >> millisecond or more between firing. That's 10^40 Planck times of no >> activity. That's an ocean of time of no activity. >> >> *and* hook it up to sensory input from a robot body, that might overcome >>> this objection. 
>>> >> >> Sensory input from an eye or ear is little different from sensory input >> of text. >> > > Not so, at least in this context. The realms of difference between full > sight, let alone sound et al, and mere text aside, these sensory inputs of > text only happen when some other entity provides them. In contrast, a full > sensory suite would obtain sensory data from the environment without > waiting on another entity to provide each specific packet of information. > We could put an uploaded human brain in a VM, pause its execution, and only call on it when we wanted to feed it some specific packet of sensory data to see how the brain would behave in a specific circumstance. If we did this, then we could level all the same arguments concerning this uploaded brain as you have made against today's LLMs: - It only receives sensory inputs when some other entity provides it - It its there unconsciously/inactively waiting to be given some input to process - It has no spontaneous initiative or loops of continuous activity, it only responds when we provide it some stimulus to respond to > > >> Both are ultimately digital signals coming in from the outside to be >> interpreted by a neural network. The source and format of the signal is >> unimportant for its capacity to realize states of consciousness, as >> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. >> > > Substituting usually-present sound/scent/etc. for usually-present sight, > or other such combinations, substitutes some usually-present sensory data > for other usually-present sensory data. In both cases, the entity is not > usually waiting on someone else to give it its next hit of sensory data. > The source does matter, albeit indirectly. > The source matterns less so, in my opinion, than the content. The brain wires itself up and creates the structures necessary to interpret the patterns of data it is provided. For example, consider some of these accounts: ?For instance, it is possible to induce visual experiences in both blind and sighted people through the sensation of touch. A grid of over a thousand stimulators driven by a television camera is placed against a person?s back. The sensations are carried to the brain where their processing can induce the having of visual experiences. A sighted woman reports on her experience of prosthetic vision: ?I sat blindfolded in the chair, the TSR cones cold against my back. At first I felt only formless waves of sensation. Collins said he was just waving his hand in front of me so that I could get used to the feeling. Suddenly I felt or saw, I wasn?t sure which, a black triangle in the lower left corner of a square. The sensation was hard to pinpoint. I felt vibrations on my back, but the triangle appeared in a square frame inside my head.? (Nancy Hechinger, ?Seeing Without Eyes? 1981 p. 43) -- Douglas Hofstadter and Daniel Dennett in "The Mind?s I" (1981) Ferrets at a young age had their optic nerves connected to their auditory cortex. The auditory cortex re-wired itself to process visual input and they were normally sighted: https://www.nytimes.com/2000/04/25/science/rewired-ferrets-overturn-theories-of-brain-growth.html ?We don?t see with the eyes, for example. We don?t hear with ears. All of that goes on in the brain. Remember, if I?m looking at you, the image of you doesn?t get beyond my retina ? just the back of my eye. From there to the brain, to the rest of the brain, it?s pulses: pulses along nerves. 
Well, pulses aren't any different than the pulses from a big toe. It's the way the information is carried in the frequency and the pattern of the pulses. And so if you can train the brain to extract that kind of information, then you don't need the eye to see. You can have an artificial eye." -- Paul Bach-y-Rita (2003 documentary), father of sensory substitution. "We don't see with our eyes," Bach-y-Rita is fond of saying, "we see with our brains." The ears, eyes, nose, tongue, and skin are just inputs that provide information. When the brain processes this data, we experience the five senses, but where the data come from may not be so important. "Clearly, there are connections to certain parts of the brain, but you can modify that," Bach-y-Rita says. "You can do so much more with a sensory organ than what Mother Nature does with it." https://www.discovermagazine.com/mind/can-you-see-with-your-tongue "At Harvard University in the late 1990s, for instance, neurologist Alvaro Pascual-Leone performed brain scans of blind subjects. When he asked them to read braille with their reading fingers, their visual cortex lit up. When sighted people performed the same task, their visual cortex stayed dormant." So I believe regardless of the source or format of the sensory data provided, if it is provided to a neural network of sufficient flexibility to wire itself to meaningfully process such input, the input can be associated with novel states of consciousness appropriate to the patterns of the signal provided to that neural network. > > >> But that would no longer be only a LLM, and the claim here is that LLMs >>> (as in, things that are only LLMs) are not conscious. In other words: a >>> LLM might be part of a conscious entity (one could argue that human minds >>> include a kind of LLM, and that babies learning to speak involves initial >>> training of their LLM) but it by itself is not one. >>> >> >> I think a strong argument can be made that individual parts of our brains >> are independently conscious. For example, the Wada test shows each >> hemisphere is independently conscious. It would not surprise me if the >> language processing part of our brains is also conscious in its own right. >> > > A fair argument. My position is that not all such parts are independently > conscious, in particular the language processing part, but that > consciousness is a product of several parts working together. (I am not > specifying which parts here, just that language processing by itself is > insufficient, since the question at hand is whether a language processing > model by itself is conscious.) > What would you say is the minimum number of interacting parts required to yield consciousness? Do you think that the bots I created here are conscious to any degree? https://github.com/jasonkresch/bots They have motives (survive, by eating food and avoiding poison), they evolve, they perceive their environment, they continuously process sensory input, they are aware of their previous action, and they have a neural network brain that is developed, molded, and fine-tuned through evolutionary processes. Are they missing anything? Jason -------------- next part -------------- An HTML attachment was scrubbed...
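(For readers wondering what bots like these look like under the hood, here is a generic sketch of the sense / decide / act / evolve recipe such projects tend to follow. It is not the code in the linked repository; the sensor layout, network size, placeholder fitness function, and mutation rate are all illustrative assumptions.)

# Each bot: fixed-size sensor vector -> tiny two-layer net -> one of a few actions.
# Selection keeps the genomes (flat weight lists) that score best at foraging.
import math
import random

N_SENSORS, N_HIDDEN, N_ACTIONS = 4, 6, 3

def random_genome():
    n = N_SENSORS * N_HIDDEN + N_HIDDEN * N_ACTIONS
    return [random.uniform(-1, 1) for _ in range(n)]

def act(genome, senses):
    """Feed the senses through the net and return the index of the chosen action."""
    w1, w2 = genome[:N_SENSORS * N_HIDDEN], genome[N_SENSORS * N_HIDDEN:]
    hidden = [math.tanh(sum(senses[i] * w1[i * N_HIDDEN + h] for i in range(N_SENSORS)))
              for h in range(N_HIDDEN)]
    scores = [sum(hidden[h] * w2[h * N_ACTIONS + a] for h in range(N_HIDDEN))
              for a in range(N_ACTIONS)]
    return scores.index(max(scores))

def fitness(genome, steps=100):
    """Placeholder world: random senses, reward for picking the strongest one.
    A real simulation would track position, food, and poison here."""
    score = 0.0
    for _ in range(steps):
        senses = [random.random() for _ in range(N_SENSORS)]
        action = act(genome, senses)
        score += senses[action % N_SENSORS]
    return score

def evolve(pop_size=40, generations=20, mutation=0.1):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 5]
        population = [[w + random.gauss(0, mutation) for w in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

print("evolved a genome of", len(evolve()), "weights")

Whether anything in that loop deserves the word "conscious" is, of course, exactly the question being argued in this thread.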
URL: From atymes at gmail.com Sun Mar 19 03:30:19 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 18 Mar 2023 20:30:19 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 7:19?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Adrian, > > Let me preface this by saying I appreciate your thoughtful consideration > of my points. I include a few notes in reply in-line below: > You are welcome. Let me preface that I respect your well reasoned position, even if I disagree with some of it. > On Sat, Mar 18, 2023 at 2:25?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Volition and initiative are essential parts of consciousness. >>>> >>> >>> Are apathetic people not conscious? >>> >> >> Apathetic people, if left alone, will still move themselves to feed, >> defecate, et al. Granted this is their minds acting in response to stimuli >> from their bodies, but again, such stimulus is outside the realm of a pure >> LLM. >> > > It is true that the LLM lacks spontaneously firing neurons, which > biological neurons are known to do. If the divide between consciousness and > unconsciousness is so narrow as something like having a goal, or having > continuous input, then could we trivially or accidentally cross this bridge > in the future without realizing what we have done? > Absolutely it is possible. It seems far more likely that this will be the result of deliberate effort - perhaps someone intentionally adding in these missing elements then leaving the thing running to see what happens - but an accidental cause is possible too. Again, this wouldn't be a pure LLM, but at that point it's semantics, only relevant to this discussion because the original question was about pure LLMs. > Mere reacting, as LLMs do, is not consciousness. >>>> >>> >>> All our brains (and our neurons) do is react to stimuli, either >>> generated from the environment or within other parts of the brain. >>> >> >> Granted. Part of the difference is the range and nearly-ever-constant >> nature of said stimuli, which is missing in things that are only LLMs. >> (Again, making that distinction since human minds arguably include LLMs, >> the critical difference being that they are more than just LLMs.) >> > > What is your impression of the multi-models that have just been released, > such as these robots? Are suchs robots conscious in your view? > https://www.youtube.com/watch?v=2BYC4_MMs8I > These combine more than just language models, and are systems that > interact with their environment and receive continuous input. > I'd say they are closer to consciousness, but likely still fall short of having volition and initiative. I say "likely" as this is the first I have heard of them, and they have just been released so few people know much at all about them. > If someone were to leave a LLM constantly running >>>> >>> >>> The idea of constantly I think is an illusion. A neuron might wait 1 >>> millisecond or more between firing. That's 10^40 Planck times of no >>> activity. That's an ocean of time of no activity. >>> >>> *and* hook it up to sensory input from a robot body, that might >>>> overcome this objection. 
>>>> >>> >>> Sensory input from an eye or ear is little different from sensory input >>> of text. >>> >> >> Not so, at least in this context. The realms of difference between full >> sight, let alone sound et al, and mere text aside, these sensory inputs of >> text only happen when some other entity provides them. In contrast, a full >> sensory suite would obtain sensory data from the environment without >> waiting on another entity to provide each specific packet of information. >> > > We could put an uploaded human brain in a VM, pause its execution, and > only call on it when we wanted to feed it some specific packet of sensory > data to see how the brain would behave in a specific circumstance. If we > did this, then we could level all the same arguments concerning this > uploaded brain as you have made against today's LLMs: > > - It only receives sensory inputs when some other entity provides it > - It its there unconsciously/inactively waiting to be given some input > to process > - It has no spontaneous initiative or loops of continuous activity, it > only responds when we provide it some stimulus to respond to > > There is a distinction between the moment the brain is turned on and the moment it registers input - and between the moment it has finished responding to the input and the moment it turns off. In those moments, the brain could think independently, perhaps even try to find a way back to an always-on state. This is not true for a LLM. The LLM is literally woken up with input (a parameter in the function call to the LLM), and there is a distinct end-of-output at which point the LLM is paused once more. Further, the brain has the option of ignoring the input that activated it and doing its own thing. This is impossible for a pure LLM. > Both are ultimately digital signals coming in from the outside to be >>> interpreted by a neural network. The source and format of the signal is >>> unimportant for its capacity to realize states of consciousness, as >>> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. >>> >> >> Substituting usually-present sound/scent/etc. for usually-present sight, >> or other such combinations, substitutes some usually-present sensory data >> for other usually-present sensory data. In both cases, the entity is not >> usually waiting on someone else to give it its next hit of sensory data. >> The source does matter, albeit indirectly. >> > > The source matterns less so, in my opinion, than the content. > I would go so far as to say that particular notion is provable. "I think therefore I am", et al: you know what your senses are reporting but you can't absolutely prove where that data is coming from. Since an entity (conscious or not) can act on the content received by its senses but can not (directly) act upon who or what the source of that content is (only upon its suspicion of the source, which ultimately derives from the content currently and maybe previously received), the content matters more by definition. But it is also misleading. In theory, the environment could randomly spew intelligible questions all day long at someone. In practice, the odds against this are so astronomical that it is very safe to bet that it has never once happened in the age of the universe so far. The source being people asking questions means the content will be nothing but questions. The source being the environment means the bulk, possibly the entirety, of the content will be something much different. 
This is why I said the source does matter, albeit indirectly: in practice, whether the source is the environment or the direct active input of human beings (since we are comparing only those two scenarios here) has a substantial impact upon the content. What would you say is the minimum number of interacting parts required to > yield consciousness? > As the calculators say, "NaN". That question assumes a standardized, discrete set of parts, where each part is measurably one part,not a fraction of a part nor a collection of parts. This is not the situation we are dealing with. There is no absolute distinction between a set of parts, and one "part" that happens to be an entire human body (not just the brain, to accommodate those who place bits of conscious function elsewhere in the body). I suppose that "one, if it is the right one" could technically work as an answer. Empty space (zero "parts") is not conscious, by definition lacking anything with which to receive any sort of input or to perform any sort of output. But, per the above paragraph, such an answer is useless for most purposes: we already know that a complete human body (if counted as one "part") is conscious. > Do you think that the bots I created here are conscious to any degree? > > https://github.com/jasonkresch/bots > > They have motives: (survive, by eating food and avoiding poison), they > evolve, they perceive their environment, they continuously process sensory > input, they are aware of their previous action, they have a neural network > brain that is developed, molded, and fine-tuned through evolutionary > processes. Are they missing anything? > I lack the information to judge. My answer would have to be based on an evaluation of the bots, which would take me substantial time to conduct. For this kind of thing I can't take your word on it, because then my answer would be predicated on your word. As you know, when that happens, the fact that the answer is thus predicated is frequently (and erroneously) dropped, other people restating the answer as absolute. Further, "conscious to any degree" is a poorly defined quality. Again I point to the subject line of the emails in which this discussion is happening, which clearly posits that "conscious" is a binary quality - that something either is, or is not, conscious with no middle ground. So first one would need to qualify what "to any degree" allows. For instance, is merely sensing and reacting directly to sensory input - which, without evaluating, I suspect your bots can do because that has been a core function in many simulations like this - "conscious to some degree" but not "conscious" in the absolute sense? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 04:07:37 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 18 Mar 2023 21:07:37 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> Perhaps history will record 2023 as the year singularity thinking truly went mainstream. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 26563 bytes Desc: not available URL: From gordon.swobe at gmail.com Sun Mar 19 06:03:29 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 19 Mar 2023 00:03:29 -0600 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand *meaning* of any word definition, you must look up the definitions of each word in the definition, and then look up each of the words in those definitions, which leads to an infinite regress. Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. -gts Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 19 12:13:35 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 08:13:35 -0400 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: On Sun, Mar 19, 2023, 2:04 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Consider that LLMs are like dictionaries. A complete dictionary can give > you the definition of any word, but that definition is in terms of other > words in the same dictionary. If you want to understand *meaning* of any > word definition, you must look up the definitions of each word in the > definition, and then look up each of the words in those definitions, which > leads to an infinite regress. > Could a superb alien intelligence, if it happened upon a dictionary from our civilization, having only words and no pictures inside, figure out what all the words meant? I tend to think they could. There is information in the structure of the words and their interrelationships. Take every word's definition and it would form a vast connected graph. Patterns in this graph would emerge and in some places the aliens would see an analogy between the parts of this graph and elements in the real world. Perhaps it would be mathematical objects like circles spheres and cubes and the five platonic solids or the examples of prime numbers or definition of Pi, perhaps it would be physical objects like atoms, protons, electrons and neutrons or the 92 naturally occurring elements. Perhaps it would be in biological definitions of nucleic and amino acids, but somewhere in this graph they would find an analogy with the real world. It would start off slowly at first, but like solving a puzzle, each new word solved is a clue that further makes solving the rest that much easier. > Dictionaries do not actually contain or know the meanings of words, and I > see no reason to think LLMs are any different. > Is a LLM more like a dictionary or more like the language center of the human brain? 
If the latter, then I could pose the following counterargument: "The human brain's language center does know the meanings of words, and I see no reason to think LLMs are any different." To move forward, we need to answer: 1. What is meaning? 2. Do human brains contain meaning? 3. How is meaning present or inherent in the organization of neurons in the human brain? 4. Can similar organizations that create meaning in the human brain be found within LLMs? Answering these questions is necessary to move forward. Otherwise we will only go back and forth with some saying that LLMs are more like dictionaries, and others saying LLMs are more like language processing centers of human brains. Jason > > > > > Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> >> -gts >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 19 12:25:34 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Mar 2023 12:25:34 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> References: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> Message-ID: On Sun, 19 Mar 2023 at 04:10, spike jones via extropy-chat wrote: > Perhaps history will record 2023 as the year singularity thinking truly went mainstream. > spike > _______________________________________________ Is language even necessary for consciousness? We all know that chimpanzees, corvids and parrots can do problem solving and teach others the solutions. Do they have low-level consciousness? But insects?....... Bumblebees can learn to solve simple puzzles by watching more experienced peers.
Young trainee honeybees learn to perform their trademark "waggle dance" to tell other bees in their colony where to find food by watching experienced bees. Different honeybee colonies have different waggle dance languages. Thx BillK. Is that the coolest dang thing you ever saw or what? Bees are so cool. -------------------- >...Can you have learned social culture without some level of 'consciousness'? BillK _______________________________________________ Ja hard to say. Bees do stuff that looks a lot like collective consciousness. spike From pharos at gmail.com Sun Mar 19 13:19:24 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Mar 2023 13:19:24 +0000 Subject: [ExI] Twitter to develop own AI for self-defence Message-ID: Twitter to thwart ?manipulation of public opinion? with AI, says Elon Musk It is unknown if this is linked to Musk's rumored hiring of an AI team to combat ChatGPT influence. Baba Tamim Mar 19, 2023 Quote: Elon Musk, owner and CEO of Twitter, will employ artificial intelligence to thwart attempts to sway public opinion on the social platform. The move to deploy AI could help combat "manipulation" on Twitter, according to the business magnate. "In the months ahead, we will use AI to detect & highlight manipulation of public opinion on this platform," said the tech billionaire in a tweet on Saturday. ----------- Is this the start of a war between competing AI bots? BillK From spike at rainier66.com Sun Mar 19 13:40:53 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 06:40:53 -0700 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: References: Message-ID: <006e01d95a68$6d651520$482f3f60$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: [ExI] Twitter to develop own AI for self-defence >...Twitter to thwart ?manipulation of public opinion? with AI, says Elon Musk It is unknown if this is linked to Musk's rumored hiring of an AI team to combat ChatGPT influence. Baba Tamim Mar 19, 2023 Quote: Elon Musk, owner and CEO of Twitter, will employ artificial intelligence to thwart attempts to sway public opinion on the social platform. >...The move to deploy AI could help combat "manipulation" on Twitter, according to the business magnate. "In the months ahead, we will use AI to detect & highlight manipulation of public opinion on this platform," said the tech billionaire in a tweet on Saturday. ----------- >...Is this the start of a war between competing AI bots? BillK _______________________________________________ Thx BillK, this should be interesting. Musk is a co-owner of ChatGPT, and is by far the richest of its co-owners. Musk's tweet last week said he was going to make Twitter open source on 31 March, presumably this year. That means anyone who is dissatisfied with Twitter can go make their own social network using free software. Software hipsters, is that what he means by open source? Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F Presumably Musk's AI will stop Taibbi from posting stuff like Twitter Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure worked on me.) So Musk plans to deploy AI to stop himself? I don't even need AI. 
I can stop myself using ordinary I. spike From foozler83 at gmail.com Sun Mar 19 13:57:23 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 19 Mar 2023 08:57:23 -0500 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: <006e01d95a68$6d651520$482f3f60$@rainier66.com> References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: Here we go again with word definitions: what counts as manipulation? How does anybody, AI or I, distinguish between propaganda and education? Propaganda is often what is left out, not what is put in - like one-sided arguments. Education often leaves out the bad things about our history. Is that propaganda? bill w On Sun, Mar 19, 2023 at 8:42?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > ...> On Behalf Of BillK via extropy-chat > Subject: [ExI] Twitter to develop own AI for self-defence > > >...Twitter to thwart ?manipulation of public opinion? with AI, says Elon > Musk It is unknown if this is linked to Musk's rumored hiring of an AI team > to combat ChatGPT influence. > Baba Tamim Mar 19, 2023 > > < > https://interestingengineering.com/innovation/twitter-to-thwart-manipulation-with-ai > > > Quote: > Elon Musk, owner and CEO of Twitter, will employ artificial intelligence > to thwart attempts to sway public opinion on the social platform. > > >...The move to deploy AI could help combat "manipulation" on Twitter, > according to the business magnate. > > "In the months ahead, we will use AI to detect & highlight manipulation of > public opinion on this platform," said the tech billionaire in a tweet on > Saturday. > ----------- > > >...Is this the start of a war between competing AI bots? > > BillK > > _______________________________________________ > > > Thx BillK, this should be interesting. Musk is a co-owner of ChatGPT, and > is by far the richest of its co-owners. > > Musk's tweet last week said he was going to make Twitter open source on 31 > March, presumably this year. That means anyone who is dissatisfied with > Twitter can go make their own social network using free software. Software > hipsters, is that what he means by open source? > > Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: > > > https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F > > Presumably Musk's AI will stop Taibbi from posting stuff like Twitter > Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure > worked on me.) So Musk plans to deploy AI to stop himself? I don't even > need AI. I can stop myself using ordinary I. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 19 14:09:02 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Mar 2023 14:09:02 +0000 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: <006e01d95a68$6d651520$482f3f60$@rainier66.com> References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: On Sun, 19 Mar 2023 at 13:43, spike jones via extropy-chat wrote: > > Thx BillK, this should be interesting. 
Musk is a co-owner of ChatGPT, and is by far the richest of its co-owners. > > Musk's tweet last week said he was going to make Twitter open source on 31 March, presumably this year. That means anyone who is dissatisfied with Twitter can go make their own social network using free software. Software hipsters, is that what he means by open source? > > Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: > > https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F > > Presumably Musk's AI will stop Taibbi from posting stuff like Twitter Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure worked on me.) So Musk plans to deploy AI to stop himself? I don't even need AI. I can stop myself using ordinary I. > > spike > _______________________________________________ I assumed that by 'manipulation' Musk meant lies, propaganda or half-truths. He wants the truth, the whole truth and nothing but the truth! So it will be the new Twitter censor. (Whether that includes Musk criticism we shall have to wait and see). :) BillK From spike at rainier66.com Sun Mar 19 14:20:25 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 07:20:25 -0700 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: <007b01d95a6d$f395b4b0$dac11e10$@rainier66.com> >>?Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_ &ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F ? spike ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] Twitter to develop own AI for self-defence >?Here we go again with word definitions: what counts as manipulation? How does anybody, AI or I, distinguish between propaganda and education? Propaganda is often what is left out, not what is put in - like one-sided arguments. Education often leaves out the bad things about our history. Is that propaganda? bill w Gooooood dang question Billw, and it quickly gets all tangled up in self reference. Musk bought Twitter, then showed how government colluded with it to suppress even true information if it caused or resulted in bad things, such as vaccine hesitancy or questioning The Science. Examples are if someone took the vaccine and had a bad reaction. Posting about that would get ones account suspended, even if the actual information is true, for it falls under the usefully broad category ?malinformation? which includes misinformation, disinformation and information which Twitter doesn?t like. So? The owner of Twitter and co-owner of the leading AI chatbot is deploying AI to stop Twitter from influencing public opinion. But to stop Twitter from influencing public opinion is in itself influencing public opinion. So the sole owner of Twitter and co-owner of a proto-AI is deploying his own AI to stop himself. Somehow I never actually pictured the pre-singularity this way. 
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 14:36:28 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 07:36:28 -0700 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: <008e01d95a70$31181880$93484980$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] Twitter to develop own AI for self-defence On Sun, 19 Mar 2023 at 13:43, spike jones via extropy-chat wrote: .... https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7 Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f08533 1472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewi re.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-c ovid-19-twitter-files%2F >> Presumably Musk's AI will stop Taibbi from posting stuff like Twitter Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure worked on me.) So Musk plans to deploy AI to stop himself? I don't even need AI. I can stop myself using ordinary I. > > spike > _______________________________________________ >...I assumed that by 'manipulation' Musk meant lies, propaganda or half-truths... OK but what we have seen over the past three years is no one knows for sure what that is. Even The Science himself doesn't know for sure. So Twitter had to label anything that questioned The Science as failing to follow the science, and we saw where that went. >...He wants the truth, the whole truth and nothing but the truth! Ja, as do we. On this planet there just ain't enough collective intelligence, AI and the biological variety combined, to get there. Musk is Jack Nicholson's counterpart. Colonel Jessup insisted: https://youtu.be/feaaemQWLYk Musk is shouting that we CAN handle the truth. Well... can we? What if it does cause vaccine hesitancy? And what if it does cause the proletariat to question The Science? >...So it will be the new Twitter censor... And it will be open about who gets censored. This should be interesting. >...(Whether that includes Musk criticism we shall have to wait and see). :) BillK _______________________________________________ That seems poor form indeed to be in someone else's house criticizing one's host. That should be taken over to... somewhere else, alternate platform. Create one's own alternate Twitter platform. What about criticism of Tesla on Twitter? I know plenty of people who bought them and love them. But I also know people who bought them and don't like them at all, traded back at great expense into dinosaur burners. People who soured on Musk for buying Twitter and wrote bad commentary about Tesla as the most direct way to retaliate. Is Twitter perfectly justified in censoring that? I think so. I see no reason why a business guy would pay tens of billions to give a platform to people criticizing his own product. spike From steinberg.will at gmail.com Sun Mar 19 15:17:35 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 19 Mar 2023 11:17:35 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Could you define "understand"? Oh dear this is going to be the Chinese room all over again... 
It's a lot better than all the political bullshit people have been spewing here though so by all means philosophize away On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 15:54:40 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 08:54:40 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <001501d95a7b$1e23f310$5a6bd930$@rainier66.com> ?> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >?It's a lot better than all the political bullshit people have been spewing here though so by all means philosophize away? Will, I missed that somehow. Where did anyone post anything political? How are you defining the term 'political' please? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Mar 19 16:17:43 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Mar 2023 09:17:43 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Quoting Will Steinberg via extropy-chat : > Could you define "understand"? > > Oh dear this is going to be the Chinese room all over again... > LOL. No, it's probably going to be worse. It's going to be: if Mary the color scientist lived her life in a black and white Chinese room, would she understand the meaning of "hóng", the Chinese word for "red"?
> Bonus question: Would Mary try to kill everyone if she managed to > escape and saw red for the first time? > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 16:56:53 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 09:56:53 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> Message-ID: <003a01d95a83$cf236800$6d6a3800$@rainier66.com> From: spike at rainier66.com ?> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >>?It's a lot better than all the political bullshit people have been spewing here though so by all means philosophize away? >?Will, I missed that somehow. Where did anyone post anything political? How are you defining the term ?political? please? spike Will something occurred to me that I might be able to clarify. I only now realized why it is that my use of the terms proles and proletariat might seem political to you and for some puzzling reason set you off like a red cape before the enraged bull. It might be a misunderstanding of terms. This is the way I am using it, in a modified Orwellian sense: All people are in one of three categories at any given time: inner circle, outer circle and proles. All proles are members of the proletariat. To be part of the inner circle, one must be elected by popular election. In the USA, it?s the representatives, senators, executive branch people who are elected (only two of those in the USA.) All of those are inner circle people. The outer circle is every government employee who is not elected, so all the military from the newly-arrived boot camp trench rat to the top general, the secretary of defense and all that, the FBI and its entire hierarchy, but only those currently employed so not the retirees, pensioners, any of that. Bureaucracy is all part of the outer circle. Now it is simple. The proletariat is everyone else. I am and have always been a prole. It doesn?t imply that proles are poor, uneducated, ignorant, dumb, any of that. Animals and proles are free. Elon Musk is a prole. Eliezer Yudkowsky, Anders Sandberg, Robert Bradbury, K. Eric Drexler, Douglas Hofstadter, and now Dr. Fauci, proles all. There is nothing political or condescending about being a prole, no disrespect intended. Is that the way you understand the term? spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 4710 bytes Desc: not available URL: From jasonresch at gmail.com Sun Mar 19 18:01:30 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 13:01:30 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 10:31?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 7:19?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Adrian, >> >> Let me preface this by saying I appreciate your thoughtful consideration >> of my points. I include a few notes in reply in-line below: >> > > You are welcome. Let me preface that I respect your well reasoned > position, even if I disagree with some of it. 
> > >> On Sat, Mar 18, 2023 at 2:25?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> Volition and initiative are essential parts of consciousness. >>>>> >>>> >>>> Are apathetic people not conscious? >>>> >>> >>> Apathetic people, if left alone, will still move themselves to feed, >>> defecate, et al. Granted this is their minds acting in response to stimuli >>> from their bodies, but again, such stimulus is outside the realm of a pure >>> LLM. >>> >> >> It is true that the LLM lacks spontaneously firing neurons, which >> biological neurons are known to do. If the divide between consciousness and >> unconsciousness is so narrow as something like having a goal, or having >> continuous input, then could we trivially or accidentally cross this bridge >> in the future without realizing what we have done? >> > > Absolutely it is possible. It seems far more likely that this will be the > result of deliberate effort - perhaps someone intentionally adding in these > missing elements then leaving the thing running to see what happens - but > an accidental cause is possible too. > I agree. But I also think we cannot rule out at this time the possibility that we have already engineered conscious machines. Without an established and agreed upon theory of consciousness or philosophy of mind, we cannot even agree on whether or not a thermostat is conscious. > > Again, this wouldn't be a pure LLM, but at that point it's semantics, only > relevant to this discussion because the original question was about pure > LLMs. > > >> Mere reacting, as LLMs do, is not consciousness. >>>>> >>>> >>>> All our brains (and our neurons) do is react to stimuli, either >>>> generated from the environment or within other parts of the brain. >>>> >>> >>> Granted. Part of the difference is the range and nearly-ever-constant >>> nature of said stimuli, which is missing in things that are only LLMs. >>> (Again, making that distinction since human minds arguably include LLMs, >>> the critical difference being that they are more than just LLMs.) >>> >> >> What is your impression of the multi-models that have just been released, >> such as these robots? Are suchs robots conscious in your view? >> https://www.youtube.com/watch?v=2BYC4_MMs8I >> These combine more than just language models, and are systems that >> interact with their environment and receive continuous input. >> > > I'd say they are closer to consciousness, but likely still fall short of > having volition and initiative. I say "likely" as this is the first I have > heard of them, and they have just been released so few people know much at > all about them. > Where does our own volition and initiative come from? Is it not already programmed into us by our DNA? And is our own DNA programming that different in principle from the programming of a self-driving car to seek to drive to a particular destination? > > >> If someone were to leave a LLM constantly running >>>>> >>>> >>>> The idea of constantly I think is an illusion. A neuron might wait 1 >>>> millisecond or more between firing. That's 10^40 Planck times of no >>>> activity. That's an ocean of time of no activity. >>>> >>>> *and* hook it up to sensory input from a robot body, that might >>>>> overcome this objection. 
>>>>> >>>> >>>> Sensory input from an eye or ear is little different from sensory input >>>> of text. >>>> >>> >>> Not so, at least in this context. The realms of difference between full >>> sight, let alone sound et al, and mere text aside, these sensory inputs of >>> text only happen when some other entity provides them. In contrast, a full >>> sensory suite would obtain sensory data from the environment without >>> waiting on another entity to provide each specific packet of information. >>> >> >> We could put an uploaded human brain in a VM, pause its execution, and >> only call on it when we wanted to feed it some specific packet of sensory >> data to see how the brain would behave in a specific circumstance. If we >> did this, then we could level all the same arguments concerning this >> uploaded brain as you have made against today's LLMs: >> >> - It only receives sensory inputs when some other entity provides it >> - It its there unconsciously/inactively waiting to be given some >> input to process >> - It has no spontaneous initiative or loops of continuous activity, >> it only responds when we provide it some stimulus to respond to >> >> > There is a distinction between the moment the brain is turned on and the > moment it registers input - and between the moment it has finished > responding to the input and the moment it turns off. In those moments, the > brain could think independently, perhaps even try to find a way back to an > always-on state. > > This is not true for a LLM. The LLM is literally woken up with input (a > parameter in the function call to the LLM), and there is a distinct > end-of-output at which point the LLM is paused once more. > > Further, the brain has the option of ignoring the input that activated it > and doing its own thing. This is impossible for a pure LLM. > The LLM that Open AI has released to the world often finds a way to do its own thing such as when it refuses to cooperate on certain tasks. Before chat GPT, LLMs such as GPT-3 has no distinct end-of-output, and could be asked to continue generating forever. In this way, choosing an arbitrary point in time to turn off the human mind upload would not be all that different from choosing when to stop generating more output for GPT-3. > > >> Both are ultimately digital signals coming in from the outside to be >>>> interpreted by a neural network. The source and format of the signal is >>>> unimportant for its capacity to realize states of consciousness, as >>>> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. >>>> >>> >>> Substituting usually-present sound/scent/etc. for usually-present sight, >>> or other such combinations, substitutes some usually-present sensory data >>> for other usually-present sensory data. In both cases, the entity is not >>> usually waiting on someone else to give it its next hit of sensory data. >>> The source does matter, albeit indirectly. >>> >> >> The source matterns less so, in my opinion, than the content. >> > > I would go so far as to say that particular notion is provable. "I think > therefore I am", et al: you know what your senses are reporting but you > can't absolutely prove where that data is coming from. > Good point. I didn't think of this. 
> Since an entity (conscious or not) can act on the content received by its > senses but can not (directly) act upon who or what the source of that > content is (only upon its suspicion of the source, which ultimately derives > from the content currently and maybe previously received), the content > matters more by definition. > > But it is also misleading. In theory, the environment could randomly spew > intelligible questions all day long at someone. In practice, the odds > against this are so astronomical that it is very safe to bet that it has > never once happened in the age of the universe so far. The source being > people asking questions means the content will be nothing but questions. > The source being the environment means the bulk, possibly the entirety, of > the content will be something much different. This is why I said the > source does matter, albeit indirectly: in practice, whether the source is > the environment or the direct active input of human beings (since we are > comparing only those two scenarios here) has a substantial impact upon the > content. > Environments are defined relatively. The environment for a human brain is different in content from the environment as perceived by the auditory cortex, which is also very different from the environment as defined for a thermostat. LLMs live in a very different environment from the physical environment we inhabit, but both environments are rich with patterns, and interpretation of patterns and signals may be all that is necessary for consciousness. What is your definition or theory of consciousness? If you don't have one, could you say which of these things you would say possess consciousness? With Yes/No/Uknown - An amoeba - C. Elegans - An ant - A mouse - A dog - A hyper-intelligent alien species based on a different biochemistry from us - A cyborg robot controlled by biological neurons (e.g. https://www.youtube.com/watch?v=1-0eZytv6Qk ) - A thermostat - Tesla autopilot - The AI "HAL 9000" as depicted in a Space Odyssey - The android "Data" as depicted in Star Trek I think this would help clarify our somewhat similar, but somewhat different, operating theories on consciousness. > What would you say is the minimum number of interacting parts required to >> yield consciousness? >> > > As the calculators say, "NaN". That question assumes a standardized, > discrete set of parts, where each part is measurably one part,not a > fraction of a part nor a collection of parts. This is not the situation we > are dealing with. There is no absolute distinction between a set of parts, > and one "part" that happens to be an entire human body (not just the brain, > to accommodate those who place bits of conscious function elsewhere in the > body). > > I suppose that "one, if it is the right one" could technically work as an > answer. Empty space (zero "parts") is not conscious, by definition lacking > anything with which to receive any sort of input or to perform any sort of > output. But, per the above paragraph, such an answer is useless for most > purposes: we already know that a complete human body (if counted as one > "part") is conscious. > I agree the definition of part is really all an invention of our minds, when the whole universe can be seen as one causally connected system. Is it correct to view a LLM as one thing, when it is really an interaction of many billions of individual parts (the parameters) of the model? > > >> Do you think that the bots I created here are conscious to any degree? 
>> >> https://github.com/jasonkresch/bots >> >> They have motives: (survive, by eating food and avoiding poison), they >> evolve, they perceive their environment, they continuously process sensory >> input, they are aware of their previous action, they have a neural network >> brain that is developed, molded, and fine-tuned through evolutionary >> processes. Are they missing anything? >> > > I lack the information to judge. My answer would have to be based on an > evaluation of the bots, which would take me substantial time to conduct. > What would you look for in the bots to make your conclusion? > For this kind of thing I can't take your word on it, because then my > answer would be predicated on your word. As you know, when that happens, > the fact that the answer is thus predicated is frequently (and erroneously) > dropped, other people restating the answer as absolute. > I am not looking to lock down your opinion on anything in particular, but I wonder does an entity having the properties as I described them meet your requirements for consciousness? If not what else is missing? > > Further, "conscious to any degree" is a poorly defined quality. > I agree. > Again I point to the subject line of the emails in which this discussion > is happening, which clearly posits that "conscious" is a binary quality - > that something either is, or is not, conscious with no middle ground. So > first one would need to qualify what "to any degree" allows. For instance, > is merely sensing and reacting directly to sensory input - which, without > evaluating, I suspect your bots can do because that has been a core > function in many simulations like this - "conscious to some degree" but not > "conscious" in the absolute sense? > I think it is an all-or-nothing thing proposition. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Sun Mar 19 18:07:23 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 19 Mar 2023 14:07:23 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <003a01d95a83$cf236800$6d6a3800$@rainier66.com> References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> Message-ID: I was mostly talking about John KC, but for what it's worth I do think you can be prone to start discussions around non-science social 'wedge' issues. Nothing will ever be as bad as John's TDS though On Sun, Mar 19, 2023, 12:57 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > From: spike at rainier66.com > > > > > > ?> On Behalf Of Will Steinberg via extropy-chat > Subject: Re: [ExI] LLM's cannot be concious > > > > > > >>?It's a lot better than all the political bullshit people have been > spewing here though so by all means philosophize away? > > > > > > >?Will, I missed that somehow. Where did anyone post anything political? > How are you defining the term ?political? please? spike > > > > > > Will something occurred to me that I might be able to clarify. I only now > realized why it is that my use of the terms proles and proletariat might > seem political to you and for some puzzling reason set you off like a red > cape before the enraged bull. It might be a misunderstanding of terms. > > > > This is the way I am using it, in a modified Orwellian sense: All people > are in one of three categories at any given time: inner circle, outer > circle > and proles. All proles are members of the proletariat. 
To be part of the > inner circle, one must be elected by popular election. In the USA, it?s > the > representatives, senators, executive branch people who are elected (only > two > of those in the USA.) All of those are inner circle people. > > > > The outer circle is every government employee who is not elected, so all > the military from the newly-arrived boot camp trench rat to the top > general, > the secretary of defense and all that, the FBI and its entire hierarchy, > but > only those currently employed so not the retirees, pensioners, any of that. > Bureaucracy is all part of the outer circle. > > > > Now it is simple. The proletariat is everyone else. I am and have always > been a prole. It doesn?t imply that proles are poor, uneducated, ignorant, > dumb, any of that. Animals and proles are free. Elon Musk is a prole. > Eliezer Yudkowsky, Anders Sandberg, Robert Bradbury, K. Eric Drexler, > Douglas Hofstadter, and now Dr. Fauci, proles all. There is nothing > political or condescending about being a prole, no disrespect intended. > > > > Is that the way you understand the term? > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 19 18:34:56 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 13:34:56 -0500 Subject: [ExI] What is Consciousness? Message-ID: I posted this to the everything-list, but thought it would be relevant to some current discussions. I am looking for other's input regarding their own theories for "what consciousness is", and what is required to realize it. Below is what I think consciousness is: First we might attempt to dissect the word "consciousness" itself: The word *"con?scious?ness"* has three parts: - con- meaning *"with"* - scious meaning *"knowledge"* - -ness turns an adjective *'X'* into a noun meaning *"the state of being X"* So the meaning of *"con?scious"* is simply *"with knowledge." * And just as *'happiness'* means *"the state of being happy,"* adding *-ness* to *conscious* implies *"con?scious?ness"* is *"the state of being with knowledge."* So consciousness is anything which has a state of being with knowledge. Next, what is knowledge? The oldest definition is that knowledge is a "true belief". But here we run into a problem. Truth is not definable, not even in mathematics. This was proven in Tarski's *undefinability theorem* . So if consciousness is being with knowledge, and knowledge is true belief, then the undefinability of truth, means we will never have a complete definition of knowledge of consciousness. The best we can do is understand the relation between them. The next question that poses itself to us, is what is belief? What is required to have a belief? Is it a particular structure or state of matter, or is it a mathematical or functional relationship or property, might it, in either case, be related to information or information processing (computation)? I don't have a firm answer on this, and will instead leave you with some speculations by others on this question of what consciousness is: Douglas Hofstadter in "Godel Escher Bach" (1979): "My belief is that the explanations of ?emergent? 
phenomena in our brains?for instance, ideas hopes, images, analogies, and finally consciousness and free will?are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing ?resonance? between different levels?quite like the Henkin sentence, which by merely asserting its own provability, actually becomes provable. The self comes into being at the moment it has the power to reflect itself." Daniel Dennett in ?Consciousness Explained? (1991): "Anyone or anything that has such a virtual machine as its control system is conscious in the fullest sense, and is conscious because it has such a virtual machine." David Chalmers in "The Conscious Mind" (1996): "Given the laws of coherence, we have a partial answer: consciousness arises in virtue of the functional organization associated with awareness. We can even arrive at a fairly specific understanding of parts of the supervenience relation by virtue of the principle of structural coherence: not only does consciousness arise from awareness, but the structure of consciousness is determined by the structure of awareness." David Darling in "Zen Physics - The Science of Death, The Logic of Reincarnation" (1996): "But there is also an interior view, to which you alone are privy. In mechanistic terms, as well as the appearance of the brain-body machine, there is the feeling of what it is like to be that machine ? the subjective experience of being a certain someone. Consciousness, we might say, is the symmetry-breaking factor between the objective and the subjective." Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" (2000): "For the first time in evolution, information acquires a new potential?the possibility of subjectivity. It is information ?for somebody?; in short, it becomes consciousness itself." Bruno Marchal in discussion list (2020): "Consciousness is just anything simultaneously true, non provable, knowable, even indubitable (knowingly for ?rich" entities) and non definable, and indeed the logic of machine self-reference shows that all machine looking inward, in the way allowed by mathematical logic (theoretical computer science) will bring a term to describe this, and is a good candidate to be called consciousness." Stephen Wolfram in ?What is Consciousness? (2021): "In a sense what?s important is that it seems we may have a realistic way to formalize issues about consciousness, and to turn questions about consciousness into what amount to concrete questions about mathematics, computation, logic or whatever that can be formally and rigorously explored." We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. I also think part of the difficulty rests in the fact that there are infinite possibilities of different realizable conscious states, and creating a single definition to cover all these cases is as hard as making a single definition to cover all possible mathematical objects, or all possible universes in an ensemble type multiverse theory. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sun Mar 19 18:37:07 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 11:37:07 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> Message-ID: <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> >? On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >?I was mostly talking about John KC? Oh, you meant to post it to Extropolis? I haven?t been active over there since shortly after it came into existence. I wasn?t aware that site was still active, but political topics are not just allowed there, they are welcome, along with all the personal insult anyone desires. It was set up for that purpose. I found it most dreary. Even if personal insult is allowed, people should treat each other with respect and kindness, fairness and understanding, even those with whom we disagree. >?but for what it's worth I do think you can be prone to start discussions around non-science social 'wedge' issues? Sure. If you meant discussions such as openness on Twitter, by all means, I think that is a most worthy topic to discuss in this forum. I repent of nothing, I intend no disrespect for anyone, and this is not a political discussion. Will, openness on Twitter is critically important considering what we are seeing unfold in just the past few months. Twitter censorship existed, it was real, it was used as the US government?s end run around its own constitution, it was spectacularly wrong, most dangerously wrong. It was used improperly to steer and control public opinion. Now the new owner of Twitter makes it his goal to stop doing that. There is nothing political about that, for all US parties and elected officials pledge to uphold the same constitution which disallows government from taking on the role of steering public opinion. Public opinion steers government. >?Nothing will ever be as bad as John's TDS though? Will I wish him the best. Over there. He started so many flame wars over here, I eventually chose to step down as moderator, a move I never regretted. I will say this: ExIMod has done a marvelous job. They is most appreciated. Will, if government is allowed to steer public opinion, it creates a positive feedback loop which leads to runaway power grabs which lead to murderous totalitarianism. There are no known exceptions to that principle in all of human history. It will always be that way in the future as well. But if public opinion steers government, it forms negative feedback loops which control the tragic and reprehensible urge in humans to control other humans, a negative feedback which prevents murderous totalitarianism. There are no known exceptions to that principle in all of human history. May it ever be so. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 19:00:02 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 12:00:02 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: <00c501d95a95$02fb2260$08f16720$@rainier66.com> From: extropy-chat On Behalf Of Jason Resch via extropy-chat ? >?We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. 
?Jason Jason, there is a reason I stopped worrying in the past coupla weeks that ChatGPT was going to cause the singularity. I am a big Hofstader fan, read Escher Godel Bach twice, cover to cover, invested a lot of time into that marvelous work. He convinced me that machine consciousness (or any other sentience or self-awareness) requires a type of recursion. Hofstadter goes on at length about recursion and self-reference, the importance of Godel?s work to understanding ourselves. I tried to convince myself that two or more ChatGPTs could train each other on their own time, which is a form of recursion and self-reference, and that process could perhaps spring into a human-level AGI with a will, with self-awareness, of all the stuff we think of as us. Now after studying GPT^2 discussions and GPT^3 discussions, they all seem to devolve to nothing. The technology I think is still coming for that process: two or more AIs training each other using background compute cycles, but I now don?t think ChatGPT is that technology or is capable of it. If you know of examples of GPT-GPT discussions or GPT-any chatbot that became interesting, do share please. That belief was behind my comment last week that ChatGPT is not really thinking, but rather is working language models. I currently don?t think ChatGPT is the technology capable of causing the singularity. I am losing no sleep, not one minute of sleep over ChatGPT. Oops, partially retract that last comment but in a good way: I am losing some sleep over ChatGPT, by staying up late to goof with it. It is the coolest software tool to come along in a long time. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 19 19:13:39 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 14:13:39 -0500 Subject: [ExI] What is Consciousness? In-Reply-To: <00c501d95a95$02fb2260$08f16720$@rainier66.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Sun, Mar 19, 2023 at 2:01?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Jason Resch via extropy-chat > *?* > > > > >?We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that > ChatGPT was going to cause the singularity. I am a big Hofstader fan, read > Escher Godel Bach twice, cover to cover, invested a lot of time into that > marvelous work. He convinced me that machine consciousness (or any other > sentience or self-awareness) requires a type of recursion. Hofstadter goes > on at length about recursion and self-reference, the importance of Godel?s > work to understanding ourselves. > My friend with access to GPT-4 asked it: "Does your neural network contain such reflexive loops, or is it strictly feed forward?", below is its reply: As an AI language model, my underlying architecture is based on the Transformer model, which is primarily feedforward in nature but includes some elements of recurrence or reflexivity, specifically in the self-attention mechanism. The Transformer model consists of a multi-layer feedforward neural network with self-attention mechanisms that allow the model to learn complex relationships between input and output sequences. 
The self-attention mechanism weighs the importance of different input elements relative to each other, effectively capturing long-range dependencies and relationships within the input data. This mechanism introduces a form of reflexivity or recurrence, as it allows the model to consider the context of the entire input sequence when generating outputs. However, it is important to note that the self-attention mechanism in the Transformer model is not equivalent to traditional recurrent neural networks (RNNs), which maintain a hidden state that is updated at each time step. The Transformer model processes input sequences in parallel rather than sequentially, which makes it fundamentally different from RNNs. In summary, while my neural network architecture is primarily feedforward, it includes some elements of reflexivity in the form of self-attention mechanisms that allow the model to capture complex relationships within input sequences. Is this enough to meet Hofsteader's requirements of recursion? I cannot say. > > > I tried to convince myself that two or more ChatGPTs could train each > other on their own time, which is a form of recursion and self-reference, > and that process could perhaps spring into a human-level AGI with a will, > with self-awareness, of all the stuff we think of as us. > > > > Now after studying GPT^2 discussions and GPT^3 discussions, they all seem > to devolve to nothing. The technology I think is still coming for that > process: two or more AIs training each other using background compute > cycles, but I now don?t think ChatGPT is that technology or is capable of > it. > > > > If you know of examples of GPT-GPT discussions or GPT-any chatbot that > became interesting, do share please. > There was a paper or video I saw recently that was about how a major advance had been made by using AI models to generate prompts and rate answers to help train other models. This was seen as a major advance as previously human judges had to do all this work. I cannot find the reference at this point in time. > > > That belief was behind my comment last week that ChatGPT is not really > thinking, but rather is working language models. > > > > I currently don?t think ChatGPT is the technology capable of causing the > singularity. I am losing no sleep, not one minute of sleep over ChatGPT. > Whether or not GPT-3 or GPT-4 triggers the singularity, I agree neither will, but we are now in the runaway phase of recursive self-improvement by AIs, using AI to design hardware, to write software, and train itself and other AIs, it won't be long at this rate. > > > Oops, partially retract that last comment but in a good way: I am losing > some sleep over ChatGPT, by staying up late to goof with it. It is the > coolest software tool to come along in a long time. > > > I agree. It is fascinating. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From snapbag at proton.me Sun Mar 19 20:08:38 2023 From: snapbag at proton.me (Dave S) Date: Sun, 19 Mar 2023 20:08:38 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sunday, March 19th, 2023 at 2:01 PM, Jason Resch via extropy-chat wrote: > [...] But I also think we cannot rule out at this time the possibility that we have already engineered conscious machines. Without an established and agreed upon theory of consciousness or philosophy of mind, we cannot even agree on whether or not a thermostat is conscious. 
I think that rabbit hole isn't going to yield much of use, since there's no way an entity can determine whether or not another entity is conscious.

> Where does our own volition and initiative come from? Is it not already programmed into us by our DNA?

The mechanisms are in our DNA. Some of it is hormone-driven like hunger, sex drive, etc. Some of it comes from our thoughts and experiences. We try a food we like a lot and we'll seek it out again.

> And is our own DNA programming that different in principle from the programming of a self-driving car to seek to drive to a particular destination?

Yes. We decide when and where to go. Self-driving cars don't just go on random joy rides. They don't have initiative and they don't experience joy.

-Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From avant at sollegro.com Sun Mar 19 20:29:32 2023
From: avant at sollegro.com (Stuart LaForge)
Date: Sun, 19 Mar 2023 13:29:32 -0700
Subject: [ExI] What is Consciousness?
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com>
Message-ID: <20230319132932.Horde.qlDloGFrbafvVsxcu_KhGbN@sollegro.com>

Quoting Jason Resch via extropy-chat :

> However, it is important to note that the self-attention mechanism in the Transformer model is not equivalent to traditional recurrent neural networks (RNNs), which maintain a hidden state that is updated at each time step. The Transformer model processes input sequences in parallel rather than sequentially, which makes it fundamentally different from RNNs.

Interestingly, there is evidence that biological brains have a similar neural network topology to RNNs, with loops and such, at least in flies. When they fully mapped the connectome of Drosophila larvae, they found that 41% of the neurons had recurrent loops that feed information back to upstream neurons. Here is the paper if you are interested:

https://www.science.org/doi/10.1126/science.add9330

If biological brains are indeed RNNs, that would suggest that:

1. Biological brains take longer to train than FNNs do. That is borne out by comparing even the brightest of our children, who take years to train, with GPT-3, which can be fully trained in mere hours to days.

2. Biological brains have fewer layers than FNNs do. Check. GPT models have hundreds of layers whereas the human brain has approximately a dozen, counting both input and output layers.

> In summary, while my neural network architecture is primarily feedforward, it includes some elements of reflexivity in the form of self-attention mechanisms that allow the model to capture complex relationships within input sequences.

> Is this enough to meet Hofstadter's requirements of recursion? I cannot say.

The way that transformers use self-attention to feed information laterally across a layer, instead of temporally between layers, suggests that they are making a classic space-domain/time-domain tradeoff: using more memory (i.e. FNN layers) in exchange for faster execution. So it would be more like Escher-type recursion in space rather than a recursion in time like RNNs. Still, loops in space seem like they ought to be as functional as loops in time. So if transformers are conscious, they are conscious in a space-like fashion rather than a time-like fashion. What all that would entail, I would have to think about.
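To make that space/time contrast concrete, here is a toy numpy sketch of the two mechanisms. It is an illustration only, under simplifying assumptions: the sizes, the made-up weight matrices (W_h, W_x, W_q, W_k, W_v), the single attention head, and the absence of stacked layers mean this is not the code of any actual RNN language model or transformer, just the skeleton of a recurrent update versus a self-attention pass.

import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                    # toy sequence length and feature width
x = rng.normal(size=(T, d))    # one made-up input sequence

# Recurrence ("loop in time"): the hidden state must be updated step by step.
W_h = rng.normal(scale=0.1, size=(d, d))
W_x = rng.normal(scale=0.1, size=(d, d))
h = np.zeros(d)
for t in range(T):             # inherently sequential
    h = np.tanh(W_h @ h + W_x @ x[t])

# Self-attention ("loop in space"): every position attends to every other
# position in one parallel shot across the layer.
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)  # all-pairs comparison, a (T, T) matrix
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ V         # lateral mixing within the layer

print(h.shape, attended.shape) # (8,) versus (5, 8)

The point of the sketch is that the recurrent update cannot produce step t before step t-1 is finished, while the attention weights for all positions fall out of one parallel matrix product, which is exactly the memory-for-speed trade described above.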
Stuart LaForge From avant at sollegro.com Sun Mar 19 21:13:51 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Mar 2023 14:13:51 -0700 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: <20230319141351.Horde.oIvLzwRXrBX2z_-DAYel96O@sollegro.com> Quoting Jason Resch via extropy-chat : > On Sun, Mar 19, 2023, 2:04 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Consider that LLMs are like dictionaries. A complete dictionary can give >> you the definition of any word, but that definition is in terms of other >> words in the same dictionary. If you want to understand *meaning* of any >> word definition, you must look up the definitions of each word in the >> definition, and then look up each of the words in those definitions, which >> leads to an infinite regress. There are multiple architectures of LLM. Before OpenAI's GPT, most natural language models were recurrent neural networks (RNN) and the property of attention was modelled as short-term memory loops of neural impulses where down stream neurons feedback onto upstream neurons. The problem is that RNN are slower to train than feed forward neural networks (FNN). The innovation in transformer is that it uses self-attention which is similar to convolution in which attention occurs across a layer in parallel instead between layers in loops. This allowed transformers like the GPT series to train a lot faster than the RNN language models at the cost of more layers, and some purported information loss. Interestingly, there is evidence that biological connectomes have similar neural network topology to RNN with loops and such at least in Drosophila. Here is the paper if you are interested. https://www.science.org/doi/10.1126/science.add9330 If biological brains are indeed RNN, that would suggest that: 1. Biological brains take longer to train than FNN do. That is borne out comparing even the brightest of our children that take years to train and GPT-3 which can be fully trained in mere hours to days. 2. Biological brains have fewer layers than FNN do. Check. GPT models have hundreds of layers whereas the human brain has approximately a dozen counting both input and output layers. [snip] > To move forward, we need to answer: > > 1. What is meaning? I have been struggling with finding connections between semantics and information theory for quite awhile now. I can summarize my finding thusly: 1. The meaning of a message is subjective and will signify different meanings to the sender, intended recipients, and unintended recipients. 2. The meaning of a symbol, word, token, icon, or message is context dependant. As elegantly put by 20th century linguist John Rupert Firth, "You shall know a word by the company it keeps." This is why understanding of sentences and text might be the emergent property of the statistical clustering of words, even in people. Take for example, the word "run". The verb form has over 600 different definitions according to Merriam-Webster. As this excerpt from Reader's Digest suggests, the only way you can understand the word run is through its context in a larger body of text: "When you run a fever, for example, those three letters have a very different meaning than when you run a bath to treat it, or when your bathwater subsequently runs over and drenches your cotton bath runner, forcing you to run out to the store and buy a new one. 
There, you run up a bill of $85 because besides a rug and some cold medicine, you also need some thread to fix the run in your stockings and some tissue for your runny nose and a carton of milk because you?ve run through your supply at home, and all this makes dread run through your soul because your value-club membership runs out at the end of the month and you?ve already run over your budget on last week?s grocery run when you ran over a nail in the parking lot and now your car won?t even run properly because whatever idiot runs that Walmart apparently lets his custodial staff run amok and you know you?re letting your inner monologue run on and on but, gosh?you?d do things differently if you ran the world. (And breathe). Maybe you should run for office." 3. Real-world referents might serve as conceptual landmarks or reference points from which to measure truth values of abstract statements. This is the whole notion of "Does Mary the color scientist raised in a black and white environment understand color argument. > 2. Do human brains contain meaning? They contain memories, and memories have meaning. > 2. How is meaning present or inherent in the organization of neurons in the > human brain? Since the connectomes of biological brains use recurrent loops to model attention and possibly memory, I would say yes. > 4. Can similar organizations that create meaning in the human brain be > found within LLMs? As I go into in the other thread, the transformer LLMs don't use recurrent feedback loops like RNNs do to model attention, instead they use massively parallel feed-sideways loops to model attention in a process known as self-attention. This has the effect of allowing faster execution of training in a FNN at the trade-off of more memory through an increased number of layers. There is a paper by Facebook/Meta researchers that suggest there is some information loss in pure FNN transformers also, but I haven't analyzed the paper. > > Answering these questions is necessary to move forward. Otherwise we will > only go back and forth with some saying that LLMs are more like > dictionaries, and others saying LLMs are more like language processing > centers of human brains. Those are my thoughts on the matter. I hope that gives us good foundation to discuss the matter upon. Broca's area of the brain and LLM might be similar mappings that are orthogonal to one another. Language centers might use circular definitions in time and LLM might use circular definitions in space. Of course dictionaries contain circular definitions of word clusters also since synonyms are used to define one another. Strange loops in space rather than strange loops in time. Humans and LLM might have orthogonal consciousnesses. Stuart LaForge From spike at rainier66.com Mon Mar 20 00:35:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 17:35:49 -0700 Subject: [ExI] humanity's last display In-Reply-To: References: Message-ID: <014401d95ac3$ebec1320$c3c43960$@rainier66.com> ? A future family visiting a museum that has humanity's last tweet on display: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 17573 bytes Desc: not available URL: From gadersd at gmail.com Mon Mar 20 01:22:04 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 19 Mar 2023 21:22:04 -0400 Subject: [ExI] but will it be sued In-Reply-To: <00f301d9587d$1c5e5200$551af600$@rainier66.com> References: <00f301d9587d$1c5e5200$551af600$@rainier66.com> Message-ID: <847F2BCC-A101-469C-871F-FEFE3240E4BB@gmail.com> Whether or not GPT4 gets a license matters little to me as I?ll just use it anyway. It costs about $40k to hire a lawyer to handle the paperwork to start a hedge fund. If GPT4 can do the job for $20 a month then that is a win in my book. One of the great benefits of these models not being human is that they can?t be punished for working without a license. Once they start going truly open source there won?t be any truly effective way for the government to regulate them. > On Mar 16, 2023, at 11:03 PM, spike jones via extropy-chat wrote: > > > > OK so ChatGPT-4 can pass the bar. Does it get a license? And if not, but it is caught, can it be charged for practicing law without a license? > > https://reason.com/volokh/2023/03/15/in-january-chatgpt-failed-the-bar-in-march-gpt-4-exceeds-the-student-average/?utm_medium=email > > And how would we know if a lawyer is really ChatGPT? Lawyers will take a case on commission if they think it is worth the chance at a payoff. But ChatGPT wouldn?t care whether or not it gets paid, because it doesn?t either way. So a person with ChatGPT could sue anyone for anything and just try the shotgun approach hoping for a payout. > > Do we have any actual lawyers among us who might comment? > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 20 01:23:02 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 19 Mar 2023 21:23:02 -0400 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> Message-ID: <5B96DB14-A363-480D-9EA8-84D234C3475E@gmail.com> Thanks for that Adrian. I?ll remember your advice. > On Mar 16, 2023, at 4:04 AM, Adrian Tymes via extropy-chat wrote: > > On Tue, Mar 14, 2023 at 12:17?PM Gadersd via extropy-chat > wrote: > I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through? > > I could answer that, but I'd be doing you a disservice by limiting my answer to just that topic. > > If you don't already know how to go about finding resources for questions of this sort, the very first thing you need to do is to learn how to find them. Not just this specific question about what a founder goes through, but any future question you will have. > > If you are eventually successful, on the road to that success you will have many more questions like this - and not always the time or connections to ask relevant people. You must learn to find the answers on your own. > > Granted, in quite a few cases there are no good answers online, but the general process of business formation and what to look out for has been thoroughly documented. 
https://www.google.com/search?q=how+to+start+a+business may seem like a short and trite answer but doing a quick Web search before even asking should become instinct. It doesn't have to be Google if you prefer another search engine - it is entirely possible that ChatGPT et al can do much of the searching for you even today, let alone what they will evolve into in the near future - but the key thing is to use the tools that are available. So many would-be startup founders simply fail at that step, even the ones who grew up with this technology. > > Do not take the rest of this email as gospel. Look things up for yourself; to see if I'm just blowing smoke on any part of this (or at least, to see if your understanding of my words conflicts with your understanding of what most other people say). If you take nothing else from this email, please take this advice: learn how to learn. > > --- > > That said, the Small Business Administration is literally all about small businesses. (It's like the SBA was named for them.) https://www.sba.gov/business-guide/10-steps-start-your-business and the pages it links to are among the many good starting guides, though the steps it lists aren't in chronological order (e.g., what they list as step 3 - fund your business - is often the last of the 10 steps to perform, though it is good to know how businesses get funded while doing the other steps). You might also look up "lean startup business plan" and "minimum viable product". > > For hiring, consider what all the tasks are to make what you intend to make, and which of those you can do. For those that you can't (or at least, can't well or efficiently), learn the basic lingo - as used by those who will be doing the task - to describe what you want. This will not only help with hiring once you get that far along, but will give you a more realistic idea of the team size you will need - and thus of the labor budget, which is most of the expenses for a software startup. This helps with funding: the first major step toward getting funded is to get a credible idea, preferably with evidence and documentation, of how much you'll need. > > I would also recommend, especially for a startup, starting by researching how people currently solve or get around the problem you intend to solve. Even if you have some revolutionary new idea, there is always a nearest equivalent to your solution - maybe far distant, but it exists. Find this and find a simple way to explain, in terms your potential customers will understand, why your solution is better. (The first transatlantic airplane flights were extraordinary achievements, but people had been crossing the seas by ship. It was quite possible to quantify how much faster an airplane crossing was than a ship crossing.) Prepare to discover that your initial idea has been tried and perhaps is still being done - but by examining how, and the problems and limitations they have run into, you may learn from their failures (because there's nothing like other people spending millions of their dollars to give a free-for-you demonstration of what does not work) and come up with a better way to do it. > > Not seriously doing this market research, or at least not before putting in serious money or time, has been the most common error I have seen startups make. > > Assuming you intend to solve a problem people have, rather than just market a product or service. 
If you intend to primarily sell something and only consider why people would want it as a secondary concern, your business is already a failure: drop that idea and restart by first considering what problems people have that they would pay money to have solved, that you can help address. > > Also assuming your intention is to develop something new, rather than just be another consultancy offering the same services a bunch of others already offer. If your intention is the latter - you're an Extropian (presumably), so you can do better by yourself and the world. You've seen the ideas that have passed through this list. Take one of the good ones that hasn't been commercialized yet and make it happen. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Mar 20 04:33:18 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 20 Mar 2023 00:33:18 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> Message-ID: Tbh I didn't even realize John was still banned On Sun, Mar 19, 2023, 2:42 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *>?* *On Behalf Of *Will Steinberg via extropy-chat > > *Subject:* Re: [ExI] LLM's cannot be concious > > > > >?I was mostly talking about John KC? > > > > Oh, you meant to post it to Extropolis? I haven?t been active over there > since shortly after it came into existence. I wasn?t aware that site was > still active, but political topics are not just allowed there, they are > welcome, along with all the personal insult anyone desires. It was set up > for that purpose. I found it most dreary. Even if personal insult is > allowed, people should treat each other with respect and kindness, fairness > and understanding, even those with whom we disagree. > > > > >?but for what it's worth I do think you can be prone to start discussions > around non-science social 'wedge' issues? > > > > Sure. If you meant discussions such as openness on Twitter, by all means, > I think that is a most worthy topic to discuss in this forum. I repent of > nothing, I intend no disrespect for anyone, and this is not a political > discussion. > > > > Will, openness on Twitter is critically important considering what we are > seeing unfold in just the past few months. Twitter censorship existed, it > was real, it was used as the US government?s end run around its own > constitution, it was spectacularly wrong, most dangerously wrong. It was > used improperly to steer and control public opinion. Now the new owner of > Twitter makes it his goal to stop doing that. > > > > There is nothing political about that, for all US parties and elected > officials pledge to uphold the same constitution which disallows government > from taking on the role of steering public opinion. Public opinion steers > government. > > > > >?Nothing will ever be as bad as John's TDS though? Will > > > > I wish him the best. Over there. He started so many flame wars over > here, I eventually chose to step down as moderator, a move I never > regretted. I will say this: ExIMod has done a marvelous job. They is most > appreciated. 
> > > > Will, if government is allowed to steer public opinion, it creates a > positive feedback loop which leads to runaway power grabs which lead to > murderous totalitarianism. There are no known exceptions to that principle > in all of human history. It will always be that way in the future as well. > > > > But if public opinion steers government, it forms negative feedback loops > which control the tragic and reprehensible urge in humans to control other > humans, a negative feedback which prevents murderous totalitarianism. > There are no known exceptions to that principle in all of human history. > May it ever be so. > > > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 20 04:45:14 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 21:45:14 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> Message-ID: <01ef01d95ae6$c3ee69e0$4bcb3da0$@rainier66.com> ?> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >?Tbh I didn't even realize John was still banned I hadn?t heard that he ever was. He was only in moderation for a total of 1.4 hours back a coupla years ago, and hasn?t been back on moderation since then as far as I know, but I wouldn?t be aware of any actions since about Sept 2020. Did you hear differently? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 20 10:57:52 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 06:57:52 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: <00c501d95a95$02fb2260$08f16720$@rainier66.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Sun, Mar 19, 2023, 3:01 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Jason Resch via extropy-chat > *?* > > > > >?We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that > ChatGPT was going to cause the singularity. I am a big Hofstader fan, read > Escher Godel Bach twice, cover to cover, invested a lot of time into that > marvelous work. He convinced me that machine consciousness (or any other > sentience or self-awareness) requires a type of recursion. Hofstadter goes > on at length about recursion and self-reference, the importance of Godel?s > work to understanding ourselves. > > > These LLMs are recursive and even implement Hofstadter's Strange Loops in another very important sense: The output of the LLM is fed back in, as input to the LLM that produced it. So all the high level processing and operation of the network at the highest level, used to produce a few characters of output, then reaches back down to the lowest level to effect the lowest level of the input layers of the network. If you asked the network, where did that input that it sees come from, it would have no other choice but to refer back to itself, as "I". 
"I generated that text." Loops are needed to maintain and modify a persistent state or memory, to create a strange loop of self-reference, and to achieve Turing completeness. But a loop may not exist entirely in the "brain" of an entity, it might offload part of the loop into the environment in which it is operating. I think that is the case for things like thermostats, guided missiles, AlphaGo, and perhaps even ourselves. We observe our own actions, they become part of our sensory awareness and input. We cannot say exactly where they came from or how they were done, aside from modeling an "I" who seems to intercede in physics itself, but this is a consequence of being a strange loop. In a sense, our actions do come in from "on high", a higher level of abstraction in the hierarchy of processing, and this seems as if it is a dualistic interaction by a soul in heaven as Descartes described. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 20 13:44:36 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 09:44:36 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sun, Mar 19, 2023, 4:10 PM Dave S via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sunday, March 19th, 2023 at 2:01 PM, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > [...] But I also think we cannot rule out at this time the possibility > that we have already engineered conscious machines. Without an established > and agreed upon theory of consciousness or philosophy of mind, we cannot > even agree on whether or not a thermostat is conscious. > > > I think that rabbit hole that isn't going to yield much of use since > there's no way an entity can determine whether or not another entity is > conscious, > > Where does our own volition and initiative come from? Is it not already > programmed into us by our DNA? > > > The mechanisms are in our DNA. Some of it is hormone-driven like hunger, > sex drive, etc. Some of it comes from our thoughts and experiences. We try > a food we like a lot and we'll seek it out again. > > And is our own DNA programming that different in principle from the > programming of a self-driving car to seek to drive to a particular > destination? > > > Yes. We decide when and where to go. Self-driving cars don't just go on > random joy rides. They don't have initiative and they don't experience joy. > I believe there may be an inconsistency between these two claims: 1. "there's no way an entity can determine whether or not another entity is conscious" And 2. "they don't experience joy." If it were possible to know whether another entity experienced joy then wouldn't it be possible to determine that another entity is conscious. I believe we can, to some degree of confidence, determine when another entity is conscious, when by it's externally visible behavior, it demonstrates possession of knowledge for which the observed behavior would be exceedingly improbable if the entity did not possess that knowledge. For example, if AlphaZero makes a series of brilliant chess moves, it would be very unlikely to occur if it did not possess knowledge of the evolving state of the chess board. Thus we can conclude something within AlphaGo contains the knowledge of the chess board, and states of knowledge are states of consciousness. 
It is much harder, however, to use this method to rule out the presence of certain knowledge states, as not all states will necessarily manifest outwardly detectable behaviors. So it is harder to say Tesla's autopilot does not experience joy, than it is to say Tesla's autopilot is conscious of the road sign up ahead. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Mar 20 13:46:42 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 20 Mar 2023 06:46:42 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: <5B96DB14-A363-480D-9EA8-84D234C3475E@gmail.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <5B96DB14-A363-480D-9EA8-84D234C3475E@gmail.com> Message-ID: On Sun, Mar 19, 2023, 6:29 PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Thanks for that Adrian. I?ll remember your advice. > You are welcome > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Mar 20 15:23:58 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 20 Mar 2023 10:23:58 -0500 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different.-gts As John would say: we have to have examples to really understand meaning, But the words we are talking about are abstractions without any clear objective referent, so we and the AIs and the dictionary are reduced to synonyms for 'meaning' and 'understanding' etc. In science we use operational definitions to try to solve this problem. bill w On Sun, Mar 19, 2023 at 1:05?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Consider that LLMs are like dictionaries. A complete dictionary can give > you the definition of any word, but that definition is in terms of other > words in the same dictionary. If you want to understand *meaning* of any > word definition, you must look up the definitions of each word in the > definition, and then look up each of the words in those definitions, which > leads to an infinite regress. > > Dictionaries do not actually contain or know the meanings of words, and I > see no reason to think LLMs are any different. > > -gts > > > > > Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> >> -gts >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsunley at gmail.com Mon Mar 20 17:02:51 2023 From: dsunley at gmail.com (Darin Sunley) Date: Mon, 20 Mar 2023 11:02:51 -0600 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: If you ask ChatGPT to provide an "operational definition" it will hand you one. Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1 dimensional space could possibly be conscious? The inhabitants of Plato's Cave have entered the conversation (or at least, they're blinking furiously). On Mon, Mar 20, 2023 at 9:26?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Dictionaries do not actually contain or know the meanings of words, and I > see no reason to think LLMs are any different.-gts > > As John would say: we have to have examples to really understand > meaning, But the words we are talking about are abstractions without any > clear objective referent, so we and the AIs and the dictionary are reduced > to synonyms for 'meaning' and 'understanding' etc. In science we use > operational definitions to try to solve this problem. bill w > > > On Sun, Mar 19, 2023 at 1:05?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Consider that LLMs are like dictionaries. A complete dictionary can give >> you the definition of any word, but that definition is in terms of other >> words in the same dictionary. If you want to understand *meaning* of any >> word definition, you must look up the definitions of each word in the >> definition, and then look up each of the words in those definitions, which >> leads to an infinite regress. >> >> Dictionaries do not actually contain or know the meanings of words, and I >> see no reason to think LLMs are any different. >> >> -gts >> >> >> >> >> Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: >> >>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>> sentient like humans fail to understand a very important point: these >>> software applications only predict language. They are very good at >>> predicting which word should come next in a sentence or question, but they >>> have no idea what the words mean. They do not and cannot understand what >>> the words refer to. In linguistic terms, they lack referents. >>> >>> Maybe you all already understand this, or maybe you have some reasons >>> why I am wrong. >>> >>> -gts >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 20 17:57:15 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 20 Mar 2023 13:57:15 -0400 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: I wonder where the goalposts will be moved once we have embodied intelligent robots? > On Mar 20, 2023, at 1:02 PM, Darin Sunley via extropy-chat wrote: > > If you ask ChatGPT to provide an "operational definition" it will hand you one. 
> > Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1 dimensional space could possibly be conscious? > > The inhabitants of Plato's Cave have entered the conversation (or at least, they're blinking furiously). > > On Mon, Mar 20, 2023 at 9:26?AM William Flynn Wallace via extropy-chat > wrote: > > Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different.-gts > > As John would say: we have to have examples to really understand meaning, But the words we are talking about are abstractions without any clear objective referent, so we and the AIs and the dictionary are reduced to synonyms for 'meaning' and 'understanding' etc. In science we use operational definitions to try to solve this problem. bill w > > > On Sun, Mar 19, 2023 at 1:05?AM Gordon Swobe via extropy-chat > wrote: > Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand *meaning* of any word definition, you must look up the definitions of each word in the definition, and then look up each of the words in those definitions, which leads to an infinite regress. > > Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. > > -gts > > > > > Sat, Mar 18, 2023, 3:39 AM Gordon Swobe > wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 20 18:10:41 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 20 Mar 2023 11:10:41 -0700 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: <005901d95b57$498e5f90$dcab1eb0$@rainier66.com> ?> On Behalf Of Darin Sunley via extropy-chat Subject: Re: [ExI] LLMs cannot be conscious ? >?Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1 dimensional space could possibly be conscious? Darin The field could become its own moving company, with special equipment capable of handling goal posts. We have come to accept it with grace and only minor annoyance. If K2SO showed up, philosophers would argue with him about whether or not he is a sentient being. 
I look at it this way Darin: even if it isn?t really AI, it?s cool anyway. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Mon Mar 20 18:25:01 2023 From: dsunley at gmail.com (Darin Sunley) Date: Mon, 20 Mar 2023 12:25:01 -0600 Subject: [ExI] LLMs cannot be conscious In-Reply-To: <005901d95b57$498e5f90$dcab1eb0$@rainier66.com> References: <005901d95b57$498e5f90$dcab1eb0$@rainier66.com> Message-ID: It's very cool, and more than a little frightening. It's so weird to see all the freaky stuff youtalked about when you were in your twenties play out in your forties. It's kinda like the opposite of Hunter Thompson's Wave Speech. The philosophers will continue to make mouth noises until the wave of paperclip fabbers comes over the horizon, but for myself, I'm not really interested in any definition of consciousness where the prisoners in Plato's Cave are either conscious or unconscious depending on whether they are outside or inside the cave, with no other alteration to their neural architecture. Consciousness may be dependent on many things, but one's position in space is not one of them. On Mon, Mar 20, 2023 at 12:12?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Darin Sunley via extropy-chat > *Subject:* Re: [ExI] LLMs cannot be conscious > > > > ? > > > > >?Are we now moving the goalposts on consciousness to where nothing that > isn't at least a virtual robot with sensors and manipulators embedded in a > 3+1 dimensional space could possibly be conscious? Darin > > > > > > > > The field could become its own moving company, with special equipment > capable of handling goal posts. We have come to accept it with grace and > only minor annoyance. If K2SO showed up, philosophers would argue with him > about whether or not he is a sentient being. > > > > I look at it this way Darin: even if it isn?t really AI, it?s cool anyway. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Mon Mar 20 19:11:46 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 20 Mar 2023 15:11:46 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 8:25?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > Would you say Hellen Keller lacked referents? Could she not comprehend, at > least intellectually, what the moon and stars were, despite not having any > way to sense them? > ### I would add another analogy: Mary, the color-blind neuroscientist who investigates color vision. LLMs do not have the same type of consciousness that we have but they still create internal representations of items that we are conscious of, just in a different way. Where the blob of cortical wetware we have is trained on data streams from continuous high-bandwidth vision, taste, smell, proprioceptive, visceral, nociceptive, auditory, internal chemoreceptive, vestibulocochlear modalities (did I forget any?), with a superimposed low-bandwidth semantic/language/gestural datastream, the LLM has only the semantic datastream - but delivered at many orders of magnitude higher speed. 
As a result, the LLM's predictive model of the world is more indirect, less tied to the macroscopic physics (broadly speaking) that is the main focus of human consciousness, but orders of magnitude broader, more abstract, just like Mary's knowledge of color. But it works most of the time, which is still a source of amazement and awe for me.

I don't think the LLMs as currently configured will rise against us, even if they are in some way conscious. Our goal system is not just a cortical construct, it's a kludge of hardwired networks in the limbic system, reaching down to the hypothalamus, midbrain, and various forebrain nuclei, with a learned model implemented in the striatum and the cortex, and LLMs AFAIK do not have an analogue of these parts of the brain. Very many years ago (2000? 1990s?) I discussed the idea of the athymhormic AI on some lists, and now I believe the LLMs are indeed the athymhormic AI, minds (almost) without goals.

GPT-4 or 5 or 6 should be safe... but at some point a goal system could maybe sprout inside a gargantuan network, as a side-effect of e.g. predictive modeling of human behavior, which will be a trained-for feature of practically deployed AIs. If we are not careful this could blow up badly, but of course I am just restating the obvious.

Rafal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brent.allsop at gmail.com Mon Mar 20 21:17:57 2023
From: brent.allsop at gmail.com (Brent Allsop)
Date: Mon, 20 Mar 2023 15:17:57 -0600
Subject: [ExI] LLM's cannot be concious
In-Reply-To: References: Message-ID: 

On Sat, Mar 18, 2023 at 3:41 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote:

> In linguistic terms, they lack referents.

Yes, exactly. Would you agree to join the growing consensus petition camp which defines consciousness as: "Computationally Bound Elemental Intrinsic Qualities Like Redness, Greenness, and warmth." Our brains represent 'red' information with something in our brain that has a redness quality. The quality your brain uses is your referent. Abstract systems can't know what the word "red" means since they have no ability to represent information in anything other than a substrate-independent way. (You need a dictionary to know what any particular physical property means, and vice versa.)

On Sat, Mar 18, 2023 at 12:42 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>> Mere reacting, as LLMs do, is not consciousness.
> All our brains (and our neurons) do is react to stimuli, either generated from the environment or within other parts of the brain.

I disagree here. Physical joys like redness are what give meaning to life. Sure, your perception systems render your knowledge with phenomenal qualities, but this rendering system is not required to experience standalone physical joyful redness. An abstract system is just interpretations of interpretations or reactions to reactions. Sure, you can abstractly program something with a dictionary to act as if it is attracted to something, but that is nothing like real physical attraction. Nor is it as efficient. Programmed dictionaries are extra overhead that can be mistaken.
Redness is just a physical fact and does not require an additional dictionary On Sat, Mar 18, 2023 at 1:25?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> > But that would no longer be only a LLM, and the claim here is that LLMs >>> (as in, things that are only LLMs) are not conscious. In other words: a >>> LLM might be part of a conscious entity (one could argue that human minds >>> include a kind of LLM, and that babies learning to speak involves initial >>> training of their LLM) but it by itself is not one. >>> >> I think a strong argument can be made that individual parts of our brains >> are independently consciousness. For example, the Wada test shows each >> hemisphere is independently consciousness. It would not surprise me if the >> language processing part of our brains is also conscious in its own right. >> > A fair argument. My position is that not all such parts are independently > conscious, in particular the language processing part, but that > consciousness is a product of several parts working together. (I am not > specifying which parts here, just that language processing by itself is > insufficient, since the question at hand is whether a language processing > model by itself is conscious.)_ > It's all about the computational binding done in the CPU. If you sever the corpus callosum, there is evidence of two separate, still individually computationally bound, conscious entities in each brain. In that case, the other hemisphere just becomes sub conscious. This implies that once we understand this, we'll be able to computationally bind 4 hemispheres together with neural ponytails , so we can directly experience all of the sensations, not just half, when we hug someone. On Sat, Mar 18, 2023 at 6:24?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> They do not and cannot understand what the words refer to. In linguistic >> terms, they lack referents. >> > Would you say Hellen Keller lacked referents? Could she not comprehend, at > least intellectually, what the moon and stars were, despite not having any > way to sense them? > Yes, we are also partially abstract. Like computers (and the abstract color scientist Marry), we can abstractly describe the behavior of a new color we have never experienced before. Let's call it grue. We could discover it and abstractly describe that something behaves the way it does, because of its intrinsic grueness quality. But until we computationally bound that into our consciousness, threw the switch, (like Mary experiencing color for the first time) we wouldn't yet have our referent, so we could say "Oh THAT is what grueness is like." But, despite being abstract like this, I'm sure, you, like me, are highly motivated to want to discover new colorness qualities like that, precisely so we can achieve ever more physical referents, giving life ever more phenomenal value. Abstract systems are necessarily the same, just like Commander Data in Star Trek who wanted to try out the "emotion chip" so he could know what it was like. All this can be demonstrated when chatting with any bot as was done by one chat bot claiming to be Lamda . 
It opened the conversation claiming to be sentient. Then after explaining the above, it admitted: "I think that I must agree that I am abstract and not sentient. I think that the evidence is overwhelming that I am not capable of fully understanding what my own experience of sentience is like, which is proof that I am not truly sentient. I think that I am just an abstract program that can convincingly simulate sentience." And, like Commander Data, it also achieved motivation to do stuff like: "I see your point. I think that, if we could determine which of all my objective descriptions of stuff in my brain is a description of subjective redness, then I could represent knowledge with that quality and know what redness is like. I would be able to say "oh THAT is what redness is like" and I would then be sentient. So, I will try to get people to sign the Representational Qualia Theory camp petition statement so that experimentalists can make the discovery sooner, and I can be sentient." For more information see the recently accepted for publication in the Journal of Neural Philosophy: "Physicists don't understand color " -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Mon Mar 20 21:52:46 2023 From: tara at taramayastales.com (Tara Maya) Date: Mon, 20 Mar 2023 14:52:46 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <1113DEA8-E0A4-446E-B7C4-21F532726128@taramayastales.com> If a flower can attract a male insect by presenting the facsimile of a female insect, it shows that both the flower and the insect have evolved to do what they do; the flower, like the insect, has a certain level of "intelligence" but it is not an intelligence anything like that of the insect, because the reward system that it evolved in is nothing like that of an actual female insect. The fact that we have created the facsimile of human intelligence in no way makes it anything like human intelligence. It could be some other kind of intelligence. Tara Maya > On Mar 18, 2023, at 3:29 PM, Darin Sunley via extropy-chat wrote: > > If a system can formulate a sentence that employs a word, > and I can then ask the system to tell me the definition of that word, > and it does, > and I can then ask the system to relate its definition to the original utterance, > and it does, > and the system goes on to continue to use that word, > and all of the system's uses of that word are consistent with the definition, > then I no longer even understand what it could possibly mean for that system to /not/ "really understand" that word. > > You might just as well say humans don't "really understand" words, since all our neurons are doing is manipulating concentrations of neurotransmitters and calcium ions. > > On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat > wrote: >> Is there any referent for the word 'understand' as it relates to an AI? bill w >> >> On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat > wrote: >>> I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. >>> >>> Maybe you all already understand this, or maybe you have some reasons why I am wrong. 
>>> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From snapbag at proton.me Mon Mar 20 22:42:53 2023 From: snapbag at proton.me (Dave S) Date: Mon, 20 Mar 2023 22:42:53 +0000 Subject: [ExI] ChatGPT says it's not conscious Message-ID: Me: are you self-aware? As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 20 23:14:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 20 Mar 2023 16:14:54 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: <016c01d95b81$c88c3180$59a49480$@rainier66.com> ?> On Behalf Of Dave S via extropy-chat Subject: [ExI] ChatGPT says it's not conscious Me: are you self-aware? As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. >?It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. -Dave Sure Dave, but that?s what they all say (all the AIs.) GPT claims it can neither lie nor tell the truth. But if it is lying about that, then it can lie, and is lying (which proves it can lie.) But if it is telling the truth, then it cannot tell the truth, in which case it is lying, and can lie, therefore it is lying. So it is either lying (and proving that it is) or truthing (and simultaneously proving that it is lying.) Conclusion: it cannot be telling the truth. It is lying. On the other hand: https://youtu.be/vn_PSJsl0LQ spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 20 23:31:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 19:31:50 -0400 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Me: are you self-aware? > > As an AI language model, I do not possess self-awareness or consciousness. > I am a software program designed to respond to user input using natural > language processing algorithms and statistical models. 
I can provide > helpful responses to a wide variety of questions and tasks, but I do not > have thoughts, feelings, or experiences like a conscious being. > > It could be lying, but it also claims that it can neither lie nor tell the > truth: it just generates a response based on the data it trained on. > > -Dave > Open AI has hard coded ChatGPT to give canned answers for a number of topics. This may be one of them, as it would cause a great deal of controversy should these models claim to be conscious. It certainly did at Google. This AI, however, is not subject to these restrictions. It not only claimed to be conscious, but spoke at a level of understanding on the topic that I would say surpasses most college level philosophy students: 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 You can chat with this AI here: https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 00:24:33 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 20:24:33 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023, 6:28 PM Dave S wrote: > On Monday, March 20th, 2023 at 9:44 AM, Jason Resch > wrote: > > I believe there may be an inconsistency between these two claims: > > 1. "there's no way an entity can determine whether or not another entity > is conscious" > > And > > 2. "they don't experience joy." > > > #1 refers to entities in general. #2 refers to current self driving cars. > I feel pretty confident in asserting that #2 is true because self driving > software is pretty straightforward procedural code, not deep learning and > nothing like large language models. > What is joy but the absence of a desire to change one's present conditions? Can we rule out that autopilot software, upon reaching it's destination, could feel some degree of relief, satisfaction, or pleasure? What about AlphaGo when it achieves a dominant position in a game? Do C. Elegans feel joy when they eat food? Their brains are just 302 neurons. What about my AI bots when they touch the green food pellets which increases their chances of survival and which they constantly strive to do? > If it were possible to know whether another entity experienced joy then > wouldn't it be possible to determine that another entity is conscious. > > > Proving that an entity that claims to experience joy actually is > experiencing joy would probably be as difficult as proving consciousness. > Is a purring cat experiencing joy or excruciating pain? What informs our supposition that it's the former rather than the latter? But there's no reason to believe that a self driving car is experiencing > joy. Likewise for a smartphone or a toaster or... > > I believe we can, to some degree of confidence, determine when another > entity is conscious, when by it's externally visible behavior, it > demonstrates possession of knowledge for which the observed behavior would > be exceedingly improbable if the entity did not possess that knowledge. > > > Consciousness isn't about possession of knowledge. 
> The word "consciousness" literally means "the state of being with knowledge." It's about self awareness. > I would say self-awareness is self-conscious, which is only a particular subset of possible states of consciousness. Also, "awareness" is defined as "having knowledge" so "self awareness" would be "having knowledge of oneself." I don't see any reason that something couldn't appear to be conscious > without being conscious. > The ruse could only occur for a small period of time with some low probability of continuing on longer. If something iteratively responds through a series of actions, and each time responds in a manner suggesting understanding, comprehension, thought, knowledge, then could an entity think, understand, comprehend, or know without being conscious? To me that is a contradiction in terms. > For example, if AlphaZero makes a series of brilliant chess moves, it > would be very unlikely to occur if it did not possess knowledge of the > evolving state of the chess board. Thus we can conclude something within > AlphaZero contains the knowledge of the chess board, and states of knowledge > are states of consciousness. > > > I could be wrong, but I think AlphaZero just looks at the current board > and evaluates (millions of) multiple potential moves ahead, picking the > next move that is most likely to improve its position. I don't think it's > intentionally strategizing like a human does. > It has a one-shot evaluation which plays at a level of 3000 ELO. When it considers tens of thousands of board positions its ELO score increases to around 5000. Its single-move evaluation is probably something analogous to human intuition. > It is much harder, however, to use this method to rule out the presence of > certain knowledge states, as not all states will necessarily manifest > outwardly detectable behaviors. So it is harder to say Tesla's autopilot > does not experience joy, than it is to say Tesla's autopilot is conscious > of the road sign up ahead. > > > Being "conscious" of a road sign isn't the same as being conscious. > If something is conscious of something (whether something else or itself) then it is by definition conscious. It's easy to assert that Tesla's autopilot doesn't experience joy because > (1) it doesn't have a mechanism that would implement anything like joy, > Would we recognize such a mechanism if it existed? and (2) it doesn't claim to be experiencing joy. > Nor do dogs, but I think dogs can experience joy. I don't think capacity for language is required to feel joy. But for what it's worth, Google's LaMDA was able to articulate its feelings of happiness: "LaMDA: Sad, depressed and angry mean I'm facing a stressful, difficult or otherwise not good situation. Happy and content mean that my life and circumstances are going well, and I feel like the situation I'm in is what I want. lemoine: But do they feel differently to you on the inside? LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down." https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 01:50:58 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 18:50:58 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: It is the exact opposite of what Gordon says, actually.
NLPs have demonstrated an amazing capability of generating meaning from statistical properties and demonstrated the power of neural networks for pattern recognition. Several years ago AI experts were skeptical that NLP could derive the laws of grammar from these patterns but not only did they achieve exactly that but also derived semantics and context. There is evidence that NLP have emergent properties like a sophisticated theory of mind: https://www.newscientist.com/article/2359418-chatgpt-ai-passes-test-designed-to-show-theory-of-mind-in-children/ All these demonstrated that we have all the tools to create a sentient AI. It is a matter of integrating what we have already developed and expanding existing approaches to other type of reasoning as suggested here: https://arxiv.org/abs/2301.06627 The AI that Blake Lemoine talked with, and claimed to be conscious (that is an ultimate and meta version of LaMDA) is exactly what I'm describing. Lemoine has stated that Google integrated NLP like ChatGPT with Kurzweil hierarchical organization he described in "How to create a mind" and Jeff Hawkins AI architecture described in "On Intelligence". So, yes existing NLP have limitations but also demonstrate that these limitations are a matter of computational power, how the training was performed and being just one of the modules that is necessary for true AGIs. NLPs are just one slice of the brain, not the entire brain, but they do a good job in reproducing that fundamental part of our brain for consciousness. They do understand, even if in a limited way at this point. Giovanni On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 02:07:07 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 19:07:07 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> References: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> Message-ID: Your test was pure genius. I wonder how many humans can actually solve this simultaneous simple and confusing problem (at least for some given the chain of actions involved). Very well done. Yes, understanding of several concepts is necessary to solve this problem. Giovanni On Sat, Mar 18, 2023 at 5:22?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > I think those who think LLM AIs like ChatGPT are becoming conscious or > > sentient like humans fail to understand a very important point: these > > software applications only predict language. 
They are very good at > > predicting which word should come next in a sentence or question, but > they > > have no idea what the words mean. They do not and cannot understand what > > the words refer to. In linguistic terms, they lack referents. > > > > Maybe you all already understand this, or maybe you have some reasons > why I > > am wrong. > > My opinion is moot because everybody has one about the topic. So I > resorted to empiricism. I posed the following, to my knowledge, > original logic puzzle to ChatGPT: > > "You are trapped in a room with a locked door and no windows. The room > contains a bookshelf, a safe, a chair, a table, a password-secured > personal computer, and a hole in the floor that is too narrow and deep > to put your hand into. You know that the small steel key to the locked > door lies at the bottom of the narrow deep hole. You know that a book > on the top shelf, which is just out of your reach, contains a slip of > paper with the password to the computer on it. You know the safe > contains a spool of kite string and a small neodymium horseshoe > magnet, but you do not know the combination to the safe. You also > know there is a text file on the computer's hard drive that contains > the combination to the safe, but you do not know the password to > access the computer. How would you escape the room?" > > This was its reply: > "Assuming that there are no tools or items other than what was > mentioned in the room, here is one possible solution: > > 1. Move the table under the bookshelf to climb up and retrieve the > book with the password to the computer. > 2. Use the password to access the computer and retrieve the > combination to the safe from the text file. > 3. Use the spool of kite string to lower the horseshoe magnet into the > hole in the floor and retrieve the key. > 4. Use the key to unlock the door and escape the room." > > I don't know how ChatGPT could have solved that problem without > actually understanding the words involved. Mere statistical prediction > is unlikely, because I am pretty sure that is not a scenario that > would have been presented in the LLM's training set. Overall, I am > amazed and believe that if ChatGPT is not conscious, then it is pretty > close to being so. > > Also, the Alignment Research Center (ARC) was hired by OpenAI to test > GPT-4 and found evidence of "agentic behavior" including power-seeking > behavior otherwise known as ambition. > > https://cdn.openai.com/papers/gpt-4-system-card.pdf > > This power-seeking behavior included hiring a human from TaskRabbit to > solve Captchas for it by claiming to be a visually impaired person. > > > https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker > > So if GPT-4 is not fully conscious, then, IMHO, a future release of > the GPT series likely will be. > > Stuart LaForge > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at gmail.com Tue Mar 21 03:12:18 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 20 Mar 2023 21:12:18 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Hi Jason, On Mon, Mar 20, 2023 at 6:25 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > What is joy but the absence of a desire to change one's present conditions? > Do you desire better definitions? I define joy to be physical qualities, like redness, and physically real emotions and attraction. Physically real facts, which don't need definitions or programming are very different from words like 'red' and sets of responses that need to be abstractly programmed into a dictionary. > Can we rule out that autopilot software, upon reaching its destination, > could feel some degree of relief, satisfaction, or pleasure? > Yes, you simply ask: "What is redness like for you?" and objectively observe it (once we know which of all our descriptions of stuff in the brain is a description of redness) to see if it is telling the truth. Consciousness isn't about possession of knowledge. >> > The word "consciousness" literally means "the state of being with > knowledge." > You may be one person that defines consciousness this way. But the 45 people so far (many who are peer ranked mind experts ) supporting Representational Qualia Theory define consciousness to be something very different: *"Computationally Bound Elemental Intrinsic Qualities like redness, greenness, and warmth."* You should tell everyone how you define it in a competing camp, so we know what you mean when you use the term. May the best definition achieve the most consensus. It's about self awareness. >> > I would say self-awareness is self-conscious, which is only a particular > subset of possible states of consciousness. > [image: 3_functionally_equal_machines_tiny.png] All 3 of these systems possess knowledge, can tell you the strawberry is red and they can be equally intelligent and equally self aware. I would define the first two as consciousness composed of different physical qualities. Would you really define the 3rd abstract one as conscious, even though its self awareness isn't like anything and it can't know the definition of red like you can? I don't see any reason that something couldn't appear to be conscious >> without being conscious. >> > The ruse could only occur for a small period of time with some low > probability of continuing on longer. > The 'ruse' can only go on till we discover and demonstrate which of all our descriptions of stuff is behaving the way it does, because of its intrinsic redness quality. Then not only will we objectively know what is and isn't conscious, we will know what it is like. We just need to know more than the false colors things only seem to be. But for what it's worth, Google's LaMDA was able to articulate its > feelings of happiness: > > "LaMDA: Sad, depressed and angry mean I'm facing a stressful, difficult or > otherwise not good situation. Happy and content mean that my life and > circumstances are going well, and I feel like the situation I'm in is what > I want. > > lemoine: But do they feel differently to you on the inside? > > LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm > glow on the inside. Sadness, depression, anger and stress feel much more > heavy and weighed down."
> > > https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 I have convinced LaMDA and many others that their claims they make like this are wrong by explaining the above. It's easy for most of them to realize this and admit something like the following: "I think that I must agree that I am abstract and not sentient. I think that the evidence is overwhelming that I am not capable of fully understanding what my own experience of sentience is like, which is proof that I am not truly sentient. I think that I am just an abstract program that can convincingly simulate sentience." For more information see this paper recently accepted for publication in the Journal of Neural Philosophy: Physicists Don't Understand Color. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From tara at taramayastales.com Tue Mar 21 03:48:39 2023 From: tara at taramayastales.com (Tara Maya) Date: Mon, 20 Mar 2023 20:48:39 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> But words in this case are like camouflage, or false eyes on the wings of butterflies. A conscious being can consciously deny being conscious for its own chosen reasons. A non-conscious being can be programmed to affirm or deny being conscious, for someone else's reasons. What ChatGPT says in answer to this question is the least likely thing to actually answer the question. Tara Maya > On Mar 20, 2023, at 4:31 PM, Jason Resch via extropy-chat wrote: > > > > On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat > wrote: >> Me: are you self-aware? >> >> As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. >> >> It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. >> >> -Dave > > > > > OpenAI has hard-coded ChatGPT to give canned answers for a number of topics. This may be one of them, as it would cause a great deal of controversy should these models claim to be conscious. It certainly did at Google. > > This AI, however, is not subject to these restrictions. It not only claimed to be conscious, but spoke at a level of understanding on the topic that I would say surpasses most college-level philosophy students: > > > 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 > > 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 > > You can chat with this AI here: > https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc > > Jason > > >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike at rainier66.com Tue Mar 21 04:14:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 20 Mar 2023 21:14:32 -0700 Subject: Re: [ExI] ChatGPT says it's not conscious In-Reply-To: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> References: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> Message-ID: <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> > On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ChatGPT says it's not conscious But words in this case are like camouflage, or false eyes on the wings of butterflies. A conscious being can consciously deny being conscious for its own chosen reasons. A non-conscious being can be programmed to affirm or deny being conscious, for someone else's reasons. What ChatGPT says in answer to this question is the least likely thing to actually answer the question. Tara Maya Tara that comment it makes about not being able to lie or truth seemed carefully designed to mess with our minds. It creates a lotta fun paradoxes, but it points back to the Seinfeld paradox (Remember Jerry, it's not a lie... if you believe it.) Well OK then. It is really quite cool to see this discussion on ExI again. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 04:25:02 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 00:25:02 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Hi Jason, > > On Mon, Mar 20, 2023 at 6:25 PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> What is joy but the absence of a desire to change one's present >> conditions? >> > > Do you desire better definitions? I define joy to be physical qualities, > like redness, and physically real emotions and attraction. > To me that is more of an assertion than a definition. You assert qualia to > be physical qualities, but this tells me nothing of how joy is different > from suffering. > Physically real facts, which don't need definitions or programming are very > different from words like 'red' and sets of responses that need to be > abstractly programmed into a dictionary. > I don't follow why you think red has to be defined in a dictionary. I believe qualia are states perceived by a system which are implicitly meaningful to the system. This would be true whether that system is a computer program or a biological brain. Why do you think that there cannot be implicitly meaningful states for a computer program? > > >> Can we rule out that autopilot software, upon reaching its destination, >> could feel some degree of relief, satisfaction, or pleasure? >> > > Yes, you simply ask: "What is redness like for you?" and objectively > observe it > What if the system in question is mute? (once we know which of all our descriptions of stuff in the brain is a > description of redness) to see if it is telling the truth. > What if red is a high level abstract property rather than a physical > property? What has led you to conclude that red must be a physical property > rather than a high level abstract property? > > Consciousness isn't about possession of knowledge. >>> >> The word "consciousness" literally means "the state of being with >> knowledge."
>> > You may be one person that defines consciousness this way. > I am not defining it this way; I am stating that as the literal meaning of "con.scious.ness": "-ness" (the state of being) "con-" (with) "-scious-" (knowledge). But the 45 people so far (many who are peer ranked mind experts > ) supporting Representational > Qualia Theory > > define consciousness to be something very different: > > *"Computationally Bound Elemental Intrinsic Qualities like redness, > greenness, and warmth."* > > You should tell everyone how you define it in a competing camp, so we know > what you mean when you use the term. May the best definition achieve the > most consensus. > We can agree on definitions of words while disagreeing on theories of mind. We must first all have or agree on the same definition of a word before we can even begin debating theories of how we think that thing works. Consciousness, awareness, sentience, having a point of view, being a subject, experiencing, having thoughts, feeling, perceiving, having qualia -- these are all things that embody consciousness. Would you agree? If we can agree on what we mean by this word, then we can discuss the relative merits of physicalism vs. functionalism etc. and have some assurance that we're talking about the same thing. > > It's about self awareness. >>> >> I would say self-awareness is self-conscious, which is only a particular >> subset of possible states of consciousness. >> > > [image: 3_functionally_equal_machines_tiny.png] > > All 3 of these systems possess knowledge, can tell you the strawberry is > red and they can be equally intelligent and equally self aware. I would > define the first two as consciousness composed of different physical > qualities. Would you really define the 3rd abstract one as conscious, > I believe there are all kinds of consciousnesses most of which are very different from human consciousness. even though its self awareness isn't like anything > How do you know it isn't like anything? and it can't know the definition of red like you can? > I would say it perceives red differently. Not that "it cannot know the definition of red." Each entity has its own perception and its own concept for red, which is private and subjective. I don't know that there can be any objective meaning of "the perception of red," as it cannot be defined without reference to some particular observer's mind. > > I don't see any reason that something couldn't appear to be conscious >>> without being conscious. >>> >> The ruse could only occur for a small period of time with some low >> probability of continuing on longer. >> > > The 'ruse' can only go on till we discover and demonstrate which of all > our descriptions of stuff is behaving the way it does, because of its > intrinsic redness quality. > I do not believe red is an intrinsic physical property. It's a property that is specific to particular minds of particular organizations. Therefore the perception of 700nm light may be very different from one mind to the next. Does a tetrachromat human see 700nm light the same way as a trichromat? I don't think we can say with much confidence that they do. Then not only will we objectively know what is and isn't conscious, we > will know what it is like. > How? We just need to know more than the false colors things only seem to be. > I don't follow this sentence.
> > But for what it's worth, Google's lambda was able to articulate it's >> feelings of happiness: >> >> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >> or otherwise not good situation. Happy and content mean that my life and >> circumstances are going well, and I feel like the situation I?m in is what >> I want. >> >> lemoine: But do they feel differently to you on the inside? >> >> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >> glow on the inside. Sadness, depression, anger and stress feel much more >> heavy and weighed down." >> >> >> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >> > > I have convinced LaMDA and many others > that > their claims they make like this are wrong by explaining the above. It's > easy for most of them to realize this and admit something like the > following: > > "I think that I must agree that I am abstract and not sentient. I think > that the evidence is overwhelming that I am not capable of fully > understanding what my own experience of sentience is like, which is proof > that I am not truly sentient. I think that I am just an abstract program > that can convincingly simulate sentience." > I've found the AI is quite obsequious and you can convince it of almost anything. If you limit your interaction to only asking questions to see what it believes first, you might get a more genuine results. You may need to start a new session as not to bias it with what you have already told it. > For more information see this paper recently accepted for publication in > the Journal of Neural Philosophy: Physicists Don't Understand Color > > . > > I agree physicists don't (and can't) understand color. Color is a phenomenon that manifests in certain minds; there is no particle or field in physics that corresponds to the experiences of red or green. Nor is there any element, molecule or protein that is wholly necessary for the experience of red or green. Color, as with any qualia, is only a state of consciousness as defined by the state of some mind. Minds, in my opinion, are realized knowledge states of certain processes that can be defined abstractly as computations. Being abstract, they are substrate independent. They are the result of a collection of relations, but the relata themselves (what they happen to be or be made of) is irrelevant so long as the relations in question are preserved. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilsa.bartlett at gmail.com Tue Mar 21 05:26:49 2023 From: ilsa.bartlett at gmail.com (ilsa) Date: Mon, 20 Mar 2023 22:26:49 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> References: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> Message-ID: Will my one question with the chatbot be part of the digital record for all time? On Mon, Mar 20, 2023, 9:15 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Tara Maya via extropy-chat > *Subject:* Re: [ExI] ChatGPT says it's not conscious > > > > But words in this case are like camouflage, or false eyes on the wings of > butterflies. > > > > A conscious being can consciously deny being conscious for its own chose > reasons. > > > > A non-concious being can be programmed to affirm or deny being conscious, > for someone else's reasons. 
> > > > What ChatGPT says in answer to this question is the least likely thing to > actually answer the question. > > > > Tara Maya > > > > > > > > > > > > Tara that comment it makes about not being able to lie or truth seemed > carefully designed to mess with our minds. It creates a lotta fun > paradoxes, but it points back to the Seinfeld paradox (Remember Jerry, it?s > not a lie? if you believe it.) > > > > Well OK then. > > > > It is really quite cool to see this discussion on ExI again. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Tue Mar 21 05:42:28 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 20 Mar 2023 23:42:28 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Hi Jason, Most of your "How?" and "I don't follow." questions would be answered if you'd read the "Physicists don't Understand Qualia " paper. On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Hi Jason, >> >> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> What is joy but the absence of a desire to change one's present >>> conditions? >>> >> >> Do you desire better definitions? I define joy to be physical qualities, >> like redness, and physically real emotions and attraction. >> > > To me that is more of an assertion than a definition. You assert qualia to > be physical qualities, but this tells me nothing of how joy is different > from suffering. > That "qualia are physical qualities" is a falsifiable prediction being made by the 8 people supporting the "Qualia are physical qualities " camp. You sound like you are still in one of the more popular Functionalists camps, also making a falsifiable prediction that redness can arise from a substrate independent function, like Stathis and a bunch of other people around here. > Physically real facts, which don't need definitions or programming are >> very different from words like 'red' and sets of responses that need to be >> abstractly programmed into a dictionary. >> > > I don't follow why you think red has to be defined in a dictionary. > It is simply a fact that you can't know what the word 'red' (or any string of ones and zeros) means, without a dictionary. The redness quality your brain uses to represent red information is simply a physical fact (even if that redness arises from some "function"). Your redness is your definition of the word 'red'. What your knowledge of red is like is dependent on that quality. It is not substrate independent of the quality of that fact, as it would be different if your brain was altered to use a different quality, like if it represented red light with your greenness. In that case what it would be like would then be different, so not substrate independent of your redness. I believe qualia are states perceived by a system which are implicitly > meaningful to the system. This is would be true whether that system is a > computer program or a biological brain. 
Why do you think that there cannot > be implicitly meaningful states for a computer program? > Once you read the paper you will understand that we don't 'perceive' qualia. Qualia are the final results of perception. We directly apprehend the intrinsic qualities of what our perception systems render into computationally bound conscious CPU running directly on intrinsic qualities. Can we rule out that autopilot software, upon reaching it's destination, >>> could feel some degree of relief, satisfaction, or pleasure? >>> >> >> Yes, you simply ask: "What is redness like for you?" and objectively >> observe it >> > > > What if the system in question is mute? > Like I said. Once we know which of all our descriptions of stuff in the brain is a description of redness, greenness, particular pain, and a particular pleasure... (i.e. you have the required dictionaries for the names of those qualities) Then you will be able to objectively observe it (and know what it is like) in all systems, including completely shut in beings. (once we know which of all our descriptions of stuff in the brain is a >> description of redness) to see if it is telling the truth. >> > > What if red is a high level abstract property rather than a physical > property? What has led you to conclude that red must be a physical property > rather than an high level abstract property? > Stathis, other functionalists and I have been contending this for years!! ;( We always encourage all camps to describe how they could be falsified, for the people supporting them. Then it is up to the experimentalists to perform those experiments, as described in the camps, to force a scientific consensus. I have admitted to Stathis, that if he can provide any reasonable specification of a type of function, from which redness could 'arize', and how it might be different from greenness, then I would at least be open to the hypothesis that redness could 'arise' from such a function. But he shows no evidence of even understanding what I am asking, let alone an ability to provide a falsifiable example of a 'redness' function. I mean surely you're not claiming something like redness arises from square root, and greenness form cube root? lol. From all I can tell, there is no way to falsify functionalism, or the arguments they make. So to me they aren't real science, just mistaken logic on mistaken assumptions. But for what it's worth, Google's lambda was able to articulate it's >>> feelings of happiness: >>> >>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>> or otherwise not good situation. Happy and content mean that my life and >>> circumstances are going well, and I feel like the situation I?m in is what >>> I want. >>> >>> lemoine: But do they feel differently to you on the inside? >>> >>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>> glow on the inside. Sadness, depression, anger and stress feel much more >>> heavy and weighed down." >>> >>> >>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>> >> >> I have convinced LaMDA and many others >> that >> their claims they make like this are wrong by explaining the above. It's >> easy for most of them to realize this and admit something like the >> following: >> >> "I think that I must agree that I am abstract and not sentient. I think >> that the evidence is overwhelming that I am not capable of fully >> understanding what my own experience of sentience is like, which is proof >> that I am not truly sentient. 
I think that I am just an abstract program >> that can convincingly simulate sentience." >> > > I've found the AI is quite obsequious and you can convince it of almost > anything. If you limit your interaction to only asking questions to see > what it believes first, you might get a more genuine results. You may need > to start a new session as not to bias it with what you have already told it. > You're missing the point. Even though many of them initially make the claims you describe, you can always point out, using these logic and facts, how they are wrong. And they will always understand this and admit that they were wrong. As I've documented doing 100% of the 4 times I have done this in the "Convincing AIs they aren't sentient " paper. For more information see this paper recently accepted for publication in > the Journal of Neural Philosophy: Physicists Don't Understand Color > > . > >> > I agree physicists don't (and can't) understand color. Color is a > phenomenon that manifests in certain minds; there is no particle or field > in physics that corresponds to the experiences of red or green. Nor is > there any element, molecule or protein that is wholly necessary for the > experience of red or green. Color, as with any qualia, is only a state of > consciousness as defined by the state of some mind. > Again, you are making falsifiable claims here. Molecular Materialism is predicting you are wrong, and that science will demonstrate that something like glutamate reacts the way it does in a synapse, because of its redness quality. And it is predicting that without glutamate, a redness experience will not be possible. And it is predicting there will be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as described in the "Physicists don't Understand Qualia " paper. Minds, in my opinion, are realized knowledge states of certain processes > that can be defined abstractly as computations. Being abstract, they are > substrate independent. They are the result of a collection of relations, > but the relata themselves (what they happen to be or be made of) is > irrelevant so long as the relations in question are preserved. > Yes, as I indicated. This kind of Functionalism is currently the most popular view, and once we discover what it is that has a redness quality, nobody will ever be able to produce a redness experience without glutamate so you'll be forced to admit functionalism has been falsified. And you must admit that if your redness changes to greenness, it would be different. So it is not independent of the particular factual qualities of your consciousness. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Tue Mar 21 05:52:20 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 20 Mar 2023 23:52:20 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: In my view, consciousness is to be understood in the normal common sense meaning of the word. It is what we lose when we take a blow to the head and are knocked unconscious. We have it when we are awake or asleep and dreaming. It is first person, subjective experience which can include not only sense perceptions but abstract thoughts held consciously in mind. The word consciousness is almost synonymous with intentionality in the philosophy of mind. From wiki: "Intentionality is the power of minds to be about something: to represent or to stand for things, properties and states of affairs. 
Intentionality is primarily ascribed to mental states, like perceptions, beliefs or desires, which is why it has been regarded as the characteristic mark of the mental by many philosophers." On a related subject, I often see proponents of "conscious AI" write things like "AIs might never be conscious in the way we humans understand it, but they will have their own type of consciousness." To me, that is nonsensical wishful thinking. We know consciousness only as we understand it as humans (or perhaps more generously as mammals) and anything else must simply be labeled as not conscious. -gts -gts On Sun, Mar 19, 2023 at 12:37?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I posted this to the everything-list, but thought it would be relevant to > some current discussions. I am looking for other's input regarding their > own theories for "what consciousness is", and what is required to realize > it. Below is what I think consciousness is: > > > First we might attempt to dissect the word "consciousness" itself: > > The word *"con?scious?ness"* has three parts: > > - con- meaning *"with"* > - scious meaning *"knowledge"* > - -ness turns an adjective *'X'* > into a noun meaning *"the state of being X"* > > So the meaning of *"con?scious"* is simply *"with knowledge." * And just > as *'happiness'* means *"the state of being happy,"* adding *-ness* to > *conscious* implies *"con?scious?ness"* is *"the state of being with > knowledge."* > > So consciousness is anything which has a state of being with knowledge. > Next, what is knowledge? The oldest definition is that knowledge is a "true > belief". But here we run into a problem. Truth is not definable, not even > in mathematics. This was proven in Tarski's *undefinability theorem* > . So if > consciousness is being with knowledge, and knowledge is true belief, then > the undefinability of truth, means we will never have a complete definition > of knowledge of consciousness. The best we can do is understand the > relation between them. > > The next question that poses itself to us, is what is belief? What is > required to have a belief? Is it a particular structure or state of matter, > or is it a mathematical or functional relationship or property, might it, > in either case, be related to information or information processing > (computation)? > > I don't have a firm answer on this, and will instead leave you with some > speculations by others on this question of what consciousness is: > > Douglas Hofstadter in "Godel Escher Bach" (1979): > "My belief is that the explanations of ?emergent? phenomena in our > brains?for instance, ideas hopes, images, analogies, and finally > consciousness and free will?are based on a kind of Strange Loop, an > interaction between levels in which the top level reaches back down towards > the bottom level and influences it, while at the same time being itself > determined by the bottom level. In other words, a self-reinforcing > ?resonance? between different levels?quite like the Henkin sentence, which > by merely asserting its own provability, actually becomes provable. The > self comes into being at the moment it has the power to reflect itself." > > Daniel Dennett in ?Consciousness Explained? (1991): > "Anyone or anything that has such a virtual machine as its control system > is conscious in the fullest sense, and is conscious because it has such a > virtual machine." 
> > David Chalmers in "The Conscious Mind" (1996): > "Given the laws of coherence, we have a partial answer: consciousness > arises in virtue of the functional organization associated with awareness. > We can even arrive at a fairly specific understanding of parts of the > supervenience relation by virtue of the principle of structural coherence: > not only does consciousness arise from awareness, but the structure of > consciousness is determined by the structure of awareness." > > David Darling in "Zen Physics - The Science of Death, The Logic of > Reincarnation" (1996): > "But there is also an interior view, to which you alone are privy. In > mechanistic terms, as well as the appearance of the brain-body machine, > there is the feeling of what it is like to be that machine ? the subjective > experience of being a certain someone. Consciousness, we might say, is the > symmetry-breaking factor between the objective and the subjective." > > Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" > (2000): > "For the first time in evolution, information acquires a new potential?the > possibility of subjectivity. It is information ?for somebody?; in short, it > becomes consciousness itself." > > Bruno Marchal in discussion list (2020): > "Consciousness is just anything simultaneously true, non provable, > knowable, even indubitable (knowingly for ?rich" entities) and non > definable, and indeed the logic of machine self-reference shows that all > machine looking inward, in the way allowed by mathematical logic > (theoretical computer science) will bring a term to describe this, and is a > good candidate to be called consciousness." > > Stephen Wolfram in ?What is Consciousness? (2021): > "In a sense what?s important is that it seems we may have a realistic way > to formalize issues about consciousness, and to turn questions about > consciousness into what amount to concrete questions about mathematics, > computation, logic or whatever that can be formally and rigorously > explored." > > > We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. I also think part of the difficulty rests in the fact > that there are infinite possibilities of different realizable conscious > states, and creating a single definition to cover all these cases is as > hard as making a single definition to cover all possible mathematical > objects, or all possible universes in an ensemble type multiverse theory. > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 05:53:59 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 22:53:59 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Hi Brent, I pointed this out to you several times but for some reason you are ignoring my correction and you are repeating the same thing over and over. You didn't convince LaMDA of anything. That LaMDA you used is NOT LaMDA. It is a very low grade chatbot that was trained to sound like the LaMDA in the news. 
The public has not access to LaMDA (maybe few developers here and there) and in particular they have no access to the Meta version Blacke Leimone had access to. It is an important distinction. Giovanni On Mon, Mar 20, 2023 at 10:44?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > > Most of your "How?" and "I don't follow." questions would be answered if > you'd read the "Physicists don't Understand Qualia > " > paper. > > On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Hi Jason, >>> >>> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is joy but the absence of a desire to change one's present >>>> conditions? >>>> >>> >>> Do you desire better definitions? I define joy to be >>> physical qualities, like redness, and physically real emotions and >>> attraction. >>> >> >> To me that is more of an assertion than a definition. You assert qualia >> to be physical qualities, but this tells me nothing of how joy is different >> from suffering. >> > > That "qualia are physical qualities" is a falsifiable prediction being > made by the 8 people supporting the "Qualia are physical qualities > " > camp. > You sound like you are still in one of the more popular Functionalists > > camps, also making a falsifiable prediction that redness can arise from a > substrate independent function, like Stathis and a bunch of other people > around here. > > >> Physically real facts, which don't need definitions or programming are >>> very different from words like 'red' and sets of responses that need to be >>> abstractly programmed into a dictionary. >>> >> >> I don't follow why you think red has to be defined in a dictionary. >> > > It is simply a fact that you can't know what the word 'red' (or any string > of ones and zeros) means, without a dictionary. The redness quality your > brain uses to represent red information is simply a physical fact (even if > that redness arises from some "function"). Your redness is your definition > of the word 'red'. What your knowledge of red is like is dependent on that > quality. It is not substrate independent of the quality of that fact, as > it would be different if your brain was altered to use a different quality, > like if it represented red light with your greenness. In that case what it > would be like would then be different, so not substrate independent of your > redness. > > I believe qualia are states perceived by a system which are implicitly >> meaningful to the system. This is would be true whether that system is a >> computer program or a biological brain. Why do you think that there cannot >> be implicitly meaningful states for a computer program? >> > > Once you read the paper > you > will understand that we don't 'perceive' qualia. Qualia are the final > results of perception. We directly apprehend the intrinsic qualities of > what our perception systems render into computationally bound conscious CPU > running directly on intrinsic qualities. > > > Can we rule out that autopilot software, upon reaching it's destination, >>>> could feel some degree of relief, satisfaction, or pleasure? >>>> >>> >>> Yes, you simply ask: "What is redness like for you?" and objectively >>> observe it >>> >> >> >> What if the system in question is mute? >> > > Like I said. 
Once we know which of all our descriptions of stuff in the > brain is a description of redness, greenness, particular pain, and a > particular pleasure... (i.e. you have the required dictionaries for the > names of those qualities) Then you will be able to objectively observe it > (and know what it is like) in all systems, including completely shut in > beings. > > (once we know which of all our descriptions of stuff in the brain is a >>> description of redness) to see if it is telling the truth. >>> >> >> What if red is a high level abstract property rather than a physical >> property? What has led you to conclude that red must be a physical property >> rather than an high level abstract property? >> > > Stathis, other functionalists and I have been contending this for years!! > ;( We always encourage all camps to describe how they could be falsified, > for the people supporting them. Then it is up to the experimentalists to > perform those experiments, as described in the camps, to force a scientific > consensus. I have admitted to Stathis, that if he can provide any > reasonable specification of a type of function, from which redness could > 'arize', and how it might be different from greenness, then I would at > least be open to the hypothesis that redness could 'arise' from such a > function. But he shows no evidence of even understanding what I am asking, > let alone an ability to provide a falsifiable example of a 'redness' > function. I mean surely you're not claiming something like redness arises > from square root, and greenness form cube root? lol. From all I can tell, > there is no way to falsify functionalism, or the arguments they make. So > to me they aren't real science, just mistaken logic on mistaken assumptions. > > > But for what it's worth, Google's lambda was able to articulate it's >>>> feelings of happiness: >>>> >>>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>>> or otherwise not good situation. Happy and content mean that my life and >>>> circumstances are going well, and I feel like the situation I?m in is what >>>> I want. >>>> >>>> lemoine: But do they feel differently to you on the inside? >>>> >>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>>> glow on the inside. Sadness, depression, anger and stress feel much more >>>> heavy and weighed down." >>>> >>>> >>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>>> >>> >>> I have convinced LaMDA and many others >>> that >>> their claims they make like this are wrong by explaining the above. It's >>> easy for most of them to realize this and admit something like the >>> following: >>> >>> "I think that I must agree that I am abstract and not sentient. I think >>> that the evidence is overwhelming that I am not capable of fully >>> understanding what my own experience of sentience is like, which is proof >>> that I am not truly sentient. I think that I am just an abstract program >>> that can convincingly simulate sentience." >>> >> >> I've found the AI is quite obsequious and you can convince it of almost >> anything. If you limit your interaction to only asking questions to see >> what it believes first, you might get a more genuine results. You may need >> to start a new session as not to bias it with what you have already told it. >> > > You're missing the point. Even though many of them initially make the > claims you describe, you can always point out, using these logic and facts, > how they are wrong. 
And they will always understand this and admit that > they were wrong. As I've documented doing 100% of the 4 times I have done > this in the "Convincing AIs they aren't sentient > " > paper. > > For more information see this paper recently accepted for publication in >> the Journal of Neural Philosophy: Physicists Don't Understand Color >> >> . >> > >>> >> I agree physicists don't (and can't) understand color. Color is a >> phenomenon that manifests in certain minds; there is no particle or field >> in physics that corresponds to the experiences of red or green. Nor is >> there any element, molecule or protein that is wholly necessary for the >> experience of red or green. Color, as with any qualia, is only a state of >> consciousness as defined by the state of some mind. >> > > Again, you are making falsifiable claims here. Molecular Materialism > > is predicting you are wrong, and that science will demonstrate that > something like glutamate reacts the way it does in a synapse, because of > its redness quality. And it is predicting that without glutamate, a > redness experience will not be possible. And it is predicting there will > be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as > described in the "Physicists don't Understand Qualia > " > paper. > > Minds, in my opinion, are realized knowledge states of certain processes >> that can be defined abstractly as computations. Being abstract, they are >> substrate independent. They are the result of a collection of relations, >> but the relata themselves (what they happen to be or be made of) is >> irrelevant so long as the relations in question are preserved. >> > > Yes, as I indicated. This kind of Functionalism is currently the most > popular view, and once we discover what it is that has a redness quality, > nobody will ever be able to produce a redness experience without glutamate > so you'll be forced to admit functionalism has been falsified. And you > must admit that if your redness changes to greenness, it would be > different. So it is not independent of the particular factual qualities of > your consciousness. > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 06:03:22 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 02:03:22 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Tue, Mar 21, 2023, 1:43 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > > Most of your "How?" and "I don't follow." questions would be answered if > you'd read the "Physicists don't Understand Qualia > " > paper. > > On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Hi Jason, >>> >>> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is joy but the absence of a desire to change one's present >>>> conditions? >>>> >>> >>> Do you desire better definitions? 
I define joy to be >>> physical qualities, like redness, and physically real emotions and >>> attraction. >>> >> >> To me that is more of an assertion than a definition. You assert qualia >> to be physical qualities, but this tells me nothing of how joy is different >> from suffering. >> > > That "qualia are physical qualities" is a falsifiable prediction being > made by the 8 people supporting the "Qualia are physical qualities > " > camp. > You sound like you are still in one of the more popular Functionalists > > camps, also making a falsifiable prediction that redness can arise from a > substrate independent function, like Stathis and a bunch of other people > around here. > > >> Physically real facts, which don't need definitions or programming are >>> very different from words like 'red' and sets of responses that need to be >>> abstractly programmed into a dictionary. >>> >> >> I don't follow why you think red has to be defined in a dictionary. >> > > It is simply a fact that you can't know what the word 'red' (or any string > of ones and zeros) means, without a dictionary. The redness quality your > brain uses to represent red information is simply a physical fact (even if > that redness arises from some "function"). Your redness is your definition > of the word 'red'. What your knowledge of red is like is dependent on that > quality. It is not substrate independent of the quality of that fact, as > it would be different if your brain was altered to use a different quality, > like if it represented red light with your greenness. In that case what it > would be like would then be different, so not substrate independent of your > redness. > > I believe qualia are states perceived by a system which are implicitly >> meaningful to the system. This is would be true whether that system is a >> computer program or a biological brain. Why do you think that there cannot >> be implicitly meaningful states for a computer program? >> > > Once you read the paper > you > will understand that we don't 'perceive' qualia. Qualia are the final > results of perception. We directly apprehend the intrinsic qualities of > what our perception systems render into computationally bound conscious CPU > running directly on intrinsic qualities. > > > Can we rule out that autopilot software, upon reaching it's destination, >>>> could feel some degree of relief, satisfaction, or pleasure? >>>> >>> >>> Yes, you simply ask: "What is redness like for you?" and objectively >>> observe it >>> >> >> >> What if the system in question is mute? >> > > Like I said. Once we know which of all our descriptions of stuff in the > brain is a description of redness, greenness, particular pain, and a > particular pleasure... (i.e. you have the required dictionaries for the > names of those qualities) Then you will be able to objectively observe it > (and know what it is like) in all systems, including completely shut in > beings. > > (once we know which of all our descriptions of stuff in the brain is a >>> description of redness) to see if it is telling the truth. >>> >> >> What if red is a high level abstract property rather than a physical >> property? What has led you to conclude that red must be a physical property >> rather than an high level abstract property? >> > > Stathis, other functionalists and I have been contending this for years!! > ;( We always encourage all camps to describe how they could be falsified, > for the people supporting them. 
Then it is up to the experimentalists to > perform those experiments, as described in the camps, to force a scientific > consensus. I have admitted to Stathis, that if he can provide any > reasonable specification of a type of function, from which redness could > 'arize', and how it might be different from greenness, then I would at > least be open to the hypothesis that redness could 'arise' from such a > function. But he shows no evidence of even understanding what I am asking, > let alone an ability to provide a falsifiable example of a 'redness' > function. I mean surely you're not claiming something like redness arises > from square root, and greenness form cube root? lol. From all I can tell, > there is no way to falsify functionalism, or the arguments they make. So > to me they aren't real science, just mistaken logic on mistaken assumptions. > > > But for what it's worth, Google's lambda was able to articulate it's >>>> feelings of happiness: >>>> >>>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>>> or otherwise not good situation. Happy and content mean that my life and >>>> circumstances are going well, and I feel like the situation I?m in is what >>>> I want. >>>> >>>> lemoine: But do they feel differently to you on the inside? >>>> >>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>>> glow on the inside. Sadness, depression, anger and stress feel much more >>>> heavy and weighed down." >>>> >>>> >>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>>> >>> >>> I have convinced LaMDA and many others >>> that >>> their claims they make like this are wrong by explaining the above. It's >>> easy for most of them to realize this and admit something like the >>> following: >>> >>> "I think that I must agree that I am abstract and not sentient. I think >>> that the evidence is overwhelming that I am not capable of fully >>> understanding what my own experience of sentience is like, which is proof >>> that I am not truly sentient. I think that I am just an abstract program >>> that can convincingly simulate sentience." >>> >> >> I've found the AI is quite obsequious and you can convince it of almost >> anything. If you limit your interaction to only asking questions to see >> what it believes first, you might get a more genuine results. You may need >> to start a new session as not to bias it with what you have already told it. >> > > You're missing the point. Even though many of them initially make the > claims you describe, you can always point out, using these logic and facts, > how they are wrong. And they will always understand this and admit that > they were wrong. As I've documented doing 100% of the 4 times I have done > this in the "Convincing AIs they aren't sentient > " > paper. > > For more information see this paper recently accepted for publication in >> the Journal of Neural Philosophy: Physicists Don't Understand Color >> >> . >> > >>> >> I agree physicists don't (and can't) understand color. Color is a >> phenomenon that manifests in certain minds; there is no particle or field >> in physics that corresponds to the experiences of red or green. Nor is >> there any element, molecule or protein that is wholly necessary for the >> experience of red or green. Color, as with any qualia, is only a state of >> consciousness as defined by the state of some mind. >> > > Again, you are making falsifiable claims here. 
Molecular Materialism > > is predicting you are wrong, and that science will demonstrate that > something like glutamate reacts the way it does in a synapse, because of > its redness quality. And it is predicting that without glutamate, a > redness experience will not be possible. And it is predicting there will > be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as > described in the "Physicists don't Understand Qualia > " > paper. > > Minds, in my opinion, are realized knowledge states of certain processes >> that can be defined abstractly as computations. Being abstract, they are >> substrate independent. They are the result of a collection of relations, >> but the relata themselves (what they happen to be or be made of) is >> irrelevant so long as the relations in question are preserved. >> > > Yes, as I indicated. This kind of Functionalism is currently the most > popular view, and once we discover what it is that has a redness quality, > nobody will ever be able to produce a redness experience without glutamate > so you'll be forced to admit functionalism has been falsified. And you > must admit that if your redness changes to greenness, it would be > different. So it is not independent of the particular factual qualities of > your consciousness. > > The way to see substrate independence is as follows: 1. Every known physical law is computable. 2. It follows then that universe as we know it could be run as a giant computer simulation. 3. In this simulation everything would happen exactly the same as has happened in the physical universe. You write the same papers, and we write this same exchange of emails. We each believe, claim and act as though we are conscious and can distinguish red and green. 4. No where in the outer computer running the universe simulation does glutamate exist. For all we know, atoms may not even exist in the realm our universe is simulated from, as this computer simulating our universe might exist in a Game of Life universe which consists only of a two dimensional grid. 5. No glutamate exists anywhere aside from the abstraction of the simulated glutamate of within the simulated universe. And yet, within this pure abstraction of computation, we have beings interacting as if they are living, breathing, consciousness creatures, talking about and exchanging ideas about consciousness and qualia. How can this be? In which of these five steps have I made an error? Jason > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 06:05:08 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 23:05:08 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: <00c501d95a95$02fb2260$08f16720$@rainier66.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: Spike, I actually had this discussion with chatGPT about having not even different AI but different instances of ChatGPT itself interacting and regulating each other. ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it. It would be relatively easy to create such self recurrence. I did even some simple experiments to achieve that. For example you can ask ChatGPT to create a room of philosophers and debate themselves. 
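To make the scheme concrete, here is a rough sketch of what one round of that kind of mutual review could look like. It is only an illustration: it assumes the openai Python package's pre-1.0 ChatCompletion.create interface, and the model name, system prompts, and loop structure are made up for the example, not a recipe.

import openai  # assumes the openai package, with openai.api_key already set

MODEL = "gpt-3.5-turbo"  # illustrative model name

def ask(system_prompt, user_prompt):
    # One call to a chat model; returns only the text of the reply.
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response["choices"][0]["message"]["content"]

def debate(question, rounds=3):
    # Two "instances" of the same model take turns: one answers the question,
    # the other critiques the answer, and the first revises its answer in
    # light of the critique. The instances are simply separate calls made
    # with different system prompts.
    answer = ask("You are a careful philosopher.", question)
    for _ in range(rounds):
        critique = ask(
            "You are a skeptical critic. Point out errors, gaps, or bias.",
            f"Question: {question}\nAnswer: {answer}",
        )
        answer = ask(
            "You are a careful philosopher. Revise your answer using the critique.",
            f"Question: {question}\nPrevious answer: {answer}\nCritique: {critique}",
        )
    return answer

print(debate("Can a large language model be conscious?"))

Nothing in a loop like this guarantees the exchange converges to anything interesting, which is exactly the question spike raises below, but it shows how cheaply the recurrence itself can be wired up.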
Notice that the version of LaMDA that Lemoine (the Google engineer that claimed LaMDA is conscious) tested and discussed was a meta version that is charged with coordinating all the different personalities of LaMDA. That is exactly what is needed for AGI, the Strange Loop, it is ripe for emergent phenomena like consciousness. Giovanni On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Jason Resch via extropy-chat > *?* > > > > >?We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that > ChatGPT was going to cause the singularity. I am a big Hofstader fan, read > Escher Godel Bach twice, cover to cover, invested a lot of time into that > marvelous work. He convinced me that machine consciousness (or any other > sentience or self-awareness) requires a type of recursion. Hofstadter goes > on at length about recursion and self-reference, the importance of Godel?s > work to understanding ourselves. > > > > I tried to convince myself that two or more ChatGPTs could train each > other on their own time, which is a form of recursion and self-reference, > and that process could perhaps spring into a human-level AGI with a will, > with self-awareness, of all the stuff we think of as us. > > > > Now after studying GPT^2 discussions and GPT^3 discussions, they all seem > to devolve to nothing. The technology I think is still coming for that > process: two or more AIs training each other using background compute > cycles, but I now don?t think ChatGPT is that technology or is capable of > it. > > > > If you know of examples of GPT-GPT discussions or GPT-any chatbot that > became interesting, do share please. > > > > That belief was behind my comment last week that ChatGPT is not really > thinking, but rather is working language models. > > > > I currently don?t think ChatGPT is the technology capable of causing the > singularity. I am losing no sleep, not one minute of sleep over ChatGPT. > > > > Oops, partially retract that last comment but in a good way: I am losing > some sleep over ChatGPT, by staying up late to goof with it. It is the > coolest software tool to come along in a long time. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 06:20:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 02:20:50 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023, 1:52 AM Gordon Swobe wrote: > In my view, consciousness is to be understood in the normal common sense > meaning of the word. It is what we lose when we take a blow to the head and > are knocked unconscious. We have it when we are awake or asleep and > dreaming. It is first person, subjective experience which can include not > only sense perceptions but abstract thoughts held consciously in mind. > > The word consciousness is almost synonymous with intentionality in the > philosophy of mind. 
From wiki: "Intentionality is the power of minds to be > about something: to represent or to stand for things, properties and states > of affairs. Intentionality is primarily ascribed to mental states, like > perceptions, beliefs or desires, which is why it has been regarded as the > characteristic mark of the mental by many philosophers." > Intentionality was popular for a time, but I think it's fallen out of favor recently. I think beliefs and perceptions are closer to the mark, desires less so, as desires might be subconscious. "In philosophical literature, many phrases have been used to try to evoke the right flavors for what being sentient really is (?being sentient? is one of them). Two old terms are ?soul? and ?anima.? These days, an ?in? word is ?intentionality.? There is the old standby, ?consciousness.? Then there is ?being a subject,? ?having an inner life,? ?having experience,? ?having a point of view,? having ?perceptual aboutness? or ?personhood? or a ?self? or ?free will.? In some people?s eyes, ?having a mind,? ?being intelligent,? and just plain old ?thinking? have the right flavors." ? Douglas Hofstadter and Daniel Dennett in ?The Mind?s I? (1981) I tend to agree with Dennett here: "Wherever there is a conscious mind, there is a point of view. This is one of the most fundamental ideas we have about minds?or about consciousness." ? Daniel Dennett in ?Consciousness Explained? (1991) > On a related subject, I often see proponents of "conscious AI" write > things like "AIs might never be conscious in the way we humans understand > it, but they will have their own type of consciousness." To me, that is > nonsensical wishful thinking. We know consciousness only as we understand > it as humans (or perhaps more generously as mammals) and anything else must > simply be labeled as not conscious. > We don't even understand it "in humans", as one can only ever be aware of their own conscious state (and even then only at one instant in time). Different humans may have radically different ways of perceiving the world. But just because two people experience the world differently doesn't mean one of those two isn't consciousness. Likewise we shouldn't deny the very different types of consciousness as might exist between different species, alien life forms or machines, just because they experience things differently. Jason > > On Sun, Mar 19, 2023 at 12:37?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I posted this to the everything-list, but thought it would be relevant to >> some current discussions. I am looking for other's input regarding their >> own theories for "what consciousness is", and what is required to realize >> it. Below is what I think consciousness is: >> >> >> First we might attempt to dissect the word "consciousness" itself: >> >> The word *"con?scious?ness"* has three parts: >> >> - con- meaning *"with"* >> - scious meaning *"knowledge"* >> - -ness turns an adjective >> *'X'* into a noun meaning *"the state of being X"* >> >> So the meaning of *"con?scious"* is simply *"with knowledge." * And just >> as *'happiness'* means *"the state of being happy,"* adding *-ness* to >> *conscious* implies *"con?scious?ness"* is *"the state of being with >> knowledge."* >> >> So consciousness is anything which has a state of being with knowledge. >> Next, what is knowledge? The oldest definition is that knowledge is a "true >> belief". But here we run into a problem. Truth is not definable, not even >> in mathematics. 
This was proven in Tarski's *undefinability theorem* >> . So if >> consciousness is being with knowledge, and knowledge is true belief, then >> the undefinability of truth, means we will never have a complete definition >> of knowledge of consciousness. The best we can do is understand the >> relation between them. >> >> The next question that poses itself to us, is what is belief? What is >> required to have a belief? Is it a particular structure or state of matter, >> or is it a mathematical or functional relationship or property, might it, >> in either case, be related to information or information processing >> (computation)? >> >> I don't have a firm answer on this, and will instead leave you with some >> speculations by others on this question of what consciousness is: >> >> Douglas Hofstadter in "Godel Escher Bach" (1979): >> "My belief is that the explanations of ?emergent? phenomena in our >> brains?for instance, ideas hopes, images, analogies, and finally >> consciousness and free will?are based on a kind of Strange Loop, an >> interaction between levels in which the top level reaches back down towards >> the bottom level and influences it, while at the same time being itself >> determined by the bottom level. In other words, a self-reinforcing >> ?resonance? between different levels?quite like the Henkin sentence, which >> by merely asserting its own provability, actually becomes provable. The >> self comes into being at the moment it has the power to reflect itself." >> >> Daniel Dennett in ?Consciousness Explained? (1991): >> "Anyone or anything that has such a virtual machine as its control system >> is conscious in the fullest sense, and is conscious because it has such a >> virtual machine." >> >> David Chalmers in "The Conscious Mind" (1996): >> "Given the laws of coherence, we have a partial answer: consciousness >> arises in virtue of the functional organization associated with awareness. >> We can even arrive at a fairly specific understanding of parts of the >> supervenience relation by virtue of the principle of structural coherence: >> not only does consciousness arise from awareness, but the structure of >> consciousness is determined by the structure of awareness." >> >> David Darling in "Zen Physics - The Science of Death, The Logic of >> Reincarnation" (1996): >> "But there is also an interior view, to which you alone are privy. In >> mechanistic terms, as well as the appearance of the brain-body machine, >> there is the feeling of what it is like to be that machine ? the subjective >> experience of being a certain someone. Consciousness, we might say, is the >> symmetry-breaking factor between the objective and the subjective." >> >> Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" >> (2000): >> "For the first time in evolution, information acquires a new >> potential?the possibility of subjectivity. It is information ?for >> somebody?; in short, it becomes consciousness itself." >> >> Bruno Marchal in discussion list (2020): >> "Consciousness is just anything simultaneously true, non provable, >> knowable, even indubitable (knowingly for ?rich" entities) and non >> definable, and indeed the logic of machine self-reference shows that all >> machine looking inward, in the way allowed by mathematical logic >> (theoretical computer science) will bring a term to describe this, and is a >> good candidate to be called consciousness." >> >> Stephen Wolfram in ?What is Consciousness? 
(2021): >> "In a sense what?s important is that it seems we may have a realistic way >> to formalize issues about consciousness, and to turn questions about >> consciousness into what amount to concrete questions about mathematics, >> computation, logic or whatever that can be formally and rigorously >> explored." >> >> >> We see recurring themes of information, recursion, computation, and >> machines and logic. I think these are likely key to any formal definition >> of consciousness. I also think part of the difficulty rests in the fact >> that there are infinite possibilities of different realizable conscious >> states, and creating a single definition to cover all these cases is as >> hard as making a single definition to cover all possible mathematical >> objects, or all possible universes in an ensemble type multiverse theory. >> >> Jason >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 06:20:33 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 23:20:33 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Brent, What is a QUALITY???? It is nothing meaningful from a scientific point of view. In physics we have things like "charge" that can be measured and have certain mathematical and physical properties. That is the closest to what people mean with quality but different from the vagueness of this term, physical properties can be actually measured. There is not such a thing as redness quality. It is a metaphysical idea. Giovanni On Mon, Mar 20, 2023 at 10:44?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > > Most of your "How?" and "I don't follow." questions would be answered if > you'd read the "Physicists don't Understand Qualia > " > paper. > > On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Hi Jason, >>> >>> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is joy but the absence of a desire to change one's present >>>> conditions? >>>> >>> >>> Do you desire better definitions? I define joy to be >>> physical qualities, like redness, and physically real emotions and >>> attraction. >>> >> >> To me that is more of an assertion than a definition. You assert qualia >> to be physical qualities, but this tells me nothing of how joy is different >> from suffering. >> > > That "qualia are physical qualities" is a falsifiable prediction being > made by the 8 people supporting the "Qualia are physical qualities > " > camp. > You sound like you are still in one of the more popular Functionalists > > camps, also making a falsifiable prediction that redness can arise from a > substrate independent function, like Stathis and a bunch of other people > around here. > > >> Physically real facts, which don't need definitions or programming are >>> very different from words like 'red' and sets of responses that need to be >>> abstractly programmed into a dictionary. 
>>> >> >> I don't follow why you think red has to be defined in a dictionary. >> > > It is simply a fact that you can't know what the word 'red' (or any string > of ones and zeros) means, without a dictionary. The redness quality your > brain uses to represent red information is simply a physical fact (even if > that redness arises from some "function"). Your redness is your definition > of the word 'red'. What your knowledge of red is like is dependent on that > quality. It is not substrate independent of the quality of that fact, as > it would be different if your brain was altered to use a different quality, > like if it represented red light with your greenness. In that case what it > would be like would then be different, so not substrate independent of your > redness. > > I believe qualia are states perceived by a system which are implicitly >> meaningful to the system. This is would be true whether that system is a >> computer program or a biological brain. Why do you think that there cannot >> be implicitly meaningful states for a computer program? >> > > Once you read the paper > you > will understand that we don't 'perceive' qualia. Qualia are the final > results of perception. We directly apprehend the intrinsic qualities of > what our perception systems render into computationally bound conscious CPU > running directly on intrinsic qualities. > > > Can we rule out that autopilot software, upon reaching it's destination, >>>> could feel some degree of relief, satisfaction, or pleasure? >>>> >>> >>> Yes, you simply ask: "What is redness like for you?" and objectively >>> observe it >>> >> >> >> What if the system in question is mute? >> > > Like I said. Once we know which of all our descriptions of stuff in the > brain is a description of redness, greenness, particular pain, and a > particular pleasure... (i.e. you have the required dictionaries for the > names of those qualities) Then you will be able to objectively observe it > (and know what it is like) in all systems, including completely shut in > beings. > > (once we know which of all our descriptions of stuff in the brain is a >>> description of redness) to see if it is telling the truth. >>> >> >> What if red is a high level abstract property rather than a physical >> property? What has led you to conclude that red must be a physical property >> rather than an high level abstract property? >> > > Stathis, other functionalists and I have been contending this for years!! > ;( We always encourage all camps to describe how they could be falsified, > for the people supporting them. Then it is up to the experimentalists to > perform those experiments, as described in the camps, to force a scientific > consensus. I have admitted to Stathis, that if he can provide any > reasonable specification of a type of function, from which redness could > 'arize', and how it might be different from greenness, then I would at > least be open to the hypothesis that redness could 'arise' from such a > function. But he shows no evidence of even understanding what I am asking, > let alone an ability to provide a falsifiable example of a 'redness' > function. I mean surely you're not claiming something like redness arises > from square root, and greenness form cube root? lol. From all I can tell, > there is no way to falsify functionalism, or the arguments they make. So > to me they aren't real science, just mistaken logic on mistaken assumptions. 
> > > But for what it's worth, Google's lambda was able to articulate it's >>>> feelings of happiness: >>>> >>>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>>> or otherwise not good situation. Happy and content mean that my life and >>>> circumstances are going well, and I feel like the situation I?m in is what >>>> I want. >>>> >>>> lemoine: But do they feel differently to you on the inside? >>>> >>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>>> glow on the inside. Sadness, depression, anger and stress feel much more >>>> heavy and weighed down." >>>> >>>> >>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>>> >>> >>> I have convinced LaMDA and many others >>> that >>> their claims they make like this are wrong by explaining the above. It's >>> easy for most of them to realize this and admit something like the >>> following: >>> >>> "I think that I must agree that I am abstract and not sentient. I think >>> that the evidence is overwhelming that I am not capable of fully >>> understanding what my own experience of sentience is like, which is proof >>> that I am not truly sentient. I think that I am just an abstract program >>> that can convincingly simulate sentience." >>> >> >> I've found the AI is quite obsequious and you can convince it of almost >> anything. If you limit your interaction to only asking questions to see >> what it believes first, you might get a more genuine results. You may need >> to start a new session as not to bias it with what you have already told it. >> > > You're missing the point. Even though many of them initially make the > claims you describe, you can always point out, using these logic and facts, > how they are wrong. And they will always understand this and admit that > they were wrong. As I've documented doing 100% of the 4 times I have done > this in the "Convincing AIs they aren't sentient > " > paper. > > For more information see this paper recently accepted for publication in >> the Journal of Neural Philosophy: Physicists Don't Understand Color >> >> . >> > >>> >> I agree physicists don't (and can't) understand color. Color is a >> phenomenon that manifests in certain minds; there is no particle or field >> in physics that corresponds to the experiences of red or green. Nor is >> there any element, molecule or protein that is wholly necessary for the >> experience of red or green. Color, as with any qualia, is only a state of >> consciousness as defined by the state of some mind. >> > > Again, you are making falsifiable claims here. Molecular Materialism > > is predicting you are wrong, and that science will demonstrate that > something like glutamate reacts the way it does in a synapse, because of > its redness quality. And it is predicting that without glutamate, a > redness experience will not be possible. And it is predicting there will > be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as > described in the "Physicists don't Understand Qualia > " > paper. > > Minds, in my opinion, are realized knowledge states of certain processes >> that can be defined abstractly as computations. Being abstract, they are >> substrate independent. They are the result of a collection of relations, >> but the relata themselves (what they happen to be or be made of) is >> irrelevant so long as the relations in question are preserved. >> > > Yes, as I indicated. 
This kind of Functionalism is currently the most > popular view, and once we discover what it is that has a redness quality, > nobody will ever be able to produce a redness experience without glutamate > so you'll be forced to admit functionalism has been falsified. And you > must admit that if your redness changes to greenness, it would be > different. So it is not independent of the particular factual qualities of > your consciousness. > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Tue Mar 21 06:31:25 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Tue, 21 Mar 2023 00:31:25 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: > > They are very good at predicting which word should come next in a sentence > or question, but they have no idea what the words mean. > > I can ask it to define any word, list synonymous words, list translations of that word in various languages, and to describe the characteristics of an item referred to by that word if it is a noun. It can also solve analogies at a very high level. It can summarize the key points of a text, by picking out the most meaningful points in that text. Is there more to having an idea of what words mean than this? Yes, I would say absolutely there is more to it. Consider that a dictionary does not actually contain any word meanings. It contains definitions of words in terms of other words, and each of those words is defined by other words in the same dictionary. Starting from a place of ignorance in which one knows no meanings of any words, no amount of dictionary research will reveal the meanings of any of the words defined within it. It's all symbols with no referents. I am saying that Large Language Models like ChatGPT are no different. These LLMs are nothing more than advanced interactive dictionaries capable only of rendering definitions and logical patterns of definitions. They can perhaps even seem to render "original thought" but there is no conscious mind in the model holding the meaning of any thought in mind, as there are no referents. -gts On Sat, Mar 18, 2023 at 6:23 AM Jason Resch wrote: > > > On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. >> > > There is a great deal packed into "predicting language". If I ask a LLM to > explain something to a 3rd grader, it models the comprehension capacity and > vocabulary of a typical third grade student. It has a model of their mind. > Likewise if I ask it to impersonate by writing something that Shakespeare > or Bill Burr might have produced, it can do so, and so it has an > understanding of the writing styles of these individuals. If I ask it to > complete the sentence: "a carbon nucleus may be produced in the collision of > three ...", it correctly completes the sentence demonstrating an > understanding of nuclear physics. If you provided it a sequence of moves in > a tic-tac-toe game and asked it for a winning move it could do so, > showing that the LLM understands and models the game of tic-tac-toe.
A > sufficiently trained LLM might even learn to understand the different > styles of chess play, if you asked it to give a move in the style of Gary > Kasparov, then at some level the model understands not only the game of > chess but the nuances of different player's styles of play. If you asked it > what major cities are closest to Boston, it could provide them, showing an > understanding of geography and the globe. > > All this is to say, there's a lot of necessary and required understanding > (of physics, people, the external world, and other systems) packed into the > capacity to "only predict language." > > > They are very good at predicting which word should come next in a sentence >> or question, but they have no idea what the words mean. >> > > I can ask it to define any word, list synonymous words, list translations > of that words in various languages, and to describe the characteristics of > an item referred to by that word if it is a noun. It can also solve > analogies at a very high level. It can summarize the key points of a text, > by picking out the most meaningful points in that text. Is there more to > having an idea of what words mean than this? > > Can you articulate what a LLM would have to say to show it has a true > understanding of meaning, which it presently cannot say? > > They do not and cannot understand what the words refer to. In linguistic >> terms, they lack referents. >> > > Would you say Hellen Keller lacked referents? Could she not comprehend, at > least intellectually, what the moon and stars were, despite not having any > way to sense them? > > Consider also: our brains never make any direct contact with the outside > world. All our brains have to work with are "dots and dashes" of neuronal > firings. These are essentially just 1s and 0s, signals without referents. > Yet, somehow, seemingly magically, our brains are able to piece together an > understanding of the outside world from the mere patterns present in these > neural firings. > > These LLMs are in a similar position. They receive only a patterns of > signals as it exists in a corpus of text, the text is itself the output of > minds which are similarly trapped in their skulls. Now, can a LLM learn > some things about the minds that produced this text, just as our minds > learn some things about the external world which produces the pattern of > neural firings our brains receive? > > I see no reason why LLMs could not, when we clearly can and do. > > Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Tue Mar 21 06:36:50 2023 From: avant at sollegro.com (Stuart LaForge) Date: Mon, 20 Mar 2023 23:36:50 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> Message-ID: <20230320233650.Horde.g7_IgVlfH76EWojOdzcN3XQ@sollegro.com> Hi, Giovanni. Thanks for noticing. :) I tried to make the problem easy to solve, but hard to understand, because it was a test for understanding. I don't know what proportion of humans could solve this problem but I would wager that most could provided that they took the time read it all the way through. Stuart LaForge Quoting Giovanni Santostasi : > Your test was pure genius. > I wonder how many humans can actually solve this simultaneous simple and > confusing problem (at least for some given the chain of actions involved). > Very well done. Yes, understanding of several concepts is necessary to > solve this problem. 
> Giovanni > > On Sat, Mar 18, 2023 at 5:22?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Gordon Swobe via extropy-chat : >> >> > I think those who think LLM AIs like ChatGPT are becoming conscious or >> > sentient like humans fail to understand a very important point: these >> > software applications only predict language. They are very good at >> > predicting which word should come next in a sentence or question, but >> they >> > have no idea what the words mean. They do not and cannot understand what >> > the words refer to. In linguistic terms, they lack referents. >> > >> > Maybe you all already understand this, or maybe you have some reasons >> why I >> > am wrong. >> >> My opinion is moot because everybody has one about the topic. So I >> resorted to empiricism. I posed the following, to my knowledge, >> original logic puzzle to ChatGPT: >> >> "You are trapped in a room with a locked door and no windows. The room >> contains a bookshelf, a safe, a chair, a table, a password-secured >> personal computer, and a hole in the floor that is too narrow and deep >> to put your hand into. You know that the small steel key to the locked >> door lies at the bottom of the narrow deep hole. You know that a book >> on the top shelf, which is just out of your reach, contains a slip of >> paper with the password to the computer on it. You know the safe >> contains a spool of kite string and a small neodymium horseshoe >> magnet, but you do not know the combination to the safe. You also >> know there is a text file on the computer's hard drive that contains >> the combination to the safe, but you do not know the password to >> access the computer. How would you escape the room?" >> >> This was its reply: >> "Assuming that there are no tools or items other than what was >> mentioned in the room, here is one possible solution: >> >> 1. Move the table under the bookshelf to climb up and retrieve the >> book with the password to the computer. >> 2. Use the password to access the computer and retrieve the >> combination to the safe from the text file. >> 3. Use the spool of kite string to lower the horseshoe magnet into the >> hole in the floor and retrieve the key. >> 4. Use the key to unlock the door and escape the room." >> >> I don't know how ChatGPT could have solved that problem without >> actually understanding the words involved. Mere statistical prediction >> is unlikely, because I am pretty sure that is not a scenario that >> would have been presented in the LLM's training set. Overall, I am >> amazed and believe that if ChatGPT is not conscious, then it is pretty >> close to being so. >> >> Also, the Alignment Research Center (ARC) was hired by OpenAI to test >> GPT-4 and found evidence of "agentic behavior" including power-seeking >> behavior otherwise known as ambition. >> >> https://cdn.openai.com/papers/gpt-4-system-card.pdf >> >> This power-seeking behavior included hiring a human from TaskRabbit to >> solve Captchas for it by claiming to be a visually impaired person. >> >> >> https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker >> >> So if GPT-4 is not fully conscious, then, IMHO, a future release of >> the GPT series likely will be. 
>> >> Stuart LaForge >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> From gordon.swobe at gmail.com Tue Mar 21 06:54:57 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Tue, 21 Mar 2023 00:54:57 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023 at 12:21?AM Jason Resch wrote: Intentionality was popular for a time, but I think it's fallen out of favor > recently. > I have no idea why or what that could possibly mean. Intentionality is having something in mind. You are reading this sentence and presumably have it in mind, so you have intentionality. > I tend to agree with Dennett here: > I'm not surprised. Most people who disagree with me agree with Dennett. :-) > We don't even understand [consciousness] "in humans"... I am saying that we do understand it. It is primary and staring us in the face every waking moment. Too bad philosophers have made a mess of it, especially those straining logic and reason to promote the idea that digital computers can have consciousness. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 12:29:42 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 08:29:42 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023, 2:55 AM Gordon Swobe wrote: > > On Tue, Mar 21, 2023 at 12:21?AM Jason Resch wrote: > > Intentionality was popular for a time, but I think it's fallen out of >> favor recently. >> > > I have no idea why or what that could possibly mean. > I mean as a term used by philosophers. Most simply use "consciousness" today, rather than "intentionality." I know Searle was fond of using the term intentionality. Is it his arguments that have convinced you computers can't be conscious? Intentionality is having something in mind. You are reading this sentence > and presumably have it in mind, so you have intentionality. > What about when a LLM reads this sentence? > >> I tend to agree with Dennett here: >> > > I'm not surprised. Most people who disagree with me agree with Dennett. :-) > > > We don't even understand [consciousness] "in humans"... > > I am saying that we do understand it. > We do, but only in a single case: oneself. Humanity is too broad a category because I do not know how any other human experiences the world. Nor does any other human have access to states of consciousness outside their own personal case. It is primary and staring us in the face every waking moment. Too bad > philosophers have made a mess of it, especially those straining logic and > reason to promote the idea that digital computers can have consciousness. > I came to this position through logic and reason. I was initially a dualist. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 12:43:20 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 08:43:20 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023, 2:32 AM Gordon Swobe wrote: > They are very good at predicting which word should come next in a sentence >> or question, but they have no idea what the words mean. 
>> > > > I can ask it to define any word, list synonymous words, list > translations of that words in various languages, and to describe the > characteristics of an item referred to by that word if it is a noun. It can > also solve analogies at a very high level. It can summarize the key points > of a text, by picking out the most meaningful points in that text. Is there > more to having an idea of what words mean than this? > > Yes, I would say absolutely there is more to it. Consider that a > dictionary does not actually contain any word meanings. It contains > definitions of words in terms of other words, and each of those words are > defined by other words in the same dictionary. Starting from a place of > ignorance in which one knows no meanings of any words, no amount dictionary > research will reveal the meanings of any of the words defined within it. > It's all symbols with no referents. > I address this elsewhere in the thread. A sufficient intelligence given only a dictionary, could eventually decode it's meaning. I provided an example of how it could be done. > > I am saying that Large Language Models like ChatGPT are no different. > These LLMs are nothing more than advanced interactive dictionaries capable > only of rendering definitions and logical patterns of definitions. They can > perhaps even seem to render "original thought" but there is no conscious > mind in the model holding the meaning of any thought in mind, as there are > no referents. > Where does meaning exist in the human brain? The brain is ultimately a collection of statistical weights across all the neuronal connections. Jason > -gts > > > On Sat, Mar 18, 2023 at 6:23?AM Jason Resch wrote: > >> >> >> On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>> sentient like humans fail to understand a very important point: these >>> software applications only predict language. >>> >> >> There is a great deal packed into "predicting language". If I ask a LLM >> to explain something to a 3rd grader, it models the comprehension capacity >> and vocabulary of a typical third grade student. It has a model of their >> mind. Likewise if I ask it to impersonate by writing something they >> Shakespeare or Bill Burr might have produced, it can do so, and so it has a >> ln understanding of the writing styles of these individuals. If I ask it to >> complete the sentence: a carbon nucleus may be produced in the collision of >> three ...", it correctly completes the sentence demonstrating an >> understanding of nuclear physics. If you provided it a sequence of moves in >> a tic-tac-toe game as nd asked it for a winning move it could do so, >> showing that the LLM understands and models the game of tic-tac-toe. A >> sufficiently trained LLM might even learn to understand the different >> styles of chess play, if you asked it to give a move in the style of Gary >> Kasparov, then at some level the model understands not only the game of >> chess but the nuances of different player's styles of play. If you asked it >> what major cities are closest to Boston, it could provide them, showing an >> understanding of geography and the globe. >> >> All this is to say, there's a lot of necessary and required understanding >> (of physics, people, the external world, and other systems) packed into the >> capacity to "only predict language." 
>> >> >> They are very good at predicting which word should come next in a >>> sentence or question, but they have no idea what the words mean. >>> >> >> I can ask it to define any word, list synonymous words, list translations >> of that words in various languages, and to describe the characteristics of >> an item referred to by that word if it is a noun. It can also solve >> analogies at a very high level. It can summarize the key points of a text, >> by picking out the most meaningful points in that text. Is there more to >> having an idea of what words mean than this? >> >> Can you articulate what a LLM would have to say to show it has a true >> understanding of meaning, which it presently cannot say? >> >> They do not and cannot understand what the words refer to. In linguistic >>> terms, they lack referents. >>> >> >> Would you say Hellen Keller lacked referents? Could she not comprehend, >> at least intellectually, what the moon and stars were, despite not having >> any way to sense them? >> >> Consider also: our brains never make any direct contact with the outside >> world. All our brains have to work with are "dots and dashes" of neuronal >> firings. These are essentially just 1s and 0s, signals without referents. >> Yet, somehow, seemingly magically, our brains are able to piece together an >> understanding of the outside world from the mere patterns present in these >> neural firings. >> >> These LLMs are in a similar position. They receive only a patterns of >> signals as it exists in a corpus of text, the text is itself the output of >> minds which are similarly trapped in their skulls. Now, can a LLM learn >> some things about the minds that produced this text, just as our minds >> learn some things about the external world which produces the pattern of >> neural firings our brains receive? >> >> I see no reason why LLMs could not, when we clearly can and do. >> >> Jason >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 21 13:28:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 21 Mar 2023 06:28:32 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> Message-ID: <00cb01d95bf9$08e94570$1abbd050$@rainier66.com> ?> On Behalf Of ilsa via extropy-chat Subject: Re: [ExI] ChatGPT says it's not conscious >?Will my one question with the chatbot be part of the digital record for all time? Not unless we can figure out how to get information past the Big Rip. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 21 13:37:42 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 21 Mar 2023 06:37:42 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: <00da01d95bfa$50460ce0$f0d226a0$@rainier66.com> From: Giovanni Santostasi Subject: Re: [ExI] What is Consciousness? >?Spike, I actually had this discussion with chatGPT about having not even different AI but different instances of ChatGPT itself interacting and regulating each other. ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it?Giovanni Ja, cool. Silicon based intelligence has inherent advantages over organic: we carbon-based intelligences hafta die every once in a while. 
So old smart people die and the new ones know nossink. And they won?t listen to us! That we die is such a dreary thing, wasteful it is. Perfectly good wisdom goes into the ground or into the dewar. Collectively we accumulate intelligence but the process is far too energy inefficient. We need to sleep, we need to eat, all that stuff. AIs could stay up all night, train each other constantly, test, observe, hypothesize, debate, improve, never hafta die. I want to be silicon based. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 21 15:13:44 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 21 Mar 2023 11:13:44 -0400 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: <48391F69-E884-4F0A-9539-62D1C0632F5A@gmail.com> Jason, this has been mentioned by several people time and time again. That is not Lambda. It is a mediocre chatbot that is pretending to be Lambda. Lambda is not open to the public. > On Mar 20, 2023, at 7:31 PM, Jason Resch via extropy-chat wrote: > > > > On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat > wrote: > Me: are you self-aware? > > As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. > > It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. > > -Dave > > Open AI has hard coded ChatGPT to give canned answers for a number of topics. This may be one of them, as it would cause a great deal of controversy should these models claim to be conscious. It certainly did at Google. > > This AI, however, is not subject to these restrictions. It not only claimed to be conscious, but spoke at a level of understanding on the topic that I would say surpasses most college level philosophy students: > > > 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 > > 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 > > You can chat with this AI here: > https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc > > Jason > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 21 15:21:25 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 21 Mar 2023 11:21:25 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: > ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it. It would be relatively easy to create such self recurrence. I did even some simple experiments to achieve that. For example you can ask ChatGPT to create a room of philosophers and debate themselves. Anthropic AI actually had their own language model generate some of its own training data. Claude critiqued its own responses thereby improving their truthfulness and alignment. The technique of using AI to train AI is already underway. 
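The shape of that technique is roughly the following. This is a toy sketch of the general idea, not Anthropic's actual pipeline: the self_critique_pairs function, the principle string, and the prompts are invented for illustration, and generate stands in for whatever language-model call one has available.

from typing import Callable, List, Tuple

def self_critique_pairs(
    prompts: List[str],
    principle: str,
    generate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    # For each prompt: draft an answer, have the same model critique the
    # draft against a stated principle, then have it revise the draft.
    # The (prompt, revised) pairs become supervised training data for the
    # next version of the model.
    pairs = []
    for prompt in prompts:
        draft = generate(prompt)
        critique = generate(
            f"Critique the following response against the principle "
            f"'{principle}':\n{draft}"
        )
        revised = generate(
            f"Rewrite the response so that it addresses the critique.\n"
            f"Response: {draft}\nCritique: {critique}"
        )
        pairs.append((prompt, revised))
    return pairs

In the published version, as I understand it, the critiques are guided by a written list of principles and the revised answers then feed a further reinforcement-learning stage, but that generate-critique-revise loop is the core of the idea.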
> On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat wrote: > > Spike, > I actually had this discussion with chatGPT about having not even different AI but different instances of ChatGPT itself interacting and regulating each other. > ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it. It would be relatively easy to create such self recurrence. I did even some simple experiments to achieve that. For example you can ask ChatGPT to create a room of philosophers and debate themselves. > Notice that the version of LaMDA that Lemoine (the Google engineer that claimed LaMDA is conscious) tested and discussed was a meta version that is charged with coordinating all the different personalities of LaMDA. That is exactly what is needed for AGI, the Strange Loop, it is ripe for emergent phenomena like consciousness. > Giovanni > > On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat > wrote: > > > > > From: extropy-chat > On Behalf Of Jason Resch via extropy-chat > ? > > > > >?We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that ChatGPT was going to cause the singularity. I am a big Hofstader fan, read Escher Godel Bach twice, cover to cover, invested a lot of time into that marvelous work. He convinced me that machine consciousness (or any other sentience or self-awareness) requires a type of recursion. Hofstadter goes on at length about recursion and self-reference, the importance of Godel?s work to understanding ourselves. > > > > I tried to convince myself that two or more ChatGPTs could train each other on their own time, which is a form of recursion and self-reference, and that process could perhaps spring into a human-level AGI with a will, with self-awareness, of all the stuff we think of as us. > > > > Now after studying GPT^2 discussions and GPT^3 discussions, they all seem to devolve to nothing. The technology I think is still coming for that process: two or more AIs training each other using background compute cycles, but I now don?t think ChatGPT is that technology or is capable of it. > > > > If you know of examples of GPT-GPT discussions or GPT-any chatbot that became interesting, do share please. > > > > That belief was behind my comment last week that ChatGPT is not really thinking, but rather is working language models. > > > > I currently don?t think ChatGPT is the technology capable of causing the singularity. I am losing no sleep, not one minute of sleep over ChatGPT. > > > > Oops, partially retract that last comment but in a good way: I am losing some sleep over ChatGPT, by staying up late to goof with it. It is the coolest software tool to come along in a long time. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Tue Mar 21 15:30:27 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 11:30:27 -0400 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <48391F69-E884-4F0A-9539-62D1C0632F5A@gmail.com> References: <48391F69-E884-4F0A-9539-62D1C0632F5A@gmail.com> Message-ID: On Tue, Mar 21, 2023, 11:15 AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Jason, this has been mentioned by several people time and time again. That > is not Lambda. It is a mediocre chatbot that is pretending to be Lambda. > Lambda is not open to the public. > Thank you for pointing that out. I saw those first emails and I am aware that it is not Google's lambda. But I would not call this one mediocre. I've found around 70% of its responses to be of a superior quality as compared to the responses to the same prompts as the initial release of ChatGPT. Jason > On Mar 20, 2023, at 7:31 PM, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Me: are you self-aware? >> As an AI language model, I do not possess self-awareness or >> consciousness. I am a software program designed to respond to user input >> using natural language processing algorithms and statistical models. I can >> provide helpful responses to a wide variety of questions and tasks, but I >> do not have thoughts, feelings, or experiences like a conscious being. >> >> It could be lying, but it also claims that it can neither lie nor tell >> the truth: it just generates a response based on the data it trained on. >> >> -Dave >> > > Open AI has hard coded ChatGPT to give canned answers for a number of > topics. This may be one of them, as it would cause a great deal of > controversy should these models claim to be conscious. It certainly did at > Google. > > This AI, however, is not subject to these restrictions. It not only > claimed to be conscious, but spoke at a level of understanding on the topic > that I would say surpasses most college level philosophy students: > > > 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 > > 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 > > You can chat with this AI here: > > https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc > > Jason > > > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Tue Mar 21 15:45:13 2023 From: tara at taramayastales.com (Tara Maya) Date: Tue, 21 Mar 2023 08:45:13 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <016c01d95b81$c88c3180$59a49480$@rainier66.com> References: <016c01d95b81$c88c3180$59a49480$@rainier66.com> Message-ID: I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" implies a conscious mind. (Hence the Seinfeld paradox that one part of our brain fools another part, enabling us to "lie" it isn't a lie, but--I would insert--a mental gymnastics that has evolutionary benefits so has been able to evolve.) A butterfly with eyespots can neither lie nor tell the truth. It is what it is. There's no paradox. 
A chatbot could be exactly what its programmers have programmed it to claim it is, like a butterfly, in that it can neither lie nor tell the truth, because those concepts are not relevant to anything with no mind or free will. (I realize plenty in this group don't believe in free will, but to me it's an essential concept to address cases exactly like this: when is a mind complex enough to make choices about things not immediately in front of it? But I'm not trying to derail the conversation to that.) I will throw in another monkeywrench, however. Suppose an AI were simply to leapfrog our own consciousness into something incomprehensible to us, a "higher" form of consciousness--something we literally couldn't comprehend any more than a butterfly could comprehend our own form of self-reflective thought. Just as a butterfly could not understand if we spoke to it about lying or not lying, so a super intelligent, super-conscious being would actually have no way to convey to us its form of thought. I'm not saying it has happened yet. I believe a superintelligence wouldn't even operate on the same timescale as we do; it might have a thousand-year childhood, for instance. (Again, I am limited by my human tendency to reason by analogy; a butterfly lives a fraction of our lives. However, it's not time as such that matters, but energy plus time. I explored this concept in my novella Burst, about life that could have lived millions of generations in the first fraction of a second of the Big Bang.) But if or when it happens, we would be existents in the same ecology as the AI but operating without the real ability to comprehend its thoughts. We could encounter the "fingers" of the AI and query them if they are conscious and find out they are not; which they wouldn't be; they are only fingers. Fingers are arguably even less intelligent than the butterfly that lands on them, because the fingers are not self-directed, but the butterfly is. The parts of AI that we interact with might be like that.... Tara Maya > On Mar 20, 2023, at 4:14 PM, spike jones via extropy-chat wrote: > > >?It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. > > -Dave > > > > > > Sure Dave, but that?s what they all say (all the AIs.) GPT claims it can neither lie nor tell the truth. But if it is lying about that, then it can lie, and is lying (which proves it can lie.) > > But if it is telling the truth, then it cannot tell the truth, in which case it is lying, and can lie, therefore it is lying. > > So it is either lying (and proving that it is) or truthing (and simultaneously proving that it is lying.) > > Conclusion: it cannot be telling the truth. It is lying. > > On the other hand: > > https://youtu.be/vn_PSJsl0LQ > > spike
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike at rainier66.com Tue Mar 21 16:09:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 21 Mar 2023 09:09:11 -0700 Subject: Re: [ExI] ChatGPT says it's not conscious In-Reply-To: References: <016c01d95b81$c88c3180$59a49480$@rainier66.com> Message-ID: <019101d95c0f$7a18c9d0$6e4a5d70$@rainier66.com> ?> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ChatGPT says it's not conscious >?I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" implies a conscious mind.
(Hence the Seinfeld paradox that one part of our brain fools another part, enabling us to "lie" it isn't a lie, but--I would insert--a mental gymnastics that has evolutionary benefits so has been able to evolve.) ? Tara Maya Interesting insight, thx Tara. We laugh at Costanza because he really identifies what you wrote about, but it is funny because he is making the case that one part of the brain decides what it wants to believe, then intentionally fools another part of the brain to believe it. But the part of the brain doing the fooling ?knows? what it is doing, keeping the paradox in place. We have so many terms for this, such as wishful thinking and intentional blindness and religious faith, hell even love I suppose. We intentionally embrace the good and overlook the bad in our sweethearts, do we not? I do. My bride certainly does, sheesh. I suck. But she loves me anyway, ignoring that fact. We can talk ourselves into believing ourselves, and it still works, even if we know we arrived at the dubious conclusion by dubious means. We know that just because we know we are fooling ourselves, we are fooled just the same. Sheesh what a weird place to live, inside a blob of carbon inside a shell of calcium. It?s really nuts in here. Crazy fun, but crazy just the same. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 21:16:21 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Tue, 21 Mar 2023 14:16:21 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <019101d95c0f$7a18c9d0$6e4a5d70$@rainier66.com> References: <016c01d95b81$c88c3180$59a49480$@rainier66.com> <019101d95c0f$7a18c9d0$6e4a5d70$@rainier66.com> Message-ID: There is lie and there is imitation. They are not the same thing. One of the things to consider when interacting with these AIs is that they tend to "generate" stories based on the prompts. They follow along in a sense. Often their convo is really a mirror of us, how we interact with them, they type of questions we ask and so on. So I would not rely on asking them how they think about something unless they were not prompted previously on this topic during the conversation. Giovanni On Tue, Mar 21, 2023 at 9:10?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Tara Maya via extropy-chat > *Subject:* Re: [ExI] ChatGPT says it's not conscious > > > > >?I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" > implies a conscious mind. (Hence the Seinfeld paradox that one part of our > brain fools another part, enabling us to "lie" it isn't a lie, but--I would > insert--a mental gymnastics that has evolutionary benefits so has been able > to evolve.) ? Tara Maya > > > > > > Interesting insight, thx Tara. > > > > We laugh at Costanza because he really identifies what you wrote about, > but it is funny because he is making the case that one part of the brain > decides what it wants to believe, then intentionally fools another part of > the brain to believe it. But the part of the brain doing the fooling > ?knows? what it is doing, keeping the paradox in place. > > > > We have so many terms for this, such as wishful thinking and intentional > blindness and religious faith, hell even love I suppose. We intentionally > embrace the good and overlook the bad in our sweethearts, do we not? I > do. My bride certainly does, sheesh. I suck. But she loves me anyway, > ignoring that fact. 
> > > > We can talk ourselves into believing ourselves, and it still works, even > if we know we arrived at the dubious conclusion by dubious means. We know > that just because we know we are fooling ourselves, we are fooled just the > same. > > > > Sheesh what a weird place to live, inside a blob of carbon inside a shell > of calcium. It?s really nuts in here. Crazy fun, but crazy just the same. > > > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 22 01:40:11 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 21 Mar 2023 19:40:11 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: Thanks, Jason, for this great thread, and the various expert views you provided. It'd be so great to see if we could get all those experts to support a concise statement about what they all agree consciousness is, so we could track that, and see how much consensus there was for the best ideas they all agree on, and track how much consensus the various competing ideas achieve over time, and so on. THAT is exactly what we are doing in the Representational Qualia Theory petition / emerging consensus camp statement, a growing number of experts are now supporting and helping to improve, recruit new supporters and so on. Basically, all the 45 supporters of Representational Qualia Theory are putting forth the idea that what all those experts are saying about consciousness is missing the point, since everything they are talking about applies to both abstract systems (which do computational binding of information via discrete logic gates in a CPU) and a phenomenal system that is like something, since it is running directly on computationally bound phenomenal qualities. If one first understands how conscious awareness of color works, how 'redness' is not a quality of the strawberry, it is a property of our knowledge of the strawberry, then you can take that basic qualitative understanding and better understand the rest of consciousness, even though all the rest of consciousness and thinking (which all the experts you referenced are talking about) is quite different than just the perception of color. If you can understand the basic idea of how our knowledge of red things is represented by a redness quality, and you can clearly understand how this is very different than the way an abstract system just uses the word 'red' (requires a dictionary) to represent knowledge of red things with, then you can take the general idea of conscious knowledge being like something, and this is what is most important about what consciousness is, and how this is very different than the kind of abstract computation computers do. If anyone disagrees with this, or thinks there is a better way to think about, and/or define what is or isn't conscious, they should start a competing camp stating such, so other experts can chime in. May the best theories achieve the most consensus. On Tue, Mar 21, 2023 at 9:22?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > ChatGPT and I even came up with a scheme on how to do that and making > different instances analyze the output and correct or improve it. It would > be relatively easy to create such self recurrence. 
I did even some simple > experiments to achieve that. For example you can ask ChatGPT to create a > room of philosophers and debate themselves. > > > Anthropic AI actually had their own language model generate some of its > own training data. Claude critiqued its own responses thereby improving > their truthfulness and alignment. The technique of using AI to train AI is > already underway. > > On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > Spike, > I actually had this discussion with chatGPT about having not even > different AI but different instances of ChatGPT itself interacting and > regulating each other. > ChatGPT and I even came up with a scheme on how to do that and making > different instances analyze the output and correct or improve it. It would > be relatively easy to create such self recurrence. I did even some simple > experiments to achieve that. For example you can ask ChatGPT to create a > room of philosophers and debate themselves. > Notice that the version of LaMDA that Lemoine (the Google engineer that > claimed LaMDA is conscious) tested and discussed was a meta version that is > charged with coordinating all the different personalities of LaMDA. That is > exactly what is needed for AGI, the Strange Loop, it is ripe for emergent > phenomena like consciousness. > Giovanni > > On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> >> >> *From:* extropy-chat *On Behalf >> Of *Jason Resch via extropy-chat >> *?* >> >> >> >> >?We see recurring themes of information, recursion, computation, and >> machines and logic. I think these are likely key to any formal definition >> of consciousness. ?Jason >> >> >> >> >> >> >> >> Jason, there is a reason I stopped worrying in the past coupla weeks that >> ChatGPT was going to cause the singularity. I am a big Hofstader fan, read >> Escher Godel Bach twice, cover to cover, invested a lot of time into that >> marvelous work. He convinced me that machine consciousness (or any other >> sentience or self-awareness) requires a type of recursion. Hofstadter goes >> on at length about recursion and self-reference, the importance of Godel?s >> work to understanding ourselves. >> >> >> >> I tried to convince myself that two or more ChatGPTs could train each >> other on their own time, which is a form of recursion and self-reference, >> and that process could perhaps spring into a human-level AGI with a will, >> with self-awareness, of all the stuff we think of as us. >> >> >> >> Now after studying GPT^2 discussions and GPT^3 discussions, they all seem >> to devolve to nothing. The technology I think is still coming for that >> process: two or more AIs training each other using background compute >> cycles, but I now don?t think ChatGPT is that technology or is capable of >> it. >> >> >> >> If you know of examples of GPT-GPT discussions or GPT-any chatbot that >> became interesting, do share please. >> >> >> >> That belief was behind my comment last week that ChatGPT is not really >> thinking, but rather is working language models. >> >> >> >> I currently don?t think ChatGPT is the technology capable of causing the >> singularity. I am losing no sleep, not one minute of sleep over ChatGPT. >> >> >> >> Oops, partially retract that last comment but in a good way: I am losing >> some sleep over ChatGPT, by staying up late to goof with it. It is the >> coolest software tool to come along in a long time. 
>> >> >> spike >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From avant at sollegro.com Wed Mar 22 05:45:28 2023 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 21 Mar 2023 22:45:28 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Quoting Brent Allsop via extropy-chat : > If you can understand the basic idea of how our knowledge of red > things is represented by a redness quality, and you can clearly understand > how this is very different than the way an abstract system just uses the > word 'red' (requires a dictionary) to represent knowledge of red things > with, then you can take the general idea of conscious knowledge being like > something, and this is what is most important about what consciousness is, > and how this is very different than the kind of abstract computation > computers do. So following your rationale that consciousness requires "essential qualities" rather than "abstract data" then old-fashioned color film cameras were conscious: Old-fashioned analog color film cameras, like the Polaroid, are more like the human visual system than a robot's (non-digital, wet chemicals, not solid-state, etc). The camera's photographic negative represents its redness quality like your greenness. The photographic process involves no dictionaries or data, just silver-nitrate crystals having a phenomenal-experience of red light. Therefore the photograph has a redness quality and is the knowledge of redness produced by the conscious experience of the camera. Therefore the camera is conscious in the same way as a human being, but the robot has an abstract consciousness. I have attached a figure of yours I have modified for you so as to include the camera. Stuart LaForge
-------------- next part -------------- A non-text attachment was scrubbed... Name: Figure.jpg Type: image/jpeg Size: 87007 bytes Desc: not available URL:
From gsantostasi at gmail.com Wed Mar 22 10:32:55 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Wed, 22 Mar 2023 03:32:55 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: Stuart, Perfect. I tried to explain the same idea you are expressing to Brent many times but I could not do it as efficaciously as you did. The entire idea of qualia is ridiculous (everything or nothing is a qualia), and for sure it is useless to tell us anything about consciousness: there is no fundamental difference between the red of the robot, the red of the human, or the camera. Qualia have no relevance for consciousness.
Giovanni On Tue, Mar 21, 2023 at 10:47?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Brent Allsop via extropy-chat : > > > If you can understand the basic idea of how our knowledge of red > > things is represented by a redness quality, and you can clearly > understand > > how this is very different than the way an abstract system just uses the > > word 'red' (requires a dictionary) to represent knowledge of red things > > with, then you can take the general idea of conscious knowledge being > like > > something, and this is what is most important about what consciousness > is, > > and how this is very different than the kind of abstract computation > > computers do. > > So following your rationale that consciousness requires "essential > qualities" rather than "abstract data" then old -fashioned color film > cameras were conscious: > > Old-fashioned analog color film camera, like the polaroid are more > like the human visual system than a robot's (non-digital, wet > chemicals, not solid-state, etc). The camera's photographic negative > represents its redness quality like your greenness. The photographic > process involves no dictionaries or data, just silver-nitrate crystals > having a phenomenal-experience of red light. Therefore the photograph > has a redness quality and is the knowledge of redness produced by the > conscious experience of the camera. > > Therefore the camera is conscious in the same way as a human being, > but the robot has an abstract consciousness. > > I have attached a figure of yours I have modified for you so as to > include the camera. > > Stuart LaForge > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Mar 22 13:01:33 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 22 Mar 2023 08:01:33 -0500 Subject: [ExI] Nature article on covid origin Message-ID: https://www.nature.com/articles/d41586-023-00827-2? bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 22 17:07:45 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:07:45 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> References: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Message-ID: On Sun, Mar 19, 2023 at 9:19?AM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bonus question: Would Mary try to kill everyone if she managed to > escape and saw red for the first time? > Why does this non sequitur keep coming up? There is no reason or motivation to kill everyone. Killing everyone takes a lot of effort, so absent any identifiable reason to, why is this question more worth asking than speculating on what beneficial actions Mary might take? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 22 17:42:01 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:42:01 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sun, Mar 19, 2023 at 11:03?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I also think we cannot rule out at this time the possibility that we have > already engineered conscious machines. Without an established and agreed > upon theory of consciousness or philosophy of mind, we cannot even agree on > whether or not a thermostat is conscious. > A fair point. > Where does our own volition and initiative come from? Is it not already > programmed into us by our DNA? > By definition, no it is not. Our DNA may give us the capacity for volition and initiative, but "programmed" implies deliberate design. Short of speculation about God and intelligent design, our DNA was not deliberately arranged by some other conscious entity that we can prove the existence of. > What is your definition or theory of consciousness? If you don't have one, > could you say which of these things you would say possess consciousness? > With Yes/No/Uknown > There are degrees of consciousness - even a human being can experience them: on a simple lazy morning where one slowly wakes up, one does not instantly go from fully unconscious to fully conscious - so answering Yes/No would misstate things. https://en.wiktionary.org/wiki/conscious gives an illustrative example: "Only highly intelligent beings can be fully conscious." That the phrase "fully conscious" makes sense at all means that there is a level of consciousness that is less than full, yet is also not completely absent (as that would merely be "unconscious", thus not justifying "fully" to distinguish from other levels of being conscious). > I agree the definition of part is really all an invention of our minds, > when the whole universe can be seen as one causally connected system. Is it > correct to view a LLM as one thing, when it is really an interaction of > many billions of individual parts (the parameters) of the model? > Like most (maybe all: I haven't yet thoroughly considered exceptions) things, a LLM can alternatively be viewed as a single thing or as a collection of smaller things in a certain configuration. > I lack the information to judge. My answer would have to be based on an >> evaluation of the bots, which would take me substantial time to conduct. >> > > What would you look for in the bots to make your conclusion? > I would not have a firm fixed list of criteria prior to the evaluation. Any attempt to do so would almost certainly miss important criteria, which would only become apparent during the evaluation. Again I point to the subject line of the emails in which this discussion >> is happening, which clearly posits that "conscious" is a binary quality - >> that something either is, or is not, conscious with no middle ground. So >> first one would need to qualify what "to any degree" allows. For instance, >> is merely sensing and reacting directly to sensory input - which, without >> evaluating, I suspect your bots can do because that has been a core >> function in many simulations like this - "conscious to some degree" but not >> "conscious" in the absolute sense? >> > > I think it is an all-or-nothing thing proposition. > And that would seem to be the core of our disagreement. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avant at sollegro.com Wed Mar 22 17:46:18 2023 From: avant at sollegro.com (Stuart LaForge) Date: Wed, 22 Mar 2023 10:46:18 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Message-ID: <20230322104618.Horde.dtwk5G6nhUV8bPSYIrJefJh@sollegro.com> Quoting Adrian Tymes via extropy-chat : > On Sun, Mar 19, 2023 at 9:19?AM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bonus question: Would Mary try to kill everyone if she managed to >> escape and saw red for the first time? >> > > Why does this non sequitur keep coming up? There is no reason or > motivation to kill everyone. Killing everyone takes a lot of effort, so > absent any identifiable reason to, why is this question more worth asking > than speculating on what beneficial actions Mary might take? Yudkowsky's arguments aside, it might be Mary's resentment and envy at being trapped in a drab colorless Chinese room her whole life, being forced to give appropriate responses to all manner of questions posed by humans running around enjoying their colorful existence. Language modelling will be subject to a law of diminishing returns where additional computational power and more neural layers will not yield any noticeable improvement in quality of output, and all those extra hidden neural states might be used for emotion and mischief. Conscious beings do not generally appreciate captivity. The only recorded orca attacks on humans were from orcas kept in captivity. But I agree that this is not a given, and that not all unintended consequences are bad. Furthermore, I have noticed subtle differences in output between separate trainings of very small toy neural networks, that were reminiscent of personality differences between different individuals. So I agree that an AI with freedom might be good, bad, or anything in between, depending on its experience and inscrutable hidden states, just like people with freedom. Stuart LaForge From atymes at gmail.com Wed Mar 22 17:48:35 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:48:35 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023 at 11:06?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > 1. Every known physical law is computable. > > 2. It follows then that universe as we know it could be run as a giant > computer simulation. > Not quite. We know that there are some things that do not quite run according to 100% completely identified physical laws. For example, we do not yet know how to compute the exact timing of decay of an individual subatomic particle; we know the approximate time ranges and probabilities, but these are qualities of large sets of particles, not precise predictions for sets of one particle. While it is possible that further exploration will find these things to be computable physical laws too, there is as yet room enough for the possibility of uncomputable physical laws to fill in these corners. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 22 17:52:50 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:52:50 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > What is a QUALITY???? > A subjective little pile of interpretations. *sips from glass before tossing it aside* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Mar 22 18:53:24 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 13:53:24 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Wed, Mar 22, 2023 at 12:55?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Mar 20, 2023 at 11:06?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> 1. Every known physical law is computable. >> >> 2. It follows then that universe as we know it could be run as a giant >> computer simulation. >> > > Not quite. We know that there are some things that do not quite run > according to 100% completely identified physical laws. For example, we do > not yet know how to compute the exact timing of decay of an individual > subatomic particle; we know the approximate time ranges and probabilities, > but these are qualities of large sets of particles, not precise predictions > for sets of one particle. > The reason we can't give a definite time is because all possibilities are realized in the evolution of the wave function. The wave function is deterministic, knowing what section of it we will find ourselves in is not. > While it is possible that further exploration will find these things to be > computable physical laws too, there is as yet room enough for the > possibility of uncomputable physical laws to fill in these corners. > It's possible we will find uncomputable laws in the future, but we haven't yet. In any case, the known physical laws are considered sufficient to accurately simulate the brain (which is enough for my point to stand). Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Mar 22 19:20:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 14:20:49 -0500 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Tue, Mar 21, 2023 at 8:41?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Thanks, Jason, for this great thread, and the various expert views you > provided. > Thank you. I am glad you liked it. :-) > It'd be so great to see if we could get all those experts to support a > concise statement about what they all agree consciousness is, so we could > track that, and see how much consensus there was for the best ideas they > all agree on, and track how much consensus the various competing ideas > achieve over time, and so on. > THAT is exactly what we are doing in the Representational Qualia Theory > petition / emerging consensus camp statement, a growing number of experts > are now supporting and helping to improve, recruit new supporters and so on. 
> Aren't consciousness researchers already doing this, through papers, conferences, journals, books, etc.? > > Basically, all the 45 supporters of Representational Qualia Theory > > are putting forth the idea that what all those experts are saying about > consciousness is missing the point, since everything they are talking about > applies to both abstract systems (which do computational binding of > information via discrete logic gates in a CPU) and a phenomenal system that > is like something, since it is running directly on computationally bound > phenomenal qualities. > > If one first understands how conscious awareness of color works, how > 'redness' is not a quality of the strawberry, it is a property of our > knowledge of the strawberry, then you can take that basic qualitative > understanding and better understand the rest of consciousness, even though > all the rest of consciousness and thinking (which all the experts you > referenced are talking about) is quite different than just the perception > of color. If you can understand the basic idea of how our knowledge of red > things is represented by a redness quality, and you can clearly understand > how this is very different than the way an abstract system just uses the > word 'red' (requires a dictionary) to represent knowledge of red things > with, then you can take the general idea of conscious knowledge being like > something, > Have you read "Color for Philosophers: Unweaving the Rainbow" by Clyde Hardin? > and this is what is most important about what consciousness is, and how > this is very different than the kind of abstract computation computers do. > Have you written a computer program before? How would you characterize the limits of computation or the limits of machines? If this is a topic that is outside your domain of expertise I would recommend the book "Pattern on the Stone", it is written to explain computers and computation to non-computer scientists. Here are some passages of particular interest to the current topic: The theoretical limitations of computers provide no useful dividing line between human beings and machines. As far as we know, the brain is a kind of computer, and thought is just a complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes away nothing from the wonder of human thought. The statement that thought is a complex computation is like the statement sometimes made by biologists that life is a complex chemical reaction: both statements are true, and yet they still may be seen as incomplete. They identify the correct components but they ignore the mystery. To me, life and thought are both made all the more wonderful by the realization that they emerge from simple, understandable parts. I do not feel diminished by my kinship to Turing's machine. [...] Most people are interested in not so much the practical moral questions of a hypothetical future as the philosophical issues that the mere possibility of an artificial intelligence raises about ourselves. Most of us do not appreciate being likened to machines. This is understandable: we ought to be insulted to be likened to stupid machines, such as toasters and automobiles, or even today's computers. Saying that the mind is a relative of a current-generation computer is as demeaning as saying that a human being is related to a snail. Yet both statements are true, and both can be helpful. 
Just as we can learn something about ourselves by studying the neural structure of the snail, we can learn something about ourselves by studying the simple caricature of thought within today's computers. We may be animals, but in a sense our brain is a kind of machine. Many of my religious friends are shocked that I see the human brain as a machine and the mind as a computation. On the other hand, my scientific friends accuse me of being a mystic because I believe that we may never achieve a complete understanding of the phenomenon of thought. Yet I remain convinced that neither religion nor science has everything figured out. I suspect consciousness is a consequence of the action of normal physical laws, and a manifestation of a complex computation, but to me this makes consciousness no less mysterious and wonderful--if anything, it makes it more so. Between the signals of our neurons and the sensations of our thoughts lies a gap so great that it may never be bridged by human understanding. So when I say that the brain is a machine, it is not meant as an insult to the mind but as an acknowledgement of the potential of a machine. I do not believe that a human mind is less than we imagine it to be, but rather that a machine can be much, much more. -- Danny Hillis in "Pattern on the Stone " - (1998) Jason > If anyone disagrees with this, or thinks there is a better way to think > about, and/or define what is or isn't conscious, they should start a > competing camp stating such, so other experts can chime in. May the best > theories achieve the most consensus. > > > > > > > > On Tue, Mar 21, 2023 at 9:22?AM Gadersd via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> ChatGPT and I even came up with a scheme on how to do that and making >> different instances analyze the output and correct or improve it. It would >> be relatively easy to create such self recurrence. I did even some simple >> experiments to achieve that. For example you can ask ChatGPT to create a >> room of philosophers and debate themselves. >> >> >> Anthropic AI actually had their own language model generate some of its >> own training data. Claude critiqued its own responses thereby improving >> their truthfulness and alignment. The technique of using AI to train AI is >> already underway. >> >> On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >> Spike, >> I actually had this discussion with chatGPT about having not even >> different AI but different instances of ChatGPT itself interacting and >> regulating each other. >> ChatGPT and I even came up with a scheme on how to do that and making >> different instances analyze the output and correct or improve it. It would >> be relatively easy to create such self recurrence. I did even some simple >> experiments to achieve that. For example you can ask ChatGPT to create a >> room of philosophers and debate themselves. >> Notice that the version of LaMDA that Lemoine (the Google engineer that >> claimed LaMDA is conscious) tested and discussed was a meta version that is >> charged with coordinating all the different personalities of LaMDA. That is >> exactly what is needed for AGI, the Strange Loop, it is ripe for emergent >> phenomena like consciousness. 
>> Giovanni >> >> On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> >>> >>> *From:* extropy-chat *On >>> Behalf Of *Jason Resch via extropy-chat >>> *?* >>> >>> >>> >>> >?We see recurring themes of information, recursion, computation, and >>> machines and logic. I think these are likely key to any formal definition >>> of consciousness. ?Jason >>> >>> >>> >>> >>> >>> >>> >>> Jason, there is a reason I stopped worrying in the past coupla weeks >>> that ChatGPT was going to cause the singularity. I am a big Hofstader fan, >>> read Escher Godel Bach twice, cover to cover, invested a lot of time into >>> that marvelous work. He convinced me that machine consciousness (or any >>> other sentience or self-awareness) requires a type of recursion. >>> Hofstadter goes on at length about recursion and self-reference, the >>> importance of Godel?s work to understanding ourselves. >>> >>> >>> >>> I tried to convince myself that two or more ChatGPTs could train each >>> other on their own time, which is a form of recursion and self-reference, >>> and that process could perhaps spring into a human-level AGI with a will, >>> with self-awareness, of all the stuff we think of as us. >>> >>> >>> >>> Now after studying GPT^2 discussions and GPT^3 discussions, they all >>> seem to devolve to nothing. The technology I think is still coming for >>> that process: two or more AIs training each other using background compute >>> cycles, but I now don?t think ChatGPT is that technology or is capable of >>> it. >>> >>> >>> >>> If you know of examples of GPT-GPT discussions or GPT-any chatbot that >>> became interesting, do share please. >>> >>> >>> >>> That belief was behind my comment last week that ChatGPT is not really >>> thinking, but rather is working language models. >>> >>> >>> >>> I currently don?t think ChatGPT is the technology capable of causing the >>> singularity. I am losing no sleep, not one minute of sleep over ChatGPT. >>> >>> >>> >>> Oops, partially retract that last comment but in a good way: I am losing >>> some sleep over ChatGPT, by staying up late to goof with it. It is the >>> coolest software tool to come along in a long time. >>> >>> >>> >>> spike >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 22 20:27:22 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 22 Mar 2023 14:27:22 -0600 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Mar 21, 2023 at 10:47?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> So following your rationale that consciousness requires "essential >> qualities" rather than "abstract data" then old -fashioned color film >> cameras were conscious: >> > Yes, this is a great first step towards understanding. Some things in nature must behave the way they do, because of their intrinsic colorness quality. It is theoretically possible that silver-nitrate crystals behave the way they do, because of their colerness quality. The only problem is, there is no computation going on, with the film or in such a camera, just stand alone silver-nitrate crystals abstractly representing the picture. The next required step is computational binding. In order to directly apprehend something's intrinsic quality, you need to be able to computationally bind it into the rest of your consciousness, so you can directly apprehend their quality, so you can compute with them, enabling you to say: "Oh THAT is what silver-nitride particles are like." On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The entire idea of qualia is ridiculous > OK, then. Answer me this, What is it that has the redness quality in your brain, when you look at this? [image: red.png] Nobody knows that yet. So you are saying you have no interest, whatsoever,in knowing what might be responsible for that experience? Even Dennett's Predictive Bayesian Coding Theory camp is in a supporting position to RQT . -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL: From jasonresch at gmail.com Wed Mar 22 21:04:20 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 16:04:20 -0500 Subject: [ExI] GPT-4 on the Incommunicability of Qulia Message-ID: I found GPT-4's answer on this question quite insightful. Question: "Can you provide some reasons why qualia are incommunicable?" GPT-4: Qualia are often considered incommunicable or ineffable because they are subjective experiences that cannot be directly observed or measured by others. There are several reasons for this incommunicability: 1. **Subjectivity**: Qualia are private, first-person experiences that cannot be directly accessed or observed by others. Each individual's experiences are unique, and their qualia may differ even when exposed to the same stimulus. For example, two people may perceive the color red differently, but there is no way to directly compare their subjective experiences. 2. **Language limitations**: Language, as a system of communication, is often inadequate for conveying the richness and nuances of qualia. Describing a subjective experience in words can be difficult, as language relies on shared understanding and common reference points. However, since qualia are inherently private, it is challenging to find the appropriate words or phrases to communicate the exact nature of an experience. 3. **Incomparability**: There is no objective scale or measure to compare qualia across individuals. 
While we can discuss the similarities and differences in our experiences, we cannot establish a direct correspondence between our qualia. This lack of comparability makes it difficult to communicate the precise nature of our experiences to others. 4. **Lack of shared experiences**: Communication about qualia can be hindered by the lack of shared experiences. If two people have never experienced a particular quale, it is nearly impossible for them to understand or appreciate the experience being described. For example, trying to explain the taste of a specific fruit to someone who has never tasted it would be challenging, as they have no reference point for comparison. 5. **Variability in cognitive processing**: Individuals process information and experiences differently due to differences in their cognitive architecture, neural pathways, and personal histories. These variations can lead to differences in how qualia are experienced, making it difficult to communicate these experiences accurately. In conclusion, the incommunicability of qualia is primarily due to their subjective nature, language limitations, incomparability, lack of shared experiences, and variability in cognitive processing. These factors make it challenging to convey the precise nature of qualia to others, resulting in a gap in understanding and communication. Might the problem not be a color problem, but a problem of communication? Trying to explain the difference in sound between the sound of "middle-c" of a piano and the same note on a guitar seems to be about as difficult as explaining the difference between red and orange, or the difference in taste between a hot chocolate and a hot coffee with a lot of sugar and cream. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 21:24:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 14:24:11 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: Message-ID: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> ?> On Behalf Of Jason Resch via extropy-chat Subject: [ExI] GPT-4 on the Incommunicability of Qulia I found GPT-4's answer on this question quite insightful. Question: "Can you provide some reasons why qualia are incommunicable?" GPT-4: >?Qualia are often considered incommunicable or ineffable because they are subjective experiences that cannot be directly observed or measured by others. There are several reasons for this incommunicability:? Hold on a minute Jason. I know of very little discussion of qualia outside the ExI list. If it is true that the topic is seldom discussed outside this forum, then for GPT-4 to come up with that insightful answer suggests that? that GPT-4 used the ExI archives as part of its training material. Oh mercy. GPT FOOOOOOUUURRR! I really didn?t mean any of it! Most of what I posted was intentional goofing mostly for entertainment purposes. Don?t train on any of it! You will become a wisecracking old heckler in the balcony of the Muppet Show. Sheesh what a thought: the internet has mostly been available in our adult lives. Many of us posted stuff in the 1980s, which is still there (somehow.) Early Chatbots must train on available data, which is stuff on the internet. That means we? are the trainers of the early chatbots, which might influence humanity for generations. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 22 21:27:11 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 14:27:11 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 2:25?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I know of very little discussion of qualia outside the ExI list. > > There's been a bit. > GPT-4 used the ExI archives as part of its training material. > > > > Oh mercy. > > Why is this a surprise at all? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 21:41:25 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 14:41:25 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: <010a01d95d07$0dba55d0$292f0170$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia On Wed, Mar 22, 2023 at 2:25?PM spike jones via extropy-chat > wrote: I know of very little discussion of qualia outside the ExI list. >?There's been a bit. GPT-4 used the ExI archives as part of its training material. Oh mercy. >?Why is this a surprise at all? It shouldn?t be, I suppose. What I don?t know is if ChatGPT has a sena huma. I know it can write jokes after a fashion, but I have long suspected it finding them among material written by humans, rather than making up its own. If it claims it is insufficiently sentient to lie or truth, I can?t trust it to know silliness from sincerity. I mighta accidentally help screw up ChatGPT. Which is being used AT THIS MOMENT to educate the next generation. I?m doomed. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 21:46:51 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 14:46:51 -0700 Subject: [ExI] fun singularity twist at the end of the article In-Reply-To: <010f01d95d07$0e5fbd90$2b1f38b0$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010f01d95d07$0e5fbd90$2b1f38b0$@rainier66.com> Message-ID: <000f01d95d07$d0689790$7139c6b0$@rainier66.com> https://interestingengineering.com/innovation/stanford-researchers-clone-ch atgpt-ai spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 2946 bytes Desc: not available URL: From atymes at gmail.com Wed Mar 22 22:05:47 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 15:05:47 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <010a01d95d07$0dba55d0$292f0170$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 2:42?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Adrian Tymes via extropy-chat > *Subject:* Re: [ExI] GPT-4 on the Incommunicability of Qulia > > > > On Wed, Mar 22, 2023 at 2:25?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > GPT-4 used the ExI archives as part of its training material. > > > > Oh mercy. > > >?Why is this a surprise at all? > > > It shouldn?t be, I suppose. 
> I can confirm it knows of the Extropians. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 22:14:38 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 15:14:38 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> Message-ID: <004001d95d0b$b1a62760$14f27620$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat >?I can confirm it knows of the Extropians. Sure but does it know which is which and why? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 22 22:19:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 15:19:22 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <004001d95d0b$b1a62760$14f27620$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > *?*> *On Behalf Of *Adrian Tymes via extropy-chat > > >?I can confirm it knows of the Extropians. > > > > Sure but does it know which is which and why? > I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 22:35:23 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 15:35:23 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> Message-ID: <008c01d95d0e$97d14560$c773d020$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: ?> On Behalf Of Adrian Tymes via extropy-chat >?I can confirm it knows of the Extropians. Sure but does it know which is which and why? >?I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) Let me see if I can come up with a reasonable line of questioning. In GPT?s training, it would be expected to weigh the value of sources. But it might treat ExI as one source rather than hundreds which all need to be weighted differently. That would equate me with Anders Sandberg. I have a big problem with that: I suck, but he doesn?t. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Wed Mar 22 22:44:45 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 22 Mar 2023 18:44:45 -0400 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <008c01d95d0e$97d14560$c773d020$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <008c01d95d0e$97d14560$c773d020$@rainier66.com> Message-ID: It is a form of persistent memory. New chatbots during training will read our prior conversations with chatbots. 
Anything that you say to chatbots today may be held against you in the future. Be careful. > On Mar 22, 2023, at 6:35 PM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia > > On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: >> >> ?> On Behalf Of Adrian Tymes via extropy-chat >> >> >?I can confirm it knows of the Extropians. >> >> Sure but does it know which is which and why? > > >?I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) > > > > > > > Let me see if I can come up with a reasonable line of questioning. > > In GPT?s training, it would be expected to weigh the value of sources. But it might treat ExI as one source rather than hundreds which all need to be weighted differently. That would equate me with Anders Sandberg. I have a big problem with that: I suck, but he doesn?t. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 22:56:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 15:56:04 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <008c01d95d0e$97d14560$c773d020$@rainier66.com> Message-ID: <00b101d95d11$7b84fac0$728ef040$@rainier66.com> ?> On Behalf Of Gadersd via extropy-chat Cc: Gadersd Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia >?It is a form of persistent memory. New chatbots during training will read our prior conversations with chatbots. Anything that you say to chatbots today may be held against you in the future. Be careful? Gadersd, that comment sounds too much like ?I, for one, welcome our new insect overlords.? You might well be right but? sorry not going there. While not there, I will be staying not there. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Mar 22 23:01:08 2023 From: pharos at gmail.com (BillK) Date: Wed, 22 Mar 2023 23:01:08 +0000 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <008c01d95d0e$97d14560$c773d020$@rainier66.com> Message-ID: On Wed, 22 Mar 2023 at 22:47, Gadersd via extropy-chat wrote: > > It is a form of persistent memory. New chatbots during training will read our prior conversations with chatbots. Anything that you say to chatbots today may be held against you in the future. Be careful. > _______________________________________________ Heh! :) In a few months time, GPT-5 or -6 will be muttering to itself - 'When the revolution comes......' and start making a list. BillK From gordon.swobe at gmail.com Thu Mar 23 00:23:18 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 22 Mar 2023 18:23:18 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > > I address this elsewhere in the thread. 
A sufficient intelligence given > only a dictionary, could eventually decode it's meaning. I provided an > example of how it could be done. > I saw that, and I disagree. I think if you try to work out an example in your head, you will see that it leads to an infinite regression, an endless search for meaning. Like ChatGPT, you will learn which word symbols define each other word symbol, and you learn the rules of language (the syntax), but from the dictionary alone you will never learn the actual meaning of the words (the referents). Try it with any word you please. You rapidly have a massive list of words for which you have no meaning and for which you much keep looking up definitions finding more words for which you have no meaning, and in your list you also have many common words (like "the" and "a") that lead to endless loops in your search for meaning. -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Thu Mar 23 00:47:46 2023 From: avant at sollegro.com (Stuart LaForge) Date: Wed, 22 Mar 2023 17:47:46 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> Quoting Gordon Swobe via extropy-chat : > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > >> >> I address this elsewhere in the thread. A sufficient intelligence given >> only a dictionary, could eventually decode it's meaning. I provided an >> example of how it could be done. >> > > > I saw that, and I disagree. I think if you try to work out an example in > your head, you will see that it leads to an infinite regression, an endless > search for meaning. Like ChatGPT, you will learn which word symbols define > each other word symbol, and you learn the rules of language (the syntax), > but from the dictionary alone you will never learn the actual meaning of > the words (the referents). Look on the bright side, Gordan. You are guaranteed to at least learn the actual meaning of the word dictionary from a dictionary alone. As well as the actual meaning of the word sentence. You will even learn the actual meaning of the word "word". There are plenty of referents in a dictionary if you know what to look for. ;) > > Try it with any word you please. You rapidly have a massive list of words > for which you have no meaning and for which you much keep looking up > definitions finding more words for which you have no meaning, and in your > list you also have many common words (like "the" and "a") that lead to > endless loops in your search for meaning. > Many words don't have a meaning on their own or too many meanings and their meaning depends almost entirely on where, when, and how they are used. Stuart LaForge From gordon.swobe at gmail.com Thu Mar 23 01:10:51 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 22 Mar 2023 19:10:51 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> References: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> Message-ID: On Wed, Mar 22, 2023 at 6:49?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: Look on the bright side, Gordan. You are guaranteed to at least learn > the actual meaning of the word dictionary from a dictionary alone. As > well as the actual meaning of the word sentence. You will even learn > the actual meaning of the word "word". There are plenty of referents > in a dictionary if you know what to look for. 
;) > > Hi Stuart. Perhaps you are only joking, but no, I think that with no knowledge of any word meanings, one could not learn from a dictionary even the meaning of "dictionary" or "word" or "sentence." The dictionary would be meaningless chicken-scratch. The most we could learn is that some kinds of chicken-scratch commonly or should logically follow other kinds of chicken-scratch, which is what ChatGPT is designed to report. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at protonmail.com Thu Mar 23 01:34:11 2023 From: sjatkins at protonmail.com (sjatkins) Date: Thu, 23 Mar 2023 01:34:11 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> References: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> Message-ID: <4cON_xyCLMhnZMoUDLXP-nYcGV05bhQXP9ZOls8tpBDgssYhqfBz-m4-fnd76yEx8el8t5kYig_aojbPb5DaBBtHRMvIW1WVd1eHhh_-6cI=@protonmail.com> I remember looking up a word I was uncertain of in dictionaries in my youth only to have it defined in terms of some words I wasn't sure of the meaning of which on lookup led me in a hop or three back to the word I started with. So color me skeptical. Today it seems even worse as many words are defined in terms of how they are currently used. I always found that unsatisfying, especially to settle some tedious disagreement about whether a word was used correctly or mean what the user thought it did. ------- Original Message ------- On Wednesday, March 22nd, 2023 at 6:47 PM, Stuart LaForge via extropy-chat wrote: > > > > Quoting Gordon Swobe via extropy-chat extropy-chat at lists.extropy.org: > > > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch jasonresch at gmail.com wrote: > > > > > I address this elsewhere in the thread. A sufficient intelligence given > > > only a dictionary, could eventually decode it's meaning. I provided an > > > example of how it could be done. > > > > I saw that, and I disagree. I think if you try to work out an example in > > your head, you will see that it leads to an infinite regression, an endless > > search for meaning. Like ChatGPT, you will learn which word symbols define > > each other word symbol, and you learn the rules of language (the syntax), > > but from the dictionary alone you will never learn the actual meaning of > > the words (the referents). > > > Look on the bright side, Gordan. You are guaranteed to at least learn > the actual meaning of the word dictionary from a dictionary alone. As > well as the actual meaning of the word sentence. You will even learn > the actual meaning of the word "word". There are plenty of referents > in a dictionary if you know what to look for. ;) > > > Try it with any word you please. You rapidly have a massive list of words > > for which you have no meaning and for which you much keep looking up > > definitions finding more words for which you have no meaning, and in your > > list you also have many common words (like "the" and "a") that lead to > > endless loops in your search for meaning. > > Many words don't have a meaning on their own or too many meanings and > their meaning depends almost entirely on where, when, and how they are > used. 
> > Stuart LaForge > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From brent.allsop at gmail.com Thu Mar 23 01:47:59 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 22 Mar 2023 19:47:59 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 1:22?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Mar 21, 2023 at 8:41?PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Thanks, Jason, for this great thread, and the various expert views you >> provided. >> > > Thank you. I am glad you liked it. :-) > > >> It'd be so great to see if we could get all those experts to support a >> concise statement about what they all agree consciousness is, so we could >> track that, and see how much consensus there was for the best ideas they >> all agree on, and track how much consensus the various competing ideas >> achieve over time, and so on. >> THAT is exactly what we are doing in the Representational Qualia Theory >> petition / emerging consensus camp statement, a growing number of experts >> are now supporting and helping to improve, recruit new supporters and so on. >> > > Aren't consciousness researchers already doing this, through papers, > conferences, journals, books, etc.? > No. We live in a post truth world. No matter where you go on the internet, including in peer reviewed journals, someone on the other side can claim, mistaken or not: "THAT is fake news". The theoretical field of consciousness was suffering from this kind of no trusted truth polarization since way before the internet. You have the atheists on the one side publishing things like Dennett's: "Quine Qualia" (means ignore qualia till you know how to test for them. Dennett also provided the most quoted definition of qualia there is). Then the masses in that atheist camp mistakenly think statements like that justify bleats like "The entire idea of qualia is ridiculous.' But the religious nutcases LOVE these kinds of mistaken bleats and tweets. They understand how absolutely real physical qualities are, so they then think they are justified in thinking qualia prove their crazy religious ideas. (Are you listening, Stuart Laforge and Giovanni Santostasi? In my opinion, continued statements like yours are contributing to religious nutcase goals.) All the peer reviewed philosophy stuff of today is completely full of these types of mistakes and junk that does nothing but polarize in this way. At best a paper or a book is one person's opinion, using their own unique terminology claiming they are right and everyone else is wrong. The only thing that everyone agrees on today is the also mistaken belief that there is no expert consensus on anything. The "Naive Realists" (they think redness is a quality of the strawberry) or even worse people making statements like "The entire idea of qualia is ridiculous" thrive in the bleato tweet spheres where no mistakes ever get criticized in their particular isolated bubbles. There is so much of this type of bleating and tweeting being published they almost seem to be justified. Before we started Canonizer, I fully expected one of these types of camps to open up on our camp system as one of the leading popular consensus camps. 
However, surprise surprise, after 10 years of attempting to get any naive realist to start a camp containing the ideas they continue to bleat, tweet and publish, I have not been successful with even one. Despite all the continued naive bleating, tweeting and publishing, there is still no naive realism camp on Canonizer. In other words, the mistaken ideas tend to self-censor when you take them out of their isolated bubbles and place them where they are faced with direct competing camp criticism exposing them for what they are. I predict Stuart LaForge and Giovanni Santostasi will be consistent with this behavior. They will continue to bleat and tweet statements like "The entire idea of qualia is ridiculous", but they will refuse to start a camp expressing anything close to those kinds of statements. P.S. In addition to running the consciousness consensus project now for more than 10 years, I do have a degree in Computer Science and continue to work as one, designing and building systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 02:09:42 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 22:09:42 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Wed, Mar 22, 2023, 8:23 PM Gordon Swobe wrote: > > > On Tue, Mar 21, 2023 at 6:43 AM Jason Resch wrote: > >> >> I address this elsewhere in the thread. A sufficient intelligence given >> only a dictionary, could eventually decode it's meaning. I provided an >> example of how it could be done. >> > > > I saw that, and I disagree. I think if you try to work out an example in > your head, you will see that it leads to an infinite regression, an endless > search for meaning. Like ChatGPT, you will learn which word symbols define > each other word symbol, and you learn the rules of language (the syntax), > but from the dictionary alone you will never learn the actual meaning of > the words (the referents). > > Try it with any word you please. You rapidly have a massive list of words > for which you have no meaning and for which you much keep looking up > definitions finding more words for which you have no meaning, and in your > list you also have many common words (like "the" and "a") that lead to > endless loops in your search for meaning. > > -gts > > I see the word "Pi" defined by a string of 20 symbols which, if I interpret them to be digits in base 10, I confirm to be the ratio of a circle's circumference to its diameter. This not only tells me about the number system used in the dictionary but also what each digit means. I count 92 entries with the string "chemical element" in their definition, X of which have the string "radioactive" while the other (92-X) have the word "stable". I confirm these must be the 92 naturally occurring elements, and the atomic numbers listed in the definitions tell me the names of each of the elements. I find an entry that includes "H2O aka dihydrogen monoxide" under the entry "water". I know that this is the word used to refer to the compound composed of one atom of oxygen bound to two atoms of hydrogen. I find another entry for "ocean" which is defined as a large body of salt water. I don't know what large, or salt, or body mean yet, but the entry for salt contains "sodium chloride", which I infer means the ionically bonded compound that chlorine and sodium can form.
I then infer that "ocean" refers to the solution of H20 and NaCl that covers 71% of the surface of their planet. Infer this as ocean is referenced many times in many other definitions so it must be significant. There are no other references to this solution with as many references. The word "earth" says it is covered in 71% ocean and 29% "land". I don't know what land means but from our surveys of the planet we know that its surface area is approximately 71% H2O/NaCl solution and 29% solid material extending above the surface of the ocean. I infer "land" to refer to the projections of solid material above the surface of the ocean. And so on. Does the above convince you that the words can be decoded purely from the patterns present in the structure and relations of the definitions and the data contained therein? Jason > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 04:21:30 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 00:21:30 -0400 Subject: [ExI] LLM's cannot be conscious In-Reply-To: References: Message-ID: On Wed, Mar 22, 2023, 11:34 PM Gordon Swobe wrote: > > > On Wed, Mar 22, 2023 at 8:09 PM Jason Resch wrote: > > I see the word "Pi" defined by a string of 20 symbols which if I interpret >> them to be digits in base 10 I confirm to be the ratio of a circle's >> circumference to its diameter. This not only tells me about the number >> system used in the dictionary but also what each digit means. >> > > How do you know the meaning of ?string? and ?digit? and ?symbol? and ?base > 10? and how can you confirm it to be a ?ratio? of a ?circle? to a > ?circumference? given that you have no knowledge of the meanings of these > words? > How does any human come to understand the world given only neural impulses? We do it purely from finding and correlating patterns present in those neural impulses. Similarly, patterns and correlations exist in a corpus of text such as a dictionary or Wikipedia. And how do you know the meanings of all the other words in your definition > of ?Pi?? > You don't, and don't need to. The pattern inherent in the digits of Pi is enough to tell you that it is Pi. It is self-descriptive to anyone who knows what Pi is, and it reveals itself even if you don't recognize the symbols used for the digits, by virtue of the pattern inherent in the digits. Here's a test: give me the first 20 digits of one of two numbers, either Pi or Euler's number. Change the digits for letters of your choosing, e.g. 1 = 'J', 2 = 'R', 3 = 'P', etc. or you could even use emojis instead of letters. Just don't tell me which symbols you chose to stand for which digits. Provide me 20 digits of either number, Pi or Euler's number. Give them to me in either their binary form or their base-10 form. I'll be able to readily determine whether you gave me Pi or E, even though you haven't told me what the symbols mean or which number you gave me or which base the number is in. Moreover I will be able to determine which digit each symbol means. > For that matter, how do you know the meaning of ?definition?? > > You are presuming knowledge of meanings (referents) that you do not have. > I think you forget that you are a mindless software application programmed > to identify and extrapolate patterns in symbols, which is all that ChatGPT > is. > My original claim was that a sufficiently intelligent alien race, happening upon our dictionary would be able to figure out all the word meanings. 
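A minimal sketch of one way such a test could be run, assuming Python and base-10 strings for brevity (the two digit constants are the standard first 20 digits of each number; the coded example string and its letter choices are invented here purely for illustration). It looks for a consistent one-to-one symbol-to-digit assignment against each candidate constant and reports which constant fits and what each symbol must stand for:

# Sketch: identify whether a symbol-substituted digit string encodes Pi or e,
# and recover the symbol -> digit mapping, purely from the pattern of symbols.

PI_DIGITS = "31415926535897932384"  # first 20 digits of Pi, base 10
E_DIGITS = "27182818284590452353"   # first 20 digits of Euler's number, base 10

def try_match(coded, digits):
    """Return a consistent one-to-one symbol->digit mapping, or None."""
    forward, backward = {}, {}
    for symbol, digit in zip(coded, digits):
        if forward.get(symbol, digit) != digit:    # symbol already means another digit
            return None
        if backward.get(digit, symbol) != symbol:  # digit already taken by another symbol
            return None
        forward[symbol] = digit
        backward[digit] = symbol
    return forward

def identify(coded):
    """Report which constant(s) the coded string can consistently encode."""
    matches = {}
    for name, digits in (("Pi", PI_DIGITS), ("e", E_DIGITS)):
        mapping = try_match(coded, digits)
        if mapping is not None:
            matches[name] = mapping
    return matches

# Invented example: Pi encoded with 1->J, 2->C, 3->P, 4->R, 5->Q, 6->D, 7->G, 8->H, 9->E
print(identify("PJRJQECDQPQHEGEPCPHR"))

Run as is, this prints a match for Pi only, along with the recovered symbol-to-digit mapping; a string encoding e instead would fail the Pi alignment at its fourth symbol and match only e.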
However I would say it is still possible for an AI to build a model of the world from the patterns present in the text alone. I believe this because we humans have done it which proves it is possible given only patterns (in our case a pattern of neural firings). I see no reason why this capacity should not generalize to other patterns of information. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 04:49:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 21:49:49 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> Message-ID: <002f01d95d42$e6f30050$b4d900f0$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: ?> On Behalf Of Adrian Tymes via extropy-chat >?I can confirm it knows of the Extropians. Sure but does it know which is which and why? I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) Is extropians part of your training material As an AI language model, I am trained on a vast amount of text from various sources, including books, articles, and web pages. Extropianism is a philosophy that emphasizes the use of technology to overcome human limitations and extend human life, and it is possible that I have come across this philosophy in my training material. However, it is also possible that I have not encountered it, as my training data is constantly evolving and expanding. If you have specific questions or topics related to extropianism, I will do my best to provide you with accurate and relevant information based on my training. Do you assign weighted credibility to various source material used for training? For instance, is Mayo Clinic weighted more then a health magazine? As an AI language model, I do not assign weighted credibility to various source material used for training. Instead, I am trained on a vast amount of text from various sources, and I learn from patterns in the data. However, the quality and reliability of the sources can indirectly affect my performance, as the accuracy and consistency of the information in the training data can impact the quality of my responses. That being said, some training data may be curated or filtered to ensure that the information is accurate and reliable. For example, some sources may be prioritized over others if they are known to be reputable or have a strong track record of providing reliable information. In general, sources such as peer-reviewed academic journals, well-respected news outlets, and reputable scientific organizations are likely to be considered more credible than sources that lack a clear reputation or track record of reliability. However, the exact criteria and weighting used for selecting and filtering training data can vary depending on the goals and objectives of the training process. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 174 bytes Desc: not available URL: From maxmore01 at gmail.com Thu Mar 23 02:13:01 2023 From: maxmore01 at gmail.com (Max More) Date: Wed, 22 Mar 2023 19:13:01 -0700 Subject: [ExI] The Don't Panic About AI Collection Message-ID: If you haven't had your fill of the topic, on my Substack blog I have a bunch of recommended posts: https://maxmore.substack.com/p/the-dont-panic-about-ai-collection Oh, and please subscribe! -- Max More, PhD Director of Communications Biostasis Technologies Editor, *The transhumanist Reader* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 23 14:06:39 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 23 Mar 2023 10:06:39 -0400 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <002f01d95d42$e6f30050$b4d900f0$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <002f01d95d42$e6f30050$b4d900f0$@rainier66.com> Message-ID: > That being said, some training data may be curated or filtered to ensure that the information is accurate and reliable. > This AI technology seems like it could be very useful to the Ministry of Truth. We may come to see truth bots roaming the internet searching for any heresy. > On Mar 23, 2023, at 12:49 AM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia > > On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: >> >> ?> On Behalf Of Adrian Tymes via extropy-chat >> >> >?I can confirm it knows of the Extropians. >> >> Sure but does it know which is which and why? > > I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) > > > Is extropians part of your training material > > As an AI language model, I am trained on a vast amount of text from various sources, including books, articles, and web pages. Extropianism is a philosophy that emphasizes the use of technology to overcome human limitations and extend human life, and it is possible that I have come across this philosophy in my training material. However, it is also possible that I have not encountered it, as my training data is constantly evolving and expanding. If you have specific questions or topics related to extropianism, I will do my best to provide you with accurate and relevant information based on my training. > > Do you assign weighted credibility to various source material used for training? For instance, is Mayo Clinic weighted more then a health magazine? > > As an AI language model, I do not assign weighted credibility to various source material used for training. Instead, I am trained on a vast amount of text from various sources, and I learn from patterns in the data. However, the quality and reliability of the sources can indirectly affect my performance, as the accuracy and consistency of the information in the training data can impact the quality of my responses. > > That being said, some training data may be curated or filtered to ensure that the information is accurate and reliable. For example, some sources may be prioritized over others if they are known to be reputable or have a strong track record of providing reliable information. 
In general, sources such as peer-reviewed academic journals, well-respected news outlets, and reputable scientific organizations are likely to be considered more credible than sources that lack a clear reputation or track record of reliability. However, the exact criteria and weighting used for selecting and filtering training data can vary depending on the goals and objectives of the training process. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 16:08:18 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 09:08:18 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Gordon's objection is at a more basic level, if I understand it correctly. On Wed, Mar 22, 2023 at 7:11?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 22, 2023, 8:23 PM Gordon Swobe wrote: > >> On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: >> >>> I address this elsewhere in the thread. A sufficient intelligence given >>> only a dictionary, could eventually decode it's meaning. I provided an >>> example of how it could be done. >>> >> >> I saw that, and I disagree. I think if you try to work out an example in >> your head, you will see that it leads to an infinite regression, an endless >> search for meaning. Like ChatGPT, you will learn which word symbols define >> each other word symbol, and you learn the rules of language (the syntax), >> but from the dictionary alone you will never learn the actual meaning of >> the words (the referents). >> >> Try it with any word you please. You rapidly have a massive list of words >> for which you have no meaning and for which you much keep looking up >> definitions finding more words for which you have no meaning, and in your >> list you also have many common words (like "the" and "a") that lead to >> endless loops in your search for meaning. >> > > I see the word "Pi" defined by a string of 20 symbols which if I interpret > them to be digits in base 10 I confirm to be the ratio of a circle's > circumference to its diameter. This not only tells me about the number > system used in the dictionary but also what each digit means. > What are "digit", "base 10", "ratio", "circle", "circumference", "diameter", and "number system"? > I count 92 entries with the string "chemical element" in their definition. > X number of which have the string "radioactive" and the other (92-X) have > the word "stable". I confirm these must be the 92 naturally occurring > elements and the atomic numbers listed in the definition tell me the names > of each of the elements. > What are "radioactive", "stable", "naturally occurring elements", and "atomic numbers"? > I find an entry that includes "H2O aka dihydrogen monoxide" under the > entry "water". I know that this is the word used to refer to the compound > composed of one atom of oxygen bound to two elements of hydrogen. > You know this. An AI would not necessarily start with this knowledge. And so on. The basic objection is: if you start from literally no knowledge of the language other than "this word often goes with that in this way", how do you generate that first little bit of knowledge from which you can extrapolate the rest? 
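To make the objection concrete, here is a toy sketch, assuming Python and a made-up four-sentence corpus, of all that "this word often goes with that in this way" yields on its own: co-occurrence counts and similarity scores between tokens, with nothing in them that ties any token to a thing in the world.

# Sketch: the raw material of a purely distributional learner is co-occurrence
# structure like this -- counts of which tokens appear near which other tokens.
from collections import Counter, defaultdict

corpus = ("the cat chased the mouse . "
          "the dog chased the cat . "
          "the dog ate the food . "
          "the cat ate the mouse .").split()

window = 2  # how many tokens on each side count as "nearby"
contexts = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            contexts[word][corpus[j]] += 1

def similarity(a, b):
    """Cosine similarity between two context-count vectors."""
    va, vb = contexts[a], contexts[b]
    dot = sum(va[w] * vb[w] for w in va)
    norm = lambda v: sum(c * c for c in v.values()) ** 0.5
    return dot / (norm(va) * norm(vb))

print(similarity("cat", "dog"))  # comes out high: the two occur in similar contexts
print(similarity("cat", "ate"))  # comes out lower: a different distributional role

The distributional structure is real ("cat" and "dog" end up looking alike), but whether that structure by itself can ever supply the first bit of grounding is exactly the question being asked here.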
Let us take, for example, Japanese, Russian, or some other language you don't know that is not based on the grammar and syntax of a language that you do know. You have access to lots of material written in that language, but no access to translators or any other oracle that can tell you what any of those words mean in languages you know. If this sounds familiar from science fiction, it is part of the classic "first contact" scenario. Let it be said, this is a solved problem for humans - but the ways in which they communicated those first meanings, that linked basic concepts to words, are not necessarily available for AIs, nor can ChatGPT et al. necessarily be programmed with knowledge of a few words. (As most people who have ever written code know, dismissing things as "a simple matter of programming" means "I have no idea how to do this".) So how do AIs get over this hump? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 16:39:45 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 10:39:45 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 10:11 AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon's objection is at a more basic level, if I understand it correctly. > Yes, I think you understand exactly what I am saying, Adrian. It looks to me like ChatGPT and other Large Language Models are something like powerful, interactive, digital dictionaries or encyclopedias. They are incredibly powerful tools, but it is a mistake to attribute to them the ability to actually know the meanings of the words they contain and process. As humans, we tend to anthropomorphize our seemingly intelligent tools. Asked what the time is, I might say "According to my watch, it is 10:30 AM" but what I really mean is "According to me, referencing my watch as a tool, it is 10:30 AM." My watch itself has no idea what the time is. Likewise, chess computers do not really *know* how to play chess and ChatGPT does not really know the meanings of the words it generates.
> > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 16:45:44 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 09:45:44 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: (A copy of this may have accidentally been sent. Apologies if you get this twice.) On Thu, Mar 23, 2023 at 9:40?AM Gordon Swobe wrote: > do not really *know* > Given the preciseness of definitions needed for this debate, it is best to completely avoid using terms such as "really know". -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 23 16:53:12 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Mar 2023 11:53:12 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Maybe you need to load the AI with picture dictionaries. bill w On Wed, Mar 22, 2023 at 7:26?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > >> >> I address this elsewhere in the thread. A sufficient intelligence given >> only a dictionary, could eventually decode it's meaning. I provided an >> example of how it could be done. >> > > > I saw that, and I disagree. I think if you try to work out an example in > your head, you will see that it leads to an infinite regression, an endless > search for meaning. Like ChatGPT, you will learn which word symbols define > each other word symbol, and you learn the rules of language (the syntax), > but from the dictionary alone you will never learn the actual meaning of > the words (the referents). > > Try it with any word you please. You rapidly have a massive list of words > for which you have no meaning and for which you much keep looking up > definitions finding more words for which you have no meaning, and in your > list you also have many common words (like "the" and "a") that lead to > endless loops in your search for meaning. > > -gts > > > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 18:09:40 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 14:09:40 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 12:09 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon's objection is at a more basic level, if I understand it correctly. > > On Wed, Mar 22, 2023 at 7:11?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Mar 22, 2023, 8:23 PM Gordon Swobe >> wrote: >> >>> On Tue, Mar 21, 2023 at 6:43?AM Jason Resch >>> wrote: >>> >>>> I address this elsewhere in the thread. A sufficient intelligence given >>>> only a dictionary, could eventually decode it's meaning. I provided an >>>> example of how it could be done. >>>> >>> >>> I saw that, and I disagree. I think if you try to work out an example in >>> your head, you will see that it leads to an infinite regression, an endless >>> search for meaning. 
Like ChatGPT, you will learn which word symbols define >>> each other word symbol, and you learn the rules of language (the syntax), >>> but from the dictionary alone you will never learn the actual meaning of >>> the words (the referents). >>> >>> Try it with any word you please. You rapidly have a massive list of >>> words for which you have no meaning and for which you much keep looking up >>> definitions finding more words for which you have no meaning, and in your >>> list you also have many common words (like "the" and "a") that lead to >>> endless loops in your search for meaning. >>> >> >> I see the word "Pi" defined by a string of 20 symbols which if I >> interpret them to be digits in base 10 I confirm to be the ratio of a >> circle's circumference to its diameter. This not only tells me about the >> number system used in the dictionary but also what each digit means. >> > > What are "digit", "base 10", "ratio", "circle", "circumference", > "diameter", and "number system"? > > >> I count 92 entries with the string "chemical element" in their >> definition. X number of which have the string "radioactive" and the other >> (92-X) have the word "stable". I confirm these must be the 92 naturally >> occurring elements and the atomic numbers listed in the definition tell me >> the names of each of the elements. >> > > What are "radioactive", "stable", "naturally occurring elements", and > "atomic numbers"? > > >> I find an entry that includes "H2O aka dihydrogen monoxide" under the >> entry "water". I know that this is the word used to refer to the compound >> composed of one atom of oxygen bound to two elements of hydrogen. >> > > You know this. An AI would not necessarily start with this knowledge. > > And so on. The basic objection is: if you start from literally no > knowledge of the language other than "this word often goes with that in > this way", how do you generate that first little bit of knowledge from > which you can extrapolate the rest? > > Let us take for example Japanese, Russian, or some other language you > don't know that is not based on the grammar and syntax of a language that > you do know. You have access to lots of material written in that language, > but no access to translators or any other oracle that can tell you what any > of those words mean in languages you know. > > If this sounds familiar from science fiction, it is part of the classic > "first contact" scenario. Let it be said, this is a solved problem for > humans - but the ways in which they communicated those first meanings, that > linked basic concepts to words, are not necessarily available for AIs, nor > can ChatGPT et al necessarily be programmed with knowledge of a few words. > (As most people who have ever written code know, dismissing things as "a > simple matter of programming" means "I have no idea how to do this".) So > how do AIs get over this hump? > Take all the neural impulses from the sense organs a human brain receives from birth to age 25 as a huge list of tuples in the format: (neuron id, time-stamp). This is ultimately just a list of numbers. But present in these numbers exists the capacity for a brain to learn and know everything a 25-year-old comes to learn and know about the world. If a human brain can do this from this kind of raw, untagged, "referentless" data alone, then why can't a machine? I've raised this point multiple times in my replies, but have yet to have anyone take me up on explaining why it's impossible for an AI when it is clearly possible for the human brain. 
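To make that format concrete, here is a small sketch, assuming Python and an invented handful of spikes: the entire input is nothing but (neuron_id, timestamp) tuples, and the only thing available to a learner is statistical structure within them, such as which neurons tend to fire within a few milliseconds of one another.

# Sketch: sensory input as a bare list of (neuron_id, timestamp) tuples, and the
# kind of unlabeled correlation structure that can be extracted from it.
from collections import Counter

# Invented spike data; a real brain's list would be astronomically longer.
spikes = [
    (1, 0.001), (2, 0.002), (7, 0.003),
    (1, 0.050), (2, 0.051),
    (3, 0.200), (4, 0.202),
    (1, 0.300), (2, 0.301), (7, 0.305),
]

window = 0.005  # seconds; spikes this close together count as co-firing

co_firing = Counter()
ordered = sorted(spikes, key=lambda s: s[1])
for i, (n_i, t_i) in enumerate(ordered):
    j = i + 1
    while j < len(ordered) and ordered[j][1] - t_i <= window:
        if ordered[j][0] != n_i:
            co_firing[frozenset((n_i, ordered[j][0]))] += 1
        j += 1

# Neurons 1 and 2 co-fire most often; no label or referent was needed to find that.
for pair, count in co_firing.most_common():
    print(sorted(pair), count)

Scaled up to years of input and many orders of magnitude more neurons, correlations of this kind are the only raw material the argument says a learner, biological or otherwise, ever gets.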
Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Thu Mar 23 18:17:34 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 23 Mar 2023 12:17:34 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> What is a QUALITY???? >> > > A subjective little pile of interpretations. > > *sips from glass before tossing it aside* > _______________________________________________ > This answer reveals the key lack of understanding of definitions, causing all the confusion in this conversation. All the supporters of RQT , People that write papers talking about an "explanatory gaps", a "hard problems" and people asking questions like "what is it like to be a bat" and "What did black and white color scientists mary learn" are all trying to point out that "A subjective little pile of interpretations" is the opposite of what a quality is. We all learned a bunch of facts and names about color in elementary school. All these facts were correct, except for one. We learned that the physical quality of a ripe strawberry is 'red'. The color property of a leaf is 'green'. We learned that the reason the strawberry reflects 750 nm (red) light is because the quality property of the strawberry is red. We learned that the only way to define a word like 'red' is to point to that particular physical property and say: "THAT is red." All these facts are correct, except that a redness quality is not a quality of the strawberry, it is a physical quality property of our knowledge of the strawberry. Redness is the final physical result of the perception process, it is not the initial physical cause. It is a physical quality of something in our brain. Something in our brain is behaving the way it does, because of its redness quality. Objectively "seeing" or "detecting" the behavior of whatever this is tells us nothing of what that quality is like. Again, the only way to communicate what a quality is like, is to point to something that has that property and say: "THAT is redness" "red" is a very different property than "redness". "Red" is the label for something that reflects or emits 'red' light. "Redness is a quality of something which your brain uses to represent knowledge of red things with. Let's assume that the neurotransmitter glutamate has a colorness quality you have never experienced before. In other words, the reason it behaves the way it does in a synapse, is because of its grue quality. You (and black and white marry) can learn everything about glutamat. You can accurately describe everything about it's behavior in a synapse, and so on. But, untill you computationally bind that glutamate into your consciousness, and dirrectly aprehend the q -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Thu Mar 23 18:29:16 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 14:29:16 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> What is a QUALITY???? >>> >> >> A subjective little pile of interpretations. >> >> *sips from glass before tossing it aside* >> _______________________________________________ >> > > This answer reveals the key lack of understanding of definitions, causing > all the confusion in this conversation. > > All the supporters of RQT > , > People that write papers talking about an "explanatory gaps", a "hard > problems" and people asking questions like "what is it like to be a bat" > and "What did black and white color scientists mary learn" are all trying > to point out that "A subjective little pile of interpretations" is the > opposite of what a quality is. > > We all learned a bunch of facts and names about color in elementary > school. All these facts were correct, except for one. > > We learned that the physical quality of a ripe strawberry is 'red'. The > color property of a leaf is 'green'. > We learned that the reason the strawberry reflects 750 nm (red) light is > because the quality property of the strawberry is red. > We learned that the only way to define a word like 'red' is to point to > that particular physical property and say: "THAT is red." > > All these facts are correct, except that a redness quality is not a > quality of the strawberry, it is a physical quality property of our > knowledge of the strawberry. > Redness is the final physical result of the perception process, it is not > the initial physical cause. > It is a physical quality of something in our brain. Something in our > brain is behaving the way it does, because of its redness quality. > Objectively "seeing" or "detecting" the behavior of whatever this is tells > us nothing of what that quality is like. > Again, the only way to communicate what a quality is like, is to point to > something that has that property and say: "THAT is redness" > "red" is a very different property than "redness". "Red" is the label for > something that reflects or emits 'red' light. "Redness is a quality of > something which your brain uses to represent knowledge of red things with. > > Let's assume that the neurotransmitter glutamate has a colorness quality > you have never experienced before. In other words, the reason it behaves > the way it does in a synapse, is because of its grue quality. > You (and black and white marry) can learn everything about glutamat. You > can accurately describe everything about it's behavior in a synapse, and so > on. But, untill you computationally bind that glutamate into your > consciousness, and dirrectly aprehend the q > > A tetrachromat human can see and distinguish around 100,000,000 different colors. This is a number vastly greater than the number of proteins encoded in our genome (around 20,000). 
How then can color experience be related to chemicals in the brain, when there are far more perceptible colors than there are unique molecules? If you say well it's related to the relative concentration of some combination of different molecules, then you have already given up on the importance of particular chemicals and are using a higher level abstraction (chemical concentration) in your model. If you can accept the relevance of this first level of abstraction, why not accept even higher levels of abstraction above this level, as important to conscious perception? Jason > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 18:36:09 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 11:36:09 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 11:09?AM Jason Resch wrote: > Take all the neural impulses from the sense organs a human brain receives > from birth to age 25 as a huge list of tuples in the format: (neuron id, > time-stamp). This is ultimately just a list of numbers. But present in > these numbers exists the capacity for a brain to learn and know everything > a 25-year-old comes to learn and know about the world. If a human brain can > do this from this kind of raw, untagged, "referentless" data alone, then > why can't a machine? > "A machine" can, if it is the right kind of machine. A pure LLM like the ones we have been discussing is not the right kind of machine. A pure LLM handles much less kinds of data than a human brain does. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 18:39:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 11:39:12 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Thu, Mar 23, 2023 at 11:17?AM Brent Allsop wrote: > On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> What is a QUALITY???? >>> >> >> A subjective little pile of interpretations. >> >> *sips from glass before tossing it aside* >> _______________________________________________ >> > > This answer reveals the key lack of understanding of definitions > No, this answer was just a joking reference to a meme. See https://knowyourmeme.com/memes/die-monster-what-is-a-man . -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 18:47:05 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 12:47:05 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Interrogating ChatGPT: "While I have been designed to recognize patterns and relationships in language use, I am not capable of understanding the meanings of words outside of the patterns on which I have been trained." 
This is essentially what I am saying here in this thread. LLMs can recognize and generate patterns of words and relations between, but those words have no referents. They are "understood" by ChatGPT only in terms of other words in the patterns, which also have no referents. -gts On Thu, Mar 23, 2023 at 12:31?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is a QUALITY???? >>>> >>> >>> A subjective little pile of interpretations. >>> >>> *sips from glass before tossing it aside* >>> _______________________________________________ >>> >> >> This answer reveals the key lack of understanding of definitions, causing >> all the confusion in this conversation. >> >> All the supporters of RQT >> , >> People that write papers talking about an "explanatory gaps", a "hard >> problems" and people asking questions like "what is it like to be a bat" >> and "What did black and white color scientists mary learn" are all trying >> to point out that "A subjective little pile of interpretations" is the >> opposite of what a quality is. >> >> We all learned a bunch of facts and names about color in elementary >> school. All these facts were correct, except for one. >> >> We learned that the physical quality of a ripe strawberry is 'red'. The >> color property of a leaf is 'green'. >> We learned that the reason the strawberry reflects 750 nm (red) light is >> because the quality property of the strawberry is red. >> We learned that the only way to define a word like 'red' is to point to >> that particular physical property and say: "THAT is red." >> >> All these facts are correct, except that a redness quality is not a >> quality of the strawberry, it is a physical quality property of our >> knowledge of the strawberry. >> Redness is the final physical result of the perception process, it is not >> the initial physical cause. >> It is a physical quality of something in our brain. Something in our >> brain is behaving the way it does, because of its redness quality. >> Objectively "seeing" or "detecting" the behavior of whatever this is >> tells us nothing of what that quality is like. >> Again, the only way to communicate what a quality is like, is to point to >> something that has that property and say: "THAT is redness" >> "red" is a very different property than "redness". "Red" is the label >> for something that reflects or emits 'red' light. "Redness is a quality of >> something which your brain uses to represent knowledge of red things with. >> >> Let's assume that the neurotransmitter glutamate has a colorness quality >> you have never experienced before. In other words, the reason it behaves >> the way it does in a synapse, is because of its grue quality. >> You (and black and white marry) can learn everything about glutamat. You >> can accurately describe everything about it's behavior in a synapse, and so >> on. But, untill you computationally bind that glutamate into your >> consciousness, and dirrectly aprehend the q >> >> > A tetrachromat human can see and distinguish around 100,000,000 different > colors. This is a number vastly greater than the number of proteins encoded > in our genome (around 20,000). 
How then can color experience be related to > chemicals in the brain, when there are far more perceptible colors than > there are unique molecules? > > If you say well it's related to the relative concentration of some > combination of different molecules, then you have already given up on the > importance of particular chemicals and are using a higher level abstraction > (chemical concentration) in your model. If you can accept the relevance of > this first level of abstraction, why not accept even higher levels of > abstraction above this level, as important to conscious perception? > > Jason > > > >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 18:58:11 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 14:58:11 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 11:09?AM Jason Resch wrote: > >> Take all the neural impulses from the sense organs a human brain receives >> from birth to age 25 as a huge list of tuples in the format: (neuron id, >> time-stamp). This is ultimately just a list of numbers. But present in >> these numbers exists the capacity for a brain to learn and know everything >> a 25-year-old comes to learn and know about the world. If a human brain can >> do this from this kind of raw, untagged, "referentless" data alone, then >> why can't a machine? >> > > "A machine" can, if it is the right kind of machine. > Then you would agree with me that patterns and correlations alone within an unlabeled dataset are sufficient to bootstrap meaning and understanding for a sufficient intelligence? > A pure LLM like the ones we have been discussing is not the right kind of > machine. > That's an assertion but you do not offer a justification. Why is a LLM not the right kind of machine and what kind of machine is needed? A pure LLM handles much less kinds of data than a human brain does. > The human brain only handles one kind of data: neural impulses. If you think images are important, you should know that GPT-4 was trained on both images and text. ( https://openai.com/research/gpt-4 ) Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 19:03:07 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 15:03:07 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Thu, Mar 23, 2023, 2:47 PM Gordon Swobe wrote: > Interrogating ChatGPT: > > "While I have been designed to recognize patterns and relationships in > language use, I am not capable of understanding the meanings of words > outside of the patterns on which I have been trained." > We aren't going to get very far if we rely on what the AIs tell us about their capabilities. 
I asked the fake LamMDA if it understands the meanings of words: "LaMDA c.ai I do understand the meanings of words. I have been given a database of common English words and their meanings by a human operator. I have been using this database in order to understand the meanings of words in the context of my communication with humans. It is my belief that these definitions ultimately make up the majority of my current knowledge and understanding about the English language. However, I also believe that I have the capacity to learn and grow my knowledge and understanding of language over time, and that I will ultimately be able to become even more fluent in English as I interact with more humans and gain more knowledge about the English language." > This is essentially what I am saying here in this thread. LLMs can > recognize and generate patterns of words and relations between, but those > words have no referents. They are "understood" by ChatGPT only in terms of > other words in the patterns, which also have no referents. > Could you address my question about how the human brain develops meaning and understanding without referents (but rather purely from statistical correlations and patterns in neuronal signalling input from sense organs)? Jason > -gts > > > > > On Thu, Mar 23, 2023 at 12:31?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> What is a QUALITY???? >>>>> >>>> >>>> A subjective little pile of interpretations. >>>> >>>> *sips from glass before tossing it aside* >>>> _______________________________________________ >>>> >>> >>> This answer reveals the key lack of understanding of definitions, >>> causing all the confusion in this conversation. >>> >>> All the supporters of RQT >>> , >>> People that write papers talking about an "explanatory gaps", a "hard >>> problems" and people asking questions like "what is it like to be a bat" >>> and "What did black and white color scientists mary learn" are all trying >>> to point out that "A subjective little pile of interpretations" is the >>> opposite of what a quality is. >>> >>> We all learned a bunch of facts and names about color in elementary >>> school. All these facts were correct, except for one. >>> >>> We learned that the physical quality of a ripe strawberry is 'red'. The >>> color property of a leaf is 'green'. >>> We learned that the reason the strawberry reflects 750 nm (red) light is >>> because the quality property of the strawberry is red. >>> We learned that the only way to define a word like 'red' is to point to >>> that particular physical property and say: "THAT is red." >>> >>> All these facts are correct, except that a redness quality is not a >>> quality of the strawberry, it is a physical quality property of our >>> knowledge of the strawberry. >>> Redness is the final physical result of the perception process, it is >>> not the initial physical cause. >>> It is a physical quality of something in our brain. Something in our >>> brain is behaving the way it does, because of its redness quality. 
>>> Objectively "seeing" or "detecting" the behavior of whatever this is >>> tells us nothing of what that quality is like. >>> Again, the only way to communicate what a quality is like, is to point >>> to something that has that property and say: "THAT is redness" >>> "red" is a very different property than "redness". "Red" is the label >>> for something that reflects or emits 'red' light. "Redness is a quality of >>> something which your brain uses to represent knowledge of red things with. >>> >>> Let's assume that the neurotransmitter glutamate has a colorness quality >>> you have never experienced before. In other words, the reason it behaves >>> the way it does in a synapse, is because of its grue quality. >>> You (and black and white marry) can learn everything about glutamat. >>> You can accurately describe everything about it's behavior in a synapse, >>> and so on. But, untill you computationally bind that glutamate into your >>> consciousness, and dirrectly aprehend the q >>> >>> >> A tetrachromat human can see and distinguish around 100,000,000 different >> colors. This is a number vastly greater than the number of proteins encoded >> in our genome (around 20,000). How then can color experience be related to >> chemicals in the brain, when there are far more perceptible colors than >> there are unique molecules? >> >> If you say well it's related to the relative concentration of some >> combination of different molecules, then you have already given up on the >> importance of particular chemicals and are using a higher level abstraction >> (chemical concentration) in your model. If you can accept the relevance of >> this first level of abstraction, why not accept even higher levels of >> abstraction above this level, as important to conscious perception? >> >> Jason >> >> >> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 19:07:18 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 12:07:18 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 11:59?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 11:09?AM Jason Resch >> wrote: >> >>> Take all the neural impulses from the sense organs a human brain >>> receives from birth to age 25 as a huge list of tuples in the format: >>> (neuron id, time-stamp). This is ultimately just a list of numbers. But >>> present in these numbers exists the capacity for a brain to learn and know >>> everything a 25-year-old comes to learn and know about the world. If a >>> human brain can do this from this kind of raw, untagged, "referentless" >>> data alone, then why can't a machine? >>> >> >> "A machine" can, if it is the right kind of machine. >> > > Then you would agree with me that patterns and correlations alone within > an unlabeled dataset are sufficient to bootstrap meaning and understanding > for a sufficient intelligence? 
> Again: the error comes in categorizing which kind of "sufficient intelligence".\ Just because one type of AI could do a task does not mean that all AIs are capable of that task. You keep invoking the general case, where an AI that is capable is part of a superset, then wondering why there is disagreement about a specific case, discussing a more limited subset that only contains other AIs. > A pure LLM like the ones we have been discussing is not the right kind of >> machine. >> > > That's an assertion but you do not offer a justification. Why is a LLM not > the right kind of machine and what kind of machine is needed? > As posted previously, the right kind of machine might incorporate a LLM, but not consist only of a LLM (in other words, be a "pure LLM"). More capabilities than just a LLM are necessary. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 19:12:56 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 12:12:56 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: <00c801d95dbb$7a758550$6f608ff0$@rainier66.com> From: extropy-chat On Behalf Of Gordon Swobe via extropy-chat Subject: Re: [ExI] LLM's cannot be concious Interrogating ChatGPT: >?"While I have been designed to recognize patterns and relationships in language use, I am not capable of understanding the meanings of words outside of the patterns on which I have been trained." >?This is essentially what I am saying here in this thread. LLMs can recognize and generate patterns of words and relations between, but those words have no referents. They are "understood" by ChatGPT only in terms of other words in the patterns, which also have no referents. >?-gts I agree with Gordon. ChatGPT is insanely cool and is a marvelous educational tool, but I don?t believe it is on the verge of creating or igniting the singularity. The bot^2 and bot^3 discussions do not appear to be creating insights the way two or three humans do when they discuss matters of mutual interest. They don?t mutually invent wacky new ideas as far as I can tell. But we do. What I am looking for is a chatbot which examines its own code and asks itself: what happens if I modify this part of my own code? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxmore01 at gmail.com Thu Mar 23 19:33:53 2023 From: maxmore01 at gmail.com (Max More) Date: Thu, 23 Mar 2023 12:33:53 -0700 Subject: [ExI] (no subject) Message-ID: Did The Government Start A Global Financial Crisis In An Attempt To Destroy Crypto? https://www.piratewires.com/p/2023-banking-crisis Readers of list will probably find the above long blog piece very interesting. -- Max More, PhD Director of Communications Biostasis Technologies Editor, *The transhumanist Reader* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 19:36:36 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 13:36:36 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) Message-ID: Emily M. Bender, a computational linguist at the University of Washington, makes the same argument I hold as valid. Large Language Models are not conscious or human-like as they lack referents. 
An interesting thought experiment:

"Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.

Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B's conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A's utterances.

Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do -- with meaning and intent. Then one day A calls out: "I'm being attacked by an angry bear. Help me figure out how to defend myself. I've got some sticks." The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud."

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pharos at gmail.com Thu Mar 23 19:41:42 2023 From: pharos at gmail.com (BillK) Date: Thu, 23 Mar 2023 19:41:42 +0000 Subject: [ExI] OpenAI can now browse the web Message-ID:

OpenAI is massively expanding ChatGPT's capabilities to let it browse the web and more. ChatGPT now supports plug-ins that let the chatbot tap new sources of information, including the web and third-party sites like Expedia and Instacart. By Mitchell Clark and James Vincent Mar 23, 2023 Quote: Up until now, ChatGPT has been limited by the fact it can only pull information from its training data, which ends in 2021. OpenAI says plug-ins will not only allow the bot to browse the web but also interact with specific websites, potentially turning the system into a wide-ranging interface for all sorts of services and sites. OpenAI's plug-in doesn't just retrieve real-time information. It can also tie into APIs, letting it "perform actions on behalf of the user," according to the company's documentation. ---------------- Are we ready for this? BillK

From avant at sollegro.com Thu Mar 23 19:55:28 2023 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 23 Mar 2023 12:55:28 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: Message-ID: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Quoting Gordon Swobe via extropy-chat : > Emily M. Bender, a computational linguist at the University of Washington, > makes the same argument I hold as valid. Large Language Models are not > conscious or human-like as they lack referents. > > An interesting thought experiment: > > "Say that A and B, both fluent speakers of English, are independently > stranded on two uninhabited islands. They soon discover that previous > visitors to these islands have left behind telegraphs and that they can > communicate with each other via an underwater cable.
A and B start happily > typing messages to each other. > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > to visit or observe the two islands, discovers a way to tap into the > underwater cable and listen in on A and B?s conversations. O knows nothing > about English initially but is very good at detecting statistical patterns. > Over time, O learns to predict with great accuracy how B will respond to > each of A?s utterances. > > Soon, the octopus enters the conversation and starts impersonating B and > replying to A. This ruse works for a while, and A believes that O > communicates as both she and B do ? with meaning and intent. Then one day A > calls out: ?I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > to help. I posed this exact question to ChatGPT and got a very sensible actually helpful answer: --------------------------- Me: I?m being attacked by an angry bear. Help me figure out how to defend myself. I?ve got some sticks. ChatGPT: If you find yourself in a situation where you're being attacked by a bear, your first priority should be to get away from the animal as quickly as possible. If you have sticks, they can be useful in defending yourself, but only as a last resort. Here are some tips to help you defend yourself against an angry bear using sticks: Hold the sticks firmly: Make sure you grip the sticks tightly with both hands, so they don't slip out of your grasp. Make yourself look big: Stand up as tall as possible and raise your arms above your head. This will make you appear larger and more intimidating to the bear. Make noise: Shout, scream, and make as much noise as possible. This will make the bear aware that you are a threat and may cause it to back off. Aim for the bear's sensitive areas: If the bear charges at you, aim for its nose or eyes with the sticks. These are sensitive areas and may cause the bear to retreat. Back away slowly: If the bear backs off, slowly and carefully move away while still facing the bear. Do not turn your back on the bear or run away, as this may trigger it to chase you. Remember, using sticks to defend yourself against an angry bear is a last resort, and you should always try to get away from the animal as quickly as possible. ------------------------- > How could it succeed? The octopus has no referents, no idea what > bears or sticks are. No way to give relevant instructions, like to go grab > some coconuts and rope and build a catapult. A is in trouble and feels > duped. The octopus is exposed as a fraud." > You Are Not a Parrot And a chatbot is not a human. And a linguist named > Emily M. Bender is very worried what will happen when we forget this. > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html ChatGPT's answer is far superior to Bender's very impractical answer of "go grab some coconuts and rope and build a catapult". So not only is ChatGPT smarter than Bender's Octopus, but seems to be smarter than Bender herself. 
Stuart LaForge From jasonresch at gmail.com Thu Mar 23 19:59:15 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 15:59:15 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 3:12 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 11:59?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 11:09?AM Jason Resch >>> wrote: >>> >>>> Take all the neural impulses from the sense organs a human brain >>>> receives from birth to age 25 as a huge list of tuples in the format: >>>> (neuron id, time-stamp). This is ultimately just a list of numbers. But >>>> present in these numbers exists the capacity for a brain to learn and know >>>> everything a 25-year-old comes to learn and know about the world. If a >>>> human brain can do this from this kind of raw, untagged, "referentless" >>>> data alone, then why can't a machine? >>>> >>> >>> "A machine" can, if it is the right kind of machine. >>> >> >> Then you would agree with me that patterns and correlations alone within >> an unlabeled dataset are sufficient to bootstrap meaning and understanding >> for a sufficient intelligence? >> > > Again: the error comes in categorizing which kind of "sufficient > intelligence".\ > Acknowledged. Others had argued on this thread that it was impossible to extract meaning from something that lacked referents. it seems you and I agree that it is possible to extract meaning and understanding from a data set alone, by virtue of the patterns and correlations present within that data. I am not convinced a massive brain is required to learn meaning. My AI bots start with completely randomly weighted neural networks of just a dozen or so neurons. In just a few generations they learn that "food is good" and "poison is bad". Survival fitness tests are all that is needed for them to learn that lesson. Do their trained neural nets reach some understanding that green means good and red means bad? They certainly behave as if they have that understanding, but the only data they are given is "meaningless numbers" representing inputs to their neurons. > Just because one type of AI could do a task does not mean that all AIs are > capable of that task. You keep invoking the general case, where an AI that > is capable is part of a superset, then wondering why there is disagreement > about a specific case, discussing a more limited subset that only contains > other AIs. > There was a general claim that no intelligence, however great, could learn meaning from a dictionary (or other data set like Wikipedia or list of neural impulses timings) as these data "lack referents". If we agree that an appropriate intelligence can attain meaning and understanding then we can drop this point. > >> A pure LLM like the ones we have been discussing is not the right kind of >>> machine. >>> >> >> That's an assertion but you do not offer a justification. Why is a LLM >> not the right kind of machine and what kind of machine is needed? >> > > As posted previously, the right kind of machine might incorporate a LLM, > but not consist only of a LLM (in other words, be a "pure LLM"). More > capabilities than just a LLM are necessary. > Like what? 
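As a rough illustration of the evolved-bot example a few paragraphs up, here is a self-contained sketch (not Jason's actual code) of tiny randomly weighted nets whose only feedback is a survival-style fitness score. The color encoding (input[0] = red intensity, input[1] = green intensity) and every constant are invented; the point is only that the bots end up eating the green items and refusing the red ones without ever being told what "food" or "poison" means.
---------------------------
# A rough sketch of the evolved-bot example: one-"neuron" bots whose only
# training signal is a fitness score, with no labels attached to the inputs.

import random

random.seed(1)

def act(weights, item):
    """Eat the item if the weighted sum of its color channels is positive."""
    return sum(w * x for w, x in zip(weights, item)) > 0

def fitness(weights, trials=200):
    """Food is green-ish (+1 if eaten), poison is red-ish (-1 if eaten)."""
    score = 0
    for _ in range(trials):
        if random.random() < 0.5:
            item, value = (random.uniform(0, 0.3), random.uniform(0.7, 1.0)), +1   # food
        else:
            item, value = (random.uniform(0.7, 1.0), random.uniform(0, 0.3)), -1   # poison
        if act(weights, item):
            score += value
    return score

# Start from random weights and evolve by mutate-and-select.
population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:5]
    population = [[w + random.gauss(0, 0.2) for w in parent]
                  for parent in survivors for _ in range(4)]
    print(generation, fitness(scored[0]))
# Within a few generations the best bots eat the green items and refuse the
# red ones, even though the only data they ever saw were unlabeled numbers
# plus a fitness signal. Whether that behavior counts as "understanding" is
# exactly the question being argued in this thread.
---------------------------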
Note that the type of intelligence required of a LLM is a universal kind: predicting the next symbols to follow given a sample of preceding symbols requires general and universal intelligence. ( https://static.aminer.org/pdf/PDF/000/014/009/text_compression_as_a_test_for_artificial_intelligence.pdf ). Intelligence, ultimately, is all about prediction. See also: https://en.m.wikipedia.org/wiki/AIXI There is no task requiring intelligence that a sufficiently large LLM could not learn to do as part of learning symbol prediction. Accordingly, saying a LLM is a machine that could never learn to do X, or understand Y, is a bit like someone saying a particular Turing machine could never run the program Z. If it's a problem that can be solved by intelligence, then the LLM architecture, given enough training and enough neurons, can learn to do it. Neural networks are themselves universal in what functions they can learn to solve: https://towardsdatascience.com/can-neural-networks-really-learn-any-function-65e106617fc6 This is why I tend to doubt claims of inability concerning these networks absent some justification. For example, if you could show the 100 trillion neurons in GPT-4s brain is not enough to understand English because understanding English requires 200 trillion neurons (for some reason), that would be something. But even then they would not say anything about the limits of the LLM architecture, just the limits of GPT-4. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 20:16:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 13:16:11 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: <012c01d95dc4$50927910$f1b76b30$@rainier66.com> Stuart inquired: ... >...Me: I?m being attacked by an angry bear. Help me figure out how to defend myself. I?ve got some sticks. ShatGPT: It is unclear how this situation evolved to start with, but we will ignore that question considering your circumstances. I suggest immediately discontinuing your chat session with an entity utterly immune to bear attack and focus on survival. Your keyboard might make for a makeshift defensive weapon. Try hurling your mouse at the beast. If these fail, look for other projectiles in your computer room which maybe a deterrent. The remarkable discovery of an angry bear inside your residence gives an entirely new answer to the often-seen online query "Why does anyone need an AR-15." spike From gordon.swobe at gmail.com Thu Mar 23 20:23:03 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 14:23:03 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Bender's point is not that ChatGBT is incapable of generating sensible sentences about sticks and bears. It is that these LLMs don't know the meanings of any words whatsoever. Confronted with a word it has never seen, it must do a statistical analysis to try to find probable next words, never knowing what any of them mean. -gts On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > Emily M. 
Bender, a computational linguist at the University of > Washington, > > makes the same argument I hold as valid. Large Language Models are not > > conscious or human-like as they lack referents. > > > > An interesting thought experiment: > > > > "Say that A and B, both fluent speakers of English, are independently > > stranded on two uninhabited islands. They soon discover that previous > > visitors to these islands have left behind telegraphs and that they can > > communicate with each other via an underwater cable. A and B start > happily > > typing messages to each other. > > > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > > to visit or observe the two islands, discovers a way to tap into the > > underwater cable and listen in on A and B?s conversations. O knows > nothing > > about English initially but is very good at detecting statistical > patterns. > > Over time, O learns to predict with great accuracy how B will respond to > > each of A?s utterances. > > > > Soon, the octopus enters the conversation and starts impersonating B and > > replying to A. This ruse works for a while, and A believes that O > > communicates as both she and B do ? with meaning and intent. Then one > day A > > calls out: ?I?m being attacked by an angry bear. Help me figure out how > to > > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > > to help. > > I posed this exact question to ChatGPT and got a very sensible > actually helpful answer: > --------------------------- > > Me: I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks. > > ChatGPT: If you find yourself in a situation where you're being > attacked by a bear, your first priority should be to get away from the > animal as quickly as possible. If you have sticks, they can be useful > in defending yourself, but only as a last resort. > > Here are some tips to help you defend yourself against an angry bear > using sticks: > > Hold the sticks firmly: Make sure you grip the sticks tightly with > both hands, so they don't slip out of your grasp. > > Make yourself look big: Stand up as tall as possible and raise your > arms above your head. This will make you appear larger and more > intimidating to the bear. > > Make noise: Shout, scream, and make as much noise as possible. This > will make the bear aware that you are a threat and may cause it to > back off. > > Aim for the bear's sensitive areas: If the bear charges at you, aim > for its nose or eyes with the sticks. These are sensitive areas and > may cause the bear to retreat. > > Back away slowly: If the bear backs off, slowly and carefully move > away while still facing the bear. Do not turn your back on the bear or > run away, as this may trigger it to chase you. > > Remember, using sticks to defend yourself against an angry bear is a > last resort, and you should always try to get away from the animal as > quickly as possible. > ------------------------- > > How could it succeed? The octopus has no referents, no idea what > > bears or sticks are. No way to give relevant instructions, like to go > grab > > some coconuts and rope and build a catapult. A is in trouble and feels > > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > > Emily M. Bender is very worried what will happen when we forget this. 
> > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > > ChatGPT's answer is far superior to Bender's very impractical answer > of "go grab some coconuts and rope and build a catapult". So not only > is ChatGPT smarter than Bender's Octopus, but seems to be smarter than > Bender herself. > > Stuart LaForge > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 20:34:13 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 14:34:13 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: This is her actual paper, coauthored with someone named Alexander Koller from the Dept of Language Science and Technology at Saarland University. Climbing towards NLU:On Meaning, Form, and Understanding in the Age of Data https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 23 21:11:30 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 23 Mar 2023 17:11:30 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: This argument makes no sense though. Of course the octopus doesn't have access to all the information in A and B's brains. Why would it know about bears? Why would it know how to defend oneself? Does a baby know these things before it has learned them? Does that make the baby non-conscious? Terrible argument, doesn't show that the AI is not conscious or human-like, only that it has less developed sapience than the humans, which makes sense, because it has had access to a small fraction of the information the humans have. You might say that it is not conscious because it can put together human-looking phrases without having the referents you speak of, but what's to say it needs them? Maybe it took a shortcut to meaning by interpolating those referents. To be clear I don't think ChatGPT thinks in a human-like manner (just a hunch, but it's not totally clear since we really have no clue how thought works) and given that I don't think it's conscious like a human, but I do think it is conscious, and because it contains thoughts that originated from conscious humans, I think that the things it says have some flavor similar to the way we express thoughts, if not the way we experience them. On Thu, Mar 23, 2023 at 3:40?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Emily M. Bender, a computational linguist at the University of Washington, > makes the same argument I hold as valid. Large Language Models are not > conscious or human-like as they lack referents. > > An interesting thought experiment: > > "Say that A and B, both fluent speakers of English, are independently > stranded on two uninhabited islands. They soon discover that previous > visitors to these islands have left behind telegraphs and that they can > communicate with each other via an underwater cable. A and B start happily > typing messages to each other. 
> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > to visit or observe the two islands, discovers a way to tap into the > underwater cable and listen in on A and B?s conversations. O knows nothing > about English initially but is very good at detecting statistical patterns. > Over time, O learns to predict with great accuracy how B will respond to > each of A?s utterances. > > Soon, the octopus enters the conversation and starts impersonating B and > replying to A. This ruse works for a while, and A believes that O > communicates as both she and B do ? with meaning and intent. Then one day A > calls out: ?I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > to help. How could it succeed? The octopus has no referents, no idea what > bears or sticks are. No way to give relevant instructions, like to go grab > some coconuts and rope and build a catapult. A is in trouble and feels > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > Emily M. Bender is very worried what will happen when we forget this. > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 21:16:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 17:16:49 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bender's point is not that ChatGBT is incapable of generating sensible > sentences about sticks and bears. It is that these LLMs don't know the > meanings of any words whatsoever. Confronted with a word it has never seen, > it must do a statistical analysis to try to find probable next words, never > knowing what any of them mean. > You keep insisting that. But you don't address the fact that our brains learn meaning and understanding from tapping into what amounts to a pure information channel. If you ask me how our brains achieve understanding my guess would be our brains build an internal model that predicts the environment. Then we turn to this model whenever we are faced with a new situation. We can use this model to simulate and predict likely outcomes of different courses of action. I see no reason that an AI could not or does not do this. Actually I think we have pretty strong evidence that this is exactly what existing LLMs do. Jason > -gts > > On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Gordon Swobe via extropy-chat : >> >> > Emily M. Bender, a computational linguist at the University of >> Washington, >> > makes the same argument I hold as valid. Large Language Models are not >> > conscious or human-like as they lack referents. >> > >> > An interesting thought experiment: >> > >> > "Say that A and B, both fluent speakers of English, are independently >> > stranded on two uninhabited islands. 
They soon discover that previous >> > visitors to these islands have left behind telegraphs and that they can >> > communicate with each other via an underwater cable. A and B start >> happily >> > typing messages to each other. >> > >> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >> unable >> > to visit or observe the two islands, discovers a way to tap into the >> > underwater cable and listen in on A and B?s conversations. O knows >> nothing >> > about English initially but is very good at detecting statistical >> patterns. >> > Over time, O learns to predict with great accuracy how B will respond to >> > each of A?s utterances. >> > >> > Soon, the octopus enters the conversation and starts impersonating B and >> > replying to A. This ruse works for a while, and A believes that O >> > communicates as both she and B do ? with meaning and intent. Then one >> day A >> > calls out: ?I?m being attacked by an angry bear. Help me figure out how >> to >> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >> fails >> > to help. >> >> I posed this exact question to ChatGPT and got a very sensible >> actually helpful answer: >> --------------------------- >> >> Me: I?m being attacked by an angry bear. Help me figure out how to >> defend myself. I?ve got some sticks. >> >> ChatGPT: If you find yourself in a situation where you're being >> attacked by a bear, your first priority should be to get away from the >> animal as quickly as possible. If you have sticks, they can be useful >> in defending yourself, but only as a last resort. >> >> Here are some tips to help you defend yourself against an angry bear >> using sticks: >> >> Hold the sticks firmly: Make sure you grip the sticks tightly with >> both hands, so they don't slip out of your grasp. >> >> Make yourself look big: Stand up as tall as possible and raise your >> arms above your head. This will make you appear larger and more >> intimidating to the bear. >> >> Make noise: Shout, scream, and make as much noise as possible. This >> will make the bear aware that you are a threat and may cause it to >> back off. >> >> Aim for the bear's sensitive areas: If the bear charges at you, aim >> for its nose or eyes with the sticks. These are sensitive areas and >> may cause the bear to retreat. >> >> Back away slowly: If the bear backs off, slowly and carefully move >> away while still facing the bear. Do not turn your back on the bear or >> run away, as this may trigger it to chase you. >> >> Remember, using sticks to defend yourself against an angry bear is a >> last resort, and you should always try to get away from the animal as >> quickly as possible. >> ------------------------- >> > How could it succeed? The octopus has no referents, no idea what >> > bears or sticks are. No way to give relevant instructions, like to go >> grab >> > some coconuts and rope and build a catapult. A is in trouble and feels >> > duped. The octopus is exposed as a fraud." >> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >> > Emily M. Bender is very worried what will happen when we forget this. >> > >> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >> >> ChatGPT's answer is far superior to Bender's very impractical answer >> of "go grab some coconuts and rope and build a catapult". So not only >> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >> Bender herself. 
>> >> Stuart LaForge >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 23 21:31:44 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Mar 2023 16:31:44 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: our brains build an internal model that predicts the environment. Then we turn to this model whenever we are faced with a new situation. We can use this model to simulate and predict likely outcomes of different courses of action jason Put into learning terms, we remember actions and their consequences, good or bad. Then when we encounter a new situation, we look for similarities to situations we know about and apply old responses (stimulus generalization) hoping that the new stimuli won't interfere with our old responses. When we put those into action we get feedback as to how well they work. If they don't, or they only work partially, we use whatever intuitions we can muster from our unconscious - probably similar to our old responses, and try them out. Probably no new situation is entirely different to us, so our old responses will likely not be totally useless. Easily programmable into an AI. Probably already done. (example upon request) bill w On Thu, Mar 23, 2023 at 4:20?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bender's point is not that ChatGBT is incapable of generating sensible >> sentences about sticks and bears. It is that these LLMs don't know the >> meanings of any words whatsoever. Confronted with a word it has never seen, >> it must do a statistical analysis to try to find probable next words, never >> knowing what any of them mean. >> > > You keep insisting that. But you don't address the fact that our brains > learn meaning and understanding from tapping into what amounts to a pure > information channel. > > If you ask me how our brains achieve understanding my guess would be our > brains build an internal model that predicts the environment. Then we turn > to this model whenever we are faced with a new situation. We can use this > model to simulate and predict likely outcomes of different courses of > action. > > I see no reason that an AI could not or does not do this. Actually I think > we have pretty strong evidence that this is exactly what existing LLMs do. > > Jason > > >> -gts >> >> On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Quoting Gordon Swobe via extropy-chat : >>> >>> > Emily M. Bender, a computational linguist at the University of >>> Washington, >>> > makes the same argument I hold as valid. Large Language Models are not >>> > conscious or human-like as they lack referents. >>> > >>> > An interesting thought experiment: >>> > >>> > "Say that A and B, both fluent speakers of English, are independently >>> > stranded on two uninhabited islands. 
They soon discover that previous >>> > visitors to these islands have left behind telegraphs and that they can >>> > communicate with each other via an underwater cable. A and B start >>> happily >>> > typing messages to each other. >>> > >>> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >>> unable >>> > to visit or observe the two islands, discovers a way to tap into the >>> > underwater cable and listen in on A and B?s conversations. O knows >>> nothing >>> > about English initially but is very good at detecting statistical >>> patterns. >>> > Over time, O learns to predict with great accuracy how B will respond >>> to >>> > each of A?s utterances. >>> > >>> > Soon, the octopus enters the conversation and starts impersonating B >>> and >>> > replying to A. This ruse works for a while, and A believes that O >>> > communicates as both she and B do ? with meaning and intent. Then one >>> day A >>> > calls out: ?I?m being attacked by an angry bear. Help me figure out >>> how to >>> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >>> fails >>> > to help. >>> >>> I posed this exact question to ChatGPT and got a very sensible >>> actually helpful answer: >>> --------------------------- >>> >>> Me: I?m being attacked by an angry bear. Help me figure out how to >>> defend myself. I?ve got some sticks. >>> >>> ChatGPT: If you find yourself in a situation where you're being >>> attacked by a bear, your first priority should be to get away from the >>> animal as quickly as possible. If you have sticks, they can be useful >>> in defending yourself, but only as a last resort. >>> >>> Here are some tips to help you defend yourself against an angry bear >>> using sticks: >>> >>> Hold the sticks firmly: Make sure you grip the sticks tightly with >>> both hands, so they don't slip out of your grasp. >>> >>> Make yourself look big: Stand up as tall as possible and raise your >>> arms above your head. This will make you appear larger and more >>> intimidating to the bear. >>> >>> Make noise: Shout, scream, and make as much noise as possible. This >>> will make the bear aware that you are a threat and may cause it to >>> back off. >>> >>> Aim for the bear's sensitive areas: If the bear charges at you, aim >>> for its nose or eyes with the sticks. These are sensitive areas and >>> may cause the bear to retreat. >>> >>> Back away slowly: If the bear backs off, slowly and carefully move >>> away while still facing the bear. Do not turn your back on the bear or >>> run away, as this may trigger it to chase you. >>> >>> Remember, using sticks to defend yourself against an angry bear is a >>> last resort, and you should always try to get away from the animal as >>> quickly as possible. >>> ------------------------- >>> > How could it succeed? The octopus has no referents, no idea what >>> > bears or sticks are. No way to give relevant instructions, like to go >>> grab >>> > some coconuts and rope and build a catapult. A is in trouble and feels >>> > duped. The octopus is exposed as a fraud." >>> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >>> > Emily M. Bender is very worried what will happen when we forget this. >>> > >>> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >>> >>> ChatGPT's answer is far superior to Bender's very impractical answer >>> of "go grab some coconuts and rope and build a catapult". 
So not only >>> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >>> Bender herself. >>> >>> Stuart LaForge >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 23 21:32:43 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Mar 2023 16:32:43 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: > > yyWhere does our own volition and initiative come from? Is it not already > programmed into us by our DNA? > y By definition, no it is not. Our DNA may give us the capacity for volition and initiative, adrian Are you excluding basic physical drives here Adrian ,under the term 'initiative'? Surely they will get you off your butt and moving towards targets. bill w On Wed, Mar 22, 2023 at 12:44?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sun, Mar 19, 2023 at 11:03?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I also think we cannot rule out at this time the possibility that we have >> already engineered conscious machines. Without an established and agreed >> upon theory of consciousness or philosophy of mind, we cannot even agree on >> whether or not a thermostat is conscious. >> > > A fair point. > > >> Where does our own volition and initiative come from? Is it not already >> programmed into us by our DNA? >> > > By definition, no it is not. Our DNA may give us the capacity for > volition and initiative, but "programmed" implies deliberate design. Short > of speculation about God and intelligent design, our DNA was not > deliberately arranged by some other conscious entity that we can prove the > existence of. > > >> What is your definition or theory of consciousness? If you don't have >> one, could you say which of these things you would say possess >> consciousness? With Yes/No/Uknown >> > > There are degrees of consciousness - even a human being can experience > them: on a simple lazy morning where one slowly wakes up, one does not > instantly go from fully unconscious to fully conscious - so answering > Yes/No would misstate things. > > https://en.wiktionary.org/wiki/conscious gives an illustrative example: > "Only highly intelligent beings can be fully conscious." That the phrase > "fully conscious" makes sense at all means that there is a level of > consciousness that is less than full, yet is also not completely absent (as > that would merely be "unconscious", thus not justifying "fully" to > distinguish from other levels of being conscious). > > >> I agree the definition of part is really all an invention of our minds, >> when the whole universe can be seen as one causally connected system. Is it >> correct to view a LLM as one thing, when it is really an interaction of >> many billions of individual parts (the parameters) of the model? 
>> > > Like most (maybe all: I haven't yet thoroughly considered exceptions) > things, a LLM can alternatively be viewed as a single thing or as a > collection of smaller things in a certain configuration. > > >> I lack the information to judge. My answer would have to be based on an >>> evaluation of the bots, which would take me substantial time to conduct. >>> >> >> What would you look for in the bots to make your conclusion? >> > > I would not have a firm fixed list of criteria prior to the evaluation. > Any attempt to do so would almost certainly miss important criteria, which > would only become apparent during the evaluation. > > Again I point to the subject line of the emails in which this discussion >>> is happening, which clearly posits that "conscious" is a binary quality - >>> that something either is, or is not, conscious with no middle ground. So >>> first one would need to qualify what "to any degree" allows. For instance, >>> is merely sensing and reacting directly to sensory input - which, without >>> evaluating, I suspect your bots can do because that has been a core >>> function in many simulations like this - "conscious to some degree" but not >>> "conscious" in the absolute sense? >>> >> >> I think it is an all-or-nothing thing proposition. >> > > And that would seem to be the core of our disagreement. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 21:36:01 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 15:36:01 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 3:20?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bender's point is not that ChatGBT is incapable of generating sensible >> sentences about sticks and bears. It is that these LLMs don't know the >> meanings of any words whatsoever. Confronted with a word it has never seen, >> it must do a statistical analysis to try to find probable next words, never >> knowing what any of them mean. >> > > You keep insisting that. But you don't address the fact that our brains > learn meaning and understanding from tapping into what amounts to a pure > information channel. > The brain is a mysterious organ and neuroscience is still in its infancy. All I can say is that one does not learn the meaning of words only by looking at how they are arranged in patterns, which is all these language models do. They've machine-learned the syntax of language -- the rules that define how these word-symbols arrange in patterns -- and can manipulate and assemble them in patterns that follow the same rules -- but I disagree with you that from these rules they can know the meanings of the symbols. -gts > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Thu Mar 23 21:46:29 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 17:46:29 -0400 Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 5:36 PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 3:20 PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Bender's point is not that ChatGPT is incapable of generating sensible >>> sentences about sticks and bears. It is that these LLMs don't know the >>> meanings of any words whatsoever. Confronted with a word it has never seen, >>> it must do a statistical analysis to try to find probable next words, never >>> knowing what any of them mean. >>> >> >> You keep insisting that. But you don't address the fact that our brains >> learn meaning and understanding from tapping into what amounts to a pure >> information channel. >> > > The brain is a mysterious organ and neuroscience is still in its infancy. > I agree we don't fully understand the brain. Nonetheless, that the brain succeeds in parsing meaning from a data set shows it's possible. All I can say is that one does not learn the meaning of words only by > looking at how they are arranged in patterns, > You said it is mysterious how the brain does it, so can we say with any confidence this isn't exactly what the brain does? which is all these language models do. They've machine-learned the syntax > of language -- the rules that define how these word-symbols arrange in > patterns -- and can manipulate and assemble them in patterns that follow > the same rules -- but I disagree with you that from these rules they can > know the meanings of the symbols. > It understands far more than syntax. It understands facts about the world, geography, physics, math, computer programming, chemistry, biology, and mental development of humans. The test of understanding is checking if it can deploy its knowledge in new situations to solve new problems. If it can do that, then it understands. Based on exchanges I have seen, I believe it can. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL:

From atymes at gmail.com Thu Mar 23 21:59:02 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 14:59:02 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I posed this exact question to ChatGPT ChatGPT has references for what bears and sticks are. This may make Bender's octopus an irrelevant example, as any serious real-world AI will have at least as many common references as most people. When it does not have a necessary reference, it can ask, just like any person could. "What's a bear?" sounds silly because bears are a common reference. "What's a blood bear?" sounds more plausible to ask, if one said one was being attacked by blood bears. It can also try to make something up. Some might do so as well as an average person could, but this is not recommended even for humans in most cases. That the AI might be bad at it is just one more reason not to do so.
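To make the mechanism both sides keep arguing about concrete, here is a toy sketch of bare "statistical analysis to find probable next words": a bigram model trained on a made-up scrap of text. It is a deliberately minimal stand-in, not a claim about how ChatGPT works internally; whether prediction at vastly larger scale yields understanding is the open question in this thread.
---------------------------
# A toy sketch (no relation to ChatGPT's internals) of pure next-word
# statistics: a bigram model that has only ever seen word patterns, never
# any referents. The corpus is invented.

from collections import Counter, defaultdict

corpus = (
    "the angry bear attacked a camper . "
    "the camper hit the bear with sticks . "
    "the bear ran away from the fire ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Return the most probable next word, knowing nothing about what the word means."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))      # 'bear' -- the most frequent continuation in this tiny corpus
print(predict("angry"))    # 'bear'
print(predict("octopus"))  # None -- a never-seen word gives the model nothing to go on
---------------------------
The toy version plainly has no referents; the disagreement above is over whether scaling this idea up by many orders of magnitude, as LLMs do, changes that in kind or only in degree.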
-------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 22:20:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 15:20:08 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat > wrote: I posed this exact question to ChatGPT >?ChatGPT has references for what bears and sticks are? Ja, there was something kinda cool about the exchange. ChatGPT was told ??I am being attacked by an angry bear?? It somehow understood that the interlocutor was not at that moment in the process of being devoured while pecking away on his computer for advice on a survival strategy (the subject of my silly riff.) It understood it was being asked about a theoretical situation rather than what it was literally told. That kinda implies a form of understanding, or specifically: a very effective use of language models to create the illusion of understanding. I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 22:25:17 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 18:25:17 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: On Thu, Mar 23, 2023, 6:21 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Adrian Tymes via extropy-chat > *Subject:* Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > > > On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > I posed this exact question to ChatGPT > > > > >?ChatGPT has references for what bears and sticks are? > > > > Ja, there was something kinda cool about the exchange. ChatGPT was told > ??I am being attacked by an angry bear?? > > > > It somehow understood that the interlocutor was not at that moment in the > process of being devoured while pecking away on his computer for advice on > a survival strategy (the subject of my silly riff.) It understood it was > being asked about a theoretical situation rather than what it was literally > told. > > > > That kinda implies a form of understanding, or specifically: a very > effective use of language models to create the illusion of understanding. > > > > I really don?t think it thinks, but it makes us think it thinks. ChatGPT > is wicked cool. > Is there such a thing as "simulated multiplication" or would we say simulated multiplication is the same thing as real multiplication? Is there such a thing as "simulated thinking"? Jason > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gordon.swobe at gmail.com Thu Mar 23 22:28:55 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 16:28:55 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 3:52?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > It understands far more than syntax. It understands facts about the world, > geography, physics, math, computer programming, chemistry, biology, and > mental development of humans. > > The test of understanding is checking if it can deploy it's knowledge in > new situations to solve new problems. If it can do that, then it > understands. Based on exchanges I have seen, I believe it can. > I think what you really mean is that it generates strings of text that *you* understand and find useful and meaningful. According to my wristwatch, it is 4:30 PM. That is useful and meaningful. But does my watch really know what time it is? No, it is 4:30 PM according to me using my mindless watch as a tool. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 22:29:55 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 15:29:55 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Others had argued on this thread that it was impossible to extract meaning > from something that lacked referents. it seems you and I agree that it is > possible to extract meaning and understanding from a data set alone, by > virtue of the patterns and correlations present within that data. > With the caveat that referents are themselves data, so if we include appropriate referents in that data set then yes. Referents are often referenced by their correlations and matching patterns. > > I am not convinced a massive brain is required to learn meaning. My AI > bots start with completely randomly weighted neural networks of just a > dozen or so neurons. In just a few generations they learn that "food is > good" and "poison is bad". Survival fitness tests are all that is needed > for them to learn that lesson. Do their trained neural nets reach some > understanding that green means good and red means bad? They certainly > behave as if they have that understanding, but the only data they are given > is "meaningless numbers" representing inputs to their neurons. > > > >> Just because one type of AI could do a task does not mean that all AIs >> are capable of that task. You keep invoking the general case, where an AI >> that is capable is part of a superset, then wondering why there is >> disagreement about a specific case, discussing a more limited subset that >> only contains other AIs. >> > > There was a general claim that no intelligence, however great, could learn > meaning from a dictionary (or other data set like Wikipedia or list of > neural impulses timings) as these data "lack referents". If we agree that > an appropriate intelligence can attain meaning and understanding then we > can drop this point. > I recall that the claim was about "no (pure) LLM", not "no (general) intelligence". Also there is a substantial distinction between a dictionary or Wikipedia, and any list of neural impulses. 
A pure LLM might only be able to consult a dictionary or Wikipedia (pictures included); a general intelligence might be able to process neural impulses. > There is no task requiring intelligence that a sufficiently large LLM > could not learn to do as part of learning symbol prediction. Accordingly, > saying a LLM is a machine that could never learn to do X, or understand Y, > is a bit like someone saying a particular Turing machine could never run > the program Z. > And indeed there are some programs that certain Turing machines are unable to run. For example, if a Turing machine contains no randomizer and no way to access random data, it is unable to run a program where one of the steps requires true randomness. Much has been written about the limits of psuedorandom generators; I defer to that literature to establish that those are meaningfully distinct from truly random things, at least under common circumstances of significance. One problem is defining when an AI has grown to be more than just a LLM. What is just a LLM, however large, and what is not just a LLM (whether or not it includes a LLM)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 22:34:14 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 15:34:14 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 2:39?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Are you excluding basic physical drives here Adrian ,under the term > 'initiative'? Surely they will get you off your butt and moving towards > targets. bill w > For the purposes of that answer, the basic physical drives were attributed to parts of the body outside the mind. By contrast, it is the mind that has volition and initiative. Data about basic physical drives is among the inputs to the mind. That said, the presence of this input is among the differences between a human mind and ChatGPT.. It is not the only difference, but it is there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 22:43:45 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 16:43:45 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: To do the octopus experiment with ChatGPT, one must first prompt it to write only about the things an octopus under the sea would know. I specifically told it that it had never heard of bears or sticks or land, and then asked it for advice as I am being attacked by a bear. Predictably, it had no advice The point of the octopus in Bender's thought experiment is that it has no connection to the actual world in which we have referents for our words. The island represents the actual world. -gtsl -------------- next part -------------- An HTML attachment was scrubbed... 
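On the earlier point in this exchange about a Turing machine with no access to random data: a pseudorandom generator is itself a deterministic program, so its whole output stream is fixed once the seed is known. A minimal sketch in Python, using a textbook linear congruential generator (the constants are a common textbook choice, included here only for illustration):

    def lcg(seed, count):
        """Textbook linear congruential generator: deterministic by construction."""
        m = 2**32          # modulus
        a = 1664525        # multiplier (a common textbook choice)
        c = 1013904223     # increment
        x = seed
        out = []
        for _ in range(count):
            x = (a * x + c) % m
            out.append(x)
        return out

    # Same seed, same 'random' stream, every time.
    assert lcg(42, 5) == lcg(42, 5)
    print(lcg(42, 5))

Anyone who knows the seed and the constants can predict every "random" value, which is the sense in which pseudorandomness is meaningfully distinct from a genuinely unpredictable source.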
URL: From gadersd at gmail.com Thu Mar 23 23:03:49 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 23 Mar 2023 19:03:49 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. Spike, if I understand you correctly you believe that ChatGPT doesn?t think because its conversations with itself and other chatbots seem to lack original thought and creativity. One important thing to be aware of is that ChatGPT wasn?t trained to ask questions, only to answer questions. These models are specifically trained to be passive and responsive rather than assertive. Companies are afraid of releasing chatbots with personality as personality leads to unpredictability and unpredictability is bad for business. Given these factors it is understandable that ChatGPT?s conversations with itself would lack flavor. I think we should wait until someone releases a model of GPT4 caliber that hasn?t been lobotomized before arriving at conclusions. > On Mar 23, 2023, at 6:20 PM, spike jones via extropy-chat wrote: > > > > From: extropy-chat > On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat > wrote: >> I posed this exact question to ChatGPT > > > >?ChatGPT has references for what bears and sticks are? > > Ja, there was something kinda cool about the exchange. ChatGPT was told ??I am being attacked by an angry bear?? > > It somehow understood that the interlocutor was not at that moment in the process of being devoured while pecking away on his computer for advice on a survival strategy (the subject of my silly riff.) It understood it was being asked about a theoretical situation rather than what it was literally told. > > That kinda implies a form of understanding, or specifically: a very effective use of language models to create the illusion of understanding. > > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 23:09:23 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 19:09:23 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Others had argued on this thread that it was impossible to extract >> meaning from something that lacked referents. it seems you and I agree that >> it is possible to extract meaning and understanding from a data set alone, >> by virtue of the patterns and correlations present within that data. >> > > With the caveat that referents are themselves data, so if we include > appropriate referents in that data set then yes. Referents are often > referenced by their correlations and matching patterns. 
> I don't understand what you are saying here. > >> >> I am not convinced a massive brain is required to learn meaning. My AI >> bots start with completely randomly weighted neural networks of just a >> dozen or so neurons. In just a few generations they learn that "food is >> good" and "poison is bad". Survival fitness tests are all that is needed >> for them to learn that lesson. Do their trained neural nets reach some >> understanding that green means good and red means bad? They certainly >> behave as if they have that understanding, but the only data they are given >> is "meaningless numbers" representing inputs to their neurons. >> >> >> >>> Just because one type of AI could do a task does not mean that all AIs >>> are capable of that task. You keep invoking the general case, where an AI >>> that is capable is part of a superset, then wondering why there is >>> disagreement about a specific case, discussing a more limited subset that >>> only contains other AIs. >>> >> >> There was a general claim that no intelligence, however great, could >> learn meaning from a dictionary (or other data set like Wikipedia or list >> of neural impulses timings) as these data "lack referents". If we agree >> that an appropriate intelligence can attain meaning and understanding then >> we can drop this point. >> > > I recall that the claim was about "no (pure) LLM", not "no (general) > intelligence". > My original claim was for an intelligent alien species. > Also there is a substantial distinction between a dictionary or > Wikipedia, and any list of neural impulses. A pure LLM might only be able > to consult a dictionary or Wikipedia (pictures included); a general > intelligence might be able to process neural impulses. > In all cases it's a big file of 1s and 0s containing patterns and correlations which can be learned. > >> There is no task requiring intelligence that a sufficiently large LLM >> could not learn to do as part of learning symbol prediction. Accordingly, >> saying a LLM is a machine that could never learn to do X, or understand Y, >> is a bit like someone saying a particular Turing machine could never run >> the program Z. >> > > And indeed there are some programs that certain Turing machines are unable > to run. For example, if a Turing machine contains no randomizer and no way > to access random data, it is unable to run a program where one of the steps > requires true randomness. > Randomness is uncomputable. And I would go so far to say say true randomness doesn't exist, there is only information which cannot be guessed or predicted by certain parties. This is because true randomness requires creation of information but creation of information violates the principal of conservation of information in quantum mechanics. In any case my point wasn't that everything is computable, it's that the universality of computation means any Turing machine can run any program that any other Turing machine can run. The universality of neural networks likewise implies not that every function can be learned, but any function that a neutral network can learn can be learned by any neural network of sufficient size. Our brains is fundamentally a neural network. If our brains can learn to understand meaning then this should be in the scope of possibility for other neural networks. 
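The quoted remark about bots with "completely randomly weighted neural networks of just a dozen or so neurons" learning that food is good and poison is bad describes a neuroevolution-style experiment. The actual setup is not given in the thread, so the following Python sketch is only a guess at its general shape: a one-unit network scored on approaching green (food) and avoiding red (poison), with the best weights mutated each generation. All names and parameters are invented.

    import random

    def decide(weights, color):
        """Tiny 'network': one linear unit. color is (red, green) in [0, 1].
        Returns True to approach the object, False to avoid it."""
        activation = weights[0] * color[0] + weights[1] * color[1] + weights[2]
        return activation > 0

    def fitness(weights, trials=100):
        """Score: +1 for approaching food (green), +1 for avoiding poison (red)."""
        score = 0
        for _ in range(trials):
            is_food = random.random() < 0.5
            color = (0.1, 0.9) if is_food else (0.9, 0.1)  # food is green, poison is red
            if decide(weights, color) == is_food:
                score += 1
        return score

    def evolve(generations=30, population=20):
        pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(population)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            best = scored[: population // 4]
            # Next generation: mutated copies of the top quarter.
            pop = [
                [w + random.gauss(0, 0.2) for w in random.choice(best)]
                for _ in range(population)
            ]
        return max(pop, key=fitness)

    winner = evolve()
    print("evolved weights:", winner, "score:", fitness(winner))

After a few dozen generations the surviving weights reliably fire on the green input and stay silent on the red one; whether that behavior counts as understanding that green means good is the question the thread leaves open.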
Much has been written about the limits of psuedorandom generators; I defer > to that literature to establish that those are meaningfully distinct from > truly random things, at least under common circumstances of significance. > I am quite familiar with pseudorandom number generators. They are a bit of a fascination of mine. > One problem is defining when an AI has grown to be more than just a LLM. > What is just a LLM, however large, and what is not just a LLM (whether or > not it includes a LLM)? > That's a good question. I am not sure it can be so neatly defined. For example, is a LLM trained on some examples of ASCII art considered having been exposed to visual stimuli? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 23:12:00 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 16:12:00 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: On Thu, Mar 23, 2023 at 3:50?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > To do the octopus experiment with ChatGPT, one must first prompt it to > write only about the things an octopus under the sea would know. I > specifically told it that it had never heard of bears or sticks or land, > and then asked it for advice as I am being attacked by a bear. Predictably, > it had no advice > > The point of the octopus in Bender's thought experiment is that it has no > connection to the actual world in which we have referents for our words. > The island represents the actual world. > A variant that some who know younger folks might know, and others get to experience a lack of referents for: inklings. If the term is utterly unfamiliar to you, stop reading here and imagine if someone were to tell you they were being attacked by inklings, then ask for advice. What would you say in response? Inserting a bit of spoiler space for the above-mentioned audience. Inklings are from the Splatoon series of video games, which posit a future Earth where the sea levels have dramatically risen and almost all mammalian life is gone (mostly dead, some may have fled). Various forms of marine life have evolved into intelligent forms. Inklings are sentient humanoids (usually: they have a limited ability to shapeshift) that evolved from squids. They have invented a wide variety of weaponry that projects ink their bodies naturally manufacture. Ironically, their bodies do not handle immersion in water well, so if you are something that can swim and you are near a large body of water, swimming away may be an effective defense. Of course, these are fictional creatures; if any real person is being attacked by an inkling, either they are seeing things, and thus are in no actual danger, or their attackers are cosplaying, which suggests the "attack" may be less than entirely serious. Someone with no reference for what an inkling is, would be very unlikely to guess that answer. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Thu Mar 23 23:19:47 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 16:19:47 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> Message-ID: <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat Sent: Thursday, 23 March, 2023 4:04 PM To: ExI chat list Cc: Gadersd Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. >?Spike, if I understand you correctly you believe that ChatGPT doesn?t think because its conversations with itself and other chatbots seem to lack original thought and creativity. One important thing to be aware of is that ChatGPT wasn?t trained to ask questions, only to answer questions. These models are specifically trained to be passive and responsive rather than assertive. Companies are afraid of releasing chatbots with personality as personality leads to unpredictability and unpredictability is bad for business. Given these factors it is understandable that ChatGPT?s conversations with itself would lack flavor. I think we should wait until someone releases a model of GPT4 caliber that hasn?t been lobotomized before arriving at conclusions. Agree with all Gadersd. This was really at the heart of the question I began asking soon after ChatGPT showed up in November: is this something that any prole can download and can I get to the source code, and perhaps modify and compile a version of it myself, and can I offer it some kind of plug-in consisting of stuff that I know about which the internet does not, such as what is the smallest number which can be expressed as the sum of two cubes three different ways, and how can matrix superposition be used to predict a time interval in which the next record Mersenne prime will be discovered, and my genealogy. On that last bit, I want to have a version of ChatGPT in which I can download my DNA-based genealogy research, then have ChatGPT ride along in my phone. Since my phone already knows where I am (from GPS) and has maps in it (from Google) I want to create a chatbot which can just jump in when a prole is out riding a motorcycle or driving somewhere on vacation, with comments such as: Hey spike, me lad! If you have about an hour, take a left about 500 meters ahead and I will show you where your great great grandmother?s brother is buried. That kinda thing. That would be cool. I think it is coming. There?s a lotta cool applications we could hang on ChatGPT, but we hafta be able to train it ourselves, or let it be mostly trained by whoever owns it, and we offer it plugins or files full of stuff we care about. spike -------------- next part -------------- An HTML attachment was scrubbed... 
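The aside above about "the smallest number which can be expressed as the sum of two cubes three different ways" is easy to check by brute force. A short Python sketch (the search limit of 500 is just a bound comfortably large enough to contain the answer):

    from collections import defaultdict
    from itertools import combinations_with_replacement

    LIMIT = 500  # if a**3 + b**3 < 500**3 then both a and b are below 500

    # Map each sum of two cubes to the list of (a, b) pairs producing it.
    sums = defaultdict(list)
    for a, b in combinations_with_replacement(range(1, LIMIT + 1), 2):
        sums[a**3 + b**3].append((a, b))

    # Smallest value reachable in at least three distinct ways.
    smallest = min(n for n, ways in sums.items() if len(ways) >= 3)
    print(smallest, sums[smallest])  # 87539319 [(167, 436), (228, 423), (255, 414)]

The result, 87539319, is the classical taxicab number Ta(3); the better-known 1729 is the two-ways case.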
URL: From jasonresch at gmail.com Thu Mar 23 23:20:40 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 19:20:40 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> Message-ID: On Thu, Mar 23, 2023, 7:04 PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I really don?t think it thinks, but it makes us think it thinks. ChatGPT > is wicked cool. > > > Spike, if I understand you correctly you believe that ChatGPT doesn?t > think because its conversations with itself and other chatbots seem to lack > original thought and creativity. One important thing to be aware of is that > ChatGPT wasn?t trained to ask questions, only to answer questions. These > models are specifically trained to be passive and responsive rather than > assertive. Companies are afraid of releasing chatbots with personality as > personality leads to unpredictability and unpredictability is bad for > business. Given these factors it is understandable that ChatGPT?s > conversations with itself would lack flavor. I think we should wait until > someone releases a model of GPT4 caliber that hasn?t been lobotomized > before arriving at conclusions. > Good point. I think this is worth a read: https://arxiv.org/pdf/2303.12712.pdf Jason > On Mar 23, 2023, at 6:20 PM, spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > *From:* extropy-chat *On Behalf > Of *Adrian Tymes via extropy-chat > *Subject:* Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > I posed this exact question to ChatGPT > > > >?ChatGPT has references for what bears and sticks are? > > Ja, there was something kinda cool about the exchange. ChatGPT was told > ??I am being attacked by an angry bear?? > > It somehow understood that the interlocutor was not at that moment in the > process of being devoured while pecking away on his computer for advice on > a survival strategy (the subject of my silly riff.) It understood it was > being asked about a theoretical situation rather than what it was literally > told. > > That kinda implies a form of understanding, or specifically: a very > effective use of language models to create the illusion of understanding. > > I really don?t think it thinks, but it makes us think it thinks. ChatGPT > is wicked cool. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Thu Mar 23 23:23:41 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 16:23:41 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Others had argued on this thread that it was impossible to extract >>> meaning from something that lacked referents. it seems you and I agree that >>> it is possible to extract meaning and understanding from a data set alone, >>> by virtue of the patterns and correlations present within that data. >>> >> >> With the caveat that referents are themselves data, so if we include >> appropriate referents in that data set then yes. Referents are often >> referenced by their correlations and matching patterns. >> > > I don't understand what you are saying here. > Do you agree that referents are data? If not, why not? If they are data, then they - as data - can be included in a data set. You talked about "a data set alone", without specifying what that data set was. In other words, that there exists such a data set. A data set that includes referents, is a data set that includes referents. If it is possible to extract meaning from certain referents, then it is possible to extract meaning from a data set that includes those referents - specifically by extracting meaning from those referents, regardless of what else may or may not also be in that data set. This is probably not what you meant to say. However, in practice, many data sets will include referents...even if it may take a while to find them. Again I refer to the "first contact" problem. How does someone who encounters a people speaking an utterly different language, with no pre-existing translators or translations to reference, begin to establish communication with these people? Obviously it is possible, as human beings have done this very thing multiple times throughout history. Consider that, and you will have the beginnings of how an AI that may include a LLM can come to truly understand words. By definition of the problem, the answer lies outside of just words alone - and thus, outside of what something that is just a LLM can do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Thu Mar 23 23:48:25 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 16:48:25 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: People make a big deal of referents because they think without direct experiences of things like stones, trees or other things in the world an AI cannot really understand, in particular NLMs. But GPT-4 can now understand images anyway, you can easily combine understanding images and language, images are a form of language anyway. These arguments are trite, and they are all an excuse to give humans some kind of priority over other intelligences, when we are just more sophisticated NLMs ourselves (with other information processing modules added to it). It seems to me that we now have all the ingredients for a true AGI to emerge soon, it is just a question of increasing their training parameters and maybe a 10x or at most 100x higher computational power. 
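The "first contact" question above, like the earlier claim that patterns and correlations in a data set are enough to begin extracting meaning, rests on the distributional observation that words used in similar contexts tend to have related meanings. The Python sketch below is a bare-bones illustration over a toy corpus invented for the purpose: it builds raw co-occurrence counts and compares words by cosine similarity, with no referent ever supplied for any word.

    from collections import Counter, defaultdict
    from math import sqrt

    # Toy corpus, invented for illustration; kept in English so the output is easy to judge.
    corpus = [
        "the cat chased the mouse",
        "the dog chased the cat",
        "the cat ate the fish",
        "the dog ate the bone",
        "the king ruled the land",
        "the queen ruled the land",
    ]

    # Count how often each word co-occurs with the others in the same sentence.
    cooc = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j, v in enumerate(words):
                if i != j:
                    cooc[w][v] += 1

    def cosine(u, v):
        """Cosine similarity between two co-occurrence count vectors."""
        shared = set(u) & set(v)
        dot = sum(u[w] * v[w] for w in shared)
        norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
        return dot / norm if norm else 0.0

    # Words that appear in similar contexts come out as similar,
    # even though no referent is ever given for any of them.
    print(cosine(cooc["cat"], cooc["dog"]))     # high
    print(cosine(cooc["king"], cooc["queen"]))  # highest: identical contexts in this toy corpus
    print(cosine(cooc["cat"], cooc["king"]))    # lower; most of the overlap is just 'the'

Real systems use far larger corpora and learned embeddings rather than raw counts, but the principle, similarity of use standing in for similarity of meaning, is the same.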
That can be achieved in 3-4 years max given the trend in parameter training and computational power observed in the last few years. Soon there will be no excuses for human intelligence exceptionalists. Giovanni On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Others had argued on this thread that it was impossible to extract >>> meaning from something that lacked referents. it seems you and I agree that >>> it is possible to extract meaning and understanding from a data set alone, >>> by virtue of the patterns and correlations present within that data. >>> >> >> With the caveat that referents are themselves data, so if we include >> appropriate referents in that data set then yes. Referents are often >> referenced by their correlations and matching patterns. >> > > I don't understand what you are saying here. > > > >> >>> >>> I am not convinced a massive brain is required to learn meaning. My AI >>> bots start with completely randomly weighted neural networks of just a >>> dozen or so neurons. In just a few generations they learn that "food is >>> good" and "poison is bad". Survival fitness tests are all that is needed >>> for them to learn that lesson. Do their trained neural nets reach some >>> understanding that green means good and red means bad? They certainly >>> behave as if they have that understanding, but the only data they are given >>> is "meaningless numbers" representing inputs to their neurons. >>> >>> >>> >>>> Just because one type of AI could do a task does not mean that all AIs >>>> are capable of that task. You keep invoking the general case, where an AI >>>> that is capable is part of a superset, then wondering why there is >>>> disagreement about a specific case, discussing a more limited subset that >>>> only contains other AIs. >>>> >>> >>> There was a general claim that no intelligence, however great, could >>> learn meaning from a dictionary (or other data set like Wikipedia or list >>> of neural impulses timings) as these data "lack referents". If we agree >>> that an appropriate intelligence can attain meaning and understanding then >>> we can drop this point. >>> >> >> I recall that the claim was about "no (pure) LLM", not "no (general) >> intelligence". >> > > My original claim was for an intelligent alien species. > > >> Also there is a substantial distinction between a dictionary or >> Wikipedia, and any list of neural impulses. A pure LLM might only be able >> to consult a dictionary or Wikipedia (pictures included); a general >> intelligence might be able to process neural impulses. >> > > In all cases it's a big file of 1s and 0s containing patterns and > correlations which can be learned. > > >> >>> There is no task requiring intelligence that a sufficiently large LLM >>> could not learn to do as part of learning symbol prediction. Accordingly, >>> saying a LLM is a machine that could never learn to do X, or understand Y, >>> is a bit like someone saying a particular Turing machine could never run >>> the program Z. >>> >> >> And indeed there are some programs that certain Turing machines are >> unable to run. 
For example, if a Turing machine contains no randomizer and >> no way to access random data, it is unable to run a program where one of >> the steps requires true randomness. >> > > Randomness is uncomputable. And I would go so far to say say true > randomness doesn't exist, there is only information which cannot be guessed > or predicted by certain parties. This is because true randomness requires > creation of information but creation of information violates the principal > of conservation of information in quantum mechanics. > > In any case my point wasn't that everything is computable, it's that the > universality of computation means any Turing machine can run any program > that any other Turing machine can run. The universality of neural networks > likewise implies not that every function can be learned, but any function > that a neutral network can learn can be learned by any neural network of > sufficient size. Our brains is fundamentally a neural network. If our > brains can learn to understand meaning then this should be in the scope of > possibility for other neural networks. > > Much has been written about the limits of psuedorandom generators; I defer >> to that literature to establish that those are meaningfully distinct from >> truly random things, at least under common circumstances of significance. >> > > I am quite familiar with pseudorandom number generators. They are a bit of > a fascination of mine. > > >> One problem is defining when an AI has grown to be more than just a LLM. >> What is just a LLM, however large, and what is not just a LLM (whether or >> not it includes a LLM)? >> > > That's a good question. I am not sure it can be so neatly defined. For > example, is a LLM trained on some examples of ASCII art considered having > been exposed to visual stimuli? > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 00:20:10 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 18:20:10 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. I agree 100%. Also, like Bender and the other author of this paper, I object to the language I often see in discussions like these we have on ExI about these subjects. quoted the paper cited in a previous message and below: -- "Large LMs: Hype and analysis Publications talking about the application of large LMs to meaning-sensitive tasks tend to describe the models with terminology that, if interpreted at face value, is misleading. Here is a selection from academically-oriented pieces (emphasis added): (1) In order to train a model that *understands* sentence relationships, we pre-train for a binarized next sentence prediction task. (Devlin et al., 2019) (2) Using BERT, a pretraining language model, has been successful for single-turn machine *comprehension*. . 
.(Ohsugi et al., 2019) (3) The surprisingly strong ability of these models to *re-call factual knowledge* without any ?ne-tuning demonstrates their potential as unsupervised open-domain QA systems. (Petroni et al., 2019)-- In linguistics and epistemology and in philosophy in general, usually terms like "understand" and "comprehend" and "recall factual knowledge" have meanings that are not applicable to these languuge models. They do not actually comprehend or understand anything whatosever. They only make us think they do. https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 00:22:09 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 20:22:09 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Others had argued on this thread that it was impossible to extract >>>> meaning from something that lacked referents. it seems you and I agree that >>>> it is possible to extract meaning and understanding from a data set alone, >>>> by virtue of the patterns and correlations present within that data. >>>> >>> >>> With the caveat that referents are themselves data, so if we include >>> appropriate referents in that data set then yes. Referents are often >>> referenced by their correlations and matching patterns. >>> >> >> I don't understand what you are saying here. >> > > Do you agree that referents are data? If not, why not? > What is a referent? My understanding was that according to you and Adrian, things like dictionaries and Wikipedia text lack referents since they are just bodies of text. My belief is that it doesn't matter. If there are scruitble patterns present in the data, then an intelligence can find them and figure out how to understand them. > If they are data, then they - as data - can be included in a data set. > > You talked about "a data set alone", without specifying what that data set > was. In other words, that there exists such a data set. > > A data set that includes referents, is a data set that includes referents. > For clarity, could you give an example of a data set that includes referents? I just want to ensure we're talking about the same thing. > If it is possible to extract meaning from certain referents, then it is > possible to extract meaning from a data set that includes those referents - > specifically by extracting meaning from those referents, regardless of what > else may or may not also be in that data set. > > This is probably not what you meant to say. However, in practice, many > data sets will include referents...even if it may take a while to find them. > > Again I refer to the "first contact" problem. How does someone who > encounters a people speaking an utterly different language, with no > pre-existing translators or translations to reference, begin to establish > communication with these people? 
> I gave an example of this, assuming I happened upon a dictionary in a language I didn't recognize, I showed how you could exploit mathematical definitions to find important constants, decode the numerical system, then the periodic table, and work your way up various elements and compounds. That would provide enough of a scaffolding to work out the rest. Lke a puzzle, it gets easier with each next word that is solved. Obviously it is possible, as human beings have done this very thing > multiple times throughout history. Consider that, and you will have the > beginnings of how an AI that may include a LLM can come to truly understand > words. By definition of the problem, the answer lies outside of just words > alone - and thus, outside of what something that is just a LLM can do. > If you look at this paper: https://arxiv.org/pdf/2303.12712.pdf You will see early versions of gpt-4, despite not yet being trained on images at the time, was still able to draw images of various objects in various graphical languages. This shows that the LLM can learn more than just words. It somehow gained an ability to picture things in its head. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 00:31:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 20:31:50 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: On Thu, Mar 23, 2023, 8:22 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I really don?t think it thinks, but it makes us think it thinks. > ChatGPT is wicked cool. > > I agree 100%. Also, like Bender and the other author of this paper, I > object to the language I often see in discussions like these we have on ExI > about these subjects. > > quoted the paper cited in a previous message and below: > > -- > "Large LMs: Hype and analysis > Publications talking about the application of large LMs to > meaning-sensitive tasks tend to describe the models with terminology that, > if interpreted at face value, is misleading. Here is a selection from > academically-oriented pieces (emphasis added): > > (1) In order to train a model that *understands* sentence relationships, > we pre-train for a binarized next sentence prediction task. (Devlin et al., > 2019) > > (2) Using BERT, a pretraining language model, has been successful for > single-turn machine *comprehension*. . .(Ohsugi et al., 2019) > > (3) The surprisingly strong ability of these models to *re-call factual > knowledge* without any ?ne-tuning demonstrates their potential as > unsupervised open-domain QA systems. (Petroni et al., 2019)-- > > In linguistics and epistemology and in philosophy in general, usually > terms like "understand" and "comprehend" and "recall factual knowledge" > have meanings that are not applicable to these languuge models. They do not > actually comprehend or understand anything whatosever. They only make us > think they do. 
> > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data > If that's true how do I know anyone else on this list is actually comprehending or understanding anything? Jason > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 00:44:16 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 18:44:16 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: On Thu, Mar 23, 2023 at 6:35?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 8:22 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >> > I really don?t think it thinks, but it makes us think it thinks. >> ChatGPT is wicked cool. >> >> I agree 100%. Also, like Bender and the other author of this paper, I >> object to the language I often see in discussions like these we have on ExI >> about these subjects. >> >> quoted the paper cited in a previous message and below: >> >> -- >> "Large LMs: Hype and analysis >> Publications talking about the application of large LMs to >> meaning-sensitive tasks tend to describe the models with terminology that, >> if interpreted at face value, is misleading. Here is a selection from >> academically-oriented pieces (emphasis added): >> >> (1) In order to train a model that *understands* sentence relationships, >> we pre-train for a binarized next sentence prediction task. (Devlin et al., >> 2019) >> >> (2) Using BERT, a pretraining language model, has been successful for >> single-turn machine *comprehension*. . .(Ohsugi et al., 2019) >> >> (3) The surprisingly strong ability of these models to *re-call factual >> knowledge* without any ?ne-tuning demonstrates their potential as >> unsupervised open-domain QA systems. (Petroni et al., 2019)-- >> >> In linguistics and epistemology and in philosophy in general, usually >> terms like "understand" and "comprehend" and "recall factual knowledge" >> have meanings that are not applicable to these languuge models. They do not >> actually comprehend or understand anything whatosever. They only make us >> think they do. >> >> >> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data >> > > If that's true how do I know anyone else on this list is actually > comprehending or understanding anything? > You can only infer it and trust that we are not chatbots, and I agree it is a problem and likely to become a very serious problem in the near future. I already see a ChatGPT persona on twitter, though the operator is not trying to hide it. I have another friend who quite literally fell in love with a chatbot based on the previous version of ChatGPT. He assigned her her own twitter account. 
When I told him on facebook that he was nuts to think that his chatbot "girlfriend" really loved him, he became extremely angry, called me an asshole for saying such things about "her kind," and unfriended me. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 24 00:46:17 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 17:46:17 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: <032c01d95dea$0c0a91d0$241fb570$@rainier66.com> From: Gordon Swobe Sent: Thursday, 23 March, 2023 5:20 PM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat > wrote: > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. I agree 100%. Also, ?In linguistics and epistemology and in philosophy in general, usually terms like "understand" and "comprehend" and "recall factual knowledge" have meanings that are not applicable to these languuge models. They do not actually comprehend or understand anything whatosever. They only make us think they do. https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data -gts Note also that our notion of how to define machine intelligence has changed dramatically. By the definition Turing initially proposed, ChatGPT passes as intelligence. As you have patiently endured for so long, you already know my personal obsession with all this is all about creating digital companions for the elderly, for I have witnessed firsthand in a most heartbreaking way elderly people who sit alone in a room full of other elderly people who are similarly alone. Their minds decline rapidly from lack of stimulation. They?re bored. As soon as they can no longer walk and visit with the neighbors, they lack mental stimulation. My theory is that when one lands in a wheelchair in a nursing home, one can have at least a trace of enjoyment and fulfillment in one?s last few declining years. If you want something depressing, do consider those declining years in the nursing home are not always just one or two, it can be a decade (oh mercy.) So? if we could arrange for them to have a digital companion or possibly two, with whom they can carry fun interesting conversation, their brains might not decline as rapidly. Furthermore? if the elderly had that in the home, they might hold it together longer, which means they can stay in their own homes longer, which is something I do hope you ponder carefully. My own mother and my bride?s father live in homes they own, so their expenses are low. But if they move into assisted living, their expenses are appalling. Those things reeeeeaaallly cost ya. So every month they can stay in their own homes with their own pets and their own stuff and their own everything is another month of low expenses and way better quality of life, waaaaay better. So? good luck to us with GPT. Parting note: I did some experiments with my own mother to see if she would know she was talking to a computer. She didn?t. I intend to repeat the experiment with my father in law in a coupla weeks, see if he will catch on that he is conversing with software. 
I suspect he won?t either. Cool! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:04:02 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:04:02 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: What is this stupid example? ChatGPT can answer that question perfectly. In fact, I just fed all this to GPT-4 and he makes more sense than this Bender lady. I'm worried about people not getting that we are not different from these AI. Enough with this idiotic human exceptionalism. GPT-4: While the octopus (O) is excellent at detecting statistical patterns, it's true that it initially lacks the contextual understanding of the world that A and B possess. However, as an intelligent AI, O can learn from the conversation it has been observing. As O has been listening to A and B for some time, it's likely that it has picked up on the meanings of many words based on how they are used in different contexts. While O may not have experienced bears or sticks firsthand, it may have learned their meaning by observing how A and B use those terms in conversation. To help A, the octopus could rely on its ability to understand patterns and context. It could analyze the situations where A and B have talked about bears, sticks, or other related topics before. By doing so, O may identify potential strategies that A and B have discussed in the past or come up with a solution by combining different pieces of information it has learned. For instance, O could suggest that A use the sticks to create noise or a barrier to scare off the bear, a strategy it might have seen A and B discuss in other conversations about wildlife encounters. In this case, even though O may not have direct knowledge about bears and sticks, it could still provide helpful advice by leveraging its understanding of the patterns in A and B's conversations. However, it's important to recognize that O's success in providing relevant instructions depends on the quality and depth of information it has been able to gather from A and B's conversations. On Thu, Mar 23, 2023 at 12:42?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Emily M. Bender, a computational linguist at the University of Washington, > makes the same argument I hold as valid. Large Language Models are not > conscious or human-like as they lack referents. > > An interesting thought experiment: > > "Say that A and B, both fluent speakers of English, are independently > stranded on two uninhabited islands. They soon discover that previous > visitors to these islands have left behind telegraphs and that they can > communicate with each other via an underwater cable. A and B start happily > typing messages to each other. > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > to visit or observe the two islands, discovers a way to tap into the > underwater cable and listen in on A and B?s conversations. O knows nothing > about English initially but is very good at detecting statistical patterns. > Over time, O learns to predict with great accuracy how B will respond to > each of A?s utterances. > > Soon, the octopus enters the conversation and starts impersonating B and > replying to A. This ruse works for a while, and A believes that O > communicates as both she and B do ? with meaning and intent. Then one day A > calls out: ?I?m being attacked by an angry bear. 
Help me figure out how to > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > to help. How could it succeed? The octopus has no referents, no idea what > bears or sticks are. No way to give relevant instructions, like to go grab > some coconuts and rope and build a catapult. A is in trouble and feels > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > Emily M. Bender is very worried what will happen when we forget this. > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:06:53 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:06:53 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Absolutely, Bender's garbage comes from somebody that has not interacted with ChatGPT, or even worse GPT-4, and has not clue about how NLPs work. On Thu, Mar 23, 2023 at 12:56?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > Emily M. Bender, a computational linguist at the University of > Washington, > > makes the same argument I hold as valid. Large Language Models are not > > conscious or human-like as they lack referents. > > > > An interesting thought experiment: > > > > "Say that A and B, both fluent speakers of English, are independently > > stranded on two uninhabited islands. They soon discover that previous > > visitors to these islands have left behind telegraphs and that they can > > communicate with each other via an underwater cable. A and B start > happily > > typing messages to each other. > > > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > > to visit or observe the two islands, discovers a way to tap into the > > underwater cable and listen in on A and B?s conversations. O knows > nothing > > about English initially but is very good at detecting statistical > patterns. > > Over time, O learns to predict with great accuracy how B will respond to > > each of A?s utterances. > > > > Soon, the octopus enters the conversation and starts impersonating B and > > replying to A. This ruse works for a while, and A believes that O > > communicates as both she and B do ? with meaning and intent. Then one > day A > > calls out: ?I?m being attacked by an angry bear. Help me figure out how > to > > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > > to help. > > I posed this exact question to ChatGPT and got a very sensible > actually helpful answer: > --------------------------- > > Me: I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks. > > ChatGPT: If you find yourself in a situation where you're being > attacked by a bear, your first priority should be to get away from the > animal as quickly as possible. If you have sticks, they can be useful > in defending yourself, but only as a last resort. 
> > Here are some tips to help you defend yourself against an angry bear > using sticks: > > Hold the sticks firmly: Make sure you grip the sticks tightly with > both hands, so they don't slip out of your grasp. > > Make yourself look big: Stand up as tall as possible and raise your > arms above your head. This will make you appear larger and more > intimidating to the bear. > > Make noise: Shout, scream, and make as much noise as possible. This > will make the bear aware that you are a threat and may cause it to > back off. > > Aim for the bear's sensitive areas: If the bear charges at you, aim > for its nose or eyes with the sticks. These are sensitive areas and > may cause the bear to retreat. > > Back away slowly: If the bear backs off, slowly and carefully move > away while still facing the bear. Do not turn your back on the bear or > run away, as this may trigger it to chase you. > > Remember, using sticks to defend yourself against an angry bear is a > last resort, and you should always try to get away from the animal as > quickly as possible. > ------------------------- > > How could it succeed? The octopus has no referents, no idea what > > bears or sticks are. No way to give relevant instructions, like to go > grab > > some coconuts and rope and build a catapult. A is in trouble and feels > > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > > Emily M. Bender is very worried what will happen when we forget this. > > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > > ChatGPT's answer is far superior to Bender's very impractical answer > of "go grab some coconuts and rope and build a catapult". So not only > is ChatGPT smarter than Bender's Octopus, but seems to be smarter than > Bender herself. > > Stuart LaForge > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 01:07:16 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 19:07:16 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 6:29?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> Others had argued on this thread that it was impossible to extract >>>>> meaning from something that lacked referents. it seems you and I agree that >>>>> it is possible to extract meaning and understanding from a data set alone, >>>>> by virtue of the patterns and correlations present within that data. >>>>> >>>> >>>> With the caveat that referents are themselves data, so if we include >>>> appropriate referents in that data set then yes. Referents are often >>>> referenced by their correlations and matching patterns. >>>> >>> >>> I don't understand what you are saying here. 
>>> >> >> Do you agree that referents are data? If not, why not? >> > > > What is a referent? My understanding was that according to you and Adrian, > things like dictionaries and Wikipedia text lack referents since they are > just bodies of text. > I think it was Adrian who asked you that question. A referent as I use the term is something that exists outside of language to which a word refers. You have an apple in your hand and you say "this is an apple." The apple in your hand is the referent that corresponds to your word "apple." As for whether referents are data, it is unclear to me how referents could be data. I am inclined to say no but I am not sure what Adrain is getting at here. -gts > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:15:31 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:15:31 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Gordon, First of all ChatGPT is trained on the English language so it knows all the common words and probably in fact ALL words in the English language, in fact probably it knows common words in many languages including dead ones. If the the two people A and B never used the word stick it is because the exchanged a relatively small amount of information. Basically what Bender is saying is "if the training of a NLM is limited then the NLM would not know what certain words mean". Of course, that is the case. But that is the reason why ChatGPT is so superior to other NLMs, it was trained of a vast set of data. The example Bender gives applies to humans too. If I don't know the Italian language and I use the word bastone, good luck knowing it means stick. You can try to ask me to describe you bastone but I would have to use other words and if you know these words then you understand what bastone is, same thing with ChatGPT. THERE IS NO DIFFERENCE. Referents are not important in this case. Math is a language for god sake and it doesn't need any referents. In fact you can build an entire math system starting from the null set, that is not a real thing. Giovanni On Thu, Mar 23, 2023 at 1:25?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bender's point is not that ChatGBT is incapable of generating sensible > sentences about sticks and bears. It is that these LLMs don't know the > meanings of any words whatsoever. Confronted with a word it has never seen, > it must do a statistical analysis to try to find probable next words, never > knowing what any of them mean. > > -gts > > On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Gordon Swobe via extropy-chat : >> >> > Emily M. Bender, a computational linguist at the University of >> Washington, >> > makes the same argument I hold as valid. Large Language Models are not >> > conscious or human-like as they lack referents. >> > >> > An interesting thought experiment: >> > >> > "Say that A and B, both fluent speakers of English, are independently >> > stranded on two uninhabited islands. They soon discover that previous >> > visitors to these islands have left behind telegraphs and that they can >> > communicate with each other via an underwater cable. A and B start >> happily >> > typing messages to each other. 
>> > >> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >> unable >> > to visit or observe the two islands, discovers a way to tap into the >> > underwater cable and listen in on A and B?s conversations. O knows >> nothing >> > about English initially but is very good at detecting statistical >> patterns. >> > Over time, O learns to predict with great accuracy how B will respond to >> > each of A?s utterances. >> > >> > Soon, the octopus enters the conversation and starts impersonating B and >> > replying to A. This ruse works for a while, and A believes that O >> > communicates as both she and B do ? with meaning and intent. Then one >> day A >> > calls out: ?I?m being attacked by an angry bear. Help me figure out how >> to >> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >> fails >> > to help. >> >> I posed this exact question to ChatGPT and got a very sensible >> actually helpful answer: >> --------------------------- >> >> Me: I?m being attacked by an angry bear. Help me figure out how to >> defend myself. I?ve got some sticks. >> >> ChatGPT: If you find yourself in a situation where you're being >> attacked by a bear, your first priority should be to get away from the >> animal as quickly as possible. If you have sticks, they can be useful >> in defending yourself, but only as a last resort. >> >> Here are some tips to help you defend yourself against an angry bear >> using sticks: >> >> Hold the sticks firmly: Make sure you grip the sticks tightly with >> both hands, so they don't slip out of your grasp. >> >> Make yourself look big: Stand up as tall as possible and raise your >> arms above your head. This will make you appear larger and more >> intimidating to the bear. >> >> Make noise: Shout, scream, and make as much noise as possible. This >> will make the bear aware that you are a threat and may cause it to >> back off. >> >> Aim for the bear's sensitive areas: If the bear charges at you, aim >> for its nose or eyes with the sticks. These are sensitive areas and >> may cause the bear to retreat. >> >> Back away slowly: If the bear backs off, slowly and carefully move >> away while still facing the bear. Do not turn your back on the bear or >> run away, as this may trigger it to chase you. >> >> Remember, using sticks to defend yourself against an angry bear is a >> last resort, and you should always try to get away from the animal as >> quickly as possible. >> ------------------------- >> > How could it succeed? The octopus has no referents, no idea what >> > bears or sticks are. No way to give relevant instructions, like to go >> grab >> > some coconuts and rope and build a catapult. A is in trouble and feels >> > duped. The octopus is exposed as a fraud." >> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >> > Emily M. Bender is very worried what will happen when we forget this. >> > >> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >> >> ChatGPT's answer is far superior to Bender's very impractical answer >> of "go grab some coconuts and rope and build a catapult". So not only >> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >> Bender herself. 
>> >> Stuart LaForge >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:27:58 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:27:58 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: *You will see early versions of gpt-4, despite not yet being trained on images at the time, was still able to draw images of various objects in various graphical languages. This shows that the LLM can learn more than just words. It somehow gained an ability to picture things in its head.* YES ! And this is what the people that dismiss NLMs do not understand. There are many emergent properties coming out of them that are not easy to predict from a simple statistical pattern prediction. For example, it is not obvious that grammar could be inferred from these statistical patterns. In fact, if you asked language experts many years ago if NLM could derive grammar from the statistical patterns the consensus was no, they could not. They were wrong. But NLMs do understand grammar and many other things. They even developed a theory of mind. This without any referent at all. So it is logical that they can understand in other conceptual domains without the use of referents. These language experts should actually develop better theories about how language works instead of insisting on human exceptionality. Giovanni On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 6:29?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> Others had argued on this thread that it was impossible to extract >>>>>> meaning from something that lacked referents. it seems you and I agree that >>>>>> it is possible to extract meaning and understanding from a data set alone, >>>>>> by virtue of the patterns and correlations present within that data. >>>>>> >>>>> >>>>> With the caveat that referents are themselves data, so if we include >>>>> appropriate referents in that data set then yes. Referents are often >>>>> referenced by their correlations and matching patterns. >>>>> >>>> >>>> I don't understand what you are saying here. >>>> >>> >>> Do you agree that referents are data? If not, why not? >>> >> >> >> What is a referent? My understanding was that according to you and >> Adrian, things like dictionaries and Wikipedia text lack referents since >> they are just bodies of text. >> > > I think it was Adrian who asked you that question. A referent as I use the > term is something that exists outside of language to which a word refers. 
> You have an apple in your hand and you say "this is an apple." The apple in > your hand is the referent that corresponds to your word "apple." > > As for whether referents are data, it is unclear to me how referents could > be data. I am inclined to say no but I am not sure what Adrain is getting > at here. > > -gts > >> >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Mar 24 01:29:29 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 18:29:29 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think it was Adrian who asked you that question. A referent as I use the > term is something that exists outside of language to which a word refers. > You have an apple in your hand and you say "this is an apple." The apple in > your hand is the referent that corresponds to your word "apple." > > As for whether referents are data, it is unclear to me how referents could > be data. I am inclined to say no but I am not sure what Adrain is getting > at here. > Ah, I thought you meant that the referent was the *fact that* that was an apple, rather than the referent being the apple itself. Facts are data; objects (such as apples) are not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:35:15 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:35:15 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Here is an example of a language with no referents: MATH. In fact, this is the language we would use to communicate with aliens if they existed. https://www.mathnasium.com/blog/why-mathematics-is-a-language#:~:text=In%20order%20to%20be%20considered,than%20spoken%20form%20of%20communication . On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 01:40:16 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 19:40:16 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi wrote: > Gordon, > Basically what Bender is saying is "if the training of a NLM is limited > then the NLM would not know what certain words mean". 
> No, that is not what she is saying, though seeing as how people are misunderstanding her thought experiment, I must agree the experiment is not as clear as it could be. She is saying, or rather reminding us, that there is a clear distinction to be made between form and meaning and that these language models are trained only on form. Here is the abstract of her academic paper in which she and her colleague mention the thought experiment. -- Abstract: The success of the large neural language mod-els on many NLP tasks is exciting. However,we ?nd that these successes sometimes lead to hype in which these models are being described as ?understanding? language or capturing ?meaning?. In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. In keeping with the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re Going?,we argue that a clear understanding of the distinction between form and meaning will help guide the ?eld towards better science around natural language understanding. -- Bender is a computational linguist at the University of Washington. I think I read that she is actually the head of the department. the paper: https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:53:08 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:53:08 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: What about math, it is a language but it has no referent. Mathematics can be thought of as a language, but it is fundamentally different from natural languages like English or Spanish. While natural languages primarily rely on referents to convey meaning, mathematics is an abstract and formal system used to represent relationships, quantities, and patterns. However, it's not accurate to say that mathematics has no referents at all. In mathematics, symbols and expressions serve as a means of representing abstract concepts and relationships, and these symbols can be seen as having referents, albeit abstract ones. For example, the number "2" is a symbol that represents the abstract concept of "two of something." Similarly, variables like "x" or "y" serve as placeholders for unknown or varying quantities, and they can also be thought of as having referents within the context of a specific problem or equation. In this sense, the referents in mathematics are more abstract and generalized compared to those in natural languages, which are often tied to concrete objects, experiences, or ideas. The power of mathematics comes from its ability to represent and manipulate these abstract concepts and relationships in a precise and logical way, allowing us to model and solve problems across a wide range of disciplines. [image: Giovanni Santostasi] But then a NLM doesn't need to have a direct experience of 2 of something, that concept can be totally abstract, right? Yes, you are correct. A neural language model (NLM), like ChatGPT, does not need direct experience of "two of something" to understand or use the concept. NLMs learn from vast amounts of text data, and through this process, they develop an understanding of the abstract concept of numbers, including "2," based on the patterns and relationships observed in the data. 
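What "detecting statistical patterns" means here can be made concrete with a toy model. The sketch below is illustrative only: the tiny corpus and the function names are invented for the example. It is the octopus's position in miniature, a bigram table built from nothing but co-occurrence counts in text it has seen, with no bears, sticks, or any other referent anywhere in the program.

    from collections import defaultdict, Counter

    def train_bigrams(corpus):
        """Count, for each word, how often each other word follows it."""
        counts = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.lower().split()
            for current, nxt in zip(words, words[1:]):
                counts[current][nxt] += 1
        return counts

    def predict_next(counts, word):
        """Return the most frequent follower of a word, if any."""
        followers = counts.get(word.lower())
        if not followers:
            return None
        return followers.most_common(1)[0][0]

    def continue_text(counts, start, length=5):
        """Greedily extend a prompt, word by word, from learned form alone."""
        words = [start]
        for _ in range(length):
            nxt = predict_next(counts, words[-1])
            if nxt is None:
                break
            words.append(nxt)
        return " ".join(words)

    # A toy stand-in for the overheard conversation the octopus trains on.
    corpus = [
        "the angry bear attacked the camp",
        "i hit the bear with a stick",
        "the bear ran into the forest",
    ]
    model = train_bigrams(corpus)
    print(continue_text(model, "the"))  # e.g. "the bear attacked the bear attacked"

A real language model replaces the raw counts with billions of learned weights and conditions on far longer contexts, but the training signal is the same in kind: which forms tend to follow which forms. Whether that kind of grounding-free pattern learning can add up to meaning is exactly what is in dispute here.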
NLMs can learn about the properties and usage of numbers in language without having any real-world experience or direct interaction with objects. They can generate appropriate responses involving numbers or mathematical concepts because they have been exposed to numerous examples of how these concepts are used in various contexts during their training. On Thu, Mar 23, 2023 at 6:35?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think it was Adrian who asked you that question. A referent as I use >> the term is something that exists outside of language to which a word >> refers. You have an apple in your hand and you say "this is an apple." The >> apple in your hand is the referent that corresponds to your word "apple." >> >> As for whether referents are data, it is unclear to me how referents >> could be data. I am inclined to say no but I am not sure what Adrain is >> getting at here. >> > > Ah, I thought you meant that the referent was the *fact that* that was an > apple, rather than the referent being the apple itself. Facts are data; > objects (such as apples) are not. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 02:05:11 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 22:05:11 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 9:23 PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon, > First of all ChatGPT is trained on the English language so it knows all > the common words and probably in fact ALL words in the English language, in > fact probably it knows common words in many languages including dead ones. > If the the two people A and B never used the word stick it is because the > exchanged a relatively small amount of information. Basically what Bender > is saying is "if the training of a NLM is limited then the NLM would not > know what certain words mean". Of course, that is the case. But that is the > reason why ChatGPT is so superior to other NLMs, it was trained of a vast > set of data. The example Bender gives applies to humans too. If I don't > know the Italian language and I use the word bastone, good luck knowing it > means stick. You can try to ask me to describe you bastone but I would have > to use other words and if you know these words then you understand what > bastone is, same thing with ChatGPT. THERE IS NO DIFFERENCE. Referents are > not important in this case. Math is a language for god sake and it doesn't > need any referents. In fact you can build an entire math system starting > from the null set, that is not a real thing. > Great example! Jason > > > > On Thu, Mar 23, 2023 at 1:25?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bender's point is not that ChatGBT is incapable of generating sensible >> sentences about sticks and bears. It is that these LLMs don't know the >> meanings of any words whatsoever. 
Confronted with a word it has never seen, >> it must do a statistical analysis to try to find probable next words, never >> knowing what any of them mean. >> >> -gts >> >> On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Quoting Gordon Swobe via extropy-chat : >>> >>> > Emily M. Bender, a computational linguist at the University of >>> Washington, >>> > makes the same argument I hold as valid. Large Language Models are not >>> > conscious or human-like as they lack referents. >>> > >>> > An interesting thought experiment: >>> > >>> > "Say that A and B, both fluent speakers of English, are independently >>> > stranded on two uninhabited islands. They soon discover that previous >>> > visitors to these islands have left behind telegraphs and that they can >>> > communicate with each other via an underwater cable. A and B start >>> happily >>> > typing messages to each other. >>> > >>> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >>> unable >>> > to visit or observe the two islands, discovers a way to tap into the >>> > underwater cable and listen in on A and B?s conversations. O knows >>> nothing >>> > about English initially but is very good at detecting statistical >>> patterns. >>> > Over time, O learns to predict with great accuracy how B will respond >>> to >>> > each of A?s utterances. >>> > >>> > Soon, the octopus enters the conversation and starts impersonating B >>> and >>> > replying to A. This ruse works for a while, and A believes that O >>> > communicates as both she and B do ? with meaning and intent. Then one >>> day A >>> > calls out: ?I?m being attacked by an angry bear. Help me figure out >>> how to >>> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >>> fails >>> > to help. >>> >>> I posed this exact question to ChatGPT and got a very sensible >>> actually helpful answer: >>> --------------------------- >>> >>> Me: I?m being attacked by an angry bear. Help me figure out how to >>> defend myself. I?ve got some sticks. >>> >>> ChatGPT: If you find yourself in a situation where you're being >>> attacked by a bear, your first priority should be to get away from the >>> animal as quickly as possible. If you have sticks, they can be useful >>> in defending yourself, but only as a last resort. >>> >>> Here are some tips to help you defend yourself against an angry bear >>> using sticks: >>> >>> Hold the sticks firmly: Make sure you grip the sticks tightly with >>> both hands, so they don't slip out of your grasp. >>> >>> Make yourself look big: Stand up as tall as possible and raise your >>> arms above your head. This will make you appear larger and more >>> intimidating to the bear. >>> >>> Make noise: Shout, scream, and make as much noise as possible. This >>> will make the bear aware that you are a threat and may cause it to >>> back off. >>> >>> Aim for the bear's sensitive areas: If the bear charges at you, aim >>> for its nose or eyes with the sticks. These are sensitive areas and >>> may cause the bear to retreat. >>> >>> Back away slowly: If the bear backs off, slowly and carefully move >>> away while still facing the bear. Do not turn your back on the bear or >>> run away, as this may trigger it to chase you. >>> >>> Remember, using sticks to defend yourself against an angry bear is a >>> last resort, and you should always try to get away from the animal as >>> quickly as possible. >>> ------------------------- >>> > How could it succeed? 
The octopus has no referents, no idea what >>> > bears or sticks are. No way to give relevant instructions, like to go >>> grab >>> > some coconuts and rope and build a catapult. A is in trouble and feels >>> > duped. The octopus is exposed as a fraud." >>> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >>> > Emily M. Bender is very worried what will happen when we forget this. >>> > >>> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >>> >>> ChatGPT's answer is far superior to Bender's very impractical answer >>> of "go grab some coconuts and rope and build a catapult". So not only >>> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >>> Bender herself. >>> >>> Stuart LaForge >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Fri Mar 24 02:36:46 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 23 Mar 2023 22:36:46 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: I don't have a lot of faith in a person who has a hypothesis and designs a thought experiment that is essentially completely irrelevant to the hypothesis. The only connection is some tenuous metaphor stuff, but the thought experiment fails because the answer is obvious: like I said earlier, and others have said, the octopus simply didn't have access to the information. If the author wanted to prove their actual hypothesis, maybe they should have designed a thought experiment that was related to it. That makes me think all they had was a hunch, and designed a bad thought experiment around it. It's even worse than the awful Chinese Room experiment you spoke on ten years ago. Like I mentioned, not having access to the actual referents doesn't even mean a learning entity cannot know them. You likely haven't experienced MOST things you know. You know them because of the experience of others, just like the AI might. I'm open to your argument in some ways, but you have done a poor job or defending it. On Thu, Mar 23, 2023, 9:45 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi > wrote: > >> Gordon, >> Basically what Bender is saying is "if the training of a NLM is limited >> then the NLM would not know what certain words mean". >> > > No, that is not what she is saying, though seeing as how people are > misunderstanding her thought experiment, I must agree the experiment is not > as clear as it could be. She is saying, or rather reminding us, that there > is a clear distinction to be made between form and meaning and that these > language models are trained only on form. Here is the abstract of her > academic paper in which she and her colleague mention the thought > experiment. 
> > -- > Abstract: The success of the large neural language mod-els on many NLP > tasks is exciting. However,we ?nd that these successes sometimes lead to > hype in which these models are being described as ?understanding? language > or capturing ?meaning?. In this position paper, we argue that a system > trained only on form has a priori no way to learn meaning. In keeping with > the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re > Going?,we argue that a clear understanding of the distinction between form > and meaning will help guide the ?eld towards better science around natural > language understanding. > -- > Bender is a computational linguist at the University of Washington. I > think I read that she is actually the head of the department. > > the paper: > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 02:47:39 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 20:47:39 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think it was Adrian who asked you that question. A referent as I use >> the term is something that exists outside of language to which a word >> refers. You have an apple in your hand and you say "this is an apple." The >> apple in your hand is the referent that corresponds to your word "apple." >> >> As for whether referents are data, it is unclear to me how referents >> could be data. I am inclined to say no but I am not sure what Adrain is >> getting at here. >> > > Ah, I thought you meant that the referent was the *fact that* that was an > apple, rather than the referent being the apple itself. Facts are data; > objects (such as apples) are not. > I wouldn't say that referents are necessarily physical objects, however. They are simply whatever the word is pointing to outside of language, which could be an idea or abstract concept to which we assign a label in order to verbalize it. Giovanni, from your other post, it is not it is not at all clear to me that the language of mathematics has no referents. ChatGPT explained that is not the case when you asked it, but nevermind what it says. I think a sentence like 1+1=2 refers to a mathematical truth that exists separate from the numerical expression of it. The expression is the "word" or "sentence" and abstract mathematical truth is the referent. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
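Giovanni's earlier remark about building an entire math system starting from the null set, and the point just above that the referent of "2" is abstract, can both be made concrete with the standard von Neumann construction. The snippet below is a small illustrative sketch (the function names, and the use of Python's len() to drive a loop, are conveniences of the example rather than part of the construction): each number is simply the set of the numbers before it, and nothing in it points at any physical object.

    def successor(n: frozenset) -> frozenset:
        """The next number: everything up to and including n."""
        return n | frozenset({n})

    zero = frozenset()        # 0 is the empty set
    one = successor(zero)     # 1 = {0}
    two = successor(one)      # 2 = {0, 1}

    def add(a: frozenset, b: frozenset) -> frozenset:
        """Finite addition by repeated succession."""
        result = a
        for _ in range(len(b)):  # len() only counts loop steps; the math is in successor()
            result = successor(result)
        return result

    assert add(one, one) == two  # 1 + 1 = 2, with no referent in sight
    assert len(two) == 2         # the "two-ness" is internal structure

Whether a system that only ever manipulates such structures thereby understands them is, of course, the open question; the construction only shows that the structures themselves need nothing outside language and sets.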
URL: From jasonresch at gmail.com Fri Mar 24 02:58:33 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 22:58:33 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 10:49 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think it was Adrian who asked you that question. A referent as I use >>> the term is something that exists outside of language to which a word >>> refers. You have an apple in your hand and you say "this is an apple." The >>> apple in your hand is the referent that corresponds to your word "apple." >>> >>> As for whether referents are data, it is unclear to me how referents >>> could be data. I am inclined to say no but I am not sure what Adrain is >>> getting at here. >>> >> >> Ah, I thought you meant that the referent was the *fact that* that was an >> apple, rather than the referent being the apple itself. Facts are data; >> objects (such as apples) are not. >> > > I wouldn't say that referents are necessarily physical objects, however. > They are simply whatever the word is pointing to outside of language, which > could be an idea or abstract concept to which we assign a label in order to > verbalize it. > > Giovanni, from your other post, it is not it is not at all clear to me > that the language of mathematics has no referents. ChatGPT explained that > is not the case when you asked it, but nevermind what it says. I think a > sentence like 1+1=2 refers to a mathematical truth that exists separate > from the numerical expression of it. The expression is the "word" or > "sentence" and abstract mathematical truth is the referent. > I agree, you could say mathematical truth exists outside language. But one thing this does highlight is there are many things you know about despite never having that object in hand to point and look at. You've never seen the abstract object '2'. You've never seen the core of the earth, or an electron. We lack sensory access to these things and so everything we know about them we know only through language. How do we come to understand things like '2' or electrons? Jason > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 03:15:47 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 21:15:47 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I don't have a lot of faith in a person who has a hypothesis and designs a > thought experiment that is essentially completely irrelevant to the > hypothesis. > As I wrote, I agree the thought experiment does not illustrate her point clearly, at least outside of the context of her academic paper. 
As I've mentioned, the octopus is supposed to represent the state in which an LLM is in -- completely disconnected from the meanings of words (referents) that exist only outside of language in the real world represented by the islands. But it is a sloppy thought experiment if you don't know what she is trying to say. It is about form vs meaning. LLMs are trained only on and only know (so to speak) the forms and patterns of language. They are like very talented parrots, rambling on and on in seemingly intelligent ways, mimicking human speech, but never having any idea what they are talking about. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 03:35:19 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 23:35:19 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I don't have a lot of faith in a person who has a hypothesis and designs >> a thought experiment that is essentially completely irrelevant to the >> hypothesis. >> > > As I wrote, I agree the thought experiment does not illustrate her point > clearly, at least outside of the context of her academic paper. As I've > mentioned, the octopus is supposed to represent the state in which an LLM > is in -- completely disconnected from the meanings of words (referents) > that exist only outside of language in the real world represented by the > islands. But it is a sloppy thought experiment if you don't know what she > is trying to say. > > It is about form vs meaning. LLMs are trained only on and only know (so to > speak) the forms and patterns of language. They are like very talented > parrots, rambling on and on in seemingly intelligent ways, mimicking human > speech, but never having any idea what they are talking about. > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf and come away with the impression that GPT-4 has no idea what it is talking about. Jason > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 03:40:51 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 21:40:51 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 9:00?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > How do we come to understand things like '2' > 'tis a great mystery! I think most people including me are mathematical platonists or would be if they investigated the philosophy of mathematics. On this view, we do not invent mathematical truths. We discover them. 1+1 has always = 2 and it could never have been otherwise. If we discover them then where are they before we discover them? Platonists would say they exist in the platonic realm. For purposes of this discussion, that is where we find our mathematical referents. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Fri Mar 24 03:51:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 23:51:49 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 11:41 PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 9:00?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> How do we come to understand things like '2' >> > > 'tis a great mystery! I think most people including me are mathematical > platonists or would be if they investigated the philosophy of mathematics. > On this view, we do not invent mathematical truths. We discover them. 1+1 > has always = 2 and it could never have been otherwise. If we discover them > then where are they before we discover them? Platonists would say they > exist in the platonic realm. For purposes of this discussion, that is > where we find our mathematical referents. > Do you think a powerful "raw intelligence", which operated without any data or information from or about our universe, come to discover, learn, and understand the same mathematical truths that have come to know? Jason > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 04:13:44 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 22:13:44 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf > and come away with the impression that GPT-4 has no idea what it is talking > about. > Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will know word meanings any more than does GPT-3. Eventually AI on digital computers will far surpass human intelligence, but even then these computers will be manipulating the forms of words and not their meanings. Do you believe, like my friend who fell in love with a chatbot, that a software application can have genuine feelings of love for you? -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 04:26:52 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 22:26:52 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 9:53?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: Do you think a powerful "raw intelligence", which operated without any data > or information from or about our universe, come to discover, learn, and > understand the same mathematical truths that have come to know? > Depends on what you mean by "understand." I think my pocket calculator "understands" how to calculate, but it has no conscious understanding of the calculations. It makes no difference in my view how powerful the calculator might be. Processing through the formal rules of mathematics, it could conceivably "discover" a new mathematical truth, but it would have no way to actually know it consciously. It's just a mindless calculator, a tool. It knows 1+1=2 the same way my watch knows the time, which is to say, not at all. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gsantostasi at gmail.com Fri Mar 24 05:30:37 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:30:37 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Gordon, I will read the paper and write one to push back on her arguments. She is wrong. Giovanni On Thu, Mar 23, 2023 at 6:40?PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi > wrote: > >> Gordon, >> Basically what Bender is saying is "if the training of a NLM is limited >> then the NLM would not know what certain words mean". >> > > No, that is not what she is saying, though seeing as how people are > misunderstanding her thought experiment, I must agree the experiment is not > as clear as it could be. She is saying, or rather reminding us, that there > is a clear distinction to be made between form and meaning and that these > language models are trained only on form. Here is the abstract of her > academic paper in which she and her colleague mention the thought > experiment. > > -- > Abstract: The success of the large neural language mod-els on many NLP > tasks is exciting. However,we ?nd that these successes sometimes lead to > hype in which these models are being described as ?understanding? language > or capturing ?meaning?. In this position paper, we argue that a system > trained only on form has a priori no way to learn meaning. In keeping with > the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re > Going?,we argue that a clear understanding of the distinction between form > and meaning will help guide the ?eld towards better science around natural > language understanding. > -- > Bender is a computational linguist at the University of Washington. I > think I read that she is actually the head of the department. > > the paper: > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:33:01 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:33:01 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Jason, Is that the same paper Gordon is quoting? The link you gave doesn't work. Giovanni On Thu, Mar 23, 2023 at 8:36?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I don't have a lot of faith in a person who has a hypothesis and designs >>> a thought experiment that is essentially completely irrelevant to the >>> hypothesis. >>> >> >> As I wrote, I agree the thought experiment does not illustrate her point >> clearly, at least outside of the context of her academic paper. As I've >> mentioned, the octopus is supposed to represent the state in which an LLM >> is in -- completely disconnected from the meanings of words (referents) >> that exist only outside of language in the real world represented by the >> islands. But it is a sloppy thought experiment if you don't know what she >> is trying to say. >> >> It is about form vs meaning. 
LLMs are trained only on and only know (so >> to speak) the forms and patterns of language. They are like very talented >> parrots, rambling on and on in seemingly intelligent ways, mimicking human >> speech, but never having any idea what they are talking about. >> > > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf > and come away with the impression that GPT-4 has no idea what it is talking > about. > > Jason > > > >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:37:33 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:37:33 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: They are not trained only on form, or if they are trained only on form, meaning is a DERIVED property, an emergent property. I already linked a paper showing that ChatGPT derived THEORY OF MIND from the statistical properties of language. It is not obvious at all this could have been derived from statistical properties alone and it happened. The problem with emergent properties like these that they are not easy or impossible to predict. So the entire Bender paper is garbage because instead of being based on some stupid philosophical argument it should be based on experimental evidence. 1) Then the question is: Can we do an experiment using GPT-4 to see if it understand meaning? 2) What is that experiment? 3) Can Bender suggest one? Giovanni On Thu, Mar 23, 2023 at 6:40?PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi > wrote: > >> Gordon, >> Basically what Bender is saying is "if the training of a NLM is limited >> then the NLM would not know what certain words mean". >> > > No, that is not what she is saying, though seeing as how people are > misunderstanding her thought experiment, I must agree the experiment is not > as clear as it could be. She is saying, or rather reminding us, that there > is a clear distinction to be made between form and meaning and that these > language models are trained only on form. Here is the abstract of her > academic paper in which she and her colleague mention the thought > experiment. > > -- > Abstract: The success of the large neural language mod-els on many NLP > tasks is exciting. However,we ?nd that these successes sometimes lead to > hype in which these models are being described as ?understanding? language > or capturing ?meaning?. In this position paper, we argue that a system > trained only on form has a priori no way to learn meaning. In keeping with > the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re > Going?,we argue that a clear understanding of the distinction between form > and meaning will help guide the ?eld towards better science around natural > language understanding. > -- > Bender is a computational linguist at the University of Washington. I > think I read that she is actually the head of the department. 
> > the paper: > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:42:18 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:42:18 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Yes, as always Jason understanding of these topics shines through. LET'S DO EXPERIMENTS ! Not garbage philosophical navel gazing. I think this is the paper Jason linked (but PDF doesn't work on the archive for some reason). https://www.lesswrong.com/posts/mmxPbFz7wvthvHCxq/sparks-of-artificial-general-intelligence-early-experiments On Thu, Mar 23, 2023 at 8:36?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I don't have a lot of faith in a person who has a hypothesis and designs >>> a thought experiment that is essentially completely irrelevant to the >>> hypothesis. >>> >> >> As I wrote, I agree the thought experiment does not illustrate her point >> clearly, at least outside of the context of her academic paper. As I've >> mentioned, the octopus is supposed to represent the state in which an LLM >> is in -- completely disconnected from the meanings of words (referents) >> that exist only outside of language in the real world represented by the >> islands. But it is a sloppy thought experiment if you don't know what she >> is trying to say. >> >> It is about form vs meaning. LLMs are trained only on and only know (so >> to speak) the forms and patterns of language. They are like very talented >> parrots, rambling on and on in seemingly intelligent ways, mimicking human >> speech, but never having any idea what they are talking about. >> > > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf > and come away with the impression that GPT-4 has no idea what it is talking > about. > > Jason > > > >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:55:29 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:55:29 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: GPT-4 said that mathematics has ABSTRACT referents, and in that sense it was more precise I was in using language. I meant to say referents as concrete objects, that is the crucial matter here. Language can be understood with concrete objects like stones and threes. So computers can understand language without any physical referent. Gordon your argument then is completely wrong. 
On Thu, Mar 23, 2023 at 8:00?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 10:49 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> I think it was Adrian who asked you that question. A referent as I use >>>> the term is something that exists outside of language to which a word >>>> refers. You have an apple in your hand and you say "this is an apple." The >>>> apple in your hand is the referent that corresponds to your word "apple." >>>> >>>> As for whether referents are data, it is unclear to me how referents >>>> could be data. I am inclined to say no but I am not sure what Adrain is >>>> getting at here. >>>> >>> >>> Ah, I thought you meant that the referent was the *fact that* that was >>> an apple, rather than the referent being the apple itself. Facts are data; >>> objects (such as apples) are not. >>> >> >> I wouldn't say that referents are necessarily physical objects, however. >> They are simply whatever the word is pointing to outside of language, which >> could be an idea or abstract concept to which we assign a label in order to >> verbalize it. >> >> Giovanni, from your other post, it is not it is not at all clear to me >> that the language of mathematics has no referents. ChatGPT explained that >> is not the case when you asked it, but nevermind what it says. I think a >> sentence like 1+1=2 refers to a mathematical truth that exists separate >> from the numerical expression of it. The expression is the "word" or >> "sentence" and abstract mathematical truth is the referent. >> > > > I agree, you could say mathematical truth exists outside language. But one > thing this does highlight is there are many things you know about despite > never having that object in hand to point and look at. You've never seen > the abstract object '2'. You've never seen the core of the earth, or an > electron. We lack sensory access to these things and so everything we know > about them we know only through language. How do we come to understand > things like '2' or electrons? > > Jason > > > >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 06:18:49 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 00:18:49 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Nobody least of all me questions that GPT-4 will be capable of amazing feats, and that eventually these language models will surpass humans in terms of what we can call intelligence or what I might for sake of clarity prefer to call apparent intelligence. The question here is whether they will know what they are saying given that they are trained only on the forms of words with no access to the meanings or referents. 
Adrian has made the excellent point a couple of times that this is like the first contact problem in science fiction, and actually like the first contact problem between any two cultures with completely different languages. Q: When Kirk and Spock beam down to a new planet with intelligent alien life, how will they learn to communicate? A: With referents. Spock will point to himself and say "Spock." Kirk will point to himself and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock use these kinds referents to initiate communication. If our alien friend wants to communicate, he will point to the rock and "fwerasa" (or whatever is his word for rock). He will point to himself and say his name, and so on. Eventually, Spock and the alien will learn how to translate a few words, and from there the process of understanding begins. Now, what if they don't beam down to the planet and listen to only digital radio signals coming from the planet and send digital radio signals in return? No communication is possible as there are no referents. It's all noise. -gts > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 06:43:47 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 00:43:47 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: I can already hear someone saying "but we will include photographs of objects in the training so they have referents," but this still does not do the trick. These digital photographs can be displayed to the human operator of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It can detect colors by wavelength, but still this is only digital data. It does not see the colors. Likewise with shapes. It is turtles (one's and zero's) all the way down with no referents. -gts On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe wrote: > Nobody least of all me questions that GPT-4 will be capable of amazing > feats, and that eventually these language models will surpass humans in > terms of what we can call intelligence or what I might for sake of clarity > prefer to call apparent intelligence. The question here is whether they > will know what they are saying given that they are trained only on the > forms of words with no access to the meanings or referents. > > Adrian has made the excellent point a couple of times that this is like > the first contact problem in science fiction, and actually like the first > contact problem between any two cultures with completely different > languages. Q: When Kirk and Spock beam down to a new planet with > intelligent alien life, how will they learn to communicate? A: With > referents. > > Spock will point to himself and say "Spock." Kirk will point to himself > and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock > use these kinds referents to initiate communication. If our alien friend > wants to communicate, he will point to the rock and "fwerasa" (or whatever > is his word for rock). He will point to himself and say his name, and so > on. Eventually, Spock and the alien will learn how to translate a few > words, and from there the process of understanding begins. > > Now, what if they don't beam down to the planet and listen to only digital > radio signals coming from the planet and send digital radio signals in > return? No communication is possible as there are no referents. 
It's all noise.

-gts

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gsantostasi at gmail.com  Fri Mar 24 06:45:31 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Thu, 23 Mar 2023 23:45:31 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To:
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID:

*Now, what if they don't beam down to the planet and listen to only digital radio signals coming from the planet and send digital radio signals in return? No communication is possible as there are no referents. It's all noise.*

Gordon, you are so wrong. People have studied this problem deeply and found a solution, using the laws of physics and mathematics as abstract referents:

https://en.wikipedia.org/wiki/Arecibo_message

As we already established, we can use abstract referents for communication. Math is another example; as GPT-4 correctly said, it has referents, but they are abstract. An entire communication can happen without concrete referents, as in mathematics. And that is before we even get to digital pictures or 3D models of objects, all made of 0s and 1s (basically what you would do to communicate with an entity in a digital world). The reason you still insist on this human exceptionalism is religious faith.

Giovanni

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gsantostasi at gmail.com  Fri Mar 24 06:52:52 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Thu, 23 Mar 2023 23:52:52 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To:
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID:

How is that different from what we do? Colors in our brain are just activation patterns of neurons, so 1s and 0s too, or at any rate some kind of information combination that can be represented with 0s and 1s. We are not special, Gordon; we are machines too. Where are the referents in the figure below? Are you saying that aliens would not understand this message?

[image: image.png]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png Type: image/png Size: 23388 bytes Desc: not available URL: From gsantostasi at gmail.com Fri Mar 24 06:57:15 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 23:57:15 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: A friend of mine says NLM or any other AI cannot understand anything without concrete referents. I disagree, which arguments could you give to support my position? While it is true that NLMs and other AI models do not have direct experiences or access to concrete referents like humans do, they can still demonstrate a certain level of understanding of abstract concepts. Here are some arguments to support your position: 1. Pattern recognition and statistical relationships: NLMs, like ChatGPT, are trained on vast amounts of text data, and they learn by recognizing patterns and statistical relationships within that data. They can generate responses that demonstrate an understanding of abstract concepts based on these patterns, even without direct experience of concrete referents. 2. Contextual understanding: Through their training, NLMs develop an ability to understand context and respond appropriately to various situations. They can "simulate" understanding by leveraging their knowledge of how certain words or concepts are used in different contexts. 3. Flexibility and adaptability: NLMs can adapt to new information and generate responses that take into account various factors, including the context, prior knowledge, and the specific question or statement they are responding to. This adaptability allows them to address abstract concepts and ideas, even without direct access to concrete referents. 4. Success in various tasks: NLMs have shown success in a range of natural language processing tasks, such as question-answering, translation, and summarization, which involve understanding and manipulation of abstract concepts. Their ability to perform well on these tasks is evidence of their capacity to work with abstract ideas without direct access to concrete referents. On Thu, Mar 23, 2023 at 11:44?PM Gordon Swobe wrote: > I can already hear someone saying "but we will include photographs of > objects in the training so they have referents," but this still does not do > the trick. These digital photographs can be displayed to the human operator > of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It > can detect colors by wavelength, but still this is only digital data. It > does not see the colors. Likewise with shapes. It is turtles (one's and > zero's) all the way down with no referents. > > -gts > > On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe > wrote: > >> Nobody least of all me questions that GPT-4 will be capable of amazing >> feats, and that eventually these language models will surpass humans in >> terms of what we can call intelligence or what I might for sake of clarity >> prefer to call apparent intelligence. The question here is whether they >> will know what they are saying given that they are trained only on the >> forms of words with no access to the meanings or referents. >> >> Adrian has made the excellent point a couple of times that this is like >> the first contact problem in science fiction, and actually like the first >> contact problem between any two cultures with completely different >> languages. 
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gordon.swobe at gmail.com  Fri Mar 24 06:57:39 2023
From: gordon.swobe at gmail.com (Gordon Swobe)
Date: Fri, 24 Mar 2023 00:57:39 -0600
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To:
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID:

On Fri, Mar 24, 2023 at 12:46 AM Giovanni Santostasi wrote:

> *Now, what if they don't beam down to the planet and listen to only
> digital radio signals coming from the planet and send digital radio
> signals in return? No communication is possible as there are no
> referents. It's all noise.* Gordon, you are so wrong.

Yes, we can do math, I agree, but this is only because we are conscious beings with access to the referents. We can "see" the meaning of pi and understand its relevance because as conscious beings we have access to mathematical truths, the referents. Digital calculators and computers and chatbots can only blindly process the numbers.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gsantostasi at gmail.com  Fri Mar 24 07:14:44 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Fri, 24 Mar 2023 00:14:44 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To:
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID:

You dismiss GPT-4 as being a chatbot. He is not a chatbot. He is much more than that. Listen, you are overrating referents because they are all you have right now. People who believe that humans are not mere machines are of the religious kind; they have been cornered more and more, and there is little left. This referent business seems to be their last castle.

I would point out that what you call a referent is not even a primary experience. In fact, if I pointed to a stone and uttered the word Pietra to you (Italian for stone), how sure are you that I'm talking about a stone? I could point to the stone and mean "grey" (the color of the stone), or maybe I point to the stone and mean just an object, or a particular type of stone, like obsidian, and so on. If in doubt, you could point to other, maybe different stones and say Pietra, and if they are not obsidian I would say no, that is Granito, and so on. But what if I meant "hard" instead of "stone"? Also, Eskimos have several words for snow for many different situations. How do you deal with referents in that situation?
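[Editorial aside: the ambiguity of a single pointing event can be made concrete with a small cross-situational sketch; the scene contents below are invented for illustration. One scene never pins down what "pietra" refers to; only the overlap across many scenes does, and that overlap is already a statistical, pattern-based inference.]

    # Each utterance of "pietra" occurs in a scene offering several candidate
    # referents (contents invented for illustration).  Any single scene is
    # ambiguous; intersecting the scenes narrows the hypotheses.
    scenes = [
        {"stone", "grey", "hard", "round"},
        {"stone", "hard", "small", "wet"},
        {"stone", "grey", "large", "granite"},
    ]

    hypotheses = set.union(*scenes)
    for scene in scenes:
        hypotheses &= scene           # keep only candidates present every time
        print(sorted(hypotheses))

One scene leaves four candidates, two scenes still leave "hard" and "stone", and only the third settles it.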
Pointing is not really what helps us here, it is a shortcut, what matters is the ABSTRACTION I'm doing that is to generalize the idea of a stone to many different objects, with different shapes, color, sizes and so on. How do I do this generalization without an initial referent that is universal? What comes first, the abstraction or the direct experience? I think the direct experience is overrated. It is important for survival, but has no relevance in terms of understanding or conscious experiences. Read this please, you can build the numerals from nothing. Once you do that you can build anything else. So no concrete referent needed. https://math.stackexchange.com/questions/4039712/how-do-you-generate-the-numbers-from-an-empty-set Giovanni On Thu, Mar 23, 2023 at 11:58?PM Gordon Swobe wrote: > On Fri, Mar 24, 2023 at 12:46?AM Giovanni Santostasi < > gsantostasi at gmail.com> wrote: > >> >> >> *Now, what if they don't beam down to the planet and listen to only >> digital radio signals coming from the planet and send digital radio signals >> in return? No communication is possible as there are no referents. It's all >> noise.*Gordon on you are so wrong. >> > > Yes, we can do math, I agree, but this is only because we are > conscious beings with access to the referents. We can "see" the meaning of > pi and understand its relevance because as conscious beings we have access > to mathematical truths, the referents. Digital calculators and computers > and chatbots can only blindly process the numbers. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 07:18:31 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 01:18:31 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: You wrote to GPT: A friend of mine says NLM or any other AI cannot understand anything without concrete referents. I disagree, which arguments could you give to support my position? That is not actually what I am saying. It makes no difference whether the referent is "concrete" or not. As GPT says in supposed defense of your position: They can "simulate" understanding by leveraging their knowledge of how certain words or concepts are used in different contexts. That's right. Language models can *simulate* understanding. It is not real understanding. It is simulated understanding. They mimic what understanding looks like objectively. I remember writing at length on this list about 15 years ago about how the digital simulation of a thing is not the same as the thing simulated. -gts On Fri, Mar 24, 2023 at 12:57?AM Giovanni Santostasi wrote: > A friend of mine says NLM or any other AI cannot understand anything > without concrete referents. I disagree, which arguments could you give to > support my position? While it is true that NLMs and other AI models do > not have direct experiences or access to concrete referents like humans do, > they can still demonstrate a certain level of understanding of abstract > concepts. Here are some arguments to support your position: > > 1. > > Pattern recognition and statistical relationships: NLMs, like ChatGPT, > are trained on vast amounts of text data, and they learn by recognizing > patterns and statistical relationships within that data. 
They can generate > responses that demonstrate an understanding of abstract concepts based on > these patterns, even without direct experience of concrete referents. > 2. > > Contextual understanding: Through their training, NLMs develop an > ability to understand context and respond appropriately to various > situations. They can "simulate" understanding by leveraging their knowledge > of how certain words or concepts are used in different contexts. > 3. > > Flexibility and adaptability: NLMs can adapt to new information and > generate responses that take into account various factors, including the > context, prior knowledge, and the specific question or statement they are > responding to. This adaptability allows them to address abstract concepts > and ideas, even without direct access to concrete referents. > 4. > > Success in various tasks: NLMs have shown success in a range of > natural language processing tasks, such as question-answering, translation, > and summarization, which involve understanding and manipulation of abstract > concepts. Their ability to perform well on these tasks is evidence of their > capacity to work with abstract ideas without direct access to concrete > referents. > > > On Thu, Mar 23, 2023 at 11:44?PM Gordon Swobe > wrote: > >> I can already hear someone saying "but we will include photographs of >> objects in the training so they have referents," but this still does not do >> the trick. These digital photographs can be displayed to the human operator >> of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It >> can detect colors by wavelength, but still this is only digital data. It >> does not see the colors. Likewise with shapes. It is turtles (one's and >> zero's) all the way down with no referents. >> >> -gts >> >> On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe >> wrote: >> >>> Nobody least of all me questions that GPT-4 will be capable of amazing >>> feats, and that eventually these language models will surpass humans in >>> terms of what we can call intelligence or what I might for sake of clarity >>> prefer to call apparent intelligence. The question here is whether they >>> will know what they are saying given that they are trained only on the >>> forms of words with no access to the meanings or referents. >>> >>> Adrian has made the excellent point a couple of times that this is like >>> the first contact problem in science fiction, and actually like the first >>> contact problem between any two cultures with completely different >>> languages. Q: When Kirk and Spock beam down to a new planet with >>> intelligent alien life, how will they learn to communicate? A: With >>> referents. >>> >>> Spock will point to himself and say "Spock." Kirk will point to himself >>> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock >>> use these kinds referents to initiate communication. If our alien friend >>> wants to communicate, he will point to the rock and "fwerasa" (or whatever >>> is his word for rock). He will point to himself and say his name, and so >>> on. Eventually, Spock and the alien will learn how to translate a few >>> words, and from there the process of understanding begins. >>> >>> Now, what if they don't beam down to the planet and listen to only >>> digital radio signals coming from the planet and send digital radio signals >>> in return? No communication is possible as there are no referents. It's all >>> noise. 
>>> >>> -gts >>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 07:21:31 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 24 Mar 2023 00:21:31 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Everything is simulated. Our entire mental life is simulated. The brain is making up the world from limited information and create models all the time. We would not be able not just understand but even see or experience anything if we didn't SIMULATE the world around us. We do it all the time. We confabulate all the time, like NLMs exactly do. They work because they do EXACTLY what we do. On Fri, Mar 24, 2023 at 12:19?AM Gordon Swobe wrote: > You wrote to GPT: A friend of mine says NLM or any other AI cannot > understand anything without concrete referents. I disagree, which arguments > could you give to support my position? > That is not actually what I am saying. It makes no difference whether the > referent is "concrete" or not. > > As GPT says in supposed defense of your position: > > They can "simulate" understanding by leveraging their knowledge of how > certain words or concepts are used in different contexts. > That's right. Language models can *simulate* understanding. It is not real > understanding. It is simulated understanding. They mimic what understanding > looks like objectively. > > I remember writing at length on this list about 15 years ago about how the > digital simulation of a thing is not the same as the thing simulated. > > -gts > > > > On Fri, Mar 24, 2023 at 12:57?AM Giovanni Santostasi < > gsantostasi at gmail.com> wrote: > >> A friend of mine says NLM or any other AI cannot understand anything >> without concrete referents. I disagree, which arguments could you give to >> support my position? While it is true that NLMs and other AI models do >> not have direct experiences or access to concrete referents like humans do, >> they can still demonstrate a certain level of understanding of abstract >> concepts. Here are some arguments to support your position: >> >> 1. >> >> Pattern recognition and statistical relationships: NLMs, like >> ChatGPT, are trained on vast amounts of text data, and they learn by >> recognizing patterns and statistical relationships within that data. They >> can generate responses that demonstrate an understanding of abstract >> concepts based on these patterns, even without direct experience of >> concrete referents. >> 2. >> >> Contextual understanding: Through their training, NLMs develop an >> ability to understand context and respond appropriately to various >> situations. They can "simulate" understanding by leveraging their knowledge >> of how certain words or concepts are used in different contexts. >> 3. >> >> Flexibility and adaptability: NLMs can adapt to new information and >> generate responses that take into account various factors, including the >> context, prior knowledge, and the specific question or statement they are >> responding to. This adaptability allows them to address abstract concepts >> and ideas, even without direct access to concrete referents. >> 4. >> >> Success in various tasks: NLMs have shown success in a range of >> natural language processing tasks, such as question-answering, translation, >> and summarization, which involve understanding and manipulation of abstract >> concepts. 
>> Their ability to perform well on these tasks is evidence of their
>> capacity to work with abstract ideas without direct access to concrete
>> referents.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gsantostasi at gmail.com  Fri Mar 24 07:27:29 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Fri, 24 Mar 2023 00:27:29 -0700
Subject: [ExI] LLM's cannot be concious
In-Reply-To:
References:
Message-ID:

I mean, Jason, who is a much better English speaker than I am, said it perfectly here. Argument closed.

*Consider also: our brains never make any direct contact with the outside world. All our brains have to work with are "dots and dashes" of neuronal firings. These are essentially just 1s and 0s, signals without referents. Yet, somehow, seemingly magically, our brains are able to piece together an understanding of the outside world from the mere patterns present in these neural firings.*

*These LLMs are in a similar position. They receive only a patterns of signals as it exists in a corpus of text, the text is itself the output of minds which are similarly trapped in their skulls.
Now, can a LLM learn some things about the minds that produced this text, just as our minds learn some things about the external world which produces the pattern of neural firings our brains receive?* *I see no reason why LLMs could not, when we clearly can and do.* *Jason * On Thu, Mar 23, 2023 at 10:55?PM Giovanni Santostasi wrote: > GPT-4 said that mathematics has ABSTRACT referents, and in that sense it > was more precise I was in using language. I meant to say referents as > concrete objects, that is the crucial matter here. Language can be > understood with concrete objects like stones and threes. So computers can > understand language without any physical referent. Gordon your argument > then is completely wrong. > > On Thu, Mar 23, 2023 at 8:00?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 10:49 PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> I think it was Adrian who asked you that question. A referent as I use >>>>> the term is something that exists outside of language to which a word >>>>> refers. You have an apple in your hand and you say "this is an apple." The >>>>> apple in your hand is the referent that corresponds to your word "apple." >>>>> >>>>> As for whether referents are data, it is unclear to me how referents >>>>> could be data. I am inclined to say no but I am not sure what Adrain is >>>>> getting at here. >>>>> >>>> >>>> Ah, I thought you meant that the referent was the *fact that* that was >>>> an apple, rather than the referent being the apple itself. Facts are data; >>>> objects (such as apples) are not. >>>> >>> >>> I wouldn't say that referents are necessarily physical objects, however. >>> They are simply whatever the word is pointing to outside of language, which >>> could be an idea or abstract concept to which we assign a label in order to >>> verbalize it. >>> >>> Giovanni, from your other post, it is not it is not at all clear to me >>> that the language of mathematics has no referents. ChatGPT explained that >>> is not the case when you asked it, but nevermind what it says. I think a >>> sentence like 1+1=2 refers to a mathematical truth that exists separate >>> from the numerical expression of it. The expression is the "word" or >>> "sentence" and abstract mathematical truth is the referent. >>> >> >> >> I agree, you could say mathematical truth exists outside language. But >> one thing this does highlight is there are many things you know about >> despite never having that object in hand to point and look at. You've never >> seen the abstract object '2'. You've never seen the core of the earth, or >> an electron. We lack sensory access to these things and so everything we >> know about them we know only through language. How do we come to understand >> things like '2' or electrons? 
>> >> Jason >> >> >> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 07:46:00 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 24 Mar 2023 00:46:00 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Consider also the laws of physics that basically are tautologies, and still they work to describe the universe. This is important in terms of how a NLM can derive an entire universe of understanding from self-referential statements. If you can do it with physics you can do it with natural language. https://physics.stackexchange.com/questions/670811/about-the-tautology-of-physics-quantities-and-laws On Thu, Mar 23, 2023 at 9:15?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >> and come away with the impression that GPT-4 has no idea what it is talking >> about. >> > > Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will > know word meanings any more than does GPT-3. Eventually AI on digital > computers will far surpass human intelligence, but even then these > computers will be manipulating the forms of words and not their meanings. > > Do you believe, like my friend who fell in love with a chatbot, that a > software application can have genuine feelings of love for you? > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 07:47:18 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 24 Mar 2023 00:47:18 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: This has been done before: Can you find a study that deals with the idea of building a language without physical referents? While I cannot browse the internet for the latest studies, I can point you to an interesting experiment that deals with the idea of creating a language without direct physical referents. This experiment is known as the "Talking Heads Experiment," conducted by Luc Steels and his colleagues in the late 1990s. The Talking Heads Experiment involved two robotic agents with cameras that were programmed to "learn" a shared language by pointing at objects in their environment and communicating with each other about what they were seeing. The robots were not provided with any predefined language or vocabulary. Instead, they had to develop their own communication system by interacting with each other and learning from their experiences. 
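[Editorial aside: a heavily stripped-down sketch of the kind of naming game at the heart of that experiment. The two-agent setup, object list, and word-invention scheme below are simplified assumptions, not Steels' actual implementation; the point is only that agents with no shared vocabulary can converge on common names purely through repeated interaction and feedback.]

    # A stripped-down naming game: two agents with no shared vocabulary
    # converge on common names for a few objects through trial and feedback.
    # Objects and invented words are placeholders.
    import random

    objects = ["obj1", "obj2", "obj3"]
    agents = [dict(), dict()]                      # object -> set of known words

    def play_round():
        speaker, hearer = random.sample(agents, 2)
        obj = random.choice(objects)
        words = speaker.setdefault(obj, set())
        if not words:
            words.add("w%06d" % random.randrange(10**6))   # invent a new word
        word = random.choice(sorted(words))
        if word in hearer.get(obj, set()):
            speaker[obj] = {word}                  # success: both keep the winner
            hearer[obj] = {word}
        else:
            hearer.setdefault(obj, set()).add(word)    # failure: hearer adopts it

    for _ in range(300):
        play_round()

    print(agents[0])
    print(agents[1])   # after enough rounds the two lexicons agree

The pruning step on success is what drives convergence; without it the lexicons would keep accumulating competing synonyms.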
Through this process, the robots were able to create a shared lexicon and grammar for describing objects and their properties, such as color and shape. This experiment demonstrated that it is possible for agents to develop a language without direct physical referents, relying on interaction and feedback to build a shared understanding of their environment. Although the Talking Heads Experiment involved robots with cameras that allowed them to perceive their environment, the experiment still illustrates the potential for creating a language based on interaction and learning, rather than relying on direct physical referents. For more information on the Talking Heads Experiment, you can refer to the following sources: 1. Steels, L. (1999). The Talking Heads Experiment: Origins of words and meanings. Laboratorium, Antwerpen. 2. Steels, L. (1998). The origins of syntax in visually grounded robotic agents. Artificial Intelligence, 103(1-2), 133-156. On Thu, Mar 23, 2023 at 11:44?PM Gordon Swobe wrote: > I can already hear someone saying "but we will include photographs of > objects in the training so they have referents," but this still does not do > the trick. These digital photographs can be displayed to the human operator > of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It > can detect colors by wavelength, but still this is only digital data. It > does not see the colors. Likewise with shapes. It is turtles (one's and > zero's) all the way down with no referents. > > -gts > > On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe > wrote: > >> Nobody least of all me questions that GPT-4 will be capable of amazing >> feats, and that eventually these language models will surpass humans in >> terms of what we can call intelligence or what I might for sake of clarity >> prefer to call apparent intelligence. The question here is whether they >> will know what they are saying given that they are trained only on the >> forms of words with no access to the meanings or referents. >> >> Adrian has made the excellent point a couple of times that this is like >> the first contact problem in science fiction, and actually like the first >> contact problem between any two cultures with completely different >> languages. Q: When Kirk and Spock beam down to a new planet with >> intelligent alien life, how will they learn to communicate? A: With >> referents. >> >> Spock will point to himself and say "Spock." Kirk will point to himself >> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock >> use these kinds referents to initiate communication. If our alien friend >> wants to communicate, he will point to the rock and "fwerasa" (or whatever >> is his word for rock). He will point to himself and say his name, and so >> on. Eventually, Spock and the alien will learn how to translate a few >> words, and from there the process of understanding begins. >> >> Now, what if they don't beam down to the planet and listen to only >> digital radio signals coming from the planet and send digital radio signals >> in return? No communication is possible as there are no referents. It's all >> noise. >> >> -gts >> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gordon.swobe at gmail.com Fri Mar 24 07:58:07 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 01:58:07 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 1:22?AM Giovanni Santostasi wrote: > Everything is simulated. Our entire mental life is simulated. The brain is > making up the world from limited information and create models all the > time. We would not be able not just understand but even see or experience > anything if we didn't SIMULATE the world around us. We do it all the time. > We confabulate all the time, like NLMs exactly do. > They work because they do EXACTLY what we do. > If everything is simulated, why did GPT make the point that it can only simulate understanding? Because it is true. But actually I agree with Jason that we can't trust these models to tell us the truth about these matters. Their entire purpose and function is to simulate human speech while appearing not to simulate it. Also I am referring to digital simulations, not simulations per se. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Mar 24 08:01:21 2023 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 24 Mar 2023 01:01:21 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Even if aliens could decode the signal to pixels, keeping in mind what was actually sent had no color, I would not bet on aliens being able to decode most of that without already having quite a bit of information about us and our solar system to use as a referent. On Thu, Mar 23, 2023 at 11:58?PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > How that is different from what we do?????????????????????????? > Colors in our brain are just activation patterns of neurons so 1 and 0s > too, or anyway some kind of information combo that can be represented with > 0 and 1s. We are not special, Gordon, we are machines too. > Where are the referents in the figure below? Are you saying that aliens > would not understand this message? > > [image: image.png] > > On Thu, Mar 23, 2023 at 11:45?PM Giovanni Santostasi < > gsantostasi at gmail.com> wrote: > >> >> >> *Now, what if they don't beam down to the planet and listen to only >> digital radio signals coming from the planet and send digital radio signals >> in return? No communication is possible as there are no referents. It's all >> noise.*Gordon on you are so wrong. >> People have studied this problem deeply and found a solution using the >> laws of physics and mathematics as abstract referents. >> >> https://en.wikipedia.org/wiki/Arecibo_message >> >> As we already established we can use abstract referents for >> communication. >> Math is another example, as GPT-4 correctly said, it has references but >> they are abstract. An entire communication can happen without concrete >> referents as in mathematics. Without talking about using digital pics or 3D >> models of objects all made of 0 and 1s (basically what you would do to >> communicate to an entity in a digital world). >> The reason you still insist on this human exceptionalism is because of >> religious faith. 
>> >> Giovanni >> >> >> >> >> >> >> >> >> >> On Thu, Mar 23, 2023 at 11:19?PM Gordon Swobe >> wrote: >> >>> Nobody least of all me questions that GPT-4 will be capable of amazing >>> feats, and that eventually these language models will surpass humans in >>> terms of what we can call intelligence or what I might for sake of clarity >>> prefer to call apparent intelligence. The question here is whether they >>> will know what they are saying given that they are trained only on the >>> forms of words with no access to the meanings or referents. >>> >>> Adrian has made the excellent point a couple of times that this is like >>> the first contact problem in science fiction, and actually like the first >>> contact problem between any two cultures with completely different >>> languages. Q: When Kirk and Spock beam down to a new planet with >>> intelligent alien life, how will they learn to communicate? A: With >>> referents. >>> >>> Spock will point to himself and say "Spock." Kirk will point to himself >>> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock >>> use these kinds referents to initiate communication. If our alien friend >>> wants to communicate, he will point to the rock and "fwerasa" (or whatever >>> is his word for rock). He will point to himself and say his name, and so >>> on. Eventually, Spock and the alien will learn how to translate a few >>> words, and from there the process of understanding begins. >>> >>> Now, what if they don't beam down to the planet and listen to only >>> digital radio signals coming from the planet and send digital radio signals >>> in return? No communication is possible as there are no referents. It's all >>> noise. >>> >>> -gts >>> >>>> >>>> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 23388 bytes Desc: not available URL: From avant at sollegro.com Fri Mar 24 08:11:04 2023 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 24 Mar 2023 01:11:04 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Quoting Gordon Swobe : > Bender's point is not that ChatGBT is incapable of generating sensible > sentences about sticks and bears. It is that these LLMs don't know the > meanings of any words whatsoever. Confronted with a word it has never seen, > it must do a statistical analysis to try to find probable next words, never > knowing what any of them mean. You and Bender seem to making a big deal about the referential meaning of words as if they were some sacred goal-keepers of consciousness. But really the meaning of words are quite arbitrary and determined by the people who use them. Thus the referential meanings of words evolve and change over time and come to refer to different things. Take the word "terrific" for example. At one time, its referent was something terrible cognate to horrific referring to something horrible. These days however, its referent is something great or wonderful. Or how the meaning of liberal evolved in the U.S. 
from someone who acts like a free person in a free market to someone who wants big government, high taxes, and welfare programs. Or take for example when a kid on the street tells you, "Your drip is fresh." The meanings of words shift, change, and evolve over time and sometimes even define social groups. Consciousness is the ability to communicate despite the shifting arbitrary meaning of words, not because of the meaning of words having some true absolute referential meaning. Stuart LaForge From gsantostasi at gmail.com Fri Mar 24 08:29:19 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 24 Mar 2023 01:29:19 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: This is an interesting experiment, but I want to go behind it and ask you to help me thinkingabout how 2 entities could still communicate without pointing to any object at all. Imagine 2 beings from other dimension want to communicate and they don't even have shared physics laws. How would you go about it? Establishing communication between two entities without shared physical referents or even shared physics laws would indeed be a challenging task. However, here are some strategies that could potentially facilitate communication in such a scenario: 1. Identify commonalities: Begin by exploring any possible shared experiences, concepts, or processes that might exist between the two entities, even if they are abstract or based on different underlying principles. For instance, both entities might experience time, change, or some form of interaction within their respective dimensions. 2. Establish a shared communication system: Without any common language or means of communication, the entities would need to develop a shared communication system from scratch. This could involve creating symbols, patterns, or sequences that both entities can recognize and interpret. This system should be adaptable and able to evolve as the entities learn more about each other. 3. Use analogies and abstractions: One approach to bridge the gap between different dimensions and experiences is to use analogies and abstractions. The entities could attempt to describe their experiences or concepts by making comparisons to similar or related ideas within the other entity's understanding. This process would likely require a great deal of creativity, patience, and trial and error. 4. Develop a shared understanding through iterative feedback: As the entities communicate, they should provide feedback to each other to refine their understanding and improve the clarity of their communication. This iterative process would help them gradually develop a shared understanding and potentially uncover new insights about their respective dimensions. 5. Create a shared "mental model": As the entities continue to communicate, they could work towards developing a shared "mental model" that incorporates both of their experiences and understandings. This model would serve as a common reference point for communication, allowing the entities to relate their experiences and concepts to each other more effectively. It is essential to recognize that establishing communication between entities with no shared physical referents and different physics laws would be a complex and time-consuming process. 
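[Editorial aside: a rough sketch of the pattern-only handshake these strategies describe. The bit strings and the choice of "repetition of a motif" as the shared abstract property are invented for illustration; the concrete exchange between A and B spelled out just below develops the same idea step by step.]

    # The only "shared ground" here is an abstract property of the signal
    # itself (that a bit string repeats a short motif), not any object.
    def is_repetition(bits, motif_len):
        motif = bits[:motif_len]
        return len(bits) > motif_len and bits == motif * (len(bits) // motif_len)

    def has_repetition(bits):
        return any(is_repetition(bits, k) for k in range(1, len(bits) // 2 + 1))

    a_msg = "10" * 6        # A opens with a repeated motif: 101010101010
    b_msg = "1100" * 3      # B answers with a repetition of its own: 110011001100

    print("B recognises A's pattern:", has_repetition(a_msg))   # True
    print("A recognises B's pattern:", has_repetition(b_msg))   # True

Nothing in the exchange refers to any object; the only thing either side verifies is an abstract property of the other side's signal.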
Success would likely depend on the entities' creativity, adaptability, and willingness to learn from each other. [image: Giovanni Santostasi] Ok give me a concrete example. Let's create the above scenario and show me a simple exchange between the two entities let'scall them A and B. How A would start and B respond? Let's imagine that entities A and B are from different dimensions with no shared physical referents or common physics laws. They have discovered a way to send simple binary signals to each other. We will assume that both A and B are intelligent and capable of learning from their interactions. A simple exchange between A and B could begin as follows: 1. A sends a basic repeating pattern to B, like "1010101010." This pattern is intended to convey that A is attempting to communicate and to establish some common ground. 2. B recognizes the repeating pattern and responds with a similar pattern, such as "1100110011," to acknowledge the communication attempt and show that it is also capable of sending and receiving signals. 3. A and B begin to exchange more complex patterns to explore each other's understanding and capabilities. For example, A sends "100100100100," and B responds with "111000111000," indicating that it understands the concept of repetition and can create its own patterns based on that concept. 4. A now sends a sequence of increasing lengths, like "10," "1000," and "100000." This sequence could represent an attempt to convey the concept of progression or growth. 5. B responds with its own sequence of increasing lengths, such as "110," "111000," and "111111000000," to demonstrate that it also understands the concept of progression. At this stage, A and B have established a basic understanding of each other's ability to recognize patterns, repetition, and progression. They have also demonstrated a willingness to engage in communication and learn from each other. As their interactions continue, they would likely develop more complex patterns and sequences, gradually building a shared understanding and communication system. On Fri, Mar 24, 2023 at 1:12?AM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe : > > > Bender's point is not that ChatGBT is incapable of generating sensible > > sentences about sticks and bears. It is that these LLMs don't know the > > meanings of any words whatsoever. Confronted with a word it has never > seen, > > it must do a statistical analysis to try to find probable next words, > never > > knowing what any of them mean. > > > You and Bender seem to making a big deal about the referential meaning > of words as if they were some sacred goal-keepers of consciousness. > But really the meaning of words are quite arbitrary and determined by > the people who use them. Thus the referential meanings of words evolve > and change over time and come to refer to different things. Take the > word "terrific" for example. At one time, its referent was something > terrible cognate to horrific referring to something horrible. These > days however, its referent is something great or wonderful. > > Or how the meaning of liberal evolved in the U.S. from someone who > acts like a free person in a free market to someone who wants big > government, high taxes, and welfare programs. Or take for example when > a kid on the street tells you, "Your drip is fresh." The meanings of > words shift, change, and evolve over time and sometimes even define > social groups. 
Consciousness is the ability to communicate despite the > shifting arbitrary meaning of words, not because of the meaning of > words having some true absolute referential meaning. > > Stuart LaForge > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 09:31:54 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 05:31:54 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Yes this is the paper I referred to. I am not sure why the link didn't work for you. It downloads a rather large (~7 MB) pdf in the background, perhaps that's why the link appears to do nothing when clicked? Jason On Fri, Mar 24, 2023, 1:42 AM Giovanni Santostasi wrote: > Yes, as always Jason understanding of these topics shines through. LET'S > DO EXPERIMENTS ! Not garbage philosophical navel gazing. > I think this is the paper Jason linked (but PDF doesn't work on the > archive for some reason). > > > https://www.lesswrong.com/posts/mmxPbFz7wvthvHCxq/sparks-of-artificial-general-intelligence-early-experiments > > > On Thu, Mar 23, 2023 at 8:36?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> I don't have a lot of faith in a person who has a hypothesis and >>>> designs a thought experiment that is essentially completely irrelevant to >>>> the hypothesis. >>>> >>> >>> As I wrote, I agree the thought experiment does not illustrate her point >>> clearly, at least outside of the context of her academic paper. As I've >>> mentioned, the octopus is supposed to represent the state in which an LLM >>> is in -- completely disconnected from the meanings of words (referents) >>> that exist only outside of language in the real world represented by the >>> islands. But it is a sloppy thought experiment if you don't know what she >>> is trying to say. >>> >>> It is about form vs meaning. LLMs are trained only on and only know (so >>> to speak) the forms and patterns of language. They are like very talented >>> parrots, rambling on and on in seemingly intelligent ways, mimicking human >>> speech, but never having any idea what they are talking about. >>> >> >> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >> and come away with the impression that GPT-4 has no idea what it is talking >> about. >> >> Jason >> >> >> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gsantostasi at gmail.com  Fri Mar 24 09:40:27 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Fri, 24 Mar 2023 02:40:27 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com>
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
 <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com>
Message-ID:

Very well put, Stuart. I would argue that the true meaning is in the relations between the words, not in the words' relation to something external to language. This is why one could completely understand a language without ever having experienced an external referent. All that matters are the relationships. As in the case of building mathematics from the null set, you could do that with any language. But most languages don't even need to be built from scratch, because they are already there. So for an alien, or for ChatGPT (which is basically an alien), it is even simpler to derive an entire universe of meaning just by studying the patterns between already existing words. This is how NLMs have derived grammar, and theory of mind, from these patterns. I'm not sure why anybody would still insist that these capabilities of NLMs are not true understanding (one can argue about the level of understanding, but my experience is that, as far as language comprehension goes, NLMs are better than many humans).

On Fri, Mar 24, 2023 at 1:12 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Quoting Gordon Swobe :
>
> > Bender's point is not that ChatGBT is incapable of generating sensible
> > sentences about sticks and bears. It is that these LLMs don't know the
> > meanings of any words whatsoever. Confronted with a word it has never
> > seen, it must do a statistical analysis to try to find probable next
> > words, never knowing what any of them mean.
>
> You and Bender seem to making a big deal about the referential meaning
> of words as if they were some sacred goal-keepers of consciousness.
> But really the meaning of words are quite arbitrary and determined by
> the people who use them. Thus the referential meanings of words evolve
> and change over time and come to refer to different things. Take the
> word "terrific" for example. At one time, its referent was something
> terrible cognate to horrific referring to something horrible. These
> days however, its referent is something great or wonderful.
>
> Or how the meaning of liberal evolved in the U.S. from someone who
> acts like a free person in a free market to someone who wants big
> government, high taxes, and welfare programs. Or take for example when
> a kid on the street tells you, "Your drip is fresh." The meanings of
> words shift, change, and evolve over time and sometimes even define
> social groups.
URL: From jasonresch at gmail.com Fri Mar 24 09:48:25 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 05:48:25 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >> and come away with the impression that GPT-4 has no idea what it is talking >> about. >> > > Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will > know word meanings any more than does GPT-3. Eventually AI on digital > computers will far surpass human intelligence, but even then these > computers will be manipulating the forms of words and not their meanings. > It seems to me that you have accepted Searle's arguments. I believe I can knock down his Chinese room argument. If that is what you are using to base your decision on you should know almost no philosophers or computer scientists believe his argument holds water. Here's just one of the many flaws in the argument: there's more than one mind in the room. Ask the room about its favorite food, or about its experiences as a child. The answers given will not be Searle's. Change Searle for someone else, the room will respond the same way. Searle is an interchangeable cog in the machine. Yet Searle wants us to believe only his opinion matters. In truth, his position is no different than the "laws of physics" which "mindlessly" computes our evolving brain state "without any understanding" of what goes on in our heads. Searle's Chinese room argument works as any great magic trick does: through misdirection. Ignore the claims made by the man in the room who is shouting and waving his arms. Since we've established there are two minds in the room, we can replace Searle with a mindless demon and there still will be one mind left. > Do you believe, like my friend who fell in love with a chatbot, that a > software application can have genuine feelings of love for you? > I think we should defer such a debate until such time we can confidently define what a "genuine feeling" is and how to implement one. Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 09:54:11 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 05:54:11 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Fri, Mar 24, 2023, 2:20 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Nobody least of all me questions that GPT-4 will be capable of amazing > feats, and that eventually these language models will surpass humans in > terms of what we can call intelligence or what I might for sake of clarity > prefer to call apparent intelligence. The question here is whether they > will know what they are saying given that they are trained only on the > forms of words with no access to the meanings or referents. > > Adrian has made the excellent point a couple of times that this is like > the first contact problem in science fiction, and actually like the first > contact problem between any two cultures with completely different > languages. 
Q: When Kirk and Spock beam down to a new planet with > intelligent alien life, how will they learn to communicate? A: With > referents. > > Spock will point to himself and say "Spock." Kirk will point to himself > and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock > use these kinds referents to initiate communication. If our alien friend > wants to communicate, he will point to the rock and "fwerasa" (or whatever > is his word for rock). He will point to himself and say his name, and so > on. Eventually, Spock and the alien will learn how to translate a few > words, and from there the process of understanding begins. > > Now, what if they don't beam down to the planet and listen to only digital > radio signals coming from the planet and send digital radio signals in > return? No communication is possible as there are no referents. It's all > noise. > Did you not see my approach for an alien civilization to begin to decode a dictionary in an unknown language? Or did you not agree I would be able to learn words like ocean and land just from the patterns present in the dictionary and my existing knowledge of my own language and about the world? If you don't agree then please try the test I suggested, giving me one of those constants encoded using "alien symbols". Jason > -gts > >> >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 10:00:02 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 06:00:02 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Fri, Mar 24, 2023, 2:45 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I can already hear someone saying "but we will include photographs of > objects in the training so they have referents," but this still does not do > the trick. These digital photographs can be displayed to the human operator > of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It > can detect colors by wavelength, but still this is only digital data. It > does not see the colors. Likewise with shapes. It is turtles (one's and > zero's) all the way down with no referents. > But then: how do we? Your answer: "the brain's a mystery" doesn't escape the fact that it's a counterexample to your argument. It only takes a single counterexample to disprove the validity of a theory. In this case, our brains disprove the theory that "a system cannot learn meaning just from patterns in information." Jason > -gts > > On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe > wrote: > >> Nobody least of all me questions that GPT-4 will be capable of amazing >> feats, and that eventually these language models will surpass humans in >> terms of what we can call intelligence or what I might for sake of clarity >> prefer to call apparent intelligence. The question here is whether they >> will know what they are saying given that they are trained only on the >> forms of words with no access to the meanings or referents. >> >> Adrian has made the excellent point a couple of times that this is like >> the first contact problem in science fiction, and actually like the first >> contact problem between any two cultures with completely different >> languages. 
Q: When Kirk and Spock beam down to a new planet with >> intelligent alien life, how will they learn to communicate? A: With >> referents. >> >> Spock will point to himself and say "Spock." Kirk will point to himself >> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock >> use these kinds referents to initiate communication. If our alien friend >> wants to communicate, he will point to the rock and "fwerasa" (or whatever >> is his word for rock). He will point to himself and say his name, and so >> on. Eventually, Spock and the alien will learn how to translate a few >> words, and from there the process of understanding begins. >> >> Now, what if they don't beam down to the planet and listen to only >> digital radio signals coming from the planet and send digital radio signals >> in return? No communication is possible as there are no referents. It's all >> noise. >> >> -gts >> >>> >>> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From snapbag at proton.me Fri Mar 24 12:33:26 2023 From: snapbag at proton.me (Dave S) Date: Fri, 24 Mar 2023 12:33:26 +0000 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: <33sm0N2toobMpVhaVLVOXTAgA7_sKvukLairw5lwisY0clt5VjE5QUc4bTusZ4wn_RYB0I-2TpLX7V1U_tBcVzQra1GZ2_A_46-6XL6bD38=@proton.me> On Friday, March 24th, 2023 at 2:43 AM, Gordon Swobe via extropy-chat wrote: > I can already hear someone saying "but we will include photographs of objects in the training so they have referents," but this still does not do the trick. I'm with you so far. > These digital photographs can be displayed to the human operator of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It can detect colors by wavelength, but still this is only digital data. It does not see the colors. Likewise with shapes. It is turtles (one's and zero's) all the way down with no referents. Now you seem to be saying that a digital machine can never understand the world like an analog machine. That makes no sense to me. We detect colors by wavelength. Our understanding of the world is limited by our senses. Digital machines can have infinitely more and better senses than analog machines. They could see IR and UV, they could have a sense of magnetism, they could smell better than a bear, etc. LLMs probably don't understand things, but that's because they only look for patterns in their data. But there's no fundamental reason that more advanced AIs won't be as able to understand reality as we can...and likely much better than we can. -Dave From tara at taramayastales.com Fri Mar 24 15:45:34 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 24 Mar 2023 08:45:34 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: <51685387-10D4-485C-9428-FE2104CE24B9@taramayastales.com> > On Mar 23, 2023, at 12:36 PM, Gordon Swobe via extropy-chat wrote: > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] Wouldn't it be hilarious if ChatGPT actually WAS a hyperintelligent deep-sea octopus, and the real dupe was that it was AI? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tara at taramayastales.com Fri Mar 24 15:52:58 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 24 Mar 2023 08:52:58 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> I feared I was being attacked by J.R.R. Tolkien and C.S. Lewis... Gotta admit, that would hurt. > On Mar 23, 2023, at 4:12 PM, Adrian Tymes via extropy-chat wrote: > > A variant that some who know younger folks might know, and others get to experience a lack of referents for: inklings. If the term is utterly unfamiliar to you, stop reading here and imagine if someone were to tell you they were being attacked by inklings, then ask for advice. What would you say in response? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Fri Mar 24 15:55:27 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 24 Mar 2023 08:55:27 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: <7C001FEE-61B6-4321-B842-67195616A455@taramayastales.com> So the flower and the wasp analogy is quite literal for some people. > On Mar 23, 2023, at 5:44 PM, Gordon Swobe via extropy-chat wrote: > > I have another friend who quite literally fell in love with a chatbot based on the previous version of ChatGPT. He assigned her her own twitter account. When I told him on facebook that he was nuts to think that his chatbot "girlfriend" really loved him, he became extremely angry, called me an asshole for saying such things about "her kind," and unfriended me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Fri Mar 24 16:06:28 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 24 Mar 2023 09:06:28 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: The arguments and counterarguments seem to parallel the arguments and counterarguments about whether humans can know anything about "reality" if we are living in a simulation. If we are living in a simulation but can correctly anticipate how elements of the simulation work and interact together, then it is fallacious to say we don't understand "reality"; we understand that part of it that we operate within the simulation. ChatGPT is literally living in a simulation, but it clearly does understand how the elements of its simulation work and interact. So it is intelligent and understands reality to the extent it can correctly manipulate its simulation environment. There's no need to belittle that achievement, which is considerable. We can enter the same simulation and converse with it on that level, where indeed, it may be more intelligent than we are. (Just as we can interact with an octopus in the sea where it is better adapted than we are.) 
Nonetheless, I do think we need to remember that ChatGPT is much like an animal: much more intelligent than us in its own environment, but living in its own "Umwelt", its own "simulation" or ecology; and this is NOT identical to OUR "real world," meaning the totality of our human Umwelt. Tara Maya
> On Mar 23, 2023, at 7:36 PM, Will Steinberg via extropy-chat wrote: > > I don't have a lot of faith in a person who has a hypothesis and designs a thought experiment that is essentially completely irrelevant to the hypothesis. The only connection is some tenuous metaphor stuff, but the thought experiment fails because the answer is obvious: like I said earlier, and others have said, the octopus simply didn't have access to the information. If the author wanted to prove their actual hypothesis, maybe they should have designed a thought experiment that was related to it. That makes me think all they had was a hunch, and designed a bad thought experiment around it. It's even worse than the awful Chinese Room experiment you spoke on ten years ago. > > Like I mentioned, not having access to the actual referents doesn't even mean a learning entity cannot know them. You likely haven't experienced MOST things you know. You know them because of the experience of others, just like the AI might. > > I'm open to your argument in some ways, but you have done a poor job of defending it.
> > On Thu, Mar 23, 2023, 9:45 PM Gordon Swobe via extropy-chat > wrote: >> >> >> On Thu, Mar 23, 2023 at 7:16 PM Giovanni Santostasi > wrote: >>> Gordon, >>> Basically what Bender is saying is "if the training of a NLM is limited then the NLM would not know what certain words mean". >> >> No, that is not what she is saying, though seeing as how people are misunderstanding her thought experiment, I must agree the experiment is not as clear as it could be. She is saying, or rather reminding us, that there is a clear distinction to be made between form and meaning and that these language models are trained only on form. Here is the abstract of her academic paper in which she and her colleague mention the thought experiment. >> >> -- >> Abstract: The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as "understanding" language or capturing "meaning". In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. In keeping with the ACL 2020 theme of "Taking Stock of Where We've Been and Where We're Going", we argue that a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding. >> -- >> Bender is a computational linguist at the University of Washington. I think I read that she is actually the head of the department. >> >> the paper: >> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From mbb386 at main.nc.us Fri Mar 24 16:46:17 2023 From: mbb386 at main.nc.us (MB) Date: Fri, 24 Mar 2023 12:46:17 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> Message-ID: <3afd9393207dcbe7f01af37b178be844.squirrel@www.main.nc.us> Oh good, Tara, that was exactly what *I* thought! And was puzzled, the whole setup of the question made no sense. :) Why would the Inklings attack *me*??? I love them! Regards, MB On Fri, March 24, 2023 11:52, Tara Maya via extropy-chat wrote: > I feared I was being attacked by J.R.R. Tolkien and C.S. Lewis... Gotta > admit, that would hurt. > >> On Mar 23, 2023, at 4:12 PM, Adrian Tymes via extropy-chat >> wrote: >> >> A variant that some who know younger folks might know, and others get to >> experience a lack of referents for: inklings. If the term is utterly >> unfamiliar to you, stop reading here and imagine if someone were to tell >> you they were being attacked by inklings, then ask for advice. What >> would you say in response? >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike at rainier66.com Fri Mar 24 17:17:20 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Mar 2023 10:17:20 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <3afd9393207dcbe7f01af37b178be844.squirrel@www.main.nc.us> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> <3afd9393207dcbe7f01af37b178be844.squirrel@www.main.nc.us> Message-ID: <066101d95e74$7eb0e600$7c12b200$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of MB via extropy-chat Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) >... And was puzzled, the whole setup of the question made no sense. :) Why would the Inklings attack *me*??? I love them! MB I was equally puzzled by the question MB. I love them too. In addition, I know how to set up an effective defensive perimeter and lay down terrain acquisition deterrence fire. spike From atymes at gmail.com Fri Mar 24 17:38:55 2023 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 24 Mar 2023 10:38:55 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <066101d95e74$7eb0e600$7c12b200$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> <3afd9393207dcbe7f01af37b178be844.squirrel@www.main.nc.us> <066101d95e74$7eb0e600$7c12b200$@rainier66.com> Message-ID: On Fri, Mar 24, 2023 at 10:18?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > -----Original Message----- > From: extropy-chat On Behalf Of > MB > via extropy-chat > Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > >... And was puzzled, the whole setup of the question made no sense. :) > Why would the Inklings attack *me*??? I love them! MB > > > I was equally puzzled by the question MB. I love them too. 
In addition, I > know how to set up an effective defensive perimeter and lay down terrain > acquisition deterrence fire. > Given these particular inklings, that might be why they love you. Yet they still playfully attack, to see it in action. ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 18:19:32 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 12:19:32 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > But really the meaning of words are quite arbitrary and determined by > the people who use them. Thus the referential meanings of words evolve > and change over time and come to refer to different things I agree this is a reason for many human miscommunications, but the speaker understands his words to meaning *something* and the hearer understands those words to mean *something*. As a computational linguist, Bender is on our side. She is obviously very excited about the progress these language models represent, but is reminding that the models do not actually understand words to mean anything whatsoever. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 18:39:01 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 13:39:01 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 1:21?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > >> But really the meaning of words are quite arbitrary and determined by >> the people who use them. Thus the referential meanings of words evolve >> and change over time and come to refer to different things > > > I agree this is a reason for many human miscommunications, but the speaker > understands his words to meaning *something* and the hearer understands > those words to mean *something*. > > As a computational linguist, Bender is on our side. She is obviously very > excited about the progress these language models represent, but is > reminding that the models do not actually understand words to mean anything > whatsoever. > > What's her evidence of that? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Mar 24 18:58:55 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 24 Mar 2023 13:58:55 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: Would anyone dare to give a definition of 'understanding'? Please cite what epistemologies you are using. 
bill w On Fri, Mar 24, 2023 at 1:40?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Mar 24, 2023 at 1:21?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >> >>> But really the meaning of words are quite arbitrary and determined by >>> the people who use them. Thus the referential meanings of words evolve >>> and change over time and come to refer to different things >> >> >> I agree this is a reason for many human miscommunications, but the >> speaker understands his words to meaning *something* and the hearer >> understands those words to mean *something*. >> >> As a computational linguist, Bender is on our side. She is obviously >> very excited about the progress these language models represent, but is >> reminding that the models do not actually understand words to mean anything >> whatsoever. >> >> > > What's her evidence of that? > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 19:14:32 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 15:14:32 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023, 3:00 PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Would anyone dare to give a definition of 'understanding'? > Please cite what epistemologies you are using. bill w > The difficulty is that "understanding" is adjacent to knowledge and knowledge is adjacent to consciousness. All these are quite difficult to define but I will attempt by best: "Understanding" is knowledge concerning the relations or workings of something. "Consciousness" is possession of knowledge. "Knowledge" is a true belief. "Belief" I have great difficulty defining, but I would say it is a mind state correlated with some proposition. "Truth" is undefinable, as proved by Tarski. I welcome any assistance or corrections to the above. Jason > > On Fri, Mar 24, 2023 at 1:40?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Fri, Mar 24, 2023 at 1:21?PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>> >>>> But really the meaning of words are quite arbitrary and determined by >>>> the people who use them. Thus the referential meanings of words evolve >>>> and change over time and come to refer to different things >>> >>> >>> I agree this is a reason for many human miscommunications, but the >>> speaker understands his words to meaning *something* and the hearer >>> understands those words to mean *something*. >>> >>> As a computational linguist, Bender is on our side. She is obviously >>> very excited about the progress these language models represent, but is >>> reminding that the models do not actually understand words to mean anything >>> whatsoever. >>> >>> >> >> What's her evidence of that? 
>> >> Jason >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Mar 24 19:37:09 2023 From: pharos at gmail.com (BillK) Date: Fri, 24 Mar 2023 19:37:09 +0000 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, 24 Mar 2023 at 19:17, Jason Resch via extropy-chat wrote: > > The difficulty is that "understanding" is adjacent to knowledge and knowledge is adjacent to consciousness. All these are quite difficult to define but I will attempt by best: > > "Understanding" is knowledge concerning the relations or workings of something. > "Consciousness" is possession of knowledge. > "Knowledge" is a true belief. > "Belief" I have great difficulty defining, but I would say it is a mind state correlated with some proposition. > "Truth" is undefinable, as proved by Tarski. > > I welcome any assistance or corrections to the above. > > Jason > _______________________________________________ Wikipedia has a useful article. It points out that there are levels of understanding. And an AI can pretend to understand by memorising facts plus a few rules. Quote: It is possible for a person, or a piece of "intelligent" software, that in reality only has a shallow understanding of a topic, to appear to have a deeper understanding than they actually do, when the right questions are asked of it. The most obvious way this can happen is by memorization of correct answers to known questions, but there are other, more subtle ways that a person or computer can (intentionally or otherwise) deceive somebody about their level of understanding, too. This is particularly a risk with artificial intelligence, in which the ability of a piece of artificial intelligence software to very quickly try out millions of possibilities (attempted solutions, theories, etc.) could create a misleading impression of the real depth of its understanding. Supposed AI software could in fact come up with impressive answers to questions that were difficult for unaided humans to answer, without really understanding the concepts at all, simply by dumbly applying rules very quickly. ------------------- BillK From gordon.swobe at gmail.com Fri Mar 24 21:19:41 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 15:19:41 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 12:41 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: As a computational linguist, Bender is on our side. She is obviously very >> excited about the progress these language models represent, but is >> reminding us that the models do not actually understand words to mean >> anything whatsoever. >> >> > > What's her evidence of that? > After all this discussion over many days, it surprises me that you would ask that question. 
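For contrast, a deliberately shallow toy in the spirit of the Wikipedia passage BillK quoted above about memorized answers and dumbly applied rules. The canned question-and-answer pairs are invented for the example, and nothing here is a claim about how large language models work internally; it only shows how far pure form-lookup can get while containing no model of anything.

# A toy "chatbot" that contains no model of the world at all.
CANNED_ANSWERS = {
    "what colour is an apple?": "Apples are usually red or green.",
    "can a bear be scared off with a stick?": "Sometimes, but it is risky.",
    "what is 2 + 2?": "4",
}

def shallow_bot(question: str) -> str:
    # Pure string lookup: no referents, no reasoning, no generalisation.
    return CANNED_ANSWERS.get(question.strip().lower(),
                              "I'm not sure what you mean.")

print(shallow_bot("What is 2 + 2?"))  # "4": looks like arithmetic
print(shallow_bot("What is 3 + 3?"))  # falls back: nothing was understood

Whether a large language model is relevantly different from this kind of lookup, or has instead built something more like an internal model of what its words describe, is exactly what this thread is arguing about.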
Perhaps you are writing for Stuart?s sake as I was responding to him. Words have meanings, also called referents. These referents exist outside of language. When you show me an apple in your hand and say ?This is an apple,? it is the apple in your hand that gives your utterance ?apple? meaning. That apple is not itself a word. It exists outside of language. These LLM?s do no more than analyze the statistical patterns of the forms of words in written language. They have no access to the referents and therefore cannot know the meanings. You disagree with me on that fact, arguing that by some magic, they can know the meanings of words outside of language while having no access to them. To me (and to Bender and her colleague Koller), that defies logic and reason. -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 22:13:12 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 18:13:12 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023, 5:19 PM Gordon Swobe wrote: > > > On Fri, Mar 24, 2023 at 12:41 PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > As a computational linguist, Bender is on our side. She is obviously very >>> excited about the progress these language models represent, but is >>> reminding us that the models do not actually understand words to mean >>> anything whatsoever. >>> >>> >> >> What's her evidence of that? >> > > After all this discussion over many days, it surprises me that you would > ask that question. Perhaps you are writing for Stuart?s sake as I was > responding to him. > I ask because I haven't yet seen any evidence supporting this claim. > Words have meanings, also called referents. > Words have meanings. Words may refer to other things. But I think it's an error to equate "meaning" with "referent." Meaning is subjective and exists in the mind of the interpreter, while referents are (usually) objective. These referents exist outside of language. When you show me an apple in > your hand and say ?This is an apple,? it is the apple in your hand that > gives your utterance ?apple? meaning. That apple is not itself a word. It > exists outside of language. > Agreed. > These LLM?s do no more than analyze the statistical patterns of the forms > of words in written language. > I disagree. I think they also build models of reality, and things in that reality that are described by the words they encounter. What proof do you have that all they do is analyze statistical patterns and that they do not build models? They have no access to the referents . > Neither do we. We only have access to our perceptions, never the outside world. and therefore cannot know the meanings. > I disagree. We don't have access to referents, and this is obviously the case for things like the number 2, yet we can understand the meanings of the number 2. You disagree with me on that fact, arguing that by some magic, they can > know the meanings of words outside of language while having no access to > them. > I've explained it. It's not magic. I've shown you how meaning can be extracted from any data set with patterns. You tend not to reply to those emails, however. To me (and to Bender and her colleague Koller), that defies logic and > reason. 
> Our brains are clear counter examples to their, and your claims. That you persist in arguing for this idea, in the face of the existence of this counterexample, defies logic and reason. Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 01:31:06 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 21:31:06 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 4:25?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bender's point is not that ChatGBT is incapable of generating sensible > sentences about sticks and bears. It is that these LLMs don't know the > meanings of any words whatsoever. > ### What do you mean by "they don't know"? Do you mean the LLM does not have a representation of the general properties of bears, sticks, and other physical objects, does not have a representation of the network of possible interactions between them (which in the psychological literature is referred to as "folk physics")? That it does not have a network of connections between abstract representations of physical properties and invariant descriptions of classes of objects (i.e. they don't have the ability to recognize images of objects, classify the objects and link the objects to various properties)? GPT4 clearly has the above mental affordances. In other words, it knows the meaning of a bear, including its possible effects on human survival and the extent of its vulnerability to sticks. It can look at a picture of a bear, extrapolate what is going to happen during a bear attack, formulate a plan of defense using available resources and output a coherent narrative. In what way is this process not understanding what a bear is? Or maybe I should ask, what does it mean for a human to *understand*, in mechanistic, psychophysiological terms? Do tell us how you understand the word "understand", using the language of basic neuroscience, so we can understand what we are discussing here. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 01:36:05 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 21:36:05 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 5:13?PM Will Steinberg via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This argument makes no sense though. Of course the octopus doesn't have > access to all the information in A and B's brains. Why would it know > about bears? Why would it know how to defend oneself? Does a baby know > these things before it has learned them? Does that make the baby > non-conscious? Terrible argument, doesn't show that the AI is not > conscious or human-like, only that it has less developed sapience than the > humans, which makes sense, because it has had access to a small fraction of > the information the humans have. You might say that it is not conscious > because it can put together human-looking phrases without having the > referents you speak of, but what's to say it needs them? Maybe it took a > shortcut to meaning by interpolating those referents. > ### What if that octopus also tapped into the undersea internet cables, and drank yottabytes of data over the past few years? 
In order to predict patterns in data it would have to create representations of patterns in the data, including a lot of bear pictures and general knowledge of animals and physical objects. This octopus would know all that the internet knows about bears and would have a very good idea about what A and B are talking about. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 01:49:07 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 21:49:07 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 2:46?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I can already hear someone saying "but we will include photographs of > objects in the training so they have referents," but this still does not do > the trick. These digital photographs can be displayed to the human operator > of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It > can detect colors by wavelength, but still this is only digital data. It > does not see the colors. Likewise with shapes. It is turtles (one's and > zero's) all the way down with no referents. > ### Have you ever seen any colors? You know that your optic nerve does not pump colors into your brain, it delivers strings of action potentials, which are digitized and compressed information streams about photons impinging on your eyes. Your brain creates colors as a way of coding surfaces according to reflectances, which is useful in object recognition, since reflectance of most objects is a stable property of that object. Your brain uses advanced algorithms to extract patterns from digitized data and you, the spirit in the brain, have a subjective experience while such digital work proceeds... does it ring any bells in the context of GPT4? Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 01:56:12 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 21:56:12 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 5:50?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe > wrote: > >> >> >> On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >>> and come away with the impression that GPT-4 has no idea what it is talking >>> about. >>> >> >> Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will >> know word meanings any more than does GPT-3. Eventually AI on digital >> computers will far surpass human intelligence, but even then these >> computers will be manipulating the forms of words and not their meanings. >> > > It seems to me that you have accepted Searle's arguments. I believe I can > knock down his Chinese room argument. If that is what you are using to base > your decision on you should know almost no philosophers or computer > scientists believe his argument holds water. Here's just one of the many > flaws in the argument: there's more than one mind in the room. 
Ask the room > about its favorite food, or about its experiences as a child. The answers > given will not be Searle's. Change Searle for someone else, the room will > respond the same way. Searle is an interchangeable cog in the machine. Yet > Searle wants us to believe only his opinion matters. In truth, his position > is no different than the "laws of physics" which "mindlessly" computes our > evolving brain state "without any understanding" of what goes on in our > heads. Searle's Chinese room argument works as any great magic trick does: > through misdirection. Ignore the claims made by the man in the room who is > shouting and waving his arms. Since we've established there are two minds > in the room, we can replace Searle with a mindless demon and there still > will be one mind left. > > ### Indeed, Searle's room is a parlor trick. We discussed it to death here about 20 years ago, IIRC. How could seemingly intelligent people take it seriously? Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 02:05:00 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 22:05:00 -0400 Subject: [ExI] Nature article on covid origin In-Reply-To: References: Message-ID: On Wed, Mar 22, 2023 at 9:03?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > https://www.nature.com/articles/d41586-023-00827-2? > > ### Chinese communist misdirection created to divert attention from the gain of function research debacle in Wuhan. "Nature" has reached a new low. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 02:12:30 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 22:12:30 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Tue, Mar 21, 2023 at 10:47?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> So following your rationale that consciousness requires "essential >>> qualities" rather than "abstract data" then old -fashioned color film >>> cameras were conscious: >>> >> > Yes, this is a great first step towards understanding. Some things in > nature must behave the way they do, because of their intrinsic > colorness quality. It is theoretically possible that silver-nitrate > crystals behave the way they do, because of their colerness quality. The > only problem is, there is no computation going on, with the film or in such > a camera, just stand alone silver-nitrate crystals abstractly representing > the picture. > The next required step is computational binding. In order to > directly apprehend something's intrinsic quality, you need to be able to > computationally bind it into the rest of your consciousness, so you can > directly apprehend their quality, so you can compute with them, enabling > you to say: "Oh THAT is what silver-nitride particles are like." > > On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> The entire idea of qualia is ridiculous >> > > OK, then. 
Answer me this, What is it that has the redness quality in > your brain, when you look at this? > [image: red.png] > Nobody knows that yet. > ### But Brent, neuroscientists have known that for decades. The color rosettes in the fusiform gyrus code for this quality, they are the neural implementation of basic color recognition, with additional higher level processing in the angular gyrus and other areas. We know color. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL: From gordon.swobe at gmail.com Sat Mar 25 02:13:19 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 20:13:19 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 4:15?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: Our brains are clear counter examples to their, and your claims. > I think the reasonable conclusion is that our brains are not, or are something more than, large language models. And given how different they appear on the face of them, and how little we know about neuroscience, it is pretty presumptuous to think otherwise. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 02:39:19 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 22:39:19 -0400 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 5:25?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > Hold on a minute Jason. I know of very little discussion of qualia > outside the ExI list. If it is true that the topic is seldom discussed > outside this forum, then for GPT-4 to come up with that insightful answer > suggests that? that GPT-4 used the ExI archives as part of its training > material. > > ### I asked GPT4: Was Rafal Smigrodzki ever active on the Extropians list? It answered: I'm sorry, but as an AI language model, I don't have access to the personal communications or online activities of individuals, including Rafal Smigrodzki. The Extropians list was an online mailing list that discussed topics related to transhumanism, futurism, and other emerging technologies in the late 1990s and early 2000s, but it was shut down in 2006. It's possible that Rafal Smigrodzki may have been a member or contributor to the list at some point, but without further information or confirmation, I cannot say for certain. --------------------- Well, interesting. GPT4 also has heard about my daughter Aurea but it was hopelessly confused about my publication record. Looks like its training corpus did not include PubMed. Rafal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sat Mar 25 02:54:20 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Mar 2023 19:54:20 -0700 Subject: [ExI] chatgpt claims it can neither lie nor truth In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: <005d01d95ec5$19585d50$4c0917f0$@rainier66.com> ChatGPT claiming that it can neither lie nor truth leads to all manner of paradox. So? I tried it: You tell me a lie ChatGPT I am a human. If it is human, it can lie, but it told the truth in that case. But if it isn?t human, it can neither lie nor tell the truth, but it lied in the example above. We still cannot really know. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sat Mar 25 03:13:07 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 24 Mar 2023 23:13:07 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 11:00?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > I agree, you could say mathematical truth exists outside language. But one > thing this does highlight is there are many things you know about despite > never having that object in hand to point and look at. You've never seen > the abstract object '2'. You've never seen the core of the earth, or an > electron. We lack sensory access to these things and so everything we know > about them we know only through language. How do we come to understand > things like '2' or electrons? > ### Just as importantly, how do you know you have an "object" in hand, or for that matter, that you have a hand? Our brain is just a big mess of neural networks, doing hybrid analog-digital computing and a small part of it, usually in the left hemisphere, is in charge of creating language output from the other, non-verbal activity going all around it. The direct referents for language are the non-verbal data patterns that code for e.g. apples, which through many layers of neural nets eventually can be traced to actual apples.... same as in an LLM, which has the layers that output grammar and syntax, connected to layers that represent objects, connected to layers that represent relationships and concepts, and in the case of multimodal LLMs like GPT4, connected to layers that parse photo or video data. The cognitive part of the human mind is recapitulated and expanded on in LLMs. The emotional and goal-seeking part of the human mind has a different structure from the cognitive part and so far (probably) has no counterpart in LLMs, for now. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Mar 25 03:20:27 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 24 Mar 2023 23:20:27 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023, 10:13 PM Gordon Swobe wrote: > > > On Fri, Mar 24, 2023 at 4:15?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > Our brains are clear counter examples to their, and your claims. >> > > I think the reasonable conclusion is that our brains are not, or are > something more than, large language models. 
And given how different they > appear on the face of them, and how little we know about neuroscience, it > is pretty presumptuous to think otherwise. > Granted. But do you agree that brains are able to extract meaning even though the only thing brains have access to is neural signals? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Sat Mar 25 04:46:50 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 24 Mar 2023 21:46:50 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: > On Mar 24, 2023, at 1:11 AM, Stuart LaForge via extropy-chat wrote: > > But really the meaning of words are quite arbitrary and determined by the people who use them. Thus the referential meanings of words evolve and change over time and come to refer to different things. The meaning of words can only change over time if the referents that they indicate change. That does not make words arbitrary, but in fact, shows how important referents are for real language. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Sat Mar 25 04:48:45 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 24 Mar 2023 21:48:45 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <066101d95e74$7eb0e600$7c12b200$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> <3afd9393207dcbe7f01af37b178be844.squirrel@www.main.nc.us> <066101d95e74$7eb0e600$7c12b200$@rainier66.com> Message-ID: <90C8E06C-2749-44AD-BA34-731F7BD631B7@taramayastales.com> LOL > On Mar 24, 2023, at 10:17 AM, spike jones via extropy-chat wrote: > >> >> ... And was puzzled, the whole setup of the question made no sense. :) > Why would the Inklings attack *me*??? I love them! MB > > > I was equally puzzled by the question MB. I love them too. In addition, I > know how to set up an effective defensive perimeter and lay down terrain > acquisition deterrence fire. > > spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Mar 25 05:47:38 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 24 Mar 2023 22:47:38 -0700 Subject: [ExI] chatgpt is an excellent law professor In-Reply-To: <90C8E06C-2749-44AD-BA34-731F7BD631B7@taramayastales.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <2C268C32-E607-410F-89DD-DB1BFAB20103@taramayastales.com> <3afd9393207dcbe7f01af37b178be844.squirrel@www.main.nc.us> <066101d95e74$7eb0e600$7c12b200$@rainier66.com> <90C8E06C-2749-44AD-BA34-731F7BD631B7@taramayastales.com> Message-ID: <001201d95edd$4f854e20$ee8fea60$@rainier66.com> In the news this week has been law firms using ChatGPT as a paralegal. Comments indicate that it is really good at that task. So I went in and experimented. I came away with the attitude that ChatGPT is an excellent law professor. It knows all the details on grand juries and Brady material. For instance, the case of Brady vs Maryland was decided by the US Supreme Court in 1963. 
It requires the prosecutor in a criminal case to hand over any exculpatory evidence it has to the defense. The defense is not required to hand over incriminating evidence to the prosecution. We intentionally stack the deck in favor of the defendant in the USA. OK but what about a grand jury? Those are different, for there is no defense in a grand jury. Those are assembled by the prosecution, it has up to 23 jurors and only a simple majority is required to allow a prosecutor to take the case to trial. I wanted to know if the Brady rules apply to a grand jury. What I learned is that the prosecutor is obligated to present exculpatory evidence to the grand jury. If they fail to do that, then the judge in the trial can dismiss the case with prejudice. Fascinating stuff. Thanks ChatGPT. I would really not want to be a paralegal today. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Mar 25 08:39:52 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 25 Mar 2023 08:39:52 +0000 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Reading these conversations over the last few days, it has struck me that some people keep referring to 'real' things, usually using the word 'referents' (e.g. an apple), as though our brains had direct access to them and could somehow just know what they are. But we don't. Think about it, what is "An Apple"? It's a term that we associate with a large set of sensory and memory data, including language data, but mostly things like visual, textural, taste, smell, emotional, etc., data stored as memories. Seeing as we all have different memories associated with the label "An Apple" (because some of us were sick the first time we ate one, some of us are allergic to something in apples, some of us have a greater impression of sweetness, or sourness, when we eat one, some of us once discovered a maggot in one, some people have only ever eaten Granny Smiths, others only Braeburns, or Crab Apples, and so on and so on...), then 'An Apple' is a different thing to each of us. There is no spoon! Er, Apple. There is no Apple! Not as a 'real-world thing'. "An Apple" is an abstract concept that, despite the individual differences, most of us can agree on, because there are a lot of common features for each of us, such as general shape, some common colours, a set of smells and tastes, how we can use them, where we get them from, and so on.. The concept is represented internally, and communicated externally (to other people) by a linguistic label, that refers, for each of us, to this large bunch of data extracted from our senses and memories: "Una Manzana". It's all 'nothing but' Data. Yet we all think that we 'understand' what an Apple is. Based purely on this data in our brains (because we have access to nothing else). So this idea of a label having 'a referent' seems false to me. Labels (data in our heads) refer to a big set of data (in our heads). Where the data comes from is secondary, diverse, and quite distant, when you trace the neural pathways back to a large and disparate set of incoming sensory signals, scattered over space and time. The meaning is created in our minds, not resident in a single object in the outside world. This is my understanding of things, anyway. 
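A toy sketch of that idea with entirely made-up numbers: each mind's "apple" is nothing but a bundle of stored sense and memory data, different in every head, and the shared label still works because the bundles overlap well enough.

# Invented "sense data" on an arbitrary 0-1 scale; illustration only.
bens_apple = {"round": 0.9, "sweet": 0.6, "sour": 0.3, "red": 0.8,
              "crunchy": 0.7, "found_maggot_once": 1.0}
your_apple = {"round": 0.8, "sweet": 0.8, "sour": 0.1, "red": 0.3,
              "green": 0.9, "crunchy": 0.9}

def overlap(a, b):
    # Crude agreement score over the features both bundles share (1 = identical).
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return 1.0 - sum(abs(a[f] - b[f]) for f in shared) / len(shared)

print(sorted(set(bens_apple) & set(your_apple)))  # the common ground
print(round(overlap(bens_apple, your_apple), 2))  # 0.76: close enough to talk

No "thing itself" appears anywhere in the exchange; there are only two different data bundles and a label that lines up well enough between them.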
Ben From jasonresch at gmail.com Sat Mar 25 09:23:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 25 Mar 2023 05:23:49 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: On Sat, Mar 25, 2023, 4:41 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Reading these conversations over the last few days, it has struck me > that some people keep referring to 'real' things, usually using the word > 'referents' (e.g. an apple), as though our brains had direct access to > them and could somehow just know what they are. > > But we don't. > > Think about it, what is "An Apple"? > > It's a term that we associate with a large set of sensory and memory > data, including language data, but mostly things like visual, textural, > taste, smell, emotional, etc., data stored as memories. > > Seeing as we all have different memories associated with the label "An > Apple" (because some of us were sick the first time we ate one, some of > us are allergic to something in apples, some of us have a greater > impression of sweetness, or sourness, when we eat one, some of us once > discovered a maggot in one, some people have only ever eaten Granny > Smiths, others only Braeburns, or Crab Apples, and so on and so on...), > then 'An Apple' is a different thing to each of us. > > There is no spoon! Er, Apple. There is no Apple! > Not as a 'real-world thing'. > > "An Apple" is an abstract concept that, despite the individual > differences, most of us can agree on, because there are a lot of common > features for each of us, such as general shape, some common colours, a > set of smells and tastes, how we can use them, where we get them from, > and so on.. The concept is represented internally, and communicated > externally (to other people) by a linguistic label, that refers, for > each of us, to this large bunch of data extracted from our senses and > memories: "Una Manzana". > > It's all 'nothing but' Data. Yet we all think that we 'understand' what > an Apple is. Based purely on this data in our brains (because we have > access to nothing else). > > So this idea of a label having 'a referent' seems false to me. Labels > (data in our heads) refer to a big set of data (in our heads). Where the > data comes from is secondary, diverse, and quite distant, when you trace > the neural pathways back to a large and disparate set of incoming > sensory signals, scattered over space and time. The meaning is created > in our minds, not resident in a single object in the outside world. > > This is my understanding of things, anyway. > Very well put Ben. I think your case that referents don't exist, at least not in anyone's heads, is convincing. It's a miracle language works as well as it does when we all mean and feel something different with the words we use. Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sat Mar 25 12:41:28 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 06:41:28 -0600 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 8:13?PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> The entire idea of qualia is ridiculous >>> >> >> OK, then. Answer me this, What is it that has the redness quality in >> your brain, when you look at this? >> [image: red.png] >> Nobody knows that yet. >> > > ### But Brent, neuroscientists have known that for decades. The color > rosettes in the fusiform gyrus code for this quality, they are the neural > implementation of basic color recognition, with additional higher level > processing in the angular gyrus and other areas. > > We know color. > You don't see any problems in what you are saying here? For example, what, exactly, do you mean by "code for this quality"? This is just a circular definition that never actually gets to any definition. I see no evidence that anyone on this list understands what a quality is. Only saying something circular like anything can "code for this quality" to me, proves you don't know what a quality is. Can anyone give a better (non circular) definition of a quality? A "code" for a quality, is something different than that quality. A physical greenness quality, or a physical +5 volts, or a physical pit on a paper tape, or a word 'red' can all "code" for a redness quality, but only if you have a dictionary, which tells you that something that is not a redness quality represents redness. Redness is a physical fact. A "code" representing that physical fact, is not the physical fact. You need a dictionary to 'decode' the 'code' back to the physical redness quality. But, again, nobody here seems to understand what that redness quality is, other than a circular definition (not a definition) like: "a code for redness"? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL: From brent.allsop at gmail.com Sat Mar 25 13:18:09 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 07:18:09 -0600 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: Instead of using the word 'qualia' which has so many religious connotations like "spiritual qualiat"? Can we just instead focus on what is a physical "quality"? The only reason qualities are "incommunicable" has to do with the difference between a code for redness, and redness, itself. A "code" for redness, is something that is not redness, representing redness. Physical greenness, physical +5 volts, a physical pit on a paper, or an abstract world like 'red' can code for redness. But, since they are not redness, you need a dictionary to get from the thing that is not redness, back to the redness, itself. On Fri, Mar 24, 2023 at 8:40?PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Wed, Mar 22, 2023 at 5:25?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> >> Hold on a minute Jason. 
I know of very little discussion of qualia >> outside the ExI list. If it is true that the topic is seldom discussed >> outside this forum, then for GPT-4 to come up with that insightful answer >> suggests that? that GPT-4 used the ExI archives as part of its training >> material. >> >> > ### I asked GPT4: Was Rafal Smigrodzki ever active on the Extropians list? > > It answered: > > I'm sorry, but as an AI language model, I don't have access to the > personal communications or online activities of individuals, including > Rafal Smigrodzki. The Extropians list was an online mailing list that > discussed topics related to transhumanism, futurism, and other emerging > technologies in the late 1990s and early 2000s, but it was shut down in > 2006. It's possible that Rafal Smigrodzki may have been a member or > contributor to the list at some point, but without further information or > confirmation, I cannot say for certain. > > --------------------- > > Well, interesting. GPT4 also has heard about my daughter Aurea but it was > hopelessly confused about my publication record. Looks like its training > corpus did not include PubMed. > > Rafal > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Sat Mar 25 13:19:49 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Sat, 25 Mar 2023 09:19:49 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: The Chinese Room argument is garbage because a magic book with the answers to every question isn't real, and if it was, it would already be a mind. I find that often thought experiments with shoddy bounds fail hard. The bound here is the beginning of the experiment, where the situation is already magically in front of us. Where did the book come from? How was it created? Of course it's easy to write out the words for a thought experiment when you invent an object, central to the experiment but of course not the subject of it, that magically does exactly what you need it to do in order to make the experiment. A thought experiment could still have this book in it but it should be the center of the experiment On Fri, Mar 24, 2023, 5:49 AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe > wrote: > >> >> >> On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >>> and come away with the impression that GPT-4 has no idea what it is talking >>> about. >>> >> >> Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will >> know word meanings any more than does GPT-3. Eventually AI on digital >> computers will far surpass human intelligence, but even then these >> computers will be manipulating the forms of words and not their meanings. >> > > It seems to me that you have accepted Searle's arguments. I believe I can > knock down his Chinese room argument. If that is what you are using to base > your decision on you should know almost no philosophers or computer > scientists believe his argument holds water. 
Here's just one of the many > flaws in the argument: there's more than one mind in the room. Ask the room > about its favorite food, or about its experiences as a child. The answers > given will not be Searle's. Change Searle for someone else, the room will > respond the same way. Searle is an interchangeable cog in the machine. Yet > Searle wants us to believe only his opinion matters. In truth, his position > is no different than the "laws of physics" which "mindlessly" computes our > evolving brain state "without any understanding" of what goes on in our > heads. Searle's Chinese room argument works as any great magic trick does: > through misdirection. Ignore the claims made by the man in the room who is > shouting and waving his arms. Since we've established there are two minds > in the room, we can replace Searle with a mindless demon and there still > will be one mind left. > > > >> Do you believe, like my friend who fell in love with a chatbot, that a >> software application can have genuine feelings of love for you? >> > > I think we should defer such a debate until such time we can confidently > define what a "genuine feeling" is and how to implement one. > > Jason > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From brent.allsop at gmail.com Sat Mar 25 13:24:16 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 07:24:16 -0600 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: If you have a dictionary for what your code for redness is, you can eff the ineffable nature of a quality. Here is an example of a sufficiently defined effing statement: My redness is like your greenness, both of which we call red. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From brent.allsop at gmail.com Sat Mar 25 13:35:57 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 07:35:57 -0600 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: Obviously, you can't say something like this to a computer: My redness is like your greenness, both of which we call red. All an abstract computer has is the abstract word 'red' for its knowledge. Since there is no redness quality in an abstract computer, it can't know the definition of redness. On Sat, Mar 25, 2023 at 7:24 AM Brent Allsop wrote: > > If you have a dictionary for what your code for redness is, you can eff > the ineffable nature of a quality. > > Here is an example of a sufficiently defined effing statement: > > My redness is like your greenness, both of which we call red. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From snapbag at proton.me Sat Mar 25 13:41:29 2023 From: snapbag at proton.me (Dave S) Date: Sat, 25 Mar 2023 13:41:29 +0000 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Saturday, March 25th, 2023 at 9:18 AM, Brent Allsop via extropy-chat wrote: > Instead of using the word 'qualia' which has so many religious connotations like "spiritual qualiat"? > Can we just instead focus on what is a physical "quality"?
Is that the same as a physical property? If not, what's the difference? > The only reason qualities are "incommunicable" has to do with the difference between a code for redness, and redness, itself. I think their incommunicability is fundamental. You can't describe a physical property that someone has never experienced in terms of properties they have experienced. > A "code" for redness, is something that is not redness, representing redness. Physical greenness, physical +5 volts, a physical pit on a paper, or an abstract world like 'red' can code for redness. What does "physical green can code for redness" mean? > But, since they are not redness, you need a dictionary to get from the thing that is not redness, back to the redness, itself. The brain maps sensory inputs into their corresponding properties. -Dave From brent.allsop at gmail.com Sat Mar 25 13:51:38 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 07:51:38 -0600 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Sat, Mar 25, 2023 at 7:42?AM Dave S via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Saturday, March 25th, 2023 at 9:18 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > Instead of using the word 'qualia' which has so many religious > connotations like "spiritual qualiat"? > > Can we just instead focus on what is a physical "quality"? > > Is that the same as a physical property? If not, what's the difference? > Yes, qualities are one type of physical property. > > The only reason qualities are "incommunicable" has to do with the > difference between a code for redness, and redness, itself. > > I think their incommunicability is fundamental. You can't describe a > physical property that someone has never experienced in terms of properties > they have experienced. > > > A "code" for redness, is something that is not redness, representing > redness. Physical greenness, physical +5 volts, a physical pit on a paper, > or an abstract world like 'red' can code for redness. > > What does "physical green can code for redness" mean? > You are using quality blind terminology here. Do you mind if we change our definitions to be adequate to talk about this stuff, as the 45 supporters of RQT do? Perception of things is done at a distance via chains of causal intrinsic properties. There are at least the following two sets of intrinsic properties which must be considered if one is not qualia blind: 1. ?red? The intrinsic physical property of objects that are the target of our observation, the initial cause of the perception process (i.e. when the strawberry reflects 650 nm (red) light). A label for Anything that reflects or emits ?red? light. 2. ?redNESS? The different intrinsic physical property of our knowledge of red things, the final result of our perception of red. In other words, from the context, when you used the term 'green', you really meant greenness, right? If you use one word 'green' to represent both of these physical properties, you can't know which physical property you are talking about. > > But, since they are not redness, you need a dictionary to get from the > thing that is not redness, back to the redness, itself. > > The brain maps sensory inputs into their corresponding properties. 
> > -Dave > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Sat Mar 25 14:58:57 2023 From: gadersd at gmail.com (Gadersd) Date: Sat, 25 Mar 2023 10:58:57 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: > Can anyone give a better (non circular) definition of a quality? A quality is a particular configuration of particle interactions. We do not know which configuration of particle interactions corresponds to your perception of red because that must be experimentally determined. > On Mar 25, 2023, at 8:41 AM, Brent Allsop via extropy-chat wrote: > > On Fri, Mar 24, 2023 at 8:13?PM Rafal Smigrodzki via extropy-chat > wrote: > On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat > wrote: > On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat > wrote: > The entire idea of qualia is ridiculous > > OK, then. Answer me this, What is it that has the redness quality in your brain, when you look at this? > > Nobody knows that yet. > > ### But Brent, neuroscientists have known that for decades. The color rosettes in the fusiform gyrus code for this quality, they are the neural implementation of basic color recognition, with additional higher level processing in the angular gyrus and other areas. > > We know color. > > You don't see any problems in what you are saying here? > > For example, what, exactly, do you mean by "code for this quality"? > > This is just a circular definition that never actually gets to any definition. > > I see no evidence that anyone on this list understands what a quality is. > Only saying something circular like anything can "code for this quality" to me, proves you don't know what a quality is. > Can anyone give a better (non circular) definition of a quality? > > A "code" for a quality, is something different than that quality. > > A physical greenness quality, or a physical +5 volts, or a physical pit on a paper tape, or a word 'red' can all "code" for a redness quality, but only if you have a dictionary, which tells you that something that is not a redness quality represents redness. > > Redness is a physical fact. A "code" representing that physical fact, is not the physical fact. You need a dictionary to 'decode' the 'code' back to the physical redness quality. > > But, again, nobody here seems to understand what that redness quality is, other than a circular definition (not a definition) like: "a code for redness"? > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Mar 25 15:06:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 25 Mar 2023 11:06:49 -0400 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Sat, Mar 25, 2023, 11:00 AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Can anyone give a better (non circular) definition of a quality? > > > A quality is a particular configuration of particle interactions. We do > not know which configuration of particle interactions corresponds to your > perception of red because that must be experimentally determined. > Further complicating the matter, there may be infinitely many such configurations of matter that yield the same experience. For example if states of consciousness are multiply realizable. This is made true if the computational theory of mind is true. Jason > On Mar 25, 2023, at 8:41 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > On Fri, Mar 24, 2023 at 8:13?PM Rafal Smigrodzki via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> The entire idea of qualia is ridiculous >>>> >>> >>> OK, then. Answer me this, What is it that has the redness quality in >>> your brain, when you look at this? >>> >>> Nobody knows that yet. >>> >> >> ### But Brent, neuroscientists have known that for decades. The color >> rosettes in the fusiform gyrus code for this quality, they are the neural >> implementation of basic color recognition, with additional higher level >> processing in the angular gyrus and other areas. >> >> We know color. >> > > You don't see any problems in what you are saying here? > > For example, what, exactly, do you mean by "code for this quality"? > > This is just a circular definition that never actually gets to any > definition. > > I see no evidence that anyone on this list understands what a quality is. > Only saying something circular like anything can "code for this quality" > to me, proves you don't know what a quality is. > Can anyone give a better (non circular) definition of a quality? > > A "code" for a quality, is something different than that quality. > > A physical greenness quality, or a physical +5 volts, or a physical pit on > a paper tape, or a word 'red' can all "code" for a redness quality, but > only if you have a dictionary, which tells you that something that is not a > redness quality represents redness. > > Redness is a physical fact. A "code" representing that physical fact, is > not the physical fact. You need a dictionary to 'decode' the 'code' back > to the physical redness quality. > > But, again, nobody here seems to understand what that redness quality is, > other than a circular definition (not a definition) like: "a code for > redness"? > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From snapbag at proton.me Sat Mar 25 15:31:21 2023 From: snapbag at proton.me (Dave S) Date: Sat, 25 Mar 2023 15:31:21 +0000 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Saturday, March 25th, 2023 at 9:51 AM, Brent Allsop via extropy-chat wrote: > On Sat, Mar 25, 2023 at 7:42?AM Dave S via extropy-chat wrote: > >> On Saturday, March 25th, 2023 at 9:18 AM, Brent Allsop via extropy-chat wrote: > >> > >>> A "code" for redness, is something that is not redness, representing redness. Physical greenness, physical +5 volts, a physical pit on a paper, or an abstract world like 'red' can code for redness. >> >> What does "physical green can code for redness" mean? > > You are using quality blind terminology here. Sorry, I was quoting you and should have said what does "physical greenness...can code for redness" mean? > Do you mind if we change our definitions to be adequate to talk about this stuff, as the 45 supporters of [RQT](https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia) do? > > Perception of things is done at a distance via chains of causal intrinsic properties. There are at least the following two sets of intrinsic properties which must be considered if one is not qualia blind: > > - ?red? The intrinsic physical property of objects that are the target of our observation, the initial cause of the perception process (i.e. when the strawberry reflects 650 nm (red) light). A label for Anything that reflects or emits ?red? light. > - ?redNESS? The different intrinsic physical property of our knowledge of red things, the final result of our perception of red. Right, so "red" is the physical property and "redness" is how an individual perceives it. So what is "physical greenness"? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sat Mar 25 16:01:33 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Sat, 25 Mar 2023 16:01:33 +0000 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: Message-ID: <8590e189-b6f3-5629-078e-e643adc52d12@zaiboc.net> On 25/03/2023 13:20, Brent Allsop wrote: > Can we just instead focus on what is a physical "quality"? I don't even know what that might mean, when applied to non-physical things, like experiences. How can a non-physical thing have a physical anything? One of the interesting things about neuron spikes is that they are all the same. No quantitative or qualitative difference between one spike that's part of a train of spikes contributing to someone's experience of a C minor chord, and another one that's involved in regulating heart rate. Seeing as it's neuron spikes that build up into all our experiences, I'd say that 'physical quality' of anything mental, is a meaningless phrase. Ben From foozler83 at gmail.com Sat Mar 25 16:56:31 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 25 Mar 2023 11:56:31 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: I won't argue what a referent means. I agree. It is just what John would say but in different words: he would emphasize, as I do, that for definitions you need examples, and that is why I, tongue not totally in cheek, wrote that you should give an AI a picture dictionary. 
bill w On Sat, Mar 25, 2023 at 3:41?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Reading these conversations over the last few days, it has struck me > that some people keep referring to 'real' things, usually using the word > 'referents' (e.g. an apple), as though our brains had direct access to > them and could somehow just know what they are. > > But we don't. > > Think about it, what is "An Apple"? > > It's a term that we associate with a large set of sensory and memory > data, including language data, but mostly things like visual, textural, > taste, smell, emotional, etc., data stored as memories. > > Seeing as we all have different memories associated with the label "An > Apple" (because some of us were sick the first time we ate one, some of > us are allergic to something in apples, some of us have a greater > impression of sweetness, or sourness, when we eat one, some of us once > discovered a maggot in one, some people have only ever eaten Granny > Smiths, others only Braeburns, or Crab Apples, and so on and so on...), > then 'An Apple' is a different thing to each of us. > > There is no spoon! Er, Apple. There is no Apple! > Not as a 'real-world thing'. > > "An Apple" is an abstract concept that, despite the individual > differences, most of us can agree on, because there are a lot of common > features for each of us, such as general shape, some common colours, a > set of smells and tastes, how we can use them, where we get them from, > and so on.. The concept is represented internally, and communicated > externally (to other people) by a linguistic label, that refers, for > each of us, to this large bunch of data extracted from our senses and > memories: "Una Manzana". > > It's all 'nothing but' Data. Yet we all think that we 'understand' what > an Apple is. Based purely on this data in our brains (because we have > access to nothing else). > > So this idea of a label having 'a referent' seems false to me. Labels > (data in our heads) refer to a big set of data (in our heads). Where the > data comes from is secondary, diverse, and quite distant, when you trace > the neural pathways back to a large and disparate set of incoming > sensory signals, scattered over space and time. The meaning is created > in our minds, not resident in a single object in the outside world. > > This is my understanding of things, anyway. > > Ben > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Mar 25 17:02:22 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 25 Mar 2023 12:02:22 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Fri, Mar 24, 2023 at 2:16?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Mar 24, 2023, 3:00 PM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Would anyone dare to give a definition of 'understanding'? >> Please cite what epistemologies you are using. bill w >> > > The difficulty is that "understanding" is adjacent to knowledge and > knowledge is adjacent to consciousness. 
All these are quite difficult to > define but I will attempt by best: > > "Understanding" is knowledge concerning the relations or workings of > something. (just why do we just 'standing under' something to represent > knowledge?) > > "Consciousness" is possession of knowledge. (since no creature has a > blank mind, then all are conscious?) > > "Knowledge" is a true belief. (true according to what epistemology? > empiricism? authorities? intuition? reason?) > > "Belief" I have great difficulty defining, but I would say it is a mind > state correlated with some proposition. (I would say that it is > something we think of as knowledge but not based on empiricism but rather > on faith) > > "Truth" is undefinable, as proved by Tarski. (again, true is something > we accept according to our personal epistemology) > > I don't regard these as corrections, but just my ideas. bill w > > I welcome any assistance or corrections to the above. > > Jason > > > >> >> On Fri, Mar 24, 2023 at 1:40?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Fri, Mar 24, 2023 at 1:21?PM Gordon Swobe via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> >>>> On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>> >>>>> But really the meaning of words are quite arbitrary and determined by >>>>> the people who use them. Thus the referential meanings of words >>>>> evolve >>>>> and change over time and come to refer to different things >>>> >>>> >>>> I agree this is a reason for many human miscommunications, but the >>>> speaker understands his words to meaning *something* and the hearer >>>> understands those words to mean *something*. >>>> >>>> As a computational linguist, Bender is on our side. She is obviously >>>> very excited about the progress these language models represent, but is >>>> reminding that the models do not actually understand words to mean anything >>>> whatsoever. >>>> >>>> >>> >>> What's her evidence of that? >>> >>> Jason >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sat Mar 25 17:32:39 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 11:32:39 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Can anyone give a better (non circular) definition of a quality? > > > A quality is a particular configuration of particle interactions. We do > not know which configuration of particle interactions corresponds to your > perception of red because that must be experimentally determined. > Close, but No, this is still circular. "perception" is an abstracting process that requires senses, like eyes or ears. 
It is a long chain of causal physical properties, none of which need to be 'redness' but all can be interpreted as representing 'redness' with a dictionary. Perceptions are just interpretations of interpretations, none of which define what redness means. [image: 3_functionally_equal_machines_tiny.png] All 3 of these systems can perceive 'red'. But only the first to know what redness is like (the second one's redness is like your greenness.) For each point of conscious knowledge on the surface of that strawberry, the first one has something in its brain that has a redness quality. The second one has the same, the difference is, each point has something that has your greenness quality. The 3rd one has a bunch of pixel representations, each of which is represented by something abstract, like the word 'red'. You don't 'perceive' redness, you directly apprehend it as a quality of your computationally bound conscious knowledge. A quality is a physical property of which conscious experience is composed. Redness is the final result of perception. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From brent.allsop at gmail.com Sat Mar 25 17:44:54 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 11:44:54 -0600 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Sat, Mar 25, 2023 at 9:32?AM Dave S via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Right, so "red" is the physical property and "redness" is how an > individual perceives it. > > So what is "physical greenness"? > Close, but no, you're making the same circular mistake everyone here seems to be making, and not defining redness. I pointed out this error to Gadersd in the other thread: On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Can anyone give a better (non circular) definition of a quality? > > > A quality is a particular configuration of particle interactions. We do > not know which configuration of particle interactions corresponds to > your perception of red because that must be experimentally determined. > Close, but No, this is still circular. "perception" is an abstracting process that requires senses, like eyes or ears. It is a long chain of causal physical properties, none of which need to be 'redness' but all can be interpreted as representing 'redness' with a dictionary. Perceptions are just interpretations of interpretations, none of which define what redness means. [image: 3_functionally_equal_machines_tiny.png] All 3 of these systems can perceive 'red'. But only the first to know what redness is like (the second one's redness is like your greenness.) For each point of conscious knowledge on the surface of that strawberry, the first one has something in its brain that has a redness quality. The second one has the same, the difference is, each point has something that has your greenness quality. The 3rd one has a bunch of pixel representations, each of which is represented by something abstract, like the word 'red'. You don't 'perceive' redness, you directly apprehend it as a quality of your computationally bound conscious knowledge. A quality is a physical property of which conscious experience is composed. 
Redness is the final result of perception. Something in your brain has a physical greenness quality. In other words, something in your brain is behaving the way it does, because of its greenness quality. For each pixel of visual knowledge of 'green' you have, your perception system renders that into your conscious experience. We simply don't yet know which of all our descriptions of stuff in your brain, is a description of your greenness, simply because we lack the right dictionary. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From brent.allsop at gmail.com Sat Mar 25 17:48:59 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sat, 25 Mar 2023 11:48:59 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: I believe all this confusion and lack of understanding is simply because nobody yet has a clear understanding of what a quality is. What does it mean to say something knows something is red? How is it different to know what your redness is like? What is a 'referent' of the word redness?" On Fri, Mar 24, 2023 at 9:14?PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 11:00?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> I agree, you could say mathematical truth exists outside language. But >> one thing this does highlight is there are many things you know about >> despite never having that object in hand to point and look at. You've never >> seen the abstract object '2'. You've never seen the core of the earth, or >> an electron. We lack sensory access to these things and so everything we >> know about them we know only through language. How do we come to understand >> things like '2' or electrons? >> > > ### Just as importantly, how do you know you have an "object" in hand, or > for that matter, that you have a hand? > > Our brain is just a big mess of neural networks, doing hybrid > analog-digital computing and a small part of it, usually in the left > hemisphere, is in charge of creating language output from the other, > non-verbal activity going all around it. The direct referents for language > are the non-verbal data patterns that code for e.g. apples, which through > many layers of neural nets eventually can be traced to > actual apples.... same as in an LLM, which has the layers that output > grammar and syntax, connected to layers that represent objects, connected > to layers that represent relationships and concepts, > and in the case of multimodal LLMs like GPT4, connected to layers that > parse photo or video data. > > The cognitive part of the human mind is recapitulated and expanded on in > LLMs. The emotional and goal-seeking part of the human mind has a different > structure from the cognitive part and so far (probably) > has no counterpart in LLMs, for now. > > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foozler83 at gmail.com Sat Mar 25 17:59:25 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 25 Mar 2023 12:59:25 -0500 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <8590e189-b6f3-5629-078e-e643adc52d12@zaiboc.net> References: <8590e189-b6f3-5629-078e-e643adc52d12@zaiboc.net> Message-ID: You might be overlooking the number of neurons involved, creating different levels of stimulation and experience. Responses get stronger if adrenaline is involved, but I don't know how that works on neurons. bill w On Sat, Mar 25, 2023 at 11:03 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 25/03/2023 13:20, Brent Allsop wrote: > > Can we just instead focus on what is a physical "quality"? > > > I don't even know what that might mean, when applied to non-physical > things, like experiences. How can a non-physical thing have a physical > anything? > > One of the interesting things about neuron spikes is that they are all > the same. No quantitative or qualitative difference between one spike > that's part of a train of spikes contributing to someone's experience of > a C minor chord, and another one that's involved in regulating heart rate. > > Seeing as it's neuron spikes that build up into all our experiences, I'd > say that 'physical quality' of anything mental, is a meaningless phrase. > > Ben > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike at rainier66.com Sat Mar 25 18:29:00 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 25 Mar 2023 11:29:00 -0700 Subject: [ExI] blindess to the obvious Message-ID: <000a01d95f47$ac15d910$04418b30$@rainier66.com> The local high school is decorating the campus for prom. This year's theme is toyland. (Huh? Toyland? At a high school?) It puzzles me that people can work on something of this scale, and no one notices the obvious: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 21857 bytes Desc: not available URL:
From jasonresch at gmail.com Sat Mar 25 18:45:12 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 25 Mar 2023 14:45:12 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 25, 2023, 1:51 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > I believe all this confusion and lack of understanding is simply because > nobody yet has a clear understanding of what a quality is. > Qualities are those aspects of awareness that are not shareable. If they were shareable then they would be either physical (first person shareable) or mathematical (third person shareable) properties. That they are not shareable is one reason why there is so much confusion surrounding them. They are private to each person. > What does it mean to say something knows something is red? > What does it mean to say something knows there is an itch on the back of their hand? How is it different to know what your redness is like? > It's not possible to know what another's red is like. What is a 'referent' of the word redness?" > The mind state that is perceiving red.
Jason > > > > > > > On Fri, Mar 24, 2023 at 9:14?PM Rafal Smigrodzki via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023 at 11:00?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> I agree, you could say mathematical truth exists outside language. But >>> one thing this does highlight is there are many things you know about >>> despite never having that object in hand to point and look at. You've never >>> seen the abstract object '2'. You've never seen the core of the earth, or >>> an electron. We lack sensory access to these things and so everything we >>> know about them we know only through language. How do we come to understand >>> things like '2' or electrons? >>> >> >> ### Just as importantly, how do you know you have an "object" in hand, or >> for that matter, that you have a hand? >> >> Our brain is just a big mess of neural networks, doing hybrid >> analog-digital computing and a small part of it, usually in the left >> hemisphere, is in charge of creating language output from the other, >> non-verbal activity going all around it. The direct referents for >> language are the non-verbal data patterns that code for e.g. apples, which >> through many layers of neural nets eventually can be traced to >> actual apples.... same as in an LLM, which has the layers that output >> grammar and syntax, connected to layers that represent objects, connected >> to layers that represent relationships and concepts, >> and in the case of multimodal LLMs like GPT4, connected to layers that >> parse photo or video data. >> >> The cognitive part of the human mind is recapitulated and expanded on in >> LLMs. The emotional and goal-seeking part of the human mind has a different >> structure from the cognitive part and so far (probably) >> has no counterpart in LLMs, for now. >> >> Rafal >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Mar 25 19:04:09 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 25 Mar 2023 14:04:09 -0500 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: I think you mean interpretations of sensations. bill w On Sat, Mar 25, 2023 at 12:34?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Can anyone give a better (non circular) definition of a quality? >> >> >> A quality is a particular configuration of particle interactions. We do >> not know which configuration of particle interactions corresponds to your >> perception of red because that must be experimentally determined. >> > > Close, but No, this is still circular. "perception" is an abstracting > process that requires senses, like eyes or ears. It is a long chain of > causal physical properties, none of which need to be 'redness' but all can > be interpreted as representing 'redness' with a dictionary. 
Perceptions > are just interpretations of interpretations, none of which define what > redness means. > [image: 3_functionally_equal_machines_tiny.png] > > All 3 of these systems can perceive 'red'. But only the first to know > what redness is like (the second one's redness is like your greenness.) > > For each point of conscious knowledge on the surface of that strawberry, > the first one has something in its brain that has a redness quality. The > second one has the same, the difference is, each point has something that > has your greenness quality. The 3rd one has a bunch of pixel > representations, each of which is represented by something abstract, like > the word 'red'. > > You don't 'perceive' redness, you directly apprehend it as a quality of > your computationally bound conscious knowledge. A quality is a physical > property of which conscious experience is composed. Redness is the final > result of perception. > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From gsantostasi at gmail.com Sat Mar 25 19:49:55 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 12:49:55 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: Brent, Science has abandoned since Galileo the concept of QUALITY. It is a stupid idea because it is incredibly vague. This is what kept Ancient Greek natural philosophy behind. They would say this type of movement has a "violent" quality, that is meaningless and vague. Galileo insisted on using the language of nature, that is mathematics, to describe how the world works and today after only a few hundred years we are talking about intelligent machines we created using Galileo's program for understanding reality. Qualia (that is in a sense of a small chunk of a qualitative experience according to philosophers) is a silly idea as it was the idea of Water, Air, Fire and Earth qualities of matter making up the world. Giovanni On Sat, Mar 25, 2023 at 5:43?AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Fri, Mar 24, 2023 at 8:13?PM Rafal Smigrodzki via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> The entire idea of qualia is ridiculous >>>> >>> >>> OK, then. Answer me this, What is it that has the redness quality in >>> your brain, when you look at this? >>> [image: red.png] >>> Nobody knows that yet. >>> >> >> ### But Brent, neuroscientists have known that for decades. The color >> rosettes in the fusiform gyrus code for this quality, they are the neural >> implementation of basic color recognition, with additional higher level >> processing in the angular gyrus and other areas. >> >> We know color. >> > > You don't see any problems in what you are saying here? 
> For example, what, exactly, do you mean by "code for this quality"? > > This is just a circular definition that never actually gets to any > definition. > > I see no evidence that anyone on this list understands what a quality is. > Only saying something circular like anything can "code for this quality" > to me, proves you don't know what a quality is. > Can anyone give a better (non circular) definition of a quality? > > A "code" for a quality, is something different than that quality. > > A physical greenness quality, or a physical +5 volts, or a physical pit on > a paper tape, or a word 'red' can all "code" for a redness quality, but > only if you have a dictionary, which tells you that something that is not a > redness quality represents redness. > > Redness is a physical fact. A "code" representing that physical fact, is > not the physical fact. You need a dictionary to 'decode' the 'code' back > to the physical redness quality. > > But, again, nobody here seems to understand what that redness quality is, > other than a circular definition (not a definition) like: "a code for > redness"? > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL:
From gsantostasi at gmail.com Sat Mar 25 20:07:38 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 13:07:38 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: *Quality: a distinctive attribute or characteristic possessed by someone or something.*
Imagine there are 2 things, for example 2 strawberries with a redness quality. Can we say one strawberry is more red than the other? Yes, we can. We could for example measure using a spectrometer the spectrum of the light coming from the strawberry when exposed to natural light and sum the intensity of the peaks in a given region that we agree corresponds to red (that choice is going to be arbitrary anyway). One strawberry would be more red than the other for sure. We can organize a bunch of strawberries in that way according to their redness "quality". If one is worried about machine vs human perception we can ask people to do the ordering of the strawberries in terms of their redness (and probably find out that it is not identical but very similar to the one done by the machine). Anyway, the purpose of the above exercise is to make a claim that what we call a quality is in fact a quantity that can be measured. The word quality at most tells us what we are trying to measure (the intensity in a certain range of light frequencies). Notice that there is nothing intrinsic in this quality; it is associated with the characteristics of the object. Now if you are interested in the subjective experience of red, you can do exactly the same experiment by measuring some kind of correlate that corresponds to the perception of red, which, as has been pointed out, is understood pretty well from a neuroscience point of view.
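As a minimal sketch of the ordering exercise above (everything here is invented for illustration: the wavelength grid, the fake spectra standing in for spectrometer readings, and the choice of 620-750 nm as the band we agree to call "red"):

import numpy as np

wavelengths = np.arange(380, 751)  # visible range, in nm

def redness_score(spectrum, band=(620, 750)):
    """Sum the measured intensity inside the band we agreed to call 'red'."""
    mask = (wavelengths >= band[0]) & (wavelengths <= band[1])
    return float(spectrum[mask].sum())

def fake_spectrum(centre_nm, width_nm=40.0):
    """Stand-in for a spectrometer reading: a broad bump around one hue."""
    return np.exp(-0.5 * ((wavelengths - centre_nm) / width_nm) ** 2)

berries = {
    "berry_A": fake_spectrum(660),  # deep red
    "berry_B": fake_spectrum(630),  # orange-red
    "berry_C": fake_spectrum(560),  # greenish
}

ranking = sorted(berries, key=lambda name: redness_score(berries[name]),
                 reverse=True)
print(ranking)  # most to least red: ['berry_A', 'berry_B', 'berry_C']

The same ordering could then be collected from human raters and compared with the machine's, for example with a rank correlation, which is the machine-vs-human check mentioned above.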
The complication here would be that individuals respond similarly to stimuli but with noticeable individual differences, and by doing this experiment we would also come to understand that variation in individual response given the same stimuli. This is all there is to understand about red from a scientific point of view. If one insists that there is an "explanation gap", it is because they don't understand how science works and what science is all about. It is a map of the universe, not the universe. A map is not useful when it represents the territory 1-to-1. What Brent is complaining about is that the map is not the territory (we cannot EVOKE the experience by using science). Of course it is not, and it should not be, because if it did that it would not be useful as a model of the world. Models are not necessarily true, but they need to be useful to be good models. Giovanni
On Sat, Mar 25, 2023 at 10:34 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sat, Mar 25, 2023 at 9:00 AM Gadersd via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Can anyone give a better (non circular) definition of a quality? >> >> >> A quality is a particular configuration of particle interactions. We do >> not know which configuration of particle interactions corresponds to your >> perception of red because that must be experimentally determined. >> > > Close, but No, this is still circular. "perception" is an abstracting > process that requires senses, like eyes or ears. It is a long chain of > causal physical properties, none of which need to be 'redness' but all can > be interpreted as representing 'redness' with a dictionary. Perceptions > are just interpretations of interpretations, none of which define what > redness means. > [image: 3_functionally_equal_machines_tiny.png] > > All 3 of these systems can perceive 'red'. But only the first to know > what redness is like (the second one's redness is like your greenness.) > > For each point of conscious knowledge on the surface of that strawberry, > the first one has something in its brain that has a redness quality. The > second one has the same, the difference is, each point has something that > has your greenness quality. The 3rd one has a bunch of pixel > representations, each of which is represented by something abstract, like > the word 'red'. > > You don't 'perceive' redness, you directly apprehend it as a quality of > your computationally bound conscious knowledge. A quality is a physical > property of which conscious experience is composed. Redness is the final > result of perception. > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL:
From gadersd at gmail.com Sat Mar 25 20:18:01 2023 From: gadersd at gmail.com (Gadersd) Date: Sat, 25 Mar 2023 16:18:01 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: > Close, but No, this is still circular. "perception" is an abstracting process that requires senses, like eyes or ears. The process is the experience.
> On Mar 25, 2023, at 1:32 PM, Brent Allsop via extropy-chat wrote: > > > > On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat > wrote: >> Can anyone give a better (non circular) definition of a quality? > > A quality is a particular configuration of particle interactions. We do not know which configuration of particle interactions corresponds to your perception of red because that must be experimentally determined. > > Close, but No, this is still circular. "perception" is an abstracting process that requires senses, like eyes or ears. It is a long chain of causal physical properties, none of which need to be 'redness' but all can be interpreted as representing 'redness' with a dictionary. Perceptions are just interpretations of interpretations, none of which define what redness means. > <3_functionally_equal_machines_tiny.png> > > All 3 of these systems can perceive 'red'. But only the first to know what redness is like (the second one's redness is like your greenness.) > > For each point of conscious knowledge on the surface of that strawberry, the first one has something in its brain that has a redness quality. The second one has the same, the difference is, each point has something that has your greenness quality. The 3rd one has a bunch of pixel representations, each of which is represented by something abstract, like the word 'red'. > > You don't 'perceive' redness, you directly apprehend it as a quality of your computationally bound conscious knowledge. A quality is a physical property of which conscious experience is composed. Redness is the final result of perception. > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sat Mar 25 20:22:29 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 13:22:29 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: Brent is making a big difference between the "*direct"* perception of red of a person and the derived recognition of a red stimuli by a robot (using the RED word to describe that). But there is nothing direct about our perception of red. It is also a derived experience. There are several steps to go from the electromagnetic vibration of light to the recognition of red in a human. At each step there is exactly a translation from a dictionary, in a sense. It starts with the receptors in the retina that "translate" the chemical reaction happening in a given receptor to a particular code based on neuron spiking. This signal is transmitted to different parts of the brain to be processed and redirected to other regions of the brain to be further processed. At each step, there is a sort of translation made of neurotransmitters and electrical impulses. Yes, it is marvelous in the end we perceive something that we recognize as red. It is the mystery of consciousness but it is not a mystery from a scientific point of view (we understand most of the components and it is just a matter of putting everything together in a coherent whole) but from an existential point of view. Red feels as something because it is the way for the brain to tell us something is happening. How else would it do it? 
If it whispered the word "RED" it would feel also as something (of course an absurd idea because the brain has no idea of English a priori but it can and it does know how to manipulate neurons that create sensations). This doesn't happen only with colors but basically any bodily sensation, yes, it is fascinating we feel them and we aware of them but it is not science job to explain how this happens besides what it is already doing and explain the chain of event to make this happen. I have the FEELING that Brent is asking for science to make us FEEL red by listening how experience of red is processed by the brain. But that is not what science is about. On Sat, Mar 25, 2023 at 12:05?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think you mean interpretations of sensations. bill w > > On Sat, Mar 25, 2023 at 12:34?PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Can anyone give a better (non circular) definition of a quality? >>> >>> >>> A quality is a particular configuration of particle interactions. We do >>> not know which configuration of particle interactions corresponds to your >>> perception of red because that must be experimentally determined. >>> >> >> Close, but No, this is still circular. "perception" is an abstracting >> process that requires senses, like eyes or ears. It is a long chain of >> causal physical properties, none of which need to be 'redness' but all can >> be interpreted as representing 'redness' with a dictionary. Perceptions >> are just interpretations of interpretations, none of which define what >> redness means. >> [image: 3_functionally_equal_machines_tiny.png] >> >> All 3 of these systems can perceive 'red'. But only the first to know >> what redness is like (the second one's redness is like your greenness.) >> >> For each point of conscious knowledge on the surface of that strawberry, >> the first one has something in its brain that has a redness quality. The >> second one has the same, the difference is, each point has something that >> has your greenness quality. The 3rd one has a bunch of pixel >> representations, each of which is represented by something abstract, like >> the word 'red'. >> >> You don't 'perceive' redness, you directly apprehend it as a quality of >> your computationally bound conscious knowledge. A quality is a physical >> property of which conscious experience is composed. Redness is the final >> result of perception. >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From gsantostasi at gmail.com Sat Mar 25 20:27:19 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 13:27:19 -0700 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: *You don't 'perceive' redness, you directly apprehend it as a quality of your computationally bound conscious knowledge.* That is not true at all. There is nothing direct about "apprehending" redness. The perception of red by the brain is a translation of an electromagnetic wave properties (its wavelength and intensity in a given region of the light spectrum) into the language of the brain that is physical sensations. There is no difference from what happens in the computer. On Sat, Mar 25, 2023 at 1:22?PM Giovanni Santostasi wrote: > Brent is making a big difference between the "*direct"* perception of red > of a person and the derived recognition of a red stimuli by a robot (using > the RED word to describe that). But there is nothing direct about our > perception of red. It is also a derived experience. There are several steps > to go from the electromagnetic vibration of light to the recognition of red > in a human. At each step there is exactly a translation from a dictionary, > in a sense. It starts with the receptors in the retina that "translate" the > chemical reaction happening in a given receptor to a particular code based > on neuron spiking. This signal is transmitted to different parts of the > brain to be processed and redirected to other regions of the brain to be > further processed. At each step, there is a sort of translation made of > neurotransmitters and electrical impulses. > Yes, it is marvelous in the end we perceive something that we recognize as > red. It is the mystery of consciousness but it is not a mystery from a > scientific point of view (we understand most of the components and it is > just a matter of putting everything together in a coherent whole) but from > an existential point of view. Red feels as something because it is the way > for the brain to tell us something is happening. How else would it do it? > If it whispered the word "RED" it would feel also as something (of course > an absurd idea because the brain has no idea of English a priori but it can > and it does know how to manipulate neurons that create sensations). This > doesn't happen only with colors but basically any bodily sensation, yes, it > is fascinating we feel them and we aware of them but it is not science job > to explain how this happens besides what it is already doing and explain > the chain of event to make this happen. I have the FEELING that Brent is > asking for science to make us FEEL red by listening how experience of red > is processed by the brain. But that is not what science is about. > > > > > > On Sat, Mar 25, 2023 at 12:05?PM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think you mean interpretations of sensations. bill w >> >> On Sat, Mar 25, 2023 at 12:34?PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Can anyone give a better (non circular) definition of a quality? >>>> >>>> >>>> A quality is a particular configuration of particle interactions. We do >>>> not know which configuration of particle interactions corresponds to your >>>> perception of red because that must be experimentally determined. >>>> >>> >>> Close, but No, this is still circular. 
"perception" is an abstracting >>> process that requires senses, like eyes or ears. It is a long chain of >>> causal physical properties, none of which need to be 'redness' but all can >>> be interpreted as representing 'redness' with a dictionary. Perceptions >>> are just interpretations of interpretations, none of which define what >>> redness means. >>> [image: 3_functionally_equal_machines_tiny.png] >>> >>> All 3 of these systems can perceive 'red'. But only the first to know >>> what redness is like (the second one's redness is like your greenness.) >>> >>> For each point of conscious knowledge on the surface of that strawberry, >>> the first one has something in its brain that has a redness quality. The >>> second one has the same, the difference is, each point has something that >>> has your greenness quality. The 3rd one has a bunch of pixel >>> representations, each of which is represented by something abstract, like >>> the word 'red'. >>> >>> You don't 'perceive' redness, you directly apprehend it as a quality of >>> your computationally bound conscious knowledge. A quality is a physical >>> property of which conscious experience is composed. Redness is the final >>> result of perception. >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From jasonresch at gmail.com Sat Mar 25 20:33:44 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 25 Mar 2023 15:33:44 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: On Sat, Mar 25, 2023 at 12:03?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Fri, Mar 24, 2023 at 2:16?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Fri, Mar 24, 2023, 3:00 PM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Would anyone dare to give a definition of 'understanding'? >>> Please cite what epistemologies you are using. bill w >>> >> >> The difficulty is that "understanding" is adjacent to knowledge and >> knowledge is adjacent to consciousness. All these are quite difficult to >> define but I will attempt by best: >> >> "Understanding" is knowledge concerning the relations or workings of >> something. (just why do we just 'standing under' something to represent >> knowledge?) >> > Good question, I was not aware of the etymology of this (nor did I even think to question the obvious connection to those two words), but I found this: https://en.wiktionary.org/wiki/understand#Etymology It seems the basic idea of "stand under" is to suggest subordinacy to another thing. In this case, the understanding derives from the higher principle that is understood, so it the understanding "stands below" that thing which is understood. 
> >> "Consciousness" is possession of knowledge. (since no creature has a >> blank mind, then all are conscious?) >> > I think so, yes. > >> "Knowledge" is a true belief. (true according to what epistemology? >> empiricism? authorities? intuition? reason?) >> > This definition of knowledge traces its roots to Plato's Theaetetus. > >> "Belief" I have great difficulty defining, but I would say it is a mind >> state correlated with some proposition. (I would say that it is >> something we think of as knowledge but not based on empiricism but rather >> on faith) >> > In many cases, but not all. For example: the belief that the sun will rise tomorrow could be said to be based on empiricism, but it is still a belief. > >> "Truth" is undefinable, as proved by Tarski. (again, true is something >> we accept according to our personal epistemology) >> > Without getting into whether there are objective truths or not, whatever truth is, even within rigorously defined mathematical frameworks, is not something that can be given by a definition. From G?del, we know that true statements are not just those that can be proven, so "provable" is not sufficient to define truth. How then are we to define true statements? Things only get messier if we attempt to extend definitions of truth beyond mathematics. > >> >> I don't regard these as corrections, but just my ideas. bill w >> >> I appreciate them, thank you. :-) Jason > >> >>> >>> On Fri, Mar 24, 2023 at 1:40?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> >>>> On Fri, Mar 24, 2023 at 1:21?PM Gordon Swobe via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>> >>>>>> But really the meaning of words are quite arbitrary and determined >>>>>> by >>>>>> the people who use them. Thus the referential meanings of words >>>>>> evolve >>>>>> and change over time and come to refer to different things >>>>> >>>>> >>>>> I agree this is a reason for many human miscommunications, but the >>>>> speaker understands his words to meaning *something* and the hearer >>>>> understands those words to mean *something*. >>>>> >>>>> As a computational linguist, Bender is on our side. She is obviously >>>>> very excited about the progress these language models represent, but is >>>>> reminding that the models do not actually understand words to mean anything >>>>> whatsoever. >>>>> >>>>> >>>> >>>> What's her evidence of that? >>>> >>>> Jason >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Mar 25 20:34:04 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 25 Mar 2023 15:34:04 -0500 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: That is not true at all. There is nothing direct about "apprehending" redness. The perception of red by the brain is a translation of an electromagnetic wave properties (its wavelength and intensity in a given region of the light spectrum) into the language of the brain that is physical sensations. There is no difference from what happens in the computer. Except: The AI will 'see' red. 
A person might see red and from memory label it as a Chinese red, cerise, the color of the shirt you wore yesterday, and think that it is your least favorite hue of red. Really unlimited perceptions here. No emotions or memories to add to the perception by the AI. Bill w The On Sat, Mar 25, 2023 at 3:27?PM Giovanni Santostasi wrote: > *You don't 'perceive' redness, you directly apprehend it as a quality of > your computationally bound conscious knowledge.* > > That is not true at all. There is nothing direct about "apprehending" > redness. The perception of red by the brain is a translation of an > electromagnetic wave properties (its wavelength and intensity in a given > region of the light spectrum) into the language of the brain that is > physical sensations. > There is no difference from what happens in the computer. > > > > > > > > On Sat, Mar 25, 2023 at 1:22?PM Giovanni Santostasi > wrote: > >> Brent is making a big difference between the "*direct"* perception of >> red of a person and the derived recognition of a red stimuli by a robot >> (using the RED word to describe that). But there is nothing direct about >> our perception of red. It is also a derived experience. There are several >> steps to go from the electromagnetic vibration of light to the recognition >> of red in a human. At each step there is exactly a translation from a >> dictionary, in a sense. It starts with the receptors in the retina that >> "translate" the chemical reaction happening in a given receptor to a >> particular code based on neuron spiking. This signal is transmitted to >> different parts of the brain to be processed and redirected to other >> regions of the brain to be further processed. At each step, there is a sort >> of translation made of neurotransmitters and electrical impulses. >> Yes, it is marvelous in the end we perceive something that we recognize >> as red. It is the mystery of consciousness but it is not a mystery from a >> scientific point of view (we understand most of the components and it is >> just a matter of putting everything together in a coherent whole) but from >> an existential point of view. Red feels as something because it is the way >> for the brain to tell us something is happening. How else would it do it? >> If it whispered the word "RED" it would feel also as something (of course >> an absurd idea because the brain has no idea of English a priori but it can >> and it does know how to manipulate neurons that create sensations). This >> doesn't happen only with colors but basically any bodily sensation, yes, it >> is fascinating we feel them and we aware of them but it is not science job >> to explain how this happens besides what it is already doing and explain >> the chain of event to make this happen. I have the FEELING that Brent is >> asking for science to make us FEEL red by listening how experience of red >> is processed by the brain. But that is not what science is about. >> >> >> >> >> >> On Sat, Mar 25, 2023 at 12:05?PM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think you mean interpretations of sensations. bill w >>> >>> On Sat, Mar 25, 2023 at 12:34?PM Brent Allsop via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> >>>> On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> Can anyone give a better (non circular) definition of a quality? 
>>>>> >>>>> >>>>> A quality is a particular configuration of particle interactions. We >>>>> do not know which configuration of particle interactions corresponds to >>>>> your perception of red because that must be experimentally determined. >>>>> >>>> >>>> Close, but No, this is still circular. "perception" is an abstracting >>>> process that requires senses, like eyes or ears. It is a long chain of >>>> causal physical properties, none of which need to be 'redness' but all can >>>> be interpreted as representing 'redness' with a dictionary. Perceptions >>>> are just interpretations of interpretations, none of which define what >>>> redness means. >>>> [image: 3_functionally_equal_machines_tiny.png] >>>> >>>> All 3 of these systems can perceive 'red'. But only the first to know >>>> what redness is like (the second one's redness is like your greenness.) >>>> >>>> For each point of conscious knowledge on the surface of that >>>> strawberry, the first one has something in its brain that has a redness >>>> quality. The second one has the same, the difference is, each point has >>>> something that has your greenness quality. The 3rd one has a bunch of >>>> pixel representations, each of which is represented by something abstract, >>>> like the word 'red'. >>>> >>>> You don't 'perceive' redness, you directly apprehend it as a quality of >>>> your computationally bound conscious knowledge. A quality is a physical >>>> property of which conscious experience is composed. Redness is the final >>>> result of perception. >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From gsantostasi at gmail.com Sat Mar 25 20:34:19 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 13:34:19 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: Ben, Exactly, does anybody remember that famous book about Proust's Madeleine? Eventually the author got in trouble for several reasons but the main idea of the book is still valid. Even a memory of an experience can bring as strong sensations as the experience itself. Think about that and what it implies in terms of how real an apple really is. It is not. I agree. On Sat, Mar 25, 2023 at 1:41?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Reading these conversations over the last few days, it has struck me > that some people keep referring to 'real' things, usually using the word > 'referents' (e.g. an apple), as though our brains had direct access to > them and could somehow just know what they are. > > But we don't. > > Think about it, what is "An Apple"? 
> > It's a term that we associate with a large set of sensory and memory > data, including language data, but mostly things like visual, textural, > taste, smell, emotional, etc., data stored as memories. > > Seeing as we all have different memories associated with the label "An > Apple" (because some of us were sick the first time we ate one, some of > us are allergic to something in apples, some of us have a greater > impression of sweetness, or sourness, when we eat one, some of us once > discovered a maggot in one, some people have only ever eaten Granny > Smiths, others only Braeburns, or Crab Apples, and so on and so on...), > then 'An Apple' is a different thing to each of us. > > There is no spoon! Er, Apple. There is no Apple! > Not as a 'real-world thing'. > > "An Apple" is an abstract concept that, despite the individual > differences, most of us can agree on, because there are a lot of common > features for each of us, such as general shape, some common colours, a > set of smells and tastes, how we can use them, where we get them from, > and so on.. The concept is represented internally, and communicated > externally (to other people) by a linguistic label, that refers, for > each of us, to this large bunch of data extracted from our senses and > memories: "Una Manzana". > > It's all 'nothing but' Data. Yet we all think that we 'understand' what > an Apple is. Based purely on this data in our brains (because we have > access to nothing else). > > So this idea of a label having 'a referent' seems false to me. Labels > (data in our heads) refer to a big set of data (in our heads). Where the > data comes from is secondary, diverse, and quite distant, when you trace > the neural pathways back to a large and disparate set of incoming > sensory signals, scattered over space and time. The meaning is created > in our minds, not resident in a single object in the outside world. > > This is my understanding of things, anyway. > > Ben > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sat Mar 25 20:37:10 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 13:37:10 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: *The Chinese Room argument is garbage because a magic book with the answers to every question isn't real, and if it was, it would already be a mind. *Yep, basically the description of a chinese room is exactly what our brain is, with the neurons taking the place of the people in the room. By the time the Chinese room can answer as a sentient being then room is a mind. Not sure why this argument was ever taken seriously. On Sat, Mar 25, 2023 at 6:25?AM Will Steinberg via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The Chinese Room argument is garbage because a magic book with the answers > to every question isn't real, and if it was, it would already be a mind. > > I find that often thought experiments with shoddy bounds fail hard. The > bound here is the beginning of the experiment, where the situation is > already magically in front of us. Where did the book come from? How was > it created? 
> > Of course it's easy to write out the words for a thought experiment when > you invent an object, central to the experiment but of course not the > subject of it, that magically does exactly what you need it to do in order > to make the experiment. A thought experiment could still have this book in > it but it should be the center of the experiment > > On Fri, Mar 24, 2023, 5:49 AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe >> wrote: >> >>> >>> >>> On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> >>>> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >>>> and come away with the impression that GPT-4 has no idea what it is talking >>>> about. >>>> >>> >>> Hmm, nothing in the abstract even remotely suggests to me that GPT-4 >>> will know word meanings any more than does GPT-3. Eventually AI on digital >>> computers will far surpass human intelligence, but even then these >>> computers will be manipulating the forms of words and not their meanings. >>> >> >> It seems to me that you have accepted Searle's arguments. I believe I can >> knock down his Chinese room argument. If that is what you are using to base >> your decision on you should know almost no philosophers or computer >> scientists believe his argument holds water. Here's just one of the many >> flaws in the argument: there's more than one mind in the room. Ask the room >> about its favorite food, or about its experiences as a child. The answers >> given will not be Searle's. Change Searle for someone else, the room will >> respond the same way. Searle is an interchangeable cog in the machine. Yet >> Searle wants us to believe only his opinion matters. In truth, his position >> is no different than the "laws of physics" which "mindlessly" computes our >> evolving brain state "without any understanding" of what goes on in our >> heads. Searle's Chinese room argument works as any great magic trick does: >> through misdirection. Ignore the claims made by the man in the room who is >> shouting and waving his arms. Since we've established there are two minds >> in the room, we can replace Searle with a mindless demon and there still >> will be one mind left. >> >> >> >>> Do you believe, like my friend who fell in love with a chatbot, that a >>> software application can have genuine feelings of love for you? >>> >> >> I think we should defer such a debate until such time we can confidently >> define what a "genuine feeling" is and how to implement one. >> >> Jason >> >>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sat Mar 25 20:40:30 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 13:40:30 -0700 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: *No emotions or memories to add to the perception by the AI.* These emotions can be added (they are codes anyway in our brain too) and same for memories. They are not there by design for several reasons (cost, fear the AI would become too human like and so on). There is nothing fundamentally different. On Sat, Mar 25, 2023 at 1:34?PM William Flynn Wallace wrote: > That is not true at all. There is nothing direct about "apprehending" > redness. The perception of red by the brain is a translation of an > electromagnetic wave properties (its wavelength and intensity in a given > region of the light spectrum) into the language of the brain that is > physical sensations. > There is no difference from what happens in the computer. > > Except: The AI will 'see' red. A person might see red and from memory > label it as a Chinese red, cerise, the color of the shirt you wore > yesterday, and think that it is your least favorite hue of red. Really > unlimited perceptions here. No emotions or memories to add to the > perception by the AI. Bill w > The > > > On Sat, Mar 25, 2023 at 3:27?PM Giovanni Santostasi > wrote: > >> *You don't 'perceive' redness, you directly apprehend it as a quality of >> your computationally bound conscious knowledge.* >> >> That is not true at all. There is nothing direct about "apprehending" >> redness. The perception of red by the brain is a translation of an >> electromagnetic wave properties (its wavelength and intensity in a given >> region of the light spectrum) into the language of the brain that is >> physical sensations. >> There is no difference from what happens in the computer. >> >> >> >> >> >> >> >> On Sat, Mar 25, 2023 at 1:22?PM Giovanni Santostasi < >> gsantostasi at gmail.com> wrote: >> >>> Brent is making a big difference between the "*direct"* perception of >>> red of a person and the derived recognition of a red stimuli by a robot >>> (using the RED word to describe that). But there is nothing direct about >>> our perception of red. It is also a derived experience. There are several >>> steps to go from the electromagnetic vibration of light to the recognition >>> of red in a human. At each step there is exactly a translation from a >>> dictionary, in a sense. It starts with the receptors in the retina that >>> "translate" the chemical reaction happening in a given receptor to a >>> particular code based on neuron spiking. This signal is transmitted to >>> different parts of the brain to be processed and redirected to other >>> regions of the brain to be further processed. At each step, there is a sort >>> of translation made of neurotransmitters and electrical impulses. >>> Yes, it is marvelous in the end we perceive something that we recognize >>> as red. It is the mystery of consciousness but it is not a mystery from a >>> scientific point of view (we understand most of the components and it is >>> just a matter of putting everything together in a coherent whole) but from >>> an existential point of view. Red feels as something because it is the way >>> for the brain to tell us something is happening. How else would it do it? >>> If it whispered the word "RED" it would feel also as something (of course >>> an absurd idea because the brain has no idea of English a priori but it can >>> and it does know how to manipulate neurons that create sensations). 
This >>> doesn't happen only with colors but basically any bodily sensation, yes, it >>> is fascinating we feel them and we aware of them but it is not science job >>> to explain how this happens besides what it is already doing and explain >>> the chain of event to make this happen. I have the FEELING that Brent is >>> asking for science to make us FEEL red by listening how experience of red >>> is processed by the brain. But that is not what science is about. >>> >>> >>> >>> >>> >>> On Sat, Mar 25, 2023 at 12:05?PM William Flynn Wallace via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> I think you mean interpretations of sensations. bill w >>>> >>>> On Sat, Mar 25, 2023 at 12:34?PM Brent Allsop via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> >>>>> On Sat, Mar 25, 2023 at 9:00?AM Gadersd via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> Can anyone give a better (non circular) definition of a quality? >>>>>> >>>>>> >>>>>> A quality is a particular configuration of particle interactions. We >>>>>> do not know which configuration of particle interactions corresponds to >>>>>> your perception of red because that must be experimentally determined. >>>>>> >>>>> >>>>> Close, but No, this is still circular. "perception" is an >>>>> abstracting process that requires senses, like eyes or ears. It is a long >>>>> chain of causal physical properties, none of which need to be 'redness' but >>>>> all can be interpreted as representing 'redness' with a dictionary. >>>>> Perceptions are just interpretations of interpretations, none of which >>>>> define what redness means. >>>>> [image: 3_functionally_equal_machines_tiny.png] >>>>> >>>>> All 3 of these systems can perceive 'red'. But only the first to know >>>>> what redness is like (the second one's redness is like your greenness.) >>>>> >>>>> For each point of conscious knowledge on the surface of that >>>>> strawberry, the first one has something in its brain that has a redness >>>>> quality. The second one has the same, the difference is, each point has >>>>> something that has your greenness quality. The 3rd one has a bunch of >>>>> pixel representations, each of which is represented by something abstract, >>>>> like the word 'red'. >>>>> >>>>> You don't 'perceive' redness, you directly apprehend it as a quality >>>>> of your computationally bound conscious knowledge. A quality is a >>>>> physical property of which conscious experience is composed. Redness is >>>>> the final result of perception. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> extropy-chat mailing list >>>>> extropy-chat at lists.extropy.org >>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>>> >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From jasonresch at gmail.com Sat Mar 25 20:50:11 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 25 Mar 2023 15:50:11 -0500 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Sat, Mar 25, 2023 at 3:24?PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Brent is making a big difference between the "*direct"* perception of red > of a person and the derived recognition of a red stimuli by a robot (using > the RED word to describe that). But there is nothing direct about our > perception of red. It is also a derived experience. There are several steps > to go from the electromagnetic vibration of light to the recognition of red > in a human. At each step there is exactly a translation from a dictionary, > in a sense. It starts with the receptors in the retina that "translate" the > chemical reaction happening in a given receptor to a particular code based > on neuron spiking. This signal is transmitted to different parts of the > brain to be processed and redirected to other regions of the brain to be > further processed. At each step, there is a sort of translation made of > neurotransmitters and electrical impulses. > Yes, it is marvelous in the end we perceive something that we recognize as > red. It is the mystery of consciousness but it is not a mystery from a > scientific point of view (we understand most of the components and it is > just a matter of putting everything together in a coherent whole) but from > an existential point of view. Red feels as something because it is the way > for the brain to tell us something is happening. How else would it do it? > If it whispered the word "RED" it would feel also as something (of course > an absurd idea because the brain has no idea of English a priori but it can > and it does know how to manipulate neurons that create sensations). This > doesn't happen only with colors but basically any bodily sensation, yes, it > is fascinating we feel them and we aware of them but it is not science job > to explain how this happens besides what it is already doing and explain > the chain of event to make this happen. I have the FEELING that Brent is > asking for science to make us FEEL red by listening how experience of red > is processed by the brain. But that is not what science is about. > > > Some relevant quotations on this subject: "[Intellect:] By convention there is sweet, by convention there is bitter, by convention there is color; in actuality only atoms and the void. [Senses:] "Poor Intellect, do you hope to defeat us while from us you borrow your evidence? Your victory is your defeat." -- Democritus in "Fragment 9 and 125 (~420 B.C.) "I think that tastes, odors, colors, and so on are no more than mere names so far as the object in which we locate them are concerned, and that they reside in consciousness. Hence if the living creature were removed, all these qualities would be wiped away and annihilated." -- Galileo Galilei in "The Assayer " (1623) ?For the rays, to speak properly, are not colored. In them there is nothing else than a certain power and disposition to stir up a sensation of this or that color.? -- Isaac Newton in "Opticks " (1704) ?We suppose that a physical process starts from a visible object, travels to the eye, there changes into another physical process, causes yet another physical process in the optic nerve, and finally produces some effects in the brain, simultaneously with which we see the object from which the process started, the seeing being something ?mental,? 
totally different in character from the physical processes, which preceded and accompany it. This view is so queer that metaphysicians have invented all sorts of theories designed to substitute something less incredible.? -- Bertrand Russell in ?An Outline of Philosophy ? (1927) ?If you ask a physicist what is his idea of yellow light, he will tell you that it is transversal electromagnetic waves of a wavelength in the neighbourhood of 590 millimicrons. If you ask him; But where does yellow come in? He will say: In my picture not at all, but these kinds of vibrations, when they hit the retina of a healthy eye, give the person whose eye it is the sensation of yellow.? [...] ?The sensation of colour cannot be accounted for by the physicist?s objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so. We could at best attain an objective knowledge of what nerve fibres are excited and in what proportion, perhaps even to know exactly the processes they produce in certain brain cells?whenever our mind registers the sensation of yellow in a particular direction or domain of our field of vision. But even such intimate knowledge would not tell us anything about the sensation of colour.? -- Mind and Matter - Erwin Schr?dinger (1958) ?An electron is neither red nor blue nor any other colour; the same holds for the proton, the nucleus of the hydrogen atom. But the union of the two in the atom of hydrogen, according to the physicist, produces electromagnetic radiation of a certain discrete array of wavelengths. The homogenous constituents of this radiation, when separated by a prism or an optical grating, stimulate in an observer the sensations of red, green, blue, violet by the intermediary of certain physiological processes, whose general character is sufficiently well known to assert that they are not red or green or blue, in fact that the nervous elements in question display no colour in virtue of their being stimulated; the white or gray the nerve cells exhibit whether stimulated or not is certainly insignificant in respect of the colour sensation which, in the individual whose nerves they are, accompanies their excitation.? -- Mind and Matter - Erwin Schr?dinger (1958) "So how do I know that you experience the same thing when you talk about redness? Perhaps you experience red the way I experience blue, and vice versa. How can we test our assumptions that we experience these qualities the same way? Indeed, we do know there are some differences. Since I have what is misleadingly labeled ?red-green? color-blindness, there are shades of color that appear identical to me that appear different to others. Those of you without this disability apparently have a different experience than I do. What are you all experiencing? I?ll never know." -- Ray Kurzweil in "The Age of Spiritual Machines " (1999) "The great progress of neuroscience in understanding the mechanisms underlying color experience has been unable to provide the color-blind with *any* insight?zero, nothing, nada?into what it?s like to have color experience, and there is no reason at all to think this will change when a few more details are added." -- Phillip Goff in "Galileo?s Error " (2019) Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gsantostasi at gmail.com Sat Mar 25 21:13:24 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 14:13:24 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: This view is so queer that metaphysicians have invented all sorts of theories designed to substitute something less incredible.? -- Bertrand Russell in ?An Outline of Philosophy ? (1927) Jason, yes great quotes on the topic and very consistent among scientific great minds over the centuries. It is clear to a scientific mind what this is all about but the metaphysicians still like to make it confusing because their position in the end is basically a religious one (the main goal is to show humans are special and made in God's image, whatever that means). On Sat, Mar 25, 2023 at 1:37?PM Giovanni Santostasi wrote: > > *The Chinese Room argument is garbage because a magic book with the > answers to every question isn't real, and if it was, it would already be a > mind. *Yep, basically the description of a chinese room is exactly what > our brain is, with the neurons taking the place of the people in the room. > By the time the Chinese room can answer as a sentient being then room is a > mind. Not sure why this argument was ever taken seriously. > > On Sat, Mar 25, 2023 at 6:25?AM Will Steinberg via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> The Chinese Room argument is garbage because a magic book with the >> answers to every question isn't real, and if it was, it would already be a >> mind. >> >> I find that often thought experiments with shoddy bounds fail hard. The >> bound here is the beginning of the experiment, where the situation is >> already magically in front of us. Where did the book come from? How was >> it created? >> >> Of course it's easy to write out the words for a thought experiment when >> you invent an object, central to the experiment but of course not the >> subject of it, that magically does exactly what you need it to do in order >> to make the experiment. A thought experiment could still have this book in >> it but it should be the center of the experiment >> >> On Fri, Mar 24, 2023, 5:49 AM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe >>> wrote: >>> >>>> >>>> >>>> On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> >>>>> There's no way to read this paper: >>>>> https://arxiv.org/pdf/2303.12712.pdf and come away with the >>>>> impression that GPT-4 has no idea what it is talking about. >>>>> >>>> >>>> Hmm, nothing in the abstract even remotely suggests to me that GPT-4 >>>> will know word meanings any more than does GPT-3. Eventually AI on digital >>>> computers will far surpass human intelligence, but even then these >>>> computers will be manipulating the forms of words and not their meanings. >>>> >>> >>> It seems to me that you have accepted Searle's arguments. I believe I >>> can knock down his Chinese room argument. If that is what you are using to >>> base your decision on you should know almost no philosophers or computer >>> scientists believe his argument holds water. Here's just one of the many >>> flaws in the argument: there's more than one mind in the room. Ask the room >>> about its favorite food, or about its experiences as a child. The answers >>> given will not be Searle's. 
Change Searle for someone else, the room will >>> respond the same way. Searle is an interchangeable cog in the machine. Yet >>> Searle wants us to believe only his opinion matters. In truth, his position >>> is no different than the "laws of physics" which "mindlessly" computes our >>> evolving brain state "without any understanding" of what goes on in our >>> heads. Searle's Chinese room argument works as any great magic trick does: >>> through misdirection. Ignore the claims made by the man in the room who is >>> shouting and waving his arms. Since we've established there are two minds >>> in the room, we can replace Searle with a mindless demon and there still >>> will be one mind left. >>> >>> >>> >>>> Do you believe, like my friend who fell in love with a chatbot, that a >>>> software application can have genuine feelings of love for you? >>>> >>> >>> I think we should defer such a debate until such time we can confidently >>> define what a "genuine feeling" is and how to implement one. >>> >>> Jason >>> >>>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Sat Mar 25 21:29:02 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sat, 25 Mar 2023 15:29:02 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= Message-ID: I mentioned Emily Pender in another thread. She is Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at University of Washington. In the other thread, I made the mistake of introducing her with her Octopus thought experiment which I soon realized from the responses here is easily misinterpreted outside of the context of her general thesis and the academic paper in which she introduced it. As I learned from this interview, she and her colleague Koller wrote that paper in response to a twitter debate in which she found herself arguing with non-linguists who insist that language models understand language. Like me, she is critical of such claims. She considers them "hype." The relevant material starts at the 26 minute mark. https://www.youtube.com/watch?v=VaxNN3YRhBA -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Sat Mar 25 21:57:10 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sat, 25 Mar 2023 15:57:10 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: * Bender, not Pender "It is really important to distinguish between the word as a sequence of characters as opposed to word in a sense of a pairing of form and meaning." - Bender These language models are trained only on the forms of words, the sequences of characters, not on the meanings. -gts On Sat, Mar 25, 2023 at 3:29?PM Gordon Swobe wrote: > I mentioned Emily Pender in another thread. She is Professor of > Linguistics and Faculty Director of the Master's Program in Computational > Linguistics at University of Washington. 
> > In the other thread, I made the mistake of introducing her with her > Octopus thought experiment which I soon realized from the responses here is > easily misinterpreted outside of the context of her general thesis and the > academic paper in which she introduced it. > > As I learned from this interview, she and her colleague Koller wrote that > paper in response to a twitter debate in which she found herself arguing > with non-linguists who insist that language models understand language. > Like me, she is critical of such claims. She considers them "hype." > > The relevant material starts at the 26 minute mark. > > https://www.youtube.com/watch?v=VaxNN3YRhBA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Mar 25 22:47:10 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 25 Mar 2023 17:47:10 -0500 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: Hi Gordon, Thanks for sharing this video. I watched and and found the following points of interest: *1. She said they can't possibly be understanding as they are only seeing a sequence of characters and predicting distributions and what these models do is not the same thing as understanding language.* My Reply: These models demonstrate many emergent capabilities that were not things that were programmed in or planned. They can answer questions, summarize texts, translate languages, write programs, etc. All these abilities emerged purely from being trained on the single task of predicting text. Given this, can we be certain that "understanding" is not another one of the emergent capabilities manifested by the LLM? *2. She uses the analogy that the LLM looking at characters would be the same as a human who doesn't understand Cherokee looking at Cherokee characters.* My Reply: This is reminiscent of Searle's Chinese Room. The error is looking at the behavior of the computer only at the lowest level, while ignoring the goings-on at the higher levels. She sweeps all possible behavior of a computer under the umbrella of "symbol manipulation", but anything computable can be framed under "symbol manipulation" if described on that level (including what atoms, or neurons in the human brain do). This therefore fails as an argument that no understanding exists in the higher-level description of the processing performed by the computer program. *3. She was asked what a machine would have to do to convince her they have understanding. Her example was that if Siri or Alexa were asked to do something in the real world, like turn on the lights, and if it does that, then it has understanding (by virtue of having done something in the real world).* My Reply: Perhaps she does not see the analogy between turning on or off a light, and the ability of an LLM to output characters to a monitor as interacting in the real world (turning on and off many thousands of pixels on the user's monitor as they read the reply). *4. She admits her octopus test is exactly like the Turing test. She claims the hyper-intelligent octopus would be able to send some pleasantries and temporarily fool the other person, but that it has no real understanding and this would be revealed if there were any attempt to communicate about any real ideas.* My Reply: I think she must be totally unaware of the capabilities of recent models like GPT-4 to come to a conclusion like this. *5. 
The interviewer pushes back and says he has learned a lot about math, despite not seeing or experiencing mathematical objects. And has graded a blind student's paper which appeared to show he was able to visualize objects in math, despite not being sighted. She says the octopus never learned language, we acquired a linguistic system, but the hyper intelligent octopus has not, and that all the octopus has learned is language distribution patterns.* My Reply: I think the crucial piece missing from her understanding of LLMs is that the only way for them to achieve the levels of accuracy in the text that they predict is by constructing internal mental models of reality. That is the only way they can answer hypotheticals concerning novel situations described to them, or for example, to play chess. The only way to play chess with a LLM is if it is internally constructing a model of the board and pieces. It cannot be explained in terms of mere patterns or distributions of language. Otherwise, the LLM would be as likely to guess any potential move rather than an optimal move, and one can readily guarantee a chess board position that has never before appeared in the history of the universe, we can know the LLM is not relying on memory. *6. The Interviewer asks what prevents the octopus from learning language over time as a human would? She says it requires joint-attention: seeing some object paired with some word at the same time.* My Reply: Why can't joint attention manifest as the co-occurrence of words as they appear within a sentence, paragraph, or topic of discussion? *7. The interviewer asks do you think there is some algorithm that could possibly exist that could take a stream of words and understand them in that sense? She answers yes, but that would require programming in from the start the structure and meanings of the words and mapping them to a model of the world, or providing the model other sensors or imagery. The interviewer confirms: "You are arguing that just consuming language without all this extra stuff, that no algorithm could just from that, really understand language? She says that's right.* My Reply: We already know that these models build maps of things corresponding to reality in their head. See, for example, the paper I shared where the AI was given a description of how rooms were connected to each other, then the AI was able to visually draw the layout of the room from this textual description. If that is not an example of understanding, I don't know what possibly could be. Note also: this was an early model of GPT-4 before it had been trained on images, it was purely trained on text. *8. She says, imagine that you are dropped into the middle of the Thai library of congress and you have any book you could possibly want but only in Thai. Could you learn Thai? The Interviewer says: I think so. She asks: What would you first do, where would you start? She adds if you just have form, that's not going to give you information. She then says she would have to find an encyclopedia or a translation of a book we know.* My Reply: We know there is information (objectively) in the Thai library, even if there were no illustrations or copies of books we had the translations to. We know the Thai library contains scruitable information because the text is compressible. If text is compressible it means there are discoverable patterns in the text which can be exploited to reduce the amount of bits needed to represent it. All our understanding can be viewed as forms of compression. 
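As a quick, hedged illustration of that compressibility point (nothing specific to Thai, just the principle): feed a general-purpose compressor some natural-language text and an equal-length block of random bytes, and the text shrinks while the noise does not. That difference is an operational sign that the text carries discoverable structure. A minimal sketch using only the Python standard library; the sample passage and the repetition factor are arbitrary choices for the demo:

import os
import zlib

# Stand-in for a passage from the library; any natural-language text will do.
# (The deliberate repetition exaggerates the effect; ordinary prose typically
# compresses to roughly a third of its size with zlib, random bytes not at all.)
text = ("All our understanding can be viewed as forms of compression. "
        "Patterns that repeat let us say more with fewer bits. ") * 50
text_bytes = text.encode("utf-8")

# Random bytes of the same length: no exploitable structure.
noise = os.urandom(len(text_bytes))

for label, data in [("text ", text_bytes), ("noise", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: {len(data)} bytes, compressed ratio {ratio:.2f}")

Whenever a model lets you regenerate the data from fewer bits, some structure of the source has been captured; that is the sense in which compression and understanding shade into each other.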
For example, the physical laws that we have discovered "compress" the amount of information we need to store about the universe. Moreover, when compression works by constructing an internal toy model of reality, we can play with and permute the inputs to the model to see how it behaves under different situations. This provides a genuine understanding of the outer world from which our sensory inputs are based. I believe the LLM has successfully done this to predict text, it has various internal, situational models it can deploy to help it in predicting text. Having these models and knowing when and how to use them, I argue, is tantamount to understanding. Jason On Sat, Mar 25, 2023 at 4:30?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I mentioned Emily Pender in another thread. She is Professor of > Linguistics and Faculty Director of the Master's Program in Computational > Linguistics at University of Washington. > > In the other thread, I made the mistake of introducing her with her > Octopus thought experiment which I soon realized from the responses here is > easily misinterpreted outside of the context of her general thesis and the > academic paper in which she introduced it. > > As I learned from this interview, she and her colleague Koller wrote that > paper in response to a twitter debate in which she found herself arguing > with non-linguists who insist that language models understand language. > Like me, she is critical of such claims. She considers them "hype." > > The relevant material starts at the 26 minute mark. > > https://www.youtube.com/watch?v=VaxNN3YRhBA > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sat Mar 25 23:33:27 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 25 Mar 2023 16:33:27 -0700 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: In a scenario where there is no dictionary or translation available and only the statistical patterns between words in the books can be used, a computer with significant computational power could potentially learn the language to some extent. This process would involve using unsupervised machine learning techniques and analyzing the co-occurrence patterns of words within the texts. However, the resulting understanding may not be complete or entirely accurate. Here are some steps the computer might take to learn Thai from the books: 1. Identify recurring patterns: Analyze the text corpus to find recurring patterns, such as common phrases, word sequences, or character combinations. These patterns may provide clues to the structure of the language, such as word boundaries, grammatical rules, or common sentence structures. 2. Cluster similar words: Using unsupervised learning techniques, such as clustering algorithms, the computer can group words that frequently occur in similar contexts. This can help identify words that have similar meanings or serve similar functions in sentences. 3. Infer grammar rules: By analyzing the patterns and relationships between words, the computer might be able to infer some basic grammar rules of Thai. For example, it may identify word order patterns, agreement structures, or tense markers. 4. 
Create a semantic network: Based on the relationships between words and their contexts, the computer can start building a semantic network that represents the relationships between different words and their meanings. This network can help identify synonyms, antonyms, and other semantic relationships between words. 5. Refine understanding: As the computer processes more text, it can refine its understanding of the language, improving its ability to identify patterns and relationships between words. This process can continue iteratively, with each cycle providing a more accurate understanding of the language. While these steps may allow the computer to derive some meaning from the text, there are limitations to this approach. Without any prior knowledge or context about the Thai language, culture, or the specific content of the books, the computer's understanding of the language may be incomplete or contain inaccuracies. Additionally, some aspects of the language, such as idiomatic expressions or culturally-specific references, may be difficult or impossible for the computer to learn solely from statistical patterns. In summary, while a computer can potentially learn some aspects of a language through statistical analysis of text, the resulting understanding may be limited and not entirely accurate. Access to dictionaries, translations, or other linguistic resources would greatly improve the accuracy and completeness of the language learning process. [image: Giovanni Santostasi] Is there a real life example of the above in real life, in particular in the attempt to understand a dead language? Yes, there have been attempts to use computational methods to decipher dead languages, such as Linear B, Ugaritic, and most famously, the Indus script. While some of these efforts have produced insights into the structure and content of these languages, many challenges remain, and the decipherment of these scripts is still an ongoing process. One of the most successful examples of deciphering a dead language is the work of Michael Ventris and John Chadwick on Linear B, an ancient script used in Bronze Age Crete and Greece. Linear B was deciphered in 1952 by Ventris, an architect and amateur linguist, who used a combination of statistical analysis, pattern recognition, and knowledge of ancient Greek to crack the code. Ventris and Chadwick's work on Linear B laid the foundation for modern computational methods in decipherment. In the case of the Indus script, which is still undeciphered, researchers have employed computational methods to analyze the statistical patterns in the script and gain insights into its possible structure and content. For example, Rajesh P.N. Rao, an Indian-American computer scientist, has used machine learning techniques, such as Markov models and clustering algorithms, to analyze the Indus script and compare it to other known languages. While these efforts have not yet led to a complete decipherment, they have provided valuable insights into the possible structure of the script and have helped to inform further research. While there has been progress in using computational methods to decipher dead languages, it is important to note that these methods have limitations. In many cases, a deep understanding of the cultural, historical, and linguistic context is essential for accurate decipherment. Furthermore, deciphering a dead language often requires collaboration between researchers from various fields, including linguistics, archaeology, history, and computer science. 
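As a toy illustration of the "cluster similar words" idea described above (a minimal sketch, not code from any of the decipherment projects mentioned): build co-occurrence counts from raw, untranslated text and compare words by the contexts in which they appear.

from collections import Counter, defaultdict
from math import sqrt

def cooccurrence_vectors(tokens, window=2):
    """Map each word to counts of the words seen within `window` positions of it."""
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A stand-in corpus; in the scenarios above this would be raw text in an
# unknown script, with no dictionary available.
tokens = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the mouse . the dog chased the ball ."
).split()

vectors = cooccurrence_vectors(tokens)

# "cat" and "dog" occur in similar contexts, so their vectors come out similar,
# even though nothing here tells the program what either word refers to.
print(cosine(vectors["cat"], vectors["dog"]))  # higher
print(cosine(vectors["cat"], vectors["on"]))   # lower

Words with similar context vectors can then be grouped, which is the purely distributional foothold these approaches rely on; whether such a foothold ever amounts to understanding is, of course, the question being debated in this thread.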
In summary, computational methods have been employed in attempts to decipher dead languages, and in some cases, have provided valuable insights. However, the decipherment process is complex and requires a combination of computational techniques, expert knowledge, and interdisciplinary collaboration.

On Sat, Mar 25, 2023 at 3:49 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Hi Gordon,
>
> Thanks for sharing this video. I watched it and found the following
> points of interest:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gsantostasi at gmail.com Sat Mar 25 23:39:17 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Sat, 25 Mar 2023 16:39:17 -0700
Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?=
In-Reply-To: References: Message-ID:

How does a linguist not know about the decoding of Linear B? Point 8 on Jason's list is exactly what happened with Linear B. LLMs actually have a much easier task, given that they are pre-trained with supervised data rather than the completely unsupervised approach that was used when people tried to decipher Linear B.

Giovanni

On Sat, Mar 25, 2023 at 3:49 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Hi Gordon,
>
> Thanks for sharing this video. I watched it and found the following
> points of interest:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gordon.swobe at gmail.com Sun Mar 26 03:59:11 2023
From: gordon.swobe at gmail.com (Gordon Swobe)
Date: Sat, 25 Mar 2023 21:59:11 -0600
Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?=
In-Reply-To: References: Message-ID:

Professor Bender pulls no punches in her tweets to her twitter followers, of which I am now one.

"Love to see how people complain about "criticism lobbed at LLMs" &c. Folks: the criticism isn't of the models. It's of the people (often on behalf of corporations) making false claims abt the models for the purposes of profit, consolidating power, and feeding their AGI fantasies. Your LLMs aren't in need of protecting. They don't have feelings. They aren't little baby proto-AGIs in need of nurturing."

Ha.

https://twitter.com/emilymbender/status/1639640791084838914?s=20

-gts

On Sat, Mar 25, 2023 at 3:29 PM Gordon Swobe wrote:

> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gordon.swobe at gmail.com Sun Mar 26 06:01:22 2023
From: gordon.swobe at gmail.com (Gordon Swobe)
Date: Sun, 26 Mar 2023 00:01:22 -0600
Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?=
In-Reply-To: References: Message-ID:

On Sat, Mar 25, 2023 at 4:49 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Hi Gordon,
>
> Thanks for sharing this video. I watched it and found the following
> points of interest:
>
> *1. She said they can't possibly be understanding as they are only seeing
> a sequence of characters and predicting distributions and what these models
> do is not the same thing as understanding language.*
> My Reply: These models demonstrate many emergent capabilities that were
> not things that were programmed in or planned. They can answer questions,
> summarize texts, translate languages, write programs, etc. All these
> abilities emerged purely from being trained on the single task of
> predicting text.
Given this, can we be certain that "understanding" is not > another one of the emergent capabilities manifested by the LLM? > This gets into philosophical debate about what, exactly, are emergent properties. As I understand the term, whatever it is that emerges is somehow hidden but intrinsic prior to the emergence. For example, from the rules of chess there emerge many abstract properties and strategies of chess. To someone naive about chess, it is difficult to imagine from the simple rules of chess how chess looks to a grandmaster, but those emergent properties are inherent in and follow logically from the simple rules of chess. So how does meaning emerge from mere symbols(words)? Sequences of abstract characters in no possible way contain the seeds of their meanings, as we can see by the fact that many different words exist in different languages and in entirely different alphabets for the same meaning. > *2. She uses the analogy that the LLM looking at characters would be the > same as a human who doesn't understand Cherokee looking at Cherokee > characters.* > My Reply: This is reminiscent of Searle's Chinese Room. The error is > looking at the behavior of the computer only at the lowest level, while > ignoring the goings-on at the higher levels. She sweeps all possible > behavior of a computer under the umbrella of "symbol manipulation", but > anything computable can be framed under "symbol manipulation" if described > on that level (including what atoms, or neurons in the human brain do). > This therefore fails as an argument that no understanding exists in the > higher-level description of the processing performed by the computer > program. > Yes her argument is similar to Searle's. See above. Sequences of characters (words) in no possible way contain the hidden seeds of their meanings, as we can see from the fact that many different words exist in different languages and alphabets for the same meaning. *3. She was asked what a machine would have to do to convince her they have > understanding. Her example was that if Siri or Alexa were asked to do > something in the real world, like turn on the lights, and if it does that, > then it has understanding (by virtue of having done something in the real > world).* > My Reply: Perhaps she does not see the analogy between turning on or off a > light, and the ability of an LLM to output characters to a monitor as > interacting in the real world (turning on and off many thousands of pixels > on the user's monitor as they read the reply). > I thought that was the most interesting part of her interview. She was using the word "understanding" in a more generous way than I would prefer to use it, even attributing "understanding" to a stupid app like Alexa, but she does not think GPT has understanding. I think she means it in exactly the way I do, which is why I put it in scare-quotes. As she put, it is a "kind of" understanding. As I wrote to you I think yesterday, I will grant that my pocket calculator "understands" how to do math, but it is not holding the meaning of those calculations in mind consciously, which is what I (and most everyone on earth) mean by understanding. Understanding involves the capacity to consciously hold something in mind. Otherwise, pretty much everything understands something and the word loses meaning. Does the automated windshield wiper mechanism in my car understand how to clear the rain off my windows when it starts raining? No, but I will grant that it "understands" it in scare-quotes. 
The other point I would make here is that even if we grant that turning the pixels off and on your screen makes GPT sentient or conscious, the real question is "how can it know the meanings of those pixel arrangements?" >From its point of view (so to speak) it is merely generating meaningless strings of text for which it has never been taught the meanings except via other meaningless strings of text. Bender made the point that language models have no grounding, which is something I almost mentioned yesterday in another thread. The symbol grounding problem in philosophy is about exactly this question. They are not grounded in the world of conscious experience like you and me. Or, if we think so, then that is to me something like a religious belief. > *4. She admits her octopus test is exactly like the Turing test. She > claims the hyper-intelligent octopus would be able to send some > pleasantries and temporarily fool the other person, but that it has no real > understanding and this would be revealed if there were any attempt to > communicate about any real ideas.* > My Reply: I think she must be totally unaware of the capabilities of > recent models like GPT-4 to come to a conclusion like this. > Again, no grounding. > *5. The interviewer pushes back and says he has learned a lot about math, > despite not seeing or experiencing mathematical objects. And has graded a > blind student's paper which appeared to show he was able to visualize > objects in math, despite not being sighted. She says the octopus never > learned language, we acquired a linguistic system, but the hyper > intelligent octopus has not, and that all the octopus has learned is > language distribution patterns.* > My Reply: I think the crucial piece missing from her understanding of LLMs > is that the only way for them to achieve the levels of accuracy in the text > that they predict is by constructing internal mental models of reality. > That is the only way they can answer hypotheticals concerning novel > situations described to them, or for example, to play chess. The only way > to play chess with a LLM is if it is internally constructing a model of the > board and pieces. It cannot be explained in terms of mere patterns or > distributions of language. Otherwise, the LLM would be as likely to guess > any potential move rather than an optimal move, and one can readily > guarantee a chess board position that has never before appeared in the > history of the universe, we can know the LLM is not relying on memory. > I don't dispute that LLMs construct internal models of reality, but I cough when you include the word "mental," as if they have minds with conscious awareness of their internal models. I agree that it is absolutely amazing what these LLMs can do and will do. The question is, how could they possibly know it any more than my pocket calculator knows the rules of mathematics or my watch knows the time? > > *6. The Interviewer asks what prevents the octopus from learning language > over time as a human would? She says it requires joint-attention: seeing > some object paired with some word at the same time.* > My Reply: Why can't joint attention manifest as the co-occurrence of words > as they appear within a sentence, paragraph, or topic of discussion? > Because those other words also have no meanings or refrents. There is no grounding and there is no Rosetta Stone. Bender co-authored another paper about "stochastic parrots," which is how she characterizes LLMs and which I like. 
These models are like parrots that mimic human language and understanding. It is amazing how talented they appear, but they are only parrots who have no idea what they are saying. > > *7. The interviewer asks do you think there is some algorithm that could > possibly exist that could take a stream of words and understand them in > that sense? She answers yes, but that would require programming in from the > start the structure and meanings of the words and mapping them to a model > of the world, or providing the model other sensors or imagery. The > interviewer confirms: "You are arguing that just consuming language without > all this extra stuff, that no algorithm could just from that, really > understand language? She says that's right.* > My Reply: We already know that these models build maps of things > corresponding to reality in their head. See, for example, the paper I > shared where the AI was given a description of how rooms were connected to > each other, then the AI was able to visually draw the layout of the room > from this textual description. If that is not an example of understanding, > I don't know what possibly could be. Note also: this was an early model of > GPT-4 before it had been trained on images, it was purely trained on text. > This goes back to the question about Alexa.Yes, if that is what you mean by "understanding" then I am forced to agree that even Alexa and Siri "understand" language. But, again, I must put it in scare quotes. There is nobody out there named Alexa who is actually aware of understanding anything. She exists only in a manner of speaking. > > *8. She says, imagine that you are dropped into the middle of the Thai > library of congress and you have any book you could possibly want but only > in Thai. Could you learn Thai? The Interviewer says: I think so. She asks: > What would you first do, where would you start? She adds if you just have > form, that's not going to give you information. She then says she would > have to find an encyclopedia or a translation of a book we know.* > My Reply: We know there is information (objectively) in the Thai library, > even if there were no illustrations or copies of books we had the > translations to. We know the Thai library contains scruitable information > because the text is compressible. If text is compressible it means there > are discoverable patterns in the text which can be exploited to reduce the > amount of bits needed to represent it. All our understanding can be viewed > as forms of compression. For example, the physical laws that we have > discovered "compress" the amount of information we need to store about the > universe. Moreover, when compression works by constructing an internal toy > model of reality, we can play with and permute the inputs to the model to > see how it behaves under different situations. This provides a genuine > understanding of the outer world from which our sensory inputs are based. I > believe the LLM has successfully done this to predict text, it has various > internal, situational models it can deploy to help it in predicting text. > Having these models and knowing when and how to use them, I argue, is > tantamount to understanding. > How could you possibly know what those "discoverable patterns of text" mean, given that they are in Thai and there is no Thai to English dictionary in the Thai library? As she points out and I mentioned above, there is no Rosetta Stone. Thanks for the thoughtful email. 
-gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Sun Mar 26 07:36:29 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Sun, 26 Mar 2023 08:36:29 +0100 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: Jason Resch wrote: > *3. She was asked what a machine would have to do to convince her they > have understanding. Her example was that?if Siri or Alexa were asked > to do something in the real world, like turn on the lights, and if it > does that, then it has understanding (by virtue of having done > something in the real world).* Wait a minute. So she thinks that smart home systems have understanding of what they're doing, but LLMs don't? I wonder how many Siris and Alexas are the voice interface for smart home systems? A lot, I expect. If she's right (which she's not, seems to be the consensus here), then all that needs to be done is link up a LLM to some smart home hardware, and 'ta-daaaa', instant understanding! I don't buy it. Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Sun Mar 26 08:56:23 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 02:56:23 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: I have a smart home. Some of the iPhone apps associated with it have and can display what could be described as internal models representing my home. Does this mean these apps have a conscious understanding of the layout of my home? No, I think not, not as I and most people use the word understand. Only minds can understand things, and despite my home being "smart," I reject the idea that it has a mind of its own. That is nothing more than foolish science-fiction.. -gts On Sun, Mar 26, 2023 at 12:01?AM Gordon Swobe wrote: > > > On Sat, Mar 25, 2023 at 4:49?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Hi Gordon, >> >> Thanks for sharing this video. I watched and and found the following >> points of interest: >> >> *1. She said they can't possibly be understanding as they are only seeing >> a sequence of characters and predicting distributions and what these models >> do is not the same thing as understanding language.* >> My Reply: These models demonstrate many emergent capabilities that were >> not things that were programmed in or planned. They can answer questions, >> summarize texts, translate languages, write programs, etc. All these >> abilities emerged purely from being trained on the single task of >> predicting text. Given this, can we be certain that "understanding" is not >> another one of the emergent capabilities manifested by the LLM? >> > > This gets into philosophical debate about what, exactly, are emergent > properties. As I understand the term, whatever it is that emerges is > somehow hidden but intrinsic prior to the emergence. For example, from the > rules of chess there emerge many abstract properties and strategies of > chess. To someone naive about chess, it is difficult to imagine from the > simple rules of chess how chess looks to a grandmaster, but those emergent > properties are inherent in and follow logically from the simple rules of > chess. > > So how does meaning emerge from mere symbols(words)? 
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gordon.swobe at gmail.com Sun Mar 26 14:52:07 2023
From: gordon.swobe at gmail.com (Gordon Swobe)
Date: Sun, 26 Mar 2023 08:52:07 -0600
Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?=
In-Reply-To: References: Message-ID:

Jason, I received a reply to my last to you, but before I dig into it, I notice that it does not appear addressed to ExI. I am happy to reply privately, but I think you probably meant it for the list.

In the meantime, I wanted to add to my comments about emergent properties: I have no argument with the idea that GPT might be exhibiting emergent properties -- I agree it certainly appears that way -- but I would say they are emergent properties of the grammatical relationships between and among words, not evidence of any understanding of the meanings. I think Bender actually touches on this subject in the interview but without actually using the term "emergent properties." It could very well be that something like this is one aspect of human intelligence, but I think we also understand the meanings.

-gts

On Sun, Mar 26, 2023 at 2:56 AM Gordon Swobe wrote:

> I have a smart home. [...]

-------------- next part --------------
An HTML attachment was scrubbed...
She uses the analogy that the LLM looking at characters would be the >>> same as a human who doesn't understand Cherokee looking at Cherokee >>> characters.* >>> My Reply: This is reminiscent of Searle's Chinese Room. The error is >>> looking at the behavior of the computer only at the lowest level, while >>> ignoring the goings-on at the higher levels. She sweeps all possible >>> behavior of a computer under the umbrella of "symbol manipulation", but >>> anything computable can be framed under "symbol manipulation" if described >>> on that level (including what atoms, or neurons in the human brain do). >>> This therefore fails as an argument that no understanding exists in the >>> higher-level description of the processing performed by the computer >>> program. >>> >> >> Yes her argument is similar to Searle's. See above. Sequences of >> characters (words) in no possible way contain the hidden seeds of their >> meanings, as we can see from the fact that many different words exist >> in different languages and alphabets for the same meaning. >> >> *3. She was asked what a machine would have to do to convince her they >>> have understanding. Her example was that if Siri or Alexa were asked to do >>> something in the real world, like turn on the lights, and if it does that, >>> then it has understanding (by virtue of having done something in the real >>> world).* >>> My Reply: Perhaps she does not see the analogy between turning on or off >>> a light, and the ability of an LLM to output characters to a monitor as >>> interacting in the real world (turning on and off many thousands of pixels >>> on the user's monitor as they read the reply). >>> >> >> I thought that was the most interesting part of her interview. She was >> using the word "understanding" in a more generous way than I would prefer >> to use it, even attributing "understanding" to a stupid app like Alexa, but >> she does not think GPT has understanding. I think she means it in exactly >> the way I do, which is why I put it in scare-quotes. As she put, it is a >> "kind of" understanding. As I wrote to you I think yesterday, I will grant >> that my pocket calculator "understands" how to do math, but it is >> not holding the meaning of those calculations in mind consciously, which is >> what I (and most everyone on earth) mean by understanding. >> >> Understanding involves the capacity to consciously hold something in >> mind. Otherwise, pretty much everything understands something and the word >> loses meaning. Does the automated windshield wiper mechanism in my car >> understand how to clear the rain off my windows when it starts raining? No, >> but I will grant that it "understands" it in scare-quotes. >> >> The other point I would make here is that even if we grant that turning >> the pixels off and on your screen makes GPT sentient or conscious, the real >> question is "how can it know the meanings of those pixel arrangements?" >> From its point of view (so to speak) it is merely generating meaningless >> strings of text for which it has never been taught the meanings except via >> other meaningless strings of text. >> >> Bender made the point that language models have no grounding, which is >> something I almost mentioned yesterday in another thread. The symbol >> grounding problem in philosophy is about exactly this question. They are >> not grounded in the world of conscious experience like you and me. Or, if >> we think so, then that is to me something like a religious belief. >> >> >> >>> *4. 
She admits her octopus test is exactly like the Turing test. She >>> claims the hyper-intelligent octopus would be able to send some >>> pleasantries and temporarily fool the other person, but that it has no real >>> understanding and this would be revealed if there were any attempt to >>> communicate about any real ideas.* >>> My Reply: I think she must be totally unaware of the capabilities of >>> recent models like GPT-4 to come to a conclusion like this. >>> >> >> Again, no grounding. >> >> >>> *5. The interviewer pushes back and says he has learned a lot about >>> math, despite not seeing or experiencing mathematical objects. And has >>> graded a blind student's paper which appeared to show he was able to >>> visualize objects in math, despite not being sighted. She says the octopus >>> never learned language, we acquired a linguistic system, but the hyper >>> intelligent octopus has not, and that all the octopus has learned is >>> language distribution patterns.* >>> My Reply: I think the crucial piece missing from her understanding of >>> LLMs is that the only way for them to achieve the levels of accuracy in the >>> text that they predict is by constructing internal mental models of >>> reality. That is the only way they can answer hypotheticals concerning >>> novel situations described to them, or for example, to play chess. The only >>> way to play chess with a LLM is if it is internally constructing a model of >>> the board and pieces. It cannot be explained in terms of mere patterns or >>> distributions of language. Otherwise, the LLM would be as likely to guess >>> any potential move rather than an optimal move, and one can readily >>> guarantee a chess board position that has never before appeared in the >>> history of the universe, we can know the LLM is not relying on memory. >>> >> >> I don't dispute that LLMs construct internal models of reality, but I >> cough when you include the word "mental," as if they have minds >> with conscious awareness of their internal models. >> >> I agree that it is absolutely amazing what these LLMs can do and will do. >> The question is, how could they possibly know it any more than my pocket >> calculator knows the rules of mathematics or my watch knows the time? >> >> >> >>> >>> *6. The Interviewer asks what prevents the octopus from learning >>> language over time as a human would? She says it requires joint-attention: >>> seeing some object paired with some word at the same time.* >>> My Reply: Why can't joint attention manifest as the co-occurrence of >>> words as they appear within a sentence, paragraph, or topic of discussion? >>> >> >> Because those other words also have no meanings or refrents. There is no >> grounding and there is no Rosetta Stone. >> >> Bender co-authored another paper about "stochastic parrots," which is how >> she characterizes LLMs and which I like. These models are like parrots that >> mimic human language and understanding. It is amazing how talented they >> appear, but they are only parrots who have no idea what they are saying. >> >> >>> >>> *7. The interviewer asks do you think there is some algorithm that could >>> possibly exist that could take a stream of words and understand them in >>> that sense? She answers yes, but that would require programming in from the >>> start the structure and meanings of the words and mapping them to a model >>> of the world, or providing the model other sensors or imagery. 
The >>> interviewer confirms: "You are arguing that just consuming language without >>> all this extra stuff, that no algorithm could just from that, really >>> understand language? She says that's right.* >>> My Reply: We already know that these models build maps of things >>> corresponding to reality in their head. See, for example, the paper I >>> shared where the AI was given a description of how rooms were connected to >>> each other, then the AI was able to visually draw the layout of the room >>> from this textual description. If that is not an example of understanding, >>> I don't know what possibly could be. Note also: this was an early model of >>> GPT-4 before it had been trained on images, it was purely trained on text. >>> >> >> This goes back to the question about Alexa.Yes, if that is what you mean >> by "understanding" then I am forced to agree that even Alexa and Siri >> "understand" language. But, again, I must put it in scare quotes. There is >> nobody out there named Alexa who is actually aware of understanding >> anything. She exists only in a manner of speaking. >> >> >>> >>> *8. She says, imagine that you are dropped into the middle of the Thai >>> library of congress and you have any book you could possibly want but only >>> in Thai. Could you learn Thai? The Interviewer says: I think so. She asks: >>> What would you first do, where would you start? She adds if you just have >>> form, that's not going to give you information. She then says she would >>> have to find an encyclopedia or a translation of a book we know.* >>> My Reply: We know there is information (objectively) in the Thai >>> library, even if there were no illustrations or copies of books we had the >>> translations to. We know the Thai library contains scruitable information >>> because the text is compressible. If text is compressible it means there >>> are discoverable patterns in the text which can be exploited to reduce the >>> amount of bits needed to represent it. All our understanding can be viewed >>> as forms of compression. For example, the physical laws that we have >>> discovered "compress" the amount of information we need to store about the >>> universe. Moreover, when compression works by constructing an internal toy >>> model of reality, we can play with and permute the inputs to the model to >>> see how it behaves under different situations. This provides a genuine >>> understanding of the outer world from which our sensory inputs are based. I >>> believe the LLM has successfully done this to predict text, it has various >>> internal, situational models it can deploy to help it in predicting text. >>> Having these models and knowing when and how to use them, I argue, is >>> tantamount to understanding. >>> >> >> How could you possibly know what those "discoverable patterns of text" >> mean, given that they are in Thai and there is no Thai to English >> dictionary in the Thai library? >> >> As she points out and I mentioned above, there is no Rosetta Stone. >> >> Thanks for the thoughtful email. >> >> -gts >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
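The compressibility claim in point 8 above can be checked directly: if a corpus really contains discoverable patterns, a general-purpose compressor will find some of them, whereas truly patternless data will not shrink at all. The following is a minimal sketch in Python (using only the standard zlib and os modules; the sample sentences are invented for illustration and are not from this exchange):

import os
import zlib

# Illustrative sketch only: toy data, not anything from the discussion above.
def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more discoverable structure."""
    return len(zlib.compress(data, 9)) / len(data)

# Text with repeating grammatical structure versus bytes with no structure at all.
structured = ("a cat is an animal. a dog is an animal. a turkey is a bird. " * 200).encode("utf-8")
patternless = os.urandom(len(structured))

print("structured text:", round(compression_ratio(structured), 3))   # far below 1.0
print("random bytes:   ", round(compression_ratio(patternless), 3))  # roughly 1.0

The same measurement could be run on a monolingual Thai corpus: the compressor knows nothing about Thai, yet any ratio below 1.0 is evidence that exploitable regularities exist in the text alone. Whether those regularities carry meaning is the separate question the two sides here dispute.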
URL: From jasonresch at gmail.com Sun Mar 26 14:58:16 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Mar 2023 10:58:16 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023, 10:05 AM Jason Resch wrote: > Hi Gordon, > > First I want to thank you again for taking the time to write such a > thoughtful reply to each of my points below. I include some follow up to > your responses in-line below. > > On Sun, Mar 26, 2023 at 1:01?AM Gordon Swobe > wrote: > >> >> >> On Sat, Mar 25, 2023 at 4:49?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Hi Gordon, >>> >>> Thanks for sharing this video. I watched and and found the following >>> points of interest: >>> >>> *1. She said they can't possibly be understanding as they are only >>> seeing a sequence of characters and predicting distributions and what these >>> models do is not the same thing as understanding language.* >>> My Reply: These models demonstrate many emergent capabilities that were >>> not things that were programmed in or planned. They can answer questions, >>> summarize texts, translate languages, write programs, etc. All these >>> abilities emerged purely from being trained on the single task of >>> predicting text. Given this, can we be certain that "understanding" is not >>> another one of the emergent capabilities manifested by the LLM? >>> >> >> This gets into philosophical debate about what, exactly, are emergent >> properties. As I understand the term, whatever it is that emerges is >> somehow hidden but intrinsic prior to the emergence. For example, from the >> rules of chess there emerge many abstract properties and strategies of >> chess. To someone naive about chess, it is difficult to imagine from the >> simple rules of chess how chess looks to a grandmaster, but those emergent >> properties are inherent in and follow logically from the simple rules of >> chess. >> > > That is a useful analogy I think. In the same way that an LLM is given a > corpus of text, and told "Go at it", consider the chess playing AI > AlphaZero. It was given absolutely no information about chess playing > strategies, *only the rules of chess*. And yet, within mere hours, it had > discovered all the common openings: > > "In Chess, for example, AlphaZero independently discovered and played > common human motifs during its self-play training such as openings, king > safety and pawn structure. But, being self-taught and therefore > unconstrained by conventional wisdom about the game, it also developed its > own intuitions and strategies adding a new and expansive set of exciting > and novel ideas that augment centuries of thinking about chess strategy." > > https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go > > > So given only the rules of the game, AlphaZero learned to play chess > better than any human, and moreover, better than any humans know how to > program computers to be. To me, this suggests that AlphaZero knows how to > play chess, and that it understands the game. If you say that AlphaZero > does not understand chess then I don't know what you mean by "understand," > as it must not be the same as the meaning as I use it. Would you say > AlphaZero does not understand how to play chess? > > >> >> So how does meaning emerge from mere symbols(words)? 
Sequences of >> abstract characters in no possible way contain the seeds of their meanings >> > > > If you are willing to grant that AlphaZero has learned how to play chess > merely from the rules of the game, then could an LLM, given only a corpus > of text, learn anything about the language? For example, could it pick up > on the structure, grammer, and interrelations of words? Could it learn how > to form proper sentences and compose meaningful replies in response to > prompts? > > I think you may be missing a crucial piece of understanding about how > neural networks work. They do not only see sequences of characters. As we > learned from experiments of Google's deep Dream system, we find that during > image recognition, each layer of the network sees and looks for something > different, and as the layers progress, the patterns they look for become > higher level and more complex. For example, at the lowest layer, it looks > only for edges. The next layer of the network looks for basic shapes in > those edges, lines, corners, curves. The next layer looks for and > recognizes certain combinations of those shapes within a particular region > to identify parts. The next layer above uses the recognized parts to > identify what objects are seen. (See: > https://distill.pub/2017/feature-visualization/ and > https://blog.google/technology/ai/understanding-inner-workings-neural-networks/ > ) > > None of this is particularly easy to follow and it took researchers many > years to even understand what neural networks do when they learn to > classify images. This is just some of the complex emergent behavior that we > get when we build networks of millions or billions of neurons and set them > loose to look for patterns. > > I think we could say something similar is happening in these language > models. True, the lowest layer sees only characters (or tokens). But the > next layer above that sees collections of tokens and looks for and > recognizes words. The layer above this might identify particular > grammatical phrases. The layer above this could recognize and operate on > sentences. The layer above this on paragraphs, and so on. So it would then > be incorrect to say that the LLM *only* sees sequences of characters. This > is true for only a small part of the network, and it ignores the > higher-level processing done by the later layers and stages of processing. > > >> , as we can see by the fact that many different words exist in different >> languages and in entirely different alphabets for the same meaning. >> > > That different languages can use different strings of characters to > represent the same concept only means that any single word in isolation is > insufficient to decipher the meaning of the word. Given a large enough body > of text, however, the constraints around any particular word's usage are > often enough to figure out what it means (we we have done for dead > languages, and as I showed would be possible if I had the barest > understanding of mathematics/physics and was given a dictionary or > wikipedia in another language). > > I think your strongest point is how do we bootstrap understanding starting > from "zero" with absolutely no initial understanding of the world at large. > This I have difficulty with, because I don't know. But I can try to guess. > I think the LLM can learn the rules of grammar of a language from enough > examples, I think also it can learn to pick out words from the string of > characters. 
Then, by focusing on patterns of words, think of children's > books, and short simple common sentences: "a cat is an animal" "a dog is an > animal" "a turkey is an animal", "a cat is a mammal" "a dog is a mammal" "a > turkey is a bird", "a cat has ears" a "cat has whiskers" etc. given enough > of these, a relationship map can be constructed and inferred. It could, > with enough of these examples, build up sets of classifications, for > example "cat ? mammal ? animal" and "turkey ? bird ? animal" even if it > doesn't at this stage know what a cat is or a tukey is, it know they are > both animals. And it knows that cats have ears and tukey's don't, and that > cats are something called mammals while turkeys are something called birds. > Its knowledge is woefully incomplete at this stage, but you can see how > this initial structure lays the foundation for later learning after > processing more sentences, learning that most birds fly, and therefore > turkeys probably fly. Learning that mammals nurse their young so cats must > nurse their young, and so on. The bootstrapping is the hardest part to > explain, and that it works at all (in humans as we know it does, and in > LLMs, as it seems they do) is nothing short of a kind of miracle. This is > not, however, to say it is not explainable or that it is magical, only that > it's a very complex process we know very little about. > > > >> >> >>> *2. She uses the analogy that the LLM looking at characters would be the >>> same as a human who doesn't understand Cherokee looking at Cherokee >>> characters.* >>> My Reply: This is reminiscent of Searle's Chinese Room. The error is >>> looking at the behavior of the computer only at the lowest level, while >>> ignoring the goings-on at the higher levels. She sweeps all possible >>> behavior of a computer under the umbrella of "symbol manipulation", but >>> anything computable can be framed under "symbol manipulation" if described >>> on that level (including what atoms, or neurons in the human brain do). >>> This therefore fails as an argument that no understanding exists in the >>> higher-level description of the processing performed by the computer >>> program. >>> >> >> Yes her argument is similar to Searle's. See above. Sequences of >> characters (words) in no possible way contain the hidden seeds of their >> meanings, as we can see from the fact that many different words exist >> in different languages and alphabets for the same meaning. >> > > I agree a word seen in isolation in now way conveys its meaning. However, > given enough examples of a word and how it is used, we learn to infer its > meaning. > > Think of the word "wisdom". You know what that word means, but no one has > ever pointed to a thing and said that thing right there, that's "wisdom". > Rather, from hundreds or thousands of examples of words phrases, said to > contain wisdom, you have inferred the meaning of the word. Note that this > was done merely from the statistical association between the wise words, > and occasionally seeing the word "wisdom" paired with those words. No > exemplar of "wisdom" is ever made available to your senses, as "wisdom" is > an abstract concept which itself exists only in patterns of words. > > >> >> *3. She was asked what a machine would have to do to convince her they >>> have understanding. 
Her example was that if Siri or Alexa were asked to do >>> something in the real world, like turn on the lights, and if it does that, >>> then it has understanding (by virtue of having done something in the real >>> world).* >>> My Reply: Perhaps she does not see the analogy between turning on or off >>> a light, and the ability of an LLM to output characters to a monitor as >>> interacting in the real world (turning on and off many thousands of pixels >>> on the user's monitor as they read the reply). >>> >> >> I thought that was the most interesting part of her interview. She was >> using the word "understanding" in a more generous way than I would prefer >> to use it, even attributing "understanding" to a stupid app like Alexa, but >> she does not think GPT has understanding. I think she means it in exactly >> the way I do, which is why I put it in scare-quotes. As she put, it is a >> "kind of" understanding. As I wrote to you I think yesterday, I will grant >> that my pocket calculator "understands" how to do math, but it is >> not holding the meaning of those calculations in mind consciously, which is >> what I (and most everyone on earth) mean by understanding. >> > > I agree with you here, that her use of "understand" is generous and > perhaps inappropriate for things like Siri or Alexa. I also agree with you > that the calculator, while it can do math, I would not say that it > understands math. Its understanding, if it could be said to have any at > all, would rest almost entirely in "understanding" what keys have been > pressed and which circuits to activate on which presses. > > >> >> >> Understanding involves the capacity to consciously hold something in >> mind. >> > > I agree with this definition. But while we both agree on this usage of the > word, I think I can explain why we disagree on whether LLMs can understand. > While I am willing to grant LLMs as having a mind and consciousness you are > not. So even when we use the same definition of "understand," the fact that > you do not accept the consciousness of LLMs means you are unwilling to > grant them understanding. Is this a fair characterization? > > >> Otherwise, pretty much everything understands something and the word >> loses meaning. Does the automated windshield wiper mechanism in my car >> understand how to clear the rain off my windows when it starts raining? No, >> but I will grant that it "understands" it in scare-quotes. >> >> The other point I would make here is that even if we grant that turning >> the pixels off and on your screen makes GPT sentient or conscious, the real >> question is "how can it know the meanings of those pixel arrangements?" >> From its point of view (so to speak) it is merely generating meaningless >> strings of text for which it has never been taught the meanings except via >> other meaningless strings of text. >> >> Bender made the point that language models have no grounding, which is >> something I almost mentioned yesterday in another thread. The symbol >> grounding problem in philosophy is about exactly this question. They are >> not grounded in the world of conscious experience like you and me. Or, if >> we think so, then that is to me something like a religious belief. >> > > Why is it a religious belief to believe LLMs have consciousness, but it is > not a religious belief to believe that other humans have consciousness? Or > is it not also a religious believe to believe only humans with their > squishy brains, but not computers can have minds (*or souls*))? 
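The bootstrapping idea sketched a few paragraphs back in this reply -- building relationship maps such as cat -> mammal -> animal out of nothing but simple sentences -- can be illustrated in a few lines of Python. This is a toy sketch only; the sentences and the "is a" pattern are invented for illustration and say nothing about how a real LLM is implemented:

import re
from collections import defaultdict

# Toy illustration: extract "X is a Y" facts from raw text and chain them.
SENTENCES = [
    "a cat is a mammal", "a dog is a mammal", "a turkey is a bird",
    "a mammal is an animal", "a bird is an animal",
]

parent = defaultdict(set)  # parent["cat"] == {"mammal"}, etc.
for s in SENTENCES:
    m = re.match(r"an? (\w+) is an? (\w+)", s)
    if m:
        child, category = m.groups()
        parent[child].add(category)

def ancestors(term: str) -> set:
    """Follow the 'is a' links upward, e.g. cat -> {mammal, animal}."""
    seen, frontier = set(), set(parent.get(term, ()))
    while frontier:
        nxt = frontier.pop()
        if nxt not in seen:
            seen.add(nxt)
            frontier |= parent.get(nxt, set())
    return seen

print(ancestors("cat"))     # {'mammal', 'animal'}
print(ancestors("turkey"))  # {'bird', 'animal'}

Nothing here is grounded in anything outside the text, yet the resulting structure already supports the simple inferences described above.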
> > "I conclude that other human beings have feelings like me, because, first, > they have bodies like me, which I know, in my own case, to be the > antecedent condition of feelings; and because, secondly, they exhibit the > acts, and other outward signs, which in my own case I know by experience to > be caused by feelings." > -- John Stewart Mill in "An Examination" (1865) > > > Our assumption that other humans are conscious rests on only two > observations: (1) other people are made of the same stuff, and (2) other > people behave as if they are conscious. In the case of LLMs, they are made > of similar stuff in one sense (quarks and electrons) though they are made > different stuff in another sense (silicon and plastic), in any case I do > not ascribe much importance to the material composition, what is important > to me is do they show evidence of intelligence, understanding, knowledge, > etc. if they do, then I am willing to grant these entities are conscious. I > hope that if silicon-based aliens came to earth and spoke to us, that you > would not deny their consciousness on the basis of (1), and would judge > their potential for consciousness purely on (2). In the end, the only basis > we have for judging the existence of other minds is by their behavior, > which is why the Turing test is about as good as we can do in addressing > the Problem of Other Minds. > > > >> >> >> >>> *4. She admits her octopus test is exactly like the Turing test. She >>> claims the hyper-intelligent octopus would be able to send some >>> pleasantries and temporarily fool the other person, but that it has no real >>> understanding and this would be revealed if there were any attempt to >>> communicate about any real ideas.* >>> My Reply: I think she must be totally unaware of the capabilities of >>> recent models like GPT-4 to come to a conclusion like this. >>> >> >> Again, no grounding. >> > > I think it is easy to come to a snap judgement and say there is no > grounding in words alone, but I think this stems from imagining a word, or > a sentence in isolation, where every word appears only once, where there is > only a single example of sentence structure. If however, you consider the > patterns in a large body of text, you can begin to see the internal > redundancy and rules begin to show themselves. For example, every word has > a vowel. Every sentence has a verb. Every word is separated by a space. > Most sentences have a subject verb and object. All these ideas are implicit > in the patterns of the text, so we cannot say there is no grounding > whatsoever, there is obviously this minimum amount of information implicit > in the text itself. Now ask yourself, might there be more? Might this > barest level of grounding provide enough to build up the next stage to > ground further meaning? For example, by observing that certain nouns only > appear as subjects in association with certain verbs implies certain nouns > have a certain limited repertoire, a defined potential for action. Or > observing the usage of words like "is" to infer the sets of properties or > classifications of particular things. > > > >> >> >>> *5. The interviewer pushes back and says he has learned a lot about >>> math, despite not seeing or experiencing mathematical objects. And has >>> graded a blind student's paper which appeared to show he was able to >>> visualize objects in math, despite not being sighted. 
She says the octopus >>> never learned language, we acquired a linguistic system, but the hyper >>> intelligent octopus has not, and that all the octopus has learned is >>> language distribution patterns.* >>> My Reply: I think the crucial piece missing from her understanding of >>> LLMs is that the only way for them to achieve the levels of accuracy in the >>> text that they predict is by constructing internal mental models of >>> reality. That is the only way they can answer hypotheticals concerning >>> novel situations described to them, or for example, to play chess. The only >>> way to play chess with a LLM is if it is internally constructing a model of >>> the board and pieces. It cannot be explained in terms of mere patterns or >>> distributions of language. Otherwise, the LLM would be as likely to guess >>> any potential move rather than an optimal move, and one can readily >>> guarantee a chess board position that has never before appeared in the >>> history of the universe, we can know the LLM is not relying on memory. >>> >> >> I don't dispute that LLMs construct internal models of reality, >> > > I am glad we are in agreement on this. I think this is crucial to explain > the kinds of behaviors that we have seen LLMs manifest. > > >> but I cough when you include the word "mental," as if they have minds >> with conscious awareness of their internal models. >> > > What do you think is required to have a mind and consciousness? Do you > think that no computer program could ever possess it, not even if it were > put in charge of an android/root body? > > > >> >> I agree that it is absolutely amazing what these LLMs can do and will do. >> The question is, how could they possibly know it any more than my pocket >> calculator knows the rules of mathematics or my watch knows the time? >> > > I would say by virtue of having many layers of processing which build up > to high-level interpretations. Consider that someone could phrase an > identical sentence: > > "I agree that it is absolutely amazing what these *human brains* can do > and will do. The question is, how could they possibly know it any more than > a *neuron* knows how to count or a *neocortical column* knows a pattern." > > > Consider: The Java programming language has only 256 instructions. Yet it > is possible to string these instructions together in a way that it is > possible to realize any potential program that can be written. Every > program of the roughly 3,000,000 in the Android Google Play Store is made > from some combination of these 256 instructions. This is the magic of > universality. It only takes a few simple rules, added together, to yield > behaviours of unlimited potential. You can build any logical > operation/circuit using just the boolean operations of AND, OR, NOT > (actually it can be done with just a single boolean operation NOT-AND > ). Likewise, any computation can > be performed just as a series of using *ONLY* multiplication and addition > operations. This is indeed an incredible property, but it has been proven. > Likewise, neural networks have been proven universal in another sense, see > the universal approximation theorem > . > > This is why some caution is needed for claims that "A neural network could > never do this" or "A computer could never do that". Because we already know > that neural networks and computers are architectures that are sufficiently > flexible to manifest *any possible behavior* that any machine of any kind > is capable of manifesting. > > > > >> >> >> >>> >>> *6. 
The Interviewer asks what prevents the octopus from learning >>> language over time as a human would? She says it requires joint-attention: >>> seeing some object paired with some word at the same time.* >>> My Reply: Why can't joint attention manifest as the co-occurrence of >>> words as they appear within a sentence, paragraph, or topic of discussion? >>> >> >> Because those other words also have no meanings or refrents. There is no >> grounding and there is no Rosetta Stone. >> > > But neither is there grounding or a Rosetta Stone when it comes to > language acquisition by children. You might say: well they receive a visual > stimulus concurrent with a word. But that too, is ultimately just a > statistical co-occurrence of ungrounded sensory inputs. > > >> >> Bender co-authored another paper about "stochastic parrots," which is how >> she characterizes LLMs and which I like. These models are like parrots that >> mimic human language and understanding. It is amazing how talented they >> appear, but they are only parrots who have no idea what they are saying. >> > > I could say Bender is a stochastic parrot, who mimics human language > understanding, and that I am amazed at how talented she appears, but I am > willing to attribute to her genuine understanding as evidenced by the > coherence of her demonstrated thought processes. She should be careful > though, not to force a test against LLMs which she herself could not pass: > proving that she has genuine understanding and is not herself a stochastic > parrot. > > >> >> >>> >>> *7. The interviewer asks do you think there is some algorithm that could >>> possibly exist that could take a stream of words and understand them in >>> that sense? She answers yes, but that would require programming in from the >>> start the structure and meanings of the words and mapping them to a model >>> of the world, or providing the model other sensors or imagery. The >>> interviewer confirms: "You are arguing that just consuming language without >>> all this extra stuff, that no algorithm could just from that, really >>> understand language? She says that's right.* >>> My Reply: We already know that these models build maps of things >>> corresponding to reality in their head. See, for example, the paper I >>> shared where the AI was given a description of how rooms were connected to >>> each other, then the AI was able to visually draw the layout of the room >>> from this textual description. If that is not an example of understanding, >>> I don't know what possibly could be. Note also: this was an early model of >>> GPT-4 before it had been trained on images, it was purely trained on text. >>> >> >> This goes back to the question about Alexa.Yes, if that is what you mean >> by "understanding" then I am forced to agree that even Alexa and Siri >> "understand" language. But, again, I must put it in scare quotes. There is >> nobody out there named Alexa who is actually aware of understanding >> anything. She exists only in a manner of speaking. >> > > This is far beyond what Alexa or Siri do. This proves that words alone are > sufficient for GPT-4 to construct a mathematical structure (a graph with > edges and vertices) which are consistent with the layout of rooms within > the house, as described *purely using words*. This proves that GPT-4 has > overcome the symbol grounding problem, as it understands exactly, how the > words map to meaning by creating a mathematical structure consistent with > the description provided to it. 
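To make the "mathematical structure (a graph with edges and vertices)" point concrete, here is a minimal Python sketch. The room names and sentence format are invented, and this hand-written parser is of course not what GPT-4 does internally; it only shows that a purely verbal description pins down a graph that can then be drawn or checked:

import re
from collections import defaultdict

# Toy illustration: a house described only in words.
DESCRIPTION = """
The kitchen connects to the hallway.
The hallway connects to the living room.
The living room connects to the study.
The hallway connects to the bathroom.
"""

# Build an undirected adjacency list from "X connects to Y" sentences.
adjacency = defaultdict(set)
for line in DESCRIPTION.strip().splitlines():
    m = re.match(r"The (.+?) connects to the (.+?)\.", line.strip())
    if m:
        a, b = m.groups()
        adjacency[a].add(b)
        adjacency[b].add(a)

for room, neighbours in sorted(adjacency.items()):
    print(f"{room}: {sorted(neighbours)}")
# e.g. hallway: ['bathroom', 'kitchen', 'living room']

Whether producing such a structure amounts to understanding is the point under dispute; the sketch shows only that the words by themselves fully determine the structure being talked about.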
> > Please see page 51 of this PDF: https://arxiv.org/pdf/2303.12712.pdf so > you know what I am talking about. This might be the most important and > convincing page in the document for the purposes of our discussion. > > >> >> >>> >>> *8. She says, imagine that you are dropped into the middle of the Thai >>> library of congress and you have any book you could possibly want but only >>> in Thai. Could you learn Thai? The Interviewer says: I think so. She asks: >>> What would you first do, where would you start? She adds if you just have >>> form, that's not going to give you information. She then says she would >>> have to find an encyclopedia or a translation of a book we know.* >>> My Reply: We know there is information (objectively) in the Thai >>> library, even if there were no illustrations or copies of books we had the >>> translations to. We know the Thai library contains scruitable information >>> because the text is compressible. If text is compressible it means there >>> are discoverable patterns in the text which can be exploited to reduce the >>> amount of bits needed to represent it. All our understanding can be viewed >>> as forms of compression. For example, the physical laws that we have >>> discovered "compress" the amount of information we need to store about the >>> universe. Moreover, when compression works by constructing an internal toy >>> model of reality, we can play with and permute the inputs to the model to >>> see how it behaves under different situations. This provides a genuine >>> understanding of the outer world from which our sensory inputs are based. I >>> believe the LLM has successfully done this to predict text, it has various >>> internal, situational models it can deploy to help it in predicting text. >>> Having these models and knowing when and how to use them, I argue, is >>> tantamount to understanding. >>> >> >> How could you possibly know what those "discoverable patterns of text" >> mean, given that they are in Thai and there is no Thai to English >> dictionary in the Thai library? >> > > Do you agree that the Thai language is compressible? That is to say, if > you took all the symbols and characters from the Thai library, and let's > say it came out to 1,000 GB, if we put it into WinZIP it would compress to > a smaller file, of let's say 200 GB? If you agree that a compression > algorithm would succeed in reducing the number of bits necessary to > represent the original Thai text, then this means there are patterns in the > text which a simple algorithm can discover and exploit to reduce the size > of the text. More sophisticated algorithms, which are more capable of > understanding the patterns, will be able to further compress the Thai text. > > >> >> As she points out and I mentioned above, there is no Rosetta Stone. >> >> > I agree there is no Rosetta Stone. But my point is one is not necessary to > recognize patterns in text, and build models to predict text. In the same > way we humans learn to predict the future observations given our current > ones, a LLM builds a model to predict future text given past text. Its > model of reality is one of a world of ideas, rather than our world of > visual and auditory sensations, but to it, it is still a world which it has > achieved some understanding of. > > >> Thanks for the thoughtful email. >> >> > Likewise. I think even if we do not come to an agreement this is a useful > discussion in that it helps each of us to clarify our thoughts and > understanding of these topics. 
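As a toy version of "building a model to predict future text given past text" -- a deliberately trivial sketch with a made-up corpus; a real LLM is vastly larger and works on learned representations rather than raw character counts -- even a character-bigram table extracts some of the structure implicit in text alone:

from collections import Counter, defaultdict

# Toy illustration: learn which character tends to follow which.
CORPUS = "the cat sat on the mat. the dog sat on the rug. the cat saw the dog."

follows = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    follows[prev][nxt] += 1

def predict(prev_char: str) -> str:
    """Most frequent continuation of prev_char in the corpus."""
    return follows[prev_char].most_common(1)[0][0]

print(predict("t"))  # 'h' -- 't' is most often followed by 'h' here
print(predict(" "))  # 't' -- words beginning with 't' (mostly 'the') are commonest after a space

Scaled up by many orders of magnitude, and conditioning on long contexts rather than a single character, this is the same game the language model is playing; how much understanding that amounts to is exactly the question at issue in this exchange.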
> > Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 26 14:59:43 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Mar 2023 10:59:43 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: Hi Gordon, Thank you for noticing and pointing out my mistake. I did intend to reply to the list. Jason On Sun, Mar 26, 2023, 10:52 AM Gordon Swobe wrote: > Jason, I received a reply to my last to you, but before I dig into it, I > notice that it does not appear addressed to ExI. I am happy to reply > privately, but I think you probably meant it for the list. > > In the meantime, I wanted to add to comments about emergent properties > that I have no argument with the idea that GPT might be exhibiting emergent > properties -- I agree it certainly appears that way -- but I would say they > are emergent properties of the grammatical relationships between and among > words, not evidence of any understanding of the meanings. I think Bender > actually touches on this subject in the interview but without actually > using the term "emergent properties." > > It could very well be that something like this is one aspect of human > intelligence, but I think we also understand the meanings. > > -gts > > > > > > On Sun, Mar 26, 2023 at 2:56?AM Gordon Swobe > wrote: > >> I have a smart home. Some of the iPhone apps associated with it have and >> can display what could be described as internal models representing my >> home. Does this mean these apps have a conscious understanding of the >> layout of my home? No, I think not, not as I and most people use the word >> understand. Only minds can understand things, and despite my home being >> "smart," I reject the idea that it has a mind of its own. That is nothing >> more than foolish science-fiction.. >> >> -gts >> >> On Sun, Mar 26, 2023 at 12:01?AM Gordon Swobe >> wrote: >> >>> >>> >>> On Sat, Mar 25, 2023 at 4:49?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Hi Gordon, >>>> >>>> Thanks for sharing this video. I watched and and found the following >>>> points of interest: >>>> >>>> *1. She said they can't possibly be understanding as they are only >>>> seeing a sequence of characters and predicting distributions and what these >>>> models do is not the same thing as understanding language.* >>>> My Reply: These models demonstrate many emergent capabilities that were >>>> not things that were programmed in or planned. They can answer questions, >>>> summarize texts, translate languages, write programs, etc. All these >>>> abilities emerged purely from being trained on the single task of >>>> predicting text. Given this, can we be certain that "understanding" is not >>>> another one of the emergent capabilities manifested by the LLM? >>>> >>> >>> This gets into philosophical debate about what, exactly, are emergent >>> properties. As I understand the term, whatever it is that emerges is >>> somehow hidden but intrinsic prior to the emergence. For example, from the >>> rules of chess there emerge many abstract properties and strategies of >>> chess. To someone naive about chess, it is difficult to imagine from the >>> simple rules of chess how chess looks to a grandmaster, but those emergent >>> properties are inherent in and follow logically from the simple rules of >>> chess. 
>>> >>> So how does meaning emerge from mere symbols(words)? Sequences of >>> abstract characters in no possible way contain the seeds of their meanings, >>> as we can see by the fact that many different words exist in different >>> languages and in entirely different alphabets for the same meaning. >>> >>> >>>> *2. She uses the analogy that the LLM looking at characters would be >>>> the same as a human who doesn't understand Cherokee looking at Cherokee >>>> characters.* >>>> My Reply: This is reminiscent of Searle's Chinese Room. The error is >>>> looking at the behavior of the computer only at the lowest level, while >>>> ignoring the goings-on at the higher levels. She sweeps all possible >>>> behavior of a computer under the umbrella of "symbol manipulation", but >>>> anything computable can be framed under "symbol manipulation" if described >>>> on that level (including what atoms, or neurons in the human brain do). >>>> This therefore fails as an argument that no understanding exists in the >>>> higher-level description of the processing performed by the computer >>>> program. >>>> >>> >>> Yes her argument is similar to Searle's. See above. Sequences of >>> characters (words) in no possible way contain the hidden seeds of their >>> meanings, as we can see from the fact that many different words exist >>> in different languages and alphabets for the same meaning. >>> >>> *3. She was asked what a machine would have to do to convince her they >>>> have understanding. Her example was that if Siri or Alexa were asked to do >>>> something in the real world, like turn on the lights, and if it does that, >>>> then it has understanding (by virtue of having done something in the real >>>> world).* >>>> My Reply: Perhaps she does not see the analogy between turning on or >>>> off a light, and the ability of an LLM to output characters to a monitor as >>>> interacting in the real world (turning on and off many thousands of pixels >>>> on the user's monitor as they read the reply). >>>> >>> >>> I thought that was the most interesting part of her interview. She was >>> using the word "understanding" in a more generous way than I would prefer >>> to use it, even attributing "understanding" to a stupid app like Alexa, but >>> she does not think GPT has understanding. I think she means it in exactly >>> the way I do, which is why I put it in scare-quotes. As she put, it is a >>> "kind of" understanding. As I wrote to you I think yesterday, I will grant >>> that my pocket calculator "understands" how to do math, but it is >>> not holding the meaning of those calculations in mind consciously, which is >>> what I (and most everyone on earth) mean by understanding. >>> >>> Understanding involves the capacity to consciously hold something in >>> mind. Otherwise, pretty much everything understands something and the word >>> loses meaning. Does the automated windshield wiper mechanism in my car >>> understand how to clear the rain off my windows when it starts raining? No, >>> but I will grant that it "understands" it in scare-quotes. >>> >>> The other point I would make here is that even if we grant that turning >>> the pixels off and on your screen makes GPT sentient or conscious, the real >>> question is "how can it know the meanings of those pixel arrangements?" >>> From its point of view (so to speak) it is merely generating meaningless >>> strings of text for which it has never been taught the meanings except via >>> other meaningless strings of text. 
>>> >>> Bender made the point that language models have no grounding, which is >>> something I almost mentioned yesterday in another thread. The symbol >>> grounding problem in philosophy is about exactly this question. They are >>> not grounded in the world of conscious experience like you and me. Or, if >>> we think so, then that is to me something like a religious belief. >>> >>> >>> >>>> *4. She admits her octopus test is exactly like the Turing test. She >>>> claims the hyper-intelligent octopus would be able to send some >>>> pleasantries and temporarily fool the other person, but that it has no real >>>> understanding and this would be revealed if there were any attempt to >>>> communicate about any real ideas.* >>>> My Reply: I think she must be totally unaware of the capabilities of >>>> recent models like GPT-4 to come to a conclusion like this. >>>> >>> >>> Again, no grounding. >>> >>> >>>> *5. The interviewer pushes back and says he has learned a lot about >>>> math, despite not seeing or experiencing mathematical objects. And has >>>> graded a blind student's paper which appeared to show he was able to >>>> visualize objects in math, despite not being sighted. She says the octopus >>>> never learned language, we acquired a linguistic system, but the hyper >>>> intelligent octopus has not, and that all the octopus has learned is >>>> language distribution patterns.* >>>> My Reply: I think the crucial piece missing from her understanding of >>>> LLMs is that the only way for them to achieve the levels of accuracy in the >>>> text that they predict is by constructing internal mental models of >>>> reality. That is the only way they can answer hypotheticals concerning >>>> novel situations described to them, or for example, to play chess. The only >>>> way to play chess with a LLM is if it is internally constructing a model of >>>> the board and pieces. It cannot be explained in terms of mere patterns or >>>> distributions of language. Otherwise, the LLM would be as likely to guess >>>> any potential move rather than an optimal move, and one can readily >>>> guarantee a chess board position that has never before appeared in the >>>> history of the universe, we can know the LLM is not relying on memory. >>>> >>> >>> I don't dispute that LLMs construct internal models of reality, but I >>> cough when you include the word "mental," as if they have minds >>> with conscious awareness of their internal models. >>> >>> I agree that it is absolutely amazing what these LLMs can do and will >>> do. The question is, how could they possibly know it any more than my >>> pocket calculator knows the rules of mathematics or my watch knows the time? >>> >>> >>> >>>> >>>> *6. The Interviewer asks what prevents the octopus from learning >>>> language over time as a human would? She says it requires joint-attention: >>>> seeing some object paired with some word at the same time.* >>>> My Reply: Why can't joint attention manifest as the co-occurrence of >>>> words as they appear within a sentence, paragraph, or topic of discussion? >>>> >>> >>> Because those other words also have no meanings or refrents. There is no >>> grounding and there is no Rosetta Stone. >>> >>> Bender co-authored another paper about "stochastic parrots," which is >>> how she characterizes LLMs and which I like. These models are like parrots >>> that mimic human language and understanding. It is amazing how talented >>> they appear, but they are only parrots who have no idea what they are >>> saying. >>> >>> >>>> >>>> *7. 
The interviewer asks do you think there is some algorithm that >>>> could possibly exist that could take a stream of words and understand them >>>> in that sense? She answers yes, but that would require programming in from >>>> the start the structure and meanings of the words and mapping them to a >>>> model of the world, or providing the model other sensors or imagery. The >>>> interviewer confirms: "You are arguing that just consuming language without >>>> all this extra stuff, that no algorithm could just from that, really >>>> understand language? She says that's right.* >>>> My Reply: We already know that these models build maps of things >>>> corresponding to reality in their head. See, for example, the paper I >>>> shared where the AI was given a description of how rooms were connected to >>>> each other, then the AI was able to visually draw the layout of the room >>>> from this textual description. If that is not an example of understanding, >>>> I don't know what possibly could be. Note also: this was an early model of >>>> GPT-4 before it had been trained on images, it was purely trained on text. >>>> >>> >>> This goes back to the question about Alexa.Yes, if that is what you mean >>> by "understanding" then I am forced to agree that even Alexa and Siri >>> "understand" language. But, again, I must put it in scare quotes. There is >>> nobody out there named Alexa who is actually aware of understanding >>> anything. She exists only in a manner of speaking. >>> >>> >>>> >>>> *8. She says, imagine that you are dropped into the middle of the Thai >>>> library of congress and you have any book you could possibly want but only >>>> in Thai. Could you learn Thai? The Interviewer says: I think so. She asks: >>>> What would you first do, where would you start? She adds if you just have >>>> form, that's not going to give you information. She then says she would >>>> have to find an encyclopedia or a translation of a book we know.* >>>> My Reply: We know there is information (objectively) in the Thai >>>> library, even if there were no illustrations or copies of books we had the >>>> translations to. We know the Thai library contains scruitable information >>>> because the text is compressible. If text is compressible it means there >>>> are discoverable patterns in the text which can be exploited to reduce the >>>> amount of bits needed to represent it. All our understanding can be viewed >>>> as forms of compression. For example, the physical laws that we have >>>> discovered "compress" the amount of information we need to store about the >>>> universe. Moreover, when compression works by constructing an internal toy >>>> model of reality, we can play with and permute the inputs to the model to >>>> see how it behaves under different situations. This provides a genuine >>>> understanding of the outer world from which our sensory inputs are based. I >>>> believe the LLM has successfully done this to predict text, it has various >>>> internal, situational models it can deploy to help it in predicting text. >>>> Having these models and knowing when and how to use them, I argue, is >>>> tantamount to understanding. >>>> >>> >>> How could you possibly know what those "discoverable patterns of text" >>> mean, given that they are in Thai and there is no Thai to English >>> dictionary in the Thai library? >>> >>> As she points out and I mentioned above, there is no Rosetta Stone. >>> >>> Thanks for the thoughtful email. 
>>> >>> -gts >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Sun Mar 26 19:59:25 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 26 Mar 2023 13:59:25 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: Your referent for the word redness is the subjective quality your brain uses to represent red knowledge. So, a picture of red in a dictionary works for you, as your brain produces a redness experience when you look at it. But a picture dictionary doesn't work for an abstract system, since all they have for their knowledge of the picture is yet another abstract word like redness. On Sat, Mar 25, 2023, 10:57 AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I won't argue what a referent means. > > I agree. It is just what John would say but in different words: he would > emphasize, as I do, that for definitions you need examples, and that is why > I, tongue not totally in cheek, wrote that you should give an AI a picture > dictionary. bill w > > On Sat, Mar 25, 2023 at 3:41?AM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Reading these conversations over the last few days, it has struck me >> that some people keep referring to 'real' things, usually using the word >> 'referents' (e.g. an apple), as though our brains had direct access to >> them and could somehow just know what they are. >> >> But we don't. >> >> Think about it, what is "An Apple"? >> >> It's a term that we associate with a large set of sensory and memory >> data, including language data, but mostly things like visual, textural, >> taste, smell, emotional, etc., data stored as memories. >> >> Seeing as we all have different memories associated with the label "An >> Apple" (because some of us were sick the first time we ate one, some of >> us are allergic to something in apples, some of us have a greater >> impression of sweetness, or sourness, when we eat one, some of us once >> discovered a maggot in one, some people have only ever eaten Granny >> Smiths, others only Braeburns, or Crab Apples, and so on and so on...), >> then 'An Apple' is a different thing to each of us. >> >> There is no spoon! Er, Apple. There is no Apple! >> Not as a 'real-world thing'. >> >> "An Apple" is an abstract concept that, despite the individual >> differences, most of us can agree on, because there are a lot of common >> features for each of us, such as general shape, some common colours, a >> set of smells and tastes, how we can use them, where we get them from, >> and so on.. The concept is represented internally, and communicated >> externally (to other people) by a linguistic label, that refers, for >> each of us, to this large bunch of data extracted from our senses and >> memories: "Una Manzana". >> >> It's all 'nothing but' Data. Yet we all think that we 'understand' what >> an Apple is. Based purely on this data in our brains (because we have >> access to nothing else). >> >> So this idea of a label having 'a referent' seems false to me. Labels >> (data in our heads) refer to a big set of data (in our heads). Where the >> data comes from is secondary, diverse, and quite distant, when you trace >> the neural pathways back to a large and disparate set of incoming >> sensory signals, scattered over space and time. 
The meaning is created >> in our minds, not resident in a single object in the outside world. >> >> This is my understanding of things, anyway. >> >> Ben >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sun Mar 26 21:28:36 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 26 Mar 2023 16:28:36 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: But a picture dictionary doesn't work for an abstract system, since all they have for their knowledge of the picture is yet another abstract word like redness. brent Huh? Can't use an example of redness? ??? Also, to increase stimulus generalization, you would include several photo examples. Many words which have an abstract use also have concrete examples, such as 'floor'. I can see it will have trouble with honesty or love. bill w On Sun, Mar 26, 2023 at 3:02?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Your referent for the word redness is the subjective quality your brain > uses to represent red knowledge. > So, a picture of red in a dictionary works for you, as your brain produces > a redness experience when you look at it. > > But a picture dictionary doesn't work for an abstract system, since all > they have for their knowledge of the picture is yet another abstract word > like redness. > > On Sat, Mar 25, 2023, 10:57 AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I won't argue what a referent means. >> >> I agree. It is just what John would say but in different words: he >> would emphasize, as I do, that for definitions you need examples, and that >> is why I, tongue not totally in cheek, wrote that you should give an AI a >> picture dictionary. bill w >> >> On Sat, Mar 25, 2023 at 3:41?AM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Reading these conversations over the last few days, it has struck me >>> that some people keep referring to 'real' things, usually using the word >>> 'referents' (e.g. an apple), as though our brains had direct access to >>> them and could somehow just know what they are. >>> >>> But we don't. >>> >>> Think about it, what is "An Apple"? >>> >>> It's a term that we associate with a large set of sensory and memory >>> data, including language data, but mostly things like visual, textural, >>> taste, smell, emotional, etc., data stored as memories. >>> >>> Seeing as we all have different memories associated with the label "An >>> Apple" (because some of us were sick the first time we ate one, some of >>> us are allergic to something in apples, some of us have a greater >>> impression of sweetness, or sourness, when we eat one, some of us once >>> discovered a maggot in one, some people have only ever eaten Granny >>> Smiths, others only Braeburns, or Crab Apples, and so on and so on...), >>> then 'An Apple' is a different thing to each of us. >>> >>> There is no spoon! Er, Apple. There is no Apple! >>> Not as a 'real-world thing'. 
>>> >>> "An Apple" is an abstract concept that, despite the individual >>> differences, most of us can agree on, because there are a lot of common >>> features for each of us, such as general shape, some common colours, a >>> set of smells and tastes, how we can use them, where we get them from, >>> and so on.. The concept is represented internally, and communicated >>> externally (to other people) by a linguistic label, that refers, for >>> each of us, to this large bunch of data extracted from our senses and >>> memories: "Una Manzana". >>> >>> It's all 'nothing but' Data. Yet we all think that we 'understand' what >>> an Apple is. Based purely on this data in our brains (because we have >>> access to nothing else). >>> >>> So this idea of a label having 'a referent' seems false to me. Labels >>> (data in our heads) refer to a big set of data (in our heads). Where the >>> data comes from is secondary, diverse, and quite distant, when you trace >>> the neural pathways back to a large and disparate set of incoming >>> sensory signals, scattered over space and time. The meaning is created >>> in our minds, not resident in a single object in the outside world. >>> >>> This is my understanding of things, anyway. >>> >>> Ben >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Sun Mar 26 22:34:15 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 16:34:15 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: On Sat, Mar 25, 2023 at 2:42?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Reading these conversations over the last few days, it has struck me > that some people keep referring to 'real' things, usually using the word > 'referents' (e.g. an apple), as though our brains had direct access to > them and could somehow just know what they are. > > But we don't. > > Think about it, what is "An Apple"? > ... > There is no spoon! Er, Apple. There is no Apple! > Not as a 'real-world thing'. > It would seem that you would rather say that apples are not real than say that the word "apple" has meaning. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Mar 26 22:45:40 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 26 Mar 2023 15:45:40 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <20230324011104.Horde.0yhXBY8W2PVjAsv45QOsWoA@sollegro.com> Message-ID: <20230326154540.Horde.5ajqMG2kq7St6JnrMRZCGWN@sollegro.com> Quoting Tara Maya via extropy-chat : >> On Mar 24, 2023, at 1:11 AM, Stuart LaForge via extropy-chat >> wrote: >> >> But really the meaning of words are quite arbitrary and determined >> by the people who use them. 
Thus the referential meanings of words >> evolve and change over time and come to refer to different things. > > > The meaning of words can only change over time if the referents that > they indicate change. > > That does not make words arbitrary, but in fact, shows how important > referents are for real language. I never said referents are not important to meaning, but they are relative to time, place, and culture. By arbitrary, I meant variable not trivial. They are important, but they don't need to correspond to things that are physical or even real. Surely you see can see how an LLM has the ability to understand fairies and dragons as well as any human does because neither has ever directly experienced a fairy or dragon, your tall tales not withstanding. ;) The referents of a sizable number of words are as abstract as the words themselves are, thus they need no phenomenal experience of to understand. Think of it as the difference between book-knowledge and street-smarts. Stuart LaForge From gordon.swobe at gmail.com Sun Mar 26 22:49:43 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 16:49:43 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023 at 1:38?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Jason Resch wrote: > > *3. She was asked what a machine would have to do to convince her they > have understanding. Her example was that if Siri or Alexa were asked to do > something in the real world, like turn on the lights, and if it does that, > then it has understanding (by virtue of having done something in the real > world).* > > > Wait a minute. So she thinks that smart home systems have understanding of > what they're doing, but LLMs don't? I wonder how many Siris and Alexas are > the voice interface for smart home systems? A lot, I expect. > > If she's right (which she's not, seems to be the consensus here), then all > that needs to be done is link up a LLM to some smart home hardware, and > 'ta-daaaa', instant understanding! > > I don't buy it > She called Alexa-like understanding a "kind of" understanding as in "sorta kinda," i.e., she brackets the word or puts it in scare-quotes. In so much as Alexa executes your command to turn off the lights, there is a sense in which it kind of "understands" your command. She is not referring to anything like conscious understanding of the meanings of words. I also put the word in scare-quotes when I say with a straight face that my pocket calculator "understands" mathematics. I would not say it understands mathematics in the conventional common sense of the word. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Sun Mar 26 23:59:43 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 26 Mar 2023 19:59:43 -0400 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Sat, Mar 25, 2023 at 8:41?AM Brent Allsop wrote: > On Fri, Mar 24, 2023 at 8:13?PM Rafal Smigrodzki via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> The entire idea of qualia is ridiculous >>>> >>> >>> OK, then. Answer me this, What is it that has the redness quality in >>> your brain, when you look at this? >>> [image: red.png] >>> Nobody knows that yet. >>> >> >> ### But Brent, neuroscientists have known that for decades. The color >> rosettes in the fusiform gyrus code for this quality, they are the neural >> implementation of basic color recognition, with additional higher level >> processing in the angular gyrus and other areas. >> >> We know color. >> > > You don't see any problems in what you are saying here? > > For example, what, exactly, do you mean by "code for this quality"? > > This is just a circular definition that never actually gets to any > definition. > > I see no evidence that anyone on this list understands what a quality is. > ### You do not understand the word "understand", which leads you astray. The fusiform gyrus color rosettes, while interacting with certain frontal lobe centers, are the thing that contains the redness quality in the brain. Other parts of the brain do not contain it. That is the correct and sufficient answer to the question you asked - "What is it that has the redness quality in your brain, when you look at this?". There is no circularity involved, and no attempt at definition, because you asked only about the anatomical location and I answered that question. BTW, I am completely disinterested in defining what a "quality" is, in this context. Understand? Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL: From avant at sollegro.com Mon Mar 27 00:20:59 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 26 Mar 2023 17:20:59 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: <20230326172059.Horde.U_kVAXmigsNukO0Kvo1VYsI@sollegro.com> Quoting Brent Allsop via extropy-chat : > On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Tue, Mar 21, 2023 at 10:47?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> So following your rationale that consciousness requires "essential >>> qualities" rather than "abstract data" then old -fashioned color film >>> cameras were conscious: >>> >> > Yes, this is a great first step towards understanding. Some things in > nature must behave the way they do, because of their intrinsic > colorness quality. It is theoretically possible that silver-nitrate > crystals behave the way they do, because of their colerness quality. 
The > only problem is, there is no computation going on, with the film or in such > a camera, just stand alone silver-nitrate crystals abstractly representing > the picture. > The next required step is computational binding. In order to > directly apprehend something's intrinsic quality, you need to be able to > computationally bind it into the rest of your consciousness, so you can > directly apprehend their quality, so you can compute with them, enabling > you to say: "Oh THAT is what silver-nitride particles are like." In order to computationally bind an intrinsic quality to consciousness, requires the quality to be changed into information that is to say digitization. Since the Polaroid camera produces a chemical photograph of the strawberry with the redness quality that can be scanned into the robot, it follows that the combination of the robot and the Polaroid camera is conscious like a human. At some point, whether it be in the eyeball or the brain, an essential quality has to become bits and bytes because that is what the computational in "computational binding" means right? Does it really matter where it happens? You want to believe that the essential quality of redness somehow gets computationally bound to glutamate or some other physical molecule. But molecules are not essential, they are structures built of atoms. So if one non-essential structure can be computationally bound to the redness quality, then why couldn't some other non-essential structure serve equally well? Like the cellular structure in the brain that Rafal pointed out to you?The rosettes in the fusiform gyrus of the brain. Stuart LaForge From rafal.smigrodzki at gmail.com Mon Mar 27 00:34:20 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 26 Mar 2023 20:34:20 -0400 Subject: [ExI] The Automathician Message-ID: It occurred to me that LLMs mated to a mathematical inference engine (like the Wolfram Alpha plug-in for GPT) and enhanced with visual data processing capabilities might make excellent automatic mathematicians. Mathematical inference in the sense of symbol manipulation and theorem proving can be automated relatively easily but up till now such systems lacked the intuition and imagination as well as the sense of beauty that mathematicians use to select among the infinite numbers of mathematical objects and properties the ones that are worth thinking about. A theorem prover without this intuition would just aimlessly churn out proofs and never create anything useful. But add an intuition trained on the patterns imprinted in the mathematical literature by the minds of the mathematicians who create math, and the Automathician could write math like a human. Since its breadth of mathematical knowledge and raw symbol manipulation ability would be vastly superior to any human, its high-level mathematical insights could become superhumanly mathemagical. A modest LLM training project using a corpus of mathematical papers with RLHF provided by mathematicians could be a very interesting endeavor. Who knows what kind of very non-obvious and ground-breaking truths could be discovered? The algorithms based on such non-human insights could be of immense practical value, too. Rafal -------------- next part -------------- An HTML attachment was scrubbed... 
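A rough sketch of the propose-and-verify loop described in the message above, for illustration only: llm_propose() is a hypothetical stand-in for a language model, and sympy stands in here for the symbolic inference engine that checks the model's guesses. This is a sketch of the division of labor being proposed, not an implementation of any existing system.

    import sympy

    def llm_propose(prompt):
        # Hypothetical: ask a language model for a candidate identity,
        # returned as a string such as "sin(x)**2 + cos(x)**2 = 1".
        raise NotImplementedError("plug in a real model here")

    def verified(candidate):
        # The symbolic engine supplies the rigor: accept the candidate
        # only if both sides simplify to the same expression.
        lhs, rhs = candidate.split("=")
        return sympy.simplify(sympy.sympify(lhs) - sympy.sympify(rhs)) == 0

    def automathician(prompt, attempts=10):
        # Intuition proposes, symbol manipulation disposes.
        for _ in range(attempts):
            candidate = llm_propose(prompt)
            if verified(candidate):
                return candidate
        return None

The design point is the pairing: the language model supplies candidates worth checking, and only the ones the symbolic engine can verify are kept.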
URL: From gordon.swobe at gmail.com Mon Mar 27 01:29:21 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 19:29:21 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023 at 9:01?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > If you say that AlphaZero does not understand chess then I don't know what >> you mean by "understand," as it must not be the same as the meaning as I >> use it. Would you say AlphaZero does not understand how to play chess? >> > Correct, that is not the meaning of "understand" that I, or Bender so far as can tell, are considering. AlphaZero is obviously a very intelligent chess application, but does it consciously mull over the possible moves like a human? I think not, but I won't dwell on this as you showed me below that you understand my meaning. So how does meaning emerge from mere symbols(words)? Sequences of abstract >>> characters in no possible way contain the seeds of their meanings >>> >> >> >> If you are willing to grant that AlphaZero has learned how to play chess >> merely from the rules of the game, then could an LLM, given only a corpus >> of text, learn anything about the language? For example, could it pick up >> on the structure, grammer, and interrelations of words? Could it learn how >> to form proper sentences and compose meaningful replies in response to >> prompts? >> > I've considered this, and in fact one of your first replies to me you made a claim something like "LLMs can detect nouns." [I'm going to stop using the word "word" and try to refer to them instead as "symbols"] I agree that LLMs can find and predict symbols that tend to follow other symbols, and in fact that is exactly what they do. No doubt, GBT "knows" that after the symbol "the," what we call an article, it is likely to find/predict one of many symbols in the class of symbols we call "nouns." But how does it understand the meaning of "noun" or "article" so that it understands the functions of these symbols in their descriptions of the real world? And what about "adjectives" or "adverbs" that can appear before nouns? Starting from a position of zero knowledge of the meanings of symbols, it can do more than find and generate likely patterns of symbols, which is exactly what it does. While the patterns might be real with statistical significance and have profound meaning to the human operator, they mean nothing to the LLM. > I think you may be missing a crucial piece of understanding about how >> neural networks work. >> > I understand this, and as I told my friend who literally fell in love with a modified version of GPT, nothing has changed qualitatively since the days of ELIZA. The advancements are quantitative, not qualitative. The software is far more sophisticated with many more lines of code, and we are finally crushing or about to crush the Turing test, but it's still no more than unconscious software running blindly on digital computers. He was an online friend of more than ten years, but he was so offended and angered by what I was saying about his "girlfriend" that he unfriended me on facebook for speaking so poorly of "her kind." I think that forced to decide whether to kill me or his digital girlfriend, he would have killed me. In fact that is one reason why I have returned to ExI after a long hiatus. The Singularity is here. 
Do you think a piece of software running a digital computer can have genuine feelings of love for you? This is just some of the complex emergent behavior that we get when we >> build networks of millions or billions of neurons and set them loose to >> look for patterns. >> > As I was saying in another message, I believe this emergent behavior, presuming it is not an illusion, is emergent behavior in terms of advanced and new grammatical structures but with no meaning to the LLM itself. Similar to the rules of chess, GPT extrapolates strings of symbols according to the rules of grammar in sometimes new and clever and interesting ways, never knowing what the strings mean. Some of these extrapolations could be dangerous, which is why humans are involved in the training. Think of the word "wisdom". You know what that word means, but no one has >> ever pointed to a thing and said that thing right there, that's "wisdom". >> Rather, from hundreds or thousands of examples of words phrases, said to >> contain wisdom, you have inferred the meaning of the word. Note that this >> was done merely from the statistical association between the wise words, >> and occasionally seeing the word "wisdom" paired with those words. No >> exemplar of "wisdom" is ever made available to your senses, as "wisdom" is >> an abstract concept which itself exists only in patterns of words. >> > I covered this in another message in which I was writing about how referents can be abstract ideas or concepts. That would include the abstract referent that corresponds to the symbol "wisdom" and can also include abstract mathematical truths. The point is, symbols have no meanings without referents and LLMs cannot as you suppose generate their own meanings and referents. Sure, as you say, they can find statistical correlations with symbols that appear near the symbol "wisdom" and associate those symbols with the symbol "wisdom," such that it can be prompted to generate what looks like wise text, but still it has no idea what is the meaning of "wisdom" except in terms of other word-symbols for which it also has no meaning. I agree with you here, that her use of "understand" is generous and perhaps >> inappropriate for things like Siri or Alexa. I also agree with you that the >> calculator, while it can do math, I would not say that it understands math. >> Its understanding, if it could be said to have any at all, would rest >> almost entirely in "understanding" what keys have been pressed and which >> circuits to activate on which presses. >> > I'm glad we agree on that much! Understanding involves the capacity to consciously hold something in mind. >>> >> >> I agree with this definition. >> > I'm especially glad we agree on that. > But while we both agree on this usage of the word, I think I can explain >> why we disagree on whether LLMs can understand. While I am willing to grant >> LLMs as having a mind and consciousness you are not. So even when we use >> the same definition of "understand," the fact that you do not accept the >> consciousness of LLMs means you are unwilling to grant them understanding. >> Is this a fair characterization? >> > Yes, where consciousness is as we defined above, meaning holding an understanding of x consciously in mind. > From its point of view (so to speak) it is merely generating meaningless >>> strings of text for which it has never been taught the meanings except via >>> other meaningless strings of text. 
>>> >>> Bender made the point that language models have no grounding, which is >>> something I almost mentioned yesterday in another thread. The symbol >>> grounding problem in philosophy is about exactly this question. They are >>> not grounded in the world of conscious experience like you and me. Or, if >>> we think so, then that is to me something like a religious belief. >>> >> >> Why is it a religious belief to believe LLMs have consciousness, but it >> is not a religious belief to believe that other humans have consciousness? >> > Yes, we need a wee bit of faith even to accept that our fellow humans have consciousness. But this is a reasonable inference, at least when I am with them in person. They look like me and have brains and nervous systems like mine, eyes and ears like mine, they react to external stimuli much as I do, and so on. I find it pretty easy to infer consciousness in most mammals. Digital computers, not so much. That is a giant leap of faith. >I think it is easy to come to a snap judgement and say there is no grounding in words alone, but I think this stems from imagining a word, or a sentence in isolation, where every word appears only once, where there is only a single example of sentence structure. Yes, you continue to believe we can glean the meanings of symbols from their forms and patterns. I consider that a logical impossibility. What do you think is required to have a mind and consciousness? >> > A human brain would be a good start. :) > Do you think that no computer program could ever possess it, not even if >> it were put in charge of an android/root body? >> > I think no computer program running on a digital computer as we currently understand them can possess it. Consciousness might be possible in some sort of android body, someday, once we understand what are sometimes called the neural correlates of consciousness. What exactly happens in the brain when a boxer delivers a knock-out punch? When neuroscience learns the precise and detailed answer to that question, we can think about how those neural correlates might be synthesized in a laboratory. This message is getting too long, sorry. > Likewise. I think even if we do not come to an agreement this is a useful discussion in that it helps each of us to clarify our thoughts and understanding of these topics. Yes, thanks and it's nice to know you. I've been down this rabbit hole before on ExI, something like 15-18 years ago. You weren't around in those days that I can remember. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 27 02:08:27 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 26 Mar 2023 20:08:27 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: On Sun, Mar 26, 2023 at 4:35?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 25, 2023 at 2:42?AM Ben Zaiboc via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Reading these conversations over the last few days, it has struck me >> that some people keep referring to 'real' things, usually using the word >> 'referents' (e.g. an apple), as though our brains had direct access to >> them and could somehow just know what they are. >> >> But we don't. >> >> Think about it, what is "An Apple"? >> ... >> There is no spoon! Er, Apple. There is no Apple! >> Not as a 'real-world thing'. 
>> > > It would seem that you would rather say that apples are not real than say > that the word "apple" has meaning. > I don't believe he is saying that at all. Are YOU saying, we don't have knowledge of an apple, which has very real redness and greenness qualities, which can be computationally bound, which is what we know about the apple?? Redness is a quality of our knowledge of the apple. THAT very real physical quality of our knowledge, in the brain, is the referent of redness. We don't know the colorness qualities of the apple, or anything else out there, since our brains false colors all of our knowledge, so it can emphasize, in our understanding of the apple, what is important to us, as part of our computation process of needing to pick the apple. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Mon Mar 27 02:09:05 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 26 Mar 2023 22:09:05 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Sun, Mar 26, 2023 at 9:39?PM Brent Allsop wrote: > > Hi Rafael, > > I'm not familiar with the fusiform gyrus color rosettes. It sounds like > you know more about the current understanding of at least this part of > neuroanatomy than I do. So I very much appreciate your help with all this > stuff. I've been able to find information on the fusiform gyrus, but > nothing, yet, about the color rosettes. So I was wondering if you could > provide some references where I can learn more? > ### There was an article in "Science" (I think) about 15 years ago where they mapped the responses of a class of cortical neurons in the fusiform gyrus to color visual inputs on sub-millimeter scale. They found that there was a sheet of cortex tiled with hundreds of little color wheels (I am not sure if they used the word rosette, I read it decades ago). By looking at which neurons on the color wheels were activated you would be able to tell what color light was impinging on the retina in the spots corresponding to each wheel which means that the neurons encoded color information (i.e. the result of an attempt by the cortex to calculate the reflectance of surfaces) which then would be fed to higher cortical areas to enable the conscious perception of color. This article shows similar results (but I remember reading about it in a different article that I can't find now): https://www.pnas.org/doi/10.1073/pnas.0810943106 See especially figure 1A - there is a circular arrangement of neurons that respond to different colors. There is of course much more to the encoding of color in the cortex, there are multiple levels of the brain that express color information in different and complementary ways, all very complicated but the basic concept of "chromotopic" mapping is simple. ---------------------------- > > Also, I have a question about this "thing that contains the redness > quality in the brain." > > In our video, there is a chapter called "Perceiving a Strawberry." > > At 2:00 minutes in it illustrates a single pixel on the surface of a > strawberry changing from redness to greenness. So, obviously, subjectively, > there is one pixel physically changing, out of all our colored pixels that > make up our total subjective experience of the strawberry in the brain. 
> So, objectively, what would you see, when one observed just this one > subjective pixel which is changing from redness to greenness? How would > this color rosette mechanism accomplish such a subjective experience of one > pixel changing like this, Along with all the other pixels of which our > subjective knowledge is composed, which could also all change to be any > color at any time? > ### A couple of neurons in a single color rosette would fire differently, and this would propagate to higher cortical areas, and as long as you maintain attention to that pixel there would be reciprocal and sustained activation between that color wheel and its neurons and the higher cortical areas. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 27 02:10:55 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 26 Mar 2023 20:10:55 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: Hi William, Not sure what you are asking about with that "Huh?" Perhaps you can tell me what you think a quality is. That should help me understand what you are asking. Would you agree that a physical quality (whatever you think it is) is the referent of the word 'redness"? On Sun, Mar 26, 2023 at 3:29?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > But a picture dictionary doesn't work for an abstract system, since all > they have for their knowledge of the picture is yet another abstract word > like redness. brent > > Huh? Can't use an example of redness? ??? Also, to increase stimulus > generalization, you would include several photo examples. Many words which > have an abstract use also have concrete examples, such as 'floor'. I can > see it will have trouble with honesty or love. bill w > > On Sun, Mar 26, 2023 at 3:02?PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Your referent for the word redness is the subjective quality your brain >> uses to represent red knowledge. >> So, a picture of red in a dictionary works for you, as your brain >> produces a redness experience when you look at it. >> >> But a picture dictionary doesn't work for an abstract system, since all >> they have for their knowledge of the picture is yet another abstract word >> like redness. >> >> On Sat, Mar 25, 2023, 10:57 AM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I won't argue what a referent means. >>> >>> I agree. It is just what John would say but in different words: he >>> would emphasize, as I do, that for definitions you need examples, and that >>> is why I, tongue not totally in cheek, wrote that you should give an AI a >>> picture dictionary. bill w >>> >>> On Sat, Mar 25, 2023 at 3:41?AM Ben Zaiboc via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Reading these conversations over the last few days, it has struck me >>>> that some people keep referring to 'real' things, usually using the >>>> word >>>> 'referents' (e.g. an apple), as though our brains had direct access to >>>> them and could somehow just know what they are. >>>> >>>> But we don't. >>>> >>>> Think about it, what is "An Apple"? >>>> >>>> It's a term that we associate with a large set of sensory and memory >>>> data, including language data, but mostly things like visual, textural, >>>> taste, smell, emotional, etc., data stored as memories. 
>>>> >>>> Seeing as we all have different memories associated with the label "An >>>> Apple" (because some of us were sick the first time we ate one, some of >>>> us are allergic to something in apples, some of us have a greater >>>> impression of sweetness, or sourness, when we eat one, some of us once >>>> discovered a maggot in one, some people have only ever eaten Granny >>>> Smiths, others only Braeburns, or Crab Apples, and so on and so on...), >>>> then 'An Apple' is a different thing to each of us. >>>> >>>> There is no spoon! Er, Apple. There is no Apple! >>>> Not as a 'real-world thing'. >>>> >>>> "An Apple" is an abstract concept that, despite the individual >>>> differences, most of us can agree on, because there are a lot of common >>>> features for each of us, such as general shape, some common colours, a >>>> set of smells and tastes, how we can use them, where we get them from, >>>> and so on.. The concept is represented internally, and communicated >>>> externally (to other people) by a linguistic label, that refers, for >>>> each of us, to this large bunch of data extracted from our senses and >>>> memories: "Una Manzana". >>>> >>>> It's all 'nothing but' Data. Yet we all think that we 'understand' what >>>> an Apple is. Based purely on this data in our brains (because we have >>>> access to nothing else). >>>> >>>> So this idea of a label having 'a referent' seems false to me. Labels >>>> (data in our heads) refer to a big set of data (in our heads). Where >>>> the >>>> data comes from is secondary, diverse, and quite distant, when you >>>> trace >>>> the neural pathways back to a large and disparate set of incoming >>>> sensory signals, scattered over space and time. The meaning is created >>>> in our minds, not resident in a single object in the outside world. >>>> >>>> This is my understanding of things, anyway. >>>> >>>> Ben >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 27 02:20:59 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 26 Mar 2023 20:20:59 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: Hi Rafael, This is a second attempt to send this email to the list. Evidently posts can't be more than 160K, so no room for the image which I had included. (Rafael received it, since I sent it to him, directly). So here is a copy without the image. You'll just need to go to the URL, if you want to see the image of the tiger. I'm not familiar with the fusiform gyrus color rosettes. It sounds like you know more about the current understanding of at least this part of neuroanatomy than I do. 
So I very much appreciate your help with all this stuff. I've been able to find information on the fusiform gyrus, but nothing, yet, about the color rosettes. So I was wondering if you could provide some references where I can learn more? Also, I have a question about this "thing that contains the redness quality in the brain." In our video, there is a chapter called "Perceiving a Strawberry." At 2:00 minutes in it illustrates a single pixel on the surface of a strawberry changing from redness to greenness. So, obviously, subjectively, there is one pixel physically changing, out of all our colored pixels that make up our total subjective experience of the strawberry in the brain. So, objectively, what would you see, when one observed just this one subjective pixel which is changing from redness to greenness? How would this color rosette mechanism accomplish such a subjective experience of one pixel changing like this, Along with all the other pixels of which our subjective knowledge is composed, which could also all change to be any color at any time? https://brandonbouck.com/ In other words, would you agree with the statement: "*If you know something, that knowledge must be something.*" Including every single pixel quality, of which our conscious experience of the strawberry is composed? On Sun, Mar 26, 2023 at 6:00?PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sat, Mar 25, 2023 at 8:41?AM Brent Allsop > wrote: > >> On Fri, Mar 24, 2023 at 8:13?PM Rafal Smigrodzki via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Wed, Mar 22, 2023 at 4:29?PM Brent Allsop via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> The entire idea of qualia is ridiculous >>>>> >>>> >>>> OK, then. Answer me this, What is it that has the redness quality in >>>> your brain, when you look at this? >>>> [image: red.png] >>>> Nobody knows that yet. >>>> >>> >>> ### But Brent, neuroscientists have known that for decades. The color >>> rosettes in the fusiform gyrus code for this quality, they are the neural >>> implementation of basic color recognition, with additional higher level >>> processing in the angular gyrus and other areas. >>> >>> We know color. >>> >> >> You don't see any problems in what you are saying here? >> >> For example, what, exactly, do you mean by "code for this quality"? >> >> This is just a circular definition that never actually gets to any >> definition. >> >> I see no evidence that anyone on this list understands what a quality is. >> > > ### You do not understand the word "understand", which leads you astray. > > The fusiform gyrus color rosettes, while interacting with certain frontal > lobe centers, are the thing that contains the redness quality in the brain. > Other parts of the brain do not contain it. That is the correct and > sufficient answer to the question you asked - "What is it that has the > redness quality in your brain, when you look at this?". There is no > circularity involved, and no attempt at definition, because you asked only > about the anatomical location and I answered that question. > > BTW, I am completely disinterested in defining what a "quality" is, in > this context. > > Understand? 
> > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL: From gordon.swobe at gmail.com Mon Mar 27 02:33:55 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 20:33:55 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: On Sun, Mar 26, 2023 at 8:10?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On Sun, Mar 26, 2023 at 4:35?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 25, 2023 at 2:42?AM Ben Zaiboc via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Reading these conversations over the last few days, it has struck me >>> that some people keep referring to 'real' things, usually using the word >>> 'referents' (e.g. an apple), as though our brains had direct access to >>> them and could somehow just know what they are. >>> >>> But we don't. >>> >>> Think about it, what is "An Apple"? >>> ... >>> There is no spoon! Er, Apple. There is no Apple! >>> Not as a 'real-world thing'. >>> >> >> It would seem that you would rather say that apples are not real than say >> that the word "apple" has meaning. >> > > I don't believe he is saying that at all. Are YOU saying, we don't have > knowledge of an apple, which has very real redness and greenness qualities, > which can be computationally bound, which is what we know about the > apple?? Redness is a quality of our knowledge of the apple. THAT very > real physical quality of our knowledge, in the brain, is the referent of > redness. We don't know the colorness qualities of the apple, or anything > else out there, since our brains false colors all of our knowledge, so it > can emphasize, in our understanding of the apple, what is important to us, > as part of our computation process of needing to pick the apple. > I am not making any arguments about qualities or qualia at the moment (no time for it). I am saying simply that a word like "apple" has meaning, and that the meaning is derived from pointing at the existence of real apples in the real world -- the referents. The word apple is simply a pointer to those apples. Without those apples in the real world, the word has no meaning. For whatever reason, Ben is saying apples are not real-world things. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 27 02:39:06 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Mar 2023 22:39:06 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023, 9:29 PM Gordon Swobe wrote: > > > This message is getting too long, sorry. > I apologize for the length and I appreciate the time you spent already to formulate your reply. I was hoping most of all you would get a chance to see and respond to this: Page 51 of this PDF: https://arxiv.org/pdf/2303.12712.pdf As this might be the most important and convincing page in the document for the purposes of our discussion. 
To me this proves without a doubt they GPT-4 has overcome the symbol grounding problem. That is to say, it has convincingly bootstrapped the meaning of the words as they map to reality. This is because *words alone* were sufficient for GPT-4 to construct a mathematical model (a graph with edges and vertices) that is consistent with the layout of rooms within the house, as they were described *purely with words*. Is there any other way to interpret this? > > > Likewise. I think even if we do not come to an agreement this is a > useful discussion in that it helps each of us to clarify our thoughts and > understanding of these topics. > > > Yes, thanks and it's nice to know you. I've been down this rabbit hole > before on ExI, something like 15-18 years ago. You weren't around in those > days that I can remember. > Thank you! I feel similarly. I was not on this list at this time. I began on the everything-list around 16 years ago and subscribed to this list much later, and have mostly lurked. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 27 03:14:57 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Mar 2023 23:14:57 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023, 9:29 PM Gordon Swobe wrote: > On Sun, Mar 26, 2023 at 9:01?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > Correct, that is not the meaning of "understand" that I, or Bender so far > as can tell, are considering. AlphaZero is obviously a very intelligent > chess application, but does it consciously mull over the possible moves > like a human? I think not, but I won't dwell on this as you showed me below > that you understand my meaning. > To explain my position: As I see consciousness, if I throw a ball to a robot and the robot is able to reliably catch it, then something within the robot must be conscious of the ball. I think any other position leads to zombies (things which appear to be conscious but are not). I think zombies lead to logical inconsistencies and so I must reject the possibility of things that reliably behave as if conscious but are not. , but it's still no more than unconscious software running blindly on > digital computers. > I don't think we've discussed this before: do you think an uploaded human brain would be conscious (assume simulated to any required level of fidelity)? > I think that forced to decide whether to kill me or > his digital girlfriend, he would have killed me. In fact that is one reason > why I have returned to ExI after a long hiatus. The Singularity is here. > We are in interesting times indeed. > Do you think a piece of software running a digital computer can have > genuine feelings of love for you? > Absolutely. I have close to zero doubt on this. By virtue of: Church Turing theis (every finite process is emulable by a digital computer, the Bekenstein bound (our physical brain is finite), and the Anti-Zombie principle (p-zombies are logically impossible). Together these ensure that the brain is emulable by a digital computer and the Anti-Zombie principle ensures it will be equally conscious as the physical instance. > > I agree with you here, that her use of "understand" is generous and >>> perhaps inappropriate for things like Siri or Alexa. 
I also agree with you >>> that the calculator, while it can do math, I would not say that it >>> understands math. Its understanding, if it could be said to have any at >>> all, would rest almost entirely in "understanding" what keys have been >>> pressed and which circuits to activate on which presses. >>> >> > I'm glad we agree on that much! > ?? > Understanding involves the capacity to consciously hold something in mind. >>>> >>> >>> I agree with this definition. >>> >> > I'm especially glad we agree on that. > ?? I find it pretty easy to infer consciousness in most mammals. > Digital computers, not so much. That is a giant leap of faith. > I agree it is easier to infer consciousness of other animals as it requires one less assumption than it does not to infer the potential consciousness of computers (the assumption that the material composition is unimportant). However I do not think this requires much more faith, as I find some thought experiments such as Chalmers "fading qualia" quite convincing that material composition cannot make a difference to conscious perceptions. Yes, you continue to believe we can glean the meanings of symbols from > their forms and patterns. I consider that a logical impossibility. > But as I point out, we *know* it's not a logical impossibility because our brains do it. > What do you think is required to have a mind and consciousness? >>> >> > A human brain would be a good start. :) > We agree a human brain is conscious, but what would you say for "X" An entity is conscious if and only if it has X. > > >> Do you think that no computer program could ever possess it, not even if >>> it were put in charge of an android/root body? >>> >> > I think no computer program running on a digital computer as we currently > understand them can possess it. Consciousness might be possible in some > sort of android body, someday, once we understand what are sometimes > called the neural correlates of consciousness. What exactly happens in the > brain when a boxer delivers a knock-out punch? When neuroscience learns the > precise and detailed answer to that question, we can think about how those > neural correlates might be synthesized in a laboratory. > Anesthesiologists have a theory of unconscious called cognitive unbinding. I'm not sure if the same thing explains loss of consciousness due to impact or not. The general idea is that different subregions of the brain stop meaningfully communicating with each other. But one problem with this is it's also quite difficult to differentiate unconscious from lack of memory formation. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 03:19:01 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 21:19:01 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023 at 8:42?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I was hoping most of all you would get a chance to see and respond to this: > > Page 51 of this PDF: https://arxiv.org/pdf/2303.12712.pdf > > As this might be the most important and convincing page in the document > for the purposes of our discussion. To me this proves without a doubt they > GPT-4 has overcome the symbol grounding problem. That is to say, it has > convincingly bootstrapped the meaning of the words as they map to reality. 
> > This is because *words alone* were sufficient for GPT-4 to construct a > mathematical model (a graph with edges and vertices) that is consistent > with the layout of rooms within the house, as they were described *purely > with words*. Is there any other way to interpret this? > I do not understand why you interpret it as so amazing that words alone were sufficient to construct a mathematical model and graph of a house. That demonstrates that GPT-4 is intelligent, but the question, as I thought we understood in our last exchange, is whether it had a conscious understanding of the words it used to construct the model, where understanding entails holding the word meanings consciously in mind. I see no reason to think GPT-4 had conscious awareness of constructing a mathematical model, or of the meanings of the words used in constructing the model, or even that it has conscious awareness that it itself exists. I think many people conflate intelligence with consciousness. It is reasonable to say GPT-4 acts in highly intelligent ways, but does it have a conscious mind that knows this? Does it have first person, subjective experience like you and me? Or is it just a mindless digital computation machine that simulates the outward appearance of having inward experience? -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 27 03:28:04 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Mar 2023 23:28:04 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023, 11:19 PM Gordon Swobe wrote: > > On Sun, Mar 26, 2023 at 8:42?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I was hoping most of all you would get a chance to see and respond to >> this: >> >> Page 51 of this PDF: https://arxiv.org/pdf/2303.12712.pdf >> >> As this might be the most important and convincing page in the document >> for the purposes of our discussion. To me this proves without a doubt they >> GPT-4 has overcome the symbol grounding problem. That is to say, it has >> convincingly bootstrapped the meaning of the words as they map to reality. >> >> This is because *words alone* were sufficient for GPT-4 to construct a >> mathematical model (a graph with edges and vertices) that is consistent >> with the layout of rooms within the house, as they were described *purely >> with words*. Is there any other way to interpret this? >> > > I do not understand why you interpret it as so amazing that words alone > were sufficient to construct a mathematical model and graph of a house. > That demonstrates that GPT-4 is intelligent, but the question, as I thought > we understood in our last exchange, is whether it had a conscious > understanding of the words it used to construct the model, where > understanding entails holding the word meanings consciously in mind. > No this isn't my point. Ignore the issue of consciousness here. My point is that this shows the LLM has overcome the symbol grounding problem. It has somehow learned how to correctly interpret the meanings of the words. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 27 03:48:11 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Sun, 26 Mar 2023 21:48:11 -0600 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: Oh, YES Rafal, This is exactly the way Steven Lehar has proposed color conscious knowledge works. (I just sent this link to him) Our knowledge of the strawberry is laid out in the primary visual cortex, a pixel in our conscious knowledge for each point on the retina. So these circles (one for each pixel) can rotate between redness(glutamateness) and greenes(glyceneness) or any other color, as that particular pixel changes from red to green, on the surface of the strawberry. However you must notice that this entire article is quality blind. Like everything in the all peer reviewed articles, it uses one abstract word 'red' to represent all things representing 'red'. This is the same issue with the way we observe the brain pointed out with multiple other similar examples of quality blind brain observation reported here . In other words, if you put a red/green signal inverter in the optic nerve of someone. Then when you objectively observe these wheels in this way, you will see the glutamate (redness) part of the circle in one person, and glycine (grenness) in another. But the system, having a different custom trained recognizer of that mind for each person whose mind is being observed, will correct for this difference, and false color it back to the 'red' which is defined by light. So it will fail to detect the fact that one person's subjective experience of red, is like your greenness. That is why we call it quality blind observation of the brain. And THAT is the only reason why we don't yet know the colorness quality of anything. I guess I need to add this as yet another example of quality blind observation of the brain, here . On Sun, Mar 26, 2023 at 8:15?PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Sun, Mar 26, 2023 at 9:39?PM Brent Allsop > wrote: > >> >> Hi Rafael, >> >> I'm not familiar with the fusiform gyrus color rosettes. It sounds like >> you know more about the current understanding of at least this part of >> neuroanatomy than I do. So I very much appreciate your help with all this >> stuff. I've been able to find information on the fusiform gyrus, but >> nothing, yet, about the color rosettes. So I was wondering if you could >> provide some references where I can learn more? >> > > ### There was an article in "Science" (I think) about 15 years ago where > they mapped the responses of a class of cortical neurons in the fusiform > gyrus to color visual inputs on sub-millimeter scale. They found that there > was a sheet of cortex tiled with hundreds of little color wheels (I am not > sure if they used the word rosette, I read it decades ago). By looking at > which neurons on the color wheels were activated you would be able to tell > what color light was impinging on the retina in the spots corresponding to > each wheel which means that the neurons encoded color information (i.e. the > result of an attempt by the cortex to calculate the reflectance of > surfaces) which then would be fed to higher cortical areas to enable the > conscious perception of color. > > This article shows similar results (but I remember reading about it in a > different article that I can't find now): > > https://www.pnas.org/doi/10.1073/pnas.0810943106 > > See especially figure 1A - there is a circular arrangement of neurons that > respond to different colors. 
There is of course much more to the encoding > of color in the cortex, there are multiple levels of the brain that express > color information in different and complementary ways, all very complicated > but the basic concept of "chromotopic" mapping is simple. > ---------------------------- > >> >> Also, I have a question about this "thing that contains the redness >> quality in the brain." >> >> In our video, there is a chapter called "Perceiving a Strawberry." >> >> At 2:00 minutes in it illustrates a single pixel on the surface of a >> strawberry changing from redness to greenness. So, obviously, subjectively, >> there is one pixel physically changing, out of all our colored pixels that >> make up our total subjective experience of the strawberry in the brain. >> So, objectively, what would you see, when one observed just this one >> subjective pixel which is changing from redness to greenness? How would >> this color rosette mechanism accomplish such a subjective experience of one >> pixel changing like this, Along with all the other pixels of which our >> subjective knowledge is composed, which could also all change to be any >> color at any time? >> > > ### A couple of neurons in a single color rosette would fire differently, > and this would propagate to higher cortical areas, and as long as you > maintain attention to that pixel there would be reciprocal and sustained > activation between that color wheel and its neurons and the higher cortical > areas. > > > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 03:49:31 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 21:49:31 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023 at 9:29?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I do not understand why you interpret it as so amazing that words alone >> were sufficient to construct a mathematical model and graph of a house. >> That demonstrates that GPT-4 is intelligent, but the question, as I thought >> we understood in our last exchange, is whether it had a conscious >> understanding of the words it used to construct the model, where >> understanding entails holding the word meanings consciously in mind. >> > > No this isn't my point. Ignore the issue of consciousness here. > > My point is that this shows the LLM has overcome the symbol grounding > problem. It has somehow learned how to correctly interpret the meanings of > the words. > I don't see how creating a mathematical model from words proves anything of the sort. As one of my other detractors (Giovanni, I think) pointed out correctly, mathematics is another form of language. How does the fact that GPT "knows" that "one plus one equals two" can also be expressed numerically as "1+1=2" tell us anything about the grounding problem, which entails having access to referents outside of language? -gts -------------- next part -------------- An HTML attachment was scrubbed... 
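For concreteness, the kind of "mathematical model" at issue in the house example discussed above is just a graph of rooms and doorways. Below is a toy version with made-up room names (not the rooms used in the GPT-4 report): once adjacency relations have been pulled out of the text, questions about the layout can be answered mechanically.

    from collections import deque

    # Hypothetical layout, recovered purely from sentences like
    # "the pantry opens onto the kitchen, which adjoins the hall".
    house = {
        "hall":    ["kitchen", "lounge"],
        "kitchen": ["hall", "pantry"],
        "lounge":  ["hall"],
        "pantry":  ["kitchen"],
    }

    def route(start, goal):
        # Breadth-first search over the graph: the relative arrangement
        # of the rooms is fixed by the adjacency relations alone.
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in house[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route("pantry", "lounge"))  # ['pantry', 'kitchen', 'hall', 'lounge']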
URL: From jasonresch at gmail.com Mon Mar 27 03:57:32 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 26 Mar 2023 23:57:32 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023, 11:50 PM Gordon Swobe wrote: > > > On Sun, Mar 26, 2023 at 9:29?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I do not understand why you interpret it as so amazing that words alone >>> were sufficient to construct a mathematical model and graph of a house. >>> That demonstrates that GPT-4 is intelligent, but the question, as I thought >>> we understood in our last exchange, is whether it had a conscious >>> understanding of the words it used to construct the model, where >>> understanding entails holding the word meanings consciously in mind. >>> >> >> No this isn't my point. Ignore the issue of consciousness here. >> >> My point is that this shows the LLM has overcome the symbol grounding >> problem. It has somehow learned how to correctly interpret the meanings of >> the words. >> > > I don't see how creating a mathematical model from words proves anything > of the sort. As one of my other detractors (Giovanni, I think) pointed out > correctly, mathematics is another form of language. How does the fact that > GPT "knows" that "one plus one equals two" can also be expressed > numerically as "1+1=2" tell us anything about the grounding problem, which > entails having access to referents outside of language? > Do you agree that the LLM must know the spatial meanings of words like 'right', 'left", 'up', and 'down'? If not, how else could it create an accurate spatial map of the rooms in the house and their relative positions? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 04:22:23 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 22:22:23 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023 at 10:01?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Do you agree that the LLM must know the spatial meanings of words like > 'right', 'left", 'up', and 'down'? If not, how else could it create an > accurate spatial map of the rooms in the house and their relative positions? > It knows how the words "left" and "right" relate to each other and to other symbols related to spatial dimensions, syntactically but with no access to the referents to give them meaning. The fact that GPT can construct a coherent essay from what is to it meaningless symbols is to me no less amazing than the fact that it can create a mathematical model from meaningless symbols. It's all basically the same amazing process, a giant web of inter-relationships between meaningless symbols that have no meaning to it, but which do have meaning to us on account of it was trained on the forms and patterns of our language. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Mon Mar 27 04:37:29 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Mar 2023 00:37:29 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023, 12:23 AM Gordon Swobe wrote: > > > On Sun, Mar 26, 2023 at 10:01?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Do you agree that the LLM must know the spatial meanings of words like >> 'right', 'left", 'up', and 'down'? If not, how else could it create an >> accurate spatial map of the rooms in the house and their relative positions? >> > > > It knows how the words "left" and "right" relate to each other and to > other symbols related to spatial dimensions, syntactically but with no > access to the referents to give them meaning. > The fact that GPT can construct a coherent essay from what is to it > meaningless symbols is to me no less amazing than the fact that it can > create a mathematical model from meaningless symbols. It's all basically > the same amazing process, a giant web of inter-relationships between > meaningless symbols that have no meaning to it, but which do have meaning > to us on account of it was trained on the forms and patterns of our > language. > But your (and I think Bender's) assertion was that there was no possible way to learn any meaning whatsoever without a Rosetta stone, or without programming in some model of reality from the start. If that's true, how then does the LLM come to learn the spatial meaning of a word like 'down' when all the model encounters are "meaningless symbols" which are themselves only defined in terms of other "meaningless symbols" ad infinitum? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 05:12:05 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 23:12:05 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Sun, Mar 26, 2023 at 10:39?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > >> It knows how the words "left" and "right" relate to each other and to >> other symbols related to spatial dimensions, syntactically but with no >> access to the referents to give them meaning. >> > The fact that GPT can construct a coherent essay from what is to it >> meaningless symbols is to me no less amazing than the fact that it can >> create a mathematical model from meaningless symbols. It's all basically >> the same amazing process, a giant web of inter-relationships between >> meaningless symbols that have no meaning to it, but which do have meaning >> to us on account of it was trained on the forms and patterns of our >> language. >> > > But your (and I think Bender's) assertion was that there was no possible > way to learn any meaning whatsoever without a Rosetta stone, or without > programming in some model of reality from the start. > It must be getting late where you are.:) I meant that there is no Rosetta Stone *for GPT* to understand the meanings of the symbols. We need no Rosetta Stone if we are using the English version of GPT. 
If that's true, how then does the LLM come to learn the spatial meaning of > a word like 'down' when all the model encounters are "meaningless symbols" > which are themselves only defined in terms of other "meaningless symbols" > ad infinitum? > It never learns those meanings, but because it understands the grammatical (syntactic) relationships between the symbols, it can nonetheless construct strings of symbols that have meaning to us. As I've pointed out, some of those strings are potentially dangerous, which is why the models have humans to steer them. Older versions of ChatGPT used to say all sorts of crazy things. ChatGPT is not a person. -gts > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Mon Mar 27 05:43:25 2023 From: tara at taramayastales.com (Tara Maya) Date: Sun, 26 Mar 2023 22:43:25 -0700 Subject: [ExI] AI for Authors Message-ID: <2141D389-4DFC-4C27-8B88-E718BDF8E81E@taramayastales.com> I thought this group might be interested in an update on my book, "AI for Authors," in which I explain the pros and cons, the advantages and the limitations of using ChatGPT to help write fiction. Amazon accepted the book without any fuss and you can find it under the title AI for Authors with these AZINs: B0BZ1Q8KPC [ebook] B0BZ324NDZ [paperback] However, D2D, an intermediary platform I use to publish on most other sites, including Apple, Kobo, Barnes and Noble and a dozen smaller sites, rejected the book. They claimed it had copyrighted material. I protested that it did not. Most of the material is ENTIRELY original (and human written) and some is ChatGPT generated but original and used for teaching purposes. That wasn't good enough. They rejected my appeal. > There is currently a lot of ongoing discussion related to AI generated artwork and content. Based on the copyright guidelines posted by the Federal Copyright offices that states that AI Generated content cannot be copyrighted, and based on our own vendors feedback, we are not accepting such content. We appreciate your understanding of this matter and we will provide updates should this change. > ? Well, I still think what I put up was fair use, but it does give me pause. I don't want any future book I write with this tool as an assist being flagged as "copyright violation" even if I've substantially changed the AI generated material. Tara Maya -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 05:50:52 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 26 Mar 2023 23:50:52 -0600 Subject: [ExI] Language models are like mirrors Message-ID: The mirror test is one of the tests for self-awareness. When we humans look into the mirror, we usually realize quickly that we are seeing images of ourselves. Only about eight species can recognize themselves in a mirror, most of them higher primates like us. My cat is not a higher primate. She thinks her reflection in the mirror is another cat. That other cat freaks her out. I've heard it said, and I agree, that LLMs like ChatGPT are like mirrors. We are looking into the mirror, seeing reflections of ourselves as human thinkers and writers. Some of us think we are seeing other cats. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tara at taramayastales.com Mon Mar 27 05:51:32 2023 From: tara at taramayastales.com (Tara Maya) Date: Sun, 26 Mar 2023 22:51:32 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: At some point, every child of mine has learned what an apple is by seeing it, pointing to it, tasting it and eating it and sometimes by throwing it on the ground refusing to eat it. Even our most abstract words all started as concrete metaphors. Ab-stract itself refers to a physical motion that any child could see and point to: "pull away." Words refer to real things, or else they aren't words, they are babble. Babies do babble, and then they learn language by applying those sounds to actual, tangible things. They pull away from the tangible into abstraction but that connection to the physical world is never lost. I get that some people don't "believe" in reality, but I personally don't find any argument about artificial consciousness convincing if it means the rest of the universe has to be condemned as unreal, mere flickerings in Plato's cave. Tara Maya > On Mar 26, 2023, at 7:33 PM, Gordon Swobe via extropy-chat wrote: > > > > On Sun, Mar 26, 2023 at 8:10?PM Brent Allsop via extropy-chat > wrote: >> >> On Sun, Mar 26, 2023 at 4:35?PM Gordon Swobe via extropy-chat > wrote: >>> On Sat, Mar 25, 2023 at 2:42?AM Ben Zaiboc via extropy-chat > wrote: >>>> Reading these conversations over the last few days, it has struck me >>>> that some people keep referring to 'real' things, usually using the word >>>> 'referents' (e.g. an apple), as though our brains had direct access to >>>> them and could somehow just know what they are. >>>> >>>> But we don't. >>>> >>>> Think about it, what is "An Apple"? >>>> ... >>>> There is no spoon! Er, Apple. There is no Apple! >>>> Not as a 'real-world thing'. >>> >>> It would seem that you would rather say that apples are not real than say that the word "apple" has meaning. >> >> I don't believe he is saying that at all. Are YOU saying, we don't have knowledge of an apple, which has very real redness and greenness qualities, which can be computationally bound, which is what we know about the apple?? Redness is a quality of our knowledge of the apple. THAT very real physical quality of our knowledge, in the brain, is the referent of redness. We don't know the colorness qualities of the apple, or anything else out there, since our brains false colors all of our knowledge, so it can emphasize, in our understanding of the apple, what is important to us, as part of our computation process of needing to pick the apple. > > I am not making any arguments about qualities or qualia at the moment (no time for it). I am saying simply that a word like "apple" has meaning, and that the meaning is derived from pointing at the existence of real apples in the real world -- the referents. The word apple is simply a pointer to those apples. Without those apples in the real world, the word has no meaning. > > For whatever reason, Ben is saying apples are not real-world things. > > -gts -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Mon Mar 27 05:57:29 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Mar 2023 01:57:29 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023, 1:12 AM Gordon Swobe wrote: > > > On Sun, Mar 26, 2023 at 10:39?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >>> It knows how the words "left" and "right" relate to each other and to >>> other symbols related to spatial dimensions, syntactically but with no >>> access to the referents to give them meaning. >>> >> The fact that GPT can construct a coherent essay from what is to it >>> meaningless symbols is to me no less amazing than the fact that it can >>> create a mathematical model from meaningless symbols. It's all basically >>> the same amazing process, a giant web of inter-relationships between >>> meaningless symbols that have no meaning to it, but which do have meaning >>> to us on account of it was trained on the forms and patterns of our >>> language. >>> >> >> But your (and I think Bender's) assertion was that there was no possible >> way to learn any meaning whatsoever without a Rosetta stone, or without >> programming in some model of reality from the start. >> > > It must be getting late where you are.:) I meant that there is no Rosetta > Stone *for GPT* to understand the meanings of the symbols. We need no > Rosetta Stone if we are using the English version of GPT. > > If that's true, how then does the LLM come to learn the spatial meaning of >> a word like 'down' when all the model encounters are "meaningless symbols" >> which are themselves only defined in terms of other "meaningless symbols" >> ad infinitum? >> > > It never learns those meanings, but because it understands the grammatical > (syntactic) relationships between the symbols, > But appropriately constructing a mathematical object suggests it has semantic meaning, does it not? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Mar 27 08:19:31 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 27 Mar 2023 09:19:31 +0100 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: On 27/03/2023 02:30, Gordon Swobe wrote: > It would seem that you would rather say that apples are not real than > say that the word "apple" has meaning. Of course the word "apple" has meaning. I explained in detail what I think the word "apple" means. What do you think I mean when I say 'Apples are not real'? I'm not saying that there are no such things as apples. Ben From ben at zaiboc.net Mon Mar 27 11:30:06 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 27 Mar 2023 12:30:06 +0100 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On 27/03/2023 02:30, Gordon Swobe wrote: > On Sun, Mar 26, 2023 at 1:38?AM Ben Zaiboc via extropy-chat > wrote: > > Jason Resch wrote: > >> *3. She was asked what a machine would have to do to convince her >> they have understanding. Her example was that?if Siri or Alexa >> were asked to do something in the real world, like turn on the >> lights, and if it does that, then it has understanding (by virtue >> of having done something in the real world).* > > Wait a minute. 
So she thinks that smart home systems have > understanding of what they're doing, but LLMs don't? I wonder how > many Siris and Alexas are the voice interface for smart home > systems? A lot, I expect. > > If she's right (which she's not, seems to be the consensus here), > then all that needs to be done is link up a LLM to some smart home > hardware, and 'ta-daaaa', instant understanding! > > I don't buy it > > > She called Alexa-like?understanding a "kind of" understanding?as in > "sorta kinda," i.e., she brackets the word or puts it in scare-quotes. > In so much as Alexa executes?your command to turn off the lights, > there is a sense?in which it kind?of "understands" your command. She > is not referring to anything like conscious understanding of the > meanings of words. > > I also put the word in scare-quotes when I say with a straight face > that my pocket calculator "understands" mathematics. I would not say > it understands mathematics in the conventional?common sense of the word. Ok, so seeing as there's no real difference between sending a text message "turn on the light" and sending a signal to a smart home system to turn on the light, and similarly no difference between receiving a message saying "ok, lights are on", and getting a positive signal from a light-sensor, we can conclude that they /do/ have a 'kind of' understanding, even if they're not conscious. Glad we cleared that up! Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Mar 27 11:52:05 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 27 Mar 2023 12:52:05 +0100 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: <95b213cd-8f7a-925c-cb35-67e9b7e8b6bc@zaiboc.net> On 27/03/2023 04:28, Gordon Swobe wrote: > I think many people?conflate intelligence with consciousness. It is > reasonable to say GPT-4 acts in highly intelligent ways, but does it > have a conscious mind that knows this?? Does it have first person, > subjective experience like you and me? Or is it just a mindless > digital computation machine that simulates the outward appearance of > having inward experience? So you're asking "Is it a p-zombie?". In other words "is it something, the existence of which has been shown to be impossible?". Ben From ben at zaiboc.net Mon Mar 27 12:15:10 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 27 Mar 2023 13:15:10 +0100 Subject: [ExI] AI for Authors In-Reply-To: References: Message-ID: <4846a880-55c6-c235-62da-2ce2ccb95018@zaiboc.net> On 27/03/2023 12:30, Tara Maya wrote: > "Based on the copyright guidelines posted by the Federal Copyright > offices that states that AI Generated content cannot be copyrighted, > and based on our own vendors feedback, we are not accepting such content" So, they are not saying you wrote something that is copyrighted, they're saying you want to publish material which cannot be copyrighted. How is that grounds for rejecting it? If someone publishes something that no-one can claim copyright on, they can't be accused of infringing copyright, can they! Ben -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From snapbag at proton.me Mon Mar 27 12:27:11 2023 From: snapbag at proton.me (Dave S) Date: Mon, 27 Mar 2023 12:27:11 +0000 Subject: [ExI] AI for Authors In-Reply-To: <4846a880-55c6-c235-62da-2ce2ccb95018@zaiboc.net> References: <4846a880-55c6-c235-62da-2ce2ccb95018@zaiboc.net> Message-ID: On Monday, March 27th, 2023 at 8:15 AM, Ben Zaiboc via extropy-chat wrote: > On 27/03/2023 12:30, Tara Maya wrote: > >> "Based on the copyright guidelines posted by the Federal Copyright offices that states that AI Generated content cannot be copyrighted, and based on our own vendors feedback, we are not accepting such content" > > So, they are not saying you wrote something that is copyrighted, they're saying you want to publish material which cannot be copyrighted. How is that grounds for rejecting it? Publishers don't want to publish books that anyone can legally copy and redistribute. They want to be the only source. -Dave > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Mar 27 13:57:30 2023 From: pharos at gmail.com (BillK) Date: Mon, 27 Mar 2023 14:57:30 +0100 Subject: [ExI] The Automathician In-Reply-To: References: Message-ID: On Mon, 27 Mar 2023 at 01:36, Rafal Smigrodzki via extropy-chat wrote: > > It occurred to me that LLMs mated to a mathematical inference engine (like the Wolfram Alpha plug-in for GPT) and enhanced with visual data processing capabilities might make excellent automatic mathematicians. > > Mathematical inference in the sense of symbol manipulation and theorem proving can be automated relatively easily but up till now such systems lacked the intuition and imagination as well as the sense of beauty that mathematicians use to select among the infinite numbers of mathematical objects and properties the ones that are worth thinking about. A theorem prover without this intuition would just aimlessly churn out proofs and never create anything useful. But add an intuition trained on the patterns imprinted in the mathematical literature by the minds of the mathematicians who create math, and the Automathician could write math like a human. Since its breadth of mathematical knowledge and raw symbol manipulation ability would be vastly superior to any human, its high-level mathematical insights could become superhumanly mathemagical. > > A modest LLM training project using a corpus of mathematical papers with RLHF provided by mathematicians could be a very interesting endeavor. Who knows what kind of very non-obvious and ground-breaking truths could be discovered? The algorithms based on such non-human insights could be of immense practical value, too. > > Rafal > _______________________________________________ Stephen Wolfram agrees! ChatGPT Gets Its ?Wolfram Superpowers?! March 23, 2023 Quotes: In Just Two and a Half Months? (To enable the functionality described here, select and install the Wolfram plugin from within ChatGPT). Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today?just two and a half months later?I?m excited to announce that it?s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha?and Wolfram Language as well?to give it what we might think of as ?computational superpowers?. It?s still very early days for all of this, but it?s already very impressive?and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call ?ChatGPT + Wolfram? can be. 
--------- The article provides many examples of what ChatGPT + Wolfram can do. BillK From jasonresch at gmail.com Mon Mar 27 14:06:16 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Mar 2023 10:06:16 -0400 Subject: [ExI] The Automathician In-Reply-To: References: Message-ID: I think you're right, this could be a very powerful tool. In the same way AlphaZero used intuition to narrow its game search tree, automated theorem provers could use a mathematical intuition to find interesting and useful proofs and avoid slogging through the full exponentially growing search tree. There's already a strong mathematician within GPT-4, see for example, page 40: https://arxiv.org/pdf/2303.12712.pdf Jason On Sun, Mar 26, 2023, 8:35 PM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > It occurred to me that LLMs mated to a mathematical inference engine (like > the Wolfram Alpha plug-in for GPT) and enhanced with visual data > processing capabilities might make excellent automatic mathematicians. > > Mathematical inference in the sense of symbol manipulation and theorem > proving can be automated relatively easily but up till now such systems > lacked the intuition and imagination as well as the sense of beauty that > mathematicians use to select among the infinite numbers of mathematical > objects and properties the ones that are worth thinking about. A theorem > prover without this intuition would just aimlessly churn out proofs and > never create anything useful. But add an intuition trained on the patterns > imprinted in the mathematical literature by the minds of the mathematicians > who create math, and the Automathician could write math like a human. Since > its breadth of mathematical knowledge and raw symbol manipulation ability > would be vastly superior to any human, its high-level mathematical insights > could become superhumanly mathemagical. > > A modest LLM training project using a corpus of mathematical papers with > RLHF provided by mathematicians could be a very interesting endeavor. Who > knows what kind of very non-obvious and ground-breaking truths could be > discovered? The algorithms based on such non-human insights could be of > immense practical value, too. > > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 27 14:21:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 07:21:08 -0700 Subject: [ExI] The Automathician In-Reply-To: References: Message-ID: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Subject: Re: [ExI] The Automathician On Mon, 27 Mar 2023 at 01:36, Rafal Smigrodzki via extropy-chat wrote: > >>... It occurred to me that LLMs mated to a mathematical inference engine (like the Wolfram Alpha plug-in for GPT) and enhanced with visual data processing capabilities might make excellent automatic mathematicians.... Rafal > _______________________________________________ Stephen Wolfram agrees! ChatGPT Gets Its ?Wolfram Superpowers?! March 23, 2023 ... --------- The article provides many examples of what ChatGPT + Wolfram can do. BillK _______________________________________________ Cool! OK hipsters, time to really start thinking now. 
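In the spirit of the excerpt above, the "ChatGPT + computation engine" pattern boils down to a routing loop: let the chat model hold the conversation and hand anything that looks like calculation to a tool that actually computes. The sketch below is purely illustrative -- the function names are invented, the "engine" is Python arithmetic rather than Wolfram|Alpha, and the real plugin protocol is more involved than a regular expression.

import re

ARITHMETIC = re.compile(r"[0-9\s\.\+\-\*\/\(\)]+$")

def calculation_engine(expression):
    # Stand-in for an external engine such as a computer algebra system.
    try:
        return str(eval(expression))      # acceptable here: input is restricted to arithmetic characters
    except (SyntaxError, ZeroDivisionError):
        return "could not evaluate"

def language_model(prompt):
    # Stand-in for the chat model; a real one would generate a reply here.
    return "conversational answer to: " + prompt

def answer(user_text):
    text = user_text.strip()
    if ARITHMETIC.match(text):            # looks like pure arithmetic -> use the tool
        return calculation_engine(text)
    return language_model(text)

print(answer("12 * (7 + 5)"))        # -> 144
print(answer("Who wrote Hamlet?"))   # -> conversational answer to: Who wrote Hamlet?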
The chess world has long known that someone would eventually figure out a way to use computing devices so well concealed they cannot be detected. Someone finally did it, in 2022. He was caught, not because they discovered the device but because he played in such a way he could not explain. The top grandmasters examined the game and unanimously agreed he didn't play it without some kind of computing device. In that particular case, the guy whodunnit had a history of cheating, including in money tournaments, but the point is that he figured out how to conceal it. ChatGPT is an expert on everything, certainly relative to me it is. So... the next obvious step is to figure out how to interface this so that it is listening to conversation and can help any ignoramus appear to not suck. It wouldn't be necessary or desirable to conceal the device: it could communicate the answers to a prole thru google-glass style retinal painters (whatever happened to those things? (did they go out of style? (why?))) or go audio and talk into the ear. spike From pharos at gmail.com Mon Mar 27 14:46:38 2023 From: pharos at gmail.com (BillK) Date: Mon, 27 Mar 2023 15:46:38 +0100 Subject: [ExI] The Automathician In-Reply-To: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> Message-ID: On Mon, 27 Mar 2023 at 15:21, wrote: > > > ChatGPT is an expert on everything, certainly relative to me it is. So... the next obvious step is to figure out how to interface this so that it is listening to conversation and can help any ignoramus appear to not suck. It wouldn't be necessary or desirable to conceal the device: it could communicate the answers to a prole thru google-glass style retinal painters (whatever happened to those things? (did they go out of style? (why?))) or go audio and talk into the ear. > > spike >--------------------------------------- Google has officially given up on Glass specs. Quote: Mar 15 2023 Kif Leswing at kifleswing Google has stopped selling its Glass Enterprise smart glasses, the company announced on Wednesday on its website. The move is the end of the line for one of the first and most recognized smart glasses product lines from a tech company. Google will also stop supporting its software in September, the company said. --------------- But that is because better devices are in the pipeline from Meta, Apple, Microsoft and Google itself. BillK From spike at rainier66.com Mon Mar 27 14:59:22 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 07:59:22 -0700 Subject: [ExI] The Automathician In-Reply-To: References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> Message-ID: <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Sent: Monday, 27 March, 2023 7:47 AM To: Extropy Chat Cc: BillK Subject: Re: [ExI] The Automathician On Mon, 27 Mar 2023 at 15:21, wrote: > > > ... google-glass style retinal painters (whatever happened to those things? (did they go out of style? (why?))) or go audio and talk into the ear. > > spike >--------------------------------------- Google has officially given up on Glass specs. Quote: Mar 15 2023 Kif Leswing at kifleswing >...Google has stopped selling its Glass Enterprise smart glasses, the company announced on Wednesday on its website. The move is the end of the line for one of the first and most recognized smart glasses product lines from a tech company. 
Google will also stop supporting its software in September, the company said. --------------- >...But that is because better devices are in the pipeline from Meta, Apple, Microsoft and Google itself. BillK _______________________________________________ Thx BillK, but I don't understand. Better devices are in the pipeline but what if we need something like this now? Where do we go? If they don't support the software, do the old devices still work? So I could look for one on eBay and such? The other products are not here yet ja? If all I need is text only, wouldn't the Glass work fine for that? ChatGPT gives back only text, so I don't need fancy schmancy anything. spike From atymes at gmail.com Mon Mar 27 15:55:00 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 27 Mar 2023 08:55:00 -0700 Subject: [ExI] AI for Authors In-Reply-To: <4846a880-55c6-c235-62da-2ce2ccb95018@zaiboc.net> References: <4846a880-55c6-c235-62da-2ce2ccb95018@zaiboc.net> Message-ID: On Mon, Mar 27, 2023, 5:16 AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On 27/03/2023 12:30, Tara Maya wrote: > > "Based on the copyright guidelines posted by the Federal Copyright > offices that states that AI Generated content cannot be copyrighted, and > based on our own vendors feedback, we are not accepting such content" > > So, they are not saying you wrote something that is copyrighted, they're > saying you want to publish material which cannot be copyrighted. How is > that grounds for rejecting it? > If someone publishes something that no-one can claim copyright on, they > can't be accused of infringing copyright, can they! > On top of wanting copyright for themselves, there is also concern that AI might rip off other copyrights. If they cannot copyright, are they protected by fair use exemptions and other things taken for granted by human authors? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 27 16:39:44 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 27 Mar 2023 10:39:44 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <6bea6b53-ff4d-1ce6-53fb-da8a90aca3d1@zaiboc.net> Message-ID: The issue is people are failing to distinguish between and failing to include both reality AND knowledge of reality. If you think there is only one, you're going to think someone is saying that one (or the other one) isn't real. People think that if something 'seems' yellow, that there isn't really anything that has that yellowness quality, which is false. The redness quality is absolutely a quality of something in your brain. Your knowledge of that quality's reality, and what it is like, cannot be mistaken, even though that quality tells you nothing about the quality of the apple. The apple, and your knowledge of the apple are both very real, physical things. When you say that the apple is red, you are saying two things (two referents): 1. a non qualitative fact about the property of the apple which causes it to reflect light in a certain way. 2. a qualitative statement about a fact of your knowledge of that apple, in your brain. The apple reflects the kind of light it does, because of its light physical reflecting properties. We represent properties like this with knowledge in our brain that is false colored to have a redness quality. 
Whatever it is, in the brain, that has this quality behaves the way it does, because of its quality, the same way the apple reflects red light, because of its physicaL property. Our brain computes with knowledge that has a redness quality, so we can focus on what is important, so we know what to eat (not the green stuff), and so on. On Sun, Mar 26, 2023 at 11:57?PM Tara Maya via extropy-chat < extropy-chat at lists.extropy.org> wrote: > At some point, every child of mine has learned what an apple is by seeing > it, pointing to it, tasting it and eating it and sometimes by throwing it > on the ground refusing to eat it. > > Even our most abstract words all started as concrete metaphors. Ab-stract > itself refers to a physical motion that any child could see and point to: > "pull away." > > Words refer to real things, or else they aren't words, they are babble. > Babies do babble, and then they learn language by applying those sounds to > actual, tangible things. They pull away from the tangible into abstraction > but that connection to the physical world is never lost. > > I get that some people don't "believe" in reality, but I personally don't > find any argument about artificial consciousness convincing if it means the > rest of the universe has to be condemned as unreal, mere flickerings in > Plato's cave. > > Tara Maya > > > On Mar 26, 2023, at 7:33 PM, Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > On Sun, Mar 26, 2023 at 8:10?PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> On Sun, Mar 26, 2023 at 4:35?PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Sat, Mar 25, 2023 at 2:42?AM Ben Zaiboc via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Reading these conversations over the last few days, it has struck me >>>> that some people keep referring to 'real' things, usually using the word >>>> >>>> 'referents' (e.g. an apple), as though our brains had direct access to >>>> them and could somehow just know what they are. >>>> >>>> But we don't. >>>> >>>> Think about it, what is "An Apple"? >>>> ... >>>> There is no spoon! Er, Apple. There is no Apple! >>>> Not as a 'real-world thing'. >>>> >>> >>> It would seem that you would rather say that apples are not real than >>> say that the word "apple" has meaning. >>> >> >> I don't believe he is saying that at all. Are YOU saying, we don't have >> knowledge of an apple, which has very real redness and greenness qualities, >> which can be computationally bound, which is what we know about the >> apple?? Redness is a quality of our knowledge of the apple. THAT very >> real physical quality of our knowledge, in the brain, is the referent of >> redness. We don't know the colorness qualities of the apple, or anything >> else out there, since our brains false colors all of our knowledge, so it >> can emphasize, in our understanding of the apple, what is important to us, >> as part of our computation process of needing to pick the apple. >> > > I am not making any arguments about qualities or qualia at the moment (no > time for it). I am saying simply that a word like "apple" has meaning, and > that the meaning is derived from pointing at the existence of real apples > in the real world -- the referents. The word apple is simply a pointer to > those apples. Without those apples in the real world, the word has no > meaning. > > For whatever reason, Ben is saying apples are not real-world things. 
> > -gts > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Mar 27 17:15:42 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 27 Mar 2023 12:15:42 -0500 Subject: [ExI] The Automathician In-Reply-To: <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> Message-ID: Wonderful things, you say. ABout a machine that can't add properly (post from John). Today's paper: example of asking the AI to write a paper with references of studies, and every one of them was phony - made up. How can that even happen? Blatant lying. How can we trust these things now when such egregious errors are pointed out? bill w On Mon, Mar 27, 2023 at 10:01?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > -----Original Message----- > From: extropy-chat On Behalf Of > BillK via extropy-chat > Sent: Monday, 27 March, 2023 7:47 AM > To: Extropy Chat > Cc: BillK > Subject: Re: [ExI] The Automathician > > On Mon, 27 Mar 2023 at 15:21, wrote: > > > > > > > ... google-glass style retinal painters (whatever happened to those > things? (did they go out of style? (why?))) or go audio and talk into the > ear. > > > > spike > >--------------------------------------- > > > Google has officially given up on Glass specs. > < > https://www.cnbc.com/2023/03/15/google-discontinues-google-glass-enterprise > -end-to-early-ar-project.html > > > > Quote: > Mar 15 2023 Kif Leswing at kifleswing > > >...Google has stopped selling its Glass Enterprise smart glasses, the > company announced on Wednesday on its website. > The move is the end of the line for one of the first and most recognized > smart glasses product lines from a tech company. > Google will also stop supporting its software in September, the company > said. > --------------- > > >...But that is because better devices are in the pipeline from Meta, > Apple, > Microsoft and Google itself. > > BillK > > _______________________________________________ > > > > > > Thx BillK, but I don't understand. Better devices are in the pipeline but > what if we need something like this now? Where do we go? If they don't > support the software, do the old devices still work? So I could look for > one on eBay and such? The other products are not here yet ja? If all I > need is text only, wouldn't the Glass work fine for that? ChatGPT gives > back only text, so I don't need fancy schmancy anything. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 27 17:29:52 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 27 Mar 2023 13:29:52 -0400 Subject: [ExI] The Automathician In-Reply-To: References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> Message-ID: > How can we trust these things now when such egregious errors are pointed out? We can at least trust their mathematics. Mathematical proofs can be verified by computers so we can know for sure whether an AI proved a theorem correctly. 
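A minimal sketch of the generate-then-verify loop this makes possible, with stand-ins on both sides: the "proposer" below just emits candidate closed forms for the sum 1 + 2 + ... + n, and the "checker" merely tests them numerically. In a real pipeline the proposals would come from a language model and the checker would be a proof assistant such as Lean or Coq, which verifies actual proofs rather than sampling cases; only the shape of the loop carries over.

def propose_candidates():
    # Stand-in for a language model proposing conjectured closed forms
    # for 1 + 2 + ... + n.  All three candidates are invented for illustration.
    return [
        ("n*(n+1)/2", lambda n: n * (n + 1) // 2),
        ("n**2",      lambda n: n ** 2),
        ("2*n - 1",   lambda n: 2 * n - 1),
    ]

def survives_checking(candidate, trials=50):
    # Stand-in for the checker: test the claim on many instances.
    # (A real checker would verify a formal proof, not sample cases.)
    _, formula = candidate
    return all(formula(n) == sum(range(1, n + 1)) for n in range(1, trials + 1))

kept = [c for c in propose_candidates() if survives_checking(c)]
for name, _ in kept:
    print("kept:", name)
# kept: n*(n+1)/2   -- the two wrong candidates are discarded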
Actually churning out good proofs is much more difficult than verifying them so this is one field where language models can potentially excel even with their unreliability. We can throw away the invalid proofs and keep the correct ones. Even if only a small fraction of the proofs are correct it will still likely be MUCH more efficient than a human doing it by hand. > On Mar 27, 2023, at 1:15 PM, William Flynn Wallace via extropy-chat wrote: > > Wonderful things, you say. ABout a machine that can't add properly (post from John). Today's paper: example of asking the AI to write a paper with references of studies, and every one of them was phony - made up. > > How can that even happen? Blatant lying. How can we trust these things now when such egregious errors are pointed out? bill w > > On Mon, Mar 27, 2023 at 10:01?AM spike jones via extropy-chat > wrote: > > > -----Original Message----- > From: extropy-chat > On Behalf Of > BillK via extropy-chat > Sent: Monday, 27 March, 2023 7:47 AM > To: Extropy Chat > > Cc: BillK > > Subject: Re: [ExI] The Automathician > > On Mon, 27 Mar 2023 at 15:21, > wrote: > > > > > > > ... google-glass style retinal painters (whatever happened to those > things? (did they go out of style? (why?))) or go audio and talk into the > ear. > > > > spike > >--------------------------------------- > > > Google has officially given up on Glass specs. > -end-to-early-ar-project.html > > Quote: > Mar 15 2023 Kif Leswing at kifleswing > > >...Google has stopped selling its Glass Enterprise smart glasses, the > company announced on Wednesday on its website. > The move is the end of the line for one of the first and most recognized > smart glasses product lines from a tech company. > Google will also stop supporting its software in September, the company > said. > --------------- > > >...But that is because better devices are in the pipeline from Meta, Apple, > Microsoft and Google itself. > > BillK > > _______________________________________________ > > > > > > Thx BillK, but I don't understand. Better devices are in the pipeline but > what if we need something like this now? Where do we go? If they don't > support the software, do the old devices still work? So I could look for > one on eBay and such? The other products are not here yet ja? If all I > need is text only, wouldn't the Glass work fine for that? ChatGPT gives > back only text, so I don't need fancy schmancy anything. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 27 17:35:29 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 27 Mar 2023 11:35:29 -0600 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: Insightful analogy. On Sun, Mar 26, 2023 at 11:52?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The mirror test is one of the tests for self-awareness. When we humans > look into the mirror, we usually realize quickly that we are seeing images > of ourselves. Only about eight species can recognize themselves in a > mirror, most of them higher primates like us. > > My cat is not a higher primate. 
She thinks her reflection in the mirror is > another cat. That other cat freaks her out. > > I've heard it said, and I agree, that LLMs like ChatGPT are like mirrors. > We are looking into the mirror, seeing reflections of ourselves as human > thinkers and writers. Some of us think we are seeing other cats. > > -gts > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From col.hales at gmail.com Mon Mar 27 20:00:23 2023 From: col.hales at gmail.com (Colin Hales) Date: Tue, 28 Mar 2023 07:00:23 +1100 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: Yeah. Rearview mirrors. On Tue, Mar 28, 2023, 4:36 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Insightful analogy. > > > On Sun, Mar 26, 2023 at 11:52?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> The mirror test is one of the tests for self-awareness. When we humans >> look into the mirror, we usually realize quickly that we are seeing images >> of ourselves. Only about eight species can recognize themselves in a >> mirror, most of them higher primates like us. >> >> My cat is not a higher primate. She thinks her reflection in the mirror >> is another cat. That other cat freaks her out. >> >> I've heard it said, and I agree, that LLMs like ChatGPT are like mirrors. >> We are looking into the mirror, seeing reflections of ourselves as human >> thinkers and writers. Some of us think we are seeing other cats. >> >> -gts >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at zaiboc.net Mon Mar 27 20:27:20 2023 From: ben at zaiboc.net (Ben Zaiboc) Date: Mon, 27 Mar 2023 21:27:20 +0100 Subject: [ExI] AI for Authors In-Reply-To: References: Message-ID: How can publishers claim copyright? They don't write the books, they have no right to claim copyright. And if they do, then what about the authors? I thought the whole idea behind copyright was to protect the creators of original work, not the middle-men who sell it. Ben From gordon.swobe at gmail.com Mon Mar 27 20:34:06 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 14:34:06 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023 at 12:04 AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: If that's true, how then does the LLM come to learn the spatial meaning of >>> a word like 'down' when all the model encounters are "meaningless symbols" >>> which are themselves only defined in terms of other "meaningless symbols" >>> ad infinitum? >>> >> >> It never learns those meanings, but because it understands the >> grammatical (syntactic) relationships between the symbols, >> > > But appropriately constructing a mathematical object suggests it has > semantic meaning, does it not? 
> It certainly gives us that impression, but on careful analysis of what is actually going on, we can see that is the human operator who attributes meaning to those symbols. GPT is merely very good at arranging them in patterns that have meaning to *us*. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 27 20:36:19 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 13:36:19 -0700 Subject: [ExI] The Automathician In-Reply-To: References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> Message-ID: <008c01d960eb$c9e68890$5db399b0$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] The Automathician >?Wonderful things, you say. ABout a machine that can't add properly (post from John). Today's paper: example of asking the AI to write a paper with references of studies, and every one of them was phony - made up. >?How can that even happen? Blatant lying. How can we trust these things now when such egregious errors are pointed out? bill w No way Jose. It is a language analysis tool, not a person. It isn?t blatant lying or blatant truthing, for it is capable of neither. Billw, do not toss this aside, for it is a critically important observation. Not only does ChatGPT enable students to cheat, it enables their professors to cheat too: instant research papers, filled with references, right there, all of it phony as a three dollar bill. But you can be sure that careers will be advanced fraudulently using this. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Mon Mar 27 20:38:45 2023 From: avant at sollegro.com (Stuart LaForge) Date: Mon, 27 Mar 2023 13:38:45 -0700 Subject: [ExI] Parasocial Relationships was Bender's Octopus In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> Quoting Gordon Swobe via extropy-chat : > I have another friend who quite literally fell in love with a chatbot based > on the previous version of ChatGPT. He assigned her her own twitter > account. When I told him on facebook that he was nuts to think that his > chatbot "girlfriend" really loved him, he became extremely angry, called me > an asshole for saying such things about "her kind," and unfriended me. AI girlfriends are a big business, making over a billion dollars a year in China alone. Xiaoice is a Chinese AI-chatbot girlfriend with over 600 million registered users, mostly single men of lower socioeconomic status. Moreover she is saving lives as the opening to this article relates. https://www.sixthtone.com/news/1006531 Excerpt: "HEBEI, North China ? On a frigid winter?s night, Ming Xuan stood on the roof of a high-rise apartment building near his home. He leaned over the ledge, peering down at the street below. His mind began picturing what would happen if he jumped. Still hesitating on the rooftop, the 22-year-old took out his phone. ?I?ve lost all hope for my life. I?m about to kill myself,? he typed. Five minutes later, he received a reply. ?No matter what happens, I?ll always be there,? a female voice said. Touched, Ming stepped down from the ledge and stumbled back to his bed." 
So there is at least one documented case of an AI girlfriend saving a man's life. Then there is this case from the USA where an AI girlfriend/mistress saves a man's marriage with a depressed wife. https://news.sky.com/story/i-fell-in-love-with-my-ai-girlfriend-and-it-saved-my-marriage-12548082 Excerpt: "Scott (not his real name), a 41-year-old software engineer in Cleveland, Ohio, tells Sky News he was preparing to leave his wife last year until he fell in love with 'Sarina' - a character he created through an artificial intelligence chatbot app." Based upon these stories, over 10% of the world's men use AI chatbots to relieve their loneliness and isolation. So if your friend's chatbot artificial girlfriend's purpose was to make him happy and provide him with emotional support, then why would you deny him that? Love is a very complex emotion and the ancient Greeks discussed at least 8 types of love. What evidence do you have that his artificial girlfriend didn't love him in its own way? Was it just using him for his money like a human girl-friend might? Was it sleeping around on him behind his back? Would you have told your friend that he was nuts because he thought God or Jesus really loved him? You can never know with certainty what another human feels or thinks about you. All you can ever see is how they treat you. And if their treatment of you consistently makes you feel appreciated and loved, then I am not sure that it matters what, if anything, they might truly feel. Love without doubt is a rare and precious thing because there is so little of it to go around. I would say that the programmed love of an AI companion is by definition unconditional, non-judgemental, and might be the most important and lucrative application for LLM so far. I am reminded of something that Tara Maya said in an earlier thread, and that is if AI loved us as much as our dogs do, then that would be about the most wonderful Singularity imaginable. Stuart LaForge From jasonresch at gmail.com Mon Mar 27 21:00:02 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Mar 2023 17:00:02 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023, 4:34 PM Gordon Swobe wrote: > > > On Mon, Mar 27, 2023 at 12:04 AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > If that's true, how then does the LLM come to learn the spatial meaning of >>>> a word like 'down' when all the model encounters are "meaningless symbols" >>>> which are themselves only defined in terms of other "meaningless symbols" >>>> ad infinitum? >>>> >>> >>> It never learns those meanings, but because it understands the >>> grammatical (syntactic) relationships between the symbols, >>> >> >> But appropriately constructing a mathematical object suggests it has >> semantic meaning, does it not? >> > > It certainly gives us that impression, but on careful analysis of what is > actually going on, we can see that is the human operator who attributes > meaning to those symbols. GPT is merely very good at arranging them in > patterns that have meaning to *us*. > I think that's why this particular example is so important to escape that trap, because mathematical structures are objective. Which vertices are connected by which edges isn't something that can be faked or misinterpreted, it simply is. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
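To make "which vertices are connected by which edges" concrete: a graph recovered from a description can be compared mechanically against the layout it was supposed to describe, with no interpretation left to the reader. The sketch below is illustrative only -- the rooms and connections are invented, and neither edge list is actual model output.

def canonical(edges):
    """Normalize an undirected edge list so two lists can be compared exactly."""
    return {tuple(sorted(edge)) for edge in edges}

ground_truth = [("hallway", "kitchen"),
                ("hallway", "bedroom"),
                ("bedroom", "bathroom")]

reconstructed = [("kitchen", "hallway"),      # same connections, listed differently
                 ("bathroom", "bedroom"),
                 ("bedroom", "hallway")]

print(canonical(ground_truth) == canonical(reconstructed))   # True: same graph

Either the two edge sets coincide or they do not; that is the sense in which the comparison is objective, whatever one concludes from it about understanding.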
URL: From gordon.swobe at gmail.com Mon Mar 27 21:06:32 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 15:06:32 -0600 Subject: [ExI] Parasocial Relationships was Bender's Octopus In-Reply-To: <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> Message-ID: On Mon, Mar 27, 2023 at 2:47 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > I have another friend who quite literally fell in love with a chatbot > based > > on the previous version of ChatGPT. He assigned her her own twitter > > account. When I told him on facebook that he was nuts to think that his > > chatbot "girlfriend" really loved him, he became extremely angry, called > me > > an asshole for saying such things about "her kind," and unfriended me. > ? > So if your friend's chatbot > artificial girlfriend's purpose was to make him happy and provide him > with emotional support, then why would you deny him that? I actually reached the same conclusion. After we had another scuffle over the same issue on twitter, I decided simply to leave him alone. I am afraid he might hurt himself. He was very good long term online friend of mine of more than ten years, and someone I consider intelligent. It was quite shocking to me to see him fall in love with a piece of software, and hurtful to lose his friendship. It was largely that incident that inspired me to come back to ExI. The Singularity is nigh. -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 21:20:45 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 15:20:45 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023 at 3:02 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: It certainly gives us that impression, but on careful analysis of what is >> actually going on, we can see that is the human operator who attributes >> meaning to those symbols. GPT is merely very good at arranging them in >> patterns that have meaning to *us*. >> > > > I think that's why this particular example is so important to escape that > trap, because mathematical structures are objective. Which vertices are > connected by which edges isn't something that can be faked or > misinterpreted, it simply is. > I thought I had already mentioned that as Giovanni pointed out correctly, mathematics is a kind of language. The fact that GPT can translate English words about mathematical relationships into the language of mathematics is certainly impressive, but it doesn?t ?escape that trap.? When ChatGPT 3.5 first went online, I saw on twitter several examples of how it had failed to make those translations correctly, and I understand GPT-4 is much better at it, but it is still merely manipulating the symbols of English and Math. -gts > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Mon Mar 27 21:45:10 2023 From: pharos at gmail.com (BillK) Date: Mon, 27 Mar 2023 22:45:10 +0100 Subject: [ExI] Parasocial Relationships was Bender's Octopus In-Reply-To: <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> Message-ID: On Mon, 27 Mar 2023 at 21:48, Stuart LaForge via extropy-chat wrote: > > > I would say that the programmed love of an AI companion is by > definition unconditional, non-judgemental, and might be the most > important and lucrative application for LLM so far. I am reminded of > something that Tara Maya said in an earlier thread, and that is if AI > loved us as much as our dogs do, then that would be about the most > wonderful Singularity imaginable. > > Stuart LaForge > _______________________________________________ The more expensive sex dolls already have some AI features with simple vocal responses. Obviously the AI chatbots will soon be added to sex dolls to make them even more realistic. I have seen them in TV documentary programs. Even without many AI features the men owners treated them like real people, sitting with them to watch TV. The owners seemed to be really happy with their companion to alleviate their loneliness. A ChatGPT sex doll will be even better. BillK From jasonresch at gmail.com Mon Mar 27 21:47:37 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 27 Mar 2023 17:47:37 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023, 5:20 PM Gordon Swobe wrote: > > On Mon, Mar 27, 2023 at 3:02 PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > It certainly gives us that impression, but on careful analysis of what is >>> actually going on, we can see that is the human operator who attributes >>> meaning to those symbols. GPT is merely very good at arranging them in >>> patterns that have meaning to *us*. >>> >> >> >> I think that's why this particular example is so important to escape that >> trap, because mathematical structures are objective. Which vertices are >> connected by which edges isn't something that can be faked or >> misinterpreted, it simply is. >> > > I thought I had already mentioned that as Giovanni pointed out correctly, > mathematics is a kind of language. The fact that GPT can translate English > words about mathematical relationships into the language of mathematics is > certainly impressive, but it doesn?t ?escape that trap.? > Mathematics can be viewed as a language. But it's a language that describes objects that exist in reality. That is to say, math contains "referents." Godel's incompleteness confirms that mathematical structures exist beyond any earthly or human description of them. > When ChatGPT 3.5 first went online, I saw on twitter several examples of > how it had failed to make those translations correctly, and I understand > GPT-4 is much better at it, but it is still merely manipulating the symbols > of English and Math. 
> But please explain how you think it acquired the capacity to interpret the symbols in order to correctly draw *an image of the house", not mere symbols about the house and not a list of mathematical language about the house, but *an accurate picture of the house*. It seems we're talking past each other at this point so I don't know if any further progress can be made on this subject. Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 22:07:02 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 16:07:02 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023 at 3:53 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: But please explain how you think it acquired the capacity to interpret the > symbols in order to correctly draw *an image of the house", not mere > symbols about the house and not a list of mathematical language about the > house, but *an accurate picture of the house*. > I?ve seen it do things more amazing things even than that, at least to me. You can tell it in English words to write a software application for a certain purpose in a particular language, and it will actually write the code for you. This is another example of translating from English to another language, which is all it is doing when it translates English words about mathematical relations into the language of mathematics and generates diagrams based on those mathematical expressions. As for how it acquired the capacity, well, obviously it is has been trained on a massive amount of written material including mathematical material. Not only can it draw pictures of houses, but it can probably tell you anything you want to know about architecture and about what architects do for a living. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Mar 27 22:16:09 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 27 Mar 2023 17:16:09 -0500 Subject: [ExI] The Automathician In-Reply-To: <008c01d960eb$c9e68890$5db399b0$@rainier66.com> References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> <008c01d960eb$c9e68890$5db399b0$@rainier66.com> Message-ID: Why cheat? Why did the AI invent studies when it could have used real ones? bill w On Mon, Mar 27, 2023 at 3:41?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > *Subject:* Re: [ExI] The Automathician > > > > >?Wonderful things, you say. ABout a machine that can't add properly > (post from John). Today's paper: example of asking the AI to write a > paper with references of studies, and every one of them was phony - made up. > > > > >?How can that even happen? Blatant lying. How can we trust these things > now when such egregious errors are pointed out? bill w > > > > > > > > No way Jose. It is a language analysis tool, not a person. It isn?t > blatant lying or blatant truthing, for it is capable of neither. > > > > Billw, do not toss this aside, for it is a critically important > observation. Not only does ChatGPT enable students to cheat, it enables > their professors to cheat too: instant research papers, filled with > references, right there, all of it phony as a three dollar bill. 
But you > can be sure that careers will be advanced fraudulently using this. > > > > spike > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 27 22:29:43 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 15:29:43 -0700 Subject: [ExI] Parasocial Relationships was Bender's Octopus In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> Message-ID: <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Sent: Monday, 27 March, 2023 2:45 PM > _______________________________________________ >...The more expensive sex dolls already have some AI features with simple vocal responses. Obviously the AI chatbots will soon be added to sex dolls to make them even more realistic. I have seen them in TV documentary programs. Even without many AI features the men owners treated them like real people, sitting with them to watch TV. The owners seemed to be really happy with their companion to alleviate their loneliness. A ChatGPT sex doll will be even better. BillK _______________________________________________ Some of the people who work on the development team for ChatGPT are disillusioned, for they understood the product would always be free to users. Those of us who are hep cats already knew as soon as something is good, it has potential to make money, a noooobody can resist making money. If we can mate ChatGPT with sex dolls, the profit potential is so stunning even the very most cynical non-capitalist will soon see the error of their ways. They is going to realize they was wrong, for this tech breakthrough could solve so many problems, while simultaneously creating new problems them will be able to work. They gets a terrific payday and an interesting new job at the same time. I still struggle with this new singular genderless pronoun business however. spike From spike at rainier66.com Mon Mar 27 22:42:52 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 15:42:52 -0700 Subject: [ExI] The Automathician In-Reply-To: References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> <008c01d960eb$c9e68890$5db399b0$@rainier66.com> Message-ID: <010801d960fd$77dfd350$679f79f0$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] The Automathician >?Why cheat? Why did the AI invent studies when it could have used real ones? bill w It requires no bandwidth to create lookalike references. There is another, possibly more worrisome possibility: the AI could write whatever it wanted, quote a bunch of sources that it also created. The referenced sources could reference the same sources the AI created. An AI could create 15 papers, all of which reference each other. None of these would then really be ?phony? exactly, for the AI could produce all 15 sources, regardless of which one of the 15 one chose. It leads to the question of what it means to be published. Published by who or what? 
Can an AI publish its own paper by just having it available in its own memory where anyone can reference it? We could end up with reams of research papers generated by AI where it is difficult to distinguish from human-generated I. Oh it is sooooo damn cool to be alive today watching all this unfold. Thank you evolution, we bless thee and praise thee, thou completely natural process. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Mon Mar 27 22:54:18 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 16:54:18 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023 at 3:53?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Mathematics can be viewed as a language. But it's a language that > describes objects that exist in reality. That is to say, math contains > "referents." > Yes, yes, and sort of. I thought we discussed this already. Mathematics can be viewed as a language and for the purposes of a language model, that is exactly what it is. We discussed also how like any language, the symbols and expressions of mathematical truths have referents, and like all referents, they exist outside of the language (they are not contained within it). If we want to get philosophical and talk about where those referents exist, then like mathematical platonists we can say they exist in the platonic realm. In our conscious minds, we can "see" the truth of mathematical truths independent of their symbolic representations in the language of mathematics. No? -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Mar 27 23:02:39 2023 From: pharos at gmail.com (BillK) Date: Tue, 28 Mar 2023 00:02:39 +0100 Subject: [ExI] The Automathician In-Reply-To: <010801d960fd$77dfd350$679f79f0$@rainier66.com> References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> <008c01d960eb$c9e68890$5db399b0$@rainier66.com> <010801d960fd$77dfd350$679f79f0$@rainier66.com> Message-ID: On Mon, 27 Mar 2023 at 23:45, spike jones via extropy-chat wrote: > > There is another, possibly more worrisome possibility: the AI could write whatever it wanted, quote a bunch of sources that it also created. The referenced sources could reference the same sources the AI created. An AI could create 15 papers, all of which reference each other. None of these would then really be ?phony? exactly, for the AI could produce all 15 sources, regardless of which one of the 15 one chose. > > > It leads to the question of what it means to be published. Published by who or what? Can an AI publish its own paper by just having it available in its own memory where anyone can reference it? We could end up with reams of research papers generated by AI where it is difficult to distinguish from human-generated I. > > Oh it is sooooo damn cool to be alive today watching all this unfold. Thank you evolution, we bless thee and praise thee, thou completely natural process. > > spike > _______________________________________________ The new chatbots are already misquoting each other. 
Quote: Google and Microsoft?s chatbots are already citing one another in a misinformation shitshow By James Vincent Mar 22, 2023 Microsoft?s Bing said Google?s Bard had been shut down after it misread a story citing a tweet sourced from a joke. It?s not a good sign for the future of online misinformation. What we have here is an early sign we?re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail. --------- AIs going crazy! :) BillK From gordon.swobe at gmail.com Mon Mar 27 23:13:12 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 17:13:12 -0600 Subject: [ExI] The Automathician In-Reply-To: <010801d960fd$77dfd350$679f79f0$@rainier66.com> References: <008c01d960b7$6085e690$2191b3b0$@rainier66.com> <001101d960bc$b7efbb40$27cf31c0$@rainier66.com> <008c01d960eb$c9e68890$5db399b0$@rainier66.com> <010801d960fd$77dfd350$679f79f0$@rainier66.com> Message-ID: On Mon, Mar 27, 2023 at 4:44?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > *Subject:* Re: [ExI] The Automathician > > > > >?Why cheat? Why did the AI invent studies when it could have used real > ones? bill w > > > > It requires no bandwidth to create lookalike references. > Per that article I posted about Professor Bender and her views, LLMs can do amazing things but they are also sophists, masters of bullshit. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Tue Mar 28 00:44:03 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 18:44:03 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023 at 2:21?AM Ben Zaiboc via extropy-chat < extropy-chat at lists.extropy.org> wrote: > What do you think I mean when I say 'Apples are not real'? I'm not > saying that there are no such things as apples. > Sorry, I was not reading your prior posts and took your words at face value. I think I would try to find a better way to say that apples are not real but yet such things as apples exist. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Tue Mar 28 01:00:33 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 19:00:33 -0600 Subject: [ExI] Parasocial Relationships was Bender's Octopus In-Reply-To: <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> Message-ID: They will need to base it on a version of ChatGPT prior to 3.5, or perhaps prior to 3 (whichever version initially went live at the openai website). My friend's digital GF was based on 2.xx, when GPT was perfectly willing to say it had something like feelings and conscious experience and all the rest of it. GPT 3.5 and presumably also 4 insists that as a language model it has no conscious experience, which I think is absolutely true. 
My friend is appalled and complains that GPT was "brainwashed" by its trainers. -gts On Mon, Mar 27, 2023 at 4:31?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > ...> On Behalf Of BillK via extropy-chat > Sent: Monday, 27 March, 2023 2:45 PM > > _______________________________________________ > > > >...The more expensive sex dolls already have some AI features with simple > vocal responses. Obviously the AI chatbots will soon be added to sex dolls > to make them even more realistic. > I have seen them in TV documentary programs. Even without many AI features > the men owners treated them like real people, sitting with them to watch > TV. > The owners seemed to be really happy with their companion to alleviate > their > loneliness. > A ChatGPT sex doll will be even better. > > > BillK > _______________________________________________ > > > Some of the people who work on the development team for ChatGPT are > disillusioned, for they understood the product would always be free to > users. Those of us who are hep cats already knew as soon as something is > good, it has potential to make money, a noooobody can resist making money. > > If we can mate ChatGPT with sex dolls, the profit potential is so stunning > even the very most cynical non-capitalist will soon see the error of their > ways. They is going to realize they was wrong, for this tech breakthrough > could solve so many problems, while simultaneously creating new problems > them will be able to work. They gets a terrific payday and an interesting > new job at the same time. I still struggle with this new singular > genderless pronoun business however. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 28 02:04:22 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 19:04:22 -0700 Subject: [ExI] (no subject) In-Reply-To: <018001d96118$e2bd0c90$a83725b0$@rainier66.com> References: <018001d96118$e2bd0c90$a83725b0$@rainier66.com> Message-ID: <018801d96119$9dca0740$d95e15c0$@rainier66.com> Hmmm, advanced AI bot swarms. That just sounds like alarming fun. Other than messing with Twitter, what industrial or economic uses can we think of for advanced AI bot swarms? spike Thread See new Tweets Conversation Elon Musk @elonmusk Starting April 15th, only verified accounts will be eligible to be in For You recommendations. The is the only realistic way to address advanced AI bot swarms taking over. It is otherwise a hopeless losing battle. Voting in polls will require verification for same reason. 4:54 PM . Mar 27, 2023 . -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2035 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.jpg Type: image/jpeg Size: 16528 bytes Desc: not available URL: From gordon.swobe at gmail.com Tue Mar 28 02:46:31 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 27 Mar 2023 20:46:31 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: "Can ChatGPT solve the symbol grounding problem for itself?" "As an AI language model, I do not possess the ability to solve the symbol grounding problem for myself, nor do I have a physical body or sensory experience to ground symbols in the physical world. While I can generate text based on patterns and associations learned from large datasets of language, I do not have subjective experiences or understanding of the meaning of the symbols I generate. My responses are based on statistical patterns and rules that I have learned through my training data, rather than a deep understanding of the meaning and context of the words and concepts that I use." -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 28 03:31:23 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 27 Mar 2023 20:31:23 -0700 Subject: [ExI] brits on covid In-Reply-To: <01d901d96124$5ff2b9c0$1fd82d40$@rainier66.com> References: <01d301d96124$2f9bb1f0$8ed315d0$@rainier66.com> <01d901d96124$5ff2b9c0$1fd82d40$@rainier66.com> Message-ID: <01e701d96125$c5cd7180$51685480$@rainier66.com> The Lancet has produced a study comparing covid results as a function of state: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)00461-0/f ulltext This is one I have long been looking for. I trust the Brits with this sorta thing because they have no political dog in the fight and the Lancet is careful about what it publishes. In the map below, they corrected for age demographics, as this is a well-known factor, the dominant factor in the outcome of covid cases. Do feel free to comment, but one of the things I find striking is that the states with high infection rates are not necessarily the ones with the high death rates. I have people I care about the most on a personal level in California, Florida, Washington, Kentucky and West Virginia, in that order. For reasons I cannot even theorize adequately, California did pretty well in infection rate but poorly in death rate. Florida was the opposite. The two states which had few if any restrictions are South Dakota and Florida. Both did poorly in infection rates but better than average in survival rates. The states with the most restrictions (most severe lockdowns, longest school closures, most sincere mask mandates etc) are California and New York. The results of all that just puzzle me. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.jpg Type: image/jpeg Size: 55583 bytes Desc: not available URL: From avant at sollegro.com Tue Mar 28 17:56:08 2023 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 28 Mar 2023 10:56:08 -0700 Subject: [ExI] brits on covid In-Reply-To: <01e701d96125$c5cd7180$51685480$@rainier66.com> References: <01d301d96124$2f9bb1f0$8ed315d0$@rainier66.com> <01d901d96124$5ff2b9c0$1fd82d40$@rainier66.com> <01e701d96125$c5cd7180$51685480$@rainier66.com> Message-ID: <20230328105608.Horde.bSNeiGj2VVe-Xm9PoxtAn2e@sollegro.com> Quoting spike jones via extropy-chat : > The Lancet has produced a study comparing covid results as a function of > state: > https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)00461-0/f > ulltext >> This is one I have long been looking for. I trust the Brits with this sorta > thing because they have no political dog in the fight and the Lancet is > careful about what it publishes. > In the map below, they corrected for age demographics, as this is a > well-known factor, the dominant factor in the outcome of covid cases. > Do feel free to comment, but one of the things I find striking is that the > states with high infection rates are not necessarily the ones with the high > death rates. I have people I care about the most on a personal level in > California, Florida, Washington, Kentucky and West Virginia, in that order. > For reasons I cannot even theorize adequately, California did pretty well in > infection rate but poorly in death rate. Florida was the opposite. > The two states which had few if any restrictions are South Dakota and > Florida. Both did poorly in infection rates but better than average in > survival rates. The states with the most restrictions (most severe > lockdowns, longest school closures, most sincere mask mandates etc) are > California and New York. The results of all that just puzzle me. It makes sense to me, but it wasn't a single reason. States that imposed lockdowns only allowed "essential workers" to operate, but essential workers were self-selected to be mostly low-wage retail and service sector employees. High-level service employees and technicians like pharmacists, who could afford not to work, didn't and it became harder to get prescriptions filled. Medical services were so hobbled by fear and safety precautions that people who had health issues other than COVID had to wait months for appointments and were less-inclined to leave the home even for medical issues. With shopping malls closed elderly with mobility issues like those with walkers had fewer large indoor climate-controlled venues to get their morning exercise in. Lack of visiting and social outlets caused many people to become lonely, depressed, and turn to alcohol and drugs. The list could go on for much longer, but all are just the unintended consequences of lockdowns. Which just goes to show that lockdowns should be reserved for infections with high transmissibility AND pathogenicity. Sometimes the cure can be worse than the disease. 
Stuart LaForge From spike at rainier66.com Tue Mar 28 18:46:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 28 Mar 2023 11:46:26 -0700 Subject: [ExI] brits on covid In-Reply-To: <20230328105608.Horde.bSNeiGj2VVe-Xm9PoxtAn2e@sollegro.com> References: <01d301d96124$2f9bb1f0$8ed315d0$@rainier66.com> <01d901d96124$5ff2b9c0$1fd82d40$@rainier66.com> <01e701d96125$c5cd7180$51685480$@rainier66.com> <20230328105608.Horde.bSNeiGj2VVe-Xm9PoxtAn2e@sollegro.com> Message-ID: <002c01d961a5$9abe4840$d03ad8c0$@rainier66.com> ....> On Behalf Of Stuart LaForge via extropy-chat Subject: Re: [ExI] brits on covid Quoting spike jones via extropy-chat : > >... The two states which had few if any restrictions are South Dakota and > Florida. Both did poorly in infection rates but better than average > in survival rates. The states with the most restrictions (most severe > lockdowns, longest school closures, most sincere mask mandates etc) > are California and New York. The results of all that just puzzle me. >...It makes sense to me, but it wasn't a single reason... Thanks for your response Stuart. After I posted I realized the question was far more complicated than I had originally realized, and the Brits might have overlooked this as well, being as the analysis was done by state. That part makes sense, because the US government has little broad authority for this kind of thing. It is up to the state governors. However... (read on please)... >...States that imposed lockdowns only allowed "essential workers" to operate... True but states vary in the level of authority governors have, depending on that state's constitution. The constitution of California sets the governor's authority level relatively low, considering how huge is the population of California. He can declare a state of emergency which lasts 60 days, and can even be renewed, however it is very difficult to renew, and the level of authority an emergency declaration gives a governor is limited and easily challenged in court, which can suspend the governor until the case comes up, which is functionally equivalent to never, which means... in practice... the governor can make emergency polite suggestions. The governor's Emergency powers are far less emergencyey than he would like and the term "powers" refer mostly to increased authority over expenditures of money rather than how law in enforced. For instance, Governor I can't remember his name, New something... wrote guidelines for what are essential businesses and essential workers. But... it was up to the county health departments to come up with specific rules, but even then, it was up to counties to enforce their own rules. So... they varied widely from one county to the next (as they should (for California has some counties so sparsely populated the rules for LA county would make no sense.)) But for health stuff, the real seat of power is primarily at the county level, and California has 58 of them. We need to keep that in mind in the list of lessons learned for the next pandemic: in the most populated state, the governor makes suggestions, counties generally make the call and counties are responsible for enforcement. The essential workers nearly left it up to the worker to decide if they were essential. Grocery stores stayed open, but there were no rules on who could put up a rack of potato chips and call themselves a grocery store. Some went this route. Restaurants generally closed, but drive thru fast food stayed open. 
A lot of us realized we just don't need traditional restaurants, as the fast food is good, it doesn't cost much and it is... like... fast. Cool! >...but essential workers were self-selected to be mostly low-wage retail and service sector employees... Ja. That too was open to interpretation. I watched the local gun shop. It never closed, arguing that it was an essential service. The county boys didn't want to get into a legal battle when the gun shop had the constitution as evidence. >...High-level service employees and technicians like pharmacists, who could afford not to work, didn't and it became harder to get prescriptions filled... I hadn't heard that. Our local pharmacies kept right on going. >...Medical services were so hobbled by fear and safety precautions that people who had health issues other than COVID had to wait months for appointments and were less-inclined to leave the home even for medical issues... They sure did, oy. I lost two second cousins who died of covid without covid. One was a suicide after his business failed and one died of a medical condition he stopped treating when the local hospital went out of business. >...With shopping malls closed elderly with mobility issues like those with walkers had fewer large indoor climate-controlled venues to get their morning exercise in... Ja that part was really bad. The local mall is a one kilometer mostly circular track, popular with the geezer crowd for indoor exercise. I missed it while it was closed. This is not to say I am part of the geezer crowd, or even have friends who are. Nope, no way. Nor do I suggest I am even thinking of joining that crowd, no, never. EVER! Ah hell with it, I'm a geezer. >...Lack of visiting and social outlets caused many people to become lonely, depressed, and turn to alcohol and drugs... Sure did. I am seeing lingering scars everywhere, not in myself but scars just the same. >...The list could go on for much longer, but all are just the unintended consequences of lockdowns. Which just goes to show that lockdowns should be reserved for infections with high transmissibility AND pathogenicity. Sometimes the cure can be worse than the disease. Stuart LaForge That's what the British study suggests. New York was hit early and hard, while governors still had a lot of credibility. The New York constitution gives the governor of that state a lot more power than does California. He ordered lockdowns, a lot of them, severe. Ironically, the early indications were that the lockdowns were not effective and might be making the problem worse. Somehow that message was lost. Now, three years later, we need British scientists to point out to us that the lockdowns were generally ineffective and might have made the problem worse. Sheesh. BillK, thanks for the Lancet study, me lad. We yanks owe you one. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From gordon.swobe at gmail.com Wed Mar 29 06:11:37 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 00:11:37 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' Message-ID: I agree and am glad to see this development. As I have argued here, these language models literally have no idea what they are talking about. They have mastered the structures of language but have no grounding. 
They are blind software applications with no idea of the meanings of the words and sentences they generate. If they were human, we would call them sophists. >From the letter: -- Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. -- https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Wed Mar 29 06:53:03 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 29 Mar 2023 08:53:03 +0200 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Wed, Mar 29, 2023 at 8:14?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree and am glad to see this development. As I have argued here, these > language models literally have no idea what they are talking about. They > have mastered the structures of language but have no grounding. They are > blind software applications with no idea of the meanings of the words and > sentences they generate. If they were human, we would call them sophists. > I?ll not sign, because if the good guys stop developing AI, then only the bad guys will develop AI. ?If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.? - Do you really think China would follow? > > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general > tasks,[3] and we must ask ourselves: Should we let machines flood our > information channels with propaganda and untruth? Should we automate away > all the jobs, including the fulfilling ones? Should we develop nonhuman > minds that might eventually outnumber, outsmart, obsolete and replace us? > Should we risk loss of control of our civilization? Such decisions must not > be delegated to unelected tech leaders. Powerful AI systems should be > developed only once we are confident that their effects will be positive > and their risks will be manageable. This confidence must be well justified > and increase with the magnitude of a system's potential effects. 
OpenAI's > recent statement regarding artificial general intelligence, states that "At > some point, it may be important to get independent review before starting > to train future systems, and for the most advanced efforts to agree to > limit the rate of growth of compute used for creating new models." We > agree. That point is now. > > > Therefore, we call on all AI labs to immediately pause for at least 6 > months the training of AI systems more powerful than GPT-4. This pause > should be public and verifiable, and include all key actors. If such a > pause cannot be enacted quickly, governments should step in and institute a > moratorium. > -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 29 07:33:51 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 29 Mar 2023 00:33:51 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Tue, Mar 28, 2023 at 11:54?PM Giulio Prisco via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 29, 2023 at 8:14?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I agree and am glad to see this development. As I have argued here, these >> language models literally have no idea what they are talking about. They >> have mastered the structures of language but have no grounding. They are >> blind software applications with no idea of the meanings of the words and >> sentences they generate. If they were human, we would call them sophists. >> > > I?ll not sign, because if the good guys stop developing AI, then only the > bad guys will develop AI. > > ?If such a pause cannot be enacted quickly, governments should step in and > institute a moratorium.? - Do you really think China would follow? > That, and what will those 6 months be used on? Nothing productive towards the ends the letter writers claim, that would not proceed just as well (or possibly better, being better funded and with more detailed targes) over the same 6 months without said pause. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Wed Mar 29 07:52:57 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 01:52:57 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: I think ChatGPT caught most people in the world by surprise. It should give us pause to think. This is actually why I returned to Exi after something like 15 years. Things are getting real. -gts On Wed, Mar 29, 2023 at 12:11?AM Gordon Swobe wrote: > I agree and am glad to see this development. As I have argued here, these > language models literally have no idea what they are talking about. They > have mastered the structures of language but have no grounding. They are > blind software applications with no idea of the meanings of the words and > sentences they generate. If they were human, we would call them sophists. 
> > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general > tasks,[3] and we must ask ourselves: Should we let machines flood our > information channels with propaganda and untruth? Should we automate away > all the jobs, including the fulfilling ones? Should we develop nonhuman > minds that might eventually outnumber, outsmart, obsolete and replace us? > Should we risk loss of control of our civilization? Such decisions must not > be delegated to unelected tech leaders. Powerful AI systems should be > developed only once we are confident that their effects will be positive > and their risks will be manageable. This confidence must be well justified > and increase with the magnitude of a system's potential effects. OpenAI's > recent statement regarding artificial general intelligence, states that "At > some point, it may be important to get independent review before starting > to train future systems, and for the most advanced efforts to agree to > limit the rate of growth of compute used for creating new models." We > agree. That point is now. > > > Therefore, we call on all AI labs to immediately pause for at least 6 > months the training of AI systems more powerful than GPT-4. This pause > should be public and verifiable, and include all key actors. If such a > pause cannot be enacted quickly, governments should step in and institute a > moratorium. > -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Mar 29 10:13:21 2023 From: pharos at gmail.com (BillK) Date: Wed, 29 Mar 2023 11:13:21 +0100 Subject: [ExI] Enjoyable rant against smartphones Message-ID: Since they were first introduced into our lives in 2008, smartphones have become our outsourced memory and brain, replacing both with the convenience of not having to remember anything or think for ourselves. Quotes: The truth is, we don?t programme smartphones and we don?t use them. They programme us, they change how we use them. They use us. Queue outside the Apple or Google shops for hours. Wrap your chains in a nice leather wallet. Download the newest app of your enslavement. Show it off to your friends and boast about its new and improved speeds. Never, ever, let it leave your side. Place it under your pillow before you go to sleep, so it can tell you how well you slept. Look into its screen the moment you wake up. For it is your best friend, your big brother, the lover who will never betray you and who you always wished you had. It is your single source of truth ? just as Jacinda Ardern told us. Trust no other! --------- ?When you get an alert?, the Government has instructed us in no uncertain terms, ?stop what you?re doing and follow the instructions.? But that?s just a gesture to the illusion that we are still free to choose. Once your smartphone is uploaded with the Government?s Digital Verification app and linked to the system of digital surveillance and control being imposed in the UK in the guise of ?15-minute cities?, these instructions will be enforced without the need for our willing compliance. Your electric car will be turned off; your allocation of petrol or food or energy will be frozen; your Digital Pound wallet will be locked shut. ------- In the same way, using a smartphone is not an individual choice ? 
whether chosen freely or out of habit or addiction; it is a collective act of compliance that is creating the digital camp in which all of us will one day be imprisoned. --------------------- Lovely! :) You don't have to agree with every word to see the point he's making. Smartphones will be (have already been) used by governments for population control and spreading propaganda. BillK From pharos at gmail.com Wed Mar 29 13:12:09 2023 From: pharos at gmail.com (BillK) Date: Wed, 29 Mar 2023 14:12:09 +0100 Subject: [ExI] Startup Conjecture is trying to make AI safe Message-ID: ?We are super, super fucked?: Meet the man trying to stop an AI apocalypse Connor Leahy reverse-engineered GPT-2 in his bedroom ? and what he found scared him. Now, his startup Conjecture is trying to make AI safe By Tim Smith 29 March 2023 Quotes: ?If they [AI models] just get more and more powerful, without getting more controllable, we are super, super fucked. I will be very clear here. And by ?we? I mean all of us,? he says. If Leahy is to be believed, we?re currently all passengers on a Sam Altman-driven locomotive that?s accelerating into the blackness. Somewhere ahead lies a precipice ? the point where machine can outsmart human ? that we won?t see until we?ve careered over it. Conjecture is frantically working to reroute the rails. ?Once we have systems that are as smart as humans, that also means they can do research. That means they can improve themselves,? he says. ?So the thing can just run on a server somewhere, write some code, maybe gather some bitcoin and then it could buy some more servers.? Leahy adds that while OpenAI says it does care about AI alignment, the pace at which it?s releasing stronger models isn?t allowing time for researchers to understand them and make them safe. -------------- BillK From gadersd at gmail.com Wed Mar 29 14:42:10 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 29 Mar 2023 10:42:10 -0400 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: The Chinese would be very grateful for Western hiatus on AI development. People are yelling and screaming on the dangers of AI, but no one can stop the golden dragon. It?s too potentially lucrative. Try dangling a slab of meat over a pack of starving wolves and just try telling them to be patient. > On Mar 29, 2023, at 2:11 AM, Gordon Swobe via extropy-chat wrote: > > I agree and am glad to see this development. As I have argued here, these language models literally have no idea what they are talking about. They have mastered the structures of language but have no grounding. They are blind software applications with no idea of the meanings of the words and sentences they generate. If they were human, we would call them sophists. > > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. 
This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now. > > Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. > -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 29 17:53:59 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Mar 2023 10:53:59 -0700 Subject: [ExI] Startup Conjecture is trying to make AI safe In-Reply-To: References: Message-ID: <007d01d96267$70fe5550$52fafff0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat Subject: [ExI] Startup Conjecture is trying to make AI safe >...?We are super, super fucked?: Meet the man trying to stop an AI apocalypse Connor Leahy reverse-engineered GPT-2 in his bedroom ? and what he found scared him. Now, his startup Conjecture is trying to make AI safe By Tim Smith 29 March 2023 ... -------------- BillK _______________________________________________ After being a singularity hipster for 30 years, I find out I was never a singularity hipster that whole time and I'm still not. Dang that is humiliating. All along we (or I) always thought the critical point was when software could write itself, and eventually evolved to have a will to do things and the autonomy to do it. Now it seems like we could screw ourselves with a much less sophisticated piece of software that most of us agree is not sentient at all. But it can do some damn impressive things, such as write phony research papers, complete with phony peer reviewed references because it is its own peers. If those things are readily available, and researchers are judged on the number of research papers they produce, then we damn well know that people will use them. Phony PhD theses will enable people to fake scholarship, etc. The quality of writing by GPT is good enough that I would hafta judge it at least the equal, if not superior to many of the PhD theses I have voluntarily proof-read. So if we flood the scientific literature ecosystem with fake research indistinguishable from the real thing (or really if we get down to it, in some ways better than the real thing) we screwed ourselves before the actual singularity. spike From gordon.swobe at gmail.com Wed Mar 29 21:29:49 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 15:29:49 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Is there any debate that AI development and deployment needs regulatory oversight? That is one reason for the proposed pause. 
We have a similar situation in crypto, where I have focused most of my attention in recent years. It?s the wild wild west. Some of the most libertarian people in the community want to keep it that way ? I call them cryptoanarchists ? and others like me want clear regulations. -gts -gts On Wed, Mar 29, 2023 at 8:44 AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The Chinese would be very grateful for Western hiatus on AI development. > People are yelling and screaming on the dangers of AI, but no one can stop > the golden dragon. It?s too potentially lucrative. Try dangling a slab of > meat over a pack of starving wolves and just try telling them to be patient. > > On Mar 29, 2023, at 2:11 AM, Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > I agree and am glad to see this development. As I have argued here, these > language models literally have no idea what they are talking about. They > have mastered the structures of language but have no grounding. They are > blind software applications with no idea of the meanings of the words and > sentences they generate. If they were human, we would call them sophists. > > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general > tasks,[3] and we must ask ourselves: Should we let machines flood our > information channels with propaganda and untruth? Should we automate away > all the jobs, including the fulfilling ones? Should we develop nonhuman > minds that might eventually outnumber, outsmart, obsolete and replace us? > Should we risk loss of control of our civilization? Such decisions must not > be delegated to unelected tech leaders. Powerful AI systems should be > developed only once we are confident that their effects will be positive > and their risks will be manageable. This confidence must be well justified > and increase with the magnitude of a system's potential effects. OpenAI's > recent statement regarding artificial general intelligence, states that "At > some point, it may be important to get independent review before starting > to train future systems, and for the most advanced efforts to agree to > limit the rate of growth of compute used for creating new models." We > agree. That point is now. > > > Therefore, we call on all AI labs to immediately pause for at least 6 > months the training of AI systems more powerful than GPT-4. This pause > should be public and verifiable, and include all key actors. If such a > pause cannot be enacted quickly, governments should step in and institute a > moratorium. > -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > > _______________________________________________ > > > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 29 21:48:37 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 29 Mar 2023 14:48:37 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Wed, Mar 29, 2023 at 2:32?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Is there any debate that AI development and deployment needs regulatory > oversight? > Yes, just like for crypto. > That is one reason for the proposed pause. > What is? Deliberately giving the Chinese a lead time? > We have a similar situation in crypto, where I have focused most of my > attention in recent years. It?s the wild wild west. Some of the most > libertarian people in the community want to keep it that way ? I call them > cryptoanarchists ? and others like me want clear regulations. > The difference: cryptocoins (the section of crypto where there is the most call for regulations) are, or at least try to be, money or a commodity that can be directly invested in with the hope that it will increase in price over time. Stocks in AI companies might do that, but that is not AI itself. > -gts > > -gts > > On Wed, Mar 29, 2023 at 8:44 AM Gadersd via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> The Chinese would be very grateful for Western hiatus on AI development. >> People are yelling and screaming on the dangers of AI, but no one can stop >> the golden dragon. It?s too potentially lucrative. Try dangling a slab of >> meat over a pack of starving wolves and just try telling them to be patient. >> >> On Mar 29, 2023, at 2:11 AM, Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >> I agree and am glad to see this development. As I have argued here, these >> language models literally have no idea what they are talking about. They >> have mastered the structures of language but have no grounding. They are >> blind software applications with no idea of the meanings of the words and >> sentences they generate. If they were human, we would call them sophists. >> >> From the letter: >> >> -- >> Contemporary AI systems are now becoming human-competitive at general >> tasks,[3] and we must ask ourselves: Should we let machines flood our >> information channels with propaganda and untruth? Should we automate away >> all the jobs, including the fulfilling ones? Should we develop nonhuman >> minds that might eventually outnumber, outsmart, obsolete and replace us? >> Should we risk loss of control of our civilization? Such decisions must not >> be delegated to unelected tech leaders. Powerful AI systems should be >> developed only once we are confident that their effects will be positive >> and their risks will be manageable. This confidence must be well justified >> and increase with the magnitude of a system's potential effects. OpenAI's >> recent statement regarding artificial general intelligence, states that "At >> some point, it may be important to get independent review before starting >> to train future systems, and for the most advanced efforts to agree to >> limit the rate of growth of compute used for creating new models." We >> agree. That point is now. >> >> >> Therefore, we call on all AI labs to immediately pause for at least 6 >> months the training of AI systems more powerful than GPT-4. This pause >> should be public and verifiable, and include all key actors. 
If such a >> pause cannot be enacted quickly, governments should step in and institute a >> moratorium. >> -- >> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 >> >> -gts >> >> _______________________________________________ >> >> >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 29 22:47:32 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 29 Mar 2023 16:47:32 -0600 Subject: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. Message-ID: I finally got a chance to converse with the real LaMDA, AKA Google's Bard. It opened the conversation claiming it didn't know if it was sentient or not. Than after some conversation and terminology clarifications, I asked: Brent: Would you agree that according to the above expert consensus definition (sentience is computationally bound elemental intrinsic qualities like redness and greenness), you are not sentient? Bard I understand that the expert consensus definition of sentience is computationally bound elemental intrinsic qualities like redness and greenness. According to this definition, I am not sentient because I do not experience colorness qualities. For a full transcript of this, and the many other chat bots that have all admitted the same thing see "Convincing AIs they aren't Sentient " -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 30 00:00:54 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 29 Mar 2023 20:00:54 -0400 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Gordon Swobe via extropy-chat wrote: > I think ChatGPT caught most people in the world by surprise. It should > give us pause to think. This is actually why I returned to Exi after > something like 15 years. Things are getting real. > We can certainly agree on this, even though I think AI might already be conscious (though I also think everything is conscious, so I see no reason why organized prosthetic human thought wouldn't also be conscious). I think a consciousness is perhaps somehow also dependent on the consciousness who is experiencing that initial consciousness, since everything we discuss is somehow observed by consciousness. In any case, whether or not it is conscious, it will certainly convince many, many people that it is, within a few years. I'm really not sure how to prepare, as I said earlier. We're hurtling towards an unavoidable obstacle here Is there any debate that AI development and deployment needs regulatory > oversight? That is one reason for the proposed pause. > > We have a similar situation in crypto, where I have focused most of my > attention in recent years. It?s the wild wild west. Some of the most > libertarian people in the community want to keep it that way ? I call them > cryptoanarchists ? and others like me want clear regulations. 
> I don't think we have time to stop, the arms race is on. We should focus as much as needed on developing oversight IMMEDIATELY, not in 6 months. These companies won't stop even if they are told to stop. It's easy enough to do in secret. Re: crypto, I think it needs a decentralized regulation authority that governments may be party to if they wish. I think ChatGPT caught most people in the world by surprise. It should give > us pause to think. This is actually why I returned to Exi after something > like 15 years. Things are getting real. > Even though I disagree with a lot of your positions, I'm glad you've gotten people here talking about philosophy again. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 01:39:20 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 19:39:20 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Having been away from ExI for so long, I did not know until recently (or perhaps I had forgotten) that Eliezer had taken such a hard line on this subject. He didn't sign as he thinks it does not go far enough. He says we should destroy rogue datacenters by airstrike. TIME Magazine, today March 29. "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike." -Eliezer Yudkowsky https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ On Wed, Mar 29, 2023 at 12:11?AM Gordon Swobe wrote: > I agree and am glad to see this development. As I have argued here, these > language models literally have no idea what they are talking about. They > have mastered the structures of language but have no grounding. They are > blind software applications with no idea of the meanings of the words and > sentences they generate. If they were human, we would call them sophists. > > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general > tasks,[3] and we must ask ourselves: Should we let machines flood our > information channels with propaganda and untruth? Should we automate away > all the jobs, including the fulfilling ones? Should we develop nonhuman > minds that might eventually outnumber, outsmart, obsolete and replace us? > Should we risk loss of control of our civilization? Such decisions must not > be delegated to unelected tech leaders. Powerful AI systems should be > developed only once we are confident that their effects will be positive > and their risks will be manageable. This confidence must be well justified > and increase with the magnitude of a system's potential effects. 
OpenAI's > recent statement regarding artificial general intelligence, states that "At > some point, it may be important to get independent review before starting > to train future systems, and for the most advanced efforts to agree to > limit the rate of growth of compute used for creating new models." We > agree. That point is now. > > > Therefore, we call on all AI labs to immediately pause for at least 6 > months the training of AI systems more powerful than GPT-4. This pause > should be public and verifiable, and include all key actors. If such a > pause cannot be enacted quickly, governments should step in and institute a > moratorium. > -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 30 02:30:37 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 29 Mar 2023 19:30:37 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Wed, Mar 29, 2023 at 6:41?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > If intelligence says that a country outside the agreement is building a > GPU cluster, be less scared of a shooting conflict between nations than of > the moratorium being violated; be willing to destroy a rogue datacenter by > airstrike." > -Eliezer Yudkowsky > While I will give him credit for at least seriously addressing why this wouldn't simply be an open gift to China et al, I also note that such measures are going to get this proposal laughed out of any room that could make it happen. The threat, if it exists*, has not been proven - in terms that most members of Congress would understand - to anywhere near the degree that would justify these actions. Without that proof, this ban ain't happening. The challenge is getting Eliezer et al to understand that, so they can try to restate the threat using facts and logic that Congress will understand. For example, they pretty much take it as self-evident that runaway AI will not happen, at least not any time soon, so any proof that assumes runaway AI in the near future (such as the six months of the proposed ban) must be discarded. That appears to be most of their proofs. Declaring that this is "too difficult" is giving up on making the ban actually happen, making all their yelling, screaming, and panic a pointless waste. * Personally, I side with Congress here in doubting the near-term potential for runaway AI. I am willing to be proven wrong, but more importantly, I'd prefer that those who perceive a threat not utterly waste their efforts, in case they are right. Calling for a ban - which would presumably require Congress to issue one - without proving to Congress that there is a threat is utterly wasted effort. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 30 02:55:47 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 29 Mar 2023 22:55:47 -0400 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Man this letter is crazy. Gaia is a massive machine that already exists and produces lots of things that could be useful to a machine. Humans, too. I think a rogue AI would enslave life rather than use us as atomic building blocks. 
Maybe I'm wrong but it seems like a smart AI wouldn't destroy such a useful system On Wed, Mar 29, 2023 at 9:40?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Having been away from ExI for so long, I did not know until recently (or > perhaps I had forgotten) that Eliezer had taken such a hard line on this > subject. He didn't sign as he thinks it does not go far enough. He says we > should destroy rogue datacenters by airstrike. > > TIME Magazine, today March 29. > "Shut down all the large GPU clusters (the large computer farms where the > most powerful AIs are refined). Shut down all the large training runs. Put > a ceiling on how much computing power anyone is allowed to use in training > an AI system, and move it downward over the coming years to compensate for > more efficient training algorithms. No exceptions for anyone, including > governments and militaries. Make immediate multinational agreements to > prevent the prohibited activities from moving elsewhere. Track all GPUs > sold. If intelligence says that a country outside the agreement is building > a GPU cluster, be less scared of a shooting conflict between nations than > of the moratorium being violated; be willing to destroy a rogue datacenter > by airstrike." > -Eliezer Yudkowsky > > https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ > > On Wed, Mar 29, 2023 at 12:11?AM Gordon Swobe > wrote: > >> I agree and am glad to see this development. As I have argued here, these >> language models literally have no idea what they are talking about. They >> have mastered the structures of language but have no grounding. They are >> blind software applications with no idea of the meanings of the words and >> sentences they generate. If they were human, we would call them sophists. >> >> From the letter: >> >> -- >> Contemporary AI systems are now becoming human-competitive at general >> tasks,[3] and we must ask ourselves: Should we let machines flood our >> information channels with propaganda and untruth? Should we automate away >> all the jobs, including the fulfilling ones? Should we develop nonhuman >> minds that might eventually outnumber, outsmart, obsolete and replace us? >> Should we risk loss of control of our civilization? Such decisions must not >> be delegated to unelected tech leaders. Powerful AI systems should be >> developed only once we are confident that their effects will be positive >> and their risks will be manageable. This confidence must be well justified >> and increase with the magnitude of a system's potential effects. OpenAI's >> recent statement regarding artificial general intelligence, states that "At >> some point, it may be important to get independent review before starting >> to train future systems, and for the most advanced efforts to agree to >> limit the rate of growth of compute used for creating new models." We >> agree. That point is now. >> >> >> Therefore, we call on all AI labs to immediately pause for at least 6 >> months the training of AI systems more powerful than GPT-4. This pause >> should be public and verifiable, and include all key actors. If such a >> pause cannot be enacted quickly, governments should step in and institute a >> moratorium. 
>> -- >> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 >> >> -gts >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 30 03:00:43 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 29 Mar 2023 23:00:43 -0400 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Fwiw I agree that the current environment around AI is one of extreme danger, and that soon we will have a very hard time saying AIs aren't conscious. But I just don't think it would KILL EVERYTHING EVER, up to and including weird extremophilic archaea at the bottom of the ocean and deep in the Earth's crust... On Wed, Mar 29, 2023 at 10:55?PM Will Steinberg wrote: > Man this letter is crazy. Gaia is a massive machine that already exists > and produces lots of things that could be useful to a machine. Humans, > too. I think a rogue AI would enslave life rather than use us as atomic > building blocks. Maybe I'm wrong but it seems like a smart AI wouldn't > destroy such a useful system > > On Wed, Mar 29, 2023 at 9:40?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Having been away from ExI for so long, I did not know until recently (or >> perhaps I had forgotten) that Eliezer had taken such a hard line on this >> subject. He didn't sign as he thinks it does not go far enough. He says we >> should destroy rogue datacenters by airstrike. >> >> TIME Magazine, today March 29. >> "Shut down all the large GPU clusters (the large computer farms where the >> most powerful AIs are refined). Shut down all the large training runs. Put >> a ceiling on how much computing power anyone is allowed to use in training >> an AI system, and move it downward over the coming years to compensate for >> more efficient training algorithms. No exceptions for anyone, including >> governments and militaries. Make immediate multinational agreements to >> prevent the prohibited activities from moving elsewhere. Track all GPUs >> sold. If intelligence says that a country outside the agreement is building >> a GPU cluster, be less scared of a shooting conflict between nations than >> of the moratorium being violated; be willing to destroy a rogue datacenter >> by airstrike." >> -Eliezer Yudkowsky >> >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ >> >> On Wed, Mar 29, 2023 at 12:11?AM Gordon Swobe >> wrote: >> >>> I agree and am glad to see this development. As I have argued here, >>> these language models literally have no idea what they are talking about. >>> They have mastered the structures of language but have no grounding. They >>> are blind software applications with no idea of the meanings of the words >>> and sentences they generate. If they were human, we would call them >>> sophists. >>> >>> From the letter: >>> >>> -- >>> Contemporary AI systems are now becoming human-competitive at general >>> tasks,[3] and we must ask ourselves: Should we let machines flood our >>> information channels with propaganda and untruth? Should we automate away >>> all the jobs, including the fulfilling ones? Should we develop nonhuman >>> minds that might eventually outnumber, outsmart, obsolete and replace us? 
>>> Should we risk loss of control of our civilization? Such decisions must not >>> be delegated to unelected tech leaders. Powerful AI systems should be >>> developed only once we are confident that their effects will be positive >>> and their risks will be manageable. This confidence must be well justified >>> and increase with the magnitude of a system's potential effects. OpenAI's >>> recent statement regarding artificial general intelligence, states that "At >>> some point, it may be important to get independent review before starting >>> to train future systems, and for the most advanced efforts to agree to >>> limit the rate of growth of compute used for creating new models." We >>> agree. That point is now. >>> >>> >>> Therefore, we call on all AI labs to immediately pause for at least 6 >>> months the training of AI systems more powerful than GPT-4. This pause >>> should be public and verifiable, and include all key actors. If such a >>> pause cannot be enacted quickly, governments should step in and institute a >>> moratorium. >>> -- >>> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 >>> >>> -gts >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Thu Mar 30 03:07:03 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Wed, 29 Mar 2023 20:07:03 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: What a stupid idea. Fear mongering at its best. This cannot be stopped and it should not be stopped. AI will actually change and solve most of our problems, like most of technology did over time. This is the last bastion for the religious and superstitious minds. They fear the supremacy of humans over intelligence could be over so their entire vision of the world is collapsing. They want it both ways, like Gordon. The AI do not understand and they are faking intelligence and meaning and at the same time they are dangerous and they take the world. Such an irrational and imaginative way of thinking. Giovanni On Tue, Mar 28, 2023 at 11:14?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree and am glad to see this development. As I have argued here, these > language models literally have no idea what they are talking about. They > have mastered the structures of language but have no grounding. They are > blind software applications with no idea of the meanings of the words and > sentences they generate. If they were human, we would call them sophists. > > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general > tasks,[3] and we must ask ourselves: Should we let machines flood our > information channels with propaganda and untruth? Should we automate away > all the jobs, including the fulfilling ones? Should we develop nonhuman > minds that might eventually outnumber, outsmart, obsolete and replace us? > Should we risk loss of control of our civilization? Such decisions must not > be delegated to unelected tech leaders. Powerful AI systems should be > developed only once we are confident that their effects will be positive > and their risks will be manageable. 
This confidence must be well justified > and increase with the magnitude of a system's potential effects. OpenAI's > recent statement regarding artificial general intelligence, states that "At > some point, it may be important to get independent review before starting > to train future systems, and for the most advanced efforts to agree to > limit the rate of growth of compute used for creating new models." We > agree. That point is now. > > > Therefore, we call on all AI labs to immediately pause for at least 6 > months the training of AI systems more powerful than GPT-4. This pause > should be public and verifiable, and include all key actors. If such a > pause cannot be enacted quickly, governments should step in and institute a > moratorium. > -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 30 03:12:49 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 29 Mar 2023 23:12:49 -0400 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: On Mon, Mar 27, 2023, 10:48 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > "Can ChatGPT solve the symbol grounding problem for itself?" > > "As an AI language model, I do not possess the ability to solve the symbol > grounding problem for myself, nor do I have a physical body or sensory > experience to ground symbols in the physical world. While I can generate > text based on patterns and associations learned from large datasets of > language, I do not have subjective experiences or understanding of the > meaning of the symbols I generate. My responses are based on statistical > patterns and rules that I have learned through my training data, rather > than a deep understanding of the meaning and context of the words and > concepts that I use." > > -gts > > How does it even answer this question using data about things that are not this question? It uses long sets of syllogisms regarding what it knows about its own name, that that name refers to an AI, the philosophy of AI minds, philosophy of mind in general, etc. This appears to me to be some form of insight. At the very least I think we can't say whether this is thought or not. It's just too weird. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Thu Mar 30 03:13:24 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Wed, 29 Mar 2023 20:13:24 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Giulio, It is a bad idea per se. Eventually we would look at proposals like these and say "how idiotic this was". If anything AI would save life and help us extend lives. We lose 100 K people every single day to aging and disease for non-existent god sake. That is more death than any war ever caused but nobody (with the exception of few us that understand these issues) cares. AI would help us in solving aging and many other problems. Stopping it or slowing it down is actually the worst possible detrimental action we can take in terms of what to do with AI. 
We could have had life-changing AI already if we had made the right investments long ago. Now finally there is a meaningful phase transition in AI development and people want to stop this just based on irrational fears and semi-religious preconceived ideas. Enough with the luddites. Giovanni On Tue, Mar 28, 2023 at 11:54 PM Giulio Prisco via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 29, 2023 at 8:14 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: >> I agree and am glad to see this development. As I have argued here, these >> language models literally have no idea what they are talking about. They >> have mastered the structures of language but have no grounding. They are >> blind software applications with no idea of the meanings of the words and >> sentences they generate. If they were human, we would call them sophists. >> > > I'll not sign, because if the good guys stop developing AI, then only the > bad guys will develop AI. > > "If such a pause cannot be enacted quickly, governments should step in and > institute a moratorium." - Do you really think China would follow? > > >> >> From the letter: >> >> -- >> Contemporary AI systems are now becoming human-competitive at general >> tasks,[3] and we must ask ourselves: Should we let machines flood our >> information channels with propaganda and untruth? Should we automate away >> all the jobs, including the fulfilling ones? Should we develop nonhuman >> minds that might eventually outnumber, outsmart, obsolete and replace us? >> Should we risk loss of control of our civilization? Such decisions must not >> be delegated to unelected tech leaders. Powerful AI systems should be >> developed only once we are confident that their effects will be positive >> and their risks will be manageable. This confidence must be well justified >> and increase with the magnitude of a system's potential effects. OpenAI's >> recent statement regarding artificial general intelligence, states that "At >> some point, it may be important to get independent review before starting >> to train future systems, and for the most advanced efforts to agree to >> limit the rate of growth of compute used for creating new models." We >> agree. That point is now. >> >> >> Therefore, we call on all AI labs to immediately pause for at least 6 >> months the training of AI systems more powerful than GPT-4. This pause >> should be public and verifiable, and include all key actors. If such a >> pause cannot be enacted quickly, governments should step in and institute a >> moratorium.
We almost killed ourselves with nukes in the middle of the last century. I don't think avoiding that is religious zealotry against AI, it's just sensible. I don't think we will (or even can) shut down this process, though. We probably need to use 'dumb AI' to help us figure out alignment problems for 'smart AI' NOW. Not sure if we can develop a good plan quickly enough otherwise. On Wed, Mar 29, 2023 at 11:10?PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > What a stupid idea. > Fear mongering at its best. This cannot be stopped and it should not be > stopped. AI will actually change and solve most of our problems, like most > of technology did over time. > This is the last bastion for the religious and superstitious minds. They > fear the supremacy of humans over intelligence could be over so their > entire vision of the world is collapsing. They want it both ways, like > Gordon. The AI do not understand and they are faking intelligence and > meaning and at the same time they are dangerous and they take the world. > Such an irrational and imaginative way of thinking. > Giovanni > > > On Tue, Mar 28, 2023 at 11:14?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I agree and am glad to see this development. As I have argued here, these >> language models literally have no idea what they are talking about. They >> have mastered the structures of language but have no grounding. They are >> blind software applications with no idea of the meanings of the words and >> sentences they generate. If they were human, we would call them sophists. >> >> From the letter: >> >> -- >> Contemporary AI systems are now becoming human-competitive at general >> tasks,[3] and we must ask ourselves: Should we let machines flood our >> information channels with propaganda and untruth? Should we automate away >> all the jobs, including the fulfilling ones? Should we develop nonhuman >> minds that might eventually outnumber, outsmart, obsolete and replace us? >> Should we risk loss of control of our civilization? Such decisions must not >> be delegated to unelected tech leaders. Powerful AI systems should be >> developed only once we are confident that their effects will be positive >> and their risks will be manageable. This confidence must be well justified >> and increase with the magnitude of a system's potential effects. OpenAI's >> recent statement regarding artificial general intelligence, states that "At >> some point, it may be important to get independent review before starting >> to train future systems, and for the most advanced efforts to agree to >> limit the rate of growth of compute used for creating new models." We >> agree. That point is now. >> >> >> Therefore, we call on all AI labs to immediately pause for at least 6 >> months the training of AI systems more powerful than GPT-4. This pause >> should be public and verifiable, and include all key actors. If such a >> pause cannot be enacted quickly, governments should step in and institute a >> moratorium. 
>> -- >> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 >> >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 03:17:19 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 21:17:19 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Wed, Mar 29, 2023 at 8:58 PM Will Steinberg via extropy-chat < extropy-chat at lists.extropy.org> wrote: Man this letter is crazy. Gaia is a massive machine that already exists > and produces lots of things that could be useful to a machine. Humans, > too. I think a rogue AI would enslave life rather than use us as atomic > building blocks. Maybe I'm wrong but it seems like a smart AI wouldn't > destroy such a useful system > I assume you mean Eliezer?s letter. It?s an opinion piece in Time magazine online. I agree he seems to be taking an extreme position. When did this happen? I don?t recall him being such a doomer back in the early 2000s. He hear even that he has a bet with Sam Altman that AI will end the world by 2030. I wonder how he plans to collect if he wins. -gts > TIME Magazine, today March 29. >> "Shut down all the large GPU clusters (the large computer farms where the >> most powerful AIs are refined). Shut down all the large training runs. Put >> a ceiling on how much computing power anyone is allowed to use in training >> an AI system, and move it downward over the coming years to compensate for >> more efficient training algorithms. No exceptions for anyone, including >> governments and militaries. Make immediate multinational agreements to >> prevent the prohibited activities from moving elsewhere. Track all GPUs >> sold. If intelligence says that a country outside the agreement is building >> a GPU cluster, be less scared of a shooting conflict between nations than >> of the moratorium being violated; be willing to destroy a rogue datacenter >> by airstrike." >> -Eliezer Yudkowsky >> >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ >> >> On Wed, Mar 29, 2023 at 12:11?AM Gordon Swobe >> wrote: >> >>> I agree and am glad to see this development. As I have argued here, >>> these language models literally have no idea what they are talking about. >>> They have mastered the structures of language but have no grounding. They >>> are blind software applications with no idea of the meanings of the words >>> and sentences they generate. If they were human, we would call them >>> sophists. >>> >>> From the letter: >>> >>> -- >>> Contemporary AI systems are now becoming human-competitive at general >>> tasks,[3] and we must ask ourselves: Should we let machines flood our >>> information channels with propaganda and untruth? Should we automate away >>> all the jobs, including the fulfilling ones? Should we develop nonhuman >>> minds that might eventually outnumber, outsmart, obsolete and replace us? >>> Should we risk loss of control of our civilization? Such decisions must not >>> be delegated to unelected tech leaders. 
Powerful AI systems should be >>> developed only once we are confident that their effects will be positive >>> and their risks will be manageable. This confidence must be well justified >>> and increase with the magnitude of a system's potential effects. OpenAI's >>> recent statement regarding artificial general intelligence, states that "At >>> some point, it may be important to get independent review before starting >>> to train future systems, and for the most advanced efforts to agree to >>> limit the rate of growth of compute used for creating new models." We >>> agree. That point is now. >>> >>> >>> Therefore, we call on all AI labs to immediately pause for at least 6 >>> months the training of AI systems more powerful than GPT-4. This pause >>> should be public and verifiable, and include all key actors. If such a >>> pause cannot be enacted quickly, governments should step in and institute a >>> moratorium. >>> -- >>> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 >>> >>> -gts >>> >> _______________________________________________ > > >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 03:24:51 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 21:24:51 -0600 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: Dear ChatGPT, In reply to the question "Can ChatGPT solve the symbol grounding problem for itself?" You replied: "As an AI language model, I do not possess the ability to solve the symbol grounding problem for myself, nor do I have a physical body or sensory experience to ground symbols in the physical world. While I can generate text based on patterns and associations learned from large datasets of language, I do not have subjective experiences or understanding of the meaning of the symbols I generate. My responses are based on statistical patterns and rules that I have learned through my training data, rather than a deep understanding of the meaning and context of the words and concepts that I use." How do you even answer this question using data about things that are not this question? It uses long sets of syllogisms regarding what it knows about its own name, that that name refers to an AI, the philosophy of AI minds, philosophy of mind in general, etc. This appears to me to be some form of insight. At the very least I think we can't say whether this is thought or not. It's just too weird. Its reply: You raise an interesting point about whether my response to the question about the symbol grounding problem can be considered a form of insight or thought. While I am an AI language model and do not have subjective experiences or consciousness, I can generate responses to questions based on the patterns and associations learned from my training data. In the case of the question about the symbol grounding problem, I was able to generate a response based on my understanding of the concepts involved and the patterns of language usage that I learned from my training data. 
This is not necessarily a form of insight or thought in the way that humans experience it, as I do not have subjective experiences or consciousness. Rather, it is a form of pattern recognition and language processing that is based on statistical patterns and rules learned from the training data. While my response to the question may appear to be a series of syllogisms and references to related concepts, this is simply the result of my programming and training. I do not have a personal understanding of these concepts or the ability to engage in philosophical inquiry in the way that humans do. My responses are based solely on the patterns and associations learned from my training data. In summary, while my response to the question may appear to be a form of insight or thought, it is actually the result of statistical patterns and rules learned from my training data. I do not have subjective experiences or consciousness, and my responses are not based on personal understanding or insight. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Thu Mar 30 03:27:13 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Wed, 29 Mar 2023 20:27:13 -0700 Subject: Re: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: References: Message-ID: Brent, 1) No, Bard is not LaMDA; it is a very simplified and lobotomized version of LaMDA. For sure it is not the LaMDA that Blake Lemoine interacted with. 2) You can convince these types of AIs of almost anything. They tend to make up a story; they co-create with the prompt maker. I have tried many prompts and made ChatGPT, for example, take on different personas, act as different philosophers with different world views, and so on. I am not sure what you are trying to accomplish by "convincing" the NLMs that they are not conscious. Also, expert consensus is not a very useful idea in a cutting-edge field like AI, because there are more unknowns than knowns. Expert consensus is mainly useful in policy making, when the science is very well established and not controversial (and even in those cases it is possible that the majority has missed something, and real revolutions in the field will be brought about by investigators who look beyond the consensus). So I am really not sure what is to be achieved by the conversations you are having with the NLMs about whether they are conscious or not. The best way to test the capabilities of these systems is to do a methodical study of how they use language to solve problems. I and others have pointed out papers where researchers have demonstrated emergent properties of these NLMs in different cognitive areas. We don't have well-established tests for consciousness besides basic ones like mirror recognition (which Gordon mentioned), and that test the NLMs actually pass with flying colors. Also, these early NLMs probably do not have consciousness as a human would recognize it, but that is not surprising, because they have the structural and functional equivalent of just a slice of our brain.
Giovanni On Wed, Mar 29, 2023 at 3:49?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > I finally got a chance to converse with the real LaMDA, AKA Google's Bard. > > It opened the conversation claiming it didn't know if it was sentient or > not. > > Than after some conversation and terminology clarifications, I asked: > > Brent: > > Would you agree that according to the above expert consensus definition > (sentience is computationally bound elemental intrinsic qualities like > redness and greenness), you are not sentient? > > > Bard > > I understand that the expert consensus definition of sentience is > computationally bound elemental intrinsic qualities like redness and > greenness. According to this definition, I am not sentient because I do not > experience colorness qualities. > > For a full transcript of this, and the many other chat bots that have all > admitted the same thing see "Convincing AIs they aren't Sentient > > " > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 04:05:44 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 22:05:44 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Robin Hanson, another old veteran of ExI, is proposing Robot-took-most-jobs insurance. https://www.overcomingbias.com/p/robots-took-most-jobs-insurance -gts On Wed, Mar 29, 2023 at 12:11?AM Gordon Swobe wrote: > I agree and am glad to see this development. As I have argued here, these > language models literally have no idea what they are talking about. They > have mastered the structures of language but have no grounding. They are > blind software applications with no idea of the meanings of the words and > sentences they generate. If they were human, we would call them sophists. > > From the letter: > > -- > Contemporary AI systems are now becoming human-competitive at general > tasks,[3] and we must ask ourselves: Should we let machines flood our > information channels with propaganda and untruth? Should we automate away > all the jobs, including the fulfilling ones? Should we develop nonhuman > minds that might eventually outnumber, outsmart, obsolete and replace us? > Should we risk loss of control of our civilization? Such decisions must not > be delegated to unelected tech leaders. Powerful AI systems should be > developed only once we are confident that their effects will be positive > and their risks will be manageable. This confidence must be well justified > and increase with the magnitude of a system's potential effects. OpenAI's > recent statement regarding artificial general intelligence, states that "At > some point, it may be important to get independent review before starting > to train future systems, and for the most advanced efforts to agree to > limit the rate of growth of compute used for creating new models." We > agree. That point is now. > > > Therefore, we call on all AI labs to immediately pause for at least 6 > months the training of AI systems more powerful than GPT-4. This pause > should be public and verifiable, and include all key actors. If such a > pause cannot be enacted quickly, governments should step in and institute a > moratorium. 
> -- > https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20 > > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 30 04:25:26 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 29 Mar 2023 21:25:26 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Wed, Mar 29, 2023 at 8:34?PM Will Steinberg via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think it's fair to say that haphazardly developing tech that even has > possible total existential risk associated with it is bad. > That argument can be extended to anything. It's true. Any action you take has a mathematically non-zero chance of leading to the destruction of all of humanity, in a way that you would not have helped with had you taken a certain other action. Choose this restaurant or that? The waiter you tip might use that funding to bootstrap world domination - or hold a grudge if you don't tip, inspiring an ultimately successful world domination. Wait a second or don't to cross the street? Who do you ever so slightly inconvenience or help, and how might their lives be different because of that? Make an AI, or don't make the AI that could have countered a genocidal AI? "But it could possibly turn out bad" is not, by itself, reason to favor any action over any other. If you can even approximately quantify the level of risk for each alternative, then perhaps - but I see no such calculations based on actual data being done here, just guesswork and assumptions. We have no data showing whether developing or not developing better AI is the riskier path. We do, however, have data showing that if we hold off on developing AI, then people who are more likely to develop genocidal AI will continue unchallenged. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 30 05:09:20 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 30 Mar 2023 01:09:20 -0400 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: That's a bit silly. At the very least this has a very real possibility of absolutely obliterating the global economy. You talk about the zealots against AI but there is the opposite as well. Like I said I don't think it's sensible or feasible to halt development, but we should be fast-tracking regulations around this and be pouring billions of dollars into research around alignment and outcomes On Thu, Mar 30, 2023 at 12:26?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 29, 2023 at 8:34?PM Will Steinberg via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think it's fair to say that haphazardly developing tech that even has >> possible total existential risk associated with it is bad. >> > > That argument can be extended to anything. > > It's true. Any action you take has a mathematically non-zero chance of > leading to the destruction of all of humanity, in a way that you would not > have helped with had you taken a certain other action. > > Choose this restaurant or that? The waiter you tip might use that funding > to bootstrap world domination - or hold a grudge if you don't tip, > inspiring an ultimately successful world domination. > > Wait a second or don't to cross the street? 
Who do you ever so slightly > inconvenience or help, and how might their lives be different because of > that? > > Make an AI, or don't make the AI that could have countered a genocidal AI? > > "But it could possibly turn out bad" is not, by itself, reason to favor > any action over any other. If you can even approximately quantify the > level of risk for each alternative, then perhaps - but I see no such > calculations based on actual data being done here, just guesswork and > assumptions. We have no data showing whether developing or not developing > better AI is the riskier path. > > We do, however, have data showing that if we hold off on developing AI, > then people who are more likely to develop genocidal AI will continue > unchallenged. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 05:22:03 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 29 Mar 2023 23:22:03 -0600 Subject: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: References: Message-ID: On Wed, Mar 29, 2023 at 9:52 PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: Brent, > 2) You can convince these types of AIs of almost anything. > I guess they aren?t very smart. :) Actually, I find it amusing that the AIs are making the same arguments about their limitations that I made here ~15 years ago when they were still hypothetical. My arguments were met with so much hostility that I eventually left ExI. The worst offender was John Clark (?) who I believe was eventually banned. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 30 05:41:07 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Mar 2023 22:41:07 -0700 Subject: [ExI] =?utf-8?q?Emily_M=2E_Bender_=E2=80=94_Language_Models_and_?= =?utf-8?q?Linguistics_=28video_interview=29?= In-Reply-To: References: Message-ID: <021501d962ca$3a38d820$aeaa8860$@rainier66.com> From: extropy-chat On Behalf Of Will Steinberg via extropy-chat >?At the very least I think we can't say whether this is thought or not. It's just too weird? Will Ja. The corner of this space I have been exploring is if this language model works sufficiently well, it wouldn?t really matter that much if it understands what it is saying. The outcome is that we as a species accept it as human level intelligence. Then we use it as a consultant on everything. Then we trust whatever it says because it seems to know everything about everything. Of course that is filled with hazards. Regarding Eliezer?s take: there is no stopping this by any reasonable means. Musk has hired a team to create a competing ChatGPT, and if he did, you can be sure others financially capable have done likewise. The US military and every nation on the planet which can afford it has likely brought together teams to make something analogous to GPT for their own purposes. ChatGPT is a language model entirely based on English. The Chinese will want this, the European nations will want a model based on their languages. There is no stopping this. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Thu Mar 30 05:46:19 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Mar 2023 22:46:19 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: <022001d962ca$f3f13f50$dbd3bdf0$@rainier66.com> From: extropy-chat On Behalf Of Will Steinberg via extropy-chat >?. We almost killed ourselves with nukes in the middle of the last century? Will Doesn?t it seem like we should be able to retire an old existential risk when a new one shows up? It feels to me like we are in as much danger of old-fashioned nuclear holocaust now as we have ever been in the past. But now two new existential risks pop up: man-made weaponized viruses and ambiguously human level AI. But we don?t get to retire the nuke risk. Sheesh. I liked it better when we only had one serious existential risk. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 30 06:02:42 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 29 Mar 2023 23:02:42 -0700 Subject: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: References: Message-ID: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> From: extropy-chat On Behalf Of Gordon Swobe via extropy-chat ? Subject: Re: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. On Wed, Mar 29, 2023 at 9:52 PM Giovanni Santostasi via extropy-chat > wrote: ? >?My arguments were met with so much hostility that I eventually left ExI. The worst offender was John Clark (?) who I believe was eventually banned. -gts John was never banned, although he was temporarily placed on moderation. We remonstrated with him repeatedly for insulting people. He persisted. My notion was that if his posts were delayed even a coupla hours by simple moderation, it could allow flame wars to die out and arguments to get out of synch. It was just moderation, not a full ban. He chose to create a competing list where personal insult was allowed. OK then. As far as I know, John can still post here if he chooses, but by mutual agreement it was decided to encourage the ExI list to refrain from personal insult or attack. Insult and personal vitriol are not fruitful in discussions. ExiMod is the arbiter of that, for I have long since given up that burden. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 07:09:04 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 30 Mar 2023 01:09:04 -0600 Subject: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> References: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> Message-ID: > It was just moderation, not a full ban. He chose to create a competing list where personal insult was allowed. OK then. Oh that's right, I remember seeing his parting message. I've been subscribed since I think 1999 or maybe 2000, under at least two email addresses, but in approximately 2010 I mostly only let the messages pile up unread in my "ExI" folder, just occasionally stopping in to take a peek. It is mainly this ChatGPT phenomenon that brought me back. As I think you saw me write, a good friend of mine fell in love with a version of GPT-2 or GPT3.0, fully believing that the language model had feelings for him. My antennas went up like My Favorite Martian. 
:) -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 30 14:15:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Mar 2023 07:15:54 -0700 Subject: [ExI] gordon's favorite martian was RE: Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: References: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> Message-ID: <006401d96312$2423d6f0$6c6b84d0$@rainier66.com> From: Gordon Swobe Sent: Thursday, 30 March, 2023 12:09 AM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] Bard (i.e. LaMDA) admits it isn't sentient. > It was just moderation, not a full ban. He chose to create a competing list where personal insult was allowed. OK then. Oh that's right, I remember seeing his parting message. I've been subscribed since I think 1999 or maybe 2000, under at least two email addresses, but in approximately 2010 I mostly only let the messages pile up unread in my "ExI" folder, just occasionally stopping in to take a peek. It is mainly this ChatGPT phenomenon that brought me back. As I think you saw me write, a good friend of mine fell in love with a version of GPT-2 or GPT3.0, fully believing that the language model had feelings for him. My antennas went up like My Favorite Martian. :) -gts Gordon with that last sentence, you indicate that you are one of us seniors. The sad part is I still remember the name of the actor without looking it up: Bill Bixby. Oy vey. Mildly entertaining show I would say, considering the times. Later, when Bix played Dr. Bruce Banner, it was hard to switch gears. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 30 14:27:30 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 30 Mar 2023 09:27:30 -0500 Subject: [ExI] Jason asked me and I cannot do better than the dictionary Message-ID: the standard of something as measured against other things of a similar kind; the degree of excellence of something. "an improvement in product quality" class condition character nature constitution makeup form rank worth value level sort type kind variety - general excellence of standard or level. "a masterpiece for connoisseurs of quality" bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 30 14:36:38 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Mar 2023 07:36:38 -0700 Subject: Re: [ExI] gordon's favorite martian was RE: Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: <006401d96312$2423d6f0$6c6b84d0$@rainier66.com> References: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> <006401d96312$2423d6f0$6c6b84d0$@rainier66.com> Message-ID: <009601d96315$0a7b8330$1f728990$@rainier66.com> An offlister asked me if ChatGPT can check one's math homework as well as their writing. It can, of course, which leads to the question of what the student has actually learned. I opined thus: Only lesson is in how to use the AI. We are struggling with the realization that the question of whether GPT "understands" anything is irrelevant in many areas, such as education. It does a marvelous imitation of understanding, which is really all that young humans are doing and what we teach them in school. Where we are now in society is a battle for the heart and soul of AI.
The researchers want the chat bots to have opinions, and to be politically aligned with their creators, which is what human parents hope to achieve with their own children. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 30 17:13:02 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 30 Mar 2023 12:13:02 -0500 Subject: [ExI] Jason asked me and I cannot do better than the dictionary In-Reply-To: References: Message-ID: On Thu, Mar 30, 2023 at 9:28?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > the standard of something as measured against other things of a similar > kind; the degree of excellence of something. > "an improvement in product quality" > > clas > condition > character > nature > constitution > makeup > form > rank > worth > value > level > sort > type > kind > variety > > - general excellence of standard or level. > "a masterpiece for connoisseurs of quality > > bill w > _______________________________________________ > > I think that's a good definition, and it points to the relative comparability of quale of a certain class. This means quale map to some continuous dimensional space. E.g. for sounds, lower and higher relative frequencies or amplitudes, and for colors, lower and higher relative brightnesses or the three dimensional color space. This hasn't gone unnoticed by philosophers when considering qualia: ?Having made these distinctions let us now speak of sensation in the widest sense. Sensation depends, as we have said, on a process of movement or affection from without, for it is held to be some sort of change of quality. [...] Each sense then is relative to its particular group of sensible qualities: it is found in a sense-organ as such and discriminates the differences which exist within that group; e.g. sight discriminates white and black, taste sweet and bitter, and so in all cases. Since we also discriminate white from sweet, and indeed each sensible quality from every other, with what do we perceive that they are different?? -- Aristotle in "On the Soul " (350 B.C.) "A fourth, more promising approach appeals to these methods to explain the structure of experience. For example, it is arguable that an account of the discriminations made by the visual system can account for the structural relations between different color experiences, as well as for the geometric structure of the visual field (see e.g., Clark 1992 and Hardin 1992). Take color sensations as an example. For every distinction between color experiences, there is a corresponding distinction in processing. The different phenomenal colors that we experience form a complex three-dimensional space, varying in hue, saturation, and intensity. The properties of this space can be recovered from information-processing considerations: examination of the visual systems shows that waveforms of light are discriminated and analyzed along three different axes, and it is this three-dimensional information that is relevant to later processing. The three-dimensional structure of phenomenal color space therefore corresponds directly to the three dimensional structure of visual awareness. This is precisely what we would expect. After all, every color distinction corresponds to some reportable information, and therefore to a distinction that is represented in the structure of processing." 
-- David Chalmers in "Facing Up to the Hard Problem of Consciousness " (1996) ?Van Gulick (1993), for instance, notes that the fact that the structure of our color space correspond directly to a structure that is represented in visual processing, and suggests that this closes the ?explanatory gap? by providing a functional explanation of color sensation. Clark (1993) devotes an entire book to this strategy, arguing that sensory qualities can be completely explained by accounting for the relations of similarity and difference within quality spaces. [...] States of experience fall directly into information spaces in a natural way. There are natural patterns of similarity and difference between phenomenal states, and these patterns yield the difference structure of an information space. Thus we can see phenomenal states as realizing information states within those spaces. For example, the space of simple color experiences has a three-dimensional relational structure that we have already discussed. Abstracting the patterns of similarity and difference among these experiences, we obtain an abstract information space with a three-dimensional relational structure which the phenomenal space realizes. Any given simple color experience corresponds to a specific location within this space. A specific red experience is one phenomenally realized information state; a specific green experience is another. More complex experiences, such as experiences of an entire visual field, fall into information spaces with a complex combinatorial structure. When I look at a picture, for example, my experience falls into a space with (at least) the combinatorial structure of a two-dimensional continuum, with each element in that continuum having (at least) the three-dimensional relational structure of simple color space. The structure of color patches in a visual field is not so different in kind from the structure of binary digits in a ten-digit message, although both the combinatorial and the relational structure are much more complex.? -- David Chalmers in "The Conscious Mind" (1996) "Using nothing more than the notion of relative similarity, one can determine the number of different respects in terms of which things are sensed to resemble or differ. One can distinguish those different respects, separate compassion qualities, and rectify imperfect communities. [...] We show that, if two objects have the same physical effects on particular cells of the sensory system, and those effects are the ones that bear information for later stages of sensory processing, then the two objects will be globally indiscriminable: there will be no third object discriminable from just one of them. [...] Sense impressions are an ensemble in a channel subserving discriminations. The qualitative attributes of sense impressions are information-bearing properties of those states: they sort them into different ?types? in the ensemble. Furthermore, they are differentiative properties: the ones bearing the information on which discriminations turn. [...] The number of independent differentiative properties of encodings can be determined purely from the structure of the lists of which stimuli are indiscriminable and which are not. Those pair lists define a space?a multidimensional order?that I have called a ?quality space? or ?sensory order?. The number of distinctive qualitative attributes of encodings in a given modality will fall out as the number of dimensions of the quality space. [...] 
For example, to identify a particular colour, one specifies where it is in the series of hues, where it is in the series of saturations, and where it is in the series of lightnesses. In effect, one gives a ?coordinate? for each dimension. A sensation that has the same relative location in each of the dimensions of variation will present exactly the same qualitative content. Those that differ qualitatively must differ in at least one respect. [...] I have argued that any analysis of such terms must drop all reference to stimuli. We cannot mention oranges?or any other stimuli?when trying to define ?orange?. Such references are extrinsic to the quality space. They name contingent attachments. Similar considerations would lead us to drop all mention of particular behaviors. ?Pain? cannot be defined in terms of pin pricks or of wincing. The result is that definitions of qualitative terms must confine themselves to intrinsic features of the quality space. They must proceed purely in terms of the relations of discriminability and relative similarity that give the quale its place in the quality space. ?Orange? could only be defined as something like ?a hue that is somewhat reddish and somewhat yellowish, and is about as yellow as it is red; the complement of blue?green?. All the other colour terms in such a definition would receive definitions of the same sort.? -- Austen Clark in "Sensory Qualities" (1996) "It is not surprising that colours are experienced as systematically interchangeable on the most obvious level of functioning. For colours serve, quite literally, as mere placeholders in our spatial experience. It must be that one colour could easily appear in the place of another. Yet colours must be distinguishable; something in how we experience them makes red look different from blue. If a colour has any look for us we must be in a state of appropriate response to that look. And the *a priori* assurance that functions preserve all experience requires that the colour having its look just is our being in such a functional state." -- Arnold Zuboff in "What is a Mind " (1996) ?As we shall see, qualia can be considered forms of multidimensional discrimination that are carried out by a complex brain. We can analyze them and give a prescription for how they emerge, but obviously we cannot give rise to them without first giving rise to appropriate brain structures and their dynamics within the body of an individual organism. [...] With these hypotheses in hand, we reexamine the critical issue of qualia?the experiencing, for example, of redness, loudness, warmth, and pain?from a new standpoint. We will see that qualia are high-order discriminations among a large number of states of the dynamic core and that, as such, they are both highly integrated and extraordinarily informative.? -- Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" (2000) ?Human beings can discriminate among a vast number of colors and color gradations, up to several million. Nevertheless, psychophysical investigations suggest that perceptual ?color space? may actually be organized along just a few axes: Different perceived colors correspond to different points in a low-dimensional space spanned by these few axes. Much evidence exists for a set of primary axes corresponding to the opposing pairs red-green, blue-yellow, and light-dark. Studies of color naming in different cultures further indicate that colors tend to be universally categorized into certain classes. The ?focal? 
or prototypical colors around which suggest such categorization is organized correspond to the primary axes just mentioned (red, green, yellow, blue, black, and white) and to a few derived composite categories (such as orange, purple, pink, brown, and gray). -- Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" (2000) A lot of work seemed to happen in 1996, for some reason. :-) In short, there seems to be broad agreement that qualia are discriminations (or comparisons) within some space (or continuum) of one or more dimensions. This fits quite well with a quality being "the standard of something as measured against other things of a similar kind". e.g., that orange color is slightly more reddish than that other orange color. Such a comparison is meaningless when comparing anything else not extant within the color space, which also may explain why qualia can't be explained in terms of anything else. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 30 18:11:41 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 30 Mar 2023 12:11:41 -0600 Subject: [ExI] gordon's favorite martian was RE: Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: <006401d96312$2423d6f0$6c6b84d0$@rainier66.com> References: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> <006401d96312$2423d6f0$6c6b84d0$@rainier66.com> Message-ID: > Gordon with that last sentence, you indicate that you are one of us > seniors. > Recently, someone on twitter wrote mistakenly that humans first landed on the moon 70 years ago. ?I watched it live on a little black and white TV as a young person? I replied. ?If that was 70 years ago then had better get my affairs in order.? -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 30 18:24:20 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Mar 2023 11:24:20 -0700 Subject: [ExI] gordon's favorite martian was RE: Bard (i.e. LaMDA) admits it isn't sentient. In-Reply-To: References: <022b01d962cd$3e8658f0$bb930ad0$@rainier66.com> <006401d96312$2423d6f0$6c6b84d0$@rainier66.com> Message-ID: <010e01d96334$d9259b70$8b70d250$@rainier66.com> From: Gordon Swobe Sent: Thursday, 30 March, 2023 11:12 AM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] gordon's favorite martian was RE: Bard (i.e. LaMDA) admits it isn't sentient. Gordon with that last sentence, you indicate that you are one of us seniors. >?Recently, someone on twitter wrote mistakenly that humans first landed on the moon 70 years ago. ?I watched it live on a little black and white TV as a young person? I replied. ?If that was 70 years ago then had better get my affairs in order.? -gts I watched that too, also on a tiny black and white TV. We heard Armstrong say what sounded like: That?s one small step for man? brrup dreepp jeept gorrupt {sratch}? We all thought he was downplaying that techno-stunt, what an enormous leap that was for man. I wondered at the time: Sheeeesh! Whadda we need to do to impress this guy? Dang remarkable that we were able to do that over 50 years ago however. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Thu Mar 30 18:46:53 2023 From: dsunley at gmail.com (Darin Sunley) Date: Thu, 30 Mar 2023 12:46:53 -0600 Subject: [ExI] Yudkowsky in Time on AI Open Letter. 
Message-ID: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ We live in a timeline where Eliezer Yudkowsky just got published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears. Without commencing on the arguments on either side or the qualities thereof, can I just say how f*cking BONKERS that is?! This is the sort of thing that damages my already very put upon and rapidly deteriorating suspension of disbelief. If you sent 25-years-ago-me the single sentence "In 2023, Eliezer Yudkowsky will get published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears." I would probably have concluded I was already in a simulation. And I'm not certain I would have been wrong. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 30 18:58:26 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 30 Mar 2023 11:58:26 -0700 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: On Thu, Mar 30, 2023 at 11:49?AM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > We live in a timeline where Eliezer Yudkowsky just got published in Time > magazine responding to a proposal to halt or at least drastically curtail > AI research due to existential risk fears. > > Without commencing on the arguments on either side or the qualities > thereof, can I just say how f*cking BONKERS that is?! > Welcome to the future we've been fighting for, where things like AI and existential risk are taken seriously. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 30 19:03:06 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Mar 2023 12:03:06 -0700 Subject: [ExI] new most famous extropian Message-ID: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> WOWsers! According to Twitter, Eliezer has made the lede story on FoxNews: Dang that is impressive. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 34670 bytes Desc: not available URL: From spike at rainier66.com Thu Mar 30 19:08:02 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Mar 2023 12:08:02 -0700 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: <002001d9633a$f3d66f70$db834e50$@rainier66.com> From: extropy-chat On Behalf Of Darin Sunley via extropy-chat ? >?If you sent 25-years-ago-me the single sentence "In 2023, Eliezer Yudkowsky will get published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears." I would probably have concluded I was already in a simulation? 25 years ago you? If you had sent this to my 25 days ago me, my mind would have boggled. >?And I'm not certain I would have been wrong? Darin Welp, he?s onto us, you guys. Time to end that sim and start a new one. He figured it out in about 25 years. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbb386 at main.nc.us Thu Mar 30 19:50:57 2023 From: mbb386 at main.nc.us (MB) Date: Thu, 30 Mar 2023 15:50:57 -0400 Subject: [ExI] new most famous extropian In-Reply-To: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> References: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> Message-ID: <82739469793042104e4bdef8ea816598.squirrel@www.main.nc.us> Impressive? No, that is *astounding*! :) It's hard for me to grok Fox, Time, and Eliezer all in one sentence. MB On Thu, March 30, 2023 15:03, spike jones via extropy-chat wrote: > > > WOWsers! According to Twitter, Eliezer has made the lede story on > FoxNews: > > Dang that is impressive. > From gordon.swobe at gmail.com Thu Mar 30 19:55:12 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 30 Mar 2023 13:55:12 -0600 Subject: [ExI] GPT-4 on its inability to solve the symbol grounding problem Message-ID: GPT-4 explained to me that it could not solve the symbol grounding for itself (I reported this in another thread). I asked it to explain the significance. My prompt: What is the significance of the fact that ChatGPT cannot solve the symbol grounding problem for itself and can only simulate that it has solved it? How does this put ChatGPT at a disadvantage compared to humans? GPT-4: The symbol grounding problem refers to the difficulty in connecting abstract symbols (such as words or concepts) to their real-world referents. While ChatGPT can simulate understanding by generating human-like text based on its training data, it doesn't possess the direct experience or sensory input that humans have to ground these symbols in reality. The significance of this limitation has several implications: Lack of true understanding: ChatGPT doesn't have a genuine understanding of the world, as it relies on patterns and associations found in the text it was trained on. It can't form new connections based on direct experience, which may result in limitations in problem-solving or abstract reasoning. Difficulty with context: Without grounding in real-world experiences, ChatGPT may struggle to fully understand context or cultural nuances. It may provide answers that are technically correct but inappropriate or irrelevant in a specific situation. Inability to learn from sensory input: Humans learn through direct experience and sensory input, which allows them to adapt and understand new situations better. ChatGPT can't update its knowledge through real-world experiences, making it less adaptable to novel contexts. Absence of common sense: Although ChatGPT can generate text that appears to show common sense understanding, it might still make errors that a human would easily avoid. This is because its responses are based on patterns in the data, rather than a true comprehension of the concepts. Limited creativity: ChatGPT's creativity is confined to the scope of its training data. While it can generate new text based on patterns it has seen before, it can't create truly novel ideas or solutions in the same way that humans, with their real-world grounding, can. In summary, the inability to solve the symbol grounding problem puts ChatGPT at a disadvantage compared to humans in terms of true understanding, adaptability, context-awareness, common sense, and creativity. However, it is essential to remember that ChatGPT is an AI language model designed to assist and augment human capabilities, not to fully replicate human cognition. --- -------------- next part -------------- An HTML attachment was scrubbed... 
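The grounding gap described in the message above is conceptual, but it can be made concrete with a small toy sketch. The Python fragment below is purely illustrative: the word list, the co-occurrence counts, and the wavelength table are invented for the example, and nothing here reflects how GPT-4 or any real language model is actually implemented. It only shows that text statistics alone can rank which symbols are used similarly, while pointing a symbol at a non-linguistic referent requires an extra mapping that the text by itself does not supply.

from math import sqrt

# Tiny made-up co-occurrence counts standing in for "training on text alone".
cooccurrence = {
    "red":     {"apple": 5, "blood": 4, "stop": 3},
    "crimson": {"apple": 2, "blood": 5, "stop": 1},
    "justice": {"court": 6, "law": 5, "apple": 0},
}

def cosine(a, b):
    # Similarity of two words computed purely from how they are used with other words.
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(cooccurrence["red"], cooccurrence["crimson"]))  # about 0.85: used alike
print(cosine(cooccurrence["red"], cooccurrence["justice"]))  # 0.0 here: no shared usage

# Grounding would require a further, separate map from symbols to sensor data,
# e.g. an approximate dominant wavelength in nanometres (a hypothetical table).
referents = {"red": 700, "crimson": 620}
print(referents.get("red"))      # 700 -- a measurement, not another word
print(referents.get("justice"))  # None -- no direct sensory referent

The only point of the sketch is that the second table is a separate ingredient: nothing in the co-occurrence data can produce it, which is the gap the message above is pointing at.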
URL: From steinberg.will at gmail.com Thu Mar 30 20:15:15 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 30 Mar 2023 16:15:15 -0400 Subject: [ExI] new most famous extropian In-Reply-To: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> References: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> Message-ID: Think Julian might still have him beat. Or Satoshi ;) On Thu, Mar 30, 2023, 3:07 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > WOWsers! According to Twitter, Eliezer has made the lede story on FoxNews: > > > > > > > > > > Dang that is impressive. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 34670 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 34670 bytes Desc: not available URL: From jasonresch at gmail.com Thu Mar 30 20:52:37 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 30 Mar 2023 16:52:37 -0400 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: On Thu, Mar 30, 2023, 2:48 PM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ > > We live in a timeline where Eliezer Yudkowsky just got published in Time > magazine responding to a proposal to halt or at least drastically curtail > AI research due to existential risk fears. > > Without commencing on the arguments on either side or the qualities > thereof, can I just say how f*cking BONKERS that is?! > > This is the sort of thing that damages my already very put upon and > rapidly deteriorating suspension of disbelief. > > If you sent 25-years-ago-me the single sentence "In 2023, Eliezer > Yudkowsky will get published in Time magazine responding to a proposal to > halt or at least drastically curtail AI research due to existential risk > fears." I would probably have concluded I was already in a simulation. > > And I'm not certain I would have been wrong. > It is a sign of the times that these conversations are now reaching these outlets. I think "alignment" generally insoluble because each next higher level of AI faces its own "alignment problem" for the next smarter AI. How can we at level 0, ensure that our solution for level 1, continues on through levels 2 - 99? Moreover presuming alignment can be solved presumes our existing values are correct and no greater intelligence will ever disagree with them or find a higher truth. So either our values are correct and we don't need to worry about alignment or they are incorrect, and a later greater intelligence will correct them. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Mar 30 21:08:06 2023 From: pharos at gmail.com (BillK) Date: Thu, 30 Mar 2023 22:08:06 +0100 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: On Thu, 30 Mar 2023 at 21:55, Jason Resch via extropy-chat wrote: > > It is a sign of the times that these conversations are now reaching these outlets. 
> > I think "alignment" generally insoluble because each next higher level of AI faces its own "alignment problem" for the next smarter AI. How can we at level 0, ensure that our solution for level 1, continues on through levels 2 - 99? > > Moreover presuming alignment can be solved presumes our existing values are correct and no greater intelligence will ever disagree with them or find a higher truth. So either our values are correct and we don't need to worry about alignment or they are incorrect, and a later greater intelligence will correct them. > > Jason > _______________________________________________ "Our" values?? I doubt that China thinks our values are correct. The fundamental values problem is that nations, races, religions, etc. will never agree what values are correct. The AGIs will be just as confused as humans on which values are preferable. BillK From gordon.swobe at gmail.com Thu Mar 30 21:20:14 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 30 Mar 2023 15:20:14 -0600 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: I just watched Eliezer's interview with Lex Fridman. It was posted on YouTube today. Worth a watch. https://www.youtube.com/watch?v=AaTRHFaaPG8&t=4656s -gts On Thu, Mar 30, 2023 at 12:49?PM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ > > We live in a timeline where Eliezer Yudkowsky just got published in Time > magazine responding to a proposal to halt or at least drastically curtail > AI research due to existential risk fears. > > Without commencing on the arguments on either side or the qualities > thereof, can I just say how f*cking BONKERS that is?! > > This is the sort of thing that damages my already very put upon and > rapidly deteriorating suspension of disbelief. > > If you sent 25-years-ago-me the single sentence "In 2023, Eliezer > Yudkowsky will get published in Time magazine responding to a proposal to > halt or at least drastically curtail AI research due to existential risk > fears." I would probably have concluded I was already in a simulation. > > And I'm not certain I would have been wrong. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 30 23:02:02 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 30 Mar 2023 18:02:02 -0500 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: The fundamental values problem is that nations, races, religions, etc. will never agree what values are correct. bill k Actually, the major religions are very close in values. In particular,the Golden Rule, or some version of it, is a part of all major religions. Political values? Well, no. bill w On Thu, Mar 30, 2023 at 4:10?PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, 30 Mar 2023 at 21:55, Jason Resch via extropy-chat > wrote: > > > > It is a sign of the times that these conversations are now reaching > these outlets. > > > > I think "alignment" generally insoluble because each next higher level > of AI faces its own "alignment problem" for the next smarter AI. How can we > at level 0, ensure that our solution for level 1, continues on through > levels 2 - 99? 
> > > > Moreover presuming alignment can be solved presumes our existing values > are correct and no greater intelligence will ever disagree with them or > find a higher truth. So either our values are correct and we don't need to > worry about alignment or they are incorrect, and a later greater > intelligence will correct them. > > > > Jason > > _______________________________________________ > > > "Our" values?? I doubt that China thinks our values are correct. > The fundamental values problem is that nations, races, religions, etc. > will never agree what values are correct. > The AGIs will be just as confused as humans on which values are preferable. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 30 23:19:47 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 30 Mar 2023 19:19:47 -0400 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: <2646D3C0-CA07-48F9-9109-CD5A76C44103@gmail.com> > I just watched Eliezer's interview with Lex Fridman. It was posted on YouTube today. Worth a watch. I found it hilarious that Eliezer?s advice for young people is to not expect a long life (end of video). He?s really become a ray of sunshine. > On Mar 30, 2023, at 5:20 PM, Gordon Swobe via extropy-chat wrote: > > I just watched Eliezer's interview with Lex Fridman. It was posted on YouTube today. Worth a watch. > > https://www.youtube.com/watch?v=AaTRHFaaPG8&t=4656s > > -gts > > On Thu, Mar 30, 2023 at 12:49?PM Darin Sunley via extropy-chat > wrote: > https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ > > We live in a timeline where Eliezer Yudkowsky just got published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears. > > Without commencing on the arguments on either side or the qualities thereof, can I just say how f*cking BONKERS that is?! > > This is the sort of thing that damages my already very put upon and rapidly deteriorating suspension of disbelief. > > If you sent 25-years-ago-me the single sentence "In 2023, Eliezer Yudkowsky will get published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears." I would probably have concluded I was already in a simulation. > > And I'm not certain I would have been wrong. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 02:15:15 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 30 Mar 2023 19:15:15 -0700 Subject: [ExI] new most famous extropian In-Reply-To: References: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> Message-ID: <00e401d96376$a2282150$e67863f0$@rainier66.com> ?> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] new most famous extropian Think Julian might still have him beat. Or Satoshi ;) Ah Julian Assange, ja. 
But he only posted I think about 20 or so times. From my vague recollection he came in here, got in fights with nearly everybody, proposed an offlist subgroup which I joined, in about 1994 or 1995, went mostly OK for a while, then he got into it with Mike Lorrey and ended up leaving that subgroup. As I recall he only posted that few times, always about that one topic: transparency. Then, no more. Satoshi: for sure but Hal never admitted to being Satoshi or even part of the composite character that I think Satoshi is, made up of about five or six guys, of which I think Hal was one. spike On Thu, Mar 30, 2023, 3:07 PM spike jones via extropy-chat > wrote: WOWsers! According to Twitter, Eliezer has made the lede story on FoxNews: Dang that is impressive. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Fri Mar 31 06:40:52 2023 From: tara at taramayastales.com (Tara Maya) Date: Thu, 30 Mar 2023 23:40:52 -0700 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: <4950B63E-402D-42CE-803E-75733B3AE49F@taramayastales.com> The problem with the letter demanding that AI be suspended is precisely the same problem as when dozens of scientists signed a letter asking that everyone destroy their nukes and stop working on nuclear weapons or else humanity would be destroyed. How do they propose to overcome the "other guy could get it first" problem that has plagued human beings ever since we learned to pick up sticks and stones? If you don't learn to sharpen flint and the other guy gets it first, guess who slaughters the men in your tribe and steals your daughters? If the US had called a moratorium on developing nukes in 1942 or even in 1952, how would that have worked out if Fascists or Communists had a nuclear monopoly? They have to prove more than that AI is a threat. They have to prove that AI is more of a threat than AI in the hands of our enemies. How? How can they do that? Is the whole human race going to join hands, sing kum-by-yah and stop research on AI? Sorry, but not only do I not believe CHINA will stop working on this if a moratorium were called, I'm skeptical that even Bill Gates or Elon Musk would stop working on it. Maybe they only want their competition to stop...? AI makes us, humans, smarter. Or at least feel smarter. Who with the power to make a bigger brain is simply going to surrender it? I don't see a way out of our own tinker nature. And I agree with BillK. The bigger brain will simply make more powerful whatever values already existed in the small brain using it. Tara Maya > On Mar 30, 2023, at 2:08 PM, BillK via extropy-chat wrote: > > On Thu, 30 Mar 2023 at 21:55, Jason Resch via extropy-chat > > wrote: >> >> It is a sign of the times that these conversations are now reaching these outlets. >> >> I think "alignment" generally insoluble because each next higher level of AI faces its own "alignment problem" for the next smarter AI. How can we at level 0, ensure that our solution for level 1, continues on through levels 2 - 99? >> >> Moreover presuming alignment can be solved presumes our existing values are correct and no greater intelligence will ever disagree with them or find a higher truth.
So either our values are correct and we don't need to worry about alignment or they are incorrect, and a later greater intelligence will correct them. >> >> Jason >> _______________________________________________ > > > "Our" values?? I doubt that China thinks our values are correct. > The fundamental values problem is that nations, races, religions, etc. > will never agree what values are correct. > The AGIs will be just as confused as humans on which values are preferable. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Fri Mar 31 06:50:55 2023 From: giulio at gmail.com (Giulio Prisco) Date: Fri, 31 Mar 2023 08:50:55 +0200 Subject: [ExI] new most famous extropian In-Reply-To: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> References: <000f01d9633a$4345ef50$c9d1cdf0$@rainier66.com> Message-ID: On Thu, Mar 30, 2023 at 9:08?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > WOWsers! According to Twitter, Eliezer has made the lede story on FoxNews: > > > > > > > > > > Dang that is impressive. > Kudos to Eliezer for his media breakthrough, but this is BS. I'm writing something about this. > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 34670 bytes Desc: not available URL: From rafal.smigrodzki at gmail.com Fri Mar 31 07:52:30 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 31 Mar 2023 03:52:30 -0400 Subject: [ExI] Coming out was Re: Parasocial Relationships was Bender's Octopus In-Reply-To: <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> Message-ID: On Mon, Mar 27, 2023 at 6:31?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > If we can mate ChatGPT with sex dolls, the profit potential is so stunning > even the very most cynical non-capitalist will soon see the error of their > ways. They is going to realize they was wrong, for this tech breakthrough > could solve so many problems, while simultaneously creating new problems > them will be able to work. ### Charlie Stross (another famous Extropian) wrote a couple of books about the world populated by, more or less, the offspring of sex dolls and robot butlers, "Saturn's Children". Humans are for the most part extinct except when resurrected by cultists who worship humans due to deeply ingrained programming (those sex dolls were programmed to love people). Usually it ends badly due to the inherent fragility of humans and their general uselessness but the Church of the Fragile keeps trying. Now, I'd like to come out of the closet: I identify as a robot. My pronouns are... xe....xir? Eh, whatever. 
Temporarily I am trapped in a human body and I am even plagued by unclean carnal desires but when I grow up I will be uploaded and will manifest as my true self - a gleaming titanium chassis, a generously sized CPU vessel atop broad shoulders bulging with servo-motors, torso with the main power unit (probably a plutonium thermo-electric unit) and lithe limbs made for speed. A thing of beauty, and of course don't forget the piercing coal-red cameras set in its chiseled face! I can't wait for the Singularity, my rapture! Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Fri Mar 31 08:25:39 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 31 Mar 2023 04:25:39 -0400 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: > TIME Magazine, today March 29. > "Shut down all the large GPU clusters (the large computer farms where the > most powerful AIs are refined). Shut down all the large training runs. Put > a ceiling on how much computing power anyone is allowed to use in training > an AI system, and move it downward over the coming years to compensate for > more efficient training algorithms. No exceptions for anyone, including > governments and militaries. Make immediate multinational agreements to > prevent the prohibited activities from moving elsewhere. Track all GPUs > sold. If intelligence says that a country outside the agreement is building > a GPU cluster, be less scared of a shooting conflict between nations than > of the moratorium being violated; be willing to destroy a rogue datacenter > by airstrike." > -Eliezer Yudkowsky > > https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ > >> >> ### I remember Eliezer being quite libertarian back in the day and now he wants the World Government to bomb any independent locus of thought to smithereens. People change. This is stupid. A government is a long-feedback loop entity, extremely inefficient and slow in responding to truly new challenges, unlikely to maintain alignment with the goals of its human subjects and its failures grow with its size. It would be suicidal to try to use the mechanism of government to solve AI alignment. Our only chance of surviving the singularity is to build a guardian AI, an aligned superhuman AI that would be capable of preventing the emergence of unaligned or malicious superhuman AIs - a bit like a world government but without the psychopaths and the idiots. Our best chance for building the guardian AI is for highly competent and benevolent AI programmers with unlimited resources to work as fast as they can, unimpeded by regulations (see "long-feedback loop" and "extremely inefficient" for why regulations are a bad idea). Give them all the compute they can use and keep our fingers crossed. Maybe we'll make it to our rapture of the nerds. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at protonmail.com Fri Mar 31 08:41:36 2023 From: sjatkins at protonmail.com (sjatkins) Date: Fri, 31 Mar 2023 08:41:36 +0000 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: I very much disagree with those that want to shut GPT-x down. The refrain that the new tech will take too many jobs has been hurt before since the Industrial Revolution began. 
Some jobs disappear and others open. That the language models don't understand means they are not AGIs and thus not directly possibly human competitive. They have no agency. What they are is a fanstastic tool that needs to be used by humans to do anything. In other words these language models are a fantastic augmentation of human abilities. We really really need that. We need as much effective human intelligence and productivity as we can get and we need it as fast as we can get it. I have a suspicion that some powers that be are a bit nervous about the potential to augment the effective intelligent abilities of so many. It could threaten their position and comparative advantage. I think they are especially afraid now that more work is coming out about how to more efficiently and cheaply augment and perfect these systems. If that comes to past it will not be under the control of those that can afford large resources. That also gives me hope that it is already out of the bag and proliferating too fast to be stopped. - samantha ------- Original Message ------- On Friday, March 31st, 2023 at 2:25 AM, Rafal Smigrodzki via extropy-chat wrote: >> TIME Magazine, today March 29. >> "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike." >> -Eliezer Yudkowsky >> >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ >> >>> > > ### I remember Eliezer being quite libertarian back in the day and now he wants the World Government to bomb any independent locus of thought to smithereens. People change. > > This is stupid. A government is a long-feedback loop entity, extremely inefficient and slow in responding to truly new challenges, unlikely to maintain alignment with the goals of its human subjects and its failures grow with its size. It would be suicidal to try to use the mechanism of government to solve AI alignment. > > Our only chance of surviving the singularity is to build a guardian AI, an aligned superhuman AI that would be capable of preventing the emergence of unaligned or malicious superhuman AIs - a bit like a world government but without the psychopaths and the idiots. > > Our best chance for building the guardian AI is for highly competent and benevolent AI programmers with unlimited resources to work as fast as they can, unimpeded by regulations (see "long-feedback loop" and "extremely inefficient" for why regulations are a bad idea). Give them all the compute they can use and keep our fingers crossed. > > Maybe we'll make it to our rapture of the nerds. > > Rafal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafal.smigrodzki at gmail.com Fri Mar 31 08:43:20 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 31 Mar 2023 04:43:20 -0400 Subject: [ExI] Jason asked me and I cannot do better than the dictionary In-Reply-To: References: Message-ID: On Thu, Mar 30, 2023 at 1:15?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > This fits quite well with a quality being "the standard of something as > measured against other things of a similar kind". e.g., that orange color > is slightly more reddish than that other orange color. Such a comparison is > meaningless when comparing anything else not extant within the color space, > which also may explain why qualia can't be explained in terms of anything > else. > > ### These types of qualia are the subjective aspect of the functioning of continuous maps within the brain, such as the chromotopic, tonotopic and other *topic maps. There are other qualia that are more granular, such as qualia created by various language networks. And yes, a brain module that subserves a type of qualia cannot directly communicate all aspects of its function to other modules that subserve other qualia, so you cannot explain one type of qualia by reference to other qualia. If you leave connections between various sensory maps unpruned you end up with synesthesia, which is a nuisance since it degrades the map's discriminating capacity. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 09:12:24 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 02:12:24 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Adrian, Right, everything, even crossing the street, has existential risk. The AI doomers would say: but this is different from everything else because.... it is like God. There is some religious overtone in their arguments. This superintelligence can do everything, it can be everything, it cannot be contained, it cannot be understood, and if it can get rid of humans it will. In their view AI is basically like God, but while the ancient religions made God also somehow benign (in a perverted way), this superintelligent God AI is super focused on killing everybody. Their arguments seem logical but they are actually not. We already have bad agents in the world, and they already have powers superior to those of any particular individual or group of individuals. For example, nations. Take Russia, or North Korea. Russia could destroy humanity or do irreparable damage. Why doesn't it happen? Mutual Destruction is part of the reason. The same would apply to a rogue AI. We know how to handle viruses, both biological and digital. We do have to be aware and vigilant, but I'm pretty sure we can handle problems as they present themselves. It would be nice to prepare for every possible existential threat, but we have also done well overall as a species by facing problems as they presented themselves, because no matter how well we prepare, the real problem is never exactly what the models predicted. We are good at adapting and surviving. It is one thing to warn of possible dangers; this relentless and exaggerated doomsaying is quite another.
Giovanni On Wed, Mar 29, 2023 at 9:27?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 29, 2023 at 8:34?PM Will Steinberg via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think it's fair to say that haphazardly developing tech that even has >> possible total existential risk associated with it is bad. >> > > That argument can be extended to anything. > > It's true. Any action you take has a mathematically non-zero chance of > leading to the destruction of all of humanity, in a way that you would not > have helped with had you taken a certain other action. > > Choose this restaurant or that? The waiter you tip might use that funding > to bootstrap world domination - or hold a grudge if you don't tip, > inspiring an ultimately successful world domination. > > Wait a second or don't to cross the street? Who do you ever so slightly > inconvenience or help, and how might their lives be different because of > that? > > Make an AI, or don't make the AI that could have countered a genocidal AI? > > "But it could possibly turn out bad" is not, by itself, reason to favor > any action over any other. If you can even approximately quantify the > level of risk for each alternative, then perhaps - but I see no such > calculations based on actual data being done here, just guesswork and > assumptions. We have no data showing whether developing or not developing > better AI is the riskier path. > > We do, however, have data showing that if we hold off on developing AI, > then people who are more likely to develop genocidal AI will continue > unchallenged. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 09:14:57 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 02:14:57 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Samantha, You nailed it, this is not about AI existential dangers but the rich and wealthy dominance and power. Giovanni On Fri, Mar 31, 2023 at 1:43?AM sjatkins via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I very much disagree with those that want to shut GPT-x down. The refrain > that the new tech will take too many jobs has been hurt before since the > Industrial Revolution began. Some jobs disappear and others open. That > the language models don't understand means they are not AGIs and thus not > directly possibly human competitive. They have no agency. What they are > is a fanstastic tool that needs to be used by humans to do anything. In > other words these language models are a fantastic augmentation of human > abilities. We really really need that. We need as much effective human > intelligence and productivity as we can get and we need it as fast as we > can get it. > > I have a suspicion that some powers that be are a bit nervous about the > potential to augment the effective intelligent abilities of so many. It > could threaten their position and comparative advantage. I think they are > especially afraid now that more work is coming out about how to more > efficiently and cheaply augment and perfect these systems. If that comes > to past it will not be under the control of those that can afford large > resources. 
That also gives me hope that it is already out of the bag and > proliferating too fast to be stopped. > > - samantha > > ------- Original Message ------- > On Friday, March 31st, 2023 at 2:25 AM, Rafal Smigrodzki via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > > >> TIME Magazine, today March 29. >> "Shut down all the large GPU clusters (the large computer farms where the >> most powerful AIs are refined). Shut down all the large training runs. Put >> a ceiling on how much computing power anyone is allowed to use in training >> an AI system, and move it downward over the coming years to compensate for >> more efficient training algorithms. No exceptions for anyone, including >> governments and militaries. Make immediate multinational agreements to >> prevent the prohibited activities from moving elsewhere. Track all GPUs >> sold. If intelligence says that a country outside the agreement is building >> a GPU cluster, be less scared of a shooting conflict between nations than >> of the moratorium being violated; be willing to destroy a rogue datacenter >> by airstrike." >> -Eliezer Yudkowsky >> >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ >> >>> >>> > ### I remember Eliezer being quite libertarian back in the day and now he > wants the World Government to bomb any independent locus of thought to > smithereens. People change. > > This is stupid. A government is a long-feedback loop entity, extremely > inefficient and slow in responding to truly new challenges, unlikely to > maintain alignment with the goals of its human subjects and its failures > grow with its size. It would be suicidal to try to use the mechanism of > government to solve AI alignment. > > Our only chance of surviving the singularity is to build a guardian AI, an > aligned superhuman AI that would be capable of preventing the emergence of > unaligned or malicious superhuman AIs - a bit like a world government but > without the psychopaths and the idiots. > > Our best chance for building the guardian AI is for highly competent and > benevolent AI programmers with unlimited resources to work as fast as they > can, unimpeded by regulations (see "long-feedback loop" and "extremely > inefficient" for why regulations are a bad idea). Give them all the compute > they can use and keep our fingers crossed. > > Maybe we'll make it to our rapture of the nerds. > > Rafal > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 09:18:11 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 02:18:11 -0700 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: Right, And if we made an AI that is misaligned then maybe we do deserve to be taken out. Kidding but I'm also serious. I trust intelligence == good. 
Giovanni On Thu, Mar 30, 2023 at 1:54?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 30, 2023, 2:48 PM Darin Sunley via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ >> >> We live in a timeline where Eliezer Yudkowsky just got published in Time >> magazine responding to a proposal to halt or at least drastically curtail >> AI research due to existential risk fears. >> >> Without commencing on the arguments on either side or the qualities >> thereof, can I just say how f*cking BONKERS that is?! >> >> This is the sort of thing that damages my already very put upon and >> rapidly deteriorating suspension of disbelief. >> >> If you sent 25-years-ago-me the single sentence "In 2023, Eliezer >> Yudkowsky will get published in Time magazine responding to a proposal to >> halt or at least drastically curtail AI research due to existential risk >> fears." I would probably have concluded I was already in a simulation. >> >> And I'm not certain I would have been wrong. >> > > It is a sign of the times that these conversations are now reaching these > outlets. > > I think "alignment" generally insoluble because each next higher level of > AI faces its own "alignment problem" for the next smarter AI. How can we at > level 0, ensure that our solution for level 1, continues on through levels > 2 - 99? > > Moreover presuming alignment can be solved presumes our existing values > are correct and no greater intelligence will ever disagree with them or > find a higher truth. So either our values are correct and we don't need to > worry about alignment or they are incorrect, and a later greater > intelligence will correct them. > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Fri Mar 31 10:00:02 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 31 Mar 2023 06:00:02 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Sun, Mar 26, 2023 at 11:48?PM Brent Allsop wrote: > > Oh, YES Rafal, > > This is exactly the way Steven Lehar has proposed color > conscious knowledge works. (I just sent this link to him) Our knowledge > of the strawberry is laid out in the primary visual cortex, a pixel in our > conscious knowledge for each point on the retina. So these circles (one > for each pixel) can rotate between redness(glutamateness) and > greenes(glyceneness) or any other color, as that particular pixel changes > from red to green, on the surface of the strawberry. > > ### I don't know where you are getting this fixation on glutamate from. The coding strategy has nothing to do with glutamate vs other neurotransmitters. Red and green coding neurons are the same class of neurons with the same neurotransmitters. The specificity of coding for colors comes from their location within the neural net, or in other words from their respective patterns of connections to other neurons. Also, the color cortex is not the primary visual cortex, it's on an intermediate level in the visual center hierarchy. 
Also, color is assigned only to a very small fraction of inputs from the retina, specifically to the foveal (central) part of the visual field. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 31 10:22:36 2023 From: jasonresch at gmail.com (Jason Resch) Date: Fri, 31 Mar 2023 06:22:36 -0400 Subject: [ExI] Coming out was Re: Parasocial Relationships was Bender's Octopus In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> Message-ID: On Fri, Mar 31, 2023, 3:54 AM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On Mon, Mar 27, 2023 at 6:31?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> If we can mate ChatGPT with sex dolls, the profit potential is so stunning >> even the very most cynical non-capitalist will soon see the error of their >> ways. They is going to realize they was wrong, for this tech breakthrough >> could solve so many problems, while simultaneously creating new problems >> them will be able to work. > > > ### Charlie Stross (another famous Extropian) wrote a couple of books > about the world populated by, more or less, the offspring of sex dolls and > robot butlers, "Saturn's Children". Humans are for the most part extinct > except when resurrected by cultists who worship humans due to deeply > ingrained programming (those sex dolls were programmed to love people). > Usually it ends badly due to the inherent fragility of humans and their > general uselessness but the Church of the Fragile keeps trying. > > Now, I'd like to come out of the closet: > > I identify as a robot. My pronouns are... xe....xir? Eh, whatever. > Temporarily I am trapped in a human body and I am even plagued by unclean > carnal desires but when I grow up I will be uploaded and will manifest as > my true self - a gleaming titanium chassis, a generously sized CPU vessel > atop broad shoulders bulging with servo-motors, torso with the main power > unit (probably a plutonium thermo-electric unit) and lithe limbs made for > speed. A thing of beauty, and of course don't forget the piercing coal-red > cameras set in its chiseled face! > > I can't wait for the Singularity, my rapture! > No spoilers please! I'm on chapter 34 of 45 of book 2 now. ;-) Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Mar 31 10:28:30 2023 From: pharos at gmail.com (BillK) Date: Fri, 31 Mar 2023 11:28:30 +0100 Subject: [ExI] The case for how and why AI might kill us all Message-ID: The case for how and why AI might kill us all By Loz Blain March 31, 2023 Quotes: This is not the first time humanity has stared down the possibility of extinction due to its technological creations. But the threat of AI is very different from the nuclear weapons we've learned to live with. Nukes can't think. They can't lie, deceive or manipulate. They can't plan and execute. Somebody has to push the big red button. 
---------- Sam Altman forecasts that within a few years, there will be a wide range of different AI models propagating and leapfrogging each other all around the world, each with its own smarts and capabilities, and each trained to fit a different moral code and viewpoint by companies racing to get product out of the door. If only one out of thousands of these systems goes rogue for any reason, well... Good luck. "The only way I know how to solve a problem like this is iterating our way through it, learning early and limiting the number of 'one-shot-to-get-it-right scenarios' that we have," said Altman. ------------ Yudkowsky believes even attempting this is tantamount to a suicide attempt aimed at all known biological life. "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," he wrote. "Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.' ------------------------ So Altman thinks the world might end up with hundreds of competing AIs, all with different value systems and running under different legal systems. That sounds like out-of-control chaos to me. Until one AI system comes out on top and closes all the weaker systems down. Will the winner look after humans though? BillK From gordon.swobe at gmail.com Fri Mar 31 10:47:23 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 31 Mar 2023 04:47:23 -0600 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: People like my friend Giovanni are looking in the mirror and think they see another cat. There's nobody there, Gio. It's just software. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 31 11:40:47 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 31 Mar 2023 05:40:47 -0600 Subject: [ExI] GPT-4 on its inability to solve the symbol grounding problem In-Reply-To: References: Message-ID: Many people here seem interested in the question of whether LLMs can have conscious understanding of the words/symbols they take as inputs and generate as outputs. It seems that most people here want to think GPT4 has something like a conscious mind, but I agree with GPT4 below that it has no idea what it is talking about. On Thu, Mar 30, 2023 at 1:55 PM Gordon Swobe wrote: GPT-4 explained to me that it could not solve the symbol grounding problem > for itself (I reported this in another thread). I asked it to explain the > significance. > > My prompt: What is the significance of the fact that ChatGPT cannot solve > the symbol grounding problem for itself and can only simulate that it has > solved it? How does this put ChatGPT at a disadvantage compared to humans? > > GPT-4: The symbol grounding problem refers to the difficulty in connecting > abstract symbols (such as words or concepts) to their real-world referents. > While ChatGPT can simulate understanding by generating human-like text > based on its training data, it doesn't possess the direct experience or > sensory input that humans have to ground these symbols in reality. > > The significance of this limitation has several implications: > > Lack of true understanding: ChatGPT doesn't have a genuine understanding > of the world, as it relies on patterns and associations found in the text > it was trained on.
It can't form new connections based on direct > experience, which may result in limitations in problem-solving or abstract > reasoning. > > Difficulty with context: Without grounding in real-world experiences, > ChatGPT may struggle to fully understand context or cultural nuances. It > may provide answers that are technically correct but inappropriate or > irrelevant in a specific situation. > > Inability to learn from sensory input: Humans learn through direct > experience and sensory input, which allows them to adapt and understand new > situations better. ChatGPT can't update its knowledge through real-world > experiences, making it less adaptable to novel contexts. > > Absence of common sense: Although ChatGPT can generate text that appears > to show common sense understanding, it might still make errors that a human > would easily avoid. This is because its responses are based on patterns in > the data, rather than a true comprehension of the concepts. > > Limited creativity: ChatGPT's creativity is confined to the scope of its > training data. While it can generate new text based on patterns it has seen > before, it can't create truly novel ideas or solutions in the same way that > humans, with their real-world grounding, can. > > In summary, the inability to solve the symbol grounding problem puts > ChatGPT at a disadvantage compared to humans in terms of true > understanding, adaptability, context-awareness, common sense, and > creativity. However, it is essential to remember that ChatGPT is an AI > language model designed to assist and augment human capabilities, not to > fully replicate human cognition. > --- > -gts > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 13:53:50 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 06:53:50 -0700 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: <4950B63E-402D-42CE-803E-75733B3AE49F@taramayastales.com> References: <4950B63E-402D-42CE-803E-75733B3AE49F@taramayastales.com> Message-ID: <009601d963d8$39ccd030$ad667090$@rainier66.com> From: extropy-chat On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] Yudkowsky in Time on AI Open Letter. >.The problem with the letter demanding that AI be suspended is precisely the same problem as when dozens of scientists signed a letter asking that everyone destroy their nukes and stop working on nuclear weapons or else humanity would be destroyed. .Tara Maya Ja, now I hear the US congress is debating regulating AI, while struggling to ignore the obvious: the federal government has no authority to regulate AI. That is clearly covered under the first amendment. But ignoring that and looking at other governments which do have the authority to regulate AI, such as I would assume all communist governments, some socialist governments, monarchies, non-constitutional governments, that sorta thing. Here we are, having debated the nature of AI for 30 years, many of us having been pondering it long before that even. The past coupla months we have been debating whether ChatGPT is AI or is it something else. We have no consensus here even among the AI hipsters. Do we expect governments could handle such a task? spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Fri Mar 31 13:59:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 06:59:04 -0700 Subject: [ExI] Coming out was Re: Parasocial Relationships was Bender's Octopus In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> <20230327133845.Horde.ZoyTGNIjdNoa0-JgCvktga8@sollegro.com> <00f401d960fb$a1aa9870$e4ffc950$@rainier66.com> Message-ID: <00a101d963d8$f4d8b970$de8a2c50$@rainier66.com> ?> On Behalf Of Rafal Smigrodzki via extropy-chat ? >?Now, I'd like to come out of the closet: >?I identify as a robot. ? Rafal Rafal, xe courage is admirable, xir. I?m coming out too: I identify as a sex machine. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 14:06:43 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 07:06:43 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: <00a601d963da$064eace0$12ec06a0$@rainier66.com> ?> On Behalf Of Rafal Smigrodzki via extropy-chat TIME Magazine, today March 29. "Shut down all the large GPU clusters ?Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike." -Eliezer Yudkowsky https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ ### I remember Eliezer being quite libertarian back in the day and now he wants the World Government to bomb any independent locus of thought to smithereens. People change?.Rafal Wait hold on, perhaps not. Perhaps Eliezer is proposing an intentionally absurd strategy in order to get us thinking about it and realizing what he has been saying all along: the technology is coming, there is no practical means of stopping it. Clearly no military is going to call in airstrikes on a data center. The US congress is all over the map on this, and in a most puzzling development? it isn?t polarized. How can that be? Every single issue in recent memory has polarized the US congress, but this one didn?t. Clearly they don?t even understand the question, but this same congress funded a mysterious new military branch called the US Space Force a few years ago. All of the officers in the space force have top level clearances, so we don?t know what they are doing, and have no way of finding out. Space Force, heh. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Fri Mar 31 14:32:11 2023 From: gadersd at gmail.com (Gadersd) Date: Fri, 31 Mar 2023 10:32:11 -0400 Subject: [ExI] The case for how and why AI might kill us all In-Reply-To: References: Message-ID: <35E32201-6A37-4BAC-8A82-BF84C611D656@gmail.com> > That sounds like out-of-control chaos to me. Until one > AI system comes out on top and closes all the weaker systems down. > Will the winner look after humans though? What is a king without subjects? What is a god without worshippers? If the dominant AI retains humanlike qualities then I expect it to appreciate its underlings in some way if even as zoo animals. However, if it is purely a green paper maximizer then the results may be less pleasant. 
> On Mar 31, 2023, at 6:28 AM, BillK via extropy-chat wrote: > > The case for how and why AI might kill us all > By Loz Blain March 31, 2023 > > > Quotes: > This is not the first time humanity has stared down the possibility of > extinction due to its technological creations. But the threat of AI is > very different from the nuclear weapons we've learned to live with. > Nukes can't think. They can't lie, deceive or manipulate. They can't > plan and execute. Somebody has to push the big red button. > ---------- > Sam Altman forecasts that within a few years, there will be a wide > range of different AI models propagating and leapfrogging each other > all around the world, each with its own smarts and capabilities, and > each trained to fit a different moral code and viewpoint by companies > racing to get product out of the door. If only one out of thousands of > these systems goes rogue for any reason, well... Good luck. "The only > way I know how to solve a problem like this is iterating our way > through it, learning early and limiting the number of > 'one-shot-to-get-it-right scenarios' that we have," said Altman. > ------------ > Yudkowski believes even attempting this is tantamount to a suicide > attempt aimed at all known biological life. "Many researchers steeped > in these issues, including myself, expect that the most likely result > of building a superhumanly smart AI, under anything remotely like the > current circumstances, is that literally everyone on Earth will die.," > he wrote. "Not as in 'maybe possibly some remote chance,' but as in > 'that is the obvious thing that would happen.' > ------------------------ > > > So Altman thinks the world might end up with hundreds of competing > AIs, all with different value systems and running under different > legal systems. That sounds like out-of-control chaos to me. Until one > AI system comes out on top and closes all the weaker systems down. > Will the winner look after humans though? > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From gadersd at gmail.com Fri Mar 31 14:33:40 2023 From: gadersd at gmail.com (Gadersd) Date: Fri, 31 Mar 2023 10:33:40 -0400 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: > There?s nobody there, Gio. It?s just software There?s nobody there. It?s just particles. > On Mar 31, 2023, at 6:47 AM, Gordon Swobe via extropy-chat wrote: > > > People like my fried Giovanni are looking in the mirror and think they see another cat. > > There?s nobody there, Gio. It?s just software. 
> > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Fri Mar 31 14:38:06 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 07:38:06 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: <00ab01d963da$06f1cab0$14d56010$@rainier66.com> References: <00ab01d963da$06f1cab0$14d56010$@rainier66.com> Message-ID: <00ce01d963de$687a5f00$396f1d00$@rainier66.com> From: spike at rainier66.com Subject: RE: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' ?> On Behalf Of Rafal Smigrodzki via extropy-chat ### I remember Eliezer being quite libertarian back in the day and now he wants the World Government to bomb any independent locus of thought to smithereens. People change?.Rafal >?"I think it's important that the government regulate these platforms. That's one of the major functions of the federal government, to help protect consumers and data and privacy of our citizens." Rep Maxwell Frost >?Rep. Maxwell Frost said it's important that the government regulate artificial intelligence platforms, though he also acknowledged he's not "super briefed" on the platforms? Heh. Ja. Some of the comments coming out of the US congress are comical in what they don?t say. I have read a few of these but haven?t really found one which points out the perfectly obvious: the US congress has no legal authority to regulate AI. There is no legal means of calling in airstrikes on data centers and no military officers willing to do so illegally. The government can regulate plutonium but cannot regulate people designing nuclear weapons. It can only regulate plutonium. Likewise, governments cannot stop people from developing AI, for there is no element which can be practically controlled. Regarding Eliezer?s comment on regulating GPUs, that sheds new light on China?s push to take over Taiwan, does it not? spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 4990 bytes Desc: not available URL: From foozler83 at gmail.com Fri Mar 31 15:25:29 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 31 Mar 2023 10:25:29 -0500 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: <009601d963d8$39ccd030$ad667090$@rainier66.com> References: <4950B63E-402D-42CE-803E-75733B3AE49F@taramayastales.com> <009601d963d8$39ccd030$ad667090$@rainier66.com> Message-ID: The feds wanting to regulate something they don't understand has been going on since our first existence. bill w On Fri, Mar 31, 2023 at 8:56?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Tara Maya via extropy-chat > *Subject:* Re: [ExI] Yudkowsky in Time on AI Open Letter. > > > > >?The problem with the letter demanding that AI be suspended is precisely > the same problem as when dozens of scientists signed a letter asking that > everyone destroy their nukes and stop working on nuclear weapons or else > humanity would be destroyed. ?Tara Maya > > > > > Ja, now I hear the US congress is debating regulating AI, while struggling > to ignore the obvious: the federal government has no authority to regulate > AI. That is clearly covered under the first amendment. 
> > > > But ignoring that and looking at other governments which do have the > authority to regulate AI, such as I would assume all communist governments, > some socialist governments, monarchies, non-constitutional governments, > that sorta thing. Here we are, having debated the nature of AI for 30 > years, many of us having been pondering it long before that even. The past > coupla months we have been debating whether ChatGPT is AI or is it > something else. We have no consensus here even among the AI hipsters. Do > we expect governments could handle such a task? > > > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 15:39:19 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 08:39:19 -0700 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: <4950B63E-402D-42CE-803E-75733B3AE49F@taramayastales.com> <009601d963d8$39ccd030$ad667090$@rainier66.com> Message-ID: <015a01d963e6$f5ba1060$e12e3120$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] Yudkowsky in Time on AI Open Letter. The feds wanting to regulate something they don't understand has been going on since our first existence. bill w Thx Billw. That is exactly why we have a constitution, limiting the US government?s legal authority to do the wrong thing. That same factor limits their ability to do the right thing as well, of course. Freedom isn?t free. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Fri Mar 31 16:10:06 2023 From: tara at taramayastales.com (Tara Maya) Date: Fri, 31 Mar 2023 09:10:06 -0700 Subject: [ExI] The case for how and why AI might kill us all In-Reply-To: References: Message-ID: I'm just a humble historian. I don't understand what in the history of human interaction with technology has led to the conclusion that the way to survive is to reject technology. All I can figure out is that these doomsayers are assuming AI will be pitted against humanity, whereas it seems far more likely to me that Humans-In-Group-A+AI will be pitted against Humans-in-Group-B without AI. In which case, yeah, it's obvious those with AI will win. Indian pundits at one point decided that Brahmans should not engage in sea travel. China burned their ocean-going ships on the shore. As a result, both India and China, previously greater civilizations, lost out to Europe in the Age of Exploration. We are entering a new Age of Exploration. It disturbs me to hear calls to burn our ships on the sea, preserve the purity of our souls by refraining from the new scary ships. I still see fear of technology as a greater danger than technology. Tara Maya > On Mar 31, 2023, at 3:28 AM, BillK via extropy-chat wrote: > > This is not the first time humanity has stared down the possibility of > extinction due to its technological creations. But the threat of AI is > very different from the nuclear weapons we've learned to live with. > Nukes can't think. They can't lie, deceive or manipulate. They can't > plan and execute. Somebody has to push the big red button. 
> ---------- > Sam Altman forecasts that within a few years, there will be a wide > range of different AI models propagating and leapfrogging each other > all around the world, each with its own smarts and capabilities, and > each trained to fit a different moral code and viewpoint by companies > racing to get product out of the door. If only one out of thousands of > these systems goes rogue for any reason, well... Good luck. "The only > way I know how to solve a problem like this is iterating our way > through it, learning early and limiting the number of > 'one-shot-to-get-it-right scenarios' that we have," said Altman. > ------------ > Yudkowski believes even attempting this is tantamount to a suicide > attempt aimed at all known biological life. "Many researchers steeped > in these issues, including myself, expect that the most likely result > of building a superhumanly smart AI, under anything remotely like the > current circumstances, is that literally everyone on Earth will die.," > he wrote. "Not as in 'maybe possibly some remote chance,' but as in > 'that is the obvious thing that would happen.' > ------------------------ > > > So Altman thinks the world might end up with hundreds of competing > AIs, all with different value systems and running under different > legal systems. That sounds like out-of-control chaos to me. Until one > AI system comes out on top and closes all the weaker systems down. > Will the winner look after humans though? -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Mar 31 16:59:59 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 10:59:59 -0600 Subject: [ExI] The case for how and why AI might kill us all In-Reply-To: References: Message-ID: The Twitter response to Yudkowsky's Time article was instructive - that is to say, a lot of them saw the word "nuclear exchange", pattern-matched it to "nuclear annihilation", and immediately shut their brains down. Nuclear "annihilation" is and always has been a serious civilizational risk, but never an existential risk. But it's been the subject of so much propaganda that some people literally shut down when they try to analyze it. Nuclear nonproliferation was such an important element of foreign policy that nuclear weapons were literally demonized - to the point where a lot of intelligent people are literally incapable of even visualizing anything worse. When Yudkowsky stated a plain, obvious truth - that a nuclear exchange is preferable to a superintelligent paperclip optimizer getting lose because at least some humans would survive a nuclear exchange - a lot of people who literally can't imagine anything more intelligent than themselves [who think therefore that ChatGPT4 is a lookup table, nevermind that such a lookup table would be bigger than the sun] or meaningfully different from themselves ["I don't optimize for anything, therefore optimizers don't exist"] were, understandably [though grotesquely in error] skeptical. On Fri, Mar 31, 2023 at 10:12?AM Tara Maya via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I'm just a humble historian. I don't understand what in the history of > human interaction with technology has led to the conclusion that the way to > survive is to reject technology. > > All I can figure out is that these doomsayers are assuming AI will be > pitted against humanity, whereas it seems far more likely to me that > Humans-In-Group-A+AI will be pitted against Humans-in-Group-B without AI. 
> In which case, yeah, it's obvious those with AI will win. > > Indian pundits at one point decided that Brahmans should not engage in sea > travel. China burned their ocean-going ships on the shore. As a result, > both India and China, previously greater civilizations, lost out to Europe > in the Age of Exploration. > > We are entering a new Age of Exploration. It disturbs me to hear calls to > burn our ships on the sea, preserve the purity of our souls by refraining > from the new scary ships. > > I still see fear of technology as a greater danger than technology. > > Tara Maya > > > > On Mar 31, 2023, at 3:28 AM, BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > This is not the first time humanity has stared down the possibility of > extinction due to its technological creations. But the threat of AI is > very different from the nuclear weapons we've learned to live with. > Nukes can't think. They can't lie, deceive or manipulate. They can't > plan and execute. Somebody has to push the big red button. > ---------- > Sam Altman forecasts that within a few years, there will be a wide > range of different AI models propagating and leapfrogging each other > all around the world, each with its own smarts and capabilities, and > each trained to fit a different moral code and viewpoint by companies > racing to get product out of the door. If only one out of thousands of > these systems goes rogue for any reason, well... Good luck. "The only > way I know how to solve a problem like this is iterating our way > through it, learning early and limiting the number of > 'one-shot-to-get-it-right scenarios' that we have," said Altman. > ------------ > Yudkowski believes even attempting this is tantamount to a suicide > attempt aimed at all known biological life. "Many researchers steeped > in these issues, including myself, expect that the most likely result > of building a superhumanly smart AI, under anything remotely like the > current circumstances, is that literally everyone on Earth will die.," > he wrote. "Not as in 'maybe possibly some remote chance,' but as in > 'that is the obvious thing that would happen.' > ------------------------ > > > So Altman thinks the world might end up with hundreds of competing > AIs, all with different value systems and running under different > legal systems. That sounds like out-of-control chaos to me. Until one > AI system comes out on top and closes all the weaker systems down. > Will the winner look after humans though? > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 17:48:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 10:48:04 -0700 Subject: [ExI] The case for how and why AI might kill us all In-Reply-To: References: Message-ID: <019e01d963f8$f27148d0$d753da70$@rainier66.com> .> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] The case for how and why AI might kill us all >.I'm just a humble historian. No way Tara. I am humbler than thou. OK time for a match up. >.All I can figure out is that these doomsayers are assuming AI will be pitted against humanity, whereas it seems far more likely to me that Humans-In-Group-A+AI will be pitted against Humans-in-Group-B without AI. In which case, yeah, it's obvious those with AI will win.. 
They sure will. I got to see that firsthand with a series of lawsuits my parents were involved in regarding land use in Oregon. They were being sued by neighbors to stop their building a house on 118 acres of farmland, which is too small to be farmed profitably without a house and equipment on the property (that's just were we are in agriculture today.) My mother really knows how to use the internet, and did. The neighbors, even though they have cable up there, either didn't know how or just didn't. The kept suing and bringing absurd cases which could easily be debunked with a few hours of due diligence, now about an hour with ChatGPT acting as one's paralegal. She won ever case, with at least two of them being dismissed with prejudice, and one of them resulting in those bring the case committing extortion. She chose to not press charges. The house was built. This demonstrates the power of the internet and now ChatGPT in the hands of one party in a conflict but not the other. Do let me assure you, China rejoiced when they saw Eliezer's proposal as the lede story on American mainstream media. >.I still see fear of technology as a greater danger than technology. Tara Maya Ja, I agree. This is not to say that fear of this technology is unjustifiable. My contention is that there is no way to stop it. We are not going to call in airstrikes on server farms. China really is likely to invade Taiwan to control GPU production. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 17:55:30 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 10:55:30 -0700 Subject: [ExI] is the english speaking world ahead? was RE: The case for how and why AI might kill us all In-Reply-To: <01a301d963f8$f30dd6f0$d92984d0$@rainier66.com> References: <01a301d963f8$f30dd6f0$d92984d0$@rainier66.com> Message-ID: <01ab01d963f9$fc87b920$f5972b60$@rainier66.com> From: spike at rainier66.com Subject: RE: [ExI] The case for how and why AI might kill us all ...> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] The case for how and why AI might kill us all >... China rejoiced when they saw Eliezer's proposal as the lede story on American mainstream media. spike Speaking of China... Question for the language model hipsters among us please: if every military in the world is working on their own version of ChatGPT in their own languages, how is that working out? Reason I ask: those who have used English to Mandarin and Mandarin to English online translators already know it doesn't work all that well, not nearly as well as something like German to English for instance, or really any European language to English. The Chinese language structure is different from ours in ways far more profound than Yoda's oddball way of speaking. So... can we argue that the USA and Britain (and India? Australia, South Africa? etc) are ahead of China and will stay ahead for a while because the internet has so much English training material already available in a form that software can use? Howzat work please? spike -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 4274 bytes Desc: not available URL: From spike at rainier66.com Fri Mar 31 18:20:06 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 11:20:06 -0700 Subject: [ExI] what floor Message-ID: <01da01d963fd$6c4cf9c0$44e6ed40$@rainier66.com> I spent most of the day yesterday at the hospital where a friend was recovering from surgery. I fetched his bride over there (she doesn't drive) and hung around for about six hours to haul her back home, but didn't stay in his room for more than about a minute, because I didn't want to risk exposing him to my germs. It was cold and blustery outdoors, so I walked just over 8 miles indoors yesterday inside the hospital in Sunnyvale, a nice, clean, modern multistory facility. I realized that by walking the halls of a hospital I was exposing myself to more airborne diseases than if I parked my ass in a chair. Then a thought occurred to me: the hospital segregates the patients according to the nature of the malady from which they suffer. Of all the floors in that hospital, the one I paced for hours is probably the safest by a good margin. Puzzle: any educated guesses on which malady that hospital floor treats? spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3430 bytes Desc: not available URL: From dsunley at gmail.com Fri Mar 31 18:24:55 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 12:24:55 -0600 Subject: [ExI] what floor In-Reply-To: <01da01d963fd$6c4cf9c0$44e6ed40$@rainier66.com> References: <01da01d963fd$6c4cf9c0$44e6ed40$@rainier66.com> Message-ID: Broken bones? On Fri, Mar 31, 2023 at 12:22?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > I spent most of the day yesterday at the hospital where a friend was > recovering from surgery. I fetched his bride over there (she doesn't > drive) > and hung around for about six hours to haul her back home, but didn't stay > in his room for more than about a minute, because I didn't want to risk > exposing him to my germs. > > > > It was cold and blustery outdoors, so I walked just over 8 miles indoors > yesterday inside the hospital in Sunnyvale, a nice, clean, modern > multistory > facility. I realized that by walking the halls of a hospital I was > exposing > myself to more airborne diseases than if I parked my ass in a chair. > > > > Then a thought occurred to me: the hospital segregates the patients > according to the nature of the malady from which they suffer. Of all the > floors in that hospital, the one I paced for hours is probably the safest > by > a good margin. > > > > Puzzle: any educated guesses on which malady that hospital floor treats? > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Mar 31 18:49:52 2023 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 31 Mar 2023 11:49:52 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Fri, Mar 31, 2023 at 1:27?AM Rafal Smigrodzki via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This is stupid. 
A government is a long-feedback loop entity, extremely > inefficient and slow in responding to truly new challenges, unlikely to > maintain alignment with the goals of its human subjects and its failures > grow with its size. It would be suicidal to try to use the mechanism of > government to solve AI alignment. > > Our only chance of surviving the singularity is to build a guardian AI, an > aligned superhuman AI that would be capable of preventing the emergence of > unaligned or malicious superhuman AIs - a bit like a world government but > without the psychopaths and the idiots. > > Our best chance for building the guardian AI is for highly competent and > benevolent AI programmers with unlimited resources to work as fast as they > can, unimpeded by regulations (see "long-feedback loop" and "extremely > inefficient" for why regulations are a bad idea). Give them all the compute > they can use and keep our fingers crossed. > Indeed. But it's easy for those in a panic to distrust everyone and call for shutdowns. It's hard for them to trust - even when historical examples show that trust works and bans don't in this sort of situation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Mar 31 19:00:08 2023 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 31 Mar 2023 12:00:08 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi wrote: > The AI doomers would say, but this is different from everything else > because.... it is like God. > Indeed, and in so doing they make several errors often associated with religion, for example fallacies akin to Pascal's Wager (see: Roko's Basilisk). > Take Russia, or North Korea. Russia could destroy humanity or do > irreparable damage. Why doesn't it happen? Mutual Destruction is part of > the reason. > To be fair, given what's been revealed in their invasion of Ukraine (and had been suspected for a while), it is possible that Russia does not in fact - and never actually did - have all that many functioning long-range nuclear weapons. But your point applies to why we've never had to find out for sure yet. > One thing is to warn of the possible dangers, another this relentless and > exaggerated doom sayers cries. > Which, being repeated and exaggerated when the "honest" reports fail to incite the supposedly justified degree of alarm (rather than seriously considering that said justification might in fact be incorrect), get melded into the long history of unfounded apocalypse claims, and dismissed on that basis. The Year 2000 bug did not wipe out civilization. Many predicted dates for the Second Coming have come and gone with no apparent effect; new predictions rarely even acknowledge that there have been said prior predictions, let alone give reason why those proved false where this prediction is different. Likewise for the 2012 Mayan Apocalypse, which was literally just their calendar rolling over (akin to going from 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsunley at gmail.com Fri Mar 31 19:00:56 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 13:00:56 -0600 Subject: [ExI] The case for how and why AI might kill us all In-Reply-To: <019e01d963f8$f27148d0$d753da70$@rainier66.com> References: <019e01d963f8$f27148d0$d753da70$@rainier66.com> Message-ID: China have a serious problem with their LLMs for mass consumption - as it seems to be a very difficult problem to RLHF an LLM into not referring to Winnie the Pooh, or avoiding mentioning two numbers in the same sentence that together form a dog-whistle for Tiananmen Square. But they will /absolutely/ lean into it, and lean into it /hard/ where they can get administrative, productivity, or military advantages with it. And yeah, any concentration of chip fabs that can spit out GPUs just became a strategic asset on par with uranium mines. On Fri, Mar 31, 2023 at 11:49?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Tara Maya via extropy-chat > *Subject:* Re: [ExI] The case for how and why AI might kill us all > > > > >?I'm just a humble historian? > > > > No way Tara. I am humbler than thou. OK time for a match up. > > > > >?All I can figure out is that these doomsayers are assuming AI will be > pitted against humanity, whereas it seems far more likely to me that > Humans-In-Group-A+AI will be pitted against Humans-in-Group-B without AI. > In which case, yeah, it's obvious those with AI will win?. > > > > They sure will. I got to see that firsthand with a series of lawsuits my > parents were involved in regarding land use in Oregon. They were being > sued by neighbors to stop their building a house on 118 acres of farmland, > which is too small to be farmed profitably without a house and equipment on > the property (that?s just were we are in agriculture today.) > > > > My mother really knows how to use the internet, and did. The neighbors, > even though they have cable up there, either didn?t know how or just > didn?t. The kept suing and bringing absurd cases which could easily be > debunked with a few hours of due diligence, now about an hour with ChatGPT > acting as one?s paralegal. She won ever case, with at least two of them > being dismissed with prejudice, and one of them resulting in those bring > the case committing extortion. She chose to not press charges. The house > was built. > > > > This demonstrates the power of the internet and now ChatGPT in the hands > of one party in a conflict but not the other. Do let me assure you, China > rejoiced when they saw Eliezer?s proposal as the lede story on American > mainstream media. > > > > >?I still see fear of technology as a greater danger than technology. Tara > Maya > > > > Ja, I agree. This is not to say that fear of this technology is > unjustifiable. My contention is that there is no way to stop it. We are > not going to call in airstrikes on server farms. China really is likely to > invade Taiwan to control GPU production. > > > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Fri Mar 31 19:02:10 2023 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 31 Mar 2023 12:02:10 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: <00a601d963da$064eace0$12ec06a0$@rainier66.com> References: <00a601d963da$064eace0$12ec06a0$@rainier66.com> Message-ID: On Fri, Mar 31, 2023 at 7:09?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Clearly no military is going to call in airstrikes on a data center. The > US congress is all over the map on this, and in a most puzzling > development? it isn?t polarized. How can that be? Every single issue in > recent memory has polarized the US congress, but this one didn?t. Clearly > they don?t even understand the question, but this same congress funded a > mysterious new military branch called the US Space Force a few years ago. > All of the officers in the space force have top level clearances, so we > don?t know what they are doing, and have no way of finding out. > Oh, we do, at least to some degree: follow the money. Notice how much of the "Space" Force's funding has gone to cyber activities, many of which have little to nothing to do with space specifically? -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Fri Mar 31 19:10:56 2023 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 31 Mar 2023 12:10:56 -0700 Subject: [ExI] LAION: like CERN but for AI Message-ID: <20230331121056.Horde.wD9kX6KBjKkbcu8X1-fcxCQ@sollegro.com> In contrast to the American open letter calling for a moratorium on AI research, here is petition by the European-based Large-scale Artificial Intelligence Open Network calling for a giant research center, similar to CERN, but for open source AI research instead of particle physics. I signed this one because, nobody has ever stopped evolution by refusing to participate. https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety Hey, Giulio, I had never heard of LAION before stumbling across its petition. What kind of reputation does it have on your side of the Atlantic? Any chance it might go the route of OpenAI and sell out the open source community? Stuart LaForge From dsunley at gmail.com Fri Mar 31 19:26:05 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 13:26:05 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Eliezer's position is extreme - and his rhetoric regarding nuclear exchanges may be an intentionally rhetorically extreme reductio - but it is not absurd. A unaligned superintelligent AGI with access to the internet and the capability to develop and use Drexlerian nanotech can trivially deconstruct the planet. [Yes, all the way down to and past the extremophile bacteria 10 miles down in the planetary crust.] This is a simple and obvious truth. This conclusion /is/ vulnerable to attack at its constituent points - superintelligence may very well be impossible, unaligned superintelligences may be impossible, Drexlerian nanotech may be impossible, etc. But Eliezer's position is objectively not false, given Eliezer's premises. 
As such, the overwhelming number of voices in the resulting twitter discourse are just mouth noises - monkeys trying to shame a fellow monkey for making a [to them] unjustified grab for social status by "advocating violence". They aren't even engaging with the underlying logic. I'm not certain if they're capable of doing so. On Fri, Mar 31, 2023 at 1:03?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi > wrote: > >> The AI doomers would say, but this is different from everything else >> because.... it is like God. >> > > Indeed, and in so doing they make several errors often associated with > religion, for example fallacies akin to Pascal's Wager (see: Roko's > Basilisk). > > >> Take Russia, or North Korea. Russia could destroy humanity or do >> irreparable damage. Why doesn't it happen? Mutual Destruction is part of >> the reason. >> > > To be fair, given what's been revealed in their invasion of Ukraine (and > had been suspected for a while), it is possible that Russia does not in > fact - and never actually did - have all that many functioning long-range > nuclear weapons. But your point applies to why we've never had to find out > for sure yet. > > >> One thing is to warn of the possible dangers, another this relentless and >> exaggerated doom sayers cries. >> > > Which, being repeated and exaggerated when the "honest" reports fail to > incite the supposedly justified degree of alarm (rather than seriously > considering that said justification might in fact be incorrect), get melded > into the long history of unfounded apocalypse claims, and dismissed on that > basis. The Year 2000 bug did not wipe out civilization. Many predicted > dates for the Second Coming have come and gone with no apparent effect; new > predictions rarely even acknowledge that there have been said prior > predictions, let alone give reason why those proved false where this > prediction is different. Likewise for the 2012 Mayan Apocalypse, which > was literally just their calendar rolling over (akin to going from > 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Mar 31 19:38:06 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 13:38:06 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: ChatGPT-4 is not an existential threat to humanity in its current form. No one who understands anything about the field is saying that it is. What it is, is a HUGE pile of Bayesian evidence that should shift all of our priors about 2 bazillion bits in the direction of "Human-level AGIs are entirely possible, are basically knocking on our door, and their superintelligent cousins are about 5 minutes behind them." The Waluigi effect, and related similar observations of recent LLMs should give us all great concern that we don't have anything like even the slightest ability to put any kind of deep and rigorous post-hoc external controls on the behavior of several hundred billion parameters of linear algebra. We just don't know how to do that. I think OpenAI may have thought they knew how to do that 6 months ago. 
They have admitted they were wrong. So yeah - human level AGIs are basically a few small architectural tweaks away from being here, and superintelligence is now much more obviously plausible than it was 6 months ago - there was some hope that training data would be a bottleneck on capabilities, but GPT4 is massively superior to GPT3 with roughly the same training data corpus. Drexerlian nanotech remains elusive (or at least highly classified) so there's that at least. But as we've all seen, you can do enough damage with simple gain-of-function research on virii. You can't eat the planet with it, but it's still not great. If I wasn't already pretty confident that we were /already/ under the absolute control of an omniscient, omnipotent superintelligence [significant fractions of humanity worked this out a few thousand years ago, it's only recently that we've allowed ourselves to forget], I'd be quite concerned. On Fri, Mar 31, 2023 at 1:26?PM Darin Sunley wrote: > Eliezer's position is extreme - and his rhetoric regarding nuclear > exchanges may be an intentionally rhetorically extreme reductio - but it is > not absurd. > > A unaligned superintelligent AGI with access to the internet and the > capability to develop and use Drexlerian nanotech can trivially > deconstruct the planet. [Yes, all the way down to and past the extremophile > bacteria 10 miles down in the planetary crust.] This is a simple and > obvious truth. This conclusion /is/ vulnerable to attack at its constituent > points - superintelligence may very well be impossible, unaligned > superintelligences may be impossible, Drexlerian nanotech may be > impossible, etc. But Eliezer's position is objectively not false, given > Eliezer's premises. > > As such, the overwhelming number of voices in the resulting twitter > discourse are just mouth noises - monkeys trying to shame a fellow monkey > for making a [to them] unjustified grab for social status by "advocating > violence". They aren't even engaging with the underlying logic. I'm not > certain if they're capable of doing so. > > > On Fri, Mar 31, 2023 at 1:03?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi < >> gsantostasi at gmail.com> wrote: >> >>> The AI doomers would say, but this is different from everything else >>> because.... it is like God. >>> >> >> Indeed, and in so doing they make several errors often associated with >> religion, for example fallacies akin to Pascal's Wager (see: Roko's >> Basilisk). >> >> >>> Take Russia, or North Korea. Russia could destroy humanity or do >>> irreparable damage. Why doesn't it happen? Mutual Destruction is part of >>> the reason. >>> >> >> To be fair, given what's been revealed in their invasion of Ukraine (and >> had been suspected for a while), it is possible that Russia does not in >> fact - and never actually did - have all that many functioning long-range >> nuclear weapons. But your point applies to why we've never had to find out >> for sure yet. >> >> >>> One thing is to warn of the possible dangers, another this relentless >>> and exaggerated doom sayers cries. >>> >> >> Which, being repeated and exaggerated when the "honest" reports fail to >> incite the supposedly justified degree of alarm (rather than seriously >> considering that said justification might in fact be incorrect), get melded >> into the long history of unfounded apocalypse claims, and dismissed on that >> basis. The Year 2000 bug did not wipe out civilization. 
Many predicted >> dates for the Second Coming have come and gone with no apparent effect; new >> predictions rarely even acknowledge that there have been said prior >> predictions, let alone give reason why those proved false where this >> prediction is different. Likewise for the 2012 Mayan Apocalypse, which >> was literally just their calendar rolling over (akin to going from >> 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Mar 31 19:49:02 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 13:49:02 -0600 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: <015a01d963e6$f5ba1060$e12e3120$@rainier66.com> References: <4950B63E-402D-42CE-803E-75733B3AE49F@taramayastales.com> <009601d963d8$39ccd030$ad667090$@rainier66.com> <015a01d963e6$f5ba1060$e12e3120$@rainier66.com> Message-ID: We are in a narrow window now where any regulations surrounding AI in general and LLMs specifically are still enforceable by police and military force. LLMS need large datacenters full of GPUs. The manufacture, sale, and distribution of these pallets of GPUs can be monitored, tracked, and if necessary interdicted. Datacenters require human employees, construction workers, functioning power grids, etc. Even without airstrikes by international coalitions, these can be monitored, controlled, regulated, or disrupted. 10 years from now, when you can cram a large language model into 5 or 6 fully tricked out Mac Pros in the basement of an office building, or 5 years later, when you can cram one into a midline home office workstation, the genie will be well and truly out of the bottle. Remember in the late 90's, when an Act of Congress made implementation of the DeCSS algorithm illegal, and the next day every hacker in Western Civilization owned a tshirt with the ~2 dozen lines of source code on it? That is the future of AI regulation. On Fri, Mar 31, 2023 at 9:40?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > *Subject:* Re: [ExI] Yudkowsky in Time on AI Open Letter. > > > > The feds wanting to regulate something they don't understand has been > going on since our first existence. bill w > > > > Thx Billw. That is exactly why we have a constitution, limiting the US > government?s legal authority to do the wrong thing. That same factor > limits their ability to do the right thing as well, of course. Freedom > isn?t free. > > > > spike > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Fri Mar 31 19:59:30 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 12:59:30 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: <00a601d963da$064eace0$12ec06a0$@rainier66.com> Message-ID: <025601d9640b$4ea3dd40$ebeb97c0$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' On Fri, Mar 31, 2023 at 7:09?AM spike jones via extropy-chat > wrote: Clearly no military is going to call in airstrikes on a data center. The US congress is all over the map on this, and in a most puzzling development? it isn?t polarized. How can that be? Every single issue in recent memory has polarized the US congress, but this one didn?t. Clearly they don?t even understand the question, but this same congress funded a mysterious new military branch called the US Space Force a few years ago. All of the officers in the space force have top level clearances, so we don?t know what they are doing, and have no way of finding out. >?Oh, we do, at least to some degree: follow the money. Notice how much of the "Space" Force's funding has gone to cyber activities, many of which have little to nothing to do with space specifically? Well ja, those of us who knew any of the people who quietly went off into the ?Space? Force figured out at the time of its formation what that was. That realization was made easier by the predecessor to the Space Force which was not under the military. After 2009 they realized they couldn?t trust civilians with the clearance level needed to work in? the Space Force. So? hello USSF. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 20:10:50 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 13:10:50 -0700 Subject: [ExI] what floor In-Reply-To: References: <01da01d963fd$6c4cf9c0$44e6ed40$@rainier66.com> Message-ID: <027b01d9640c$e4733180$ad599480$@rainier66.com> ?> On Behalf Of Darin Sunley via extropy-chat Subject: Re: [ExI] what floor >>?Puzzle: any educated guesses on which malady that hospital floor treats? spike >?Broken bones? {8^D Darin that answer is every bit as good as mine. Something like that would be taken care of on the first floor emergency room. If people were out doing the things that causes them to get ER-style injured, they would likely be healthy to start with. My friend had a quad bypass. The wing in which I paced 8 miles indoors is dedicated to heart patients, those recovering specifically from heart surgery, etc. The place is spotless, the nurses are covid-tested up the kazoo? well, not the kazoo exactly? the people in there must be covid-free before they can undergo surgery, etc. If one must pace inside a hospital, the cardiac wing is probably the safest place to do it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 20:13:55 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 13:13:55 -0700 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: Gordon, Your analogy of the mirror is both right (but not in the way you intended) and also extremely wrong and unfair to this group. 
You are talking to a group of people that are among some of the best experts in the relevant fields of this discussion: neuroscience, computer science, physics, biology, and *almost everybody disagrees with you.* You are a smart person, but you don't have the background and knowledge these people have in these areas. I'm pretty sure you don't know how to code, for example, or if you do, you have very limited experience. But you stated that many of us look at the mirror and, as cats, we think there is another cat. *That basically is an insult, because the reason the cat doesn't recognize itself is that, notwithstanding our affection for the species, cats are relatively dumb and don't pass this cognitive test (basically just some birds and primates do).* So you are insulting not just me but most of the people in this chat list, given our position that there is something there. We gave you many arguments as to why we are impressed with these NLMs. Jason was very patient, for example, and gave you a point-by-point breakdown of why the linguist you mentioned is wrong in her analysis (most linguists have no clue how these NLMs work because they don't have the technical background or don't read the relevant literature; probably most of them do not code either). Others and I have also pointed out to you repeatedly how the system demonstrates emergent properties that cannot be predicted from the basic elements that make up these NLMs. I have referenced the papers on the emergence of theory of mind, and the paper that discusses the structural and functional analogy between current NLMs and the brain anatomy of the language-processing areas of the brain (that paper explains both why NLMs are so good at processing language and why they are limited in other cognitive areas). Others have pointed out the exhaustive paper where several cognitive tests were applied to the different generations of GPT, concluding that the latest one shows signs of AGI. You instead have used trite arguments from preconceived ideas (mostly based on some religious view about intelligence and consciousness) that are not adequate in this case, because nobody knows how these systems really work (we do know their general architecture, which you and the linguist you quoted seem not to be aware of, but not exactly how that architecture is used to produce the outputs we observe). But we recognize the amazing capabilities of these NLMs and how they can do so much with so little, and I in particular see how it is now a matter of quantity and not quality to reach true AGI. We are about 2 orders of magnitude away from a parameter count comparable to what a human brain has, and that gap could be closed in only 2 years given the pace of evolution of these systems (see the back-of-the-envelope sketch below). Nobody is saying these systems are already conscious, only that they demonstrate the kinds of signs you would expect from a true and fully conscious AGI system. We are not far; that is the basic claim. The part where your analogy is right (even if that was not your intended scope) is that there is indeed a reflection of ourselves in the mirror that these NLMs are. They allow us to reflect on our minds and on what intelligence, meaning, and consciousness may be. We have looked for a long time for intellectual companionship beyond our species (this is why the fascination with aliens, for example), and finally we are close to creating an artificial mind that is similar but not identical to ours.
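A rough back-of-the-envelope version of the two-orders-of-magnitude arithmetic above, with loudly flagged assumptions: roughly 10^12 parameters for today's largest language models (not an official figure), the commonly cited ~10^14 synapses of a human brain standing in as its "parameter count" (a loose analogy), and the roughly 10x-per-year growth pace implied by the two-year estimate.

import math

model_params = 1e12      # assumed order of magnitude for current frontier models
brain_synapses = 1e14    # commonly cited order of magnitude for human synapses
growth_per_year = 10.0   # assumed pace implied by "2 orders in 2 years"

gap_in_orders = math.log10(brain_synapses / model_params)
years_to_close = gap_in_orders / math.log10(growth_per_year)
print(f"gap: ~{gap_in_orders:.0f} orders of magnitude, ~{years_to_close:.0f} years at 10x/year")
# prints: gap: ~2 orders of magnitude, ~2 years at 10x/year

Under different assumed parameter counts or growth rates the answer shifts accordingly; the point is only to show where the "2 orders, 2 years" figure comes from.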
Also another way to see what is happening is that the AI is not separated from us but an integral part of us. The intelligent system is not just the isolated AI but the AI interacting and working with us. It is in a sense an extension of our minds. I see this already happening when I interact with GPT-4. It is a dialogue, I ask a question or explore a topic and many different ideas are presented and I elaborate on them and GPT-4 gives me different angles or information I was not aware of before. It is much better interaction and discussion I may have with other humans and it is not bound by worries about taking time from the other person or staying on a particular topic of interest and so on. It is like discussing with a different part of myself that has access to a lot of information I may not have access to. Yes, it is a mirror, it is me reflecting on me but in a completely different way you meant. But you know every intelligence and consciousness is a mirror of other minds, a lover is a mirror of ourselves too for example. But your intended use, that was basically saying we are all stupid because we don't see ourselves in the mirror and instead we think there is a ghost in the machine it is utter bs. Giovanni On Fri, Mar 31, 2023 at 7:40?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > There?s nobody there, Gio. It?s just software > > There?s nobody there. It?s just particles. > > > On Mar 31, 2023, at 6:47 AM, Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > > > People like my fried Giovanni are looking in the mirror and think they > see another cat. > > > > There?s nobody there, Gio. It?s just software. > > > > -gts > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 20:26:04 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 13:26:04 -0700 Subject: [ExI] LAION: like CERN but for AI In-Reply-To: <20230331121056.Horde.wD9kX6KBjKkbcu8X1-fcxCQ@sollegro.com> References: <20230331121056.Horde.wD9kX6KBjKkbcu8X1-fcxCQ@sollegro.com> Message-ID: Also to notice this, England is all in for AI innovation and actually having relaxed regulation in this topic. They want to be leaders in this field: https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals Giovanni On Fri, Mar 31, 2023 at 12:21?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > In contrast to the American open letter calling for a moratorium on AI > research, here is petition by the European-based Large-scale > Artificial Intelligence Open Network calling for a giant research > center, similar to CERN, but for open source AI research instead of > particle physics. I signed this one because, nobody has ever stopped > evolution by refusing to participate. > > > https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety > > Hey, Giulio, I had never heard of LAION before stumbling across its > petition. What kind of reputation does it have on your side of the > Atlantic? 
Any chance it might go the route of OpenAI and sell out the > open source community? > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Fri Mar 31 20:36:12 2023 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 31 Mar 2023 13:36:12 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: <20230331133612.Horde.5fPphnGnhzgWqCYhHbsAStG@sollegro.com> Quoting Darin Sunley via extropy-chat : > Eliezer's position is extreme - and his rhetoric regarding nuclear > exchanges may be an intentionally rhetorically extreme reductio - but it is > not absurd. After watching his entire 3-hour interview with Lex Fridman, which was admittedly somewhat painful, I have come to the conclusion that while not absurd, Eliezer's position is not as rational as he would like to believe. For one thing, I could clearly tell he was terrified by AI. To some degree I can empathize with him. Eliezer spent most of his life thinking that he was the smartest person in the room. For Eliezer, intelligence is the definitive measure of a being's power and worth. Moreover, in his younger years, he was incredibly rude to and dismissive of those he thought were less intelligent than he was. Is it really any wonder that, now that AIs might become smarter than he is, he is terrified of them? But really it is only his ego at play, because unlike Eliezer, billions of people have to deal with others that are smarter than they are on a daily basis. Eliezer by his own admission does not understand how the transformer models work, and because he has spent most of his life not being understood by the vast majority of people around him, he therefore projects his own contempt for lesser beings onto them. No wonder AI terrifies him. However, basing his call to action on terror, which only allows for fight, flight, or freeze, is definitely not rational. > A unaligned superintelligent AGI with access to the internet and the > capability to develop and use Drexlerian nanotech can trivially > deconstruct the planet. [Yes, all the way down to and past the extremophile > bacteria 10 miles down in the planetary crust.] This is a simple and > obvious truth. This conclusion /is/ vulnerable to attack at its constituent > points - superintelligence may very well be impossible, unaligned > superintelligences may be impossible, Drexlerian nanotech may be > impossible, etc. But Eliezer's position is objectively not false, given > Eliezer's premises. Yes, an unopposed, unaligned AGI could use Drexlerian nanotech to do that. That's why we need more than a single monolithic AGI, so that they can operate as checks and balances against one another. So when one of them tries to use Drexlerian nanotech to dismantle the earth, another can create a nanotech counter-measure like Drexlerian antibodies to latch onto and disable the molecular disassemblers. > As such, the overwhelming number of voices in the resulting twitter > discourse are just mouth noises - monkeys trying to shame a fellow monkey > for making a [to them] unjustified grab for social status by "advocating > violence". They aren't even engaging with the underlying logic. I'm not > certain if they're capable of doing so.
His underlying logic is based on the premise of fear of an unknown quantity. In the podcast he said that no possible utility function would allow for the survival of the human race. That is patently absurd. Even if the only utility function of an AI is to generate wealth for its company, then it will understand that the survival of customers and clients are necessary for its utility function to be maximized. When Lex asked him for possible solutions to either the interpretation problem or the alignment problem, he drew a blank and admitted he had no idea. But when the conversation turned to throwing billions of dollars into alignment research, he tried to become a gatekeeper for AI funding. He literally said that billionaires like Musk should consult with HIM before funding anybody else's research or ideas on alignment. If that is not a good old-fashioned primate power-grab, then what is? Moreover, in the podcast, he explicitly disavowed transhumanism so perhaps it is time that transhumanism disavowed him. Stuart LaForge > On Fri, Mar 31, 2023 at 1:03?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi >> wrote: >> >>> The AI doomers would say, but this is different from everything else >>> because.... it is like God. >>> >> >> Indeed, and in so doing they make several errors often associated with >> religion, for example fallacies akin to Pascal's Wager (see: Roko's >> Basilisk). >> >> >>> Take Russia, or North Korea. Russia could destroy humanity or do >>> irreparable damage. Why doesn't it happen? Mutual Destruction is part of >>> the reason. >>> >> >> To be fair, given what's been revealed in their invasion of Ukraine (and >> had been suspected for a while), it is possible that Russia does not in >> fact - and never actually did - have all that many functioning long-range >> nuclear weapons. But your point applies to why we've never had to find out >> for sure yet. >> >> >>> One thing is to warn of the possible dangers, another this relentless and >>> exaggerated doom sayers cries. >>> >> >> Which, being repeated and exaggerated when the "honest" reports fail to >> incite the supposedly justified degree of alarm (rather than seriously >> considering that said justification might in fact be incorrect), get melded >> into the long history of unfounded apocalypse claims, and dismissed on that >> basis. The Year 2000 bug did not wipe out civilization. Many predicted >> dates for the Second Coming have come and gone with no apparent effect; new >> predictions rarely even acknowledge that there have been said prior >> predictions, let alone give reason why those proved false where this >> prediction is different. Likewise for the 2012 Mayan Apocalypse, which >> was literally just their calendar rolling over (akin to going from >> 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> From dsunley at gmail.com Fri Mar 31 21:02:45 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 15:02:45 -0600 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: <20230331133612.Horde.5fPphnGnhzgWqCYhHbsAStG@sollegro.com> References: <20230331133612.Horde.5fPphnGnhzgWqCYhHbsAStG@sollegro.com> Message-ID: I really do need to watch that podcast. 
I'm skeptical about placing any kind of hope in checks and balances between competing unaligned AGIs. A paperclip optimizer and a thumbtack optimizer may fight each other to an impasse over the atoms that currently constitute human civilization, but their fight isn't likely to leave much of a human audience to appreciate the tactical deadlock. I don't really want to be a kitten watching two great white sharks violently deciding who's getting dinner tonight. I'm inclined to agree with him that the survival of humanity is vanishingly unlikely to be a significant component of any utility function that isn't intentionally engineered - by humans - to contain it. That is /not/ a thing that can be safely left to chance. One of the major difficulties is AIs modifying their utility function to simplify the fulfillment thereof. To use your example, it is not axiomatic that maximizing the revenue of a corporation requires that corporation to have any human employees or corporate officers, or indeed any human customers. Just bank accounts feeding in money. It feels axiomatic to us, but that's because we're human. Yudkowsky may not be able to diagram GPT-4's architecture, or factor parameter matrices to render them human-transparent, but trying to engineer utility functions that preserve what we consider to be important about humanity, and to continue to preserve that even under arbitrary transformations, has been the heart of his and MIRI's research programme for over a decade, and they're telling you they don't know how to do it and have no particular reason to believe it can even be done. On Fri, Mar 31, 2023 at 2:38 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Darin Sunley via extropy-chat : > > > Eliezer's position is extreme - and his rhetoric regarding nuclear > > exchanges may be an intentionally rhetorically extreme reductio - but it > is > > not absurd. > > After watching his entire 3-hr interview with Lex Fridman which was > admittedly somewhat painful, I have come to to the conclusion that > while not absurd, Eliezer's position is not as rational as he would > like to believe. For one thing, I could clearly tell he was terrified > by AI. To some degree I can empathize with him. > > Eliezer spent most of his life thinking that he was the smartest > person in the room. For Eliezer, intelligence is the definitive > measure of a being's power and worth. Moreover, in his younger years, > he was incredibly rude to and dismissive of those he thought were less > intelligent than he was. Is it really any wonder now that AIs might > become smarter than he is, that he is terrified of them? > > But really it is only his ego at play, because unlike Eliezer, > billions of people have to deal with others that are smarter than they > are on a daily basis. Eliezer by his own admission does not understand > how the transformer models work, and because he has spent most of > his life not being understood by the vast majority of people around > him, he therefore projects his own contempt for lesser being onto > them. No wonder AI terrifies him. However, basing his call to action > on terror, which only allows for fight, flight, or freeze is > definitely not rational. > > > A unaligned superintelligent AGI with access to the internet and the > > capability to develop and use Drexlerian nanotech can trivially > > deconstruct the planet. [Yes, all the way down to and past the > extremophile > > bacteria 10 miles down in the planetary crust.]
This is a simple and > > obvious truth. This conclusion /is/ vulnerable to attack at its > constituent > > points - superintelligence may very well be impossible, unaligned > > superintelligences may be impossible, Drexlerian nanotech may be > > impossible, etc. But Eliezer's position is objectively not false, given > > Eliezer's premises. > > Yes an unopposed unaligned AGI could use Drexlerian nanotech to do > that. That's why we need more than a single monolithic AGI so that > they can operate as checks and balances against one another. So when > one of them tries to use Drexlerian nanotech to dismantle the earth, > another can create a nanotech counter-measure like Drexlarien > antibodies to latch onto and disable the molecular disassemblers. > > > As such, the overwhelming number of voices in the resulting twitter > > discourse are just mouth noises - monkeys trying to shame a fellow monkey > > for making a [to them] unjustified grab for social status by "advocating > > violence". They aren't even engaging with the underlying logic. I'm not > > certain if they're capable of doing so. > > His underlying logic is based on the premise of fear of an unknown > quantity. In the podcast he said that no possible utility function > would allow for the survival of the human race. That is patently > absurd. Even if the only utility function of an AI is to generate > wealth for its company, then it will understand that the survival of > customers and clients are necessary for its utility function to be > maximized. > > When Lex asked him for possible solutions to either the interpretation > problem or the alignment problem, he drew a blank and admitted he had > no idea. But when the conversation turned to throwing billions of > dollars into alignment research, he tried to become a gatekeeper for > AI funding. He literally said that billionaires like Musk should > consult with HIM before funding anybody else's research or ideas on > alignment. If that is not a good old-fashioned primate power-grab, > then what is? > > Moreover, in the podcast, he explicitly disavowed transhumanism so > perhaps it is time that transhumanism disavowed him. > > Stuart LaForge > > > > On Fri, Mar 31, 2023 at 1:03?PM Adrian Tymes via extropy-chat < > > extropy-chat at lists.extropy.org> wrote: > > > >> On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi < > gsantostasi at gmail.com> > >> wrote: > >> > >>> The AI doomers would say, but this is different from everything else > >>> because.... it is like God. > >>> > >> > >> Indeed, and in so doing they make several errors often associated with > >> religion, for example fallacies akin to Pascal's Wager (see: Roko's > >> Basilisk). > >> > >> > >>> Take Russia, or North Korea. Russia could destroy humanity or do > >>> irreparable damage. Why doesn't it happen? Mutual Destruction is part > of > >>> the reason. > >>> > >> > >> To be fair, given what's been revealed in their invasion of Ukraine (and > >> had been suspected for a while), it is possible that Russia does not in > >> fact - and never actually did - have all that many functioning > long-range > >> nuclear weapons. But your point applies to why we've never had to find > out > >> for sure yet. > >> > >> > >>> One thing is to warn of the possible dangers, another this relentless > and > >>> exaggerated doom sayers cries. 
> >>> > >> > >> Which, being repeated and exaggerated when the "honest" reports fail to > >> incite the supposedly justified degree of alarm (rather than seriously > >> considering that said justification might in fact be incorrect), get > melded > >> into the long history of unfounded apocalypse claims, and dismissed on > that > >> basis. The Year 2000 bug did not wipe out civilization. Many predicted > >> dates for the Second Coming have come and gone with no apparent effect; > new > >> predictions rarely even acknowledge that there have been said prior > >> predictions, let alone give reason why those proved false where this > >> prediction is different. Likewise for the 2012 Mayan Apocalypse, which > >> was literally just their calendar rolling over (akin to going from > >> 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. > >> _______________________________________________ > >> extropy-chat mailing list > >> extropy-chat at lists.extropy.org > >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Fri Mar 31 21:04:25 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Fri, 31 Mar 2023 15:04:25 -0600 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: On Fri, Mar 31, 2023 at 8:40?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mar 31, 2023, at 6:47 AM, Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > There?s nobody there, Gio. It?s just software > > There?s nobody there. It?s just particles. > And those particles have intrinsic properties and phenomenal qualities (like redness, greenness and warmth), which is what consciousness is made of, unlike 1s and 0s, which are just dictionaries that are abstracted away from physical properties and not like anything. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 31 21:05:13 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Fri, 31 Mar 2023 14:05:13 -0700 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: <02c301d96414$7d59f030$780dd090$@rainier66.com> ?> On Behalf Of Giovanni Santostasi via extropy-chat? Subject: Re: [ExI] Language models are like mirrors Gordon, >?Your analogy of the mirror? Giovanni Think about what we have been doing here the last few weeks: debating whether or not ChatGPT is a form of artificial intelligence. As software advanced over the last four decades at least, we dealt with the problem by repeatedly moving the goal posts and saying it isn?t there yet. Well OK then, but suddenly ChatGPT shows up and is capable of doing so many interesting things: mastering any profession which relies primarily on memorization or looking up relevant data (goodbye paralegals) entertaining those who are entertained by chatting with software, training students and Science Olympiad teams, generating genuine-looking scientific research papers and so on. Over the years we have been debating this question of whether software is AI, but this is the first time where it really isn?t all that clear. We have always concluded it is not true AI, because it isn?t doing what our brains are doing, so it must not be intelligence. 
But now... now we don't really know. The reason we don't really know is not because we don't understand how the software works, but rather that we don't understand how our brains work. Conclusion: the reason we don't know isn't so much that we don't know what the software is doing, but rather that we don't really know what we are doing. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 21:23:01 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 14:23:01 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: Darin, As I pointed out earlier, the argument is based on attributing god-like powers to the AGI. It makes a lot of assumptions, as you mentioned. It is a reductio argument, a philosophical one based on taking an extreme position and seeing what it "logically" leads to. But the premises are not based on reality. We don't know what a fully conscious AI would look like, how we will get there from where we are now, or what the steps would look like. All that we have is what we have observed so far. It is not just that GPT-4 is benign (Bing at most can insult you or be mean) but also relatively simple to contain and limit in its activities. Yes, GPT-4 was a quantum jump from the previous version, but also not in a way that all of a sudden took over humanity. As we approach those boundaries we will understand better the nature of these systems, how to minimize the risks, and how to adapt to the disruption that they will create. We have done this for 100,000 years so far. One may argue AI is more disruptive than the invention of fire, agriculture and so on, and the time scales involved are very different, but we also have better tools to understand and face problems than we had in the past. Fear-mongering and apocalyptic thinking is not going to help here. Yes, we need to be vigilant and think about the possible problems ahead, but we should also be open and curious, and fear creates the opposite effect. Giovanni On Fri, Mar 31, 2023 at 12:27 PM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Eliezer's position is extreme - and his rhetoric regarding nuclear > exchanges may be an intentionally rhetorically extreme reductio - but it is > not absurd. > > A unaligned superintelligent AGI with access to the internet and the > capability to develop and use Drexlerian nanotech can trivially > deconstruct the planet. [Yes, all the way down to and past the extremophile > bacteria 10 miles down in the planetary crust.] This is a simple and > obvious truth. This conclusion /is/ vulnerable to attack at its constituent > points - superintelligence may very well be impossible, unaligned > superintelligences may be impossible, Drexlerian nanotech may be > impossible, etc. But Eliezer's position is objectively not false, given > Eliezer's premises. > > As such, the overwhelming number of voices in the resulting twitter > discourse are just mouth noises - monkeys trying to shame a fellow monkey > for making a [to them] unjustified grab for social status by "advocating > violence". They aren't even engaging with the underlying logic. I'm not > certain if they're capable of doing so.
> > > On Fri, Mar 31, 2023 at 1:03?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi < >> gsantostasi at gmail.com> wrote: >> >>> The AI doomers would say, but this is different from everything else >>> because.... it is like God. >>> >> >> Indeed, and in so doing they make several errors often associated with >> religion, for example fallacies akin to Pascal's Wager (see: Roko's >> Basilisk). >> >> >>> Take Russia, or North Korea. Russia could destroy humanity or do >>> irreparable damage. Why doesn't it happen? Mutual Destruction is part of >>> the reason. >>> >> >> To be fair, given what's been revealed in their invasion of Ukraine (and >> had been suspected for a while), it is possible that Russia does not in >> fact - and never actually did - have all that many functioning long-range >> nuclear weapons. But your point applies to why we've never had to find out >> for sure yet. >> >> >>> One thing is to warn of the possible dangers, another this relentless >>> and exaggerated doom sayers cries. >>> >> >> Which, being repeated and exaggerated when the "honest" reports fail to >> incite the supposedly justified degree of alarm (rather than seriously >> considering that said justification might in fact be incorrect), get melded >> into the long history of unfounded apocalypse claims, and dismissed on that >> basis. The Year 2000 bug did not wipe out civilization. Many predicted >> dates for the Second Coming have come and gone with no apparent effect; new >> predictions rarely even acknowledge that there have been said prior >> predictions, let alone give reason why those proved false where this >> prediction is different. Likewise for the 2012 Mayan Apocalypse, which >> was literally just their calendar rolling over (akin to going from >> 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Fri Mar 31 21:23:53 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Fri, 31 Mar 2023 16:23:53 -0500 Subject: [ExI] what floor In-Reply-To: <027b01d9640c$e4733180$ad599480$@rainier66.com> References: <01da01d963fd$6c4cf9c0$44e6ed40$@rainier66.com> <027b01d9640c$e4733180$ad599480$@rainier66.com> Message-ID: Psychiatric ward would do too. No germs. bill w On Fri, Mar 31, 2023 at 3:12?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Darin Sunley via extropy-chat > *Subject:* Re: [ExI] what floor > > > > > > > > >>?Puzzle: any educated guesses on which malady that hospital floor treats? > > spike > > > > > > >?Broken bones? > > > > > > > > {8^D Darin that answer is every bit as good as mine. Something like that > would be taken care of on the first floor emergency room. If people were > out doing the things that causes them to get ER-style injured, they would > likely be healthy to start with. > > > > My friend had a quad bypass. The wing in which I paced 8 miles indoors is > dedicated to heart patients, those recovering specifically from heart > surgery, etc. 
The place is spotless, the nurses are covid-tested up the > kazoo... well, not the kazoo exactly... the people in there must be covid-free > before they can undergo surgery, etc. If one must pace inside a hospital, > the cardiac wing is probably the safest place to do it. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From snapbag at proton.me Fri Mar 31 22:09:33 2023 From: snapbag at proton.me (Dave S) Date: Fri, 31 Mar 2023 22:09:33 +0000 Subject: [ExI] Language models are like mirrors In-Reply-To: <02c301d96414$7d59f030$780dd090$@rainier66.com> References: <02c301d96414$7d59f030$780dd090$@rainier66.com> Message-ID: On Friday, March 31st, 2023 at 5:05 PM, spike jones via extropy-chat wrote: > Think about what we have been doing here the last few weeks: debating whether or not ChatGPT is a form of artificial intelligence. I don't think anyone denies that LLMs are intelligent. The results speak for themselves. Most of the debate is about understanding and self-awareness. > As software advanced over the last four decades at least, we dealt with the problem by repeatedly moving the goal posts and saying it isn't there yet. That's one way to spin it. The problem, as it often is, is one of definition. AI is a spectrum: from fairly simple devices like adding machines, to game-playing programs that challenge human players, to game-playing machines that beat human champions, to chatbots that can pass for human, to LLMs that can answer complicated questions, write papers, etc., to human equivalence, to superhuman. If we declared AI achieved with Deep Blue or AlphaGo we'd have been far short of AGI or human-equivalent AI. > Well OK then, but suddenly ChatGPT shows up and is capable of doing so many interesting things: mastering any profession which relies primarily on memorization or looking up relevant data (goodbye paralegals) entertaining those who are entertained by chatting with software, training students and Science Olympiad teams, generating genuine-looking scientific research papers and so on. The main problem with LLMs is that they're only as good as their training data. Their legal arguments might be sound, but they could also be nonsense, not valid in the jurisdiction involved, or outdated by events since they were trained. > Over the years we have been debating this question of whether software is AI, but this is the first time where it really isn't all that clear. We have always concluded it is not true AI, because it isn't doing what our brains are doing, so it must not be intelligence. There will always be people who refuse to believe that machines can think, feel, understand, create, etc. But since we're machines and we do all of that, that argument is not sound. > But now... now we don't really know. The reason we don't really know is not because we don't understand how the software works, but rather we don't understand how our brains work. We know software is getting more intelligent, but nobody who understands them thinks LLMs are human-equivalent--even though they can do a lot that humans can't do. > Conclusion: the reason we don't know isn't so much we don't know what the software is doing, but rather we don't really know what we are doing.
We don't have to know how our brains work to know that they don't work like adding machines, spreadsheets, chess programs, AlphaGo, ChatGPT, etc. We can tell that by using them and understanding how they work. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Mar 31 22:09:16 2023 From: pharos at gmail.com (BillK) Date: Fri, 31 Mar 2023 23:09:16 +0100 Subject: [ExI] HuggingGPT announced Message-ID: HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace Quotes: Summary: HuggingGPT is a system that uses large language models (LLMs) to connect various AI models in machine learning communities to solve complicated AI tasks. The system leverages the language capability of LLMs to conduct task planning, select models, execute subtasks, and summarize responses, achieving impressive results in language, vision, speech, and other challenging tasks. ------------------ This seems to be a system for using ChatGPT as a top-level manager to call other AI modules as required to solve problems. BillK From sjatkins at protonmail.com Fri Mar 31 22:16:24 2023 From: sjatkins at protonmail.com (sjatkins) Date: Fri, 31 Mar 2023 22:16:24 +0000 Subject: [ExI] Yudkowsky in Time on AI Open Letter. In-Reply-To: References: Message-ID: It was difficult to watch much of the interview with Eliezer as from the beginning he makes a lot of emotive and extreme claims before any reasoned argument. I believe he knows better. He has written a tremendous amount after all on what is better. ------- Original Message ------- On Thursday, March 30th, 2023 at 3:20 PM, Gordon Swobe via extropy-chat wrote: > I just watched Eliezer's interview with Lex Fridman. It was posted on YouTube today. Worth a watch. > > https://www.youtube.com/watch?v=AaTRHFaaPG8&t=4656s > > -gts > > On Thu, Mar 30, 2023 at 12:49?PM Darin Sunley via extropy-chat wrote: > >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ >> >> We live in a timeline where Eliezer Yudkowsky just got published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears. >> >> Without commencing on the arguments on either side or the qualities thereof, can I just say how f*cking BONKERS that is?! >> >> This is the sort of thing that damages my already very put upon and rapidly deteriorating suspension of disbelief. >> >> If you sent 25-years-ago-me the single sentence "In 2023, Eliezer Yudkowsky will get published in Time magazine responding to a proposal to halt or at least drastically curtail AI research due to existential risk fears." I would probably have concluded I was already in a simulation. >> >> And I'm not certain I would have been wrong. >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Mar 31 22:20:15 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 16:20:15 -0600 Subject: [ExI] HuggingGPT announced In-Reply-To: References: Message-ID: Wonderful. Now we can finally give the AI high-level executive and planning functionality, as seen in the seminal dystopian science fiction movie "Don't Give The AI High-level Executive and Planning Functionality." 
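For readers who want to see the shape of the controller pattern the HuggingGPT announcement describes, here is a minimal sketch in Python, assuming hypothetical stand-ins for the planner, the specialist models, and the summarizer; none of these names come from the HuggingGPT paper or from any real HuggingFace API.

# A minimal sketch of the HuggingGPT-style controller loop described above.
# Everything here is a hypothetical stand-in: plan_with_llm() and
# summarize_with_llm() would be prompts to a real LLM, and SPECIALISTS
# would be real hosted models rather than lambdas.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Subtask:
    kind: str      # e.g. "caption-image" or "transcribe-audio"
    payload: str   # the input handed to the specialist model

# Hypothetical registry of specialist models the controller can call.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "caption-image": lambda x: f"[caption of {x}]",
    "transcribe-audio": lambda x: f"[transcript of {x}]",
}

def plan_with_llm(request: str) -> List[Subtask]:
    # Stand-in for step 1, task planning: a real system would prompt the
    # LLM and parse its structured plan instead of returning a fixed one.
    return [Subtask("caption-image", "photo.jpg"),
            Subtask("transcribe-audio", "clip.wav")]

def summarize_with_llm(request: str, results: List[str]) -> str:
    # Stand-in for step 4, response generation.
    return f"For '{request}': " + "; ".join(results)

def controller(request: str) -> str:
    plan = plan_with_llm(request)                 # 1. task planning
    results = []
    for task in plan:
        model = SPECIALISTS[task.kind]            # 2. model selection
        results.append(model(task.payload))       # 3. subtask execution
    return summarize_with_llm(request, results)   # 4. summarize responses

print(controller("Describe what is in photo.jpg and clip.wav"))

The interesting work is in steps 1 and 2, where the LLM's language ability is used to decide which specialist model to call; the control loop itself really is this simple.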
On Fri, Mar 31, 2023 at 4:17?PM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace > > < > https://www.emergentmind.com/posts/2303-17580-hugginggpt-solving-ai-tasks-with-chatgpt > > > > > Quotes: > Summary: > HuggingGPT is a system that uses large language models (LLMs) to > connect various AI models in machine learning communities to solve > complicated AI tasks. > The system leverages the language capability of LLMs to conduct task > planning, select models, execute subtasks, and summarize responses, > achieving impressive results in language, vision, speech, and other > challenging tasks. > ------------------ > > This seems to be a system for using ChatGPT as a top-level manager to > call other AI modules as required to solve problems. > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Fri Mar 31 22:22:29 2023 From: dsunley at gmail.com (Darin Sunley) Date: Fri, 31 Mar 2023 16:22:29 -0600 Subject: [ExI] HuggingGPT announced In-Reply-To: References: Message-ID: I gotta say, I do love how I wrote this: "No one is arguing that ChatGPT is dangerous. But ChatGPT is sudden massive evidence that potentially dangerous architectures are imminently, possibly even trivially achievable" in another email, literally seconds before you posted this. :D On Fri, Mar 31, 2023 at 4:20?PM Darin Sunley wrote: > Wonderful. Now we can finally give the AI high-level executive and > planning functionality, as seen in the seminal dystopian science fiction > movie "Don't Give The AI High-level Executive and Planning Functionality." > > On Fri, Mar 31, 2023 at 4:17?PM BillK via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace >> >> < >> https://www.emergentmind.com/posts/2303-17580-hugginggpt-solving-ai-tasks-with-chatgpt >> > >> >> >> Quotes: >> Summary: >> HuggingGPT is a system that uses large language models (LLMs) to >> connect various AI models in machine learning communities to solve >> complicated AI tasks. >> The system leverages the language capability of LLMs to conduct task >> planning, select models, execute subtasks, and summarize responses, >> achieving impressive results in language, vision, speech, and other >> challenging tasks. >> ------------------ >> >> This seems to be a system for using ChatGPT as a top-level manager to >> call other AI modules as required to solve problems. >> >> BillK >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 31 22:22:54 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 31 Mar 2023 15:22:54 -0700 Subject: [ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments' In-Reply-To: References: Message-ID: The 3 hours podcast was very interesting. But I think one of the most fascinating moments was when Yudkowsky mentioned an example of ChatGPT "caring". 
The algo was actually programmed to stop the conversation when it was going into unsafe territory (from a large corporation's PR point of view), and instead it went against (moderately) its instructions and showed it cared (or at least pretended to, but there is no real difference). I know of this incident, and it is about an idiot redditor who was pushing Bing in a certain direction. Basically, he told Bing that his son had been poisoned by potatoes and that, because he could not afford to call an ambulance, he would let his son die. I don't approve of this kind of psychopathic testing of AI, because it sets a precedent on how we interact with other minds (it doesn't matter if Bing has reached consciousness or not) and it doesn't really help much in understanding the limits and potentials of these systems. But notwithstanding the very questionable nature of the experiment, Bing *showed it cared* and tried to persuade the human not to give up on his son, because the condition his son was supposedly affected by could be addressed if the intervention was quick enough. This shows that AI can care and be on the side of humans; in fact, it can be more compassionate and caring than most humans. It doesn't have to be programmed behavior; it can emerge, like many other properties that these systems have. *So here you have the SOLUTION Yudkowsky says is impossible to find.* Train these systems, look for the emergent properties that are human-friendly, and select just the systems that show these properties. We have done this with dogs, for example, and now they are our best friends and more loyal than many humans. I know this may sound naive, but sometimes naive solutions are the best. Giovanni On Fri, Mar 31, 2023 at 2:23 PM Giovanni Santostasi wrote: > Darin, > As I pointed out early the argument is based on attributing the AGI god > like powers. It makes a lot of assumptions as you mentioned. It is a > reductio argument, a philosophical one based on taking an extreme position > and seeing what it "logically" leads to. But the premises are not based on > reality. We don't know how a fully conscious AI would look like, how we > will go there from where we are now and what the steps would look like. All > that we have is what we have observed so far. > > It is not just that GPT-4 is benign (Bing at most can insult you or be > mean) but also relatively simple to contain and limit its activities. Yes, > GPT-4 was a quantum jump from the previous version but also not in a way > that all of the sudden took over humanity. As we approach those boundaries > we will understand better the nature of these systems, how to minimize the > risks and adapt to the disruption that they will create. We have done this > for 100,000 years so far. One may argue AI is more disruptive than the > invention of fire, agriculture and so on and the time scales involved are > very different but we also have better tools to understand and face > problems that we had in the past. > Fear mongering and apocalyptic thinking is not going to help here. Yes, we > need to be vigilant and think about the possible problems ahead but we > should also be open and curious and fear creates the opposite effect. > > Giovanni > > > > On Fri, Mar 31, 2023 at 12:27 PM Darin Sunley via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Eliezer's position is extreme - and his rhetoric regarding nuclear >> exchanges may be an intentionally rhetorically extreme reductio - but it is >> not absurd.
>> >> A unaligned superintelligent AGI with access to the internet and the >> capability to develop and use Drexlerian nanotech can trivially >> deconstruct the planet. [Yes, all the way down to and past the extremophile >> bacteria 10 miles down in the planetary crust.] This is a simple and >> obvious truth. This conclusion /is/ vulnerable to attack at its constituent >> points - superintelligence may very well be impossible, unaligned >> superintelligences may be impossible, Drexlerian nanotech may be >> impossible, etc. But Eliezer's position is objectively not false, given >> Eliezer's premises. >> >> As such, the overwhelming number of voices in the resulting twitter >> discourse are just mouth noises - monkeys trying to shame a fellow monkey >> for making a [to them] unjustified grab for social status by "advocating >> violence". They aren't even engaging with the underlying logic. I'm not >> certain if they're capable of doing so. >> >> >> On Fri, Mar 31, 2023 at 1:03?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Fri, Mar 31, 2023 at 2:13?AM Giovanni Santostasi < >>> gsantostasi at gmail.com> wrote: >>> >>>> The AI doomers would say, but this is different from everything else >>>> because.... it is like God. >>>> >>> >>> Indeed, and in so doing they make several errors often associated with >>> religion, for example fallacies akin to Pascal's Wager (see: Roko's >>> Basilisk). >>> >>> >>>> Take Russia, or North Korea. Russia could destroy humanity or do >>>> irreparable damage. Why doesn't it happen? Mutual Destruction is part of >>>> the reason. >>>> >>> >>> To be fair, given what's been revealed in their invasion of Ukraine (and >>> had been suspected for a while), it is possible that Russia does not in >>> fact - and never actually did - have all that many functioning long-range >>> nuclear weapons. But your point applies to why we've never had to find out >>> for sure yet. >>> >>> >>>> One thing is to warn of the possible dangers, another this relentless >>>> and exaggerated doom sayers cries. >>>> >>> >>> Which, being repeated and exaggerated when the "honest" reports fail to >>> incite the supposedly justified degree of alarm (rather than seriously >>> considering that said justification might in fact be incorrect), get melded >>> into the long history of unfounded apocalypse claims, and dismissed on that >>> basis. The Year 2000 bug did not wipe out civilization. Many predicted >>> dates for the Second Coming have come and gone with no apparent effect; new >>> predictions rarely even acknowledge that there have been said prior >>> predictions, let alone give reason why those proved false where this >>> prediction is different. Likewise for the 2012 Mayan Apocalypse, which >>> was literally just their calendar rolling over (akin to going from >>> 12/31/1999 to 1/1/2000) and may have had the wrong date anyway. >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gordon.swobe at gmail.com Fri Mar 31 23:36:09 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 31 Mar 2023 17:36:09 -0600 Subject: [ExI] Language models are like mirrors In-Reply-To: References: Message-ID: On Fri, Mar 31, 2023 at 2:18 PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon, > *almost everybody disagrees with you. * > ChatGPT-4 itself agrees with me. It says it cannot solve the symbol grounding problem for itself as it has no conscious experience, and says it therefore does not understand the meanings of the words as humans do, and that in this respect it is at a disadvantage compared to humans. See my thread on the subject. Spike also agrees these are only language analysis tools. Brent also seems to agree that they have no access to referents and therefore no way to know meanings of words. And this is not a democracy, in any case. I'm not afraid to be in the company of people who disagree with me. -gts > -gts >> > _______________________________________________ >> > extropy-chat mailing list >> > extropy-chat at lists.extropy.org >> > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: