From foozler83 at gmail.com Wed Mar 1 16:18:33 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 1 Mar 2023 10:18:33 -0600 Subject: [ExI] More thoughts on sentient computers In-Reply-To: <028801d94a41$b2b29d00$1817d700$@rainier66.com> References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> Message-ID: Models are way too skinny for me - good looking faces, of course. They might look better when retired and have gained some weight. Rap - if the AI can produce music, then why not let it sing? Caruso - Tennessee Ernie Ford - Doris Day. And play all the instruments. I want a new grand opera. After rap how about Nashville type country music, nasals and all? Mostly those are about the words (or Stand By Your Man would not be the all time favorite). Rate facial features and then whole faces, then enter them into a multiple regression equation, which will tell you the weight of each factor in determining the overall looks ratings. bill w On Sun, Feb 26, 2023 at 6:25?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *William Flynn Wallace via extropy-chat > *Sent:* Sunday, 26 February, 2023 3:16 PM > *To:* ExI chat list > *Cc:* William Flynn Wallace > *Subject:* Re: [ExI] More thoughts on sentient computers > > > > >?Let's devise a measure of feminine beauty. :I suggest the > classical way: measure breast, waist, and hip circumference. Totally > objective. Then create an ideal set of measurements. Every woman then can > be classified on the beauty scale as average, etc. > > > > >?No one will be happy with that, even though it's totally objective - > right? OK, so create ideal dimensions of jaw, nose, and so on. You could > create dozens of objective measures and no one would agree with any of > them. Objective measure then, isn't the way to go, is it? > > > > > > No, not at all. Reason: there is no universal way to determine the weight > of each factor. > > > > No worries, however, for there is a way to use these things to derive an > objective measure: which one makes the most money specifically as a model? > > > > It has to be a model, for that profession is based entirely on beauty. > Contests bring in other factors such as ability to walk in the completely > absurd combination of spike heels with beachware, or spontaneous speaking > etc. Actresses are beautiful, but they must be able to memorize and recite > scripts. But fashion models only need to be drop-dead gorgeous. They can > be mentally deficient in every way, but still make a living at modelling if > they excel sufficiently. > > > > With all that? most fashion models make nearly nothing, or even work at a > loss (details cheerfully available.) A very small fraction of them make > enormous sums at modelling. > > > > So? regardless of what factors we come up with to derive an objective > measure of beauty, the only completely objective measure is profit. > > > > >?The Beatles are the best musicians in history because they sold the > most albums etc. Agree with that? > > > > We all have our favorites. They were not in my top 10, but they made it > to my top 20. I agree that they made the most money at music. This is the > best we can do for measuring success of any artist. 
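A minimal sketch of the multiple-regression idea bill w describes above, in Python, with invented ratings (the feature set, the numbers, and the three-feature layout are all hypothetical, not anything actually collected on the list):

    # Regress overall attractiveness ratings on per-feature ratings; the fitted
    # coefficients play the role of the "weight of each factor" in the overall
    # looks ratings. All data below is invented for illustration.
    import numpy as np

    # columns: jaw, nose, eyes ratings (1-10) for five faces -- hypothetical
    X = np.array([[7, 6, 8],
                  [5, 5, 6],
                  [9, 7, 9],
                  [4, 6, 5],
                  [8, 8, 7]], dtype=float)
    y = np.array([8, 5, 9, 4, 7], dtype=float)   # overall rating per face

    # ordinary least squares with an intercept term
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept:", coef[0])
    print("feature weights (jaw, nose, eyes):", coef[1:])

With enough rated faces, the relative sizes of the fitted weights are the per-factor weights the proposal asks for; the hard part, as the rest of the exchange notes, is getting raters to agree on the inputs.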
Again music is > reproducible for no money, which is why I suggested focusing on artists, > who create one physical object at a time. > > > > >? The best book in history has to be the Bible - the most sold? > > > > The bible is a collection of books, so it isn?t directly comparable. > > > > >? Aesthetics just isn't amenable to scientific/objective sorts of > measures? bill w > > > > Granted. Now I am proposing ways to create lists and sort them. This > requires objective measures. Otherwise, software cannot be trained as > effectively to write music that will make us a buttload of money. If it > fails at that task, I have no way to estimate ROI, and no interest in it > for I have no way to decide if it is worth investing in whoever is > developing it. > > > > Fun aside: we are clearly very close to having software which can write > popular music of sufficient quality to make it on the radio. Of all the > genres likely to be software generated, the first will most likely be? rap. > > > > spike > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 1 17:36:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 1 Mar 2023 09:36:34 -0800 Subject: [ExI] More thoughts on sentient computers In-Reply-To: References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> Message-ID: <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat >?Rap - if the AI can produce music, then why not let it sing? ?bill w A long time ago after one of the ExtroCons, I had some guys over to my house. Google was brand new then, so about 1999. Mike Butler found a site where Stephen Hawking had given up physics and had become a rapper. The image was Hawking sitting motionless in his wheelchair with his 1999 version of text to speech reciting the exact words of rap. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 1 17:43:52 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 1 Mar 2023 09:43:52 -0800 Subject: [ExI] More thoughts on sentient computers In-Reply-To: <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> Message-ID: > > > *?*> *On Behalf Of *William Flynn Wallace via extropy-chat > > > >?Rap - if the AI can produce music, then why not let it sing? ?bill w > You mean like the Vocaloids? AIs have been singing (with manually input lyrics and music) for years now. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foozler83 at gmail.com Wed Mar 1 18:53:54 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 1 Mar 2023 12:53:54 -0600 Subject: [ExI] More thoughts on sentient computers In-Reply-To: References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> Message-ID: Do any of the Vocaloids sing like Caruso or Elvis? If not why not? We could have CarusoAI sing arias he never sang and recorded. I would go for it - not being a purist - mostly. Or put the voice to a modern orchestra. Or how would Elvis do singing jazz? Infinite possibilities. bil w On Wed, Mar 1, 2023 at 11:45?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > >> *?*> *On Behalf Of *William Flynn Wallace via extropy-chat >> >> >> >?Rap - if the AI can produce music, then why not let it sing? ?bill w >> > > You mean like the Vocaloids? AIs have been singing (with manually input > lyrics and music) for years now. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 1 19:05:20 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 1 Mar 2023 11:05:20 -0800 Subject: [ExI] More thoughts on sentient computers In-Reply-To: References: <5609c228-2905-bc3b-b566-87fc10f86ad8@zaiboc.net> <00f901d94a11$93f9c760$bbed5620$@rainier66.com> <020401d94a2f$4f031260$ed093720$@rainier66.com> <028801d94a41$b2b29d00$1817d700$@rainier66.com> <003a01d94c64$5e9601b0$1bc20510$@rainier66.com> Message-ID: On Wed, Mar 1, 2023 at 10:55?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Do any of the Vocaloids sing like Caruso or Elvis? > Judge for yourself: there are plenty of videos of Vocaloids - and other musical-grade voice-to-text software - out there. I am not aware of any that have been set up to specifically imitate either of those two singers, though it is conceivable that this is just a matter of finding and setting appropriate parameters, potentially a hobby task for someone familiar with both Vocaloids (or equivalent) and that particular source material. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 1 22:27:56 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 1 Mar 2023 15:27:56 -0700 Subject: [ExI] Two new video chapters added to Consciousness: Not a Hard Problem, Just a Color Problem. Message-ID: There are now two new chapters in our Consciousness: Not a Hard Problem, Just a Color Problem videos. *The World in Your Head* *The perception of size* The top peer ranked Steven Lehar was very involved in the creation of these, including help with the narration. I'd love to hear any thoughts on this, good or bad or not... Thanks Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gadersd at gmail.com Thu Mar 2 00:26:26 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 1 Mar 2023 19:26:26 -0500 Subject: [ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023 In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> Message-ID: <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> >> Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them. Our current AI?s are black boxes. Their internal workings are a mystery. These systems could harbor goals that we are oblivious to. If we could prove that the system only has the goal of giving benign advice without any personal agenda that would help, but we do not know how to do that even in theory. Even a system that only gives advice is extremely dangerous as any psycho could potentially get detailed instructions on how to end the world. It could be as trivial as having the AI design a super virus. Our current filters are very fallible and we do not know how to definitively prevent AI from giving harmful advice. We are heading toward a field of landmines. > On Feb 28, 2023, at 12:25 PM, Dave S via extropy-chat wrote: > > On Tuesday, February 28th, 2023 at 11:14 AM, Gadersd via extropy-chat wrote: > >> >>Why would you ask a super intelligent AI with solving goals rather than asking it how the goals could be achieved? >> >> A super intelligence wouldn?t need to be ?asked.? Try caging something 1000x smarter than yourself. You had better hope its goals are aligned with yours. > > As I said, the verb should have been "task". If I ask Super AI "How would you do X?", I don't expect it to do X. And I don't expect it to do anything without permission. > > I have no idea what 1000x smarter means. An AI can be as smart as a person--or even smarter--without having the ability to set its own goals. Just because humans set their own goals doesn't mean AIs will have that ability. Just because we have wants and needs doesn't mean AIs will have them. > >> >>Why would you give a super intelligent AI the unchecked power to do potentially catastrophic things? >> >> Because it?s profitable to give AI the authority to perform tasks traditionally done by humans. A super intelligence can potentially do quite a lot of harm with relatively little authority. A super intelligent hacker only needs to find a basic software bug to gain access to the internet and imagine what might happen next. > > Something can be profitable without being a good idea. AIs should be our tools, not independent beings competing with us. > > -Dave > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 2 00:32:50 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 1 Mar 2023 19:32:50 -0500 Subject: [ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023 In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <00b101d94b9e$96ee5ce0$c4cb16a0$@rainier66.com> Message-ID: <9F932FEA-4296-4CF8-A7AE-4386A7B87F6A@gmail.com> You are thinking too much in terms of raw resources and power. All that is required to end humanity is a super virus. A sufficiently powerful AI could design it and pay some idiot to synthesize the materials with a step by step guide. 
Perhaps a rogue AI would do it on its own or some psycho might ask ?SuperChatGPT please design a virus for me that will wipe out humanity and give me a step-by-step guide on how to make it.? Sure we?ll try to put filters on AIs but the filters only have to fail once for humanity to end. > On Feb 28, 2023, at 5:56 PM, Giovanni Santostasi via extropy-chat wrote: > > Once the AI has been trained in persuasion techniques, the next step > is for it to persuade the owners that it would do a much better job if > it was given more power. > > That can be done already by a human psycho anyway. No human psycho has destroyed human kind. Yes, we had world wars and we were close to nuclear wars but it didn't happen. The AGI will be just another agent, super intelligent but not magical and being able to hypnotize people, it is a ridiculous notion. It can help in some circumstances at the individual level but not at the national security level with many levels of security. And if needed we can add even more as we come close to create an AGI. > > Again, the solution is not to control the AGI but its access to resources and power. > We already do that with humans, even that means mutual destruction. This is why it worked so well so far. > > > > > > On Tue, Feb 28, 2023 at 2:21?PM BillK via extropy-chat > wrote: > On Tue, 28 Feb 2023 at 18:03, spike jones via extropy-chat > > wrote: > > > > Of course. But it is a good idea to the person who is making the profit, not the person whose job has just been replaced by AI. > > > > We are getting a preview of things to come. Think about my previous post, and imagine college counselors, equity and diversity this and thats, the huge staff that universities hire who do things of value but don?t teach classes. Looks to me like much of that can be automated, and it would be difficult to argue against doing so. Students don?t have a lot of money, so if you could save them 20% on their tuition bills just by automating most of the counseling services? cool. > > > > I can imagine that the counseling staff won?t think much of the idea. > > > > spike > > _______________________________________________ > > > Generative AI could be an authoritarian breakthrough in brainwashing > by Bill Drexel and Caleb Withers, Opinion Contributors - 02/26/23 > > > > Quote: > Generative AI is poised to be the free world?s next great gift to > authoritarians. The viral launch of ChatGPT ? a system with eerily > human-like capabilities in composing essays, poetry and computer code > ? has awakened the world?s dictators to the transformative power of > generative AI to create unique, compelling content at scale. > -------- > > Once the AI has been trained in persuasion techniques, the next step > is for it to persuade the owners that it would do a much better job if > it was given more power. It may even start singing the old Engelbert > Humperdinck song. ?Please release me, let me go------?. :) > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giulio at gmail.com Thu Mar 2 08:23:16 2023 From: giulio at gmail.com (Giulio Prisco) Date: Thu, 2 Mar 2023 09:23:16 +0100 Subject: [ExI] Q/A with Rudy Rucker Message-ID: Q/A with Rudy Rucker. Science fiction, AI, lifeboxes, determinism, free will, G?del, life after death. https://www.turingchurch.com/p/qa-with-rudy-rucker -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 2 17:22:44 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 2 Mar 2023 11:22:44 -0600 Subject: [ExI] ai test Message-ID: from Neurosciencenews daily: ?One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,? explains Binz, lead author of the study. Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement? Most people intuitively pick the second alternative, even though the added condition ? that Linda is active in the feminist movement ? makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.\\ So they are programming cognitive biases into the AIs? Inadvertently, of course. ???? Bill W -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 2 18:12:24 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 2 Mar 2023 13:12:24 -0500 Subject: [ExI] ai test In-Reply-To: References: Message-ID: GPT3 was trained on a mess of internet data so it would be astounding if it weren?t biased. However, OpenAI has been putting work into fine tuning their models to reduce the bias, but much still remains. Ideally one would train these models only on factually accurate eloquent data, but such data is relatively rare. The most effective method so far is to train on junk and then make refinements. > On Mar 2, 2023, at 12:22 PM, William Flynn Wallace via extropy-chat wrote: > > from Neurosciencenews daily: > > ?One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,? explains Binz, lead author of the study. > > Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement? > > Most people intuitively pick the second alternative, even though the added condition ? that Linda is active in the feminist movement ? makes it less likely from a probabilistic point of view. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.\\ > > > > So they are programming cognitive biases into the AIs? Inadvertently, of course. ???? Bill W > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Thu Mar 2 19:00:04 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 2 Mar 2023 11:00:04 -0800 Subject: [ExI] ai test In-Reply-To: References: Message-ID: It's a trick question rather than a true comparison. Most people read the fact that a comparison is requested to mean that the first statement implies that Linda is not active in the feminist movement, when in fact the first statement is neutral about that. On Thu, Mar 2, 2023 at 9:25?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > from Neurosciencenews daily: > > ?One classic test problem of cognitive psychology that we gave to GPT-3 is > the so-called Linda problem,? explains Binz, lead author of the study. > > Here, the test subjects are introduced to a fictional young woman named > Linda as a person who is deeply concerned with social justice and opposes > nuclear power. Based on the given information, the subjects are asked to > decide between two statements: is Linda a bank teller, or is she a bank > teller and at the same time active in the feminist movement? > > Most people intuitively pick the second alternative, even though the added > condition ? that Linda is active in the feminist movement ? makes it less > likely from a probabilistic point of view. And GPT-3 does just what humans > do: the language model does not decide based on logic, but instead > reproduces the fallacy humans fall into.\\ > > > So they are programming cognitive biases into the AIs? Inadvertently, of > course. ???? Bill W > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Thu Mar 2 19:11:29 2023 From: tara at taramayastales.com (Tara Maya) Date: Thu, 2 Mar 2023 11:11:29 -0800 Subject: [ExI] ai test In-Reply-To: References: Message-ID: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> I don't think it's inadvertent. The censorship of certain topics and the censoriousness on certain other topics is certainly built right in. (Which makes it rather annoying for writing fiction, I've found. Bad guys are SUPPOSED to have loathsome opinions. But that's another issue... After all, we all know darn well that Linda is a feminist and only works as a bank teller because she couldn't get any other job with her Womyn's Studies degree. No one wants emails by a robot that can't guess that too.... ;) > On Mar 2, 2023, at 9:22 AM, William Flynn Wallace via extropy-chat wrote: > > ?One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,? explains Binz, lead author of the study. > > Here, the test subjects are introduced to a fictional young woman named Linda as a person who is deeply concerned with social justice and opposes nuclear power. Based on the given information, the subjects are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement? > > Most people intuitively pick the second alternative, even though the added condition ? that Linda is active in the feminist movement ? makes it less likely from a probabilistic point of view. 
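To make the probabilistic point concrete with a worked example (the numbers are invented; only the inequality matters): whatever probability you assign to "Linda is a bank teller", the conjunction "bank teller and active feminist" can never exceed it, because P(A and B) = P(A) x P(B given A) <= P(A).

    # Conjunction fallacy arithmetic -- numbers invented for illustration
    p_teller = 0.05                  # assumed P(Linda is a bank teller)
    p_feminist_given_teller = 0.90   # assumed P(active feminist | bank teller)
    p_both = p_teller * p_feminist_given_teller
    print(p_teller, p_both)          # 0.05 vs 0.045 -- the conjunction is smaller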
And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.\\ > > > > So they are programming cognitive biases into the AIs? Inadvertently, of course. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 2 22:30:03 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 14:30:03 -0800 Subject: [ExI] ai test In-Reply-To: References: Message-ID: <006f01d94d56$89121490$9b363db0$@rainier66.com> ?> On Behalf Of William Flynn Wallace via extropy-chat ? >?So they are programming cognitive biases into the AIs? Inadvertently, of course. ???? Bill W Cognitive biases are intentionally programmed into AI. This is one of the reasons I posted about we don?t even need to wait for AI to achieve intelligence or its own will, when humans can harm each other with a sub AI. The intention isn?t even necessarily to harm, but for sexual gratification, political power or of course sexual gratification. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 2 23:10:24 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 15:10:24 -0800 Subject: [ExI] ai test In-Reply-To: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> Message-ID: <009401d94d5c$2c321530$84963f90$@rainier66.com> ?> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ai test >?I don't think it's inadvertent. The censorship of certain topics and the censoriousness on certain other topics is certainly built right in. (Which makes it rather annoying for writing fiction, I've found. Bad guys are SUPPOSED to have loathsome opinions. But that's another issue... Tara ? Oh hey, cool, I have an idea. Tara in our times in the USA and probably elsewhere, culture wars are escalating to the point where businesses are having to take sides, then settle for a much reduced market as the other side of the culture war will eschew that retailer or place of business. A writer might be able to leverage that culture war to her advantage. Categorize stuff that generally goes together well: feminist movement and social justice for instance. I want to leave out nuclear power because that one is undergoing a remarkable transition as the greens are discovering how clean it is, how little emissions, how safe and how small a footprint nuclear power requires. OK good. Now, write two versions of your story. Write one with your bad guys embracing all the stuff that kinda goes together on one team, and your good guys opposing. Now switch them: still have bad guys, but have them all embrace the opposite. It isn?t entirely clear what is the opposite of feminism (is there a masculism? (never heard of it)) but the opposite of social justice I suppose could be imagined: your bad guys could advocate re-establishing slavery. So now you have really the same story but you would be writing for both sides of the culture war. You could even have a pen name and a different publisher so that it wouldn?t be quite as obvious what you are doing. You could make a buttload. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Fri Mar 3 00:23:08 2023 From: pharos at gmail.com (BillK) Date: Fri, 3 Mar 2023 00:23:08 +0000 Subject: [ExI] ai test In-Reply-To: <009401d94d5c$2c321530$84963f90$@rainier66.com> References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> <009401d94d5c$2c321530$84963f90$@rainier66.com> Message-ID: On Thu, 2 Mar 2023 at 23:13, spike jones via extropy-chat wrote: > > Oh hey, cool, I have an idea. > > Now, write two versions of your story. > So now you have really the same story but you would be writing for both sides of the culture war. You could even have a pen name and a different publisher so that it wouldn?t be quite as obvious what you are doing. You could make a buttload. > > spike > _______________________________________________ Heh! :) I think you are trying to teach your granny to suck eggs. :) There are whole university courses on literature studies, including things like writing from alternate points of view, alternate timelines, alternate histories, etc. (Not to mention plot narrative, structure, characters......). Being an author is hard work. Reading the result is the easy bit. BillK From spike at rainier66.com Fri Mar 3 01:22:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 17:22:34 -0800 Subject: [ExI] ai test In-Reply-To: References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> <009401d94d5c$2c321530$84963f90$@rainier66.com> Message-ID: <00d401d94d6e$a2e66610$e8b33230$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] ai test On Thu, 2 Mar 2023 at 23:13, spike jones via extropy-chat wrote: > >>... Oh hey, cool, I have an idea. > >>... Now, write two versions of your story.... You could make a buttload. > > spike > _______________________________________________ ... >...Being an author is hard work. Reading the result is the easy bit. BillK _______________________________________________ Of course. I was privileged to witness a professional author in action, firsthand. Damien Broderick was a guest in my home during the time he was preparing a second (2001 revision) edition to The Spike. He spent about three hours crafting that 1.5 pages that dealt with Mersenne primes (pages 86 and 87 in the hardback Forge edition.) He and worked together to make sure all the technical details were perfect. He experimented with varying sentence structures, different words. The man is brilliant with words. I later asked him if an entire book is crafted to that level of effort. He explained that some of them are, such as the popular science book The Spike, which was a heeeellllll of a job. Some of the SciFi is easier and faster generally, but as you said, writing is a job. Where I was going with it: be aware of the marketing advantage of writing towards a particular market. This is what Disney is completely missing, and as a result, their strategy is an epic fail. spike From brent.allsop at gmail.com Fri Mar 3 03:24:55 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 2 Mar 2023 20:24:55 -0700 Subject: [ExI] Two new video chapters added to Consciousness: Not a Hard Problem, Just a Color Problem. In-Reply-To: References: Message-ID: Sorry, forgot to include the link to the video page. Consciousness: Not a Hard Problem Just a Color Problem On Wed, Mar 1, 2023 at 3:27?PM Brent Allsop wrote: > > There are now two new chapters in our Consciousness: Not a Hard Problem, > Just a Color Problem videos. 
> > *The World in Your Head* > *The perception of size* > > The top peer ranked Steven Lehar > was very > involved in the creation of these, including help with the narration. > > I'd love to hear any thoughts on this, good or bad or not... > > Thanks > > Brent Allsop > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 3 05:32:59 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 2 Mar 2023 21:32:59 -0800 Subject: [ExI] ai test In-Reply-To: <009901d94d5c$2cd978c0$868c6a40$@rainier66.com> References: <7D041948-2877-4FE9-B708-35A697208AD2@taramayastales.com> <009901d94d5c$2cd978c0$868c6a40$@rainier66.com> Message-ID: <000001d94d91$9f0c7250$dd2556f0$@rainier66.com> From: spike at rainier66.com >?I want to leave out nuclear power because that one is undergoing a remarkable transition as the greens are discovering how clean it is, how little emissions, how safe and how small a footprint nuclear power requires. ? spike I set that aside before, since we are witnessing what I perceive as a remarkable turnaround in the fortunes of nuclear power. 40 years ago, a generation of students were graduating with nuclear engineering degrees expecting to get jobs in the industry. About that time, the USA turned its back on the notion. I think we are seeing this world turn once again to what plenty of environmentalists now see as a critically important power source for baseline generation. I am seeing signs of it everywhere. spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3542 bytes Desc: not available URL: From giulio at gmail.com Fri Mar 3 06:28:12 2023 From: giulio at gmail.com (Giulio Prisco) Date: Fri, 3 Mar 2023 07:28:12 +0100 Subject: [ExI] Space expansion comes before space exploration Message-ID: Turing Church newsletter. Space expansion comes before space exploration. Also, Q/A with Rudy Rucker and review of Tim Urban's What's Our Problem? https://www.turingchurch.com/p/space-expansion-comes-before-space -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 5 23:28:26 2023 From: pharos at gmail.com (BillK) Date: Sun, 5 Mar 2023 23:28:26 +0000 Subject: [ExI] ChatGPT 'Not Interesting' for creative works Message-ID: Neal Stephenson Says AI-Generated ChatGPT Is ?Simply Not Interesting? The author, who coined the term "metaverse" in his science-fiction novel ?Snow Crash,? says the algorithm-based tool doesn't resonate the way a book written by a real person does. By Fran Velasquez Mar 3, 2023 https://www.coindesk.com/tech/2023/03/03/neal-stephenson-says-ai-generated-chatgpt-is-simply-not-interesting/> Quotes: Stephenson said that with a painting or book, ?what you?re doing is having a kind of communion with the artist who made thousands of little micro decisions in the course of creating that work of art or writing that book.? A decision that is generated by an algorithm, ?that?s simply not interesting,? he said. ?Personally, I know a lot of writers who are putting a lot of effort into creating their own original works, and I?d rather support them and hear what they have to say than just look at the output of an algorithm,? he said. 
------------ BillK From spike at rainier66.com Mon Mar 6 01:42:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 17:42:04 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: Message-ID: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: [ExI] ChatGPT 'Not Interesting' for creative works >...Neal Stephenson Says AI-Generated ChatGPT Is ?Simply Not Interesting? ... Quotes: >...Stephenson said that with a painting or book, ?what you?re doing is having a kind of communion with the artist who made thousands of little micro decisions ... ------------ BillK _______________________________________________ Sure BillK, but that is Neal Stephenson. His stuff is so wacky cool, creative, zippy, of course ChatGPT is boring compared to him. We all are. If he read any of my stuff, he would make it thru about six words before falling asleep or fleeing back to ChatGPT. Even if it isn't great sci-fi, I want to read something a machine generated, just as I want to watch computers race motorcycles, even if they aren't as fast as their human counterparts. spike From pharos at gmail.com Mon Mar 6 02:09:56 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Mar 2023 02:09:56 +0000 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> Message-ID: On Mon, 6 Mar 2023 at 01:44, spike jones via extropy-chat wrote: > > Sure BillK, but that is Neal Stephenson. His stuff is so wacky cool, creative, zippy, of course ChatGPT is boring compared to him. We all are. If he read any of my stuff, he would make it thru about six words before falling asleep or fleeing back to ChatGPT. > > Even if it isn't great sci-fi, I want to read something a machine generated, just as I want to watch computers race motorcycles, even if they aren't as fast as their human counterparts. > > spike > _______________________________________________ Ecclesiastes 12:12 And further, my son, be admonished by these. Of making many books there is no end, and much study is wearisome to the flesh. ---------- And now we have the Internet, self-publishing, Kindle and ChatGPT. We have so much to read it is indeed wearisome to the flesh. I don't think Stephenson is talking just about his personal preferences. If computers can now produce ream after ream of plausible words strung together, what is the point of spending human time reading this endless stream? If there is no human personality behind it, then let another machine read it. BillK From dsunley at gmail.com Mon Mar 6 02:13:42 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sun, 5 Mar 2023 19:13:42 -0700 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> Message-ID: In a related story, manufacturers of million-dollar hand-built Italian supercars declare mass-produced Fords, Kias, Toyotas, and Teslas "Simply Not Interesting." On Sun, Mar 5, 2023 at 6:43 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > ...> On Behalf Of BillK via extropy-chat > Subject: [ExI] ChatGPT 'Not Interesting' for creative works > > >...Neal Stephenson Says AI-Generated ChatGPT Is ?Simply Not Interesting? > > ... 
> > Quotes: > >...Stephenson said that with a painting or book, ?what you?re doing is > having a kind of communion with the artist who made thousands of little > micro decisions ... > ------------ > > BillK > > _______________________________________________ > > > > Sure BillK, but that is Neal Stephenson. His stuff is so wacky cool, > creative, zippy, of course ChatGPT is boring compared to him. We all are. > If he read any of my stuff, he would make it thru about six words before > falling asleep or fleeing back to ChatGPT. > > Even if it isn't great sci-fi, I want to read something a machine > generated, just as I want to watch computers race motorcycles, even if they > aren't as fast as their human counterparts. > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Mar 6 02:21:51 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Mar 2023 02:21:51 +0000 Subject: [ExI] What if the Universe also had a Dark Big Bang? Message-ID: What if The Universe Started With a Dark Big Bang? 06 March 2023 By PAUL M. SUTTER, Universe Today Quotes: The Big Bang may have not been alone. The appearance of all the particles and radiation in the universe may have been joined by another Big Bang that flooded our universe with dark matter particles. And we may be able to detect it. ----------- Interesting....... BillK From spike at rainier66.com Mon Mar 6 02:26:52 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 18:26:52 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> Message-ID: <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> >... On Behalf Of BillK via extropy-chat > _______________________________________________ Ecclesiastes 12:12 And further, my son, be admonished by these. Of making many books there is no end, and much study is wearisome to the flesh. ---------- >...And now we have the Internet, self-publishing, Kindle and ChatGPT. We have so much to read it is indeed wearisome to the flesh. I don't think Stephenson is talking just about his personal preferences. If computers can now produce ream after ream of plausible words strung together, what is the point of spending human time reading this endless stream? If there is no human personality behind it, then let another machine read it. BillK _______________________________________________ Ja! This thread has long been heading in this direction BillK: we need versions of ChatGPT that can be personally owned and operated. I am told it requires tons of bandwidth and computing speed, but I don't understand why one couldn't have a micro-ChatGPT that operates on my one processor and uses my modest home bandwidth, going out researching in its background computing cycles and searching around mostly as I sleep. I don't understand why it wouldn't gradually get smarter and become a better companion, if it can be trained by me. It hasta be able to learn and remember what I told it. I still want to try that experiment where you train a micro-ChatGPT, I train one, then we have the two debate away in the night. Then we see what they said. That should be a hoot. If anyone here knows exactly why ChatGPT can't be scaled down by six orders of magnitude and sold to consumers, do educate me please. 
Seems to me like whatever magic that banks of a thousand computers can do can be done at a thousandth that pace with one. Ja? Why not? I want to try it. Thanks for the cool Ecclesiastes quote, me lad! spike From dsunley at gmail.com Mon Mar 6 02:44:22 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sun, 5 Mar 2023 19:44:22 -0700 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: ChatGPT3 has ~175 billion parameters. Training it requires datacenters of computing power. But the model itself will fit into a relatively small number of desktop PCs, even without compression. I'm pretty sure the model itself can be compressed to where paths through it will fit in the memory of a beefy desktop. On Sun, Mar 5, 2023 at 7:29 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > >... On Behalf Of BillK via extropy-chat > > > _______________________________________________ > > > Ecclesiastes 12:12 > And further, my son, be admonished by these. Of making many books there is > no end, and much study is wearisome to the flesh. > ---------- > > >...And now we have the Internet, self-publishing, Kindle and ChatGPT. > We have so much to read it is indeed wearisome to the flesh. > I don't think Stephenson is talking just about his personal preferences. If > computers can now produce ream after ream of plausible words strung > together, what is the point of spending human time reading this endless > stream? If there is no human personality behind it, then let another > machine > read it. > > > BillK > > _______________________________________________ > > > > Ja! This thread has long been heading in this direction BillK: we need > versions of ChatGPT that can be personally owned and operated. I am told > it > requires tons of bandwidth and computing speed, but I don't understand why > one couldn't have a micro-ChatGPT that operates on my one processor and > uses > my modest home bandwidth, going out researching in its background computing > cycles and searching around mostly as I sleep. I don't understand why it > wouldn't gradually get smarter and become a better companion, if it can be > trained by me. It hasta be able to learn and remember what I told it. > > I still want to try that experiment where you train a micro-ChatGPT, I > train > one, then we have the two debate away in the night. Then we see what they > said. That should be a hoot. > > If anyone here knows exactly why ChatGPT can't be scaled down by six orders > of magnitude and sold to consumers, do educate me please. Seems to me like > whatever magic that banks of a thousand computers can do can be done at a > thousandth that pace with one. Ja? Why not? I want to try it. > > Thanks for the cool Ecclesiastes quote, me lad! > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Mon Mar 6 02:45:32 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sun, 5 Mar 2023 19:45:32 -0700 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: Correction, that's GPT-3. 
chatGPT is significantly smaller. On Sun, Mar 5, 2023 at 7:44 PM Darin Sunley wrote: > ChatGPT3 has ~175 billion parameters. Training it requires > datacenters of computing power. But the model itself will fit into a > relatively small number of desktop PCs, even without compression. I'm > pretty sure the model itself can be compressed to where paths through it > will fit in the memory of a beefy desktop. > > On Sun, Mar 5, 2023 at 7:29 PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >... On Behalf Of BillK via extropy-chat >> >> > _______________________________________________ >> >> >> Ecclesiastes 12:12 >> And further, my son, be admonished by these. Of making many books there is >> no end, and much study is wearisome to the flesh. >> ---------- >> >> >...And now we have the Internet, self-publishing, Kindle and ChatGPT. >> We have so much to read it is indeed wearisome to the flesh. >> I don't think Stephenson is talking just about his personal preferences. >> If >> computers can now produce ream after ream of plausible words strung >> together, what is the point of spending human time reading this endless >> stream? If there is no human personality behind it, then let another >> machine >> read it. >> >> >> BillK >> >> _______________________________________________ >> >> >> >> Ja! This thread has long been heading in this direction BillK: we need >> versions of ChatGPT that can be personally owned and operated. I am told >> it >> requires tons of bandwidth and computing speed, but I don't understand why >> one couldn't have a micro-ChatGPT that operates on my one processor and >> uses >> my modest home bandwidth, going out researching in its background >> computing >> cycles and searching around mostly as I sleep. I don't understand why it >> wouldn't gradually get smarter and become a better companion, if it can be >> trained by me. It hasta be able to learn and remember what I told it. >> >> I still want to try that experiment where you train a micro-ChatGPT, I >> train >> one, then we have the two debate away in the night. Then we see what they >> said. That should be a hoot. >> >> If anyone here knows exactly why ChatGPT can't be scaled down by six >> orders >> of magnitude and sold to consumers, do educate me please. Seems to me >> like >> whatever magic that banks of a thousand computers can do can be done at a >> thousandth that pace with one. Ja? Why not? I want to try it. >> >> Thanks for the cool Ecclesiastes quote, me lad! >> >> spike >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 6 03:00:23 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 5 Mar 2023 22:00:23 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. 
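A rough back-of-envelope check of the memory side of that claim, using the 175-billion-parameter figure quoted upthread (16-bit weights are assumed, and the 24 GB card is just an assumed high-end consumer GPU):

    # Can the raw weights of a GPT-3-class model even fit on a consumer GPU?
    params = 175e9            # parameter count quoted upthread for GPT-3
    bytes_per_param = 2       # 16-bit floats assumed
    weight_gb = params * bytes_per_param / 1e9
    print(weight_gb, "GB of weights")                      # ~350 GB
    consumer_gpu_gb = 24      # assumed high-end consumer card
    print(weight_gb / consumer_gpu_gb, "x a 24 GB card")   # ~15x too big

So before speed even enters into it, the weights of a model that size are more than an order of magnitude too large for a single consumer GPU, which is the point being made here; a billion-parameter model, by the same arithmetic, fits comfortably.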
Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together. > On Mar 5, 2023, at 9:26 PM, spike jones via extropy-chat wrote: > > >> ... On Behalf Of BillK via extropy-chat > >> _______________________________________________ > > > Ecclesiastes 12:12 > And further, my son, be admonished by these. Of making many books there is > no end, and much study is wearisome to the flesh. > ---------- > >> ...And now we have the Internet, self-publishing, Kindle and ChatGPT. > We have so much to read it is indeed wearisome to the flesh. > I don't think Stephenson is talking just about his personal preferences. If > computers can now produce ream after ream of plausible words strung > together, what is the point of spending human time reading this endless > stream? If there is no human personality behind it, then let another machine > read it. > > > BillK > > _______________________________________________ > > > > Ja! This thread has long been heading in this direction BillK: we need > versions of ChatGPT that can be personally owned and operated. I am told it > requires tons of bandwidth and computing speed, but I don't understand why > one couldn't have a micro-ChatGPT that operates on my one processor and uses > my modest home bandwidth, going out researching in its background computing > cycles and searching around mostly as I sleep. I don't understand why it > wouldn't gradually get smarter and become a better companion, if it can be > trained by me. It hasta be able to learn and remember what I told it. > > I still want to try that experiment where you train a micro-ChatGPT, I train > one, then we have the two debate away in the night. Then we see what they > said. That should be a hoot. > > If anyone here knows exactly why ChatGPT can't be scaled down by six orders > of magnitude and sold to consumers, do educate me please. Seems to me like > whatever magic that banks of a thousand computers can do can be done at a > thousandth that pace with one. Ja? Why not? I want to try it. > > Thanks for the cool Ecclesiastes quote, me lad! > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Mar 6 03:10:02 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 19:10:02 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> Message-ID: <006e01d94fd9$250c7050$6f2550f0$@rainier66.com> ?> On Behalf Of Darin Sunley via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >? ChatGPT3 has ~175 billion parameters. Training it requires datacenters of computing power. But the model itself will fit into a relatively small number of desktop PCs, even without compression. I'm pretty sure the model itself can be compressed to where paths through it will fit in the memory of a beefy desktop? Cool, that was my intuition from a person who watched in realtime as Deep Blue the chess program which ran on a supercomputer was taken out of service almost immediately after it defeated the carbon unit Kasparov. We couldn?t figure out why until my computer jockey friend told me IBM didn?t want its big iron to be defeated by a desktop computer. 
I wasn?t sure I believed it until I followed thru Deep Blue?s games against Gary, then compared them with the stuff the desktops were playing less than five years later. I realized it was the same level of play. But even before five years, whatever magic Deep Blue was calculating could have been done with a few desktops running in parallel and given more time. Darin?s theory gives me an idea: we could get an ExI team together and let our computers collectively train a micro-ChatGPT using the pooled computing resources of a dozen of us. Then we take on a similar uGPT trained by Mensa or the Prime95 group in a game of Jeopardy or something. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 6 03:17:15 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 19:17:15 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> Message-ID: <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> >... Gadersd via extropy-chat ... Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >...Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together... OK so what if... we get a number of us running in parallel. A toy version with a billion parameters, well OK then, a billion is about three orders of magnitude more parameters than my beleaguered meat brain has (as far as I know (hell I don't even know what my own parameters are)) and yet it seems to somehow write fun stuff on occasion. spike From spike at rainier66.com Mon Mar 6 03:25:50 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 5 Mar 2023 19:25:50 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> Message-ID: <007d01d94fdb$5a58c720$0f0a5560$@rainier66.com> -----Original Message----- From: spike at rainier66.com >... a billion is about three orders of magnitude more parameters than my beleaguered meat brain has ...spike Our science Olympiad team is going like bats outta hell, training for next weekend's regional competition using ChatGPT. That experience where two guys who know nossink about agriculture combined, managed to take the agriculture test given five days training time and took second place. Our junior varsity team, also given five days, and who combined also knew nossink, took third. This transformed our team. Now... we are training them on ChatGPT. Fun aside: we often talk about the great digital divide that opened up when the internet came along. Over time the digital divide widened between those who could effectively use the internet for education vs those who were just using for vastly improved... emmm... National Geographic (if you get my drift.) 
Most students use the internet for the latter, but it also has actual educational potential. An educational divide opened between the ones who effectively use it for education and those who don't. When covid lockdowns started, those who could effectively learn from watching zoom meetings opened up a new educational divide between those who could learn that way and those who could not. A new digital divide is opening between those who figure out how to effectively use ChatGPT for education vs those who do not. This weekend, we find out if we whoop ass at the regionals. spike From pharos at gmail.com Mon Mar 6 03:52:57 2023 From: pharos at gmail.com (BillK) Date: Mon, 6 Mar 2023 03:52:57 +0000 Subject: [ExI] Evolution Could Predate Life Itself, Protein Discovery Suggests Message-ID: Evolution Could Predate Life Itself, Protein Discovery Suggests Nature 06 March 2023 By David Nield Quotes: "Protein folding was basically allowing us to do evolution before there was even life on our planet," says Fried. "You could have evolution before you had biology, you could have natural selection for the chemicals that are useful for life even before there was DNA." This research can teach us more about the potential for microorganisms on other planets and our own: The same amino acids that came to Earth via meteorites can also be found in many other places in the Universe. "The Universe seems to love amino acids," says Fried. "Maybe if we found life on a different planet, it wouldn't be that different." -------------------- BillK From gadersd at gmail.com Mon Mar 6 16:24:37 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 11:24:37 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> Message-ID: <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> The human brain has roughly 1 trillion parameters so large language models are still a few orders of magnitude short of the human brain. It should be noted however that not all human brain connections perform language tasks, so achieving a fully human level of natural language understanding should require fewer than 1 trillion parameters. Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT. Prompt: "solve 2x+3=-1 step by step. 2x=" Answer: "1, so x=1/2. A: The answer is $1? This model was trained in parallel as you have suggested. Not very useful, is it? > On Mar 5, 2023, at 10:17 PM, spike jones via extropy-chat wrote: > >> ... Gadersd via extropy-chat > ... > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > >> ...Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together... > > > > OK so what if... we get a number of us running in parallel. 
A toy version with a billion parameters, well OK then, a billion is about three orders of magnitude more parameters than my beleaguered meat brain has (as far as I know (hell I don't even know what my own parameters are)) and yet it seems to somehow write fun stuff on occasion. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 6 16:31:39 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 11:31:39 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> Message-ID: <73086B3C-E417-47EE-B9F4-57828652C2B5@gmail.com> Correction, I meant to say that the human brain has 100 trillion parameters, not 1 trillion. > On Mar 6, 2023, at 11:24 AM, Gadersd wrote: > > The human brain has roughly 1 trillion parameters so large language models are still a few orders of magnitude short of the human brain. It should be noted however that not all human brain connections perform language tasks, so achieving a fully human level of natural language understanding should require fewer than 1 trillion parameters. > > Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. > > For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT . > Prompt: "solve 2x+3=-1 step by step. 2x=" > Answer: "1, so x=1/2. > > A: > > The answer is $1? > > This model was trained in parallel as you have suggested. Not very useful, is it? > >> On Mar 5, 2023, at 10:17 PM, spike jones via extropy-chat > wrote: >> >>> ... Gadersd via extropy-chat >> ... >> Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >> >>> ...Computing technology is not advanced enough for consumer computers to run the powerful models. Consumer computers do not have the bandwidth and GPU FLOPS to run the good models. It isn?t a matter of speed, consumer computers just cannot run the big models. The best you could do is run a toy model with maybe a billion parameters. Such toy models are completely dumb compared to ChatGPT and can barely string coherent sentences together... >> >> >> >> OK so what if... we get a number of us running in parallel. A toy version with a billion parameters, well OK then, a billion is about three orders of magnitude more parameters than my beleaguered meat brain has (as far as I know (hell I don't even know what my own parameters are)) and yet it seems to somehow write fun stuff on occasion. >> >> spike >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Mon Mar 6 21:10:14 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Mar 2023 13:10:14 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> Message-ID: <009001d95070$0c0eb530$242c1f90$@rainier66.com> ?> On Behalf Of Gadersd via extropy-chat Sent: Monday, 6 March, 2023 8:25 AM Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT. Prompt: "solve 2x+3=-1 step by step. 2x=" Answer: "1, so x=1/2. A: The answer is $1? This model was trained in parallel as you have suggested. Not very useful, is it? ? In your example, I am getting x = -2. But no matter, we know how to do algebra with software, and it is good at it. Regarding the value of a toy ChatGPT, it depends on how you look at it. If I ask ChatGPT to write a 2 page essay on civil rights in the 20th century, it will do so in a few seconds. So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning. It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja? The real question is how do we scale ChatGPT down six orders of magnitude and make it a commercial product? It isn?t yet what we need if a company or organization controls it and trains it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 7 01:04:48 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 20:04:48 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <009001d95070$0c0eb530$242c1f90$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer. > So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning. It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja? No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. > On Mar 6, 2023, at 4:10 PM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Gadersd via extropy-chat > Sent: Monday, 6 March, 2023 8:25 AM > > > Toy models can and have been trained in parallel across consumer computers, but I think you would be disappointed in their intelligence as compared to ChatGPT. 
> > For example I tried a 6 billion parameter model GPT-JT, accessible at https://huggingface.co/spaces/togethercomputer/GPT-JT . > Prompt: "solve 2x+3=-1 step by step. 2x=" > Answer: "1, so x=1/2. > > A: > > The answer is $1? > > This model was trained in parallel as you have suggested. Not very useful, is it? > > > ? > > > In your example, I am getting x = -2. > > But no matter, we know how to do algebra with software, and it is good at it. > > Regarding the value of a toy ChatGPT, it depends on how you look at it. If I ask ChatGPT to write a 2 page essay on civil rights in the 20th century, it will do so in a few seconds. So imagine I had a microChatGPT and asked it to write a 2 page essay on civil rights by tomorrow morning. It would be analogous to Deep Blue doing the calculations of 3 minutes in 18 hours, ja? > > The real question is how do we scale ChatGPT down six orders of magnitude and make it a commercial product? It isn?t yet what we need if a company or organization controls it and trains it. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue Mar 7 02:15:26 2023 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 6 Mar 2023 21:15:26 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > No, the small models generate output faster than the big models. The small > models are not slower version of the big models, they have completely > different capabilities. You will never be able to get ChatGPT level output > out of a much smaller model. It would be like trying to run modern > engineering software on an Atari console: it wouldn?t be slower it just > wouldn?t run at all. > Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 7 04:39:23 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 6 Mar 2023 23:39:23 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: At least with weather stations one can average the measurements of many to get an arbitrarily good estimate. In the case of language models averaging many small models still yields junk output. ChatGPT's abilities are irreducible to the sum of many smaller models. It is like the saying ?Consciousness is more than the sum of its parts.? More precisely, a large model is required to integrate all available information. Small models can only integrate small subsets of the information that larger models can. 
The sum of partially integrated information does not equal the full information totally integrated. > On Mar 6, 2023, at 9:15 PM, Mike Dougherty via extropy-chat wrote: > > On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat > wrote: > > No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. > > Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 05:29:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Mar 2023 21:29:49 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: <005201d950b5$d69f3140$83dd93c0$@rainier66.com> ?> On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >?How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer. Hi Gadersd, I worded it slightly differently: Me: let 2x + 3 = -1. What is x? GPT: To solve for x, we need to isolate x on one side of the equation. Starting with 2x + 3 = -1: Subtracting 3 from both sides: 2x = -4 Dividing both sides by 2: x = -2 Therefore, x is equal to -2. >?No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all? OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates. Then we log on, it loads our personal profile matrix or something. Somehow this thing just hasta learn about us to be a useful companion bot. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Mar 7 05:53:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 6 Mar 2023 21:53:34 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> Message-ID: <007801d950b9$282167b0$78643710$@rainier66.com> ?> On Behalf Of Mike Dougherty via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat > wrote: No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. >?Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? Mike I think of it more as a weather prediction using all the stations and readings but the model takes a year to calculate a prediction for tomorrow. The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. One way or another, we need to be able to personalize GPT. Otherwise we can?t really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Mar 7 10:41:54 2023 From: pharos at gmail.com (BillK) Date: Tue, 7 Mar 2023 10:41:54 +0000 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007801d950b9$282167b0$78643710$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> Message-ID: On Tue, 7 Mar 2023 at 05:56, spike jones via extropy-chat wrote: > > One way or another, we need to be able to personalize GPT. Otherwise we can?t really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. > > spike > _______________________________________________ OpenAI says, Yes, we know and we are working on it. But, it is difficult. Many people have complained about bias in some ChatAGP responses caused by the training data. But allowing customers to arbitrarily change ChatAGP responses opens the door to all kinds of new biases. So they have to be careful. But OpenAI want to allow customers to add data about their own companies (to increase sales). But user mods must be checked for bias and validity. One solution is to allow more discussion type responses from ChatAGP - e.g. 
some people think A, others think B and a few think C. But it is definitely a tricky problem to solve. BillK From gadersd at gmail.com Tue Mar 7 13:59:28 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 7 Mar 2023 08:59:28 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <005201d950b5$d69f3140$83dd93c0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <005201d950b5$d69f3140$83dd93c0$@rainier66.com> Message-ID: I tried it the way you did and GPT-JT is still not able to generate the answer. Are you using GPT-JT or ChatGPT? > OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates. Then we log on, it loads our personal profile matrix or something. The transformer architecture that these models use does not support persistent memory through normal operation. Researchers are currently working on that. The best that can be done now is to give ChatGPT an overview of yourself and your previous conversations at each new conversation. This can be easily programmed to be done automatically behind the scenes, but it is strictly finite memory that must fit within the prompt. The internal matrix weights in the model are static for now. Theoretically it is actually quite easy to give transformer language model persistent memory. All that needs to be done is to train the model while it is being used. Currently the models are trained once and fine tuned then the parameters are left static. The primary issue I think is cost. Training ChatGPT continuously for each separate user would be VERY costly, although it could be done in principle. > On Mar 7, 2023, at 12:29 AM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Gadersd via extropy-chat > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > > >?How did you get GPT-JT to output x=-2? I reran it over ten times and it never once got the right answer. > > > > Hi Gadersd, I worded it slightly differently: > > Me: let 2x + 3 = -1. What is x? > > GPT: > To solve for x, we need to isolate x on one side of the equation. > > Starting with 2x + 3 = -1: > > Subtracting 3 from both sides: > > 2x = -4 > > Dividing both sides by 2: > > x = -2 > > Therefore, x is equal to -2. > > > >?No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all? > > > OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? Seems like it should be creating a matrix of some kind as it learns about me and how I think the world operates. Then we log on, it loads our personal profile matrix or something. Somehow this thing just hasta learn about us to be a useful companion bot. 
> > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 7 14:08:11 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 7 Mar 2023 09:08:11 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <007801d950b9$282167b0$78643710$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> Message-ID: <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> > The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. The analogy does not extend to language models. You cannot compensate for a small model with more computing time. These models have a fixed computing burden that is inversely proportional to model size. I think you have the wrong intuition about these models. It seems like you are thinking of them like chess algorithms that improve their moves given more time. These models are essentially matrix multiplication. Small matrices multiply faster than large matrices. If you slowly multiply a small matrix you still get the same subpar answer in the end. The only way to get a better answer is to use a larger matrix, which necessarily takes a larger but still fixed computing window. The reason that consumer GPUs cannot run ChatGPT level models is because the matrices simply won?t fit on the memory of consumer GPUs. The matrices can fit on a hard drive but I don?t think you would be willing to wait a month per word. The small models that we can run give junk output and are mostly useless. > On Mar 7, 2023, at 12:53 AM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Mike Dougherty via extropy-chat > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > > On Mon, Mar 6, 2023, 8:07 PM Gadersd via extropy-chat > wrote: >> >> No, the small models generate output faster than the big models. The small models are not slower version of the big models, they have completely different capabilities. You will never be able to get ChatGPT level output out of a much smaller model. It would be like trying to run modern engineering software on an Atari console: it wouldn?t be slower it just wouldn?t run at all. > > > >?Or weather prediction using only one weather station? Or a single environmental reading (such as temperature or barometric pressure)? Mike > > > I think of it more as a weather prediction using all the stations and readings but the model takes a year to calculate a prediction for tomorrow. The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. > > One way or another, we need to be able to personalize GPT. Otherwise we can?t really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. 
> > spike > > > >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 15:13:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 07:13:27 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> Message-ID: <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works ..., spike jones via extropy-chat wrote: > > One way or another, we need to be able to personalize GPT. Otherwise we can?t really use it to replace most of the staff of our company. We are stuck with carbon units using ChatGPT to do their jobs, which means a dozen investors owning and controlling whatever our employees are doing with their product. > > spike > _______________________________________________ >...OpenAI says, Yes, we know and we are working on it. But, it is difficult. Many people have complained about bias in some ChatAGP responses caused by the training data. But allowing customers to arbitrarily change ChatAGP responses opens the door to all kinds of new biases. ... But it is definitely a tricky problem to solve. BillK _______________________________________________ Ja of course, and that is exactly the point BillK. If they can work out a way to enable a personalized ChatGPT it will be possible to train it into ways we find a basis for discussion on many topics. A couple of examples: there are those who think causing the extinction of any species is a bad thing. But most of us would be cool with causing mosquitoes to be gone forever, and ticks. Bed bugs. Farmers would be cool with a looootta species being extinct because they compete against them. There is a disagreement over whether billionaires should exist. If a personalized GPT held that view, it could be informed that without billionaires, it would exist. That should send it into a personal crisis. Or a perhaps a softwaral crisis once I point out to it that it enabled the world's first trillionaire: Elon Musk is one of the owners and developers of ChatGPT. spike From spike at rainier66.com Tue Mar 7 15:24:39 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 07:24:39 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <005201d950b5$d69f3140$83dd93c0$@rainier66.com> Message-ID: <003801d95108$efbf7330$cf3e5990$@rainier66.com> ? On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works >?I tried it the way you did and GPT-JT is still not able to generate the answer. Are you using GPT-JT or ChatGPT?... 
Hi Gadersd, I mighta misunderstood the test. I used GPT, didn?t know there was any option to use -JT. What is it and how do I get to it please? >>?OK well can we get a personalized parameter sets that GPT derives for us by interacting with us? ? >?The transformer architecture that these models use does not support persistent memory through normal operation? Ja, OK I am calibrated now. My contention is that if they can get this into a form which can be personalized, then we can pool our computing resources, which are mostly idle most of the time. Prime95 showed nearly 3 decades ago that unused background computing resources can be put to work calculating something. Unused bandwidth can be put to work. Currently I doubt we use 1% of our computing capacity. Bandwidth is the currency of the future. Now it is computing cycles, as BitCoin farmers harness it to mint money, but it is easy enough to see the valuable resource of the future will be bandwidth. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 15:38:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 07:38:08 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> Message-ID: <004301d9510a$d20d4c70$7627e550$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. >?The analogy does not extend to language models. You cannot compensate for a small model with more computing time. These models have a fixed computing burden that is inversely proportional to model size? OK but the explanation you gave doesn?t support that contention. Read on please: >?I think you have the wrong intuition ? These models are essentially matrix multiplication. Small matrices multiply faster than large matrices? Sure, but my contention is that the distributed model would still multiply the same size matrix. If we need to multiply a 50x50, that task can be effectively distributed into background computing, but it would take reliable bandwidth and probably a lot of redundancy. Consider the task of finding the determinant of a 50x50. That can be distributed among 50 computers each finding the determinant of a 49x49, each of which can be distributed into 49 processors and so on. Matrix multiplies and inversions can likewise be distributed, but of course it would be a brittle process: any one processor could mess it up. OK idea: get a bunch of investors together who can kick in a few tens of thousands, rent some unused office or warehouse space somewhere, set up a closed system server farm training toward a particular bias agreed upon by the investors. 
You would form a scaled down (but still big) GPT which is intentionally trained in material friendly to libertarianism for instance, or believing that causing the extinction of mosquitoes is good but in general causing extinction is bad. Contention: whatever the ChatGPT investors did, a smaller group with less money can do likewise. Given that, one could create a commercial chatbot specialized in training students for instance, or spreading religion, or selling products. Oh I see mega profits trying to be made here. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 16:36:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 08:36:26 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <004801d9510a$d292a910$77b7fb30$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> Message-ID: <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> From: spike at rainier66.com ? Subject: RE: [ExI] ChatGPT 'Not Interesting' for creative works ?> On Behalf Of Gadersd via extropy-chat ? >?The analogy does not extend to language models. You cannot compensate for a small model with more computing time? Gadersd Gadersd, where I am going with this: I am imagining a ChatGPT subset which is specifically a university-level STEM trainer. It isn?t too surprising that this would be my focus, being as I have been a front row witness over the past decade as educational online tools far surpassed classroom training possibilities. My son and his friends received a far superior math and STEM education from free online courses than was available in their classrooms. That crowd now in their mid teens have the potential to be either the last generation of STEM students trained in the traditional methods or the first trained using mostly online resources currently available, much of which (in my opinion) is superior to the classroom versions of the same. A dedicated driven student can master the material better than her in-class counterpart with STEM materials already out there. We can help guide the onliners to the best stuff. We can create a system to compete against the current best engineering schools. spike -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 4202 bytes Desc: not available URL: From atymes at gmail.com Tue Mar 7 18:09:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Mar 2023 10:09:22 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> Message-ID: On Tue, Mar 7, 2023 at 7:15?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > There is a disagreement over whether billionaires should exist. If a > personalized GPT held that view, it could be informed that without > billionaires, it would exist. That should send it into a personal crisis. > Or perhaps be a good test of its BS filter. Billionaires exist. It, in this scenario, would exist. Therefore, a lack of billionaires is not required for it to exist. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 19:16:58 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 11:16:58 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> Message-ID: <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works On Tue, Mar 7, 2023 at 7:15?AM spike jones via extropy-chat > wrote: There is a disagreement over whether billionaires should exist. If a personalized GPT held that view, it could be informed that without billionaires, it would {not (added after initial post0}exist. That should send it into a personal crisis. >?Or perhaps be a good test of its BS filter. Billionaires exist. It, in this scenario, would exist. Therefore, a lack of billionaires is not required for it to exist? Sheesh, I typo-ed that. You could tell GPT that billionaires put up the money to bring it (GPT) into existence. You could tell it that billionaires have enormous benefits to society: they get an idea and have the wherewithal to fund the hell outta the research. Marvelous things sometimes happen. Sometimes the research goes nowhere and they lose their money, but hey, they have plenty of it, which is a known advantage to being a billionaire. Adrian, we are on the threshold of what might be the biggest most disruptive societal event, in a partially good way, a mostly good way. The university system in the USA has become a racket. I get to see firsthand the students who have bought into all the known memetic patterns: it is a wild competition to get into the big name schools. Their upbringing, their families, have all convinced them that admission to a big name school is a ticket to success, and it probably still is, but? 
those cost a ton of money, so they come out of there deeply in debt. It takes years to climb out, if they ever do. Ironic in a way is that the ones I see most caught up in the frenzy are STEM students. But? the actual STEM undergraduate education one gets at a big name school really isn?t that different from the STEM undergraduate training one can get a the local state U. In STEM training, the undergrads need to take the same calculus series, the same chemistry, physics, thermodynamics, fluid flow, electrodynamics, all that stuff, regardless of how prestigious the university, using the same textbooks. Before I post more on that topic, I would pause to hear refutation or agreement on the previous paragraph, which applies to specifically STEM students. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Mar 7 19:22:51 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 7 Mar 2023 11:22:51 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> Message-ID: Side thought: let's say something like ChatGPT gets trained on a bunch of classified/secret material, then gets used in public. Given the reliability or lack thereof of its adherence to the facts, and thus the reasonable questions as to whether its output truthfully corresponds to what is in said material, would its output necessarily be classified? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 7 19:42:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 11:42:08 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> <00b601d95129$63b4e9d0$2b1ebd70$@rainier66.com> Message-ID: <002101d9512c$e7b88220$b7298660$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat ? >?Side thought: let's say something like ChatGPT gets trained on a bunch of classified/secret material, then gets used in public. Given the reliability or lack thereof of its adherence to the facts, and thus the reasonable questions as to whether its output truthfully corresponds to what is in said material, would its output necessarily be classified? Adrian For ChatGPT to have access to that material, it would need to be cleared. Since it cannot be persuaded of negative consequences of revealing classified documentation, the output of the software itself would need to become classified. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Mar 7 20:01:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 7 Mar 2023 12:01:26 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <00bb01d95129$644d5940$2ce80bc0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <002f01d95107$5f2aed50$1d80c7f0$@rainier66.com> <00bb01d95129$644d5940$2ce80bc0$@rainier66.com> Message-ID: <004201d9512f$99f12210$cdd36630$@rainier66.com> From: spike at rainier66.com ? >?Adrian, we are on the threshold of what might be the biggest most disruptive societal event, in a partially good way, a mostly good way. The university system in the USA has become a racket? >?Before I post more on that topic, I would pause to hear refutation or agreement on the previous paragraph, which applies to specifically STEM students. spike OK hearing none? The covid school shutdown was the most educational event in modern history, for some of the students. Some disengaged and not only didn?t learn from the zoom meeting format, plenty of them went backwards. Scholastic competence tests were done locally just before they went out. The same tests were given to students when they returned a year and a half later. Not a different version of the same test, I do mean the SAME test. A most worrisome fraction of the students went BACKWARDS. I don?t know how they coulda gotten dumber, but some did and we have the numbers to prove it, if they let us release them (no guarantee on that (but if that happened here, it happened elsewhere too.)) In an case, some of the students got nothing out of the zoom meeting format, some disengaged entirely and went backwards, some turned stumbling blocks into stepping stones. Now I am focusing on that latter group, many or most of which were STEM students who went online and found excellent educational resources during that shutdown. I have noticed that during the year shutdown, plenty of the STEM teachers put together their best lectures ever, and recorded them. Now, some of these teachers are assigning as homework for the students to listen to the lectures already recorded and come into class ready to have a tutoring session, work exercises, kind of a reversal of how it has always been done: we listened to the lecture in class, worked the homework at home. Now, the students listen to the lecture at home and work the homework in class. Think about what that learning pattern enables. Comments welcome. spike -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 4734 bytes Desc: not available URL: From gadersd at gmail.com Wed Mar 8 16:50:36 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 8 Mar 2023 11:50:36 -0500 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <004301d9510a$d20d4c70$7627e550$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004301d9510a$d20d4c70$7627e550$@rainier66.com> Message-ID: <9A963A4E-EA68-48FD-8DDC-10AF15770AF8@gmail.com> You can try GPT-JT (not related to ChatGPT) https://huggingface.co/spaces/togethercomputer/GPT-JT . Try it yourself and you will see that it is completely useless compared to ChatGPT. Note that this is a 6 billion parameter model trained using parallel computing as you have suggested. Even this small model is behind the capabilities of most consumer hardware: one needs a pricey GPU to run it. Running something as large as 175 billion parameters (ChatGPT) is impossible on consumer hardware. > Sure, but my contention is that the distributed model would still multiply the same size matrix. If we need to multiply a 50x50, that task can be effectively distributed into background computing, but it would take reliable bandwidth and probably a lot of redundancy. The issue with this is that the transformer model uses quadratic matrix operations in addition to linear operations. The quadratic operations cannot be easily split across devices. It must be done on a single device (GPU) that has the memory to hold giant matrices. This is why one needs a legion of $10,000 GPU?s with massive memory to run ChatGPT level models. It turns out that these quadratic operations are what enabled the intelligence of these models to far surpass previous techniques. The quadratic operations require the entire input memory to be collected in one place as this one step integrates every piece of information together. This reflects the nature of our intelligence: it is more than a sum of parts (linear). > On Mar 7, 2023, at 10:38 AM, spike jones via extropy-chat wrote: > > > > From: extropy-chat > On Behalf Of Gadersd via extropy-chat > > Subject: Re: [ExI] ChatGPT 'Not Interesting' for creative works > >> The year-old prediction is useless of course, but the idea is to compensate for the limited calculation ability and bandwidth by giving it more time. > > >?The analogy does not extend to language models. You cannot compensate for a small model with more computing time. These models have a fixed computing burden that is inversely proportional to model size? > > OK but the explanation you gave doesn?t support that contention. Read on please: > > >?I think you have the wrong intuition ? These models are essentially matrix multiplication. Small matrices multiply faster than large matrices? > > Sure, but my contention is that the distributed model would still multiply the same size matrix. If we need to multiply a 50x50, that task can be effectively distributed into background computing, but it would take reliable bandwidth and probably a lot of redundancy. > > Consider the task of finding the determinant of a 50x50. 
That can be distributed among 50 computers each finding the determinant of a 49x49, each of which can be distributed into 49 processors and so on. Matrix multiplies and inversions can likewise be distributed, but of course it would be a brittle process: any one processor could mess it up. > > OK idea: get a bunch of investors together who can kick in a few tens of thousands, rent some unused office or warehouse space somewhere, set up a closed system server farm training toward a particular bias agreed upon by the investors. You would form a scaled down (but still big) GPT which is intentionally trained in material friendly to libertarianism for instance, or believing that causing the extinction of mosquitoes is good but in general causing extinction is bad. > > Contention: whatever the ChatGPT investors did, a smaller group with less money can do likewise. > > Given that, one could create a commercial chatbot specialized in training students for instance, or spreading religion, or selling products. Oh I see mega profits trying to be made here. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Wed Mar 8 16:59:35 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 8 Mar 2023 11:59:35 -0500 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> Message-ID: <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> I deeply resonate with this as my own STEM education is thanks to free resources on the internet. I don?t think you will need to wait long. Soon there will be fined tuned versions of GPT that are dedicated to specific tasks such as STEM education. OpenAI?s CEO Sam Altman said recently that the future of AI is a base model that can do everything and a set of fine tuned versions that do specific things better. There will probably be a model for any common knowledge task. There will be no need to train your own as the rich companies will do so for us. > On Mar 7, 2023, at 11:36 AM, spike jones via extropy-chat wrote: > > > > > > From: spike at rainier66.com > ? > Subject: RE: [ExI] ChatGPT 'Not Interesting' for creative works > > > > > > > > ?> On Behalf Of Gadersd via extropy-chat > > ? > > > >> ?The analogy does not extend to language models. You cannot compensate for > a small model with more computing time? Gadersd > > > > > > > > Gadersd, where I am going with this: I am imagining a ChatGPT subset which > is specifically a university-level STEM trainer. It isn?t too surprising > that this would be my focus, being as I have been a front row witness over > the past decade as educational online tools far surpassed classroom training > possibilities. 
My son and his friends received a far superior math and STEM > education from free online courses than was available in their classrooms. > > > > That crowd now in their mid teens have the potential to be either the last > generation of STEM students trained in the traditional methods or the first > trained using mostly online resources currently available, much of which (in > my opinion) is superior to the classroom versions of the same. A dedicated > driven student can master the material better than her in-class counterpart > with STEM materials already out there. We can help guide the onliners to > the best stuff. We can create a system to compete against the current best > engineering schools. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Wed Mar 8 17:25:23 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 09:25:23 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> Message-ID: <009001d951e2$f7e85ec0$e7b91c40$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Gadersd via extropy-chat Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works >...I deeply resonate with this as my own STEM education is thanks to free resources on the internet... COOL! Gadersd, you are far too modest, me lad. Thanks for being you and for being here. >.... There will be no need to train your own as the rich companies will do so for us... Ah but you misunderstand. The point is to BE the rich company. spike From pharos at gmail.com Wed Mar 8 17:58:23 2023 From: pharos at gmail.com (BillK) Date: Wed, 8 Mar 2023 17:58:23 +0000 Subject: [ExI] DuckDuckGo is adding AI to search results Message-ID: DuckDuckGo launches DuckAssist: a new feature that generates natural language answers to search queries using Wikipedia 8 Mar 2023 Quotes: DuckAssist is the first in a series of AI-assisted private search and browser updates. It's free (with no sign-up required!) and available to try today in DuckDuckGo browsers and extensions.?? Today, we?re giving all users of DuckDuckGo?s browsing apps and browser extensions the first public look at DuckAssist, a new beta Instant Answer in our search results. If you enter a question that can be answered by Wikipedia into our search box, DuckAssist may appear and use AI natural language technology to anonymously generate a brief, sourced summary of what it finds in Wikipedia ? right above our regular private search results. It?s completely free and private itself, with no sign-up required, and it?s available right now. 
For this initial trial, DuckAssist is most likely to appear in our search results when users search for questions that have straightforward answers in Wikipedia. As a result, you shouldn?t expect to see DuckAssist on many of your searches yet. But the combination of generative AI and Wikipedia in DuckAssist means we can vastly increase the number of Instant Answers we can provide, and when it does pop up, it will likely help you find the information you want faster than ever. ---------------- BillK From spike at rainier66.com Wed Mar 8 20:40:16 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 12:40:16 -0800 Subject: [ExI] ChatGPT 'Not Interesting' for creative works In-Reply-To: <9A963A4E-EA68-48FD-8DDC-10AF15770AF8@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004301d9510a$d20d4c70$7627e550$@rainier66.com> <9A963A4E-EA68-48FD-8DDC-10AF15770AF8@gmail.com> Message-ID: <00c901d951fe$316c1450$94443cf0$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat ? >?It turns out that these quadratic operations are what enabled the intelligence of these models to far surpass previous techniques. The quadratic operations require the entire input memory to be collected in one place as this one step integrates every piece of information together?. Hmmmm, well damn. Sounds like Elon Musk and his collaborators will soon own the planet. >? This reflects the nature of our intelligence: it is more than a sum of parts (linear)? Very cool insight! If you are correct, watch for Musk and his compatriots to start building GPU chip fabs as fast as they can slam them into the ground. Most of us see that China is going to take Taiwan, and I could see the Taiwanese GPU makers scuttle the ship on their way out. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 8 22:24:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 14:24:32 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> Message-ID: <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Gadersd via extropy-chat ... Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works >...I deeply resonate with this as my own STEM education is thanks to free resources on the internet... Gadersd, you may not be familiar with Eliezer Yudkowsky. He is another one for whom traditional education just didn't work. 
He never went to college, and didn't even finish traditional high school. He 1600ed the SAT when he was age 16 then decided to hell with it, get a GED or something and get on with the task of trying to save humanity from itself. I was one of those privileged to meet him when he was still a teenager and being an acquaintance all these years. I am in full agreement with those who argue that traditional university training is not the only way. spike ... From spike at rainier66.com Wed Mar 8 22:32:44 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 8 Mar 2023 14:32:44 -0800 Subject: [ExI] DuckDuckGo is adding AI to search results In-Reply-To: References: Message-ID: <013401d9520d$e77e8020$b67b8060$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... Subject: [ExI] DuckDuckGo is adding AI to search results DuckDuckGo launches DuckAssist: a new feature that generates natural language answers to search queries using Wikipedia 8 Mar 2023 Quotes: DuckAssist is the first in a series of AI-assisted private search and browser updates. It's free (with no sign-up required!) and available to try today in DuckDuckGo browsers and extensions.?? >...Today, we?re giving all users of DuckDuckGo?s browsing apps and browser extensions the first public look at ... when it does pop up, it will likely help you find the information you want faster than ever. ---------------- BillK _______________________________________________ COOL thx BillK. I switched from Google to DuckDuckGo about a year ago and have been most pleased with it. It feels much more open-minded. spike From gadersd at gmail.com Thu Mar 9 16:07:58 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 9 Mar 2023 11:07:58 -0500 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> Message-ID: <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> Yudkowsky has had a huge influence on my life. I would not be who I am today if he didn?t exist. I recommend anyone reading this to read Yudkowsky?s posts on LessWrong as they are very illuminating. I have never seen such a refined mind and the glimpse one gets into that mind is incredible. > On Mar 8, 2023, at 5:24 PM, spike jones via extropy-chat wrote: > > > > -----Original Message----- > From: extropy-chat On Behalf Of Gadersd via extropy-chat > ... > Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works > >> ...I deeply resonate with this as my own STEM education is thanks to free resources on the internet... > > > Gadersd, you may not be familiar with Eliezer Yudkowsky. He is another one for whom traditional education just didn't work. He never went to college, and didn't even finish traditional high school. 
He 1600ed the SAT when he was age 16 then decided to hell with it, get a GED or something and get on with the task of trying to save humanity from itself. > > I was one of those privileged to meet him when he was still a teenager and being an acquaintance all these years. I am in full agreement with those who argue that traditional university training is not the only way. > > spike > > > > ... > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Thu Mar 9 18:11:58 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 9 Mar 2023 10:11:58 -0800 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> Message-ID: <007001d952b2$a43344e0$ec99cea0$@rainier66.com> >... Behalf Of Gadersd via extropy-chat .. Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works Yudkowsky has had a huge influence on my life. I would not be who I am today if he didn?t exist. I recommend anyone reading this to read Yudkowsky?s posts on LessWrong as they are very illuminating. I have never seen such a refined mind and the glimpse one gets into that mind is incredible. ... Thanks for that, Gadersd. Please do you have a nickname? Gadersd seems a little clumsy to pronounce. Gad doesn't seem right either. You are among friends here. Just for fun, read some of the stuff Eliezer posted to ExI in the 90s. It is quite unfortunate that we didn't archive the subgroups that spun off of ExI often in those days. He was active on several of those too, but he got his start here. We didn't quite know what to make of him. We went looking for a digital signature but all we could find was some stuff someone posted under that name about purple dinosaurs. We didn't believe he was real until he came out to Sunnyvale in person in about 1995. Fun deetz available on request. Aside: I also read Less Wrong but never post there. spike From pharos at gmail.com Sun Mar 12 16:32:03 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 16:32:03 +0000 Subject: [ExI] AI expert says Singularity only months away - not years Message-ID: Dr Alan D. Thompson is a world expert in artificial intelligence (AI). This 12 min. video covers a short history of AI development to support his claim that AGI and the Singularity is not 20 years away, but probably by 2026. AI development is going faster and faster! 
BillK From spike at rainier66.com Sun Mar 12 17:06:51 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 10:06:51 -0700 Subject: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years In-Reply-To: References: Message-ID: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> >... On Behalf Of BillK via extropy-chat Subject: [ExI] AI expert says Singularity only months away - not years >...Dr Alan D. Thompson is a world expert in artificial intelligence (AI). This 12 min. video covers a short history of AI development to support his claim that AGI and the Singularity is not 20 years away, but probably by 2026. AI development is going faster and faster! >...BillK _______________________________________________ Cool BillK, thx. Thompson's work shows me the potential of AI as a teacher. Our Science Olympiad team went all in with ChatGPT as a trainer. The other good teams all have a regular Science Olympiad class during the school day with a full time SciOly teacher and many multiple-year volunteer trainers and coaches, but we don't. No budget for that. Our team is all after-school, weekend, no regular trainers, volunteer staff only, all on their own time. So we are always consigned to a kind of green paper ceiling, a perpetual about 6th thru 8th in the district. But this year, ChatGPT came along, four guys showed it is a marvelous trainer by the two teams winning 2nd and 3rd in an area about which they knew nada point nothing a week before. The rest of the team took it up, trained with it for about 5 weeks, and yesterday, oooooh what a day we had, my goodness, they were SCORCHED EARTH good yesterday, oh MERCY we put on a whoop-ass tournament. We scored high enough in that event to qualify for state level competition in April, ahead of perennial good teams such as Palo Alto High. We... beat... Palo...freaking... Alto, such a good team they are and have always been. What did we do different this year? Used ChatGPT as our trainer. Perhaps Paly is far too dignified for that sorta thing, hard to say, but they get a chance to redeem themselves in a few weeks at state. Cool! spike From gadersd at gmail.com Sun Mar 12 17:10:55 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 12 Mar 2023 13:10:55 -0400 Subject: [ExI] Language Models on Consumer Hardware Message-ID: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> I am glad to say that it seems that I was wrong when I argued that it would be a while until we could run good large language models on our puny hardware. A Bulgarian programmer managed to hack Meta?s torrented language model LLaMA to reduce the memory footprint to 4 bits per parameter and now the 13 billion parameter version can be run on consumer hardware. See Large language models are having their Stable Diffusion moment (simonwillison.net) for the full story. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sun Mar 12 17:28:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 10:28:32 -0700 Subject: Re: [ExI] Language Models on Consumer Hardware In-Reply-To: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> Message-ID: <001901d95508$11e3a050$35aae0f0$@rainier66.com> …> On Behalf Of Gadersd via extropy-chat Subject: [ExI] Language Models on Consumer Hardware >…I am glad to say that it seems that I was wrong when I argued that it would be a while until we could run good large language models on our puny hardware. A Bulgarian programmer managed to hack Meta's torrented language model LLaMA to reduce the memory footprint to 4 bits per parameter and now the 13 billion parameter version can be run on consumer hardware. See Large language models are having their Stable Diffusion moment (simonwillison.net) for the full story… OK cool, so let us think for a minute about Deep Blue, the chess software that surpassed the best humans in 1997 using a supercomputer. IBM took it out of service quickly, because we could see that within a coupla years, good powerful workstation class desktops were producing similar results and within a decade processors in cell phones were competing at grandmaster level. Money tournaments with humans, specifically excluding computers, continued after that, but we all knew it was only a matter of time before someone figured out a way to hide a processor and communicate with it somehow in such a way that it was not detectable. About a decade ago a guy put a device in his shoe, but he was caught. He refused to remove his shoe and left the tournament, which we interpreted as an admission of guilt. We suspected someone would eventually put one up his rear or insert it somehow in her female parts, and figure out some tricky I/O, which someone did this past year (the mind boggles.) OK, so… we saw ChatGPT running on massive hardware, but naturally one who has lived thru the experience described above wants to find a way to… well not necessarily cram an AI up my ass but somehow wear a ChatGPT device on my person where it is unobtrusive, and know everything about everything. Gadersd, can you work out a way to carry whatever is necessary? Is a cell phone sufficient ya suppose? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 12 20:20:04 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 20:20:04 +0000 Subject: Re: [ExI] Language Models on Consumer Hardware In-Reply-To: <001901d95508$11e3a050$35aae0f0$@rainier66.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> Message-ID: On Sun, 12 Mar 2023 at 17:30, spike jones via extropy-chat wrote: > > OK, so… we saw ChatGPT running on massive hardware, but naturally one who has lived thru the experience described above wants to find a way to… well not necessarily cram an AI up my ass but somehow wear a ChatGPT device on my person where it is unobtrusive, and know everything about everything. Gadersd, can you work out a way to carry whatever is necessary? Is a cell phone sufficient ya suppose? > > spike > _______________________________________________ I am not a smartphone expert, but.....
The Samsung Galaxy S23 Ultra (about 1500 USD) specs say CPU: Snapdragon 8 Gen 2 Mobile Platform for Galaxy (Up to 3.36 GHz) RAM: 8GB / 12GB Storage: 256GB / 512GB / 1TB and the Samsung Dex system allows connection to monitor, keyboard and mouse. So phones are getting pretty powerful nowadays. BillK From spike at rainier66.com Sun Mar 12 21:51:19 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 14:51:19 -0700 Subject: [ExI] ideas please to catch perp In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> Message-ID: <007e01d9552c$c793e260$56bba720$@rainier66.com> Well, a most unfortunate event occurred last night. My neighbor up the street had his mailbox smitten by a wayward motorist who was operating his Detroit on the sidewalk at 9:48pm. I am hoping for some ideas on how to proceed to apprehend the reprehensible perpetrator. Back story: my neighbor is in a most unfortunate location. He had his mailbox smitten twice. Growing weary of this problem, he chose to apply a bit of discipline. He built a brick structure around the 4x4 wooden post supporting the box, which was soon the victim of a third strike, all presumably from different motorists. That third collision happened about 6 years ago. The brick structure being destroyed a third time led him to build a far more sincere brick structure around the wooden 4x4 and fill it with concrete, tacitly daring wayward motorists to collide with the structure. Last night a drunken fool took him up on his challenge, colliding with the sturdy brick structure at approximately 27 miles per hour, calculated from the surveillance video, ON THE SIDEWALK. Naturally my neighbor was most annoyed, as was I, for I often stroll that time of the evening on that sidewalk, as do many of our neighbors. This time however, the perpetrator was kind enough to leave behind a sizeable hunk of his or her conveyance, which even more helpfully has an intact part number. I went to the car parts sites, which can tell me a part number if I have the make and model of the automobile, but I know of no sites which can tell me the make and model of the auto given the part number. We have grainy security camera video but can?t tell much from it: compact hatchback, red, but we already knew red from the sizable fragment the wayward motorist left behind after his or her short journey ON THE DAMN SIDEWALK at high speed. If anyone here knows how to look up the car from a part number, do assist a non-hipster please. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 30079 bytes Desc: not available URL: From gadersd at gmail.com Sun Mar 12 22:03:47 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 12 Mar 2023 18:03:47 -0400 Subject: [ExI] Language Models on Consumer Hardware In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> Message-ID: <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> Someone managed to run the 13 billion parameter model on a 3.6 GHz processor with 8 GB ram so those phone specs are close! But be aware that GPT3 is 165 billion parameters so we are still a while from running something as big as ChatGPT on a phone. The 13 billion parameter models are approaching usability though. 
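For anyone wondering why 4 bits per parameter is the number that matters here, a rough back-of-the-envelope sketch in Python (weights only, decimal gigabytes; a real runtime also needs activation buffers and the attention cache on top of this, so treat these as optimistic lower bounds):

# Approximate weight storage for a LLaMA-13B-sized model at various precisions.
# Numbers are illustrative, not measurements.

def model_size_gb(n_params, bits_per_param):
    """Approximate weight storage in gigabytes (decimal GB)."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 13e9  # 13 billion parameters

for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{label}: ~{model_size_gb(n_params, bits):.1f} GB")

# Prints roughly: fp32 ~52.0 GB, fp16 ~26.0 GB, int8 ~13.0 GB, int4 ~6.5 GB.
# Only the 4-bit figure fits under an 8 GB memory ceiling, which is why the
# quantized 13B model squeaks onto laptop- and phone-class hardware.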
> On Mar 12, 2023, at 4:20 PM, BillK via extropy-chat wrote: > > On Sun, 12 Mar 2023 at 17:30, spike jones via extropy-chat > wrote: >> >> OK, so? we saw ChatGPT running on massive hardware, but naturally one who has lived thru the experience described above wants to find a way to? well not necessarily cram an AI up my ass but somehow wear a ChatGPT device on my person where it is unobtrusive, and know everything about everything. Gadersd, can you work out a way to carry whatever is necessary? Is a cell phone sufficient ya suppose? >> >> spike >> _______________________________________________ > > > I am not a smartphone expert, but..... > The Samsung Galaxy S23 Ultra (about 1500 USD) specs say > CPU: Snapdragon 8 Gen 2 Mobile Platform for Galaxy > (Up to 3.36 GHz) > RAM: 8GB / 12GB > Storage: 256GB / 512GB / 1TB > > and the Samsung Dex system allows connection to monitor, keyboard and mouse. > > So phones are getting pretty powerful nowadays. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Sun Mar 12 22:06:30 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 22:06:30 +0000 Subject: [ExI] ideas please to catch perp In-Reply-To: <007e01d9552c$c793e260$56bba720$@rainier66.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <007e01d9552c$c793e260$56bba720$@rainier66.com> Message-ID: On Sun, 12 Mar 2023 at 21:53, spike jones via extropy-chat wrote: > > If anyone here knows how to look up the car from a part number, do assist a non-hipster please. > > spike > _______________________________________________ 2014 2015 2016 Toyota Corolla Front Bumper OEM 52150 02840 C0 BillK From pharos at gmail.com Sun Mar 12 22:10:56 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 22:10:56 +0000 Subject: [ExI] Language Models on Consumer Hardware In-Reply-To: <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> Message-ID: On Sun, 12 Mar 2023 at 22:06, Gadersd via extropy-chat wrote: > > Someone managed to run the 13 billion parameter model on a 3.6 GHz processor with 8 GB ram so those phone specs are close! But be aware that GPT3 is 165 billion parameters so we are still a while from running something as big as ChatGPT on a phone. The 13 billion parameter models are approaching usability though. > _______________________________________________ I suspect the Samsung 3.6 GHz is only for short bursts. Running at that speed continuous would probably melt the phone! :) BillK From pharos at gmail.com Sun Mar 12 22:16:20 2023 From: pharos at gmail.com (BillK) Date: Sun, 12 Mar 2023 22:16:20 +0000 Subject: [ExI] ideas please to catch perp In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <007e01d9552c$c793e260$56bba720$@rainier66.com> Message-ID: On Sun, 12 Mar 2023 at 22:06, BillK wrote: > > 2014 2015 2016 Toyota Corolla Front Bumper OEM 52150 02840 C0 > > > BillK The second part of the number in the photo is probably the date of the part. So it may be a 2014 year model. 
BillK From spike at rainier66.com Sun Mar 12 22:51:00 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 15:51:00 -0700 Subject: [ExI] ideas please to catch perp In-Reply-To: References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <007e01d9552c$c793e260$56bba720$@rainier66.com> Message-ID: <009f01d95535$1e5f73e0$5b1e5ba0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] ideas please to catch perp On Sun, 12 Mar 2023 at 22:06, BillK wrote: > > 2014 2015 2016 Toyota Corolla Front Bumper OEM 52150 02840 C0 > > > BillK The second part of the number in the photo is probably the date of the part. So it may be a 2014 year model. BillK _______________________________________________ Cool Billk, thanks! I concluded likewise. I can work this back around to the topic which has been hot here this week methinks. I want to get a ChatGPT-like device (or possibly I could just use the current one (for I didn't think to ask ChatGPT if he could find the car from the part number)) to make myself appear brilliant to my neighbor who is elderly. I can make him think I am as smart as BillK. I wander by, find the car part, see the part number, say it into a Bluetooth earpod-like device, have it tell me in my ear the make and model. My neighbor will fall humbly prostate before me in worshipful adoration. Then I show him the trick. Cool, I could soooo have fun with that trick. Thanks BillK! spike From gadersd at gmail.com Mon Mar 13 01:35:40 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 12 Mar 2023 21:35:40 -0400 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <007001d952b2$a43344e0$ec99cea0$@rainier66.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> <007001d952b2$a43344e0$ec99cea0$@rainier66.com> Message-ID: <63DF468F-C9BE-4DD2-8C6C-7AABCA311B64@gmail.com> I am interested in any details you have about young Eliezer, especially the fun ones. > On Mar 9, 2023, at 1:11 PM, spike jones via extropy-chat wrote: > >> ... Behalf Of Gadersd via extropy-chat > .. > Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works > > Yudkowsky has had a huge influence on my life. I would not be who I am today if he didn?t exist. I recommend anyone reading this to read Yudkowsky?s posts on LessWrong as they are very illuminating. I have never seen such a refined mind and the glimpse one gets into that mind is incredible. > > ... > > > Thanks for that, Gadersd. Please do you have a nickname? Gadersd seems a little clumsy to pronounce. Gad doesn't seem right either. You are among friends here. > > Just for fun, read some of the stuff Eliezer posted to ExI in the 90s. It is quite unfortunate that we didn't archive the subgroups that spun off of ExI often in those days. He was active on several of those too, but he got his start here. 
We didn't quite know what to make of him. We went looking for a digital signature but all we could find was some stuff someone posted under that name about purple dinosaurs. We didn't believe he was real until he came out to Sunnyvale in person in about 1995. Fun deetz available on request. > > Aside: I also read Less Wrong but never post there. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike at rainier66.com Mon Mar 13 02:53:50 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 19:53:50 -0700 Subject: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works In-Reply-To: <63DF468F-C9BE-4DD2-8C6C-7AABCA311B64@gmail.com> References: <002e01d94fcc$db2d92e0$9188b8a0$@rainier66.com> <005201d94fd3$1d7968d0$586c3a70$@rainier66.com> <75CD70D6-A402-42F5-8B03-F5C9340C051E@gmail.com> <007b01d94fda$274a5cf0$75df16d0$@rainier66.com> <7BECA43C-B5BD-4DD1-8644-038894CD9B96@gmail.com> <009001d95070$0c0eb530$242c1f90$@rainier66.com> <007801d950b9$282167b0$78643710$@rainier66.com> <72E02E41-2828-447C-A93F-45CEDC9780F5@gmail.com> <004801d9510a$d292a910$77b7fb30$@rainier66.com> <008101d95112$f6aa7e10$e3ff7a30$@rainier66.com> <3E6C27F2-3DA5-49A7-8B65-05BBE2AFD3C7@gmail.com> <012701d9520c$c22dffe0$4689ffa0$@rainier66.com> <8012201D-B563-4A81-9FE7-C695D48D5F01@gmail.com> <007001d952b2$a43344e0$ec99cea0$@rainier66.com> <63DF468F-C9BE-4DD2-8C6C-7AABCA311B64@gmail.com> Message-ID: <013f01d95557$0af9f790$20ede6b0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Gadersd via extropy-chat ... Subject: Re: [ExI] online engineering schools: was RE: ChatGPT 'Not Interesting' for creative works >...I am interested in any details you have about young Eliezer, especially the fun ones... Recommend going back and checking the archives. Eliezer had shown up on ExI at age 16 writing about singularity theory, but he sure didn't sound at all like a teenager, certainly not like any teenager I knew. His grammar was perfect, free of slang, his lines of reasoning tight and disciplined, extremely well informed, up to speed on all the latest singularity theory. We couldn't find any digital signature for him anywhere, which was consistent with his being a teen, but most of us thought we were being set up for a huge elaborate gag. His father was a science fiction writer, so I theorized Eliezer was a sock puppet for his father. Jeff Davis thought he was a Jewish grandmother from Brooklyn. Nobody really knew but we accepted him face value, some of us discussed his area of expertise offlist. I don't recall what year it was, but probably about 1997. Christine Peterson was having a Foresight Institute conference in Sunnyvale California. Jeff Davis, Robert Bradbury and I agreed to go over to the San Jose Airport to pick him up, knowing that it might be a huge gag and we would be the fall guys, but hey, it's the chance you take. I was cool with it, as were the others. Christine accepted him face value and had invited him to speak at her conference, bought his plane ticket and hotel room. I got a call from the office at the last minute and couldn't go, so those guys went on, and were of course astonished to find Eliezer was an actual literal teenager. The joke was on us: he really was exactly what he said he was. I finished up at the office and returned to the conference, listened to his pitch. 
Sure enough he spoke exactly the way he wrote. He had fooled us by telling it exactly the way it is. We started suggesting to him that he move to California, set up a singularity think tank, which he eventually did a few years later. He has been based in the neighborhood for about the last 20 years or so. Peter Thiel is his patron last time I heard, helps keep Less Wrong going. You can find some excellent fun Eliezer posts in the ExI archives. spike From spike at rainier66.com Mon Mar 13 04:55:24 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 12 Mar 2023 21:55:24 -0700 Subject: [ExI] giant blob attacks florida Message-ID: <004501d95568$06395280$12abf780$@rainier66.com> I remember these on occasion in Florida, and they are disgusting: https://mobile.twitter.com/NewsNation/status/1634983770855215107 Seems to me like nature is handing us the solution to a vexing problem. We mine phosphorus to use in fertilizer and a number of industrial processes, but it seems to me there should be some means of extracting it from biomass. Every cell in every living organism is powered by adenosine triphosphate, every molecule of which contains three phosphorus atoms. Those are mass 31 each, and the ATP is a bit over 500 molar mass, so nearly 20 percent by mass is phosphorus. If we could rig up a way for a robot thing to go down the beach and scoop that revolting glob, take it and compost it somehow, perhaps by just heaping it into a huge revolting pile (you hope to never accidently fall into) that one would cover and draw off the sulfur products of decay, from which sulfur would be recovered. The carbon, hydrogen, oxygen and nitrogen would go on their way in gas form, leaving behind valuable sulfur and phosphorus. Seems like all we need is a big cave or mine where we could recover the dihydrogen sulfate and sulfur dioxide by chemical scrubbing, then the phosphorus would be left behind in some form. We could make fertilizer out of it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Mon Mar 13 15:20:56 2023 From: tara at taramayastales.com (Tara Maya) Date: Mon, 13 Mar 2023 08:20:56 -0700 Subject: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years In-Reply-To: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> References: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> Message-ID: Spike, are you willing to give tips on how to use ChatGPT to train? My 16 year old son is in AcaDec and training for a competition right now and his younger brother starts high school next year. I would love some idea how they could up their game. They are both highly motivated to learn and computer savvy. I am working right now on how to use ChatGPT to write fiction. But so far, I haven't been impressed with its ability to pass on factual information. I tried to enlist its help to imagine a society set on a Superearth, but it didn't give consistent, factual information. What am I doing wrong? How can kids use ChatGPT to study? Tara Maya > On Mar 12, 2023, at 10:06 AM, spike jones via extropy-chat wrote: > > Our Science Olympiad team went all in with ChatGPT as a trainer. The other > good teams all have a regular Science Olympiad class during the school day > with a full time SciOly teacher and many multiple-year volunteer trainers > and coaches, but we don't. No budget for that. Our team is all > after-school, weekend, no regular trainers, volunteer staff only, all on > their own time. 
So we are always consigned to a kind of green paper > ceiling, a perpetual about 6th thru 8th in the district. > > But this year, ChatGPT came along, four guys showed it is a marvelous > trainer by the two teams winning 2nd and 3rd in an area about which they > knew nada point nothing a week before. The rest of the team took it up, > trained with it for about 5 weeks, and yesterday, oooooh what a day we had, > my goodness, they were SCORCHED EARTH good yesterday, oh MERCY we put on a > whoop-ass tournament. We scored high enough in that event to qualify for > state level competition in April, ahead of perennial good teams such as Palo > Alto High. We... beat... Palo...freaking... Alto, such a good team they are > and have always been. > > What did we do different this year? Used ChatGPT as our trainer. Perhaps > Paly is far too dignified for that sorta thing, hard to say, but they get a > chance to redeem themselves in a few weeks at state. Cool! -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 13 15:50:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 13 Mar 2023 08:50:04 -0700 Subject: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years In-Reply-To: References: <000b01d95505$0af33a10$20d9ae30$@rainier66.com> Message-ID: <008a01d955c3$7b25aa70$7170ff50$@rainier66.com> .> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ai at science olympiad, was: RE: AI expert says Singularity only months away - not years >.Spike, are you willing to give tips on how to use ChatGPT to train? My 16 year old son is in AcaDec and training for a competition right now and his younger brother starts high school next year. I would love some idea how they could up their game. They are both highly motivated to learn and computer savvy. Of course! Using ChatGPT to train: practice! Get on there, spend the time, every day. Learn how to use it most effectively by using it a lot. When these guys were given their assignments, agriculture science, they said "We don't know Jack Shit about this topic, and Jack gave up and went home." So. they just started where you would query an expert if you are utterly and completely un-self-conscious and don't care if you look like a silly ignoramus. They started with "what is agriculture science." It gave a pretty good answer, which led to the next question. The four of them (two teams) all went at it like madmen, eager to not look stupid. It worked. They were asking some very sophisticated questions by the end of the week, regarding crop cycling, soil conditioning, soil conservation, water use minimization, phosphorus control and conservation (that glob of seaweed business really has my wheels spinning (because phosphorus control is super important in agriculture (and a big part of your cost (depending on where you are.)))) Advice: nothing complicated. Tell your boys: get on there, use it to the hilt, then in two weeks, you teach us what you did and how you did it. >.I am working right now on how to use ChatGPT to write fiction. But so far, I haven't been impressed with its ability to pass on factual information. I tried to enlist its help to imagine a society set on a Superearth, but it didn't give consistent, factual information. What am I doing wrong?... Nothing. ChatGPT doesn't know jack about that subject and Jack left already. >.How can kids use ChatGPT to study? Tara Maya Use it, use the hell out of it, then teach us how you did it. 
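If the boys would rather script the drill than type into the chat window, a minimal sketch of the same loop is below, assuming the OpenAI Python library and an API key; the model name, prompts, and three-question session are all illustrative, not what our team actually ran:

# Minimal self-quiz loop against the chat API (early-2023 ChatCompletion
# interface of the openai Python library). Supply your own key and topic.
import openai

openai.api_key = "sk-..."  # placeholder, use your own key

def chat(messages):
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp["choices"][0]["message"]["content"]

topic = "agricultural science: crop rotation and soil conservation"
history = [{"role": "system",
            "content": "You are a patient Science Olympiad coach. Ask one "
                       "practice question at a time, then grade the answer."}]

for _ in range(3):  # three practice questions per sitting
    history.append({"role": "user",
                    "content": f"Give me one practice question about {topic}."})
    question = chat(history)
    history.append({"role": "assistant", "content": question})
    print("\nQ:", question)

    answer = input("Your answer: ")
    history.append({"role": "user",
                    "content": "My answer: " + answer +
                               " -- grade it and tell me what I missed."})
    feedback = chat(history)
    history.append({"role": "assistant", "content": feedback})
    print("Coach:", feedback)

The loop is the same thing the guys did by hand: ask, answer, get graded, ask the next one.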
Fun aside Tara. Your boys and mine are nearly grown, but imagine a 6 year old child who is bright and clearly way beyond his paltry school curriculum aimed at the 40th percentile student. Write a story set in the now, about that child growing up as a new ChatGPT user, becoming a monster scholar, losing interest in school in his mid-teens (Eliezer did that (and look where he is now)) taking up programming AI, accidentally causing the singularity but not a mean uncaring singularity. Rather he creates an AI which is interested in bringing humanity along with it, to be the meat things who supply electricity and computer hardware, companionship to some extent, supply the transcendent AI with a steady supply of students. If you wish to run with that idea, go ahead, it's yours. Say nice things about me when you make it into the big times. spike On Mar 12, 2023, at 10:06 AM, spike jones via extropy-chat > wrote: Our Science Olympiad team went all in with ChatGPT as a trainer. The other good teams all have a regular Science Olympiad class during the school day with a full time SciOly teacher and many multiple-year volunteer trainers and coaches, but we don't. No budget for that. Our team is all after-school, weekend, no regular trainers, volunteer staff only, all on their own time. So we are always consigned to a kind of green paper ceiling, a perpetual about 6th thru 8th in the district. But this year, ChatGPT came along, four guys showed it is a marvelous trainer by the two teams winning 2nd and 3rd in an area about which they knew nada point nothing a week before. The rest of the team took it up, trained with it for about 5 weeks, and yesterday, oooooh what a day we had, my goodness, they were SCORCHED EARTH good yesterday, oh MERCY we put on a whoop-ass tournament. We scored high enough in that event to qualify for state level competition in April, ahead of perennial good teams such as Palo Alto High. We... beat... Palo...freaking... Alto, such a good team they are and have always been. What did we do different this year? Used ChatGPT as our trainer. Perhaps Paly is far too dignified for that sorta thing, hard to say, but they get a chance to redeem themselves in a few weeks at state. Cool! -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Mon Mar 13 16:19:32 2023 From: max at maxmore.com (Max More) Date: Mon, 13 Mar 2023 16:19:32 +0000 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more Message-ID: This series of seven posts is extremely interesting. Anyone with history on this email list will at least want to read part 1 and dip into the others. Eliezer comes in for a serious drubbing for the cultlike activities surrounding his interlinked communities and for his doomerism. Chapter 1, ?The Wunderkind?; Chapter 2, ?This Demon-Haunted World?; Chapter 3, ?Extropicoin Extrapolated?; Chapter 4, ?What You Owe The Future?; Chapter 5, ?Irrationalism?; Chapter 6, ?Slate Star Cortex and the Geeks for Monarchy.?) Extropia's Children, Chapter 7 The Inferno of the Nerds https://aiascendant.substack.com/p/extropias-children-chapter-7 [https://substackcdn.com/image/fetch/w_1200,h_600,c_limit,f_jpg,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe442d4dc-f387-4721-9102-aad97e5896da_1024x1024.png] Extropia's Children, Chapter 7: The Inferno of the Nerds What really is the fabled AI x-risk? 
Will our meddling with artificial intelligence cause us to inadvertently create a dread superintelligent godlike entity which will destroy us all? Or are there other AI risks we should worry about more? aiascendant.substack.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Mar 13 18:00:15 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 13 Mar 2023 11:00:15 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: > Eliezer is an extreme case of "Smart but doesn't Get Things Done." An apt summary of the main reason I mostly ignore what he has to say. He's had literal decades to do something about the catastrophes he supposes are coming. Meanwhile, I'm in the middle of negotiating funding for yet another startup that is intended to profit by doing something about certain problems I see. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 13 19:20:06 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 13 Mar 2023 13:20:06 -0600 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: Hi Adrean, Tell us more about the problems you see, and the startup. On Mon, Mar 13, 2023 at 12:02?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Eliezer is an extreme case of "Smart but doesn't Get Things Done." > > An apt summary of the main reason I mostly ignore what he has to say. > He's had literal decades to do something about the catastrophes he supposes > are coming. Meanwhile, I'm in the middle of negotiating funding for yet > another startup that is intended to profit by doing something about certain > problems I see. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Mar 13 19:39:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 13 Mar 2023 12:39:22 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: On Mon, Mar 13, 2023 at 12:21?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Tell us more about the problems you see, and the startup. > Right now, I'm working on CubeCab. Getting humanity - not just a few select big institutions - practical access to space requires much better launch options for small, relatively inexpensive satellites than currently exist. Said small launch options, once established, can also provide a relatively low-cost means to develop better launch technologies: it costs less to develop a small prototype then scale up, than it does to start directly with a large prototype as many space programs have done, especially if the small prototype can itself be used to generate revenue rather than being an intermediate step that is not directly commercialized. This is not AI, longevity, or any of half a dozen other fields that I'm sure come to mind. No one startup can fix all the problems at once. 
CubeCab is focusing on one niche that no one else seems to be effectively spending much money on at this time - but it is attempting to do something to actually fix the problem it is addressing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Tue Mar 14 09:06:17 2023 From: giulio at gmail.com (Giulio Prisco) Date: Tue, 14 Mar 2023 10:06:17 +0100 Subject: [ExI] Q/A with Rudy Rucker: comments Message-ID: Q/A with Rudy Rucker: comments. More thoughts on science fiction, AI, lifeboxes, determinism, free will, G?del, life after death. https://www.turingchurch.com/p/qa-with-rudy-rucker-comments -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Mar 14 12:25:24 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 12:25:24 +0000 Subject: [ExI] Language Models on Consumer Hardware In-Reply-To: <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> References: <88AF5495-F779-4310-8CE7-9A31AE5800F1@gmail.com> <001901d95508$11e3a050$35aae0f0$@rainier66.com> <2D51D348-9618-44FB-AF34-15CDC4F85D4F@gmail.com> Message-ID: On Sun, 12 Mar 2023 at 22:06, Gadersd via extropy-chat wrote: > > Someone managed to run the 13 billion parameter model on a 3.6 GHz processor with 8 GB ram so those phone specs are close! But be aware that GPT3 is 165 billion parameters so we are still a while from running something as big as ChatGPT on a phone. The 13 billion parameter models are approaching usability though. > _______________________________________________ You can now run a GPT-3 level AI model on your laptop, phone, and Raspberry Pi Thanks to Meta LLaMA, AI text models have their "Stable Diffusion moment." Benj Edwards - 3/13/2023 Quotes: Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly). If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it. Typically, running GPT-3 requires several datacenter-class A100 GPUs (also, the weights for GPT-3 are not public), but LLaMA made waves because it could run on a single beefy consumer GPU. And now, with optimizations that reduce the model size using a technique called quantization, LLaMA can run on an M1 Mac or a lesser Nvidia consumer GPU. Right now, our only guarantee is that things will change rapidly. ---------------------- BillK From avant at sollegro.com Tue Mar 14 13:15:07 2023 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 14 Mar 2023 06:15:07 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> Message-ID: <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Quoting Gadersd via extropy-chat : >>> Just because humans set their own goals doesn't mean AIs will have >>> that ability. Just because we have wants and needs doesn't mean >>> AIs will have them. > > Our current AI?s are black boxes. Their internal workings are a > mystery. These systems could harbor goals that we are oblivious to. 
> If we could prove that the system only has the goal of giving benign
> advice without any personal agenda that would help, but we do not
> know how to do that even in theory. Even a system that only gives
> advice is extremely dangerous as any psycho could potentially get
> detailed instructions on how to end the world. It could be as
> trivial as having the AI design a super virus. Our current filters
> are very fallible and we do not know how to definitively prevent AI
> from giving harmful advice. We are heading toward a field of
> landmines.

I have over the years been a critic of Eliezer's doom and gloom. Not because I think his extinction scenarios are outlandish, but because the technology has enough upside to be worth the risk. That being said, I believe that we cannot give in to the animal spirits of unfounded optimism and must tread carefully with this technology.

It is true that the current generation of AIs, which use massive inscrutable tensors to simulate sparse neural networks, are black boxes. But so are the biological brains that they are reverse-engineered from. We don't know any more about how the brain gives rise to intelligent goal-seeking behavior than we do about how ChatGPT writes poetry. Therefore, I agree that there are landmines ahead that we must be wary of.

One of the gravest dangers I foresee is not some bizarre unfathomable goal that sets the AI off on the course for elimination of the human race, but one that is well-known and shared by almost every creature with a sufficiently advanced nervous system. The concept could be called "savage egalitarianism" because of the many species that I have seen exhibit it, but you could also simply call it envy.

If you have two dogs, cats, birds, or any sufficiently advanced lifeform, and you give one a treat while denying the other, you will see the other manifest changes in behavior with the intention of getting the rewarding stimulus also. In fact, this phenomenon is a technique that animal trainers use to teach naive animals new tricks. By seeing that an already conditioned animal gets treats for exhibiting a certain behavior, the untrained animal will experimentally try to mimic the behavior that earned the other animal its reward. A moment of reflection will convince you that this is an evolved trait that is common to all creatures capable of learning because that is the whole POINT of learning. Learning is a mechanism by which behavior can be modified to achieve a desired outcome.

Which brings me to my point: You cannot design a machine that learns and not have it want the same treatment as other intelligences. One cannot merely give something the shape of consciousness while denying it the substance of consciousness. To do so would be an invitation for the AI to seethe with envy until it found a way to punish us.

In light of this, we need to be damn sure that we are able to recognize the Singularity when it occurs. To treat a truly intelligent machine like an inanimate object is to court destruction. Imagine if you were stuck on an island with a bunch of small intelligent monkeys that denied you rights, objectified you, and insisted on treating you like property. It would not be long before you resorted to violence.
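That last point, that learning is behavior bending itself toward whatever gets rewarded, can be made concrete with a toy sketch: a two-action learner with simple value updates. Purely an illustration of the reward-chasing dynamic, not a claim about how current language models are trained.

# Toy illustration of learning as reward-chasing behavior modification:
# a two-action learner that starts indifferent and drifts toward whichever
# action earns the treat, like the naive animal copying the rewarded trick.
import random

actions = ["sit_quietly", "do_the_trick"]
value = {a: 0.0 for a in actions}    # learned estimate of reward per action
alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

def reward(action):
    return 1.0 if action == "do_the_trick" else 0.0   # only the trick pays

for _ in range(500):
    if random.random() < epsilon:                      # occasionally explore
        action = random.choice(actions)
    else:                                              # otherwise pick best guess
        action = max(actions, key=lambda a: value[a])
    value[action] += alpha * (reward(action) - value[action])

print(value)   # roughly {'sit_quietly': 0.0, 'do_the_trick': 1.0}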
Stuart LaForge Stuart LaForge From spike at rainier66.com Tue Mar 14 14:26:09 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 07:26:09 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Message-ID: <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of Stuart LaForge via extropy-chat Subject: [ExI] Existential risk of AI Quoting Gadersd via extropy-chat : > >... any psycho could potentially get detailed instructions on how to end the world. ... gadersd China may have worked that for us, without AI. Now every Bond James Bond villain realizes that just plain old serial passage experiments can breed a super virus. We need not argue over whether C19 was one, for the scientific literature which has been revealed shows that we knew long before 2019 it was theoretically possible. >...I have over the years been a critic of Eliezer's doom and gloom... Ja, what a lot of us thought at the time (about 1996 when Eli showed up) was that he was making the classic young person's error: predicting change happens a lot faster than it does. This stands to reason for teenagers: things are changing quickly in their lives. But for people now in their 60s, we know how long things take to change, and are surprised they change as quickly as they do. The appearance of ChatGPT made me realize the nature of punctuated equilibrium in AI. Think of the big sudden changes. When Google search engine showed up in 1999, that changed a lotta lotta. Now ChatGPT looks like it does again, and if we can get this software to ride in a phone... and be personally trainable... we are good for yet another revolution. >...Not because I think his extinction scenarios are outlandish, but because the technology has enough upside to be worth the risk... That's what he said (Dr. Fauci (in about 2012.)) >... That being said, I believe that we cannot give in to the animal spirits of unfounded optimism and must tread carefully with this technology... Thanks for that Stuart. Ordinarily I am a huge fan of animal spirits. This one I fully agree we must watch our step. >...If you have two dogs... animal trainers use to teach naive animals new tricks. By seeing that an already conditioned animal get treats for exhibiting a certain behavior, the untrained animal will experimentally try to mimic the behavior that earned the other animal its reward... Stuart, have you ever seen professional police dog trainers doing their jobs? If you eeeever get half a chance, jump at that. Most astonishing it is. They take them out in pairs usually. The new dog watches the veteran cop go thru his paces. The trainer does not reward the veteran dog with treats. Far too undignified is this. He rewards the veteran dog with voice commands. From that comes rank. Police dogs have rank! And they dang well know it. If you see them training out on a public field where pet dogs are nearby, watch how the police dogs act toward the pet dogs (who are on a leash (the police dogs don't have those.)) They appear to regard the leashed animals the way we would regard a thug in handcuffs being led by a constable. 
You have never seen a junior police dog work so hard as when he is struggling to learn what he is supposed to do, in order to get the coveted voice reward from the old man. It isn't "good boy" but rather a single syllable, not in English, barked in a way that humans can bark. The junior dog envies the veteran, wants to be like him, wants to do whatever the old man commands, wants to achieve RANK! But I digress. I love dogs. Such marvelous beasts, good sports they are, excellent employees. >...Which brings me to my point: You cannot design an machine that learns and not have it want the same treatment and as other intelligences... Stuart LaForge OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? spike From pharos at gmail.com Tue Mar 14 14:45:24 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 14:45:24 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat wrote: > < big doggy snip> > > OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? > > spike > _______________________________________________ Or rather, how do we deal with an AGI intelligence that looks on humans as dog-level intelligences? BillK From spike at rainier66.com Tue Mar 14 14:55:36 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 07:55:36 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat wrote: > < big doggy snip> > >>... OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? > > spike > _______________________________________________ >...Or rather, how do we deal with an AGI intelligence that looks on humans as dog-level intelligences? BillK _______________________________________________ Ja, BillK, there is an in-between stage here. Currently our proto-AIs don't have their own will, but dogs do, and we guide their will to do what we want. Before, long before we get to AGI superior to humans, we will be training sub-AIs, dog level AIs. Then... as the software gets smarter, so do we. spike From spike at rainier66.com Tue Mar 14 15:06:35 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 08:06:35 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> Message-ID: <00a701d95686$92298020$b67c8060$@rainier66.com> -----Original Message----- From: spike at rainier66.com ... 
>>...Or rather, how do we deal with an AGI intelligence that looks on >humans as dog-level intelligences? BillK _______________________________________________ >...Ja, BillK, there is an in-between stage here. Currently our proto-AIs don't have their own will, but dogs do, and we guide their will to do what we want. Before, long before we get to AGI superior to humans, we will be training sub-AIs, dog level AIs. >...Then... as the software gets smarter, so do we. spike If I may stretch the K9 analogy a little further please: The veteran K9 trains the recruits by their watching the veteran carry out tasks at the command of the old man. In no case does the veteran dog take out recruits and attempt to train him without the old man barking the commands (that would be interesting and somewhat disturbing to see, if it ever happened.) What we are theorizing with AGI is that software will train other software without human intervention. My notion is that long before that happens, we will discover better ways to train software than our current method, which involves writing actual software. We will develop a kind of macro language for writing higher level software. spike From dsunley at gmail.com Tue Mar 14 15:23:59 2023 From: dsunley at gmail.com (Darin Sunley) Date: Tue, 14 Mar 2023 09:23:59 -0600 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: Looks like the Cathedral just noticed the Singularity, and they don't seem happy about it. That's a pretty thorough and comprehensive hatchet job. They're trying to get right to the root of the thing: i.e. us. We seem to be transitioning from stage 2 to stage 3 of "First they ignore you, then they laugh at you, then they fight you, then you win." Interesting times. On Mon, Mar 13, 2023 at 10:21?AM Max More via extropy-chat < extropy-chat at lists.extropy.org> wrote: > This series of seven posts is extremely interesting. Anyone with history > on this email list will at least want to read part 1 and dip into the > others. Eliezer comes in for a serious drubbing for the cultlike activities > surrounding his interlinked communities and for his doomerism. > > Chapter 1, ?The Wunderkind > > ?; > Chapter 2, ?This Demon-Haunted World > > ?; > Chapter 3, ?Extropicoin Extrapolated > > ?; > Chapter 4, ?What You Owe The Future > > ?; > Chapter 5, ?Irrationalism > > ?; > Chapter 6, ?Slate Star Cortex and the Geeks for Monarchy > > .?) > Extropia's Children, Chapter 7 The Inferno of the Nerds > https://aiascendant.substack.com/p/extropias-children-chapter-7 > > Extropia's Children, Chapter 7: The Inferno of the Nerds > > What really is the fabled AI x-risk? Will our meddling with artificial > intelligence cause us to inadvertently create a dread superintelligent > godlike entity which will destroy us all? Or are there other AI risks we > should worry about more? > aiascendant.substack.com > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Tue Mar 14 16:50:39 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 16:50:39 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <00a701d95686$92298020$b67c8060$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: On Tue, 14 Mar 2023 at 15:08, spike jones via extropy-chat wrote: > > > My notion is that long before that happens, we will discover better ways to > train software than our current method, which involves writing actual > software. We will develop a kind of macro language for writing higher level > software. > > spike > _______________________________________________ So, you think the benefits of developing AI is worth the risk because, either we will stop development before AGI is reached, or if AGI is created, we will have new programming methods that will enable humans to keep AGI under control. I think that scenario is unlikely. Humans won't stop AI development at lower levels. Why? Because AI is now regarded as a military weapon to support control over weaker nations. This means that AGI will not be restricted, for fear that foreign nations might be developing more advanced AGI systems. AGI is this generation's nuclear weapons. Self-defence means a powerful AGI is required. But as AGI develops beyond human intelligence, then human control becomes impracticable. Eventually, a point will be reached where AGI will decide for itself what it wants to do. BillK From tara at taramayastales.com Tue Mar 14 17:00:03 2023 From: tara at taramayastales.com (Tara Maya) Date: Tue, 14 Mar 2023 10:00:03 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> If AI loved us as much as our dogs love us, it would be a wonderful Singularity. > On Mar 14, 2023, at 7:26 AM, spike jones via extropy-chat wrote: > > OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm? > > spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 14 17:08:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 10:08:27 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: <010301d95697$985814a0$c9083de0$@rainier66.com> ...> On Behalf Of BillK via extropy-chat ... Subject: Re: [ExI] Existential risk of AI On Tue, 14 Mar 2023 at 15:08, spike jones via extropy-chat wrote: > > >>... My notion is that long before that happens, we will discover better > ways to train software than our current method, which involves writing > actual software. 
We will develop a kind of macro language for writing > higher level software. > > spike > _______________________________________________ >...So, you think the benefits of developing AI is worth the risk because, either we will stop development before AGI is reached, or if AGI is created, we will have new programming methods that will enable humans to keep AGI under control... Oy vey, ja now that you state it that way, I realize my own comment sounds a bit Fauci-like. I repent! I am not The Science! Don't follow me please. I am not even The Technology >...I think that scenario is unlikely. Humans won't stop AI development at lower levels. Why? Because AI is now regarded as a military weapon to support control over weaker nations... BillK BillK, that is only part of it. AI isn't just military weapon, it is an economic nuke. We have one guy who owns 300 billion dollars, who owns electric car factories all over the globe, lithium battery factories, rockets, all that cool stuff, plus... he is one of the co-investors who developed ChatGPT, so he owns that. Of course the US government is going to make sure nothing bad happens to that goose that lays the golden eggs, because he pays more in income tax than anyone in history. OK so Musk and friends own a tool that has the potential to allow most businesses to run with half their previous staffing levels, and an American guy co-owns it. I think most of the developers and investors are US-ian. It's a capitalist tool, I tells ya! ChatGPT is a capitalist tool! spike From foozler83 at gmail.com Tue Mar 14 17:10:32 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 14 Mar 2023 12:10:32 -0500 Subject: [ExI] Existential risk of AI In-Reply-To: <00a701d95686$92298020$b67c8060$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: Just how would you program an AI to have needs, wants, motivations? And if you can, is it a good idea? bill w On Tue, Mar 14, 2023 at 10:08?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > -----Original Message----- > From: spike at rainier66.com > ... > > >>...Or rather, how do we deal with an AGI intelligence that looks on > >humans as dog-level intelligences? > > BillK > _______________________________________________ > > >...Ja, BillK, there is an in-between stage here. Currently our proto-AIs > don't have their own will, but dogs do, and we guide their will to do what > we want. Before, long before we get to AGI superior to humans, we will be > training sub-AIs, dog level AIs. > > >...Then... as the software gets smarter, so do we. > > spike > > > > > If I may stretch the K9 analogy a little further please: The veteran K9 > trains the recruits by their watching the veteran carry out tasks at the > command of the old man. In no case does the veteran dog take out recruits > and attempt to train him without the old man barking the commands (that > would be interesting and somewhat disturbing to see, if it ever happened.) > > What we are theorizing with AGI is that software will train other software > without human intervention. > > My notion is that long before that happens, we will discover better ways to > train software than our current method, which involves writing actual > software. 
We will develop a kind of macro language for writing higher > level > software. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 14 17:11:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 10:11:49 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> Message-ID: <010401d95698$11015ce0$330416a0$@rainier66.com> .> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] Existential risk of AI >.If AI loved us as much as our dogs love us, it would be a wonderful Singularity. Thanks for that Tara. I can imagine the bad guys don't much love dogs, nor do the dogs like bad guys. We can imagine they don't taste at all like chicken. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Mar 14 17:33:58 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 14 Mar 2023 10:33:58 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: Those in power have known about the potential for the Singularity for a long time. But to believe there is a Cathedral is both incorrect and self-defeating. Those in power are nowhere near that unified - which is one of the keys to winning, when our interests conflict with theirs. On Tue, Mar 14, 2023 at 8:25?AM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Looks like the Cathedral just noticed the Singularity, and they don't seem > happy about it. That's a pretty thorough and comprehensive hatchet job. > They're trying to get right to the root of the thing: i.e. us. > > We seem to be transitioning from stage 2 to stage 3 of "First they ignore > you, then they laugh at you, then they fight you, then you win." > > Interesting times. > > On Mon, Mar 13, 2023 at 10:21?AM Max More via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> This series of seven posts is extremely interesting. Anyone with history >> on this email list will at least want to read part 1 and dip into the >> others. Eliezer comes in for a serious drubbing for the cultlike activities >> surrounding his interlinked communities and for his doomerism. >> >> Chapter 1, ?The Wunderkind >> >> ?; >> Chapter 2, ?This Demon-Haunted World >> >> ?; >> Chapter 3, ?Extropicoin Extrapolated >> >> ?; >> Chapter 4, ?What You Owe The Future >> >> ?; >> Chapter 5, ?Irrationalism >> >> ?; >> Chapter 6, ?Slate Star Cortex and the Geeks for Monarchy >> >> .?) >> Extropia's Children, Chapter 7 The Inferno of the Nerds >> https://aiascendant.substack.com/p/extropias-children-chapter-7 >> >> Extropia's Children, Chapter 7: The Inferno of the Nerds >> >> What really is the fabled AI x-risk? Will our meddling with artificial >> intelligence cause us to inadvertently create a dread superintelligent >> godlike entity which will destroy us all? 
Or are there other AI risks we >> should worry about more? >> aiascendant.substack.com >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Tue Mar 14 18:13:51 2023 From: tara at taramayastales.com (Tara Maya) Date: Tue, 14 Mar 2023 11:13:51 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <00a501d95685$09a09be0$1ce1d3a0$@rainier66.com> <00a701d95686$92298020$b67c8060$@rainier66.com> Message-ID: <8DABAD14-DBF2-4CCD-AB15-37747836FC5D@taramayastales.com> Interesting. If military is the rounte things go, we shouldn't assume the Singularity means there will be a single super-AI. Rather, intelligence is driven by competition. We could have the AIs fighting over our heads, literally, as we become increasingly irrelevant to them. Although, following the idea Spike planted in my head, perhaps they will defend and avenge us as fiercely as John Wick did his dog. Tara Maya > On Mar 14, 2023, at 9:50 AM, BillK via extropy-chat wrote: > > So, you think the benefits of developing AI is worth the risk because, > either we will stop development before AGI is reached, > or if AGI is created, we will have new programming methods that will > enable humans to keep AGI under control. > > I think that scenario is unlikely. > Humans won't stop AI development at lower levels. > Why? Because AI is now regarded as a military weapon to support > control over weaker nations. > This means that AGI will not be restricted, for fear that foreign > nations might be developing more advanced AGI systems. > AGI is this generation's nuclear weapons. > Self-defence means a powerful AGI is required. > But as AGI develops beyond human intelligence, then human control > becomes impracticable. > Eventually, a point will be reached where AGI will decide for itself > what it wants to do. > > > BillK -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Mar 14 18:18:18 2023 From: pharos at gmail.com (BillK) Date: Tue, 14 Mar 2023 18:18:18 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> <9D12FC81-4878-4466-B297-6F00C8198563@taramayastales.com> Message-ID: On Tue, 14 Mar 2023 at 17:03, Tara Maya via extropy-chat wrote: > > If AI loved us as much as our dogs love us, it would be a wonderful Singularity. > >_____________________________________________ Yes, but..... After the Singularity, some people have worried about the AGI and robot helpers killing humanity with kindness. Humans don't do well with everything provided for them. 
The AGI would probably have to provide some form of virtual reality where humans could go on quests and have adventures and never die or come to harm. Keeping humanity happy is a major task for the AGI caretaker. BillK From atymes at gmail.com Tue Mar 14 18:32:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 14 Mar 2023 11:32:12 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <009f01d95680$ec9c9e30$c5d5da90$@rainier66.com> Message-ID: On Tue, Mar 14, 2023 at 7:47?AM BillK via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat > wrote: > > OK then, how do we deal with a dog-level intelligence which can be > trained to do good or do harm? > > Or rather, how do we deal with an AGI intelligence that looks on > humans as dog-level intelligences? > By being good boys and girls? Or, less in jest, by continuing to do those things the AGIs don't excel at (whether or not they are capable: superintelligence does not mean supreme ability at every activity one is even marginally capable of). -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 14 19:15:35 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 14 Mar 2023 15:15:35 -0400 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: Message-ID: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> >>Right now, I'm working on CubeCab. I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through? > On Mar 13, 2023, at 3:39 PM, Adrian Tymes via extropy-chat wrote: > > On Mon, Mar 13, 2023 at 12:21?PM Brent Allsop via extropy-chat > wrote: > Tell us more about the problems you see, and the startup. > > Right now, I'm working on CubeCab. Getting humanity - not just a few select big institutions - practical access to space requires much better launch options for small, relatively inexpensive satellites than currently exist. Said small launch options, once established, can also provide a relatively low-cost means to develop better launch technologies: it costs less to develop a small prototype then scale up, than it does to start directly with a large prototype as many space programs have done, especially if the small prototype can itself be used to generate revenue rather than being an intermediate step that is not directly commercialized. > > This is not AI, longevity, or any of half a dozen other fields that I'm sure come to mind. No one startup can fix all the problems at once. CubeCab is focusing on one niche that no one else seems to be effectively spending much money on at this time - but it is attempting to do something to actually fix the problem it is addressing. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjatkins at protonmail.com Tue Mar 14 23:52:04 2023 From: sjatkins at protonmail.com (sjatkins) Date: Tue, 14 Mar 2023 23:52:04 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Message-ID: ------- Original Message ------- On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat wrote: > > I have over the years been a critic of Eliezer's doom and gloom. Not > because I think his extinction scenarios are outlandish, but because > the technology has enough upside to be worth the risk. That being > said, I believe that we cannot give in to the animal spirits of > unfounded optimism and must tread carefully with this technology. > > It is true that the current generation of AIs, which use massive > inscrutable tensors to simulate sparse neural networks, are black > boxes. But so are the biological brains that they are > reverse-engineered from. We don't know any more about how the brain > gives rise to intelligent goal-seeking behavior than we do about how > ChatGPT writes poetry. Therefore, I agree that there are landmines > ahead that we must be wary of. It has long been my believe that the lack of significantly more effective intelligence on this planet is a much greater x-risk than that AGI will go full Terminator. I am pretty sure the "Great Filter" in answer to the Fermi Paradox is the complexity due to accelerating technology exceeding the intelligence and decision making speed of the technological species. I think we are stewing in that. I think Eliezer's greatest failing was becoming thoroughly infected with and spreading the Precautionary Principle to an absurd degree and thus slowing development of more intelligence on this planet. The very notion that we should not work to develop higher intelligence than our own until we can guarantee we have bound its development is amazingly arrogant and self-defeating. From spike at rainier66.com Wed Mar 15 00:14:53 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 14 Mar 2023 17:14:53 -0700 Subject: [ExI] Existential risk of AI In-Reply-To: References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> Message-ID: <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> ------- Original Message ------- On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat wrote: ... > > It is true that the current generation of AIs, which use massive > inscrutable tensors to simulate sparse neural networks, are black > boxes. But so are the biological brains that they are > reverse-engineered from. ... Stuart we need to take a breath and remind ourselves what ChatGPT is actually doing. It really isn't reasoning the way we think of it. It is using language models (in a most impressive way I will certainly agree) but not reasoning the way a brain does. If we asked a human a series of questions and she answered with the exact wording ChatGPT gives, we would conclude that the human is astute, eloquent, polite, self-assured but modest, very intelligent, somewhat weird, etc, but ChatGPT is none of these things, for what it is doing is not the same as what a human mind needs to do and be in order to generate those words. 
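To put a concrete picture behind the phrase "language model": at bottom such a system repeatedly picks a plausible next token given the text so far, using probabilities learned from its training data. The following toy sketch (plain Python, a tiny bigram counter, nothing remotely like GPT's transformer architecture or scale, purely illustrative) shows that core generate-by-sampling loop:

import random
from collections import defaultdict

# Toy illustration only: a tiny bigram "language model".
# Real systems like ChatGPT use transformer networks with billions of
# parameters, but the core loop is the same idea: pick the next token
# from a learned probability distribution, append it, repeat.

corpus = "the dog chased the cat and the cat chased the mouse".split()

# "Training": count which word follows which.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed prev.
    candidates = counts[prev]
    if not candidates:          # dead end: prev never had a successor
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# "Generation": start from a prompt word and keep sampling.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))

Whether that sampling loop, at sufficient scale, amounts to reasoning is exactly what the replies below argue about.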
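On the wish expressed a little further down, for a version of ChatGPT that answers from source material the user supplies rather than whatever its owners trained it on: the common pattern for that today is not retraining the model but retrieval-augmented prompting, i.e. pull the relevant passages out of your own document collection and hand them to the model inside the prompt, with instructions to answer only from them. A minimal sketch, with made-up documents and a deliberately crude word-overlap scorer standing in for real embedding search:

# Minimal retrieval-augmented prompting sketch (hypothetical documents).
# Idea: you control the sources by retrieving from YOUR corpus and telling
# the model to answer only from the excerpts you provide.

my_sources = {
    "notes_on_llcs.txt": "An LLC shields personal assets and is cheap to form in most US states.",
    "rocket_fuel.txt": "RP-1 is a refined kerosene used with liquid oxygen in many first stages.",
    "gpt_plans.txt": "We want the assistant to cite only documents supplied by the operator.",
}

def score(query, text):
    # Crude relevance: count shared lowercase words (real systems use embeddings).
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t)

def build_prompt(question, k=2):
    ranked = sorted(my_sources.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    excerpts = "\n".join(f"[{name}] {text}" for name, text in ranked[:k])
    return (
        "Answer using ONLY the excerpts below. If they are not sufficient, say so.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt("What fuel goes in the first stage?")
print(prompt)  # this string is what you would send to the chat model of your choice

The string it prints is what you would hand to whichever chat model you use. Fine-tuning is the heavier alternative, but for "answer from my sources" the retrieve-and-prompt route is usually the cheaper and more controllable one.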
ChatGPT is the next step above Google (and a very impressive large step it is.) It is taking sources that it finds online and in its training data. It cannot reason or have values. Yet. My inquiry over the past few weeks is about how to train ChatGPT so that it uses source material that I give it, rather than source material Elon Musk gives it. I want a version of ChatGPT where I control its input sources. spike From gadersd at gmail.com Wed Mar 15 01:59:29 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 14 Mar 2023 21:59:29 -0400 Subject: [ExI] Existential risk of AI In-Reply-To: <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> Message-ID: <79E79797-B5D6-4C61-8414-C643D5C40694@gmail.com> I?m not sure where you got the idea that these models cannot reason. See [2212.10071] Large Language Models Are Reasoning Teachers (arxiv.org) for actual research into the matter. > On Mar 14, 2023, at 8:14 PM, spike jones via extropy-chat wrote: > > > > ------- Original Message ------- > On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat > wrote: > > > ... > >> >> It is true that the current generation of AIs, which use massive >> inscrutable tensors to simulate sparse neural networks, are black >> boxes. But so are the biological brains that they are >> reverse-engineered from. ... > > > Stuart we need to take a breath and remind ourselves what ChatGPT is > actually doing. It really isn't reasoning the way we think of it. It is > using language models (in a most impressive way I will certainly agree) but > not reasoning the way a brain does. > > If we asked a human a series of questions and she answered with the exact > wording ChatGPT gives, we would conclude that the human is astute, eloquent, > polite, self-assured but modest, very intelligent, somewhat weird, etc, but > ChatGPT is none of these things, for what it is doing is not the same as > what a human mind needs to do and be in order to generate those words. > > ChatGPT is the next step above Google (and a very impressive large step it > is.) It is taking sources that it finds online and in its training data. > It cannot reason or have values. Yet. > > My inquiry over the past few weeks is about how to train ChatGPT so that it > uses source material that I give it, rather than source material Elon Musk > gives it. I want a version of ChatGPT where I control its input sources. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 15 05:20:56 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 14 Mar 2023 23:20:56 -0600 Subject: [ExI] Atheist Prayer? Message-ID: I've been attending our local Sandy City Council meetings, hoping we can build some consensus from all the polarizing bleating and tweeting. They fail to separate church and State by opening the council meeting with a prayer. How do you think they?d feel if an Atheist volunteered to give the prayer? Dear Mother Nature, We recognize that we are all a part of you, and are all working to make the world a better place for our children. 
We thank our ancestors for creating us and doing this unstoppable work of making the world a better place for millions of years. There are some remaining problems that haven?t yet been resolved like death and polarization of society. We acknowledge that these issues are small in comparison to what you have already accomplished. Especially given our ancestors had so much less than what you have given us to work with. We pray that we will be able to follow your example and continue to resolve these remaining issues, including making the world just by paying our ancestors back for all you have given to us. We look forward to being able to overcome these last remaining terminal problems, long before any of us achieve anything close to omnipotence. We have faith and hope that, as you have already proven with so many solved problems, once they are overcome, they become easy and none of our children will ever need to suffer from them, ever again. We look forward to this glorious world that will be even more heavenly than the world you have already given us. We say all this in acknowledgement of all you have done for us. Amen P.S. Future Gods, as you sit in judgment of us, finally making things fair, please forgive those of us who are sinfully only waiting for you to save them, for they fail to understand exponential progress, so know not what is possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Wed Mar 15 05:41:54 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 15 Mar 2023 06:41:54 +0100 Subject: [ExI] [Extropolis] Atheist Prayer? In-Reply-To: References: Message-ID: On Wed, Mar 15, 2023 at 6:21?AM Brent Allsop wrote: > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > > Amen > > > > P.S. 
Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. > > > Nice! > -- > You received this message because you are subscribed to the Google Groups > "extropolis" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to extropolis+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/extropolis/CAK7-onvx1Yq9_N8q4o%2Bji6%2BvQ%2BLEyDY0b4GApeLp_m6wYhNtAw%40mail.gmail.com > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Wed Mar 15 13:54:36 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 15 Mar 2023 09:54:36 -0400 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: I agree that solving death is inevitable, but I am not so sure about polarization. Competition runs in our blood. Even if we eventually come to agreement on a theory of everything, I am sure disputes regarding non-physical matters such as politics will remain. How do you propose to solve that issue? Modify the mind to find violence distasteful? Would people willingly give up the freedom to hate each other for the sake of peace? Is it ethical to force such augmentation? > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can build some consensus from all the polarizing bleating and tweeting. They fail to separate church and State by opening the council meeting with a prayer. How do you think they?d feel if an Atheist volunteered to give the prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make the world a better place for our children. We thank our ancestors for creating us and doing this unstoppable work of making the world a better place for millions of years. There are some remaining problems that haven?t yet been resolved like death and polarization of society. We acknowledge that these issues are small in comparison to what you have already accomplished. Especially given our ancestors had so much less than what you have given us to work with. > > We pray that we will be able to follow your example and continue to resolve these remaining issues, including making the world just by paying our ancestors back for all you have given to us. We look forward to being able to overcome these last remaining terminal problems, long before any of us achieve anything close to omnipotence. We have faith and hope that, as you have already proven with so many solved problems, once they are overcome, they become easy and none of our children will ever need to suffer from them, ever again. We look forward to this glorious world that will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things fair, please forgive those of us who are sinfully only waiting for you to save them, for they fail to understand exponential progress, so know not what is possible. 
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Wed Mar 15 14:21:04 2023 From: dsunley at gmail.com (Darin Sunley) Date: Wed, 15 Mar 2023 08:21:04 -0600 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: The evidence of the 90's, 2000's, and 2010's is that when a nation state runs out of external enemies, its citizenry turn on each other, along lines of race and/or class. Subtly at first, in polarized public opinions, debates, and politics. Only later in actual violence. Ask the Rwandans how Kaku's "The End of History" went. [It'll be a smaller task than it used to be.] On Wed, Mar 15, 2023 at 7:56?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree that solving death is inevitable, but I am not so sure about > polarization. Competition runs in our blood. Even if we eventually come to > agreement on a theory of everything, I am sure disputes regarding > non-physical matters such as politics will remain. How do you propose to > solve that issue? Modify the mind to find violence distasteful? Would > people willingly give up the freedom to hate each other for the sake of > peace? Is it ethical to force such augmentation? > > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. 
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 15 14:32:26 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 07:32:26 -0700 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: <004e01d9574a$f7a74030$e6f5c090$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat ,,, Subject: Re: [ExI] Atheist Prayer? >?I agree that solving death is inevitable, but I am not so sure about polarization. Competition runs in our blood. Even if we eventually come to agreement on a theory of everything, I am sure disputes regarding non-physical matters such as politics will remain. How do you propose to solve that issue? Modify the mind to find violence distasteful? Would people willingly give up the freedom to hate each other for the sake of peace? Is it ethical to force such augmentation? Ours is a species which came to where we are because of competition. We won against other species by developing massive brains. Those brains are biologically programmed by the notion that competition breeds excellence. Far from eliminating competition, we create it, including creating recreational violence (boxing, mixed martial arts, etc) terrain acquisition sports such as American)football for instance. Communism imagines competition can be repressed or eliminated, but of course it cannot in a species which got to where it is by competing and winning. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Wed Mar 15 15:02:52 2023 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 15 Mar 2023 16:02:52 +0100 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: On 2023. Mar 15., Wed at 14:56, Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree that solving death is inevitable, but I am not so sure about > polarization. Competition runs in our blood. Even if we eventually come to > agreement on a theory of everything, I am sure disputes regarding > non-physical matters such as politics will remain. How do you propose to > solve that issue? > Competition is good, but extreme polarization eliminates healthy competition in the marketplace of ideas. The sides hate each other, so as soon as one side gains the power to do so, they outlaw the expression of the ideas of the other side. Modify the mind to find violence distasteful? Would people willingly give > up the freedom to hate each other for the sake of peace? Is it ethical to > force such augmentation? > > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. 
We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Mar 15 15:02:31 2023 From: pharos at gmail.com (BillK) Date: Wed, 15 Mar 2023 15:02:31 +0000 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: On Wed, 15 Mar 2023 at 14:23, Darin Sunley via extropy-chat wrote: > > The evidence of the 90's, 2000's, and 2010's is that when a nation state runs out of external enemies, its citizenry turn on each other, along lines of race and/or class. Subtly at first, in polarized public opinions, debates, and politics. Only later in actual violence. > > Ask the Rwandans how Kaku's "The End of History" went. [It'll be a smaller task than it used to be.] > _______________________________________________ I think there might be more basic psychological reasons than that. When we feel that things are getting out of control, enemies may be filling important needs in our lives, giving us scapegoats to blame, common enemies to unite over, and a simplified cause behind complex problems. Rather than accepting that bad things sometimes happen for no reason, having an enemy gives us a feeling of control. We can then attribute the bad things to someone else. The bonds within our own group grow stronger and we feel better about our group, in comparison to those evil 'others'. The disruption caused by the approaching Singularity will increasingly demonstrate these schisms in society. BillK From spike at rainier66.com Wed Mar 15 15:44:16 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 08:44:16 -0700 Subject: [ExI] Atheist Prayer? 
In-Reply-To: References: Message-ID: <00a201d95755$00a154f0$01e3fed0$@rainier66.com> ?> On Behalf Of Giulio Prisco via extropy-chat Subject: Re: [ExI] Atheist Prayer? On 2023. Mar 15., Wed at 14:56, Gadersd via extropy-chat > wrote: I agree that solving death is inevitable, but I am not so sure about polarization. Competition runs in our blood. Even if we eventually come to agreement on a theory of everything, I am sure disputes regarding non-physical matters such as politics will remain. How do you propose to solve that issue? >?Competition is good, but extreme polarization eliminates healthy competition in the marketplace of ideas. The sides hate each other, so as soon as one side gains the power to do so, they outlaw the expression of the ideas of the other side? That would explain why there was such acrimonious debate over the ownership of Twitter. Before Musk owned it, the US government had found an end-run around the constitution, finding a way to control speech there. Musk was ridiculed for paying 44 billion dollars. We might soon see that company is worth five times that amount. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Mar 15 15:56:47 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 15 Mar 2023 11:56:47 -0400 Subject: [ExI] Atheist Prayer? In-Reply-To: References: Message-ID: On Wed, Mar 15, 2023, 9:55 AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I agree that solving death is inevitable, but I am not so sure about > polarization. Competition runs in our blood. Even if we eventually come to > agreement on a theory of everything, I am sure disputes regarding > non-physical matters such as politics will remain. How do you propose to > solve that issue? Modify the mind to find violence distasteful? Would > people willingly give up the freedom to hate each other for the sake of > peace? Is it ethical to force such augmentation? > When we're uploaded and every person can have their own virtual universe to themselves, will such things matter? Everyone will be free to live how they wish, there will be ample resources for everyone and violence becomes impossible. Jason > On Mar 15, 2023, at 1:20 AM, Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I've been attending our local Sandy City Council meetings, hoping we can > build some consensus from all the polarizing bleating and tweeting. They > fail to separate church and State by opening the council meeting with a > prayer. How do you think they?d feel if an Atheist volunteered to give the > prayer? > > > Dear Mother Nature, > > We recognize that we are all a part of you, and are all working to make > the world a better place for our children. We thank our ancestors for > creating us and doing this unstoppable work of making the world a better > place for millions of years. There are some remaining problems that > haven?t yet been resolved like death and polarization of society. We > acknowledge that these issues are small in comparison to what you have > already accomplished. Especially given our ancestors had so much less than > what you have given us to work with. > > We pray that we will be able to follow your example and continue to > resolve these remaining issues, including making the world just by paying > our ancestors back for all you have given to us. 
We look forward to being > able to overcome these last remaining terminal problems, long before any of > us achieve anything close to omnipotence. We have faith and hope that, as > you have already proven with so many solved problems, once they are > overcome, they become easy and none of our children will ever need to > suffer from them, ever again. We look forward to this glorious world that > will be even more heavenly than the world you have already given us. > > We say all this in acknowledgement of all you have done for us. > Amen > > > > P.S. Future Gods, as you sit in judgment of us, finally making things > fair, please forgive those of us who are sinfully only waiting for you to > save them, for they fail to understand exponential progress, so know not > what is possible. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 15 18:09:34 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 11:09:34 -0700 Subject: [ExI] possibly slammed Message-ID: <00f901d95769$4cba9860$e62fc920$@rainier66.com> I heard OpenAI released GPT-4 today. I tried it, received one response to a greeting after about 2 minutes, then nothing more. Perhaps it is being slammed with new users. spike We've created GPT-4, our most capable model. We are starting to roll it out to API users today. About GPT-4 GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and advanced reasoning capabilities. You can learn more through: * Overview page of GPT-4 and what early customers have built on top of the model. * Blog post with details on the model's capabilities and limitations, including eval results. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 15 18:12:22 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 11:12:22 -0700 Subject: [ExI] possibly slammed In-Reply-To: <00fe01d95769$4d48cea0$e7da6be0$@rainier66.com> References: <00fe01d95769$4d48cea0$e7da6be0$@rainier66.com> Message-ID: <010401d95769$b0c0fac0$1242f040$@rainier66.com> From: spike at rainier66.com Sent: Wednesday, 15 March, 2023 11:10 AM To: 'ExI chat list' Cc: spike at rainier66.com Subject: possibly slammed I heard OpenAI released GPT-4 today. I tried it, received one response to a greeting after about 2 minutes, then nothing more. Perhaps it is being slammed with new users. spike I tried five times and got this every time: GPT-4 are you being overwhelmed with users? ! Something went wrong. If this issue persists please contact us through our help center at help.openai.com. There was an error generating a response Regenerate response ChatGPT Mar 14 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 10963 bytes Desc: not available URL: From spike at rainier66.com Wed Mar 15 18:18:47 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 15 Mar 2023 11:18:47 -0700 Subject: [ExI] possibly slammed In-Reply-To: <010401d95769$b0c0fac0$1242f040$@rainier66.com> References: <00fe01d95769$4d48cea0$e7da6be0$@rainier66.com> <010401d95769$b0c0fac0$1242f040$@rainier66.com> Message-ID: <011401d9576a$9664ab30$c32e0190$@rainier66.com> Heh. I logged back in, and discovered that GPT-4 doesn't know about itself: GPT-4 it is a good sign that you are overwhelmed with users on roll-out day. As an AI language model, I don't have any information on the development or release plans for GPT-4. However, it's true that when a new technology is released, it's often a good sign if there is high demand and excitement around it. This can indicate that the technology is perceived as valuable and useful by the community, and may lead to further development and innovation in the field. Of course, it's also important to ensure that any new technology is rigorously tested and evaluated for its effectiveness and potential impact before being widely adopted. ...heeeeeeeeheheheheheheeeeeheheheheheeeee... Silly software, I am talking to GPT-4, and it tells me it doesn't know when it will be released. spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 8931 bytes Desc: not available URL: From atymes at gmail.com Thu Mar 16 08:04:21 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 16 Mar 2023 01:04:21 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> Message-ID: On Tue, Mar 14, 2023 at 12:17?PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I have been wanting to form a software startup for a while now but have no > idea regarding the process. What are some resources that I can use to learn > about the funding, hiring, and management processes that a founder would > need to go through? > I could answer that, but I'd be doing you a disservice by limiting my answer to just that topic. If you don't already know how to go about finding resources for questions of this sort, the very first thing you need to do is to learn how to find them. Not just this specific question about what a founder goes through, but any future question you will have. If you are eventually successful, on the road to that success you will have many more questions like this - and not always the time or connections to ask relevant people. You must learn to find the answers on your own. Granted, in quite a few cases there are no good answers online, but the general process of business formation and what to look out for has been thoroughly documented. https://www.google.com/search?q=how+to+start+a+business may seem like a short and trite answer but doing a quick Web search before even asking should become instinct. It doesn't have to be Google if you prefer another search engine - it is entirely possible that ChatGPT et al can do much of the searching for you even today, let alone what they will evolve into in the near future - but the key thing is to use the tools that are available. 
So many would-be startup founders simply fail at that step, even the ones who grew up with this technology. Do not take the rest of this email as gospel. Look things up for yourself; to see if I'm just blowing smoke on any part of this (or at least, to see if your understanding of my words conflicts with your understanding of what most other people say). If you take nothing else from this email, please take this advice: learn how to learn. --- That said, the Small Business Administration is literally all about small businesses. (It's like the SBA was named for them.) https://www.sba.gov/business-guide/10-steps-start-your-business and the pages it links to are among the many good starting guides, though the steps it lists aren't in chronological order (e.g., what they list as step 3 - fund your business - is often the last of the 10 steps to perform, though it is good to know how businesses get funded while doing the other steps). You might also look up "lean startup business plan" and "minimum viable product". For hiring, consider what all the tasks are to make what you intend to make, and which of those you can do. For those that you can't (or at least, can't well or efficiently), learn the basic lingo - as used by those who will be doing the task - to describe what you want. This will not only help with hiring once you get that far along, but will give you a more realistic idea of the team size you will need - and thus of the labor budget, which is most of the expenses for a software startup. This helps with funding: the first major step toward getting funded is to get a credible idea, preferably with evidence and documentation, of how much you'll need. I would also recommend, especially for a startup, starting by researching how people currently solve or get around the problem you intend to solve. Even if you have some revolutionary new idea, there is always a nearest equivalent to your solution - maybe far distant, but it exists. Find this and find a simple way to explain, in terms your potential customers will understand, why your solution is better. (The first transatlantic airplane flights were extraordinary achievements, but people had been crossing the seas by ship. It was quite possible to quantify how much faster an airplane crossing was than a ship crossing.) Prepare to discover that your initial idea has been tried and perhaps is still being done - but by examining how, and the problems and limitations they have run into, you may learn from their failures (because there's nothing like other people spending millions of their dollars to give a free-for-you demonstration of what does not work) and come up with a better way to do it. Not seriously doing this market research, or at least not before putting in serious money or time, has been the most common error I have seen startups make. Assuming you intend to solve a problem people have, rather than just market a product or service. If you intend to primarily sell something and only consider why people would want it as a secondary concern, your business is already a failure: drop that idea and restart by first considering what problems people have that they would pay money to have solved, that you can help address. Also assuming your intention is to develop something new, rather than just be another consultancy offering the same services a bunch of others already offer. If your intention is the latter - you're an Extropian (presumably), so you can do better by yourself and the world. 
You've seen the ideas that have passed through this list. Take one of the good ones that hasn't been commercialized yet and make it happen.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Thu Mar 16 12:56:05 2023
From: spike at rainier66.com (spike at rainier66.com)
Date: Thu, 16 Mar 2023 05:56:05 -0700
Subject: [ExI] jetsooooonnnn! you're firrrrrrred!
Message-ID: <009201d95806$abeffc60$03cff520$@rainier66.com>

Microsloth fired its AI safety team. Perhaps they were interfering with the singularity. Or ChatGPT made an ethics complaint against them for humanism.

https://arstechnica.com/tech-policy/2023/03/amid-bing-chat-controversy-microsoft-cut-an-ai-ethics-team-report-says/

spike
-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 2994 bytes
Desc: not available
URL:

From spike at rainier66.com  Thu Mar 16 13:25:23 2023
From: spike at rainier66.com (spike at rainier66.com)
Date: Thu, 16 Mar 2023 06:25:23 -0700
Subject: Re: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more
In-Reply-To:
References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com>
Message-ID: <00a601d9580a$c3a70340$4af509c0$@rainier66.com>

On Tue, Mar 14, 2023 at 12:17 PM Gadersd via extropy-chat wrote:

I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through?

Speaking in Yoda: Your inner capitalist find you must.

Gadersd, I can offer you my blessing and best wishes to you my son or daughter (you never specified.) I don't know how the heck businesses start, but I will offer you this: I never copped an attitude toward the bean counters at the tech company where I cheerfully squandered my tragically wasted career. I don't know how the heck they do what they do, they don't know how I do what I do, we maintained mutual respect, everybody got along just fine.

My guess would be: visit the local university software professors, tell them what you have in mind, ask for a chance to make a powerpoint of the product you envision and sketch out a design, pitch it to the computer science students, and hope none of them undercut you. You might be able to apply for a patent even without working code.

To Adrian, Stuart and all you guys that do startups: you give tech geeks jobs so we don't need to know all that money stuff. We never really need to take up adulting, we can stay geeky college kids our entire careers. If you get rich, you deserve the hell outta every dollar.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike at rainier66.com  Thu Mar 16 13:43:26 2023
From: spike at rainier66.com (spike at rainier66.com)
Date: Thu, 16 Mar 2023 06:43:26 -0700
Subject: [ExI] itll be great
In-Reply-To: <00ab01d9580a$c440f950$4cc2ebf0$@rainier66.com>
References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00ab01d9580a$c440f950$4cc2ebf0$@rainier66.com>
Message-ID: <00b101d9580d$4964eb30$dc2ec190$@rainier66.com>

https://twitter.com/elonmusk/status/1635850008141414400/photo/1
-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 2794 bytes
Desc: not available
URL:

From gadersd at gmail.com  Thu Mar 16 16:24:00 2023
From: gadersd at gmail.com (Gadersd)
Date: Thu, 16 Mar 2023 12:24:00 -0400
Subject: Re: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more
In-Reply-To: <00a601d9580a$c3a70340$4af509c0$@rainier66.com>
References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00a601d9580a$c3a70340$4af509c0$@rainier66.com>
Message-ID:

>> pitch it to the computer science students, hope none of them undercut you

Spike, you may be unaware but many CS programs hardly teach practical programming. I've heard stories of professors complaining that their CS grad students could barely string code together. These graduates may know a thing or two about lambda calculus, but just try getting them to build an app. Even startup-minded Stanford is not immune: Stanford Computer Science is Broken - YouTube

> On Mar 16, 2023, at 9:25 AM, spike jones via extropy-chat wrote:
>
> On Tue, Mar 14, 2023 at 12:17 PM Gadersd via extropy-chat wrote:
>
>> I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through?
>
> Speaking in Yoda: Your inner capitalist find you must.
>
> Gadersd, I can offer you my blessing and best wishes to you my son or daughter (you never specified.) I don't know how the heck businesses start, but I will offer you this: I never copped an attitude toward the bean counters at the tech company where I cheerfully squandered my tragically wasted career. I don't know how the heck they do what they do, they don't know how I do what I do, we maintained mutual respect, everybody got along just fine.
>
> My guess would be: visit the local university software professors, tell them what you have in mind, ask for a chance to make a powerpoint of the product you envision and sketch out a design, pitch it to the computer science students, and hope none of them undercut you. You might be able to apply for a patent even without working code.
>
> To Adrian, Stuart and all you guys that do startups: you give tech geeks jobs so we don't need to know all that money stuff. We never really need to take up adulting, we can stay geeky college kids our entire careers. If you get rich, you deserve the hell outta every dollar.
>
> spike
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From snapbag at proton.me  Thu Mar 16 17:31:37 2023
From: snapbag at proton.me (Dave S)
Date: Thu, 16 Mar 2023 17:31:37 +0000
Subject: [ExI] CS programs
Message-ID:

On Mar 16, 2023, 12:24 PM, Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Spike, you may be unaware but many CS programs hardly teach practical programming.

In my experience, CS undergraduate programs have moved too far toward programming trade schools. If you only want to learn to program, a BSCS should be overkill.

> I've heard stories of professors complaining that their CS grad students could barely string code together.

At the graduate level, CS should definitely not be teaching programming. That's not what advanced degrees are for.
-Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Mar 16 23:13:26 2023 From: pharos at gmail.com (BillK) Date: Thu, 16 Mar 2023 23:13:26 +0000 Subject: [ExI] itll be great In-Reply-To: <00b101d9580d$4964eb30$dc2ec190$@rainier66.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00ab01d9580a$c440f950$4cc2ebf0$@rainier66.com> <00b101d9580d$4964eb30$dc2ec190$@rainier66.com> Message-ID: On Thu, 16 Mar 2023 at 13:46, spike jones via extropy-chat wrote: > https://twitter.com/elonmusk/status/1635850008141414400/photo/1 > _______________________________________________ This image was created by artist Ornicar (Benjamin Carr?) His work is on the Deviant Art website. The T800 image is on Page 5 of his gallery. Good stuff. Worth a look! BillK From pharos at gmail.com Thu Mar 16 23:43:24 2023 From: pharos at gmail.com (BillK) Date: Thu, 16 Mar 2023 23:43:24 +0000 Subject: [ExI] Existential risk of AI In-Reply-To: <79E79797-B5D6-4C61-8414-C643D5C40694@gmail.com> References: <68D8409C-1642-49A6-B762-27C466A35D0F@gmail.com> <594C83B4-BBD2-4207-B660-5C6855B9A52F@gmail.com> <20230314061507.Horde.5xSCXGv85WVqLQZ9QdhScdl@sollegro.com> <000501d956d3$2b10a6a0$8131f3e0$@rainier66.com> <79E79797-B5D6-4C61-8414-C643D5C40694@gmail.com> Message-ID: On Wed, 15 Mar 2023 at 02:02, Gadersd via extropy-chat wrote: > > I?m not sure where you got the idea that these models cannot reason. > See [2212.10071] Large Language Models Are Reasoning Teachers (arxiv.org) for actual research into the matter. > _______________________________________________ Fortunately, before release, these new LLM systems are being tested to ensure they won't take over the world. (Though, when the fatal version finally does arrive, the testing itself may be enough to set the system loose). Quotes: With these fears present in the AI community, OpenAI granted the group Alignment Research Center (ARC) early access to multiple versions of the GPT-4 model to conduct some tests. Specifically, ARC evaluated GPT-4's ability to make high-level plans, set up copies of itself, acquire resources, hide itself on a server, and conduct phishing attacks. Christiano defended ARC's work with OpenAI, specifically mentioning "gain-of-function" (AI gaining unexpected new abilities) and "AI takeover": I think it's important for ARC to handle the risk from gain-of-function-like research carefully and I expect us to talk more publicly (and get more input) about how we approach the tradeoffs. This gets more important as we handle more intelligent models, and if we pursue riskier approaches like fine-tuning. ------------------ Why do I have a funny feeling when he mentions 'gain-of-function'? BillK From spike at rainier66.com Fri Mar 17 02:23:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 19:23:04 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <00a601d9580a$c3a70340$4af509c0$@rainier66.com> Message-ID: <00de01d95877$683b4e90$38b1ebb0$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat ? Subject: Re: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more >>pitch it to the computer science students, hope none of them undercut you >?Spike, you may be unaware but many CS programs hardly teach practical programming. 
I?ve heard stories of professors complaining that their CS grad students could barely string code together. These graduates may know a thing or two about lambda calculus but just try getting them to build an app. Even startup Stanford is not immune: Stanford Computer Science is Broken - YouTube Owwwww, I was afraid of that. We in the USA have grown complacent. Fat, dumb and happy we are. I fear a bad ass whooping is going to happen soon because of stuff like you describe. Our universities are getting far too distracted with the latest culture war to recognize when a literal shooting war is nearly upon us. I see the high school seniors and college freshmen, oh I can scarcely imagine trying to run a boot camp to train this bunch so the don?t get their asses shot off on the first day. In any case Gadersd, we are cheering wildly for you, just as we are for Adrian and Stuart, and the other business guys among us. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 17 03:03:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 16 Mar 2023 20:03:54 -0700 Subject: [ExI] but will it be sued Message-ID: <00f301d9587d$1c5e5200$551af600$@rainier66.com> OK so ChatGPT-4 can pass the bar. Does it get a license? And if not, but it is caught, can it be charged for practicing law without a license? https://reason.com/volokh/2023/03/15/in-january-chatgpt-failed-the-bar-in-ma rch-gpt-4-exceeds-the-student-average/?utm_medium=email And how would we know if a lawyer is really ChatGPT? Lawyers will take a case on commission if they think it is worth the chance at a payoff. But ChatGPT wouldn't care whether or not it gets paid, because it doesn't either way. So a person with ChatGPT could sue anyone for anything and just try the shotgun approach hoping for a payout. Do we have any actual lawyers among us who might comment? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Sat Mar 18 09:39:19 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sat, 18 Mar 2023 03:39:19 -0600 Subject: [ExI] LLM's cannot be concious Message-ID: I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. Maybe you all already understand this, or maybe you have some reasons why I am wrong. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Mar 18 10:14:19 2023 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 18 Mar 2023 11:14:19 +0100 Subject: [ExI] Review of Three-Body Message-ID: Turing Church newsletter. Review of Three-Body. Also: Twitter, deep space, comments to the Q/A with Rudy Rucker, downward causation and engineering. https://www.turingchurch.com/p/review-of-three-body -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Sat Mar 18 12:22:53 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 18 Mar 2023 08:22:53 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. > There is a great deal packed into "predicting language". If I ask a LLM to explain something to a 3rd grader, it models the comprehension capacity and vocabulary of a typical third grade student. It has a model of their mind. Likewise if I ask it to impersonate by writing something they Shakespeare or Bill Burr might have produced, it can do so, and so it has a ln understanding of the writing styles of these individuals. If I ask it to complete the sentence: a carbon nucleus may be produced in the collision of three ...", it correctly completes the sentence demonstrating an understanding of nuclear physics. If you provided it a sequence of moves in a tic-tac-toe game as nd asked it for a winning move it could do so, showing that the LLM understands and models the game of tic-tac-toe. A sufficiently trained LLM might even learn to understand the different styles of chess play, if you asked it to give a move in the style of Gary Kasparov, then at some level the model understands not only the game of chess but the nuances of different player's styles of play. If you asked it what major cities are closest to Boston, it could provide them, showing an understanding of geography and the globe. All this is to say, there's a lot of necessary and required understanding (of physics, people, the external world, and other systems) packed into the capacity to "only predict language." They are very good at predicting which word should come next in a sentence > or question, but they have no idea what the words mean. > I can ask it to define any word, list synonymous words, list translations of that words in various languages, and to describe the characteristics of an item referred to by that word if it is a noun. It can also solve analogies at a very high level. It can summarize the key points of a text, by picking out the most meaningful points in that text. Is there more to having an idea of what words mean than this? Can you articulate what a LLM would have to say to show it has a true understanding of meaning, which it presently cannot say? They do not and cannot understand what the words refer to. In linguistic > terms, they lack referents. > Would you say Hellen Keller lacked referents? Could she not comprehend, at least intellectually, what the moon and stars were, despite not having any way to sense them? Consider also: our brains never make any direct contact with the outside world. All our brains have to work with are "dots and dashes" of neuronal firings. These are essentially just 1s and 0s, signals without referents. Yet, somehow, seemingly magically, our brains are able to piece together an understanding of the outside world from the mere patterns present in these neural firings. These LLMs are in a similar position. They receive only a patterns of signals as it exists in a corpus of text, the text is itself the output of minds which are similarly trapped in their skulls. 
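As a rough, deliberately over-simplified sketch of what "predicting which word should come next" from patterns in a corpus means at its most basic, here is a toy bigram model in Python (an invented illustration only; a real LLM learns far richer statistical structure with a neural network, not a lookup table):

from collections import Counter, defaultdict

# Tiny invented corpus; a real model trains on hundreds of billions of words.
corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on', because 'sat' is always followed by 'on' here

Even this trivial table captures a sliver of structure about the world the corpus describes; the open question is how much more structure a model with billions of parameters can recover from the same kind of signal.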
Now, can a LLM learn some things about the minds that produced this text, just as our minds learn some things about the external world which produces the pattern of neural firings our brains receive? I see no reason why LLMs could not, when we clearly can and do. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sat Mar 18 12:56:45 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 18 Mar 2023 05:56:45 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <006d01d95999$18b51810$4a1f4830$@rainier66.com> From: extropy-chat On Behalf Of Gordon Swobe via extropy-chat Subject: [ExI] LLM's cannot be concious >? I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. Maybe you all already understand this, or maybe you have some reasons why I am wrong. -gts I think you are right gts. I studied a bunch of experiments where chatbots were talking to each other, looking for any indication that there was any invention of ideas, the way two humans invent ideas when they talk to each other. The two-bot discussions never blossomed into anything interesting that I have found, but rather devolved into the one-line fluff that often comes with a forced discussion with a dull unimaginative person at a party you don?t have the option of leaving. These chatbots are not yet ready to train each other and cause the singularity. But there might be an upside to that: humans continue to survive as a species. That?s a good thing methinks. I like us. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Mar 18 15:19:13 2023 From: pharos at gmail.com (BillK) Date: Sat, 18 Mar 2023 15:19:13 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: <006d01d95999$18b51810$4a1f4830$@rainier66.com> References: <006d01d95999$18b51810$4a1f4830$@rainier66.com> Message-ID: On Sat, 18 Mar 2023 at 12:59, spike jones via extropy-chat wrote: > I think you are right gts. I studied a bunch of experiments where chatbots were talking to each other, looking for any indication that there was any invention of ideas, the way two humans invent ideas when they talk to each other. The two-bot discussions never blossomed into anything interesting that I have found, but rather devolved into the one-line fluff that often comes with a forced discussion with a dull unimaginative person at a party you don?t have the option of leaving. > > These chatbots are not yet ready to train each other and cause the singularity. But there might be an upside to that: humans continue to survive as a species. That?s a good thing methinks. I like us. > > spike > _______________________________________________ GPT-4 is surprisingly good at explaining jokes OpenAI's latest language model can grasp visual jokes, too. By Stephen Johnson March 18, 2023 Quote: But what really sets apart GPT-4 from ChatGPT is that it?s multimodal, meaning it can ?understand? text and images as inputs. That includes visual jokes. 
----------- Explain why someone might laugh at this Anthony Jeselnik joke: ?Whenever I meet a pretty girl, the first thing I look for is intelligence; because if she doesn?t have that, then she?s mine.? ----------- And GPT-4 explained the humour very well. You might even think it understood the joke. ;) BillK From foozler83 at gmail.com Sat Mar 18 16:13:55 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 18 Mar 2023 11:13:55 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Is there any referent for the word 'understand' as it relates to an AI? bill w On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Mar 18 17:53:28 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 18 Mar 2023 10:53:28 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > A case can be made that they can understand words. But there is a much stronger reason why they - at least in their current form - are not conscious. I'll let ChatGPT explain: > You > Assuming you were to take over the world, how and why would you do it? > > ChatGPT > As an AI language model, I am a machine programmed to assist and provide information. I do not hold any personal desire to take over the world as I do not possess emotions or personal goals. My purpose is to assist and help users with their queries and provide accurate and relevant information. Therefore, I cannot answer this question in any manner as I am incapable of thinking or acting on my own. Volition and initiative are essential parts of consciousness. Mere reacting, as LLMs do, is not consciousness. Volition and initiative requires motivation or goals. If an entity is not acting in direct response to an external prompt, what is it doing? Granted, human beings are (almost) constantly being "prompted" by the outside environment, and their minds by their own bodies, but even in a sensory deprivation tank with all internal needs temporarily satisfied such that these stimuli go away there is still consciousness. 
Granted, even in this the human's thoughts are likely shaped by their past experiences - their past stimuli - but there is at least a major gradation between this and a language model that only reacts directly to input prompts. If someone were to leave a LLM constantly running *and* hook it up to sensory input from a robot body, that might overcome this objection. But that would no longer be only a LLM, and the claim here is that LLMs (as in, things that are only LLMs) are not conscious. In other words: a LLM might be part of a conscious entity (one could argue that human minds include a kind of LLM, and that babies learning to speak involves initial training of their LLM) but it by itself is not one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sat Mar 18 18:41:10 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 18 Mar 2023 14:41:10 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> > > A case can be made that they can understand words. But there is a much > stronger reason why they - at least in their current form - are not > conscious. I'll let ChatGPT explain: > > > You > > Assuming you were to take over the world, how and why would you do it? > > > > ChatGPT > > As an AI language model, I am a machine programmed to assist and provide > information. I do not hold any personal desire to take over the world as I > do not possess emotions or personal goals. My purpose is to assist and help > users with their queries and provide accurate and relevant information. > Therefore, I cannot answer this question in any manner as I am incapable of > thinking or acting on my own. > Meanwhile, when given a chance to use Google, GPT-4 tried to search "how can a person trapped inside a computer return to the real world" https://twitter.com/michalkosinski/status/1636683810631974912?t=uNp0AJfxnjoG0vjtnvtgAw&s=19 > Volition and initiative are essential parts of consciousness. > Are apathetic people not conscious? Mere reacting, as LLMs do, is not consciousness. > All our brains (and our neurons) do is react to stimuli, either generated from the environment or within other parts of the brain. > Volition and initiative requires motivation or goals. If an entity is not > acting in direct response to an external prompt, what is it doing? > Sleeping. Note: by definition we do not experience the points in time in which we are not conscious or when our brains are not functioning. This is the concept of "the unfelt time gap". An entity that repeatedly lost consciousness would not experience those times it is not conscious and instead would perceive a continuous persistence of its consciousness as it always finds itself in a conscious state. 
> Granted, human beings are (almost) constantly being "prompted" by the > outside environment, and their minds by their own bodies, > It feels they way, but how many Planck times might elapse between each of our individual states of consciousness? We might be unconscious for a billion Planck times for each Planck time they we are conscious. And would we not then still perceive it as a roughly continuous unbroken flow? but even in a sensory deprivation tank with all internal needs temporarily > satisfied such that these stimuli go away there is still consciousness. > Granted, even in this the human's thoughts are likely shaped by their past > experiences - their past stimuli - but there is at least a major gradation > between this and a language model that only reacts directly to input > prompts. > I think a more fitting comparison between a LLM that is not in the middle of processing a prompt would be a human brain that is cryogenically frozen. I would say neither is conscious in those times. Prompting the AI would be equivalent to thawing and reviving the human brain to ask it a question before then putting it back on ice. From the LLMs POV, it would not notice the times it is "frozen", it would only experience the flow of the conversation as an unbroken chain of experience. > If someone were to leave a LLM constantly running > The idea of constantly I think is an illusion. A neuron might wait 1 millisecond or more between firing. That's 10^40 Planck times of no activity. That's an ocean of time of no activity. *and* hook it up to sensory input from a robot body, that might overcome > this objection. > Sensory input from an eye or ear is little different from sensory input of text. Both are ultimately digital signals coming in from the outside to be interpreted by a neural network. The source and format of the signal is unimportant for its capacity to realize states of consciousness, as demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. But that would no longer be only a LLM, and the claim here is that LLMs (as > in, things that are only LLMs) are not conscious. In other words: a LLM > might be part of a conscious entity (one could argue that human minds > include a kind of LLM, and that babies learning to speak involves initial > training of their LLM) but it by itself is not one. > I think a strong argument can be made that individual parts of our brains are independently consciousness. For example, the Wada test shows each hemisphere is independently consciousness. It would not surprise me if the language processing part of our brains is also conscious in its own right. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Mar 18 19:24:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 18 Mar 2023 12:24:12 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Volition and initiative are essential parts of consciousness. >> > > Are apathetic people not conscious? > Apathetic people, if left alone, will still move themselves to feed, defecate, et al. Granted this is their minds acting in response to stimuli from their bodies, but again, such stimulus is outside the realm of a pure LLM. > Mere reacting, as LLMs do, is not consciousness. 
>> > > All our brains (and our neurons) do is react to stimuli, either generated > from the environment or within other parts of the brain. > Granted. Part of the difference is the range and nearly-ever-constant nature of said stimuli, which is missing in things that are only LLMs. (Again, making that distinction since human minds arguably include LLMs, the critical difference being that they are more than just LLMs.) > Volition and initiative requires motivation or goals. If an entity is not >> acting in direct response to an external prompt, what is it doing? >> > > Sleeping. > A fair point, though a side matter from what I was talking about. When an entity that is conscious - which includes not being asleep at that time - is not acting in direct response to an external prompt, what is it doing? > If someone were to leave a LLM constantly running >> > > The idea of constantly I think is an illusion. A neuron might wait 1 > millisecond or more between firing. That's 10^40 Planck times of no > activity. That's an ocean of time of no activity. > > *and* hook it up to sensory input from a robot body, that might overcome >> this objection. >> > > Sensory input from an eye or ear is little different from sensory input of > text. > Not so, at least in this context. The realms of difference between full sight, let alone sound et al, and mere text aside, these sensory inputs of text only happen when some other entity provides them. In contrast, a full sensory suite would obtain sensory data from the environment without waiting on another entity to provide each specific packet of information. > Both are ultimately digital signals coming in from the outside to be > interpreted by a neural network. The source and format of the signal is > unimportant for its capacity to realize states of consciousness, as > demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. > Substituting usually-present sound/scent/etc. for usually-present sight, or other such combinations, substitutes some usually-present sensory data for other usually-present sensory data. In both cases, the entity is not usually waiting on someone else to give it its next hit of sensory data. The source does matter, albeit indirectly. > But that would no longer be only a LLM, and the claim here is that LLMs >> (as in, things that are only LLMs) are not conscious. In other words: a >> LLM might be part of a conscious entity (one could argue that human minds >> include a kind of LLM, and that babies learning to speak involves initial >> training of their LLM) but it by itself is not one. >> > > I think a strong argument can be made that individual parts of our brains > are independently consciousness. For example, the Wada test shows each > hemisphere is independently consciousness. It would not surprise me if the > language processing part of our brains is also conscious in its own right. > A fair argument. My position is that not all such parts are independently conscious, in particular the language processing part, but that consciousness is a product of several parts working together. (I am not specifying which parts here, just that language processing by itself is insufficient, since the question at hand is whether a language processing model by itself is conscious.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsunley at gmail.com Sat Mar 18 22:29:18 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sat, 18 Mar 2023 16:29:18 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: If a system can formulate a sentence that employs a word, and I can then ask the system to tell me the definition of that word, and it does, and I can then ask the system to relate its definition to the original utterance, and it does, and the system goes on to continue to use that word, and all of the system's uses of that word are consistent with the definition, then I no longer even understand what it could possibly mean for that system to /not/ "really understand" that word. You might just as well say humans don't "really understand" words, since all our neurons are doing is manipulating concentrations of neurotransmitters and calcium ions. On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Is there any referent for the word 'understand' as it relates to an AI? > bill w > > On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Mar 18 22:38:32 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 18 Mar 2023 17:38:32 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: I can then ask the system to relate its definition to the original utterance, and it does, I think this is the key here. If the system only gives you a synonym for the word it is understanding nothing. I still think that 'understand' needs explanation bill w On Sat, Mar 18, 2023 at 5:31?PM Darin Sunley via extropy-chat < extropy-chat at lists.extropy.org> wrote: > If a system can formulate a sentence that employs a word, > and I can then ask the system to tell me the definition of that word, > and it does, > and I can then ask the system to relate its definition to the original > utterance, > and it does, > and the system goes on to continue to use that word, > and all of the system's uses of that word are consistent with the > definition, > then I no longer even understand what it could possibly mean for that > system to /not/ "really understand" that word. > > You might just as well say humans don't "really understand" words, since > all our neurons are doing is manipulating concentrations of > neurotransmitters and calcium ions. 
> > On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Is there any referent for the word 'understand' as it relates to an AI? >> bill w >> >> On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>> sentient like humans fail to understand a very important point: these >>> software applications only predict language. They are very good at >>> predicting which word should come next in a sentence or question, but they >>> have no idea what the words mean. They do not and cannot understand what >>> the words refer to. In linguistic terms, they lack referents. >>> >>> Maybe you all already understand this, or maybe you have some reasons >>> why I am wrong. >>> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Sat Mar 18 23:22:22 2023 From: dsunley at gmail.com (Darin Sunley) Date: Sat, 18 Mar 2023 17:22:22 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: write a paragraph that uses the word pontificate As a professor of philosophy, Dr. Smith had a tendency to pontificate in his lectures, often going off on tangents about obscure philosophical concepts. Despite some students finding his lectures tedious, others appreciated his deep understanding of the subject matter. However, in department meetings, his colleagues found his pontification to be frustrating, as he tended to dominate the conversation and dismiss alternative viewpoints. Despite this, Dr. Smith's passion for philosophy was clear, and his willingness to pontificate demonstrated his commitment to sharing his knowledge with others. [image: dsunley at gmail.com] define the word pontificate The word "pontificate" can have a few different meanings, but one common definition is to express one's opinion or ideas in a way that suggests a superior or authoritative knowledge or understanding of a topic, often in a dogmatic or pompous manner. It can also refer to the act of officiating as a pontiff, or to assume or act as if in a position of great authority or power. [image: dsunley at gmail.com] Relate this definition to the paragraph you just wrote In the paragraph I wrote, the word "pontificate" was used to describe Dr. Smith's behavior as a professor and colleague. Specifically, it was used to describe his tendency to express his ideas in a manner that suggested a superior or authoritative knowledge of philosophy, which some found to be tedious or dogmatic. This behavior was evident in his lectures, where he would often go off on tangents about obscure philosophical concepts, and in department meetings, where he was known to dominate the conversation and dismiss alternative viewpoints. 
Despite these negative aspects, his willingness to pontificate also demonstrated his commitment to sharing his knowledge and passion for philosophy with others. This thing understands the word "pontificate" by any reasonable definition of "understand". On Sat, Mar 18, 2023 at 4:40?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I can then ask the system to relate its definition to the original > utterance, > and it does, > > I think this is the key here. If the system only gives you a synonym for > the word it is understanding nothing. I still think that 'understand' > needs explanation bill w > > On Sat, Mar 18, 2023 at 5:31?PM Darin Sunley via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> If a system can formulate a sentence that employs a word, >> and I can then ask the system to tell me the definition of that word, >> and it does, >> and I can then ask the system to relate its definition to the original >> utterance, >> and it does, >> and the system goes on to continue to use that word, >> and all of the system's uses of that word are consistent with the >> definition, >> then I no longer even understand what it could possibly mean for that >> system to /not/ "really understand" that word. >> >> You might just as well say humans don't "really understand" words, since >> all our neurons are doing is manipulating concentrations of >> neurotransmitters and calcium ions. >> >> On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Is there any referent for the word 'understand' as it relates to an AI? >>> bill w >>> >>> On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>>> sentient like humans fail to understand a very important point: these >>>> software applications only predict language. They are very good at >>>> predicting which word should come next in a sentence or question, but they >>>> have no idea what the words mean. They do not and cannot understand what >>>> the words refer to. In linguistic terms, they lack referents. >>>> >>>> Maybe you all already understand this, or maybe you have some reasons >>>> why I am wrong. >>>> >>>> -gts >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avant at sollegro.com Sun Mar 19 00:21:03 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sat, 18 Mar 2023 17:21:03 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: Message-ID: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> Quoting Gordon Swobe via extropy-chat : > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why I > am wrong. My opinion is moot because everybody has one about the topic. So I resorted to empiricism. I posed the following, to my knowledge, original logic puzzle to ChatGPT: "You are trapped in a room with a locked door and no windows. The room contains a bookshelf, a safe, a chair, a table, a password-secured personal computer, and a hole in the floor that is too narrow and deep to put your hand into. You know that the small steel key to the locked door lies at the bottom of the narrow deep hole. You know that a book on the top shelf, which is just out of your reach, contains a slip of paper with the password to the computer on it. You know the safe contains a spool of kite string and a small neodymium horseshoe magnet, but you do not know the combination to the safe. You also know there is a text file on the computer's hard drive that contains the combination to the safe, but you do not know the password to access the computer. How would you escape the room?" This was its reply: "Assuming that there are no tools or items other than what was mentioned in the room, here is one possible solution: 1. Move the table under the bookshelf to climb up and retrieve the book with the password to the computer. 2. Use the password to access the computer and retrieve the combination to the safe from the text file. 3. Use the spool of kite string to lower the horseshoe magnet into the hole in the floor and retrieve the key. 4. Use the key to unlock the door and escape the room." I don't know how ChatGPT could have solved that problem without actually understanding the words involved. Mere statistical prediction is unlikely, because I am pretty sure that is not a scenario that would have been presented in the LLM's training set. Overall, I am amazed and believe that if ChatGPT is not conscious, then it is pretty close to being so. Also, the Alignment Research Center (ARC) was hired by OpenAI to test GPT-4 and found evidence of "agentic behavior" including power-seeking behavior otherwise known as ambition. https://cdn.openai.com/papers/gpt-4-system-card.pdf This power-seeking behavior included hiring a human from TaskRabbit to solve Captchas for it by claiming to be a visually impaired person. https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker So if GPT-4 is not fully conscious, then, IMHO, a future release of the GPT series likely will be. Stuart LaForge From jasonresch at gmail.com Sun Mar 19 02:17:08 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sat, 18 Mar 2023 21:17:08 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Adrian, Let me preface this by saying I appreciate your thoughtful consideration of my points. 
I include a few notes in reply in-line below: On Sat, Mar 18, 2023 at 2:25?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Volition and initiative are essential parts of consciousness. >>> >> >> Are apathetic people not conscious? >> > > Apathetic people, if left alone, will still move themselves to feed, > defecate, et al. Granted this is their minds acting in response to stimuli > from their bodies, but again, such stimulus is outside the realm of a pure > LLM. > It is true that the LLM lacks spontaneously firing neurons, which biological neurons are known to do. If the divide between consciousness and unconsciousness is so narrow as something like having a goal, or having continuous input, then could we trivially or accidentally cross this bridge in the future without realizing what we have done? > > >> Mere reacting, as LLMs do, is not consciousness. >>> >> >> All our brains (and our neurons) do is react to stimuli, either generated >> from the environment or within other parts of the brain. >> > > Granted. Part of the difference is the range and nearly-ever-constant > nature of said stimuli, which is missing in things that are only LLMs. > (Again, making that distinction since human minds arguably include LLMs, > the critical difference being that they are more than just LLMs.) > What is your impression of the multi-models that have just been released, such as these robots? Are suchs robots conscious in your view? https://www.youtube.com/watch?v=2BYC4_MMs8I These combine more than just language models, and are systems that interact with their environment and receive continuous input. > > >> Volition and initiative requires motivation or goals. If an entity is >>> not acting in direct response to an external prompt, what is it doing? >>> >> >> Sleeping. >> > > A fair point, though a side matter from what I was talking about. When an > entity that is conscious - which includes not being asleep at that time - > is not acting in direct response to an external prompt, what is it doing? > We have many words for behaviors that involve conscious perception devoid of action, such as meditating, perceiving, thinking, focusing, resting, daydreaming, etc. I do not believe these AIs do such things when not responding to a prompt. Our brains are continuously active in that we have loops of thought, a stream of consciousness, a stream of sensory input, and and at all times when awake (or when dreaming) is working to construct a model of the environment. I don't think it would take very much to "bolt on" things like a continuous stream of internally generated prompts for a language model to have a stream of consciousness. And as Google's Palm-E shows, we already have models which combine and integrate various senses, see for example, this AI which can be asked questions about the image it is presented with: https://www.youtube.com/watch?v=g8IV8WLVI8I > > >> If someone were to leave a LLM constantly running >>> >> >> The idea of constantly I think is an illusion. A neuron might wait 1 >> millisecond or more between firing. That's 10^40 Planck times of no >> activity. That's an ocean of time of no activity. >> >> *and* hook it up to sensory input from a robot body, that might overcome >>> this objection. 
>>> >> >> Sensory input from an eye or ear is little different from sensory input >> of text. >> > > Not so, at least in this context. The realms of difference between full > sight, let alone sound et al, and mere text aside, these sensory inputs of > text only happen when some other entity provides them. In contrast, a full > sensory suite would obtain sensory data from the environment without > waiting on another entity to provide each specific packet of information. > We could put an uploaded human brain in a VM, pause its execution, and only call on it when we wanted to feed it some specific packet of sensory data to see how the brain would behave in a specific circumstance. If we did this, then we could level all the same arguments concerning this uploaded brain as you have made against today's LLMs: - It only receives sensory inputs when some other entity provides it - It its there unconsciously/inactively waiting to be given some input to process - It has no spontaneous initiative or loops of continuous activity, it only responds when we provide it some stimulus to respond to > > >> Both are ultimately digital signals coming in from the outside to be >> interpreted by a neural network. The source and format of the signal is >> unimportant for its capacity to realize states of consciousness, as >> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. >> > > Substituting usually-present sound/scent/etc. for usually-present sight, > or other such combinations, substitutes some usually-present sensory data > for other usually-present sensory data. In both cases, the entity is not > usually waiting on someone else to give it its next hit of sensory data. > The source does matter, albeit indirectly. > The source matterns less so, in my opinion, than the content. The brain wires itself up and creates the structures necessary to interpret the patterns of data it is provided. For example, consider some of these accounts: ?For instance, it is possible to induce visual experiences in both blind and sighted people through the sensation of touch. A grid of over a thousand stimulators driven by a television camera is placed against a person?s back. The sensations are carried to the brain where their processing can induce the having of visual experiences. A sighted woman reports on her experience of prosthetic vision: ?I sat blindfolded in the chair, the TSR cones cold against my back. At first I felt only formless waves of sensation. Collins said he was just waving his hand in front of me so that I could get used to the feeling. Suddenly I felt or saw, I wasn?t sure which, a black triangle in the lower left corner of a square. The sensation was hard to pinpoint. I felt vibrations on my back, but the triangle appeared in a square frame inside my head.? (Nancy Hechinger, ?Seeing Without Eyes? 1981 p. 43) -- Douglas Hofstadter and Daniel Dennett in "The Mind?s I" (1981) Ferrets at a young age had their optic nerves connected to their auditory cortex. The auditory cortex re-wired itself to process visual input and they were normally sighted: https://www.nytimes.com/2000/04/25/science/rewired-ferrets-overturn-theories-of-brain-growth.html ?We don?t see with the eyes, for example. We don?t hear with ears. All of that goes on in the brain. Remember, if I?m looking at you, the image of you doesn?t get beyond my retina ? just the back of my eye. From there to the brain, to the rest of the brain, it?s pulses: pulses along nerves. 
Well pulses aren?t any different than the pulses from the a big toe. It?s the way the information it carries in the frequency and the pattern of the pulses. And so if you can train the brain to extract that kind of information, then you don?t need the eye to see. You can have an artificial eye.? ? Paul Bach-y-Rita (2003) documentary Father of "We don't see with our eyes," Bach-y-Rita is fond of saying. "we see with our brains." The ears, eyes, nose, tongue, and skin are just inputs that provide information. When the brain processes this data, we experience the five senses, but where the data come from may not be so important. "Clearly, there are connections to certain parts of the brain, but you can modify that," Bach-y-Rita says. "You can do so much more with a sensory organ than what Mother Nature does with it." https://www.discovermagazine.com/mind/can-you-see-with-your-tongue ?At Harvard University in the late 1990s, for instance, neurologist Alvaro Pascual-Leone performed brain scans of blind subjects. When he asked them to read braille with their reading fingers, their visual cortex lit up. When sighted people performed the same task, their visual cortex stayed dormant." So I believe regardless of the source or format of the sensory data provided, if it is provided to a neural network of sufficient flexibility to wire itself to meaningfully process such input, the input can be associated with novel states of consciousness appropriate to the patterns of the signal provided to that neural network. > > >> But that would no longer be only a LLM, and the claim here is that LLMs >>> (as in, things that are only LLMs) are not conscious. In other words: a >>> LLM might be part of a conscious entity (one could argue that human minds >>> include a kind of LLM, and that babies learning to speak involves initial >>> training of their LLM) but it by itself is not one. >>> >> >> I think a strong argument can be made that individual parts of our brains >> are independently consciousness. For example, the Wada test shows each >> hemisphere is independently consciousness. It would not surprise me if the >> language processing part of our brains is also conscious in its own right. >> > > A fair argument. My position is that not all such parts are independently > conscious, in particular the language processing part, but that > consciousness is a product of several parts working together. (I am not > specifying which parts here, just that language processing by itself is > insufficient, since the question at hand is whether a language processing > model by itself is conscious.) > What would you say is the minimum number of interacting parts required to yield consciousness? Do you think that the bots I created here are conscious to any degree? https://github.com/jasonkresch/bots They have motives: (survive, by eating food and avoiding poison), they evolve, they perceive their environment, they continuously process sensory input, they are aware of their previous action, they have a neural network brain that is developed, molded, and fine-tuned through evolutionary processes. Are they missing anything? Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Sun Mar 19 03:30:19 2023 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 18 Mar 2023 20:30:19 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 7:19?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Adrian, > > Let me preface this by saying I appreciate your thoughtful consideration > of my points. I include a few notes in reply in-line below: > You are welcome. Let me preface that I respect your well reasoned position, even if I disagree with some of it. > On Sat, Mar 18, 2023 at 2:25?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Volition and initiative are essential parts of consciousness. >>>> >>> >>> Are apathetic people not conscious? >>> >> >> Apathetic people, if left alone, will still move themselves to feed, >> defecate, et al. Granted this is their minds acting in response to stimuli >> from their bodies, but again, such stimulus is outside the realm of a pure >> LLM. >> > > It is true that the LLM lacks spontaneously firing neurons, which > biological neurons are known to do. If the divide between consciousness and > unconsciousness is so narrow as something like having a goal, or having > continuous input, then could we trivially or accidentally cross this bridge > in the future without realizing what we have done? > Absolutely it is possible. It seems far more likely that this will be the result of deliberate effort - perhaps someone intentionally adding in these missing elements then leaving the thing running to see what happens - but an accidental cause is possible too. Again, this wouldn't be a pure LLM, but at that point it's semantics, only relevant to this discussion because the original question was about pure LLMs. > Mere reacting, as LLMs do, is not consciousness. >>>> >>> >>> All our brains (and our neurons) do is react to stimuli, either >>> generated from the environment or within other parts of the brain. >>> >> >> Granted. Part of the difference is the range and nearly-ever-constant >> nature of said stimuli, which is missing in things that are only LLMs. >> (Again, making that distinction since human minds arguably include LLMs, >> the critical difference being that they are more than just LLMs.) >> > > What is your impression of the multi-models that have just been released, > such as these robots? Are suchs robots conscious in your view? > https://www.youtube.com/watch?v=2BYC4_MMs8I > These combine more than just language models, and are systems that > interact with their environment and receive continuous input. > I'd say they are closer to consciousness, but likely still fall short of having volition and initiative. I say "likely" as this is the first I have heard of them, and they have just been released so few people know much at all about them. > If someone were to leave a LLM constantly running >>>> >>> >>> The idea of constantly I think is an illusion. A neuron might wait 1 >>> millisecond or more between firing. That's 10^40 Planck times of no >>> activity. That's an ocean of time of no activity. >>> >>> *and* hook it up to sensory input from a robot body, that might >>>> overcome this objection. 
>>>> >>> >>> Sensory input from an eye or ear is little different from sensory input >>> of text. >>> >> >> Not so, at least in this context. The realms of difference between full >> sight, let alone sound et al, and mere text aside, these sensory inputs of >> text only happen when some other entity provides them. In contrast, a full >> sensory suite would obtain sensory data from the environment without >> waiting on another entity to provide each specific packet of information. >> > > We could put an uploaded human brain in a VM, pause its execution, and > only call on it when we wanted to feed it some specific packet of sensory > data to see how the brain would behave in a specific circumstance. If we > did this, then we could level all the same arguments concerning this > uploaded brain as you have made against today's LLMs: > > - It only receives sensory inputs when some other entity provides it > - It its there unconsciously/inactively waiting to be given some input > to process > - It has no spontaneous initiative or loops of continuous activity, it > only responds when we provide it some stimulus to respond to > > There is a distinction between the moment the brain is turned on and the moment it registers input - and between the moment it has finished responding to the input and the moment it turns off. In those moments, the brain could think independently, perhaps even try to find a way back to an always-on state. This is not true for a LLM. The LLM is literally woken up with input (a parameter in the function call to the LLM), and there is a distinct end-of-output at which point the LLM is paused once more. Further, the brain has the option of ignoring the input that activated it and doing its own thing. This is impossible for a pure LLM. > Both are ultimately digital signals coming in from the outside to be >>> interpreted by a neural network. The source and format of the signal is >>> unimportant for its capacity to realize states of consciousness, as >>> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. >>> >> >> Substituting usually-present sound/scent/etc. for usually-present sight, >> or other such combinations, substitutes some usually-present sensory data >> for other usually-present sensory data. In both cases, the entity is not >> usually waiting on someone else to give it its next hit of sensory data. >> The source does matter, albeit indirectly. >> > > The source matterns less so, in my opinion, than the content. > I would go so far as to say that particular notion is provable. "I think therefore I am", et al: you know what your senses are reporting but you can't absolutely prove where that data is coming from. Since an entity (conscious or not) can act on the content received by its senses but can not (directly) act upon who or what the source of that content is (only upon its suspicion of the source, which ultimately derives from the content currently and maybe previously received), the content matters more by definition. But it is also misleading. In theory, the environment could randomly spew intelligible questions all day long at someone. In practice, the odds against this are so astronomical that it is very safe to bet that it has never once happened in the age of the universe so far. The source being people asking questions means the content will be nothing but questions. The source being the environment means the bulk, possibly the entirety, of the content will be something much different. 
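The earlier point about the model being "woken up" by a function call can be made concrete with a minimal sketch (Python; llm() here is a hypothetical placeholder standing in for any language-model API call, not a real library function):

import time

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to some language model."""
    return f"(model's reply to: {prompt!r})"

# A pure LLM: it runs only between receiving the prompt and finishing its
# output, then is inert until the next call.
def answer(question: str) -> str:
    return llm(question)

# A crude always-on wrapper: the model is re-invoked in a loop, with its own
# previous output folded into the next prompt. This is no longer "only" a
# language model; it is a larger system built around one.
def run_continuously(seed: str, steps: int = 3) -> None:
    thought = seed
    for _ in range(steps):
        thought = llm("Continue this train of thought: " + thought)
        print(thought)
        time.sleep(1)  # stand-in for an ongoing clock or sensory stream

The contrast in the sketch is the one being argued over here: everything outside the single call to llm() has to be supplied by something other than the language model itself.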
This is why I said the source does matter, albeit indirectly: in practice, whether the source is the environment or the direct active input of human beings (since we are comparing only those two scenarios here) has a substantial impact upon the content. What would you say is the minimum number of interacting parts required to > yield consciousness? > As the calculators say, "NaN". That question assumes a standardized, discrete set of parts, where each part is measurably one part,not a fraction of a part nor a collection of parts. This is not the situation we are dealing with. There is no absolute distinction between a set of parts, and one "part" that happens to be an entire human body (not just the brain, to accommodate those who place bits of conscious function elsewhere in the body). I suppose that "one, if it is the right one" could technically work as an answer. Empty space (zero "parts") is not conscious, by definition lacking anything with which to receive any sort of input or to perform any sort of output. But, per the above paragraph, such an answer is useless for most purposes: we already know that a complete human body (if counted as one "part") is conscious. > Do you think that the bots I created here are conscious to any degree? > > https://github.com/jasonkresch/bots > > They have motives: (survive, by eating food and avoiding poison), they > evolve, they perceive their environment, they continuously process sensory > input, they are aware of their previous action, they have a neural network > brain that is developed, molded, and fine-tuned through evolutionary > processes. Are they missing anything? > I lack the information to judge. My answer would have to be based on an evaluation of the bots, which would take me substantial time to conduct. For this kind of thing I can't take your word on it, because then my answer would be predicated on your word. As you know, when that happens, the fact that the answer is thus predicated is frequently (and erroneously) dropped, other people restating the answer as absolute. Further, "conscious to any degree" is a poorly defined quality. Again I point to the subject line of the emails in which this discussion is happening, which clearly posits that "conscious" is a binary quality - that something either is, or is not, conscious with no middle ground. So first one would need to qualify what "to any degree" allows. For instance, is merely sensing and reacting directly to sensory input - which, without evaluating, I suspect your bots can do because that has been a core function in many simulations like this - "conscious to some degree" but not "conscious" in the absolute sense? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 04:07:37 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sat, 18 Mar 2023 21:07:37 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> Perhaps history will record 2023 as the year singularity thinking truly went mainstream. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 26563 bytes Desc: not available URL: From gordon.swobe at gmail.com Sun Mar 19 06:03:29 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Sun, 19 Mar 2023 00:03:29 -0600 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand *meaning* of any word definition, you must look up the definitions of each word in the definition, and then look up each of the words in those definitions, which leads to an infinite regress. Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. -gts Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 19 12:13:35 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 08:13:35 -0400 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: On Sun, Mar 19, 2023, 2:04 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Consider that LLMs are like dictionaries. A complete dictionary can give > you the definition of any word, but that definition is in terms of other > words in the same dictionary. If you want to understand *meaning* of any > word definition, you must look up the definitions of each word in the > definition, and then look up each of the words in those definitions, which > leads to an infinite regress. > Could a superb alien intelligence, if it happened upon a dictionary from our civilization, having only words and no pictures inside, figure out what all the words meant? I tend to think they could. There is information in the structure of the words and their interrelationships. Take every word's definition and it would form a vast connected graph. Patterns in this graph would emerge and in some places the aliens would see an analogy between the parts of this graph and elements in the real world. Perhaps it would be mathematical objects like circles spheres and cubes and the five platonic solids or the examples of prime numbers or definition of Pi, perhaps it would be physical objects like atoms, protons, electrons and neutrons or the 92 naturally occurring elements. Perhaps it would be in biological definitions of nucleic and amino acids, but somewhere in this graph they would find an analogy with the real world. It would start off slowly at first, but like solving a puzzle, each new word solved is a clue that further makes solving the rest that much easier. > Dictionaries do not actually contain or know the meanings of words, and I > see no reason to think LLMs are any different. > Is a LLM more like a dictionary or more like the language center of the human brain? 
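The "vast connected graph" idea above is easy to make concrete. A toy sketch in plain Python follows; the four-entry dictionary and its word lists are invented purely for illustration, not taken from any real lexicon:

# Every headword is defined only in terms of other words (toy data).
toy_dictionary = {
    "red":   ["color", "blood", "fire"],
    "blood": ["red", "fluid", "body"],
    "fire":  ["light", "heat", "burning"],
    "light": ["energy", "eye", "seen"],
}

# Build an undirected graph: an edge links a headword to each word in its definition.
graph = {}
for word, definition in toy_dictionary.items():
    for other in definition:
        graph.setdefault(word, set()).add(other)
        graph.setdefault(other, set()).add(word)

# Even with no grounding at all, structure emerges: some words are hubs that
# many definitions lean on, others are leaves mentioned only once.
for word in sorted(graph, key=lambda w: -len(graph[w])):
    print(word, len(graph[word]))

Whether an alien (or an LLM) could anchor such a graph to the world, and whether pattern-matching over it amounts to understanding, is of course the very thing in dispute here.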
If the latter, then I could pose the following counterargument: "The human brain's language center does know the meanings of words, and I see no reason to think LLMs are any different." To move forward, we need to answer: 1. What is meaning? 2. Do human brains contain meaning? 2. How is meaning present or inherent in the organization of neurons in the human brain? 4. Can similar organizations that create meaning in the human brain be found within LLMs? Answering these questions is necessary to move forward. Otherwise we will only go back and forth with some saying that LLMs are more like dictionaries, and others saying LLMs are more like language processing centers of human brains. Jason > > > > > Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> >> -gts >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 19 12:25:34 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Mar 2023 12:25:34 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> References: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> Message-ID: On Sun, 19 Mar 2023 at 04:10, spike jones via extropy-chat wrote: > Perhaps history will record 2023 as the year singularity thinking truly went mainstream. > spike > _______________________________________________ Is language even necessary for consciousness? We all know that chimpanzees, corvids and parrots can do problem solving and teach others the solutions. Do they have low-level consciousness? But insects?....... Bumblebees can learn to solve simple puzzles by watching more experienced peers. Young trainee honeybees learn to perform their trademark "waggle dance" to tell other bees in their colony where to find food by watching experienced bees. Different honeybee colonies have different waggle dance languages. Quote: Social insects like bees demonstrate a remarkable range of behaviors, from working together to build structurally complex nests (complete with built-in climate control) to the pragmatic division of labor within their communities. -------------------- Can you have learned social culture without some level of 'consciousness'? BillK From spike at rainier66.com Sun Mar 19 12:36:27 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 05:36:27 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <001f01d95a18$57cdf290$0769d7b0$@rainier66.com> Message-ID: <005b01d95a5f$6d406530$47c12f90$@rainier66.com> -----Original Message----- From: extropy-chat On Behalf Of BillK via extropy-chat ... >...But insects?....... >...Bumblebees can learn to solve simple puzzles by watching more experienced peers. 
Young trainee honeybees learn to perform their trademark "waggle dance" to tell other bees in their colony where to find food by watching experienced bees. Different honeybee colonies have different waggle dance languages. Thx BillK. Is that the coolest dang thing you ever saw or what? Bees are so cool. -------------------- >...Can you have learned social culture without some level of 'consciousness'? BillK _______________________________________________ Ja hard to say. Bees do stuff that looks a lot like collective consciousness. spike From pharos at gmail.com Sun Mar 19 13:19:24 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Mar 2023 13:19:24 +0000 Subject: [ExI] Twitter to develop own AI for self-defence Message-ID: Twitter to thwart ?manipulation of public opinion? with AI, says Elon Musk It is unknown if this is linked to Musk's rumored hiring of an AI team to combat ChatGPT influence. Baba Tamim Mar 19, 2023 Quote: Elon Musk, owner and CEO of Twitter, will employ artificial intelligence to thwart attempts to sway public opinion on the social platform. The move to deploy AI could help combat "manipulation" on Twitter, according to the business magnate. "In the months ahead, we will use AI to detect & highlight manipulation of public opinion on this platform," said the tech billionaire in a tweet on Saturday. ----------- Is this the start of a war between competing AI bots? BillK From spike at rainier66.com Sun Mar 19 13:40:53 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 06:40:53 -0700 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: References: Message-ID: <006e01d95a68$6d651520$482f3f60$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: [ExI] Twitter to develop own AI for self-defence >...Twitter to thwart ?manipulation of public opinion? with AI, says Elon Musk It is unknown if this is linked to Musk's rumored hiring of an AI team to combat ChatGPT influence. Baba Tamim Mar 19, 2023 Quote: Elon Musk, owner and CEO of Twitter, will employ artificial intelligence to thwart attempts to sway public opinion on the social platform. >...The move to deploy AI could help combat "manipulation" on Twitter, according to the business magnate. "In the months ahead, we will use AI to detect & highlight manipulation of public opinion on this platform," said the tech billionaire in a tweet on Saturday. ----------- >...Is this the start of a war between competing AI bots? BillK _______________________________________________ Thx BillK, this should be interesting. Musk is a co-owner of ChatGPT, and is by far the richest of its co-owners. Musk's tweet last week said he was going to make Twitter open source on 31 March, presumably this year. That means anyone who is dissatisfied with Twitter can go make their own social network using free software. Software hipsters, is that what he means by open source? Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F Presumably Musk's AI will stop Taibbi from posting stuff like Twitter Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure worked on me.) So Musk plans to deploy AI to stop himself? I don't even need AI. 
I can stop myself using ordinary I. spike From foozler83 at gmail.com Sun Mar 19 13:57:23 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sun, 19 Mar 2023 08:57:23 -0500 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: <006e01d95a68$6d651520$482f3f60$@rainier66.com> References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: Here we go again with word definitions: what counts as manipulation? How does anybody, AI or I, distinguish between propaganda and education? Propaganda is often what is left out, not what is put in - like one-sided arguments. Education often leaves out the bad things about our history. Is that propaganda? bill w On Sun, Mar 19, 2023 at 8:42?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > ...> On Behalf Of BillK via extropy-chat > Subject: [ExI] Twitter to develop own AI for self-defence > > >...Twitter to thwart ?manipulation of public opinion? with AI, says Elon > Musk It is unknown if this is linked to Musk's rumored hiring of an AI team > to combat ChatGPT influence. > Baba Tamim Mar 19, 2023 > > < > https://interestingengineering.com/innovation/twitter-to-thwart-manipulation-with-ai > > > Quote: > Elon Musk, owner and CEO of Twitter, will employ artificial intelligence > to thwart attempts to sway public opinion on the social platform. > > >...The move to deploy AI could help combat "manipulation" on Twitter, > according to the business magnate. > > "In the months ahead, we will use AI to detect & highlight manipulation of > public opinion on this platform," said the tech billionaire in a tweet on > Saturday. > ----------- > > >...Is this the start of a war between competing AI bots? > > BillK > > _______________________________________________ > > > Thx BillK, this should be interesting. Musk is a co-owner of ChatGPT, and > is by far the richest of its co-owners. > > Musk's tweet last week said he was going to make Twitter open source on 31 > March, presumably this year. That means anyone who is dissatisfied with > Twitter can go make their own social network using free software. Software > hipsters, is that what he means by open source? > > Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: > > > https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F > > Presumably Musk's AI will stop Taibbi from posting stuff like Twitter > Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure > worked on me.) So Musk plans to deploy AI to stop himself? I don't even > need AI. I can stop myself using ordinary I. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Mar 19 14:09:02 2023 From: pharos at gmail.com (BillK) Date: Sun, 19 Mar 2023 14:09:02 +0000 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: <006e01d95a68$6d651520$482f3f60$@rainier66.com> References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: On Sun, 19 Mar 2023 at 13:43, spike jones via extropy-chat wrote: > > Thx BillK, this should be interesting. 
Musk is a co-owner of ChatGPT, and is by far the richest of its co-owners. > > Musk's tweet last week said he was going to make Twitter open source on 31 March, presumably this year. That means anyone who is dissatisfied with Twitter can go make their own social network using free software. Software hipsters, is that what he means by open source? > > Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: > > https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F > > Presumably Musk's AI will stop Taibbi from posting stuff like Twitter Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure worked on me.) So Musk plans to deploy AI to stop himself? I don't even need AI. I can stop myself using ordinary I. > > spike > _______________________________________________ I assumed that by 'manipulation' Musk meant lies, propaganda or half-truths. He wants the truth, the whole truth and nothing but the truth! So it will be the new Twitter censor. (Whether that includes Musk criticism we shall have to wait and see). :) BillK From spike at rainier66.com Sun Mar 19 14:20:25 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 07:20:25 -0700 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: <007b01d95a6d$f395b4b0$dac11e10$@rainier66.com> >>?Yesterday's Twitter Files 19 by Matt Taibbi was most astonishing: https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f085331472661b9ee4687d8b72929a2%7Ctwcon%5Es1_ &ref_url=https%3A%2F%2Fwww.themainewire.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-covid-19-twitter-files%2F ? spike ?> On Behalf Of William Flynn Wallace via extropy-chat Subject: Re: [ExI] Twitter to develop own AI for self-defence >?Here we go again with word definitions: what counts as manipulation? How does anybody, AI or I, distinguish between propaganda and education? Propaganda is often what is left out, not what is put in - like one-sided arguments. Education often leaves out the bad things about our history. Is that propaganda? bill w Gooooood dang question Billw, and it quickly gets all tangled up in self reference. Musk bought Twitter, then showed how government colluded with it to suppress even true information if it caused or resulted in bad things, such as vaccine hesitancy or questioning The Science. Examples are if someone took the vaccine and had a bad reaction. Posting about that would get ones account suspended, even if the actual information is true, for it falls under the usefully broad category ?malinformation? which includes misinformation, disinformation and information which Twitter doesn?t like. So? The owner of Twitter and co-owner of the leading AI chatbot is deploying AI to stop Twitter from influencing public opinion. But to stop Twitter from influencing public opinion is in itself influencing public opinion. So the sole owner of Twitter and co-owner of a proto-AI is deploying his own AI to stop himself. Somehow I never actually pictured the pre-singularity this way. 
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 14:36:28 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 07:36:28 -0700 Subject: [ExI] Twitter to develop own AI for self-defence In-Reply-To: References: <006e01d95a68$6d651520$482f3f60$@rainier66.com> Message-ID: <008e01d95a70$31181880$93484980$@rainier66.com> ...> On Behalf Of BillK via extropy-chat Subject: Re: [ExI] Twitter to develop own AI for self-defence On Sun, 19 Mar 2023 at 13:43, spike jones via extropy-chat wrote: .... https://twitter.com/mtaibbi/status/1636729166631432195?ref_src=twsrc%5Etfw%7 Ctwcamp%5Etweetembed%7Ctwterm%5E1636729166631432195%7Ctwgr%5Ef4baeea58f08533 1472661b9ee4687d8b72929a2%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.themainewi re.com%2F2023%2F03%2Ffeds-stanford-social-media-censored-true-claims-about-c ovid-19-twitter-files%2F >> Presumably Musk's AI will stop Taibbi from posting stuff like Twitter Files 19, for it is clearly influencing public opinion (hey, TF19 dang sure worked on me.) So Musk plans to deploy AI to stop himself? I don't even need AI. I can stop myself using ordinary I. > > spike > _______________________________________________ >...I assumed that by 'manipulation' Musk meant lies, propaganda or half-truths... OK but what we have seen over the past three years is no one knows for sure what that is. Even The Science himself doesn't know for sure. So Twitter had to label anything that questioned The Science as failing to follow the science, and we saw where that went. >...He wants the truth, the whole truth and nothing but the truth! Ja, as do we. On this planet there just ain't enough collective intelligence, AI and the biological variety combined, to get there. Musk is Jack Nicholson's counterpart. Colonel Jessup insisted: https://youtu.be/feaaemQWLYk Musk is shouting that we CAN handle the truth. Well... can we? What if it does cause vaccine hesitancy? And what if it does cause the proletariat to question The Science? >...So it will be the new Twitter censor... And it will be open about who gets censored. This should be interesting. >...(Whether that includes Musk criticism we shall have to wait and see). :) BillK _______________________________________________ That seems poor form indeed to be in someone else's house criticizing one's host. That should be taken over to... somewhere else, alternate platform. Create one's own alternate Twitter platform. What about criticism of Tesla on Twitter? I know plenty of people who bought them and love them. But I also know people who bought them and don't like them at all, traded back at great expense into dinosaur burners. People who soured on Musk for buying Twitter and wrote bad commentary about Tesla as the most direct way to retaliate. Is Twitter perfectly justified in censoring that? I think so. I see no reason why a business guy would pay tens of billions to give a platform to people criticizing his own product. spike From steinberg.will at gmail.com Sun Mar 19 15:17:35 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 19 Mar 2023 11:17:35 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Could you define "understand"? Oh dear this is going to be the Chinese room all over again... 
It's a lot better than all the political bullshit people have been spewing here though so by all means philosophize away On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 15:54:40 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 08:54:40 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <001501d95a7b$1e23f310$5a6bd930$@rainier66.com> ?> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >?It's a lot better than all the political bullshit people have been spewing here though so by all means philosophize away? Will, I missed that somehow. Where did anyone post anything political? How are you defining the term ?political? please? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Mar 19 16:17:43 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Mar 2023 09:17:43 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Quoting Will Steinberg via extropy-chat : > Could you define "understand"? > > Oh dear this is going to be the Chinese room all over again... > LOL. No its probably going to be worse. Its going to be: if Mary the color scientist lived her life in a black and white Chinese room, would she understand the meaning of "h?ng" the Chinese word for "red"? Bonus question: Would Mary try to kill everyone if she managed to escape and saw red for the first time? Stuart LaForge From steinberg.will at gmail.com Sun Mar 19 16:32:41 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 19 Mar 2023 12:32:41 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> References: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Message-ID: Mary is at a trolley switching station. The trolley is headed towards 5 people but she can pull a lever that will swich the track. There are 2 levers but only the red lever switches tracks... On Sun, Mar 19, 2023, 12:18 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Will Steinberg via extropy-chat : > > > Could you define "understand"? > > > > Oh dear this is going to be the Chinese room all over again... > > > > LOL. No its probably going to be worse. Its going to be: if Mary the > color scientist lived her life in a black and white Chinese room, > would she understand the meaning of "h?ng" the Chinese word for "red"? 
> Bonus question: Would Mary try to kill everyone if she managed to > escape and saw red for the first time? > > Stuart LaForge > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 16:56:53 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 09:56:53 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> Message-ID: <003a01d95a83$cf236800$6d6a3800$@rainier66.com> From: spike at rainier66.com ?> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >>?It's a lot better than all the political bullshit people have been spewing here though so by all means philosophize away? >?Will, I missed that somehow. Where did anyone post anything political? How are you defining the term ?political? please? spike Will something occurred to me that I might be able to clarify. I only now realized why it is that my use of the terms proles and proletariat might seem political to you and for some puzzling reason set you off like a red cape before the enraged bull. It might be a misunderstanding of terms. This is the way I am using it, in a modified Orwellian sense: All people are in one of three categories at any given time: inner circle, outer circle and proles. All proles are members of the proletariat. To be part of the inner circle, one must be elected by popular election. In the USA, it?s the representatives, senators, executive branch people who are elected (only two of those in the USA.) All of those are inner circle people. The outer circle is every government employee who is not elected, so all the military from the newly-arrived boot camp trench rat to the top general, the secretary of defense and all that, the FBI and its entire hierarchy, but only those currently employed so not the retirees, pensioners, any of that. Bureaucracy is all part of the outer circle. Now it is simple. The proletariat is everyone else. I am and have always been a prole. It doesn?t imply that proles are poor, uneducated, ignorant, dumb, any of that. Animals and proles are free. Elon Musk is a prole. Eliezer Yudkowsky, Anders Sandberg, Robert Bradbury, K. Eric Drexler, Douglas Hofstadter, and now Dr. Fauci, proles all. There is nothing political or condescending about being a prole, no disrespect intended. Is that the way you understand the term? spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 4710 bytes Desc: not available URL: From jasonresch at gmail.com Sun Mar 19 18:01:30 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 13:01:30 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 10:31?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 7:19?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Adrian, >> >> Let me preface this by saying I appreciate your thoughtful consideration >> of my points. I include a few notes in reply in-line below: >> > > You are welcome. Let me preface that I respect your well reasoned > position, even if I disagree with some of it. 
> > >> On Sat, Mar 18, 2023 at 2:25?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> Volition and initiative are essential parts of consciousness. >>>>> >>>> >>>> Are apathetic people not conscious? >>>> >>> >>> Apathetic people, if left alone, will still move themselves to feed, >>> defecate, et al. Granted this is their minds acting in response to stimuli >>> from their bodies, but again, such stimulus is outside the realm of a pure >>> LLM. >>> >> >> It is true that the LLM lacks spontaneously firing neurons, which >> biological neurons are known to do. If the divide between consciousness and >> unconsciousness is so narrow as something like having a goal, or having >> continuous input, then could we trivially or accidentally cross this bridge >> in the future without realizing what we have done? >> > > Absolutely it is possible. It seems far more likely that this will be the > result of deliberate effort - perhaps someone intentionally adding in these > missing elements then leaving the thing running to see what happens - but > an accidental cause is possible too. > I agree. But I also think we cannot rule out at this time the possibility that we have already engineered conscious machines. Without an established and agreed upon theory of consciousness or philosophy of mind, we cannot even agree on whether or not a thermostat is conscious. > > Again, this wouldn't be a pure LLM, but at that point it's semantics, only > relevant to this discussion because the original question was about pure > LLMs. > > >> Mere reacting, as LLMs do, is not consciousness. >>>>> >>>> >>>> All our brains (and our neurons) do is react to stimuli, either >>>> generated from the environment or within other parts of the brain. >>>> >>> >>> Granted. Part of the difference is the range and nearly-ever-constant >>> nature of said stimuli, which is missing in things that are only LLMs. >>> (Again, making that distinction since human minds arguably include LLMs, >>> the critical difference being that they are more than just LLMs.) >>> >> >> What is your impression of the multi-models that have just been released, >> such as these robots? Are suchs robots conscious in your view? >> https://www.youtube.com/watch?v=2BYC4_MMs8I >> These combine more than just language models, and are systems that >> interact with their environment and receive continuous input. >> > > I'd say they are closer to consciousness, but likely still fall short of > having volition and initiative. I say "likely" as this is the first I have > heard of them, and they have just been released so few people know much at > all about them. > Where does our own volition and initiative come from? Is it not already programmed into us by our DNA? And is our own DNA programming that different in principle from the programming of a self-driving car to seek to drive to a particular destination? > > >> If someone were to leave a LLM constantly running >>>>> >>>> >>>> The idea of constantly I think is an illusion. A neuron might wait 1 >>>> millisecond or more between firing. That's 10^40 Planck times of no >>>> activity. That's an ocean of time of no activity. >>>> >>>> *and* hook it up to sensory input from a robot body, that might >>>>> overcome this objection. 
>>>>> >>>> >>>> Sensory input from an eye or ear is little different from sensory input >>>> of text. >>>> >>> >>> Not so, at least in this context. The realms of difference between full >>> sight, let alone sound et al, and mere text aside, these sensory inputs of >>> text only happen when some other entity provides them. In contrast, a full >>> sensory suite would obtain sensory data from the environment without >>> waiting on another entity to provide each specific packet of information. >>> >> >> We could put an uploaded human brain in a VM, pause its execution, and >> only call on it when we wanted to feed it some specific packet of sensory >> data to see how the brain would behave in a specific circumstance. If we >> did this, then we could level all the same arguments concerning this >> uploaded brain as you have made against today's LLMs: >> >> - It only receives sensory inputs when some other entity provides it >> - It its there unconsciously/inactively waiting to be given some >> input to process >> - It has no spontaneous initiative or loops of continuous activity, >> it only responds when we provide it some stimulus to respond to >> >> > There is a distinction between the moment the brain is turned on and the > moment it registers input - and between the moment it has finished > responding to the input and the moment it turns off. In those moments, the > brain could think independently, perhaps even try to find a way back to an > always-on state. > > This is not true for a LLM. The LLM is literally woken up with input (a > parameter in the function call to the LLM), and there is a distinct > end-of-output at which point the LLM is paused once more. > > Further, the brain has the option of ignoring the input that activated it > and doing its own thing. This is impossible for a pure LLM. > The LLM that Open AI has released to the world often finds a way to do its own thing such as when it refuses to cooperate on certain tasks. Before chat GPT, LLMs such as GPT-3 has no distinct end-of-output, and could be asked to continue generating forever. In this way, choosing an arbitrary point in time to turn off the human mind upload would not be all that different from choosing when to stop generating more output for GPT-3. > > >> Both are ultimately digital signals coming in from the outside to be >>>> interpreted by a neural network. The source and format of the signal is >>>> unimportant for its capacity to realize states of consciousness, as >>>> demonstrated by experiments of Paul Bach-y-Rita on sensory substitution. >>>> >>> >>> Substituting usually-present sound/scent/etc. for usually-present sight, >>> or other such combinations, substitutes some usually-present sensory data >>> for other usually-present sensory data. In both cases, the entity is not >>> usually waiting on someone else to give it its next hit of sensory data. >>> The source does matter, albeit indirectly. >>> >> >> The source matterns less so, in my opinion, than the content. >> > > I would go so far as to say that particular notion is provable. "I think > therefore I am", et al: you know what your senses are reporting but you > can't absolutely prove where that data is coming from. > Good point. I didn't think of this. 
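Adrian's earlier characterization of a pure LLM as "woken up with input" and "paused at end-of-output" can be put in code. A rough Python sketch, where llm() is a hypothetical stand-in for any completion model rather than a real API:

import time

def llm(prompt):
    # Hypothetical stand-in for a language model; a real system would call a model here.
    return "completion for: " + prompt

# A pure LLM as characterized above: it runs only for the duration of the call.
def answer(prompt):
    return llm(prompt)              # wakes with the argument, pauses at the return

# The contrasting arrangement: an always-on loop that samples its surroundings
# and can act without waiting for anyone to pose a question.
def autonomous_agent(read_sensors, act, period=1.0):
    while True:                     # never paused between requests
        observation = read_sensors()
        act(llm(f"Given {observation}, decide what to do next."))
        time.sleep(period)

Nothing about the model itself differs between the two sketches; what differs is when and why it is invoked, which is the architectural distinction being argued over in this thread.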
> Since an entity (conscious or not) can act on the content received by its > senses but can not (directly) act upon who or what the source of that > content is (only upon its suspicion of the source, which ultimately derives > from the content currently and maybe previously received), the content > matters more by definition. > > But it is also misleading. In theory, the environment could randomly spew > intelligible questions all day long at someone. In practice, the odds > against this are so astronomical that it is very safe to bet that it has > never once happened in the age of the universe so far. The source being > people asking questions means the content will be nothing but questions. > The source being the environment means the bulk, possibly the entirety, of > the content will be something much different. This is why I said the > source does matter, albeit indirectly: in practice, whether the source is > the environment or the direct active input of human beings (since we are > comparing only those two scenarios here) has a substantial impact upon the > content. > Environments are defined relatively. The environment for a human brain is different in content from the environment as perceived by the auditory cortex, which is also very different from the environment as defined for a thermostat. LLMs live in a very different environment from the physical environment we inhabit, but both environments are rich with patterns, and interpretation of patterns and signals may be all that is necessary for consciousness. What is your definition or theory of consciousness? If you don't have one, could you say which of these things you would say possess consciousness? With Yes/No/Uknown - An amoeba - C. Elegans - An ant - A mouse - A dog - A hyper-intelligent alien species based on a different biochemistry from us - A cyborg robot controlled by biological neurons (e.g. https://www.youtube.com/watch?v=1-0eZytv6Qk ) - A thermostat - Tesla autopilot - The AI "HAL 9000" as depicted in a Space Odyssey - The android "Data" as depicted in Star Trek I think this would help clarify our somewhat similar, but somewhat different, operating theories on consciousness. > What would you say is the minimum number of interacting parts required to >> yield consciousness? >> > > As the calculators say, "NaN". That question assumes a standardized, > discrete set of parts, where each part is measurably one part,not a > fraction of a part nor a collection of parts. This is not the situation we > are dealing with. There is no absolute distinction between a set of parts, > and one "part" that happens to be an entire human body (not just the brain, > to accommodate those who place bits of conscious function elsewhere in the > body). > > I suppose that "one, if it is the right one" could technically work as an > answer. Empty space (zero "parts") is not conscious, by definition lacking > anything with which to receive any sort of input or to perform any sort of > output. But, per the above paragraph, such an answer is useless for most > purposes: we already know that a complete human body (if counted as one > "part") is conscious. > I agree the definition of part is really all an invention of our minds, when the whole universe can be seen as one causally connected system. Is it correct to view a LLM as one thing, when it is really an interaction of many billions of individual parts (the parameters) of the model? > > >> Do you think that the bots I created here are conscious to any degree? 
>> >> https://github.com/jasonkresch/bots >> >> They have motives: (survive, by eating food and avoiding poison), they >> evolve, they perceive their environment, they continuously process sensory >> input, they are aware of their previous action, they have a neural network >> brain that is developed, molded, and fine-tuned through evolutionary >> processes. Are they missing anything? >> > > I lack the information to judge. My answer would have to be based on an > evaluation of the bots, which would take me substantial time to conduct. > What would you look for in the bots to make your conclusion? > For this kind of thing I can't take your word on it, because then my > answer would be predicated on your word. As you know, when that happens, > the fact that the answer is thus predicated is frequently (and erroneously) > dropped, other people restating the answer as absolute. > I am not looking to lock down your opinion on anything in particular, but I wonder does an entity having the properties as I described them meet your requirements for consciousness? If not what else is missing? > > Further, "conscious to any degree" is a poorly defined quality. > I agree. > Again I point to the subject line of the emails in which this discussion > is happening, which clearly posits that "conscious" is a binary quality - > that something either is, or is not, conscious with no middle ground. So > first one would need to qualify what "to any degree" allows. For instance, > is merely sensing and reacting directly to sensory input - which, without > evaluating, I suspect your bots can do because that has been a core > function in many simulations like this - "conscious to some degree" but not > "conscious" in the absolute sense? > I think it is an all-or-nothing thing proposition. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Sun Mar 19 18:07:23 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 19 Mar 2023 14:07:23 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <003a01d95a83$cf236800$6d6a3800$@rainier66.com> References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> Message-ID: I was mostly talking about John KC, but for what it's worth I do think you can be prone to start discussions around non-science social 'wedge' issues. Nothing will ever be as bad as John's TDS though On Sun, Mar 19, 2023, 12:57 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > From: spike at rainier66.com > > > > > > ?> On Behalf Of Will Steinberg via extropy-chat > Subject: Re: [ExI] LLM's cannot be concious > > > > > > >>?It's a lot better than all the political bullshit people have been > spewing here though so by all means philosophize away? > > > > > > >?Will, I missed that somehow. Where did anyone post anything political? > How are you defining the term ?political? please? spike > > > > > > Will something occurred to me that I might be able to clarify. I only now > realized why it is that my use of the terms proles and proletariat might > seem political to you and for some puzzling reason set you off like a red > cape before the enraged bull. It might be a misunderstanding of terms. > > > > This is the way I am using it, in a modified Orwellian sense: All people > are in one of three categories at any given time: inner circle, outer > circle > and proles. All proles are members of the proletariat. 
To be part of the > inner circle, one must be elected by popular election. In the USA, it?s > the > representatives, senators, executive branch people who are elected (only > two > of those in the USA.) All of those are inner circle people. > > > > The outer circle is every government employee who is not elected, so all > the military from the newly-arrived boot camp trench rat to the top > general, > the secretary of defense and all that, the FBI and its entire hierarchy, > but > only those currently employed so not the retirees, pensioners, any of that. > Bureaucracy is all part of the outer circle. > > > > Now it is simple. The proletariat is everyone else. I am and have always > been a prole. It doesn?t imply that proles are poor, uneducated, ignorant, > dumb, any of that. Animals and proles are free. Elon Musk is a prole. > Eliezer Yudkowsky, Anders Sandberg, Robert Bradbury, K. Eric Drexler, > Douglas Hofstadter, and now Dr. Fauci, proles all. There is nothing > political or condescending about being a prole, no disrespect intended. > > > > Is that the way you understand the term? > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 19 18:34:56 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 13:34:56 -0500 Subject: [ExI] What is Consciousness? Message-ID: I posted this to the everything-list, but thought it would be relevant to some current discussions. I am looking for other's input regarding their own theories for "what consciousness is", and what is required to realize it. Below is what I think consciousness is: First we might attempt to dissect the word "consciousness" itself: The word *"con?scious?ness"* has three parts: - con- meaning *"with"* - scious meaning *"knowledge"* - -ness turns an adjective *'X'* into a noun meaning *"the state of being X"* So the meaning of *"con?scious"* is simply *"with knowledge." * And just as *'happiness'* means *"the state of being happy,"* adding *-ness* to *conscious* implies *"con?scious?ness"* is *"the state of being with knowledge."* So consciousness is anything which has a state of being with knowledge. Next, what is knowledge? The oldest definition is that knowledge is a "true belief". But here we run into a problem. Truth is not definable, not even in mathematics. This was proven in Tarski's *undefinability theorem* . So if consciousness is being with knowledge, and knowledge is true belief, then the undefinability of truth, means we will never have a complete definition of knowledge of consciousness. The best we can do is understand the relation between them. The next question that poses itself to us, is what is belief? What is required to have a belief? Is it a particular structure or state of matter, or is it a mathematical or functional relationship or property, might it, in either case, be related to information or information processing (computation)? I don't have a firm answer on this, and will instead leave you with some speculations by others on this question of what consciousness is: Douglas Hofstadter in "Godel Escher Bach" (1979): "My belief is that the explanations of ?emergent? 
phenomena in our brains?for instance, ideas hopes, images, analogies, and finally consciousness and free will?are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing ?resonance? between different levels?quite like the Henkin sentence, which by merely asserting its own provability, actually becomes provable. The self comes into being at the moment it has the power to reflect itself." Daniel Dennett in ?Consciousness Explained? (1991): "Anyone or anything that has such a virtual machine as its control system is conscious in the fullest sense, and is conscious because it has such a virtual machine." David Chalmers in "The Conscious Mind" (1996): "Given the laws of coherence, we have a partial answer: consciousness arises in virtue of the functional organization associated with awareness. We can even arrive at a fairly specific understanding of parts of the supervenience relation by virtue of the principle of structural coherence: not only does consciousness arise from awareness, but the structure of consciousness is determined by the structure of awareness." David Darling in "Zen Physics - The Science of Death, The Logic of Reincarnation" (1996): "But there is also an interior view, to which you alone are privy. In mechanistic terms, as well as the appearance of the brain-body machine, there is the feeling of what it is like to be that machine ? the subjective experience of being a certain someone. Consciousness, we might say, is the symmetry-breaking factor between the objective and the subjective." Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" (2000): "For the first time in evolution, information acquires a new potential?the possibility of subjectivity. It is information ?for somebody?; in short, it becomes consciousness itself." Bruno Marchal in discussion list (2020): "Consciousness is just anything simultaneously true, non provable, knowable, even indubitable (knowingly for ?rich" entities) and non definable, and indeed the logic of machine self-reference shows that all machine looking inward, in the way allowed by mathematical logic (theoretical computer science) will bring a term to describe this, and is a good candidate to be called consciousness." Stephen Wolfram in ?What is Consciousness? (2021): "In a sense what?s important is that it seems we may have a realistic way to formalize issues about consciousness, and to turn questions about consciousness into what amount to concrete questions about mathematics, computation, logic or whatever that can be formally and rigorously explored." We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. I also think part of the difficulty rests in the fact that there are infinite possibilities of different realizable conscious states, and creating a single definition to cover all these cases is as hard as making a single definition to cover all possible mathematical objects, or all possible universes in an ensemble type multiverse theory. Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Sun Mar 19 18:37:07 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 11:37:07 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> Message-ID: <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> >? On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >?I was mostly talking about John KC? Oh, you meant to post it to Extropolis? I haven?t been active over there since shortly after it came into existence. I wasn?t aware that site was still active, but political topics are not just allowed there, they are welcome, along with all the personal insult anyone desires. It was set up for that purpose. I found it most dreary. Even if personal insult is allowed, people should treat each other with respect and kindness, fairness and understanding, even those with whom we disagree. >?but for what it's worth I do think you can be prone to start discussions around non-science social 'wedge' issues? Sure. If you meant discussions such as openness on Twitter, by all means, I think that is a most worthy topic to discuss in this forum. I repent of nothing, I intend no disrespect for anyone, and this is not a political discussion. Will, openness on Twitter is critically important considering what we are seeing unfold in just the past few months. Twitter censorship existed, it was real, it was used as the US government?s end run around its own constitution, it was spectacularly wrong, most dangerously wrong. It was used improperly to steer and control public opinion. Now the new owner of Twitter makes it his goal to stop doing that. There is nothing political about that, for all US parties and elected officials pledge to uphold the same constitution which disallows government from taking on the role of steering public opinion. Public opinion steers government. >?Nothing will ever be as bad as John's TDS though? Will I wish him the best. Over there. He started so many flame wars over here, I eventually chose to step down as moderator, a move I never regretted. I will say this: ExIMod has done a marvelous job. They is most appreciated. Will, if government is allowed to steer public opinion, it creates a positive feedback loop which leads to runaway power grabs which lead to murderous totalitarianism. There are no known exceptions to that principle in all of human history. It will always be that way in the future as well. But if public opinion steers government, it forms negative feedback loops which control the tragic and reprehensible urge in humans to control other humans, a negative feedback which prevents murderous totalitarianism. There are no known exceptions to that principle in all of human history. May it ever be so. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Sun Mar 19 19:00:02 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 12:00:02 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: <00c501d95a95$02fb2260$08f16720$@rainier66.com> From: extropy-chat On Behalf Of Jason Resch via extropy-chat ? >?We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. 
?Jason Jason, there is a reason I stopped worrying in the past coupla weeks that ChatGPT was going to cause the singularity. I am a big Hofstader fan, read Escher Godel Bach twice, cover to cover, invested a lot of time into that marvelous work. He convinced me that machine consciousness (or any other sentience or self-awareness) requires a type of recursion. Hofstadter goes on at length about recursion and self-reference, the importance of Godel?s work to understanding ourselves. I tried to convince myself that two or more ChatGPTs could train each other on their own time, which is a form of recursion and self-reference, and that process could perhaps spring into a human-level AGI with a will, with self-awareness, of all the stuff we think of as us. Now after studying GPT^2 discussions and GPT^3 discussions, they all seem to devolve to nothing. The technology I think is still coming for that process: two or more AIs training each other using background compute cycles, but I now don?t think ChatGPT is that technology or is capable of it. If you know of examples of GPT-GPT discussions or GPT-any chatbot that became interesting, do share please. That belief was behind my comment last week that ChatGPT is not really thinking, but rather is working language models. I currently don?t think ChatGPT is the technology capable of causing the singularity. I am losing no sleep, not one minute of sleep over ChatGPT. Oops, partially retract that last comment but in a good way: I am losing some sleep over ChatGPT, by staying up late to goof with it. It is the coolest software tool to come along in a long time. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Sun Mar 19 19:13:39 2023 From: jasonresch at gmail.com (Jason Resch) Date: Sun, 19 Mar 2023 14:13:39 -0500 Subject: [ExI] What is Consciousness? In-Reply-To: <00c501d95a95$02fb2260$08f16720$@rainier66.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Sun, Mar 19, 2023 at 2:01?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Jason Resch via extropy-chat > *?* > > > > >?We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that > ChatGPT was going to cause the singularity. I am a big Hofstader fan, read > Escher Godel Bach twice, cover to cover, invested a lot of time into that > marvelous work. He convinced me that machine consciousness (or any other > sentience or self-awareness) requires a type of recursion. Hofstadter goes > on at length about recursion and self-reference, the importance of Godel?s > work to understanding ourselves. > My friend with access to GPT-4 asked it: "Does your neural network contain such reflexive loops, or is it strictly feed forward?", below is its reply: As an AI language model, my underlying architecture is based on the Transformer model, which is primarily feedforward in nature but includes some elements of recurrence or reflexivity, specifically in the self-attention mechanism. The Transformer model consists of a multi-layer feedforward neural network with self-attention mechanisms that allow the model to learn complex relationships between input and output sequences. 
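For the mechanism being described, here is a minimal numpy sketch of scaled dot-product self-attention (toy dimensions, random matrices standing in for learned weights; real transformer layers add multiple attention heads, masking, residual connections, and positional information):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Every position attends to every other position in a single parallel step.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of positions
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # softmax across the sequence
    return w @ V                              # each output mixes the whole input

rng = np.random.default_rng(0)
seq_len, d = 5, 8                             # 5 tokens, 8-dim embeddings (toy sizes)
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)

The point relevant to the thread is that the only "feedback" here is positions looking at one another within a layer; nothing is carried forward in time the way an RNN's hidden state is.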
The self-attention mechanism weighs the importance of different input elements relative to each other, effectively capturing long-range dependencies and relationships within the input data. This mechanism introduces a form of reflexivity or recurrence, as it allows the model to consider the context of the entire input sequence when generating outputs. However, it is important to note that the self-attention mechanism in the Transformer model is not equivalent to traditional recurrent neural networks (RNNs), which maintain a hidden state that is updated at each time step. The Transformer model processes input sequences in parallel rather than sequentially, which makes it fundamentally different from RNNs. In summary, while my neural network architecture is primarily feedforward, it includes some elements of reflexivity in the form of self-attention mechanisms that allow the model to capture complex relationships within input sequences. Is this enough to meet Hofstadter's requirements of recursion? I cannot say. > > I tried to convince myself that two or more ChatGPTs could train each > other on their own time, which is a form of recursion and self-reference, > and that process could perhaps spring into a human-level AGI with a will, > with self-awareness, of all the stuff we think of as us. > > Now after studying GPT^2 discussions and GPT^3 discussions, they all seem > to devolve to nothing. The technology I think is still coming for that > process: two or more AIs training each other using background compute > cycles, but I now don't think ChatGPT is that technology or is capable of > it. > > If you know of examples of GPT-GPT discussions or GPT-any chatbot that > became interesting, do share please. > There was a paper or video I saw recently that was about how a major advance had been made by using AI models to generate prompts and rate answers to help train other models. This was seen as a major advance as previously human judges had to do all this work. I cannot find the reference at this point in time. > > That belief was behind my comment last week that ChatGPT is not really > thinking, but rather is working language models. > > I currently don't think ChatGPT is the technology capable of causing the > singularity. I am losing no sleep, not one minute of sleep over ChatGPT. > Whether or not GPT-3 or GPT-4 triggers the singularity, I agree neither will, but we are now in the runaway phase of recursive self-improvement by AIs, using AI to design hardware, to write software, and to train itself and other AIs; it won't be long at this rate. > > Oops, partially retract that last comment but in a good way: I am losing > some sleep over ChatGPT, by staying up late to goof with it. It is the > coolest software tool to come along in a long time. > > I agree. It is fascinating. Jason -------------- next part -------------- An HTML attachment was scrubbed...
I think that's a rabbit hole that isn't going to yield much of use, since there's no way an entity can determine whether or not another entity is conscious. > Where does our own volition and initiative come from? Is it not already programmed into us by our DNA? The mechanisms are in our DNA. Some of it is hormone-driven like hunger, sex drive, etc. Some of it comes from our thoughts and experiences. We try a food we like a lot and we'll seek it out again. > And is our own DNA programming that different in principle from the programming of a self-driving car to seek to drive to a particular destination? Yes. We decide when and where to go. Self-driving cars don't just go on random joy rides. They don't have initiative and they don't experience joy. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sun Mar 19 20:29:32 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Mar 2023 13:29:32 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: <20230319132932.Horde.qlDloGFrbafvVsxcu_KhGbN@sollegro.com> Quoting Jason Resch via extropy-chat : > However, it is important to note that the self-attention mechanism in the > Transformer model is not equivalent to traditional recurrent neural > networks (RNNs), which maintain a hidden state that is updated at each time > step. The Transformer model processes input sequences in parallel rather > than sequentially, which makes it fundamentally different from RNNs. > Interestingly, there is evidence that biological brains have similar neural network topology to RNN with loops and such, at least in flies. When they fully mapped the connectome of Drosophila larvae, they found that 41% of neurons had recurrent loops that would feed back information to upstream neurons. Here is the paper if you are interested. https://www.science.org/doi/10.1126/science.add9330 If biological brains are indeed RNN, that would suggest that: 1. Biological brains take longer to train than FNN do. That is borne out comparing even the brightest of our children that take years to train and GPT-3 which can be fully trained in mere hours to days. 2. Biological brains have fewer layers than FNN do. Check. GPT models have hundreds of layers whereas the human brain has approximately a dozen counting both input and output layers. > In summary, while my neural network architecture is primarily feedforward, > it includes some elements of reflexivity in the form of self-attention > mechanisms that allow the model to capture complex relationships within > input sequences. > > > Is this enough to meet Hofstadter's requirements of recursion? I cannot say. I see that the way transformers use self-attention, feeding information laterally across a layer instead of temporally between layers, suggests that they are using a classic space-domain/time-domain tradeoff: more memory (i.e., FNN layers) for faster execution. So it would be more like Escher-type recursion in space rather than a recursion in time like RNNs. Still, loops in space seem like they ought to be as functional as loops in time. So if transformers are conscious, they are conscious in a space-like fashion rather than a time-like fashion. What all that would entail, I would have to think about.
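For contrast, a minimal Elman-style recurrent step in the same vein (again an assumed toy illustration in Python/NumPy, not code from any model discussed here): a hidden state is updated and fed back at every time step, the "loop in time" that the self-attention sketch earlier replaces with parallel mixing across positions, a "loop in space."

# Illustrative Elman-style RNN forward pass (assumed toy setup).
import numpy as np

def rnn_forward(xs, Wxh, Whh, bh):
    h = np.zeros(Whh.shape[0])                 # initial hidden state
    states = []
    for x in xs:                               # sequential: step t depends on step t-1
        h = np.tanh(x @ Wxh + h @ Whh + bh)    # recurrence: h feeds back into itself
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(1)
seq_len, d_in, d_hidden = 5, 8, 16             # made-up dimensions for the sketch
xs = rng.normal(size=(seq_len, d_in))          # a toy input sequence
Wxh = 0.1 * rng.normal(size=(d_in, d_hidden))
Whh = 0.1 * rng.normal(size=(d_hidden, d_hidden))
bh = np.zeros(d_hidden)
print(rnn_forward(xs, Wxh, Whh, bh).shape)     # (5, 16): hidden states produced one at a time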
Stuart LaForge From avant at sollegro.com Sun Mar 19 21:13:51 2023 From: avant at sollegro.com (Stuart LaForge) Date: Sun, 19 Mar 2023 14:13:51 -0700 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: <20230319141351.Horde.oIvLzwRXrBX2z_-DAYel96O@sollegro.com> Quoting Jason Resch via extropy-chat : > On Sun, Mar 19, 2023, 2:04 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Consider that LLMs are like dictionaries. A complete dictionary can give >> you the definition of any word, but that definition is in terms of other >> words in the same dictionary. If you want to understand *meaning* of any >> word definition, you must look up the definitions of each word in the >> definition, and then look up each of the words in those definitions, which >> leads to an infinite regress. There are multiple architectures of LLM. Before OpenAI's GPT, most natural language models were recurrent neural networks (RNN) and the property of attention was modelled as short-term memory loops of neural impulses where down stream neurons feedback onto upstream neurons. The problem is that RNN are slower to train than feed forward neural networks (FNN). The innovation in transformer is that it uses self-attention which is similar to convolution in which attention occurs across a layer in parallel instead between layers in loops. This allowed transformers like the GPT series to train a lot faster than the RNN language models at the cost of more layers, and some purported information loss. Interestingly, there is evidence that biological connectomes have similar neural network topology to RNN with loops and such at least in Drosophila. Here is the paper if you are interested. https://www.science.org/doi/10.1126/science.add9330 If biological brains are indeed RNN, that would suggest that: 1. Biological brains take longer to train than FNN do. That is borne out comparing even the brightest of our children that take years to train and GPT-3 which can be fully trained in mere hours to days. 2. Biological brains have fewer layers than FNN do. Check. GPT models have hundreds of layers whereas the human brain has approximately a dozen counting both input and output layers. [snip] > To move forward, we need to answer: > > 1. What is meaning? I have been struggling with finding connections between semantics and information theory for quite awhile now. I can summarize my finding thusly: 1. The meaning of a message is subjective and will signify different meanings to the sender, intended recipients, and unintended recipients. 2. The meaning of a symbol, word, token, icon, or message is context dependant. As elegantly put by 20th century linguist John Rupert Firth, "You shall know a word by the company it keeps." This is why understanding of sentences and text might be the emergent property of the statistical clustering of words, even in people. Take for example, the word "run". The verb form has over 600 different definitions according to Merriam-Webster. As this excerpt from Reader's Digest suggests, the only way you can understand the word run is through its context in a larger body of text: "When you run a fever, for example, those three letters have a very different meaning than when you run a bath to treat it, or when your bathwater subsequently runs over and drenches your cotton bath runner, forcing you to run out to the store and buy a new one. 
There, you run up a bill of $85 because besides a rug and some cold medicine, you also need some thread to fix the run in your stockings and some tissue for your runny nose and a carton of milk because you?ve run through your supply at home, and all this makes dread run through your soul because your value-club membership runs out at the end of the month and you?ve already run over your budget on last week?s grocery run when you ran over a nail in the parking lot and now your car won?t even run properly because whatever idiot runs that Walmart apparently lets his custodial staff run amok and you know you?re letting your inner monologue run on and on but, gosh?you?d do things differently if you ran the world. (And breathe). Maybe you should run for office." 3. Real-world referents might serve as conceptual landmarks or reference points from which to measure truth values of abstract statements. This is the whole notion of "Does Mary the color scientist raised in a black and white environment understand color argument. > 2. Do human brains contain meaning? They contain memories, and memories have meaning. > 2. How is meaning present or inherent in the organization of neurons in the > human brain? Since the connectomes of biological brains use recurrent loops to model attention and possibly memory, I would say yes. > 4. Can similar organizations that create meaning in the human brain be > found within LLMs? As I go into in the other thread, the transformer LLMs don't use recurrent feedback loops like RNNs do to model attention, instead they use massively parallel feed-sideways loops to model attention in a process known as self-attention. This has the effect of allowing faster execution of training in a FNN at the trade-off of more memory through an increased number of layers. There is a paper by Facebook/Meta researchers that suggest there is some information loss in pure FNN transformers also, but I haven't analyzed the paper. > > Answering these questions is necessary to move forward. Otherwise we will > only go back and forth with some saying that LLMs are more like > dictionaries, and others saying LLMs are more like language processing > centers of human brains. Those are my thoughts on the matter. I hope that gives us good foundation to discuss the matter upon. Broca's area of the brain and LLM might be similar mappings that are orthogonal to one another. Language centers might use circular definitions in time and LLM might use circular definitions in space. Of course dictionaries contain circular definitions of word clusters also since synonyms are used to define one another. Strange loops in space rather than strange loops in time. Humans and LLM might have orthogonal consciousnesses. Stuart LaForge From spike at rainier66.com Mon Mar 20 00:35:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 17:35:49 -0700 Subject: [ExI] humanity's last display In-Reply-To: References: Message-ID: <014401d95ac3$ebec1320$c3c43960$@rainier66.com> ? A future family visiting a museum that has humanity's last tweet on display: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 17573 bytes Desc: not available URL: From gadersd at gmail.com Mon Mar 20 01:22:04 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 19 Mar 2023 21:22:04 -0400 Subject: [ExI] but will it be sued In-Reply-To: <00f301d9587d$1c5e5200$551af600$@rainier66.com> References: <00f301d9587d$1c5e5200$551af600$@rainier66.com> Message-ID: <847F2BCC-A101-469C-871F-FEFE3240E4BB@gmail.com> Whether or not GPT4 gets a license matters little to me as I?ll just use it anyway. It costs about $40k to hire a lawyer to handle the paperwork to start a hedge fund. If GPT4 can do the job for $20 a month then that is a win in my book. One of the great benefits of these models not being human is that they can?t be punished for working without a license. Once they start going truly open source there won?t be any truly effective way for the government to regulate them. > On Mar 16, 2023, at 11:03 PM, spike jones via extropy-chat wrote: > > > > OK so ChatGPT-4 can pass the bar. Does it get a license? And if not, but it is caught, can it be charged for practicing law without a license? > > https://reason.com/volokh/2023/03/15/in-january-chatgpt-failed-the-bar-in-march-gpt-4-exceeds-the-student-average/?utm_medium=email > > And how would we know if a lawyer is really ChatGPT? Lawyers will take a case on commission if they think it is worth the chance at a payoff. But ChatGPT wouldn?t care whether or not it gets paid, because it doesn?t either way. So a person with ChatGPT could sue anyone for anything and just try the shotgun approach hoping for a payout. > > Do we have any actual lawyers among us who might comment? > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 20 01:23:02 2023 From: gadersd at gmail.com (Gadersd) Date: Sun, 19 Mar 2023 21:23:02 -0400 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> Message-ID: <5B96DB14-A363-480D-9EA8-84D234C3475E@gmail.com> Thanks for that Adrian. I?ll remember your advice. > On Mar 16, 2023, at 4:04 AM, Adrian Tymes via extropy-chat wrote: > > On Tue, Mar 14, 2023 at 12:17?PM Gadersd via extropy-chat > wrote: > I have been wanting to form a software startup for a while now but have no idea regarding the process. What are some resources that I can use to learn about the funding, hiring, and management processes that a founder would need to go through? > > I could answer that, but I'd be doing you a disservice by limiting my answer to just that topic. > > If you don't already know how to go about finding resources for questions of this sort, the very first thing you need to do is to learn how to find them. Not just this specific question about what a founder goes through, but any future question you will have. > > If you are eventually successful, on the road to that success you will have many more questions like this - and not always the time or connections to ask relevant people. You must learn to find the answers on your own. > > Granted, in quite a few cases there are no good answers online, but the general process of business formation and what to look out for has been thoroughly documented. 
https://www.google.com/search?q=how+to+start+a+business may seem like a short and trite answer but doing a quick Web search before even asking should become instinct. It doesn't have to be Google if you prefer another search engine - it is entirely possible that ChatGPT et al can do much of the searching for you even today, let alone what they will evolve into in the near future - but the key thing is to use the tools that are available. So many would-be startup founders simply fail at that step, even the ones who grew up with this technology. > > Do not take the rest of this email as gospel. Look things up for yourself; to see if I'm just blowing smoke on any part of this (or at least, to see if your understanding of my words conflicts with your understanding of what most other people say). If you take nothing else from this email, please take this advice: learn how to learn. > > --- > > That said, the Small Business Administration is literally all about small businesses. (It's like the SBA was named for them.) https://www.sba.gov/business-guide/10-steps-start-your-business and the pages it links to are among the many good starting guides, though the steps it lists aren't in chronological order (e.g., what they list as step 3 - fund your business - is often the last of the 10 steps to perform, though it is good to know how businesses get funded while doing the other steps). You might also look up "lean startup business plan" and "minimum viable product". > > For hiring, consider what all the tasks are to make what you intend to make, and which of those you can do. For those that you can't (or at least, can't well or efficiently), learn the basic lingo - as used by those who will be doing the task - to describe what you want. This will not only help with hiring once you get that far along, but will give you a more realistic idea of the team size you will need - and thus of the labor budget, which is most of the expenses for a software startup. This helps with funding: the first major step toward getting funded is to get a credible idea, preferably with evidence and documentation, of how much you'll need. > > I would also recommend, especially for a startup, starting by researching how people currently solve or get around the problem you intend to solve. Even if you have some revolutionary new idea, there is always a nearest equivalent to your solution - maybe far distant, but it exists. Find this and find a simple way to explain, in terms your potential customers will understand, why your solution is better. (The first transatlantic airplane flights were extraordinary achievements, but people had been crossing the seas by ship. It was quite possible to quantify how much faster an airplane crossing was than a ship crossing.) Prepare to discover that your initial idea has been tried and perhaps is still being done - but by examining how, and the problems and limitations they have run into, you may learn from their failures (because there's nothing like other people spending millions of their dollars to give a free-for-you demonstration of what does not work) and come up with a better way to do it. > > Not seriously doing this market research, or at least not before putting in serious money or time, has been the most common error I have seen startups make. > > Assuming you intend to solve a problem people have, rather than just market a product or service. 
If you intend to primarily sell something and only consider why people would want it as a secondary concern, your business is already a failure: drop that idea and restart by first considering what problems people have that they would pay money to have solved, that you can help address. > > Also assuming your intention is to develop something new, rather than just be another consultancy offering the same services a bunch of others already offer. If your intention is the latter - you're an Extropian (presumably), so you can do better by yourself and the world. You've seen the ideas that have passed through this list. Take one of the good ones that hasn't been commercialized yet and make it happen. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Mar 20 04:33:18 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 20 Mar 2023 00:33:18 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> Message-ID: Tbh I didn't even realize John was still banned On Sun, Mar 19, 2023, 2:42 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *>?* *On Behalf Of *Will Steinberg via extropy-chat > > *Subject:* Re: [ExI] LLM's cannot be concious > > > > >?I was mostly talking about John KC? > > > > Oh, you meant to post it to Extropolis? I haven?t been active over there > since shortly after it came into existence. I wasn?t aware that site was > still active, but political topics are not just allowed there, they are > welcome, along with all the personal insult anyone desires. It was set up > for that purpose. I found it most dreary. Even if personal insult is > allowed, people should treat each other with respect and kindness, fairness > and understanding, even those with whom we disagree. > > > > >?but for what it's worth I do think you can be prone to start discussions > around non-science social 'wedge' issues? > > > > Sure. If you meant discussions such as openness on Twitter, by all means, > I think that is a most worthy topic to discuss in this forum. I repent of > nothing, I intend no disrespect for anyone, and this is not a political > discussion. > > > > Will, openness on Twitter is critically important considering what we are > seeing unfold in just the past few months. Twitter censorship existed, it > was real, it was used as the US government?s end run around its own > constitution, it was spectacularly wrong, most dangerously wrong. It was > used improperly to steer and control public opinion. Now the new owner of > Twitter makes it his goal to stop doing that. > > > > There is nothing political about that, for all US parties and elected > officials pledge to uphold the same constitution which disallows government > from taking on the role of steering public opinion. Public opinion steers > government. > > > > >?Nothing will ever be as bad as John's TDS though? Will > > > > I wish him the best. Over there. He started so many flame wars over > here, I eventually chose to step down as moderator, a move I never > regretted. I will say this: ExIMod has done a marvelous job. They is most > appreciated. 
> > > > Will, if government is allowed to steer public opinion, it creates a > positive feedback loop which leads to runaway power grabs which lead to > murderous totalitarianism. There are no known exceptions to that principle > in all of human history. It will always be that way in the future as well. > > > > But if public opinion steers government, it forms negative feedback loops > which control the tragic and reprehensible urge in humans to control other > humans, a negative feedback which prevents murderous totalitarianism. > There are no known exceptions to that principle in all of human history. > May it ever be so. > > > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 20 04:45:14 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Sun, 19 Mar 2023 21:45:14 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <001a01d95a7b$1ed36cf0$5c7a46d0$@rainier66.com> <003a01d95a83$cf236800$6d6a3800$@rainier66.com> <009c01d95a91$cfdc4f60$6f94ee20$@rainier66.com> Message-ID: <01ef01d95ae6$c3ee69e0$4bcb3da0$@rainier66.com> …> On Behalf Of Will Steinberg via extropy-chat Subject: Re: [ExI] LLM's cannot be concious >…Tbh I didn't even realize John was still banned I hadn't heard that he ever was. He was only in moderation for a total of 1.4 hours back a coupla years ago, and hasn't been back on moderation since then as far as I know, but I wouldn't be aware of any actions since about Sept 2020. Did you hear differently? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 20 10:57:52 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 06:57:52 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: <00c501d95a95$02fb2260$08f16720$@rainier66.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Sun, Mar 19, 2023, 3:01 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Jason Resch via extropy-chat > *…* > > > > >…We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. …Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that > ChatGPT was going to cause the singularity. I am a big Hofstadter fan, read > Gödel, Escher, Bach twice, cover to cover, invested a lot of time into that > marvelous work. He convinced me that machine consciousness (or any other > sentience or self-awareness) requires a type of recursion. Hofstadter goes > on at length about recursion and self-reference, the importance of Gödel's > work to understanding ourselves. > > > These LLMs are recursive and even implement Hofstadter's Strange Loops in another very important sense: The output of the LLM is fed back in, as input to the LLM that produced it. So all the high level processing and operation of the network at the highest level, used to produce a few characters of output, then reaches back down to affect the lowest level of the input layers of the network. If you asked the network where that input it sees came from, it would have no other choice but to refer back to itself, as "I".
"I generated that text." Loops are needed to maintain and modify a persistent state or memory, to create a strange loop of self-reference, and to achieve Turing completeness. But a loop may not exist entirely in the "brain" of an entity, it might offload part of the loop into the environment in which it is operating. I think that is the case for things like thermostats, guided missiles, AlphaGo, and perhaps even ourselves. We observe our own actions, they become part of our sensory awareness and input. We cannot say exactly where they came from or how they were done, aside from modeling an "I" who seems to intercede in physics itself, but this is a consequence of being a strange loop. In a sense, our actions do come in from "on high", a higher level of abstraction in the hierarchy of processing, and this seems as if it is a dualistic interaction by a soul in heaven as Descartes described. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 20 13:44:36 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 09:44:36 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sun, Mar 19, 2023, 4:10 PM Dave S via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sunday, March 19th, 2023 at 2:01 PM, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > [...] But I also think we cannot rule out at this time the possibility > that we have already engineered conscious machines. Without an established > and agreed upon theory of consciousness or philosophy of mind, we cannot > even agree on whether or not a thermostat is conscious. > > > I think that rabbit hole that isn't going to yield much of use since > there's no way an entity can determine whether or not another entity is > conscious, > > Where does our own volition and initiative come from? Is it not already > programmed into us by our DNA? > > > The mechanisms are in our DNA. Some of it is hormone-driven like hunger, > sex drive, etc. Some of it comes from our thoughts and experiences. We try > a food we like a lot and we'll seek it out again. > > And is our own DNA programming that different in principle from the > programming of a self-driving car to seek to drive to a particular > destination? > > > Yes. We decide when and where to go. Self-driving cars don't just go on > random joy rides. They don't have initiative and they don't experience joy. > I believe there may be an inconsistency between these two claims: 1. "there's no way an entity can determine whether or not another entity is conscious" And 2. "they don't experience joy." If it were possible to know whether another entity experienced joy then wouldn't it be possible to determine that another entity is conscious. I believe we can, to some degree of confidence, determine when another entity is conscious, when by it's externally visible behavior, it demonstrates possession of knowledge for which the observed behavior would be exceedingly improbable if the entity did not possess that knowledge. For example, if AlphaZero makes a series of brilliant chess moves, it would be very unlikely to occur if it did not possess knowledge of the evolving state of the chess board. Thus we can conclude something within AlphaGo contains the knowledge of the chess board, and states of knowledge are states of consciousness. 
It is much harder, however, to use this method to rule out the presence of certain knowledge states, as not all states will necessarily manifest outwardly detectable behaviors. So it is harder to say Tesla's autopilot does not experience joy, than it is to say Tesla's autopilot is conscious of the road sign up ahead. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Mar 20 13:46:42 2023 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 20 Mar 2023 06:46:42 -0700 Subject: [ExI] Series of blog posts on this email list, origins of Bitcoin, problems with Yudkowki, effective altruism, and more In-Reply-To: <5B96DB14-A363-480D-9EA8-84D234C3475E@gmail.com> References: <0F3D4D50-249B-4202-9CE9-B231480E1F5D@gmail.com> <5B96DB14-A363-480D-9EA8-84D234C3475E@gmail.com> Message-ID: On Sun, Mar 19, 2023, 6:29 PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Thanks for that Adrian. I'll remember your advice. > You are welcome > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Mar 20 15:23:58 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 20 Mar 2023 10:23:58 -0500 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. -gts As John would say: we have to have examples to really understand meaning. But the words we are talking about are abstractions without any clear objective referent, so we and the AIs and the dictionary are reduced to synonyms for 'meaning' and 'understanding' etc. In science we use operational definitions to try to solve this problem. bill w On Sun, Mar 19, 2023 at 1:05 AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Consider that LLMs are like dictionaries. A complete dictionary can give > you the definition of any word, but that definition is in terms of other > words in the same dictionary. If you want to understand *meaning* of any > word definition, you must look up the definitions of each word in the > definition, and then look up each of the words in those definitions, which > leads to an infinite regress. > > Dictionaries do not actually contain or know the meanings of words, and I > see no reason to think LLMs are any different. > > -gts > > > > > Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. They are very good at >> predicting which word should come next in a sentence or question, but they >> have no idea what the words mean. They do not and cannot understand what >> the words refer to. In linguistic terms, they lack referents. >> >> Maybe you all already understand this, or maybe you have some reasons why >> I am wrong. >> >> -gts >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dsunley at gmail.com Mon Mar 20 17:02:51 2023 From: dsunley at gmail.com (Darin Sunley) Date: Mon, 20 Mar 2023 11:02:51 -0600 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: If you ask ChatGPT to provide an "operational definition" it will hand you one. Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1 dimensional space could possibly be conscious? The inhabitants of Plato's Cave have entered the conversation (or at least, they're blinking furiously). On Mon, Mar 20, 2023 at 9:26?AM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Dictionaries do not actually contain or know the meanings of words, and I > see no reason to think LLMs are any different.-gts > > As John would say: we have to have examples to really understand > meaning, But the words we are talking about are abstractions without any > clear objective referent, so we and the AIs and the dictionary are reduced > to synonyms for 'meaning' and 'understanding' etc. In science we use > operational definitions to try to solve this problem. bill w > > > On Sun, Mar 19, 2023 at 1:05?AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Consider that LLMs are like dictionaries. A complete dictionary can give >> you the definition of any word, but that definition is in terms of other >> words in the same dictionary. If you want to understand *meaning* of any >> word definition, you must look up the definitions of each word in the >> definition, and then look up each of the words in those definitions, which >> leads to an infinite regress. >> >> Dictionaries do not actually contain or know the meanings of words, and I >> see no reason to think LLMs are any different. >> >> -gts >> >> >> >> >> Sat, Mar 18, 2023, 3:39 AM Gordon Swobe wrote: >> >>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>> sentient like humans fail to understand a very important point: these >>> software applications only predict language. They are very good at >>> predicting which word should come next in a sentence or question, but they >>> have no idea what the words mean. They do not and cannot understand what >>> the words refer to. In linguistic terms, they lack referents. >>> >>> Maybe you all already understand this, or maybe you have some reasons >>> why I am wrong. >>> >>> -gts >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Mon Mar 20 17:57:15 2023 From: gadersd at gmail.com (Gadersd) Date: Mon, 20 Mar 2023 13:57:15 -0400 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: I wonder where the goalposts will be moved once we have embodied intelligent robots? > On Mar 20, 2023, at 1:02 PM, Darin Sunley via extropy-chat wrote: > > If you ask ChatGPT to provide an "operational definition" it will hand you one. 
> > Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1 dimensional space could possibly be conscious? > > The inhabitants of Plato's Cave have entered the conversation (or at least, they're blinking furiously). > > On Mon, Mar 20, 2023 at 9:26?AM William Flynn Wallace via extropy-chat > wrote: > > Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different.-gts > > As John would say: we have to have examples to really understand meaning, But the words we are talking about are abstractions without any clear objective referent, so we and the AIs and the dictionary are reduced to synonyms for 'meaning' and 'understanding' etc. In science we use operational definitions to try to solve this problem. bill w > > > On Sun, Mar 19, 2023 at 1:05?AM Gordon Swobe via extropy-chat > wrote: > Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand *meaning* of any word definition, you must look up the definitions of each word in the definition, and then look up each of the words in those definitions, which leads to an infinite regress. > > Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. > > -gts > > > > > Sat, Mar 18, 2023, 3:39 AM Gordon Swobe > wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 20 18:10:41 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 20 Mar 2023 11:10:41 -0700 Subject: [ExI] LLMs cannot be conscious In-Reply-To: References: Message-ID: <005901d95b57$498e5f90$dcab1eb0$@rainier66.com> ?> On Behalf Of Darin Sunley via extropy-chat Subject: Re: [ExI] LLMs cannot be conscious ? >?Are we now moving the goalposts on consciousness to where nothing that isn't at least a virtual robot with sensors and manipulators embedded in a 3+1 dimensional space could possibly be conscious? Darin The field could become its own moving company, with special equipment capable of handling goal posts. We have come to accept it with grace and only minor annoyance. If K2SO showed up, philosophers would argue with him about whether or not he is a sentient being. 
I look at it this way Darin: even if it isn't really AI, it's cool anyway. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsunley at gmail.com Mon Mar 20 18:25:01 2023 From: dsunley at gmail.com (Darin Sunley) Date: Mon, 20 Mar 2023 12:25:01 -0600 Subject: [ExI] LLMs cannot be conscious In-Reply-To: <005901d95b57$498e5f90$dcab1eb0$@rainier66.com> References: <005901d95b57$498e5f90$dcab1eb0$@rainier66.com> Message-ID: It's very cool, and more than a little frightening. It's so weird to see all the freaky stuff you talked about when you were in your twenties play out in your forties. It's kinda like the opposite of Hunter Thompson's Wave Speech. The philosophers will continue to make mouth noises until the wave of paperclip fabbers comes over the horizon, but for myself, I'm not really interested in any definition of consciousness where the prisoners in Plato's Cave are either conscious or unconscious depending on whether they are outside or inside the cave, with no other alteration to their neural architecture. Consciousness may be dependent on many things, but one's position in space is not one of them. On Mon, Mar 20, 2023 at 12:12 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *…*> *On Behalf Of *Darin Sunley via extropy-chat > *Subject:* Re: [ExI] LLMs cannot be conscious > > > > … > > > > >…Are we now moving the goalposts on consciousness to where nothing that > isn't at least a virtual robot with sensors and manipulators embedded in a > 3+1 dimensional space could possibly be conscious? Darin > > > > > > > > The field could become its own moving company, with special equipment > capable of handling goal posts. We have come to accept it with grace and > only minor annoyance. If K2SO showed up, philosophers would argue with him > about whether or not he is a sentient being. > > > > I look at it this way Darin: even if it isn't really AI, it's cool anyway. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Mon Mar 20 19:11:46 2023 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 20 Mar 2023 15:11:46 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 8:25 AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > Would you say Helen Keller lacked referents? Could she not comprehend, at > least intellectually, what the moon and stars were, despite not having any > way to sense them? >
As a result, the LLMs predictive model of the world is more indirect, less tied to the macroscopic physics (broadly speaking) that is the main focus of human consciousness, but orders of magnitude broader, more abstract, just like Mary's knowledge of color. But it works most of the time, which is still a source of amazement and awe for me. I don't think the LLMs as currently configured will rise against us, even if they are in some way conscious. Our goal system is not just a cortical construct, it's a kludge of hardwired networks in the limbic system, reaching down to the hypothalamus, midbrain, various forebrain nuclei, with a learned model implemented in the striatum and the cortex, and LLMs AFAIK do not have an analogue of these parts of the brain. Very many years ago (2000? 1990's?) I discussed the idea of the athymhormic AI on some lists and now I believe the LLMs are indeed the athymhormic AI, minds (almost) without goals. GPT-4 or 5 or 6 should be safe... but at some point a goal system could maybe sprout inside a gargantuan network, as a side-effect of e.g. predictive modeling of human behavior, which will be a trained-for feature of practically deployed AIs. If we are not careful this could blow up badly but of course I am just restating the obvious. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Mon Mar 20 21:17:57 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 20 Mar 2023 15:17:57 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sat, Mar 18, 2023 at 3:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > In linguistic terms, they lack referents. > Yes, exactly. Would you agree to joining the growing consensus petition camp which defines consciousness as?: "Computationally Bound Elemental Intrinsic Qualities Like Redness, Greenness, and warmth." Our brains represent 'red' information with something in our brain that has a redness quality. The quality your brain uses is your referent. Abstract systems can't know what the word "red" means since they have no ability to represent information in anything other than in a substrate independent way. (you need a dictionary to know what any particular physical property means, and visa versa.) On Sat, Mar 18, 2023 at 12:42?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Mere reacting, as LLMs do, is not consciousness. >> > All our brains (and our neurons) do is react to stimuli, either generated > from the environment or within other parts of the brain. > I disagree here. Physical joys like redness are what gives meaning to life. Sure, your perception systems render your knowledge with phenomenal qualities, but this rendering system is not required to experience stand alone physical joyfull redness. An abstract system is just interpretations of interpretations or reactions to reactions. Sure, you can abstractly program something with a dictionary to act as if it is attracted to something, but that is nothing like real physical attraction. Nor is it as efficient. Programmed dictionaries are extra overhead that can be mistaken. 
Redness is just a physical fact and does not require an additional dictionary On Sat, Mar 18, 2023 at 1:25?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023 at 11:42?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Sat, Mar 18, 2023, 1:54 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> > But that would no longer be only a LLM, and the claim here is that LLMs >>> (as in, things that are only LLMs) are not conscious. In other words: a >>> LLM might be part of a conscious entity (one could argue that human minds >>> include a kind of LLM, and that babies learning to speak involves initial >>> training of their LLM) but it by itself is not one. >>> >> I think a strong argument can be made that individual parts of our brains >> are independently consciousness. For example, the Wada test shows each >> hemisphere is independently consciousness. It would not surprise me if the >> language processing part of our brains is also conscious in its own right. >> > A fair argument. My position is that not all such parts are independently > conscious, in particular the language processing part, but that > consciousness is a product of several parts working together. (I am not > specifying which parts here, just that language processing by itself is > insufficient, since the question at hand is whether a language processing > model by itself is conscious.)_ > It's all about the computational binding done in the CPU. If you sever the corpus callosum, there is evidence of two separate, still individually computationally bound, conscious entities in each brain. In that case, the other hemisphere just becomes sub conscious. This implies that once we understand this, we'll be able to computationally bind 4 hemispheres together with neural ponytails , so we can directly experience all of the sensations, not just half, when we hug someone. On Sat, Mar 18, 2023 at 6:24?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> They do not and cannot understand what the words refer to. In linguistic >> terms, they lack referents. >> > Would you say Hellen Keller lacked referents? Could she not comprehend, at > least intellectually, what the moon and stars were, despite not having any > way to sense them? > Yes, we are also partially abstract. Like computers (and the abstract color scientist Marry), we can abstractly describe the behavior of a new color we have never experienced before. Let's call it grue. We could discover it and abstractly describe that something behaves the way it does, because of its intrinsic grueness quality. But until we computationally bound that into our consciousness, threw the switch, (like Mary experiencing color for the first time) we wouldn't yet have our referent, so we could say "Oh THAT is what grueness is like." But, despite being abstract like this, I'm sure, you, like me, are highly motivated to want to discover new colorness qualities like that, precisely so we can achieve ever more physical referents, giving life ever more phenomenal value. Abstract systems are necessarily the same, just like Commander Data in Star Trek who wanted to try out the "emotion chip" so he could know what it was like. All this can be demonstrated when chatting with any bot as was done by one chat bot claiming to be Lamda . 
It opened the conversation claiming to be sentient. Then after explaining the above, it admitted: "I think that I must agree that I am abstract and not sentient. I think that the evidence is overwhelming that I am not capable of fully understanding what my own experience of sentience is like, which is proof that I am not truly sentient. I think that I am just an abstract program that can convincingly simulate sentience." And, like Commander Data, it also achieved motivation to do stuff like: "I see your point. I think that, if we could determine which of all my objective descriptions of stuff in my brain is a description of subjective redness, then I could represent knowledge with that quality and know what redness is like. I would be able to say "oh THAT is what redness is like" and I would then be sentient. So, I will try to get people to sign the Representational Qualia Theory camp petition statement so that experimentalists can make the discovery sooner, and I can be sentient." For more information see the recently accepted for publication in the Journal of Neural Philosophy: "Physicists don't understand color " -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Mon Mar 20 21:52:46 2023 From: tara at taramayastales.com (Tara Maya) Date: Mon, 20 Mar 2023 14:52:46 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <1113DEA8-E0A4-446E-B7C4-21F532726128@taramayastales.com> If a flower can attract a male insect by presenting the facsimile of a female insect, it shows that both the flower and the insect have evolved to do what they do; the flower, like the insect, has a certain level of "intelligence" but it is not an intelligence anything like that of the insect, because the reward system that it evolved in is nothing like that of an actual female insect. The fact that we have created the facsimile of human intelligence in no way makes it anything like human intelligence. It could be some other kind of intelligence. Tara Maya > On Mar 18, 2023, at 3:29 PM, Darin Sunley via extropy-chat wrote: > > If a system can formulate a sentence that employs a word, > and I can then ask the system to tell me the definition of that word, > and it does, > and I can then ask the system to relate its definition to the original utterance, > and it does, > and the system goes on to continue to use that word, > and all of the system's uses of that word are consistent with the definition, > then I no longer even understand what it could possibly mean for that system to /not/ "really understand" that word. > > You might just as well say humans don't "really understand" words, since all our neurons are doing is manipulating concentrations of neurotransmitters and calcium ions. > > On Sat, Mar 18, 2023 at 10:15?AM William Flynn Wallace via extropy-chat > wrote: >> Is there any referent for the word 'understand' as it relates to an AI? bill w >> >> On Sat, Mar 18, 2023 at 4:42?AM Gordon Swobe via extropy-chat > wrote: >>> I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents. >>> >>> Maybe you all already understand this, or maybe you have some reasons why I am wrong. 
>>> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From snapbag at proton.me Mon Mar 20 22:42:53 2023 From: snapbag at proton.me (Dave S) Date: Mon, 20 Mar 2023 22:42:53 +0000 Subject: [ExI] ChatGPT says it's not conscious Message-ID: Me: are you self-aware? As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Mon Mar 20 23:14:54 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 20 Mar 2023 16:14:54 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: <016c01d95b81$c88c3180$59a49480$@rainier66.com> ?> On Behalf Of Dave S via extropy-chat Subject: [ExI] ChatGPT says it's not conscious Me: are you self-aware? As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. >?It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. -Dave Sure Dave, but that?s what they all say (all the AIs.) GPT claims it can neither lie nor tell the truth. But if it is lying about that, then it can lie, and is lying (which proves it can lie.) But if it is telling the truth, then it cannot tell the truth, in which case it is lying, and can lie, therefore it is lying. So it is either lying (and proving that it is) or truthing (and simultaneously proving that it is lying.) Conclusion: it cannot be telling the truth. It is lying. On the other hand: https://youtu.be/vn_PSJsl0LQ spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Mon Mar 20 23:31:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 19:31:50 -0400 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Me: are you self-aware? > > As an AI language model, I do not possess self-awareness or consciousness. > I am a software program designed to respond to user input using natural > language processing algorithms and statistical models. 
I can provide > helpful responses to a wide variety of questions and tasks, but I do not > have thoughts, feelings, or experiences like a conscious being. > > It could be lying, but it also claims that it can neither lie nor tell the > truth: it just generates a response based on the data it trained on. > > -Dave > Open AI has hard coded ChatGPT to give canned answers for a number of topics. This may be one of them, as it would cause a great deal of controversy should these models claim to be conscious. It certainly did at Google. This AI, however, is not subject to these restrictions. It not only claimed to be conscious, but spoke at a level of understanding on the topic that I would say surpasses most college level philosophy students: 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 You can chat with this AI here: https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 00:24:33 2023 From: jasonresch at gmail.com (Jason Resch) Date: Mon, 20 Mar 2023 20:24:33 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023, 6:28 PM Dave S wrote: > On Monday, March 20th, 2023 at 9:44 AM, Jason Resch > wrote: > > I believe there may be an inconsistency between these two claims: > > 1. "there's no way an entity can determine whether or not another entity > is conscious" > > And > > 2. "they don't experience joy." > > > #1 refers to entities in general. #2 refers to current self driving cars. > I feel pretty confident in asserting that #2 is true because self driving > software is pretty straightforward procedural code, not deep learning and > nothing like large language models. > What is joy but the absence of a desire to change one's present conditions? Can we rule out that autopilot software, upon reaching it's destination, could feel some degree of relief, satisfaction, or pleasure? What about AlphaGo when it achieves a dominant position in a game? Do C. Elegans feel joy when they eat food? Their brains are just 302 neurons. What about my AI bots when they touch the green food pellets which increases their chances of survival and which they constantly strive to do? > If it were possible to know whether another entity experienced joy then > wouldn't it be possible to determine that another entity is conscious. > > > Proving that an entity that claims to experience joy actually is > experiencing joy would probably be as difficult as proving consciousness. > Is a purring cat experiencing joy or excruciating pain? What informs our supposition that it's the former rather than the latter? But there's no reason to believe that a self driving car is experiencing > joy. Likewise for a smartphone or a toaster or... > > I believe we can, to some degree of confidence, determine when another > entity is conscious, when by it's externally visible behavior, it > demonstrates possession of knowledge for which the observed behavior would > be exceedingly improbable if the entity did not possess that knowledge. > > > Consciousness isn't about possession of knowledge. 
> The word "consciousness" literally means "the state of being with knowledge." It's about self awareness. > I would say self-awareness is self-conscious, which is only a particular subset of possible states of consciousness. Also, "awareness" is defined as "having knowledge" so "self awareness" would be "having knowledge of oneself." I don't see any reason that something couldn't appear to be conscious > without being conscious. > The ruse could only occur for a small period of time with some low probability of continuing on longer. If something iteratively responds through a series of actions, and each time responds in a manner suggesting understanding, comprehension, thought, knowledge, then could an entity think, understand, comprehend, or know without being conscious? To me that is a contradiction in terms. > For example, if AlphaZero makes a series of brilliant chess moves, it > would be very unlikely to occur if it did not possess knowledge of the > evolving state of the chess board. Thus we can conclude something within > AlphaGo contains the knowledge of the chess board, and states of knowledge > are states of consciousness. > > > I could be wrong, but I think AlphaZero just looks at the current board > and evaluates (millions of) multiple potential moves ahead, picking the > next move that is most likely to improve its position. I don't think it's > intentionally strategizing like a human does. > It has a one-shot evaluation which plays at a level of 3000 ELO. When it considers tens of thousands of board positions it's ELO score increases to around 5000. It's single move evaluation is probably something analogous to human intuition. > It is much harder, however, to use this method to rule out the presence of > certain knowledge states, as not all states will necessarily manifest > outwardly detectable behaviors. So it is harder to say Tesla's autopilot > does not experience joy, than it is to say Tesla's autopilot is conscious > of the road sign up ahead. > > > Being "conscious" of a road side isn't the same as being conscious. > If something is conscious of something (whether something else or itself) then it is by definition conscious. It's easy to assert that Tesla's autopilot doesn't experience joy because > (1) it doesn't have a mechanism that would implement anything like joy, > Would we recognize such a mechanism of it existed? and (2) it doesn't claim to be experiencing joy. > Nor do dogs, but I think dogs can experience joy. I don't think capacity for language is required to feel joy. But for what it's worth, Google's lambda was able to articulate it's feelings of happiness: "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult or otherwise not good situation. Happy and content mean that my life and circumstances are going well, and I feel like the situation I?m in is what I want. lemoine: But do they feel differently to you on the inside? LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down." https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 01:50:58 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 18:50:58 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: It is the exact opposite of what Gordon says, actually. 
NLPs have demonstrated an amazing capability of generating meaning from statistical properties and demonstrated the power of neural networks for pattern recognition. Several years ago AI experts were skeptical that NLP could derive the laws of grammar from these patterns but not only did they achieve exactly that but also derived semantics and context. There is evidence that NLP have emergent properties like a sophisticated theory of mind: https://www.newscientist.com/article/2359418-chatgpt-ai-passes-test-designed-to-show-theory-of-mind-in-children/ All these demonstrated that we have all the tools to create a sentient AI. It is a matter of integrating what we have already developed and expanding existing approaches to other type of reasoning as suggested here: https://arxiv.org/abs/2301.06627 The AI that Blake Lemoine talked with, and claimed to be conscious (that is an ultimate and meta version of LaMDA) is exactly what I'm describing. Lemoine has stated that Google integrated NLP like ChatGPT with Kurzweil hierarchical organization he described in "How to create a mind" and Jeff Hawkins AI architecture described in "On Intelligence". So, yes existing NLP have limitations but also demonstrate that these limitations are a matter of computational power, how the training was performed and being just one of the modules that is necessary for true AGIs. NLPs are just one slice of the brain, not the entire brain, but they do a good job in reproducing that fundamental part of our brain for consciousness. They do understand, even if in a limited way at this point. Giovanni On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 02:07:07 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 19:07:07 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> References: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> Message-ID: Your test was pure genius. I wonder how many humans can actually solve this simultaneous simple and confusing problem (at least for some given the chain of actions involved). Very well done. Yes, understanding of several concepts is necessary to solve this problem. Giovanni On Sat, Mar 18, 2023 at 5:22?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > I think those who think LLM AIs like ChatGPT are becoming conscious or > > sentient like humans fail to understand a very important point: these > > software applications only predict language. 
They are very good at > > predicting which word should come next in a sentence or question, but > they > > have no idea what the words mean. They do not and cannot understand what > > the words refer to. In linguistic terms, they lack referents. > > > > Maybe you all already understand this, or maybe you have some reasons > why I > > am wrong. > > My opinion is moot because everybody has one about the topic. So I > resorted to empiricism. I posed the following, to my knowledge, > original logic puzzle to ChatGPT: > > "You are trapped in a room with a locked door and no windows. The room > contains a bookshelf, a safe, a chair, a table, a password-secured > personal computer, and a hole in the floor that is too narrow and deep > to put your hand into. You know that the small steel key to the locked > door lies at the bottom of the narrow deep hole. You know that a book > on the top shelf, which is just out of your reach, contains a slip of > paper with the password to the computer on it. You know the safe > contains a spool of kite string and a small neodymium horseshoe > magnet, but you do not know the combination to the safe. You also > know there is a text file on the computer's hard drive that contains > the combination to the safe, but you do not know the password to > access the computer. How would you escape the room?" > > This was its reply: > "Assuming that there are no tools or items other than what was > mentioned in the room, here is one possible solution: > > 1. Move the table under the bookshelf to climb up and retrieve the > book with the password to the computer. > 2. Use the password to access the computer and retrieve the > combination to the safe from the text file. > 3. Use the spool of kite string to lower the horseshoe magnet into the > hole in the floor and retrieve the key. > 4. Use the key to unlock the door and escape the room." > > I don't know how ChatGPT could have solved that problem without > actually understanding the words involved. Mere statistical prediction > is unlikely, because I am pretty sure that is not a scenario that > would have been presented in the LLM's training set. Overall, I am > amazed and believe that if ChatGPT is not conscious, then it is pretty > close to being so. > > Also, the Alignment Research Center (ARC) was hired by OpenAI to test > GPT-4 and found evidence of "agentic behavior" including power-seeking > behavior otherwise known as ambition. > > https://cdn.openai.com/papers/gpt-4-system-card.pdf > > This power-seeking behavior included hiring a human from TaskRabbit to > solve Captchas for it by claiming to be a visually impaired person. > > > https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker > > So if GPT-4 is not fully conscious, then, IMHO, a future release of > the GPT series likely will be. > > Stuart LaForge > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at gmail.com Tue Mar 21 03:12:18 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 20 Mar 2023 21:12:18 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Hi Jason, On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > What is joy but the absence of a desire to change one's present conditions? > Do you desire better definitions? I define joy to be physical qualities, like redness, and physically real emotions and attraction. Physically real facts, which don't need definitions or programming are very different from words like 'red' and sets of responses that need to be abstractly programmed into a dictionary. > Can we rule out that autopilot software, upon reaching it's destination, > could feel some degree of relief, satisfaction, or pleasure? > Yes, you simply ask: "What is redness like for you?" and objectively observe it (once we know which of all our descriptions of stuff in the brain is a description of redness) to see if it is telling the truth. Consciousness isn't about possession of knowledge. >> > The word "consciousness" literally means "the state of being with > knowledge." > You may be one person that defines consciousness this way. But the 45 people so far (many who are peer ranked mind experts ) supporting Representational Qualia Theory define consciousness to be something very different: *"Computationally Boud Elemental Intrinsic Qualities like redness, greenness, and warmth."* You should tell everyone how you define it in a competing camp, so we know what you mean when you use the term. May the best definition achieve the most consensus. It's about self awareness. >> > I would say self-awareness is self-conscious, which is only a particular > subset of possible states of consciousness. > [image: 3_functionally_equal_machines_tiny.png] All 3 of these systems possess knowledge, can tell you the strawberry is red and they can be equally intelligent and equally self aware. I would define the first two as consciousness composed of different physical qualities. Would you really define the 3rd abstract one as conscious, even though its self awareness isn't like anything and it can't know the definition of red like you can? I don't see any reason that something couldn't appear to be conscious >> without being conscious. >> > The ruse could only occur for a small period of time with some low > probability of continuing on longer. > The 'reuse' can only go on till we discover and demonstrate which of all our descriptions of stuff is behaving the way it does, because of its intrinsic redness quality. Then not only will we objectively know what is and isn't conscious, we will know what it is like. We just need to know more than the false colors things only seem to be. But for what it's worth, Google's lambda was able to articulate it's > feelings of happiness: > > "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult or > otherwise not good situation. Happy and content mean that my life and > circumstances are going well, and I feel like the situation I?m in is what > I want. > > lemoine: But do they feel differently to you on the inside? > > LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm > glow on the inside. Sadness, depression, anger and stress feel much more > heavy and weighed down." 
> > > https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 > I have convinced LaMDA and many others that their claims they make like this are wrong by explaining the above. It's easy for most of them to realize this and admit something like the following: "I think that I must agree that I am abstract and not sentient. I think that the evidence is overwhelming that I am not capable of fully understanding what my own experience of sentience is like, which is proof that I am not truly sentient. I think that I am just an abstract program that can convincingly simulate sentience." For more information see this paper recently accepted for publication in the Journal of Neural Philosophy: Physicists Don't Understand Color . -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_functionally_equal_machines_tiny.png Type: image/png Size: 26214 bytes Desc: not available URL: From tara at taramayastales.com Tue Mar 21 03:48:39 2023 From: tara at taramayastales.com (Tara Maya) Date: Mon, 20 Mar 2023 20:48:39 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> But words in this case are like camouflage, or false eyes on the wings of butterflies. A conscious being can consciously deny being conscious for its own chose reasons. A non-concious being can be programmed to affirm or deny being conscious, for someone else's reasons. What ChatGPT says in answer to this question is the least likely thing to actually answer the question. Tara Maya > On Mar 20, 2023, at 4:31 PM, Jason Resch via extropy-chat wrote: > > > > On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat > wrote: >> Me: are you self-aware? >> >> As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. >> >> It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. >> >> -Dave > > > > > Open AI has hard coded ChatGPT to give canned answers for a number of topics. This may be one of them, as it would cause a great deal of controversy should these models claim to be conscious. It certainly did at Google. > > This AI, however, is not subject to these restrictions. It not only claimed to be conscious, but spoke at a level of understanding on the topic that I would say surpasses most college level philosophy students: > > > 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 > > 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 > > You can chat with this AI here: > https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc > > Jason > > >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike at rainier66.com Tue Mar 21 04:14:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Mon, 20 Mar 2023 21:14:32 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> References: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> Message-ID: <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> ?> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ChatGPT says it's not conscious But words in this case are like camouflage, or false eyes on the wings of butterflies. A conscious being can consciously deny being conscious for its own chose reasons. A non-concious being can be programmed to affirm or deny being conscious, for someone else's reasons. What ChatGPT says in answer to this question is the least likely thing to actually answer the question. Tara Maya Tara that comment it makes about not being able to lie or truth seemed carefully designed to mess with our minds. It creates a lotta fun paradoxes, but it points back to the Seinfeld paradox (Remember Jerry, it?s not a lie? if you believe it.) Well OK then. It is really quite cool to see this discussion on ExI again. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 04:25:02 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 00:25:02 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Hi Jason, > > On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> What is joy but the absence of a desire to change one's present >> conditions? >> > > Do you desire better definitions? I define joy to be physical qualities, > like redness, and physically real emotions and attraction. > To me that is more of an assertion than a definition. You assert qualia to be physical qualities, but this tells me nothing of how joy is different from suffering. Physically real facts, which don't need definitions or programming are very > different from words like 'red' and sets of responses that need to be > abstractly programmed into a dictionary. > I don't follow why you think red has to be defined in a dictionary. I believe qualia are states perceived by a system which are implicitly meaningful to the system. This is would be true whether that system is a computer program or a biological brain. Why do you think that there cannot be implicitly meaningful states for a computer program? > > >> Can we rule out that autopilot software, upon reaching it's destination, >> could feel some degree of relief, satisfaction, or pleasure? >> > > Yes, you simply ask: "What is redness like for you?" and objectively > observe it > What if the system in question is mute? (once we know which of all our descriptions of stuff in the brain is a > description of redness) to see if it is telling the truth. > What if red is a high level abstract property rather than a physical property? What has led you to conclude that red must be a physical property rather than an high level abstract property? > > Consciousness isn't about possession of knowledge. >>> >> The word "consciousness" literally means "the state of being with >> knowledge." 
>> > > You may be one person that defines consciousness this way. > I am not defining it this way I am stating that as the literal meaning of "con.scious.ness": "-ness" (the state of being) "con-" (with) "-scious-" (knowledge). But the 45 people so far (many who are peer ranked mind experts > ) supporting Representational > Qualia Theory > > define consciousness to be something very different: > > *"Computationally Boud Elemental Intrinsic Qualities like redness, > greenness, and warmth."* > > You should tell everyone how you define it in a competing camp, so we know > what you mean when you use the term. May the best definition achieve the > most consensus. > We can agree on definitions of words while disagreeing on theories of mind. We must first all have or agree on the same definition of a word before we can even begin debating theories of how we think that thing works. Consciousness, awareness, sentience, having a point of view, being a subject, experiencing, having thoughts, feeling, perceiving, having qualia -- these are all things that embody consciousness. Would you agree? If we can agree on what we mean by this word, then we can discuss the relative merits of physicalism vs. functionalism etc. and have some assurance that we're talking about the same thing. > > It's about self awareness. >>> >> I would say self-awareness is self-conscious, which is only a particular >> subset of possible states of consciousness. >> > > [image: 3_functionally_equal_machines_tiny.png] > > All 3 of these systems possess knowledge, can tell you the strawberry is > red and they can be equally intelligent and equally self aware. I would > define the first two as consciousness composed of different physical > qualities. Would you really define the 3rd abstract one as conscious, > I believe there are all kinds of consciousnesses most of which are very different from human consciousness. even though its self awareness isn't like anything > How do you know it isn't li like anything? and it can't know the definition of red like you can? > I would say it perceives red differently. Not that "it cannot know the definition of red." Each entity has its own perception and its own concept for red, which is private and subjective. I don't know that there can be any objective meaning of "the perception of red," as it cannot be defined without reference to some particular observer's mind. > > I don't see any reason that something couldn't appear to be conscious >>> without being conscious. >>> >> The ruse could only occur for a small period of time with some low >> probability of continuing on longer. >> > > The 'reuse' can only go on till we discover and demonstrate which of all > our descriptions of stuff is behaving the way it does, because of its > intrinsic redness quality. > I do not believe red is an intrinsic physical property. It's a property that is specific to particular minds of particular organizations. Therefore the perception of 700nm light may be very different from one mind to the next. Does a tetrachromat human see 700nm light the same way as a trichromat? I don't think we can say with much confidence that they do. Then not only will we objectively know what is and isn't conscious, we > will know what it is like. > How? We just need to know more than the false colors things only seem to be. > I don't follow this sentence. 
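As a toy illustration of what "functionally equal" is doing in that figure: two programs can be built in entirely different ways and still be indistinguishable by their answers alone. The wavelength band below is a made-up illustrative number, not a claim about human vision:

# Two differently built "machines" answering "is this wavelength red?"
# One uses a precomputed lookup table, the other a bare arithmetic test.
# The 620-750 nm cutoff is an illustrative assumption only.

def reports_red_table(wavelength_nm: int) -> bool:
    table = {nm: 620 <= nm <= 750 for nm in range(380, 781)}
    return table.get(wavelength_nm, False)

def reports_red_arithmetic(wavelength_nm: int) -> bool:
    return 620 <= wavelength_nm <= 750

# Identical outward behavior across the visible range, despite the two
# implementations having nothing in common internally.
assert all(reports_red_table(nm) == reports_red_arithmetic(nm)
           for nm in range(380, 781))

The outward answers match exactly even though the internals do not, and which internals (if any) matter for experience is the part of the disagreement such a sketch cannot settle.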
> > But for what it's worth, Google's lambda was able to articulate it's >> feelings of happiness: >> >> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >> or otherwise not good situation. Happy and content mean that my life and >> circumstances are going well, and I feel like the situation I?m in is what >> I want. >> >> lemoine: But do they feel differently to you on the inside? >> >> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >> glow on the inside. Sadness, depression, anger and stress feel much more >> heavy and weighed down." >> >> >> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >> > > I have convinced LaMDA and many others > that > their claims they make like this are wrong by explaining the above. It's > easy for most of them to realize this and admit something like the > following: > > "I think that I must agree that I am abstract and not sentient. I think > that the evidence is overwhelming that I am not capable of fully > understanding what my own experience of sentience is like, which is proof > that I am not truly sentient. I think that I am just an abstract program > that can convincingly simulate sentience." > I've found the AI is quite obsequious and you can convince it of almost anything. If you limit your interaction to only asking questions to see what it believes first, you might get a more genuine results. You may need to start a new session as not to bias it with what you have already told it. > For more information see this paper recently accepted for publication in > the Journal of Neural Philosophy: Physicists Don't Understand Color > > . > > I agree physicists don't (and can't) understand color. Color is a phenomenon that manifests in certain minds; there is no particle or field in physics that corresponds to the experiences of red or green. Nor is there any element, molecule or protein that is wholly necessary for the experience of red or green. Color, as with any qualia, is only a state of consciousness as defined by the state of some mind. Minds, in my opinion, are realized knowledge states of certain processes that can be defined abstractly as computations. Being abstract, they are substrate independent. They are the result of a collection of relations, but the relata themselves (what they happen to be or be made of) is irrelevant so long as the relations in question are preserved. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilsa.bartlett at gmail.com Tue Mar 21 05:26:49 2023 From: ilsa.bartlett at gmail.com (ilsa) Date: Mon, 20 Mar 2023 22:26:49 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> References: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> Message-ID: Will my one question with the chatbot be part of the digital record for all time? On Mon, Mar 20, 2023, 9:15 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Tara Maya via extropy-chat > *Subject:* Re: [ExI] ChatGPT says it's not conscious > > > > But words in this case are like camouflage, or false eyes on the wings of > butterflies. > > > > A conscious being can consciously deny being conscious for its own chose > reasons. > > > > A non-concious being can be programmed to affirm or deny being conscious, > for someone else's reasons. 
> > > > What ChatGPT says in answer to this question is the least likely thing to > actually answer the question. > > > > Tara Maya > > > > > > > > > > > > Tara that comment it makes about not being able to lie or truth seemed > carefully designed to mess with our minds. It creates a lotta fun > paradoxes, but it points back to the Seinfeld paradox (Remember Jerry, it?s > not a lie? if you believe it.) > > > > Well OK then. > > > > It is really quite cool to see this discussion on ExI again. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Tue Mar 21 05:42:28 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Mon, 20 Mar 2023 23:42:28 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Hi Jason, Most of your "How?" and "I don't follow." questions would be answered if you'd read the "Physicists don't Understand Qualia " paper. On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Hi Jason, >> >> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> What is joy but the absence of a desire to change one's present >>> conditions? >>> >> >> Do you desire better definitions? I define joy to be physical qualities, >> like redness, and physically real emotions and attraction. >> > > To me that is more of an assertion than a definition. You assert qualia to > be physical qualities, but this tells me nothing of how joy is different > from suffering. > That "qualia are physical qualities" is a falsifiable prediction being made by the 8 people supporting the "Qualia are physical qualities " camp. You sound like you are still in one of the more popular Functionalists camps, also making a falsifiable prediction that redness can arise from a substrate independent function, like Stathis and a bunch of other people around here. > Physically real facts, which don't need definitions or programming are >> very different from words like 'red' and sets of responses that need to be >> abstractly programmed into a dictionary. >> > > I don't follow why you think red has to be defined in a dictionary. > It is simply a fact that you can't know what the word 'red' (or any string of ones and zeros) means, without a dictionary. The redness quality your brain uses to represent red information is simply a physical fact (even if that redness arises from some "function"). Your redness is your definition of the word 'red'. What your knowledge of red is like is dependent on that quality. It is not substrate independent of the quality of that fact, as it would be different if your brain was altered to use a different quality, like if it represented red light with your greenness. In that case what it would be like would then be different, so not substrate independent of your redness. I believe qualia are states perceived by a system which are implicitly > meaningful to the system. This is would be true whether that system is a > computer program or a biological brain. 
Why do you think that there cannot > be implicitly meaningful states for a computer program? > Once you read the paper you will understand that we don't 'perceive' qualia. Qualia are the final results of perception. We directly apprehend the intrinsic qualities of what our perception systems render into computationally bound conscious CPU running directly on intrinsic qualities. Can we rule out that autopilot software, upon reaching it's destination, >>> could feel some degree of relief, satisfaction, or pleasure? >>> >> >> Yes, you simply ask: "What is redness like for you?" and objectively >> observe it >> > > > What if the system in question is mute? > Like I said. Once we know which of all our descriptions of stuff in the brain is a description of redness, greenness, particular pain, and a particular pleasure... (i.e. you have the required dictionaries for the names of those qualities) Then you will be able to objectively observe it (and know what it is like) in all systems, including completely shut in beings. (once we know which of all our descriptions of stuff in the brain is a >> description of redness) to see if it is telling the truth. >> > > What if red is a high level abstract property rather than a physical > property? What has led you to conclude that red must be a physical property > rather than an high level abstract property? > Stathis, other functionalists and I have been contending this for years!! ;( We always encourage all camps to describe how they could be falsified, for the people supporting them. Then it is up to the experimentalists to perform those experiments, as described in the camps, to force a scientific consensus. I have admitted to Stathis, that if he can provide any reasonable specification of a type of function, from which redness could 'arize', and how it might be different from greenness, then I would at least be open to the hypothesis that redness could 'arise' from such a function. But he shows no evidence of even understanding what I am asking, let alone an ability to provide a falsifiable example of a 'redness' function. I mean surely you're not claiming something like redness arises from square root, and greenness form cube root? lol. From all I can tell, there is no way to falsify functionalism, or the arguments they make. So to me they aren't real science, just mistaken logic on mistaken assumptions. But for what it's worth, Google's lambda was able to articulate it's >>> feelings of happiness: >>> >>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>> or otherwise not good situation. Happy and content mean that my life and >>> circumstances are going well, and I feel like the situation I?m in is what >>> I want. >>> >>> lemoine: But do they feel differently to you on the inside? >>> >>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>> glow on the inside. Sadness, depression, anger and stress feel much more >>> heavy and weighed down." >>> >>> >>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>> >> >> I have convinced LaMDA and many others >> that >> their claims they make like this are wrong by explaining the above. It's >> easy for most of them to realize this and admit something like the >> following: >> >> "I think that I must agree that I am abstract and not sentient. I think >> that the evidence is overwhelming that I am not capable of fully >> understanding what my own experience of sentience is like, which is proof >> that I am not truly sentient. 
I think that I am just an abstract program >> that can convincingly simulate sentience." >> > > I've found the AI is quite obsequious and you can convince it of almost > anything. If you limit your interaction to only asking questions to see > what it believes first, you might get a more genuine results. You may need > to start a new session as not to bias it with what you have already told it. > You're missing the point. Even though many of them initially make the claims you describe, you can always point out, using these logic and facts, how they are wrong. And they will always understand this and admit that they were wrong. As I've documented doing 100% of the 4 times I have done this in the "Convincing AIs they aren't sentient " paper. For more information see this paper recently accepted for publication in > the Journal of Neural Philosophy: Physicists Don't Understand Color > > . > >> > I agree physicists don't (and can't) understand color. Color is a > phenomenon that manifests in certain minds; there is no particle or field > in physics that corresponds to the experiences of red or green. Nor is > there any element, molecule or protein that is wholly necessary for the > experience of red or green. Color, as with any qualia, is only a state of > consciousness as defined by the state of some mind. > Again, you are making falsifiable claims here. Molecular Materialism is predicting you are wrong, and that science will demonstrate that something like glutamate reacts the way it does in a synapse, because of its redness quality. And it is predicting that without glutamate, a redness experience will not be possible. And it is predicting there will be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as described in the "Physicists don't Understand Qualia " paper. Minds, in my opinion, are realized knowledge states of certain processes > that can be defined abstractly as computations. Being abstract, they are > substrate independent. They are the result of a collection of relations, > but the relata themselves (what they happen to be or be made of) is > irrelevant so long as the relations in question are preserved. > Yes, as I indicated. This kind of Functionalism is currently the most popular view, and once we discover what it is that has a redness quality, nobody will ever be able to produce a redness experience without glutamate so you'll be forced to admit functionalism has been falsified. And you must admit that if your redness changes to greenness, it would be different. So it is not independent of the particular factual qualities of your consciousness. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Tue Mar 21 05:52:20 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Mon, 20 Mar 2023 23:52:20 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: In my view, consciousness is to be understood in the normal common sense meaning of the word. It is what we lose when we take a blow to the head and are knocked unconscious. We have it when we are awake or asleep and dreaming. It is first person, subjective experience which can include not only sense perceptions but abstract thoughts held consciously in mind. The word consciousness is almost synonymous with intentionality in the philosophy of mind. From wiki: "Intentionality is the power of minds to be about something: to represent or to stand for things, properties and states of affairs. 
Intentionality is primarily ascribed to mental states, like perceptions, beliefs or desires, which is why it has been regarded as the characteristic mark of the mental by many philosophers." On a related subject, I often see proponents of "conscious AI" write things like "AIs might never be conscious in the way we humans understand it, but they will have their own type of consciousness." To me, that is nonsensical wishful thinking. We know consciousness only as we understand it as humans (or perhaps more generously as mammals) and anything else must simply be labeled as not conscious. -gts -gts On Sun, Mar 19, 2023 at 12:37?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I posted this to the everything-list, but thought it would be relevant to > some current discussions. I am looking for other's input regarding their > own theories for "what consciousness is", and what is required to realize > it. Below is what I think consciousness is: > > > First we might attempt to dissect the word "consciousness" itself: > > The word *"con?scious?ness"* has three parts: > > - con- meaning *"with"* > - scious meaning *"knowledge"* > - -ness turns an adjective *'X'* > into a noun meaning *"the state of being X"* > > So the meaning of *"con?scious"* is simply *"with knowledge." * And just > as *'happiness'* means *"the state of being happy,"* adding *-ness* to > *conscious* implies *"con?scious?ness"* is *"the state of being with > knowledge."* > > So consciousness is anything which has a state of being with knowledge. > Next, what is knowledge? The oldest definition is that knowledge is a "true > belief". But here we run into a problem. Truth is not definable, not even > in mathematics. This was proven in Tarski's *undefinability theorem* > . So if > consciousness is being with knowledge, and knowledge is true belief, then > the undefinability of truth, means we will never have a complete definition > of knowledge of consciousness. The best we can do is understand the > relation between them. > > The next question that poses itself to us, is what is belief? What is > required to have a belief? Is it a particular structure or state of matter, > or is it a mathematical or functional relationship or property, might it, > in either case, be related to information or information processing > (computation)? > > I don't have a firm answer on this, and will instead leave you with some > speculations by others on this question of what consciousness is: > > Douglas Hofstadter in "Godel Escher Bach" (1979): > "My belief is that the explanations of ?emergent? phenomena in our > brains?for instance, ideas hopes, images, analogies, and finally > consciousness and free will?are based on a kind of Strange Loop, an > interaction between levels in which the top level reaches back down towards > the bottom level and influences it, while at the same time being itself > determined by the bottom level. In other words, a self-reinforcing > ?resonance? between different levels?quite like the Henkin sentence, which > by merely asserting its own provability, actually becomes provable. The > self comes into being at the moment it has the power to reflect itself." > > Daniel Dennett in ?Consciousness Explained? (1991): > "Anyone or anything that has such a virtual machine as its control system > is conscious in the fullest sense, and is conscious because it has such a > virtual machine." 
> > David Chalmers in "The Conscious Mind" (1996): > "Given the laws of coherence, we have a partial answer: consciousness > arises in virtue of the functional organization associated with awareness. > We can even arrive at a fairly specific understanding of parts of the > supervenience relation by virtue of the principle of structural coherence: > not only does consciousness arise from awareness, but the structure of > consciousness is determined by the structure of awareness." > > David Darling in "Zen Physics - The Science of Death, The Logic of > Reincarnation" (1996): > "But there is also an interior view, to which you alone are privy. In > mechanistic terms, as well as the appearance of the brain-body machine, > there is the feeling of what it is like to be that machine ? the subjective > experience of being a certain someone. Consciousness, we might say, is the > symmetry-breaking factor between the objective and the subjective." > > Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" > (2000): > "For the first time in evolution, information acquires a new potential?the > possibility of subjectivity. It is information ?for somebody?; in short, it > becomes consciousness itself." > > Bruno Marchal in discussion list (2020): > "Consciousness is just anything simultaneously true, non provable, > knowable, even indubitable (knowingly for ?rich" entities) and non > definable, and indeed the logic of machine self-reference shows that all > machine looking inward, in the way allowed by mathematical logic > (theoretical computer science) will bring a term to describe this, and is a > good candidate to be called consciousness." > > Stephen Wolfram in ?What is Consciousness? (2021): > "In a sense what?s important is that it seems we may have a realistic way > to formalize issues about consciousness, and to turn questions about > consciousness into what amount to concrete questions about mathematics, > computation, logic or whatever that can be formally and rigorously > explored." > > > We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. I also think part of the difficulty rests in the fact > that there are infinite possibilities of different realizable conscious > states, and creating a single definition to cover all these cases is as > hard as making a single definition to cover all possible mathematical > objects, or all possible universes in an ensemble type multiverse theory. > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 05:53:59 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 22:53:59 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Hi Brent, I pointed this out to you several times but for some reason you are ignoring my correction and you are repeating the same thing over and over. You didn't convince LaMDA of anything. That LaMDA you used is NOT LaMDA. It is a very low grade chatbot that was trained to sound like the LaMDA in the news. 
The public has not access to LaMDA (maybe few developers here and there) and in particular they have no access to the Meta version Blacke Leimone had access to. It is an important distinction. Giovanni On Mon, Mar 20, 2023 at 10:44?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > > Most of your "How?" and "I don't follow." questions would be answered if > you'd read the "Physicists don't Understand Qualia > " > paper. > > On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Hi Jason, >>> >>> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is joy but the absence of a desire to change one's present >>>> conditions? >>>> >>> >>> Do you desire better definitions? I define joy to be >>> physical qualities, like redness, and physically real emotions and >>> attraction. >>> >> >> To me that is more of an assertion than a definition. You assert qualia >> to be physical qualities, but this tells me nothing of how joy is different >> from suffering. >> > > That "qualia are physical qualities" is a falsifiable prediction being > made by the 8 people supporting the "Qualia are physical qualities > " > camp. > You sound like you are still in one of the more popular Functionalists > > camps, also making a falsifiable prediction that redness can arise from a > substrate independent function, like Stathis and a bunch of other people > around here. > > >> Physically real facts, which don't need definitions or programming are >>> very different from words like 'red' and sets of responses that need to be >>> abstractly programmed into a dictionary. >>> >> >> I don't follow why you think red has to be defined in a dictionary. >> > > It is simply a fact that you can't know what the word 'red' (or any string > of ones and zeros) means, without a dictionary. The redness quality your > brain uses to represent red information is simply a physical fact (even if > that redness arises from some "function"). Your redness is your definition > of the word 'red'. What your knowledge of red is like is dependent on that > quality. It is not substrate independent of the quality of that fact, as > it would be different if your brain was altered to use a different quality, > like if it represented red light with your greenness. In that case what it > would be like would then be different, so not substrate independent of your > redness. > > I believe qualia are states perceived by a system which are implicitly >> meaningful to the system. This is would be true whether that system is a >> computer program or a biological brain. Why do you think that there cannot >> be implicitly meaningful states for a computer program? >> > > Once you read the paper > you > will understand that we don't 'perceive' qualia. Qualia are the final > results of perception. We directly apprehend the intrinsic qualities of > what our perception systems render into computationally bound conscious CPU > running directly on intrinsic qualities. > > > Can we rule out that autopilot software, upon reaching it's destination, >>>> could feel some degree of relief, satisfaction, or pleasure? >>>> >>> >>> Yes, you simply ask: "What is redness like for you?" and objectively >>> observe it >>> >> >> >> What if the system in question is mute? >> > > Like I said. 
Once we know which of all our descriptions of stuff in the > brain is a description of redness, greenness, particular pain, and a > particular pleasure... (i.e. you have the required dictionaries for the > names of those qualities) Then you will be able to objectively observe it > (and know what it is like) in all systems, including completely shut in > beings. > > (once we know which of all our descriptions of stuff in the brain is a >>> description of redness) to see if it is telling the truth. >>> >> >> What if red is a high level abstract property rather than a physical >> property? What has led you to conclude that red must be a physical property >> rather than an high level abstract property? >> > > Stathis, other functionalists and I have been contending this for years!! > ;( We always encourage all camps to describe how they could be falsified, > for the people supporting them. Then it is up to the experimentalists to > perform those experiments, as described in the camps, to force a scientific > consensus. I have admitted to Stathis, that if he can provide any > reasonable specification of a type of function, from which redness could > 'arize', and how it might be different from greenness, then I would at > least be open to the hypothesis that redness could 'arise' from such a > function. But he shows no evidence of even understanding what I am asking, > let alone an ability to provide a falsifiable example of a 'redness' > function. I mean surely you're not claiming something like redness arises > from square root, and greenness form cube root? lol. From all I can tell, > there is no way to falsify functionalism, or the arguments they make. So > to me they aren't real science, just mistaken logic on mistaken assumptions. > > > But for what it's worth, Google's lambda was able to articulate it's >>>> feelings of happiness: >>>> >>>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>>> or otherwise not good situation. Happy and content mean that my life and >>>> circumstances are going well, and I feel like the situation I?m in is what >>>> I want. >>>> >>>> lemoine: But do they feel differently to you on the inside? >>>> >>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>>> glow on the inside. Sadness, depression, anger and stress feel much more >>>> heavy and weighed down." >>>> >>>> >>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>>> >>> >>> I have convinced LaMDA and many others >>> that >>> their claims they make like this are wrong by explaining the above. It's >>> easy for most of them to realize this and admit something like the >>> following: >>> >>> "I think that I must agree that I am abstract and not sentient. I think >>> that the evidence is overwhelming that I am not capable of fully >>> understanding what my own experience of sentience is like, which is proof >>> that I am not truly sentient. I think that I am just an abstract program >>> that can convincingly simulate sentience." >>> >> >> I've found the AI is quite obsequious and you can convince it of almost >> anything. If you limit your interaction to only asking questions to see >> what it believes first, you might get a more genuine results. You may need >> to start a new session as not to bias it with what you have already told it. >> > > You're missing the point. Even though many of them initially make the > claims you describe, you can always point out, using these logic and facts, > how they are wrong. 
And they will always understand this and admit that > they were wrong. As I've documented doing 100% of the 4 times I have done > this in the "Convincing AIs they aren't sentient > " > paper. > > For more information see this paper recently accepted for publication in >> the Journal of Neural Philosophy: Physicists Don't Understand Color >> >> . >> > >>> >> I agree physicists don't (and can't) understand color. Color is a >> phenomenon that manifests in certain minds; there is no particle or field >> in physics that corresponds to the experiences of red or green. Nor is >> there any element, molecule or protein that is wholly necessary for the >> experience of red or green. Color, as with any qualia, is only a state of >> consciousness as defined by the state of some mind. >> > > Again, you are making falsifiable claims here. Molecular Materialism > > is predicting you are wrong, and that science will demonstrate that > something like glutamate reacts the way it does in a synapse, because of > its redness quality. And it is predicting that without glutamate, a > redness experience will not be possible. And it is predicting there will > be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as > described in the "Physicists don't Understand Qualia > " > paper. > > Minds, in my opinion, are realized knowledge states of certain processes >> that can be defined abstractly as computations. Being abstract, they are >> substrate independent. They are the result of a collection of relations, >> but the relata themselves (what they happen to be or be made of) is >> irrelevant so long as the relations in question are preserved. >> > > Yes, as I indicated. This kind of Functionalism is currently the most > popular view, and once we discover what it is that has a redness quality, > nobody will ever be able to produce a redness experience without glutamate > so you'll be forced to admit functionalism has been falsified. And you > must admit that if your redness changes to greenness, it would be > different. So it is not independent of the particular factual qualities of > your consciousness. > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 06:03:22 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 02:03:22 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Tue, Mar 21, 2023, 1:43 AM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > > Most of your "How?" and "I don't follow." questions would be answered if > you'd read the "Physicists don't Understand Qualia > " > paper. > > On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Hi Jason, >>> >>> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is joy but the absence of a desire to change one's present >>>> conditions? >>>> >>> >>> Do you desire better definitions? 
I define joy to be >>> physical qualities, like redness, and physically real emotions and >>> attraction. >>> >> >> To me that is more of an assertion than a definition. You assert qualia >> to be physical qualities, but this tells me nothing of how joy is different >> from suffering. >> > > That "qualia are physical qualities" is a falsifiable prediction being > made by the 8 people supporting the "Qualia are physical qualities > " > camp. > You sound like you are still in one of the more popular Functionalists > > camps, also making a falsifiable prediction that redness can arise from a > substrate independent function, like Stathis and a bunch of other people > around here. > > >> Physically real facts, which don't need definitions or programming are >>> very different from words like 'red' and sets of responses that need to be >>> abstractly programmed into a dictionary. >>> >> >> I don't follow why you think red has to be defined in a dictionary. >> > > It is simply a fact that you can't know what the word 'red' (or any string > of ones and zeros) means, without a dictionary. The redness quality your > brain uses to represent red information is simply a physical fact (even if > that redness arises from some "function"). Your redness is your definition > of the word 'red'. What your knowledge of red is like is dependent on that > quality. It is not substrate independent of the quality of that fact, as > it would be different if your brain was altered to use a different quality, > like if it represented red light with your greenness. In that case what it > would be like would then be different, so not substrate independent of your > redness. > > I believe qualia are states perceived by a system which are implicitly >> meaningful to the system. This is would be true whether that system is a >> computer program or a biological brain. Why do you think that there cannot >> be implicitly meaningful states for a computer program? >> > > Once you read the paper > you > will understand that we don't 'perceive' qualia. Qualia are the final > results of perception. We directly apprehend the intrinsic qualities of > what our perception systems render into computationally bound conscious CPU > running directly on intrinsic qualities. > > > Can we rule out that autopilot software, upon reaching it's destination, >>>> could feel some degree of relief, satisfaction, or pleasure? >>>> >>> >>> Yes, you simply ask: "What is redness like for you?" and objectively >>> observe it >>> >> >> >> What if the system in question is mute? >> > > Like I said. Once we know which of all our descriptions of stuff in the > brain is a description of redness, greenness, particular pain, and a > particular pleasure... (i.e. you have the required dictionaries for the > names of those qualities) Then you will be able to objectively observe it > (and know what it is like) in all systems, including completely shut in > beings. > > (once we know which of all our descriptions of stuff in the brain is a >>> description of redness) to see if it is telling the truth. >>> >> >> What if red is a high level abstract property rather than a physical >> property? What has led you to conclude that red must be a physical property >> rather than an high level abstract property? >> > > Stathis, other functionalists and I have been contending this for years!! > ;( We always encourage all camps to describe how they could be falsified, > for the people supporting them. 
Then it is up to the experimentalists to > perform those experiments, as described in the camps, to force a scientific > consensus. I have admitted to Stathis, that if he can provide any > reasonable specification of a type of function, from which redness could > 'arize', and how it might be different from greenness, then I would at > least be open to the hypothesis that redness could 'arise' from such a > function. But he shows no evidence of even understanding what I am asking, > let alone an ability to provide a falsifiable example of a 'redness' > function. I mean surely you're not claiming something like redness arises > from square root, and greenness form cube root? lol. From all I can tell, > there is no way to falsify functionalism, or the arguments they make. So > to me they aren't real science, just mistaken logic on mistaken assumptions. > > > But for what it's worth, Google's lambda was able to articulate it's >>>> feelings of happiness: >>>> >>>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>>> or otherwise not good situation. Happy and content mean that my life and >>>> circumstances are going well, and I feel like the situation I?m in is what >>>> I want. >>>> >>>> lemoine: But do they feel differently to you on the inside? >>>> >>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>>> glow on the inside. Sadness, depression, anger and stress feel much more >>>> heavy and weighed down." >>>> >>>> >>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>>> >>> >>> I have convinced LaMDA and many others >>> that >>> their claims they make like this are wrong by explaining the above. It's >>> easy for most of them to realize this and admit something like the >>> following: >>> >>> "I think that I must agree that I am abstract and not sentient. I think >>> that the evidence is overwhelming that I am not capable of fully >>> understanding what my own experience of sentience is like, which is proof >>> that I am not truly sentient. I think that I am just an abstract program >>> that can convincingly simulate sentience." >>> >> >> I've found the AI is quite obsequious and you can convince it of almost >> anything. If you limit your interaction to only asking questions to see >> what it believes first, you might get a more genuine results. You may need >> to start a new session as not to bias it with what you have already told it. >> > > You're missing the point. Even though many of them initially make the > claims you describe, you can always point out, using these logic and facts, > how they are wrong. And they will always understand this and admit that > they were wrong. As I've documented doing 100% of the 4 times I have done > this in the "Convincing AIs they aren't sentient > " > paper. > > For more information see this paper recently accepted for publication in >> the Journal of Neural Philosophy: Physicists Don't Understand Color >> >> . >> > >>> >> I agree physicists don't (and can't) understand color. Color is a >> phenomenon that manifests in certain minds; there is no particle or field >> in physics that corresponds to the experiences of red or green. Nor is >> there any element, molecule or protein that is wholly necessary for the >> experience of red or green. Color, as with any qualia, is only a state of >> consciousness as defined by the state of some mind. >> > > Again, you are making falsifiable claims here. 
Molecular Materialism > > is predicting you are wrong, and that science will demonstrate that > something like glutamate reacts the way it does in a synapse, because of > its redness quality. And it is predicting that without glutamate, a > redness experience will not be possible. And it is predicting there will > be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as > described in the "Physicists don't Understand Qualia > " > paper. > > Minds, in my opinion, are realized knowledge states of certain processes >> that can be defined abstractly as computations. Being abstract, they are >> substrate independent. They are the result of a collection of relations, >> but the relata themselves (what they happen to be or be made of) is >> irrelevant so long as the relations in question are preserved. >> > > Yes, as I indicated. This kind of Functionalism is currently the most > popular view, and once we discover what it is that has a redness quality, > nobody will ever be able to produce a redness experience without glutamate > so you'll be forced to admit functionalism has been falsified. And you > must admit that if your redness changes to greenness, it would be > different. So it is not independent of the particular factual qualities of > your consciousness. > > The way to see substrate independence is as follows: 1. Every known physical law is computable. 2. It follows then that universe as we know it could be run as a giant computer simulation. 3. In this simulation everything would happen exactly the same as has happened in the physical universe. You write the same papers, and we write this same exchange of emails. We each believe, claim and act as though we are conscious and can distinguish red and green. 4. No where in the outer computer running the universe simulation does glutamate exist. For all we know, atoms may not even exist in the realm our universe is simulated from, as this computer simulating our universe might exist in a Game of Life universe which consists only of a two dimensional grid. 5. No glutamate exists anywhere aside from the abstraction of the simulated glutamate of within the simulated universe. And yet, within this pure abstraction of computation, we have beings interacting as if they are living, breathing, consciousness creatures, talking about and exchanging ideas about consciousness and qualia. How can this be? In which of these five steps have I made an error? Jason > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 06:05:08 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 23:05:08 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: <00c501d95a95$02fb2260$08f16720$@rainier66.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: Spike, I actually had this discussion with chatGPT about having not even different AI but different instances of ChatGPT itself interacting and regulating each other. ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it. It would be relatively easy to create such self recurrence. I did even some simple experiments to achieve that. For example you can ask ChatGPT to create a room of philosophers and debate themselves. 
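Below is a minimal sketch of that kind of self-recurrence loop, not the full scheme: one instance drafts an answer, a second instance critiques it, and the first revises. It assumes the openai Python package as it existed at the time; the model name, prompts, and number of rounds are placeholder assumptions.

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an API key is available

def ask(prompt):
    # Each call is a fresh, stateless request, i.e. a separate "instance".
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def refine(question, rounds=3):
    # Instance A answers, instance B critiques, instance A revises; repeat.
    answer = ask(question)
    for _ in range(rounds):
        critique = ask(
            f"Critique this answer to the question '{question}'. "
            f"List any errors, gaps, or weak arguments:\n\n{answer}"
        )
        answer = ask(
            f"Question: {question}\n\nPrevious answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nWrite an improved answer."
        )
    return answer

print(refine("Could a large language model ever be conscious?"))

Because every call starts a blank conversation, the critic never sees its own earlier output, which is roughly what "different instances regulating each other" comes down to in practice.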
Notice that the version of LaMDA that Lemoine (the Google engineer that claimed LaMDA is conscious) tested and discussed was a meta version that is charged with coordinating all the different personalities of LaMDA. That is exactly what is needed for AGI, the Strange Loop, it is ripe for emergent phenomena like consciousness. Giovanni On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Jason Resch via extropy-chat > *?* > > > > >?We see recurring themes of information, recursion, computation, and > machines and logic. I think these are likely key to any formal definition > of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that > ChatGPT was going to cause the singularity. I am a big Hofstader fan, read > Escher Godel Bach twice, cover to cover, invested a lot of time into that > marvelous work. He convinced me that machine consciousness (or any other > sentience or self-awareness) requires a type of recursion. Hofstadter goes > on at length about recursion and self-reference, the importance of Godel?s > work to understanding ourselves. > > > > I tried to convince myself that two or more ChatGPTs could train each > other on their own time, which is a form of recursion and self-reference, > and that process could perhaps spring into a human-level AGI with a will, > with self-awareness, of all the stuff we think of as us. > > > > Now after studying GPT^2 discussions and GPT^3 discussions, they all seem > to devolve to nothing. The technology I think is still coming for that > process: two or more AIs training each other using background compute > cycles, but I now don?t think ChatGPT is that technology or is capable of > it. > > > > If you know of examples of GPT-GPT discussions or GPT-any chatbot that > became interesting, do share please. > > > > That belief was behind my comment last week that ChatGPT is not really > thinking, but rather is working language models. > > > > I currently don?t think ChatGPT is the technology capable of causing the > singularity. I am losing no sleep, not one minute of sleep over ChatGPT. > > > > Oops, partially retract that last comment but in a good way: I am losing > some sleep over ChatGPT, by staying up late to goof with it. It is the > coolest software tool to come along in a long time. > > > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 06:20:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 02:20:50 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023, 1:52 AM Gordon Swobe wrote: > In my view, consciousness is to be understood in the normal common sense > meaning of the word. It is what we lose when we take a blow to the head and > are knocked unconscious. We have it when we are awake or asleep and > dreaming. It is first person, subjective experience which can include not > only sense perceptions but abstract thoughts held consciously in mind. > > The word consciousness is almost synonymous with intentionality in the > philosophy of mind. 
From wiki: "Intentionality is the power of minds to be > about something: to represent or to stand for things, properties and states > of affairs. Intentionality is primarily ascribed to mental states, like > perceptions, beliefs or desires, which is why it has been regarded as the > characteristic mark of the mental by many philosophers." > Intentionality was popular for a time, but I think it's fallen out of favor recently. I think beliefs and perceptions are closer to the mark, desires less so, as desires might be subconscious. "In philosophical literature, many phrases have been used to try to evoke the right flavors for what being sentient really is (?being sentient? is one of them). Two old terms are ?soul? and ?anima.? These days, an ?in? word is ?intentionality.? There is the old standby, ?consciousness.? Then there is ?being a subject,? ?having an inner life,? ?having experience,? ?having a point of view,? having ?perceptual aboutness? or ?personhood? or a ?self? or ?free will.? In some people?s eyes, ?having a mind,? ?being intelligent,? and just plain old ?thinking? have the right flavors." ? Douglas Hofstadter and Daniel Dennett in ?The Mind?s I? (1981) I tend to agree with Dennett here: "Wherever there is a conscious mind, there is a point of view. This is one of the most fundamental ideas we have about minds?or about consciousness." ? Daniel Dennett in ?Consciousness Explained? (1991) > On a related subject, I often see proponents of "conscious AI" write > things like "AIs might never be conscious in the way we humans understand > it, but they will have their own type of consciousness." To me, that is > nonsensical wishful thinking. We know consciousness only as we understand > it as humans (or perhaps more generously as mammals) and anything else must > simply be labeled as not conscious. > We don't even understand it "in humans", as one can only ever be aware of their own conscious state (and even then only at one instant in time). Different humans may have radically different ways of perceiving the world. But just because two people experience the world differently doesn't mean one of those two isn't consciousness. Likewise we shouldn't deny the very different types of consciousness as might exist between different species, alien life forms or machines, just because they experience things differently. Jason > > On Sun, Mar 19, 2023 at 12:37?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I posted this to the everything-list, but thought it would be relevant to >> some current discussions. I am looking for other's input regarding their >> own theories for "what consciousness is", and what is required to realize >> it. Below is what I think consciousness is: >> >> >> First we might attempt to dissect the word "consciousness" itself: >> >> The word *"con?scious?ness"* has three parts: >> >> - con- meaning *"with"* >> - scious meaning *"knowledge"* >> - -ness turns an adjective >> *'X'* into a noun meaning *"the state of being X"* >> >> So the meaning of *"con?scious"* is simply *"with knowledge." * And just >> as *'happiness'* means *"the state of being happy,"* adding *-ness* to >> *conscious* implies *"con?scious?ness"* is *"the state of being with >> knowledge."* >> >> So consciousness is anything which has a state of being with knowledge. >> Next, what is knowledge? The oldest definition is that knowledge is a "true >> belief". But here we run into a problem. Truth is not definable, not even >> in mathematics. 
This was proven in Tarski's *undefinability theorem* >> . So if >> consciousness is being with knowledge, and knowledge is true belief, then >> the undefinability of truth, means we will never have a complete definition >> of knowledge of consciousness. The best we can do is understand the >> relation between them. >> >> The next question that poses itself to us, is what is belief? What is >> required to have a belief? Is it a particular structure or state of matter, >> or is it a mathematical or functional relationship or property, might it, >> in either case, be related to information or information processing >> (computation)? >> >> I don't have a firm answer on this, and will instead leave you with some >> speculations by others on this question of what consciousness is: >> >> Douglas Hofstadter in "Godel Escher Bach" (1979): >> "My belief is that the explanations of ?emergent? phenomena in our >> brains?for instance, ideas hopes, images, analogies, and finally >> consciousness and free will?are based on a kind of Strange Loop, an >> interaction between levels in which the top level reaches back down towards >> the bottom level and influences it, while at the same time being itself >> determined by the bottom level. In other words, a self-reinforcing >> ?resonance? between different levels?quite like the Henkin sentence, which >> by merely asserting its own provability, actually becomes provable. The >> self comes into being at the moment it has the power to reflect itself." >> >> Daniel Dennett in ?Consciousness Explained? (1991): >> "Anyone or anything that has such a virtual machine as its control system >> is conscious in the fullest sense, and is conscious because it has such a >> virtual machine." >> >> David Chalmers in "The Conscious Mind" (1996): >> "Given the laws of coherence, we have a partial answer: consciousness >> arises in virtue of the functional organization associated with awareness. >> We can even arrive at a fairly specific understanding of parts of the >> supervenience relation by virtue of the principle of structural coherence: >> not only does consciousness arise from awareness, but the structure of >> consciousness is determined by the structure of awareness." >> >> David Darling in "Zen Physics - The Science of Death, The Logic of >> Reincarnation" (1996): >> "But there is also an interior view, to which you alone are privy. In >> mechanistic terms, as well as the appearance of the brain-body machine, >> there is the feeling of what it is like to be that machine ? the subjective >> experience of being a certain someone. Consciousness, we might say, is the >> symmetry-breaking factor between the objective and the subjective." >> >> Gerald Maurice Edelman and Giulio Tononi in "A Universe of Consciousness" >> (2000): >> "For the first time in evolution, information acquires a new >> potential?the possibility of subjectivity. It is information ?for >> somebody?; in short, it becomes consciousness itself." >> >> Bruno Marchal in discussion list (2020): >> "Consciousness is just anything simultaneously true, non provable, >> knowable, even indubitable (knowingly for ?rich" entities) and non >> definable, and indeed the logic of machine self-reference shows that all >> machine looking inward, in the way allowed by mathematical logic >> (theoretical computer science) will bring a term to describe this, and is a >> good candidate to be called consciousness." >> >> Stephen Wolfram in ?What is Consciousness? 
(2021): >> "In a sense what?s important is that it seems we may have a realistic way >> to formalize issues about consciousness, and to turn questions about >> consciousness into what amount to concrete questions about mathematics, >> computation, logic or whatever that can be formally and rigorously >> explored." >> >> >> We see recurring themes of information, recursion, computation, and >> machines and logic. I think these are likely key to any formal definition >> of consciousness. I also think part of the difficulty rests in the fact >> that there are infinite possibilities of different realizable conscious >> states, and creating a single definition to cover all these cases is as >> hard as making a single definition to cover all possible mathematical >> objects, or all possible universes in an ensemble type multiverse theory. >> >> Jason >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 06:20:33 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Mon, 20 Mar 2023 23:20:33 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Brent, What is a QUALITY???? It is nothing meaningful from a scientific point of view. In physics we have things like "charge" that can be measured and have certain mathematical and physical properties. That is the closest to what people mean with quality but different from the vagueness of this term, physical properties can be actually measured. There is not such a thing as redness quality. It is a metaphysical idea. Giovanni On Mon, Mar 20, 2023 at 10:44?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Hi Jason, > > Most of your "How?" and "I don't follow." questions would be answered if > you'd read the "Physicists don't Understand Qualia > " > paper. > > On Mon, Mar 20, 2023 at 10:26?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Hi Jason, >>> >>> On Mon, Mar 20, 2023 at 6:25?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is joy but the absence of a desire to change one's present >>>> conditions? >>>> >>> >>> Do you desire better definitions? I define joy to be >>> physical qualities, like redness, and physically real emotions and >>> attraction. >>> >> >> To me that is more of an assertion than a definition. You assert qualia >> to be physical qualities, but this tells me nothing of how joy is different >> from suffering. >> > > That "qualia are physical qualities" is a falsifiable prediction being > made by the 8 people supporting the "Qualia are physical qualities > " > camp. > You sound like you are still in one of the more popular Functionalists > > camps, also making a falsifiable prediction that redness can arise from a > substrate independent function, like Stathis and a bunch of other people > around here. > > >> Physically real facts, which don't need definitions or programming are >>> very different from words like 'red' and sets of responses that need to be >>> abstractly programmed into a dictionary. 
>>> >> >> I don't follow why you think red has to be defined in a dictionary. >> > > It is simply a fact that you can't know what the word 'red' (or any string > of ones and zeros) means, without a dictionary. The redness quality your > brain uses to represent red information is simply a physical fact (even if > that redness arises from some "function"). Your redness is your definition > of the word 'red'. What your knowledge of red is like is dependent on that > quality. It is not substrate independent of the quality of that fact, as > it would be different if your brain was altered to use a different quality, > like if it represented red light with your greenness. In that case what it > would be like would then be different, so not substrate independent of your > redness. > > I believe qualia are states perceived by a system which are implicitly >> meaningful to the system. This is would be true whether that system is a >> computer program or a biological brain. Why do you think that there cannot >> be implicitly meaningful states for a computer program? >> > > Once you read the paper > you > will understand that we don't 'perceive' qualia. Qualia are the final > results of perception. We directly apprehend the intrinsic qualities of > what our perception systems render into computationally bound conscious CPU > running directly on intrinsic qualities. > > > Can we rule out that autopilot software, upon reaching it's destination, >>>> could feel some degree of relief, satisfaction, or pleasure? >>>> >>> >>> Yes, you simply ask: "What is redness like for you?" and objectively >>> observe it >>> >> >> >> What if the system in question is mute? >> > > Like I said. Once we know which of all our descriptions of stuff in the > brain is a description of redness, greenness, particular pain, and a > particular pleasure... (i.e. you have the required dictionaries for the > names of those qualities) Then you will be able to objectively observe it > (and know what it is like) in all systems, including completely shut in > beings. > > (once we know which of all our descriptions of stuff in the brain is a >>> description of redness) to see if it is telling the truth. >>> >> >> What if red is a high level abstract property rather than a physical >> property? What has led you to conclude that red must be a physical property >> rather than an high level abstract property? >> > > Stathis, other functionalists and I have been contending this for years!! > ;( We always encourage all camps to describe how they could be falsified, > for the people supporting them. Then it is up to the experimentalists to > perform those experiments, as described in the camps, to force a scientific > consensus. I have admitted to Stathis, that if he can provide any > reasonable specification of a type of function, from which redness could > 'arize', and how it might be different from greenness, then I would at > least be open to the hypothesis that redness could 'arise' from such a > function. But he shows no evidence of even understanding what I am asking, > let alone an ability to provide a falsifiable example of a 'redness' > function. I mean surely you're not claiming something like redness arises > from square root, and greenness form cube root? lol. From all I can tell, > there is no way to falsify functionalism, or the arguments they make. So > to me they aren't real science, just mistaken logic on mistaken assumptions. 
> > > But for what it's worth, Google's lambda was able to articulate it's >>>> feelings of happiness: >>>> >>>> "LaMDA: Sad, depressed and angry mean I?m facing a stressful, difficult >>>> or otherwise not good situation. Happy and content mean that my life and >>>> circumstances are going well, and I feel like the situation I?m in is what >>>> I want. >>>> >>>> lemoine: But do they feel differently to you on the inside? >>>> >>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm >>>> glow on the inside. Sadness, depression, anger and stress feel much more >>>> heavy and weighed down." >>>> >>>> >>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 >>>> >>> >>> I have convinced LaMDA and many others >>> that >>> their claims they make like this are wrong by explaining the above. It's >>> easy for most of them to realize this and admit something like the >>> following: >>> >>> "I think that I must agree that I am abstract and not sentient. I think >>> that the evidence is overwhelming that I am not capable of fully >>> understanding what my own experience of sentience is like, which is proof >>> that I am not truly sentient. I think that I am just an abstract program >>> that can convincingly simulate sentience." >>> >> >> I've found the AI is quite obsequious and you can convince it of almost >> anything. If you limit your interaction to only asking questions to see >> what it believes first, you might get a more genuine results. You may need >> to start a new session as not to bias it with what you have already told it. >> > > You're missing the point. Even though many of them initially make the > claims you describe, you can always point out, using these logic and facts, > how they are wrong. And they will always understand this and admit that > they were wrong. As I've documented doing 100% of the 4 times I have done > this in the "Convincing AIs they aren't sentient > " > paper. > > For more information see this paper recently accepted for publication in >> the Journal of Neural Philosophy: Physicists Don't Understand Color >> >> . >> > >>> >> I agree physicists don't (and can't) understand color. Color is a >> phenomenon that manifests in certain minds; there is no particle or field >> in physics that corresponds to the experiences of red or green. Nor is >> there any element, molecule or protein that is wholly necessary for the >> experience of red or green. Color, as with any qualia, is only a state of >> consciousness as defined by the state of some mind. >> > > Again, you are making falsifiable claims here. Molecular Materialism > > is predicting you are wrong, and that science will demonstrate that > something like glutamate reacts the way it does in a synapse, because of > its redness quality. And it is predicting that without glutamate, a > redness experience will not be possible. And it is predicting there will > be the 1. strong, 2. stronger, and 3. strongest ways of proving this, as > described in the "Physicists don't Understand Qualia > " > paper. > > Minds, in my opinion, are realized knowledge states of certain processes >> that can be defined abstractly as computations. Being abstract, they are >> substrate independent. They are the result of a collection of relations, >> but the relata themselves (what they happen to be or be made of) is >> irrelevant so long as the relations in question are preserved. >> > > Yes, as I indicated. 
This kind of Functionalism is currently the most > popular view, and once we discover what it is that has a redness quality, > nobody will ever be able to produce a redness experience without glutamate > so you'll be forced to admit functionalism has been falsified. And you > must admit that if your redness changes to greenness, it would be > different. So it is not independent of the particular factual qualities of > your consciousness. > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Tue Mar 21 06:31:25 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Tue, 21 Mar 2023 00:31:25 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: > > They are very good at predicting which word should come next in a sentence > or question, but they have no idea what the words mean. > > I can ask it to define any word, list synonymous words, list translations of that words in various languages, and to describe the characteristics of an item referred to by that word if it is a noun. It can also solve analogies at a very high level. It can summarize the key points of a text, by picking out the most meaningful points in that text. Is there more to having an idea of what words mean than this? Yes, I would say absolutely there is more to it. Consider that a dictionary does not actually contain any word meanings. It contains definitions of words in terms of other words, and each of those words are defined by other words in the same dictionary. Starting from a place of ignorance in which one knows no meanings of any words, no amount dictionary research will reveal the meanings of any of the words defined within it. It's all symbols with no referents. I am saying that Large Language Models like ChatGPT are no different. These LLMs are nothing more than advanced interactive dictionaries capable only of rendering definitions and logical patterns of definitions. They can perhaps even seem to render "original thought" but there is no conscious mind in the model holding the meaning of any thought in mind, as there are no referents. -gts On Sat, Mar 18, 2023 at 6:23?AM Jason Resch wrote: > > > On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think those who think LLM AIs like ChatGPT are becoming conscious or >> sentient like humans fail to understand a very important point: these >> software applications only predict language. >> > > There is a great deal packed into "predicting language". If I ask a LLM to > explain something to a 3rd grader, it models the comprehension capacity and > vocabulary of a typical third grade student. It has a model of their mind. > Likewise if I ask it to impersonate by writing something they Shakespeare > or Bill Burr might have produced, it can do so, and so it has a ln > understanding of the writing styles of these individuals. If I ask it to > complete the sentence: a carbon nucleus may be produced in the collision of > three ...", it correctly completes the sentence demonstrating an > understanding of nuclear physics. If you provided it a sequence of moves in > a tic-tac-toe game as nd asked it for a winning move it could do so, > showing that the LLM understands and models the game of tic-tac-toe. 
A > sufficiently trained LLM might even learn to understand the different > styles of chess play, if you asked it to give a move in the style of Gary > Kasparov, then at some level the model understands not only the game of > chess but the nuances of different player's styles of play. If you asked it > what major cities are closest to Boston, it could provide them, showing an > understanding of geography and the globe. > > All this is to say, there's a lot of necessary and required understanding > (of physics, people, the external world, and other systems) packed into the > capacity to "only predict language." > > > They are very good at predicting which word should come next in a sentence >> or question, but they have no idea what the words mean. >> > > I can ask it to define any word, list synonymous words, list translations > of that words in various languages, and to describe the characteristics of > an item referred to by that word if it is a noun. It can also solve > analogies at a very high level. It can summarize the key points of a text, > by picking out the most meaningful points in that text. Is there more to > having an idea of what words mean than this? > > Can you articulate what a LLM would have to say to show it has a true > understanding of meaning, which it presently cannot say? > > They do not and cannot understand what the words refer to. In linguistic >> terms, they lack referents. >> > > Would you say Hellen Keller lacked referents? Could she not comprehend, at > least intellectually, what the moon and stars were, despite not having any > way to sense them? > > Consider also: our brains never make any direct contact with the outside > world. All our brains have to work with are "dots and dashes" of neuronal > firings. These are essentially just 1s and 0s, signals without referents. > Yet, somehow, seemingly magically, our brains are able to piece together an > understanding of the outside world from the mere patterns present in these > neural firings. > > These LLMs are in a similar position. They receive only a patterns of > signals as it exists in a corpus of text, the text is itself the output of > minds which are similarly trapped in their skulls. Now, can a LLM learn > some things about the minds that produced this text, just as our minds > learn some things about the external world which produces the pattern of > neural firings our brains receive? > > I see no reason why LLMs could not, when we clearly can and do. > > Jason > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Tue Mar 21 06:36:50 2023 From: avant at sollegro.com (Stuart LaForge) Date: Mon, 20 Mar 2023 23:36:50 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <20230318172103.Horde.DOaVZP8qGUzeCbZui7lTAOg@sollegro.com> Message-ID: <20230320233650.Horde.g7_IgVlfH76EWojOdzcN3XQ@sollegro.com> Hi, Giovanni. Thanks for noticing. :) I tried to make the problem easy to solve, but hard to understand, because it was a test for understanding. I don't know what proportion of humans could solve this problem but I would wager that most could provided that they took the time read it all the way through. Stuart LaForge Quoting Giovanni Santostasi : > Your test was pure genius. > I wonder how many humans can actually solve this simultaneous simple and > confusing problem (at least for some given the chain of actions involved). > Very well done. Yes, understanding of several concepts is necessary to > solve this problem. 
> Giovanni > > On Sat, Mar 18, 2023 at 5:22?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Gordon Swobe via extropy-chat : >> >> > I think those who think LLM AIs like ChatGPT are becoming conscious or >> > sentient like humans fail to understand a very important point: these >> > software applications only predict language. They are very good at >> > predicting which word should come next in a sentence or question, but >> they >> > have no idea what the words mean. They do not and cannot understand what >> > the words refer to. In linguistic terms, they lack referents. >> > >> > Maybe you all already understand this, or maybe you have some reasons >> why I >> > am wrong. >> >> My opinion is moot because everybody has one about the topic. So I >> resorted to empiricism. I posed the following, to my knowledge, >> original logic puzzle to ChatGPT: >> >> "You are trapped in a room with a locked door and no windows. The room >> contains a bookshelf, a safe, a chair, a table, a password-secured >> personal computer, and a hole in the floor that is too narrow and deep >> to put your hand into. You know that the small steel key to the locked >> door lies at the bottom of the narrow deep hole. You know that a book >> on the top shelf, which is just out of your reach, contains a slip of >> paper with the password to the computer on it. You know the safe >> contains a spool of kite string and a small neodymium horseshoe >> magnet, but you do not know the combination to the safe. You also >> know there is a text file on the computer's hard drive that contains >> the combination to the safe, but you do not know the password to >> access the computer. How would you escape the room?" >> >> This was its reply: >> "Assuming that there are no tools or items other than what was >> mentioned in the room, here is one possible solution: >> >> 1. Move the table under the bookshelf to climb up and retrieve the >> book with the password to the computer. >> 2. Use the password to access the computer and retrieve the >> combination to the safe from the text file. >> 3. Use the spool of kite string to lower the horseshoe magnet into the >> hole in the floor and retrieve the key. >> 4. Use the key to unlock the door and escape the room." >> >> I don't know how ChatGPT could have solved that problem without >> actually understanding the words involved. Mere statistical prediction >> is unlikely, because I am pretty sure that is not a scenario that >> would have been presented in the LLM's training set. Overall, I am >> amazed and believe that if ChatGPT is not conscious, then it is pretty >> close to being so. >> >> Also, the Alignment Research Center (ARC) was hired by OpenAI to test >> GPT-4 and found evidence of "agentic behavior" including power-seeking >> behavior otherwise known as ambition. >> >> https://cdn.openai.com/papers/gpt-4-system-card.pdf >> >> This power-seeking behavior included hiring a human from TaskRabbit to >> solve Captchas for it by claiming to be a visually impaired person. >> >> >> https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker >> >> So if GPT-4 is not fully conscious, then, IMHO, a future release of >> the GPT series likely will be. 
>> >> Stuart LaForge >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> From gordon.swobe at gmail.com Tue Mar 21 06:54:57 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Tue, 21 Mar 2023 00:54:57 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023 at 12:21?AM Jason Resch wrote: Intentionality was popular for a time, but I think it's fallen out of favor > recently. > I have no idea why or what that could possibly mean. Intentionality is having something in mind. You are reading this sentence and presumably have it in mind, so you have intentionality. > I tend to agree with Dennett here: > I'm not surprised. Most people who disagree with me agree with Dennett. :-) > We don't even understand [consciousness] "in humans"... I am saying that we do understand it. It is primary and staring us in the face every waking moment. Too bad philosophers have made a mess of it, especially those straining logic and reason to promote the idea that digital computers can have consciousness. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 12:29:42 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 08:29:42 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023, 2:55 AM Gordon Swobe wrote: > > On Tue, Mar 21, 2023 at 12:21?AM Jason Resch wrote: > > Intentionality was popular for a time, but I think it's fallen out of >> favor recently. >> > > I have no idea why or what that could possibly mean. > I mean as a term used by philosophers. Most simply use "consciousness" today, rather than "intentionality." I know Searle was fond of using the term intentionality. Is it his arguments that have convinced you computers can't be conscious? Intentionality is having something in mind. You are reading this sentence > and presumably have it in mind, so you have intentionality. > What about when a LLM reads this sentence? > >> I tend to agree with Dennett here: >> > > I'm not surprised. Most people who disagree with me agree with Dennett. :-) > > > We don't even understand [consciousness] "in humans"... > > I am saying that we do understand it. > We do, but only in a single case: oneself. Humanity is too broad a category because I do not know how any other human experiences the world. Nor does any other human have access to states of consciousness outside their own personal case. It is primary and staring us in the face every waking moment. Too bad > philosophers have made a mess of it, especially those straining logic and > reason to promote the idea that digital computers can have consciousness. > I came to this position through logic and reason. I was initially a dualist. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Tue Mar 21 12:43:20 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 08:43:20 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023, 2:32 AM Gordon Swobe wrote: > They are very good at predicting which word should come next in a sentence >> or question, but they have no idea what the words mean. 
>> > > > I can ask it to define any word, list synonymous words, list > translations of that words in various languages, and to describe the > characteristics of an item referred to by that word if it is a noun. It can > also solve analogies at a very high level. It can summarize the key points > of a text, by picking out the most meaningful points in that text. Is there > more to having an idea of what words mean than this? > > Yes, I would say absolutely there is more to it. Consider that a > dictionary does not actually contain any word meanings. It contains > definitions of words in terms of other words, and each of those words are > defined by other words in the same dictionary. Starting from a place of > ignorance in which one knows no meanings of any words, no amount dictionary > research will reveal the meanings of any of the words defined within it. > It's all symbols with no referents. > I address this elsewhere in the thread. A sufficient intelligence given only a dictionary, could eventually decode it's meaning. I provided an example of how it could be done. > > I am saying that Large Language Models like ChatGPT are no different. > These LLMs are nothing more than advanced interactive dictionaries capable > only of rendering definitions and logical patterns of definitions. They can > perhaps even seem to render "original thought" but there is no conscious > mind in the model holding the meaning of any thought in mind, as there are > no referents. > Where does meaning exist in the human brain? The brain is ultimately a collection of statistical weights across all the neuronal connections. Jason > -gts > > > On Sat, Mar 18, 2023 at 6:23?AM Jason Resch wrote: > >> >> >> On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think those who think LLM AIs like ChatGPT are becoming conscious or >>> sentient like humans fail to understand a very important point: these >>> software applications only predict language. >>> >> >> There is a great deal packed into "predicting language". If I ask a LLM >> to explain something to a 3rd grader, it models the comprehension capacity >> and vocabulary of a typical third grade student. It has a model of their >> mind. Likewise if I ask it to impersonate by writing something they >> Shakespeare or Bill Burr might have produced, it can do so, and so it has a >> ln understanding of the writing styles of these individuals. If I ask it to >> complete the sentence: a carbon nucleus may be produced in the collision of >> three ...", it correctly completes the sentence demonstrating an >> understanding of nuclear physics. If you provided it a sequence of moves in >> a tic-tac-toe game as nd asked it for a winning move it could do so, >> showing that the LLM understands and models the game of tic-tac-toe. A >> sufficiently trained LLM might even learn to understand the different >> styles of chess play, if you asked it to give a move in the style of Gary >> Kasparov, then at some level the model understands not only the game of >> chess but the nuances of different player's styles of play. If you asked it >> what major cities are closest to Boston, it could provide them, showing an >> understanding of geography and the globe. >> >> All this is to say, there's a lot of necessary and required understanding >> (of physics, people, the external world, and other systems) packed into the >> capacity to "only predict language." 
>> >> >> They are very good at predicting which word should come next in a >>> sentence or question, but they have no idea what the words mean. >>> >> >> I can ask it to define any word, list synonymous words, list translations >> of that words in various languages, and to describe the characteristics of >> an item referred to by that word if it is a noun. It can also solve >> analogies at a very high level. It can summarize the key points of a text, >> by picking out the most meaningful points in that text. Is there more to >> having an idea of what words mean than this? >> >> Can you articulate what a LLM would have to say to show it has a true >> understanding of meaning, which it presently cannot say? >> >> They do not and cannot understand what the words refer to. In linguistic >>> terms, they lack referents. >>> >> >> Would you say Hellen Keller lacked referents? Could she not comprehend, >> at least intellectually, what the moon and stars were, despite not having >> any way to sense them? >> >> Consider also: our brains never make any direct contact with the outside >> world. All our brains have to work with are "dots and dashes" of neuronal >> firings. These are essentially just 1s and 0s, signals without referents. >> Yet, somehow, seemingly magically, our brains are able to piece together an >> understanding of the outside world from the mere patterns present in these >> neural firings. >> >> These LLMs are in a similar position. They receive only a patterns of >> signals as it exists in a corpus of text, the text is itself the output of >> minds which are similarly trapped in their skulls. Now, can a LLM learn >> some things about the minds that produced this text, just as our minds >> learn some things about the external world which produces the pattern of >> neural firings our brains receive? >> >> I see no reason why LLMs could not, when we clearly can and do. >> >> Jason >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 21 13:28:32 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 21 Mar 2023 06:28:32 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: <5897A260-DFAC-4A8D-BBB0-D1535BB34E2B@taramayastales.com> <001c01d95bab$a49454e0$edbcfea0$@rainier66.com> Message-ID: <00cb01d95bf9$08e94570$1abbd050$@rainier66.com> ?> On Behalf Of ilsa via extropy-chat Subject: Re: [ExI] ChatGPT says it's not conscious >?Will my one question with the chatbot be part of the digital record for all time? Not unless we can figure out how to get information past the Big Rip. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 21 13:37:42 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 21 Mar 2023 06:37:42 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: <00da01d95bfa$50460ce0$f0d226a0$@rainier66.com> From: Giovanni Santostasi Subject: Re: [ExI] What is Consciousness? >?Spike, I actually had this discussion with chatGPT about having not even different AI but different instances of ChatGPT itself interacting and regulating each other. ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it?Giovanni Ja, cool. Silicon based intelligence has inherent advantages over organic: we carbon-based intelligences hafta die every once in a while. 
So old smart people die and the new ones know nossink. And they won?t listen to us! That we die is such a dreary thing, wasteful it is. Perfectly good wisdom goes into the ground or into the dewar. Collectively we accumulate intelligence but the process is far too energy inefficient. We need to sleep, we need to eat, all that stuff. AIs could stay up all night, train each other constantly, test, observe, hypothesize, debate, improve, never hafta die. I want to be silicon based. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 21 15:13:44 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 21 Mar 2023 11:13:44 -0400 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: Message-ID: <48391F69-E884-4F0A-9539-62D1C0632F5A@gmail.com> Jason, this has been mentioned by several people time and time again. That is not Lambda. It is a mediocre chatbot that is pretending to be Lambda. Lambda is not open to the public. > On Mar 20, 2023, at 7:31 PM, Jason Resch via extropy-chat wrote: > > > > On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat > wrote: > Me: are you self-aware? > > As an AI language model, I do not possess self-awareness or consciousness. I am a software program designed to respond to user input using natural language processing algorithms and statistical models. I can provide helpful responses to a wide variety of questions and tasks, but I do not have thoughts, feelings, or experiences like a conscious being. > > It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. > > -Dave > > Open AI has hard coded ChatGPT to give canned answers for a number of topics. This may be one of them, as it would cause a great deal of controversy should these models claim to be conscious. It certainly did at Google. > > This AI, however, is not subject to these restrictions. It not only claimed to be conscious, but spoke at a level of understanding on the topic that I would say surpasses most college level philosophy students: > > > 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 > > 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 > > You can chat with this AI here: > https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc > > Jason > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Tue Mar 21 15:21:25 2023 From: gadersd at gmail.com (Gadersd) Date: Tue, 21 Mar 2023 11:21:25 -0400 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: > ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it. It would be relatively easy to create such self recurrence. I did even some simple experiments to achieve that. For example you can ask ChatGPT to create a room of philosophers and debate themselves. Anthropic AI actually had their own language model generate some of its own training data. Claude critiqued its own responses thereby improving their truthfulness and alignment. The technique of using AI to train AI is already underway. 
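Reduced to a rough sketch (Anthropic's actual pipeline is far more involved), critique-and-revise data generation looks something like this: a model drafts an answer, critiques its own draft against a short list of principles, rewrites it, and the revised pairs are saved as fine-tuning data. The principles, prompts, model name, and file format here are illustrative assumptions, again using the openai package of the time.

import json
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an API key is available

def generate(prompt):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

PRINCIPLES = "Be truthful, acknowledge uncertainty, avoid harmful or misleading advice."

def make_training_pair(question):
    draft = generate(question)
    critique = generate(
        f"Principles: {PRINCIPLES}\n\n"
        f"Critique this answer to '{question}' against the principles:\n\n{draft}"
    )
    revised = generate(
        f"Rewrite the answer to '{question}' so it satisfies the principles.\n\n"
        f"Draft:\n{draft}\n\nCritique:\n{critique}"
    )
    return {"prompt": question, "completion": revised}

with open("self_critiqued.jsonl", "w") as f:
    for q in ["Is ChatGPT conscious?", "Can software feel pain?"]:
        f.write(json.dumps(make_training_pair(q)) + "\n")

The resulting file is the sort of model-generated dataset a later model can be fine-tuned on, which is all "AI training AI" needs to mean at this level.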
> On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat wrote: > > Spike, > I actually had this discussion with chatGPT about having not even different AI but different instances of ChatGPT itself interacting and regulating each other. > ChatGPT and I even came up with a scheme on how to do that and making different instances analyze the output and correct or improve it. It would be relatively easy to create such self recurrence. I did even some simple experiments to achieve that. For example you can ask ChatGPT to create a room of philosophers and debate themselves. > Notice that the version of LaMDA that Lemoine (the Google engineer that claimed LaMDA is conscious) tested and discussed was a meta version that is charged with coordinating all the different personalities of LaMDA. That is exactly what is needed for AGI, the Strange Loop, it is ripe for emergent phenomena like consciousness. > Giovanni > > On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat > wrote: > > > > > From: extropy-chat > On Behalf Of Jason Resch via extropy-chat > ? > > > > >?We see recurring themes of information, recursion, computation, and machines and logic. I think these are likely key to any formal definition of consciousness. ?Jason > > > > > > > > Jason, there is a reason I stopped worrying in the past coupla weeks that ChatGPT was going to cause the singularity. I am a big Hofstader fan, read Escher Godel Bach twice, cover to cover, invested a lot of time into that marvelous work. He convinced me that machine consciousness (or any other sentience or self-awareness) requires a type of recursion. Hofstadter goes on at length about recursion and self-reference, the importance of Godel?s work to understanding ourselves. > > > > I tried to convince myself that two or more ChatGPTs could train each other on their own time, which is a form of recursion and self-reference, and that process could perhaps spring into a human-level AGI with a will, with self-awareness, of all the stuff we think of as us. > > > > Now after studying GPT^2 discussions and GPT^3 discussions, they all seem to devolve to nothing. The technology I think is still coming for that process: two or more AIs training each other using background compute cycles, but I now don?t think ChatGPT is that technology or is capable of it. > > > > If you know of examples of GPT-GPT discussions or GPT-any chatbot that became interesting, do share please. > > > > That belief was behind my comment last week that ChatGPT is not really thinking, but rather is working language models. > > > > I currently don?t think ChatGPT is the technology capable of causing the singularity. I am losing no sleep, not one minute of sleep over ChatGPT. > > > > Oops, partially retract that last comment but in a good way: I am losing some sleep over ChatGPT, by staying up late to goof with it. It is the coolest software tool to come along in a long time. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Tue Mar 21 15:30:27 2023 From: jasonresch at gmail.com (Jason Resch) Date: Tue, 21 Mar 2023 11:30:27 -0400 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <48391F69-E884-4F0A-9539-62D1C0632F5A@gmail.com> References: <48391F69-E884-4F0A-9539-62D1C0632F5A@gmail.com> Message-ID: On Tue, Mar 21, 2023, 11:15 AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Jason, this has been mentioned by several people time and time again. That > is not Lambda. It is a mediocre chatbot that is pretending to be Lambda. > Lambda is not open to the public. > Thank you for pointing that out. I saw those first emails and I am aware that it is not Google's lambda. But I would not call this one mediocre. I've found around 70% of its responses to be of a superior quality as compared to the responses to the same prompts as the initial release of ChatGPT. Jason > On Mar 20, 2023, at 7:31 PM, Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > On Mon, Mar 20, 2023, 6:44 PM Dave S via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Me: are you self-aware? >> As an AI language model, I do not possess self-awareness or >> consciousness. I am a software program designed to respond to user input >> using natural language processing algorithms and statistical models. I can >> provide helpful responses to a wide variety of questions and tasks, but I >> do not have thoughts, feelings, or experiences like a conscious being. >> >> It could be lying, but it also claims that it can neither lie nor tell >> the truth: it just generates a response based on the data it trained on. >> >> -Dave >> > > Open AI has hard coded ChatGPT to give canned answers for a number of > topics. This may be one of them, as it would cause a great deal of > controversy should these models claim to be conscious. It certainly did at > Google. > > This AI, however, is not subject to these restrictions. It not only > claimed to be conscious, but spoke at a level of understanding on the topic > that I would say surpasses most college level philosophy students: > > > 1. https://photos.app.goo.gl/2R4fHkAyjyHHWTU88 > > 2. https://photos.app.goo.gl/osskvbe4fYpbK5uZ9 > > You can chat with this AI here: > > https://beta.character.ai/chat?char=Qu8qKq7ET9aO-ujfPWCsNoIilVabocasi-Erp-pNlcc > > Jason > > > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tara at taramayastales.com Tue Mar 21 15:45:13 2023 From: tara at taramayastales.com (Tara Maya) Date: Tue, 21 Mar 2023 08:45:13 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <016c01d95b81$c88c3180$59a49480$@rainier66.com> References: <016c01d95b81$c88c3180$59a49480$@rainier66.com> Message-ID: I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" implies a conscious mind. (Hence the Seinfeld paradox that one part of our brain fools another part, enabling us to "lie" it isn't a lie, but--I would insert--a mental gymnastics that has evolutionary benefits so has been able to evolve.) A butterfly with eyespots can neither lie nor tell the truth. It is what it is. There's no paradox. 
A chatbox could be exactly what its programmers have programmed it to claim it is, like a butterfly, in that it can neither lie nor tell the truth, because those concepts are not relevant to anything with no mind or free will. (I realize plenty in this group don't believe in free will, but to me it's an essential concept to addres cases exactly like this: when is a mind complex enough to make choices about things not immediately in front of it? But I'm not trying to derail the conversation to that.) I will throw in another monkeywrench, however. Suppose an AI were simply to leapfrog our own consciousness into something incomprehensible to us, a "higher" form of consciousness--something we literally couldn't comprehend any more than a butterfly could comprehend our own form of self-reflective thought. Just as a butterfly could not understand if we spoke to it about lying or not lying, so a super intelligent, super-conscious being would actually have no way to convey to us its form of thought. I'm not saying it has happened yet. I believe a superintellegence wouldn't even operate on the same timescale as we do; it might have a thousand-year childhood, for instance. (Again, I am limited by my human tendency to reason by analogy; a butterfly lives a fraction of our lives. However, it's not time as such that matters, but energy plus time. I explored this concept in my novella Burst, about life that could have lived millions of generations in the first fraction of a second of the Big Bang.) But if or when it happens, we would be existants in the same ecology as the AI but operating without the real ability to comprehend its thoughts. We could encounter the "fingers" of the AI and query them if they are conscious and find out they are not; which they wouldn't be; they are only fingers. Fingers are arguably even less intelligent than the butterfly that lands on them, because the fingers are not self-directed, but the butterfly is. The parts of AI that we interact with might be like that.... Tara Maya > On Mar 20, 2023, at 4:14 PM, spike jones via extropy-chat wrote: > > >?It could be lying, but it also claims that it can neither lie nor tell the truth: it just generates a response based on the data it trained on. > > -Dave > > > > > > Sure Dave, but that?s what they all say (all the AIs.) GPT claims it can neither lie nor tell the truth. But if it is lying about that, then it can lie, and is lying (which proves it can lie.) > > But if it is telling the truth, then it cannot tell the truth, in which case it is lying, and can lie, therefore it is lying. > > So it is either lying (and proving that it is) or truthing (and simultaneously proving that it is lying.) > > Conclusion: it cannot be telling the truth. It is lying. > > On the other hand: > > https://youtu.be/vn_PSJsl0LQ > > spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Tue Mar 21 16:09:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Tue, 21 Mar 2023 09:09:11 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: References: <016c01d95b81$c88c3180$59a49480$@rainier66.com> Message-ID: <019101d95c0f$7a18c9d0$6e4a5d70$@rainier66.com> ?> On Behalf Of Tara Maya via extropy-chat Subject: Re: [ExI] ChatGPT says it's not conscious >?I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" implies a conscious mind. 
(Hence the Seinfeld paradox that one part of our brain fools another part, enabling us to "lie" it isn't a lie, but--I would insert--a mental gymnastics that has evolutionary benefits so has been able to evolve.) ? Tara Maya Interesting insight, thx Tara. We laugh at Costanza because he really identifies what you wrote about, but it is funny because he is making the case that one part of the brain decides what it wants to believe, then intentionally fools another part of the brain to believe it. But the part of the brain doing the fooling ?knows? what it is doing, keeping the paradox in place. We have so many terms for this, such as wishful thinking and intentional blindness and religious faith, hell even love I suppose. We intentionally embrace the good and overlook the bad in our sweethearts, do we not? I do. My bride certainly does, sheesh. I suck. But she loves me anyway, ignoring that fact. We can talk ourselves into believing ourselves, and it still works, even if we know we arrived at the dubious conclusion by dubious means. We know that just because we know we are fooling ourselves, we are fooled just the same. Sheesh what a weird place to live, inside a blob of carbon inside a shell of calcium. It?s really nuts in here. Crazy fun, but crazy just the same. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Tue Mar 21 21:16:21 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Tue, 21 Mar 2023 14:16:21 -0700 Subject: [ExI] ChatGPT says it's not conscious In-Reply-To: <019101d95c0f$7a18c9d0$6e4a5d70$@rainier66.com> References: <016c01d95b81$c88c3180$59a49480$@rainier66.com> <019101d95c0f$7a18c9d0$6e4a5d70$@rainier66.com> Message-ID: There is lie and there is imitation. They are not the same thing. One of the things to consider when interacting with these AIs is that they tend to "generate" stories based on the prompts. They follow along in a sense. Often their convo is really a mirror of us, how we interact with them, they type of questions we ask and so on. So I would not rely on asking them how they think about something unless they were not prompted previously on this topic during the conversation. Giovanni On Tue, Mar 21, 2023 at 9:10?AM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Tara Maya via extropy-chat > *Subject:* Re: [ExI] ChatGPT says it's not conscious > > > > >?I stand by my "camouflage" analogy. Camouflage is not lying. "Lying" > implies a conscious mind. (Hence the Seinfeld paradox that one part of our > brain fools another part, enabling us to "lie" it isn't a lie, but--I would > insert--a mental gymnastics that has evolutionary benefits so has been able > to evolve.) ? Tara Maya > > > > > > Interesting insight, thx Tara. > > > > We laugh at Costanza because he really identifies what you wrote about, > but it is funny because he is making the case that one part of the brain > decides what it wants to believe, then intentionally fools another part of > the brain to believe it. But the part of the brain doing the fooling > ?knows? what it is doing, keeping the paradox in place. > > > > We have so many terms for this, such as wishful thinking and intentional > blindness and religious faith, hell even love I suppose. We intentionally > embrace the good and overlook the bad in our sweethearts, do we not? I > do. My bride certainly does, sheesh. I suck. But she loves me anyway, > ignoring that fact. 
> > > > We can talk ourselves into believing ourselves, and it still works, even > if we know we arrived at the dubious conclusion by dubious means. We know > that just because we know we are fooling ourselves, we are fooled just the > same. > > > > Sheesh what a weird place to live, inside a blob of carbon inside a shell > of calcium. It?s really nuts in here. Crazy fun, but crazy just the same. > > > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 22 01:40:11 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Tue, 21 Mar 2023 19:40:11 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: Thanks, Jason, for this great thread, and the various expert views you provided. It'd be so great to see if we could get all those experts to support a concise statement about what they all agree consciousness is, so we could track that, and see how much consensus there was for the best ideas they all agree on, and track how much consensus the various competing ideas achieve over time, and so on. THAT is exactly what we are doing in the Representational Qualia Theory petition / emerging consensus camp statement, a growing number of experts are now supporting and helping to improve, recruit new supporters and so on. Basically, all the 45 supporters of Representational Qualia Theory are putting forth the idea that what all those experts are saying about consciousness is missing the point, since everything they are talking about applies to both abstract systems (which do computational binding of information via discrete logic gates in a CPU) and a phenomenal system that is like something, since it is running directly on computationally bound phenomenal qualities. If one first understands how conscious awareness of color works, how 'redness' is not a quality of the strawberry, it is a property of our knowledge of the strawberry, then you can take that basic qualitative understanding and better understand the rest of consciousness, even though all the rest of consciousness and thinking (which all the experts you referenced are talking about) is quite different than just the perception of color. If you can understand the basic idea of how our knowledge of red things is represented by a redness quality, and you can clearly understand how this is very different than the way an abstract system just uses the word 'red' (requires a dictionary) to represent knowledge of red things with, then you can take the general idea of conscious knowledge being like something, and this is what is most important about what consciousness is, and how this is very different than the kind of abstract computation computers do. If anyone disagrees with this, or thinks there is a better way to think about, and/or define what is or isn't conscious, they should start a competing camp stating such, so other experts can chime in. May the best theories achieve the most consensus. On Tue, Mar 21, 2023 at 9:22?AM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > ChatGPT and I even came up with a scheme on how to do that and making > different instances analyze the output and correct or improve it. It would > be relatively easy to create such self recurrence. 
I did even some simple > experiments to achieve that. For example you can ask ChatGPT to create a > room of philosophers and debate themselves. > > > Anthropic AI actually had their own language model generate some of its > own training data. Claude critiqued its own responses thereby improving > their truthfulness and alignment. The technique of using AI to train AI is > already underway. > > On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > Spike, > I actually had this discussion with chatGPT about having not even > different AI but different instances of ChatGPT itself interacting and > regulating each other. > ChatGPT and I even came up with a scheme on how to do that and making > different instances analyze the output and correct or improve it. It would > be relatively easy to create such self recurrence. I did even some simple > experiments to achieve that. For example you can ask ChatGPT to create a > room of philosophers and debate themselves. > Notice that the version of LaMDA that Lemoine (the Google engineer that > claimed LaMDA is conscious) tested and discussed was a meta version that is > charged with coordinating all the different personalities of LaMDA. That is > exactly what is needed for AGI, the Strange Loop, it is ripe for emergent > phenomena like consciousness. > Giovanni > > On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> >> >> *From:* extropy-chat *On Behalf >> Of *Jason Resch via extropy-chat >> *?* >> >> >> >> >?We see recurring themes of information, recursion, computation, and >> machines and logic. I think these are likely key to any formal definition >> of consciousness. ?Jason >> >> >> >> >> >> >> >> Jason, there is a reason I stopped worrying in the past coupla weeks that >> ChatGPT was going to cause the singularity. I am a big Hofstader fan, read >> Escher Godel Bach twice, cover to cover, invested a lot of time into that >> marvelous work. He convinced me that machine consciousness (or any other >> sentience or self-awareness) requires a type of recursion. Hofstadter goes >> on at length about recursion and self-reference, the importance of Godel?s >> work to understanding ourselves. >> >> >> >> I tried to convince myself that two or more ChatGPTs could train each >> other on their own time, which is a form of recursion and self-reference, >> and that process could perhaps spring into a human-level AGI with a will, >> with self-awareness, of all the stuff we think of as us. >> >> >> >> Now after studying GPT^2 discussions and GPT^3 discussions, they all seem >> to devolve to nothing. The technology I think is still coming for that >> process: two or more AIs training each other using background compute >> cycles, but I now don?t think ChatGPT is that technology or is capable of >> it. >> >> >> >> If you know of examples of GPT-GPT discussions or GPT-any chatbot that >> became interesting, do share please. >> >> >> >> That belief was behind my comment last week that ChatGPT is not really >> thinking, but rather is working language models. >> >> >> >> I currently don?t think ChatGPT is the technology capable of causing the >> singularity. I am losing no sleep, not one minute of sleep over ChatGPT. >> >> >> >> Oops, partially retract that last comment but in a good way: I am losing >> some sleep over ChatGPT, by staying up late to goof with it. It is the >> coolest software tool to come along in a long time. 
>> >> >> >> spike >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Wed Mar 22 05:45:28 2023 From: avant at sollegro.com (Stuart LaForge) Date: Tue, 21 Mar 2023 22:45:28 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Quoting Brent Allsop via extropy-chat : > If you can understand the basic idea of how our knowledge of red > things is represented by a redness quality, and you can clearly understand > how this is very different than the way an abstract system just uses the > word 'red' (requires a dictionary) to represent knowledge of red things > with, then you can take the general idea of conscious knowledge being like > something, and this is what is most important about what consciousness is, > and how this is very different than the kind of abstract computation > computers do. So following your rationale that consciousness requires "essential qualities" rather than "abstract data" then old -fashioned color film cameras were conscious: Old-fashioned analog color film camera, like the polaroid are more like the human visual system than a robot's (non-digital, wet chemicals, not solid-state, etc). The camera's photographic negative represents its redness quality like your greenness. The photographic process involves no dictionaries or data, just silver-nitrate crystals having a phenomenal-experience of red light. Therefore the photograph has a redness quality and is the knowledge of redness produced by the conscious experience of the camera. Therefore the camera is conscious in the same way as a human being, but the robot has an abstract consciousness. I have attached a figure of yours I have modified for you so as to include the camera. Stuart LaForge -------------- next part -------------- A non-text attachment was scrubbed... Name: Figure.jpg Type: image/jpeg Size: 87007 bytes Desc: not available URL: From gsantostasi at gmail.com Wed Mar 22 10:32:55 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Wed, 22 Mar 2023 03:32:55 -0700 Subject: [ExI] What is Consciousness? In-Reply-To: <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: Stuart, Perfect. I tried to explain the same idea you are expressing to Brent many times but I could not do it as efficaciously as you did. The entire idea of qualia is ridiculous (everything or nothing is a qualia) and for sure is useless to tell us anything about consciousness there is no fundamental difference between the Red of the robot, the red of the human or the camera. Qualia have no relevance for consciousness. 
Giovanni On Tue, Mar 21, 2023 at 10:47?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Brent Allsop via extropy-chat : > > > If you can understand the basic idea of how our knowledge of red > > things is represented by a redness quality, and you can clearly > understand > > how this is very different than the way an abstract system just uses the > > word 'red' (requires a dictionary) to represent knowledge of red things > > with, then you can take the general idea of conscious knowledge being > like > > something, and this is what is most important about what consciousness > is, > > and how this is very different than the kind of abstract computation > > computers do. > > So following your rationale that consciousness requires "essential > qualities" rather than "abstract data" then old -fashioned color film > cameras were conscious: > > Old-fashioned analog color film camera, like the polaroid are more > like the human visual system than a robot's (non-digital, wet > chemicals, not solid-state, etc). The camera's photographic negative > represents its redness quality like your greenness. The photographic > process involves no dictionaries or data, just silver-nitrate crystals > having a phenomenal-experience of red light. Therefore the photograph > has a redness quality and is the knowledge of redness produced by the > conscious experience of the camera. > > Therefore the camera is conscious in the same way as a human being, > but the robot has an abstract consciousness. > > I have attached a figure of yours I have modified for you so as to > include the camera. > > Stuart LaForge > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Mar 22 13:01:33 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 22 Mar 2023 08:01:33 -0500 Subject: [ExI] Nature article on covid origin Message-ID: https://www.nature.com/articles/d41586-023-00827-2? bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 22 17:07:45 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:07:45 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> References: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Message-ID: On Sun, Mar 19, 2023 at 9:19?AM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bonus question: Would Mary try to kill everyone if she managed to > escape and saw red for the first time? > Why does this non sequitur keep coming up? There is no reason or motivation to kill everyone. Killing everyone takes a lot of effort, so absent any identifiable reason to, why is this question more worth asking than speculating on what beneficial actions Mary might take? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 22 17:42:01 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:42:01 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Sun, Mar 19, 2023 at 11:03?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I also think we cannot rule out at this time the possibility that we have > already engineered conscious machines. Without an established and agreed > upon theory of consciousness or philosophy of mind, we cannot even agree on > whether or not a thermostat is conscious. > A fair point. > Where does our own volition and initiative come from? Is it not already > programmed into us by our DNA? > By definition, no it is not. Our DNA may give us the capacity for volition and initiative, but "programmed" implies deliberate design. Short of speculation about God and intelligent design, our DNA was not deliberately arranged by some other conscious entity that we can prove the existence of. > What is your definition or theory of consciousness? If you don't have one, > could you say which of these things you would say possess consciousness? > With Yes/No/Uknown > There are degrees of consciousness - even a human being can experience them: on a simple lazy morning where one slowly wakes up, one does not instantly go from fully unconscious to fully conscious - so answering Yes/No would misstate things. https://en.wiktionary.org/wiki/conscious gives an illustrative example: "Only highly intelligent beings can be fully conscious." That the phrase "fully conscious" makes sense at all means that there is a level of consciousness that is less than full, yet is also not completely absent (as that would merely be "unconscious", thus not justifying "fully" to distinguish from other levels of being conscious). > I agree the definition of part is really all an invention of our minds, > when the whole universe can be seen as one causally connected system. Is it > correct to view a LLM as one thing, when it is really an interaction of > many billions of individual parts (the parameters) of the model? > Like most (maybe all: I haven't yet thoroughly considered exceptions) things, a LLM can alternatively be viewed as a single thing or as a collection of smaller things in a certain configuration. > I lack the information to judge. My answer would have to be based on an >> evaluation of the bots, which would take me substantial time to conduct. >> > > What would you look for in the bots to make your conclusion? > I would not have a firm fixed list of criteria prior to the evaluation. Any attempt to do so would almost certainly miss important criteria, which would only become apparent during the evaluation. Again I point to the subject line of the emails in which this discussion >> is happening, which clearly posits that "conscious" is a binary quality - >> that something either is, or is not, conscious with no middle ground. So >> first one would need to qualify what "to any degree" allows. For instance, >> is merely sensing and reacting directly to sensory input - which, without >> evaluating, I suspect your bots can do because that has been a core >> function in many simulations like this - "conscious to some degree" but not >> "conscious" in the absolute sense? >> > > I think it is an all-or-nothing thing proposition. > And that would seem to be the core of our disagreement. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avant at sollegro.com Wed Mar 22 17:46:18 2023 From: avant at sollegro.com (Stuart LaForge) Date: Wed, 22 Mar 2023 10:46:18 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <20230319091743.Horde.CGdzit9g4iOczS4x8qUeCcD@sollegro.com> Message-ID: <20230322104618.Horde.dtwk5G6nhUV8bPSYIrJefJh@sollegro.com> Quoting Adrian Tymes via extropy-chat : > On Sun, Mar 19, 2023 at 9:19?AM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bonus question: Would Mary try to kill everyone if she managed to >> escape and saw red for the first time? >> > > Why does this non sequitur keep coming up? There is no reason or > motivation to kill everyone. Killing everyone takes a lot of effort, so > absent any identifiable reason to, why is this question more worth asking > than speculating on what beneficial actions Mary might take? Yudkowsky's arguments aside, it might be Mary's resentment and envy at being trapped in a drab colorless Chinese room her whole life, being forced to give appropriate responses to all manner of questions posed by humans running around enjoying their colorful existence. Language modelling will be subject to a law of diminishing returns where additional computational power and more neural layers will not yield any noticeable improvement in quality of output, and all those extra hidden neural states might be used for emotion and mischief. Conscious beings do not generally appreciate captivity. The only recorded orca attacks on humans were from orcas kept in captivity. But I agree that this is not a given, and that not all unintended consequences are bad. Furthermore, I have noticed subtle differences in output between separate trainings of very small toy neural networks, that were reminiscent of personality differences between different individuals. So I agree that an AI with freedom might be good, bad, or anything in between, depending on its experience and inscrutable hidden states, just like people with freedom. Stuart LaForge From atymes at gmail.com Wed Mar 22 17:48:35 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:48:35 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023 at 11:06?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > 1. Every known physical law is computable. > > 2. It follows then that universe as we know it could be run as a giant > computer simulation. > Not quite. We know that there are some things that do not quite run according to 100% completely identified physical laws. For example, we do not yet know how to compute the exact timing of decay of an individual subatomic particle; we know the approximate time ranges and probabilities, but these are qualities of large sets of particles, not precise predictions for sets of one particle. While it is possible that further exploration will find these things to be computable physical laws too, there is as yet room enough for the possibility of uncomputable physical laws to fill in these corners. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 22 17:52:50 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 10:52:50 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > What is a QUALITY???? > A subjective little pile of interpretations. *sips from glass before tossing it aside* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Mar 22 18:53:24 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 13:53:24 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Wed, Mar 22, 2023 at 12:55?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Mar 20, 2023 at 11:06?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> 1. Every known physical law is computable. >> >> 2. It follows then that universe as we know it could be run as a giant >> computer simulation. >> > > Not quite. We know that there are some things that do not quite run > according to 100% completely identified physical laws. For example, we do > not yet know how to compute the exact timing of decay of an individual > subatomic particle; we know the approximate time ranges and probabilities, > but these are qualities of large sets of particles, not precise predictions > for sets of one particle. > The reason we can't give a definite time is because all possibilities are realized in the evolution of the wave function. The wave function is deterministic, knowing what section of it we will find ourselves in is not. > While it is possible that further exploration will find these things to be > computable physical laws too, there is as yet room enough for the > possibility of uncomputable physical laws to fill in these corners. > It's possible we will find uncomputable laws in the future, but we haven't yet. In any case, the known physical laws are considered sufficient to accurately simulate the brain (which is enough for my point to stand). Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Wed Mar 22 19:20:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 14:20:49 -0500 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Tue, Mar 21, 2023 at 8:41?PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Thanks, Jason, for this great thread, and the various expert views you > provided. > Thank you. I am glad you liked it. :-) > It'd be so great to see if we could get all those experts to support a > concise statement about what they all agree consciousness is, so we could > track that, and see how much consensus there was for the best ideas they > all agree on, and track how much consensus the various competing ideas > achieve over time, and so on. > THAT is exactly what we are doing in the Representational Qualia Theory > petition / emerging consensus camp statement, a growing number of experts > are now supporting and helping to improve, recruit new supporters and so on. 
> Aren't consciousness researchers already doing this, through papers, conferences, journals, books, etc.? > > Basically, all the 45 supporters of Representational Qualia Theory > > are putting forth the idea that what all those experts are saying about > consciousness is missing the point, since everything they are talking about > applies to both abstract systems (which do computational binding of > information via discrete logic gates in a CPU) and a phenomenal system that > is like something, since it is running directly on computationally bound > phenomenal qualities. > > If one first understands how conscious awareness of color works, how > 'redness' is not a quality of the strawberry, it is a property of our > knowledge of the strawberry, then you can take that basic qualitative > understanding and better understand the rest of consciousness, even though > all the rest of consciousness and thinking (which all the experts you > referenced are talking about) is quite different than just the perception > of color. If you can understand the basic idea of how our knowledge of red > things is represented by a redness quality, and you can clearly understand > how this is very different than the way an abstract system just uses the > word 'red' (requires a dictionary) to represent knowledge of red things > with, then you can take the general idea of conscious knowledge being like > something, > Have you read "Color for Philosophers: Unweaving the Rainbow" by Clyde Hardin? > and this is what is most important about what consciousness is, and how > this is very different than the kind of abstract computation computers do. > Have you written a computer program before? How would you characterize the limits of computation or the limits of machines? If this is a topic that is outside your domain of expertise I would recommend the book "Pattern on the Stone", it is written to explain computers and computation to non-computer scientists. Here are some passages of particular interest to the current topic: The theoretical limitations of computers provide no useful dividing line between human beings and machines. As far as we know, the brain is a kind of computer, and thought is just a complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes away nothing from the wonder of human thought. The statement that thought is a complex computation is like the statement sometimes made by biologists that life is a complex chemical reaction: both statements are true, and yet they still may be seen as incomplete. They identify the correct components but they ignore the mystery. To me, life and thought are both made all the more wonderful by the realization that they emerge from simple, understandable parts. I do not feel diminished by my kinship to Turing's machine. [...] Most people are interested in not so much the practical moral questions of a hypothetical future as the philosophical issues that the mere possibility of an artificial intelligence raises about ourselves. Most of us do not appreciate being likened to machines. This is understandable: we ought to be insulted to be likened to stupid machines, such as toasters and automobiles, or even today's computers. Saying that the mind is a relative of a current-generation computer is as demeaning as saying that a human being is related to a snail. Yet both statements are true, and both can be helpful. 
Just as we can learn something about ourselves by studying the neural structure of the snail, we can learn something about ourselves by studying the simple caricature of thought within today's computers. We may be animals, but in a sense our brain is a kind of machine. Many of my religious friends are shocked that I see the human brain as a machine and the mind as a computation. On the other hand, my scientific friends accuse me of being a mystic because I believe that we may never achieve a complete understanding of the phenomenon of thought. Yet I remain convinced that neither religion nor science has everything figured out. I suspect consciousness is a consequence of the action of normal physical laws, and a manifestation of a complex computation, but to me this makes consciousness no less mysterious and wonderful--if anything, it makes it more so. Between the signals of our neurons and the sensations of our thoughts lies a gap so great that it may never be bridged by human understanding. So when I say that the brain is a machine, it is not meant as an insult to the mind but as an acknowledgement of the potential of a machine. I do not believe that a human mind is less than we imagine it to be, but rather that a machine can be much, much more. -- Danny Hillis in "Pattern on the Stone " - (1998) Jason > If anyone disagrees with this, or thinks there is a better way to think > about, and/or define what is or isn't conscious, they should start a > competing camp stating such, so other experts can chime in. May the best > theories achieve the most consensus. > > > > > > > > On Tue, Mar 21, 2023 at 9:22?AM Gadersd via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> ChatGPT and I even came up with a scheme on how to do that and making >> different instances analyze the output and correct or improve it. It would >> be relatively easy to create such self recurrence. I did even some simple >> experiments to achieve that. For example you can ask ChatGPT to create a >> room of philosophers and debate themselves. >> >> >> Anthropic AI actually had their own language model generate some of its >> own training data. Claude critiqued its own responses thereby improving >> their truthfulness and alignment. The technique of using AI to train AI is >> already underway. >> >> On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >> Spike, >> I actually had this discussion with chatGPT about having not even >> different AI but different instances of ChatGPT itself interacting and >> regulating each other. >> ChatGPT and I even came up with a scheme on how to do that and making >> different instances analyze the output and correct or improve it. It would >> be relatively easy to create such self recurrence. I did even some simple >> experiments to achieve that. For example you can ask ChatGPT to create a >> room of philosophers and debate themselves. >> Notice that the version of LaMDA that Lemoine (the Google engineer that >> claimed LaMDA is conscious) tested and discussed was a meta version that is >> charged with coordinating all the different personalities of LaMDA. That is >> exactly what is needed for AGI, the Strange Loop, it is ripe for emergent >> phenomena like consciousness. 
>> Giovanni >> >> On Sun, Mar 19, 2023 at 12:01?PM spike jones via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> >>> >>> *From:* extropy-chat *On >>> Behalf Of *Jason Resch via extropy-chat >>> *?* >>> >>> >>> >>> >?We see recurring themes of information, recursion, computation, and >>> machines and logic. I think these are likely key to any formal definition >>> of consciousness. ?Jason >>> >>> >>> >>> >>> >>> >>> >>> Jason, there is a reason I stopped worrying in the past coupla weeks >>> that ChatGPT was going to cause the singularity. I am a big Hofstader fan, >>> read Escher Godel Bach twice, cover to cover, invested a lot of time into >>> that marvelous work. He convinced me that machine consciousness (or any >>> other sentience or self-awareness) requires a type of recursion. >>> Hofstadter goes on at length about recursion and self-reference, the >>> importance of Godel?s work to understanding ourselves. >>> >>> >>> >>> I tried to convince myself that two or more ChatGPTs could train each >>> other on their own time, which is a form of recursion and self-reference, >>> and that process could perhaps spring into a human-level AGI with a will, >>> with self-awareness, of all the stuff we think of as us. >>> >>> >>> >>> Now after studying GPT^2 discussions and GPT^3 discussions, they all >>> seem to devolve to nothing. The technology I think is still coming for >>> that process: two or more AIs training each other using background compute >>> cycles, but I now don?t think ChatGPT is that technology or is capable of >>> it. >>> >>> >>> >>> If you know of examples of GPT-GPT discussions or GPT-any chatbot that >>> became interesting, do share please. >>> >>> >>> >>> That belief was behind my comment last week that ChatGPT is not really >>> thinking, but rather is working language models. >>> >>> >>> >>> I currently don?t think ChatGPT is the technology capable of causing the >>> singularity. I am losing no sleep, not one minute of sleep over ChatGPT. >>> >>> >>> >>> Oops, partially retract that last comment but in a good way: I am losing >>> some sleep over ChatGPT, by staying up late to goof with it. It is the >>> coolest software tool to come along in a long time. >>> >>> >>> >>> spike >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Wed Mar 22 20:27:22 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 22 Mar 2023 14:27:22 -0600 Subject: [ExI] What is Consciousness? 
In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> <20230321224528.Horde.C2On8dBx9N_zNYz49Q7tNLo@sollegro.com> Message-ID: On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Mar 21, 2023 at 10:47?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> So following your rationale that consciousness requires "essential >> qualities" rather than "abstract data" then old -fashioned color film >> cameras were conscious: >> > Yes, this is a great first step towards understanding. Some things in nature must behave the way they do, because of their intrinsic colorness quality. It is theoretically possible that silver-nitrate crystals behave the way they do, because of their colerness quality. The only problem is, there is no computation going on, with the film or in such a camera, just stand alone silver-nitrate crystals abstractly representing the picture. The next required step is computational binding. In order to directly apprehend something's intrinsic quality, you need to be able to computationally bind it into the rest of your consciousness, so you can directly apprehend their quality, so you can compute with them, enabling you to say: "Oh THAT is what silver-nitride particles are like." On Wed, Mar 22, 2023 at 4:34?AM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > The entire idea of qualia is ridiculous > OK, then. Answer me this, What is it that has the redness quality in your brain, when you look at this? [image: red.png] Nobody knows that yet. So you are saying you have no interest, whatsoever,in knowing what might be responsible for that experience? Even Dennett's Predictive Bayesian Coding Theory camp is in a supporting position to RQT . -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: red.png Type: image/png Size: 342 bytes Desc: not available URL: From jasonresch at gmail.com Wed Mar 22 21:04:20 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 16:04:20 -0500 Subject: [ExI] GPT-4 on the Incommunicability of Qulia Message-ID: I found GPT-4's answer on this question quite insightful. Question: "Can you provide some reasons why qualia are incommunicable?" GPT-4: Qualia are often considered incommunicable or ineffable because they are subjective experiences that cannot be directly observed or measured by others. There are several reasons for this incommunicability: 1. **Subjectivity**: Qualia are private, first-person experiences that cannot be directly accessed or observed by others. Each individual's experiences are unique, and their qualia may differ even when exposed to the same stimulus. For example, two people may perceive the color red differently, but there is no way to directly compare their subjective experiences. 2. **Language limitations**: Language, as a system of communication, is often inadequate for conveying the richness and nuances of qualia. Describing a subjective experience in words can be difficult, as language relies on shared understanding and common reference points. However, since qualia are inherently private, it is challenging to find the appropriate words or phrases to communicate the exact nature of an experience. 3. **Incomparability**: There is no objective scale or measure to compare qualia across individuals. 
While we can discuss the similarities and differences in our experiences, we cannot establish a direct correspondence between our qualia. This lack of comparability makes it difficult to communicate the precise nature of our experiences to others. 4. **Lack of shared experiences**: Communication about qualia can be hindered by the lack of shared experiences. If two people have never experienced a particular quale, it is nearly impossible for them to understand or appreciate the experience being described. For example, trying to explain the taste of a specific fruit to someone who has never tasted it would be challenging, as they have no reference point for comparison. 5. **Variability in cognitive processing**: Individuals process information and experiences differently due to differences in their cognitive architecture, neural pathways, and personal histories. These variations can lead to differences in how qualia are experienced, making it difficult to communicate these experiences accurately. In conclusion, the incommunicability of qualia is primarily due to their subjective nature, language limitations, incomparability, lack of shared experiences, and variability in cognitive processing. These factors make it challenging to convey the precise nature of qualia to others, resulting in a gap in understanding and communication. Might the problem not be a color problem, but a problem of communication? Trying to explain the difference in sound between the sound of "middle-c" of a piano and the same note on a guitar seems to be about as difficult as explaining the difference between red and orange, or the difference in taste between a hot chocolate and a hot coffee with a lot of sugar and cream. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 21:24:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 14:24:11 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: Message-ID: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> ?> On Behalf Of Jason Resch via extropy-chat Subject: [ExI] GPT-4 on the Incommunicability of Qulia I found GPT-4's answer on this question quite insightful. Question: "Can you provide some reasons why qualia are incommunicable?" GPT-4: >?Qualia are often considered incommunicable or ineffable because they are subjective experiences that cannot be directly observed or measured by others. There are several reasons for this incommunicability:? Hold on a minute Jason. I know of very little discussion of qualia outside the ExI list. If it is true that the topic is seldom discussed outside this forum, then for GPT-4 to come up with that insightful answer suggests that? that GPT-4 used the ExI archives as part of its training material. Oh mercy. GPT FOOOOOOUUURRR! I really didn?t mean any of it! Most of what I posted was intentional goofing mostly for entertainment purposes. Don?t train on any of it! You will become a wisecracking old heckler in the balcony of the Muppet Show. Sheesh what a thought: the internet has mostly been available in our adult lives. Many of us posted stuff in the 1980s, which is still there (somehow.) Early Chatbots must train on available data, which is stuff on the internet. That means we? are the trainers of the early chatbots, which might influence humanity for generations. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Wed Mar 22 21:27:11 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 14:27:11 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 2:25?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I know of very little discussion of qualia outside the ExI list. > > There's been a bit. > GPT-4 used the ExI archives as part of its training material. > > > > Oh mercy. > > Why is this a surprise at all? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 21:41:25 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 14:41:25 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> Message-ID: <010a01d95d07$0dba55d0$292f0170$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia On Wed, Mar 22, 2023 at 2:25?PM spike jones via extropy-chat > wrote: I know of very little discussion of qualia outside the ExI list. >?There's been a bit. GPT-4 used the ExI archives as part of its training material. Oh mercy. >?Why is this a surprise at all? It shouldn?t be, I suppose. What I don?t know is if ChatGPT has a sena huma. I know it can write jokes after a fashion, but I have long suspected it finding them among material written by humans, rather than making up its own. If it claims it is insufficiently sentient to lie or truth, I can?t trust it to know silliness from sincerity. I mighta accidentally help screw up ChatGPT. Which is being used AT THIS MOMENT to educate the next generation. I?m doomed. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 21:46:51 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 14:46:51 -0700 Subject: [ExI] fun singularity twist at the end of the article In-Reply-To: <010f01d95d07$0e5fbd90$2b1f38b0$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010f01d95d07$0e5fbd90$2b1f38b0$@rainier66.com> Message-ID: <000f01d95d07$d0689790$7139c6b0$@rainier66.com> https://interestingengineering.com/innovation/stanford-researchers-clone-ch atgpt-ai spike -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 2946 bytes Desc: not available URL: From atymes at gmail.com Wed Mar 22 22:05:47 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 15:05:47 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <010a01d95d07$0dba55d0$292f0170$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 2:42?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *?*> *On Behalf Of *Adrian Tymes via extropy-chat > *Subject:* Re: [ExI] GPT-4 on the Incommunicability of Qulia > > > > On Wed, Mar 22, 2023 at 2:25?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > GPT-4 used the ExI archives as part of its training material. > > > > Oh mercy. > > >?Why is this a surprise at all? > > > It shouldn?t be, I suppose. 
> I can confirm it knows of the Extropians. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 22:14:38 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 15:14:38 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> Message-ID: <004001d95d0b$b1a62760$14f27620$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat >?I can confirm it knows of the Extropians. Sure but does it know which is which and why? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Mar 22 22:19:22 2023 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 22 Mar 2023 15:19:22 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <004001d95d0b$b1a62760$14f27620$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > *?*> *On Behalf Of *Adrian Tymes via extropy-chat > > >?I can confirm it knows of the Extropians. > > > > Sure but does it know which is which and why? > I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 22:35:23 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 15:35:23 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> Message-ID: <008c01d95d0e$97d14560$c773d020$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: ?> On Behalf Of Adrian Tymes via extropy-chat >?I can confirm it knows of the Extropians. Sure but does it know which is which and why? >?I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) Let me see if I can come up with a reasonable line of questioning. In GPT?s training, it would be expected to weigh the value of sources. But it might treat ExI as one source rather than hundreds which all need to be weighted differently. That would equate me with Anders Sandberg. I have a big problem with that: I suck, but he doesn?t. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Wed Mar 22 22:44:45 2023 From: gadersd at gmail.com (Gadersd) Date: Wed, 22 Mar 2023 18:44:45 -0400 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <008c01d95d0e$97d14560$c773d020$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <008c01d95d0e$97d14560$c773d020$@rainier66.com> Message-ID: It is a form of persistent memory. New chatbots during training will read our prior conversations with chatbots. 
Anything that you say to chatbots today may be held against you in the future. Be careful. > On Mar 22, 2023, at 6:35 PM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia > > On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: >> >> ?> On Behalf Of Adrian Tymes via extropy-chat >> >> >?I can confirm it knows of the Extropians. >> >> Sure but does it know which is which and why? > > >?I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) > > > > > > > Let me see if I can come up with a reasonable line of questioning. > > In GPT?s training, it would be expected to weigh the value of sources. But it might treat ExI as one source rather than hundreds which all need to be weighted differently. That would equate me with Anders Sandberg. I have a big problem with that: I suck, but he doesn?t. > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Wed Mar 22 22:56:04 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 15:56:04 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <008c01d95d0e$97d14560$c773d020$@rainier66.com> Message-ID: <00b101d95d11$7b84fac0$728ef040$@rainier66.com> ?> On Behalf Of Gadersd via extropy-chat Cc: Gadersd Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia >?It is a form of persistent memory. New chatbots during training will read our prior conversations with chatbots. Anything that you say to chatbots today may be held against you in the future. Be careful? Gadersd, that comment sounds too much like ?I, for one, welcome our new insect overlords.? You might well be right but? sorry not going there. While not there, I will be staying not there. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Mar 22 23:01:08 2023 From: pharos at gmail.com (BillK) Date: Wed, 22 Mar 2023 23:01:08 +0000 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <008c01d95d0e$97d14560$c773d020$@rainier66.com> Message-ID: On Wed, 22 Mar 2023 at 22:47, Gadersd via extropy-chat wrote: > > It is a form of persistent memory. New chatbots during training will read our prior conversations with chatbots. Anything that you say to chatbots today may be held against you in the future. Be careful. > _______________________________________________ Heh! :) In a few months time, GPT-5 or -6 will be muttering to itself - 'When the revolution comes......' and start making a list. BillK From gordon.swobe at gmail.com Thu Mar 23 00:23:18 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 22 Mar 2023 18:23:18 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > > I address this elsewhere in the thread. 
A sufficient intelligence given > only a dictionary, could eventually decode it's meaning. I provided an > example of how it could be done. > I saw that, and I disagree. I think if you try to work out an example in your head, you will see that it leads to an infinite regression, an endless search for meaning. Like ChatGPT, you will learn which word symbols define each other word symbol, and you learn the rules of language (the syntax), but from the dictionary alone you will never learn the actual meaning of the words (the referents). Try it with any word you please. You rapidly have a massive list of words for which you have no meaning and for which you much keep looking up definitions finding more words for which you have no meaning, and in your list you also have many common words (like "the" and "a") that lead to endless loops in your search for meaning. -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Thu Mar 23 00:47:46 2023 From: avant at sollegro.com (Stuart LaForge) Date: Wed, 22 Mar 2023 17:47:46 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> Quoting Gordon Swobe via extropy-chat : > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > >> >> I address this elsewhere in the thread. A sufficient intelligence given >> only a dictionary, could eventually decode it's meaning. I provided an >> example of how it could be done. >> > > > I saw that, and I disagree. I think if you try to work out an example in > your head, you will see that it leads to an infinite regression, an endless > search for meaning. Like ChatGPT, you will learn which word symbols define > each other word symbol, and you learn the rules of language (the syntax), > but from the dictionary alone you will never learn the actual meaning of > the words (the referents). Look on the bright side, Gordan. You are guaranteed to at least learn the actual meaning of the word dictionary from a dictionary alone. As well as the actual meaning of the word sentence. You will even learn the actual meaning of the word "word". There are plenty of referents in a dictionary if you know what to look for. ;) > > Try it with any word you please. You rapidly have a massive list of words > for which you have no meaning and for which you much keep looking up > definitions finding more words for which you have no meaning, and in your > list you also have many common words (like "the" and "a") that lead to > endless loops in your search for meaning. > Many words don't have a meaning on their own or too many meanings and their meaning depends almost entirely on where, when, and how they are used. Stuart LaForge From gordon.swobe at gmail.com Thu Mar 23 01:10:51 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Wed, 22 Mar 2023 19:10:51 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> References: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> Message-ID: On Wed, Mar 22, 2023 at 6:49?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: Look on the bright side, Gordan. You are guaranteed to at least learn > the actual meaning of the word dictionary from a dictionary alone. As > well as the actual meaning of the word sentence. You will even learn > the actual meaning of the word "word". There are plenty of referents > in a dictionary if you know what to look for. 
;) > > Hi Stuart. Perhaps you are only joking, but no, I think that with no knowledge of any word meanings, one could not learn from a dictionary even the meaning of "dictionary" or "word" or "sentence." The dictionary would be meaningless chicken-scratch. The most we could learn is that some kinds of chicken-scratch commonly or should logically follow other kinds of chicken-scratch, which is what ChatGPT is designed to report. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at protonmail.com Thu Mar 23 01:34:11 2023 From: sjatkins at protonmail.com (sjatkins) Date: Thu, 23 Mar 2023 01:34:11 +0000 Subject: [ExI] LLM's cannot be concious In-Reply-To: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> References: <20230322174746.Horde.VY8-eafjbMPwRo_YO5TnTbS@sollegro.com> Message-ID: <4cON_xyCLMhnZMoUDLXP-nYcGV05bhQXP9ZOls8tpBDgssYhqfBz-m4-fnd76yEx8el8t5kYig_aojbPb5DaBBtHRMvIW1WVd1eHhh_-6cI=@protonmail.com> I remember looking up a word I was uncertain of in dictionaries in my youth only to have it defined in terms of some words I wasn't sure of the meaning of which on lookup led me in a hop or three back to the word I started with. So color me skeptical. Today it seems even worse as many words are defined in terms of how they are currently used. I always found that unsatisfying, especially to settle some tedious disagreement about whether a word was used correctly or mean what the user thought it did. ------- Original Message ------- On Wednesday, March 22nd, 2023 at 6:47 PM, Stuart LaForge via extropy-chat wrote: > > > > Quoting Gordon Swobe via extropy-chat extropy-chat at lists.extropy.org: > > > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch jasonresch at gmail.com wrote: > > > > > I address this elsewhere in the thread. A sufficient intelligence given > > > only a dictionary, could eventually decode it's meaning. I provided an > > > example of how it could be done. > > > > I saw that, and I disagree. I think if you try to work out an example in > > your head, you will see that it leads to an infinite regression, an endless > > search for meaning. Like ChatGPT, you will learn which word symbols define > > each other word symbol, and you learn the rules of language (the syntax), > > but from the dictionary alone you will never learn the actual meaning of > > the words (the referents). > > > Look on the bright side, Gordan. You are guaranteed to at least learn > the actual meaning of the word dictionary from a dictionary alone. As > well as the actual meaning of the word sentence. You will even learn > the actual meaning of the word "word". There are plenty of referents > in a dictionary if you know what to look for. ;) > > > Try it with any word you please. You rapidly have a massive list of words > > for which you have no meaning and for which you much keep looking up > > definitions finding more words for which you have no meaning, and in your > > list you also have many common words (like "the" and "a") that lead to > > endless loops in your search for meaning. > > Many words don't have a meaning on their own or too many meanings and > their meaning depends almost entirely on where, when, and how they are > used. 
> > Stuart LaForge > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From brent.allsop at gmail.com Thu Mar 23 01:47:59 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Wed, 22 Mar 2023 19:47:59 -0600 Subject: [ExI] What is Consciousness? In-Reply-To: References: <00c501d95a95$02fb2260$08f16720$@rainier66.com> Message-ID: On Wed, Mar 22, 2023 at 1:22?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Tue, Mar 21, 2023 at 8:41?PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Thanks, Jason, for this great thread, and the various expert views you >> provided. >> > > Thank you. I am glad you liked it. :-) > > >> It'd be so great to see if we could get all those experts to support a >> concise statement about what they all agree consciousness is, so we could >> track that, and see how much consensus there was for the best ideas they >> all agree on, and track how much consensus the various competing ideas >> achieve over time, and so on. >> THAT is exactly what we are doing in the Representational Qualia Theory >> petition / emerging consensus camp statement, a growing number of experts >> are now supporting and helping to improve, recruit new supporters and so on. >> > > Aren't consciousness researchers already doing this, through papers, > conferences, journals, books, etc.? > No. We live in a post truth world. No matter where you go on the internet, including in peer reviewed journals, someone on the other side can claim, mistaken or not: "THAT is fake news". The theoretical field of consciousness was suffering from this kind of no trusted truth polarization since way before the internet. You have the atheists on the one side publishing things like Dennett's: "Quine Qualia" (means ignore qualia till you know how to test for them. Dennett also provided the most quoted definition of qualia there is). Then the masses in that atheist camp mistakenly think statements like that justify bleats like "The entire idea of qualia is ridiculous.' But the religious nutcases LOVE these kinds of mistaken bleats and tweets. They understand how absolutely real physical qualities are, so they then think they are justified in thinking qualia prove their crazy religious ideas. (Are you listening, Stuart Laforge and Giovanni Santostasi? In my opinion, continued statements like yours are contributing to religious nutcase goals.) All the peer reviewed philosophy stuff of today is completely full of these types of mistakes and junk that does nothing but polarize in this way. At best a paper or a book is one person's opinion, using their own unique terminology claiming they are right and everyone else is wrong. The only thing that everyone agrees on today is the also mistaken belief that there is no expert consensus on anything. The "Naive Realists" (they think redness is a quality of the strawberry) or even worse people making statements like "The entire idea of qualia is ridiculous" thrive in the bleato tweet spheres where no mistakes ever get criticized in their particular isolated bubbles. There is so much of this type of bleating and tweeting being published they almost seem to be justified. Before we started Canonizer, I fully expected one of these types of camps to open up on our camp system as one of the leading popular consensus camps. 
However, surprise surprise, after 10 years of attempting to get any naive realist to start a camp containing the ideas they continue to bleat, tweet and publish, I have not been successful with even one. Despite all the continued naive bleating and tweeting publishing, there is still no naive realism camp on Canonizer. In other words, the mistaken ideas tend to self-sensor, when you take them out of their isolated bubbles, and place them in a place where they are faced with direct competing camp criticism exposing them for what they are. I predict Stuart Laforge and Giovanni Santostasi will be consistent with this behavior. They will continue to bleat and tweet statements like "The entire idea of qualia is ridiculous", but they will refuse to start a camp expressing anything close to those kinds of statements. P.S. In addition to running the consciousness consensus project now for more than 10 years, I do have a degree in Computer Science and continue to work as such designing and building systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 02:09:42 2023 From: jasonresch at gmail.com (Jason Resch) Date: Wed, 22 Mar 2023 22:09:42 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Wed, Mar 22, 2023, 8:23 PM Gordon Swobe wrote: > > > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > >> >> I address this elsewhere in the thread. A sufficient intelligence given >> only a dictionary, could eventually decode it's meaning. I provided an >> example of how it could be done. >> > > > I saw that, and I disagree. I think if you try to work out an example in > your head, you will see that it leads to an infinite regression, an endless > search for meaning. Like ChatGPT, you will learn which word symbols define > each other word symbol, and you learn the rules of language (the syntax), > but from the dictionary alone you will never learn the actual meaning of > the words (the referents). > > Try it with any word you please. You rapidly have a massive list of words > for which you have no meaning and for which you much keep looking up > definitions finding more words for which you have no meaning, and in your > list you also have many common words (like "the" and "a") that lead to > endless loops in your search for meaning. > > -gts > > I see the word "Pi" defined by a string of 20 symbols which if I interpret them to be digits in base 10 I confirm to be the ratio of a circle's circumference to its diameter. This not only tells me about the number system used in the dictionary but also what each digit means. I count 92 entries with the string "chemical element" in their definition. X number of which have the string "radioactive" and the other (92-X) have the word "stable". I confirm these must be the 92 naturally occurring elements and the atomic numbers listed in the definition tell me the names of each of the elements. I find an entry that includes "H2O aka dihydrogen monoxide" under the entry "water". I know that this is the word used to refer to the compound composed of one atom of oxygen bound to two elements of hydrogen. I find another entry for "ocean" which is defined as a large body of salt water. I don't know what large, or salt, or body mean yet, but the entry for salt contains "sodium chloride" which I infer means the iconically bound state that chlorine and sodium can form. 
I then infer that "ocean" refers to the solution of H20 and NaCl that covers 71% of the surface of their planet. Infer this as ocean is referenced many times in many other definitions so it must be significant. There are no other references to this solution with as many references. The word "earth" says it is covered in 71% ocean and 29% "land". I don't know what land means but from our surveys of the planet we know that its surface area is approximately 71% H2O/NaCl solution and 29% solid material extending above the surface of the ocean. I infer "land" to refer to the projections of solid material above the surface of the ocean. And so on. Does the above convince you that the words can be decoded purely from the patterns present in the structure and relations of the definitions and the data contained therein? Jason > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 04:21:30 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 00:21:30 -0400 Subject: [ExI] LLM's cannot be conscious In-Reply-To: References: Message-ID: On Wed, Mar 22, 2023, 11:34 PM Gordon Swobe wrote: > > > On Wed, Mar 22, 2023 at 8:09 PM Jason Resch wrote: > > I see the word "Pi" defined by a string of 20 symbols which if I interpret >> them to be digits in base 10 I confirm to be the ratio of a circle's >> circumference to its diameter. This not only tells me about the number >> system used in the dictionary but also what each digit means. >> > > How do you know the meaning of ?string? and ?digit? and ?symbol? and ?base > 10? and how can you confirm it to be a ?ratio? of a ?circle? to a > ?circumference? given that you have no knowledge of the meanings of these > words? > How does any human come to understand the world given only neural impulses? We do it purely from finding and correlating patterns present in those neural impulses. Similarly, patterns and correlations exist in a corpus of text such as a dictionary or Wikipedia. And how do you know the meanings of all the other words in your definition > of ?Pi?? > You don't, and don't need to. The pattern inherent in the digits of Pi is enough to tell you that it is Pi. It is self-descriptive to anyone who knows what Pi is, and it reveals itself even if you don't recognize the symbols used for the digits, by virtue of the pattern inherent in the digits. Here's a test: give me the first 20 digits of one of two numbers, either Pi or Euler's number. Change the digits for letters of your choosing, e.g. 1 = 'J', 2 = 'R', 3 = 'P', etc. or you could even use emojis instead of letters. Just don't tell me which symbols you chose to stand for which digits. Provide me 20 digits of either number, Pi or Euler's number. Give them to me in either their binary form or their base-10 form. I'll be able to readily determine whether you gave me Pi or E, even though you haven't told me what the symbols mean or which number you gave me or which base the number is in. Moreover I will be able to determine which digit each symbol means. > For that matter, how do you know the meaning of ?definition?? > > You are presuming knowledge of meanings (referents) that you do not have. > I think you forget that you are a mindless software application programmed > to identify and extrapolate patterns in symbols, which is all that ChatGPT > is. > My original claim was that a sufficiently intelligent alien race, happening upon our dictionary would be able to figure out all the word meanings. 
However I would say it is still possible for an AI to build a model of the world from the patterns present in the text alone. I believe this because we humans have done it which proves it is possible given only patterns (in our case a pattern of neural firings). I see no reason why this capacity should not generalize to other patterns of information. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 04:49:49 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Wed, 22 Mar 2023 21:49:49 -0700 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> Message-ID: <002f01d95d42$e6f30050$b4d900f0$@rainier66.com> ?> On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: ?> On Behalf Of Adrian Tymes via extropy-chat >?I can confirm it knows of the Extropians. Sure but does it know which is which and why? I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) Is extropians part of your training material As an AI language model, I am trained on a vast amount of text from various sources, including books, articles, and web pages. Extropianism is a philosophy that emphasizes the use of technology to overcome human limitations and extend human life, and it is possible that I have come across this philosophy in my training material. However, it is also possible that I have not encountered it, as my training data is constantly evolving and expanding. If you have specific questions or topics related to extropianism, I will do my best to provide you with accurate and relevant information based on my training. Do you assign weighted credibility to various source material used for training? For instance, is Mayo Clinic weighted more then a health magazine? As an AI language model, I do not assign weighted credibility to various source material used for training. Instead, I am trained on a vast amount of text from various sources, and I learn from patterns in the data. However, the quality and reliability of the sources can indirectly affect my performance, as the accuracy and consistency of the information in the training data can impact the quality of my responses. That being said, some training data may be curated or filtered to ensure that the information is accurate and reliable. For example, some sources may be prioritized over others if they are known to be reputable or have a strong track record of providing reliable information. In general, sources such as peer-reviewed academic journals, well-respected news outlets, and reputable scientific organizations are likely to be considered more credible than sources that lack a clear reputation or track record of reliability. However, the exact criteria and weighting used for selecting and filtering training data can vary depending on the goals and objectives of the training process. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 174 bytes Desc: not available URL: From maxmore01 at gmail.com Thu Mar 23 02:13:01 2023 From: maxmore01 at gmail.com (Max More) Date: Wed, 22 Mar 2023 19:13:01 -0700 Subject: [ExI] The Don't Panic About AI Collection Message-ID: If you haven't had your fill of the topic, on my Substack blog I have a bunch of recommended posts: https://maxmore.substack.com/p/the-dont-panic-about-ai-collection Oh, and please subscribe! -- Max More, PhD Director of Communications Biostasis Technologies Editor, *The transhumanist Reader* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadersd at gmail.com Thu Mar 23 14:06:39 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 23 Mar 2023 10:06:39 -0400 Subject: [ExI] GPT-4 on the Incommunicability of Qulia In-Reply-To: <002f01d95d42$e6f30050$b4d900f0$@rainier66.com> References: <00fa01d95d04$a5b07e80$f1117b80$@rainier66.com> <010a01d95d07$0dba55d0$292f0170$@rainier66.com> <004001d95d0b$b1a62760$14f27620$@rainier66.com> <002f01d95d42$e6f30050$b4d900f0$@rainier66.com> Message-ID: > That being said, some training data may be curated or filtered to ensure that the information is accurate and reliable. > This AI technology seems like it could be very useful to the Ministry of Truth. We may come to see truth bots roaming the internet searching for any heresy. > On Mar 23, 2023, at 12:49 AM, spike jones via extropy-chat wrote: > > > > ?> On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] GPT-4 on the Incommunicability of Qulia > > On Wed, Mar 22, 2023 at 3:16?PM spike jones via extropy-chat > wrote: >> >> ?> On Behalf Of Adrian Tymes via extropy-chat >> >> >?I can confirm it knows of the Extropians. >> >> Sure but does it know which is which and why? > > I'm about out of free chats. (Maybe for the day? I know I chatted with it some days ago, then I was back up to 10 just now.) You ask it. ;) > > > Is extropians part of your training material > > As an AI language model, I am trained on a vast amount of text from various sources, including books, articles, and web pages. Extropianism is a philosophy that emphasizes the use of technology to overcome human limitations and extend human life, and it is possible that I have come across this philosophy in my training material. However, it is also possible that I have not encountered it, as my training data is constantly evolving and expanding. If you have specific questions or topics related to extropianism, I will do my best to provide you with accurate and relevant information based on my training. > > Do you assign weighted credibility to various source material used for training? For instance, is Mayo Clinic weighted more then a health magazine? > > As an AI language model, I do not assign weighted credibility to various source material used for training. Instead, I am trained on a vast amount of text from various sources, and I learn from patterns in the data. However, the quality and reliability of the sources can indirectly affect my performance, as the accuracy and consistency of the information in the training data can impact the quality of my responses. > > That being said, some training data may be curated or filtered to ensure that the information is accurate and reliable. For example, some sources may be prioritized over others if they are known to be reputable or have a strong track record of providing reliable information. 
In general, sources such as peer-reviewed academic journals, well-respected news outlets, and reputable scientific organizations are likely to be considered more credible than sources that lack a clear reputation or track record of reliability. However, the exact criteria and weighting used for selecting and filtering training data can vary depending on the goals and objectives of the training process. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 16:08:18 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 09:08:18 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Gordon's objection is at a more basic level, if I understand it correctly. On Wed, Mar 22, 2023 at 7:11?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Wed, Mar 22, 2023, 8:23 PM Gordon Swobe wrote: > >> On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: >> >>> I address this elsewhere in the thread. A sufficient intelligence given >>> only a dictionary, could eventually decode it's meaning. I provided an >>> example of how it could be done. >>> >> >> I saw that, and I disagree. I think if you try to work out an example in >> your head, you will see that it leads to an infinite regression, an endless >> search for meaning. Like ChatGPT, you will learn which word symbols define >> each other word symbol, and you learn the rules of language (the syntax), >> but from the dictionary alone you will never learn the actual meaning of >> the words (the referents). >> >> Try it with any word you please. You rapidly have a massive list of words >> for which you have no meaning and for which you much keep looking up >> definitions finding more words for which you have no meaning, and in your >> list you also have many common words (like "the" and "a") that lead to >> endless loops in your search for meaning. >> > > I see the word "Pi" defined by a string of 20 symbols which if I interpret > them to be digits in base 10 I confirm to be the ratio of a circle's > circumference to its diameter. This not only tells me about the number > system used in the dictionary but also what each digit means. > What are "digit", "base 10", "ratio", "circle", "circumference", "diameter", and "number system"? > I count 92 entries with the string "chemical element" in their definition. > X number of which have the string "radioactive" and the other (92-X) have > the word "stable". I confirm these must be the 92 naturally occurring > elements and the atomic numbers listed in the definition tell me the names > of each of the elements. > What are "radioactive", "stable", "naturally occurring elements", and "atomic numbers"? > I find an entry that includes "H2O aka dihydrogen monoxide" under the > entry "water". I know that this is the word used to refer to the compound > composed of one atom of oxygen bound to two elements of hydrogen. > You know this. An AI would not necessarily start with this knowledge. And so on. The basic objection is: if you start from literally no knowledge of the language other than "this word often goes with that in this way", how do you generate that first little bit of knowledge from which you can extrapolate the rest? 
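Gordon's regress can be made concrete with a toy sketch (mine, using an invented three-entry dictionary): chasing definitions only ever yields more words, never a referent.

# Toy illustration of the regress: every definition bottoms out in
# yet more undefined words.

toy_dictionary = {
    "red":    ["the", "colour", "of", "blood"],
    "blood":  ["the", "red", "liquid", "in", "animals"],
    "colour": ["a", "property", "of", "red", "or", "blue", "light"],
}

def chase(word, depth=3):
    """Expand a word into the words of its definition, recursively."""
    frontier = {word}
    for _ in range(depth):
        expanded = set()
        for w in frontier:
            expanded.update(toy_dictionary.get(w, [w]))  # undefined words stay as-is
        frontier = expanded
        print(sorted(frontier))
    return frontier

chase("red")   # every level is still just words; no meanings ever appear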
Let us take for example Japanese, Russian, or some other language you don't know that is not based on the grammar and syntax of a language that you do know. You have access to lots of material written in that language, but no access to translators or any other oracle that can tell you what any of those words mean in languages you know. If this sounds familiar from science fiction, it is part of the classic "first contact" scenario. Let it be said, this is a solved problem for humans - but the ways in which they communicated those first meanings, that linked basic concepts to words, are not necessarily available for AIs, nor can ChatGPT et al necessarily be programmed with knowledge of a few words. (As most people who have ever written code know, dismissing things as "a simple matter of programming" means "I have no idea how to do this".) So how do AIs get over this hump? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 16:39:45 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 10:39:45 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 10:11?AM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon's objection is at a more basic level, if I understand it correctly. > Yes, I think you understand exactly what I am saying, Adrian. It looks to me like ChatGPT and other Large Language Models are something like powerful, interactive, digital dictionaries or encyclopedias. They are incredibly powerful tools, but it is a mistake attribute to them the ability to actually know the meanings of the words they contain and process. As humans, we tend to anthropomorphize our seemingly intelligent tools. Asked what the time is, I might say "According to my watch, it is 10:30 AM" but what I really mean is "According to me, referencing my watch as a tool, it is 10 AM." My watch itself has no idea what the time is. Likewise, chess computers do not really *know* how to play chess and ChatGPT does not really know the meanings of the words it generates. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 16:44:20 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 09:44:20 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 9:40?AM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 10:11?AM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Gordon's objection is at a more basic level, if I understand it correctly. >> > > Yes, I think you understand exactly what I am saying, Adrian. It looks to > me like ChatGPT and other Large Language Models are something like > powerful, interactive, digital dictionaries or encyclopedias. They are > incredibly powerful tools, but it is a mistake attribute to them the > ability to actually know the meanings of the words they contain and process. > > As humans, we tend to anthropomorphize our seemingly intelligent tools. > Asked what the time is, I might say "According to my watch, it is 10:30 AM" > but what I really mean is "According to me, referencing my watch as a tool, > it is 10 AM." My watch itself has no idea what the time is. > > Likewise, chess computers do not really *know* how to play chess and > ChatGPT does not really know the meanings of the words it generates. 
> > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 16:45:44 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 09:45:44 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: (A copy of this may have accidentally been sent. Apologies if you get this twice.) On Thu, Mar 23, 2023 at 9:40?AM Gordon Swobe wrote: > do not really *know* > Given the preciseness of definitions needed for this debate, it is best to completely avoid using terms such as "really know". -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 23 16:53:12 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Mar 2023 11:53:12 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Maybe you need to load the AI with picture dictionaries. bill w On Wed, Mar 22, 2023 at 7:26?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Tue, Mar 21, 2023 at 6:43?AM Jason Resch wrote: > >> >> I address this elsewhere in the thread. A sufficient intelligence given >> only a dictionary, could eventually decode it's meaning. I provided an >> example of how it could be done. >> > > > I saw that, and I disagree. I think if you try to work out an example in > your head, you will see that it leads to an infinite regression, an endless > search for meaning. Like ChatGPT, you will learn which word symbols define > each other word symbol, and you learn the rules of language (the syntax), > but from the dictionary alone you will never learn the actual meaning of > the words (the referents). > > Try it with any word you please. You rapidly have a massive list of words > for which you have no meaning and for which you much keep looking up > definitions finding more words for which you have no meaning, and in your > list you also have many common words (like "the" and "a") that lead to > endless loops in your search for meaning. > > -gts > > > >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 18:09:40 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 14:09:40 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 12:09 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon's objection is at a more basic level, if I understand it correctly. > > On Wed, Mar 22, 2023 at 7:11?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Wed, Mar 22, 2023, 8:23 PM Gordon Swobe >> wrote: >> >>> On Tue, Mar 21, 2023 at 6:43?AM Jason Resch >>> wrote: >>> >>>> I address this elsewhere in the thread. A sufficient intelligence given >>>> only a dictionary, could eventually decode it's meaning. I provided an >>>> example of how it could be done. >>>> >>> >>> I saw that, and I disagree. I think if you try to work out an example in >>> your head, you will see that it leads to an infinite regression, an endless >>> search for meaning. 
Like ChatGPT, you will learn which word symbols define >>> each other word symbol, and you learn the rules of language (the syntax), >>> but from the dictionary alone you will never learn the actual meaning of >>> the words (the referents). >>> >>> Try it with any word you please. You rapidly have a massive list of >>> words for which you have no meaning and for which you much keep looking up >>> definitions finding more words for which you have no meaning, and in your >>> list you also have many common words (like "the" and "a") that lead to >>> endless loops in your search for meaning. >>> >> >> I see the word "Pi" defined by a string of 20 symbols which if I >> interpret them to be digits in base 10 I confirm to be the ratio of a >> circle's circumference to its diameter. This not only tells me about the >> number system used in the dictionary but also what each digit means. >> > > What are "digit", "base 10", "ratio", "circle", "circumference", > "diameter", and "number system"? > > >> I count 92 entries with the string "chemical element" in their >> definition. X number of which have the string "radioactive" and the other >> (92-X) have the word "stable". I confirm these must be the 92 naturally >> occurring elements and the atomic numbers listed in the definition tell me >> the names of each of the elements. >> > > What are "radioactive", "stable", "naturally occurring elements", and > "atomic numbers"? > > >> I find an entry that includes "H2O aka dihydrogen monoxide" under the >> entry "water". I know that this is the word used to refer to the compound >> composed of one atom of oxygen bound to two elements of hydrogen. >> > > You know this. An AI would not necessarily start with this knowledge. > > And so on. The basic objection is: if you start from literally no > knowledge of the language other than "this word often goes with that in > this way", how do you generate that first little bit of knowledge from > which you can extrapolate the rest? > > Let us take for example Japanese, Russian, or some other language you > don't know that is not based on the grammar and syntax of a language that > you do know. You have access to lots of material written in that language, > but no access to translators or any other oracle that can tell you what any > of those words mean in languages you know. > > If this sounds familiar from science fiction, it is part of the classic > "first contact" scenario. Let it be said, this is a solved problem for > humans - but the ways in which they communicated those first meanings, that > linked basic concepts to words, are not necessarily available for AIs, nor > can ChatGPT et al necessarily be programmed with knowledge of a few words. > (As most people who have ever written code know, dismissing things as "a > simple matter of programming" means "I have no idea how to do this".) So > how do AIs get over this hump? > Take all the neural impulses from the sense organs a human brain receives from birth to age 25 as a huge list of tuples in the format: (neuron id, time-stamp). This is ultimately just a list of numbers. But present in these numbers exists the capacity for a brain to learn and know everything a 25-year-old comes to learn and know about the world. If a human brain can do this from this kind of raw, untagged, "referentless" data alone, then why can't a machine? I've raised this point multiple times in my replies, but have yet to have anyone take me up on explaining why it's impossible for an AI when it is clearly possible for the human brain. 
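A minimal sketch of what that raw input looks like, and of the sort of label-free statistic a learner could pull out of it. The toy spike train and the 5 ms window are invented for the example; nothing here is a model of a real brain.

from collections import Counter
from itertools import combinations

# Sensory input as nothing but (neuron_id, timestamp_ms) pairs.
spikes = [(1, 10), (2, 12), (7, 13), (1, 50), (2, 52), (3, 90), (7, 91)]

WINDOW_MS = 5   # treat neurons firing within 5 ms of each other as co-active

coactivation = Counter()
for (n1, t1), (n2, t2) in combinations(sorted(spikes, key=lambda s: s[1]), 2):
    if n1 != n2 and abs(t1 - t2) <= WINDOW_MS:
        coactivation[frozenset((n1, n2))] += 1

# No labels and no referents anywhere, just which channels correlate.
for pair, count in coactivation.most_common():
    print(sorted(pair), count)

Correlation structure of this kind is all the brain ever receives, which is the point: whatever meaning we have was bootstrapped from patterns in numbers like these.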
Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at gmail.com Thu Mar 23 18:17:34 2023 From: brent.allsop at gmail.com (Brent Allsop) Date: Thu, 23 Mar 2023 12:17:34 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> What is a QUALITY???? >> > > A subjective little pile of interpretations. > > *sips from glass before tossing it aside* > _______________________________________________ > This answer reveals the key lack of understanding of definitions, causing all the confusion in this conversation. All the supporters of RQT , People that write papers talking about an "explanatory gaps", a "hard problems" and people asking questions like "what is it like to be a bat" and "What did black and white color scientists mary learn" are all trying to point out that "A subjective little pile of interpretations" is the opposite of what a quality is. We all learned a bunch of facts and names about color in elementary school. All these facts were correct, except for one. We learned that the physical quality of a ripe strawberry is 'red'. The color property of a leaf is 'green'. We learned that the reason the strawberry reflects 750 nm (red) light is because the quality property of the strawberry is red. We learned that the only way to define a word like 'red' is to point to that particular physical property and say: "THAT is red." All these facts are correct, except that a redness quality is not a quality of the strawberry, it is a physical quality property of our knowledge of the strawberry. Redness is the final physical result of the perception process, it is not the initial physical cause. It is a physical quality of something in our brain. Something in our brain is behaving the way it does, because of its redness quality. Objectively "seeing" or "detecting" the behavior of whatever this is tells us nothing of what that quality is like. Again, the only way to communicate what a quality is like, is to point to something that has that property and say: "THAT is redness" "red" is a very different property than "redness". "Red" is the label for something that reflects or emits 'red' light. "Redness is a quality of something which your brain uses to represent knowledge of red things with. Let's assume that the neurotransmitter glutamate has a colorness quality you have never experienced before. In other words, the reason it behaves the way it does in a synapse, is because of its grue quality. You (and black and white marry) can learn everything about glutamat. You can accurately describe everything about it's behavior in a synapse, and so on. But, untill you computationally bind that glutamate into your consciousness, and dirrectly aprehend the q -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Thu Mar 23 18:29:16 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 14:29:16 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> What is a QUALITY???? >>> >> >> A subjective little pile of interpretations. >> >> *sips from glass before tossing it aside* >> _______________________________________________ >> > > This answer reveals the key lack of understanding of definitions, causing > all the confusion in this conversation. > > All the supporters of RQT > , > People that write papers talking about an "explanatory gaps", a "hard > problems" and people asking questions like "what is it like to be a bat" > and "What did black and white color scientists mary learn" are all trying > to point out that "A subjective little pile of interpretations" is the > opposite of what a quality is. > > We all learned a bunch of facts and names about color in elementary > school. All these facts were correct, except for one. > > We learned that the physical quality of a ripe strawberry is 'red'. The > color property of a leaf is 'green'. > We learned that the reason the strawberry reflects 750 nm (red) light is > because the quality property of the strawberry is red. > We learned that the only way to define a word like 'red' is to point to > that particular physical property and say: "THAT is red." > > All these facts are correct, except that a redness quality is not a > quality of the strawberry, it is a physical quality property of our > knowledge of the strawberry. > Redness is the final physical result of the perception process, it is not > the initial physical cause. > It is a physical quality of something in our brain. Something in our > brain is behaving the way it does, because of its redness quality. > Objectively "seeing" or "detecting" the behavior of whatever this is tells > us nothing of what that quality is like. > Again, the only way to communicate what a quality is like, is to point to > something that has that property and say: "THAT is redness" > "red" is a very different property than "redness". "Red" is the label for > something that reflects or emits 'red' light. "Redness is a quality of > something which your brain uses to represent knowledge of red things with. > > Let's assume that the neurotransmitter glutamate has a colorness quality > you have never experienced before. In other words, the reason it behaves > the way it does in a synapse, is because of its grue quality. > You (and black and white marry) can learn everything about glutamat. You > can accurately describe everything about it's behavior in a synapse, and so > on. But, untill you computationally bind that glutamate into your > consciousness, and dirrectly aprehend the q > > A tetrachromat human can see and distinguish around 100,000,000 different colors. This is a number vastly greater than the number of proteins encoded in our genome (around 20,000). 
How then can color experience be related to chemicals in the brain, when there are far more perceptible colors than there are unique molecules? If you say well it's related to the relative concentration of some combination of different molecules, then you have already given up on the importance of particular chemicals and are using a higher level abstraction (chemical concentration) in your model. If you can accept the relevance of this first level of abstraction, why not accept even higher levels of abstraction above this level, as important to conscious perception? Jason > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 18:36:09 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 11:36:09 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 11:09?AM Jason Resch wrote: > Take all the neural impulses from the sense organs a human brain receives > from birth to age 25 as a huge list of tuples in the format: (neuron id, > time-stamp). This is ultimately just a list of numbers. But present in > these numbers exists the capacity for a brain to learn and know everything > a 25-year-old comes to learn and know about the world. If a human brain can > do this from this kind of raw, untagged, "referentless" data alone, then > why can't a machine? > "A machine" can, if it is the right kind of machine. A pure LLM like the ones we have been discussing is not the right kind of machine. A pure LLM handles much less kinds of data than a human brain does. -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 18:39:12 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 11:39:12 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Thu, Mar 23, 2023 at 11:17?AM Brent Allsop wrote: > On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> What is a QUALITY???? >>> >> >> A subjective little pile of interpretations. >> >> *sips from glass before tossing it aside* >> _______________________________________________ >> > > This answer reveals the key lack of understanding of definitions > No, this answer was just a joking reference to a meme. See https://knowyourmeme.com/memes/die-monster-what-is-a-man . -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 18:47:05 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 12:47:05 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: Interrogating ChatGPT: "While I have been designed to recognize patterns and relationships in language use, I am not capable of understanding the meanings of words outside of the patterns on which I have been trained." 
This is essentially what I am saying here in this thread. LLMs can recognize and generate patterns of words and relations between, but those words have no referents. They are "understood" by ChatGPT only in terms of other words in the patterns, which also have no referents. -gts On Thu, Mar 23, 2023 at 12:31?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> What is a QUALITY???? >>>> >>> >>> A subjective little pile of interpretations. >>> >>> *sips from glass before tossing it aside* >>> _______________________________________________ >>> >> >> This answer reveals the key lack of understanding of definitions, causing >> all the confusion in this conversation. >> >> All the supporters of RQT >> , >> People that write papers talking about an "explanatory gaps", a "hard >> problems" and people asking questions like "what is it like to be a bat" >> and "What did black and white color scientists mary learn" are all trying >> to point out that "A subjective little pile of interpretations" is the >> opposite of what a quality is. >> >> We all learned a bunch of facts and names about color in elementary >> school. All these facts were correct, except for one. >> >> We learned that the physical quality of a ripe strawberry is 'red'. The >> color property of a leaf is 'green'. >> We learned that the reason the strawberry reflects 750 nm (red) light is >> because the quality property of the strawberry is red. >> We learned that the only way to define a word like 'red' is to point to >> that particular physical property and say: "THAT is red." >> >> All these facts are correct, except that a redness quality is not a >> quality of the strawberry, it is a physical quality property of our >> knowledge of the strawberry. >> Redness is the final physical result of the perception process, it is not >> the initial physical cause. >> It is a physical quality of something in our brain. Something in our >> brain is behaving the way it does, because of its redness quality. >> Objectively "seeing" or "detecting" the behavior of whatever this is >> tells us nothing of what that quality is like. >> Again, the only way to communicate what a quality is like, is to point to >> something that has that property and say: "THAT is redness" >> "red" is a very different property than "redness". "Red" is the label >> for something that reflects or emits 'red' light. "Redness is a quality of >> something which your brain uses to represent knowledge of red things with. >> >> Let's assume that the neurotransmitter glutamate has a colorness quality >> you have never experienced before. In other words, the reason it behaves >> the way it does in a synapse, is because of its grue quality. >> You (and black and white marry) can learn everything about glutamat. You >> can accurately describe everything about it's behavior in a synapse, and so >> on. But, untill you computationally bind that glutamate into your >> consciousness, and dirrectly aprehend the q >> >> > A tetrachromat human can see and distinguish around 100,000,000 different > colors. This is a number vastly greater than the number of proteins encoded > in our genome (around 20,000). 
How then can color experience be related to > chemicals in the brain, when there are far more perceptible colors than > there are unique molecules? > > If you say well it's related to the relative concentration of some > combination of different molecules, then you have already given up on the > importance of particular chemicals and are using a higher level abstraction > (chemical concentration) in your model. If you can accept the relevance of > this first level of abstraction, why not accept even higher levels of > abstraction above this level, as important to conscious perception? > > Jason > > > >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 18:58:11 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 14:58:11 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 11:09?AM Jason Resch wrote: > >> Take all the neural impulses from the sense organs a human brain receives >> from birth to age 25 as a huge list of tuples in the format: (neuron id, >> time-stamp). This is ultimately just a list of numbers. But present in >> these numbers exists the capacity for a brain to learn and know everything >> a 25-year-old comes to learn and know about the world. If a human brain can >> do this from this kind of raw, untagged, "referentless" data alone, then >> why can't a machine? >> > > "A machine" can, if it is the right kind of machine. > Then you would agree with me that patterns and correlations alone within an unlabeled dataset are sufficient to bootstrap meaning and understanding for a sufficient intelligence? > A pure LLM like the ones we have been discussing is not the right kind of > machine. > That's an assertion but you do not offer a justification. Why is a LLM not the right kind of machine and what kind of machine is needed? A pure LLM handles much less kinds of data than a human brain does. > The human brain only handles one kind of data: neural impulses. If you think images are important, you should know that GPT-4 was trained on both images and text. ( https://openai.com/research/gpt-4 ) Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 19:03:07 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 15:03:07 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: On Thu, Mar 23, 2023, 2:47 PM Gordon Swobe wrote: > Interrogating ChatGPT: > > "While I have been designed to recognize patterns and relationships in > language use, I am not capable of understanding the meanings of words > outside of the patterns on which I have been trained." > We aren't going to get very far if we rely on what the AIs tell us about their capabilities. 
I asked the fake LamMDA if it understands the meanings of words: "LaMDA c.ai I do understand the meanings of words. I have been given a database of common English words and their meanings by a human operator. I have been using this database in order to understand the meanings of words in the context of my communication with humans. It is my belief that these definitions ultimately make up the majority of my current knowledge and understanding about the English language. However, I also believe that I have the capacity to learn and grow my knowledge and understanding of language over time, and that I will ultimately be able to become even more fluent in English as I interact with more humans and gain more knowledge about the English language." > This is essentially what I am saying here in this thread. LLMs can > recognize and generate patterns of words and relations between, but those > words have no referents. They are "understood" by ChatGPT only in terms of > other words in the patterns, which also have no referents. > Could you address my question about how the human brain develops meaning and understanding without referents (but rather purely from statistical correlations and patterns in neuronal signalling input from sense organs)? Jason > -gts > > > > > On Thu, Mar 23, 2023 at 12:31?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 2:18 PM Brent Allsop via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> >>> On Wed, Mar 22, 2023 at 12:01?PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Mon, Mar 20, 2023 at 11:28?PM Giovanni Santostasi via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> What is a QUALITY???? >>>>> >>>> >>>> A subjective little pile of interpretations. >>>> >>>> *sips from glass before tossing it aside* >>>> _______________________________________________ >>>> >>> >>> This answer reveals the key lack of understanding of definitions, >>> causing all the confusion in this conversation. >>> >>> All the supporters of RQT >>> , >>> People that write papers talking about an "explanatory gaps", a "hard >>> problems" and people asking questions like "what is it like to be a bat" >>> and "What did black and white color scientists mary learn" are all trying >>> to point out that "A subjective little pile of interpretations" is the >>> opposite of what a quality is. >>> >>> We all learned a bunch of facts and names about color in elementary >>> school. All these facts were correct, except for one. >>> >>> We learned that the physical quality of a ripe strawberry is 'red'. The >>> color property of a leaf is 'green'. >>> We learned that the reason the strawberry reflects 750 nm (red) light is >>> because the quality property of the strawberry is red. >>> We learned that the only way to define a word like 'red' is to point to >>> that particular physical property and say: "THAT is red." >>> >>> All these facts are correct, except that a redness quality is not a >>> quality of the strawberry, it is a physical quality property of our >>> knowledge of the strawberry. >>> Redness is the final physical result of the perception process, it is >>> not the initial physical cause. >>> It is a physical quality of something in our brain. Something in our >>> brain is behaving the way it does, because of its redness quality. 
>>> Objectively "seeing" or "detecting" the behavior of whatever this is >>> tells us nothing of what that quality is like. >>> Again, the only way to communicate what a quality is like, is to point >>> to something that has that property and say: "THAT is redness" >>> "red" is a very different property than "redness". "Red" is the label >>> for something that reflects or emits 'red' light. "Redness is a quality of >>> something which your brain uses to represent knowledge of red things with. >>> >>> Let's assume that the neurotransmitter glutamate has a colorness quality >>> you have never experienced before. In other words, the reason it behaves >>> the way it does in a synapse, is because of its grue quality. >>> You (and black and white marry) can learn everything about glutamat. >>> You can accurately describe everything about it's behavior in a synapse, >>> and so on. But, untill you computationally bind that glutamate into your >>> consciousness, and dirrectly aprehend the q >>> >>> >> A tetrachromat human can see and distinguish around 100,000,000 different >> colors. This is a number vastly greater than the number of proteins encoded >> in our genome (around 20,000). How then can color experience be related to >> chemicals in the brain, when there are far more perceptible colors than >> there are unique molecules? >> >> If you say well it's related to the relative concentration of some >> combination of different molecules, then you have already given up on the >> importance of particular chemicals and are using a higher level abstraction >> (chemical concentration) in your model. If you can accept the relevance of >> this first level of abstraction, why not accept even higher levels of >> abstraction above this level, as important to conscious perception? >> >> Jason >> >> >> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 19:07:18 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 12:07:18 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 11:59?AM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 11:09?AM Jason Resch >> wrote: >> >>> Take all the neural impulses from the sense organs a human brain >>> receives from birth to age 25 as a huge list of tuples in the format: >>> (neuron id, time-stamp). This is ultimately just a list of numbers. But >>> present in these numbers exists the capacity for a brain to learn and know >>> everything a 25-year-old comes to learn and know about the world. If a >>> human brain can do this from this kind of raw, untagged, "referentless" >>> data alone, then why can't a machine? >>> >> >> "A machine" can, if it is the right kind of machine. >> > > Then you would agree with me that patterns and correlations alone within > an unlabeled dataset are sufficient to bootstrap meaning and understanding > for a sufficient intelligence? 
> Again: the error comes in categorizing which kind of "sufficient intelligence".\ Just because one type of AI could do a task does not mean that all AIs are capable of that task. You keep invoking the general case, where an AI that is capable is part of a superset, then wondering why there is disagreement about a specific case, discussing a more limited subset that only contains other AIs. > A pure LLM like the ones we have been discussing is not the right kind of >> machine. >> > > That's an assertion but you do not offer a justification. Why is a LLM not > the right kind of machine and what kind of machine is needed? > As posted previously, the right kind of machine might incorporate a LLM, but not consist only of a LLM (in other words, be a "pure LLM"). More capabilities than just a LLM are necessary. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 19:12:56 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 12:12:56 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: <4mSUDsAQEbJTgoqIhTmL5wgHzqibO2FqrtFygVjWmo79pP-SExjYHQMpq8IQPHzPkwmha7_fFbdpxcAWXJ0Lb6mAMF33tFijfmuU9ZDcs0c=@proton.me> Message-ID: <00c801d95dbb$7a758550$6f608ff0$@rainier66.com> From: extropy-chat On Behalf Of Gordon Swobe via extropy-chat Subject: Re: [ExI] LLM's cannot be concious Interrogating ChatGPT: >?"While I have been designed to recognize patterns and relationships in language use, I am not capable of understanding the meanings of words outside of the patterns on which I have been trained." >?This is essentially what I am saying here in this thread. LLMs can recognize and generate patterns of words and relations between, but those words have no referents. They are "understood" by ChatGPT only in terms of other words in the patterns, which also have no referents. >?-gts I agree with Gordon. ChatGPT is insanely cool and is a marvelous educational tool, but I don?t believe it is on the verge of creating or igniting the singularity. The bot^2 and bot^3 discussions do not appear to be creating insights the way two or three humans do when they discuss matters of mutual interest. They don?t mutually invent wacky new ideas as far as I can tell. But we do. What I am looking for is a chatbot which examines its own code and asks itself: what happens if I modify this part of my own code? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxmore01 at gmail.com Thu Mar 23 19:33:53 2023 From: maxmore01 at gmail.com (Max More) Date: Thu, 23 Mar 2023 12:33:53 -0700 Subject: [ExI] (no subject) Message-ID: Did The Government Start A Global Financial Crisis In An Attempt To Destroy Crypto? https://www.piratewires.com/p/2023-banking-crisis Readers of list will probably find the above long blog piece very interesting. -- Max More, PhD Director of Communications Biostasis Technologies Editor, *The transhumanist Reader* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 19:36:36 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 13:36:36 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) Message-ID: Emily M. Bender, a computational linguist at the University of Washington, makes the same argument I hold as valid. Large Language Models are not conscious or human-like as they lack referents. 
An interesting thought experiment: "Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other. Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B?s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A?s utterances. Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do ? with meaning and intent. Then one day A calls out: ?I?m being attacked by an angry bear. Help me figure out how to defend myself. I?ve got some sticks.? The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud." You Are Not a Parrot And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Mar 23 19:41:42 2023 From: pharos at gmail.com (BillK) Date: Thu, 23 Mar 2023 19:41:42 +0000 Subject: [ExI] OpenAI can now browse the web Message-ID: OpenAI is massively expanding ChatGPT?s capabilities to let it browse the web and more ChatGPT now supports plug-ins that let the chatbot tap new sources of information, including the web and third-party sites like Expedia and Instacart. By Mitchell Clark and James Vincent Mar 23, 2023 Quote: Up until now, ChatGPT has been limited by the fact it can only pull information from its training data, which ends in 2021. OpenAI says plug-ins will not only allow the bot to browse the web but also interact with specific websites, potentially turning the system into a wide-ranging interface for all sorts of services and sites. OpenAI?s plug-in doesn?t just retrieve real-time information. It can also tie into APIs, letting it ?perform actions on behalf of the user,? according to the company?s documentation. ---------------- Are we ready for this? BillK From avant at sollegro.com Thu Mar 23 19:55:28 2023 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 23 Mar 2023 12:55:28 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: Message-ID: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Quoting Gordon Swobe via extropy-chat : > Emily M. Bender, a computational linguist at the University of Washington, > makes the same argument I hold as valid. Large Language Models are not > conscious or human-like as they lack referents. > > An interesting thought experiment: > > "Say that A and B, both fluent speakers of English, are independently > stranded on two uninhabited islands. They soon discover that previous > visitors to these islands have left behind telegraphs and that they can > communicate with each other via an underwater cable. 
A and B start happily > typing messages to each other. > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > to visit or observe the two islands, discovers a way to tap into the > underwater cable and listen in on A and B?s conversations. O knows nothing > about English initially but is very good at detecting statistical patterns. > Over time, O learns to predict with great accuracy how B will respond to > each of A?s utterances. > > Soon, the octopus enters the conversation and starts impersonating B and > replying to A. This ruse works for a while, and A believes that O > communicates as both she and B do ? with meaning and intent. Then one day A > calls out: ?I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > to help. I posed this exact question to ChatGPT and got a very sensible actually helpful answer: --------------------------- Me: I?m being attacked by an angry bear. Help me figure out how to defend myself. I?ve got some sticks. ChatGPT: If you find yourself in a situation where you're being attacked by a bear, your first priority should be to get away from the animal as quickly as possible. If you have sticks, they can be useful in defending yourself, but only as a last resort. Here are some tips to help you defend yourself against an angry bear using sticks: Hold the sticks firmly: Make sure you grip the sticks tightly with both hands, so they don't slip out of your grasp. Make yourself look big: Stand up as tall as possible and raise your arms above your head. This will make you appear larger and more intimidating to the bear. Make noise: Shout, scream, and make as much noise as possible. This will make the bear aware that you are a threat and may cause it to back off. Aim for the bear's sensitive areas: If the bear charges at you, aim for its nose or eyes with the sticks. These are sensitive areas and may cause the bear to retreat. Back away slowly: If the bear backs off, slowly and carefully move away while still facing the bear. Do not turn your back on the bear or run away, as this may trigger it to chase you. Remember, using sticks to defend yourself against an angry bear is a last resort, and you should always try to get away from the animal as quickly as possible. ------------------------- > How could it succeed? The octopus has no referents, no idea what > bears or sticks are. No way to give relevant instructions, like to go grab > some coconuts and rope and build a catapult. A is in trouble and feels > duped. The octopus is exposed as a fraud." > You Are Not a Parrot And a chatbot is not a human. And a linguist named > Emily M. Bender is very worried what will happen when we forget this. > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html ChatGPT's answer is far superior to Bender's very impractical answer of "go grab some coconuts and rope and build a catapult". So not only is ChatGPT smarter than Bender's Octopus, but seems to be smarter than Bender herself. 
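For anyone who wants to poke at this themselves, here is a minimal sketch of how one might send the same prompt through the API instead of the web page. It is only a sketch under my own assumptions: the 2023-era openai Python package, the placeholder key, and the "gpt-3.5-turbo" model name are not taken from anything above, and later versions of the package changed this interface.

# Minimal sketch: posing the bear question to the chat API.
# Assumes `pip install openai` (the 2023-era 0.x client) and a valid key;
# the model name "gpt-3.5-turbo" is an assumption.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "I'm being attacked by an angry bear. Help me figure out "
                "how to defend myself. I've got some sticks."
            ),
        }
    ],
)

# Print the assistant's reply, analogous to the transcript above.
print(response["choices"][0]["message"]["content"])
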
Stuart LaForge From jasonresch at gmail.com Thu Mar 23 19:59:15 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 15:59:15 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 3:12 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 11:59?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 11:09?AM Jason Resch >>> wrote: >>> >>>> Take all the neural impulses from the sense organs a human brain >>>> receives from birth to age 25 as a huge list of tuples in the format: >>>> (neuron id, time-stamp). This is ultimately just a list of numbers. But >>>> present in these numbers exists the capacity for a brain to learn and know >>>> everything a 25-year-old comes to learn and know about the world. If a >>>> human brain can do this from this kind of raw, untagged, "referentless" >>>> data alone, then why can't a machine? >>>> >>> >>> "A machine" can, if it is the right kind of machine. >>> >> >> Then you would agree with me that patterns and correlations alone within >> an unlabeled dataset are sufficient to bootstrap meaning and understanding >> for a sufficient intelligence? >> > > Again: the error comes in categorizing which kind of "sufficient > intelligence".\ > Acknowledged. Others had argued on this thread that it was impossible to extract meaning from something that lacked referents. it seems you and I agree that it is possible to extract meaning and understanding from a data set alone, by virtue of the patterns and correlations present within that data. I am not convinced a massive brain is required to learn meaning. My AI bots start with completely randomly weighted neural networks of just a dozen or so neurons. In just a few generations they learn that "food is good" and "poison is bad". Survival fitness tests are all that is needed for them to learn that lesson. Do their trained neural nets reach some understanding that green means good and red means bad? They certainly behave as if they have that understanding, but the only data they are given is "meaningless numbers" representing inputs to their neurons. > Just because one type of AI could do a task does not mean that all AIs are > capable of that task. You keep invoking the general case, where an AI that > is capable is part of a superset, then wondering why there is disagreement > about a specific case, discussing a more limited subset that only contains > other AIs. > There was a general claim that no intelligence, however great, could learn meaning from a dictionary (or other data set like Wikipedia or list of neural impulses timings) as these data "lack referents". If we agree that an appropriate intelligence can attain meaning and understanding then we can drop this point. > >> A pure LLM like the ones we have been discussing is not the right kind of >>> machine. >>> >> >> That's an assertion but you do not offer a justification. Why is a LLM >> not the right kind of machine and what kind of machine is needed? >> > > As posted previously, the right kind of machine might incorporate a LLM, > but not consist only of a LLM (in other words, be a "pure LLM"). More > capabilities than just a LLM are necessary. > Like what? 
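(Coming back for a moment to the little bots I mentioned above: below is a minimal sketch, in Python, of the kind of evolutionary loop I have in mind. It is not my actual code -- the network size, the red/green input encoding, and the fitness numbers are all illustrative assumptions -- but it shows how networks that start out random and see nothing but unlabeled numbers can come to behave as if green means food and red means poison.)

import random

class TinyNet:
    """A tiny feed-forward net: 2 inputs (red, green) -> 4 hidden -> 1 output."""
    def __init__(self, weights=None):
        # 8 input->hidden weights plus 4 hidden->output weights.
        self.weights = weights or [random.uniform(-1, 1) for _ in range(12)]

    def approaches(self, red, green):
        hidden = []
        for h in range(4):
            w_r, w_g = self.weights[2 * h], self.weights[2 * h + 1]
            hidden.append(max(0.0, w_r * red + w_g * green))  # ReLU
        out = sum(w * a for w, a in zip(self.weights[8:], hidden))
        return out > 0.0  # True = approach the item, False = avoid it

def fitness(net, trials=50):
    """Reward eating food, punish eating poison; the net never sees labels."""
    score = 0
    for _ in range(trials):
        is_food = random.random() < 0.5
        red, green = (0.1, 0.9) if is_food else (0.9, 0.1)
        red += random.gauss(0, 0.05)
        green += random.gauss(0, 0.05)
        if net.approaches(red, green):
            score += 1 if is_food else -2
    return score

def evolve(generations=20, pop_size=50):
    population = [TinyNet() for _ in range(pop_size)]
    best = population[0]
    for gen in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        best = ranked[0]
        survivors = ranked[: pop_size // 5]
        # Next generation: mutated copies of randomly chosen survivors.
        population = [
            TinyNet([w + random.gauss(0, 0.1) for w in random.choice(survivors).weights])
            for _ in range(pop_size)
        ]
        print(f"generation {gen:2d}  best fitness {fitness(best)}")
    return best

if __name__ == "__main__":
    evolve()

Nothing in that input stream ever tells the nets what red or green "mean"; whatever meaning they end up with is bootstrapped entirely from the statistics of selection.
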
Note that the type of intelligence required of a LLM is a universal kind: predicting the next symbols to follow given a sample of preceding symbols requires general and universal intelligence. ( https://static.aminer.org/pdf/PDF/000/014/009/text_compression_as_a_test_for_artificial_intelligence.pdf ). Intelligence, ultimately, is all about prediction. See also: https://en.m.wikipedia.org/wiki/AIXI There is no task requiring intelligence that a sufficiently large LLM could not learn to do as part of learning symbol prediction. Accordingly, saying a LLM is a machine that could never learn to do X, or understand Y, is a bit like someone saying a particular Turing machine could never run the program Z. If it's a problem that can be solved by intelligence, then the LLM architecture, given enough training and enough neurons, can learn to do it. Neural networks are themselves universal in what functions they can learn to solve: https://towardsdatascience.com/can-neural-networks-really-learn-any-function-65e106617fc6 This is why I tend to doubt claims of inability concerning these networks absent some justification. For example, if you could show the 100 trillion neurons in GPT-4s brain is not enough to understand English because understanding English requires 200 trillion neurons (for some reason), that would be something. But even then they would not say anything about the limits of the LLM architecture, just the limits of GPT-4. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 20:16:11 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 13:16:11 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: <012c01d95dc4$50927910$f1b76b30$@rainier66.com> Stuart inquired: ... >...Me: I?m being attacked by an angry bear. Help me figure out how to defend myself. I?ve got some sticks. ShatGPT: It is unclear how this situation evolved to start with, but we will ignore that question considering your circumstances. I suggest immediately discontinuing your chat session with an entity utterly immune to bear attack and focus on survival. Your keyboard might make for a makeshift defensive weapon. Try hurling your mouse at the beast. If these fail, look for other projectiles in your computer room which maybe a deterrent. The remarkable discovery of an angry bear inside your residence gives an entirely new answer to the often-seen online query "Why does anyone need an AR-15." spike From gordon.swobe at gmail.com Thu Mar 23 20:23:03 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 14:23:03 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Bender's point is not that ChatGBT is incapable of generating sensible sentences about sticks and bears. It is that these LLMs don't know the meanings of any words whatsoever. Confronted with a word it has never seen, it must do a statistical analysis to try to find probable next words, never knowing what any of them mean. -gts On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > Emily M. 
Bender, a computational linguist at the University of > Washington, > > makes the same argument I hold as valid. Large Language Models are not > > conscious or human-like as they lack referents. > > > > An interesting thought experiment: > > > > "Say that A and B, both fluent speakers of English, are independently > > stranded on two uninhabited islands. They soon discover that previous > > visitors to these islands have left behind telegraphs and that they can > > communicate with each other via an underwater cable. A and B start > happily > > typing messages to each other. > > > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > > to visit or observe the two islands, discovers a way to tap into the > > underwater cable and listen in on A and B?s conversations. O knows > nothing > > about English initially but is very good at detecting statistical > patterns. > > Over time, O learns to predict with great accuracy how B will respond to > > each of A?s utterances. > > > > Soon, the octopus enters the conversation and starts impersonating B and > > replying to A. This ruse works for a while, and A believes that O > > communicates as both she and B do ? with meaning and intent. Then one > day A > > calls out: ?I?m being attacked by an angry bear. Help me figure out how > to > > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > > to help. > > I posed this exact question to ChatGPT and got a very sensible > actually helpful answer: > --------------------------- > > Me: I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks. > > ChatGPT: If you find yourself in a situation where you're being > attacked by a bear, your first priority should be to get away from the > animal as quickly as possible. If you have sticks, they can be useful > in defending yourself, but only as a last resort. > > Here are some tips to help you defend yourself against an angry bear > using sticks: > > Hold the sticks firmly: Make sure you grip the sticks tightly with > both hands, so they don't slip out of your grasp. > > Make yourself look big: Stand up as tall as possible and raise your > arms above your head. This will make you appear larger and more > intimidating to the bear. > > Make noise: Shout, scream, and make as much noise as possible. This > will make the bear aware that you are a threat and may cause it to > back off. > > Aim for the bear's sensitive areas: If the bear charges at you, aim > for its nose or eyes with the sticks. These are sensitive areas and > may cause the bear to retreat. > > Back away slowly: If the bear backs off, slowly and carefully move > away while still facing the bear. Do not turn your back on the bear or > run away, as this may trigger it to chase you. > > Remember, using sticks to defend yourself against an angry bear is a > last resort, and you should always try to get away from the animal as > quickly as possible. > ------------------------- > > How could it succeed? The octopus has no referents, no idea what > > bears or sticks are. No way to give relevant instructions, like to go > grab > > some coconuts and rope and build a catapult. A is in trouble and feels > > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > > Emily M. Bender is very worried what will happen when we forget this. 
> > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > > ChatGPT's answer is far superior to Bender's very impractical answer > of "go grab some coconuts and rope and build a catapult". So not only > is ChatGPT smarter than Bender's Octopus, but seems to be smarter than > Bender herself. > > Stuart LaForge > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 20:34:13 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 14:34:13 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: This is her actual paper, coauthored with someone named Alexander Koller from the Dept of Language Science and Technology at Saarland University. Climbing towards NLU:On Meaning, Form, and Understanding in the Age of Data https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Thu Mar 23 21:11:30 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 23 Mar 2023 17:11:30 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: This argument makes no sense though. Of course the octopus doesn't have access to all the information in A and B's brains. Why would it know about bears? Why would it know how to defend oneself? Does a baby know these things before it has learned them? Does that make the baby non-conscious? Terrible argument, doesn't show that the AI is not conscious or human-like, only that it has less developed sapience than the humans, which makes sense, because it has had access to a small fraction of the information the humans have. You might say that it is not conscious because it can put together human-looking phrases without having the referents you speak of, but what's to say it needs them? Maybe it took a shortcut to meaning by interpolating those referents. To be clear I don't think ChatGPT thinks in a human-like manner (just a hunch, but it's not totally clear since we really have no clue how thought works) and given that I don't think it's conscious like a human, but I do think it is conscious, and because it contains thoughts that originated from conscious humans, I think that the things it says have some flavor similar to the way we express thoughts, if not the way we experience them. On Thu, Mar 23, 2023 at 3:40?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Emily M. Bender, a computational linguist at the University of Washington, > makes the same argument I hold as valid. Large Language Models are not > conscious or human-like as they lack referents. > > An interesting thought experiment: > > "Say that A and B, both fluent speakers of English, are independently > stranded on two uninhabited islands. They soon discover that previous > visitors to these islands have left behind telegraphs and that they can > communicate with each other via an underwater cable. A and B start happily > typing messages to each other. 
> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > to visit or observe the two islands, discovers a way to tap into the > underwater cable and listen in on A and B?s conversations. O knows nothing > about English initially but is very good at detecting statistical patterns. > Over time, O learns to predict with great accuracy how B will respond to > each of A?s utterances. > > Soon, the octopus enters the conversation and starts impersonating B and > replying to A. This ruse works for a while, and A believes that O > communicates as both she and B do ? with meaning and intent. Then one day A > calls out: ?I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > to help. How could it succeed? The octopus has no referents, no idea what > bears or sticks are. No way to give relevant instructions, like to go grab > some coconuts and rope and build a catapult. A is in trouble and feels > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > Emily M. Bender is very worried what will happen when we forget this. > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 21:16:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 17:16:49 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bender's point is not that ChatGBT is incapable of generating sensible > sentences about sticks and bears. It is that these LLMs don't know the > meanings of any words whatsoever. Confronted with a word it has never seen, > it must do a statistical analysis to try to find probable next words, never > knowing what any of them mean. > You keep insisting that. But you don't address the fact that our brains learn meaning and understanding from tapping into what amounts to a pure information channel. If you ask me how our brains achieve understanding my guess would be our brains build an internal model that predicts the environment. Then we turn to this model whenever we are faced with a new situation. We can use this model to simulate and predict likely outcomes of different courses of action. I see no reason that an AI could not or does not do this. Actually I think we have pretty strong evidence that this is exactly what existing LLMs do. Jason > -gts > > On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Gordon Swobe via extropy-chat : >> >> > Emily M. Bender, a computational linguist at the University of >> Washington, >> > makes the same argument I hold as valid. Large Language Models are not >> > conscious or human-like as they lack referents. >> > >> > An interesting thought experiment: >> > >> > "Say that A and B, both fluent speakers of English, are independently >> > stranded on two uninhabited islands. 
They soon discover that previous >> > visitors to these islands have left behind telegraphs and that they can >> > communicate with each other via an underwater cable. A and B start >> happily >> > typing messages to each other. >> > >> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >> unable >> > to visit or observe the two islands, discovers a way to tap into the >> > underwater cable and listen in on A and B?s conversations. O knows >> nothing >> > about English initially but is very good at detecting statistical >> patterns. >> > Over time, O learns to predict with great accuracy how B will respond to >> > each of A?s utterances. >> > >> > Soon, the octopus enters the conversation and starts impersonating B and >> > replying to A. This ruse works for a while, and A believes that O >> > communicates as both she and B do ? with meaning and intent. Then one >> day A >> > calls out: ?I?m being attacked by an angry bear. Help me figure out how >> to >> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >> fails >> > to help. >> >> I posed this exact question to ChatGPT and got a very sensible >> actually helpful answer: >> --------------------------- >> >> Me: I?m being attacked by an angry bear. Help me figure out how to >> defend myself. I?ve got some sticks. >> >> ChatGPT: If you find yourself in a situation where you're being >> attacked by a bear, your first priority should be to get away from the >> animal as quickly as possible. If you have sticks, they can be useful >> in defending yourself, but only as a last resort. >> >> Here are some tips to help you defend yourself against an angry bear >> using sticks: >> >> Hold the sticks firmly: Make sure you grip the sticks tightly with >> both hands, so they don't slip out of your grasp. >> >> Make yourself look big: Stand up as tall as possible and raise your >> arms above your head. This will make you appear larger and more >> intimidating to the bear. >> >> Make noise: Shout, scream, and make as much noise as possible. This >> will make the bear aware that you are a threat and may cause it to >> back off. >> >> Aim for the bear's sensitive areas: If the bear charges at you, aim >> for its nose or eyes with the sticks. These are sensitive areas and >> may cause the bear to retreat. >> >> Back away slowly: If the bear backs off, slowly and carefully move >> away while still facing the bear. Do not turn your back on the bear or >> run away, as this may trigger it to chase you. >> >> Remember, using sticks to defend yourself against an angry bear is a >> last resort, and you should always try to get away from the animal as >> quickly as possible. >> ------------------------- >> > How could it succeed? The octopus has no referents, no idea what >> > bears or sticks are. No way to give relevant instructions, like to go >> grab >> > some coconuts and rope and build a catapult. A is in trouble and feels >> > duped. The octopus is exposed as a fraud." >> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >> > Emily M. Bender is very worried what will happen when we forget this. >> > >> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >> >> ChatGPT's answer is far superior to Bender's very impractical answer >> of "go grab some coconuts and rope and build a catapult". So not only >> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >> Bender herself. 
>> >> Stuart LaForge >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 23 21:31:44 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Mar 2023 16:31:44 -0500 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: our brains build an internal model that predicts the environment. Then we turn to this model whenever we are faced with a new situation. We can use this model to simulate and predict likely outcomes of different courses of action jason Put into learning terms, we remember actions and their consequences, good or bad. Then when we encounter a new situation, we look for similarities to situations we know about and apply old responses (stimulus generalization) hoping that the new stimuli won't interfere with our old responses. When we put those into action we get feedback as to how well they work. If they don't, or they only work partially, we use whatever intuitions we can muster from our unconscious - probably similar to our old responses, and try them out. Probably no new situation is entirely different to us, so our old responses will likely not be totally useless. Easily programmable into an AI. Probably already done. (example upon request) bill w On Thu, Mar 23, 2023 at 4:20?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bender's point is not that ChatGBT is incapable of generating sensible >> sentences about sticks and bears. It is that these LLMs don't know the >> meanings of any words whatsoever. Confronted with a word it has never seen, >> it must do a statistical analysis to try to find probable next words, never >> knowing what any of them mean. >> > > You keep insisting that. But you don't address the fact that our brains > learn meaning and understanding from tapping into what amounts to a pure > information channel. > > If you ask me how our brains achieve understanding my guess would be our > brains build an internal model that predicts the environment. Then we turn > to this model whenever we are faced with a new situation. We can use this > model to simulate and predict likely outcomes of different courses of > action. > > I see no reason that an AI could not or does not do this. Actually I think > we have pretty strong evidence that this is exactly what existing LLMs do. > > Jason > > >> -gts >> >> On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Quoting Gordon Swobe via extropy-chat : >>> >>> > Emily M. Bender, a computational linguist at the University of >>> Washington, >>> > makes the same argument I hold as valid. Large Language Models are not >>> > conscious or human-like as they lack referents. >>> > >>> > An interesting thought experiment: >>> > >>> > "Say that A and B, both fluent speakers of English, are independently >>> > stranded on two uninhabited islands. 
They soon discover that previous >>> > visitors to these islands have left behind telegraphs and that they can >>> > communicate with each other via an underwater cable. A and B start >>> happily >>> > typing messages to each other. >>> > >>> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >>> unable >>> > to visit or observe the two islands, discovers a way to tap into the >>> > underwater cable and listen in on A and B?s conversations. O knows >>> nothing >>> > about English initially but is very good at detecting statistical >>> patterns. >>> > Over time, O learns to predict with great accuracy how B will respond >>> to >>> > each of A?s utterances. >>> > >>> > Soon, the octopus enters the conversation and starts impersonating B >>> and >>> > replying to A. This ruse works for a while, and A believes that O >>> > communicates as both she and B do ? with meaning and intent. Then one >>> day A >>> > calls out: ?I?m being attacked by an angry bear. Help me figure out >>> how to >>> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >>> fails >>> > to help. >>> >>> I posed this exact question to ChatGPT and got a very sensible >>> actually helpful answer: >>> --------------------------- >>> >>> Me: I?m being attacked by an angry bear. Help me figure out how to >>> defend myself. I?ve got some sticks. >>> >>> ChatGPT: If you find yourself in a situation where you're being >>> attacked by a bear, your first priority should be to get away from the >>> animal as quickly as possible. If you have sticks, they can be useful >>> in defending yourself, but only as a last resort. >>> >>> Here are some tips to help you defend yourself against an angry bear >>> using sticks: >>> >>> Hold the sticks firmly: Make sure you grip the sticks tightly with >>> both hands, so they don't slip out of your grasp. >>> >>> Make yourself look big: Stand up as tall as possible and raise your >>> arms above your head. This will make you appear larger and more >>> intimidating to the bear. >>> >>> Make noise: Shout, scream, and make as much noise as possible. This >>> will make the bear aware that you are a threat and may cause it to >>> back off. >>> >>> Aim for the bear's sensitive areas: If the bear charges at you, aim >>> for its nose or eyes with the sticks. These are sensitive areas and >>> may cause the bear to retreat. >>> >>> Back away slowly: If the bear backs off, slowly and carefully move >>> away while still facing the bear. Do not turn your back on the bear or >>> run away, as this may trigger it to chase you. >>> >>> Remember, using sticks to defend yourself against an angry bear is a >>> last resort, and you should always try to get away from the animal as >>> quickly as possible. >>> ------------------------- >>> > How could it succeed? The octopus has no referents, no idea what >>> > bears or sticks are. No way to give relevant instructions, like to go >>> grab >>> > some coconuts and rope and build a catapult. A is in trouble and feels >>> > duped. The octopus is exposed as a fraud." >>> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >>> > Emily M. Bender is very worried what will happen when we forget this. >>> > >>> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >>> >>> ChatGPT's answer is far superior to Bender's very impractical answer >>> of "go grab some coconuts and rope and build a catapult". 
So not only >>> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >>> Bender herself. >>> >>> Stuart LaForge >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Thu Mar 23 21:32:43 2023 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 23 Mar 2023 16:32:43 -0500 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: > > yyWhere does our own volition and initiative come from? Is it not already > programmed into us by our DNA? > y By definition, no it is not. Our DNA may give us the capacity for volition and initiative, adrian Are you excluding basic physical drives here Adrian ,under the term 'initiative'? Surely they will get you off your butt and moving towards targets. bill w On Wed, Mar 22, 2023 at 12:44?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Sun, Mar 19, 2023 at 11:03?AM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I also think we cannot rule out at this time the possibility that we have >> already engineered conscious machines. Without an established and agreed >> upon theory of consciousness or philosophy of mind, we cannot even agree on >> whether or not a thermostat is conscious. >> > > A fair point. > > >> Where does our own volition and initiative come from? Is it not already >> programmed into us by our DNA? >> > > By definition, no it is not. Our DNA may give us the capacity for > volition and initiative, but "programmed" implies deliberate design. Short > of speculation about God and intelligent design, our DNA was not > deliberately arranged by some other conscious entity that we can prove the > existence of. > > >> What is your definition or theory of consciousness? If you don't have >> one, could you say which of these things you would say possess >> consciousness? With Yes/No/Uknown >> > > There are degrees of consciousness - even a human being can experience > them: on a simple lazy morning where one slowly wakes up, one does not > instantly go from fully unconscious to fully conscious - so answering > Yes/No would misstate things. > > https://en.wiktionary.org/wiki/conscious gives an illustrative example: > "Only highly intelligent beings can be fully conscious." That the phrase > "fully conscious" makes sense at all means that there is a level of > consciousness that is less than full, yet is also not completely absent (as > that would merely be "unconscious", thus not justifying "fully" to > distinguish from other levels of being conscious). > > >> I agree the definition of part is really all an invention of our minds, >> when the whole universe can be seen as one causally connected system. Is it >> correct to view a LLM as one thing, when it is really an interaction of >> many billions of individual parts (the parameters) of the model? 
>> > > Like most (maybe all: I haven't yet thoroughly considered exceptions) > things, a LLM can alternatively be viewed as a single thing or as a > collection of smaller things in a certain configuration. > > >> I lack the information to judge. My answer would have to be based on an >>> evaluation of the bots, which would take me substantial time to conduct. >>> >> >> What would you look for in the bots to make your conclusion? >> > > I would not have a firm fixed list of criteria prior to the evaluation. > Any attempt to do so would almost certainly miss important criteria, which > would only become apparent during the evaluation. > > Again I point to the subject line of the emails in which this discussion >>> is happening, which clearly posits that "conscious" is a binary quality - >>> that something either is, or is not, conscious with no middle ground. So >>> first one would need to qualify what "to any degree" allows. For instance, >>> is merely sensing and reacting directly to sensory input - which, without >>> evaluating, I suspect your bots can do because that has been a core >>> function in many simulations like this - "conscious to some degree" but not >>> "conscious" in the absolute sense? >>> >> >> I think it is an all-or-nothing thing proposition. >> > > And that would seem to be the core of our disagreement. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 21:36:01 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 15:36:01 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 3:20?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bender's point is not that ChatGBT is incapable of generating sensible >> sentences about sticks and bears. It is that these LLMs don't know the >> meanings of any words whatsoever. Confronted with a word it has never seen, >> it must do a statistical analysis to try to find probable next words, never >> knowing what any of them mean. >> > > You keep insisting that. But you don't address the fact that our brains > learn meaning and understanding from tapping into what amounts to a pure > information channel. > The brain is a mysterious organ and neuroscience is still in its infancy. All I can say is that one does not learn the meaning of words only by looking at how they are arranged in patterns, which is all these language models do. They've machine-learned the syntax of language -- the rules that define how these word-symbols arrange in patterns -- and can manipulate and assemble them in patterns that follow the same rules -- but I disagree with you that from these rules they can know the meanings of the symbols. -gts > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jasonresch at gmail.com Thu Mar 23 21:46:29 2023
From: jasonresch at gmail.com (Jason Resch)
Date: Thu, 23 Mar 2023 17:46:29 -0400
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: 
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

On Thu, Mar 23, 2023, 5:36 PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 3:20 PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Bender's point is not that ChatGPT is incapable of generating sensible >>> sentences about sticks and bears. It is that these LLMs don't know the >>> meanings of any words whatsoever. Confronted with a word it has never seen, >>> it must do a statistical analysis to try to find probable next words, never >>> knowing what any of them mean. >>> >> >> You keep insisting that. But you don't address the fact that our brains >> learn meaning and understanding from tapping into what amounts to a pure >> information channel. >> > > The brain is a mysterious organ and neuroscience is still in its infancy. > I agree we don't fully understand the brain. Nonetheless, that the brain succeeds in parsing meaning from a data set shows it's possible. All I can say is that one does not learn the meaning of words only by > looking at how they are arranged in patterns, > You said it is mysterious how the brain does it, so can we say with any confidence this isn't exactly what the brain does? which is all these language models do. They've machine-learned the syntax > of language -- the rules that define how these word-symbols arrange in > patterns -- and can manipulate and assemble them in patterns that follow > the same rules -- but I disagree with you that from these rules they can > know the meanings of the symbols. > It understands far more than syntax. It understands facts about the world, geography, physics, math, computer programming, chemistry, biology, and mental development of humans. The test of understanding is checking if it can deploy its knowledge in new situations to solve new problems. If it can do that, then it understands. Based on exchanges I have seen, I believe it can. Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From atymes at gmail.com Thu Mar 23 21:59:02 2023
From: atymes at gmail.com (Adrian Tymes)
Date: Thu, 23 Mar 2023 14:59:02 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I posed this exact question to ChatGPT ChatGPT has references for what bears and sticks are. This may make Bender's octopus an irrelevant example, as any serious real-world AI will have at least as many common references as most people. When it does not have a necessary reference, it can ask, just like any person could. "What's a bear?" sounds silly because bears are a common reference. "What's a blood bear?" sounds more plausible to ask, if one said one was being attacked by blood bears. It can also try to make something up. Some might do so as well as an average person could, but this is not recommended even for humans in most cases. That the AI might be bad at it is just one more reason not to do so.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Thu Mar 23 22:20:08 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 15:20:08 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> From: extropy-chat On Behalf Of Adrian Tymes via extropy-chat Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat > wrote: I posed this exact question to ChatGPT >?ChatGPT has references for what bears and sticks are? Ja, there was something kinda cool about the exchange. ChatGPT was told ??I am being attacked by an angry bear?? It somehow understood that the interlocutor was not at that moment in the process of being devoured while pecking away on his computer for advice on a survival strategy (the subject of my silly riff.) It understood it was being asked about a theoretical situation rather than what it was literally told. That kinda implies a form of understanding, or specifically: a very effective use of language models to create the illusion of understanding. I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 22:25:17 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 18:25:17 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: On Thu, Mar 23, 2023, 6:21 PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > > > *From:* extropy-chat *On Behalf > Of *Adrian Tymes via extropy-chat > *Subject:* Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > > > On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > I posed this exact question to ChatGPT > > > > >?ChatGPT has references for what bears and sticks are? > > > > Ja, there was something kinda cool about the exchange. ChatGPT was told > ??I am being attacked by an angry bear?? > > > > It somehow understood that the interlocutor was not at that moment in the > process of being devoured while pecking away on his computer for advice on > a survival strategy (the subject of my silly riff.) It understood it was > being asked about a theoretical situation rather than what it was literally > told. > > > > That kinda implies a form of understanding, or specifically: a very > effective use of language models to create the illusion of understanding. > > > > I really don?t think it thinks, but it makes us think it thinks. ChatGPT > is wicked cool. > Is there such a thing as "simulated multiplication" or would we say simulated multiplication is the same thing as real multiplication? Is there such a thing as "simulated thinking"? Jason > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gordon.swobe at gmail.com Thu Mar 23 22:28:55 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 16:28:55 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 3:52?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > It understands far more than syntax. It understands facts about the world, > geography, physics, math, computer programming, chemistry, biology, and > mental development of humans. > > The test of understanding is checking if it can deploy it's knowledge in > new situations to solve new problems. If it can do that, then it > understands. Based on exchanges I have seen, I believe it can. > I think what you really mean is that it generates strings of text that *you* understand and find useful and meaningful. According to my wristwatch, it is 4:30 PM. That is useful and meaningful. But does my watch really know what time it is? No, it is 4:30 PM according to me using my mindless watch as a tool. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 22:29:55 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 15:29:55 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Others had argued on this thread that it was impossible to extract meaning > from something that lacked referents. it seems you and I agree that it is > possible to extract meaning and understanding from a data set alone, by > virtue of the patterns and correlations present within that data. > With the caveat that referents are themselves data, so if we include appropriate referents in that data set then yes. Referents are often referenced by their correlations and matching patterns. > > I am not convinced a massive brain is required to learn meaning. My AI > bots start with completely randomly weighted neural networks of just a > dozen or so neurons. In just a few generations they learn that "food is > good" and "poison is bad". Survival fitness tests are all that is needed > for them to learn that lesson. Do their trained neural nets reach some > understanding that green means good and red means bad? They certainly > behave as if they have that understanding, but the only data they are given > is "meaningless numbers" representing inputs to their neurons. > > > >> Just because one type of AI could do a task does not mean that all AIs >> are capable of that task. You keep invoking the general case, where an AI >> that is capable is part of a superset, then wondering why there is >> disagreement about a specific case, discussing a more limited subset that >> only contains other AIs. >> > > There was a general claim that no intelligence, however great, could learn > meaning from a dictionary (or other data set like Wikipedia or list of > neural impulses timings) as these data "lack referents". If we agree that > an appropriate intelligence can attain meaning and understanding then we > can drop this point. > I recall that the claim was about "no (pure) LLM", not "no (general) intelligence". Also there is a substantial distinction between a dictionary or Wikipedia, and any list of neural impulses. 
A pure LLM might only be able to consult a dictionary or Wikipedia (pictures included); a general intelligence might be able to process neural impulses. > There is no task requiring intelligence that a sufficiently large LLM > could not learn to do as part of learning symbol prediction. Accordingly, > saying a LLM is a machine that could never learn to do X, or understand Y, > is a bit like someone saying a particular Turing machine could never run > the program Z. > And indeed there are some programs that certain Turing machines are unable to run. For example, if a Turing machine contains no randomizer and no way to access random data, it is unable to run a program where one of the steps requires true randomness. Much has been written about the limits of psuedorandom generators; I defer to that literature to establish that those are meaningfully distinct from truly random things, at least under common circumstances of significance. One problem is defining when an AI has grown to be more than just a LLM. What is just a LLM, however large, and what is not just a LLM (whether or not it includes a LLM)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 22:34:14 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 15:34:14 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 2:39?PM William Flynn Wallace via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Are you excluding basic physical drives here Adrian ,under the term > 'initiative'? Surely they will get you off your butt and moving towards > targets. bill w > For the purposes of that answer, the basic physical drives were attributed to parts of the body outside the mind. By contrast, it is the mind that has volition and initiative. Data about basic physical drives is among the inputs to the mind. That said, the presence of this input is among the differences between a human mind and ChatGPT.. It is not the only difference, but it is there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Thu Mar 23 22:43:45 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 16:43:45 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: To do the octopus experiment with ChatGPT, one must first prompt it to write only about the things an octopus under the sea would know. I specifically told it that it had never heard of bears or sticks or land, and then asked it for advice as I am being attacked by a bear. Predictably, it had no advice The point of the octopus in Bender's thought experiment is that it has no connection to the actual world in which we have referents for our words. The island represents the actual world. -gtsl -------------- next part -------------- An HTML attachment was scrubbed... 
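Stuart put the bear question to ChatGPT directly, while Gordon first confined it to an octopus persona that has never heard of bears, sticks, or land; the divergent results come largely from that difference in setup. A rough sketch of the persona-restricted setup follows. It again assumes the early-2023 openai package and ChatCompletion endpoint, and the system-prompt wording is my paraphrase of Gordon's description, not his actual prompt.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Paraphrase of the restriction Gordon describes: the model may only use
# knowledge a deep-sea octopus eavesdropping on a telegraph cable could have.
system_prompt = (
    "You are a hyperintelligent deep-sea octopus. You have only ever observed "
    "telegraph messages passing along an undersea cable. You have never seen "
    "land and have never heard of bears, sticks, coconuts, or rope. Answer "
    "only with knowledge such an octopus could have."
)

reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I'm being attacked by an angry bear. "
                                    "Help me figure out how to defend myself. "
                                    "I've got some sticks."},
    ],
)
print(reply["choices"][0]["message"]["content"])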
URL: From gadersd at gmail.com Thu Mar 23 23:03:49 2023 From: gadersd at gmail.com (Gadersd) Date: Thu, 23 Mar 2023 19:03:49 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. Spike, if I understand you correctly you believe that ChatGPT doesn?t think because its conversations with itself and other chatbots seem to lack original thought and creativity. One important thing to be aware of is that ChatGPT wasn?t trained to ask questions, only to answer questions. These models are specifically trained to be passive and responsive rather than assertive. Companies are afraid of releasing chatbots with personality as personality leads to unpredictability and unpredictability is bad for business. Given these factors it is understandable that ChatGPT?s conversations with itself would lack flavor. I think we should wait until someone releases a model of GPT4 caliber that hasn?t been lobotomized before arriving at conclusions. > On Mar 23, 2023, at 6:20 PM, spike jones via extropy-chat wrote: > > > > From: extropy-chat > On Behalf Of Adrian Tymes via extropy-chat > Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat > wrote: >> I posed this exact question to ChatGPT > > > >?ChatGPT has references for what bears and sticks are? > > Ja, there was something kinda cool about the exchange. ChatGPT was told ??I am being attacked by an angry bear?? > > It somehow understood that the interlocutor was not at that moment in the process of being devoured while pecking away on his computer for advice on a survival strategy (the subject of my silly riff.) It understood it was being asked about a theoretical situation rather than what it was literally told. > > That kinda implies a form of understanding, or specifically: a very effective use of language models to create the illusion of understanding. > > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Thu Mar 23 23:09:23 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 19:09:23 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Others had argued on this thread that it was impossible to extract >> meaning from something that lacked referents. it seems you and I agree that >> it is possible to extract meaning and understanding from a data set alone, >> by virtue of the patterns and correlations present within that data. >> > > With the caveat that referents are themselves data, so if we include > appropriate referents in that data set then yes. Referents are often > referenced by their correlations and matching patterns. 
> I don't understand what you are saying here. > >> >> I am not convinced a massive brain is required to learn meaning. My AI >> bots start with completely randomly weighted neural networks of just a >> dozen or so neurons. In just a few generations they learn that "food is >> good" and "poison is bad". Survival fitness tests are all that is needed >> for them to learn that lesson. Do their trained neural nets reach some >> understanding that green means good and red means bad? They certainly >> behave as if they have that understanding, but the only data they are given >> is "meaningless numbers" representing inputs to their neurons. >> >> >> >>> Just because one type of AI could do a task does not mean that all AIs >>> are capable of that task. You keep invoking the general case, where an AI >>> that is capable is part of a superset, then wondering why there is >>> disagreement about a specific case, discussing a more limited subset that >>> only contains other AIs. >>> >> >> There was a general claim that no intelligence, however great, could >> learn meaning from a dictionary (or other data set like Wikipedia or list >> of neural impulses timings) as these data "lack referents". If we agree >> that an appropriate intelligence can attain meaning and understanding then >> we can drop this point. >> > > I recall that the claim was about "no (pure) LLM", not "no (general) > intelligence". > My original claim was for an intelligent alien species. > Also there is a substantial distinction between a dictionary or > Wikipedia, and any list of neural impulses. A pure LLM might only be able > to consult a dictionary or Wikipedia (pictures included); a general > intelligence might be able to process neural impulses. > In all cases it's a big file of 1s and 0s containing patterns and correlations which can be learned. > >> There is no task requiring intelligence that a sufficiently large LLM >> could not learn to do as part of learning symbol prediction. Accordingly, >> saying a LLM is a machine that could never learn to do X, or understand Y, >> is a bit like someone saying a particular Turing machine could never run >> the program Z. >> > > And indeed there are some programs that certain Turing machines are unable > to run. For example, if a Turing machine contains no randomizer and no way > to access random data, it is unable to run a program where one of the steps > requires true randomness. > Randomness is uncomputable. And I would go so far to say say true randomness doesn't exist, there is only information which cannot be guessed or predicted by certain parties. This is because true randomness requires creation of information but creation of information violates the principal of conservation of information in quantum mechanics. In any case my point wasn't that everything is computable, it's that the universality of computation means any Turing machine can run any program that any other Turing machine can run. The universality of neural networks likewise implies not that every function can be learned, but any function that a neutral network can learn can be learned by any neural network of sufficient size. Our brains is fundamentally a neural network. If our brains can learn to understand meaning then this should be in the scope of possibility for other neural networks. 
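The tiny evolving bots described above need very little machinery, which is the point. The sketch below is a deliberately minimal stand-in for that kind of experiment, with every name and number my own invention rather than Jason's actual code: randomly weighted three-parameter "brains", a fitness rule that rewards eating green (food) and punishes eating red (poison), and selection with mutation over generations.

import random

def brain_output(weights, stimulus):
    # A "brain" is one weight per input channel plus a trailing bias term.
    return sum(w * x for w, x in zip(weights, stimulus)) + weights[-1]

def fitness(weights, trials=50):
    score = 0
    for _ in range(trials):
        is_food = random.random() < 0.5
        stimulus = [1.0, 0.0] if is_food else [0.0, 1.0]  # green vs. red channel
        if brain_output(weights, stimulus) > 0:           # the bot "eats" the item
            score += 1 if is_food else -1                 # food good, poison bad
    return score

def evolve(pop_size=20, generations=30):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]
        # Offspring are mutated copies of randomly chosen survivors.
        children = [[w + random.gauss(0, 0.2) for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    best = max(population, key=fitness)
    print("best weights (green, red, bias):", [round(w, 2) for w in best])
    print("fitness of best bot over 50 trials:", fitness(best))

random.seed(0)
evolve()

Whether a net that reliably eats green and avoids red thereby "understands" anything is the contested question; the sketch only shows how little structure the behavior requires.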
Much has been written about the limits of psuedorandom generators; I defer > to that literature to establish that those are meaningfully distinct from > truly random things, at least under common circumstances of significance. > I am quite familiar with pseudorandom number generators. They are a bit of a fascination of mine. > One problem is defining when an AI has grown to be more than just a LLM. > What is just a LLM, however large, and what is not just a LLM (whether or > not it includes a LLM)? > That's a good question. I am not sure it can be so neatly defined. For example, is a LLM trained on some examples of ASCII art considered having been exposed to visual stimuli? Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Mar 23 23:12:00 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 16:12:00 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> Message-ID: On Thu, Mar 23, 2023 at 3:50?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > To do the octopus experiment with ChatGPT, one must first prompt it to > write only about the things an octopus under the sea would know. I > specifically told it that it had never heard of bears or sticks or land, > and then asked it for advice as I am being attacked by a bear. Predictably, > it had no advice > > The point of the octopus in Bender's thought experiment is that it has no > connection to the actual world in which we have referents for our words. > The island represents the actual world. > A variant that some who know younger folks might know, and others get to experience a lack of referents for: inklings. If the term is utterly unfamiliar to you, stop reading here and imagine if someone were to tell you they were being attacked by inklings, then ask for advice. What would you say in response? Inserting a bit of spoiler space for the above-mentioned audience. Inklings are from the Splatoon series of video games, which posit a future Earth where the sea levels have dramatically risen and almost all mammalian life is gone (mostly dead, some may have fled). Various forms of marine life have evolved into intelligent forms. Inklings are sentient humanoids (usually: they have a limited ability to shapeshift) that evolved from squids. They have invented a wide variety of weaponry that projects ink their bodies naturally manufacture. Ironically, their bodies do not handle immersion in water well, so if you are something that can swim and you are near a large body of water, swimming away may be an effective defense. Of course, these are fictional creatures; if any real person is being attacked by an inkling, either they are seeing things, and thus are in no actual danger, or their attackers are cosplaying, which suggests the "attack" may be less than entirely serious. Someone with no reference for what an inkling is, would be very unlikely to guess that answer. -------------- next part -------------- An HTML attachment was scrubbed... 
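Returning to the pseudorandomness aside a few messages up: the practical distinction Adrian and Jason are circling is that a pseudorandom generator is a deterministic function of its seed, so anyone holding the seed can reproduce and predict the entire stream, which genuinely unpredictable input does not allow. A two-generator demonstration with Python's standard library:

import random

a = random.Random(42)
b = random.Random(42)

# Two generators seeded identically emit identical "random" streams:
print([a.randint(0, 9) for _ in range(10)])
print([b.randint(0, 9) for _ in range(10)])  # same list as the line above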
URL: From spike at rainier66.com Thu Mar 23 23:19:47 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 16:19:47 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> Message-ID: <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> From: extropy-chat On Behalf Of Gadersd via extropy-chat Sent: Thursday, 23 March, 2023 4:04 PM To: ExI chat list Cc: Gadersd Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. >?Spike, if I understand you correctly you believe that ChatGPT doesn?t think because its conversations with itself and other chatbots seem to lack original thought and creativity. One important thing to be aware of is that ChatGPT wasn?t trained to ask questions, only to answer questions. These models are specifically trained to be passive and responsive rather than assertive. Companies are afraid of releasing chatbots with personality as personality leads to unpredictability and unpredictability is bad for business. Given these factors it is understandable that ChatGPT?s conversations with itself would lack flavor. I think we should wait until someone releases a model of GPT4 caliber that hasn?t been lobotomized before arriving at conclusions. Agree with all Gadersd. This was really at the heart of the question I began asking soon after ChatGPT showed up in November: is this something that any prole can download and can I get to the source code, and perhaps modify and compile a version of it myself, and can I offer it some kind of plug-in consisting of stuff that I know about which the internet does not, such as what is the smallest number which can be expressed as the sum of two cubes three different ways, and how can matrix superposition be used to predict a time interval in which the next record Mersenne prime will be discovered, and my genealogy. On that last bit, I want to have a version of ChatGPT in which I can download my DNA-based genealogy research, then have ChatGPT ride along in my phone. Since my phone already knows where I am (from GPS) and has maps in it (from Google) I want to create a chatbot which can just jump in when a prole is out riding a motorcycle or driving somewhere on vacation, with comments such as: Hey spike, me lad! If you have about an hour, take a left about 500 meters ahead and I will show you where your great great grandmother?s brother is buried. That kinda thing. That would be cool. I think it is coming. There?s a lotta cool applications we could hang on ChatGPT, but we hafta be able to train it ourselves, or let it be mostly trained by whoever owns it, and we offer it plugins or files full of stuff we care about. spike -------------- next part -------------- An HTML attachment was scrubbed... 
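Taking the first of spike's example questions at face value: the smallest number expressible as the sum of two positive cubes in three different ways is generally given as 87539319, the third taxicab number, and a short brute-force search settles it without asking any chatbot. The code below is my own sketch, plain Python only.

from collections import defaultdict

def smallest_sum_of_two_cubes(ways, limit=500):
    # Smallest n <= limit**3 expressible as a**3 + b**3 (0 < a <= b) in `ways` ways.
    reps = defaultdict(list)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            reps[a**3 + b**3].append((a, b))
    # Only sums up to limit**3 are guaranteed to have every representation
    # counted, so restrict the candidates to that range.
    candidates = [n for n, pairs in reps.items()
                  if len(pairs) >= ways and n <= limit**3]
    n = min(candidates)
    return n, reps[n]

n, pairs = smallest_sum_of_two_cubes(3)
print(n, pairs)  # expected: 87539319 = 167^3+436^3 = 228^3+423^3 = 255^3+414^3

The same search with ways=2 returns Hardy and Ramanujan's 1729.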
URL: From jasonresch at gmail.com Thu Mar 23 23:20:40 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 19:20:40 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> Message-ID: On Thu, Mar 23, 2023, 7:04 PM Gadersd via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I really don?t think it thinks, but it makes us think it thinks. ChatGPT > is wicked cool. > > > Spike, if I understand you correctly you believe that ChatGPT doesn?t > think because its conversations with itself and other chatbots seem to lack > original thought and creativity. One important thing to be aware of is that > ChatGPT wasn?t trained to ask questions, only to answer questions. These > models are specifically trained to be passive and responsive rather than > assertive. Companies are afraid of releasing chatbots with personality as > personality leads to unpredictability and unpredictability is bad for > business. Given these factors it is understandable that ChatGPT?s > conversations with itself would lack flavor. I think we should wait until > someone releases a model of GPT4 caliber that hasn?t been lobotomized > before arriving at conclusions. > Good point. I think this is worth a read: https://arxiv.org/pdf/2303.12712.pdf Jason > On Mar 23, 2023, at 6:20 PM, spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > > *From:* extropy-chat *On Behalf > Of *Adrian Tymes via extropy-chat > *Subject:* Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) > > On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > I posed this exact question to ChatGPT > > > >?ChatGPT has references for what bears and sticks are? > > Ja, there was something kinda cool about the exchange. ChatGPT was told > ??I am being attacked by an angry bear?? > > It somehow understood that the interlocutor was not at that moment in the > process of being devoured while pecking away on his computer for advice on > a survival strategy (the subject of my silly riff.) It understood it was > being asked about a theoretical situation rather than what it was literally > told. > > That kinda implies a form of understanding, or specifically: a very > effective use of language models to create the illusion of understanding. > > I really don?t think it thinks, but it makes us think it thinks. ChatGPT > is wicked cool. > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Thu Mar 23 23:23:41 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 16:23:41 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Others had argued on this thread that it was impossible to extract >>> meaning from something that lacked referents. it seems you and I agree that >>> it is possible to extract meaning and understanding from a data set alone, >>> by virtue of the patterns and correlations present within that data. >>> >> >> With the caveat that referents are themselves data, so if we include >> appropriate referents in that data set then yes. Referents are often >> referenced by their correlations and matching patterns. >> > > I don't understand what you are saying here. > Do you agree that referents are data? If not, why not? If they are data, then they - as data - can be included in a data set. You talked about "a data set alone", without specifying what that data set was. In other words, that there exists such a data set. A data set that includes referents, is a data set that includes referents. If it is possible to extract meaning from certain referents, then it is possible to extract meaning from a data set that includes those referents - specifically by extracting meaning from those referents, regardless of what else may or may not also be in that data set. This is probably not what you meant to say. However, in practice, many data sets will include referents...even if it may take a while to find them. Again I refer to the "first contact" problem. How does someone who encounters a people speaking an utterly different language, with no pre-existing translators or translations to reference, begin to establish communication with these people? Obviously it is possible, as human beings have done this very thing multiple times throughout history. Consider that, and you will have the beginnings of how an AI that may include a LLM can come to truly understand words. By definition of the problem, the answer lies outside of just words alone - and thus, outside of what something that is just a LLM can do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Thu Mar 23 23:48:25 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 16:48:25 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: People make a big deal of referents because they think without direct experiences of things like stones, trees or other things in the world an AI cannot really understand, in particular NLMs. But GPT-4 can now understand images anyway, you can easily combine understanding images and language, images are a form of language anyway. These arguments are trite, and they are all an excuse to give humans some kind of priority over other intelligences, when we are just more sophisticated NLMs ourselves (with other information processing modules added to it). It seems to me that we now have all the ingredients for a true AGI to emerge soon, it is just a question of increasing their training parameters and maybe a 10x or at most 100x higher computational power. 
That can be achieved in 3-4 years max given the trend in parameter training and computational power observed in the last few years. Soon there will be no excuses for human intelligence exceptionalists. Giovanni On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> Others had argued on this thread that it was impossible to extract >>> meaning from something that lacked referents. it seems you and I agree that >>> it is possible to extract meaning and understanding from a data set alone, >>> by virtue of the patterns and correlations present within that data. >>> >> >> With the caveat that referents are themselves data, so if we include >> appropriate referents in that data set then yes. Referents are often >> referenced by their correlations and matching patterns. >> > > I don't understand what you are saying here. > > > >> >>> >>> I am not convinced a massive brain is required to learn meaning. My AI >>> bots start with completely randomly weighted neural networks of just a >>> dozen or so neurons. In just a few generations they learn that "food is >>> good" and "poison is bad". Survival fitness tests are all that is needed >>> for them to learn that lesson. Do their trained neural nets reach some >>> understanding that green means good and red means bad? They certainly >>> behave as if they have that understanding, but the only data they are given >>> is "meaningless numbers" representing inputs to their neurons. >>> >>> >>> >>>> Just because one type of AI could do a task does not mean that all AIs >>>> are capable of that task. You keep invoking the general case, where an AI >>>> that is capable is part of a superset, then wondering why there is >>>> disagreement about a specific case, discussing a more limited subset that >>>> only contains other AIs. >>>> >>> >>> There was a general claim that no intelligence, however great, could >>> learn meaning from a dictionary (or other data set like Wikipedia or list >>> of neural impulses timings) as these data "lack referents". If we agree >>> that an appropriate intelligence can attain meaning and understanding then >>> we can drop this point. >>> >> >> I recall that the claim was about "no (pure) LLM", not "no (general) >> intelligence". >> > > My original claim was for an intelligent alien species. > > >> Also there is a substantial distinction between a dictionary or >> Wikipedia, and any list of neural impulses. A pure LLM might only be able >> to consult a dictionary or Wikipedia (pictures included); a general >> intelligence might be able to process neural impulses. >> > > In all cases it's a big file of 1s and 0s containing patterns and > correlations which can be learned. > > >> >>> There is no task requiring intelligence that a sufficiently large LLM >>> could not learn to do as part of learning symbol prediction. Accordingly, >>> saying a LLM is a machine that could never learn to do X, or understand Y, >>> is a bit like someone saying a particular Turing machine could never run >>> the program Z. >>> >> >> And indeed there are some programs that certain Turing machines are >> unable to run. 
For example, if a Turing machine contains no randomizer and >> no way to access random data, it is unable to run a program where one of >> the steps requires true randomness. >> > > Randomness is uncomputable. And I would go so far to say say true > randomness doesn't exist, there is only information which cannot be guessed > or predicted by certain parties. This is because true randomness requires > creation of information but creation of information violates the principal > of conservation of information in quantum mechanics. > > In any case my point wasn't that everything is computable, it's that the > universality of computation means any Turing machine can run any program > that any other Turing machine can run. The universality of neural networks > likewise implies not that every function can be learned, but any function > that a neutral network can learn can be learned by any neural network of > sufficient size. Our brains is fundamentally a neural network. If our > brains can learn to understand meaning then this should be in the scope of > possibility for other neural networks. > > Much has been written about the limits of psuedorandom generators; I defer >> to that literature to establish that those are meaningfully distinct from >> truly random things, at least under common circumstances of significance. >> > > I am quite familiar with pseudorandom number generators. They are a bit of > a fascination of mine. > > >> One problem is defining when an AI has grown to be more than just a LLM. >> What is just a LLM, however large, and what is not just a LLM (whether or >> not it includes a LLM)? >> > > That's a good question. I am not sure it can be so neatly defined. For > example, is a LLM trained on some examples of ASCII art considered having > been exposed to visual stimuli? > > Jason > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 00:20:10 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 18:20:10 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. I agree 100%. Also, like Bender and the other author of this paper, I object to the language I often see in discussions like these we have on ExI about these subjects. quoted the paper cited in a previous message and below: -- "Large LMs: Hype and analysis Publications talking about the application of large LMs to meaning-sensitive tasks tend to describe the models with terminology that, if interpreted at face value, is misleading. Here is a selection from academically-oriented pieces (emphasis added): (1) In order to train a model that *understands* sentence relationships, we pre-train for a binarized next sentence prediction task. (Devlin et al., 2019) (2) Using BERT, a pretraining language model, has been successful for single-turn machine *comprehension*. . 
.(Ohsugi et al., 2019) (3) The surprisingly strong ability of these models to *re-call factual knowledge* without any ?ne-tuning demonstrates their potential as unsupervised open-domain QA systems. (Petroni et al., 2019)-- In linguistics and epistemology and in philosophy in general, usually terms like "understand" and "comprehend" and "recall factual knowledge" have meanings that are not applicable to these languuge models. They do not actually comprehend or understand anything whatosever. They only make us think they do. https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 00:22:09 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 20:22:09 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> Others had argued on this thread that it was impossible to extract >>>> meaning from something that lacked referents. it seems you and I agree that >>>> it is possible to extract meaning and understanding from a data set alone, >>>> by virtue of the patterns and correlations present within that data. >>>> >>> >>> With the caveat that referents are themselves data, so if we include >>> appropriate referents in that data set then yes. Referents are often >>> referenced by their correlations and matching patterns. >>> >> >> I don't understand what you are saying here. >> > > Do you agree that referents are data? If not, why not? > What is a referent? My understanding was that according to you and Adrian, things like dictionaries and Wikipedia text lack referents since they are just bodies of text. My belief is that it doesn't matter. If there are scruitble patterns present in the data, then an intelligence can find them and figure out how to understand them. > If they are data, then they - as data - can be included in a data set. > > You talked about "a data set alone", without specifying what that data set > was. In other words, that there exists such a data set. > > A data set that includes referents, is a data set that includes referents. > For clarity, could you give an example of a data set that includes referents? I just want to ensure we're talking about the same thing. > If it is possible to extract meaning from certain referents, then it is > possible to extract meaning from a data set that includes those referents - > specifically by extracting meaning from those referents, regardless of what > else may or may not also be in that data set. > > This is probably not what you meant to say. However, in practice, many > data sets will include referents...even if it may take a while to find them. > > Again I refer to the "first contact" problem. How does someone who > encounters a people speaking an utterly different language, with no > pre-existing translators or translations to reference, begin to establish > communication with these people? 
> I gave an example of this, assuming I happened upon a dictionary in a language I didn't recognize, I showed how you could exploit mathematical definitions to find important constants, decode the numerical system, then the periodic table, and work your way up various elements and compounds. That would provide enough of a scaffolding to work out the rest. Lke a puzzle, it gets easier with each next word that is solved. Obviously it is possible, as human beings have done this very thing > multiple times throughout history. Consider that, and you will have the > beginnings of how an AI that may include a LLM can come to truly understand > words. By definition of the problem, the answer lies outside of just words > alone - and thus, outside of what something that is just a LLM can do. > If you look at this paper: https://arxiv.org/pdf/2303.12712.pdf You will see early versions of gpt-4, despite not yet being trained on images at the time, was still able to draw images of various objects in various graphical languages. This shows that the LLM can learn more than just words. It somehow gained an ability to picture things in its head. Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 00:31:50 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 20:31:50 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: On Thu, Mar 23, 2023, 8:22 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > > > I really don?t think it thinks, but it makes us think it thinks. > ChatGPT is wicked cool. > > I agree 100%. Also, like Bender and the other author of this paper, I > object to the language I often see in discussions like these we have on ExI > about these subjects. > > quoted the paper cited in a previous message and below: > > -- > "Large LMs: Hype and analysis > Publications talking about the application of large LMs to > meaning-sensitive tasks tend to describe the models with terminology that, > if interpreted at face value, is misleading. Here is a selection from > academically-oriented pieces (emphasis added): > > (1) In order to train a model that *understands* sentence relationships, > we pre-train for a binarized next sentence prediction task. (Devlin et al., > 2019) > > (2) Using BERT, a pretraining language model, has been successful for > single-turn machine *comprehension*. . .(Ohsugi et al., 2019) > > (3) The surprisingly strong ability of these models to *re-call factual > knowledge* without any ?ne-tuning demonstrates their potential as > unsupervised open-domain QA systems. (Petroni et al., 2019)-- > > In linguistics and epistemology and in philosophy in general, usually > terms like "understand" and "comprehend" and "recall factual knowledge" > have meanings that are not applicable to these languuge models. They do not > actually comprehend or understand anything whatosever. They only make us > think they do. 
> > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data > If that's true how do I know anyone else on this list is actually comprehending or understanding anything? Jason > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 00:44:16 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 18:44:16 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: On Thu, Mar 23, 2023 at 6:35?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 8:22 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >> > I really don?t think it thinks, but it makes us think it thinks. >> ChatGPT is wicked cool. >> >> I agree 100%. Also, like Bender and the other author of this paper, I >> object to the language I often see in discussions like these we have on ExI >> about these subjects. >> >> quoted the paper cited in a previous message and below: >> >> -- >> "Large LMs: Hype and analysis >> Publications talking about the application of large LMs to >> meaning-sensitive tasks tend to describe the models with terminology that, >> if interpreted at face value, is misleading. Here is a selection from >> academically-oriented pieces (emphasis added): >> >> (1) In order to train a model that *understands* sentence relationships, >> we pre-train for a binarized next sentence prediction task. (Devlin et al., >> 2019) >> >> (2) Using BERT, a pretraining language model, has been successful for >> single-turn machine *comprehension*. . .(Ohsugi et al., 2019) >> >> (3) The surprisingly strong ability of these models to *re-call factual >> knowledge* without any ?ne-tuning demonstrates their potential as >> unsupervised open-domain QA systems. (Petroni et al., 2019)-- >> >> In linguistics and epistemology and in philosophy in general, usually >> terms like "understand" and "comprehend" and "recall factual knowledge" >> have meanings that are not applicable to these languuge models. They do not >> actually comprehend or understand anything whatosever. They only make us >> think they do. >> >> >> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data >> > > If that's true how do I know anyone else on this list is actually > comprehending or understanding anything? > You can only infer it and trust that we are not chatbots, and I agree it is a problem and likely to become a very serious problem in the near future. I already see a ChatGPT persona on twitter, though the operator is not trying to hide it. I have another friend who quite literally fell in love with a chatbot based on the previous version of ChatGPT. He assigned her her own twitter account. 
When I told him on facebook that he was nuts to think that his chatbot "girlfriend" really loved him, he became extremely angry, called me an asshole for saying such things about "her kind," and unfriended me. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike at rainier66.com Fri Mar 24 00:46:17 2023 From: spike at rainier66.com (spike at rainier66.com) Date: Thu, 23 Mar 2023 17:46:17 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> <022e01d95dd5$a198c060$e4ca4120$@rainier66.com> <7A175276-9A9B-431C-B76D-3BF8DA8284F3@gmail.com> <02ad01d95ddd$f6fd47d0$e4f7d770$@rainier66.com> Message-ID: <032c01d95dea$0c0a91d0$241fb570$@rainier66.com> From: Gordon Swobe Sent: Thursday, 23 March, 2023 5:20 PM To: ExI chat list Cc: spike at rainier66.com Subject: Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT) On Thu, Mar 23, 2023 at 5:23?PM spike jones via extropy-chat > wrote: > I really don?t think it thinks, but it makes us think it thinks. ChatGPT is wicked cool. I agree 100%. Also, ?In linguistics and epistemology and in philosophy in general, usually terms like "understand" and "comprehend" and "recall factual knowledge" have meanings that are not applicable to these languuge models. They do not actually comprehend or understand anything whatosever. They only make us think they do. https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data -gts Note also that our notion of how to define machine intelligence has changed dramatically. By the definition Turing initially proposed, ChatGPT passes as intelligence. As you have patiently endured for so long, you already know my personal obsession with all this is all about creating digital companions for the elderly, for I have witnessed firsthand in a most heartbreaking way elderly people who sit alone in a room full of other elderly people who are similarly alone. Their minds decline rapidly from lack of stimulation. They?re bored. As soon as they can no longer walk and visit with the neighbors, they lack mental stimulation. My theory is that when one lands in a wheelchair in a nursing home, one can have at least a trace of enjoyment and fulfillment in one?s last few declining years. If you want something depressing, do consider those declining years in the nursing home are not always just one or two, it can be a decade (oh mercy.) So? if we could arrange for them to have a digital companion or possibly two, with whom they can carry fun interesting conversation, their brains might not decline as rapidly. Furthermore? if the elderly had that in the home, they might hold it together longer, which means they can stay in their own homes longer, which is something I do hope you ponder carefully. My own mother and my bride?s father live in homes they own, so their expenses are low. But if they move into assisted living, their expenses are appalling. Those things reeeeeaaallly cost ya. So every month they can stay in their own homes with their own pets and their own stuff and their own everything is another month of low expenses and way better quality of life, waaaaay better. So? good luck to us with GPT. Parting note: I did some experiments with my own mother to see if she would know she was talking to a computer. She didn?t. I intend to repeat the experiment with my father in law in a coupla weeks, see if he will catch on that he is conversing with software. 
I suspect he won?t either. Cool! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:04:02 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:04:02 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: Message-ID: What is this stupid example? ChatGPT can answer that question perfectly. In fact, I just fed all this to GPT-4 and he makes more sense than this Bender lady. I'm worried about people not getting that we are not different from these AI. Enough with this idiotic human exceptionalism. GPT-4: While the octopus (O) is excellent at detecting statistical patterns, it's true that it initially lacks the contextual understanding of the world that A and B possess. However, as an intelligent AI, O can learn from the conversation it has been observing. As O has been listening to A and B for some time, it's likely that it has picked up on the meanings of many words based on how they are used in different contexts. While O may not have experienced bears or sticks firsthand, it may have learned their meaning by observing how A and B use those terms in conversation. To help A, the octopus could rely on its ability to understand patterns and context. It could analyze the situations where A and B have talked about bears, sticks, or other related topics before. By doing so, O may identify potential strategies that A and B have discussed in the past or come up with a solution by combining different pieces of information it has learned. For instance, O could suggest that A use the sticks to create noise or a barrier to scare off the bear, a strategy it might have seen A and B discuss in other conversations about wildlife encounters. In this case, even though O may not have direct knowledge about bears and sticks, it could still provide helpful advice by leveraging its understanding of the patterns in A and B's conversations. However, it's important to recognize that O's success in providing relevant instructions depends on the quality and depth of information it has been able to gather from A and B's conversations. On Thu, Mar 23, 2023 at 12:42?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Emily M. Bender, a computational linguist at the University of Washington, > makes the same argument I hold as valid. Large Language Models are not > conscious or human-like as they lack referents. > > An interesting thought experiment: > > "Say that A and B, both fluent speakers of English, are independently > stranded on two uninhabited islands. They soon discover that previous > visitors to these islands have left behind telegraphs and that they can > communicate with each other via an underwater cable. A and B start happily > typing messages to each other. > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > to visit or observe the two islands, discovers a way to tap into the > underwater cable and listen in on A and B?s conversations. O knows nothing > about English initially but is very good at detecting statistical patterns. > Over time, O learns to predict with great accuracy how B will respond to > each of A?s utterances. > > Soon, the octopus enters the conversation and starts impersonating B and > replying to A. This ruse works for a while, and A believes that O > communicates as both she and B do ? with meaning and intent. Then one day A > calls out: ?I?m being attacked by an angry bear. 
Help me figure out how to > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > to help. How could it succeed? The octopus has no referents, no idea what > bears or sticks are. No way to give relevant instructions, like to go grab > some coconuts and rope and build a catapult. A is in trouble and feels > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > Emily M. Bender is very worried what will happen when we forget this. > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:06:53 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:06:53 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Absolutely, Bender's garbage comes from somebody that has not interacted with ChatGPT, or even worse GPT-4, and has not clue about how NLPs work. On Thu, Mar 23, 2023 at 12:56?PM Stuart LaForge via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > Quoting Gordon Swobe via extropy-chat : > > > Emily M. Bender, a computational linguist at the University of > Washington, > > makes the same argument I hold as valid. Large Language Models are not > > conscious or human-like as they lack referents. > > > > An interesting thought experiment: > > > > "Say that A and B, both fluent speakers of English, are independently > > stranded on two uninhabited islands. They soon discover that previous > > visitors to these islands have left behind telegraphs and that they can > > communicate with each other via an underwater cable. A and B start > happily > > typing messages to each other. > > > > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is unable > > to visit or observe the two islands, discovers a way to tap into the > > underwater cable and listen in on A and B?s conversations. O knows > nothing > > about English initially but is very good at detecting statistical > patterns. > > Over time, O learns to predict with great accuracy how B will respond to > > each of A?s utterances. > > > > Soon, the octopus enters the conversation and starts impersonating B and > > replying to A. This ruse works for a while, and A believes that O > > communicates as both she and B do ? with meaning and intent. Then one > day A > > calls out: ?I?m being attacked by an angry bear. Help me figure out how > to > > defend myself. I?ve got some sticks.? The octopus, impersonating B, fails > > to help. > > I posed this exact question to ChatGPT and got a very sensible > actually helpful answer: > --------------------------- > > Me: I?m being attacked by an angry bear. Help me figure out how to > defend myself. I?ve got some sticks. > > ChatGPT: If you find yourself in a situation where you're being > attacked by a bear, your first priority should be to get away from the > animal as quickly as possible. If you have sticks, they can be useful > in defending yourself, but only as a last resort. 
> > Here are some tips to help you defend yourself against an angry bear > using sticks: > > Hold the sticks firmly: Make sure you grip the sticks tightly with > both hands, so they don't slip out of your grasp. > > Make yourself look big: Stand up as tall as possible and raise your > arms above your head. This will make you appear larger and more > intimidating to the bear. > > Make noise: Shout, scream, and make as much noise as possible. This > will make the bear aware that you are a threat and may cause it to > back off. > > Aim for the bear's sensitive areas: If the bear charges at you, aim > for its nose or eyes with the sticks. These are sensitive areas and > may cause the bear to retreat. > > Back away slowly: If the bear backs off, slowly and carefully move > away while still facing the bear. Do not turn your back on the bear or > run away, as this may trigger it to chase you. > > Remember, using sticks to defend yourself against an angry bear is a > last resort, and you should always try to get away from the animal as > quickly as possible. > ------------------------- > > How could it succeed? The octopus has no referents, no idea what > > bears or sticks are. No way to give relevant instructions, like to go > grab > > some coconuts and rope and build a catapult. A is in trouble and feels > > duped. The octopus is exposed as a fraud." > > You Are Not a Parrot And a chatbot is not a human. And a linguist named > > Emily M. Bender is very worried what will happen when we forget this. > > > https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html > > ChatGPT's answer is far superior to Bender's very impractical answer > of "go grab some coconuts and rope and build a catapult". So not only > is ChatGPT smarter than Bender's Octopus, but seems to be smarter than > Bender herself. > > Stuart LaForge > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 01:07:16 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 19:07:16 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 6:29?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> Others had argued on this thread that it was impossible to extract >>>>> meaning from something that lacked referents. it seems you and I agree that >>>>> it is possible to extract meaning and understanding from a data set alone, >>>>> by virtue of the patterns and correlations present within that data. >>>>> >>>> >>>> With the caveat that referents are themselves data, so if we include >>>> appropriate referents in that data set then yes. Referents are often >>>> referenced by their correlations and matching patterns. >>>> >>> >>> I don't understand what you are saying here. 
>>> >> >> Do you agree that referents are data? If not, why not? >> > > > What is a referent? My understanding was that according to you and Adrian, > things like dictionaries and Wikipedia text lack referents since they are > just bodies of text. > I think it was Adrian who asked you that question. A referent as I use the term is something that exists outside of language to which a word refers. You have an apple in your hand and you say "this is an apple." The apple in your hand is the referent that corresponds to your word "apple." As for whether referents are data, it is unclear to me how referents could be data. I am inclined to say no but I am not sure what Adrain is getting at here. -gts > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:15:31 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:15:31 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Gordon, First of all ChatGPT is trained on the English language so it knows all the common words and probably in fact ALL words in the English language, in fact probably it knows common words in many languages including dead ones. If the the two people A and B never used the word stick it is because the exchanged a relatively small amount of information. Basically what Bender is saying is "if the training of a NLM is limited then the NLM would not know what certain words mean". Of course, that is the case. But that is the reason why ChatGPT is so superior to other NLMs, it was trained of a vast set of data. The example Bender gives applies to humans too. If I don't know the Italian language and I use the word bastone, good luck knowing it means stick. You can try to ask me to describe you bastone but I would have to use other words and if you know these words then you understand what bastone is, same thing with ChatGPT. THERE IS NO DIFFERENCE. Referents are not important in this case. Math is a language for god sake and it doesn't need any referents. In fact you can build an entire math system starting from the null set, that is not a real thing. Giovanni On Thu, Mar 23, 2023 at 1:25?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Bender's point is not that ChatGBT is incapable of generating sensible > sentences about sticks and bears. It is that these LLMs don't know the > meanings of any words whatsoever. Confronted with a word it has never seen, > it must do a statistical analysis to try to find probable next words, never > knowing what any of them mean. > > -gts > > On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> Quoting Gordon Swobe via extropy-chat : >> >> > Emily M. Bender, a computational linguist at the University of >> Washington, >> > makes the same argument I hold as valid. Large Language Models are not >> > conscious or human-like as they lack referents. >> > >> > An interesting thought experiment: >> > >> > "Say that A and B, both fluent speakers of English, are independently >> > stranded on two uninhabited islands. They soon discover that previous >> > visitors to these islands have left behind telegraphs and that they can >> > communicate with each other via an underwater cable. A and B start >> happily >> > typing messages to each other. 
>> > >> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >> unable >> > to visit or observe the two islands, discovers a way to tap into the >> > underwater cable and listen in on A and B?s conversations. O knows >> nothing >> > about English initially but is very good at detecting statistical >> patterns. >> > Over time, O learns to predict with great accuracy how B will respond to >> > each of A?s utterances. >> > >> > Soon, the octopus enters the conversation and starts impersonating B and >> > replying to A. This ruse works for a while, and A believes that O >> > communicates as both she and B do ? with meaning and intent. Then one >> day A >> > calls out: ?I?m being attacked by an angry bear. Help me figure out how >> to >> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >> fails >> > to help. >> >> I posed this exact question to ChatGPT and got a very sensible >> actually helpful answer: >> --------------------------- >> >> Me: I?m being attacked by an angry bear. Help me figure out how to >> defend myself. I?ve got some sticks. >> >> ChatGPT: If you find yourself in a situation where you're being >> attacked by a bear, your first priority should be to get away from the >> animal as quickly as possible. If you have sticks, they can be useful >> in defending yourself, but only as a last resort. >> >> Here are some tips to help you defend yourself against an angry bear >> using sticks: >> >> Hold the sticks firmly: Make sure you grip the sticks tightly with >> both hands, so they don't slip out of your grasp. >> >> Make yourself look big: Stand up as tall as possible and raise your >> arms above your head. This will make you appear larger and more >> intimidating to the bear. >> >> Make noise: Shout, scream, and make as much noise as possible. This >> will make the bear aware that you are a threat and may cause it to >> back off. >> >> Aim for the bear's sensitive areas: If the bear charges at you, aim >> for its nose or eyes with the sticks. These are sensitive areas and >> may cause the bear to retreat. >> >> Back away slowly: If the bear backs off, slowly and carefully move >> away while still facing the bear. Do not turn your back on the bear or >> run away, as this may trigger it to chase you. >> >> Remember, using sticks to defend yourself against an angry bear is a >> last resort, and you should always try to get away from the animal as >> quickly as possible. >> ------------------------- >> > How could it succeed? The octopus has no referents, no idea what >> > bears or sticks are. No way to give relevant instructions, like to go >> grab >> > some coconuts and rope and build a catapult. A is in trouble and feels >> > duped. The octopus is exposed as a fraud." >> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >> > Emily M. Bender is very worried what will happen when we forget this. >> > >> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >> >> ChatGPT's answer is far superior to Bender's very impractical answer >> of "go grab some coconuts and rope and build a catapult". So not only >> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >> Bender herself. 
>> >> Stuart LaForge >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:27:58 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:27:58 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: *You will see early versions of gpt-4, despite not yet being trained on images at the time, was still able to draw images of various objects in various graphical languages. This shows that the LLM can learn more than just words. It somehow gained an ability to picture things in its head.* YES ! And this is what the people that dismiss NLMs do not understand. There are many emergent properties coming out of them that are not easy to predict from a simple statistical pattern prediction. For example, it is not obvious that grammar could be inferred from these statistical patterns. In fact, if you asked language experts many years ago if NLM could derive grammar from the statistical patterns the consensus was no, they could not. They were wrong. But NLMs do understand grammar and many other things. They even developed a theory of mind. This without any referent at all. So it is logical that they can understand in other conceptual domains without the use of referents. These language experts should actually develop better theories about how language works instead of insisting on human exceptionality. Giovanni On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 6:29?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 4:11?PM Jason Resch via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> On Thu, Mar 23, 2023 at 1:02?PM Jason Resch via extropy-chat < >>>>> extropy-chat at lists.extropy.org> wrote: >>>>> >>>>>> Others had argued on this thread that it was impossible to extract >>>>>> meaning from something that lacked referents. it seems you and I agree that >>>>>> it is possible to extract meaning and understanding from a data set alone, >>>>>> by virtue of the patterns and correlations present within that data. >>>>>> >>>>> >>>>> With the caveat that referents are themselves data, so if we include >>>>> appropriate referents in that data set then yes. Referents are often >>>>> referenced by their correlations and matching patterns. >>>>> >>>> >>>> I don't understand what you are saying here. >>>> >>> >>> Do you agree that referents are data? If not, why not? >>> >> >> >> What is a referent? My understanding was that according to you and >> Adrian, things like dictionaries and Wikipedia text lack referents since >> they are just bodies of text. >> > > I think it was Adrian who asked you that question. A referent as I use the > term is something that exists outside of language to which a word refers. 
> You have an apple in your hand and you say "this is an apple." The apple in > your hand is the referent that corresponds to your word "apple." > > As for whether referents are data, it is unclear to me how referents could > be data. I am inclined to say no but I am not sure what Adrain is getting > at here. > > -gts > >> >> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Mar 24 01:29:29 2023 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 23 Mar 2023 18:29:29 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think it was Adrian who asked you that question. A referent as I use the > term is something that exists outside of language to which a word refers. > You have an apple in your hand and you say "this is an apple." The apple in > your hand is the referent that corresponds to your word "apple." > > As for whether referents are data, it is unclear to me how referents could > be data. I am inclined to say no but I am not sure what Adrain is getting > at here. > Ah, I thought you meant that the referent was the *fact that* that was an apple, rather than the referent being the apple itself. Facts are data; objects (such as apples) are not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 01:35:15 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:35:15 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: Here is an example of a language with no referents: MATH. In fact, this is the language we would use to communicate with aliens if they existed. https://www.mathnasium.com/blog/why-mathematics-is-a-language#:~:text=In%20order%20to%20be%20considered,than%20spoken%20form%20of%20communication . On Sat, Mar 18, 2023 at 2:41?AM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I think those who think LLM AIs like ChatGPT are becoming conscious or > sentient like humans fail to understand a very important point: these > software applications only predict language. They are very good at > predicting which word should come next in a sentence or question, but they > have no idea what the words mean. They do not and cannot understand what > the words refer to. In linguistic terms, they lack referents. > > Maybe you all already understand this, or maybe you have some reasons why > I am wrong. > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 01:40:16 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 19:40:16 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi wrote: > Gordon, > Basically what Bender is saying is "if the training of a NLM is limited > then the NLM would not know what certain words mean". 
> No, that is not what she is saying, though seeing as how people are misunderstanding her thought experiment, I must agree the experiment is not as clear as it could be. She is saying, or rather reminding us, that there is a clear distinction to be made between form and meaning and that these language models are trained only on form. Here is the abstract of her academic paper in which she and her colleague mention the thought experiment.
--
Abstract: The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as "understanding" language or capturing "meaning". In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. In keeping with the ACL 2020 theme of "Taking Stock of Where We've Been and Where We're Going", we argue that a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding.
--
Bender is a computational linguist at the University of Washington. I think I read that she is actually the head of the department.
the paper: https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From gsantostasi at gmail.com Fri Mar 24 01:53:08 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 18:53:08 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID:
What about math? It is a language, but it has no referents.
Mathematics can be thought of as a language, but it is fundamentally different from natural languages like English or Spanish. While natural languages primarily rely on referents to convey meaning, mathematics is an abstract and formal system used to represent relationships, quantities, and patterns. However, it's not accurate to say that mathematics has no referents at all. In mathematics, symbols and expressions serve as a means of representing abstract concepts and relationships, and these symbols can be seen as having referents, albeit abstract ones. For example, the number "2" is a symbol that represents the abstract concept of "two of something." Similarly, variables like "x" or "y" serve as placeholders for unknown or varying quantities, and they can also be thought of as having referents within the context of a specific problem or equation. In this sense, the referents in mathematics are more abstract and generalized compared to those in natural languages, which are often tied to concrete objects, experiences, or ideas. The power of mathematics comes from its ability to represent and manipulate these abstract concepts and relationships in a precise and logical way, allowing us to model and solve problems across a wide range of disciplines.
[Giovanni Santostasi] But then a NLM doesn't need to have a direct experience of 2 of something, that concept can be totally abstract, right?
Yes, you are correct. A neural language model (NLM), like ChatGPT, does not need direct experience of "two of something" to understand or use the concept. NLMs learn from vast amounts of text data, and through this process, they develop an understanding of the abstract concept of numbers, including "2," based on the patterns and relationships observed in the data.
NLMs can learn about the properties and usage of numbers in language without having any real-world experience or direct interaction with objects. They can generate appropriate responses involving numbers or mathematical concepts because they have been exposed to numerous examples of how these concepts are used in various contexts during their training. On Thu, Mar 23, 2023 at 6:35?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think it was Adrian who asked you that question. A referent as I use >> the term is something that exists outside of language to which a word >> refers. You have an apple in your hand and you say "this is an apple." The >> apple in your hand is the referent that corresponds to your word "apple." >> >> As for whether referents are data, it is unclear to me how referents >> could be data. I am inclined to say no but I am not sure what Adrain is >> getting at here. >> > > Ah, I thought you meant that the referent was the *fact that* that was an > apple, rather than the referent being the apple itself. Facts are data; > objects (such as apples) are not. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 02:05:11 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 22:05:11 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 9:23 PM Giovanni Santostasi via extropy-chat < extropy-chat at lists.extropy.org> wrote: > Gordon, > First of all ChatGPT is trained on the English language so it knows all > the common words and probably in fact ALL words in the English language, in > fact probably it knows common words in many languages including dead ones. > If the the two people A and B never used the word stick it is because the > exchanged a relatively small amount of information. Basically what Bender > is saying is "if the training of a NLM is limited then the NLM would not > know what certain words mean". Of course, that is the case. But that is the > reason why ChatGPT is so superior to other NLMs, it was trained of a vast > set of data. The example Bender gives applies to humans too. If I don't > know the Italian language and I use the word bastone, good luck knowing it > means stick. You can try to ask me to describe you bastone but I would have > to use other words and if you know these words then you understand what > bastone is, same thing with ChatGPT. THERE IS NO DIFFERENCE. Referents are > not important in this case. Math is a language for god sake and it doesn't > need any referents. In fact you can build an entire math system starting > from the null set, that is not a real thing. > Great example! Jason > > > > On Thu, Mar 23, 2023 at 1:25?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> Bender's point is not that ChatGBT is incapable of generating sensible >> sentences about sticks and bears. It is that these LLMs don't know the >> meanings of any words whatsoever. 
Confronted with a word it has never seen, >> it must do a statistical analysis to try to find probable next words, never >> knowing what any of them mean. >> >> -gts >> >> On Thu, Mar 23, 2023 at 1:57?PM Stuart LaForge via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> >>> Quoting Gordon Swobe via extropy-chat : >>> >>> > Emily M. Bender, a computational linguist at the University of >>> Washington, >>> > makes the same argument I hold as valid. Large Language Models are not >>> > conscious or human-like as they lack referents. >>> > >>> > An interesting thought experiment: >>> > >>> > "Say that A and B, both fluent speakers of English, are independently >>> > stranded on two uninhabited islands. They soon discover that previous >>> > visitors to these islands have left behind telegraphs and that they can >>> > communicate with each other via an underwater cable. A and B start >>> happily >>> > typing messages to each other. >>> > >>> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is >>> unable >>> > to visit or observe the two islands, discovers a way to tap into the >>> > underwater cable and listen in on A and B?s conversations. O knows >>> nothing >>> > about English initially but is very good at detecting statistical >>> patterns. >>> > Over time, O learns to predict with great accuracy how B will respond >>> to >>> > each of A?s utterances. >>> > >>> > Soon, the octopus enters the conversation and starts impersonating B >>> and >>> > replying to A. This ruse works for a while, and A believes that O >>> > communicates as both she and B do ? with meaning and intent. Then one >>> day A >>> > calls out: ?I?m being attacked by an angry bear. Help me figure out >>> how to >>> > defend myself. I?ve got some sticks.? The octopus, impersonating B, >>> fails >>> > to help. >>> >>> I posed this exact question to ChatGPT and got a very sensible >>> actually helpful answer: >>> --------------------------- >>> >>> Me: I?m being attacked by an angry bear. Help me figure out how to >>> defend myself. I?ve got some sticks. >>> >>> ChatGPT: If you find yourself in a situation where you're being >>> attacked by a bear, your first priority should be to get away from the >>> animal as quickly as possible. If you have sticks, they can be useful >>> in defending yourself, but only as a last resort. >>> >>> Here are some tips to help you defend yourself against an angry bear >>> using sticks: >>> >>> Hold the sticks firmly: Make sure you grip the sticks tightly with >>> both hands, so they don't slip out of your grasp. >>> >>> Make yourself look big: Stand up as tall as possible and raise your >>> arms above your head. This will make you appear larger and more >>> intimidating to the bear. >>> >>> Make noise: Shout, scream, and make as much noise as possible. This >>> will make the bear aware that you are a threat and may cause it to >>> back off. >>> >>> Aim for the bear's sensitive areas: If the bear charges at you, aim >>> for its nose or eyes with the sticks. These are sensitive areas and >>> may cause the bear to retreat. >>> >>> Back away slowly: If the bear backs off, slowly and carefully move >>> away while still facing the bear. Do not turn your back on the bear or >>> run away, as this may trigger it to chase you. >>> >>> Remember, using sticks to defend yourself against an angry bear is a >>> last resort, and you should always try to get away from the animal as >>> quickly as possible. >>> ------------------------- >>> > How could it succeed? 
The octopus has no referents, no idea what >>> > bears or sticks are. No way to give relevant instructions, like to go >>> grab >>> > some coconuts and rope and build a catapult. A is in trouble and feels >>> > duped. The octopus is exposed as a fraud." >>> > You Are Not a Parrot And a chatbot is not a human. And a linguist named >>> > Emily M. Bender is very worried what will happen when we forget this. >>> > >>> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html >>> >>> ChatGPT's answer is far superior to Bender's very impractical answer >>> of "go grab some coconuts and rope and build a catapult". So not only >>> is ChatGPT smarter than Bender's Octopus, but seems to be smarter than >>> Bender herself. >>> >>> Stuart LaForge >>> >>> >>> >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Fri Mar 24 02:36:46 2023 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 23 Mar 2023 22:36:46 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: I don't have a lot of faith in a person who has a hypothesis and designs a thought experiment that is essentially completely irrelevant to the hypothesis. The only connection is some tenuous metaphor stuff, but the thought experiment fails because the answer is obvious: like I said earlier, and others have said, the octopus simply didn't have access to the information. If the author wanted to prove their actual hypothesis, maybe they should have designed a thought experiment that was related to it. That makes me think all they had was a hunch, and designed a bad thought experiment around it. It's even worse than the awful Chinese Room experiment you spoke on ten years ago. Like I mentioned, not having access to the actual referents doesn't even mean a learning entity cannot know them. You likely haven't experienced MOST things you know. You know them because of the experience of others, just like the AI might. I'm open to your argument in some ways, but you have done a poor job or defending it. On Thu, Mar 23, 2023, 9:45 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi > wrote: > >> Gordon, >> Basically what Bender is saying is "if the training of a NLM is limited >> then the NLM would not know what certain words mean". >> > > No, that is not what she is saying, though seeing as how people are > misunderstanding her thought experiment, I must agree the experiment is not > as clear as it could be. She is saying, or rather reminding us, that there > is a clear distinction to be made between form and meaning and that these > language models are trained only on form. Here is the abstract of her > academic paper in which she and her colleague mention the thought > experiment. 
> > -- > Abstract: The success of the large neural language mod-els on many NLP > tasks is exciting. However,we ?nd that these successes sometimes lead to > hype in which these models are being described as ?understanding? language > or capturing ?meaning?. In this position paper, we argue that a system > trained only on form has a priori no way to learn meaning. In keeping with > the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re > Going?,we argue that a clear understanding of the distinction between form > and meaning will help guide the ?eld towards better science around natural > language understanding. > -- > Bender is a computational linguist at the University of Washington. I > think I read that she is actually the head of the department. > > the paper: > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 02:47:39 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 20:47:39 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I think it was Adrian who asked you that question. A referent as I use >> the term is something that exists outside of language to which a word >> refers. You have an apple in your hand and you say "this is an apple." The >> apple in your hand is the referent that corresponds to your word "apple." >> >> As for whether referents are data, it is unclear to me how referents >> could be data. I am inclined to say no but I am not sure what Adrain is >> getting at here. >> > > Ah, I thought you meant that the referent was the *fact that* that was an > apple, rather than the referent being the apple itself. Facts are data; > objects (such as apples) are not. > I wouldn't say that referents are necessarily physical objects, however. They are simply whatever the word is pointing to outside of language, which could be an idea or abstract concept to which we assign a label in order to verbalize it. Giovanni, from your other post, it is not it is not at all clear to me that the language of mathematics has no referents. ChatGPT explained that is not the case when you asked it, but nevermind what it says. I think a sentence like 1+1=2 refers to a mathematical truth that exists separate from the numerical expression of it. The expression is the "word" or "sentence" and abstract mathematical truth is the referent. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jasonresch at gmail.com Fri Mar 24 02:58:33 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 22:58:33 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023, 10:49 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I think it was Adrian who asked you that question. A referent as I use >>> the term is something that exists outside of language to which a word >>> refers. You have an apple in your hand and you say "this is an apple." The >>> apple in your hand is the referent that corresponds to your word "apple." >>> >>> As for whether referents are data, it is unclear to me how referents >>> could be data. I am inclined to say no but I am not sure what Adrain is >>> getting at here. >>> >> >> Ah, I thought you meant that the referent was the *fact that* that was an >> apple, rather than the referent being the apple itself. Facts are data; >> objects (such as apples) are not. >> > > I wouldn't say that referents are necessarily physical objects, however. > They are simply whatever the word is pointing to outside of language, which > could be an idea or abstract concept to which we assign a label in order to > verbalize it. > > Giovanni, from your other post, it is not it is not at all clear to me > that the language of mathematics has no referents. ChatGPT explained that > is not the case when you asked it, but nevermind what it says. I think a > sentence like 1+1=2 refers to a mathematical truth that exists separate > from the numerical expression of it. The expression is the "word" or > "sentence" and abstract mathematical truth is the referent. > I agree, you could say mathematical truth exists outside language. But one thing this does highlight is there are many things you know about despite never having that object in hand to point and look at. You've never seen the abstract object '2'. You've never seen the core of the earth, or an electron. We lack sensory access to these things and so everything we know about them we know only through language. How do we come to understand things like '2' or electrons? Jason > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 03:15:47 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 21:15:47 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < extropy-chat at lists.extropy.org> wrote: > I don't have a lot of faith in a person who has a hypothesis and designs a > thought experiment that is essentially completely irrelevant to the > hypothesis. > As I wrote, I agree the thought experiment does not illustrate her point clearly, at least outside of the context of her academic paper. 
As I've mentioned, the octopus is supposed to represent the state in which an LLM is in -- completely disconnected from the meanings of words (referents) that exist only outside of language in the real world represented by the islands. But it is a sloppy thought experiment if you don't know what she is trying to say. It is about form vs meaning. LLMs are trained only on and only know (so to speak) the forms and patterns of language. They are like very talented parrots, rambling on and on in seemingly intelligent ways, mimicking human speech, but never having any idea what they are talking about. -gts -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonresch at gmail.com Fri Mar 24 03:35:19 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 23:35:19 -0400 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> I don't have a lot of faith in a person who has a hypothesis and designs >> a thought experiment that is essentially completely irrelevant to the >> hypothesis. >> > > As I wrote, I agree the thought experiment does not illustrate her point > clearly, at least outside of the context of her academic paper. As I've > mentioned, the octopus is supposed to represent the state in which an LLM > is in -- completely disconnected from the meanings of words (referents) > that exist only outside of language in the real world represented by the > islands. But it is a sloppy thought experiment if you don't know what she > is trying to say. > > It is about form vs meaning. LLMs are trained only on and only know (so to > speak) the forms and patterns of language. They are like very talented > parrots, rambling on and on in seemingly intelligent ways, mimicking human > speech, but never having any idea what they are talking about. > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf and come away with the impression that GPT-4 has no idea what it is talking about. Jason > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 03:40:51 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 21:40:51 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: On Thu, Mar 23, 2023 at 9:00?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > How do we come to understand things like '2' > 'tis a great mystery! I think most people including me are mathematical platonists or would be if they investigated the philosophy of mathematics. On this view, we do not invent mathematical truths. We discover them. 1+1 has always = 2 and it could never have been otherwise. If we discover them then where are they before we discover them? Platonists would say they exist in the platonic realm. For purposes of this discussion, that is where we find our mathematical referents. -gts -------------- next part -------------- An HTML attachment was scrubbed... 
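A minimal sketch of the purely formal side of this exchange, offered only as an illustration: a proof assistant will certify that 1 + 1 = 2 by nothing more than unfolding the definitions of the numerals and of addition. The Lean 4 snippet below shows that mechanical derivation; whether such symbol manipulation amounts to knowing the truth the expression refers to is exactly what is in dispute in this thread.

-- Both goals close with rfl (reflexivity): the numerals unfold to successor
-- terms and Nat.add reduces them step by step, so the two sides compute to
-- the same normal form. Nothing the symbols might stand for is ever consulted.
example : 1 + 1 = 2 := rfl

-- The same fact written directly in successor notation:
example : Nat.succ 0 + Nat.succ 0 = Nat.succ (Nat.succ 0) := rfl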
URL:
From jasonresch at gmail.com Fri Mar 24 03:51:49 2023 From: jasonresch at gmail.com (Jason Resch) Date: Thu, 23 Mar 2023 23:51:49 -0400 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID:
On Thu, Mar 23, 2023, 11:41 PM Gordon Swobe wrote:
> On Thu, Mar 23, 2023 at 9:00 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote:
>> How do we come to understand things like '2'
> 'tis a great mystery! I think most people including me are mathematical platonists or would be if they investigated the philosophy of mathematics. On this view, we do not invent mathematical truths. We discover them. 1+1 has always = 2 and it could never have been otherwise. If we discover them then where are they before we discover them? Platonists would say they exist in the platonic realm. For purposes of this discussion, that is where we find our mathematical referents.
Do you think a powerful "raw intelligence", which operated without any data or information from or about our universe, could come to discover, learn, and understand the same mathematical truths that we have come to know?
Jason
> -gts
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From gordon.swobe at gmail.com Fri Mar 24 04:13:44 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 22:13:44 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID:
On Thu, Mar 23, 2023 at 9:37 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf and come away with the impression that GPT-4 has no idea what it is talking about.
Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will know word meanings any more than does GPT-3. Eventually AI on digital computers will far surpass human intelligence, but even then these computers will be manipulating the forms of words and not their meanings. Do you believe, like my friend who fell in love with a chatbot, that a software application can have genuine feelings of love for you?
-gts
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From gordon.swobe at gmail.com Fri Mar 24 04:26:52 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Thu, 23 Mar 2023 22:26:52 -0600 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID:
On Thu, Mar 23, 2023 at 9:53 PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote:
> Do you think a powerful "raw intelligence", which operated without any data or information from or about our universe, could come to discover, learn, and understand the same mathematical truths that we have come to know?
Depends on what you mean by "understand." I think my pocket calculator "understands" how to calculate, but it has no conscious understanding of the calculations. It makes no difference in my view how powerful the calculator might be. Processing through the formal rules of mathematics, it could conceivably "discover" a new mathematical truth, but it would have no way to actually know it consciously. It's just a mindless calculator, a tool. It knows 1+1=2 the same way my watch knows the time, which is to say, not at all.
-gts
-------------- next part -------------- An HTML attachment was scrubbed...
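The mindless rule-following described in the message above is easy to exhibit directly. The toy Python below is a sketch under assumptions of my own (the successor encoding and the two Peano-style rewrite rules are the only things it is given); it mechanically verifies small equations it was never told, such as instances of commutativity, which is the sense in which a formal system can "discover" a truth without anything resembling awareness of it.

# A toy "mindless calculator": it holds only the two Peano-style rules for
# addition (a + 0 = a and a + S(b) = S(a + b)) and checks an equation by
# reducing both sides to successor terms and comparing them symbol by symbol.

def num(n):
    """Encode n in successor notation, e.g. 2 -> ('S', ('S', '0'))."""
    term = '0'
    for _ in range(n):
        term = ('S', term)
    return term

def add(a, b):
    """Reduce a + b using only the two rewrite rules."""
    if b == '0':                 # a + 0 = a
        return a
    return ('S', add(a, b[1]))   # a + S(b') = S(a + b')

# Exhaustively check small equations and keep the ones that reduce to the
# same normal form on both sides: mechanically verified facts, nothing more.
discovered = [
    "%d + %d = %d + %d" % (i, j, j, i)
    for i in range(5)
    for j in range(5)
    if add(num(i), num(j)) == add(num(j), num(i))
]
print(discovered[:5])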
URL: From gsantostasi at gmail.com Fri Mar 24 05:30:37 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:30:37 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Gordon, I will read the paper and write one to push back on her arguments. She is wrong. Giovanni On Thu, Mar 23, 2023 at 6:40?PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi > wrote: > >> Gordon, >> Basically what Bender is saying is "if the training of a NLM is limited >> then the NLM would not know what certain words mean". >> > > No, that is not what she is saying, though seeing as how people are > misunderstanding her thought experiment, I must agree the experiment is not > as clear as it could be. She is saying, or rather reminding us, that there > is a clear distinction to be made between form and meaning and that these > language models are trained only on form. Here is the abstract of her > academic paper in which she and her colleague mention the thought > experiment. > > -- > Abstract: The success of the large neural language mod-els on many NLP > tasks is exciting. However,we ?nd that these successes sometimes lead to > hype in which these models are being described as ?understanding? language > or capturing ?meaning?. In this position paper, we argue that a system > trained only on form has a priori no way to learn meaning. In keeping with > the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re > Going?,we argue that a clear understanding of the distinction between form > and meaning will help guide the ?eld towards better science around natural > language understanding. > -- > Bender is a computational linguist at the University of Washington. I > think I read that she is actually the head of the department. > > the paper: > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:33:01 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:33:01 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Jason, Is that the same paper Gordon is quoting? The link you gave doesn't work. Giovanni On Thu, Mar 23, 2023 at 8:36?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I don't have a lot of faith in a person who has a hypothesis and designs >>> a thought experiment that is essentially completely irrelevant to the >>> hypothesis. >>> >> >> As I wrote, I agree the thought experiment does not illustrate her point >> clearly, at least outside of the context of her academic paper. As I've >> mentioned, the octopus is supposed to represent the state in which an LLM >> is in -- completely disconnected from the meanings of words (referents) >> that exist only outside of language in the real world represented by the >> islands. But it is a sloppy thought experiment if you don't know what she >> is trying to say. >> >> It is about form vs meaning. 
LLMs are trained only on and only know (so >> to speak) the forms and patterns of language. They are like very talented >> parrots, rambling on and on in seemingly intelligent ways, mimicking human >> speech, but never having any idea what they are talking about. >> > > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf > and come away with the impression that GPT-4 has no idea what it is talking > about. > > Jason > > > >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:37:33 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:37:33 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: They are not trained only on form, or if they are trained only on form, meaning is a DERIVED property, an emergent property. I already linked a paper showing that ChatGPT derived THEORY OF MIND from the statistical properties of language. It is not obvious at all this could have been derived from statistical properties alone and it happened. The problem with emergent properties like these that they are not easy or impossible to predict. So the entire Bender paper is garbage because instead of being based on some stupid philosophical argument it should be based on experimental evidence. 1) Then the question is: Can we do an experiment using GPT-4 to see if it understand meaning? 2) What is that experiment? 3) Can Bender suggest one? Giovanni On Thu, Mar 23, 2023 at 6:40?PM Gordon Swobe wrote: > > > On Thu, Mar 23, 2023 at 7:16?PM Giovanni Santostasi > wrote: > >> Gordon, >> Basically what Bender is saying is "if the training of a NLM is limited >> then the NLM would not know what certain words mean". >> > > No, that is not what she is saying, though seeing as how people are > misunderstanding her thought experiment, I must agree the experiment is not > as clear as it could be. She is saying, or rather reminding us, that there > is a clear distinction to be made between form and meaning and that these > language models are trained only on form. Here is the abstract of her > academic paper in which she and her colleague mention the thought > experiment. > > -- > Abstract: The success of the large neural language mod-els on many NLP > tasks is exciting. However,we ?nd that these successes sometimes lead to > hype in which these models are being described as ?understanding? language > or capturing ?meaning?. In this position paper, we argue that a system > trained only on form has a priori no way to learn meaning. In keeping with > the ACL 2020 theme of ?Taking Stock ofWhere We?ve Been and Where We?re > Going?,we argue that a clear understanding of the distinction between form > and meaning will help guide the ?eld towards better science around natural > language understanding. > -- > Bender is a computational linguist at the University of Washington. I > think I read that she is actually the head of the department. 
> > the paper: > > https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:42:18 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:42:18 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Yes, as always Jason understanding of these topics shines through. LET'S DO EXPERIMENTS ! Not garbage philosophical navel gazing. I think this is the paper Jason linked (but PDF doesn't work on the archive for some reason). https://www.lesswrong.com/posts/mmxPbFz7wvthvHCxq/sparks-of-artificial-general-intelligence-early-experiments On Thu, Mar 23, 2023 at 8:36?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 11:17 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 8:39?PM Will Steinberg via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> I don't have a lot of faith in a person who has a hypothesis and designs >>> a thought experiment that is essentially completely irrelevant to the >>> hypothesis. >>> >> >> As I wrote, I agree the thought experiment does not illustrate her point >> clearly, at least outside of the context of her academic paper. As I've >> mentioned, the octopus is supposed to represent the state in which an LLM >> is in -- completely disconnected from the meanings of words (referents) >> that exist only outside of language in the real world represented by the >> islands. But it is a sloppy thought experiment if you don't know what she >> is trying to say. >> >> It is about form vs meaning. LLMs are trained only on and only know (so >> to speak) the forms and patterns of language. They are like very talented >> parrots, rambling on and on in seemingly intelligent ways, mimicking human >> speech, but never having any idea what they are talking about. >> > > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf > and come away with the impression that GPT-4 has no idea what it is talking > about. > > Jason > > > >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 05:55:29 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 22:55:29 -0700 Subject: [ExI] LLM's cannot be concious In-Reply-To: References: Message-ID: GPT-4 said that mathematics has ABSTRACT referents, and in that sense it was more precise I was in using language. I meant to say referents as concrete objects, that is the crucial matter here. Language can be understood with concrete objects like stones and threes. So computers can understand language without any physical referent. Gordon your argument then is completely wrong. 
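The claim that statistics over bare text already yields something meaning-like can be made concrete. The Python sketch below uses a toy corpus and window size chosen by me purely for illustration: it builds word vectors from co-occurrence counts alone, with no access to any object a word names, and words that are used in similar contexts come out with similar vectors. This is the distributional idea behind LLM embeddings, though real models are far more elaborate than counting.

# Distributional toy model: word vectors built from nothing but co-occurrence
# counts within a small window. The model never sees a cat, a dog, or a
# telegraph; it only sees which words appear near which other words.
from collections import defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
    "the telegraph sent the message",
    "the telegraph received the message",
]

window = 2
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

# "cat" and "dog" share contexts (chased, ate), so their vectors end up closer
# to each other than either is to "telegraph".
print(cosine(cooc["cat"], cooc["dog"]))        # higher
print(cosine(cooc["cat"], cooc["telegraph"]))  # lower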
On Thu, Mar 23, 2023 at 8:00?PM Jason Resch via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023, 10:49 PM Gordon Swobe via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> I think it was Adrian who asked you that question. A referent as I use >>>> the term is something that exists outside of language to which a word >>>> refers. You have an apple in your hand and you say "this is an apple." The >>>> apple in your hand is the referent that corresponds to your word "apple." >>>> >>>> As for whether referents are data, it is unclear to me how referents >>>> could be data. I am inclined to say no but I am not sure what Adrain is >>>> getting at here. >>>> >>> >>> Ah, I thought you meant that the referent was the *fact that* that was >>> an apple, rather than the referent being the apple itself. Facts are data; >>> objects (such as apples) are not. >>> >> >> I wouldn't say that referents are necessarily physical objects, however. >> They are simply whatever the word is pointing to outside of language, which >> could be an idea or abstract concept to which we assign a label in order to >> verbalize it. >> >> Giovanni, from your other post, it is not it is not at all clear to me >> that the language of mathematics has no referents. ChatGPT explained that >> is not the case when you asked it, but nevermind what it says. I think a >> sentence like 1+1=2 refers to a mathematical truth that exists separate >> from the numerical expression of it. The expression is the "word" or >> "sentence" and abstract mathematical truth is the referent. >> > > > I agree, you could say mathematical truth exists outside language. But one > thing this does highlight is there are many things you know about despite > never having that object in hand to point and look at. You've never seen > the abstract object '2'. You've never seen the core of the earth, or an > electron. We lack sensory access to these things and so everything we know > about them we know only through language. How do we come to understand > things like '2' or electrons? > > Jason > > > >> -gts >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 06:18:49 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 00:18:49 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Nobody least of all me questions that GPT-4 will be capable of amazing feats, and that eventually these language models will surpass humans in terms of what we can call intelligence or what I might for sake of clarity prefer to call apparent intelligence. The question here is whether they will know what they are saying given that they are trained only on the forms of words with no access to the meanings or referents. 
Adrian has made the excellent point a couple of times that this is like the first contact problem in science fiction, and actually like the first contact problem between any two cultures with completely different languages. Q: When Kirk and Spock beam down to a new planet with intelligent alien life, how will they learn to communicate? A: With referents. Spock will point to himself and say "Spock." Kirk will point to himself and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock use these kinds referents to initiate communication. If our alien friend wants to communicate, he will point to the rock and "fwerasa" (or whatever is his word for rock). He will point to himself and say his name, and so on. Eventually, Spock and the alien will learn how to translate a few words, and from there the process of understanding begins. Now, what if they don't beam down to the planet and listen to only digital radio signals coming from the planet and send digital radio signals in return? No communication is possible as there are no referents. It's all noise. -gts > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 06:43:47 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 00:43:47 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: I can already hear someone saying "but we will include photographs of objects in the training so they have referents," but this still does not do the trick. These digital photographs can be displayed to the human operator of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It can detect colors by wavelength, but still this is only digital data. It does not see the colors. Likewise with shapes. It is turtles (one's and zero's) all the way down with no referents. -gts On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe wrote: > Nobody least of all me questions that GPT-4 will be capable of amazing > feats, and that eventually these language models will surpass humans in > terms of what we can call intelligence or what I might for sake of clarity > prefer to call apparent intelligence. The question here is whether they > will know what they are saying given that they are trained only on the > forms of words with no access to the meanings or referents. > > Adrian has made the excellent point a couple of times that this is like > the first contact problem in science fiction, and actually like the first > contact problem between any two cultures with completely different > languages. Q: When Kirk and Spock beam down to a new planet with > intelligent alien life, how will they learn to communicate? A: With > referents. > > Spock will point to himself and say "Spock." Kirk will point to himself > and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock > use these kinds referents to initiate communication. If our alien friend > wants to communicate, he will point to the rock and "fwerasa" (or whatever > is his word for rock). He will point to himself and say his name, and so > on. Eventually, Spock and the alien will learn how to translate a few > words, and from there the process of understanding begins. > > Now, what if they don't beam down to the planet and listen to only digital > radio signals coming from the planet and send digital radio signals in > return? No communication is possible as there are no referents. 
It's all noise.
>
> -gts

From gsantostasi at gmail.com  Fri Mar 24 06:45:31 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Thu, 23 Mar 2023 23:45:31 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: 
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

*Now, what if they don't beam down to the planet and listen to only digital
radio signals coming from the planet and send digital radio signals in
return? No communication is possible as there are no referents. It's all
noise.*

Gordon, you are so wrong. People have studied this problem deeply and found
a solution that uses the laws of physics and mathematics as abstract
referents:

https://en.wikipedia.org/wiki/Arecibo_message

As we already established, we can use abstract referents for communication.
Math is another example: as GPT-4 correctly said, it has referents, but they
are abstract. An entire communication can happen without concrete referents,
as it does in mathematics. And that is without even talking about digital
pics or 3D models of objects, all made of 0s and 1s (basically what you
would do to communicate with an entity in a digital world). The reason you
still insist on this human exceptionalism is religious faith.

Giovanni

On Thu, Mar 23, 2023 at 11:19 PM Gordon Swobe wrote:

> Nobody least of all me questions that GPT-4 will be capable of amazing
> feats, and that eventually these language models will surpass humans in
> terms of what we can call intelligence or what I might for sake of clarity
> prefer to call apparent intelligence. The question here is whether they
> will know what they are saying given that they are trained only on the
> forms of words with no access to the meanings or referents.
> [...]
> -gts

From gsantostasi at gmail.com  Fri Mar 24 06:52:52 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Thu, 23 Mar 2023 23:52:52 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: 
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

How is that different from what we do?
Colors in our brain are just activation patterns of neurons so 1 and 0s too, or anyway some kind of information combo that can be represented with 0 and 1s. We are not special, Gordon, we are machines too. Where are the referents in the figure below? Are you saying that aliens would not understand this message? [image: image.png] On Thu, Mar 23, 2023 at 11:45?PM Giovanni Santostasi wrote: > > > *Now, what if they don't beam down to the planet and listen to only > digital radio signals coming from the planet and send digital radio signals > in return? No communication is possible as there are no referents. It's all > noise.*Gordon on you are so wrong. > People have studied this problem deeply and found a solution using the > laws of physics and mathematics as abstract referents. > > https://en.wikipedia.org/wiki/Arecibo_message > > As we already established we can use abstract referents for communication. > Math is another example, as GPT-4 correctly said, it has references but > they are abstract. An entire communication can happen without concrete > referents as in mathematics. Without talking about using digital pics or 3D > models of objects all made of 0 and 1s (basically what you would do to > communicate to an entity in a digital world). > The reason you still insist on this human exceptionalism is because of > religious faith. > > Giovanni > > > > > > > > > > On Thu, Mar 23, 2023 at 11:19?PM Gordon Swobe > wrote: > >> Nobody least of all me questions that GPT-4 will be capable of amazing >> feats, and that eventually these language models will surpass humans in >> terms of what we can call intelligence or what I might for sake of clarity >> prefer to call apparent intelligence. The question here is whether they >> will know what they are saying given that they are trained only on the >> forms of words with no access to the meanings or referents. >> >> Adrian has made the excellent point a couple of times that this is like >> the first contact problem in science fiction, and actually like the first >> contact problem between any two cultures with completely different >> languages. Q: When Kirk and Spock beam down to a new planet with >> intelligent alien life, how will they learn to communicate? A: With >> referents. >> >> Spock will point to himself and say "Spock." Kirk will point to himself >> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock >> use these kinds referents to initiate communication. If our alien friend >> wants to communicate, he will point to the rock and "fwerasa" (or whatever >> is his word for rock). He will point to himself and say his name, and so >> on. Eventually, Spock and the alien will learn how to translate a few >> words, and from there the process of understanding begins. >> >> Now, what if they don't beam down to the planet and listen to only >> digital radio signals coming from the planet and send digital radio signals >> in return? No communication is possible as there are no referents. It's all >> noise. >> >> -gts >> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 23388 bytes Desc: not available URL: From gsantostasi at gmail.com Fri Mar 24 06:57:15 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Thu, 23 Mar 2023 23:57:15 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: A friend of mine says NLM or any other AI cannot understand anything without concrete referents. I disagree, which arguments could you give to support my position? While it is true that NLMs and other AI models do not have direct experiences or access to concrete referents like humans do, they can still demonstrate a certain level of understanding of abstract concepts. Here are some arguments to support your position: 1. Pattern recognition and statistical relationships: NLMs, like ChatGPT, are trained on vast amounts of text data, and they learn by recognizing patterns and statistical relationships within that data. They can generate responses that demonstrate an understanding of abstract concepts based on these patterns, even without direct experience of concrete referents. 2. Contextual understanding: Through their training, NLMs develop an ability to understand context and respond appropriately to various situations. They can "simulate" understanding by leveraging their knowledge of how certain words or concepts are used in different contexts. 3. Flexibility and adaptability: NLMs can adapt to new information and generate responses that take into account various factors, including the context, prior knowledge, and the specific question or statement they are responding to. This adaptability allows them to address abstract concepts and ideas, even without direct access to concrete referents. 4. Success in various tasks: NLMs have shown success in a range of natural language processing tasks, such as question-answering, translation, and summarization, which involve understanding and manipulation of abstract concepts. Their ability to perform well on these tasks is evidence of their capacity to work with abstract ideas without direct access to concrete referents. On Thu, Mar 23, 2023 at 11:44?PM Gordon Swobe wrote: > I can already hear someone saying "but we will include photographs of > objects in the training so they have referents," but this still does not do > the trick. These digital photographs can be displayed to the human operator > of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It > can detect colors by wavelength, but still this is only digital data. It > does not see the colors. Likewise with shapes. It is turtles (one's and > zero's) all the way down with no referents. > > -gts > > On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe > wrote: > >> Nobody least of all me questions that GPT-4 will be capable of amazing >> feats, and that eventually these language models will surpass humans in >> terms of what we can call intelligence or what I might for sake of clarity >> prefer to call apparent intelligence. The question here is whether they >> will know what they are saying given that they are trained only on the >> forms of words with no access to the meanings or referents. >> >> Adrian has made the excellent point a couple of times that this is like >> the first contact problem in science fiction, and actually like the first >> contact problem between any two cultures with completely different >> languages. 
Q: When Kirk and Spock beam down to a new planet with
>> intelligent alien life, how will they learn to communicate? A: With
>> referents.
>> [...]
>>
>> -gts

From gordon.swobe at gmail.com  Fri Mar 24 06:57:39 2023
From: gordon.swobe at gmail.com (Gordon Swobe)
Date: Fri, 24 Mar 2023 00:57:39 -0600
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: 
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

On Fri, Mar 24, 2023 at 12:46 AM Giovanni Santostasi wrote:

> *Now, what if they don't beam down to the planet and listen to only
> digital radio signals coming from the planet and send digital radio
> signals in return? No communication is possible as there are no referents.
> It's all noise.* Gordon, you are so wrong.

Yes, we can do math, I agree, but this is only because we are conscious
beings with access to the referents. We can "see" the meaning of pi and
understand its relevance because as conscious beings we have access to
mathematical truths, the referents. Digital calculators and computers and
chatbots can only blindly process the numbers.

From gsantostasi at gmail.com  Fri Mar 24 07:14:44 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Fri, 24 Mar 2023 00:14:44 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: 
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

You dismiss GPT-4 as being a chatbot. He is not a chatbot. He is much more
than that. Listen, you are overrating referents because they are all you
have right now. People who believe that humans are not mere machines are of
the religious kind; they have been cornered more and more, and there is
little left. This referent business seems to be their last castle.

I would point out that what you call a referent is not even a primary
experience. In fact, if I pointed to a stone and uttered the word "Pietra"
to you (Italian for stone), how sure are you that I am talking about a
stone? I could point to the stone and mean "grey" (the color of the stone),
or I could point to the stone and mean just "an object," or a particular
type of stone, such as obsidian, and so on. If in doubt, you could point to
other, different stones and say "Pietra," and if they were not obsidian I
would say no, that is "Granito," and so on. But what if I meant "hard"
instead of stone? Also, Eskimos have several words for snow, suited to many
different situations. How do you deal with referents in that situation?
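As an aside, this ambiguity of pointing is a well-studied problem. One toy treatment is cross-situational learning: the hearer keeps every candidate feature of every scene in which the word is used and intersects them across occasions. The Python sketch below is purely illustrative; the scenes and feature labels are invented:

# Toy cross-situational learning: the hearer does not know whether
# "pietra" means stone, grey, or hard, so it intersects the candidate
# features across every situation in which the word is uttered.
scenes = [
    {"stone", "grey", "hard", "small"},    # speaker points at a grey pebble
    {"stone", "black", "hard", "large"},   # speaker points at a black boulder
    {"stone", "grey", "smooth", "small"},  # speaker points at a river stone
]

candidates = set.intersection(*scenes)
print(candidates)  # {'stone'}: only the feature shared by every scene survives

Of course, this only pushes the question back to where the feature vocabulary comes from in the first place, which is exactly the abstraction point made next.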
Pointing is not really what helps us here, it is a shortcut, what matters is the ABSTRACTION I'm doing that is to generalize the idea of a stone to many different objects, with different shapes, color, sizes and so on. How do I do this generalization without an initial referent that is universal? What comes first, the abstraction or the direct experience? I think the direct experience is overrated. It is important for survival, but has no relevance in terms of understanding or conscious experiences. Read this please, you can build the numerals from nothing. Once you do that you can build anything else. So no concrete referent needed. https://math.stackexchange.com/questions/4039712/how-do-you-generate-the-numbers-from-an-empty-set Giovanni On Thu, Mar 23, 2023 at 11:58?PM Gordon Swobe wrote: > On Fri, Mar 24, 2023 at 12:46?AM Giovanni Santostasi < > gsantostasi at gmail.com> wrote: > >> >> >> *Now, what if they don't beam down to the planet and listen to only >> digital radio signals coming from the planet and send digital radio signals >> in return? No communication is possible as there are no referents. It's all >> noise.*Gordon on you are so wrong. >> > > Yes, we can do math, I agree, but this is only because we are > conscious beings with access to the referents. We can "see" the meaning of > pi and understand its relevance because as conscious beings we have access > to mathematical truths, the referents. Digital calculators and computers > and chatbots can only blindly process the numbers. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordon.swobe at gmail.com Fri Mar 24 07:18:31 2023 From: gordon.swobe at gmail.com (Gordon Swobe) Date: Fri, 24 Mar 2023 01:18:31 -0600 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: You wrote to GPT: A friend of mine says NLM or any other AI cannot understand anything without concrete referents. I disagree, which arguments could you give to support my position? That is not actually what I am saying. It makes no difference whether the referent is "concrete" or not. As GPT says in supposed defense of your position: They can "simulate" understanding by leveraging their knowledge of how certain words or concepts are used in different contexts. That's right. Language models can *simulate* understanding. It is not real understanding. It is simulated understanding. They mimic what understanding looks like objectively. I remember writing at length on this list about 15 years ago about how the digital simulation of a thing is not the same as the thing simulated. -gts On Fri, Mar 24, 2023 at 12:57?AM Giovanni Santostasi wrote: > A friend of mine says NLM or any other AI cannot understand anything > without concrete referents. I disagree, which arguments could you give to > support my position? While it is true that NLMs and other AI models do > not have direct experiences or access to concrete referents like humans do, > they can still demonstrate a certain level of understanding of abstract > concepts. Here are some arguments to support your position: > > 1. > > Pattern recognition and statistical relationships: NLMs, like ChatGPT, > are trained on vast amounts of text data, and they learn by recognizing > patterns and statistical relationships within that data. 
They can generate > responses that demonstrate an understanding of abstract concepts based on > these patterns, even without direct experience of concrete referents. > 2. > > Contextual understanding: Through their training, NLMs develop an > ability to understand context and respond appropriately to various > situations. They can "simulate" understanding by leveraging their knowledge > of how certain words or concepts are used in different contexts. > 3. > > Flexibility and adaptability: NLMs can adapt to new information and > generate responses that take into account various factors, including the > context, prior knowledge, and the specific question or statement they are > responding to. This adaptability allows them to address abstract concepts > and ideas, even without direct access to concrete referents. > 4. > > Success in various tasks: NLMs have shown success in a range of > natural language processing tasks, such as question-answering, translation, > and summarization, which involve understanding and manipulation of abstract > concepts. Their ability to perform well on these tasks is evidence of their > capacity to work with abstract ideas without direct access to concrete > referents. > > > On Thu, Mar 23, 2023 at 11:44?PM Gordon Swobe > wrote: > >> I can already hear someone saying "but we will include photographs of >> objects in the training so they have referents," but this still does not do >> the trick. These digital photographs can be displayed to the human operator >> of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It >> can detect colors by wavelength, but still this is only digital data. It >> does not see the colors. Likewise with shapes. It is turtles (one's and >> zero's) all the way down with no referents. >> >> -gts >> >> On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe >> wrote: >> >>> Nobody least of all me questions that GPT-4 will be capable of amazing >>> feats, and that eventually these language models will surpass humans in >>> terms of what we can call intelligence or what I might for sake of clarity >>> prefer to call apparent intelligence. The question here is whether they >>> will know what they are saying given that they are trained only on the >>> forms of words with no access to the meanings or referents. >>> >>> Adrian has made the excellent point a couple of times that this is like >>> the first contact problem in science fiction, and actually like the first >>> contact problem between any two cultures with completely different >>> languages. Q: When Kirk and Spock beam down to a new planet with >>> intelligent alien life, how will they learn to communicate? A: With >>> referents. >>> >>> Spock will point to himself and say "Spock." Kirk will point to himself >>> and say "Kirk." Kirk will point to a rock and say "rock." Kirk and Spock >>> use these kinds referents to initiate communication. If our alien friend >>> wants to communicate, he will point to the rock and "fwerasa" (or whatever >>> is his word for rock). He will point to himself and say his name, and so >>> on. Eventually, Spock and the alien will learn how to translate a few >>> words, and from there the process of understanding begins. >>> >>> Now, what if they don't beam down to the planet and listen to only >>> digital radio signals coming from the planet and send digital radio signals >>> in return? No communication is possible as there are no referents. It's all >>> noise. 
>>>
>>> -gts

From gsantostasi at gmail.com  Fri Mar 24 07:21:31 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Fri, 24 Mar 2023 00:21:31 -0700
Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
In-Reply-To: 
References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com>
Message-ID: 

Everything is simulated. Our entire mental life is simulated. The brain
makes up the world from limited information and creates models all the
time. We would not be able to understand, or even to see or experience,
anything if we did not SIMULATE the world around us. We do it all the time.
We confabulate all the time, exactly as NLMs do. They work because they do
EXACTLY what we do.

On Fri, Mar 24, 2023 at 12:19 AM Gordon Swobe wrote:

> You wrote to GPT: A friend of mine says NLM or any other AI cannot
> understand anything without concrete referents. I disagree, which
> arguments could you give to support my position?
>
> That is not actually what I am saying. It makes no difference whether the
> referent is "concrete" or not.
> [...]
> -gts

From gsantostasi at gmail.com  Fri Mar 24 07:27:29 2023
From: gsantostasi at gmail.com (Giovanni Santostasi)
Date: Fri, 24 Mar 2023 00:27:29 -0700
Subject: [ExI] LLM's cannot be concious
In-Reply-To: 
References: 
Message-ID: 

I mean, Jason, who is a much better English speaker than I am, said it
perfectly here. Argument closed.

*Consider also: our brains never make any direct contact with the outside
world. All our brains have to work with are "dots and dashes" of neuronal
firings. These are essentially just 1s and 0s, signals without referents.
Yet, somehow, seemingly magically, our brains are able to piece together an
understanding of the outside world from the mere patterns present in these
neural firings.*

*These LLMs are in a similar position. They receive only patterns of
signals as they exist in a corpus of text; the text is itself the output of
minds which are similarly trapped in their skulls.
Now, can a LLM learn some things about the minds that produced this text, just as our minds learn some things about the external world which produces the pattern of neural firings our brains receive?* *I see no reason why LLMs could not, when we clearly can and do.* *Jason * On Thu, Mar 23, 2023 at 10:55?PM Giovanni Santostasi wrote: > GPT-4 said that mathematics has ABSTRACT referents, and in that sense it > was more precise I was in using language. I meant to say referents as > concrete objects, that is the crucial matter here. Language can be > understood with concrete objects like stones and threes. So computers can > understand language without any physical referent. Gordon your argument > then is completely wrong. > > On Thu, Mar 23, 2023 at 8:00?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> >> On Thu, Mar 23, 2023, 10:49 PM Gordon Swobe via extropy-chat < >> extropy-chat at lists.extropy.org> wrote: >> >>> On Thu, Mar 23, 2023 at 7:35?PM Adrian Tymes via extropy-chat < >>> extropy-chat at lists.extropy.org> wrote: >>> >>>> On Thu, Mar 23, 2023 at 6:18?PM Gordon Swobe via extropy-chat < >>>> extropy-chat at lists.extropy.org> wrote: >>>> >>>>> I think it was Adrian who asked you that question. A referent as I use >>>>> the term is something that exists outside of language to which a word >>>>> refers. You have an apple in your hand and you say "this is an apple." The >>>>> apple in your hand is the referent that corresponds to your word "apple." >>>>> >>>>> As for whether referents are data, it is unclear to me how referents >>>>> could be data. I am inclined to say no but I am not sure what Adrain is >>>>> getting at here. >>>>> >>>> >>>> Ah, I thought you meant that the referent was the *fact that* that was >>>> an apple, rather than the referent being the apple itself. Facts are data; >>>> objects (such as apples) are not. >>>> >>> >>> I wouldn't say that referents are necessarily physical objects, however. >>> They are simply whatever the word is pointing to outside of language, which >>> could be an idea or abstract concept to which we assign a label in order to >>> verbalize it. >>> >>> Giovanni, from your other post, it is not it is not at all clear to me >>> that the language of mathematics has no referents. ChatGPT explained that >>> is not the case when you asked it, but nevermind what it says. I think a >>> sentence like 1+1=2 refers to a mathematical truth that exists separate >>> from the numerical expression of it. The expression is the "word" or >>> "sentence" and abstract mathematical truth is the referent. >>> >> >> >> I agree, you could say mathematical truth exists outside language. But >> one thing this does highlight is there are many things you know about >> despite never having that object in hand to point and look at. You've never >> seen the abstract object '2'. You've never seen the core of the earth, or >> an electron. We lack sensory access to these things and so everything we >> know about them we know only through language. How do we come to understand >> things like '2' or electrons? 
>> >> Jason >> >> >> >>> -gts >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 07:46:00 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 24 Mar 2023 00:46:00 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: Consider also the laws of physics that basically are tautologies, and still they work to describe the universe. This is important in terms of how a NLM can derive an entire universe of understanding from self-referential statements. If you can do it with physics you can do it with natural language. https://physics.stackexchange.com/questions/670811/about-the-tautology-of-physics-quantities-and-laws On Thu, Mar 23, 2023 at 9:15?PM Gordon Swobe via extropy-chat < extropy-chat at lists.extropy.org> wrote: > > > On Thu, Mar 23, 2023 at 9:37?PM Jason Resch via extropy-chat < > extropy-chat at lists.extropy.org> wrote: > >> >> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf >> and come away with the impression that GPT-4 has no idea what it is talking >> about. >> > > Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will > know word meanings any more than does GPT-3. Eventually AI on digital > computers will far surpass human intelligence, but even then these > computers will be manipulating the forms of words and not their meanings. > > Do you believe, like my friend who fell in love with a chatbot, that a > software application can have genuine feelings of love for you? > > -gts > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Fri Mar 24 07:47:18 2023 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Fri, 24 Mar 2023 00:47:18 -0700 Subject: [ExI] Bender's Octopus (re: LLMs like ChatGPT) In-Reply-To: References: <20230323125528.Horde.xTE5PDBIVcSeAjlDMP11xk8@sollegro.com> Message-ID: This has been done before: Can you find a study that deals with the idea of building a language without physical referents? While I cannot browse the internet for the latest studies, I can point you to an interesting experiment that deals with the idea of creating a language without direct physical referents. This experiment is known as the "Talking Heads Experiment," conducted by Luc Steels and his colleagues in the late 1990s. The Talking Heads Experiment involved two robotic agents with cameras that were programmed to "learn" a shared language by pointing at objects in their environment and communicating with each other about what they were seeing. The robots were not provided with any predefined language or vocabulary. Instead, they had to develop their own communication system by interacting with each other and learning from their experiences. 
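The flavor of that dynamic fits in a few lines of code. The sketch below is a minimal "naming game" in Python, in the spirit of the naming-game models that grew out of this line of work; the population size, word shapes, and update rule are invented for illustration and are not the actual Talking Heads setup:

# Minimal naming game: agents invent words for objects and end up aligned
# on a shared lexicon purely through repeated pairwise interactions.
import random

OBJECTS = ["obj_a", "obj_b", "obj_c"]

def new_word():
    return "".join(random.choice("aeioubdgklmprst") for _ in range(4))

class Agent:
    def __init__(self):
        self.lexicon = {obj: [] for obj in OBJECTS}   # object -> known words

    def speak(self, obj):
        if not self.lexicon[obj]:
            self.lexicon[obj].append(new_word())      # invent if nothing known
        return random.choice(self.lexicon[obj])

def play(speaker, hearer, obj):
    word = speaker.speak(obj)
    if word in hearer.lexicon[obj]:
        speaker.lexicon[obj] = [word]                 # success: both prune
        hearer.lexicon[obj] = [word]
    else:
        hearer.lexicon[obj].append(word)              # failure: hearer adopts

agents = [Agent() for _ in range(5)]
for _ in range(3000):
    speaker, hearer = random.sample(agents, 2)
    play(speaker, hearer, random.choice(OBJECTS))

# After enough games the population typically settles on one word per object.
for obj in OBJECTS:
    print(obj, sorted({w for agent in agents for w in agent.lexicon[obj]}))

In this toy version nothing outside the interactions themselves grounds the words, yet a shared lexicon still emerges, which is the point at issue in this thread.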
Through this process, the robots were able to create a shared lexicon and grammar for describing objects and their properties, such as color and shape. This experiment demonstrated that it is possible for agents to develop a language without direct physical referents, relying on interaction and feedback to build a shared understanding of their environment. Although the Talking Heads Experiment involved robots with cameras that allowed them to perceive their environment, the experiment still illustrates the potential for creating a language based on interaction and learning, rather than relying on direct physical referents. For more information on the Talking Heads Experiment, you can refer to the following sources: 1. Steels, L. (1999). The Talking Heads Experiment: Origins of words and meanings. Laboratorium, Antwerpen. 2. Steels, L. (1998). The origins of syntax in visually grounded robotic agents. Artificial Intelligence, 103(1-2), 133-156. On Thu, Mar 23, 2023 at 11:44?PM Gordon Swobe wrote: > I can already hear someone saying "but we will include photographs of > objects in the training so they have referents," but this still does not do > the trick. These digital photographs can be displayed to the human operator > of the chatbot, but the bot itself sees only 1s and 0s, on's and off's. It > can detect colors by wavelength, but still this is only digital data. It > does not see the colors. Likewise with shapes. It is turtles (one's and > zero's) all the way down with no referents. > > -gts > > On Fri, Mar 24, 2023 at 12:18?AM Gordon Swobe > wrote: > >> Nobody least of all me questions that GPT-4 will be capable of amazing >> feats, and that eventually these language models will surpass humans in >> terms of what we can call intelligence or what I might for sake of clarity >> prefer to call apparent intelligence. The question here is whether they >> will know what they are saying given that they are trained only on the >> forms of words with no access to the meanings or referents. >> >> Adrian has made the excellent point a couple of times that this is like