<div dir="ltr">Also my bad @Gio, I started out there talking to you but switched to generally addressing the argument Gordon is making. Sorry for any confusion</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 5, 2023 at 5:44 PM Will Steinberg <<a href="mailto:steinberg.will@gmail.com">steinberg.will@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="auto">Yes, there's a strange primacy of 'objects' that people seem to imagine. There is a form that reflects a certain type of electromagnetic light. We have a form that receives it. Our form finds meaning in this by comparing it against everything else in its interior language system of electrochemical signals. If all was red, there would be no red. Red is found only in the difference. ChatGPT also has an explicit understanding of when to use 'red'. It must have this understanding, because otherwise it would just spew total nonsense. It doesn't really matter whether it has the same referents for red as we do, because in the end it is all information anyway. Red does not exist in this world.</div><div dir="auto"><br></div><div>Let me explain with a thought experiment I call "The English Room":</div><div><br></div><div>There is a room with a microphone and a speaker both inside and outside. The inner microphone passes anything said into it to the outer speaker by encoding it digitally, transmitting it over radio waves, and decoding it. The same happens from the outer microphone to the inner speaker.</div><div><br></div><div>Your friend walks into the room and closes the door. You start a conversation using the microphone and speaker on the outside. Are you speaking with your friend?</div><div><br></div><div>What I mean to say is that it is very difficult to philosophically separate the initial speaker (text corpus) and the final speaker (ChatGPT). 
Would this experiment be different if you were speaking with two people in two rooms, and some algorithm determined the best answer for you?</div><div><br></div><div>Really, the philosophical issues here go well beyond asking "is the algorithm sentient?" We have to ask where the line of separation even is between the corpus and the response. And to ask what happens when the consciousness of multiple people (provided through language) is condensed into one signal. Is this any different from the way your brain works? We also have multiple thought streams that internally interact with one another and produce a single result. Would you say we aren't conscious because all we are doing is choosing a thought to speak from the many unspoken ones?</div><div><br></div><div>The symbol grounding thing here is a total spook. Whether there even is a 'referent' in any given case is totally dependent on what boundaries you draw, but those boundaries don't affect what actually matters, which is the response. I think that focusing on symbol grounding is getting us further away from a real answer.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 5, 2023, 5:23 PM Giovanni Santostasi via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Gordon,<div>Others have said that it seems you are basically repeating the same thing over and over again without engaging in a real conversation with the people who disagree with you. You are doing the same here. I just gave you examples of how it seems we are doing the opposite of what you are saying. 
Abstracting from the physical sensation of an object like an apple to the general idea of an apple is actually where the power of language lies, not in the fact that it needs an apple to make sense. <br>IT IS EXACTLY THE OPPOSITE OF WHAT YOU ARE SAYING. Can you discuss why you think it is not? <br>I can do this with anything, even very abstract things like 1 and 0. All you need is an experience of on and off (or the ability to differentiate between states), which a computer can certainly have. <br>You can build an entire language and communicate with another entity just based on this. <br>Can you discuss this example instead of repeating your mantras? <br>PS</div><div>I agree that from an evolutionary point of view we evolved language after being able to recognize objects, for example edible fruits vs rocks, but that doesn't require language. Language came later, as an emergent property of different skills and abilities we developed to survive in the world, which does require making contact with the real world. But language is exactly the opposite of what you think it is. It is actually getting away from the concreteness of things. It doesn't need referents at all. I gave you examples of this; I'm not just making this statement out of dogma. In the example of 0-and-1-based communication that GPT-4 gave us, where is the referent? <br>Please address this issue directly instead of going around it. <br><br>Giovanni <br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 5, 2023 at 1:47 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com" rel="noreferrer" target="_blank">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>You are referring here to the ancient problem of universals and particulars. 
Philosophers have been debating it since Plato.<br><br>The bottom line, Gio, is that words refer to things and ideas. In and of themselves -- outside of the context of those things and ideas -- words are empty symbols with no meaning. <br><br>-gts<br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 5, 2023 at 2:05 PM Giovanni Santostasi via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Gordon,<div>In fact, now that I'm thinking about it, it is the exact opposite of what you say. Not only are referents unnecessary for language; it is because of language that we can make the association between abstract ideas in our heads and objects in the external world. We can associate a physical apple with the idea of an apple because we are able to abstract in the first place; that is the real essence of language. Abstraction is the ability to extract essential properties of an event, object, or another abstract idea beyond the immediate physical characteristics of the object of abstraction. This is what we do when we see 1 apple and say "1", or 1 apple and 1 orange and say "2". <br>I would say that language allows us to actually recognize objects in the world as objects in a given category, or to give them names or qualities. You can still perceive an apple as something; you can smell it and taste it, and maybe a lower animal can associate an apple with something good to eat, but it would not be able to make the association with a given word or idea, because it cannot perform the abstraction to a general concept of an apple. That is what language is about; it is the opposite of what you claim. Without language (creating abstract ideas and generalizations in our heads) there is no object to refer to, not the other way around. 
<br><br>Giovanni <br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 5, 2023 at 12:29 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com" rel="noreferrer" target="_blank">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Gordon,<div>you say: <span style="font-weight:bold;color:rgb(80,0,80)">By referents, I mean the things and ideas outside of language to which words point. If you hold an apple in your hand and say "this is an apple," the apple is the referent that gives your word "apple" meaning.</span><span style="color:rgb(80,0,80)"><b> </b><br><br>Absolutely not. This is not how language works. <br>It takes a long time for a child, who is strongly wired to learn language, to understand what you mean when you point to an apple and say "apple". It also requires a certain level of brain development. Teaching children colors is even more difficult and requires more time. The difficulty lies exactly in the opposite of what you say is the essence and importance of having referents. It is all in the ABSTRACTION that is needed to actually make the association. <br><br>This has been pointed out to you many times (also to Brent, with his insistence on the quality-of-redness nonsense). It takes time to make the association between what an adult calls an apple and what a child sees. <br><br>What is the essence of an apple? Is it being round? Being a round edible object (as opposed to a round ball)? What about an orange? That is another round edible object, but it is not an apple because... What about an apple in a picture vs a real apple? What about our dog called Apple? You understand what I'm trying to express. It is not as easy as you think to associate the word "apple" with an object, because it is a complex process that has almost nothing to do with the referent itself. 
The referent plays very little role, and it is not at all what gives language meaning and power. It is all in the ABSTRACTIONS, all the relationships at higher levels (in fact, statistical ones that we compute approximately in our brains). <br><br>This is why we can give meaning to things that are abstract in the first place, like love or meaning itself. <br>This is why we can imagine dragons, flying pigs, and so on. This is why languages can be bootstrapped from a single axiom or definition (even an arbitrary one), as one does with the null set in mathematics. </span></div><div><span style="color:rgb(80,0,80)"><br>I have looked for a paper on how one can bootstrap an entire language from something similar to the null set; it is probably out there somewhere, but if not, one day I will try it myself. But mathematics derived from the null set is at least a counterexample to your statement that language needs referents for meaning to emerge. <br><br>Also, one has to be clever about how to use GPT-4 on these topics. </span></div><div><span style="color:rgb(80,0,80)">Instead of asking whether it is conscious or understands language, do tests to see if it does. <br><br>One test I did was to ask it to imagine a conversation between beings in different dimensions that don't even share the same laws of physics, let alone possible common referents like chemical elements or things like rocks or stars. It gave me a very interesting example of using a series of 0s and 1s in a given sequence to let the other entity know they understood similar and different, following a sequence in time, yes, no, and so on. It was an incredibly fascinating example, because it shows how you could communicate with another being with almost no referents in common, needing just a few fundamental abstract ideas, such as different and similar, that don't need any rocks to be defined. 
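The kind of exchange described above can be sketched in a few lines of code. This is only an illustrative toy, not GPT-4's actual output; the two "moves" and the encodings are my own hypothetical choices:

```python
# Toy sketch of a referent-free exchange: two agents that share no
# physical world agree on "same" and "different" using only bit lists.

def echo(signal):
    # Repeating a message back signals "same" / "I understand".
    return list(signal)

def invert(signal):
    # Flipping every bit signals "different" / "no".
    return [1 - b for b in signal]

pattern = [0, 1, 0, 1]        # an arbitrary opening message
same = echo(pattern)          # [0, 1, 0, 1]
different = invert(pattern)   # [1, 0, 1, 0]

# Once "same" and "different" are fixed, further symbols can be defined
# purely by their relations to earlier messages -- no external referent.
```

The point of the sketch is that both primitives are defined relationally (identity vs contrast between messages), not by pointing at any object.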
One can see that once you establish "I'm here", "I understand", "Yes", "No", "same", and "different", one can little by little build an entire language with basically no physical referents. <br>GPT-4 came up with that. <br><br>So you are simply wrong, Gordon. You have an example above from GPT-4 that shows referents may be useful for survival in biological beings like us, but they are completely unnecessary for language and meaning. <br>The case should be closed. <br>Giovanni <br><br></span></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 5, 2023 at 7:20 AM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Wed, 5 Apr 2023 at 14:20, spike jones via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> From: extropy-chat <<a href="mailto:extropy-chat-bounces@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat-bounces@lists.extropy.org</a>> On Behalf Of Jason Resch via extropy-chat<br>
> >…This is a phenomenon we are all subject to and which we should all be aware of called cognitive dissonance. It can occur whenever our brains encounter information perceived as threatening to our existing beliefs …Jason<br>
><br>
> Ja. In our world today, we are in a culture war in which many of our most fundamental beliefs are being challenged. Those with the most cognitive dissonance see offense in what looks like perfectly innocuous observations to those who have little if any cog-dis. Thx Jason.<br>
><br>
> spike<br>
<br>
<br>
<br>
No problem. It just takes a bit of practice. :)<br>
<br>
Quote:<br>
“Alice laughed. 'There's no use trying,' she said. 'One can't believe<br>
impossible things.'<br>
<br>
I daresay you haven't had much practice,' said the Queen. 'When I was<br>
your age, I always did it for half-an-hour a day. Why, sometimes I've<br>
believed as many as six impossible things before breakfast!”<br>
― Lewis Carroll<br>
---------------<br>
<br>
BillK<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>
</blockquote></div>
</blockquote></div></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>