<div dir="ltr"><br><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>This view is so queer that metaphysicians have invented all sorts of theories designed to substitute something less incredible.”</div></div></blockquote><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>-- Bertrand Russell in “<a href="https://archive.org/details/in.ernet.dli.2015.222952/page/n157/mode/2up?q=%22We+suppose+that+a+physical+process+starts+from+a+visible+object%22" target="_blank">An Outline of Philosophy</a>” (1927) <br><br></div></div></blockquote></blockquote>Jason, <br>yes great quotes on the topic and very consistent among scientific great minds over the centuries. It is clear to a scientific mind what this is all about but the metaphysicians still like to make it confusing because their position in the end is basically a religious one (the main goal is to show humans are special and made in God's image, whatever that means). <div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Mar 25, 2023 at 1:37 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><b>The Chinese Room argument is garbage because a magic book with the answers to every question isn't real, and if it was, it would already be a mind. <br></b>Yep, basically the description of a chinese room is exactly what our brain is, with the neurons taking the place of the people in the room. By the time the Chinese room can answer as a sentient being then room is a mind. Not sure why this argument was ever taken seriously. </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Mar 25, 2023 at 6:25 AM Will Steinberg via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">The Chinese Room argument is garbage because a magic book with the answers to every question isn't real, and if it was, it would already be a mind. <div dir="auto"><br></div><div dir="auto">I find that often thought experiments with shoddy bounds fail hard. The bound here is the beginning of the experiment, where the situation is already magically in front of us. Where did the book come from? How was it created?</div><div dir="auto"><br></div><div dir="auto">Of course it's easy to write out the words for a thought experiment when you invent an object, central to the experiment but of course not the subject of it, that magically does exactly what you need it to do in order to make the experiment. 
> > On Fri, Mar 24, 2023, 5:49 AM Jason Resch via extropy-chat <extropy-chat@lists.extropy.org> wrote:
> >
> > > On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe <gordon.swobe@gmail.com> wrote:
> > >
> > > > On Thu, Mar 23, 2023 at 9:37 PM Jason Resch via extropy-chat <extropy-chat@lists.extropy.org> wrote:
> > > >
> > > > > There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf and come away with the impression that GPT-4 has no idea what it is talking about.
> > > >
> > > > Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will know word meanings any more than GPT-3 does. Eventually AI on digital computers will far surpass human intelligence, but even then these computers will be manipulating the forms of words, not their meanings.
> > >
> > > It seems to me that you have accepted Searle's arguments; I believe I can knock down his Chinese room argument. If that is what you are basing your decision on, you should know that almost no philosophers or computer scientists believe his argument holds water. Here is just one of its many flaws: there is more than one mind in the room. Ask the room about its favorite food, or about its experiences as a child; the answers given will not be Searle's. Swap Searle for someone else and the room will respond the same way: Searle is an interchangeable cog in the machine. Yet Searle wants us to believe that only his opinion matters. In truth, his position is no different from that of the "laws of physics," which "mindlessly" compute our evolving brain state "without any understanding" of what goes on in our heads. Searle's Chinese room argument works as any great magic trick does: through misdirection. Ignore the claims made by the man in the room who is shouting and waving his arms.
> > > Once we've established that there are two minds in the room (Searle's, and whichever one is actually doing the answering), we can replace Searle with a mindless demon and there will still be one mind left.
> > >
> > > > Do you believe, like my friend who fell in love with a chatbot, that a software application can have genuine feelings of love for you?
> > >
> > > I think we should defer such a debate until such time as we can confidently define what a "genuine feeling" is and how to implement one.
> > >
> > > Jason