<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><blockquote type="cite" class=""><div dir="ltr" class=""><div class="gmail_quote"><div dir="ltr" class=""><div class="" style="font-family: "comic sans ms", sans-serif; font-size: large;">(re an AI trying to pass the Turing test): 'It is easy to fool them, not only because they don't have the kind of detailed contextual knowledge of the outside world that any normal person would have, but because they don't have memories even of the conversation they have been having, much less ways to integrate such memories into the rest of the discussion.'</div></div></div></div></blockquote><blockquote type="cite" class=""><div dir="ltr" class=""><div class="gmail_quote"><div dir="ltr" class=""><div class="" style="font-family: "comic sans ms", sans-serif; font-size: large;">No memory? bill w</div></div></div></div></blockquote><div class=""><br class=""></div>I think Carroll is referring to traditional chat bots, which could not effectively use information scattered throughout the conversation when formulating responses. Simple bots, Markov chain bots for example, choose each next word based only on the few words immediately preceding it. 
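To make that limitation concrete, here is a minimal sketch (illustrative Python, not from the original message) of a bigram Markov chain bot: it conditions each generated word on only the single word before it, so everything earlier in the conversation is invisible to it.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Emit words by sampling a successor of ONLY the most recent word --
    the bot has no memory of anything generated before that."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Toy corpus, purely for illustration
corpus = "the bot sees the last word and the bot picks the next word"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Widening the window to the last two or three words (a higher-order Markov chain) only extends this "memory" by a few tokens, which is exactly the shortcoming Carroll describes.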
Utilizing all of the available information when formulating a response requires a far more sophisticated model, such as the much-hyped transformer-based large language models.<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On May 4, 2023, at 2:44 PM, William Flynn Wallace via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class="gmail_default" style="font-family: "comic sans ms", sans-serif; font-size: large;"><br class=""></div><div class="gmail_quote"><div class="gmail_default" style="font-family: "comic sans ms", sans-serif; font-size: large;">(I decided that you needed what the author wrote next)</div><br class=""><div dir="ltr" class=""><div style="font-family: "comic sans ms", sans-serif; font-size: large;" class="">from The Big Picture, by Sean Carroll, astrophysicist:</div><div style="font-family: "comic sans ms", sans-serif; font-size: large;" class=""><br class=""></div><div style="font-family: "comic sans ms", sans-serif; font-size: large;" class="">(re an AI trying to pass the Turing test): 'It is easy to fool them, not only because they don't have the kind of detailed contextual knowledge of the outside world that any normal person would have, but because they don't have memories even of the conversation they have been having, much less ways to integrate such memories into the rest of the discussion.'</div><div style="font-family: "comic sans ms", sans-serif; font-size: large;" class=""><br class=""></div><div style="font-family: "comic sans ms", sans-serif; font-size: large;" class=""><div class="gmail_default" style="font-family: "comic sans ms", sans-serif; font-size: large;">In order to do so, they would have to have inner mental states that depended on their entire histories in an integrated way, as well as the ability to conjure up hypothetical future situations, all along 
distinguishing the past from the future, themselves from their environment, and reality from imagination. As Turing suggested, a program that was really good enough to convincingly sustain human-level interactions would have to be actually thinking.</div><br class=""></div><div style="font-family: "comic sans ms", sans-serif; font-size: large;" class="">No memory? bill w</div></div>
</div></div>
_______________________________________________<br class="">extropy-chat mailing list<br class=""><a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a><br class="">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat<br class=""></div></blockquote></div><br class=""></body></html>