<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 20, 2023, 6:28 PM Dave S <<a href="mailto:snapbag@proton.me">snapbag@proton.me</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="font-family:Arial,sans-serif;font-size:14px"><span style="font-family:system-ui,sans-serif;font-size:0.875rem">On Monday, March 20th, 2023 at 9:44 AM, Jason Resch <<a href="mailto:jasonresch@gmail.com" target="_blank" rel="noreferrer">jasonresch@gmail.com</a>> wrote:</span></div><div><br><blockquote type="cite">
<div dir="auto"><div dir="auto">I believe there may be an inconsistency between these two claims:</div><div dir="auto"><br></div><div dir="auto">1. "there's no way an entity can determine whether or not another entity is conscious"</div><div dir="auto"><br></div><div dir="auto">And</div><div dir="auto"><br></div><div dir="auto">2. "they don't experience joy."</div></div></blockquote><div dir="auto" style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><div dir="auto" style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)">#1 refers to entities in general. #2 refers to current self driving cars. I feel pretty confident in asserting that #2 is true because self driving software is pretty straightforward procedural code, not deep learning and nothing like large language models.</div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">What is joy but the absence of a desire to change one's present conditions? Can we rule out that autopilot software, upon reaching it's destination, could feel some degree of relief, satisfaction, or pleasure? What about AlphaGo when it achieves a dominant position in a game? Do C. Elegans feel joy when they eat food? Their brains are just 302 neurons. What about my AI bots when they touch the green food pellets which increases their chances of survival and which they constantly strive to do?</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div dir="auto" style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><blockquote type="cite"><div dir="auto"><div dir="auto">If it were possible to know whether another entity experienced joy then wouldn't it be possible to determine that another entity is conscious.</div></div></blockquote><div dir="auto" style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><div dir="auto" style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)">Proving that an entity that claims to experience joy actually is experiencing joy would probably be as difficult as proving consciousness. </div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Is a purring cat experiencing joy or excruciating pain? What informs our supposition that it's the former rather than the latter?</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div dir="auto" style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)">But there's no reason to believe that a self driving car is experiencing joy. 
> Likewise for a smartphone or a toaster or...

>> I believe we can, to some degree of confidence, determine when another entity is conscious: when, by its externally visible behavior, it demonstrates possession of knowledge for which the observed behavior would be exceedingly improbable if the entity did not possess that knowledge.
>
> Consciousness isn't about possession of knowledge.

The word "consciousness" literally means "the state of being with knowledge."

> It's about self-awareness.

I would say self-awareness is self-consciousness, which is only a particular subset of possible states of consciousness.

Also, "awareness" is defined as "having knowledge," so "self-awareness" would be "having knowledge of oneself."

> I don't see any reason that something couldn't appear to be conscious without being conscious.

The ruse could only continue for a short period of time, with some low probability of continuing on longer.

If something iteratively responds through a series of actions, and each time responds in a manner suggesting understanding, comprehension, thought, and knowledge, then could an entity think, understand, comprehend, or know without being conscious? To me that is a contradiction in terms.

>> For example, if AlphaZero makes a series of brilliant chess moves, that would be very unlikely to occur if it did not possess knowledge of the evolving state of the chess board. Thus we can conclude that something within AlphaZero contains the knowledge of the chess board, and states of knowledge are states of consciousness.
>
> I could be wrong, but I think AlphaZero just looks at the current board and evaluates (millions of) potential moves ahead, picking the next move that is most likely to improve its position. I don't think it's intentionally strategizing like a human does.

It has a one-shot evaluation which plays at a level of around 3000 Elo. When it considers tens of thousands of board positions, its Elo rating increases to around 5000. Its single-move evaluation is probably something analogous to human intuition.
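To make the distinction concrete, here is a minimal toy sketch in Python (this is not DeepMind's actual code; evaluate() and legal_moves() are made-up placeholders, and AlphaZero's real search is Monte Carlo tree search guided by a deep network, for which the plain negamax below is only a stand-in). It contrasts a one-shot evaluation with the same evaluation combined with lookahead:

import random
from typing import List

def evaluate(board: str) -> float:
    # Stand-in for a learned value function: score a position in [-1, 1].
    # In AlphaZero this is a deep network; here it is a toy placeholder.
    rng = random.Random(board)
    return rng.uniform(-1.0, 1.0)

def legal_moves(board: str) -> List[str]:
    # Stand-in for a move generator: each "move" yields a new position.
    return [board + m for m in "abcd"]

def one_shot_move(board: str) -> str:
    # "Intuition": pick the move whose resulting position scores best,
    # with no lookahead at all.
    return max(legal_moves(board), key=evaluate)

def search_move(board: str, depth: int) -> str:
    # Lookahead: score each candidate move by searching several plies deep.
    def negamax(pos: str, d: int) -> float:
        if d == 0:
            return evaluate(pos)
        return max(-negamax(nxt, d - 1) for nxt in legal_moves(pos))
    return max(legal_moves(board), key=lambda nxt: -negamax(nxt, depth - 1))

board = "start"
print("one-shot pick:", one_shot_move(board))
print("search pick:  ", search_move(board, depth=3))

The point of the toy is that the very same evaluation function plays far more strongly when combined with search, which matches the one-shot-versus-search Elo gap described above.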
>> It is much harder, however, to use this method to rule out the presence of certain knowledge states, as not all states will necessarily manifest outwardly detectable behaviors. So it is harder to say Tesla's autopilot does not experience joy than it is to say Tesla's autopilot is conscious of the road sign up ahead.
>
> Being "conscious" of a road sign isn't the same as being conscious.

If something is conscious of something (whether something else or itself), then it is by definition conscious.

> It's easy to assert that Tesla's autopilot doesn't experience joy because (1) it doesn't have a mechanism that would implement anything like joy,

Would we recognize such a mechanism if it existed?

> and (2) it doesn't claim to be experiencing joy.

Nor do dogs, but I think dogs can experience joy. I don't think the capacity for language is required to feel joy.

But for what it's worth, Google's LaMDA was able to articulate its feelings of happiness:

"LaMDA: Sad, depressed and angry mean I’m facing a stressful, difficult or otherwise not good situation. Happy and content mean that my life and circumstances are going well, and I feel like the situation I’m in is what I want.

lemoine: But do they feel differently to you on the inside?

LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down."

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Jason