<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Mar 19, 2023, 4:10 PM Dave S via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="font-family:Arial,sans-serif;font-size:14px"><span style="font-family:system-ui,sans-serif;font-size:0.875rem">On Sunday, March 19th, 2023 at 2:01 PM, Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:</span></div><div><br><blockquote type="cite">
<div dir="ltr"><div class="gmail_quote"><div>[...] But I also think we cannot rule out at this time the possibility that we have already engineered conscious machines. Without an established and agreed upon theory of consciousness or philosophy of mind, we cannot even agree on whether or not a thermostat is conscious.</div></div></div></blockquote><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)">I think that's a rabbit hole that isn't going to yield much of use, since there's no way an entity can determine whether or not another entity is conscious.</div><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><blockquote type="cite"><div dir="ltr"><div class="gmail_quote"><div>Where does our own volition and initiative come from? Is it not already programmed into us by our DNA?</div></div></div></blockquote><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)">The mechanisms are in our DNA. Some of it is hormone-driven, like hunger, sex drive, etc. Some of it comes from our thoughts and experiences. If we try a food we like a lot, we'll seek it out again.</div><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><blockquote type="cite"><div dir="ltr"><div class="gmail_quote"><div>And is our own DNA programming that different in principle from the programming of a self-driving car to seek to drive to a particular destination?</div></div></div></blockquote><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)"><br></div><div style="font-family:Arial,sans-serif;font-size:14px;color:rgb(0,0,0)">Yes. We decide when and where to go. Self-driving cars don't just go on random joy rides. 
They don't have initiative and they don't experience joy.</div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I believe there may be an inconsistency between these two claims:</div><div dir="auto"><br></div><div dir="auto">1. "there's no way an entity can determine whether or not another entity is conscious"</div><div dir="auto"><br></div><div dir="auto">And</div><div dir="auto"><br></div><div dir="auto">2. "they don't experience joy."</div><div dir="auto"><br></div><div dir="auto">If it were possible to know whether another entity experienced joy, then wouldn't it be possible to determine that another entity is conscious?</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I believe we can, to some degree of confidence, determine that another entity is conscious when, by its externally visible behavior, it demonstrates possession of knowledge for which the observed behavior would be exceedingly improbable if the entity did not possess that knowledge.</div><div dir="auto"><br></div><div dir="auto">For example, if AlphaZero makes a series of brilliant chess moves, that would be very unlikely to occur if it did not possess knowledge of the evolving state of the chess board. Thus we can conclude that something within AlphaZero contains knowledge of the chess board, and states of knowledge are states of consciousness.</div><div dir="auto"><br></div><div dir="auto">It is much harder, however, to use this method to rule out the presence of certain knowledge states, as not all states will necessarily manifest in outwardly detectable behavior. So it is harder to say that Tesla's autopilot does not experience joy than it is to say that Tesla's autopilot is conscious of the road sign up ahead.</div><div dir="auto"><br></div><div dir="auto">Jason</div></div>