<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"> From a operational viewpoint a emotion is just a predisposition to do X rather than Y, John Clark</span><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">So, I think I was right - the computer leans (has an attitude towards something it considers positive) and bases its decision on that leaning; meaning the probabilities it calculates always influence its decision: it always goes with the highest probability of success - I am assuming. </span></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">Probabilities are then the analog of emotions, and are to be preferred to "Oh, I don't exactly know why, I just kinda like that one."</span></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large"><br></span></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:large">bill w</span></div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Oct 22, 2018 at 2:14 PM John Clark <<a href="mailto:johnkclark@gmail.com">johnkclark@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Sun, Oct 21, 2018 at 1:47 PM William Flynn Wallace <<a href="mailto:foozler83@gmail.com" target="_blank">foozler83@gmail.com</a>> wrote:</span><br></div><div class="gmail_quote"><div dir="ltr"><div class="gmail_quote"><br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>Recent books by Damasio and Sapolsky show conclusively that without emotions, people just cannot make decisions that make much sense. </div></div></div></blockquote><div><br></div><div class="gmail_default"><font face="arial, helvetica, sans-serif"></font><font size="4">AlphaZero makes decisions when it plays GO or Chess or Shogi and those decisions make sense, more sense than the decisions the humans make when they play against it. 
So I guess AlphaZero has emotions.</font></div><div class="gmail_default"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>Are AIs somehow to be equipped with superhuman emotions? </div></div></div></blockquote><div><br></div><div class="gmail_default"><font face="arial, helvetica, sans-serif"></font><font size="4">They don't need to be super; regular old emotions will do. From an operational viewpoint, an emotion is just a predisposition to do X rather than Y, and I see no reason that would be especially hard to program. For example, pain could be a subroutine: the closer the number in the X register comes to the integer P, the more computational resources are devoted to changing that number. If it ever actually equals P, the program should stop doing everything else and do nothing but try to change that number, until it is far enough away from P that it's no longer an urgent matter and the program can again do things that have nothing to do with P.</font></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="color:rgb(0,0,0);font-family:"comic sans ms",sans-serif"><span class="gmail_default" style="font-family:arial,helvetica,sans-serif">> </span>Historically, of course, emotional decisions were regarded as greatly flawed and to be avoided, and now we find out that we simply cannot avoid them. </span></blockquote><div><br></div><div><div class="gmail_default"><font face="arial, helvetica, sans-serif"></font><font size="4">I agree; there is no logical reason to prefer doing something rather than nothing, or to prefer life over death, but I like something better than nothing and life better than death anyway, because that's the way my brain is wired.<br><br>John K Clark</font><br></div><br></div><br>
</div></div>
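<div class="gmail_default" style="font-family:arial,helvetica,sans-serif">A minimal Python sketch of the pain-as-a-subroutine idea Clark describes above. The register name, the threshold P, and the way effort is split are assumptions invented for illustration; no real system is claimed to work this way.</div>
<pre>
# Illustrative sketch: "pain" as a predisposition to push the X register away from P.
# P, SAFE_DISTANCE, and the callbacks are made-up names for this example.

P = 100              # the "painful" value
SAFE_DISTANCE = 20   # how far from P the register must get before pain stops mattering

def pain_priority(x):
    """The closer x is to P, the larger the share of effort spent on changing it."""
    distance = abs(x - P)
    if distance == 0:
        return 1.0                                   # maximum urgency: drop everything else
    return max(0.0, 1.0 - distance / SAFE_DISTANCE)  # fades to 0 once far enough from P

def step(x, normal_work, move_away_from_p):
    """Split one time slice between ordinary work and escaping P."""
    urgency = pain_priority(x)
    if urgency > 0.0:
        x = move_away_from_p(x)      # spend effort pushing the register away from P
    if urgency >= 1.0:
        return x                     # x equals P: do nothing but try to escape
    normal_work(1.0 - urgency)       # remaining effort goes to everything unrelated to P
    return x

# Example run: the register starts near P, so most effort goes to escaping it.
x = 95
for _ in range(5):
    x = step(x, normal_work=lambda share: None, move_away_from_p=lambda v: v - 3)
print(x)   # the register drifts away from P until pain_priority(x) reaches 0
</pre>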
</blockquote></div>
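<div class="gmail_default" style="font-family:arial,helvetica,sans-serif">And a minimal sketch of the decision rule bill w suggests, where the program's "leaning" is just an estimated probability of success for each option and it goes with the highest one. The option names and numbers are made up for illustration; this is a toy picture, not a description of how AlphaZero actually chooses moves.</div>
<pre>
# Illustrative only: a "leaning" expressed as an estimated probability of success per option.
# The options and their probabilities are invented for this example.

def choose(estimates):
    """Go with the option that has the highest estimated probability of success."""
    return max(estimates, key=estimates.get)

estimates = {"move_a": 0.62, "move_b": 0.48, "move_c": 0.55}
print(choose(estimates))   # move_a, the strongest "leaning"
</pre>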