<br><br><div class="gmail_quote">2010/11/18 spike <span dir="ltr"><<a href="mailto:spike66@att.net">spike66@att.net</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div lang="EN-US" link="blue" vlink="purple"><div><p class="MsoNormal"><span style="font-size:11.0pt;color:#1F497D">…</span><span style="font-size:10.0pt"> <b>On Behalf Of </b>Florent Berthet</span> </p></div></div></blockquote>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div lang="EN-US" link="blue" vlink="purple"><div><p class="MsoNormal"><font class="Apple-style-span" color="#1F497D"><br>
>> …I don't really mind dying if my successors (supersmart beings or whatever) can be hundreds of times happier than me…
>>
>> More generally, wouldn't it be a shame to prevent an AGI from creating an advanced civilization (e.g. computronium-based) just because that outcome could turn out to be less "friendly" to us than the one produced by a human-friendly AGI? In the end, isn't the goal to maximize collective happiness?
>
> Florent, you are a perfect example of a dangerous person to have on the AGI development team. You (and I too) might go down this perfectly logical line of reasoning, then decide to take it upon ourselves to release the AGI in order to maximize happiness.

Do you know what the Singinst folks (whom I support, by the way) think about that?

<div lang="EN-US" link="blue" vlink="purple"><div><div><div><div class="im"><p class="MsoNormal"><span style="color:#1F497D">>…</span>So why don't we just figure out how to make the AGI understand the concept of happiness (which shouldn't be hard since we already understand it), and make it maximize it?<span style="color:#1F497D"></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:#1F497D"> </span></p></div><p class="MsoNormal"><span style="font-size:11.0pt;color:#1F497D">Doh! You were doing so well up to that point, then the fumble right at the goal line. We don’t really understand happiness. We know what makes us feel good, because we have endorphins. An AGI would (probably) not have endorphins. We don’t know if it would be happy or what would make it happy.</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;color:#1F497D"> </span></p><p class="MsoNormal"><span style="font-size:11.0pt;color:#1F497D">spike</span></p><p class="MsoNormal"><span style="font-size:11.0pt;color:#1F497D"> </span></p>

Yeah, I was tempted to moderate that statement. What I meant was that although we don't fully grasp all the mechanisms of the feeling of happiness, and we certainly don't know all the kinds of happiness that could exist, we understand reasonably well what it means for somebody to be happy or unhappy. An AGI should be able to get this too, for it would understand that we all seek this state of mind, and it would probably try to duplicate the phenomenon in itself (which shouldn't be hard, because everything is computable, the effects of endorphins included).