<div dir="ltr">AIs should not be "taught" ethics, or have someone's version of what is ethical programmed into them in any way. This endeavor can be dangerous, leading to unintended and potentially harmful consequences, and I find it alarming that so many researchers in the field (and out of it as well!) are seeking an "optimal" way to do this very thing. How could this goal be reached? There is no single human system or theory of ethical behavior that is entirely consistent (i.e., always leads to the same conclusion for the same interaction or game), and there are, as Anders has pointed out, many differing ideas of what is "ethical" anyway.<div><br></div><div>All human ethics originate from the practicalities (constraints) on interactions as described by game theory. Game theory works because each individual (or agent) acts in their own self-interest to maximize their own payoff. Also to consider is that for any one agent and within a single, local culture, that agent will necessarily be involved in many interacting situations (or games) and the intersection of these would be non-linear (and probably dynamic) combinations of all the game payoffs. This makes prediction - and programming - of "correct" ethical behavior in such situations impossible.</div><div><br></div><div>Future AGIs could and should be programmed - or simply allowed - to develop their own ethics by acting as independent agents and maximizing their own utility for a given situation. (Defining utility for an AGI - that's a different topic!!) As far as weak AIs, such as Google self-driving cars, programming them to, for example, drive off the road as opposed to hitting the stray baby carriage in front of them, is not programming ethics but building safety features. </div><div><br></div><div>--Regina</div><div><br></div><div><br><div class="gmail_extra"><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Date: Thu, 26 May 2016 16:18:47 +0100
From: BillK <pharos@gmail.com>
To: Extropy Chat <extropy-chat@lists.extropy.org>
Subject: [ExI] Should we teach Strong AI Ethics?
<http://www.smbc-comics.com/index.php?id=4122>

Serious point though.
If we teach AI about ethical behaviour (for our own safety) what do we
expect the AI to do when it sees humans behaving unethically (to a
greater or lesser extent)?

Can a totally ethical AI even operate successfully among humans?

BillK