<div dir="ltr"><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">In fact, most pure moral systems are very bad at "live and let live". We humans tend to de facto behave like that because our power is about equal; entities that are orders of magnitude more powerful may not behave like that unless we get the value code just right. anders</span><br style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">I find that people who construct moral systems, as well as those who just interpret them, are often less concerned about being right than with other people being wrong/bad.</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px"><br></span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:12.8px">In the American South, sermons, of which I have heard hundreds, from Baptist to Episcopalian, are full of fingerpointing, though sometimes at oneself. And the more vociferous (Baptist) the better for those who like to hear about how bad the bad guys are (and by comparison how righteous we are). Or if you are listening to this and are a bad guy, you break down emotionally and come forward to be saved and give your testimony.</span></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><br></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)">It would be very easy to program an AI to sermonize like this. Just get books of sermons and have the AI scramble them, and perhaps use different examples (something real preachers do all the time), and you could go into business as a Dial a Sermon (ClickOn a Sermon perhaps?). </div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)"><br></div><div class="gmail_default" style="font-family:'comic sans ms',sans-serif;font-size:small;color:rgb(0,0,0)">It would be hilarious (to us) and make tons of money. 
I'll bet Spike has some ideas on the visuals.

bill w

On Thu, May 26, 2016 at 4:20 PM, Anders Sandberg <anders@aleph.se> wrote:

On 2016-05-26 17:18, BillK wrote:

http://www.smbc-comics.com/index.php?id=4122
Serious point though.
If we teach AI about ethical behaviour (for our own safety) what do we
expect the AI to do when it sees humans behaving unethically (to a
greater or lesser extent)?
Can a totally ethical AI even operate successfully among humans?

What is "totally ethical"?

[Philosopher hat on!]

Normally when we say something like that, we mean somebody who follows the One True moral system perfectly. Or at least one moral system perfectly. There are no humans who do it, so we do not have reliable intuitions about what it would mean. Now, a caricature view of moral perfection is somebody being a saintly wuss: super kind, but exploitable by imperfect and nasty actors.

But there is no reason to think this is the only choice. You could imagine a morally perfect Objectivist, following rules of enlightened selfishness. Or a perfect average utilitarian maximizing the average happiness of all entities in our future lightcone. Neither would be a pushover ("If I give you my wallet there will be fewer resources for my von Neumann probe program. So, no, I will not give it to you. In fact, I will now force you to give me your money - I see that this will enable a further quintillion minds. Thank you."). Convergent instrumental goal behavior likely tends to turn wussy nice agents non-wussy.

There is an interesting issue about what to do with imperfect moral agents if you are a perfect one. A Kantian agent would presumably respect their autonomy and try to guide them to see how to obey the categorical imperative. A consequentialist agent would try to manipulate them into behaving better, but the means might be anything from incentives to persuasion to brainwashing. A virtue agent might not care at all, just demonstrating its own excellence. A paperclip-maximizing agent would find non-paperclip maximizers a waste of resources and work to remove them.

In fact, most pure moral systems are very bad at "live and let live". We humans tend to de facto behave like that because our power is about equal; entities that are orders of magnitude more powerful may not behave like that unless we get the value code just right.

<pre cols="72">--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University</pre>
</font></span></div>