[ExI] Should we teach Strong AI Ethics?

William Flynn Wallace foozler83 at gmail.com
Fri May 27 01:37:13 UTC 2016


In fact, most pure moral systems are very bad at "live and let live". We
humans tend to de facto behave like that because our power is about equal;
entities that are orders of magnitude more powerful may not behave like
that unless we get the value code just right.  - Anders

I find that people who construct moral systems, as well as those who just
interpret them, are often less concerned with being right than with other
people being wrong or bad.

In the American South, sermons, of which I have heard hundreds, from
Baptist to Episcopalian, are full of finger-pointing, though sometimes at
oneself.  And the more vociferous (Baptist), the better for those who like
to hear about how bad the bad guys are (and, by comparison, how righteous
we are).  Or, if you are listening and are one of the bad guys, you break
down emotionally, come forward to be saved, and give your testimony.

It would be very easy to program an AI to sermonize like this.  Just get
books of sermons and have the AI scramble them, perhaps swapping in
different examples (something real preachers do all the time), and you
could go into business as a Dial-a-Sermon (Click-on-a-Sermon, perhaps?).
Something like the quick sketch below would do for a start.
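
A minimal sketch of the scrambling step, assuming a word-level Markov
chain in Python; sermons.txt, the chain order, and the output length are
my own illustrative choices, not anything that exists:

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each `order`-word window to the words seen following it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=60):
    # Start at a random window and walk the chain one word at a time.
    key = random.choice(list(chain))
    out = list(key)
    while len(out) < length:
        followers = chain.get(key)
        if not followers:               # dead end: jump to a fresh key
            key = random.choice(list(chain))
            continue
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

sermons = open("sermons.txt").read()    # hypothetical corpus file
print(generate(build_chain(sermons)))

A higher order hews closer to the source sermons; order 2 scrambles
freely while staying recognizably sermon-flavored.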

It would be hilarious (to us) and make tons of money.

I'll bet Spike has some ideas on the visuals.

bill w

On Thu, May 26, 2016 at 4:20 PM, Anders Sandberg <anders at aleph.se> wrote:

> On 2016-05-26 17:18, BillK wrote:
>
> <http://www.smbc-comics.com/index.php?id=4122>
>
> Serious point though.
> If we teach AI about ethical behaviour (for our own safety) what do we
> expect the AI to do when it sees humans behaving unethically (to a
> greater or lesser extent)?
>
> Can a totally ethical AI even operate successfully among humans?
>
>
> What is "totally ethical"?
>
> [Philosopher hat on!]
>
> Normally when we say something like that, we mean somebody who follows the
> One True moral system perfectly, or at least one moral system perfectly.
> No human does this, so we do not have reliable intuitions about what it
> would mean. Now, a caricature view of moral perfection is somebody being a
> saintly wuss: super kind, but exploitable by imperfect and nasty actors.
>
> But there is no reason to think this is the only choice. You could imagine
> a morally perfect Objectivist, following rules of enlightened selfishness.
> Or a perfect average utilitarian maximizing the average happiness of all
> entities in our future lightcone. Neither would be a pushover ("If I give
> you my wallet there will be fewer resources for my von Neumann probe
> program. So, no, I will not give it to you. In fact, I will now force you
> to give me your money - I see that this will enable a further quintillion
> minds. Thank you.") Convergent instrumental goals likely turn wussy nice
> agents non-wussy.
>
> There is an interesting issue about what to do with imperfect moral agents
> if you are a perfect one. A Kantian agent would presumably respect their
> autonomy and try to guide them to see how to obey the categorical
> imperative. A consequentialist agent would try to manipulate them to behave
> better, but the means might be anything from incentives to persuasion to
> brainwashing. A virtue agent might not care at all, just demonstrating its
> own excellence. A paperclip maximizing agent would find non-paperclip
> maximizers a waste of resources and work to remove them.
>
> In fact, most pure moral systems are very bad at "live and let live". We
> humans tend to de facto behave like that because our power is about equal;
> entities that are orders of magnitude more powerful may not behave like
> that unless we get the value code just right.
>
> --
> Dr Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University