[ExI] Should we teach Strong AI Ethics?

Colin Hales col.hales at gmail.com
Thu May 26 23:07:59 UTC 2016


<http://www.smbc-comics.com/index.php?id=4122>

<sigh> The technical term 'strong AI' has reached its graveyard in the
popular lexicon. I read the first box "Strong AI invented" and saw
"{Hypothesis} invented". Interesting to watch Searle's terminology get bent
and broken by crowds and ignorance. A bit like ethics, really.


On Fri, May 27, 2016 at 7:20 AM, Anders Sandberg <anders at aleph.se> wrote:

> On 2016-05-26 17:18, BillK wrote:
>
> <http://www.smbc-comics.com/index.php?id=4122>
>
> Serious point though.
> If we teach AI about ethical behaviour (for our own safety), what do we
> expect the AI to do when it sees humans behaving unethically (to a
> greater or lesser extent)?
>
> Can a totally ethical AI even operate successfully among humans?
>
>
> What is "totally ethical"?
>
> [Philosopher hat on!]
>
> Normally when we say something like that, we mean somebody who follows the
> One True moral system perfectly. Or at least one moral system perfectly.
> There are no humans that do it, so we do not have reliable intuitions about
> what it would mean. Now, a caricature view of moral perfection is somebody
> being a saintly wuss: super kind, but exploitable by imperfect and nasty
> actors.
>
> But there is no reason to think this is the only choice. You could imagine
> a morally perfect Objectivist, following rules of enlightened selfishness.
> Or a perfect average utilitarian maximizing the average happiness of all
> entities in our future lightcone. Neither would be a pushover ("If I give
> you my wallet there will be fewer resources for my von Neumann probe
> program. So, no, I will not give it to you. In fact, I will now force you
> to give me your money - I see that this will enable a further quintillion
> minds. Thank you.") Convergent instrumental goal behavior likely tends to
> turn wussy nice agents non-wussy.
>
> There is an interesting issue about what to do with imperfect moral agents
> if you are a perfect one. A Kantian agent would presumably respect their
> autonomy and try to guide them to see how to obey the categorical
> imperative. A consequentialist agent would try to manipulate them to behave
> better, but the means might be anything from incentives to persuasion to
> brainwashing. A virtue agent might not care at all, just demonstrating its
> own excellence. A paperclip maximizing agent would find non-paperclip
> maximizers a waste of resources and work to remove them.
>
> In fact, most pure moral systems are very bad at "live and let live". We
> humans tend to de facto behave like that because our power is about equal;
> entities that are orders of magnitude more powerful may not behave like
> that unless we get the value code just right.
>
> --
> Dr Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University
>
>