[ExI] Should we teach Strong AI Ethics?

Re Rose rocket at earthlight.com
Sun May 29 15:34:53 UTC 2016


Well, small new humans learn ethics through game theory, supplemented by
parental and social inculcation of local culture. I propose that the
safest way for AGIs to develop a sense of ethics is the same way - and
that's certainly a safer approach than imagining that we could program
something better.
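
To make that concrete, here is a toy sketch of what I mean (my own
illustration, not anything from the literature - the payoff numbers are
the standard textbook prisoner's dilemma values, and the population mix
is invented for the example): self-interested agents in a small
round-robin of iterated prisoner's dilemmas, where reciprocating
"tit-for-tat" players out-earn an unconditional defector once there are
enough of them to meet each other. No altruism is programmed in;
cooperation falls out of payoff maximization.

    # Toy iterated prisoner's dilemma: cooperation emerging from
    # self-interested payoff maximization. Payoffs are the standard
    # textbook values; strategy names and the population are invented.

    PAYOFF = {  # (my_move, their_move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_moves):
        # Cooperate first, then mirror the opponent's previous move.
        return opponent_moves[-1] if opponent_moves else "C"

    def always_defect(opponent_moves):
        return "D"

    def play(strat_a, strat_b, rounds=200):
        seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strat_a(seen_by_a)
            move_b = strat_b(seen_by_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    # A tiny population: two reciprocators and one exploiter, round-robin.
    population = [("tft-1", tit_for_tat),
                  ("tft-2", tit_for_tat),
                  ("defector", always_defect)]

    totals = {name: 0 for name, _ in population}
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            (name_a, a), (name_b, b) = population[i], population[j]
            score_a, score_b = play(a, b)
            totals[name_a] += score_a
            totals[name_b] += score_b

    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(name, total)

Run as written, the two tit-for-tat players each finish well ahead of
the defector - the "ethical" behavior pays, provided the reciprocators
can find each other.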

My concern stems from current discussions, both in the media and at
professional conferences, about programming AGIs with some form of
altruism towards humans. The unintended consequences of this could be
devastating - something that has been known (and parodied) for decades.
Yet people still talk about, and champion, this altruistic design as a
goal. It's just not a good idea.

Early UNIX system designs had all sorts of weak security measures back in
the day - out-of-the-box open ports, sendmail bugs, clear-text password
files, the ability to delete admin log files, etc. - because when UNIX
was designed, no one anticipated how widely it might one day be used.
Fixing those security flaws has been very expensive and very
time-consuming. Let's not play around with programming ethics while we
are still in the dark about how to even define ethics, only to have it
turn out to be likewise very expensive and difficult to redesign 20
years from now.

--Regina


On 27 May 2016 at 13:57, Re Rose wrote:
<snip>
> All human ethics originate from the practicalities (constraints) of
> interactions as described by game theory. Game theory works because each
> individual (or agent) acts in their own self-interest to maximize their
> own payoff. Also consider that any one agent, within a single local
> culture, will necessarily be involved in many interacting situations (or
> games), and the intersection of these would be non-linear (and probably
> dynamic) combinations of all the game payoffs. This makes prediction -
> and programming - of "correct" ethical behavior in such situations
> impossible.
>
> Future AGIs could and should be programmed - or simply allowed - to
> develop their own ethics by acting as independent agents and maximizing
> their own utility for a given situation. (Defining utility for an AGI -
> that's a different topic!!) As for weak AIs, such as Google self-driving
> cars, programming them to, for example, drive off the road rather than
> hit a stray baby carriage in front of them is not programming ethics but
> building safety features.
>
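
To underline that last distinction in the quoted text, here is a
hypothetical sketch (all names, types, and thresholds are invented for
illustration) of what a "safety feature" looks like in code: a hard
override checked before any planning or utility calculation, rather
than an ethical value traded off against other payoffs.

    # Hypothetical sketch: a safety feature is a hard constraint checked
    # *before* any payoff/utility reasoning, not a value weighed inside
    # it. Names and thresholds here are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        kind: str        # e.g. "pedestrian", "baby_carriage", "cone"
        distance_m: float

    PROTECTED = {"pedestrian", "baby_carriage"}
    BRAKING_DISTANCE_M = 25.0

    def choose_action(planned_action, obstacles):
        # Safety override: if a protected obstacle is inside braking
        # distance, abandon the plan and leave the roadway. No utilities,
        # no weighing of outcomes - just a fixed rule.
        for ob in obstacles:
            if ob.kind in PROTECTED and ob.distance_m < BRAKING_DISTANCE_M:
                return "swerve_off_road"
        return planned_action

    print(choose_action("continue_straight",
                        [Obstacle("baby_carriage", 12.0)]))  # swerve_off_road

The point is structural: the rule fires unconditionally, so there is
nothing for a payoff-maximizing planner to negotiate with.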

As Anders pointed out, humans "live and let live" with different ethical
systems because everybody is roughly equal in power. Where they are not,
the unbelievers tend to get wiped out (or reduced to small enclaves).

I don't like the idea of an AGI using game theory to maximise its own
payoff. At the very least, we should instruct it that "might doesn't
make right". Many humans could do with that instruction as well.

BillK