[ExI] Should we teach Strong AI Ethics?
rocket at earthlight.com
Fri May 27 12:57:04 UTC 2016
AIs should not be "taught" ethics, or have someone's version of what is
ethical programmed into them in any way. This endeavor can be dangerous,
leading to unintended and potentially harmful consequences, and I find it
alarming that so many researchers in the field (and out of it as well!) are
seeking an "optimal" way to do this very thing. How could this goal be
reached? There is no single human system or theory of ethical behavior that
is entirely consistent (i.e., always leads to the same conclusion for the
same interaction or game), and there are, as Anders has pointed out, many
differing ideas of what is "ethical" anyway.
All human ethics originate from the practicalities (constraints) on
interactions as described by game theory. Game theory works because each
individual (or agent) acts in their own self-interest to maximize their own
payoff. Another consideration is that any one agent, within a single
local culture, will necessarily be involved in many interacting
situations (or games) at once, and the intersection of these would be a
non-linear (and probably dynamic) combination of all the game payoffs.
This makes prediction - and programming - of "correct" ethical behavior
in such a system effectively intractable.
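The game-theoretic picture above can be sketched minimally: two agents repeatedly play a prisoner's dilemma, each choosing moves only to maximize its own payoff, and cooperative "ethics" can emerge (or fail to) without anyone programming them in. The payoff values and strategies below are illustrative assumptions, not anything from the post.

```python
# Self-interested agents in an iterated prisoner's dilemma (illustrative).
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Each agent sees only the opponent's past moves; no ethics module anywhere."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # A reacts to B's past moves
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # defector wins round 1, then is punished: (9, 14)
```

The point of the sketch is only that stable cooperative behavior falls out of self-interested play in repeated games; which equilibrium you get depends on the payoffs and the population of strategies, which is exactly why "correct" behavior is so hard to predict or program.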
Future AGIs could and should be programmed - or simply allowed - to develop
their own ethics by acting as independent agents and maximizing their own
utility for a given situation. (Defining utility for an AGI - that's a
different topic!!) As for weak AIs, such as Google's self-driving cars,
programming them to, for example, drive off the road rather than hit
the stray baby carriage in front of them is not programming ethics but
building safety features.
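The distinction above can be sketched as code: a hard-coded safety override is a constraint applied before any utility maximization, not an "ethical" term inside the utility function. The action names, utilities, and safety check here are hypothetical, chosen only to mirror the baby-carriage example.

```python
# Safety feature as a hard constraint, not an ethical utility term (illustrative).
def choose_action(actions, utility, is_safe):
    """Pick the highest-utility action among those passing the safety check."""
    safe = [a for a in actions if is_safe(a)]
    if not safe:                # no safe option left: fall back to stopping
        return "emergency_stop"
    return max(safe, key=utility)

# Hypothetical scenario: an obstacle (the stray baby carriage) straight ahead.
actions = ["continue_straight", "swerve_off_road", "emergency_stop"]
utility = {"continue_straight": 10, "swerve_off_road": 2, "emergency_stop": 1}.get
is_safe = lambda a: a != "continue_straight"  # straight ahead would hit the obstacle

print(choose_action(actions, utility, is_safe))  # -> swerve_off_road
```

The agent still maximizes its own utility; the safety rule simply prunes the action set first, which is why it reads as engineering rather than ethics.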
> Message: 1
> Date: Thu, 26 May 2016 16:18:47 +0100
> From: BillK <pharos at gmail.com>
> To: Extropy Chat <extropy-chat at lists.extropy.org>
> Subject: [ExI] Should we teach Strong AI Ethics?
> Serious point though.
> If we teach AI about ethical behaviour (for our own safety) what do we
> expect the AI to do when it sees humans behaving unethically (to a
> greater or lesser extent)?
> Can a totally ethical AI even operate successfully among humans?