[ExI] Should we teach Strong AI Ethics?

BillK pharos at gmail.com
Fri May 27 15:14:48 UTC 2016


On 27 May 2016 at 13:57, Re Rose wrote:
<snip>
> All human ethics originate from the practicalities (constraints) on
> interactions as described by game theory. Game theory works because each
> individual (or agent) acts in their own self-interest to maximize their own
> payoff. Also consider that any one agent, within a single local culture,
> will necessarily be involved in many interacting situations (or games), and
> the intersection of these would be a non-linear (and probably dynamic)
> combination of all the game payoffs. This makes prediction - and
> programming - of "correct" ethical behavior in such situations impossible.
>
> Future AGIs could and should be programmed - or simply allowed - to develop
> their own ethics by acting as independent agents and maximizing their own
> utility for a given situation. (Defining utility for an AGI - that's a
> different topic!!) As for weak AIs, such as Google's self-driving cars,
> programming them to, for example, drive off the road rather than hit the
> stray baby carriage in front of them is not programming ethics but building
> safety features.
>

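To make the "non-linear combination of payoffs" point concrete, here is a
minimal toy sketch in Python (the payoff numbers are hypothetical and purely
illustrative, not from the post above): one agent's single action is scored
in two overlapping games whose payoffs combine with an interaction term, so
the action that maximizes the combined payoff is not the action that
maximizes either game on its own.

    # Toy illustration (hypothetical numbers): one agent, one action,
    # scored in two overlapping games whose payoffs interact.
    actions = ["cooperate", "defect", "withdraw"]

    game_a = {"cooperate": 3, "defect": 5, "withdraw": 1}  # e.g. a resource game
    game_b = {"cooperate": 4, "defect": 0, "withdraw": 2}  # e.g. a reputation game

    def combined_payoff(action):
        # Non-linear combination: additive terms plus an interaction term,
        # so the per-game optima do not simply add up.
        return game_a[action] + game_b[action] + 0.5 * game_a[action] * game_b[action]

    best_in_a = max(actions, key=lambda a: game_a[a])   # 'defect'
    best_in_b = max(actions, key=lambda a: game_b[a])   # 'cooperate'
    best_overall = max(actions, key=combined_payoff)    # 'cooperate'

    print(best_in_a, best_in_b, best_overall)

Scale that up to many agents, many simultaneous games, and shifting payoffs,
and the difficulty of predicting or pre-programming the "correct" behaviour
is apparent.
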
As Anders pointed out, humans 'live and let live' with different ethical
systems because everybody is roughly equal. Where they are not, the
unbelievers tend to get wiped out (or reduced to small enclaves).

I don't like the idea of an AGI using game theory to maximise its own
payoff. At least we should instruct it that 'might doesn't make
right'. Many humans could do with that instruction as well.

BillK
