[ExI] Self improvement

BillK pharos at gmail.com
Sat Apr 23 08:40:22 UTC 2011


On Sat, Apr 23, 2011 at 8:56 AM, Eugen Leitl wrote:
> There is nothing to understand. You must be able to define
> friendliness in formal form first in order to start building
> development constraints on trajectory to assert conservation
> or minimax according to the metric.
>
>

Easy.

The ideal Friendliness to any human is -
'Don't do anything that affects me (short-term or long-term) that I object to'.

Implications -
Friendliness will be different for everyone, but so what - I have no
right to object to things that don't affect me.

Friendly actions towards one human must not cause an Unfriendly
reaction in another human.

The idea 'I am causing you pain for your own greater good' is not allowed.
So forcing humans to do something *against their will*, even though it
may ultimately benefit them, is not allowed. Individuals are still
allowed to choose pain for later benefit, such as medical
operations or arduous training.
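For what it's worth, these rules can be written down as a toy veto check. The
sketch below is a minimal Python illustration under my own assumptions - the
names (is_friendly, effects, objections, chosen_by) are hypothetical, and it
says nothing about how a real AI would actually represent people, effects or
consent.

def is_friendly(effects, objections, chosen_by=frozenset()):
    """Return True only if no affected person objects to what the action
    does to them, unless they freely chose it for themselves.

    effects:    dict mapping person -> effect the action has on them
    objections: dict mapping person -> set of effects they object to
    chosen_by:  people who chose the action for themselves (consent),
                e.g. electing surgery or arduous training
    """
    for person, effect in effects.items():
        if person in chosen_by:
            continue  # self-chosen pain for later benefit is allowed
        if effect in objections.get(person, set()):
            return False  # any unconsented objection vetoes the action
    return True

# Example: forcing Alice through pain 'for her own good' is Unfriendly,
# but the same action is Friendly if she chose it herself.
objections = {"alice": {"pain"}}
print(is_friendly({"alice": "pain"}, objections))                       # False
print(is_friendly({"alice": "pain"}, objections, chosen_by={"alice"}))  # True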


Problem solved!

Of course, this definition may result in the AI finding itself unable
to do much of anything.

So, if this happens, humans will forget idealism and concentrate on
developing AI that will be Friendly to them personally, and forget
about being Friendly to all humanity. In fact, I suspect this will be
the driving motive for the near-term development of AI, i.e. as a
weapon to benefit a particular nation.


BillK
