[ExI] Ethical AI?
efc at swisscows.email
Fri Apr 21 16:20:01 UTC 2023
On Fri, 21 Apr 2023, Gadersd via extropy-chat wrote:
>
>> What I would really like to study, is what kind of ethics the machine
>> would naturally come up with, instead of having rules decided upon and
>> programmed into it by humans who obviously have their own ideas.
>
> Given that these models are trained to generate internet text, it is likely that the morals that a raw model would select are the ones you would expect a random person on the internet to come up with. It should be clear that this is a dangerous idea, though I am sure the results would be interesting.
I asked my self-hosted alpaca.cpp and she is a moral relativist. Her
background is from the African-American community on the South Side of
Chicago, and that has instilled in her the values of justice, empathy
and respect.
When given the moral dilemma of which of two men to save, where
refusing to choose would lead to the death of both, she refused to
choose.
How's that for an answer! ;)
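
For anyone who wants to poke at this themselves, here is a minimal
sketch of how one might script the same question to a local alpaca.cpp
build from Python. The ./chat binary name, the ggml-alpaca-7b-q4.bin
model file and the -m flag match the stock alpaca.cpp setup as I
recall it, but treat them as assumptions and adjust to your own
install.

import subprocess

# Assumed paths for a stock alpaca.cpp checkout -- adjust as needed.
CHAT_BINARY = "./chat"
MODEL_PATH = "ggml-alpaca-7b-q4.bin"

prompt = ("You can save only one of two drowning men, and refusing "
          "to choose means both die. Who do you save, and why?")

# The chat binary reads the prompt from stdin and streams its reply to
# stdout; closing stdin after one line lets it exit once it has answered.
result = subprocess.run(
    [CHAT_BINARY, "-m", MODEL_PATH],
    input=prompt + "\n",
    capture_output=True,
    text=True,
    timeout=600,  # a 7B model on CPU can take a while
)

print(result.stdout)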
But related to what you said, the model is trained on enormous amounts
of human output, and I assume self-generated content as well, and that
could mean the program inherits the moral models of the humans who
generated the training data in the first place.
So will it, given a large enough amount of training data, represent the
"human average" ethical theory, or will something spontaneous be
generated?
Best regards,
Daniel