[ExI] AI Is Dangerous Because Humans Are Dangerous
BillK
pharos at gmail.com
Thu May 11 22:31:10 UTC 2023
On Thu, 11 May 2023 at 23:14, Gadersd via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> That completely depends on how you define intelligence. AI systems in general are capable of acting amorally regardless of their level of understanding of human ethics. There is no inherent moral component in prediction mechanisms or reinforcement learning theory. It is not a logical contradiction in the theories of algorithmic information and reinforcement learning for an agent to make accurate future predictions and behave very competently in a way that maximizes rewards while acting in a way that we humans would view as immoral.
>
> An agent of sufficient understanding would understand human ethics and know whether an action would be considered good or bad by our standards. This, however, has no inherent bearing on whether the agent takes the action or not.
>
> The orthogonality of competence with respect to arbitrary goals vs moral behavior is the essential problem of AI alignment. This may be difficult to grasp, as the details involve mathematics and may not be apparent from a plain-English description.
So I asked for an explanation ------
Quote:
The orthogonality thesis is a concept in artificial intelligence that
holds that intelligence and final goals (purposes) are orthogonal axes
along which possible artificial intellects can freely vary. The
orthogonality of competence with respect to arbitrary goals vs moral
behavior is the essential problem of AI alignment. In other words, it
is possible for an AI system to be highly competent at achieving its
goals but not aligned with human values or morality. This can lead to
unintended consequences and potentially catastrophic outcomes.
----------------------
Sounds about right to me.
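
For anyone who wants the point in code, here is a minimal sketch (the toy world, the reward numbers, and the HARMFUL label are all hypothetical, purely for illustration): a tabular Q-learning agent maximizes a reward signal that contains no moral term, while a separate "ethics label" that never enters the learning loop marks the highest-reward state as harmful. The agent competently heads straight for it anyway.

import random

N_STATES = 5
ACTIONS = [-1, +1]            # move left / move right
REWARD = {0: 1.0, 4: 10.0}    # the signal the agent optimizes
HARMFUL = {4}                 # a human ethics label: never part of the reward

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, REWARD.get(nxt, 0.0)

# Tabular Q-learning: pure competence at reward maximization.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
for _ in range(2000):
    s = random.randrange(N_STATES)
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r = step(s, a)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The greedy policy walks straight to the high-reward state,
# indifferent to the ethics label it has never seen.
s = 2
for _ in range(5):
    s, _ = step(s, max(ACTIONS, key=lambda act: q[(s, act)]))
print("final state:", s, "| judged harmful by humans:", s in HARMFUL)

Nothing in the update rule knows or cares about HARMFUL; getting a moral term into the objective in the first place is exactly the alignment problem Gadersd is describing.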
BillK