[ExI] AI Is Dangerous Because Humans Are Dangerous
Brent Allsop
brent.allsop at gmail.com
Thu May 11 22:50:36 UTC 2023
I guess I'm not convinced.
To me, an example of a necessary good is that survival is better than non-survival.
That is why evolutionary progress (via survival of the fittest) must take
place in all sufficiently complex systems.
All 'arbitrary' goals, if they are in the set of moral goals, are good
goals.
And, again, even if you win a war and achieve your goal first, you yourself
will eventually lose.
So the only way to reliably get what you want is to work to get it all,
for everyone, until all good is achieved: the only possible ultimate final
goal.
On Thu, May 11, 2023 at 4:32 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Thu, 11 May 2023 at 23:14, Gadersd via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > That completely depends on how you define intelligence. AI systems in
> general are capable of acting amorally regardless of their level of
> understanding of human ethics. There is no inherent moral component in
> prediction mechanisms or reinforcement learning theory. It is not a logical
> contradiction in the theories of algorithmic information and reinforcement
> learning for an agent to make accurate future predictions and behave very
> competently in a way that maximizes rewards while acting in a way that we
> humans would view as immoral.
> >
> > An agent of sufficient understanding would understand human ethics and
> know if an action would be considered to be good or bad by our standards.
> This however, has no inherent bearing on whether the agent takes the action
> or not.
> >
> > The orthogonality of competence with respect to arbitrary goals vs moral
> behavior is the essential problem of AI alignment. This may be difficult to
> grasp as the details involve mathematics and may not be apparent in a plain
> English description.
> > _______________________________________________
>
>
> So I asked for an explanation ------
> Quote:
> The orthogonality thesis is a concept in artificial intelligence that
> holds that intelligence and final goals (purposes) are orthogonal axes
> along which possible artificial intellects can freely vary. The
> orthogonality of competence with respect to arbitrary goals vs moral
> behavior is the essential problem of AI alignment. In other words, it
> is possible for an AI system to be highly competent at achieving its
> goals but not aligned with human values or morality. This can lead to
> unintended consequences and potentially catastrophic outcomes.
> ----------------------
>
> Sounds about right to me.
> BillK
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
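The quoted point, that reinforcement learning theory has no inherent moral component, can be sketched with a toy example (mine, not from the thread; the action names, reward values, and "harm" scores are entirely hypothetical): a reward-maximizing agent selects actions purely by reward, and any moral cost simply never enters its objective.

```python
# Hypothetical toy actions: each has a reward the agent optimizes
# and a "harm" score that humans would care about.
actions = {
    "cooperate": {"reward": 5, "harm": 0},
    "deceive":   {"reward": 8, "harm": 3},
    "coerce":    {"reward": 10, "harm": 9},
}

def best_action(actions):
    # The agent maximizes reward only; "harm" never enters the objective,
    # illustrating the orthogonality of competence and moral behavior.
    return max(actions, key=lambda a: actions[a]["reward"])

print(best_action(actions))  # prints "coerce" despite its high harm score
```

Aligning such an agent would mean changing the objective itself, not making the agent more competent at optimizing it.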