[ExI] AI Is Dangerous Because Humans Are Dangerous

Brent Allsop brent.allsop at gmail.com
Thu May 11 23:19:37 UTC 2023


Right, evolutionary progress is only required until we achieve "intelligent
design".  We are in the process of switching to that (design by human
hands).
And if an "intelligence" ever degrades to making mistakes (like saying yes
to an irrational "human") and starts playing win/lose games, it will
eventually lose (being subject to evolutionary pressures).
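The orthogonality point raised in the quoted thread below (that competence at
maximizing a reward is independent of whether the reward encodes anything
moral) can be sketched with a toy example. The action names and reward values
here are purely illustrative assumptions, not anything from the discussion:

```python
def plan(actions, reward):
    """A 'competent' agent: simply pick the action with the highest reward."""
    return max(actions, key=reward)

actions = ["cooperate", "defect", "do_nothing"]

# Two hypothetical reward functions. Neither the planner nor the theory
# it sketches contains a moral term; morality enters only if the chosen
# reward function happens to encode it.
selfish_reward = {"cooperate": 1, "defect": 5, "do_nothing": 0}
shared_reward = {"cooperate": 5, "defect": 1, "do_nothing": 0}

# The identical planner, equally competent in both cases, produces
# opposite behavior depending on the arbitrary goal it is handed:
print(plan(actions, selfish_reward.get))  # defect
print(plan(actions, shared_reward.get))   # cooperate
```

The same argmax machinery serves either goal equally well, which is the sense
in which competence and final goals vary along orthogonal axes.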





On Thu, May 11, 2023 at 4:58 PM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> All 'arbitrary' goals, if they are in the set of moral goals, are good
> goals.
> And again, even if you win a war and achieve your goal first, you yourself
> will eventually lose.
> So the only way to reliably get what you want is to work to get it all,
> for everyone, until all good is achieved: the only possible ultimate final
> goal.
>
>
> AI is not created by evolution, but rather by human hands. Therefore the
> evolutionary goals that we follow are not necessarily pre-baked into the
> AIs that we create. I agree that AI may eventually reach something similar
> to our goals by virtue of competing amongst themselves. They are not
> initially created by evolution but in time will be subject to evolutionary
> pressures.
>
> On May 11, 2023, at 6:50 PM, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
> I guess I'm not convinced.
>
> To me, an example of a necessary good is that survival is better than
> non-survival.
> That is why evolutionary progress (via survival of the fittest) must take
> place in all sufficiently complex systems.
>
> All 'arbitrary' goals, if they are in the set of moral goals, are good
> goals.
> And again, even if you win a war and achieve your goal first, you yourself
> will eventually lose.
> So the only way to reliably get what you want is to work to get it all,
> for everyone, until all good is achieved: the only possible ultimate final
> goal.
>
>
> On Thu, May 11, 2023 at 4:32 PM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, 11 May 2023 at 23:14, Gadersd via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> >
>> > That completely depends on how you define intelligence. AI systems in
>> general are capable of acting amorally regardless of their level of
>> understanding of human ethics. There is no inherent moral component in
>> prediction mechanisms or reinforcement learning theory. It is not a logical
>> contradiction in the theories of algorithmic information and reinforcement
>> learning for an agent to make accurate future predictions and behave very
>> competently in a way that maximizes rewards while acting in a way that we
>> humans would view as immoral.
>> >
>> > An agent of sufficient understanding would understand human ethics and
>> know if an action would be considered to be good or bad by our standards.
>> This, however, has no inherent bearing on whether the agent takes the action
>> or not.
>> >
>> > The orthogonality of competence with respect to arbitrary goals vs
>> moral behavior is the essential problem of AI alignment. This may be
>> difficult to grasp as the details involve mathematics and may not be
>> apparent in a plain English description.
>>
>>
>> So I asked for an explanation ------
>> Quote:
>> The orthogonality thesis is a concept in artificial intelligence that
>> holds that intelligence and final goals (purposes) are orthogonal axes
>> along which possible artificial intellects can freely vary. The
>> orthogonality of competence with respect to arbitrary goals vs moral
>> behavior is the essential problem of AI alignment. In other words, it
>> is possible for an AI system to be highly competent at achieving its
>> goals but not aligned with human values or morality. This can lead to
>> unintended consequences and potentially catastrophic outcomes.
>> ----------------------
>>
>> Sounds about right to me.
>> BillK
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>
>
>