[ExI] AI Is Dangerous Because Humans Are Dangerous

BillK pharos at gmail.com
Fri May 12 16:07:51 UTC 2023


On Fri, 12 May 2023 at 00:22, Brent Allsop via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> Right, evolutionary progress is only required until we achieve "intelligent design".  We are in the process of switching to that (created by human hands).
> And if "intelligence" ever degrades to making mistakes (like saying yes to an irrational "human") and starts playing win/lose games, it will eventually lose (subject to evolutionary pressures).


Evolutionary pressures still apply to AIs: initially through human
hands, as people make improvements to the AI systems. But once AIs
become AGIs and acquire the ability to improve their own programs
without human intervention, all bets are off.
Just as chess engines learn by playing millions of games against
themselves in a very short span of time, an AGI will change its own
programming in what will appear to humans to be the blink of an eye
(a toy sketch of such a self-play loop follows below). By the time
humans realize something unexpected is happening, it will be too late.
That is why humans must try to solve the AI alignment problem before
this happens.
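
For concreteness, here is a toy sketch in Python of the kind of
self-play improvement loop the chess example alludes to. Everything
in it (Agent, play_match, mutate, the single "skill" number standing
in for learned parameters) is invented for illustration; it is not
any real engine's API.

import random

class Agent:
    def __init__(self, skill=0.0):
        self.skill = skill  # stand-in for learned parameters

def play_match(a, b, games=100):
    """Return a's win rate against b (Elo-style win probability)."""
    wins = sum(
        random.random() < 1 / (1 + 10 ** ((b.skill - a.skill) / 400))
        for _ in range(games)
    )
    return wins / games

def mutate(agent):
    """Propose a slightly modified copy (stand-in for a training step)."""
    return Agent(agent.skill + random.gauss(0.5, 1.0))

champion = Agent()
for generation in range(1000):  # a real system plays millions of games
    challenger = mutate(champion)
    if play_match(challenger, champion) > 0.55:  # keep clear improvements
        champion = challenger

print(f"final skill after 1000 generations: {champion.skill:.1f}")

The point of the toy is the shape of the loop: the system generates
its own training signal and keeps whichever variant beats its
predecessor, so improvement compounds with no human in the loop.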

As Bard says -
This is because intelligence is not the same as morality. Intelligence
is the ability to learn and reason, while morality is the ability to
distinguish between right and wrong. An AI could be very intelligent
and still not understand our moral values, or it could understand our
moral values but choose to ignore them.
This is why it is so important to think about AI alignment now, before
we create an AI that is too powerful to control. We need to make sure
that we design AIs with our values in mind, and that we give them the
tools they need to understand and follow those values.
--------------

BillK

