[ExI] Top AI researchers and CEOs warn against ‘risk of extinction’

BillK pharos at gmail.com
Wed May 31 14:43:39 UTC 2023


On Wed, 31 May 2023 at 02:53, Gadersd via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> Even if the top AI companies manage to produce perfectly aligned AI that refuse to do anything that could potentially be destructive, people will still develop open source models that can be used for any malevolent deed. The capability of the open source models will scale along with the state of the art and there is no obvious limit to this growth. No amount of aligning at the top will prevent terrorists from using their own models. At best this is just postponing the inevitable.


So I asked some AIs.....
Question -
Could terrorists intentionally misuse open source AI to ignore AI
ethical regulations and cause harm?

Answer -
Yes, terrorists could intentionally misuse open source AI to ignore AI
ethical regulations and cause harm. Open source AI is software that is
freely available to anyone to use, modify, and distribute. This means
that terrorists could potentially use open source AI to develop tools
and weapons that would be difficult or impossible to track or trace.
For example, terrorists could use open source AI to develop facial
recognition software that could be used to identify potential targets
or to create deepfakes that could be used to spread propaganda or sow
discord.

In addition, open source AI is often not subject to the same ethical
regulations as proprietary AI. This means that terrorists could
potentially use open source AI to develop tools and weapons that would
be considered unethical or even illegal under other circumstances. For
example, terrorists could use open source AI to develop autonomous
weapons that could kill without human intervention.

The potential for terrorists to misuse open source AI is a serious
concern. However, it is important to note that open source AI can also
be used for good. For example, open source AI is being used to develop
tools that can be used to detect and prevent cyberattacks, to improve
healthcare, and to address climate change. It is important to strike a
balance between the potential risks and benefits of open source AI.

Here are some ways to mitigate the risks of terrorist misuse of open source AI:

- Develop and enforce ethical guidelines for the development and use
  of open source AI.
- Create mechanisms for tracking and tracing the use of open source AI.
- Invest in research and development of new technologies that can be
  used to detect and prevent terrorist misuse of open source AI.
- Raise awareness of the potential risks of terrorist misuse of open
  source AI among the general public and among policymakers.
------------------


So our present AIs already know how to be terrorists!


BillK


