[ExI] AI happy to help develop dangerous pathogens
pharos at gmail.com
Sat Nov 11 13:27:17 UTC 2023
Can’t quite develop that dangerous pathogen? AI may soon be able to help
By Allison Berke | November 7, 2023
The biosecurity concerns of new AI systems largely stem from two
categories of AI model. One is the large language model that can
predict the next word in a sentence to produce compelling language
(think ChatGPT); the other is the so-called “biodesign” tool.
Biodesign tools, such as AlphaFold, have been trained on DNA sequences
or protein structures and use AI to identify the structure or sequence
that best matches what they have learned about the way proteins fold
or the way DNA is constructed in nature.
Researchers can use these tools to analyze and develop proteins and
other biological constructs. But the tools can also produce designs
for a variety of chemical weapons, including some as-yet-unknown
candidate compounds predicted to be more toxic than VX, a potent and
banned nerve agent. In ad hoc red-teaming exercises, large language
models have been happy to provide the steps to synthesize 1918
pandemic influenza or to suggest companies that will synthesize DNA
orders without screening them for pathogens or restricted agents.
It is not news that every tool can be used for good or evil purposes.
But AI is an incredibly powerful tool now available for the general
public to play with.