Artificial intelligence, chemical and biological weapons

Sometimes the truth is a cold slap in the face. Consider, as a particularly salient example, a recent paper on the use of artificial intelligence (AI) to make chemical and biological weapons (the original publication in Nature Machine Intelligence is behind a paywall, but this link is a copy of the full paper). Anyone unfamiliar with recent innovations in using AI to model new drugs will be unpleasantly surprised.

Here is the background: In the modern pharmaceutical industry, the discovery of new drugs is rapidly becoming easier through the use of artificial intelligence/machine learning systems. As the paper’s authors describe their work, they have spent decades “building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.”

In other words, computer scientists can use AI systems to model what beneficial new drugs might look like for specifically targeted ailments and then task the AI with discovering potential new drug molecules. Those results are then handed to the chemists and biologists who synthesize and test the proposed new drugs.
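As a rough illustration of that workflow, here is a minimal sketch in Python. Every name in it is a hypothetical stand-in for the kind of component such a pipeline would use, not any real system’s API:

```python
# Sketch of the division of labor described above: a generative model
# proposes a large pool of candidate molecules, a predictive model ranks
# them, and only a shortlist goes to human chemists for synthesis and
# testing. All functions are illustrative stand-ins.
import random

def propose_molecules(n: int) -> list[str]:
    """Stand-in for a generative model emitting candidate structures
    (real systems typically emit SMILES strings)."""
    return [f"candidate-{i}" for i in range(n)]

def predicted_promise(molecule: str) -> float:
    """Stand-in for a trained model scoring how promising a candidate is."""
    return random.random()  # placeholder; a real model predicts from structure

candidates = propose_molecules(10_000)
shortlist = sorted(candidates, key=predicted_promise, reverse=True)[:50]
# `shortlist` is what gets handed to the wet lab.
```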

Given how AI systems work, the benefits in speed and accuracy are significant. As one study put it:

The vast chemical space, comprising >10^60 molecules, fosters the development of a large number of drug molecules. However, the lack of advanced technologies limits the drug development process, making it a time-consuming and expensive task, which can be addressed by using AI. AI can recognize hit and lead compounds, and provide a quicker validation of the drug target and optimization of the drug structure design.

In short, AI offers the prospect of faster creation of new and better drugs.

The benefits of these innovations are clear. Unfortunately, the potential for malicious uses is also becoming apparent. The paper referenced above is titled “Dual use of artificial-intelligence-powered drug discovery.” And the dual use in question is the creation of novel chemical warfare agents.

One factor that investigators use to guide AI systems and narrow the search for useful drugs is a toxicity measure known as LD50 (where LD stands for “lethal dose” and “50” indicates the size of the dose that would be needed to kill half the test subjects; the lower the LD50, the more toxic the compound). For a drug to be practical, designers need to rule out new compounds that may be toxic to users, and thus avoid wasting time trying to synthesize them in the real world. Drug developers can therefore train and direct an AI system to screen out and discard candidate compounds with low predicted LD50 values, that is, compounds predicted to have harmful effects. In the words of the authors, the normal process is to use a “generative model [that is, an AI system, which] penalizes predicted toxicity and rewards predicted target activity.” When used in this traditional way, the AI system is directed to generate novel molecules for investigation that are likely to be safe and effective.
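To make the mechanics concrete, here is a minimal sketch of that objective in Python. The two predictor functions are hypothetical stand-ins for trained machine learning models, not the paper’s actual code:

```python
# Objective for the traditional use: reward predicted target activity,
# penalize predicted toxicity. Both predictors are illustrative stubs.
import random

def predicted_activity(molecule: str) -> float:
    """Stand-in for a model scoring activity against the therapeutic
    target (higher is better)."""
    return random.random()  # placeholder; a real model predicts from structure

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for a toxicity model. Higher means more toxic; in
    practice this is derived from a predicted LD50, where a low LD50
    means a small dose is lethal."""
    return random.random()  # placeholder

TOXICITY_WEIGHT = 1.0  # illustrative weight on the toxicity penalty

def drug_score(molecule: str) -> float:
    """Steer the generative model toward molecules that are likely to
    be both effective and safe."""
    return predicted_activity(molecule) - TOXICITY_WEIGHT * predicted_toxicity(molecule)
```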

But what happens if you reverse the process? What happens if, instead of screening out molecules with low predicted LD50 values, a generative model is set up to preferentially develop them, selecting for maximum toxicity rather than minimum?
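In code terms, reusing the hypothetical stand-ins from the sketch above, the reversal could be as small as flipping one sign:

```python
# The inversion: predicted toxicity is now rewarded rather than
# penalized, so the generator keeps exactly the molecules a drug
# developer would discard. Reuses the stand-ins defined above.

def weapon_score(molecule: str) -> float:
    """Inverted objective: molecules predicted to be more toxic
    (lower LD50) now score higher."""
    return predicted_activity(molecule) + TOXICITY_WEIGHT * predicted_toxicity(molecule)
```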

One rediscovers the nerve agent VX, one of the deadliest substances known to humankind. And one generates predictions for many new substances that are even worse than VX.

One wishes this were science fiction. But it is not. The authors lay out the bad news:

In less than 6 hours … our model generated 40,000 [new] molecules … In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents.

In other words, the developers ran a clean process: they did not seed the training data set with known nerve agents. Instead, the investigators simply pointed the AI system in the general direction of effective lethal compounds (using standard definitions of effectiveness and lethality). Their AI program then “discovered” a host of known chemical warfare agents and also proposed thousands of new candidates for possible synthesis that were previously unknown to humankind.
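The kind of check that flags such rediscoveries can be sketched as follows. The paper’s authors did this step by visual confirmation against public chemistry databases, so the automated canonical-SMILES comparison below is an editorial assumption, and the molecules are harmless placeholders (ethanol and butane), not real agents:

```python
# Hypothetical sketch: flag generated molecules that match known
# structures from a public database by comparing canonical SMILES.
# Requires the open-source RDKit cheminformatics toolkit.
from rdkit import Chem

def canonical(smiles: str) -> str | None:
    """Normalize a SMILES string so that different spellings of the
    same molecule compare equal."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

known_structures = {canonical(s) for s in ["CCO"]}  # placeholder "database"
generated = ["OCC", "CCCC"]                         # placeholder model output

# "OCC" is ethanol written differently from "CCO", so the canonical
# comparison correctly flags it as a rediscovery; butane is not flagged.
rediscovered = [s for s in generated if canonical(s) in known_structures]
print(rediscovered)  # ['OCC']
```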

The authors stopped at the theoretical point of their work. They did not, in fact, attempt to synthesize any of the newly discovered toxins. To be fair, synthesis is no small feat. But the whole point of AI-driven drug development is to point drug developers in the right direction: toward new drugs that are readily synthesized, safe, and effective. And although synthesis is not “easy,” it is a path that is well trodden in today’s market. There is no reason, none at all, to believe that the synthesis pathway would be any less feasible for deadly toxins.

Thus, artificial intelligence opens the possibility of creating new catastrophic biological and chemical weapons. Some commentators condemn the new technology as “inherently evil.” The better view, however, is that all new technologies are neutral; they can be used for good or ill. But that does not mean that nothing can be done to avoid the malicious uses of technology. And there is a real danger when technologists race ahead with what is possible before systems of human control and ethical assessment catch up. The use of artificial intelligence to develop toxic biological and chemical weapons appears to be one use case in which serious problems may lie ahead.