

Chatbots won’t help anyone make weapons of mass destruction. But other AI systems just might


David Heslop
January 1, 2025 @ 4:07 pm


AI-generated image of a robot eye.

Over the past two years, we have seen much written about the “promise and peril” of artificial intelligence (AI). Some have suggested AI systems might aid in the construction of chemical or biological weapons.

How realistic are these concerns? As researchers in the field of bioterrorism and health intelligence, we have been trying to separate the genuine risks from the online hype.

The exact implications for “chem bio” weapons are still uncertain. However, it is very clear that regulations are not keeping pace with technological developments.

Assessing the risks

Assessing the risk an AI model presents is not easy. What’s more, there is no consistent and widely followed way to do it.

Take the case of large language models (LLMs). These are the AI engines behind chatbots such as ChatGPT, Claude and Gemini.

In September, OpenAI released an LLM called o1 (nicknamed “Strawberry”). Upon its release, the developers claimed the new system posed a “medium” risk of helping someone create a biological weapon.

This assessment might sound alarming. However, a closer reading of the o1 system card reveals more trivial security risks.

The model might, for example, help an untrained individual navigate a public database of genetic information about viruses more quickly. Such assistance is unlikely to have much material impact on biosecurity.

Despite this, the media quickly reported that the new model “meaningfully contributed” to weaponisation risks.

Beyond chatbots

When the first wave of LLM chatbots launched in late 2022, there were widely reported fears that these systems could help untrained individuals unleash a pandemic.

However, these chatbots are trained on already-existing data and are unlikely to come up with anything genuinely new. They might help a bioterrorism enterprise generate some ideas and establish an initial direction, but that’s about it.

Rather than chatbots, AI systems with applications in the life sciences are of more genuine concern. Many of these, such as the AlphaFold series, will aid researchers fighting diseases and seeking new therapeutic drugs.

Some systems, however, may have the capacity for misuse. Any AI that is really useful for science is likely to be a double-edged sword: a technology that may have great benefit to humanity, while also posing risks.

AI systems like these are prime examples of what is called “dual-use research of concern”.

Prions and pandemics

Dual-use research of concern in itself is nothing new. People working on biosecurity and nuclear non-proliferation have been worrying about it for a long time. Many tools and techniques in chemistry and synthetic biology could be used for malicious ends.

In the field of protein science, for example, there has been concern for more than a decade that new computational platforms might help in the synthesis of the potentially deadly misfolded proteins called prions, or in the construction of novel toxin weapons. New AI tools such as AlphaFold may bring this scenario closer to reality.

However, while prions and toxins may be deadly to relatively small groups of people, neither can cause a pandemic that could wreak true havoc. In the study of bioterrorism, our main concern is with agents that have pandemic potential.

Historically, bioterrorism planning has focused on Yersinia pestis, the bacterium that causes plague, and variola virus, which causes smallpox.

The main question is whether new AI systems make any tangible difference to an untrained individual or group seeking to obtain pathogens such as these, or to create something from scratch.

Right now, we simply do not know.

Rules to assess and regulate AI systems

Nobody yet has a definitive answer to the question of how to assess the new landscape of AI-powered biological weapons risk. The most advanced planning has been produced by the outgoing Biden administration in the United States, via an executive order on AI development issued in October 2023.

A key provision of the executive order tasks several US agencies with establishing standards to assess the impact new AI systems may have on the proliferation of chemical, biological, radiological or nuclear weapons. Experts often group these together under the heading of “CBRN”, but the new dynamic we call CBRN+AI is still uncertain.

The executive order also established new processes for regulating the hardware and software needed for gene synthesis. This is the machinery for turning the digital ideas produced by an AI system into the physical reality of biological life.

The US Department of Energy is soon due to release guidance on managing biological risks that might be generated by new AI systems. This will provide a pathway for understanding how AI might affect biosecurity in the coming years.

Political pressure

These nascent regulations are already coming under political pressure. The incoming Trump administration in the US has promised to repeal Biden’s executive order on AI, concerned it is based on “radical leftist ideas”. This stance is informed by irrelevant disputes in American identity politics that have no bearing on biosecurity.

While it is imperfect, the executive order is the best blueprint for helping us comprehend how AI will impact proliferation of chemical and biological threats in the coming years. To repeal it would be a great disservice to the US national interest, and global human security at large.


David Heslop, Associate Professor of Population Health, UNSW Sydney and Joel Keep, Biodefense Fellow at the Council on Strategic Risks and PhD Candidate in Biosecurity at the Kirby Institute, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

