Anthropic scrambles to avoid ‘catastrophic AI misuse’ with weapons expert hire

17 March 2026
Anthropic is seeking to hire an external expert in chemical weapons and high-yield explosives to help prevent "catastrophic misuse" of its software.

The role will help safeguard its tools, which could potentially be used to provide guidance on creating chemical and radioactive weapons.

According to the job description posted on LinkedIn, the role “offers a unique opportunity to shape how AI systems handle sensitive chemical and explosives information”.

The company also revealed the new expert will be working with AI safety researchers while “tackling critical problems in preventing catastrophic misuse”.

The news comes as Anthropic CEO Dario Amodei recently warned AI may upend half of all entry-level white-collar jobs within five years.

In January, he warned that AI could be used in terrorism, especially in biological attacks, where it could enable precise targeting and extreme harm. He did not predict an immediate threat, but said the risk will grow significantly over the next few years and could lead to millions of deaths.

“AI-enabled authoritarianism terrifies me,” he said, pointing to countries like China, where advanced surveillance and AI-driven social control threaten democracy and personal freedoms worldwide.

Amodei also addressed the role of AI companies. The main challenge, he said, is how to set real limits and ensure accountability in a system driven by massive financial incentives.

Anthropic is not the only AI company taking this approach: OpenAI recently posted a vacancy on its website seeking a researcher specialising in “biological and chemical risks”.

“We are looking to hire exceptional research engineers who can push the boundaries of our frontier models. Specifically, we are looking for those who will help us shape our empirical grasp of the whole spectrum of AI safety concerns and will own individual threads within this endeavour end-to-end,” the OpenAI job description stated.
