Anthropic has strengthened the usage rules for its Claude AI chatbot, adding explicit bans on developing some of the world's most dangerous weapons. The company's updated policy now specifically prohibits using Claude to help build biological, chemical, radiological, or nuclear weapons, as well as high-yield explosives. That goes further than the earlier rules, which broadly barred the design of weapons and other harmful systems without naming specific categories.
The change follows the rollout of “AI Safety Level 3” protections in May, introduced alongside Claude Opus 4. Those safeguards were designed to resist jailbreak attempts and block the model from assisting with weapons-related development. Anthropic said these stricter policies reflect rising concerns about AI misuse in high-risk areas.
The company is also addressing risks tied to new "agentic AI" features. Tools such as Computer Use, which lets Claude take control of a user's computer, and Claude Code, which embeds Claude directly in developer terminals, create new possibilities for abuse. To counter this, Anthropic has added a rule called "Do Not Compromise Computer or Network Systems," which forbids using Claude to find or exploit vulnerabilities, spread malware, or build tools for denial-of-service attacks and other cyber threats.
While tightening safety in these areas, Anthropic is also loosening restrictions on political content. Previously, Claude could not generate any material related to campaigns or lobbying. The revised policy narrows this ban, blocking only uses that could deceive or disrupt democratic processes, or that target voters and campaigns. The company also clarified that its requirements for "high-risk" uses, where Claude gives recommendations, apply only in consumer-facing settings, not to business uses.
The update signals Anthropic’s balancing act: curbing the most dangerous misuse of its AI while refining rules to allow legitimate applications. The company has emphasized that responsible safeguards are critical as AI systems gain more power and autonomy.