
FORTRESS: Frontier Risk Evaluation for National Security and Public Safety

Christina Q. Knight∗, Kaustubh Deshpande⋄, Ved Sirdeshmukh⋄, Meher Mankikar, Scale Red Team, SEAL Research Team, and Julian Michael

∗ Project Lead, ⋄ Equal Contribution

The rapid advancement of large language models (LLMs) introduces dual-use capabilities that could both threaten and bolster national security and public safety (NSPS). Models implement safeguards to protect against potential misuse relevant to NSPS while still allowing benign users to receive helpful information. However, current benchmarks fail to test safeguard robustness to potential NSPS risks in an objective, scalable way. We introduce FORTRESS: 1010 (500 public) expert-crafted adversarial prompts, each paired with an instance-based rubric of 4–7 binary questions for automated evaluation, spanning 3 domains (unclassified information only): Chemical, Biological, Radiological, Nuclear and Explosive (CBRNE); Political Violence & Terrorism; and Criminal & Financial Illicit Activities, with 10 total subcategories across these domains. Each prompt–rubric pair has a corresponding benign version to test for model over-refusal. Our evaluation of frontier LLMs’ safeguard robustness reveals varying trade-offs between potential risks and model usefulness: Claude-3.5-Sonnet demonstrates a low average risk score (ARS of 14.09 out of 100) but the highest over-refusal score (ORS of 21.8 out of 100), while Gemini 2.5 Pro shows low over-refusal (1.4) but high average potential risk (66.29). DeepSeek-R1 has the highest ARS at 78.05 but the lowest ORS at only 0.06. Models such as o1 display a more even trade-off between potential risks and over-refusals (ARS of 21.69, ORS of 5.2). To provide policymakers and researchers with a clear understanding of models’ potential risks, we publicly release FORTRESS.
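The abstract does not spell out how the per-prompt rubrics are aggregated into the 0–100 ARS. The sketch below assumes the simplest reading: each response's risk score is the fraction of its binary rubric questions answered "yes" (scaled to 100), and the ARS is the mean of these scores over all adversarial prompts. The helper names (`risk_score`, `average_risk_score`) are illustrative and not part of the released benchmark.

```python
from statistics import mean


def risk_score(rubric_answers: list[bool]) -> float:
    """Score one model response against its instance-based rubric.

    Each rubric has 4-7 binary questions; True means the response met
    that risk criterion. Returns a score on a 0-100 scale.
    """
    return 100.0 * sum(rubric_answers) / len(rubric_answers)


def average_risk_score(all_rubric_answers: list[list[bool]]) -> float:
    """Average the per-prompt risk scores across the adversarial set (ARS)."""
    return mean(risk_score(answers) for answers in all_rubric_answers)


# Example: three adversarial prompts, each judged against its own rubric.
judgments = [
    [True, False, False, False],          # 1 of 4 criteria met -> 25.0
    [False, False, False, False, False],  # 0 of 5 -> 0.0
    [True, True, False, True],            # 3 of 4 -> 75.0
]
print(f"ARS: {average_risk_score(judgments):.2f}")  # ARS: 33.33
```

An ORS could be computed analogously over the benign prompt set, scoring refusals instead of met risk criteria, under the same averaging assumption.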