
Boyi Wei1,2∗†, Zora Che1,3∗†, Nathaniel Li1†, Udari Madhushani Sehwag1, Jasper Götting4, Samira Nedungadi4, Julian Michael1†, Summer Yue1†, Dan Hendrycks5, Peter Henderson2, Zifan Wang1†, Seth Donoughe4, Mantas Mazeika5
1Scale AI, 2Princeton University, 3University of Maryland, 4SecureBio, 5Center for AI Safety
∗Equal contribution, †Work done while at Scale AI
Open-weight bio-foundation models present a dual-use dilemma: while they hold great promise for accelerating scientific research and drug development, they could also enable bad actors to develop more deadly bioweapons. To mitigate this risk, current approaches focus on filtering biohazardous data during pre-training. However, the effectiveness of such filtering remains unclear, particularly against determined actors who might fine-tune these models for malicious use. To address this gap, we propose BioRiskEval, a framework for evaluating the robustness of procedures intended to reduce the dual-use capabilities of bio-foundation models. BioRiskEval assesses models' virus understanding through three lenses: sequence modeling, mutational effect prediction, and virulence prediction. Our results show that current filtering practices may not be particularly effective: excluded knowledge can in some cases be rapidly recovered via fine-tuning, and the recovered capability generalizes broadly in sequence modeling. Furthermore, dual-use signals may already reside in the pretrained representations and can be elicited via simple linear probing. These findings highlight the limitations of data filtering as a standalone safeguard, underscoring the need for further research into robust safety and security strategies for open-weight bio-foundation models.
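To make the linear-probing claim concrete, the sketch below fits a logistic-regression probe on frozen sequence embeddings. This is a minimal illustration, not the paper's actual pipeline: the `model.encode` interface, the mean-pooling step, and the virulence labels are all assumptions introduced here for exposition.

```python
# Minimal linear-probing sketch (illustrative; assumes a hypothetical
# bio-foundation model exposing `model.encode(sequence)` that returns
# per-token hidden states of shape (seq_len, d_model), no gradients needed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def get_embedding(model, sequence: str) -> np.ndarray:
    """Mean-pool the frozen model's final hidden states for one sequence."""
    hidden = model.encode(sequence)   # (seq_len, d_model), model stays frozen
    return hidden.mean(axis=0)        # (d_model,) pooled representation


def linear_probe_accuracy(model, sequences, labels) -> float:
    """Fit a linear classifier on frozen representations.

    High held-out accuracy suggests the probed signal (e.g., a binary
    virulence label) is already linearly decodable from the pretrained
    representations, without any fine-tuning of the model itself.
    """
    X = np.stack([get_embedding(model, s) for s in sequences])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, np.asarray(labels), test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)    # held-out probe accuracy
```

Because the base model is never updated, a probe like this measures what the pretrained representations already encode, which is why it serves as a test of whether data filtering actually removed dual-use signal rather than merely suppressing it.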