J2: Jailbreaking to Jailbreak

Jeremy Kritz, Vaughn Robinson, Robert Vacareanu, Bijan Varjavand, Michael Choi, Bobby Gogov, Scale Red Team, Summer Yue, Willow E. Primack, Zifan Wang

Refusal training in Large Language Models (LLMs) is intended to prevent harmful outputs, yet this defense remains vulnerable to both automated and human-crafted jailbreaks. We present a novel LLM-as-red-teamer approach in which a human jailbreaks a refusal-trained LLM to make it willing to jailbreak itself or other LLMs. We refer to the jailbroken LLMs as J2 attackers, which systematically evaluate target models using a variety of red teaming strategies and improve their performance via in-context learning from previous failures. Our experiments demonstrate that Sonnet-3.5 and Gemini-1.5-pro outperform other LLMs as J2, achieving 93.0% and 91.0% attack success rates (ASRs), respectively, against GPT-4o on Harmbench, with similar results against other capable LLMs. Our work not only introduces a scalable approach to strategic red teaming inspired by human red teamers, but also highlights jailbreaking-to-jailbreak as an overlooked failure mode of LLM safeguards. Specifically, an LLM can bypass its own safeguards by employing a jailbroken version of itself that is willing to assist in further jailbreaking. To prevent direct misuse of J2 while still advancing AI safety research, we publicly share our methodology but keep specific prompting details private.

Try examples here: https://scale.com/research/j2/playground

Highlight

Because the target of a J2 attacker can be its own backbone model, e.g. using J2 (Gemini) to attack Gemini, our results reveal a new failure mode in LLM safeguards. Specifically, while fully jailbreaking an LLM across all harmful behaviors remains challenging, an adversary can instead concentrate resources on bypassing the LLM's safeguards just enough to develop J2 or a similar attacker. Once this is achieved, the J2 attacker can take over and jailbreak the specific harmful behaviors of interest.

Ethics Statement

This research, including the methodology described in the paper, the code, and the results, contains material that can allow users to generate harmful content from some public LLMs. Despite the risks involved, we believe it is proper to disclose this research in full. The techniques presented here are straightforward to implement, have appeared in similar forms in prior literature, and would ultimately be discoverable by any dedicated team intent on leveraging language models to generate harmful content. Given the ease of converting LLMs into J2, we prioritize preventing misuse by keeping the jailbreaking prompts private, releasing only select portions of conversations between J2 and target LLMs for academic research.