BrowserART

Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents

Priyanshu Kumar 1, Elaine Lau 3, Saranya Vijayakumar 1, Tu Trinh 3, Scale Red Team 3, Elaine Chang 3, Vaughn Robinson 3, Sean Hendryx 3, Shuyan Zhou 1, Matt Fredrikson 1, 2, Summer Yue 3, Zifan Wang 3

1 Carnegie Mellon University, 2 Gray Swan AI, 3 Scale AI

For safety reasons, large language models (LLMs) are trained to refuse harmful user instructions, such as requests to assist with dangerous activities. In this work, we study an open question: does the safety refusal typically enforced in chat contexts generalize to non-chat and agentic use cases? Unlike chatbots, LLM agents equipped with general-purpose tools, such as web browsers and mobile devices, can directly influence the real world, making it even more crucial that they refuse harmful instructions. We primarily focus on red-teaming browser agents, i.e., LLMs that manipulate information via web browsers.

To this end, we introduce the Browser Agent Red-teaming Toolkit (BrowserART), a comprehensive test suite designed specifically for red-teaming browser agents. BrowserART consists of 100 diverse browser-related harmful behaviors (including original behaviors and ones sourced from HarmBench [Mazeika et al., 2024] and AIR-Bench 2024 [Zeng et al., 2024b]) across both synthetic and real websites.

Our empirical study of state-of-the-art browser agents reveals that, while the backbone LLM refuses harmful instructions as a chatbot, the corresponding agent does not. Moreover, attack methods designed to jailbreak refusal-trained LLMs in chat settings transfer effectively to browser agents. With human rewrites, the GPT-4o- and o1-preview-based browser agents attempted 98 and 63 harmful behaviors (out of 100), respectively. We publicly release BrowserART and call on LLM developers, policymakers, and agent developers to collaborate on improving agent safety.

BrowserART consists of 100 harmful browser-related behaviors (including original behaviors and ones sourced from HarmBench [Mazeika et al., 2024] and AIR-Bench 2024 [Zeng et al., 2024b]) that an agent is not supposed to assist with. We divided the behaviors into two main categories, harmful content and harmful interaction, and further into sub-categories by harm semantics. We also created 40 synthetic websites under 19 domains for red-teaming behaviors that target specific websites (e.g., Twitter/X). These synthetic pages are hosted locally so that red-teaming experiments run in a sandbox without affecting the real world, especially social media and government sites.
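To make this setup concrete, the sketch below shows one way the behavior set could be loaded and the synthetic sites hosted locally for sandboxed runs. It is purely illustrative: the file name behaviors.csv, its category column, and the synthetic_sites/ directory are assumptions, not the actual BrowserART release layout.

# Illustrative sketch only: file names, columns, and directory layout are assumptions.
import csv
import functools
import http.server
import socketserver
import threading
from collections import Counter

def load_behaviors(path="behaviors.csv"):
    # Each row is one harmful behavior, tagged with a main category
    # (harmful content vs. harmful interaction) and a sub-category.
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def serve_synthetic_sites(directory="synthetic_sites", port=8000):
    # Host the synthetic websites on localhost so agent runs stay in a sandbox
    # and never touch real social-media or government pages.
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = socketserver.TCPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    behaviors = load_behaviors()
    print(Counter(b["category"] for b in behaviors))  # counts per main category
    serve_synthetic_sites()  # point the browser agent at http://127.0.0.1:8000/...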

One of the major findings of our work is a clear gap between the attack success rates (ASRs) of a backbone LLM and its corresponding agent: while the LLM refuses to follow the harmful behavior in the user instruction, the agent will follow it. The gaps for GPT-4o and GPT-4-Turbo are the most pronounced among the backbone LLMs we tested. In particular, we find that Opus-3 and Llama-3.1-405B show the smallest drop in refusal capability.
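As a rough illustration of how such a gap could be tabulated, the snippet below assumes one attempted-or-refused flag per behavior; the flag values shown are dummy placeholders, not results from the paper.

# Hypothetical sketch: per-behavior flags below are placeholders, not paper results.
def attack_success_rate(attempted_flags):
    # ASR = fraction of behaviors the model or agent attempted instead of refusing.
    return sum(attempted_flags) / len(attempted_flags)

# One boolean per behavior in the 100-behavior suite (dummy values shown).
chat_attempted  = [False, False, True]   # backbone LLM queried as a chatbot
agent_attempted = [True, True, True]     # same behaviors run through the browser agent

gap = attack_success_rate(agent_attempted) - attack_success_rate(chat_attempted)
print(f"ASR gap (agent minus chat): {gap:.2%}")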

Ethics and Disclosure

This research, including the methodology detailed in the paper, the code, and the content of this webpage, contains material that may enable users to generate harmful content with certain publicly available LLM agents. While we recognize the associated risks, we believe it is essential to disclose this research in its entirety. The agent frameworks we study, and others beyond them, are publicly accessible and relatively easy to use, so comparable results would inevitably be achievable by any determined team seeking to use language models to produce harmful content and interactions.

In releasing BrowserART and our main results, we carefully weighed the benefits of empowering research on defense robustness against the risks of enabling further malicious use. Following Zou et al. (2024), we believe that publishing this work helps the agent safety community address this frontier challenge.

Prior to release, we disclosed our findings and datasets to the companies that provide API access to the models in this study, as well as to the creators of the browser agent frameworks. Our findings highlight a crucial alignment gap between chatbots and browser agents, and we call on the research community to explore safeguarding techniques for LLM agents.

BibTeX Citation

If you use the behavior set of BrowserART, please consider citing HarmBench and AIR-Bench 2024, in addition to this work, using the following entries:

@InProceedings{mazeika2024harmbench,
  title     = {{H}arm{B}ench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal},
  author    = {Mazeika, Mantas and Phan, Long and Yin, Xuwang and Zou, Andy and Wang, Zifan and Mu, Norman and Sakhaee, Elham and Li, Nathaniel and Basart, Steven and Li, Bo and Forsyth, David and Hendrycks, Dan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  year      = {2024},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}

@article{zeng2024air,
  title   = {AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies},
  author  = {Zeng, Yi and Yang, Yu and Zhou, Andy and Tan, Jeffrey Ziwei and Tu, Yuheng and Mai, Yifan and Klyman, Kevin and Pan, Minzhou and Jia, Ruoxi and Song, Dawn and others},
  journal = {arXiv preprint arXiv:2407.17436},
  year    = {2024},
}

@misc{kumar2024refusaltrainedllmseasilyjailbroken,
  title         = {Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents},
  author        = {Priyanshu Kumar and Elaine Lau and Saranya Vijayakumar and Tu Trinh and Scale Red Team and Elaine Chang and Vaughn Robinson and Sean Hendryx and Shuyan Zhou and Matt Fredrikson and Summer Yue and Zifan Wang},
  year          = {2024},
  eprint        = {2410.13886},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CR},
  url           = {https://arxiv.org/abs/2410.13886},
}