Xingang Guo1,2 , Utkarsh Tyagi1 , Advait Gosai1 , Paula Vergara1 , Ernesto Gabriel Hernandez Montoya1 , Chen Bo Calvin Zhang1 , Bin Hu2 , Yunzhong He1 , Bing Liu1 , Rakshith Sharma Srinivasa1
1Scale AI, 2University of Illinois at Urbana-Champaign
Multimodal Large Language Models (MLLMs) are increasingly applied in real-world scenarios where user-provided images are often imperfect, requiring active image manipulations such as cropping, editing, or enhancement to uncover salient visual cues. Beyond static visual perception, MLLMs must also think with images: dynamically transforming visual content and integrating it with other tools to solve complex tasks. However, this shift from treating vision as passive context to using it as a manipulable cognitive workspace remains underexplored. Most existing benchmarks still follow a think-about-images paradigm, in which images are treated as static inputs.
To address this gap, we introduce VisualToolBench (VTB), a benchmark that evaluates MLLMs’ ability to perceive, transform, and reason across complex visual–textual tasks under the think-with-images paradigm. VTB comprises 1,204 challenging, open-ended vision tasks (603 single-turn, 601 multi-turn) spanning five diverse domains, each paired with detailed rubrics to enable systematic evaluation. Our evaluation shows that current MLLMs struggle with tasks requiring effective integration of vision and general-purpose tools. Even the strongest model (GPT-5-think) achieves a pass rate of only 18.68%. We further observe divergent tool-use behaviors: OpenAI models benefit from diverse image manipulations, whereas Gemini-2.5-pro shows no improvement. By introducing the first benchmark centered on thinking with images, VTB offers critical insights for advancing visual intelligence in MLLMs.