
ENIGMAEVAL: A Benchmark of Long Multimodal Reasoning Challenges
Clinton J. Wang1, Dean Lee1, Cristina Menghini1, Johannes Mols1, Jack Doughty1, Adam Khoja2, Jayson Lynch3, Sean Hendryx1, Summer Yue1, Dan Hendrycks2
1Scale AI, 2Center for AI Safety, 3MIT
As language models master existing reasoning benchmarks, we need new challenges to evaluate their cognitive frontiers. Puzzle-solving events are rich repositories of challenging multimodal problems that test a wide range of advanced reasoning and knowledge capabilities, making them a unique testbed for evaluating frontier language models. We introduce ENIGMAEVAL, a dataset of problems and solutions derived from puzzle competitions and events that probes models’ ability to perform implicit knowledge synthesis and multi-step deductive reasoning. Unlike existing reasoning and knowledge benchmarks, puzzle solving challenges models to discover hidden connections between seemingly unrelated pieces of information in order to uncover solution paths. The benchmark comprises 1184 puzzles of varying complexity – each typically requiring teams of skilled solvers hours to days to complete – with unambiguous, verifiable solutions that enable efficient evaluation. State-of-the-art language models achieve extremely low accuracy on these puzzles, even lower than on other difficult benchmarks such as Humanity’s Last Exam, revealing models’ shortcomings on problems that require unstructured and lateral reasoning.
See the Leaderboard results here: https://scale.com/leaderboard/enigma_eval