PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models

Carolyn Jane Anderson, Joydeep Biswas, Aleksander Boruch-Gruszecki, Federico Cassano, Molly Q Feldman, Arjun Guha, Francesca Lucchetti, and Zixuan Wu, 2025

Existing benchmarks for frontier models often test specialized, "PhD-level" knowledge that is difficult for non-experts to grasp. In contrast, we present a benchmark based on the NPR Sunday Puzzle Challenge that requires only general knowledge. Our benchmark is challenging for both humans and models; however, correct solutions are easy to verify, and models' mistakes are easy to spot.

Our work reveals capability gaps that are not evident in existing benchmarks: OpenAI o1 significantly outperforms other reasoning models that perform comparably on benchmarks testing specialized knowledge. Furthermore, our analysis of reasoning outputs uncovers new kinds of failures. DeepSeek R1, for instance, often concedes with "I give up" before providing an answer that it knows is wrong. R1 can also be remarkably "uncertain" in its output, and in rare cases it does not "finish thinking," which suggests the need for an inference-time technique to "wrap up" before the context window limit is reached. We also quantify the effectiveness of reasoning longer with R1 and Gemini Thinking, identifying the point beyond which more reasoning is unlikely to improve accuracy on our benchmark.
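To illustrate one way such a measurement could be set up (a minimal sketch, not the paper's evaluation code; grade_answer, the record fields, and the bin size are all hypothetical), the snippet below bins model outputs by the number of reasoning tokens spent and reports accuracy per bin:

# Illustrative sketch only: estimate accuracy as a function of reasoning length.
# `records` and `grade_answer` are hypothetical stand-ins, not the paper's artifacts.
from collections import defaultdict

def accuracy_by_reasoning_length(records, grade_answer, bin_size=2000):
    """Group outputs by reasoning-token count and compute accuracy per bin.

    Each record is assumed to carry:
      - 'reasoning_tokens': tokens the model spent "thinking"
      - 'answer':           the model's final answer
      - 'gold':             the reference solution
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        bucket = (r["reasoning_tokens"] // bin_size) * bin_size
        total[bucket] += 1
        if grade_answer(r["answer"], r["gold"]):
            correct[bucket] += 1
    # (bin_start, accuracy, sample count), sorted by reasoning length
    return [(b, correct[b] / total[b], total[b]) for b in sorted(total)]

if __name__ == "__main__":
    toy = [
        {"reasoning_tokens": 1500, "answer": "lemon", "gold": "lemon"},
        {"reasoning_tokens": 5200, "answer": "melon", "gold": "lemon"},
        {"reasoning_tokens": 9100, "answer": "lemon", "gold": "lemon"},
    ]
    exact_match = lambda a, g: a.strip().lower() == g.strip().lower()
    for bin_start, acc, n in accuracy_by_reasoning_length(toy, exact_match):
        print(f"{bin_start:>6} tokens: accuracy {acc:.2f} (n={n})")

Plotting accuracy against the bin start would show where the curve flattens, i.e. the point beyond which additional reasoning is unlikely to help.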

PDF available on arXiv

@misc{phd-not-required,
  title = {PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models},
  author = {Anderson, Carolyn Jane and Biswas, Joydeep and Boruch-Gruszecki, Aleksander and Cassano, Federico and Feldman, Molly Q and Guha, Arjun and Lucchetti, Francesca and Wu, Zixuan},
  year = {2025},
}