PathMMU: A Massive Multimodal Expert-Level Benchmark for Understanding and Reasoning in Pathology

Y Sun, H Wu, C Zhu, S Zheng, Q Chen, K Zhang, Y Zhang, D Wan, X Lan, M Zheng, J Li
European Conference on Computer Vision, 2025, Springer
Abstract
The emergence of Large Multimodal Models (LMMs) has unlocked remarkable potential in AI, particularly in pathology. However, the lack of specialized, high-quality benchmarks has impeded their development and precise evaluation. To address this, we introduce PathMMU, the largest and highest-quality expert-validated pathology benchmark for LMMs. It comprises 33,428 multimodal multiple-choice questions and 24,067 images from various sources, each accompanied by an explanation for the correct answer. The construction of PathMMU leverages GPT-4V's advanced capabilities, utilizing over 30,000 image-caption pairs to enrich the descriptive quality of captions and generate corresponding Q&As in a cascading process. To maximize PathMMU's authority, we invite seven pathologists to scrutinize every question in its validation and test sets under strict standards, while simultaneously establishing an expert-level performance baseline for PathMMU. We conduct extensive evaluations, including zero-shot assessments of 14 open-source and 4 closed-source LMMs and of their robustness to image corruption. We also fine-tune representative LMMs to assess their adaptability to PathMMU. The empirical findings indicate that advanced LMMs struggle with the challenging PathMMU benchmark: the top-performing LMM, GPT-4V, achieves only 49.8% zero-shot accuracy, significantly lower than the 71.8% demonstrated by human pathologists. After fine-tuning, substantially smaller open-source LMMs can outperform GPT-4V but still fall short of the expertise shown by pathologists. We hope that PathMMU will offer valuable insights and foster the development of more specialized, next-generation LMMs for pathology.
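To make the zero-shot evaluation protocol concrete, the sketch below shows one way a multimodal multiple-choice benchmark of this kind can be scored: each item pairs a pathology image with a question, lettered options, and a reference letter, and accuracy is the fraction of items where the model's chosen letter matches the reference. This is not the authors' released code; the `query_lmm` callable is a hypothetical stand-in for whatever LMM inference wrapper (GPT-4V or an open-source model) is being evaluated.

```python
import re
from dataclasses import dataclass
from typing import Callable, Optional, Sequence


@dataclass
class MCQItem:
    image_path: str          # pathology image the question refers to
    question: str            # question text
    options: Sequence[str]   # answer choices, indexed A, B, C, ...
    answer: str              # ground-truth letter, e.g. "B"


def build_prompt(item: MCQItem) -> str:
    """Format the question and lettered options into a single prompt."""
    lettered = "\n".join(
        f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(item.options)
    )
    return (
        f"{item.question}\n{lettered}\n"
        "Answer with the letter of the single best option."
    )


def parse_choice(reply: str) -> Optional[str]:
    """Extract the first standalone option letter from the model's reply."""
    match = re.search(r"\b([A-E])\b", reply.upper())
    return match.group(1) if match else None


def zero_shot_accuracy(
    items: Sequence[MCQItem],
    query_lmm: Callable[[str, str], str],  # (image_path, prompt) -> model reply
) -> float:
    """Query the model once per item (zero-shot) and report accuracy."""
    correct = 0
    for item in items:
        reply = query_lmm(item.image_path, build_prompt(item))
        if parse_choice(reply) == item.answer:
            correct += 1
    return correct / len(items) if items else 0.0
```

Parsing only a standalone letter from the free-form reply keeps the metric comparable across models with different output styles, which matters when contrasting open-source and closed-source LMMs under the same protocol.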