- Updated: March 11, 2026
DIVA‑GRPO: Enhancing Multimodal Reasoning through Difficulty‑Adaptive Variant Advantage
Published on: March 3, 2026
Authors: Haowen Gao, Zhenyu Zhang, Liang Pang, Fangda Guo, Hongjian Dou, Guannan Lv, Shaoguo Liu, Tingting Gao, Huawei Shen, Xueqi Cheng
Abstract: Reinforcement learning (RL) with Group Relative Policy Optimization (GRPO) has become a widely adopted approach for enhancing the reasoning capabilities of multimodal large language models (MLLMs). While GRPO enables long‑chain reasoning without a critic, it often suffers from sparse rewards on difficult problems and from advantage vanishing when group‑level rewards are too consistent, as happens on overly easy or overly hard problems. Existing solutions (sample expansion, selective utilization, and indirect reward design) often fail to maintain enough variance in within‑group reward distributions to yield clear optimization signals. To address this, we propose DIVA‑GRPO, a difficulty‑adaptive variant advantage method that adjusts variant difficulty distributions from a global perspective. DIVA‑GRPO dynamically assesses problem difficulty, samples variants at appropriate difficulty levels, and calculates advantages across local and global groups using difficulty‑weighted and normalized scaling. This alleviates reward sparsity and advantage vanishing while improving training stability. Extensive experiments on six reasoning benchmarks demonstrate that DIVA‑GRPO outperforms existing approaches in both training efficiency and reasoning performance.
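To make the advantage computation concrete, here is a minimal sketch of how a difficulty‑weighted mix of local and global advantage normalization could look. This is an illustration under our own assumptions, not the paper's exact formulation: the mixing weight `alpha`, the difficulty proxy, and the running global statistics are all hypothetical.

```python
import numpy as np

def diva_advantages(rewards, difficulty, global_mean, global_std,
                    alpha=0.5, eps=1e-8):
    """Hypothetical difficulty-weighted advantage (illustration only).

    rewards      -- per-rollout rewards for one question's group, shape [G]
    difficulty   -- scalar in [0, 1], e.g. 1 - empirical pass rate (assumed proxy)
    global_mean, global_std -- running reward statistics pooled over all groups
    alpha        -- assumed mixing weight between local and global normalization
    """
    rewards = np.asarray(rewards, dtype=np.float64)

    # Local (vanilla GRPO) advantage: standardize within the group.
    # This collapses toward zero when all rollouts receive the same reward.
    local_adv = (rewards - rewards.mean()) / (rewards.std() + eps)

    # Global advantage: standardize against statistics pooled across groups,
    # which stays informative even when the group's rewards are uniform.
    global_adv = (rewards - global_mean) / (global_std + eps)

    # Lean more on the global signal for harder questions, where within-group
    # rewards are most likely to be degenerate.
    w = alpha * difficulty
    return (1.0 - w) * local_adv + w * global_adv

# Example: a hard question where 7 of 8 rollouts failed.
adv = diva_advantages([0, 0, 1, 0, 0, 0, 0, 0], difficulty=0.875,
                      global_mean=0.4, global_std=0.3)
```

The idea captured here is that when a group's rewards are nearly uniform (all correct or all wrong), the local GRPO advantage collapses toward zero, while the globally normalized term still carries a learning signal.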
Read the full paper on arXiv and explore the source code on GitHub.
Why DIVA‑GRPO Matters
- Difficulty‑Adaptive Sampling: Adjusts variant difficulty to keep reward signals informative (see the sketch after this list).
- Enhanced Advantage Calculation: Uses difficulty‑weighted scaling to prevent advantage vanishing.
- Improved Training Stability: Balances reward variance across groups, leading to faster convergence.
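As a rough illustration of the first point, the snippet below sketches how difficulty could be estimated from empirical pass rates and used to pick an easier or harder variant of a question. The thresholds, the `variants_by_level` structure, and the helper names are assumptions made for illustration; the paper's actual sampling procedure may differ.

```python
import random

def estimate_difficulty(pass_count, group_size):
    """Assumed difficulty proxy: fraction of rollouts that failed the question."""
    return 1.0 - pass_count / group_size

def sample_variant(question, variants_by_level, difficulty,
                   low=0.2, high=0.8):
    """Pick a question variant whose difficulty keeps group rewards informative.

    variants_by_level -- hypothetical dict with 'easier' / 'harder' lists of
    variants (e.g. hint-augmented or distractor-added rewrites of `question`).
    The thresholds `low` and `high` are illustrative, not from the paper.
    """
    if difficulty > high:
        # Nearly all rollouts fail: rewards are uniformly zero, so ease it.
        pool = variants_by_level.get("easier") or [question]
    elif difficulty < low:
        # Nearly all rollouts succeed: rewards are uniformly one, so harden it.
        pool = variants_by_level.get("harder") or [question]
    else:
        # Reward variance is already informative; keep the original question.
        pool = [question]
    return random.choice(pool)
```

Keeping sampled variants inside this middle difficulty band is what prevents whole groups from producing identical rewards, which is the failure mode behind advantage vanishing.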
Key Results
DIVA‑GRPO was evaluated on six benchmark reasoning tasks, achieving up to 15% higher accuracy and up to 30% faster training compared to standard GRPO implementations.
Explore More on Ubos.tech
For deeper insights into our research pipeline, visit our Research Hub. Stay updated with the latest AI breakthroughs on our Blog and discover how Ubos.tech is shaping the future of multimodal AI.
Keywords: DIVA‑GRPO, multimodal reasoning, reinforcement learning, GRPO, difficulty‑adaptive, variant advantage, AI research, large language models.