Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models
arXiv: 2505.14810
We conduct cold-start reinforcement learning (cold-RL) for mathematical reasoning in our MathIF project.
GitHub repository: https://github.com/TingchenFu/MathIF
We base our experiments on the DeepScaler dataset, which contains approximately 40k math reasoning samples. Training is conducted on 16 NVIDIA H100 GPUs. For reinforcement learning, we adopt the GRPO algorithm with verifiable outcome-based rewards; a sketch of such a reward is given below. The model is trained with the VeRL framework, with most hyperparameters following the default settings.
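To illustrate what a verifiable outcome-based reward looks like, here is a minimal Python sketch: it extracts the final `\boxed{}` answer from a completion and returns a binary reward via exact match. The `\boxed{}` extraction and the exact-match comparison are illustrative assumptions, not the project's exact implementation.

```python
import re

def extract_boxed_answer(text: str) -> str | None:
    """Extract the content of the last \\boxed{...} in a model response."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else None

def outcome_reward(response: str, gold_answer: str) -> float:
    """Binary verifiable reward: 1.0 if the final answer matches the reference."""
    predicted = extract_boxed_answer(response)
    if predicted is None:
        return 0.0
    return 1.0 if predicted == gold_answer.strip() else 0.0
```

In GRPO, such a scalar reward is computed for each sampled completion, and advantages are normalized within each group of rollouts for the same prompt.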
For decoding, we use nucleus sampling (temperature 1.0, top-p 0.95) with a maximum generation length of 16,384 tokens, and the vLLM engine for efficient inference; see the sketch below.
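For concreteness, here is a minimal vLLM decoding sketch with these sampling settings. The model path and prompt are placeholders, not artifacts from the project.

```python
from vllm import LLM, SamplingParams

# Sampling settings reported above: nucleus sampling with T=1.0, p=0.95,
# and a 16,384-token generation budget.
sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.95,
    max_tokens=16384,
)

# Placeholder model path; substitute the checkpoint under evaluation.
llm = LLM(model="path/to/checkpoint")

prompts = ["Solve: what is 17 * 24? Put the final answer in \\boxed{}."]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```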
BibTeX:
```bibtex
@article{fu2025scaling,
  title   = {Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models},
  author  = {Fu, Tingchen and Gu, Jiawei and Li, Yafu and Qu, Xiaoye and Cheng, Yu},
  journal = {arXiv preprint arXiv:2505.14810},
  year    = {2025}
}
```