Research Shows Reasoning Models Improve With Any Rewards
by Brian Wang from NextBigFuture.com on (#6XM6W)
RLVR amplifies reasoning patterns that already exist. Qwen2.5-Math can uniquely do code reasoning: solving math by writing Python (without executing it). Code reasoning correlates with correctness (64% accuracy with code reasoning vs. 29% without). Spurious-reward training amplifies code usage to over 90%. Simply getting reasoning models to do more work in general improves their performance. Our hypothesis: RLVR amplifies reasoning patterns ...
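The reported correlation is between whether a response contains code-style reasoning and whether its final answer is correct. Below is a minimal, hypothetical sketch of how such a split could be measured; the `has_code` heuristic and the `responses` data layout are illustrative assumptions, not the researchers' actual evaluation harness.

```python
# Hypothetical sketch: estimate accuracy conditioned on whether a model's trace
# uses "code reasoning" (writing Python in the solution without executing it).
# The detection heuristic and data format below are assumptions for illustration.

def has_code(trace: str) -> bool:
    """Crude heuristic: flag a trace as code reasoning if it contains Python markers."""
    markers = ("def ", "import ", "print(", "```python")
    return any(m in trace for m in markers)

def accuracy_by_reasoning_style(responses):
    """responses: list of (trace, is_correct) pairs; returns (acc_with_code, acc_without)."""
    with_code = [ok for trace, ok in responses if has_code(trace)]
    without_code = [ok for trace, ok in responses if not has_code(trace)]
    acc = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return acc(with_code), acc(without_code)

# Toy usage: the article reports ~64% accuracy with code reasoning vs ~29% without,
# and that spurious-reward RLVR pushes code-reasoning frequency above 90%.
sample = [
    ("import math\n# reason through the algebra step by step...", True),
    ("The answer is clearly 7.", False),
    ("def area(r): return 3.14 * r * r  # compute, then compare", True),
]
print(accuracy_by_reasoning_style(sample))
```

Under the article's hypothesis, even a spurious reward can raise accuracy if training happens to increase the frequency of the higher-accuracy (code-reasoning) style already present in the base model.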