Motion-R1: Enhancing Motion Generation with Decomposed Chain-of-Thought and RL Binding

Runqi Ouyang1,2,*, Haoyun Li1,2,*, Zhenyuan Zhang1,3,*, Xiaofeng Wang1, Zeyu Zhang1,
Zheng Zhu1,†, Guan Huang1, Sirui Han3, Xingang Wang1,†
1GigaAI     2CASIA     3HKUST
*Equal Contribution
†Corresponding Authors: zhengzhu@ieee.org, xingang.wang@ia.ac.cn

Abstract


Text-to-motion generation has become a fundamental task in human-machine interaction, enabling the synthesis of realistic human motions from natural language descriptions. Although recent advances in large language models and reinforcement learning (RL) have contributed to high-quality motion generation, two major challenges remain. First, existing approaches often fail to capture the temporal and causal complexities inherent in natural language, leading to oversimplified or incoherent motions. Second, RL-based methods are frequently overly complex, hindering their scalability and adaptability across motion generation tasks. To address these challenges, we propose Motion-R1, a novel framework that combines decomposed Chain-of-Thought (CoT) reasoning with reinforcement learning to enhance both the quality and interpretability of generated motions. Specifically, we introduce the Decomposed CoT Data Engine, an automated pipeline that synthesizes high-quality reasoning data, allowing the model to better capture the temporal dependencies and causal relationships of human motion. We also propose RL Binding, a reinforcement learning strategy that incorporates multi-modal text-motion alignment into the RL reward function, guiding the model to produce motions that are both semantically accurate and realistic. Extensive experiments on benchmark datasets show that Motion-R1 achieves state-of-the-art performance, including a 3.5% improvement in MM-Dist on HumanML3D and gains in R-Precision and FID on KIT-ML and BABEL, surpassing existing methods across key metrics. These results highlight its superior capability in handling complex motion generation tasks.
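To make the RL Binding idea more concrete, below is a minimal PyTorch sketch of an alignment-based reward: the cosine similarity between a caption embedding and the embedding of the generated motion (assumed to come from a pretrained text-motion retrieval model with a shared embedding space), normalized within a group of rollouts. The encoder interface, the group-relative normalization, and all names here are illustrative assumptions, not the paper's exact implementation.

# Sketch of an alignment-based RL reward (hypothetical setup).
# Assumes pretrained text/motion encoders that map into a shared
# embedding space; embeddings below are random stand-ins.
import torch
import torch.nn.functional as F

def alignment_reward(text_emb: torch.Tensor, motion_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between caption and generated-motion embeddings,
    used as a reward signal without human preference labels."""
    return F.cosine_similarity(text_emb, motion_emb, dim=-1)

def group_relative_advantage(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within a group of rollouts sampled for the same
    prompt (shape [G]); an assumed group-relative policy-gradient setup."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Usage with dummy embeddings: 4 rollouts for one prompt, 512-d embeddings.
text_emb = torch.randn(4, 512)    # caption embedding repeated per rollout
motion_emb = torch.randn(4, 512)  # embeddings of the 4 generated motions
adv = group_relative_advantage(alignment_reward(text_emb, motion_emb))

Because the reward comes from a frozen retrieval-style encoder rather than a learned preference model, no additional human annotation is needed, which is the efficiency claim the abstract makes for RL Binding.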


Approach Overview


In-Distribution Prompts

Out-of-Distribution Prompts


We showcase Motion-R1's capability to generate diverse and high-quality motions for out-of-distribution prompts.


Complex Text

"After hearing a loud noise, a person turned around , stepped back cautiously with hands raised defensively and then slowly approached."

"A person takes a few steps forward, jumps forward with both feet , and immediately turns right upon landing"


"A person raises arms, arches back slightly, then shifts weight onto the right leg while extending the left leg backward in a poised arabesque position."

"A person jumped up happily, raised hand and spsun excitedly."


Abstract Text

"A person is serving in badminton ."

"A person is skipping rope."

"A person is dancing ballroom dance."

"The person walks as if balancing on a tightrope."

"The person mimics swimming in mid-air, as if performing a freestyle stroke without water."

"The person walks through strong wind, leans forward and braces against resistance."

Traditional Approaches vs. Motion-R1


(a) Traditional end-to-end models exhibit poor generalization on out-of-distribution motions. (b) Our Decomposed CoT Data Engine enables strong generalization by structuring high-level instructions into intermediate reasoning steps. (c) Existing RL-based methods rely on expensive human annotations to train preference models for reward signals. (d) Our RL Binding mechanism achieves efficient multi-modal alignment without additional annotation cost.
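As a rough illustration of what "structuring high-level instructions into intermediate reasoning steps" could look like, the sketch below prompts an LLM to decompose a caption into ordered sub-motions before generation. The prompt wording, the call_llm helper, and the parsing format are hypothetical placeholders, not the actual data engine.

# Hypothetical sketch of decomposing a high-level caption into ordered
# sub-motion reasoning steps. `call_llm` stands in for whichever LLM
# endpoint the data engine would use.
DECOMPOSE_PROMPT = (
    "Break the motion description below into a numbered list of short, "
    "atomic sub-motions in temporal order.\n"
    "Description: {caption}\nSteps:"
)

def decompose_caption(caption: str, call_llm) -> list[str]:
    """Return the caption's intermediate reasoning steps as a list of strings."""
    raw = call_llm(DECOMPOSE_PROMPT.format(caption=caption))
    steps = []
    for line in raw.splitlines():
        line = line.strip()
        if line and line[0].isdigit():          # keep "1. ..." style lines
            steps.append(line.split(".", 1)[-1].strip())
    return steps

# Example with a stub LLM showing the intended input/output shape.
stub = lambda _: "1. Turn around\n2. Step back with hands raised\n3. Slowly approach"
print(decompose_caption("After hearing a loud noise, ...", stub))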


Comparisons


We compare Motion-R1 against baselines such as MoMask and MotionLLM. As shown in the left panel of the figure, Motion-R1 produces smooth, well-structured sequences for simple and multi-step instructions. To evaluate generalization beyond the training distribution, we present qualitative comparisons under two types of out-of-distribution captions, shown in the middle and right panels of the figure.


BibTeX

@article{ouyang2025motion,
  title={Motion-R1: Chain-of-Thought Reasoning and Reinforcement Learning for Human Motion Generation},
  author={Ouyang, Runqi and Li, Haoyun and Zhang, Zhenyuan and Wang, Xiaofeng and Zhu, Zheng and Huang, Guan and Wang, Xingang},
  journal={arXiv preprint arXiv:2506.10353},
  year={2025}
}