Inspired by state-return trajectory modeling in offline RL, we incorporate a frame-level quality indicator variable (QualVar), analogous to the per-state reward in RL. Our StableMotion framework adopts a generate-discriminate approach, in which a single model is jointly trained to evaluate motion quality and to generate motion of varying quality, conditioned on QualVar.
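To make the joint objective concrete, here is a minimal training sketch, assuming a transformer-based denoising diffusion backbone over sequences of pose features concatenated with a per-frame QualVar channel. `MotionDenoiser`, the tensor shapes, and the x0-prediction objective are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    """Hypothetical denoiser over sequences of [pose || QualVar] frames."""
    def __init__(self, pose_dim, hidden_dim=256, num_steps=1000):
        super().__init__()
        self.in_proj = nn.Linear(pose_dim + 1, hidden_dim)   # +1: QualVar channel
        layer = nn.TransformerEncoderLayer(hidden_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.out_proj = nn.Linear(hidden_dim, pose_dim + 1)
        self.t_embed = nn.Embedding(num_steps, hidden_dim)   # diffusion-step embedding

    def forward(self, x_noisy, t):
        h = self.in_proj(x_noisy) + self.t_embed(t)[:, None, :]
        return self.out_proj(self.backbone(h))               # predict clean [pose || QualVar]

def train_step(model, poses, qualvar, alphas_cumprod, optimizer):
    """One denoising step over the *joint* distribution of motion and quality:
    the same network can later judge quality (inpaint QualVar given motion)
    or clean motion (inpaint poses with QualVar clamped to 'clean')."""
    x0 = torch.cat([poses, qualvar.unsqueeze(-1)], dim=-1)   # (B, T, pose_dim + 1)
    t = torch.randint(0, alphas_cumprod.numel(), (x0.size(0),))
    a = alphas_cumprod[t][:, None, None]
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)  # forward noising
    loss = ((model(xt, t) - x0) ** 2).mean()                 # x0-prediction MSE
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```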
Similar to the practice of prompting text-to-image models with "photo-realistic quality", QualVar offers a knob for specifying generation quality. This allows the model to clean up raw mocap data by first identifying corrupted segments and then inpainting them with high-quality motion.
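The two-stage cleanup described above could then be implemented roughly as follows. Here `sampler.inpaint(x, known_mask)` is a stand-in for a standard diffusion inpainting routine (entries marked known are held fixed, the rest are sampled); the threshold and the clean label value are assumptions for illustration.

```python
@torch.no_grad()
def cleanup(sampler, poses, clean_value=1.0, threshold=0.5):
    """Stage 1: evaluate quality by inpainting the QualVar channel.
    Stage 2: regenerate flagged frames with QualVar forced to 'clean'."""
    B, T, D = poses.shape
    x = torch.cat([poses, torch.zeros(B, T, 1)], dim=-1)
    known = torch.ones_like(x, dtype=torch.bool)
    known[..., -1] = False                        # unknown: per-frame quality
    qualvar = sampler.inpaint(x, known)[..., -1]  # predicted quality scores
    corrupted = qualvar < threshold               # frames flagged as low quality

    x[..., -1] = clean_value                      # condition on 'clean' quality
    known = torch.ones_like(x, dtype=torch.bool)
    known[..., :-1] = (~corrupted).unsqueeze(-1)  # unknown: poses of flagged frames
    return sampler.inpaint(x, known)[..., :-1]    # cleaned motion
```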
We apply StableMotion to train a motion cleanup model on SoccerMocap, a 245-hour raw motion capture dataset recorded by a prominent game studio. The resulting model, StableMotion-SoccerMocap, effectively fixes motion artifacts in newly captured motions from the same mocap system.
We benchmark StableMotion on the publicly available AMASS dataset by injecting synthetic corruptions to construct a benchmark dataset, BrokenAMASS. The results demonstrate that StableMotion can train effective motion cleanup models even on datasets exhibiting severe motion corruption.
Adaptive cleanup uses soft motion quality evaluation and soft motion inpainting, allowing the model to adaptively scale its modification of each frame based on the severity of the artifacts and thereby better preserve the content of the original frames.
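One plausible reading of how soft evaluation and soft inpainting compose, reusing the hypothetical `sampler` and the continuous `qualvar` scores from the sketches above; the linear per-frame blend is an illustrative assumption, not the paper's exact rule.

```python
@torch.no_grad()
def adaptive_cleanup(sampler, poses, qualvar):
    """Rather than a hard keep/replace decision per frame, blend each
    original frame with its regenerated counterpart in proportion to the
    predicted artifact severity, so mostly-clean frames are barely touched."""
    severity = (1.0 - qualvar).clamp(0.0, 1.0)   # 0 = pristine, 1 = fully corrupted
    regenerated = sampler.generate_clean(poses)  # stand-in: clean-conditioned resample
    w = severity.unsqueeze(-1)                   # (B, T, 1) per-frame blend weight
    return (1.0 - w) * poses + w * regenerated
```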
Quality-aware ensemble leverages the diversity of diffusion models and the dual functionality of generate-discriminate models. It draws multiple candidate motions and selects the highest-quality one according to the model's own predicted quality scores, yielding more consistent and higher-quality results than a single pass of the cleanup model.
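A sketch of the ensemble, again with hypothetical `sampler` helpers: each cleanup pass is stochastic, so repeated passes yield diverse candidates that the model's own evaluation mode can rank.

```python
@torch.no_grad()
def quality_aware_ensemble(sampler, poses, num_candidates=8):
    """Draw several stochastic cleanup candidates and keep the one the
    model itself scores highest, trading compute for consistency."""
    best, best_score = None, float("-inf")
    for _ in range(num_candidates):
        candidate = sampler.cleanup(poses)                 # one stochastic pass
        score = sampler.evaluate(candidate).mean().item()  # mean predicted QualVar
        if score > best_score:
            best, best_score = candidate, score
    return best
```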
@article{mu2025stablemotion,
  title={StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data},
  author={Mu, Yuxuan and Ling, Hung Yu and Shi, Yi and Ojeda, Ismael Baira and Xi, Pengcheng and Shu, Chang and Zinno, Fabio and Peng, Xue Bin},
  journal={arXiv preprint arXiv:2505.03154},
  year={2025}
}