Retargeting Matters: General Motion Retargeting for Humanoid Motion Tracking

Joao Pedro Araujo, Yanjie Ze, Pei Xu, Jiajun Wu, C. Karen Liu
Stanford University

Abstract

Humanoid motion tracking policies are central to building teleoperation pipelines and hierarchical controllers, yet they face a fundamental challenge: the embodiment gap between humans and humanoid robots. Current approaches address this gap by retargeting human motion data to humanoid embodiments and then training reinforcement learning (RL) policies to imitate these reference trajectories. However, artifacts introduced during retargeting, such as foot sliding, self-penetration, and physically infeasible motion, are often left in the reference trajectories for the RL policy to correct. While prior work has demonstrated motion tracking abilities, it often requires extensive reward engineering and domain randomization to succeed.

In this paper, we systematically evaluate how retargeting quality affects policy performance when excessive reward tuning is suppressed. To address issues that we identify with existing retargeting methods, we propose a new retargeting method, General Motion Retargeting (GMR). We evaluate GMR alongside two open-source retargeters, PHC and ProtoMotions, as well as a high-quality closed-source retargeted dataset from Unitree. Using BeyondMimic for policy training, we isolate retargeting effects without reward tuning.

Our experiments on a diverse subset of the LAFAN1 dataset reveal that while most motions can be tracked, artifacts in retargeted data significantly reduce policy robustness, particularly for dynamic or long sequences. GMR consistently outperforms existing open-source methods in both tracking performance and faithfulness to the source motion, achieving perceptual fidelity and policy success rates close to the closed-source baseline.


Motivation

The standard approach for overcoming the embodiment gap from humans to humanoids is to use kinematic retargeting from the source human motion to the target humanoid embodiment. This practice overlooks glaring artifacts introduced by the retargeting process (such as foot sliding, ground penetration, and physically impossible motion due to self-penetration), instead forcing the RL policy to imitate physically infeasible reference motions while itself remaining subject to physical constraints. Prior work has shown that while training policies on retargeted data with severe artifacts in simulation is possible, transferring them to the real world demands extensive trial-and-error, reward shaping, and parameter tuning. Our hypothesis is that, with enough reward engineering and domain randomization, the artifacts introduced by retargeting can be mostly compensated for; without these engineering efforts, however, the quality of the retargeted data plays a significant role in policy performance.
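To make these artifacts measurable, the sketch below shows one way to quantify foot sliding and ground penetration in a retargeted clip. This is our illustration rather than code from any of the evaluated retargeters; the array layout, the contact heuristic, and the thresholds are assumptions.

import numpy as np

def artifact_metrics(foot_pos, contact_height=0.02, dt=1.0 / 30.0):
    """Quantify foot sliding and ground penetration in a retargeted clip.

    foot_pos: (T, n_feet, 3) array of foot positions in the world frame,
              with z up and the ground plane at z = 0 (assumed convention).
    contact_height: feet below this height are treated as in contact.
    """
    z = foot_pos[..., 2]                    # (T, n_feet) foot heights
    in_contact = z < contact_height         # heuristic contact labels

    # Ground penetration: how far feet dip below the ground plane.
    penetration = np.clip(-z, 0.0, None)
    max_penetration = penetration.max()

    # Foot sliding: horizontal foot speed while labeled in contact.
    xy_vel = np.diff(foot_pos[..., :2], axis=0) / dt  # (T-1, n_feet, 2)
    speed = np.linalg.norm(xy_vel, axis=-1)           # (T-1, n_feet)
    contact = in_contact[:-1] & in_contact[1:]
    sliding = speed[contact].mean() if contact.any() else 0.0

    return {"max_penetration_m": max_penetration,
            "mean_slide_speed_mps": sliding}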

GMR: General Motion Retargeting

To address the artifacts (deviation from the source motion, foot sliding, ground penetration, and self-intersection) we find in previous retargeting methods, we propose a new retargeting method, General Motion Retargeting (GMR). The main difference from prior methods is how it handles source motion scaling, which we found to be the cause of many of these artifacts. Scaling is followed by a two-stage optimization that solves for the robot motion.
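As a rough illustration of this pipeline structure only, the sketch below rescales the source keypoints to the robot's proportions and then fits the robot configuration in two optimization stages. The robot.fk API, the per-limb scaling scheme, and the cost terms are our assumptions, not GMR's exact formulation.

import numpy as np
from scipy.optimize import minimize

def retarget_frame(human_kps, robot, limb_scales, q_init):
    """Hypothetical two-stage fit of one frame: scale, then optimize.

    human_kps: dict mapping body names to 3D keypoints for this frame.
    robot: object exposing fk(q) -> dict of body positions (assumed API).
    limb_scales: per-body ratios of robot to human segment lengths.
    """
    # Stage 0: rescale source keypoints to the robot's proportions so the
    # optimizer is not forced to absorb the embodiment size mismatch.
    targets = {name: p * limb_scales[name] for name, p in human_kps.items()}

    # Stage 1: coarse fit of root pose and joints to keypoint positions.
    def pos_cost(q):
        fk = robot.fk(q)
        return sum(np.sum((fk[n] - t) ** 2) for n, t in targets.items())

    q1 = minimize(pos_cost, q_init, method="L-BFGS-B").x

    # Stage 2: refine with a regularizer that discourages large deviations
    # from the coarse solution (a stand-in for smoothness/limit terms).
    def refine_cost(q):
        return pos_cost(q) + 1e-2 * np.sum((q - q1) ** 2)

    return minimize(refine_cost, q1, method="L-BFGS-B").x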


Evaluation

Data

We select a diverse sample from the LAFAN1 dataset, ranging from simple motions like walking and turning to dynamic and complex motions such as martial arts, kicks, and dancing. Our final dataset consists of 21 sequences with lengths ranging from 5 seconds to 2 minutes.


Method

We retarget each of the clips in the training data using the different methods and train single-clip motion imitation policies. We evaluate each policy in the training simulator with only observation noise (sim), with added domain randomization (sim-dr), and in a setup mimicking real-world deployment conditions (sim2sim).
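Concretely, each policy is scored by a success rate over repeated rollouts. The loop below is a minimal sketch of such a protocol, assuming a gym-style environment that terminates early on failure (e.g., a fall or large tracking error); the episode count and the info key are hypothetical.

def success_rate(env, policy, n_episodes=50):
    """Fraction of rollouts that track the clip to the end without failure.

    env is assumed to expose a gym-style reset()/step() API and to flag
    early termination via `done` before the reference clip ends.
    """
    successes = 0
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            obs, _, done, info = env.step(policy(obs))
        successes += int(info.get("clip_finished", False))
    return successes / n_episodes

# The same policy would be scored under three conditions (names assumed):
# success_rate(make_env(noise=True), pi)                    # sim
# success_rate(make_env(noise=True, domain_rand=True), pi)  # sim-dr
# success_rate(make_env(backend="other_simulator"), pi)     # sim2sim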


Results


Our experiments reveal that most motions can be successfully tracked regardless of the retargeting method. For long-horizon or dynamic motions, GMR's success rate is close to that of the Unitree retargeted dataset and exceeds that of the other open-source methods. However, all three open-source methods (including GMR) produce artifacts that impact policy robustness:

  • PHC retargets can exhibit severe ground penetration, making it difficult for the policy to learn to track a given motion.
  • The ProtoMotions retarget for the "Run (stop & go)" motion has serious self-collisions.
  • The GMR retarget for the "Dance 5" motion has sudden jumps in the joint values.

These artifacts should be avoided in retargeted motion to ensure the best tracking results.
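Artifacts such as the sudden joint-value jumps noted above can be screened for cheaply before training. Below is a minimal check we sketch for illustration; the frame rate and the velocity threshold are assumptions to be tuned per robot.

import numpy as np

def find_joint_jumps(q, fps=30.0, max_vel=20.0):
    """Flag frames where any joint moves implausibly fast.

    q: (T, n_joints) array of joint positions in radians.
    max_vel: velocity threshold in rad/s (assumed; tune per robot).
    """
    vel = np.abs(np.diff(q, axis=0)) * fps   # (T-1, n_joints) rad/s
    bad_frames = np.where((vel > max_vel).any(axis=1))[0]
    return bad_frames  # indices of frames preceding a suspicious jump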

User Study

In our experiments, we find that retargeted motions can deviate noticeably from the source motion. To quantify this, we conduct a user study measuring the perceived faithfulness of the retargeted motion to the source human motion.


Users consider the GMR retargets to be more faithful than those generated by the other open-source methods, and very close in faithfulness to the Unitree retargets.



BibTeX


@article{araujo2025gmr,
  title={Retargeting Matters: General Motion Retargeting for Humanoid Motion Tracking},
  author={Joao Pedro Araujo and Yanjie Ze and Pei Xu and Jiajun Wu and C. Karen Liu},
  journal={arXiv preprint arXiv:2510.02252},
  year={2025}
}