Bin Cao, Sipeng Zheng, Hao Luo, Boyuan Li, Jing Liu, Zongqing Lu

CVPR 2026

Abstract

We present OpenT2M, a no-frills text-to-motion generation framework built upon open-source, large-scale, and high-quality motion data. Our approach democratizes motion generation research by removing dependencies on proprietary datasets.