
Fg-T2M++: LLMs-Augmented Fine-Grained Text Driven Human Motion Generation

  • Yin Wang
  • Mu Li
  • Jiapeng Liu
  • Zhiying Leng
  • Frederick W.B. Li
  • Ziyao Zhang
  • Xiaohui Liang*
  • *Corresponding author for this work
  • Beihang University
  • Durham University
  • Zhongguancun Laboratory

Research output: Contribution to journal › Article › peer-review

Abstract

We address the challenging problem of fine-grained text-driven human motion generation. Existing works generate imprecise motions that fail to accurately capture the relationships specified in text due to: (1) a lack of effective text parsing for detailed semantic cues regarding body parts, and (2) a failure to fully model the linguistic structures between words needed to comprehend text comprehensively. To tackle these limitations, we propose a novel fine-grained framework, Fg-T2M++, that consists of: (1) an LLMs semantic parsing module to extract body part descriptions and semantics from text, (2) a hyperbolic text representation module to encode relational information between text units by embedding the syntactic dependency graph into hyperbolic space, and (3) a multi-modal fusion module to hierarchically fuse text and motion features. Extensive experiments on the HumanML3D and KIT-ML datasets demonstrate that Fg-T2M++ outperforms SOTA methods, validating its ability to accurately generate motions adhering to comprehensive text semantics.
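The record does not specify how the hyperbolic text representation module embeds the syntactic dependency graph; as a minimal sketch, here are the standard Poincaré-ball operations such hyperbolic embeddings typically rely on (exponential map at the origin and geodesic distance), assuming unit curvature. The function names and the curvature choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def expmap0(v, eps=1e-9):
    # Exponential map at the origin of the Poincare ball (illustrative,
    # curvature -1): maps a tangent vector v in R^n to a point strictly
    # inside the unit ball, since tanh(||v||) < 1.
    norm = np.linalg.norm(v)
    if norm < eps:
        return np.zeros_like(v)
    return np.tanh(norm) * v / norm

def poincare_dist(u, v):
    # Geodesic distance on the Poincare ball. Distances grow rapidly
    # toward the boundary, which is why hyperbolic space suits
    # tree-like structures such as syntactic dependency graphs.
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)
```

In a dependency-graph setting, each word's feature vector would be projected into the ball with `expmap0`, and `poincare_dist` between connected words would serve as the relational signal.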

Original language: English
Pages (from-to): 4277-4293
Number of pages: 17
Journal: International Journal of Computer Vision
Volume: 133
Issue number: 7
DOI
Publication status: Published - Jul 2025
