Zijie Wu (Jarrent Wu)

Ph.D. Student in Computer Vision
Huazhong University of Science and Technology (HUST)

I am a third-year Ph.D. student supervised by Prof. Xiang Bai. My research interests include Image/Video/3D/4D Generation and Motion Synthesis.


💼 Experience

Research Intern (Qingyun Plan)

  • Selected for the Tencent Qingyun Plan (腾讯青云计划), a top-tier talent program.
  • Focusing on Mesh Animation and Motion Synthesis.

Research Intern

  • Conducted research on 2D/3D/4D Generation.
  • Published two first-author papers, at ECCV 2024 and ICCV 2025.

📝 Selected Publications

* denotes equal contribution.

AnimateAnyMesh: A Feed-Forward 4D Foundation Model for Text-Driven Universal Mesh Animation

Zijie Wu, Chaohui Yu, Fan Wang, Xiang Bai
ICCV, 2025
We propose AnimateAnyMesh, the first feed-forward universal mesh animation framework. We design DyMeshVAE and a Text-to-Trajectory RF model to compress and generate vertex trajectories, eliminating the dependency on rigging and skinning and pioneering a new paradigm: feed-forward, vertex-based 3D animation.

SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer

Zijie Wu, Chaohui Yu, Yanqin Jiang, Chenjie Cao, Fan Wang, Xiang Bai
ECCV, 2024
We propose SC4D, which disentangles motion and appearance synthesis into sparse control points and dense Gaussians to achieve superior performance. Benefiting from this disentangled representation, SC4D also enables motion transfer applications.

SingleInsert: Inserting New Concepts from a Single Image into Text-to-Image Models for Flexible Editing

Zijie Wu, Chaohui Yu, Fan Wang, Xiang Bai
arXiv, 2023
We propose SingleInsert, which facilitates concept personalization from a single image using the designed mask losses, enabling flexible editing with text-to-image models.

CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer

Zijie Wu*, Zhen Zhu*, Junping Du, Xiang Bai
ECCV, 2022 (Oral Presentation)
We propose a generic contrastive loss that alleviates flickering artifacts in video style transfer and improves the generation quality of image style transfer.