Yuxuan Mu (Matthew)
I am currently a Ph.D. student at the GrUVi Lab, Simon Fraser University, working on 3D computer vision and animation. I focus primarily on 3D character motion modeling, advised by Professor Xue Bin (Jason) Peng.
I obtained my master’s degree at the University of Alberta, advised by Professor Li Cheng. I worked as a research intern at Huawei Canada (Noah’s Ark Lab) on 3D generation and reconstruction with Juwei Lu and Xinxin Zuo.
Email /
GitHub /
LinkedIn /
X
Research
My work involves 3D human motion estimation, generation, physics-based simulation, and rendering, with the long-term goal of augmenting our reality with dynamic digital clones.
'*' indicates equal contribution.
GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction
Yuxuan Mu, Xinxin Zuo, Chuan Guo, Yilin Wang, Juwei Lu, Xiaofei Wu, Songcen Xu, Peng Dai, Youliang Yan, Li Cheng
ECCV, 2024
webpage /
paper /
The first 3D diffusion model operating directly on Gaussian Splatting for real-world object reconstruction, with fine-grained view-guided conditioning.
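For intuition, here is a rough sketch of one deterministic (DDIM-style) denoising step applied directly to a set of Gaussian-splat parameters. The denoiser, the noise schedule, and the paper's view-guided conditioning are all abstracted away; the 14-dimensional packing is just one illustrative layout, not the paper's exact representation.

```python
# One DDIM-style denoising step over per-Gaussian parameters.
# 14 dims = 3 pos + 3 scale + 4 rot + 1 opacity + 3 color (illustrative).
import torch

def denoise_step(denoiser, x_t, t, alphas_cumprod):
    # x_t: (N, 14) noisy per-Gaussian parameters at timestep t
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    eps = denoiser(x_t, t)                            # predicted noise
    x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # estimate clean x0
    # Deterministic DDIM update (eta = 0) toward timestep t-1
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
```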
MoMask: Generative Masked Modeling of 3D Human Motions
Chuan Guo*, Yuxuan Mu*, Muhammad Gohar Javed*, Sen Wang, Li Cheng
CVPR, 2024
webpage /
paper /
code /
demo /
We introduce MoMask, a novel masked modeling framework for text-driven 3D human motion generation, built on a hierarchical quantization scheme.
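As a toy illustration of the masked-modeling idea (not MoMask's actual text-conditioned, hierarchical residual-VQ architecture), the sketch below trains a transformer to recover randomly masked motion tokens; every name and size is made up.

```python
# Generative masked modeling over discrete motion tokens (toy version).
import torch
import torch.nn as nn

class MaskedMotionTransformer(nn.Module):
    def __init__(self, num_tokens=512, dim=256, depth=4, max_len=196):
        super().__init__()
        self.mask_id = num_tokens                  # extra id for [MASK]
        self.embed = nn.Embedding(num_tokens + 1, dim)
        self.pos = nn.Parameter(torch.zeros(max_len, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_tokens)

    def forward(self, tokens):                     # tokens: (B, T) long
        x = self.embed(tokens) + self.pos[: tokens.size(1)]
        return self.head(self.encoder(x))          # (B, T, num_tokens) logits

def training_step(model, tokens, mask_ratio=0.5):
    # Replace a random fraction of tokens with [MASK] and train the
    # transformer to recover them (cross-entropy on masked positions only).
    mask = torch.rand(tokens.shape, device=tokens.device) < mask_ratio
    corrupted = tokens.masked_fill(mask, model.mask_id)
    logits = model(corrupted)
    return nn.functional.cross_entropy(logits[mask], tokens[mask])
```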
Generative Human Motion Stylization in Latent Space
Chuan Guo*, Yuxuan Mu*, Xinxin Zuo, Peng Dai, Youliang Yan, Juwei Lu, Li Cheng
ICLR, 2024
webpage /
paper /
code /
We propose a flexible, generative approach to motion stylization that extracts and injects style through a probabilistic latent style space.
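A minimal VAE-flavored sketch of a probabilistic style space: the encoder emits a Gaussian posterior over a style code, sampled via the reparameterization trick. Names and dimensions here are hypothetical, and the paper's actual architecture differs in detail.

```python
# Probabilistic style extraction (VAE-style sketch).
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    def __init__(self, motion_dim=263, style_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(motion_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, style_dim)
        self.logvar = nn.Linear(128, style_dim)

    def forward(self, motion_feat):                # (B, motion_dim)
        h = self.backbone(motion_feat)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: style ~ N(mu, sigma^2); at test time a style
        # can instead be drawn from the prior N(0, I) for novel stylization.
        style = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return style, mu, logvar

# Injection can then be as simple as concatenating the sampled style code
# with a content code before decoding the stylized motion.
```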
RACon: Retrieval-Augmented Simulated Character Locomotion Control
Yuxuan Mu, Shihao Zou, Kangning Yin, Zheng Tian, Li Cheng, Weinan Zhang, Jun Wang
ICME (Oral), 2024
paper /
We introduce an end-to-end hierarchical reinforcement learning method that combines a task-oriented learnable retriever, a motion controller, and a retrieval-augmented discriminator.
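Schematically, the retrieval step looks like the sketch below: a learnable retriever embeds the current state and fetches the best-matching clip from a motion database to condition the low-level controller. All names are hypothetical and much simplified from the paper; the hard argmax is shown only for clarity, since end-to-end training would need a differentiable relaxation or an RL gradient.

```python
# Learnable nearest-neighbor retrieval over a motion database (sketch).
import torch
import torch.nn as nn

class MotionRetriever(nn.Module):
    def __init__(self, state_dim, key_dim, keys, clips):
        super().__init__()
        self.query = nn.Linear(state_dim, key_dim)
        self.register_buffer("keys", keys)    # (M, key_dim) clip embeddings
        self.register_buffer("clips", clips)  # (M, T, pose_dim) motion clips

    def forward(self, state):                 # state: (B, state_dim)
        q = self.query(state)                 # (B, key_dim) learned query
        scores = q @ self.keys.t()            # similarity to every clip
        idx = scores.argmax(dim=-1)           # hard top-1 retrieval
        return self.clips[idx]                # (B, T, pose_dim) references
```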
Event‐based Human Pose Tracking by Spiking Spatiotemporal Transformer
Shihao Zou, Yuxuan Mu, Xinxin Zuo, Sen Wang, Li Cheng
arXiv preprint, 2023
paper /
code /
Our spiking neural network (SNN) approach uses at most 19.1% of the computation and 3.6% of the energy of existing methods while achieving superior performance.
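For readers unfamiliar with spiking networks, the sketch below shows a generic leaky integrate-and-fire (LIF) layer, the basic mechanism that makes activations sparse binary events; tau and the hard reset are illustrative defaults, and the paper's spiking spatiotemporal transformer is substantially more involved.

```python
# Generic leaky integrate-and-fire (LIF) layer (illustrative, not the
# paper's exact neuron model).
import torch

def lif_forward(currents, tau=2.0, v_threshold=1.0):
    # currents: (T, B, D) input currents over T discrete timesteps
    v = torch.zeros_like(currents[0])
    spikes = []
    for i_t in currents:
        v = v + (i_t - v) / tau               # leaky integration of input
        s = (v >= v_threshold).float()        # spike where threshold crossed
        v = v * (1.0 - s)                     # hard reset after a spike
        spikes.append(s)
    return torch.stack(spikes)                # (T, B, D) binary spike trains
```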