Yuxuan Mu (Matthew)

I am currently a graduate student in the Vision and Learning Lab at the University of Alberta, working on 3D computer vision. I primarily focus on 3D human motion modeling, advised by Prof. Li Cheng.

Since March 2023, I have been working on the reconstruction and generation of common 3D objects as a research intern with Dr. Juwei Lu and Dr. Xinxin Zuo at Noah’s Ark Lab, Huawei Canada.

Email  /  GitHub  /  DBLP  /  LinkedIn


Research

My research involves 3D human motion estimation, generation, physics simulation, and rendering, with the eventual aim of augmenting our reality by building a dynamic digital clone of the real world.

'*' indicates equal contribution.


MoMask: Generative Masked Modeling of 3D Human Motions


Chuan Guo*, Yuxuan Mu*, Muhammad Gohar Javed*, Sen Wang, Li Cheng
Accepted for CVPR, 2024
webpage / paper / code / demo

We introduce MoMask, a novel masked modeling framework for text-driven 3D human motion generation with a hierarchical quantization scheme.


Generative Human Motion Stylization in Latent Space


Chuan Guo*, Yuxuan Mu*, Xinxin Zuo, Peng Dai, Youliang Yan, Juwei Lu, Li Cheng
Accepted for ICLR, 2024
webpage / paper / code

We propose a flexible motion style extraction and injection method from a generative perspective, solving the motion stylization task with a probabilistic style space.


RACon: Retrieval-Augmented Simulated Character Locomotion Control


Yuxuan Mu, Shihao Zou, Kangning Yin, Zheng Tian, Li Cheng, Weinan Zhang, Jun Wang
Accepted for ICME (Oral), 2024

We introduce an end-to-end hierarchical reinforcement learning method that utilizes a task-oriented learnable retriever, a motion controller, and a retrieval-augmented discriminator.


3DG2: 3D-Gaussian Diffusion for Single-View 3D Reconstruction


Yuxuan Mu, Xinxin Zuo, Chuan Guo, Yilin Wang, Juwei Lu, Xiaofei Wu, Songcen Xu, Peng Dai, Youliang Yan, Li Cheng
arXiv preprint, 2023

A diffusion model built upon the emerging 3D Gaussian representation to tackle single-view real-world object reconstruction, offering efficient and faithful rendering as well as generative modeling.


Event-based Human Pose Tracking by Spiking Spatiotemporal Transformer


Shihao Zou, Yuxuan Mu, Xinxin Zuo, Sen Wang, Li Cheng
arXiv preprint, 2023
paper / code

Our SNN-based approach uses at most 19.1% of the computation and 3.6% of the energy consumed by existing methods while achieving superior performance.





Design and source code from Leonid Keselman's website