I’m a Ph.D. student in the Department of Computer Science and Technology, Tsinghua University (清华大学计算机科学与技术系), advised by Song-Hai Zhang (张松海). Before that, I received my bachelor’s degree from the School of Computer and Communication Engineering, University of Science and Technology Beijing (北京科技大学计算机与通信工程学院).

My research focuses on digital humans and computer vision, including digital body/head avatar creation and editing, image/video generative models, and novel 3D representations.

I am now in the fourth year of my five-year Ph.D. program and expect to graduate in June 2025. If you’re looking for a digital human researcher, feel free to contact me (wangcong20@mails.tsinghua.edu.cn). A position based in or near Beijing is preferred.

🔥 News

  • 2023.08:  🎉 Neural Point-based Volumetric Avatars was accepted by SIGGRAPH Asia 2023!
  • 2023.07:  🎉 LoLep was accepted by ICCV 2023!
  • 2022.07:   I joined Tencent AI Lab as an intern.
  • 2022.02:  🎉 MotionHint was accepted by ICRA 2022!

📝 Publications

arXiv 2024

MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing

Cong Wang, Di Kang, He-Yi Sun, Shen-Han Qian, Zi-Xuan Wang, Linchao Bao, Song-Hai Zhang

Project

  • MeGA adopts more suitable representations to model different head components, achieving higher-quality renderings and naturally supporting various downstream applications (including hair alteration and texture editing).
SIGGRAPH Asia 2023

Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar

Cong Wang, Di Kang, Yan-Pei Cao, Linchao Bao, Ying Shan, Song-Hai Zhang

Project

  • NPVA employs neural points to achieve higher-quality renderings for challenging facial regions (e.g., mouth interior, eyes, and beard).
ICCV 2023

LoLep: Single-View View Synthesis with Locally-Learned Planes and Self-Attention Occlusion Inference

Cong Wang, Yu-Ping Wang, Dinesh Manocha

Project

  • By regressing Locally-Learned Planes, LoLep generates higher-quality novel views from a single RGB image.
ICRA 2022

MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints

Cong Wang, Yu-Ping Wang, Dinesh Manocha

Project

  • MotionHint can be easily applied to existing open-source state-of-the-art SSM-VO systems to greatly improve their performance (reducing ATE by up to 28.73%).

🎖 Honors and Awards

  • 2023.10 The Second Prize Scholarship (5,000RMB)
  • 2023.09 Longhu Scholarship (5,000RMB)
  • 2023.05 2023 Tencent AI Lab Rhino-Bird Elite Talent
  • 2022.10 The Second Prize Scholarship (5,000RMB)
  • 2022.09 Longhu Scholarship (5,000RMB)
  • 2020.06 Excellent Graduate of Beijing (Top 5%)
  • 2019.11 National Scholarship (8,000RMB, 1/446)
  • 2019.04 Mathematical Contest in Modeling (MCM), Meritorious Winner (Top 4%)
  • 2018.11 National Scholarship (8,000RMB, 1/446)
  • 2018.04 Mathematical Contest in Modeling (MCM), Meritorious Winner (Top 4%)
  • 2017.11 People’s Special Scholarship (5,000RMB, 1/145)
  • 2017.11 “Guan Zhi” Scholarship (10,000RMB, 1/446)

📖 Education

  • 2020.09 - now, Ph.D. student, Department of Computer Science and Technology, Tsinghua University, Beijing.
  • 2016.09 - 2020.06, Undergraduate, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing.

💬 Invited Talks

  • 2023.12, Oral Presentation for “Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar”, SIGGRAPH Asia 2023, Sydney, NSW, Australia.
  • 2023.11, Paper presentation, invited by the Journal of Image and Graphics (bilibili video).
  • 2022.07, Paper presentation, invited by BKUNYUN (bilibili video).
  • 2022.05, Oral Presentation for “MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints”, ICRA 2022, Philadelphia, PA, USA.

💻 Internships

  • 2022.07 - now, Intern, Tencent AI Lab, Beijing.