Luiz Gustavo Hafemann


I am an R&D Scientist at Ubisoft La Forge, conducting research at the intersection of machine learning, computer vision, and computer graphics. At La Forge, my work focuses on developing methods for face capture and facial animation tailored to video game applications.

My interests include capture systems and generative models for controllable and disentangled animation, with the aim of advancing character technology for real-time applications. I am passionate about applied research that addresses real-world production challenges: identifying problems, developing novel techniques, and seeing them integrated into tools and pipelines that enable more realistic and controllable facial animation.

I supervise students during their research internships in these areas. We are often looking for motivated PhD students to contribute to cutting-edge projects in facial animation, capture systems, and generative models.

news

Jul 10, 2025 Our paper SEREP: Semantic Facial Expression Representation for Robust In-the-Wild Capture and Retargeting was accepted to ICCV 2025.

We propose a novel method for monocular facial expression capture and retargeting. The model accurately captures geometric expression deformations and is more robust to non-frontal views than existing methods. We are also releasing MultiREX, the first benchmark for geometric evaluation of expression capture. More details on the project page.
May 23, 2025 We published a new post on Ubisoft La Forge’s blog, showcasing our work on the paper MoSAR: Monocular semi-supervised model for avatar reconstruction using differentiable shading.
May 09, 2025 Our paper Model See Model Do: Speech-Driven Facial Animation with Style Control was accepted to SIGGRAPH 2025.

We present an example-based diffusion model that generates stylistic 3D facial animations. The generated animations are lip-synced to a provided audio track and adhere to the delivery style of the example animation. Our quantitative experiments and user studies show improved style adherence compared to prior methods that learn style with contrastive objectives. More details on the project page.

selected publications

  1. SEREP: Semantic Facial Expression Representation for Robust In-the-Wild Capture and Retargeting
    Arthur Josi*, Luiz Gustavo Hafemann*, Abdallah Dib, Emeline Got, Rafael MO Cruz, and Marc-André Carbonneau
    In ICCV, 2025
  2. Model See Model Do: Speech-Driven Facial Animation with Style Control
    Yifang Pan, Karan Singh, and Luiz Gustavo Hafemann
    In SIGGRAPH, 2025
  3. MoSAR: Monocular semi-supervised model for avatar reconstruction using differentiable shading
    Abdallah Dib*, Luiz Gustavo Hafemann*, Emeline Got, Trevor Anderson, Amin Fadaeinejad, Rafael MO Cruz, and Marc-André Carbonneau
    In CVPR, 2024