
SIGGRAPH 2025: Our paper "Model See Model Do: Speech-Driven Facial Animation with Style Control" has been accepted.

We present an example-based diffusion model that generates stylized 3D facial animations. The generated animations are lip-synced to a provided audio track and follow the delivery style of an example animation. Our quantitative experiments and user studies show improved style adherence compared to prior approaches that learn style through contrastive methods. More details are available on the project page.
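The paper itself isn't reproduced here, so as a point of reference only, the sketch below shows one generic way a diffusion denoiser over motion sequences can take both frame-aligned audio features and a style embedding (extracted from an example animation) as conditioning. This is a minimal illustration assuming PyTorch; every name, dimension, and the noise schedule is hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class StyleConditionedDenoiser(nn.Module):
    """Hypothetical denoiser for a diffusion model over facial-motion frames,
    conditioned on an audio track and a style embedding. Illustrative only;
    not the architecture from the paper."""

    def __init__(self, motion_dim=64, audio_dim=128, style_dim=256, hidden=512):
        super().__init__()
        self.motion_in = nn.Linear(motion_dim, hidden)
        self.audio_in = nn.Linear(audio_dim, hidden)
        self.style_in = nn.Linear(style_dim, hidden)
        # Timestep embedding: a small MLP over the scalar diffusion step.
        self.t_embed = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden)
        )
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.motion_out = nn.Linear(hidden, motion_dim)

    def forward(self, noisy_motion, audio_feats, style_emb, t):
        # noisy_motion: (B, T, motion_dim)  per-frame face parameters + noise
        # audio_feats:  (B, T, audio_dim)   frame-aligned audio features
        # style_emb:    (B, style_dim)      embedding of the example animation
        # t:            (B,)                diffusion timestep per sample
        h = self.motion_in(noisy_motion) + self.audio_in(audio_feats)
        cond = self.style_in(style_emb) + self.t_embed(t.float().unsqueeze(-1))
        h = h + cond.unsqueeze(1)  # broadcast conditioning over all frames
        return self.motion_out(self.backbone(h))  # predicted noise (epsilon)

# One simplified DDPM-style training step: noise a clip at a random timestep,
# then regress the noise given the audio and style conditioning.
model = StyleConditionedDenoiser()
motion = torch.randn(2, 100, 64)   # stand-in for a ground-truth animation clip
audio = torch.randn(2, 100, 128)   # stand-in for extracted audio features
style = torch.randn(2, 256)        # stand-in for an example-style embedding
t = torch.randint(0, 1000, (2,))
alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2).view(-1, 1, 1) ** 2
noise = torch.randn_like(motion)
noisy = alpha_bar.sqrt() * motion + (1 - alpha_bar).sqrt() * noise
loss = nn.functional.mse_loss(model(noisy, audio, style, t), noise)
loss.backward()
```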