
Visual Optimization for Enhanced Facial Animation Accuracy
Research Team
Spencer Idenouye
Emerson Chan
Valentina Bachkarova
Kevin Santos
Partners
Jali Incorporated
NRC IRAP
CTO
Impact
- Cross-platform testing of facial animation tools.
- Converting 3D actor scans into realistic, animation-ready characters.
- Visual debugging framework to improve animation accuracy.
Optimizing Digital Double Creation and Animation Workflows
This project addressed the challenge of achieving high-fidelity automatic facial animation, particularly the limitations of “out-of-the-box” mouth and tongue movements. The research systematically explored and compared digital double creation and animation workflows using Unreal's MetaHuman Creator and Reallusion's Character Creator.
The methodology involved rigorous testing of motion capture retargeting, evaluation of visual and performance accuracy, and development of three refined production pipelines for photogrammetry cleanup, MetaHuman creation, and Character Creator integration. The team produced eight distinct prototypes, demonstrating enhanced realism through custom techniques built on tools such as Wrap and Substance Painter.
This foundational work provides a clear understanding of system gaps, refined motion capture processes, and reusable documentation, significantly enhancing character accuracy and animation quality for cross-platform applications.
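As one illustration of the accuracy evaluation described above, the sketch below compares a retargeted facial mesh sequence against a reference scan sequence frame by frame. It is a minimal sketch, assuming both sequences are exported with matching topology and frame counts as NumPy vertex caches; the file names, tolerance, and units are hypothetical, not project data.

```python
import numpy as np

def per_frame_vertex_error(reference_frames, retargeted_frames):
    """Compute per-frame RMS vertex error between two mesh sequences.

    Both inputs are arrays of shape (frames, vertices, 3) with matching
    vertex order and frame counts.
    """
    diff = retargeted_frames - reference_frames        # (F, V, 3) displacement
    per_vertex = np.linalg.norm(diff, axis=-1)         # (F, V) distance per vertex
    return np.sqrt((per_vertex ** 2).mean(axis=-1))    # RMS error per frame

if __name__ == "__main__":
    # Hypothetical per-frame vertex caches exported from the cleanup pipeline.
    ref = np.load("actor01_reference_verts.npy")
    ret = np.load("actor01_metahuman_verts.npy")
    rms = per_frame_vertex_error(ref, ret)
    bad_frames = np.where(rms > 0.5)[0]                # assumed tolerance and units
    print(f"{len(bad_frames)} frames exceed tolerance:", bad_frames[:10])
```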
Facial Animation Accuracy Requires Smarter Debugging Tools
Automatic facial animation often struggles with mouth and eye inaccuracies, particularly in real-time applications. For digital doubles used in virtual production or game engines, these flaws undermine realism and cross-platform compatibility. To address this, the project sought to build a visual debugging workflow that supports accurate animation transfer and evaluation across different rigging systems—such as MetaHuman, Character Creator, and Unity. The goal was to compare and refine these workflows, improving fidelity from raw motion capture to final render. By identifying these system gaps, the research aimed to lay the groundwork for refining character accuracy and expanding performance capture capabilities across various platforms.
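A minimal sketch of such a cross-rig evaluation pass is shown below: per-frame 2D landmark error between reference footage and each rig's rendered output, with a separate mouth-region score to surface the mouth inaccuracies noted above. The landmark exports, 68-point layout, rig labels used as file suffixes, and paths are assumptions for illustration, not the project's actual debugging tooling.

```python
import numpy as np

RIGS = ["metahuman", "character_creator", "unity"]
MOUTH = slice(48, 68)   # mouth indices in an assumed 68-point landmark layout

def landmark_error(reference, rendered, region=slice(None)):
    """Mean Euclidean landmark distance per frame over an optional region.

    Both inputs are (frames, landmarks, 2) arrays of 2D image coordinates.
    """
    d = np.linalg.norm(rendered[:, region] - reference[:, region], axis=-1)
    return d.mean(axis=-1)

if __name__ == "__main__":
    # Hypothetical landmark exports produced by an external face tracker.
    ref = np.load("shot010_reference_landmarks.npy")
    for rig in RIGS:
        out = np.load(f"shot010_{rig}_landmarks.npy")
        full = landmark_error(ref, out).mean()
        mouth = landmark_error(ref, out, MOUTH).mean()
        print(f"{rig:>18}: full-face {full:6.2f}px   mouth {mouth:6.2f}px")
```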
Designing a Modular Workflow to Improve Cross-Platform Facial Animation
The research team employed a comprehensive and systematic approach to improve automatic facial animation fidelity:
- Comparative Workflow Analysis
The team tested animation pipelines using MetaHuman, Character Creator, Unity, and Daz, including facial tracking with machine learning.
- High-Fidelity Motion Capture and Data Processing
High-resolution RGB and optical motion capture data were recorded, cleaned, and retargeted to multiple rigs for cross-platform testing (a retargeting sketch follows this list).
- Development of Improved Production Workflows
Three key workflows were developed:
  - A MetaHuman pipeline using Wrap, Substance Painter, and Maya for scan-based realism.
  - A Character Creator pipeline using Headshot for generating and refining facial rigs.
  - A photogrammetry cleanup pipeline for preparing scan data for rigging and animation.
- Extensive Prototyping
Eight prototypes were developed by creating two to three asset variations for each of three scanned actors: a raw photogrammetry scan, a standard MetaHuman version, and a wrapped MetaHuman with baked high-fidelity textures. These prototypes enabled comparison of visual accuracy, animation performance, and cross-platform compatibility, providing a basis for refining future digital double workflows.
- Comprehensive Documentation and Analysis
The entire development process, tool usage, and findings were meticulously documented. This documentation supports replication, future development, and broader adoption of improved animation debugging practices.
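The retargeting step referenced in the list above can be illustrated with a small sketch that renames and scales per-frame blendshape weight curves from a capture-side naming convention to a target rig's controls. The channel names, scale factors, and JSON export format are hypothetical placeholders; a production mapping table would be authored per rig pair.

```python
import json
import numpy as np

# Hypothetical mapping from capture-side channels to target-rig controls,
# with an optional per-channel scale factor.
CHANNEL_MAP = {
    "jawOpen":      ("Jaw_Open",      1.0),
    "mouthSmile_L": ("Mouth_Smile_L", 0.9),
    "mouthSmile_R": ("Mouth_Smile_R", 0.9),
    "tongueOut":    ("Tongue_Out",    1.0),
}

def retarget_curves(source_curves):
    """Rename and scale per-frame weight curves for the target rig.

    source_curves: dict of channel name -> list of per-frame weights in [0, 1].
    Returns a dict keyed by target-rig channel names.
    """
    target = {}
    for src_name, weights in source_curves.items():
        if src_name not in CHANNEL_MAP:
            continue  # unmapped channels are skipped in this sketch
        dst_name, scale = CHANNEL_MAP[src_name]
        target[dst_name] = np.clip(np.asarray(weights) * scale, 0.0, 1.0).tolist()
    return target

if __name__ == "__main__":
    # Hypothetical capture export and retargeted output files.
    with open("take03_capture_curves.json") as f:
        curves = json.load(f)
    with open("take03_metahuman_curves.json", "w") as f:
        json.dump(retarget_curves(curves), f, indent=2)
```

Keeping the mapping as data rather than code makes it straightforward to maintain one table per rig pair while reusing the same transfer step.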
Enabling Consistent and Realistic Facial Animation
The project delivered eight animation-ready prototypes and three cross-rig production workflows, enhancing understanding and capability in facial animation. Specifically, the work produced:
- Improved rig compatibility and animation fidelity via custom mesh and texture workflows.
- Granular control of facial expression correction using MetaHuman Animator and Maya-based blendshapes (see the sketch after this list).
- High-quality documentation to replicate and expand these methods in other pipelines.
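To illustrate the blendshape-based correction mentioned in the list above, the sketch below applies weighted corrective deltas on top of a neutral face mesh using the standard linear blendshape formulation. The shape names, weights, and exported files are illustrative assumptions rather than the project's actual Maya setup.

```python
import numpy as np

def apply_correctives(neutral, correctives, weights):
    """Add weighted corrective deltas on top of the neutral face mesh.

    neutral:     (V, 3) vertex positions of the neutral pose.
    correctives: dict of shape name -> (V, 3) corrective target positions.
    weights:     dict of shape name -> weight in [0, 1] for the current frame.
    """
    result = neutral.astype(float)
    for name, target in correctives.items():
        w = weights.get(name, 0.0)
        if w > 0.0:
            result += w * (target - neutral)   # linear blendshape delta
    return result

if __name__ == "__main__":
    # Hypothetical meshes exported from Maya with identical topology.
    neutral = np.load("neutral.npy")
    correctives = {"mouth_seal_fix": np.load("mouth_seal_fix.npy")}
    corrected = apply_correctives(neutral, correctives, {"mouth_seal_fix": 0.6})
    print("corrected mesh vertices:", corrected.shape)
```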
This work lays a foundation for scalable, platform-agnostic facial animation debugging tools—supporting not only more believable digital humans, but also accelerating workflows in film, games, and immersive media.