RADIO: Reference-Agnostic Dubbing Video Synthesis
Dongyeun Lee, Chaewon Kim, Sangjoon Yu, Jaejun Yoo, Gyeong-Moon Park; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 4168-4178
Abstract
One of the most challenging problems in audio-driven talking head generation is achieving high-fidelity detail while ensuring precise synchronization. Given only a single reference image, extracting meaningful identity attributes becomes even more challenging, often causing the network to mirror the facial and lip structures of the reference too closely. To address these issues, we introduce RADIO, a framework engineered to yield high-quality dubbed videos regardless of the pose or expression in the reference image. The key is to modulate the decoder layers using a latent space composed of audio and reference features. Additionally, we incorporate ViT blocks into the decoder to emphasize high-fidelity details, especially in the lip region. Our experimental results demonstrate that RADIO achieves accurate synchronization without loss of fidelity. In particular, in harsh scenarios where the reference frame deviates significantly from the ground truth, our method outperforms state-of-the-art methods, highlighting its robustness.
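The abstract names two architectural ideas: decoder layers modulated by a latent built from audio and reference features, and ViT blocks inside the decoder for fine detail. The sketch below is a minimal PyTorch illustration of those ideas, not the authors' implementation; all module names, dimensions, and the AdaIN-style choice of modulation are assumptions made for the example.

```python
# Minimal sketch (assumptions, not the paper's code): a decoder block whose
# activations are modulated by a fused audio+reference latent, plus a
# ViT-style self-attention block applied to the decoder feature map.
import torch
import torch.nn as nn


class ModulatedConvBlock(nn.Module):
    """Conv layer modulated by an audio+reference latent (AdaIN-style)."""

    def __init__(self, in_ch, out_ch, latent_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # Map the fused latent to a per-channel scale and shift.
        self.affine = nn.Linear(latent_dim, out_ch * 2)
        self.norm = nn.InstanceNorm2d(out_ch, affine=False)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, w):
        h = self.norm(self.conv(x))
        scale, shift = self.affine(w).chunk(2, dim=1)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.act(h)


class ViTBlock(nn.Module):
    """Pre-norm self-attention over the spatial tokens of a feature map."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                 nn.Linear(dim * 4, dim))

    def forward(self, x):
        b, c, hgt, wid = x.shape
        t = x.flatten(2).transpose(1, 2)   # (B, H*W, C) spatial tokens
        n = self.norm1(t)
        t = t + self.attn(n, n, n, need_weights=False)[0]
        t = t + self.mlp(self.norm2(t))
        return t.transpose(1, 2).reshape(b, c, hgt, wid)


class Decoder(nn.Module):
    """Toy decoder: modulated conv blocks followed by a ViT refinement block."""

    def __init__(self, audio_dim=128, ref_dim=256, ch=64):
        super().__init__()
        latent_dim = audio_dim + ref_dim
        self.block1 = ModulatedConvBlock(ch, ch, latent_dim)
        self.block2 = ModulatedConvBlock(ch, ch, latent_dim)
        self.vit = ViTBlock(ch)
        self.to_rgb = nn.Conv2d(ch, 3, 1)

    def forward(self, feat, audio_emb, ref_emb):
        # Fuse audio and reference features into one modulating latent.
        w = torch.cat([audio_emb, ref_emb], dim=1)
        h = self.block1(feat, w)
        h = self.block2(h, w)
        h = self.vit(h)  # attention pass to refine fine (e.g. lip) detail
        return torch.tanh(self.to_rgb(h))


# Shape check with dummy tensors.
dec = Decoder()
out = dec(torch.randn(2, 64, 32, 32), torch.randn(2, 128), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 3, 32, 32])
```

The design choice being illustrated is that the same fused latent conditions every decoder stage, while the attention block operates on spatial tokens so that spatially localized regions such as the lips can be refined with global context.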
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Lee_2024_WACV,
  author    = {Lee, Dongyeun and Kim, Chaewon and Yu, Sangjoon and Yoo, Jaejun and Park, Gyeong-Moon},
  title     = {RADIO: Reference-Agnostic Dubbing Video Synthesis},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {4168-4178}
}