Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer

Wing-Yin Yu, Lai-Man Po, Ray C.C. Cheung, Yuzhi Zhao, Yu Xue, Kun Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 7502-7512

Abstract


Video-based human pose transfer is a video-to-video generation task that animates a plain source human image based on a series of target human poses. Because of the difficulty of transferring highly structured patterns on garments and handling discontinuous poses, existing methods often generate unsatisfactory results such as distorted textures and flickering artifacts. To address these issues, we propose a novel Deformable Motion Modulation (DMM) that utilizes geometric kernel offsets with adaptive weight modulation to perform feature alignment and style transfer simultaneously. Unlike the standard style modulation used in style transfer, the proposed modulation mechanism adaptively reconstructs smoothed frames from style codes according to the object shape through an irregular receptive field of view. To enhance spatio-temporal consistency, we leverage bidirectional propagation to extract hidden motion information from a warped image sequence generated from noisy poses. The proposed feature propagation significantly improves motion prediction through forward and backward passes. Both quantitative and qualitative experimental results demonstrate superiority over state-of-the-art methods in terms of image fidelity and visual continuity. The source code is publicly available at github.com/rocketappslab/bdmm.
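
To make the abstract's idea concrete, the snippet below is a minimal, hypothetical PyTorch-style sketch of a layer that combines predicted kernel offsets (feature alignment over an irregular receptive field) with style-code weight modulation (style transfer). It is only an illustration under assumed names (DeformableMotionModulation, offset_head, style_to_scale, etc.) and is not the authors' implementation; see github.com/rocketappslab/bdmm for the actual code.

```python
# Illustrative sketch only: NOT the authors' DMM implementation.
# Kernel offsets give an irregular receptive field for alignment,
# while a style code modulates the convolution weights for style transfer.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class DeformableMotionModulation(nn.Module):  # hypothetical module name
    def __init__(self, in_ch, out_ch, style_dim, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        # Predict per-pixel kernel offsets (2 per sampling point) and a mask.
        self.offset_head = nn.Conv2d(in_ch, 2 * kernel_size * kernel_size, 3, padding=1)
        self.mask_head = nn.Conv2d(in_ch, kernel_size * kernel_size, 3, padding=1)
        # Map the style code to per-input-channel modulation scales.
        self.style_to_scale = nn.Linear(style_dim, in_ch)

    def forward(self, feat, style):
        # feat: (N, in_ch, H, W); style: (N, style_dim)
        offset = self.offset_head(feat)              # geometric kernel offsets
        mask = torch.sigmoid(self.mask_head(feat))   # adaptive sampling weights
        scale = self.style_to_scale(style)           # (N, in_ch)
        outputs = []
        for i in range(feat.size(0)):
            # Modulate the shared weight per sample (StyleGAN2-style modulation).
            w = self.weight * scale[i].view(1, -1, 1, 1)
            outputs.append(deform_conv2d(
                feat[i:i + 1], offset[i:i + 1], w,
                padding=self.kernel_size // 2, mask=mask[i:i + 1]))
        return torch.cat(outputs, dim=0)
```

In the full method, such a block would be applied within a bidirectional (forward and backward) propagation scheme over the warped frame sequence, which this sketch omits.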

Related Material


@InProceedings{Yu_2023_ICCV,
    author    = {Yu, Wing-Yin and Po, Lai-Man and Cheung, Ray C.C. and Zhao, Yuzhi and Xue, Yu and Li, Kun},
    title     = {Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {7502-7512}
}