Pretrained Pixel-Aligned Reference Network for 3D Human Reconstruction

Gee-Sern Hsu, Yu-Hong Lin, Chin-Cheng Chang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 6226-6234

Abstract

We propose the Pretrained Pixel-Aligned Reference (PPR) network for 3D human reconstruction. The PPR network leverages a pretrained model, embedded with a reference mesh surface and full-view normals, to better constrain spatial query processing, leading to improved mesh surface reconstruction. Our network consists of a dual-path encoder and a query network. The dual-path encoder extracts front-back view features from the input image through one path, and full-view reference features from the pretrained model through the other. These features, together with additional spatial traits, are concatenated and processed by the query network to estimate the desired mesh surface. During training, we sample query points on the pretrained model as well as around the ground-truth mesh surface, enabling the implicit function to better capture both the mesh surface and the overall posture. We evaluate our approach through experiments on the THuman2.0 and RenderPeople datasets and compare it with state-of-the-art methods.
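The abstract follows the general pixel-aligned implicit-function pattern (popularized by PIFu): for each 3D query point, per-pixel features are sampled from each encoder path at the point's 2D projection, concatenated with a spatial trait such as depth, and decoded by an MLP into an inside/outside estimate whose level set gives the mesh surface. Below is a minimal PyTorch sketch of that pattern only; the class name, feature dimensions, orthographic projection, and layer widths are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryNetwork(nn.Module):
    """Minimal sketch of a pixel-aligned implicit query network.

    All shapes and layer sizes here are assumptions for illustration,
    not the configuration reported in the paper.
    """
    def __init__(self, img_feat_dim=256, ref_feat_dim=256, spatial_dim=1):
        super().__init__()
        in_dim = img_feat_dim + ref_feat_dim + spatial_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),  # inside/outside occupancy logit per query point
        )

    def forward(self, img_feats, ref_feats, points):
        # img_feats, ref_feats: (B, C, H, W) maps from the two encoder paths.
        # points: (B, N, 3) query locations in normalized [-1, 1] camera space.
        xy = points[..., :2]   # 2D projection (orthographic camera assumed)
        z = points[..., 2:]    # depth kept as an extra spatial trait
        # Sample both feature paths at the projected 2D locations.
        f_img = F.grid_sample(img_feats, xy.unsqueeze(2), align_corners=True)  # (B, C, N, 1)
        f_ref = F.grid_sample(ref_feats, xy.unsqueeze(2), align_corners=True)
        f_img = f_img.squeeze(-1).transpose(1, 2)  # (B, N, C)
        f_ref = f_ref.squeeze(-1).transpose(1, 2)
        # Concatenate image features, reference features, and the spatial trait,
        # then decode per-point occupancy.
        feats = torch.cat([f_img, f_ref, z], dim=-1)
        return self.mlp(feats)  # (B, N, 1) logits; surface = 0.5 level set after sigmoid

# Usage with dummy inputs:
net = QueryNetwork()
img_feats = torch.randn(1, 256, 128, 128)   # front-back view features
ref_feats = torch.randn(1, 256, 128, 128)   # full-view reference features
points = torch.rand(1, 4096, 3) * 2 - 1     # query points in [-1, 1]^3
logits = net(img_feats, ref_feats, points)  # (1, 4096, 1)
```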

Related Material

[pdf]
[bibtex]
@InProceedings{Hsu_2023_CVPR,
    author    = {Hsu, Gee-Sern and Lin, Yu-Hong and Chang, Chin-Cheng},
    title     = {Pretrained Pixel-Aligned Reference Network for 3D Human Reconstruction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {6226-6234}
}