A Diffusion-ReFinement Model for Sketch-to-Point Modeling
Abstract
Diffusion probabilistic models have proven effective in generative tasks. However, their variants have not yet delivered this effectiveness in cross-dimensional, multimodal generation tasks. Generating 3D models from single free-hand sketches is a typically tricky cross-domain problem, and it grows ever more important and urgent with the widespread emergence of VR/AR technologies and the use of portable touch screens. In this paper, we introduce a novel Sketch-to-Point Diffusion-ReFinement model to tackle this problem. By injecting a new conditional reconstruction network and a refinement network, we overcome the barrier of multimodal generation between the two dimensions. By explicitly conditioning the generation process on a given sketch image, our method can generate plausible point clouds that restore the sharp details and topology of 3D shapes while matching the input sketches. Extensive experiments on various datasets show that our model achieves highly competitive performance on the sketch-to-point generation task. The code is available at https://github.com/Walterkd/diffusion-refine-sketch2point.
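
The conditional generation described in the abstract follows the general denoising diffusion formulation, with each reverse step conditioned on a feature extracted from the input sketch. The following is a minimal, hypothetical sketch of such conditional DDPM sampling for a point cloud; the names (eps_model, sketch_feat) and the noise schedule are illustrative assumptions and do not reflect the authors' released implementation.

```python
# Minimal, hypothetical conditional DDPM sampling for a point cloud.
# The denoising network and sketch feature are placeholders, not the
# authors' model; they only illustrate how every reverse step can be
# conditioned on a sketch embedding.
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def sample_points(eps_model, sketch_feat, n_points=2048, T=1000, rng=None):
    """Reverse-diffuse Gaussian noise into an (n_points, 3) point cloud,
    conditioning every denoising step on the sketch feature."""
    rng = rng or np.random.default_rng(0)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal((n_points, 3))          # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(x, t, sketch_feat)          # predicted noise, given the sketch
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x                                        # coarse cloud, before refinement

# Placeholder denoiser: a real model would be a neural network conditioned
# on the sketch embedding; this one ignores its inputs for illustration only.
dummy_eps = lambda x, t, c: np.zeros_like(x)
coarse = sample_points(dummy_eps, sketch_feat=np.zeros(256), n_points=512, T=50)
print(coarse.shape)  # (512, 3)
```

In the full model described in the abstract, the coarse point cloud produced by this sampling stage would then be passed to the refinement network.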
Related Material

[pdf] [supp] [code]

[bibtex]
@InProceedings{Kong_2022_ACCV,
    author    = {Kong, Di and Wang, Qiang and Qi, Yonggang},
    title     = {A Diffusion-ReFinement Model for Sketch-to-Point Modeling},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {1522-1538}
}