MS3D: High-Quality 3D Generation via Multi-Scale Representation Modeling

Guan Luo, Jianfeng Zhang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 26336-26348

Abstract


High-quality textured mesh reconstruction from sparse-view images remains a fundamental challenge in computer graphics and computer vision. Traditional large reconstruction models operate in a single-scale manner, forcing the model to simultaneously capture global structure and local details, which often compromises the reconstructed shapes. In this work, we propose MS3D, a novel multi-scale 3D reconstruction framework. At its core, our method introduces a hierarchical structured latent representation for multi-scale modeling, coupled with a multi-scale feature extraction and integration mechanism. This enables progressive reconstruction, effectively decomposing the complex task of detailed geometry reconstruction into a sequence of easier steps. This coarse-to-fine approach captures multi-frequency details, learns complex geometric patterns, and generalizes well across diverse objects while preserving fine-grained details. Extensive experiments demonstrate that MS3D outperforms state-of-the-art methods and is broadly applicable to both image- and text-to-3D generation. The entire pipeline reconstructs high-quality textured meshes in under five seconds.
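The coarse-to-fine idea described above can be illustrated in miniature: reconstruct a signal scale by scale, where each stage only has to predict the residual detail missing from the upsampled coarser estimate. The sketch below is a simplified 1D analogy of such progressive refinement, not the paper's actual architecture; the function names, the averaging/repeat resampling, and the use of the true residual as a stand-in for a learned predictor are all illustrative assumptions.

```python
import numpy as np

def downsample(x, factor):
    # Average-pool a 1D signal by the given integer factor.
    return x.reshape(-1, factor).mean(axis=1)

def upsample(x, factor):
    # Nearest-neighbor upsampling by the given integer factor.
    return np.repeat(x, factor)

def coarse_to_fine_reconstruct(target, scales=(4, 2, 1)):
    """Illustrative progressive reconstruction: at each scale, refine
    the upsampled coarser estimate with the residual detail at that
    resolution. A learned model would predict the residual; here we
    plug in the true residual purely to show the decomposition."""
    estimate = np.zeros(len(target) // scales[0])
    prev = scales[0]
    for s in scales:
        if prev != s:
            # Bring the coarser estimate up to the current resolution.
            estimate = upsample(estimate, prev // s)
        # Each stage solves an easier subproblem: only the residual
        # between the current-scale target and the coarse estimate.
        residual = downsample(target, s) - estimate
        estimate = estimate + residual
        prev = s
    return estimate

signal = np.arange(8, dtype=float)
recon = coarse_to_fine_reconstruct(signal)
```

Because each stage adds back exactly the detail the previous scale could not represent, the final estimate matches the target; the point of the decomposition is that each per-scale residual is lower-frequency and easier to model than the full signal at once.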

Related Material


@InProceedings{Luo_2025_ICCV,
    author    = {Luo, Guan and Zhang, Jianfeng},
    title     = {MS3D: High-Quality 3D Generation via Multi-Scale Representation Modeling},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {26336-26348}
}