[pdf]
[arXiv]
[bibtex]
@InProceedings{Farazi_2025_WACV,
  author    = {Farazi, Mohammad and Wang, Yalin},
  title     = {A Recipe for Geometry-Aware 3D Mesh Transformers},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {3290-3300}
}
A Recipe for Geometry-Aware 3D Mesh Transformers
Abstract
Utilizing patch-based transformers for unstructured geometric data such as polygon meshes presents significant challenges, primarily due to the absence of a canonical ordering and variations in input size. Prior approaches to handling 3D meshes and point clouds have either relied on computationally intensive node-level tokens for large objects or resorted to resampling to standardize patch size. Moreover, these methods generally lack a geometry-aware, stable Structural Embedding (SE), often depending on simplistic absolute SEs such as 3D coordinates, which compromise the isometry invariance essential for tasks like semantic segmentation. In our study, we meticulously examine the components of a geometry-aware 3D mesh transformer, from tokenization to structural encoding, and assess the contribution of each. First, we introduce a spectral-preserving tokenization rooted in algebraic multigrid methods. Next, we detail an approach for embedding features at the patch level that accommodates patches with variable node counts. Through comparative analyses against a baseline model employing simple point-wise Multi-Layer Perceptrons (MLPs), our research highlights critical insights: 1) the importance of structural and positional embeddings facilitated by heat diffusion in general 3D mesh transformers; 2) the effectiveness of novel components such as geodesic masking and feature interaction via cross-attention in enhancing learning; and 3) the superior performance and efficiency of our proposed methods in challenging segmentation and classification tasks.
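As a rough illustration of the heat-diffusion structural embeddings the abstract mentions, the sketch below computes a heat-kernel-signature style descriptor from a graph Laplacian. This is not the paper's implementation: the toy 4-cycle Laplacian stands in for a real mesh (cotangent) Laplacian, and the function `heat_kernel_signature` and its time scales are hypothetical choices for this example.

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)**2,
    an isometry-invariant, heat-diffusion-based descriptor."""
    lam, phi = np.linalg.eigh(L)               # eigenpairs of the Laplacian
    # (n_vertices, n_eigs) @ (n_eigs, n_times) -> one embedding per vertex
    return (phi ** 2) @ np.exp(-np.outer(lam, times))

# Toy stand-in for a mesh Laplacian: a 4-cycle graph, L = D - A.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

hks = heat_kernel_signature(L, times=np.array([0.1, 1.0, 10.0]))
print(hks.shape)  # (4, 3): a 3-dimensional structural embedding per vertex
```

Because such descriptors depend only on Laplacian spectra rather than absolute 3D coordinates, they are stable under isometric deformations, which is the property the abstract argues simple coordinate-based SEs lack.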
Related Material