@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Minghao and Xie, Junyu and Laina, Iro and Vedaldi, Andrea},
    title     = {SHAP-EDITOR: Instruction-Guided Latent 3D Editing in Seconds},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {26456-26466}
}
SHAP-EDITOR: Instruction-Guided Latent 3D Editing in Seconds
Abstract
We propose a novel feed-forward 3D editing framework called Shap-Editor. Prior research on editing 3D objects has primarily concentrated on editing individual objects by leveraging off-the-shelf 2D image-editing networks, using a process called 3D distillation that transfers knowledge from the 2D network to the 3D asset. Distillation requires at least tens of minutes per asset to attain satisfactory editing results, making it impractical for interactive use. In contrast, we ask whether 3D editing can be carried out directly by a feed-forward network, eschewing test-time optimisation. In particular, we hypothesise that this process can be greatly simplified by first encoding 3D objects into a suitable latent space. We validate this hypothesis by building upon the latent space of Shap-E. We demonstrate that direct 3D editing in this space is possible and efficient by learning a feed-forward editor network that requires only approximately one second per edit. Our experiments show that Shap-Editor generalises well to both in-distribution and out-of-distribution 3D assets with different prompts, and achieves performance superior to methods that carry out test-time optimisation for each edited instance.
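The core idea of the abstract — a single forward pass through a learned editor that maps an asset's latent code plus an instruction embedding to an edited latent, with no per-asset optimisation — can be illustrated with a toy sketch. All dimensions, weights, and the one-hidden-layer residual architecture below are illustrative assumptions, not the paper's actual Shap-Editor network or the real Shap-E latent format:

```python
import numpy as np

# Hypothetical dimensions; real Shap-E latents and text embeddings differ.
LATENT_DIM = 16
TEXT_DIM = 8
HIDDEN_DIM = 32

rng = np.random.default_rng(0)

# Random weights stand in for a trained feed-forward editor network.
W1 = rng.normal(scale=0.1, size=(LATENT_DIM + TEXT_DIM, HIDDEN_DIM))
W2 = rng.normal(scale=0.1, size=(HIDDEN_DIM, LATENT_DIM))

def edit_latent(latent, instruction_emb):
    """One feed-forward pass: predict an edit residual for the latent.

    No test-time optimisation is involved; the edit is a single
    function evaluation, which is why it takes on the order of a
    second rather than tens of minutes of distillation.
    """
    x = np.concatenate([latent, instruction_emb])
    h = np.tanh(x @ W1)   # hidden layer of the stand-in MLP editor
    delta = h @ W2        # predicted change to the latent code
    return latent + delta

latent = rng.normal(size=LATENT_DIM)   # latent code of a 3D asset
instr = rng.normal(size=TEXT_DIM)      # embedding of an edit instruction
edited = edit_latent(latent, instr)
print(edited.shape)
```

The edited latent would then be decoded by the (frozen) Shap-E decoder into an edited 3D asset; the residual formulation simply reflects that an edit is a modification of the source object rather than a generation from scratch.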