Generating Non-Stationary Textures using Self-Rectification

Yang Zhou, Rongjun Xiao, Dani Lischinski, Daniel Cohen-Or, Hui Huang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 7767-7776

Abstract


This paper addresses the challenge of example-based non-stationary texture synthesis. We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools, yielding an initial rough target for the synthesis. Subsequently, our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture, while faithfully preserving the distinct visual characteristics of the reference exemplar. Our method leverages a pre-trained diffusion network and uses self-attention mechanisms to gradually align the synthesized texture with the reference, ensuring the retention of the structures in the provided target. Through experimental validation, our approach exhibits exceptional proficiency in handling non-stationary textures, demonstrating significant advancements in texture synthesis when compared to existing state-of-the-art techniques. Code is available at https://github.com/xiaorongjun000/Self-Rectification
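The mechanism the abstract describes, steering the self-attention layers of a pre-trained diffusion network so that the synthesized target gradually adopts the appearance of the reference exemplar, can be illustrated with a small sketch. The PyTorch snippet below shows one such attention swap in isolation: queries come from the target being synthesized while keys and values are injected from the reference features. The function name inject_reference_kv, the tensor shapes, and the head count are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def inject_reference_kv(q_target, k_ref, v_ref, num_heads=8):
    """Attention with queries from the target and keys/values from the reference.

    Inputs are assumed to be already-projected query/key/value token
    sequences of shape (batch, tokens, channels). Hypothetical helper,
    not the paper's actual code.
    """
    b, n, c = q_target.shape
    d = c // num_heads

    def split_heads(x):  # (B, N, C) -> (B, heads, N, d)
        return x.view(b, -1, num_heads, d).transpose(1, 2)

    q, k, v = split_heads(q_target), split_heads(k_ref), split_heads(v_ref)
    out = F.scaled_dot_product_attention(q, k, v)  # standard attention, no masking
    return out.transpose(1, 2).reshape(b, n, c)


# Toy usage: 32x32 latent feature maps flattened into token sequences.
tokens_target = torch.randn(1, 32 * 32, 320)  # features of the rough, user-edited target
tokens_ref = torch.randn(1, 32 * 32, 320)     # features of the reference exemplar
rectified = inject_reference_kv(tokens_target, tokens_ref, tokens_ref)
print(rectified.shape)  # torch.Size([1, 1024, 320])
```

In the paper's setting this swap would be applied inside the denoising loop of the diffusion model, so the target is refined gradually across timesteps rather than in a single pass.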

Related Material


[bibtex]
@InProceedings{Zhou_2024_CVPR,
    author    = {Zhou, Yang and Xiao, Rongjun and Lischinski, Dani and Cohen-Or, Daniel and Huang, Hui},
    title     = {Generating Non-Stationary Textures using Self-Rectification},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {7767-7776}
}