More and Larger Auxiliary Feature-Guided Spatial-Temporal Super-Resolution for Rendered Sequences
Abstract
Post-processing rendered sequences improves their quality and shortens the rendering phase. However, most current post-processing methods for sequences are designed for natural video, and transferring them directly to rendered sequences does not yield high-quality results. To address this problem, we propose an end-to-end spatial-temporal super-resolution network for rendered sequences, which improves rendering efficiency by performing frame interpolation (FI) and super-resolution simultaneously. In the FI task, accurately inferring results that are close to the real motion state of the target frames is the key difficulty in ensuring generation quality. For this issue, we design an auxiliary feature-guided interpolation (AFGI) module. By introducing the auxiliary features corresponding to the target frames, the AFGI module provides the network with the real motion state of the target frames. For aggregating contextual information, we propose a weighted aggregation upsampling (WAUpS) module, which aggregates information selectively based on its correlation with the current frame. The WAUpS module thereby prevents irrelevant information from degrading the super-resolution results, a problem with the direct aggregation methods used previously. At the same time, the WAUpS module combines the upsampled features with the corresponding high-resolution auxiliary features, providing the output with rich detail textures and other key information and further improving overall processing quality. Experimental results show that, compared with state-of-the-art (SOTA) methods, our method not only obtains high-quality processing results on rendered sequences but also effectively improves rendering efficiency.
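To make the correlation-based aggregation idea concrete, below is a minimal, illustrative PyTorch sketch of weighted aggregation in the spirit of the WAUpS description above: each neighbor feature map contributes to the current frame in proportion to its per-pixel correlation with it, so weakly correlated (irrelevant) content is suppressed rather than aggregated directly. All names here (WeightedAggregation, fuse, the use of cosine similarity as the correlation measure) are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedAggregation(nn.Module):
    """Aggregate neighbor-frame features into the current frame, weighting
    each contribution by its per-pixel correlation with the current frame.
    Hypothetical sketch; the paper's WAUpS module may differ in detail."""

    def __init__(self, channels: int):
        super().__init__()
        # Fuse the current features with the weighted context (2C -> C).
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, current: torch.Tensor, neighbors: list) -> torch.Tensor:
        # current, neighbors[i]: (B, C, H, W)
        aggregated = torch.zeros_like(current)
        weight_sum = torch.zeros_like(current[:, :1])
        for nb in neighbors:
            # Cosine similarity along channels -> (B, 1, H, W) weight map.
            w = F.cosine_similarity(current, nb, dim=1, eps=1e-6).unsqueeze(1)
            # Clamp so negatively correlated (irrelevant) content gets no weight.
            w = w.clamp(min=0.0)
            aggregated = aggregated + w * nb
            weight_sum = weight_sum + w
        aggregated = aggregated / weight_sum.clamp(min=1e-6)
        return self.fuse(torch.cat([current, aggregated], dim=1))

# Usage sketch: aggregate two warped neighbor frames into the current one.
agg = WeightedAggregation(channels=64)
cur = torch.randn(1, 64, 32, 32)
out = agg(cur, [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)])
```

In this sketch the clamped similarity acts as a soft gate: regions of a neighbor frame that do not match the current frame receive near-zero weight, which is one plausible way to realize the selective aggregation the abstract describes.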
Related Material
[pdf]
[bibtex]
@InProceedings{Zheng_2024_ACCV,
    author    = {Zheng, Lijie and Liang, Xiao},
    title     = {More and Larger Auxiliary Feature-Guided Spatial-Temporal Super-Resolution for Rendered Sequences},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {1986-2001}
}