Efficient Video Super-Resolution for Real-time Rendering with Decoupled G-buffer Guidance
Abstract
Latency is a key driver for real-time rendering applications, making super-resolution techniques increasingly popular for accelerating the rendering process. In contrast to existing methods that directly concatenate low-resolution frames and G-buffers as input without discrimination, we develop an asymmetric UNet-based super-resolution network with decoupled G-buffer guidance, dubbed RDG, to facilitate spatial and temporal feature exploration while minimizing performance overhead and latency. We first propose a dynamic feature modulator (DFM) that selectively encodes spatial information to capture precise structural information. We then incorporate auxiliary G-buffer information to guide the decoder toward detail-rich, temporally stable results. Specifically, we adopt a high-frequency feature booster (HFB) to adaptively transfer high-frequency information from the normal and bidirectional reflectance distribution function (BRDF) components of the G-buffer, enhancing the details of the generated results. To further improve temporal stability, we design a cross-frame temporal refiner (CTR) with depth and motion-vector constraints to aggregate the previous and current frames. Extensive experimental results show that the proposed method generates high-quality and temporally stable results for real-time rendering. The proposed RDG-s produces 1080p rendering results at 126 FPS on an RTX 3090 GPU.
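To make the decomposition above concrete, here is a minimal PyTorch-style sketch of the three modules as the abstract describes them: an encoder-side DFM, a G-buffer-guided HFB on the decoder side, and a motion-vector-driven CTR. Every class name, channel count, and the pixel-space motion-vector convention is an assumption for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicFeatureModulator(nn.Module):
    """Stand-in for the paper's DFM: a gated convolution that selectively
    encodes spatial features (internals are assumed, not from the paper)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        # Keep structure-relevant responses, suppress the rest.
        return self.conv(x) * self.gate(x)


class HighFrequencyBooster(nn.Module):
    """Stand-in for HFB: transfers high-frequency cues from the normal/BRDF
    channels of the G-buffer into the decoder features."""
    def __init__(self, ch, gbuf_ch):
        super().__init__()
        self.proj = nn.Conv2d(gbuf_ch, ch, 1)
        self.fuse = nn.Conv2d(ch * 2, ch, 3, padding=1)

    def forward(self, feat, gbuf):
        g = self.proj(gbuf)
        # Crude high-pass residual of the G-buffer features as a detail cue.
        high = g - F.avg_pool2d(g, 3, stride=1, padding=1)
        return self.fuse(torch.cat([feat, high], dim=1))


class CrossFrameTemporalRefiner(nn.Module):
    """Stand-in for CTR: warps the previous frame by screen-space motion
    vectors (assumed to be in pixels) and fuses it with the current frame,
    with depth as an extra constraint channel."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(ch * 2 + 1, ch, 3, padding=1)

    def forward(self, cur, prev, mv, depth):
        n, _, h, w = cur.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), 0).float().to(cur.device)
        coords = base.unsqueeze(0).expand(n, -1, -1, -1) + mv
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid = torch.stack((coords[:, 0] / (w - 1) * 2 - 1,
                            coords[:, 1] / (h - 1) * 2 - 1), dim=-1)
        warped = F.grid_sample(prev, grid, align_corners=True)
        return self.fuse(torch.cat([cur, warped, depth], dim=1))


# Shape check with dummy data (all shapes are illustrative only).
dfm = DynamicFeatureModulator(32)
hfb = HighFrequencyBooster(32, gbuf_ch=6)   # e.g. 3 normal + 3 BRDF channels
ctr = CrossFrameTemporalRefiner(32)
cur = hfb(dfm(torch.randn(1, 32, 64, 64)), torch.randn(1, 6, 64, 64))
out = ctr(cur, torch.randn(1, 32, 64, 64),
          torch.randn(1, 2, 64, 64), torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```

Note the split this sketch mirrors: the G-buffer never enters the encoder; it only guides the decoder (HFB) and the temporal fusion (CTR), which is the "decoupled guidance" the abstract contrasts with naive input concatenation.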
Related Material

[pdf] [supp] [bibtex]

@InProceedings{Zheng_2025_CVPR,
    author    = {Zheng, Mingjun and Sun, Long and Dong, Jiangxin and Pan, Jinshan},
    title     = {Efficient Video Super-Resolution for Real-time Rendering with Decoupled G-buffer Guidance},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {11328-11337}
}