Lens-to-Lens Bokeh Effect Transformation. NTIRE 2023 Challenge Report

Marcos V. Conde, Manuel Kolmet, Tim Seizinger, Tom E. Bishop, Radu Timofte, Xiangyu Kong, Dafeng Zhang, Jinlong Wu, Fan Wang, Juewen Peng, Zhiyu Pan, Chengxin Liu, Xianrui Luo, Huiqiang Sun, Liao Shen, Zhiguo Cao, Ke Xian, Chaowei Liu, Zigeng Chen, Xingyi Yang, Songhua Liu, Yongcheng Jing, Michael Bi Mi, Xinchao Wang, Zhihao Yang, Wenyi Lian, Siyuan Lai, Haichuan Zhang, Trung Hoang, Amirsaeed Yazdani, Vishal Monga, Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön, Yuxuan Zhao, Baoliang Chen, Yiqing Xu, JiXiang Niu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 1643-1659

Abstract


We present the new Bokeh Effect Transformation Dataset (BETD) and review the solutions proposed for this novel task at the NTIRE 2023 Bokeh Effect Transformation Challenge. Recent advances in mobile photography aim to match the visual quality of full-frame cameras. A current goal in computational photography is to optimize the Bokeh effect itself, i.e., the aesthetic quality of the blur in out-of-focus regions of an image. Photographers achieve this effect by exploiting the optical properties of the lens. The aim of this work is to design a neural network capable of converting the Bokeh effect of one lens into that of another without degrading the sharp foreground regions of the image. Given an input image and a target lens type, we render or transform the Bokeh effect according to the properties of that lens. We built the BETD using two full-frame Sony cameras and diverse lens setups. To the best of our knowledge, this is the first attempt to solve this task, and we provide the first dataset and benchmark for it. The challenge had 99 registered participants, and the submitted methods gauge the state of the art in Bokeh effect rendering and transformation.
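The task formulation above (input image + target lens → image with the target lens's Bokeh, foreground preserved) can be illustrated with a toy sketch. This is not any challenge entry or the paper's method: `toy_bokeh_transform` is a hypothetical illustrative function, a plain box blur stands in for lens-specific Bokeh rendering, and an assumed binary focus mask separates the sharp foreground (kept as-is) from the background (re-blurred to the "target lens" strength).

```python
import numpy as np

def toy_bokeh_transform(image, focus_mask, tgt_blur=3):
    """Toy stand-in for lens-to-lens Bokeh transformation.

    image:      HxWx3 float array.
    focus_mask: HxW array, 1 = in-focus foreground, 0 = background.
    tgt_blur:   odd box-kernel size standing in for target lens blur.
    """
    pad = tgt_blur // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # Box blur: average over a tgt_blur x tgt_blur neighborhood.
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(tgt_blur):
        for dx in range(tgt_blur):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= tgt_blur * tgt_blur
    # Keep the foreground sharp, replace only the out-of-focus background.
    m = focus_mask[..., None]
    return m * image + (1.0 - m) * blurred
```

A real solution replaces the box blur with a learned, lens-conditioned rendering network and estimates the focus separation itself; the sketch only shows the input/output contract of the task.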

Related Material


[bibtex]
@InProceedings{Conde_2023_CVPR, author = {Conde, Marcos V. and Kolmet, Manuel and Seizinger, Tim and Bishop, Tom E. and Timofte, Radu and Kong, Xiangyu and Zhang, Dafeng and Wu, Jinlong and Wang, Fan and Peng, Juewen and Pan, Zhiyu and Liu, Chengxin and Luo, Xianrui and Sun, Huiqiang and Shen, Liao and Cao, Zhiguo and Xian, Ke and Liu, Chaowei and Chen, Zigeng and Yang, Xingyi and Liu, Songhua and Jing, Yongcheng and Mi, Michael Bi and Wang, Xinchao and Yang, Zhihao and Lian, Wenyi and Lai, Siyuan and Zhang, Haichuan and Hoang, Trung and Yazdani, Amirsaeed and Monga, Vishal and Luo, Ziwei and Gustafsson, Fredrik K. and Zhao, Zheng and Sj\"olund, Jens and Sch\"on, Thomas B. and Zhao, Yuxuan and Chen, Baoliang and Xu, Yiqing and Niu, JiXiang}, title = {Lens-to-Lens Bokeh Effect Transformation. NTIRE 2023 Challenge Report}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops}, month = {June}, year = {2023}, pages = {1643-1659} }