EvDiG: Event-guided Direct and Global Components Separation

Xinyu Zhou, Peiqi Duan, Boyu Li, Chu Zhou, Chao Xu, Boxin Shi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 9612-9621

Abstract


Separating the direct and global components of a scene aids in shape recovery and basic material understanding. Conventional methods capture multiple frames under high-frequency illumination patterns or shadows, requiring the scene to remain stationary during image acquisition. Single-frame methods simplify the capture procedure but yield lower-quality separation results. In this paper, we leverage an event camera to facilitate the separation of direct and global components, enabling high-quality separation at video rate. Specifically, we adopt an event camera to record the rapid illumination changes caused by the shadow of a line occluder sweeping over the scene, and reconstruct coarse separation results through event accumulation. We then design a network to resolve the noise in the coarse separation results and restore color information. A real-world dataset is collected using a hybrid camera system for network training and evaluation. Experimental results show that our method achieves superior performance over state-of-the-art methods.
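For context, the sketch below illustrates the underlying shadow-based separation principle that such approaches build on: when a pixel is fully shadowed it receives only the global component, and when fully lit it receives direct plus global. This is a minimal illustration, not the paper's pipeline; the event-based variant further assumes an idealized event model with a known contrast threshold `c`, and all function and parameter names here are hypothetical.

```python
import numpy as np

def separate_from_frames(frames):
    """Frame-domain sketch of shadow-based separation (not the paper's
    event pipeline): while a line occluder's shadow sweeps the scene,
    each pixel is at some point fully shadowed (global component only)
    and at some point fully lit (direct + global).

    frames: float array of shape (T, H, W), intensities over the sweep.
    Returns per-pixel (direct, global) estimates.
    """
    i_max = frames.max(axis=0)      # fully lit: direct + global
    i_min = frames.min(axis=0)      # fully shadowed: global only
    return i_max - i_min, i_min


def separate_from_events(image_lit, event_polarity_sum, c=0.2):
    """Hypothetical event-based variant under an idealized event model:
    accumulated signed events at a pixel during the shadow transit
    approximate the log-intensity drop,
        c * sum(polarities) ~ log(I_shadowed) - log(I_lit).

    image_lit:          (H, W) frame with the pixel fully lit (D + G).
    event_polarity_sum: (H, W) signed event count per pixel over the
                        shadow onset (negative where intensity drops).
    c:                  assumed contrast threshold of the event camera.
    """
    log_ratio = c * event_polarity_sum            # ~ log(G / (D + G))
    global_comp = image_lit * np.exp(log_ratio)   # G
    direct_comp = image_lit - global_comp         # D
    return direct_comp, global_comp
```

The event-based form replaces the dense frame stack with a single lit frame plus accumulated events at the shadow boundary, which is what makes video-rate capture plausible; the paper's learned refinement stage (noise removal and color restoration) is not reproduced here.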

Related Material


[bibtex]
@InProceedings{Zhou_2024_CVPR,
    author    = {Zhou, Xinyu and Duan, Peiqi and Li, Boyu and Zhou, Chu and Xu, Chao and Shi, Boxin},
    title     = {EvDiG: Event-guided Direct and Global Components Separation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {9612-9621}
}