Multi-View Semantic Information Guidance for Light Field Image Segmentation
Semantic segmentation is one of the most important fields of computer vision. Single-image semantic segmentation, limited by the information available in a single view, performs poorly under occlusion and interference from similarly colored regions, and struggles to exploit rich scene information. In contrast, the special micro-lens array structure of a light field camera records multi-view information of the scene, which offers a new way to address this issue. In this paper, we propose a multi-view semantic information guidance network (MSIGNet) for light field semantic segmentation. It effectively utilizes semantic information from multi-view images to guide the pixel features of the center-view image. First, we extract features from each view image and derive per-view semantic probabilities. Then, all probabilities are aggregated through a self-adaptive multi-view probability fusion module. Finally, the resulting coarse fusion representation interacts with the center-view features to obtain the refined segmentation result. The proposed method shows excellent performance on both real-world and synthetic light field datasets.
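The aggregation step described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes the self-adaptive fusion reduces to softmax-normalized per-view confidence weights applied to per-view class-probability maps, and all function and parameter names (`fuse_view_probabilities`, `view_scores`) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_view_probabilities(view_logits, view_scores):
    """Aggregate per-view class probabilities into one coarse fused map.

    view_logits : (V, C, H, W) per-view class logits
    view_scores : (V,) per-view confidence scores (assumed learned elsewhere)
    returns     : (C, H, W) fused class-probability map
    """
    probs = softmax(view_logits, axis=1)          # per-view semantic probabilities
    weights = softmax(view_scores)                # self-adaptive view weights, sum to 1
    fused = np.tensordot(weights, probs, axes=1)  # weighted sum over the view axis
    return fused
```

Because the view weights sum to one and each per-view map is a valid distribution over classes, the fused map remains a valid class-probability distribution at every pixel; in the full network this coarse map would then interact with the center-view features for refinement.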