PanoPoint: Self-Supervised Feature Point Detection and Description for 360° Panoramas
We introduce PanoPoint, a joint feature point detection and description framework that addresses the nonlinear distortions and multi-view geometry problems between 360° panoramas. Our fully convolutional model operates directly on panoramas, computing pixel-level feature point locations and associated descriptors in a single forward pass, rather than requiring image preprocessing (e.g. panorama-to-cubemap projection) followed by feature detection and description. To train PanoPoint, we propose PanoMotion, which simulates viewpoint changes between panoramas and generates the corresponding warped panoramas. We further propose PanoMotion Adaptation, a multi-viewpoint adaptive annotation approach that boosts feature point detection repeatability without manual labelling. Trained on the synthetic dataset annotated by this method, PanoPoint outperforms traditional and other learned approaches, achieving state-of-the-art results in repeatability, localization accuracy, point correspondence precision and runtime, especially for panoramas with significant viewpoint and illumination changes.
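The viewpoint simulation underlying PanoMotion can be pictured as resampling an equirectangular panorama under a camera rotation: each output pixel is mapped to a ray on the unit sphere, rotated, and looked up in the source panorama. The sketch below is our own illustration of such a rotation warp, not the paper's implementation; the function name, the axis conventions, and the nearest-neighbour sampling are all assumptions.

```python
import numpy as np

def rotate_equirect(pano, R):
    """Inverse-warp an equirectangular panorama (H x W) by rotation matrix R."""
    H, W = pano.shape[:2]
    # Pixel centres -> spherical coordinates (longitude, latitude).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    lon = (u + 0.5) / W * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v + 0.5) / H * np.pi
    # Spherical coordinates -> unit ray directions.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    rays = np.stack([x, y, z], axis=-1) @ R.T  # rotate every ray
    # Rotated rays -> spherical coordinates of the source pixel.
    lon2 = np.arctan2(rays[..., 0], rays[..., 2])
    lat2 = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    # Spherical coordinates -> source pixel indices (nearest neighbour).
    u2 = ((lon2 + np.pi) / (2 * np.pi) * W).astype(int) % W
    v2 = ((np.pi / 2 - lat2) / np.pi * H).astype(int) % H
    return pano[v2, u2]
```

A pure yaw rotation reduces to a horizontal circular shift of the panorama, which gives a quick sanity check of the warp.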