How Privacy-Preserving Are Line Clouds? Recovering Scene Details From 3D Lines

Kunal Chelani, Fredrik Kahl, Torsten Sattler; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 15668-15678

Abstract


Visual localization is the problem of estimating the camera pose of a given image with respect to a known scene. Visual localization algorithms are a fundamental building block in advanced computer vision applications, including Mixed and Virtual Reality systems. Many algorithms used in practice represent the scene through a Structure-from-Motion (SfM) point cloud and use 2D-3D matches between a query image and the 3D points for camera pose estimation. As recently shown, image details can be accurately recovered from SfM point clouds by translating renderings of the sparse point clouds to images. To address the resulting potential privacy risks for user-generated content, it was recently proposed to lift point clouds to line clouds by replacing 3D points by randomly oriented 3D lines passing through these points. The resulting representation is unintelligible to humans and effectively prevents point cloud-to-image translation. This paper shows that a significant amount of information about the 3D scene geometry is preserved in these line clouds, allowing us to (approximately) recover the 3D point positions and thus to (approximately) recover image content. Our approach is based on the observation that the closest points between lines can yield a good approximation to the original 3D points. Code is available at https://github.com/kunalchelani/Line2Point.
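
The following is a minimal, self-contained sketch of the geometric observation stated in the abstract: lifting 3D points to randomly oriented lines and then approximating each original point from the densest cluster of closest points between that line and the other lines. It is an illustrative simplification, not the authors' actual algorithm; all function names, the toy sphere scene, and the bandwidth parameter are assumptions made for this sketch.

import numpy as np


def lift_to_line_cloud(points, rng):
    """Replace each 3D point by a randomly oriented 3D line through it.

    Each returned origin is a random point on the line (not the original
    3D point), mimicking how a line cloud hides the point positions.
    """
    dirs = rng.normal(size=points.shape)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    offsets = rng.uniform(-10.0, 10.0, size=(len(points), 1))
    return points + offsets * dirs, dirs


def closest_param(p1, d1, p2, d2, eps=1e-12):
    """Parameter t such that p1 + t * d1 is the point on line 1 closest to line 2."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < eps:  # (nearly) parallel lines carry no useful proximity cue
        return None
    return (b * e - c * d) / denom


def recover_points(origins, dirs, bandwidth=0.1):
    """Crude recovery: for each line, keep the densest closest point to the other lines.

    `bandwidth` is a scene-scale-dependent window for finding the 1D density peak.
    """
    n = len(origins)
    estimates = np.array(origins, dtype=float, copy=True)
    for i in range(n):
        ts = [closest_param(origins[i], dirs[i], origins[j], dirs[j])
              for j in range(n) if j != i]
        ts = np.array([t for t in ts if t is not None])
        if ts.size == 0:
            continue
        # simple 1D mode estimate: the parameter with most neighbours within the window
        counts = (np.abs(ts[:, None] - ts[None, :]) < bandwidth).sum(axis=1)
        estimates[i] = origins[i] + ts[counts.argmax()] * dirs[i]
    return estimates


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy "scene": points on a unit sphere, a rough stand-in for an SfM surface
    points = rng.normal(size=(300, 3))
    points /= np.linalg.norm(points, axis=1, keepdims=True)
    origins, dirs = lift_to_line_cloud(points, rng)
    estimates = recover_points(origins, dirs)
    print("median recovery error:", np.median(np.linalg.norm(estimates - points, axis=1)))

This sketch is only meant to show why geometric information survives the lifting; see the paper and the linked code repository for the actual recovery method and its evaluation.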

Related Material


[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Chelani_2021_CVPR,
    author    = {Chelani, Kunal and Kahl, Fredrik and Sattler, Torsten},
    title     = {How Privacy-Preserving Are Line Clouds? Recovering Scene Details From 3D Lines},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {15668-15678}
}