EYE3: Turn Anything into Naked-eye 3D
[pdf] [supp]
[bibtex]@InProceedings{Song_2025_ICCV, author = {Song, Yingde and Yang, Zongyuan and Liu, Baolin and Xiong, Yongping and Chen, Sai and Yi, Lan and Zhang, Zhaohe and Yu, Xunbo}, title = {EYE3: Turn Anything into Naked-eye 3D}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2025}, pages = {27862-27871} }
Abstract
Light Field Displays (LFDs), despite significant advances in hardware supporting larger fields of view and more viewpoints, still face a critical challenge: limited content availability. Producing autostereoscopic 3D content for these displays requires refracting multi-perspective images into different spatial angles under strict spatial-consistency constraints across views, which is technically challenging for non-experts. Existing image/video generation models and radiance-field-based methods cannot directly generate, from a single 2D source, display content that meets the strict requirements of light field display hardware. We introduce EYE³, the first generative framework specifically designed for 3D light field displays, capable of converting any 2D image, video, or text into high-quality content tailored for these screens. The framework employs a point-based representation rendered through off-axis perspective projection, ensuring precise light refraction and alignment with the hardware's optical requirements. To maintain consistent 3D coherence across multiple viewpoints, we fine-tune a video diffusion model to fill occluded regions based on the rendered masks. Experimental results demonstrate that our approach outperforms state-of-the-art methods and significantly simplifies content creation for LFDs. With broad potential in industries such as entertainment, advertising, and immersive display technologies, our method offers a robust solution to content scarcity and greatly enhances the visual experience on LFDs.
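The abstract mentions rendering the point-based representation through an off-axis perspective, so that all viewpoints converge on the same physical screen plane, but gives no formulation. Below is a minimal sketch of one standard way to build such a matrix (an asymmetric frustum in the OpenGL glFrustum convention); it is an illustration under that assumption, not the paper's method, and all names and parameter values (off_axis_projection, eye_x, screen_dist, etc.) are hypothetical.

import numpy as np

def off_axis_projection(eye_x, screen_half_w, screen_half_h,
                        near, far, screen_dist):
    """Asymmetric-frustum projection for an eye shifted horizontally
    by eye_x relative to the screen center (glFrustum convention)."""
    # Project the fixed screen rectangle onto the near plane; the
    # bounds shift opposite to the eye offset so every view sees the
    # same physical screen.
    s = near / screen_dist
    left   = (-screen_half_w - eye_x) * s
    right  = ( screen_half_w - eye_x) * s
    bottom = -screen_half_h * s
    top    =  screen_half_h * s
    # Standard glFrustum matrix for these (possibly asymmetric) bounds.
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# One projection per display viewpoint, e.g. 8 horizontally spaced views:
views = [off_axis_projection(x, 0.3, 0.2, 0.1, 100.0, 0.6)
         for x in np.linspace(-0.1, 0.1, 8)]

Because only the frustum bounds change between views while the screen plane stays fixed, points on that plane land at identical screen positions in every view, which is the alignment property an LFD's lenticular or barrier optics require.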