Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels for efficient and fast rendering (Plenoxels, InstantNGP). To leverage machine learning and promote the adoption of SRFs as a 3D representation, we present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of ~17 million images rendered from nearly 40,000 shapes at high resolution (400 × 400 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis and includes more than one million 3D-optimized radiance fields with multiple voxel resolutions.
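Concretely, each optimized radiance field here is a sparse voxel grid; assuming a Plenoxels-style parameterization, every occupied voxel stores one density value plus 27 spherical-harmonic (SH) color coefficients (degree-2 SH, 9 per RGB channel), and novel views are rendered by standard alpha compositing of samples along each ray. Below is a minimal PyTorch sketch of that compositing step; the function name and tensor shapes are illustrative, not part of the released dataset tooling.

```python
import torch

def composite_ray(sigmas, rgbs, deltas):
    """Alpha-composite S samples along one ray (standard volume rendering).

    sigmas: (S,) densities; rgbs: (S, 3) colors already decoded from the
    SH coefficients at the ray direction; deltas: (S,) step sizes.
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)                 # per-sample opacity
    ones = torch.ones_like(alphas[:1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alphas + 1e-10]), 0)[:-1]  # T_i
    weights = trans * alphas                                   # compositing weights
    return (weights[:, None] * rgbs).sum(0)                    # final pixel color
```

With degree-2 SH, the per-sample rgbs would be obtained by evaluating the nine SH basis functions at the ray direction against each voxel's interpolated coefficients.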
Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only a few views, leveraging the densely sampled SPARF dataset and 3D sparse convolutions. SuRFNet employs partial SRFs constructed from one or a few images, together with a specialized SRF loss, to learn to generate high-quality sparse voxel radiance fields that can be rendered from novel views. Our approach achieves state-of-the-art results on unconstrained novel view synthesis from few views on ShapeNet compared to recent baselines.
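As a rough illustration of the SuRFNet idea (a sketch, not the authors' released implementation), a 3D sparse-convolutional network can map the partial sparse voxel radiance field built from few views to refined SRF parameters, supervised against the dataset's pre-optimized fields. The MinkowskiEngine backend, layer widths, the 28-channel voxel features, and the plain L1 parameter loss below are all assumptions for the sketch; growing voxels that are missing from the partial SRF would additionally require generative (transposed) sparse convolutions.

```python
import torch
import MinkowskiEngine as ME

class SuRFNetSketch(torch.nn.Module):
    """Toy 3D sparse-conv network: partial SRF features in, refined SRF out."""

    def __init__(self, channels=28, hidden=64, D=3):
        super().__init__()
        self.net = torch.nn.Sequential(
            ME.MinkowskiConvolution(channels, hidden, kernel_size=3, dimension=D),
            ME.MinkowskiReLU(),
            ME.MinkowskiConvolution(hidden, hidden, kernel_size=3, dimension=D),
            ME.MinkowskiReLU(),
            # Stride-1 sparse convs refine features on the given voxels only;
            # generating new voxels would need generative transposed convs.
            ME.MinkowskiConvolution(hidden, channels, kernel_size=3, dimension=D),
        )

    def forward(self, x):
        return self.net(x)

# Partial SRF from few views: unique occupied voxels in a 128^3 grid,
# each carrying 28 values (1 density + 27 SH coefficients, Plenoxels-style).
xyz = torch.unique(torch.randint(0, 128, (1200, 3)), dim=0).int()
n = xyz.shape[0]
coords = torch.cat([torch.zeros(n, 1, dtype=torch.int32), xyz], dim=1)  # batch idx
partial = ME.SparseTensor(features=torch.randn(n, 28), coordinates=coords)

pred = SuRFNetSketch()(partial)
target = torch.randn(n, 28)  # stands in for a pre-optimized SRF's features
loss = torch.nn.functional.l1_loss(pred.F, target)  # toy SRF-parameter loss
```

In the paper's setting, the supervision targets would come from SPARF's per-shape optimized radiance fields rather than the random stand-in tensor used here.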