PeanutNeRF: 3D Radiance Field for Peanuts
Accurate phenotypic analysis can help plant breeders efficiently identify and analyze suitable plant traits to enhance crop yield. While 2D images from RGB cameras are easily accessible, their trait estimation performance is limited by occlusion and the absence of depth information. In contrast, 3D data from LiDAR sensors are noisy and limited in their ability to capture very thin plant parts such as peanut pegs. To combine the merits of 2D and 3D analysis, we captured 2D images of peanut plants, including their thin parts, from multiple viewpoints and performed deep learning-based 3D reconstruction on these images to obtain 3D point clouds of the scene. We optimized neural radiance fields for an implicit 3D representation of the plants and queried the trained fields to reconstruct point clouds for both a 360-degree view and a frontal view of each plant. Using the frontal-view reconstruction and the corresponding 2D images, we applied Frustum PVCNN for 3D detection of peanut pods. We demonstrated the effectiveness of PeanutNeRF on peanut plants with and without foliage: the reconstructions showed negligible noise and a Chamfer distance of less than 0.0004 from a manually cleaned version. Pod detection achieved a precision of around 0.7 at an IoU threshold of 0.5 on the validation set. This method can assist accurate phenotypic studies of peanuts and other important crops.
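The Chamfer distance reported above quantifies how closely the reconstructed point cloud matches the manually cleaned reference. As a point of reference, a minimal sketch of the symmetric Chamfer distance is shown below; the exact variant (squared vs. Euclidean distances, mean vs. sum) used in the paper is an assumption here, and a brute-force pairwise computation is used for clarity rather than the KD-tree lookups typical for large clouds:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3).

    Computes the mean squared nearest-neighbor distance from a to b,
    plus the same from b to a. Other common variants use plain
    Euclidean distances or sums instead of means.
    """
    # Pairwise squared distances between all points, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Nearest neighbor in each direction, averaged.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Tiny example: identical clouds yield a distance of exactly 0.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(chamfer_distance(pts, pts))  # 0.0
```

For real reconstructions with tens of thousands of points, the O(N·M) distance matrix above becomes impractical, and a spatial index (e.g. a KD-tree) would replace the brute-force nearest-neighbor search.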