EDEN: Multimodal Synthetic Dataset of Enclosed GarDEN Scenes

Hoang-An Le, Thomas Mensink, Partha Das, Sezer Karaoglu, Theo Gevers; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1579-1589

Abstract


Multimodal large-scale datasets for outdoor scenes are mostly designed for urban driving problems. Those scenes are highly structured and semantically different from nature-centered scenes such as gardens or parks. To promote machine learning methods for nature-oriented applications, such as agriculture and gardening, we propose the multimodal synthetic dataset for Enclosed garDEN scenes (EDEN). The dataset features more than 300K images captured from more than 100 garden models. Each image is annotated with various low/high-level vision modalities, including semantic segmentation, depth, surface normals, intrinsic colors, and optical flow. Experimental results with state-of-the-art methods for semantic segmentation and monocular depth prediction, two important tasks in computer vision, show the positive impact of pre-training deep networks on our dataset for unstructured natural scenes. The dataset and related materials will be available at https://lhoangan.github.io/eden.
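Since the dataset pairs each image with several per-pixel annotations for pre-training, a minimal loader sketch is given below. The directory layout (rgb/, semantic/, depth/, normals/ subfolders with matching file stems), the class name GardenSceneDataset, and the file formats are assumptions made for illustration only, not the dataset's actual structure; consult the project page at https://lhoangan.github.io/eden for the real organization.

```python
# Minimal sketch of a multimodal loader for a dataset like the one described
# above. Directory layout, file names, and formats are ASSUMED for illustration.
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class GardenSceneDataset(Dataset):
    """Pairs each RGB frame with its per-pixel annotations (hypothetical layout)."""

    def __init__(self, root: str, modalities=("semantic", "depth", "normals")):
        self.root = Path(root)
        self.modalities = modalities
        # Assumed layout: <root>/rgb/*.png, with each modality stored under
        # <root>/<modality>/ using the same file name as the RGB frame.
        self.frames = sorted((self.root / "rgb").glob("*.png"))

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        rgb_path = self.frames[idx]
        sample = {"rgb": np.asarray(Image.open(rgb_path).convert("RGB"))}
        for modality in self.modalities:
            # Assumed: annotations are readable as images with matching stems.
            mod_path = self.root / modality / rgb_path.name
            sample[modality] = np.asarray(Image.open(mod_path))
        return sample
```

Wrapped in a standard torch.utils.data.DataLoader, such a loader could feed a semantic segmentation or monocular depth network for the pre-training experiments reported in the paper.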

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Le_2021_WACV,
    author    = {Le, Hoang-An and Mensink, Thomas and Das, Partha and Karaoglu, Sezer and Gevers, Theo},
    title     = {EDEN: Multimodal Synthetic Dataset of Enclosed GarDEN Scenes},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1579-1589}
}