
Stanford Background Dataset

Tags: cities and urban areas, earth and nature, computer vision, deep learning, image

6

Sold: 0
17.06MB

Data ID: D17222530137121639

Published: 2024/07/29

The following data verification report was provided at the seller's discretion:

Data Description

Content

The Stanford Background Dataset was introduced in Gould et al. (ICCV 2009) for evaluating methods for geometric and semantic scene understanding. The dataset contains 715 images chosen from public datasets: LabelMe, MSRC, PASCAL VOC, and Geometric Context. The selection criteria were that each image show an outdoor scene, be approximately 320-by-240 pixels, contain at least one foreground object, and have its horizon position within the image (though the horizon need not be visible). Semantic and geometric labels were obtained using Amazon's Mechanical Turk (AMT).
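As a rough illustration of how one might read an image and its per-pixel labels, here is a minimal sketch in Python. It assumes the common "iccv09Data" layout of the original Stanford release (an images/ folder of JPEGs and a labels/ folder of text files such as <name>.regions.txt holding space-separated integer class ids, with -1 for unlabeled pixels); the directory name and the example file name are assumptions and may differ in this particular distribution.

```python
# Sketch: load one image/label pair from an assumed iccv09Data-style layout.
from pathlib import Path

import numpy as np
from PIL import Image

DATA_ROOT = Path("iccv09Data")  # hypothetical extraction directory


def load_example(name: str):
    """Load an image and its semantic label map by base file name."""
    image = np.asarray(Image.open(DATA_ROOT / "images" / f"{name}.jpg"))
    # Each row of the .regions.txt file corresponds to one row of pixels;
    # entries are integer class ids, with -1 marking unlabeled pixels.
    labels = np.loadtxt(DATA_ROOT / "labels" / f"{name}.regions.txt", dtype=np.int64)
    assert labels.shape == image.shape[:2], "label map should match image size"
    return image, labels


if __name__ == "__main__":
    img, lab = load_example("0000047")  # example base name, purely illustrative
    print(img.shape, lab.shape, np.unique(lab))
```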

Acknowledgements

The dataset is derived from Stanford DAGS Lab's Stanford Background Dataset from their Scene Understanding Datasets page. If you use this dataset in your work, you should reference:

S. Gould, R. Fulton, and D. Koller. Decomposing a Scene into Geometric and Semantically Consistent Regions. In Proceedings of the International Conference on Computer Vision (ICCV), 2009.

Inspiration

Rapid advances in image understanding using computer vision techniques have produced many state-of-the-art deep learning models across various benchmark datasets for scene understanding. However, most of these datasets are large, and training on them can take several days. Can we train sufficiently accurate scene understanding models using less data? How well do SOTA scene understanding models perform when trained under data constraints?
