SHREC'14 Track: Large Scale Comprehensive 3D Shape Retrieval

Metadata Updated: September 30, 2025

Objective: The objective of this track is to evaluate the performance of 3D shape retrieval approaches on a large-scale comprehensive 3D shape database that contains different types of models, such as generic, articulated, CAD and architecture models.

Introduction: With the increasing number of 3D models created every day and stored in databases, the development of effective and scalable 3D search algorithms has become an important research area. In this contest, the task is to retrieve 3D models similar to a complete 3D model query from a new, integrated, large-scale comprehensive 3D shape benchmark that includes various types of models. Because it integrates the most important existing benchmarks, the newly created benchmark is the most exhaustive to date in terms of both the number of semantic query categories covered and the variation of model types. The shape retrieval contest allows researchers to evaluate the results of different 3D shape retrieval approaches when applied to a large-scale comprehensive 3D database.

The benchmark is motivated by a recent large collection of human sketches built by Eitz et al. To explore how humans draw sketches and how humans recognize them, they collected 20,000 human-drawn sketches, categorized into 250 classes, each with 80 sketches. This sketch dataset is exhaustive in terms of the number of object categories. We therefore believe that a 3D model retrieval benchmark based on their object categorization is more comprehensive and appropriate than currently available 3D retrieval benchmarks for objectively and accurately evaluating the practical performance of a comprehensive 3D model retrieval algorithm as it would be implemented and used in the real world.

Considering this, we built the SHREC'14 Large Scale Comprehensive Track Benchmark (SHREC14LSGTB) by collecting relevant models from the major previously proposed 3D object retrieval benchmarks. Our target was to find models for as many of the 250 classes as possible, and to find as many models as possible for each class. These previous benchmarks were compiled with different goals in mind and, to date, had not been considered in their sum. Our work is the first to integrate them into a new, larger benchmark corpus for comprehensive 3D shape retrieval.

Dataset: The SHREC'14 Large Scale Comprehensive Retrieval Track Benchmark contains 8,987 models, categorized into 171 classes. We adopted a voting scheme to classify the models: each classification receives at least two votes; if the two votes agree with each other, the classification is confirmed, otherwise a third vote is cast to finalize it. All models are categorized according to the classes defined by Eitz et al., based on visual similarity.
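
A minimal sketch of this two-vote-plus-tie-break procedure (the function name, the vote representation, and the majority tie-break below are hypothetical illustrations, not the organizers' actual tooling):

```python
from collections import Counter

def finalize_classification(vote_a, vote_b, cast_third_vote):
    """Confirm a model's class from two votes, using a third vote on disagreement."""
    if vote_a == vote_b:
        return vote_a                              # two agreeing votes confirm the class
    vote_c = cast_third_vote()                     # disagreement: collect a deciding third vote
    # Assumed tie-break: the label with the most votes among the three wins
    # (if all three differ, the first voter's label is returned).
    return Counter([vote_a, vote_b, vote_c]).most_common(1)[0][0]
```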

Evaluation Method: To comprehensively evaluate the retrieval algorithms, we employ seven performance metrics commonly adopted in 3D model retrieval.
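
The exact set of measures is specified in [2]; as a rough sketch of how such per-query scores are commonly computed in SHREC-style evaluations (the function name `retrieval_metrics`, its arguments, and the particular choice of Nearest Neighbor, First Tier, Second Tier, E-Measure, and normalized DCG below are illustrative assumptions, not taken from this page):

```python
import math

def retrieval_metrics(ranked_labels, query_label, class_size):
    """Score one ranked retrieval list for a single query (query itself excluded).

    ranked_labels : class labels of the retrieved models, best match first
    query_label   : class label of the query model
    class_size    : number of models in the query's class, including the query
    """
    rel = [1 if lbl == query_label else 0 for lbl in ranked_labels]
    c = class_size - 1                      # relevant models available once the query is excluded

    nn = rel[0]                             # Nearest Neighbor: is the top match relevant?
    ft = sum(rel[:c]) / c                   # First Tier: recall within the top c results
    st = sum(rel[:2 * c]) / c               # Second Tier: recall within the top 2c results

    # E-Measure over the top 32 results (the cutoff used in the Princeton Shape Benchmark):
    # E = 2 / (1/P + 1/R) with precision P and recall R at that cutoff.
    k = min(32, len(rel))
    hits = sum(rel[:k])
    e = 2.0 * hits / (k + c) if hits else 0.0

    # Normalized Discounted Cumulative Gain: gains discounted by log2 of the rank.
    dcg = rel[0] + sum(g / math.log2(pos) for pos, g in enumerate(rel[1:], start=2))
    ideal = 1 + sum(1 / math.log2(pos) for pos in range(2, c + 1))
    ndcg = dcg / ideal

    return {"NN": nn, "FT": ft, "ST": st, "E": e, "DCG": ndcg}
```

Per-query scores of this kind are typically averaged over all queries, and the same ranked lists can be used to plot precision-recall curves.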

Please cite the papers:

[1] Bo Li, Yijuan Lu, Chunyuan Li, Afzal Godil, Tobias Schreck, Masaki Aono, Martin Burtscher, Qiang Chen, Nihad Karim Chowdhury, Bin Fang, Hongbo Fu, Takahiko Furuya, Haisheng Li, Jianzhuang Liu, Henry Johan, Ryuichi Kosaka, Hitoshi Koyanagi, Ryutarou Ohbuchi, Atsushi Tatsuma, Yajuan Wan, Chaoli Zhang, Changqing Zou. A Comparison of 3D Shape Retrieval Methods Based on a Large-scale Benchmark Supporting Multimodal Queries. Computer Vision and Image Understanding, November 4, 2014.

[2] Bo Li, Yijuan Lu, Chunyuan Li, Afzal Godil, Tobias Schreck, Masaki Aono, Qiang Chen, Nihad Karim Chowdhury, Bin Fang, Takahiko Furuya, Henry Johan, Ryuichi Kosaka, Hitoshi Koyanagi, Ryutarou Ohbuchi, Atsushi Tatsuma. SHREC'14 Track: Large Scale Comprehensive 3D Shape Retrieval. Eurographics Workshop on 3D Object Retrieval 2014 (3DOR 2014): 131-140, 2014.

Access & Use Information

Public: This dataset is intended for public access and use. License: See https://www.nist.gov/open/license for license information.

Downloads & Resources

References

https://www.nist.gov/publications/comparison-3d-shape-retrieval-methods-based-large-scale-benchmark-supporting-multimodal
http://dx.doi.org/10.1016/j.cviu.2014.10.006

Dates

Metadata Created Date November 12, 2020
Metadata Updated Date September 30, 2025

Metadata Source

Harvested from Commerce Non Spatial Data.json Harvest Source

Additional Metadata

Resource Type Dataset
Metadata Created Date November 12, 2020
Metadata Updated Date September 30, 2025
Publisher National Institute of Standards and Technology
Maintainer
Identifier ark:/88434/mds2-2219
Data First Published 2020-04-14
Language en
Data Last Modified 2014-01-01 00:00:00
Category Information Technology:Data and informatics, Mathematics and Statistics:Image and signal processing
Public Access Level public
Bureau Code 006:55
Metadata Context https://project-open-data.cio.gov/v1.1/schema/catalog.jsonld
Schema Version https://project-open-data.cio.gov/v1.1/schema
Catalog Describedby https://project-open-data.cio.gov/v1.1/schema/catalog.json
Harvest Object Id b81fc74a-6c85-4c2b-830d-6b88ca24cef9
Harvest Source Id bce99b55-29c1-47be-b214-b8e71e9180b1
Harvest Source Title Commerce Non Spatial Data.json Harvest Source
Homepage URL https://data.nist.gov/od/id/mds2-2219
License https://www.nist.gov/open/license
Program Code 006:045
Related Documents https://www.nist.gov/publications/comparison-3d-shape-retrieval-methods-based-large-scale-benchmark-supporting-multimodal, http://dx.doi.org/10.1016/j.cviu.2014.10.006
Source Datajson Identifier True
Source Hash baebcf230efc6ec64e216c56682705948685d6b0f5da74ee5f74ce729aebd825
Source Schema Version 1.1
