Unsupervised selective rank fusion for image retrieval tasks

Bibliographic Details
Main Author: Valem, Lucas Pascotti [UNESP]
Other Authors: Pedronette, Daniel Carlos Guimarães [UNESP]
Publication Date: 2020-02-15
Format: Article
Language: English
Published in: Neurocomputing, v. 377, p. 182-199
ISSN: 0925-2312 (print), 1872-8286 (online)
Keywords: Content-based image retrieval; Correlation measure; Effectiveness estimation; Rank aggregation; Unsupervised late fusion
Funding: FAPESP (grants 2017/02091-4, 2017/25908-6, 2018/15597-6); CNPq (grant 308194/2017-9)
Rights: Open access
Source: Repositório Institucional da UNESP
DOI: http://dx.doi.org/10.1016/j.neucom.2019.09.065
Handle: http://hdl.handle.net/11449/199603
Summary: Many visual features have been developed for content-based image retrieval in recent decades, including global, local, and deep learning-based approaches. However, despite major advances in feature development and mid-level representations, a single visual descriptor is often insufficient to achieve effective retrieval results in many scenarios. Largely because of the diverse aspects involved in human visual perception, combining different features has become a relevant trend in image retrieval. An intrinsic difficulty lies in selecting which features to combine, a task often supported by supervised learning approaches. In the absence of labeled data, therefore, selecting features in an unsupervised way is a challenging yet essential task. In this paper, an unsupervised framework is proposed to select and fuse visual features in order to improve the effectiveness of image retrieval tasks. The framework estimates the effectiveness of and the correlation among features through a rank-based analysis and uses a list of ranker pairs to determine the selected feature combinations. Highly effective retrieval results were achieved in a comprehensive experimental evaluation conducted on five public datasets, involving 41 different features and comparisons with other methods. Relative gains of up to +55% were obtained over the most effective isolated feature.
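To make the pipeline described in the summary concrete, the sketch below illustrates the general idea of unsupervised selective rank fusion: estimate each ranker's effectiveness without labels, measure how correlated (redundant) pairs of rankers are, select the most promising pairs, and fuse only the selected rankers. It is only a minimal illustration under assumed choices; the reciprocal-neighbor effectiveness estimate, the Jaccard-based correlation, the fixed thresholds, and the reciprocal-rank fusion step are stand-ins for clarity and are not the specific measures defined in the paper.

# Illustrative sketch of unsupervised selective rank fusion (not the published method).
from itertools import combinations
from collections import defaultdict

def effectiveness_score(ranked_lists, k=10):
    """Unsupervised effectiveness estimate for one feature: average density of
    reciprocal top-k neighbors over all queries (assumption: better rankers
    place mutually close items near the top)."""
    score = 0.0
    for q, ranking in ranked_lists.items():
        reciprocal = sum(1 for item in ranking[:k]
                         if q in ranked_lists.get(item, [])[:k])
        score += reciprocal / k
    return score / len(ranked_lists)

def correlation(lists_a, lists_b, k=10):
    """Rank correlation between two features: mean Jaccard overlap of their
    top-k lists across queries."""
    overlaps = []
    for q in lists_a:
        a, b = set(lists_a[q][:k]), set(lists_b[q][:k])
        overlaps.append(len(a & b) / len(a | b))
    return sum(overlaps) / len(overlaps)

def select_pairs(features, k=10, corr_max=0.6, n_pairs=3):
    """Rank all feature pairs by estimated joint effectiveness, discarding
    pairs whose rankings are too correlated (redundant)."""
    eff = {name: effectiveness_score(rl, k) for name, rl in features.items()}
    candidates = []
    for a, b in combinations(features, 2):
        if correlation(features[a], features[b], k) <= corr_max:
            candidates.append(((a, b), eff[a] + eff[b]))
    candidates.sort(key=lambda item: item[1], reverse=True)
    return [pair for pair, _ in candidates[:n_pairs]]

def fuse(features, selected_pairs, c=60):
    """Late fusion of the rankers appearing in the selected pairs, here via
    reciprocal-rank aggregation with constant c."""
    rankers = {name for pair in selected_pairs for name in pair}
    fused = {}
    for q in next(iter(features.values())):
        scores = defaultdict(float)
        for name in rankers:
            for pos, item in enumerate(features[name][q]):
                scores[item] += 1.0 / (c + pos + 1)
        fused[q] = sorted(scores, key=scores.get, reverse=True)
    return fused

In this sketch, features maps each descriptor name to a dictionary of per-query rankings (query id to ordered list of retrieved item ids). The pair-based selection mirrors the paper's use of a list of ranker pairs; the concrete thresholds, the value of c, and the aggregation rule are illustrative defaults only.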