Using context in image representation for scene classification

Bibliographic details
Year of defense: 2014
Main author: Gazolli, Kelly Assis de Souza
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Federal do Espírito Santo
Brazil
Doutorado em Engenharia Elétrica
Centro Tecnológico
UFES
Programa de Pós-Graduação em Engenharia Elétrica
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://repositorio.ufes.br/handle/10/1626
Abstract: Scene classification is a very popular topic in the field of computer vision, with many applications such as content-based image organization and retrieval, and robot navigation. However, scene classification is quite a challenging task, due to the occurrence of occlusion, shadows and reflections, illumination changes, and scale variability. Among the approaches to scene classification are those that use non-parametric transforms and those that improve classification results by using contextual information. Accordingly, this work proposes two image descriptors that associate contextual information, drawn from neighboring regions, with a non-parametric transform. The aim is an approach that does not excessively increase the feature-vector dimension and that does not rely on the bag-of-features method. In this way, the proposals decrease computational cost and eliminate parameter dependence, which makes these descriptors usable in applications by non-experts in the pattern recognition field. The CMCT and ECMCT descriptors are presented and their performance is evaluated on four public datasets. Five variations of these descriptors are also proposed (GistCMCT, GECMCT, GistCMCT-SM, ECMCT-SM, and GECMCT-SM), obtained through their association with other approaches. The results achieved on four public datasets show that the proposed image representations are competitive and lead to an increase in classification rates when compared to other descriptors.
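For context on the kind of non-parametric transform the abstract refers to: descriptors such as these typically build on the census-transform family, where each pixel is encoded by comparing its 3x3 neighborhood against a reference value (the patch mean, in the Modified Census Transform) rather than by raw intensities, making the code invariant to monotonic illumination changes. The sketch below is illustrative only; the function name is hypothetical, and the thesis' contextual CMCT/ECMCT extensions are not reproduced here.

```python
import numpy as np

def modified_census_transform(img):
    """Minimal MCT sketch: each pixel gets a 9-bit code, one bit per
    3x3-patch pixel, set when that pixel exceeds the patch mean.
    A histogram of these codes would then serve as an image descriptor."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode='edge')
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    # Mean of the 3x3 patch centered on each pixel
    patch_mean = sum(padded[1 + a:1 + a + h, 1 + b:1 + b + w]
                     for a, b in offsets) / 9.0
    code = np.zeros((h, w), dtype=np.uint16)
    for bit, (di, dj) in enumerate(offsets):
        neighbour = padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
        # Set this bit wherever the neighbour is brighter than the patch mean
        code |= (neighbour > patch_mean).astype(np.uint16) << bit
    return code  # values in [0, 511]
```

Because only the ordering of intensities matters, no thresholds or other tuning parameters are involved, which is the property the abstract highlights when it notes the elimination of parameter dependence.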