Scene classification using randomness analysis by approximation of Kolmogorov complexity

Bibliographic details
Year of defense: 2020
Main author: Feitosa, Rafael Divino Ferreira
Advisor: Soares, Anderson da Silva
Defense committee: Soares, Anderson da Silva; Delbem, Alexandre Cláudio Botazzo; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Laureano, Gustavo Teodoro; Costa, Ronaldo Martins da
Document type: Doctoral thesis
Access type: Open access
Language: Portuguese
Defense institution: Universidade Federal de Goiás
Graduate program: Programa de Pós-graduação em Ciência da Computação (INF)
Department: Instituto de Informática - INF (RG)
Country: Brazil
Access link: http://repositorio.bc.ufg.br/tede/handle/tede/10638
Abstract: In many pattern recognition problems, the discriminant features are unknown and/or the class boundaries are not well defined. Several studies have used data compression to discover knowledge without feature extraction and selection. The basic idea is that two distinct objects can be grouped as similar if the information content of one explains, to a significant degree, the information content of the other. However, compression-based techniques are not efficient for images, as they disregard the semantics present in the spatial correlation of two-dimensional data. A classifier that estimates the visual complexity of scenes, called Pattern Recognition by Randomness (PRR), is proposed. The method operates through data transformations that expand the most discriminating features and suppress details. The main contribution of the work is the use of randomness as a discrimination measure. The approximation between scenes and trained models, based on representational distortion, promotes a lossy compression process. This loss is associated with irrelevant details when the scene is reconstructed with the representation of its true class, or with information degradation when it is reconstructed with divergent representations. The more information is preserved, the greater the randomness of the reconstruction. From a mathematical point of view, the method is explained by two main measures in the U-dimensional plane: intersection and dispersion. The results yielded an accuracy of 0.6967 for a 12-class problem and 0.9286 for a 7-class problem. Compared with k-NN and a data mining toolkit, the proposed classifier was superior. The method is capable of generating efficient models from few training samples, and it is invariant to vertical and horizontal reflections and resistant to some geometric transformations and image processing operations.
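
The compression-based similarity idea that the abstract takes as its starting point is commonly realized with the Normalized Compression Distance (NCD). The Python sketch below is not the PRR classifier described in the thesis; it is a minimal illustration, assuming a general-purpose compressor (zlib) and hypothetical helper names (c, ncd, classify), of how "the information content of one object explaining the other" can be turned into a nearest-model classification rule.

import zlib

def c(data: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: small when the information content of
    # one object largely "explains" that of the other.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, models: dict) -> str:
    # Assign the label whose representative byte sequence is closest in NCD.
    return min(models, key=lambda label: ncd(sample, models[label]))

# Toy usage: each model would, in practice, be a byte serialization of
# representative training data for that class (illustrative values only).
models = {
    "forest": b"tree leaf tree tree leaf " * 20,
    "urban": b"car road building car road " * 20,
}
print(classify(b"leaf tree leaf tree " * 10, models))  # expected: forest, since the shared vocabulary compresses well jointly

As the abstract points out, this purely compression-based route ignores the spatial correlation of two-dimensional data, which is precisely what motivates the randomness-based PRR approach proposed in the thesis.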