Classificadores de margem larga baseados em redes neurais de camada oculta única (Large-margin classifiers based on single-hidden-layer neural networks)

Bibliographic details
Year of defense: 2023
Main author: Vítor Gabriel Reis Caitité
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's dissertation
Access type: Open access
Language: Portuguese (por)
Degree-granting institution: Universidade Federal de Minas Gerais (UFMG)
Department: ENG - Departamento de Engenharia Eletrônica
Graduate program: Programa de Pós-Graduação em Engenharia Elétrica
Country: Brasil
Keywords in Portuguese: Not informed by the institution
Access link: http://hdl.handle.net/1843/61980
Abstract: This work explored the relevance of large-margin classifiers in the machine learning field. Relevant characteristics of these classifiers were examined, such as their generalization capacity, robustness to noisy data, interpretability, and resistance to overfitting. Three methods based on neural networks with a single hidden layer were proposed, each seeking to obtain a large margin: primal RP-IMA, IM-RBFNN, and dual RP-IMA. These algorithms rely on determining the hidden layer weights of the network in an unsupervised manner and the output layer weights with an incremental margin algorithm. All models were evaluated on synthetic and benchmark datasets using 10-fold cross-validation. The hard-margin measurements showed that these models obtained significantly larger margins than ELM, RBFNN, and Dual ELM, respectively. Furthermore, analyses of model accuracy showed a positive correlation between obtaining a large margin in the feature space and classification performance for the primal RP-IMA and IM-RBFNN models. Finally, a neuron pruning strategy was proposed for these methods. The experiments demonstrated that the pruning scheme can significantly reduce the neural network architecture while maintaining comparable performance, yielding more compact and efficient models without loss of classification performance.
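
To make the general architecture described in the abstract concrete, the sketch below shows a single-hidden-layer network whose hidden weights are fixed by an unsupervised random projection and whose output weights are trained by a margin-seeking linear rule. This is a minimal illustration only: the random Gaussian projection, the classical margin perceptron used as a stand-in for the incremental margin algorithm (IMA), and all function names, hyperparameters, and synthetic data are assumptions for illustration and are not taken from the dissertation.

import numpy as np

def random_projection_hidden_layer(X, n_hidden, rng):
    # Hidden layer: fixed random projection followed by tanh (weights never trained).
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)
    return np.tanh(X @ W + b), (W, b)

def margin_perceptron(H, y, margin=0.1, lr=0.01, epochs=200):
    # Output layer: classical margin perceptron, used here only as a stand-in
    # for the thesis's incremental margin algorithm (IMA); not the actual method.
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        for h, t in zip(H, y):          # labels t in {-1, +1}
            if t * (h @ w) <= margin:   # functional margin violated
                w += lr * t * h         # perceptron-style correction
    return w

# Toy usage on hypothetical synthetic two-class data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

H, (W, b) = random_projection_hidden_layer(X, n_hidden=30, rng=rng)
w_out = margin_perceptron(H, y)
print("training accuracy:", np.mean(np.sign(H @ w_out) == y))

The design point this sketch illustrates is that only the output weights are trained, so the margin-seeking step reduces to a linear problem in the hidden-layer feature space, which is the general setting in which the proposed primal RP-IMA, IM-RBFNN, and dual RP-IMA methods operate.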