Performance analysis of multilayer perceptron artificial neural networks via the spacing of output-space points
Year of defense: | 2016 |
---|---|
Main author: | |
Advisor: | |
Examination board: | |
Document type: | Thesis |
Access type: | Open access |
Language: | por |
Defending institution: | Universidade Federal de Uberlândia, Brasil, Programa de Pós-graduação em Engenharia Elétrica |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords in Portuguese: | |
Access link: | https://repositorio.ufu.br/handle/123456789/17967 https://doi.org/10.14393/ufu.te.2016.133 |
Abstract: | Artificial neural networks (ANNs) of the multilayer perceptron (MLP) type are widely known and used in a broad range of pattern recognition (PR) applications. Many studies have sought to improve the performance of this tool through different approaches, such as improving the training algorithm, determining optimal topologies for each problem, initializing the synaptic weights, and preprocessing the network input patterns. In this context, the output of MLPs has not yet been exploited as a means of improving performance. This work studies the influence of the distance between output-space points on the performance of MLPs in PR tasks. The widening of the gap between the output points, which are the network targets, is obtained by using bipolar orthogonal target vectors (VBOs). The orthogonality condition implies a widening of the Euclidean distance between targets, which does not occur with conventional vectors (VCs), which are not orthogonal. The study presents a mathematical analysis of the backpropagation training algorithm, relating its derivation from the error function to an alternative derivation from the Euclidean distance function. The assumption that the use of VBOs improves the performance of MLP networks is demonstrated by a reduced susceptibility to misclassification relative to the use of VCs. The work also presents experimental analyses on the classification of handwritten digits, human irises, and Australian Sign Language signs. The misclassification propensity of MLP networks trained with VCs and with VBOs is statistically evaluated, and the results confirm the findings of the mathematical analysis. In a further experimental analysis, the performance of MLP networks trained with VCs and VBOs is evaluated from the end of the first training cycle to the final training cycles. The results show three important findings. First, classification rates increase sharply during the first training cycles. Second, MLP networks trained with VBOs are less susceptible to overfitting. Third, significant performance rates are achieved with little training and, therefore, little computational effort. Finally, the robustness of MLP networks to changes in the number of hidden-layer neurons and in the initial learning rate is also investigated. Networks trained with VBOs proved largely insensitive to these parameter changes, unlike networks trained with conventional vectors. |
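The Euclidean gap widening described in the abstract can be illustrated numerically: any two conventional one-hot targets lie sqrt(2) apart regardless of dimension, whereas bipolar orthogonal (+1/-1) targets of dimension n lie sqrt(2n) apart. A minimal sketch follows; the thesis does not specify how its VBOs are constructed, so a Sylvester Hadamard construction (a standard way to obtain mutually orthogonal bipolar vectors) is assumed here purely for illustration.

```python
import math

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two).
    Its rows are mutually orthogonal vectors with entries +1/-1."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def euclid(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

n = 8  # output dimension (number of classes, for illustration)

# Conventional targets (VCs): one-hot vectors in {0, 1}^n.
onehot = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# Bipolar orthogonal targets (VBOs): rows of an n x n Hadamard matrix.
vbo = hadamard(n)

d_conv = euclid(onehot[0], onehot[1])  # sqrt(2) ~ 1.4142, for any n
d_vbo = euclid(vbo[0], vbo[1])         # sqrt(2n) = 4.0 for n = 8
print(f"one-hot distance: {d_conv:.4f}, VBO distance: {d_vbo:.4f}")
```

Note that the one-hot separation stays fixed at sqrt(2) as the number of classes grows, while the VBO separation grows as sqrt(2n), which is the gap widening the performance argument rests on.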